Thursday, 25 July 2019

find and kill... that's harsh. find problematic IO-intensive oracle operations and prevent them from causing too much carnage...

This is a continuation of my IOPs challenges.

This is a non-DBA's cheat sheet for finding heavy IO in a standard Oracle database.

Good to find out what queries are smashing the disk:

select
   p.spid,
   s.sid,
   s.serial#,
   s.process cli_process,
   s.status,
   t.disk_reads,
   s.last_call_et/3600 last_call_et_Hrs,
   s.action,
   s.program,
   t.sql_fulltext
from
   v$session s,
   v$sqlarea t,
   v$process p
where
   s.sql_address = t.address
and
   s.sql_hash_value = t.hash_value
and
   p.addr = s.paddr
-- and
--t.disk_reads > 10
order by
   t.disk_reads desc;

PID                            SID    SERIAL# CLI_PROCESS              STATUS   DISK_READS LAST_CALL_ET_HRS ACTION                                                           PROGRAM                                          SQL_FULLTEXT                                                                   
------------------------ ---------- ---------- ------------------------ -------- ---------- ---------------- ---------------------------------------------------------------- ------------------------------------------------ --------------------------------------------------------------------------------
2924                           3880      15998 1234                     INACTIVE   48832423           4.4775                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
4939                           7678       8934 1234                     INACTIVE   48832423       4.57472222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
4935                          10191      19604 1234                     INACTIVE   48832423       4.51472222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
3679                          10175      40187 1234                     INACTIVE   20300027            10.71                                                                  JDBC Thin Client                                 SELECT SDAN8,SDQTYT,SDPPDJ,SDUORG,SDDCT,SDFRGD,SDDELN,SDPA8,SDADTM,SDTHRP,SDSRP2
4931                          19066        290 1234                     INACTIVE   16277598       6.58472222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
2181                           1311       2983 1234                     INACTIVE    7032445       41.3938889                                                                  JDBC Thin Client                                 SELECT  DISTINCT GLDOC,GLPOST,GLLT,GLDGJ,GLKCO,GLDCT,GLEXA,GLR1,GLRE,GLPN,GLICU,
9811                          15283      13699 1234                     INACTIVE    7032445       41.3938889                                                                  JDBC Thin Client                                 SELECT  DISTINCT GLDOC,GLPOST,GLLT,GLDGJ,GLKCO,GLDCT,GLEXA,GLR1,GLRE,GLPN,GLICU,
7281                          15258       2380 1234                     INACTIVE    3379248       37.2380556                                                                  JDBC Thin Client                                 SELECT SDAN8,SDQTYT,SDPPDJ,SDUORG,SDDCT,SDFRGD,SDDELN,SDPA8,SDADTM,SDTHRP,SDSRP2
27197                            41       1604 1234                     INACTIVE    2911686       27.3166667                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
13675                         16529        830 1234                     INACTIVE    1207700       48.6297222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G
29908                          2602       9964 1234                     INACTIVE    1207700       48.6297222                                                                  JDBC Thin Client                                 SELECT T1.GMANS,T0.GLODCT,T2.MCADDS,T2.MCRP20,T0.GLALT4,T2.MCCLNU,T0.GLPYID,T0.G

Remember that LAST_CALL_ET works like this:
If the session STATUS is currently ACTIVE, then the value represents the elapsed time in seconds since the session has become active.
If the session STATUS is currently INACTIVE, then the value represents the elapsed time in seconds since the session has become inactive.
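
So if you only care about sessions that are hammering the disk right now, a minimal variation of the query above (same views, just filtered on STATUS) looks something like this - treat the disk_reads threshold as an arbitrary starting point and tune it for your system:

select
   p.spid,
   s.sid,
   s.serial#,
   s.status,
   t.disk_reads,
   round(s.last_call_et/3600, 2) active_hrs,
   s.program
from
   v$session s,
   v$sqlarea t,
   v$process p
where
   s.sql_address = t.address
and
   s.sql_hash_value = t.hash_value
and
   p.addr = s.paddr
and
   s.status = 'ACTIVE'          -- only sessions doing work right now
and
   t.disk_reads > 100000        -- arbitrary threshold
order by
   t.disk_reads desc;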

It's also handy to look at v$session_longops to see how long operations might take to complete, if they are listed there.
 
select
   l.sid,
   l.sofar,
   l.totalwork,
   l.start_time,
   l.last_update_time,
   s.sql_text
from
   v$session_longops l      
left outer join
    v$sql s
on
   s.hash_value = l.sql_hash_value
and
   s.address = l.sql_address
and
   s.child_number = 0
order by
   l.totalwork desc;

The above is cool for seeing long operations. If you want to see only active longops, add this WHERE clause:
 
select
   l.sid,
   l.sofar,
   l.totalwork,
   l.start_time,
   l.last_update_time,
   s.sql_text
from
   v$session_longops l      
left outer join
    v$sql s
on
   s.hash_value = l.sql_hash_value
and
   s.address = l.sql_address
and
   s.child_number = 0
where
   l.sofar < l.totalwork
order by
   l.totalwork desc;

Full text from longops, so you can pull query plans:

select
   l.sid, 
   l.sofar, 
   l.totalwork, 
   l.start_time, 
   l.last_update_time, 
   t.sql_fulltext
from
   v$session_longops l       
left outer join
    v$sql s 
on 
   s.hash_value = l.sql_hash_value
and
   s.address = l.sql_address
and
   s.child_number = 0
left outer join  v$sqlarea t
on
   l.sql_hash_value = t.hash_value 
order by
   l.totalwork desc;
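
If you are on 10gR2 or later, v$session_longops also carries SQL_ID, so once you have spotted a culprit you can pull its actual plan straight from the cursor cache. A sketch - plug in the sql_id you got back (child 0 is just the first child cursor, pick the right one if there are several):

select sid, sql_id
from   v$session_longops
where  sofar < totalwork;

select *
from   table(dbms_xplan.display_cursor('&sql_id', 0));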


Then kill it - though remember that RDS on AWS does not let you run a plain ALTER SYSTEM KILL SESSION, no matter who you are connected as (more on that below). Also remember that if there are java processes in longops, you can generally kill them. I would not touch runbatch longops; that is legitimate work. And jdenet_k processes should not really be running long operations, so you need to be a little bit careful here.
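
To work out which longops belong to java processes versus runbatch or jdenet_k before you reach for the kill, you can join longops back to v$session for the program column - a sketch along the same lines as the queries above:

select
   s.sid,
   s.serial#,
   s.program,               -- JDBC Thin Client, runbatch, jdenet_k ...
   s.status,
   l.sofar,
   l.totalwork,
   l.last_update_time
from
   v$session_longops l
join
   v$session s
on
   s.sid = l.sid
and
   s.serial# = l.serial#
where
   l.sofar < l.totalwork
order by
   l.totalwork desc;

Once you are happy with your target, this is what a straight kill attempt looks like - and, as promised, on RDS it fails: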

   alter system kill session '13993,34274'
Error report -
ORA-01031: insufficient privileges
01031. 00000 -  "insufficient privileges"
*Cause:    An attempt was made to perform a database operation without
           the necessary privileges.
*Action:   Ask your database administrator or designated security
           administrator to grant you the necessary privileges

You need to run the rdsadmin procedure that AWS provides instead:

begin
    rdsadmin.rdsadmin_util.kill(
        sid    => 13993,
        serial => 34274);
end;
/
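
If you have a handful of these java sessions to clean up, you can also get SQL*Plus to generate the rdsadmin calls for you rather than typing each one. A sketch only - the filters are arbitrary examples, so eyeball the list it produces before running any of it:

select
   'exec rdsadmin.rdsadmin_util.kill(sid => ' || s.sid ||
   ', serial => ' || s.serial# || ');' kill_cmd
from
   v$session s
where
   s.program = 'JDBC Thin Client'      -- the java sessions spotted earlier
and
   s.status = 'INACTIVE'
and
   s.last_call_et > 4 * 3600;          -- idle for more than 4 hours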

Wednesday, 24 July 2019

sqlplus commitment issues

What happens when you exit SQL*Plus without issuing a commit? It does a rollback, right? Wrong!

We've been doing some scripting in SQL*Plus and I was asked this question. I gave a confident reply - of course it rolls back - and then decided to test it.

My assumption was that if you did not commit, your transactions would ROLLBACK. It seems I was totally wrong!!
  

$ sqlplus jde@jdeprod

SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 24 15:21:50 2019
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Enter password:
Last Successful login time: Wed Jul 24 2019 15:21:43 +10:00
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
SQL> create table shae (username varchar(20)) ;
Table created.
SQL> insert into shae values ('nigel') ;
1 row created.
SQL> quit
Disconnected from Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production

You would think that this record would not exist now… but as you can see, after a standard quit (or exit), a commit is issued!

$ sqlplus jde@jdeprod
SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 24 15:24:43 2019
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Enter password:
Last Successful login time: Wed Jul 24 2019 15:24:38 +10:00
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
SQL> select count(1) from shae ;
  COUNT(1)
----------
         1
SQL> select * from shae ;
USERNAME
--------------------
nigel
SQL> update shae set username = 'ralph' ;
1 row updated.
SQL> quit

Holy moly!
If you want a rollback, you need to specify it explicitly in the exit command:

SQL> update shae set username = 'testing';
1 row updated.
SQL> exit rollback ;
Disconnected from Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
$ sqlplus jde@jdeprod
SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 24 15:59:10 2019
Copyright (c) 1982, 2016, Oracle.  All rights reserved.
Enter password:
Last Successful login time: Wed Jul 24 2019 15:59:05 +10:00
Connected to:
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
SQL> select * from shae
  2  ;
USERNAME
--------------------
shannon

And the supporting documentation


EXIT
Syntax
{EXIT | QUIT} [SUCCESS | FAILURE | WARNING | n | variable | :BindVariable] [COMMIT | ROLLBACK]
Commits or rolls back all pending changes, logs out of Oracle Database, terminates SQL*Plus and returns control to the operating system.
In iSQL*Plus, commits or rolls back all pending changes, stops processing the current iSQL*Plus script and returns focus to the Input area. There is no way to access the return code in iSQL*Plus. In iSQL*Plus click the Logout button to exit the Oracle Database.
Commit on exit, or commit on termination of processing in iSQL*Plus, is performed regardless of the status of SET AUTOCOMMIT.
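
Related to this: if you are scripting and want the safer behaviour by default, SQL*Plus also has an EXITCOMMIT setting (11g onwards, from memory) that flips the default so that a plain exit or quit rolls back instead of committing:

SQL> set exitcommit off
SQL> update shae set username = 'testing' ;
SQL> quit

With EXITCOMMIT OFF, the pending update above is rolled back on exit rather than committed.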

Monday, 15 July 2019

fixing fat clients... again

Why am I still doing this? Fixing fat clients is not a lot of fun. Here are a couple of tips for connecting to the database using NTS and the correct sqlplus executable, and then fixing an import that has gone bad.

I should also point out (full disclosure) that I'm using an AWS WorkSpace as my thick client. There seem to be some fairly large IOPs issues with this config, and perhaps this is why I'm having issues with the JDE thick client installation "out of the box". But I do persist...

I tried to solve an easy problem today: I saw, in an AIS log, evidence of a form that has gone bad…

15 Jul 2019 14:44:14,137[WARN][SMOIR][RUNTIME]CheckBoxEngine.initForDDInfo(): There is no Data Dictionary item associated with this check box. The value may be incorrect | CheckBox ID: 24, Form Name : P55ACCAM_W55ACCAMG Corrective Action :Please associate a Data Dictionary Item with the check box

Okay, fat client, get into designer.

Not as easy as I thought.

Could not log into DV - loads of errors about not being able to find spec__blah:

19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:41.679000 jdb_ctl.c4208 Starting OneWorld
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:46.515000 jdecsec.c2873 Security Server returned error: eSecInvalidPassword: Invalid Password
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:46.515001 jdecsec.c308 Failed to validate user SMOIR by password
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:46.515002 jdb_ctl.c4865 JDB1100018 - Failed to get past Security check
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:48.263000 msc_signon.cpp184 ValidateUser failed from SignOn
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:51.694000 dbcolind.c141 OCI0000017 - Unable to execute statement for describe - SELECT  *  FROM SPEC_DVB81210F.F98710DVB81210F  WHERE  ( THOBNM = :KEY1 )
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:51.694001 dbcolind.c148 OCI0000018 - Error - ORA-00942: table or view does not exist
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:51.694002 dbinitrq.c1009 OCI0000143 - Failed to determine column order - SELECT  *  FROM SPEC_DVB81210F.F98710DVB81210F  WHERE  ( THOBNM = :KEY1 )
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:51.694003 dbinitrq.c1016 OCI0000144 - Error - ORA-00942: table or view does not exist
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:51.694004 jdb_drvm.c908 JDB9900168 - Failed to initialize db request
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:51.694005 JTP_CM.c1009 JDB9909007 - Unable to obtain driver request handle
19116/7984 MAIN_THREAD                        Mon Jul 15 17:34:51.694006 jdb_rst.c1779 JDB9900318 - Failed to find table information in TAM using RDB


This is always a dead give-away

Error Opening F98MOQUE Table.

Then you get the locked-up menu design.

Okay, I’ll try a new PY package.

I installed a new package and got the same problems. Hmm, that is annoying. Time to take a look in SQL Developer.

Change sqlnet.ora in D:\Oracle12c\E1Local\NETWORK\ADMIN to have NTS auth

# Generated by OEESETUP.EXE
SQLNET.AUTHENTICATION_SERVICES=(NTS)
NAMES.DIRECTORY_PATH=(TNSNAMES)


Run the correct sqlplus ("where sqlplus" shows you which ones are on the path). I'm using the E1Local sqlplus to create a user that I can then use with SQL Developer. Long story, but I find this is the easiest way of authenticating to the database.

D:\Oracle12c\E1Local\BIN>where sqlplus
D:\Oracle12c\E1Local\BIN\sqlplus.exe
D:\oracle12c\product\12.2.0\client_1\bin\sqlplus.exe

cd D:\Oracle12c\E1Local\bin

D:\Oracle12c\E1Local\BIN>.\sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jul 15 16:56:56 2019

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create user jdedba identified by jdedba11 ;

User created.

SQL> grant dba to jdedba ;

Grant succeeded.

SQL> grant create session to jdedba ;

Grant succeeded.

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production


SQL Developer (logged in as jdedba) showed me that the package owner did not exist… though my DV package also does not work because there are no tables... wow, this is all over the place.

Okay – there must have been errors in the install:

C:\jdeinst.log

JD Edwards EnterpriseOne Client Install Log

Look in the log C:\Program Files (x86)\Oracle\Inventory\logs\installActions2019-07-15_04-18-42PM.log for more information.

Congratulations

The log it points to is the juicy one, but EVERYTHING told me the install was a success… except something here is fishy…


CMD: sqlplus.exe -S -L
INP: SYSTEM@E1Local
INP: ******
INP: ALTER TABLESPACE SPEC_PYC90501F READ WRITE ;
INP: EXIT
OUT: ALTER TABLESPACE SPEC_PYC90501F READ WRITE
*
ERROR at line 1:
ORA-00959: tablespace 'SPEC_PYC90501F' does not exist

So, find the failing commands in the install log and run them again - this time in SQL Developer, logged in as the jdedba user you created:

create user SPEC_PYC90501F identified by HELLO;
GRANT CREATE SESSION, ALTER SESSION, CREATE TABLE, CREATE VIEW TO SPEC_PYC90501F ;

Then copy the spec files from the spec dir to the data dir? Don't ask me.
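
One thing the impdp below relies on is the PKGDIR directory object existing and pointing at wherever the dump file sits; if it is missing, Data Pump will complain that the directory is invalid. Something along these lines creates it - the path is my guess based on the datafile location in the command, so check yours:

create or replace directory PKGDIR as 'D:\E920\PY920\spec';
grant read, write on directory PKGDIR to jdedba;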

Then the big one at the command line:

D:\Oracle12c\E1Local\BIN>impdp TRANSPORT_DATAFILES='D:\E920\PY920\spec\spec_pyc90501f.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_pyc90501f.dmp' REMAP_TABLESPACE=SPEC__PYC90501F:SPEC_PYC90501F REMAP_SCHEMA=SPEC__PYC90501F:SPEC_PYC90501F LOGFILE='impspec_pyc90501f.log'

Import: Release 12.1.0.2.0 - Production on Mon Jul 15 17:06:45 2019

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: jdedba
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "JDEDBA"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded

Starting "JDEDBA"."SYS_IMPORT_TRANSPORTABLE_01":  "jdedba/********" TRANSPORT_DATAFILES='D:\E920\PY920\spec\spec_pyc90501f.dbf' DIRECTORY=PKGDIR DUMPFILE='spec_pyc90501f.dmp' REMAP_TABLESPACE=SPEC__PYC90501F:SPEC_PYC90501F REMAP_SCHEMA=SPEC__PYC90501F:SPEC_PYC90501F LOGFILE='impspec_pyc90501f.log'
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "JDEDBA"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Mon Jul 15 17:07:42 2019 elapsed 0 00:00:49


Done!

sqlplus one more time:
ALTER TABLESPACE SPEC_PYC90501F READ WRITE;
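
A quick sanity check (as jdedba) that the import actually landed the spec tables and that the tablespace came back read/write:

select count(*)
from   dba_tables
where  owner = 'SPEC_PYC90501F';

select tablespace_name, status
from   dba_tablespaces
where  tablespace_name = 'SPEC_PYC90501F';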


Now I can log into JDE – painful, though.




Thursday, 11 July 2019

UBE Performance suite - with a dash of cloud and AI


Understand your batch performance, immediately and over time.


Batch performance in JD Edwards is a strange one.  You only give it ANY attention when it’s diabolical…  If it’s reasonable then you leave it.  My clients start to get nervous about batch performance when they are getting close to the start and the finish of their batch windows.  Another classic example of batch performance getting attention is when scheduled jobs do not finish or there is a problem in the evening.

I like to be a little more proactive in this situation and have developed some insights with my team to allow you to quickly identify trends, oh – and then we’ve sprinkled a little bit of AI over the top to give you some amazing exception handling.  That’s right, AI in JD Edwards UBE processing – all will be revealed.

First, we need a mechanism for taking data out of the JD Edwards tables that are rich with UBE execution details; we upload it into Google BigQuery and then report over that data with some great dashboards.  We accelerate the value of this process by plugging each execution into AI and asking it whether the result was valid, given the past results of that UBE.

We have an agent that can run on any machine "on premise" that has internet access and access to your server map data sources.  It's got some intelligence built in so that you can schedule it at a cadence that you like, and it'll extract the relevant records and place them into cloud storage [secured by keys, tokens, encryption and more].

I know a pretty graph is not normal in JDE (this can be hosted as cafe1 or an e1page too), so you can see all of the relevant information at the source.



What this pretty graph can do is give you KEY metrics on all UBE processing, like rows processed, time taken and number of executions.  You have controls where you can slice and dice this interactively:

If you choose a particular environment (as above), user or date range, all reports and graphs are going to change.  You can look at particular queues or batch servers if you like.


The example above shows the jobs for JDE and SCHEDULER and only the JPD920 environment – to narrow your focus.

We then provide a number of screens, depending on what you are after:


If you are looking for the history and trend line of a single job, you look at the job focus report:


We can see actual processing times, how many times a job has run, who is running it and how long it is taking on a regular basis.  This is great trend information.  Also, we do not purge your cloud data – so you can do complete analysis of what jobs are running and who is running them – while keeping your ERP lean and mean.  We could even put your output in the cloud if you want – much cheaper storage!


I really like the graph above, this shows me ALL history of ALL jobs and how long they are taking on average and how many rows they are processing.  This is really valuable when looking for potential improvements.

See how many jobs are running at each hour of the day, so you know when the hot spots are for batch.

You can look at your queues and find out what queues are quiet for the next processing opportunity.

You can get some great insights to solve performance problems, to know who is running what, and to keep your complete batch history.

Now for the AI

I'm a victim of technology, I want to put AI into everything – and this is a great use case.  We have the ability to look at things like return codes, rows processed, time of day and runtime, and use AI to determine whether the metrics are expected.  If the algorithms (which have been trained with ALL your historical data) think that there is an issue with any of those dimensions, they can raise an exception to you.  This is great for what I call "silent killers".  If a batch job generally processes 40000 rows and processes 0 one night, it'll still finish with status 'D' – yet AI is sure to determine that this is an exception and it'll send you a message.  That is going to save time and money when fixing all the scheduled jobs that ran without sales update having been run properly!  The nice thing about AI is that it looks at the time of day and makes genuine decisions about the exceptions.
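
Even without the AI layer, the "silent killer" check is easy to picture.  Something like the following over whatever table you land the history in - the table and column names here are entirely hypothetical - flags any run whose row count sits way outside that job's own history:

select
   report_name,
   version,
   execution_date,
   rows_processed
from (
   select
      report_name,
      version,
      execution_date,
      rows_processed,
      avg(rows_processed)    over (partition by report_name, version) avg_rows,
      stddev(rows_processed) over (partition by report_name, version) stddev_rows
   from
      ube_execution_history                   -- hypothetical landed history table
   )
where
   abs(rows_processed - avg_rows) > 3 * stddev_rows;     -- crude three-sigma exception

The AI model does the same sort of thing, but across return code, runtime and time of day as well, and without you having to pick the threshold.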

We run this as an end-to-end service, allowing clients to have access to all consoles and reporting.  We can also schedule any of the reports to be delivered at a cadence that suits.  Reach out if you want to know more about your batch processing!  There is a small monthly cost to the service.
