Monday, 31 July 2017

Cannot determine database driver name for driver type "O"

This is quite a specific post.

I’ve been doing a lot of performance testing lately. One of the tests I’ve been running to extract a little more performance out of JDE is to look at Oracle Database 12c.

I’ve been testing many permutations and combinations of client and server, but I started to get the errors below after installing JDE onto an existing template enterprise server:

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024551      jdb_drvm.c460
         JDB9900436 - Cannot determine database driver name for driver type "O"

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024587      jdb_omp1.c1928
         JDB9900254 - Failed to initialize driver.

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024603      jtp_cm.c209
         JDB9909002 - Could not init connect.

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024617      jtp_tm.c1140
         JDB9909100 - Get connect info failed: Transaction ID =

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024630      jdb_rq1.c2452
         JDB3100013 - Failed to get connectinfo


This is on an older tools release, EnterpriseOne 9.1.3.3 (see ptf.txt in $SYSTEM/bin32).

I tried 100 different things involving my environment variables, .profile and .bash_profile.  I messed around with a ton of settings, but then thought – wait.  This is a 9.1.3.3 tools release, and I put it down on an existing enterprise server with a 32-bit Oracle Database 12c client.  That database version did not exist when this tools release came out (well, it was not supported).

It turns out that my error above occurs because JDE wants to load certain shared libraries from the Oracle client directory, and it cannot do this from a 12c client.

To get around this, I just installed a new copy of the Oracle client (11.2.0.4) and hooked it up to JD Edwards.  As soon as I did this, voilà! the enterprise server is working perfectly.  Note also that this 11.2.0.4 client is talking to a 12c database, as there is backward compatibility between client and server.
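A quick way to tell what a client home actually ships, before pointing JDE at it, is to look for the versioned libclntsh shared library. This is only a sketch – the client-home paths are illustrative, and the demo runs against a throwaway directory rather than a real Oracle home:

```shell
#!/bin/sh
# Identify an Oracle client home by its versioned libclntsh shared library.
# (A real 32-bit home would be something like
#  /u01/app/oracle/product/11.2.0.4/client_1 - illustrative path.)
check_client() {
  home="$1"
  lib=$(ls "$home"/lib/libclntsh.so.* 2>/dev/null | head -1)
  case "$lib" in
    *.so.11.*) echo "11g client: $lib" ;;
    *.so.12.*) echo "12c client: $lib (too new for a 9.1.3.3 tools release)" ;;
    *)         echo "no client library found under $home/lib" ;;
  esac
}

# Demo against a throwaway directory standing in for the client home.
demo=$(mktemp -d)
mkdir -p "$demo/lib"
touch "$demo/lib/libclntsh.so.11.1"
check_client "$demo"
```

On a real enterprise server you would point check_client at $ORACLE_HOME; running ldd against the JDE binaries in $SYSTEM/bin32 confirms which library they actually resolve.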

Another slightly interesting thing here is that all I did was tar up the client directory from another machine and untar it on this one – no installers (I had no graphical interface for the Oracle installer, and I could not be bothered fighting the responseFile for the next three months).  As soon as I sprayed the directory out on the Linux machine, it all just worked!  Remember that this is a POC machine, so don’t stress – I will not run a production environment like this.  It’s just good to know.
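The copy itself is nothing fancy – tar on one side, untar on the other, then point the environment at the new home. A minimal local sketch, with temporary directories standing in for the source and target machines (the layout and tnsnames content are illustrative):

```shell
#!/bin/sh
# Simulate the tar-and-spray client copy locally: archive a "client home",
# unpack it at the "target", and point ORACLE_HOME/TNS_ADMIN at the result.
set -e
src=$(mktemp -d)   # stands in for the machine that already has the client
dst=$(mktemp -d)   # stands in for the new enterprise server

mkdir -p "$src/client_1/network/admin"
printf 'JDE = (DESCRIPTION=...)\n' > "$src/client_1/network/admin/tnsnames.ora"

tar -czf "$src/client_1.tgz" -C "$src" client_1   # on the source machine
tar -xzf "$src/client_1.tgz" -C "$dst"            # on the target machine

export ORACLE_HOME="$dst/client_1"
export TNS_ADMIN="$ORACLE_HOME/network/admin"
ls "$TNS_ADMIN/tnsnames.ora"
```

In the real case both sides use the same absolute path, and the JDE .profile exports ORACLE_HOME, LD_LIBRARY_PATH and TNS_ADMIN against it.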

At the end of the day, I used a template AWS VM (built for another purpose), unzipped the enterprise server install directory (e900) and the Oracle client, updated tnsnames.ora, and the machine just WORKS.

Complete enterprise server in less than 2 hours?  Don’t mind if I do!

Saturday, 29 July 2017

Slightly interesting… How big could my data get?

Forget the custom tables: if you have 13,844,608 rows in your sales history table (F42119), then in Oracle that is going to be about 34GB, so we are talking about 2.4GB per million rows.

This is handy and simple maths for working out data growth and what that might mean to you.  That F0911 is a classic!  306GB for 254 million rows – about 1.2GB per million.


. . imported "CRPDTA"."F42119"                           33.72 GB 13844608 rows
. . imported "CRPDTA"."F04572OW"                         11.74 GB 4133179 rows
. . imported "CRPDTA"."F4111"                            139.9 GB 165666358 rows
. . imported "CRPDTA"."F4101Z1"                          7.578 GB 3731963 rows
. . imported "CRPDTA"."F3111"                            71.49 GB 85907305 rows
. . imported "CRPDTA"."F3102"                            20.75 GB 61873382 rows
. . imported "CRPDTA"."F47012"                           2.870 GB 2013678 rows
. . imported "CRPDTA"."F4074"                            70.35 GB 108624053 rows
. . imported "CRPDTA"."F56105"                           12.09 GB 43949816 rows
. . imported "CRPDTA"."F47003"                           19.27 GB 48747556 rows
. . imported "CRPDTA"."F43199"                           22.07 GB 11543632 rows
. . imported "CRPDTA"."F03B11"                           12.09 GB 10304061 rows
. . imported "CRPDTA"."F5646"                            8.429 GB 27358198 rows
. . imported "CRPDTA"."F4211"                            1.155 GB  478334 rows
. . imported "CRPDTA"."F6402"                            6.649 GB 43428152 rows
. . imported "CRPDTA"."F4105"                            20.18 GB 56430529 rows
. . imported "CRPDTA"."F0911"                            306.0 GB 254224657 rows
. . imported "CRPDTA"."F4006"                            3.686 GB 5964283 rows
. . imported "CRPDTA"."F47047"                           14.19 GB 6007023 rows
. . imported "CRPDTA"."F43121"                           28.40 GB 17036864 rows
. . imported "CRPDTA"."F03B13"                           1.600 GB 1842917 rows
. . imported "CRPDTA"."F1632"                            1.647 GB 5088270 rows
. . imported "CRPDTA"."F47036"                           1.628 GB 1808215 rows
. . imported "CRPDTA"."F57205"                           1.630 GB 4249200 rows
. . imported "CRPDTA"."F470371"                          15.32 GB 5804643 rows
. . imported "CRPDTA"."F6411"                            1.625 GB 6558558 rows
. . imported "CRPDTA"."F6412"                            1.605 GB 9334567 rows
. . imported "CRPDTA"."F4079"                            1.532 GB 6868444 rows
. . imported "CRPDTA"."F42420"                           1.514 GB 1392630 rows
. . imported "CRPDTA"."F0101Z2"                          1.331 GB  683139 rows
. . imported "CRPDTA"."F4311"                            13.42 GB 6834435 rows
. . imported "CRPDTA"."F3460"                            1.490 GB 6548503 rows
. . imported "CRPDTA"."F5763"                            3.708 GB 10283632 rows
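The rule of thumb above is just gigabytes divided by millions of rows; a one-liner over any two figures from the import log confirms it:

```shell
#!/bin/sh
# GB per million rows, using two figures from the import log above.
gb_per_million() {
  LC_ALL=C awk -v gb="$1" -v rows="$2" \
    'BEGIN { printf "%.1f\n", gb / (rows / 1000000) }'
}

gb_per_million 33.72 13844608    # F42119
gb_per_million 306.0 254224657   # F0911
```

That gives roughly 2.4 GB per million for F42119 and 1.2 GB per million for F0911 – the ratio varies a lot by table width.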

Friday, 21 July 2017

Generate missing indexes in one environment from another – Oracle

I’ve done a heap of tuning in production, created a bunch of indexes and I’m pretty happy with how it looks.  Remember that you only need to create the indexes in the database if they are for tuning – they don’t need to be added to the table specs in JDE.

So, how do I easily generate all of the DDL for these indexes and create them in other locations?

I’ll generate the create index statements while reconciling the two schemas:

select 'SELECT DBMS_METADATA.GET_DDL(''INDEX'',''' || index_name || ''',''' || OWNER || ''') ||'';'' FROM dual ;'
from all_indexes t1 where t1.owner = 'CRPDTA' and not exists (select 1 from all_indexes t2 where t2.owner = 'TESTDTA' and t1.index_name = t2.index_name) ;

Which will give you a bunch of results like this:


SELECT DBMS_METADATA.GET_DDL('INDEX','F0902_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F41001_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F3413_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F03B11_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F03012Z1_ORA1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F03012Z1_ORA0','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F01151_ORA0','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F5646_SRM1','CRPDTA') ||';' FROM dual ;
SELECT DBMS_METADATA.GET_DDL('INDEX','F0414_9SRM','CRPDTA') ||';' FROM dual ;

So whack some headers on this to trim the output:

set heading off
set feedback off
set long 99999
set pages 0
set lines 1000
set wrap on

And use the Run Script button in SQL Developer.


You’ll get a pile of output like this:


CREATE INDEX "CRPDTA"."F01151_ORA0" ON "CRPDTA"."F01151" ("EAAN8", "EAIDLN")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL ;


CREATE INDEX "CRPDTA"."F5646_SRM1" ON "CRPDTA"."F5646" ("ALDOCO", "ALLNID")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL ;

You can then change the tablespace and owner information and run it in your other environments.
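Retargeting the DDL is a couple of sed substitutions before you run it; TESTDTA and TESTINDEX2 below are illustrative target names:

```shell
#!/bin/sh
# Rewrite owner and tablespace in generated index DDL.
# TESTDTA / TESTINDEX2 are illustrative targets.
retarget() {
  sed -e 's/"CRPDTA"/"TESTDTA"/g' -e 's/"CRPINDEX2"/"TESTINDEX2"/g'
}

echo 'CREATE INDEX "CRPDTA"."F01151_ORA0" ON "CRPDTA"."F01151" ("EAAN8", "EAIDLN") TABLESPACE "CRPINDEX2" PARALLEL ;' | retarget
```

Pipe the whole spool file through retarget and you have a script ready for the other environment.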

Thursday, 20 July 2017

Want to know more about ASM on the ODA?

Here are a couple of handy commands, especially if you are on an ODA.

As root, you can see what space is being used by which database:

[root@sodax6-1 datastore]# oakcli show dbstorage

All the DBs with DB TYPE as non-CDB share the same volumes

DB_NAMES           DB_TYPE    Filesystem                                        Size     Used    Available    AutoExtend Size  DiskGroup
-------            -------    ------------                                    ------    -----    ---------   ----------------   --------
JDEPROD, JDETEST   non-CDB    /u01/app/oracle/oradata/datastore                   31G    16.26G      14.74G              3G        REDO
                               /u02/app/oracle/oradata/datastore                 4496G  4346.01G     149.99G            102G        DATA
                               /u01/app/oracle/fast_recovery_area/datastore      1370G   761.84G     608.16G             36G        RECO

Of course, this is what ACFS thinks:

[grid@sodax6-1 ~]$ df -k
Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/xvda2             57191708   14193400   40093068  27% /
tmpfs                 264586120    1246300  263339820   1% /dev/shm
/dev/xvda1               471012      35731     410961   8% /boot
/dev/xvdb1             96119564   50087440   41149436  55% /u01
/dev/asm/testing-216 1048576000  601013732  447562268  58% /u01/app/sharedrepo/testing
/dev/asm/datastore-344
                        32505856   17050504   15455352  53% /u01/app/oracle/oradata/datastore
/dev/asm/acfsvol-49    52428800     194884   52233916   1% /cloudfs
/dev/asm/datastore-49
                      1436549120  798850156  637698964  56% /u01/app/oracle/fast_recovery_area/datastore
/dev/asm/testing2-216
                      4194304000 1575520568 2618783432  38% /u01/app/sharedrepo/testing2
/dev/asm/datastore-216
                      4714397696 4661989408   52408288  99% /u02/app/oracle/oradata/datastore


Now, you might want to take a look at what ASM thinks about this:

[grid@sodax6-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  4194304  19660800   198252           983040         -392394              0             Y  DATA/
MOUNTED  NORMAL  N         512   4096  4194304   3230720   321792           161536           80128              0             N  RECO/
MOUNTED  HIGH    N         512   4096  4194304    762880   667144           381440           95234              0             N  REDO/
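Note that the lsdg figures are raw megabytes before mirroring (NORMAL redundancy stores two copies, HIGH three), which is why Usable_file_MB is so much smaller than Free_MB. A little awk over the columns turns them into GB – the field positions ($7 Total_MB, $8 Free_MB, $13 Name) match the lsdg layout above:

```shell
#!/bin/sh
# Convert asmcmd lsdg Total_MB ($7) and Free_MB ($8) into GB per diskgroup.
# The sample lines are the lsdg output shown above.
lsdg='MOUNTED  NORMAL  N  512  4096  4194304  19660800   198252  983040  -392394  0  Y  DATA/
MOUNTED  NORMAL  N  512  4096  4194304   3230720   321792  161536    80128  0  N  RECO/
MOUNTED  HIGH    N  512  4096  4194304    762880   667144  381440    95234  0  N  REDO/'

echo "$lsdg" | LC_ALL=C awk \
  '{ printf "%-5s raw total %5.0f GB, raw free %3.0f GB\n", $13, $7/1024, $8/1024 }'
```

So DATA is about 19.2TB raw, which with NORMAL redundancy is roughly half that usable – and its negative Usable_file_MB above says it is already overcommitted.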


A bit more detail, thanks:


[grid@sodax6-1 ~]$ asmcmd volinfo -G DATA -a
Diskgroup Name: DATA

     Volume Name: DATASTORE
      Volume Device: /dev/asm/datastore-216
      State: ENABLED
      Size (MB): 4603904
      Resize Unit (MB): 64
      Redundancy: MIRROR
      Stripe Columns: 8
      Stripe Width (K): 1024
      Usage: ACFS
      Mountpath: /u02/app/oracle/oradata/datastore
 
      Volume Name: TESTING
      Volume Device: /dev/asm/testing-216
      State: ENABLED
      Size (MB): 1024000
      Resize Unit (MB): 64
      Redundancy: MIRROR
      Stripe Columns: 8
      Stripe Width (K): 1024
      Usage: ACFS
      Mountpath: /u01/app/sharedrepo/testing
 
      Volume Name: TESTING2
      Volume Device: /dev/asm/testing2-216
      State: ENABLED
      Size (MB): 4096000
      Resize Unit (MB): 64
      Redundancy: MIRROR
      Stripe Columns: 8
      Stripe Width (K): 1024
      Usage: ACFS
      Mountpath: /u01/app/sharedrepo/testing2

So now I want to resize, as I’ve made my TESTING2 repo too big and I need some more space in DATASTORE – so…

[grid@sodax6-1 ~]$ acfsutil size -1T /u01/app/sharedrepo/testing2
acfsutil size: new file system size: 3195455668224 (3047424MB)

and you can see that ACFS actually uses the AutoExtend Size to grow the filesystem when it’s running low:

DB_NAMES           DB_TYPE    Filesystem                                        Size     Used    Available    AutoExtend Size  DiskGroup
-------            -------    ------------                                    ------    -----    ---------   ----------------   --------
JDEPROD, JDETEST   non-CDB    /u01/app/oracle/oradata/datastore                   31G    16.26G      14.74G              3G        REDO
                               /u02/app/oracle/oradata/datastore                 4598G  4446.22G     151.78G            102G        DATA
                               /u01/app/oracle/fast_recovery_area/datastore      1370G   761.84G     608.16G             36G        RECO

In my example it’ll add 102GB when low.  Before I resized the TESTING2 repo, things looked like this:

/dev/asm/datastore-216
                      4714397696 4661989408   52408288  99% /u02/app/oracle/oradata/datastore

After resizing

/dev/asm/datastore-216
                      4821352448 4662201568  159150880  97% /u02/app/oracle/oradata/datastore

So it has seen that there is some free space (from the 1TB I stole back) and has given this to the data area.

Note that I could have done this with oakcli resize repo (but I did not know that at the time).

Wednesday, 19 July 2017

EM express for 12c

This is cool – no more emctl start dbconsole.

I went snooping around for emctl and did not find one under 12c, so I googled and found this gem:  http://www.oracle.com/technetwork/database/manageability/emx-intro-1965965.html#A1  That is probably all you need, but I needed more.

When I followed the steps, my browsers got security errors.  Interestingly, only an https port came up, not http:


Secure Connection Failed

The connection to sodax6-1.oda.aus.osc:5500 was interrupted while the page was loading.

    The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
     Please contact the website owners to inform them of this problem.


I checked the ports that were open and found that http was not.


SQL> select dbms_xdb.getHttpPort() from dual;

GETHTTPPORT
-----------
       0

SQL> select dbms_xdb_config.getHttpsPort() from dual;


GETHTTPSPORT
------------
        5500


So I ran the below:

exec DBMS_XDB_CONFIG.SETHTTPPORT(5500);

Then I was able to log in.


ODA goodness – has this thing started to win me over?

If you know about ODAs, you probably know why I like the X6 about 100,000 times more than anything before it – it all comes down to IOPS.  If you want more than 1,500 IOPS consistently, then you might want to move on from the X5, especially if you have a very large database.  The X5 does have some cool stuff to mitigate the lack of IOPS, but at the end of the day there is limited flash to get that slow SAS data closer to the CPU.

But, the X6 is very fast and very nice and very FLASH

One thing I needed to do was quickly test a 12c database, and this can be done with one command (I need to be honest here: there is NO native graphical interface on the ODA, so you need to get very familiar with oakcli commands).  This has escalated my confidence, and I’ve started writing ksh scripts to automate everything I need on this machine.

Take a look at the above: one oakcli command and we are upgrading to 12c, both RAC nodes – everything.

That is cool!  (PS: I know that I can also do this in AWS RDS – and that is one click – so I guess this is just okay)…

There is no progress indicator, just “It will take a few minutes”.

A little “extra for experts”: do not modify the .bash_profile for oracle on oda_base.  I had it prompting me for which Oracle home I wanted, and this was breaking a bunch of commands – what a dope I am.

I might make another post in 1 hour when this has broken and I’m picking up the pieces…
