Thursday, 16 April 2015

oracle in-memory and JD Edwards

I’ve been lucky enough to be exposed (said the vicar to the nun) to some Oracle 12c In-Memory and to do performance testing against it at one of my clients.  This has been a very interesting exercise; of course I can only comment from the basic DBA skills that I have – well, let’s be honest – basic everything skills.

I’ve been executing performance testing against the in-memory Oracle 12c configuration that has been implemented in AWS.   To be honest (sorry AWS) we are doing everything we can to claw some performance back for the ERP; in-memory is one way of mitigating the issues that we are seeing with poor disk performance in the cloud.  You can google In-Memory, but in short it’s a new 12c feature (you need to pay for it too), available in EE, that allows you to pin tables in memory (wow).  Not only this, there is a heap of jiggery pokery using columnar compression and all sorts of things that somehow allows you to squeeze about 180GB of data into 21GB of in-memory allocation.  TRUE!!!!
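Just for context, here’s a minimal sketch of how the column store gets switched on – assuming EE with the (separately licensed) Database In-Memory option; the 32G figure is simply this site’s setting, not a recommendation:

-- Hedged sketch: carve out the In-Memory column store from the SGA.
-- inmemory_size is a static parameter in 12.1, so it needs an SPFILE change and a bounce.
ALTER SYSTEM SET inmemory_size = 32G SCOPE=SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;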

What tables do you put in memory?

Anything big really.  Anything that is read a lot!  There are probably a lot of other rules, but I’m dealing with an ERP OLTP environment, so that is all I really care about.
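Enabling a table is a single ALTER.  A hedged example below – F0911 is just an illustration (pick your own big, read-heavy tables), and the compression level is something you should test rather than take on faith:

-- Hedged example: put a table into the column store with columnar compression
ALTER TABLE PRODDTA.F0911 INMEMORY MEMCOMPRESS FOR QUERY HIGH;

-- and to take it back out again
ALTER TABLE PRODDTA.F0911 NO INMEMORY;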

Here is some SQL you can use to see what is in memory.  Note that big tables take a long time to populate, so when you start the database you need to allow a couple of hours for the data to be loaded into memory.  The database is available during that time, but it’s not fast until everything has been loaded.

 

select v.owner
, v.segment_name name
, v.populate_status status
from v$im_segments v;

OWNER      NAME      STATUS
---------- --------- ---------
PRODDTA    F03012    COMPLETED
PRODDTA    F55B30    STARTED
PRODDTA    F0111     COMPLETED
PRODDTA    F0006     COMPLETED
PRODDTA    F0901     COMPLETED
PRODDTA    F03B21    COMPLETED
PRODDTA    F03B11    COMPLETED
PRODDTA    F0115     COMPLETED
PRODDTA    F0150     COMPLETED
PRODDTA    F0116     COMPLETED
PRODDTA    F0101     COMPLETED
PRODDTA    F03B13    COMPLETED
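You can also see how hard the compression is working, which is where that 180GB-into-21GB claim comes from.  A hedged query against the same view (the columns are standard v$im_segments ones, the maths is mine):

-- Hedged example: compare on-disk size to in-memory footprint per segment
select v.segment_name
, round(v.bytes/1024/1024) disk_mb
, round(v.inmemory_size/1024/1024) inmem_mb
, v.inmemory_compression
from v$im_segments v
order by v.bytes desc;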

show parameter INMEMORY;

NAME                                                TYPE        VALUE
--------------------------------------------------- ----------- ------------
inmemory_clause_default                             string
inmemory_force                                      string      DEFAULT
inmemory_max_populate_servers                       integer     2
inmemory_query                                      string      ENABLE
inmemory_size                                       big integer 32G
inmemory_trickle_repopulate_servers_percent         integer     1
optimizer_inmemory_aware                            boolean     TRUE
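If the post-restart population is too slow for your outage window, one knob worth experimenting with (assuming you have CPU to spare – population is CPU hungry) is the number of population servers; it’s a dynamic parameter:

-- Hedged example: more background workers loading segments into the column store
-- means faster population after startup, at the cost of CPU.
ALTER SYSTEM SET inmemory_max_populate_servers = 4;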
                                  


 


Also, to see which tables are configured for in-memory:

select table_name
, inmemory
, inmemory_priority
from all_tables where owner = 'PRODDTA'
order by inmemory desc;
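If that couple of hours of population bothers you, the PRIORITY clause controls the order things get loaded after startup.  A hedged sketch, again with example table names only:

-- Hedged sketch: load the busiest tables first after a restart
ALTER TABLE PRODDTA.F03B11 INMEMORY PRIORITY CRITICAL;
ALTER TABLE PRODDTA.F0911  INMEMORY PRIORITY HIGH;
ALTER TABLE PRODDTA.F0101  INMEMORY PRIORITY LOW;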

 


What are the results?


We’ve been trying a bunch of permutations and combinations of in-memory settings to get the best results for this site, their usage and their data.  I believe there is not going to be a single answer for all clients, but the initial results for in-memory are very promising.  We were able to kick off all of the scheduled jobs (a mixed load of 46 UBEs) and judge their performance against known benchmarks (the existing on-premise 8.12 environment).  We’re also closely monitoring the DB performance metrics, to make sure that we are not cooking it.


Initially in-memory causes much higher CPU usage, especially when higher rates of compression are applied to the tables.  Despite the CPU being worked a lot harder, the overall results are better for us: over 80% of the jobs run faster (some by MANY factors).  There are a couple of exceptions, and these are where the optimizer gets confused about in-memory – whether it should be doing an in-memory table scan or an indexed lookup.  This increase in CPU requirements is potentially going to mean more EE licences, and therefore more in-memory option licences, but hey – that’s the price of performance.
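For those exceptions you don’t have to turn the whole thing off.  A hedged sketch of nudging individual statements (or a whole session) back to the indexed plan – the table and column here are only examples:

-- Hedged example: steer one statement away from the column store with a hint
select /*+ NO_INMEMORY(a) */ count(*)
from PRODDTA.F03B11 a
where a.RPAN8 = 12345;

-- or switch in-memory query off for the whole session while you test
ALTER SESSION SET inmemory_query = DISABLE;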


I was able to quantify the savings that in-memory provided as a figure in minutes (which was really nice and very real for the client).  I calculated the number of seconds the complete schedule took in each DB configuration and then got an aggregated saving / cost in minutes for the entire run.  It was a nice way of appreciating the difference that the tuning made.
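A hedged sketch of that arithmetic – it assumes you’ve captured each job’s elapsed seconds per configuration into your own results table (ube_timings is entirely made up for illustration, it’s not a JDE table):

-- Hypothetical results table: one row per UBE per DB configuration
-- (config, ube_name, elapsed_seconds)
select config
, round(sum(elapsed_seconds) / 60, 1) total_minutes
from ube_timings
group by config
order by total_minutes;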

