Thursday, 17 December 2020

Performance patterns to analyse after upgrading your database.

19c is going to make everything faster, right?  Enterprise Edition, with multiple processes per query, is going to dramatically improve JDE performance, right?  Wrong.  In my experience you will receive very little in terms of performance gains by using EE, or by upgrading to the latest Oracle database release.  What you will get is patching and better responses to any issues that you might have.

The reason for this is fairly simple: JDE is no good at complex queries, period.  It generally joins a maximum of three tables together and rarely uses aggregate-based queries.  You can imagine that the relational operators that JDE employs on any database are tried and tested.  We are not really trying to push bitmap indexes or massively parallel queries.  I agree that you might get some benefits with some HUGE long-running batch jobs, but I’d also like to see that in practice.

In my experience, enabling parallel query in an EE database for JDE caused more performance problems than benefits, as the query optimiser wanted to parallelise simple queries and slowed the simple stuff down.  This might have been me doing the wrong things, but I spent quite a bit of time on it…  Anyway, this blog entry is about how I justify to a JD Edwards customer the actual results of a database upgrade.
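As an aside, if you suspect automatic parallelism is hurting the simple transactional queries, a minimal sketch along these lines can help rule it out, assuming the python-oracledb driver; the credentials and DSN are placeholders, and the instance-wide change needs suitable privileges:

```python
import oracledb

# Placeholder credentials/DSN; not a real connection string.
conn = oracledb.connect(user="jde", password="secret", dsn="dbhost/jdeprod")
cur = conn.cursor()

# Session-level: stop the optimiser choosing parallel plans for this session,
# so simple single-row JDE queries are not burdened with parallel overhead.
cur.execute("ALTER SESSION DISABLE PARALLEL QUERY")

# Instance-wide alternative (needs ALTER SYSTEM privilege): leave parallelism
# to explicit hints and DDL rather than the optimiser.
# cur.execute("ALTER SYSTEM SET parallel_degree_policy = MANUAL")
```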

Batch Performance

Firstly, let’s look at the performance from a batch perspective.  The new database has only been live for 2 business days, so the data that we are looking at is limited to 20,000 batch jobs.  Of course this is going to grow rapidly and our comparison is going to get quicker.  I can generally compare the trend analysis for EVERY batch job in a matter of minutes.


Firstly, we can look at yesterday’s performance and compare this with the last 30 days to get a % change in that performance.  So really, for these frequently running jobs, I know immediately whether we have a problem or not.  I can also see that, across all of the jobs, we are actually 1.8% better off after the implementation of 19c over the weekend, comparing the average runtime of nearly 18,000 jobs.  This is a positive result.
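A minimal sketch of that comparison, assuming a CSV extract of batch history with hypothetical columns report, date and seconds (this is not the tool’s actual schema):

```python
import pandas as pd

df = pd.read_csv("batch_history.csv", parse_dates=["date"])
yesterday = df["date"].max()

base = df[df["date"] < yesterday].groupby("report")["seconds"].mean()
latest = df[df["date"] == yesterday].groupby("report")["seconds"].mean()

# Per-report % change; negative means the job got faster after the upgrade.
pct_change = ((latest - base) / base * 100).dropna().sort_values()
print(pct_change.round(1))

# Overall change across every execution, mirroring the 30-day comparison.
overall = (df.loc[df["date"] == yesterday, "seconds"].mean()
           / df.loc[df["date"] < yesterday, "seconds"].mean() - 1) * 100
print(f"overall change: {overall:+.1f}%")
```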

We can also drill down on any of these results and see how many times a day these jobs are running, how much data they are processing and, of course, how long they are taking.



For example, for R47042 we can see that it ran 603 times on the 14th, processed 6.8 rows on average, and on average took 33 seconds to run…  This is faster than the 30-day average of 42 seconds.  So, looking at the jobs that run the most, the database upgrade is a great success.
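The drill-down is the same idea on the same hypothetical extract, here with an assumed rows column as well:

```python
# Daily drill-down for one report (R47042): runs per day, average rows
# processed and average runtime in seconds.
daily = (df[df["report"] == "R47042"]
         .groupby("date")
         .agg(runs=("seconds", "size"),
              avg_rows=("rows", "mean"),
              avg_seconds=("seconds", "mean")))
print(daily.round(1))
```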

Of course, we look through all of the jobs to ensure that there are not any exceptional results or slowdowns – which could mean missing statistics or missing indexes.



We are also able to quickly compare groups of jobs against the previous month to ensure that the performance is consistent.

The next place we look is the batch queue, to ensure that there has not been any excessive waiting or processing.




We can see from the above that QBATCH ran many more jobs than normal, but this is probably the scheduler catching up.

Looking at this data using runtime seconds, we can see that QBATCH2 was perhaps not normal, but not too bad either:



Queue time is very interesting:


We can see that the queues were held on the 3rd and the 12th of December; note the number of seconds that concurrent jobs queued in QBATCH2.
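A sketch of that queue-level view, again on the assumed extract, here with hypothetical queue and wait_seconds columns:

```python
# Total queue-wait seconds per queue per day; a spike on a given day is the
# signature of a held or backed-up queue.
queue_wait = df.pivot_table(index="date", columns="queue",
                            values="wait_seconds", aggfunc="sum")
print(queue_wait.fillna(0).astype(int))
```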

In summary, though, everything is looking normal.



Note that we have the option of graphing any of the above metrics on the chart: jobs run, wait time, run time, rows processed or rows per second.  All of them help determine how things are processing and where you can make some improvements.
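Rows per second is just rows processed divided by runtime, and any of these metrics can be charted by day; a sketch assuming matplotlib and the same hypothetical columns:

```python
import matplotlib.pyplot as plt

# Derived metric: rows processed per second of runtime (guarding against 0s).
df["rows_per_second"] = df["rows"] / df["seconds"].where(df["seconds"] > 0)

# Chart any metric by day; swap the column to plot waits, runs or row counts.
df.groupby("date")["rows_per_second"].mean().plot(title="Rows per second by day")
plt.show()
```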

Generally I look at the most common jobs (the bread-and-butter jobs) and ensure that there is no negative impact.  Then some checking of the weekend integrity jobs is also important.  Batch is generally a little easier to manage than the interactive side.

Note that I’m focused on the runtime, but you should also be looking at CPU utilisation and disk activity.

Interactive Analysis




In aggregate, we can see from the above that interactive performance might have been negatively impacted.  We can see that the last 5 days have been quite a bit slower than previously recorded.

You can see that when I choose the last 5 days, I get the following:


An average page load time of more than 2 seconds!

If I compare this with the previous 20 days, where the average page load time was 1.8 seconds, that is roughly a 10% slowdown.  I guess that is not too bad.
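A sketch of that window comparison, assuming a page-view extract with hypothetical date and page_load_seconds columns; with 2.0s against 1.8s it prints roughly the 10% noted above:

```python
views = pd.read_csv("page_views.csv", parse_dates=["date"])
cutoff = views["date"].max() - pd.Timedelta(days=5)
start = cutoff - pd.Timedelta(days=20)

recent = views.loc[views["date"] > cutoff, "page_load_seconds"].mean()
baseline = views.loc[(views["date"] > start) & (views["date"] <= cutoff),
                     "page_load_seconds"].mean()

# With recent=2.0 and baseline=1.8 this prints +11.1%.
print(f"{recent:.2f}s vs {baseline:.2f}s: {(recent / baseline - 1) * 100:+.1f}%")
```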

We can begin to look at this data from a regional perspective too.


Potentially this single table could answer all of your questions about JD Edwards performance and how things are changing month on month or week on week.

We can see how user counts change, how activity changes, and then we can see all of the performance metrics also.

You can see here how many pages are being loaded, the average server response time, the average page load time and also the page download time.  Any changes to any of these metrics are very easy to find and very easy to report on.
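Continuing the page-view sketch above, a regional rollup of those headline metrics might look like this (region, server_response_seconds and page_download_seconds are assumed columns):

```python
# Roll the same page-view extract up by region across the headline metrics.
by_region = views.groupby("region").agg(
    pages=("page_load_seconds", "size"),
    avg_response=("server_response_seconds", "mean"),
    avg_page_load=("page_load_seconds", "mean"),
    avg_download=("page_download_seconds", "mean"),
)
print(by_region.round(2))
```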

Use this data to guarantee performance for your environment and regions.







