Tuesday, 20 August 2019

TIP 5: LOAD TEST YOUR BATCH ACTIVITY

Load testing batch activity gives you a near-pure performance comparison, because it removes a potential tier from the traditional three-tier JD Edwards architecture (web, application and database). The nice thing about this is that you are really only testing your batch server and your database server.

JD Edwards (in my mind) submits two types of jobs:
  1. Jobs that run a series of large SQL statements.  These are generally not complex, as the batch engine's capacity to run complex statements (even simple aggregates) is limited.  You are therefore going to get large open selects, which then perform subsequent actions on each row returned in the main loop (e.g. R09705 - Compare Account Balances to Transactions).
  2. Punchy UBEs that get in with some tight data selection, generally run a pile of BSFNs and then jump out again (e.g. R42565 - Invoice Print).
It’s easy to categorise these jobs because of the amazing job Oracle did with “Execution Detail”, specifically rows processed.
Figure 6: View taken from "Execution Detail" row exit from Work with Submitted Jobs (WSJ)

You can actually databrowse this (V986114A) and see column alias PWPRCD, defined as "The number of rows processed by main driver section of the batch job".  I use this in a lot of SQL around performance, as it lets me calculate rows per second for my UBEs - which is a great comparison device.  If you see consistently low row counts here, it's probably a punchy UBE (category 2); consistently high counts point to category 1.
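The rows-per-second calculation I use can be sketched as follows. This is a minimal illustration in Python rather than the actual SQL; the record field names are hypothetical, not the real F986114/V986114A column layout, and only PWPRCD comes from the text above.

```python
from datetime import datetime

def rows_per_second(rows_processed, start, end):
    """Throughput for a UBE run: rows processed by the main driver
    section (alias PWPRCD) divided by elapsed wall-clock seconds."""
    elapsed = (end - start).total_seconds()
    if elapsed <= 0:
        return None  # guard against zero or negative durations
    return rows_processed / elapsed

# Illustrative job records (field names are hypothetical)
jobs = [
    {"ube": "R09705", "version": "XJDE0001", "pwprcd": 1_200_000,
     "start": datetime(2019, 8, 20, 1, 0, 0),
     "end":   datetime(2019, 8, 20, 1, 40, 0)},   # 40 minutes
    {"ube": "R42565", "version": "XJDE0001", "pwprcd": 150,
     "start": datetime(2019, 8, 20, 2, 0, 0),
     "end":   datetime(2019, 8, 20, 2, 0, 30)},   # 30 seconds
]

for job in jobs:
    rps = rows_per_second(job["pwprcd"], job["start"], job["end"])
    print(f'{job["ube"]} {job["version"]}: {rps:.1f} rows/sec')
```

The big open-select job churns through rows at a high rate, while the punchy BSFN-heavy job processes few rows - exactly the signature that lets you sort UBEs into the two categories above.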

Make sure that you test UBEs in both of the categories listed above.  Some are going to stress the database, some the CPU on the batch server and some network I/O.  Make sure that you "tweak" your TCP/IP settings too, as I have seen this make some impressive differences in batch performance (search for Doc ID 1633930.1).

The Fusion5 UBE Analytics suite allows you to do this comparison immediately and gives you some impressive power to compare periods, servers and more.
Figure 7: UBE Analytics summary screen - week on week performance comparison
We can choose a date range for the comparison and let the system do the rest.

You can see, for each UBE and version combination run by this fictional client in the specified date range, how performance compares with the previous period - the top 12 rows have slowed down.  I'd be looking at what has changed!
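The period-on-period comparison can be sketched like this. It is a minimal illustration of the idea, not the Fusion5 implementation; the average-runtime figures are invented for the example.

```python
def compare_periods(current, previous):
    """Compare average runtime (seconds) per (UBE, version) between two
    periods; a positive delta_pct means the job has slowed down."""
    report = []
    for key, cur_avg in current.items():
        prev_avg = previous.get(key)
        if prev_avg is None:
            continue  # no baseline in the previous period
        delta_pct = round((cur_avg - prev_avg) / prev_avg * 100, 1)
        report.append((key, prev_avg, cur_avg, delta_pct))
    # Worst regressions first - these are the rows to investigate
    report.sort(key=lambda r: r[3], reverse=True)
    return report

# Hypothetical average runtimes per (UBE, version), in seconds
previous = {("R09705", "XJDE0001"): 2400, ("R42565", "XJDE0001"): 30}
current  = {("R09705", "XJDE0001"): 3000, ("R42565", "XJDE0001"): 27}

for key, prev_avg, cur_avg, delta in compare_periods(current, previous):
    print(key, f"{delta:+.1f}%")
```

Sorting by the percentage change surfaces the regressions at the top of the list, which is the same "what slowed down this week?" view the screenshot shows.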
The UBE Analytics data is not stored in JD Edwards, so you never lose your history.
