Tuesday, 14 January 2014

Performance testing Oracle JD Edwards EnterpriseOne (JDE)

I’ve done a lot of performance testing of EnterpriseOne.  Some years back I was involved in testing over 10,000 users with LoadRunner on a mix of web and Citrix.  I’ve now started doing a lot of specific JD Edwards load testing using Oracle Application Testing Suite (OATS).

Of course OATS can test the entire JD Edwards suite natively, because it’s Oracle.  It can also natively test the rest of the Oracle product suite (in fact, any web-based application).  OATS has accelerators for many of the Oracle products, which make recording and reading the scripts a breeze.

Load testing itself is not overly complicated and you can execute it as a one-off without any problems.  But how do you know what is normal? What is fast? Where can improvements be made? It’s difficult when you only have one set of performance data to use for your comparisons.  What I mean is: if you are getting average response times of 0.75 seconds, is that good for a 400-user concurrent test?  Can you improve it?  Is 50% CPU utilisation on the database server normal with 300 users connected?  All of these questions are difficult to answer without a frame of reference.
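Averages alone rarely answer those questions; tail percentiles compared against a known reference point tell you far more.  As a rough, hedged illustration (the CSV layout, column name and file name below are assumptions, not OATS output), a few lines of Python can boil a run’s raw response times down to the figures worth comparing:

```python
# Minimal sketch: summarising raw response times from a load test run.
# Assumes a simple CSV export with a "response_ms" column; not an OATS format.
import csv
import statistics


def load_samples(path):
    """Read one response time (milliseconds) per row."""
    with open(path, newline="") as f:
        return [float(row["response_ms"]) for row in csv.DictReader(f)]


def summarise(samples_ms):
    """Reduce a run to the handful of numbers worth benchmarking."""
    pct = statistics.quantiles(samples_ms, n=100)  # 99 cut points
    return {
        "count": len(samples_ms),
        "avg_ms": round(statistics.mean(samples_ms), 1),
        "p95_ms": round(pct[94], 1),   # 95th percentile
        "p99_ms": round(pct[98], 1),   # 99th percentile
        "max_ms": max(samples_ms),
    }


if __name__ == "__main__":
    # hypothetical export from a 400-user concurrent run
    print(summarise(load_samples("jde_400_user_run.csv")))
```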

At Myriad, we do load testing for many clients across many different architectures, so we can tell what is fast and what is slow at each application tier.  We know where you should be putting your horsepower and what will make the biggest difference to your ERP performance.

So what I’m saying is that you can have all of the findings and metrics in the world, but the interpretation of those results is where the value is.  There is a similar analogy with big data: you can have all of the data in the world, but it means nothing until you’ve interpreted it.  The real value of metrics is found in interpretation and comparison rather than mere representation.  In the simplest form, your information becomes valuable as soon as you make a change and compare the results.  This is true of your performance tuning data: you need to compare against your previous results to know whether you are making improvements, but unless you also compare with industry-standard benchmarks, you won’t know whether there are other gains you could easily make.

The best thing to do is create a benchmark.  Any changes you make to the system can then be compared against it.  You’ll be able to quantify performance changes for the user community by comparing the results with your known benchmark.  This could be done for a new tools release, a new piece of hardware, a change to network architecture…  anything.
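To make that concrete, here is a minimal, hedged sketch of the comparison step: a baseline saved as JSON from the benchmark run, a second file from the run after the change, and a simple tolerance for flagging regressions.  The file names, metric keys and 10% threshold are all assumptions for illustration.

```python
# Illustrative sketch: comparing a new test run against a saved benchmark.
# File names, metric keys and the 10% tolerance are assumptions, not a standard.
import json


def load(path):
    with open(path) as f:
        return json.load(f)


def compare(baseline_path, current_path, tolerance=0.10):
    baseline = load(baseline_path)   # e.g. {"avg_ms": 750.0, "p95_ms": 1900.0}
    current = load(current_path)
    for metric, base in baseline.items():
        new = current.get(metric)
        if new is None or base == 0:
            continue
        change = (new - base) / base
        verdict = "REGRESSION" if change > tolerance else "ok"
        print(f"{metric:<12} baseline={base:10.1f} current={new:10.1f} "
              f"change={change:+7.1%} {verdict}")


if __name__ == "__main__":
    compare("benchmark_baseline.json", "after_tools_upgrade.json")
```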

Our LTAAS (load testing as a service) offering enables you to do this remotely, any time you like.  We can engineer specific tests for your site and execute them when you want, at the concurrency you want.  We can also configure all monitoring to ensure that not only the performance data (in terms of response times) is stored, but also the performance metrics from the hardware.  We save off your results and compare them if you execute the tests again, and we can compare those results with known industry benchmarks to ensure that your performance profile is within industry standards.

Remember that you can do all of the tuning in the world, but one bad query from a user can bring the system to its knees!
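On that theme, a quick way to check whether a single statement is dominating the database is to look at the top consumers in V$SQL.  This is only a hedged sketch: the connection details are placeholders, querying V$SQL needs the right privileges, and the FETCH FIRST syntax assumes Oracle 12c or later.

```python
# Hedged sketch: listing the heaviest SQL on the JDE database instance.
# Placeholder credentials/DSN; V$SQL access and Oracle 12c+ syntax are assumed.
import oracledb

TOP_SQL = """
    SELECT sql_id,
           executions,
           ROUND(elapsed_time / 1e6, 1) AS total_elapsed_s,
           buffer_gets,
           SUBSTR(sql_text, 1, 80)      AS sql_text
      FROM v$sql
     ORDER BY elapsed_time DESC
     FETCH FIRST 5 ROWS ONLY
"""

with oracledb.connect(user="monitor", password="change_me", dsn="jdedb:1521/jdeprod") as conn:
    with conn.cursor() as cur:
        for row in cur.execute(TOP_SQL):
            print(row)
```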

This is a great toolset that you can augment with your own bag of tricks.  Send me an email at shannon.moir@myriad-it.com if you’d like to know more.
