I do blog about this a lot, I'm sorry. UBE performance is cool though, and important.
No matter what type of performance tracking you are trying to do, using batch jobs as your benchmark is a very good place to start. I've mentioned this a number of times, but batch jobs (UBEs) give you a fairly pure performance reading, since they run two tier - just the enterprise server and the database, with no interactive tier in the mix. Unless you've done something crazy like map UBE BSFNs to another server - and I really hope you have not done that.
This is a good method of ensuring you have consistency too. If you are considering any change of technology, consistent UBE performance before and after the change is going to give you peace of mind. Finally, UBE performance also gives you some great ideas of where to look for archive and purge opportunities. I've seen too many Z files grow so large that they have a negative impact on performance.
You can do everything I'm going to show you here without ERPInsights, but our product just makes it all much easier.
You can write SQL over F986110 and F986114 to extract all of the trend data for you, but I'm going to demonstrate how I show clients their performance results. I'm often engaged during load testing exercises to give really deep insights into queuing, runtime and rows processed data. Performance testing requires detailed analysis of the batch queues, and you will generally run batch next to your interactive load. Quite often the interactive performance will fluctuate more than batch, but the batch numbers will give you really good pointers into what is going on.
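If you do want to roll your own, the raw extract is nothing fancy. Here is a rough sketch of pulling the WSJ history out of F986110 with Python and pyodbc - the DSN, schema and column aliases are my assumptions from a typical install, so check them against your own data dictionary before trusting the output.

```python
# Rough sketch only: pull WSJ history out of F986110 so it can be trended elsewhere.
# The DSN name, SVM920 schema and column aliases are assumptions - verify against
# your own data dictionary (e.g. that JCFNDFUF2 really is the report name on your release).
import pyodbc
import pandas as pd

SQL = """
SELECT JCJOBNBR,              -- job number
       JCFNDFUF2,             -- report name (e.g. R47042)
       JCVERS,                -- version
       JCJOBSTS,              -- job status
       JCEXEHOST,             -- execution host
       JCJOBQUE,              -- queue
       JCSBMDATE, JCSBMTIME,  -- submitted date/time (Julian date)
       JCACTDATE, JCACTTIME   -- last activity date/time
  FROM SVM920.F986110
"""

conn = pyodbc.connect("DSN=JDE_Server_Map")   # hypothetical ODBC DSN
df = pd.read_sql(SQL, conn)                   # read the whole table into a DataFrame
df.to_csv("f986110_extract.csv", index=False)
print(f"Extracted {len(df)} WSJ rows")
```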
I used to use SQL over F986110 and F986114 directly, but now I just grab a copy of both files and upload them into BigQuery, where I can plug in my insights dashboards. It only takes a couple of minutes to upload millions of WSJ rows and get a complete understanding of all your batch performance - both holistically and on a job-by-job and day-by-day basis.
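The upload itself is only a few lines with the BigQuery client library. This is a minimal sketch - the project, dataset and table names are placeholders, and I'm letting autodetect do the schema work.

```python
# Minimal sketch: load the CSV extract into a BigQuery table so the dashboards can query it.
# Project, dataset and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-erp-insights")     # hypothetical project
table_id = "my-erp-insights.jde_wsj.f986110"            # hypothetical dataset.table

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,                                 # skip the CSV header row
    autodetect=True,                                     # let BigQuery infer the schema
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,  # replace the last upload
)

with open("f986110_extract.csv", "rb") as source:
    load_job = client.load_table_from_file(source, table_id, job_config=job_config)
load_job.result()                                        # wait for the load to finish

print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```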
Quickly, the summary view shows me all of my batch jobs for this month, and how their load profile compares with the previous month. The report defaults to one month, but this could just as easily be one week or one year. You can compare rows processed, total runtime or average runtime quickly for any period you choose. You can see that this sample data set has run nearly 400,000 UBEs in the last month, which is up 15% on the previous period. We can then compare the average runtime for each individual job to look for changes.
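If you are doing this without the dashboards, the same month-on-month comparison is a single query over the uploaded data. The sketch below assumes a cleaned-up view called jde_wsj.ube_history with derived columns (report_name, end_date, runtime_seconds, rows_processed) built from the raw F986110/F986114 uploads - those names are mine, not standard JDE fields.

```python
# Hedged sketch of a month-on-month comparison over a hypothetical cleaned-up view.
from google.cloud import bigquery

client = bigquery.Client()

SQL = """
WITH periods AS (
  SELECT report_name,
         CASE
           WHEN end_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH) THEN 'this_month'
           WHEN end_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 2 MONTH) THEN 'last_month'
         END AS period,
         runtime_seconds,
         rows_processed
    FROM jde_wsj.ube_history
   WHERE end_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 2 MONTH)
)
SELECT report_name,
       period,
       COUNT(*)             AS jobs,
       SUM(rows_processed)  AS total_rows,
       AVG(runtime_seconds) AS avg_runtime_seconds
  FROM periods
 WHERE period IS NOT NULL
 GROUP BY report_name, period
 ORDER BY total_rows DESC
"""

for row in client.query(SQL).result():
    print(row.report_name, row.period, row.jobs, row.total_rows,
          round(row.avg_runtime_seconds, 1))
```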
I also have the ability to plot that data over the last month to see when things have changed. I can see the rows processed (by the main select loop), the average runtime and the record count (number of jobs processed per day). I just need to scroll down the list and look at the profile for each job. This is nice, because I'm seeing a representation of 16,000 rows of data instantly, and understanding the load profile of this job completely. Another really nice thing about something like the R47042 is that the rows processed data actually represents order volumes - so there is some empirical business data that you can extract here too. Of course we don't mind if a job takes twice as long - especially if it's processing twice the data!
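That daily profile is easy enough to produce yourself too. Here is a quick sketch, again over the same hypothetical ube_history view, charting jobs per day, average runtime and rows processed for R47042 with pandas and matplotlib.

```python
# Quick sketch: daily profile for one report (R47042) over the last month,
# pulled from the hypothetical jde_wsj.ube_history view and charted.
from google.cloud import bigquery
import matplotlib.pyplot as plt

client = bigquery.Client()

SQL = """
SELECT end_date,
       COUNT(*)             AS jobs_per_day,
       AVG(runtime_seconds) AS avg_runtime_seconds,
       SUM(rows_processed)  AS rows_processed
  FROM jde_wsj.ube_history
 WHERE report_name = 'R47042'
   AND end_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH)
 GROUP BY end_date
 ORDER BY end_date
"""

df = client.query(SQL).to_dataframe()

fig, axes = plt.subplots(3, 1, sharex=True, figsize=(10, 8))
df.plot(x="end_date", y="jobs_per_day", ax=axes[0], legend=False, title="Jobs per day")
df.plot(x="end_date", y="avg_runtime_seconds", ax=axes[1], legend=False, title="Average runtime (s)")
df.plot(x="end_date", y="rows_processed", ax=axes[2], legend=False, title="Rows processed")
plt.tight_layout()
plt.savefig("r47042_profile.png")
```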
The reporting suite contains loads of insights out of the box, and takes moments to navigate. None of the data is on premises anymore, so there is no performance degradation to JDE when you are pulling runtime history for 1,000,000 batch jobs.