Friday 17 September 2021

Yay - we went live... But how can I measure success?

Go-lives are exciting, but I'm going to cover go-lives for upgrades, which can be more exciting in some respects.  Why?  Well, everyone knows how to use the system, so it's generally going to be hit pretty hard, pretty early.  Your users will also have their specific environments that they want to test, and when you mess up their UDOs and grid formats, you are going to hear about it.

I'm focused on how, as a management team, you know that your upgrade is a success.

I have a number of measures that I try to use to indicate whether a go-live has been successful:

  1. Number of issues raised - important, but issues often go unreported.
  2. Interactive system usage:
    1. How many users, applications, modules and jobs are being run, and how that compares with normal usage.
    2. Performance - I want to know what is working well and what needs to be tuned.
    3. Time on page - by user and by application.
    4. Batch and queue performance and saturation - to ensure that my jobs are not queuing too much.
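As a sketch of how that last measure could be checked, the snippet below flags batch jobs whose average runtime has regressed since go-live. The job names (R31410, R42565), the threshold and every figure here are illustrative assumptions, not data from a real JDE instance.

```python
# Hedged sketch: flag batch jobs whose average runtime has regressed
# after go-live. Job names and all numbers are invented for illustration.
from statistics import mean

baseline = {"R31410": [120, 130, 125], "R42565": [300, 310]}  # seconds, pre-upgrade
post_golive = {"R31410": [118, 122], "R42565": [450, 470]}    # seconds, post-upgrade

def regressions(before, after, threshold=1.2):
    """Return jobs whose mean runtime grew by more than `threshold` times."""
    flagged = {}
    for job, samples in after.items():
        if job in before:
            ratio = mean(samples) / mean(before[job])
            if ratio > threshold:
                flagged[job] = round(ratio, 2)
    return flagged

print(regressions(baseline, post_golive))  # {'R42565': 1.51}
```

The same ratio approach works for any of the measures above once you have a pre-upgrade baseline to compare against.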
Here are some example reports of these metrics, which allow you to define success, measure it and improve.



I find the above report handy, as it shows me a comparison of the last week against the last 50 days, so I get instant trend information.  I can drill down to ANY report to quickly see what its runtime was over the last 50 days.  The other nice thing is that the table shows you exactly what is slower and by how much.  I can spend 5 minutes and tell a client all of the jobs that are performing better and which ones they need to focus on.
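The week-versus-baseline comparison behind that report can be sketched as a simple time-window split. The dates and runtimes below are made-up values for a single report, purely to show the calculation.

```python
# Hedged sketch of a last-week vs prior-baseline runtime comparison.
# All dates and runtime values are invented for illustration.
from datetime import date, timedelta

today = date(2021, 9, 17)
runs = [  # (run_date, runtime_seconds) for one report
    (today - timedelta(days=d), s)
    for d, s in [(1, 95), (3, 99), (10, 80), (20, 82), (40, 78)]
]

week_cutoff = today - timedelta(days=7)
recent = [s for d, s in runs if d >= week_cutoff]   # last week's runs
prior = [s for d, s in runs if d < week_cutoff]     # baseline runs
change = (sum(recent) / len(recent)) / (sum(prior) / len(prior)) - 1
print(f"last week vs prior baseline: {change:+.0%}")  # +21%
```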

The next place I look for a comparison is here:


Above, I'm able to see my busy jobs: how many times they are running, how much data they are processing and how many seconds they run for.  I know my go-live date, so that makes things easy too.
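The roll-up behind a busy-jobs view is a straightforward aggregation over individual executions, along these lines. The job names and figures are invented for the sketch.

```python
# Hedged sketch: roll individual job executions up into run count,
# rows processed and total seconds per job. All data here is invented.
from collections import defaultdict

executions = [  # (job, rows_processed, seconds)
    ("R09801", 1000, 40),
    ("R09801", 1200, 45),
    ("R43500", 50, 5),
]

totals = defaultdict(lambda: [0, 0, 0])  # job -> [runs, rows, seconds]
for job, rows, secs in executions:
    totals[job][0] += 1
    totals[job][1] += rows
    totals[job][2] += secs

for job, (runs, rows, secs) in sorted(totals.items()):
    print(f"{job}: runs={runs} rows={rows} seconds={secs}")
```

Grouping by queue instead of by job gives the queue-by-queue view mentioned below.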

I can do this on a queue by queue basis if needed too.

We can also quickly see that users are using all the parts of the application that they should be, compared with the previous period - this covers user count, pages loaded and time on page.






From a webpage performance point of view, we can report on the actual user experience.  This tells us the page download time, the server response time and the page load time - all critical signals for a healthy system.
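To make the three signals concrete, here is how they relate when derived from Navigation-Timing-style browser timestamps. The millisecond values are invented for illustration.

```python
# Hedged sketch of the three page-timing signals, computed from
# Navigation-Timing-style timestamps (milliseconds; values invented).
t = {
    "requestStart": 100,   # browser sends the request
    "responseStart": 350,  # first byte back from the server
    "responseEnd": 500,    # last byte of the page received
    "loadEventEnd": 1400,  # page fully loaded and rendered
}

server_response = t["responseStart"] - t["requestStart"]  # time to first byte
page_download = t["responseEnd"] - t["responseStart"]     # transfer time
page_load = t["loadEventEnd"] - t["requestStart"]         # end-to-end load

print(server_response, page_download, page_load)  # 250 150 1300
```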

All of these reports can be sent to you on a weekly or monthly cadence to ensure that you know exactly how JDE is performing for you.
