Friday 24 September 2021

Orchestration caching frustration - Cross Reference

Perhaps this is just me and my dodgy use case.

I'm using Jitterbit and JD Edwards orchestrations to synchronise the address book between MSFT CEC and JD Edwards.  Nice and simple.

I have a JB developer doing his thing and I said that I would donate my time to do the orchestration development - coz I'm a nice guy.

It all started well, but it is not ending well.




A super simple start...  well...  There are no fields in the address book that are long enough to take the 40-ish character CEC unique ID. I thought that an easy thing to use would be a cross reference.  That is designed for this purpose... right?

So I start by checking for my new AIS cross reference; if it does not exist, I create it with the AN8 record.
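To make that concrete, here's a rough sketch of how the add branch could be driven over the standard AIS REST endpoints (/tokenrequest and /orchestrator/{name}). The orchestration names, credentials, URL and field names below are placeholders I've made up for illustration - they are not the real ones from this project.

```python
# A rough sketch only - orchestration names, credentials and field names are
# placeholders. The /tokenrequest and /orchestrator/{name} endpoints are the
# standard AIS ones; response shapes can vary by tools release.
import requests

AIS = "https://ais.example.com/jderest"   # placeholder AIS base URL
CREDS = {"username": "JDEUSER", "password": "***", "deviceName": "cec-sync"}

def get_token():
    """Request an AIS session token."""
    r = requests.post(f"{AIS}/tokenrequest", json=CREDS)
    r.raise_for_status()
    return r.json()["userInfo"]["token"]

def sync_add(cec_id, alpha_name, token):
    """Check the cross reference for the CEC ID; create the AN8 (and the
    cross reference entry) if it does not exist yet."""
    lookup = requests.post(
        f"{AIS}/orchestrator/orch_LookupCECXref",      # hypothetical orchestration
        json={"token": token, "cecId": cec_id},
    ).json()

    if lookup.get("an8"):
        return lookup["an8"]

    # Not found: create the address book record and register the cross reference
    created = requests.post(
        f"{AIS}/orchestrator/orch_CreateABAndXref",    # hypothetical orchestration
        json={"token": token, "cecId": cec_id, "alphaName": alpha_name},
    ).json()
    return created.get("an8")
```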




Easy, hey?  Then I do a similar check for edit, and on a delete I delete both!  The perfect crime.
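A matching sketch of the delete branch, continuing the made-up placeholders from the add sketch above: look up the AN8 from the cross reference, then remove both the address book record and the cross reference entry.

```python
# Sketch of the delete branch (same placeholder names as the add sketch above).
def sync_delete(cec_id, token):
    lookup = requests.post(
        f"{AIS}/orchestrator/orch_LookupCECXref",
        json={"token": token, "cecId": cec_id},
    ).json()
    an8 = lookup.get("an8")
    if an8:
        requests.post(f"{AIS}/orchestrator/orch_DeleteAB",    # hypothetical
                      json={"token": token, "an8": an8})
        requests.post(f"{AIS}/orchestrator/orch_DeleteXref",  # hypothetical
                      json={"token": token, "cecId": cec_id})
```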



And finally delete



It's all fine until you string it all together.

Add:



Works fine.  The cross reference step says it aborted, but it's configured to continue - this is dodgy.  We can see that the AN8 has been created and there is now a value in the cross reference.


And the AN8:




I can now run all of my edits and they all work well.  The address changes go through, and the cross reference picks up the added record without an issue.


I run my delete:


I can see that the cross reference has been deleted and the AN8 is gone.



You can see above that 06 is now missing.  BUT - if I try to create it again...  everything craps out, because the value is cached in AIS.



Personally, I think that is a bug.  If we are using highly dynamic reference data (which is reasonable), then I believe that the cache should stay current with the underlying list.

You need to clear the JAS cache for this to work - crazy, I feel.
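Until that behaviour changes, one way to see (or guard against) the stale value is to not trust the cross reference blindly and to check F0101 directly via the standard AIS dataservice. A rough sketch below, continuing the placeholders from the earlier sketches; the response shape shown is typical for a data browse but should be verified against your tools release.

```python
# Defensive sketch: confirm the F0101 record still exists before trusting the
# AN8 handed back by the (possibly cached) cross reference.
def an8_exists(an8, token):
    payload = {
        "token": token,
        "targetName": "F0101",            # Address Book Master
        "targetType": "table",
        "dataServiceType": "BROWSE",
        "returnControlIDs": "F0101.AN8",
        "maxPageSize": "1",
        "query": {
            "autoFind": True,
            "condition": [{
                "controlId": "F0101.AN8",
                "operator": "EQUAL",
                "value": [{"content": str(an8), "specialValueId": "LITERAL"}],
            }],
        },
    }
    resp = requests.post(f"{AIS}/dataservice", json=payload).json()
    rowset = (resp.get("fs_DATABROWSE_F0101", {})
                  .get("data", {})
                  .get("gridData", {})
                  .get("rowset", []))
    return len(rowset) > 0
```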

Anyway, this took me quite a while to find, and it makes me think I cannot use cross references.









Friday 17 September 2021

Yay - we went live... But how can I measure success?

Go-lives are exciting, but I'm going to cover off go-lives for upgrades - which can be more exciting in some respects.  Why?  Well, everyone knows how to use the system, so it's generally going to be hit pretty hard, pretty early.  Your users will also have their specific environment that they want to test, and when you mess up their UDOs and grid formats - you are going to hear about it.

I'm focused on how you, as a management team, know that your upgrade is a success.

I have a number of measures that I try to use to indicate whether a go-live has been successful:

  1. Number of issues raised - important, but often issues do not get raised.
  2. Interactive system usage
    1. How many users, applications, modules and jobs are being run, and how that compares with normal usage.
    2. Performance - I want to know what is working well and what needs to be tuned.
    3. Time on page - by user and by application.
    4. Batch and queue performance and saturation - to ensure that my jobs are not queuing too much.
Here are some example reports of these metrics, which allow you to define success, measure it and improve.



I find the above report handy, as it shows me a comparison of the last week to the last 50 days...  So I get instant trend information.  You can see that I can drill down into ANY report to quickly see what its runtime was over the last 50 days.  The other nice thing is that the table shows you exactly what is slower and by how much.  I can spend 5 minutes and tell a client all of the jobs that are performing better and which ones they need to focus on.
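If you want to reproduce the gist of that calculation yourself, here's a rough sketch in Python, assuming you've already extracted job history (report name, start and end timestamps) to a CSV - the column names are purely illustrative.

```python
# Sketch: compare average runtime per report over the last 7 days vs the last
# 50 days and show which jobs got slower and by how much.
import pandas as pd

jobs = pd.read_csv("job_history.csv", parse_dates=["start_time", "end_time"])
jobs["runtime_sec"] = (jobs["end_time"] - jobs["start_time"]).dt.total_seconds()

now = jobs["end_time"].max()
last_50 = jobs[jobs["end_time"] >= now - pd.Timedelta(days=50)]
last_7 = jobs[jobs["end_time"] >= now - pd.Timedelta(days=7)]

baseline = last_50.groupby("report_name")["runtime_sec"].mean()
recent = last_7.groupby("report_name")["runtime_sec"].mean()

compare = pd.DataFrame({"last_50_avg": baseline, "last_7_avg": recent}).dropna()
compare["delta_sec"] = compare["last_7_avg"] - compare["last_50_avg"]
compare["delta_pct"] = 100 * compare["delta_sec"] / compare["last_50_avg"]

# Jobs that are slower this week than the 50-day baseline, worst first
print(compare.sort_values("delta_pct", ascending=False).head(20))
```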

The next place I look for a comparison is here:


Above, I'm able to see my busy jobs: how many times they are running, how much data they are processing, or how many seconds they are running for.  I know my go-live date, so that makes things easy too.
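The same extract can give you the busy-job view. A sketch below, with a made-up go-live date and a made-up rows_processed column standing in for "how much data".

```python
# Sketch: busiest jobs before vs after go-live - run count, rows processed and
# total runtime. The go-live date and the rows_processed column are illustrative.
import pandas as pd

GO_LIVE = pd.Timestamp("2021-09-17")   # illustrative go-live date
jobs = pd.read_csv("job_history.csv", parse_dates=["start_time", "end_time"])
jobs["runtime_sec"] = (jobs["end_time"] - jobs["start_time"]).dt.total_seconds()
jobs["period"] = jobs["start_time"].ge(GO_LIVE).map({True: "post", False: "pre"})

summary = (jobs.groupby(["report_name", "period"])
               .agg(runs=("runtime_sec", "size"),
                    rows=("rows_processed", "sum"),
                    seconds=("runtime_sec", "sum"))
               .unstack("period"))

# Busiest jobs since go-live, by run count
print(summary.sort_values(("runs", "post"), ascending=False).head(20))
```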

I can do this on a queue by queue basis if needed too.

We can also quickly see that users are using all of the parts of the application that they should be, compared with the previous period - this covers user count, pages loaded and time on page.
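A rough sketch of that comparison, assuming a page-view extract with illustrative column names (user, application, timestamp and time on page).

```python
# Sketch: per-application usage this week vs the previous week - distinct users,
# pages loaded and total time on page. Column names are illustrative.
import pandas as pd

views = pd.read_csv("page_views.csv", parse_dates=["viewed_at"])
end = views["viewed_at"].max()

def summarise(frame):
    return frame.groupby("application").agg(
        users=("user_id", "nunique"),
        pages=("user_id", "size"),
        time_on_page_sec=("time_on_page_sec", "sum"))

this_week = summarise(views[views["viewed_at"] > end - pd.Timedelta(days=7)])
prev_week = summarise(views[(views["viewed_at"] <= end - pd.Timedelta(days=7)) &
                            (views["viewed_at"] > end - pd.Timedelta(days=14))])
print(this_week.join(prev_week, lsuffix="_this", rsuffix="_prev"))
```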






From a web page performance point of view, we can report on the actual user experience.  This tells us the page download time, the server response time and the page load time - all critical signals for a healthy system.
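And a tiny sketch of how you might summarise those three signals from an extract - again, the column names are illustrative and depend on whatever captures the browser timings for you.

```python
# Sketch: percentile summary of the three page health signals.
import pandas as pd

perf = pd.read_csv("page_performance.csv")
signals = ["page_download_ms", "server_response_ms", "page_load_ms"]
print(perf[signals].describe(percentiles=[0.5, 0.9, 0.95]))
```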

All of these reports can be sent to you on a weekly or monthly cadence to ensure that you know exactly how JDE is performing for you.