Monday, 18 July 2016

Zero downtime upgrade 9.1 to 9.2 - work with me!

This is going to be a soapbox post, but sometimes you need to get up there and do that.

I think that being agile and dynamic is important, and so upgrading your ERP regularly is important too.

One of the main problems with regular upgrades is the time it takes to test: the people who do the testing, everything about UAT, seems to be painful.  There is also the big bang nature of JDE upgrades, where everything needs to be done at once.  Wouldn't it be nice to be able to upgrade a portion of JDE to 9.2 (AP, for example), do all the testing and training, and then choose the next module for retrofit testing and training - keeping both 9.1 and 9.2 active at any point in time?

I think a hybrid blue / green deployment model is an option for JD Edwards, especially for the upgrade between 9.1 and 9.2.

What do I mean by blue / green deployment?  I mean having both application releases talking to a single database.  Be live on 9.1 and 9.2 at the same time, and carefully control the processes that run on 9.2, slowly migrating users from 9.1 to 9.2 as the system is ready.  NO big bang.  Users come over when the retrofitting is done and the testing is complete.  This could be done with an additional URL, or it could even be done with some really smart menus (calling the 9.2 application from a 9.1 menu as a link, as opposed to an app) [if your cookies and sites were set up correctly].

I'm still advocating testing, lots of testing…  But I'm also advocating making the transition gentler, with more production-based testing and better monitoring from the IT team to ensure that things are not going wrong.  There are holes in all IT testing, and UAT has its problems: integrations are not complete, data sets can be different - there are always issues with testing.

So let's say I worked out which tables are changed between 9.1 and 9.2 - imagine I created a list for you:

('F04514WF','F07600','F07601','F078504','F07855','F30UI004','F30UI008','F30UI012','F31B03E','F41UI001','F42565','F427UI01','F4801Z1','F54HS01','F54HS01M','F54HS02','F54HS02M','F54HS03','F54HS03M','F54HS06','F54HS06M','F740018A','F74405','F74B200','F74L920','F74RUI31','F74RUI41','F74S72','F74S79','F75I100','F75I10A','F75I15A','F75I20A','F75I344Y','F75I396B','F75Z0005','F7608B','F76B016')
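Working out which of those tables actually hold data is easy to script.  Here's a minimal sketch using SQLite as a stand-in for the real Oracle or SQL Server instance JDE would sit on (the table names and the two sample tables are illustrative only - in practice you'd point this at your business data schema and feed it the full list above):

```python
import sqlite3

# Abbreviated subset of the full changed-table list above
CHANGED_TABLES = ['F04514WF', 'F07600', 'F07601']

# In-memory database standing in for the JDE business data source;
# pretend only F07600 and F07601 are installed, and only F07600 has rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE F07600 (ID INTEGER)")
conn.execute("INSERT INTO F07600 VALUES (1)")
conn.execute("CREATE TABLE F07601 (ID INTEGER)")

def tables_with_data(conn, tables):
    """Return the subset of changed tables that exist and contain rows."""
    hits = []
    for t in tables:
        try:
            (n,) = conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()
        except sqlite3.OperationalError:
            continue  # table not installed in this instance - nothing to convert
        if n:
            hits.append(t)
    return hits

print(tables_with_data(conn, CHANGED_TABLES))  # ['F07600']
```

Anything the scan returns is a table that needs trigger-based retrofitting; anything empty or absent can just be converted in place.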

You then determined whether you had data in any of these tables.  You then did some smarts to retrofit the TC (table conversion) logic into a TER or temp table trigger (like 21CFR11 auditing, of sorts) and made the 9.1 transactions populate the additional columns in 9.2, for example.  You could also have a test in your trigger to say “if 9.2 system user then ignore trigger, else fire”, because you only want your TC trigger to fire when 9.1 connects - 9.2 will be native with the new table columns.
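The trigger idea above can be sketched in miniature.  This uses SQLite (again as a stand-in for the real database - you'd write this as a PL/SQL or T-SQL trigger in practice), with a hypothetical table, column, and system-user naming; F54HS01 is taken from the list above, but NEWCOL, the 'JDE92' user value, and the 'DEFAULT92' conversion value are all invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical 9.2-shaped table: NEWCOL is the column added by the 9.2
# table conversion, SYUSER identifies which release wrote the row
cur.execute("CREATE TABLE F54HS01 (DOCO INTEGER, SYUSER TEXT, NEWCOL TEXT)")

# Retrofit the TC logic as a trigger: only fire when the writer is NOT
# the 9.2 system user, since 9.2 populates the new column natively
cur.execute("""
CREATE TRIGGER trg_f54hs01_tc
AFTER INSERT ON F54HS01
FOR EACH ROW
WHEN NEW.SYUSER <> 'JDE92'
BEGIN
  UPDATE F54HS01 SET NEWCOL = 'DEFAULT92' WHERE rowid = NEW.rowid;
END
""")

# A 9.1 transaction doesn't know about NEWCOL; the trigger back-fills it
cur.execute("INSERT INTO F54HS01 (DOCO, SYUSER) VALUES (1, 'JDE91')")
# A 9.2 transaction writes the column itself; the trigger stays out of the way
cur.execute("INSERT INTO F54HS01 (DOCO, SYUSER, NEWCOL) VALUES (2, 'JDE92', 'native')")

rows = cur.execute("SELECT DOCO, NEWCOL FROM F54HS01 ORDER BY DOCO").fetchall()
print(rows)  # [(1, 'DEFAULT92'), (2, 'native')]
```

The WHEN clause is the important bit: it's the “if 9.2 system user then ignore trigger, else fire” test, so the two releases can share the table without the conversion logic double-firing.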

Okay, you still with me?  We've got two systems, two DDs, two OLs, two server maps, two path codes - but only one set of data and control.

There would need to be a moratorium on development in 9.1 during the project, but that could be managed.

There would need to be some synchronisation of system tables between the systems (although I’d be tempted to use views – as I don’t think that any of them are changing).
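The view idea is simple enough to sketch.  Again in SQLite as a stand-in (real JDE system tables live in schemas like SY910/SY920, so here the schema is folded into the table name for illustration; F0092 is a real JDE user profile table, but the columns are simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical simplified 9.1 system table (real F0092 has many more columns)
conn.execute("CREATE TABLE SY910_F0092 (USERID TEXT, DESCRIPTION TEXT)")
conn.execute("INSERT INTO SY910_F0092 VALUES ('SHANNON', 'Developer')")

# Instead of synchronising a duplicate copy, 9.2 reads the 9.1 table
# through a view - one source of truth, no sync job to go stale
conn.execute("CREATE VIEW SY920_F0092 AS SELECT * FROM SY910_F0092")

# A user added on the 9.1 side is immediately visible to 9.2
conn.execute("INSERT INTO SY910_F0092 VALUES ('NEWUSER', 'Added after go-live')")
print(conn.execute("SELECT USERID FROM SY920_F0092 ORDER BY USERID").fetchall())
# [('NEWUSER',), ('SHANNON',)]
```

This only works for system tables whose layout is identical between the releases, which is why it's limited to the ones that aren't changing.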

You'd need to be very careful with UDOs; they might cause some problems between the two releases.

You would need to be careful with single threaded queues too, as they would exist on two systems.

But, at the end of the day, you could have 9.2 and 9.1 running against a single data source, which would allow you to gradually migrate functionality without a big bang.

Technology like AWS would make this very easy too, as you could scale up and scale down the environments as modules moved over from one system to the other.

Eventually, when everyone was using 9.2, you could phase out the use of 9.1 altogether, using DNS changes or whatever you wanted to do.

Your investment in this is table triggers for the data tables that have changed and contain data, plus some work ensuring that things like passwords and default printers are kept in sync too.  You'd need to keep some additional machines up and running while you have two environments going, but that is it.

You could also put all of this procedure through change control and the standard JDE SDLC (apart from the DB triggers and things) to ensure that environments would run concurrently in UA and PY first.

I think it could work – what about you?
