Wednesday, 18 December 2019

Modernizing the shop floor - AI + camera = actionable insights

I was told that the average piece of equipment on the shop floor in Australia is 21 years old. Wow – they just don't make them like they used to.  I tried to back this up with some sources: this article (https://www.hpe.com/us/en/insights/articles/4-ways-to-bring-legacy-manufacturing-equipment-into-the-iot-age-1903.html) says 20-25 years, so that'll do.

So how can a traditional manufacturing organisation adopt the modern processing capabilities being facilitated by cloud compute and, of course, IoT?  How can the shop floor get processes that tell you when something is wrong, or validate that an event has occurred, without expensive retrofitting of custom hardware?  I think it's very possible – to the point of being simple.  For the purposes of this blog entry, I'm going to talk specifically about image processing to facilitate shop floor productivity.

We've been working on some pretty cool technology in the background.  As a group at www.fusion5.com.au we have been focused on keeping up to date with the latest cloud algorithms that we can call to create value.  One that I really like is the AI processing that you can run over your images (in bulk).

Here is a sample image - let's use the cloud to extract some value from it...  I appreciate that this is not the shop floor, but I'm getting there...  And I love to explain things with images of beer!


Nice, this is a beer list from a great pub that I like to visit.  If I wanted to index the contents so that they were searchable, I'd need to employ some OCR to rip the text out.  I have a demo here:


You should be able to login and use it.

Look what it extracts - amazing!

[ "carringbush hotel victoria pale ale 4.2% 4,6% 3.5% temp bad shepher sold out! thund road sorry.. * drayght bodriggy * speccy juice * krush pale * summer ale *mornington \"twc bir stc gr. 4.7% * kaiju! tropical 'ko 4.7% mt.goat solts out \"r session ipa 5.5% kaiju! signal sun s fixation ipa * xxxx gold 6.4% of 3.5% *coldstream sold pple cider 5% 45% american pale aleale 5.7% billy the mid & raspberry 3.57 grapefruit 150ml 7/\" out \"mt.goat * steam ale sorry! carringbush hotel pe closed on xmas boxing day, new *monteiths r's eve and new year's day. will be open all mt.goat infysed days from 12pm until late. ballast sculpin point ipa * feral biggie juice neipa 6% matso's hard lemon lemonade alcoholic 4,5%" ]

Let's look at what else can be done.



This is a great example of some serialized boxes from my medicine cabinet, probably expired.  But it had me thinking about how this technology really can make things easier.  People could take a picture of their medicine cabinet and this could tell them what has expired – or, quickly, what each medicine does!

[
"prescription on keep out of read 30 tab solone prednisolone each table prednisolo inova l10,12 helpo nsw 2067 code no gujdrugs- la064303 190311b 2022.02 lot 6104550 gtinc01) : 18901079058253 b (10):gvh017006 exp(17):06/2020 sr no (21) : tbukns31589w your assurance of quality healthcare. this product has been developed and manufactured to the highest standards. johnson & johnson pacific australia · new zealand 45 jones street, ultimo nsw 2007, australia and auckland, new zealand for more information call toll free: 1800 029 979 australia, 0800 446 147 new zealand or visit www.codral.com.au or www.codral.co.nz made in australia from imported and local components. 9 300607 1805 10\" batch no: 2 8 7963 11/21 expiry: 60027607 lot 01-2020 exp 2402000706 5152506623 astrazeneca py77012 may2020"
]

As you can see from the above, you get the serials and everything else on the labels.  You can easily turn this into indexed data and search on it, as well as auditable data.



Imagine that you have a camera mounted on a production line and this camera is going to take a video feed of everything in its field of vision.

Then, when the data changes, it is going to call a webhook.  Simple…  Right? 

The power of this very simple process is amazing.

We could have a basic web camera on the shop floor that converts ANY analogue information (a dial or a readout) into a digital signal.  This information is then ready for your data lake or big data store.  This data is going to provide you with a wealth of information about your shop floor processes.  For example, say you were a food exporter and you needed to ensure that all of your labels had been printed properly – or you'd lose your export licence.  This simple solution could do exactly that.

Running an AI fuelled OCR process on the label above gives the following output:

[
"any meat works co long flat road anytown nsw boneless beef la(*l)+ product of australia *ygs* striploin znuy701 over 4.5kg iw/vac 32307 keep chilled o°c (01) 9 9316710123453 (3101) 000262 (13) 150310 (21)41457354 品質保持期限090613 s/o: 3345rt5 austral 9999 nspecte pkd on 10-mar-2015 17:26 best before 22-may-2015 26.2kg net weight carton id 41457354 est. no.9999"
]

Great, that was simple.  I got my development team to create a bit of an interface that enables me (or anyone) to extract the relevant data from the OCR output – of course, I fall back on a regex:

.*\ss\/o:\s(?<serial>\S+)\s.*pkd\son\s(?<packedDate>\d{1,2}-\S{3}-\d{4}).*\s(?<weight>\d+\.\d+)kg.*

This has been defined to take any picture: the AI tears it down, and then the regex is applied to extract 3 named groups of data:

Named Groups
{
"serial": "3345rt5",
"packedDate": "10-mar-2015",
"weight": "26.2"
}
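If you want to try the extraction yourself, here is a minimal sketch in Python.  Note that Python uses `(?P<name>...)` for named groups rather than the `(?<name>...)` syntax shown above; the sample text is trimmed from the label's OCR output:

```python
import re

# The regex from above, with named groups converted to Python syntax
PATTERN = re.compile(
    r".*\ss/o:\s(?P<serial>\S+)\s.*pkd\son\s"
    r"(?P<packedDate>\d{1,2}-\S{3}-\d{4}).*\s(?P<weight>\d+\.\d+)kg.*"
)

ocr_text = ("... s/o: 3345rt5 austral 9999 nspecte pkd on 10-mar-2015 "
            "17:26 best before 22-may-2015 26.2kg net weight ...")

match = PATTERN.match(ocr_text)
if match:
    print(match.groupdict())
    # {'serial': '3345rt5', 'packedDate': '10-mar-2015', 'weight': '26.2'}
```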

Awesome, so we have 3 named groups identified and extracted.  We now also have an interface to program our orchestration endpoint:


Okay, so you do not have my entire basic Auth string, but you can see what is being done.

From my named extraction groups, I can place those variables into the orchestration call in JD Edwards (of course, this could be any webhook).

And with this simple piece of code, I'm able to complete finished goods in JD Edwards from the installation of a web camera and some smart code behind it.
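To make that final step concrete, here is a hedged sketch of POSTing the named groups to an orchestration.  The URL pattern follows the standard AIS orchestrator endpoint (/jderest/orchestrator/<name>, as used later in this post), but the host, orchestration name and input names below are placeholders – your orchestration defines its own inputs:

```python
import requests

# Placeholder host and orchestration name for illustration
AIS_URL = "https://ais.example.com/jderest/orchestrator/orch_CompleteFinishedGoods"

# Values extracted by the regex above
payload = {
    "serial": "3345rt5",
    "packedDate": "10-mar-2015",
    "weight": "26.2",
}

# Basic Auth with a JDE user - must be enabled on the AIS server
resp = requests.post(AIS_URL, json=payload, auth=("JDEUSER", "password"))
print(resp.status_code, resp.text)
```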

Imagine reading gauges, temperatures, movement of devices etc – all could be done with the same simple steps.

Plug in the power of the cloud (at the edge, if you must) and then use this data to drive forecasting and shop floor efficiency.

This is completely generic too – you could apply it to stocktakes very simply, using the power of AI to do the counting for you!

I'll take this to the next level with another example: box counting.  Running the OCR over an image of a stack of boxes (each labelled "free shipping") produces:
[ "aperf free free shipping free free free shipping free shipping shipping free free free shipping free free shipping shipping shipping shipping free free shipping free shipping free shipping free free shipping shipping shipping free stipping free free free shipping free free free shipping erf ship shipping shipping shipping free free lee shipping ping free free shipping shipping shipping drf" ]


With a quick bit of Excel, I find that there are 28 instances of the word "free"...  So I can assume that there are about 28 boxes.  This is a trivial example, but imagine a single high-quality image of a warehouse shelf: if it was properly faced, the ability to stocktake with a single image could be amazing.
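The counting itself is trivial once you have the OCR text – the Excel step could just as easily be a couple of lines of Python (using the OCR output above):

```python
from collections import Counter

# OCR output from the box-counting image above (truncated here)
ocr_text = ("aperf free free shipping free free free shipping free "
            "shipping shipping free free free shipping ...")

counts = Counter(ocr_text.split())
# Roughly one "free" per visible box label, give or take OCR noise
print(counts["free"])
```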

You can see that we have stood up the ability to extract ANY data from an image (I've shown some simple examples, but we can train specific models to look for quality issues or interpret images as you need) and record that data immediately into JD Edwards.  ALL you need is the camera (we partner with Meraki if you need some help) and we can do all the rest...  Extract the data from the image and get it into JD Edwards (or any other system).

At Fusion5 we have our #fastfive innovation offering, where we come to you and help you uncover innovation opportunities rapidly.  We run some workshops and can then put together some very impressive POCs for a small amount of money!






Tuesday, 19 November 2019

How far away from code current are you?

How is everyone doing keeping “code current”?

Code currency is an arduous task, but it really does not need to be.  We need to transfer the power from the dark arts of CNC to the blissfully easy world of BI reporting – surely that is the panacea for all problems?  No, I agree – it's not going to be that easy.

I’ve been working on a suite that can tell you everything you need to know about being code current, including items like:

  • How modified an object is – to help you gauge retrofit
  • How much you use the object (in terms of users, engagement time and also page loads)
  • Is the code the same as the current ESU level from Oracle?

The above points alone would allow you to look at all of your code and determine how far from pristine you are.  You could then use the usage and modification information to choose how much effort you need to put into getting yourself code current.

As soon as you make changes and promote to prod, all of the reports can be updated to reflect this…


For example, the report above is gold – a "honey pot" for determining code currency:

  • This shows you all JDE applications being used, and how much they are being used.
  • This report compares the code at this site with the latest Oracle patches (yes, true – this is what is being done) and can tell you whether the code is binary-equivalent to the latest ESUs from Oracle.
  • It only lists the programs that are not the same as the latest pristine code.
  • Clients can see what objects are being used, and also how much screen time and how many users those objects get (for the specified date range).
  • All these data points allow clients to make better decisions about which modifications to keep.  You can quickly see those that are not used by many people and have many OMW actions – mods that are not used… get rid of them!
  • You can quickly slice and dice by system code and know what needs to be tested when you get code current.

One of the nice things is that we keep a copy of pristine updates with all ESUs, and we generate hashes (like a unique signature of the code) in a cloud-based database.  We have code that will enable you to create a signature of your code and voilà – we can tell if you are the same as PS920 with ALL the ESUs.
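As a minimal sketch of the signature idea (the file layout and pristine hash values here are illustrative, not our actual tooling), hashing an extracted object and comparing it against a reference set looks like this:

```python
import hashlib

def signature(path: str) -> str:
    """Hash an object's extracted specs so that binary-identical
    code yields an identical signature."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Reference signatures for pristine PS920 + all ESUs (illustrative
# values; in practice these come from the cloud database mentioned above)
pristine = {"P42101": "9f2c..."}

# True if the site object is binary-equivalent to pristine
print(signature("objects/P42101.spec") == pristine["P42101"])
```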



You can see from the above that we can have a very similar view for reports.
We can see how often the reports are run, and only see those that have changed in the latest code from Oracle.
This allows us to see the reports that we need to retrofit.

I think that with a dashboard like the above (note that you can actually compare all object types), getting code current stops being guesswork and becomes a measurable, manageable task.

Wednesday, 16 October 2019

JDE - call an orchestration from Excel - RPA it yourself

Imagine you go to your boss and explain: 

"I just made myself redundant.  I created a pile of orchestrations and macro's in spreadsheets that does all of my mundane tasks and now I have 20 hours a week free.  and guess what, I'm not making ANY mistakes!"

You'd get promoted right?  RIGHT!  I'd promote you.

Do you ever get a spreadsheet from someone who says "just punch all of this into JDE"?  Are you like me: when you need to do something more than 6 times (I think that this is my repetition threshold), do you get a burning desire for automation?  Actually, you cannot physically bring yourself to do the task, because you know you can make it easier and more rewarding...

Well, this post might help you!

Here is everything you need to create a spreadsheet that can call JDE functionality via orchestration.

First, I create an orchestration that takes the following input:

{
  "orch_input_TimeOfReading" : "163403",
  "orch_input_dateMeasurement" : "10/01/2019",
  "orch_input_remarkForReading" : "Latest Temp",
  "orch_input_Temperature" : "36.25",
  "P1204_Version" : "",
  "orch_input_szAssetNumber" : "1007"
}

I'm not going to cover off how to create an orchestration, there is a lot of content out there.  It's easy and "point and clicky" and aimed at the functional person.  Hey, us techos are not going to be needed soon. 

The orchestration client screen looks like this - I do some testing to ensure that it's recording the data in JDE in the correct place.

Nice,  it's working.

So then I use Postman to do some independent testing - get the exact syntax, know the headers I need to set, get my auth correct...


Wow, postman is too cool – what about this for the docs:


It is amazing!!!

Back to excel:


My sheet looks like this: I have a single ActiveX button and a single field (for the obfuscated password).  Wow!

My code is like this:

This is super simple and readable - that's why I did it this way.  Also, I'm no expert at VBA, so... this is the result of 1 hour of Google and some testing.

Private Sub CommandButton1_Click()
 
  Sheet1.Cells(11, 4).Value = "Processing"
  Sheet1.Cells(11, 5).Value = 0
  CallJDEOrchestration
 
End Sub

Sub CallJDEOrchestration()
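  ' Reads connection details and measurement values from Sheet1, builds
  ' the orchestration JSON payload, then POSTs it to the AIS orchestrator
  ' endpoint with HTTP Basic authentication and writes the response back.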

  Dim URL As String
  Dim JSONString As String
  Dim objHTTP As New WinHttpRequest
 
  Dim Username As String
  Dim password As String
  Dim auth As String
 
  
  Username = Sheet1.Cells(1, 2).Value
  password = passwordTxtBox.Value
 
  auth = EncodeBase64(Username & ":" & password)
 
  
  'MsgBox auth, vbCritical, "Hello World"
 
  URL = "https://f5dv.mye1.com/jderest/orchestrator/orch_AddTempReadingForAsset"
  objHTTP.Open "POST", URL, False
  objHTTP.SetRequestHeader "Authorization", "Basic " & auth
  objHTTP.SetRequestHeader "Content-Type", "application/json"
  JSONString = "{""orch_input_TimeOfReading"" : """ & Sheet1.Cells(10, 2).Value & _
  """,""orch_input_dateMeasurement"" : """ & Sheet1.Cells(9, 2).Value & _
  """,""orch_input_remarkForReading"" : """ & Sheet1.Cells(7, 2).Value & _
  """,""orch_input_Temperature"" : """ & Sheet1.Cells(8, 2).Value & _
  """,""P1204_Version"" : ""ZJDE0001"",""orch_input_szAssetNumber"" : """ & Sheet1.Cells(6, 2).Value & _
  """}"
 
  objHTTP.Send JSONString
  Sheet1.Cells(11, 4).Value = objHTTP.ResponseText
  Sheet1.Cells(11, 5).Value = objHTTP.Status
 
End Sub

Function EncodeBase64(text As String) As String
  Dim arrData() As Byte
  arrData = StrConv(text, vbFromUnicode)

  Dim objXML As MSXML2.DOMDocument
  Dim objNode As MSXML2.IXMLDOMElement

  Set objXML = New MSXML2.DOMDocument
  Set objNode = objXML.createElement("b64")

  objNode.DataType = "bin.base64"
  objNode.nodeTypedValue = arrData
  EncodeBase64 = objNode.text

  Set objNode = Nothing
  Set objXML = Nothing
End Function



You will need to ensure that your project has the following references enabled (Tools -> References):


You should then be able to change your URL, username and password (note that the field for the password is called passwordTxtBox).

This is using Basic Auth, so that needs to be enabled on the AIS server if this is going to work.

You can find a copy here – if you want to rip it apart:


You could do some pretty amazing and complex processing in JDE directly from Excel.  And... you won't have the haters saying "but you need to do that in JDE", because you actually did.

Enjoy.


Thursday, 12 September 2019

Creating custom metrics just got a whole lot easier

The world is full of data and extracting insights from this is always a challenge.

Patterns in data can assist us in predicting the future, there is no doubt about that.  If you can determine a predictor for poor sales or poor performance, then this might enable you to be proactive the next time it happens.  That's fairly cryptic, so let me be concrete: what if I could tell you that the sales order entry screens had been run less over the last 2 weeks, and that the average number of lines processed by R42565 (invoice print) was also down over the last couple of weeks?  Well, this is a good indicator that sales are going to be down too – user behaviour as a lead indicator.  What if you could see that activity was down and talk to your staff about why it's occurring?  The same insights could be made in all modules of JD Edwards.  Everyone is looking at the transactional data – I'm looking at the user behaviour.

At Fusion5 we created ERP Analytics about 10 years ago, giving our clients some really great insights into their user behaviours.  We've augmented this recently with UBE Analytics, which allows you to see exactly what is going on in your batch job activities.  You can see rows processed and runtime – critical for evaluating performance.

Now, the combination of these two tools allows you to create the most insightful and simple reporting tools around your ERP.  You can create reports on engagement time, trend data about performance, or nice, easy-to-read gauges that all of your users can consume in e1Pages!



As you can see from the above, I have defined these custom controls in Data Studio to report on very distinct values.  I've defined the graphs to have custom ranges – this is really easy in Data Studio.


I can set colours [colors for my American readers], maximums and minimums for any piece of data that I have.  I can also filter the data.

In this instance, I can look at any data available from batch or interactive JDE usage.

Things that you can put onto any report or graph:
  • How long a user spent on a screen (name the screen, name the user if you want – or group of users)
  • How many rows a UBE processed
  • How often a UBE is run
  • How long a UBE took -  and compare months, weeks or days
  • How many times a version of a form has been loaded
  • How many pages loaded a day
  • Average server response time for loading forms in certain system codes – or all of them


Above is a list of the fields that are available for evaluating batch


Just some of the fields available for interactive

You get the picture, really easy to select the metric, define some ranges and GO!


Here we can see that I'm looking at the average runtime for UBEs over the last week, and I have defined ranges that are appropriate for this client.  I could further refine this to the UBEs that I'm interested in, like invoice print or sales update.


Here you can see your report in JDE using e1Pages.

Those colours are terrible – employ the classic JDE blue - #1e4a6dff


Or specific information in JDE itself…


Thursday, 5 September 2019

UBE Analytics - short demo - understand JDE batch performance better

I'm trying to do a few more short videos on how UBE Analytics works and what you can do with the reporting and dashboarding.  Here is the first of a number.



Let me know what you think.

As you are probably aware, UBE Analytics is a service that Fusion5 has created.  You can subscribe to this service and then extract insights from your UBE processing data.

This is really handy for comparing days or weeks of processing.  Comparing regions or actual package deployments.  Super simple.

None of the history needs to be kept in JD Edwards, it can all be moved safely and securely to the cloud.

We provide an agent that runs on premise and copies the data to the cloud.  You schedule this as often as you need - our dashboards look after the rest.


Thursday, 29 August 2019

blocking / locking... potato... potato


Blocking is a funny problem and quite often one of the last things that I look for when there are JD Edwards issues.

We've had some recent problems with some serious blocking, but the reconciliation between client and server has made the log analysis almost impossible...  What I mean is that there are often client errors with no server errors.  The web server will give up with nothing in the enterprise server logs... until there are IPC errors because queues are too large [too many BSFNs running in a queue].  But there are lessons in this.

First, we could see that there was an increase in the number of instances of COSE#1000.
ERP Analytics shows us all the historical details – wow, that is a lot of problems:


Detailed AWS CloudWatch Insights gives us unparalleled capability to query and audit all of our logs:

Can you see what is actually being done here?  It is SO amazing.  We are looking at the last X [hours|days|minutes] across all log files for 14 servers, enterprise and web…  finding the relevant client [web server] and server [app server] logs that relate to BSFN errors between the two, then showing this as a timeline.

I think that the big lesson here is adopting a consolidated approach to your logging – a Splunk-type approach.  If you adopt some consistency, then all of the advantages of monitoring and better global interrogation are opened up to you.

What we have actually done in this instance is use CloudWatch to ingest all of the JD Edwards log files.  We are consolidating Server Manager, WebLogic, enterprise server, system out and /var/log into our own log streams that we can query.
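For anyone wanting to replicate the ingestion side, here is a minimal sketch of a CloudWatch agent configuration that ships a log file into one of the log groups queried below.  The file path is illustrative only – point it at your own JDE log locations:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/u01/jdedwards/e920/log/jde_*.log",
            "log_group_name": "jde_entServerLogs",
            "log_stream_name": "{hostname}"
          }
        ]
      }
    }
  }
}
```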

An example of a fairly complex query is below.

**CloudWatch Logs Insights** 
region: ap-southeast-2 
#Here are the log files that I want to query
log-group-names: jde_wlsServerJASLogsout, jde_wlsServerJASLogs, jde_webServerLogs, jde_entServerLogs  
#looking at the last hour
start-time: -3600s 
end-time: 0s 
query-string:
```
#fields I want to look at and display on the console
fields @timestamp  ,@message,  @logStream, @log, BSFN
#search string in the logs - this is a simple example that will only match web server logs
|filter @message like 'COSE#1000'
#Allows me to create my own fields from the output and summarise based upon these with a little bit of regex magic
|parse @message /(?<date>\d{2}\s+\S{3}\s+\d{4})\s+(?<time>\S+)\s+\[(?<errorlevel>\w+)\s*\]\s+-\s+\[(?<module>\w+)\].*BSFN:(?<BSFN>\S+)\s+user:\s*(?<user>\S+)\s+Env:(?<Env>\S+)/
| sort @timestamp asc
| limit 10000
```

And the results:
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|       @timestamp        |                                                                                                                 @message                                                                                                                  |                @logStream                 |              @log              |        BSFN         |    date     |     time     | errorlevel | module  |   user    |   Env    |
|-------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------|--------------------------------|---------------------|-------------|--------------|------------|---------|-----------|----------|
| 2019-08-28 06:15:19.809 | 28 Aug 2019 07:15:19,573 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@3f09721b: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user:USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:15:19,573 | WARN       | RUNTIME | AMRE001 | JPD920UK |
| 2019-08-28 06:24:31.258 | 28 Aug 2019 07:24:31,092 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@2c3526cf: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:24:31,092 | WARN       | RUNTIME | AMI001 | JPD920UK |
| 2019-08-28 06:34:21.978 | 28 Aug 2019 07:34:21,802 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@74a86b0a: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:34:21,802 | WARN       | RUNTIME | AME001 | JPD920UK |
| 2019-08-28 06:42:52.420 | 28 Aug 2019 07:42:52,371 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@12ddb7bb: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:42:52,371 | WARN       | RUNTIME | AE001 | JPD920UK |
| 2019-08-28 06:45:25.972 | 28 Aug 2019 07:45:25,747 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@256577d3: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6024) SocID:37846 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:45:25,747 | WARN       | RUNTIME | AM001 | JPD920UK |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Entries like the above can mean many things: network problems, security problems...  But when you look at the server logs and see nothing... start to think about blocking.  Also look at the BSFN summary page for the web server in question; this will show you the BSFN runtimes - max, min and average.  If this shows the functions are generally very fast, then you know that you might have some locking problems.

Now, validate this at the database.  These statements will only return results while there is active locking / blocking - so have them ready.  These are also Oracle statements.

At the database:

Show me the blocking:

select
   (select username from v$session where sid=a.sid) blocker,
   a.sid,
   ' is blocking ',
   (select username from v$session where sid=b.sid) blockee,
   b.sid
from
   v$lock a,
   v$lock b
where
   a.block = 1
and b.request > 0
and a.id1 = b.id1
and a.id2 = b.id2;

What are they doing / blocking / locking?

select
   c.owner,
   c.object_name,
   c.object_type,
   b.sid,
   b.serial#,
   b.status,
   b.osuser,
   b.machine
from
   v$locked_object a,
   v$session b,
   dba_objects c
where
   b.sid = a.session_id
and a.object_id = c.object_id
and b.sid = 1340;

All in one – with JDE information: machine, process type and statement.

select
   c.owner,
   c.object_name,
   c.object_type,
   b.sid,
   b.serial#,
   b.status,
   b.osuser,
   b.machine,
   b.process,
   b.program,
   b.sql_id,
   replace(d.sql_text, chr(10), '') stmt
from
   v$locked_object a,
   v$session b,
   dba_objects c,
   v$sqltext d
where
   b.sid = a.session_id
and a.object_id = c.object_id
and d.address = b.sql_address
and b.sql_hash_value = d.hash_value
and b.sid in
   (select a.sid
    from
       v$lock a,
       v$lock b
    where
       a.block = 1
    and b.request > 0
    and a.id1 = b.id1
    and a.id2 = b.id2);

If you see a long list of results (or even a single line), you might want to consider your next actions.  Blocking can be pretty normal, but it can also be the result of a deadlock (surely that should be a movie title).  A deadlock occurs when two sessions are fighting for the same locks.  Interestingly, I've seen locks last for hours without being deadlocks.  Another very interesting problem that does occur is that a client (JDBC) process can be locked by a server process (jdenet_k).  I've seen this quite a bit, and I've even programmed enough bad code to create it myself - ha!

Things to remember: if a jdenet_k process has been blocking for > 5 minutes, there is probably something wrong (the client will have given up - the timeout is probably 90 seconds)... but the BSFN is still running.  The actual human that tried the first time is probably trying again in another JAS session...  Things could be escalating...  Generally I think you can kill those sessions.  UBEs are a little different - leave them alone.
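If you do decide to kill a blocking session, it's the classic Oracle syntax, fed with the sid and serial# that the queries above return (the values here are illustrative):

```sql
alter system kill session '1340,56789' immediate;
```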

If you see Java doing the locking, you need to make the call.  Try to get back to the JDE session that is doing the damage and see what the user is/was doing [they'll generally deny it].  It can be little things that cause big locks.


Monday, 26 August 2019


Tip 10: Continuous improvement

It’s a theme that is not going to go away.  If you have implemented all of the shiny new toys from JD Edwards, then you need to show ROI. 

This is a theme that we are going to hear a lot in the coming years.  Even the way Oracle is releasing JD Edwards functionality follows the ideals of continuous delivery by committing to release 9.2 until 2030.  We are getting improvements continuously, not in major releases. 

I think there are 3 ways that you can make a difference to your business in relation to adopting continuous improvements with JD Edwards.

Finding and implementing RPA (Robotic Process Automation) Opportunities

There is so much opportunity here, and all of the tools are at your fingertips.  You can use ERP Analytics to find processes (applications / sequences) that are run frequently.  Use this data to go back to the business and observe what the end users are doing.  For instance, if you see that P42101 is being run 2,000 times a day – look for opportunities to improve this process.  This could be EDI, this could be spreadsheet macros that call an orchestration.  What’s an orchestration I hear you ask? 

Orchestration is the ability to turn any piece of JD Edwards functionality into an API.  An API that is easy to call and can be authenticated with the user’s username and password.  So, exposing functionality to an Excel Macro – would be very easy.  You could write an orchestration to enter a sales order (in my example) and then a smart macro to call the orchestration with the data on the spreadsheet.  It could prompt for a username and password.  If your users are being sent orders in spreadsheets – you may have just increased their productivity and reduced a significant amount of human errors.

RPA implementation can be for simple or complex processes.  Look for repetition and eliminate it, as an ex-programmer – if you see repetition in code – there are inefficiencies in that code.  ERP Analytics will then allow you to measure the success of your RPA, as the usage of the applications should go down with good RPA implementation.

Orchestration is free to implement and can make a huge difference to mundane tasks.

Continually optimise your architecture

This may be more relevant for public cloud implementations, but let’s be honest – most are public cloud implementations.  You must continually drive for reduced hosting costs for all of the JD Edwards assets.  Quite often this is difficult, unless you have architected your solution for the cloud, turned the monolithic JD Edwards into an elastic cloud tenant.  This can be done.
Fusion5 has created what we think is a world first elastic JD Edwards cloud formation for AWS.  This architecture has the ability to expand and contract with load and demand.  We are able to define the rules to create new web servers and new batch servers and then retire them when they are not needed.  This allows our clients to have a very efficient architecture and if they feel that they are paying too much, we can reduce the size and number of machines accordingly.

Figure 14: Choosing between re-platforming and rehosting can be difficult, a re-platform is going to provide payback over time

A good illustration of the options available to you when you migrate is above.  A lift and shift [rehost] is a simple project, but it will not allow you to get true cloud benefits from native constructs (cheaper storage, elasticity or additional security).  If you do a re-platform (as I recommend) you can reshape JD Edwards into a much more flexible cloud tenant.
If you did a rehost, I'd guess you might implement about 8 cloud constructs (EC2, EBS, ALB, multiple AZs, EFS if you are lucky), whereas if you were re-platforming, you might use RDS, EC2, EFS, ALB, ASG, CloudWatch, Step Functions, Route 53, S3, Launch Templates, Target Groups and more!
It is much easier to get savings out of a re-platformed architecture.
At a number of sites I’ve seen savings of more than 50% month on month when we work hard at cloud cost reduction.

Continue to update JD Edwards

Patches for JD Edwards are now continuous, so your adoption should also be continuous.  I recommend making a plan, with a schedule of when you are going to take patches, when you are going to test patches and when you are going to put them into prod.  Start simple, choose twice a year and then work backwards for how long you are going to test, how long for retrofit etc. 
If you’ve been lucky enough to re-platform (as above) then you are going to have some distinct advantages when it comes to deployment.  That is that changes can be deployed and tested much more rapidly and actually, continuously.  If you have a flexible cloud implementation you could build and deploy an alternate package for production and ease this out into the user community.  Our AWS cloud formation allows us to deploy packages without outages, we can do this on a schedule and therefore allow environments to consume change at their own pace.  If there is an issue, we can back it out immediately and fix it.
Figure 15:  Sample continuous deployment schedule, simplicity is important.
A flexible architecture allows you to be more aggressive with your consumption of change and keep more “up to date” with the latest code from Oracle.



Sunday, 25 August 2019


Tip 9: Monitor relentlessly

Tips 9 and 10 are closely related, but this tip is all about feedback.  If you know things are going well, that's great.  If you know things are going poorly, that is great too – because you can fix it.  The worst case scenario is that things are going pear-shaped in the background and you only hear about it when a user raises a case.  You need to be KPI'd on finding errors before your users – period.

How can you find errors before your users?  Here are a couple of tricks that Fusion5 implements for our clients:
ERP Analytics
Sometimes referred to as the black box for your ERP, we use this to monitor performance and usage of JD Edwards. It records every page load, by every user – every minute of every day.  This information is incredibly powerful for benchmarking and comparing ANY change in your entire enterprise.
UBE Analytics
Having access to the runtime, rows processed, user and server information for every batch job allows us to continually monitor the critical two tiers of JD Edwards.  Reading this alongside ERP Analytics gives more information on where performance problems might be, and another point of data to compare with.
Log monitoring
Fusion5 has a very advanced cloud formation in AWS which utilises CloudWatch to monitor all log files, UBEs and JD Edwards connections.  This enables us to graph and monitor user connections and concurrent UBEs, and search ANY logfile for ANY error – ever.  This is a single console for all logs across JD Edwards.  This approach and consistency can be used with many different installation types, not just limited to AWS.
DB growth monitoring
Keeping an eye on table-by-table database growth is critical for understanding if a process has gone rogue.  It's also critical for maintaining consistent database performance.  Regular rowcount and size reporting (a sketch follows below) will ensure that you can deliver a service level to your users that is acceptable.  Maintaining your data size is important for costs and restoration times.
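As a sketch of that size reporting, assuming an Oracle database with the usual JDE production data schema (adjust the owner for your installation):

```sql
-- Table-by-table size for the JDE production data schema
select segment_name, round(sum(bytes)/1024/1024) as mb
from   dba_segments
where  owner = 'PRODDTA'
and    segment_type = 'TABLE'
group  by segment_name
order  by mb desc;
```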

Figure 12: Sample custom dashboard showing metrics that are 100% relevant for JD Edwards

Figure 13: AWS log insights provides intelligence that would previously be impossible to find.  This shows a graphical representation of errors and type of errors over 8 separate web servers.

