Wednesday, 16 October 2019

JDE: call an orchestration from Excel - RPA it yourself

Imagine you go to your boss and explain: 

"I just made myself redundant.  I created a pile of orchestrations and macro's in spreadsheets that does all of my mundane tasks and now I have 20 hours a week free.  and guess what, I'm not making ANY mistakes!"

You'd get promoted right?  RIGHT!  I'd promote you.

Do you ever get a spreadsheet from someone who says “just punch all of this into JDE”?  Are you like me: when you need to do something more than 6 times (I think that this is my repetition threshold), you have a burning desire for automation?  Actually, you cannot physically bring yourself to do the task, because you know you can make it easier and more rewarding...

Well, this post might help you!

Here is everything you need to create a spreadsheet that can call JDE functionality via orchestration.

First, I create an orchestration that takes the following input:

  "orch_input_TimeOfReading" : "163403",
  "orch_input_dateMeasurement" : "10/01/2019",
  "orch_input_remarkForReading" : "Latest Temp",
  "orch_input_Temperature" : "36.25",
  "P1204_Version" : "",
  "orch_input_szAssetNumber" : "1007"

I'm not going to cover how to create an orchestration; there is a lot of content out there.  It's easy and "point and clicky", and aimed at the functional person.  Hey, us techos are not going to be needed soon.

The orchestration client screen looks like this - I do some testing to ensure that it's recording the data in JDE in the correct place.

Nice,  it's working.

So then I use Postman to do some independent testing - get the exact syntax, know the headers I need to set, get my auth correct...

Wow, Postman is too cool – what about this for the docs:

It is amazing!!!
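If you want the same sanity check without Postman, the request can be sketched in Python.  This is a minimal sketch only - the AIS URL, orchestration name and credentials below are made-up placeholders you would replace with your own:

```python
import base64
import json
import urllib.request

# Placeholder values - substitute your own AIS server, orchestration and credentials.
AIS_URL = "https://ais.example.com:9300/jderest/orchestrator/orch_addTemperatureReading"
USERNAME = "JDE"
PASSWORD = "Password1"

def build_request(payload):
    """Build the POST with the same Basic Auth and Content-Type headers Postman sends."""
    auth = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
    return urllib.request.Request(
        AIS_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "Basic " + auth,
            "Content-Type": "application/json",
        },
        method="POST",
    )

def call_orchestration(payload):
    """Send the request - needs a live AIS server, so it is not invoked here."""
    with urllib.request.urlopen(build_request(payload)) as resp:
        return resp.status, resp.read().decode()
```

The VBA later in this post does exactly the same three things: base64 the credentials, set the two headers, POST the JSON body.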

Back to excel:

My sheet looks like this: I have a single ActiveX button and a single field (for the obfuscated password).  Wow!

My code is like this:

This is super simple and readable - that is why I did it this way.  Also, I'm no expert at VBA - so... this is the result of 1 hour of Google and some testing.

Private Sub CommandButton1_Click()
  Sheet1.Cells(11, 4).Value = "Processing"
  Sheet1.Cells(11, 5).Value = 0
  CallJDEOrchestration
End Sub

Sub CallJDEOrchestration()

  Dim URL As String
  Dim JSONString As String
  Dim objHTTP As New WinHttpRequest
  Dim Username As String
  Dim password As String
  Dim auth As String
  Username = Sheet1.Cells(1, 2).Value
  password = passwordTxtBox.Value
  auth = EncodeBase64(Username & ":" & password)
  'MsgBox auth, vbCritical, "Hello World"
  URL = ""  'set this to your AIS orchestrator endpoint
  objHTTP.Open "POST", URL, False
  objHTTP.SetRequestHeader "Authorization", "Basic " & auth
  objHTTP.SetRequestHeader "Content-Type", "application/json"
  JSONString = "{""orch_input_TimeOfReading"" : """ & Sheet1.Cells(10, 2).Value & _
  """,""orch_input_dateMeasurement"" : """ & Sheet1.Cells(9, 2).Value & _
  """,""orch_input_remarkForReading"" : """ & Sheet1.Cells(7, 2).Value & _
  """,""orch_input_Temperature"" : """ & Sheet1.Cells(8, 2).Value & _
  """,""P1204_Version"" : ""ZJDE0001"",""orch_input_szAssetNumber"" : """ & Sheet1.Cells(6, 2).Value & _
  """}"
  objHTTP.Send JSONString
  Sheet1.Cells(11, 4).Value = objHTTP.ResponseText
  Sheet1.Cells(11, 5).Value = objHTTP.Status
End Sub

Function EncodeBase64(text As String) As String
  Dim arrData() As Byte
  arrData = StrConv(text, vbFromUnicode)

  Dim objXML As MSXML2.DOMDocument
  Dim objNode As MSXML2.IXMLDOMElement

  Set objXML = New MSXML2.DOMDocument
  Set objNode = objXML.createElement("b64")

  objNode.DataType = "bin.base64"
  objNode.nodeTypedValue = arrData
  EncodeBase64 = objNode.text

  Set objNode = Nothing
  Set objXML = Nothing
End Function
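If you want to sanity-check what EncodeBase64 produces, the equivalent in Python is a one-liner (the username and password here are made-up examples):

```python
import base64

# Hypothetical credentials, for illustration only.
username, password = "JDE", "Password1"
auth = base64.b64encode(f"{username}:{password}".encode()).decode()
print("Basic " + auth)   # the value the VBA puts in the Authorization header
```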

You will need to ensure that your project has the following references enabled (Tools -> References) - the code above needs Microsoft WinHTTP Services 5.1 (for WinHttpRequest) and Microsoft XML (for MSXML2.DOMDocument):

You should then be able to change your URL, username and password (note that the field for the password is called passwordTxtBox).

This is using Basic Auth, so that needs to be enabled on the AIS server if it's going to work.

You can find a copy here – if you want to rip it apart:

You could do some pretty amazing and complex processing in JDE directly from Excel.  And... you won't have the haters saying "but you need to do that in JDE", because you actually did.


Thursday, 12 September 2019

Creating custom metrics just got a whole lot easier

The world is full of data and extracting insights from this is always a challenge.

Patterns in data can help us predict the future, there is no doubt about that.  If you can determine a predictor for poor sales or poor performance, then this might enable you to be proactive the next time those conditions occur.  This is fairly cryptic, but what if I could tell you that the sales order entry screens were being run less over the last 2 weeks, and that the average lines processed by R42565 (invoice print) was also down over the last couple of weeks?  That is a good indicator that sales are going to be down too – user behaviour is a lead indicator.  What if you could see that activity was down and talk to your staff about why it is occurring?  The same insights could be made in every module of JD Edwards.  Everyone is looking at the transactional data – I’m looking at the user behaviour.

At Fusion5 we created ERP Analytics about 10 years ago, giving our clients some really great insights into their user behaviours.  We’ve augmented this recently with UBE Analytics, which allows you to see exactly what is going on in your batch job activities.  You can see rows processed and runtime, critical for evaluating performance.

Now, the combination of these two tools allows you to create the most insightful and simple reporting tools around your ERP.  You can create reports on engagement time, trend data about performance, or nice and easy to read gauges that all of your users can consume in e1Pages!

As you can see from the above, I have defined these custom controls in Data Studio to report on very distinct values.  I’ve given the graphs custom ranges, which is really easy in Data Studio.

I can set colours [colors for my American readers], maximums and minimums for any piece of data that I have.  I can also filter the data.

In this instance, I can look at any data available from batch or interactive JDE usage.

Things that you can put onto any report or graph:
  • How long a user spent on a screen (name the screen, name the user if you want – or group of users)
  • How many rows a UBE processed
  • How often a UBE is run
  • How long a UBE took -  and compare months, weeks or days
  • How many times a version of a form has been loaded
  • How many pages loaded a day
  • Average server response time for loading forms in certain system codes – or all of them

Above is a list of the fields that are available for evaluating batch

Just some of the fields available for interactive

You get the picture: it is really easy to select the metric, define some ranges and GO!
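Behind a gauge like “average UBE runtime” the aggregation is simple.  Here is a sketch of the idea - the field layout and sample execution records are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical execution records: (ube_name, runtime_seconds, rows_processed)
runs = [
    ("R42565", 120, 5000),
    ("R42565", 180, 7500),
    ("R42800", 600, 20000),
    ("R42800", 660, 21000),
]

def average_runtime(records):
    """Average runtime per UBE - the number a gauge or trend chart would plot."""
    by_ube = defaultdict(list)
    for ube, runtime, _rows in records:
        by_ube[ube].append(runtime)
    return {ube: mean(times) for ube, times in by_ube.items()}

print(average_runtime(runs))
```

Swap the runtime column for rows processed, or group by user and form instead of UBE, and you have most of the metrics in the lists above.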

Here we can see that I’m looking at the average runtime for UBEs over the last week, and have defined ranges that are appropriate for this client.  I could further refine this for UBEs that I’m interested in, like invoice print or sales update.

Here you can see your report in JDE, using e1Pages.

Those colours are terrible – employ the classic JDE blue - #1e4a6dff

Or specific information in JDE itself…

Thursday, 5 September 2019

UBE Analytics - short demo - understand JDE batch performance better

I'm trying to do a few more short videos on how UBE Analytics works and what you can do with the reporting and dashboarding.  Here is the first of a number.

Let me know what you think.

As you are probably aware, UBE Analytics is a service that Fusion5 has created.  You are able to subscribe to this service and then extract insights from your UBE processing data.

This is really handy for comparing days or weeks of processing, or comparing regions or actual package deployments.  Super simple.

None of the history needs to be kept in JD Edwards, it can all be moved safely and securely to the cloud.

We provide an agent that runs on premise and copies the data to the cloud.  You schedule this as often as you need - our dashboards look after the rest.

Thursday, 29 August 2019

blocking / locking... potato... potato

Blocking is a funny problem and quite often one of the last things that I look for when there are JD Edwards issues.

We've had some recent problems with some serious blocking, but the reconciliation between client and server logs has made the analysis almost impossible...  What I mean is that there are often client errors with no server errors.  The web server will give up, with nothing in the enterprise server logs... until there are IPC errors because queues are too large [too many BSFNs running in a queue].  But there are lessons in this.

First, we could see that there was an increase in the number of instances of COSE#1000.
ERP Analytics shows us all the historical details - wow, that is a lot of problems:

Detailed AWS CloudWatch Insights gives us unparalleled capability to query and audit all of our logs:

Can you see what is actually being done here?  It is SOOOO amazing.  We are looking at all logs in the last X [hours|days|minutes] in all log files for 14 servers, enterprise and web… looking for the relevant client [web server] and server [app server] logs that relate to BSFN errors between the two, then showing this as a timeline.

I think that the big lesson here is adopting a consolidated approach to your logging - a SPLUNK type approach.  If you adopt some consistency, then all of the advantages of monitoring and better global interrogation are opened up to you.

What we have actually done in this instance is use CloudWatch to ingest all of the JD Edwards log files.  We are consolidating server manager, WebLogic, enterprise server, system out and /var/log into our own log streams that we can query.

An example of a fairly complex query is below:

**CloudWatch Logs Insights** 
region: ap-southeast-2 
#Here are the log files that I want to query
log-group-names: jde_wlsServerJASLogsout, jde_wlsServerJASLogs, jde_webServerLogs, jde_entServerLogs  
#looking at the last hour
start-time: -3600s 
end-time: 0s 
#fields I want to look at and display on the console
fields @timestamp  ,@message,  @logStream, @log, BSFN
#search string in the logs - this is a simple example that will only match web server logs
|filter @message like 'COSE#1000'
#Allows me to create my own fields from the output and summarise based upon these with a little bit of regex magic
|parse @message /(?<date>\d{2}\s+\S{3}\s+\d{4})\s+(?<time>\S+)\s+\[(?<errorlevel>\w+)\s*\]\s+-\s+\[(?<module>\w+)\].*BSFN:(?<BSFN>\w+)\s+user:\s*(?<user>\w+)\s+Env:(?<Env>\w+)/
| sort @timestamp asc
| limit 10000

#And the results:

|       @timestamp        |                                                                                                                 @message                                                                                                                  |                @logStream                 |              @log              |        BSFN         |    date     |     time     | errorlevel | module  |   user    |   Env    |
| 2019-08-28 06:15:19.809 | 28 Aug 2019 07:15:19,573 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@3f09721b: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user:USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:15:19,573 | WARN       | RUNTIME | AMRE001 | JPD920UK |
| 2019-08-28 06:24:31.258 | 28 Aug 2019 07:24:31,092 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@2c3526cf: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:24:31,092 | WARN       | RUNTIME | AMI001 | JPD920UK |
| 2019-08-28 06:34:21.978 | 28 Aug 2019 07:34:21,802 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@74a86b0a: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:34:21,802 | WARN       | RUNTIME | AME001 | JPD920UK |
| 2019-08-28 06:42:52.420 | 28 Aug 2019 07:42:52,371 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@12ddb7bb: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:42:52,371 | WARN       | RUNTIME | AE001 | JPD920UK |
| 2019-08-28 06:45:25.972 | 28 Aug 2019 07:45:25,747 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@256577d3: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6024) SocID:37846 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:45:25,747 | WARN       | RUNTIME | AM001 | JPD920UK |
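The same field extraction can be checked outside CloudWatch.  Here is a minimal Python version of the parse step, run against one of the sample log lines above (the pattern is a simplified sketch, not the exact Insights regex):

```python
import re

# One of the web server log lines from the results above.
line = ("28 Aug 2019 07:15:19,573 [WARN  ]  - [RUNTIME]         *ERROR* "
        "CallObject@3f09721b: COSE#1000 Request timeout: timeout after 90000ms "
        "host JDEPROD1:6017(6025) SocID:37364 PID:12066 "
        "BSFN:CommitReceiptHeader user:USER Env:JPD920UK")

PATTERN = re.compile(
    r"(?P<date>\d{2}\s+\S{3}\s+\d{4})\s+(?P<time>\S+)\s+\[(?P<errorlevel>\w+)\s*\]"
    r".*BSFN:(?P<BSFN>\w+)\s+user:\s*(?P<user>\w+)\s+Env:(?P<Env>\w+)"
)

fields = PATTERN.search(line).groupdict()
print(fields["BSFN"], fields["user"], fields["Env"])   # CommitReceiptHeader USER JPD920UK
```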

Entries like the above can mean many things: network problems, security problems...  But when you look at the server logs and see nothing... start to think about blocking.  Also look at the BSFN summary page for the web server in question; this will show you the BSFN runtimes - max, min and average.  If this shows the functions are generally very fast, then you know that you might have some locking problems.

Now, validate this at the database.  These statements will only return rows while there is locking / blocking - so have them ready.  These are also Oracle statements.

At the database:

Show me the blocking:

   select
      (select username from v$session where sid=a.sid) blocker,
      a.sid,
      ' is blocking ',
      (select username from v$session where sid=b.sid) blockee,
      b.sid
   from
      v$lock a,
      v$lock b
   where
      a.block = 1
      and b.request > 0
      and a.id1 = b.id1
      and a.id2 = b.id2;

What are they doing / blocking / locking?

   select
      c.owner,
      c.object_name,
      c.object_type,
      b.sid,
      b.serial#,
      b.status,
      b.osuser,
      b.machine
   from
      v$locked_object a,
      v$session b,
      dba_objects c
   where
      b.sid = a.session_id
      and a.object_id = c.object_id
      and b.sid = 1340;

All in one – with JDE information - machine process type and statement

   select
      c.owner,
      c.object_name,
      c.object_type,
      b.sid,
      b.serial#,
      b.status,
      b.osuser,
      b.machine,
      b.process,
      b.program,
      b.sql_id,
      REPLACE(d.SQL_TEXT,CHR(10),'') STMT
   from
      v$locked_object a,
      v$session b,
      dba_objects c,
      v$sqltext d
   where
      b.sid = a.session_id
      and a.object_id = c.object_id
      and d.address = b.sql_address
      and b.sql_hash_value = d.hash_value
      and b.sid in
      (select a.sid
       from
          v$lock a,
          v$lock b
       where
          a.block = 1
          and b.request > 0
          and a.id1 = b.id1
          and a.id2 = b.id2);

If you see a long list of results (or a single line), you might want to consider your next actions.  Blocking can be pretty normal, but can also be the result of a deadlock (surely that should be a movie title).  A deadlock occurs when two sessions each hold a lock that the other is waiting for.  Interestingly, I've seen locks last for hours without being deadlocks.  Another very interesting problem that does occur is that a client (JDBC) process can be blocked by a server process (jdenet_k).  I've seen this quite a bit, and have even programmed enough bad code to create it myself - ha!

A thing to remember is that if a jdenet_k process has been blocking for > 5 minutes, there is probably something wrong (the client will have given up - the timeout is probably 90 seconds)... so the BSFN is still running.  The actual human that tried the first time is probably trying again in another JAS session...  Things could be escalating...  Generally I think you can kill those sessions.  UBEs are a little different - leave them alone.

If you see java doing the locking, you need to make the call.  Try to get back to the JDE session that is doing the damage and see what the user is/was doing [they'll generally deny it].  It can be little things that cause big locks.

Monday, 26 August 2019


Tip 10: Continuous improvement

It’s a theme that is not going to go away.  If you have implemented all of the shiny new toys from JD Edwards, then you need to show ROI. 

This is a theme that we are going to hear a lot in the coming years.  Even the way Oracle is releasing JD Edwards functionality follows the ideals of continuous delivery by committing to release 9.2 until 2030.  We are getting improvements continuously, not in major releases. 

I think there are 3 ways that you can make a difference to your business in relation to adopting continuous improvements with JD Edwards.

Finding and implementing RPA (Robotic Process Automation) Opportunities

There is so much opportunity here, and all of the tools are at your fingertips.  You can use ERP Analytics to find processes (applications / sequences) that are run frequently.  Use this data to go back to the business and observe what the end users are doing.  For instance, if you see that P42101 is being run 2,000 times a day – look for opportunities to improve this process.  This could be EDI, this could be spreadsheet macros that call an orchestration.  What’s an orchestration I hear you ask? 

Orchestration is the ability to turn any piece of JD Edwards functionality into an API.  An API that is easy to call and can be authenticated with the user’s username and password.  So, exposing functionality to an Excel Macro – would be very easy.  You could write an orchestration to enter a sales order (in my example) and then a smart macro to call the orchestration with the data on the spreadsheet.  It could prompt for a username and password.  If your users are being sent orders in spreadsheets – you may have just increased their productivity and reduced a significant amount of human errors.

RPA implementation can be for simple or complex processes.  Look for repetition and eliminate it, as an ex-programmer – if you see repetition in code – there are inefficiencies in that code.  ERP Analytics will then allow you to measure the success of your RPA, as the usage of the applications should go down with good RPA implementation.
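To make that measurement concrete, here is a small sketch of the before/after comparison you would want the usage data to show - the daily launch counts are invented for illustration:

```python
from statistics import mean

# Hypothetical daily launch counts for P42101, before and after automating order entry.
before = [2000, 2100, 1950, 2050, 2000]
after = [400, 380, 420, 390, 410]

def reduction_pct(before_counts, after_counts):
    """Percentage drop in average daily launches - a proxy for RPA success."""
    b, a = mean(before_counts), mean(after_counts)
    return round(100 * (b - a) / b, 1)

print(reduction_pct(before, after), "% fewer manual launches")
```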

Orchestration is free to implement and can make a huge difference to mundane tasks.

Continually optimise your architecture

This may be more relevant for public cloud implementations, but let’s be honest – most are public cloud implementations.  You must continually drive for reduced hosting costs for all of the JD Edwards assets.  Quite often this is difficult, unless you have architected your solution for the cloud, turned the monolithic JD Edwards into an elastic cloud tenant.  This can be done.
Fusion5 has created what we think is a world first elastic JD Edwards cloud formation for AWS.  This architecture has the ability to expand and contract with load and demand.  We are able to define the rules to create new web servers and new batch servers and then retire them when they are not needed.  This allows our clients to have a very efficient architecture and if they feel that they are paying too much, we can reduce the size and number of machines accordingly.

Figure 14: Choosing between re-platforming and rehosting can be difficult, a re-platform is going to provide payback over time

A good illustration of the options available to you when you migrate is above.  A lift and shift [rehost] is a simple project – but it will not allow you to get true cloud benefits from native constructs (cheaper storage, elasticity or additional security).  If you do a re-platform (as I recommend), you can reshape JD Edwards to be a much more flexible cloud tenant.
If you did a rehost, I’d guess you might implement about 8 cloud constructs (EC2, EBS, ALB, multiple AZ, EFS if you are lucky), whereas if you were re-platforming you might use RDS, EC2, EFS, ALB, ASG, CloudWatch, Step Functions, Route53, S3, Launch Templates, Target Groups and more!
It is much easier to get savings out of a re-platformed architecture.
At a number of sites I’ve seen savings of more than 50% month on month when we work hard at cloud cost reduction.

Continue to update JD Edwards

Patches for JD Edwards are now continuous, so your adoption should also be continuous.  I recommend making a plan, with a schedule of when you are going to take patches, when you are going to test patches and when you are going to put them into prod.  Start simple: choose twice a year and then work backwards for how long you are going to test, how long for retrofit, etc.
If you’ve been lucky enough to re-platform (as above) then you are going to have some distinct advantages when it comes to deployment.  That is that changes can be deployed and tested much more rapidly and actually, continuously.  If you have a flexible cloud implementation you could build and deploy an alternate package for production and ease this out into the user community.  Our AWS cloud formation allows us to deploy packages without outages, we can do this on a schedule and therefore allow environments to consume change at their own pace.  If there is an issue, we can back it out immediately and fix it.
Figure 15:  Sample continuous deployment schedule, simplicity is important.
A flexible architecture allows you to be more aggressive with your consumption of change and keep more “up to date” with the latest code from Oracle.

Sunday, 25 August 2019


Tip 9: Monitor relentlessly

Tips 9 and 10 are closely related, but this tip is all about feedback.  If you know things are going well, that’s great.  If you know things are going poorly, that is great too – because you can fix it.  The worst case scenario is that things are going pear-shaped in the background and you only hear about it when a user raises a case.  You need to be KPI’d on finding errors before your users do – period.

How can you find errors before your users?  Here are a couple of tricks that Fusion5 implements for our clients:
ERP Analytics
Sometimes referred to as the black box for your ERP, we use this to monitor performance and usage of JD Edwards. It records every page load, by every user, every minute of every day.  This information is incredibly powerful for benchmarking and comparing ANY change in your entire enterprise.
UBE Analytics
Having access to the runtime, rows processed, user and server information for every batch job allows us to continually monitor the two critical tiers of JD Edwards.  Reading this alongside ERP Analytics gives more information on where performance problems might be, and another point of data to compare with.
Log monitoring
Fusion5 has a very advanced cloud formation in AWS which utilises CloudWatch to monitor all log files, UBEs and JD Edwards connections.  This enables us to graph and monitor user connections and concurrent UBEs, and to search ANY logfile for ANY error – ever.  It is a single console for all logs across JD Edwards.  This approach and consistency can be used with many different installation types, not just limited to AWS.
DB growth monitoring
Keeping an eye on table-by-table database growth is critical for understanding whether a process has gone rogue.  It’s also critical for maintaining consistent database performance.  Regular rowcount and size reporting will ensure that you can deliver an acceptable service level to your users.  Keeping your data size under control is important for costs and restoration times.
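A growth check like this can be sketched very simply.  The table names and rowcounts below are invented for illustration - in practice the counts would come from your regular rowcount reporting:

```python
# Hypothetical rowcount snapshots, e.g. taken a week apart.
last_week = {"F4211": 1_200_000, "F0911": 8_000_000, "F42119": 3_000_000}
this_week = {"F4211": 1_380_000, "F0911": 8_100_000, "F42119": 3_020_000}

def growth_alerts(old, new, threshold_pct=10.0):
    """Flag tables whose rowcount grew faster than the threshold - rogue-process candidates."""
    alerts = {}
    for table, count in new.items():
        previous = old.get(table)
        if previous:
            pct = 100 * (count - previous) / previous
            if pct > threshold_pct:
                alerts[table] = round(pct, 1)
    return alerts

print(growth_alerts(last_week, this_week))   # F4211 grew 15% in a week - worth a look
```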

Figure 12: Sample custom dashboard showing metrics that are 100% relevant for JD Edwards

Figure 13: AWS log insights provides intelligence that would previously be impossible to find.  This shows a graphical representation of errors and type of errors over 8 separate web servers.

Friday, 23 August 2019


Tip 8: Security

Security sometimes takes a back seat, but please, please – don’t let it.  Without exaggeration it’s about 1,000,000 times easier to make security changes before a go-live than after.

Simple things need to be completed and tested before go-live:
  • Complete production security model in place, including row security
  • All table auditing enabled (if you do this)
  • Complex JDE password for database and for JDE
  • Do not use the JDE account as the system account for your users.  Please do not do this; create “jdeuser” or something much better that will not get locked out
  • Check that the password expiry policy for Oracle is not default, or else your system accounts will start locking out
  • Change your default node configuration, do NOT leave this standard.  This is one of the largest security holes in the suite.
  • LDAP or SSO is critical.  Controlling user access is easiest if someone else is doing it (I find).  So if the desktop team is decommissioning users (oh, and changing passwords), this is a big bonus and will save time and money.  The Fusion5 SSO offering is really cool too, especially if you want to use Azure to MFA people under certain criteria – all done by someone else!
  • Make sure that your data is encrypted at rest and in transit
  • Get your security groups tight and your firewalls enabled
  • Default access should be no access
  • Adopt the most stringent security posture your business can afford
Here is an interesting tip: quite often row security can be good for performance.  Why?  Because it ensures that there is a where clause on what is generally an indexed field.  If you are row-securing MCU or CO, then the where clause means less IO and hopefully a quicker result!