Wednesday, 25 May 2016

better script to reconcile F00165 to media Objects

This script has the bones of everything you need to see how many of your media object records actually have physical files.  You could augment this with curl and a bunch of smarts to test URLs too, but my script is simple for the time being.
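A URL check could slot straight into that "curl and a bunch of smarts" idea.  This is a rough sketch only, not part of the script below: `url_exists` and the demo paths are mine, and the `file://` demo is just there to keep the sketch self-contained.

```shell
# Succeed only if the target of a URL actually exists (HEAD request, no body)
url_exists() { curl -sf --head --max-time 5 "$1" > /dev/null; }

# Self-contained demo: one URL that exists, one that does not
echo ok > /tmp/mo-demo.txt
if url_exists "file:///tmp/mo-demo.txt"; then echo "Y"; else echo "N"; fi
if url_exists "file:///tmp/no-such-file"; then echo "Y"; else echo "N"; fi
```

The same Y/N result could then be written into FILEEXISTS for URL-type (GDGTMOTYPE 5) records.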

Note that you need to create your reconciliation table:

drop table jde.f00165srm;
create table jde.f00165srm as select GDOBNM, GDTXKY, GDMOSEQN, '     ' AS FILEEXISTS from crpdta.f00165 where 1=0 ;

This is the vbs script – just create and paste.  You’ll need to create an ODBC data source with the name TVLNE8 (in this example).



'Build a date-stamped log file name
dateStamp = Now
fileSuffix = Left(FormatDateTime(dateStamp, 1), 6) & Day(dateStamp) & MonthName(Month(dateStamp)) & Hour(dateStamp)
Const ForAppending = 8
logfile = "h:\data\myr" & fileSuffix & ".txt"
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objTextFile = objFSO.OpenTextFile(logfile, ForAppending, True)
Dim fso
Set fso = CreateObject("Scripting.FileSystemObject")

Dim Oracon
Dim recset
Dim cmd
Set cmd = WScript.CreateObject("ADODB.Command")
Set cmdInsert = WScript.CreateObject("ADODB.Command")
Set Oracon = WScript.CreateObject("ADODB.Connection")

Oracon.ConnectionString = "DSN=TVLNE8;" & _
"User ID=jde;" & _
"Password=xxxx;"  'placeholder - use your JDE schema password
Oracon.Open

'moqueue resolves a media object queue name to its path.  The real
'queue-to-path mappings live in F98MOQUE (P98MOQUE) - hard code or
'look up the ones for your site.
Function moqueue(quName)
  Select Case UCase(Trim(quName))
    Case "OLEQUE"
      moqueue = "D:\MEDIAOBJ\Oleque"
    Case Else
      moqueue = "D:\MEDIAOBJ\" & Trim(quName)
  End Select
End Function

Set cmd.ActiveConnection = Oracon
Set cmdInsert.ActiveConnection = Oracon
cmd.CommandText = "Select gdobnm, gdtxky, gdmoseqn, gdgtfilenm, gdgtmotype, gdgtitnm, gdqunam from crpdta.F00165 where gdgtmotype in ('5','2','1') and gdobnm like 'GT%'"
'cmd.CommandText = "Select gdobnm, gdtxky, gdgtfilenm, gdgtmotype, gdgtitnm, gdqunam from proddta.F00165_91 where gdgtmotype in ('1')"
Set recset = cmd.Execute

While recset.EOF = False
  'Generally this would be in a function, but it needs to be fast!
  'Note that types 2 and 1 could share a function, as both are queue based,
  'but I'm doing an upgrade and it's complicated - hence the hard-coded dir.
  If recset("GDGTMOTYPE") = "2" Then
    filenametotest = "D:\MEDIAOBJ\Oleque\" & Trim(recset("GDGTFILENM"))
  ElseIf recset("GDGTMOTYPE") = "1" Then
    filenametotest = moqueue(recset("GDQUNAM")) & "\" & Trim(recset("GDGTFILENM"))
  Else
    filenametotest = ""  'type 5 (URL) - not checked here, always reports N
  End If

  If fso.FileExists(filenametotest) Then
    fileexistsstatus = "Y"
  Else
    fileexistsstatus = "N"
    objTextFile.WriteLine "Missing TP(" & recset("GDGTMOTYPE") & "): " & filenametotest
  End If

  'Double up any embedded quotes so an odd text key cannot break the insert
  cmdInsert.CommandText = "INSERT INTO JDE.f00165srm (GDOBNM, GDTXKY, GDMOSEQN, FILEEXISTS) values ('" & _
  recset("GDOBNM") & "', '" & _
  Replace(recset("GDTXKY"), "'", "''") & "', '" & _
  recset("GDMOSEQN") & "', '" & _
  fileexistsstatus & "')"
  'wscript.echo "SQL:" & cmdInsert.CommandText
  Set insertrecset = cmdInsert.Execute

  recset.MoveNext
Wend

objTextFile.Close
Oracon.Close
Set recset = Nothing
Set Oracon = Nothing

Check the progress of the script with:

select * from jde.f00165srm;
select count(*) from jde.f00165srm;

Now look at the results.  Note that this also demonstrates a handy left outer join with multiple join columns.

select t1.gdtxky, t1.gdobnm, t1.fileexists, t2.gdgtfilenm from jde.f00165srm t1 left outer join crpdta.f00165 t2 on t1.gdobnm = t2.gdobnm and t1.gdtxky = t2.gdtxky and t1.gdmoseqn = t2.gdmoseqn;
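A quick rollup over the reconciliation table gives you the headline numbers; this is just a sketch against the same table created above:

```sql
-- How many media object records have a physical file, and how many do not
select fileexists, count(*)
from jde.f00165srm
group by fileexists;
```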

Mapping central objects files to spec files

Ahh, my package build report has an error on FDASPEC 0 records…. 


I can never remember what file FDA spec is… Actually, I can.  I open UTB, I open spec files.


Then choose a pathcode and see the relationships – great!


I then used a web based OCR program to convert this image to words, so we can all search on this information and find it quicker next time!  Nice!

So now we know the relationship of spec files to central objects tables.

Friday, 20 May 2016

using checksum to verify files before spending lots of time on them…

I had this issue where I was trying to migrate a client to the cloud and I kept getting problems with the zip file not extracting properly, with various errors.  I started to trace the file’s history: it gets created on the client’s deployment server, then ftp’d to our FTP server, and then downloaded from the FTP server.

This technique is really handy if you are going to move huge bits of data, and want to know that the file is sane before you start to use it in anger.

download windows fciv 

I check my client copy and see:

// File Checksum Integrity Verifier version 2.05.

Okay, now check the FTP server copy – this is a Linux box, so a different utility, but one that gives the same information – phew!

md5sum is part of coreutils, so it’s already there on most Linux distributions:

myriad@mywebftp:/var/www/html/upload/files$ md5sum

That shows me that what I’m downloading (the destination) is exactly what is on the FTP server – the checksums are the same.
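The comparison itself is just two checksum commands; a hedged sketch, where the file names are placeholders for the real package zip:

```shell
# Windows deployment server (fciv on the PATH):
#   fciv -md5 pkg.zip
# Linux FTP server - md5sum is part of coreutils, nothing to install:
printf 'demo payload' > /tmp/pkg.zip    # stand-in file so the sketch runs
md5sum /tmp/pkg.zip
```

If the two hex strings differ, re-transfer the file before wasting hours on a corrupt extract.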

Then I log into the source server

// File Checksum Integrity Verifier version 2.05.

TOTALLY different (well, from an md5 point of view) – what does this mean? 

I’ve moved from using the FTP server to using S3 buckets directly in AWS.  I was not going to do this because of problems with access from the client site, but then noticed that I could install the AWS command line tools and run them without any problems.  Using AWS S3, I have no problems with the zip files or the extraction.
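The S3 move itself is a couple of commands once the CLI is configured; the bucket name and file names below are placeholders.

```shell
# Upload from the deployment server, then download at the target site.
# Requires the AWS CLI to be installed and configured (aws configure).
if command -v aws > /dev/null; then
  aws s3 cp pkg.zip s3://my-migration-bucket/pkg.zip
  aws s3 cp s3://my-migration-bucket/pkg.zip ./pkg.zip
fi
```

The CLI also does its own integrity checking on transfer, which is likely part of why the corruption problems disappeared.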

Note that I also get VERY good speed on the up and down, because there are some fat pipes going into AWS – about 24GB downloaded in 15 minutes.

JD Edwards performance, 4 ways of measuring performance for a complete picture

This article explains four different methods you can use to measure (and therefore improve) performance for your JD Edwards implementation.  Regular analysis of these metrics will give you terrific insight into your ERP performance, allow you to improve it, and highlight when things are going wrong.

Subjective vs Objective

You can spend a lot of time and effort on load testing, and there are gains that you can make – generally incremental and iterative in nature – to produce a faster ERP.  You can also, however, make some great improvements in stability with the use of load testing, as you can push the system up to a threshold at which it might exhibit certain characteristics.

By their nature, some problems only occur every week or month in your system – but load testing can force them to pop up a little more frequently, and also allows you to log these situations much better.

The title of this blog is about comparing load testing data against your own data to get a subjective view of performance.  This means that you can compare with your own internal benchmark and be able to measure any improvements (or slow-downs) caused by changes in your architecture.  This provides cold hard data when users are ringing up and telling you that the system is slow.  But your system could be considerably slower than the system in the next city or suburb – why, and does this matter?

If your users are happy and your managers are happy – I guess it does not matter.  If you are using 10% of your server’s capacity day in and day out – well, personally, I think that matters!  I really do not think that you need to save hardware for a rainy day – especially things like memory – JD Edwards will generally have a high-water mark that will not change, day in and day out.  There is no need to have 32GB of RAM on the logic server if your kernels and UBEs will only ever consume 8.  If you feel that your system is not quick enough, then you need to compare it with industry standards to know how you are performing.

1.  Performance Workbench - free

I developed the concept of a performance benchmark to try and address the objective view of performance – this is where you can compare your system’s performance against industry norms.

You can read about it and download it from the above link.  It takes a bunch of accurate timestamps for numerous I/O and computational metrics in JD Edwards


Download the par file and install the code from it on your system.  Create the tables in the project.  Run P55PERFT and you get the screen above.  Click the big button and the system will go away and start testing performance.  It will sit there for quite a while running inserts, updates, deletes, BSFNs etc.

Note that there are some fields in the above that allow you to specify a reply email address and some notes for ME.  If you choose to do this, then I get a copy of the results and can compare them with industry standards (I maintain a master list of averages).

So this can give you an objective and subjective view of performance at your JD Edwards site.


You can see the above (it’s a bit dodgy for this run) – it graphs the current numbers, but allows analysis (comparison) on the results too.



So from the analysis above, you can choose the test that you want to see the history for – and the history (in my case 37 previous results) is graphed, with the latest at the right of the graph (as you are looking at it).

Of course the above is a nice litmus test, but if you really want to deep dive into performance, then you need to start using Oracle Application Testing Suite (OATS).  We own and lease our copy of OATS to clients.  We have various performance offerings that allow us to remotely load test your system and provide you with objective and subjective analysis.  We’ve been lucky enough to load test JD Edwards on ODAs, Exadata, AS/400s, Windows, large UNIX and Linux.  The results between all of the platforms are very interesting.

I have to say that if you are preparing for a go-live, load testing is a must-do project item.  Every time we’ve performed this exercise we’ve made considerable improvements to performance and stability.  It also allows you to fine-tune your hardware allocations.  We do this load testing for all AWS migrations and implementations that we’ve been working on.  This is a perfect way to compare the cloud and understand the amount of hardware you need to put in the cloud to get “like for like” performance with on-prem.  With a known hardware budget and proper elasticity – you are on your way to saving money in the cloud.

2.  BSFN performance metrics - free

Remember that there are also some other great benchmarking tools for BSFN performance -


You can get this from Server Manager, just navigate to a web server and then choose call object stats:


This is pretty easy to curl and script so that you can have an interactive dashboard on your BSFN performance.  This can also tell you when things are going wrong.  Ensure that you create a benchmark so that you can compare.
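A sketch of what that curl scripting could look like – the console URL, login form and page path below are all placeholders; check what your own Server Manager console actually uses before relying on any of them:

```shell
# Poll the Server Manager console for call object stats on a schedule.
SM="http://smserver:8999/manage"   # hypothetical console address
JAR=/tmp/sm-cookies.txt
# 1) Log in once and keep the session cookie, e.g.
#      curl -s -c "$JAR" -d "username=jde_admin&password=XXXX" "$SM/home"
# 2) Pull the stats page and store it for your dashboard, e.g.
#      curl -s -b "$JAR" "$SM/target?action=callObjectStats" > stats.html
echo "would poll $SM" | tee /tmp/sm-poll.txt
```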

3.  Batch (UBE) performance analysis - free

Another handy place to go, remember to compare your batch performance regularly:

Using techniques described in this article you can determine whether today’s UBEs were slower than yesterday’s…  or last month’s or last year’s.

4.  Google Analytics for JDE – subscription cost for managed solution (or free)

Finally – my favourite – Google analytics for a complete ERP performance review.


Above is a two-week comparison of over 1,000,000 interactions with an ERP!  It’s analysed the average performance for the previous two weeks and overlaid this information with the metrics from the current two weeks.  You can choose whatever ranges you want to compare with.

What does this data tell you?  Well, it seems that there was a public holiday on April 25 – ANZAC day in Australia!

We could drill down on this data and tell you if sales order entry was slower on one day compared with another, one week or one month compared to another. 

Who’s using what browser in the last month?


What apps are being used, how many times they are used, how long they take to load and how much time is spent on pages


Usage and speed by hour of the day


IT never ends!

impact of 80 JD Edwards users on a windows system

I’m running a fairly intensive interactive load and some batch activity – 80 concurrent users across 2 web servers and 2 enterprise servers.

Everything is running okay.  I’d recommend more CPU for the enterprise servers; leave the RAM at 8GB for this.  You can see what the boxes are doing at the web tier and the enterprise server tier below.  JDE enterprise servers are generally very light on RAM, especially when you tune your kernels appropriately.  CPU is important here, as is network latency.

Enterprise Server


Note that there are 4 cores viewable in resource monitor.




Another chart that is really handy, showing the amount of disk and also network traffic that is generated from this exercise. 


Server 2 of my pigeon pair is also loaded up the same way.  Note that these servers have a single CPU and 8GB of RAM.



Note that there are 4 cores viewable in resource monitor.



Thursday, 19 May 2016

JD Edwards and ORA-03113 again

Are you running an Oracle database and JD Edwards?  Do you ever get seemingly random ORA-03113 errors?  When they start, they can often go downhill fast.  I’ve blogged about this for a while – this error does occur on a 12c database too.  Note also that you need to take a client patch, as it’s OCI that seems to have the issue.

I come across this all the time at big sites and the fix is great and easy.

2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932000    dbperfrq.c471

2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932001    dbperfrq.c477
    OCI0000179 - Error - ORA-03113: end-of-file on communication channel
Process ID: 115202
Session ID: 414 Serial number: 19786
2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932002    JDB_DRVM.C1005
    JDB9900401 - Failed to execute db request

2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932003    JTP_CM.C1344
    JDB9900255 - Database connection to F0150 (Business Data - UAT) has been lost.

2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932004    JTP_CM.C1298
    JDB9900256 - Database connection to (Business Data - UAT) has been re-established.

Note that I’m currently fixing this at a site using Exadata – same problem, same everything!!

All windows this time, but still getting ORA-03113’s when I load up the system with OATS (Oracle Application Testing Suite).

Remember to NEVER trust windows find:

D:\JDEdwards\E910\log>findstr "ORA-03" *.log
jde_2508.log:   OCI0000179 - Error - ORA-03113: end-of-file on communication channel
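One findstr trap worth knowing: a space in the pattern acts as an OR of words (findstr "ORA error" matches lines containing either word), so quote literals with /C: and add /S to recurse.  A sketch, with a grep equivalent demonstrated on a throwaway log (the file name and contents below are copied from the output above purely for the demo):

```shell
# Windows: search all logs recursively for the literal string:
#   findstr /S /I /C:"ORA-03113" *.log
# grep equivalent, demonstrated against a throwaway copy of the log:
mkdir -p /tmp/jdelogs
printf 'OCI0000179 - Error - ORA-03113: end-of-file on communication channel\n' \
  > /tmp/jdelogs/jde_2508.log
grep -rn "ORA-03113" /tmp/jdelogs
```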


Tuesday, 17 May 2016

When the JDE interface is not intuitive enough

What do you mean?  This has never occurred.  Surely, everyone at any “JDE101” course in the world has thought, this is the most intuitive software that has ever been created.  So easy to use… No…  Just me…

We’ve had a situation recently where users (properly licensed users) for HS&E just want to enter their issues using a simple web form.  They don’t want to authenticate, they don’t want to navigate – they just want to type and upload – fair enough really.  I think there are about 1,000 people that can enter these incidents – so best we make it easy, lower the cost of training, and also have a layer of abstraction from the ERP so that tools releases and the like will not confuse anyone!

So, the webform takes shape:


Nice and easy to use, easy to navigate.

Simply adds incidents into JD Edwards using AIS – yes another use for AIS.

This solution creates a cache every X minutes of all of the lookup fields from various JD Edwards tables.  This is done in batch in the background, so JDE could be down and this solution would continue to run.  If JDE is not available for the final “submit”, then the JSON payload is saved off for later execution.
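The save-for-later behaviour is easy to sketch in a few lines.  Everything here is illustrative – the spool path, the payload shape and the `submit_to_ais` stub all stand in for the real AIS formservice call:

```shell
# If the AIS submit fails (JDE down), spool the JSON payload for later replay.
SPOOL=/tmp/ais-spool
mkdir -p "$SPOOL"
PAYLOAD='{"formName":"P54HS00_W54HS00A","formActions":[]}'   # made-up form name
# submit_to_ais stands in for the real POST to the AIS formservice, e.g.
#   curl -sf -H "Content-Type: application/json" -d "$PAYLOAD" "$AIS_URL"
submit_to_ais() { return 1; }   # simulate JDE being unavailable
if ! submit_to_ais; then
  echo "$PAYLOAD" > "$SPOOL/incident-$$.json"   # replay later when JDE is back
fi
ls "$SPOOL"
```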

Pretty neat hey?  This type of thing could be on a tablet and eventually made available to the General Public in an offline format and batch uploaded to JDE.

This is to demonstrate the power of “Web Forms” and how these can be used to get data into your ERP.

One really nice thing about this is that it’s completely mobile native and tablet native.  The code knows the resolution of the device and can render the screen based upon this.


As you can see from the above, the forms are also able to use the devices location and also camera functionality to facilitate the data entry process.

Mobile native, intuitive data entry to JD Edwards!  Thanks AIS, thanks Myriad Mobile developers!

Sunday, 8 May 2016

integrating UPK and OATs–not what I expected!

I’ve been dreaming about this process for a long time. 

Create detailed training material.  Use this for your automated regression testing.  Surely if you run through the scenario in UPK, it could publish the steps in a format that would work with OATS.  This script could then be run as an automated functional test in OATS.  This would be really nice.  So I spent some time looking into this and getting it working.

First, record something in UPK.  I chose a basic Address book navigation process:


A couple of important items: if you fill in the expected results manually, this will help after you import into OATS – all will be revealed.


So my scenario is recorded.  I guess that this is pretty handy for documentation.  I have some different recommendations for you.  Publish this as a test document and also a test case. 


Specifically choose Oracle Application Testing Suite – note that I cannot choose a file format.  Because the Excel on my machine was 2013, it created .xlsx – this could not be opened by OTM.  So I needed to open the file and save as .xls (not just rename it).


Okay, so now you’ve published your content – ready to import into OTM and run an automated test (NO – you will not get an automated test…  wait to see what you actually import).  To be honest, open the Excel file and you’ll see the extent of what you are importing: it’s just a list of steps – nothing else.  No automation, no nothing.  A very basic list of steps for manual regression testing.  Wow!

In OTM go to menu Project -> Import data – choose your antiquated .xls file, then upload.

Select test type – Manual test


Then use the automatch function




An awesome list of steps – but look at some of them (because of bad page titles); this is not going to help you one little bit.

You can then run the test:


What I do however think is much better, is using the published test document to then record an automated functional test that does the same thing.

Thursday, 5 May 2016

auto scaling group in AWS

Another cool exercise while doing the AWS training is the formation of an auto-scaling group.  Despite the fact that the exercise was pretty trivial (in terms of workload), it’s amazing to stand up 5 m4.10xlarges to run a “stress” test in about 15 minutes.

I was able to throw 200 CPUs at this and got the following graph:


This is 200 CPUs and 5x160GB of RAM – 800GB of RAM – for a demo…  It’s totally amazing to be able to execute this workload with less than 15 minutes of config.  So I’ve got 5 servers running my CPU-intense workload in < 15 minutes.

My autoscale group worked like a charm – I had it spin up another instance when the CPU got over 60%, which is basically all of the time with my command:

stress --cpu 40 --io 8 --vm 6 --hdd 8 -t 3600

So my ELB is taking on the new compute each time the CPU threshold is reached:


My launch configuration is here:


As you see, I’m not being nice at all.

Then my auto scaling group is doing the rest:



Note that my history is a little “spotty”, as the limits of my account mean I can only run 5 x m4.10xlarge machines.


I’m going to get this working with web server and ent server pairs.  I’m also going to look at the internal LB usage from web server to ent server.  With the appropriate affinity, I think I can get all of JDE to scale up and down.  Scaling up for batch in the PM is going to be easy too.  I look forward to seeing if the M4’s are much quicker than the M3’s for the ERP payload.

This flexibility is unparalleled in the physical world – of course, a workload like this is hard to conceive for ERP, but it’s incredible.

AWS architect course

I’ve been doing more and more with AWS and I’ll begin to blog more and more about this – it really must be the future of compute, and the ERP workload is perfect for it.  Designing JD Edwards in this elastic environment is something that I’m going to complete over the next 6 months.  I plan on having AMIs (containers, VPCs, ELBs, EC2, S3, AZs) that will deliver a completely elastic environment for JD Edwards.  I plan to spin up batch and interactive capacity on demand and also contract it when required.  This is going to create a cost-effective way of running JD Edwards.  Remember!  Friends don’t let friends buy hardware anymore (unless there is a screen attached to the purchase).

This is more of an interest post – the AWS training is excellent.  They give you credentials and you’re able to spin up EC2 instances using your training account.  You’ll see that I was supposed to create a micro instance, but I decided to create a monster and see if the alarm bells went off.

i2.8xlarge – this is 32 cores and 256GB of memory, not 1 core and 1GB of memory.

I then decided to download and install stress, via yum and then run a 1 hour stress test of the 32 cores, using the command below:

stress --cpu 32 --timeout 6099


As you can see from above, the machine is cooking. 

But can you believe how simple this was?  I stood up a 32-way machine, installed software (via my internet gateway) and loaded it up with my choice of workload in 5 minutes or less.  This is the entire procurement process in 5 minutes – plus it’s all on the training account budget.

I do hope this workload is fairly anonymous…