Wednesday 25 May 2016

better script to reconcile F00165 to media objects

This script has the bones of everything you need to see how many of your media object records actually have physical files.  You could augment this with curl and a bunch of smarts to test URLs too, but my script is simple for the time being.

Note that you need to create your reconciliation table first – the where 1=0 clones the key columns of F00165 with no rows, and the string literal adds a CHAR column to hold the result of each file check:

drop table jde.f00165srm;
create table jde.f00165srm as select GDOBNM, GDTXKY, GDMOSEQN, '     ' AS FILEEXISTS from crpdta.f00165 where 1=0 ;

This is the VBScript – just create a .vbs file and paste the code below.  You’ll need to create an ODBC data source with the name TVLNE8 (in this example).

image

Script:

dateStamp = Now  'this assignment was missing from the original paste
fileSuffix = Left(FormatDateTime(dateStamp, 1), 6) & Day(dateStamp) & MonthName(Month(dateStamp)) & Hour(dateStamp)
Const ForAppending = 8
logfile = "h:\data\myr" & fileSuffix & ".txt"
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objTextFile = objFSO.OpenTextFile(logfile, ForAppending, True)
Dim fso
Set fso = CreateObject("Scripting.FileSystemObject")

Dim Oracon
Dim recset
Set recset = WScript.CreateObject("ADODB.Recordset")
Set Insertresults = WScript.CreateObject("ADODB.Recordset")
Set f98moquerecset = WScript.CreateObject("ADODB.Recordset")
Dim cmd
Set cmd = WScript.CreateObject("ADODB.Command")
Set cmdInsert = WScript.CreateObject("ADODB.Command")
Set cmdf98moque = WScript.CreateObject("ADODB.Command")
Set Oracon = WScript.CreateObject("ADODB.Connection")

Oracon.ConnectionString = "DSN=TVLNE8;" & _
"User ID=jde;" & _
"Password=jde;"

Oracon.Open
Set cmd.ActiveConnection = Oracon
Set cmdInsert.ActiveConnection = Oracon
cmd.CommandText = "Select gdobnm, gdtxky, gdmoseqn, gdgtfilenm, gdgtmotype, gdgtitnm, gdqunam from crpdta.F00165 where gdgtmotype in ('5','2','1') and gdobnm like 'GT%'"
'cmd.CommandText = "Select gdobnm, gdtxky, gdgtfilenm, gdgtmotype, gdgtitnm, gdqunam from proddta.F00165_91 where gdgtmotype in ('1')"
Set recset = cmd.Execute

while (recset.EOF=false)
'Generally this would be in a function, but it needs to be fast!!
'Note that types 2 and 1 could share a function, as both are based on queues,
'but I'm doing an upgrade and it's complicated... this hard-coded DIR
'does not really need to be there
if recset("GDGTMOTYPE") = "2" then
  filenametotest = "D:\MEDIAOBJ\Oleque\" & trim(recset("GDGTFILENM"))
elseif recset("GDGTMOTYPE") = "1" then
  filenametotest = moqueue(trim(recset("GDQUNAM"))) & "\" & trim(recset("GDGTFILENM"))
  'wscript.echo "queue (" & trim(recset("GDQUNAM")) & ") path (" & moqueue(trim(recset("GDQUNAM"))) & ")"
else
  filenametotest = trim(recset("GDGTFILENM"))
end if

If (fso.FileExists(filenametotest)) Then
  'WScript.Echo("File exists! TP(" & recset("GDGTMOTYPE") &"): " & filenametotest)
  fileexistsstatus="YES"
Else
  fileexistsstatus="NO"
  'WScript.Echo("File does not exist! TP(" & recset("GDGTMOTYPE") &"): " & filenametotest)
End If

cmdInsert.CommandText = "INSERT INTO JDE.F00165SRM (GDOBNM, GDTXKY, GDMOSEQN, FILEEXISTS) values ('" & _
recset("GDOBNM") & "', '" & _
Replace(recset("GDTXKY"), "'", "''") & "', '" & _
recset("GDMOSEQN") & "', '" & _
fileexistsstatus & "')"  'double up any quotes in the text key so the insert doesn't break
'wscript.echo "SQL:" & cmdInsert.CommandTExt
Set insertrecset = cmdInsert.Execute

recset.MoveNext
'wscript.echo recset("gdgtfilenm")
'tuple=" "
wend

set recset = nothing
oracon.close
set oracon = nothing
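One thing the paste above doesn’t show is the moqueue() function that resolves a queue name to its path.  Here’s a minimal sketch that caches the mappings from F98MOQUE on first use – the table owner and column names are assumptions from memory, so check them in UTB before you run this:

dim queuecache
function moqueue(queuename)
  if not isobject(queuecache) then
    set queuecache = createobject("Scripting.Dictionary")
    set cmdf98moque.ActiveConnection = Oracon
    'assumed owner and column names - verify against your own F98MOQUE
    cmdf98moque.CommandText = "select qmqunam, qmqupath from sy910.f98moque"
    set f98moquerecset = cmdf98moque.Execute
    while (f98moquerecset.EOF = false)
      queuecache.Add ucase(trim(f98moquerecset("QMQUNAM"))), trim(f98moquerecset("QMQUPATH"))
      f98moquerecset.MoveNext
    wend
  end if
  if queuecache.Exists(ucase(queuename)) then
    moqueue = queuecache(ucase(queuename))
  else
    moqueue = "D:\MEDIAOBJ"  'fallback default - adjust for your site
  end if
end function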

Check the progress of the script with:

select * from jde.f00165srm;
select count(*) from jde.f00165srm;

Now look at the results.  Note that this also demonstrates a handy left outer join on multiple columns.

select t1.gdtxky, t1.gdobnm, t1.fileexists, t2.gdgtfilenm from jde.f00165srm t1 left outer join crpdta.f00165 t2 on t1.gdobnm = t2.gdobnm and t1.gdtxky = t2.gdtxky and t1.gdmoseqn = t2.gdmoseqn;
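Once it finishes, a one-liner gives you the headline numbers – how many media object records actually have a file behind them:

select fileexists, count(*) from jde.f00165srm group by fileexists;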

Mapping central objects files to spec files

Ahh, my package build report has an error: FDASPEC, 0 records…

image

I can never remember which file FDA spec is… Actually, I can.  I open UTB, and I open the spec files.

image

Then choose a pathcode and see the relationships – great!

image

I then used a web-based OCR program http://www.onlineocr.net/ to convert this image to words, so we can all search on this information and find it quicker next time!  Nice!

F98752   JDESPECTYPE_SVRHDR
F98753   JDESPECTYPE_SVRDTL
F98720   JDESPECTYPE_BUSVIEW
F98710   JDESPECTYPE_DDTABL
F98711   JDESPECTYPE_DDCLMN
F98712   JDESPECTYPE_DDKEYH
F98713   JDESPECTYPE_DDKEYD
F98743   JDESPECTYPE_DSTMPL
F98751   JDESPECTYPE_FDASPEC
F98750   JDESPECTYPE_FDATEXT
F98761   JDESPECTYPE_RDASPEC
F98760   JDESPECTYPE_RDATEXT
F98740   JDESPECTYPE_GBRLINK
F98741   JDESPECTYPE_GBRSPEC
F98762   JDESPECTYPE_BUSFUNC
F98745   JDESPECTYPE_SMRTIMPL
F98306   JDESPECTYPE_POTEXT

So now we know the relationship of spec files to central objects tables.
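So next time a package build complains about FDASPEC, you can go straight to the table.  Central objects live in the schema named after the pathcode, so for DV910 a quick sanity check is:

select count(*) from dv910.f98751;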

Friday 20 May 2016

using checksum to verify files before spending lots of time on them…

I had this issue where I was trying to migrate a client to the cloud and I kept getting problems with the zip file not extracting properly, with various errors.  I started to trace the file’s history: it gets created on the client deployment server, then ftp’d to our ftp server, and then downloaded from the ftp server.

This technique is really handy if you are going to move huge bits of data, and want to know that the file is sane before you start to use it in anger.

Download the Windows fciv utility: https://www.microsoft.com/en-us/download/details.aspx?id=11533

I check my client copy and see:

C:\Users\shannonm\Downloads>c:\temp\fciv.exe DV910include.zip
//
// File Checksum Integrity Verifier version 2.05.
//
062172c62ed68fc54422629fb593c8a6 dv910include.zip
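As an aside – if you can’t put fciv on a box, recent versions of Windows can do this natively with certutil, which prints the same MD5:

certutil -hashfile DV910include.zip MD5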

okay, now check the ftp server copy – this is a linux box, so a different utility that gives the same information – phew!  (md5sum is part of coreutils, so it’s almost certainly already installed.)

myriad@mywebftp:/var/www/html/upload/files$ md5sum DV910include.zip
062172c62ed68fc54422629fb593c8a6  DV910include.zip

That shows me that what I’m downloading (the destination) is exactly what is on the FTP server – these two are the same.

Then I log into the source server:

H:\data>fciv DV910include.zip
//
// File Checksum Integrity Verifier version 2.05.
//
0964db2477f1724ec8e5cd5c275cebc1 dv910include.zip

TOTALLY different (well, from an md5 point of view) – what does this mean? https://en.wikipedia.org/wiki/Checksum   It means the file was corrupted on its way up to the FTP server: every download since has been a faithful copy of a broken archive, which is why the extracts kept failing.

I’ve moved from using the FTP server to using S3 buckets directly in AWS.  I had avoided this because of problems reaching https://aws.amazon.com from the client site, but then noticed that I could install the AWS command line tools and run them without any problems.  Using AWS S3, I have no problems with the zip files or the extraction.
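The transfer itself is a one-liner in each direction (the bucket name here is invented for the example):

aws s3 cp DV910include.zip s3://client-migration-bucket/
aws s3 cp s3://client-migration-bucket/DV910include.zip .

The CLI also does its own integrity checking as it transfers, which is exactly the protection that was missing from the FTP hop.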

Note that I also get VERY good speed on the up and down, because there are some fat pipes going into AWS – about 24 GB downloaded in 15 minutes, which is roughly 27 MB/s.

JD Edwards performance, 4 ways of measuring performance for a complete picture

This article explains four different methods you can use to measure (and therefore improve) performance for your JD Edwards implementation.  Regular analysis of these metrics will give you terrific insight into your ERP performance, allow you to improve it, and highlight when things are going wrong.

Subjective vs Objective

You can spend a lot of time and effort on load testing, and there are gains to be made – generally incremental, iterative changes that produce a faster ERP.  You can also, however, make some great improvements in stability with load testing, because you can push the system up to a threshold where it starts to exhibit certain characteristics.

By their nature, some problems only occur every week or month in your system – but load testing can force them to pop up rather more frequently, and also lets you log these situations much better.

The title of this blog is about comparing load testing data against your own historical data to get a subjective view of performance.  This means you can compare against your own internal benchmark, and know and measure any improvements (or slowdowns) caused by changes in your architecture.  This provides cold hard data when users are ringing up and telling you that the system is slow.  But your system could be considerably slower than the system in the next city or suburb – why, and does this matter?

If your users are happy and your managers are happy – I guess it does not matter.  If you are using 10% of your server’s capacity day in and day out – well, personally, I think that matters!  I really do not think that you need to save hardware for a rainy day – especially things like memory – JD Edwards will generally have a high water mark that will not change, day in and day out.  There is no need to have 32GB of RAM on the logic server if your kernels and UBE’s will only ever consume 8.  If you feel that your system is not quick enough, then you need to compare it with industry standards to know how you are performing.

1.  Performance Workbench - free

I developed the concept of a performance benchmark to address the objective view of performance: it lets you compare your system’s performance against industry norms.

http://myriad-it.com/solution/performance-benchmark/

You can read about it and download it from the above link.  It takes a bunch of accurate timestamps for numerous I/O and computational metrics in JD Edwards.

image

Download the par file and install the code from it on your system.  Create the tables in the project.  Run P55PERFT and you get the screen above.  Click the big button and the system will go away and start testing performance.  It’s busy, and will sit there for quite a while running inserts, updates, deletes, BSFNs, etc.

Note that there are some fields in the above that allow you to specify a reply email address and some notes for ME.  If you choose to fill these in, I get a copy of the results and can compare them with industry standards (I maintain a master list of averages).

So this can give you an objective and subjective view of performance at your JD Edwards site.

image

You can see the above (it’s a bit dodgy for this run) – it graphs the current numbers, but also allows analysis of the results over time (comparison).

image

image

So you can see from the analysis above: you choose the test that you want the history for, and the history (in my case 37 previous results) is graphed, with the latest result at the right of the graph (as you are looking at it).

Of course the above is a nice litmus test, but if you really want to deep dive into performance, then you need to start using Oracle Application Testing Suite (OATS).  We own and lease our copy of OATS to clients.  We have various performance offerings that allow us to remotely load test your system and provide you with objective and subjective analysis.  We’ve been lucky enough to load test JD Edwards on ODAs, Exadata, AS/400, windows, large unix and linux.  The results between all of the platforms are very interesting.

I have to say that if you are preparing for a go-live, load testing is a must-do project item.  Every time we’ve performed this exercise we’ve made considerable improvements to performance and stability, and it also lets you fine-tune your hardware allocations.  We do this load testing for all the AWS migrations and implementations we’ve been working on.  It is a perfect way to benchmark the cloud and understand the amount of hardware you need there to get “like for like” performance with on-prem.  With a known hardware budget and proper elasticity – you are on your way to saving money in the cloud.

2.  BSFN performance metrics - free

Remember that there are also some other great benchmarking tools for BSFN performance:

image

You can get this from Server Manager: just navigate to a web server instance and then choose Call Object Stats:

image

This page is pretty easy to curl and script, so you can build an interactive dashboard of your BSFN performance – which can also tell you when things are going wrong.  Ensure that you capture a benchmark so that you have something to compare against.
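A minimal sketch of that scraping idea – the URL below is a placeholder, so lift the real one from your browser’s address bar while you’re on the Call Object Stats page, and substitute your own management console credentials:

curl -s -u jde_admin:password "http://smc.example.com:8999/manage/..." -o callobj_$(date +%Y%m%d).html

Schedule that hourly and diff the results against your benchmark, and you have a poor man’s BSFN dashboard.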

3.  Batch (UBE) performance analysis - free

Another handy place to look – and remember to compare your batch performance regularly:

Using techniques described in this article http://shannonscncjdeblog.blogspot.com.au/2012/03/which-of-todays-ubes-ran-slower-than.html you can determine whether today’s UBE’s were slower than yesterday’s… or last month’s or last year’s.
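If you just want a quick look before reading that post, something like the following gets you started – the schema and column names here are from memory (JCFNDFUF2 for the report name, JCSBMDATE/JCSBMTIME for submission, JCACTDATE/JCACTTIME for completion), so verify them against your own F986110 before trusting the output:

select jcfndfuf2 as ube, count(*) as runs,
       avg(jcacttime - jcsbmtime) as rough_elapsed
from svm910.f986110
where jcsbmdate = 116141  -- JDE Julian for 20 May 2016: 1 + YY + DDD
group by jcfndfuf2
order by 3 desc;

The time columns are HHMMSS integers, so the subtraction is only indicative – but it’s enough to spot today’s outliers.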

4.  Google Analytics for JDE – subscription cost for managed solution (or free)

Finally – my favourite – Google Analytics for a complete ERP performance review.

 image

Above is a two week comparison of over 1,000,000 interactions with an ERP!  It has analysed the average performance for two weeks and overlaid this with the metrics from the current two weeks.  You can choose whatever ranges you want to compare.

What does this data tell you?  Well, it seems that there was a public holiday in Australia on April 25 – ANZAC Day!

We could drill down on this data and tell you if sales order entry was slower on one day compared with another, one week or one month compared to another. 

Who’s using what browser in the last month?

image

What apps are being used, how many times, how long they take to load, and how much time is spent on pages.

image

Usage and speed by hour of the day

image

IT never ends!

impact of 80 JD Edwards users on a windows system

I’m running a fairly intensive interactive load and some batch activity – 80 concurrent users across 2 web servers and 2 enterprise servers.

Everything is running okay.  I’d recommend more CPU for the enterprise servers, but leave the RAM at 8GB for this.  You can see what the boxes are doing at the web tier and the enterprise server tier below.  JDE enterprise servers are generally very light on RAM, especially when you tune your kernels appropriately.  CPU is what matters here, along with network latency.
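For reference, the kernel tuning in question lives in the enterprise server jde.ini – for example the call object kernel definition, which looks like this (the process counts below are purely illustrative, not a recommendation):

[JDENET_KERNEL_DEF6]
krnlName=CALL OBJECT KERNEL
maxNumberOfProcesses=12
numberOfAutoStartProcesses=6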

Enterprise Server

image

Note that there are 4 cores viewable in resource monitor.

image

 

image

Another chart that is really handy, showing the amount of disk and also network traffic that is generated from this exercise. 

image

Server 2 of my pigeon pair is also loaded up the same way.  Note that these servers have a single CPU and 8GB of RAM.

Also, this is a

Web

Note that there are 4 cores viewable in resource monitor.

image

image

Thursday 19 May 2016

JD Edwards and ORA-03113 again

Are you running an Oracle database and JD Edwards?  Do you ever get seemingly random ORA-03113 errors – and when they start, things can often go downhill fast?  I’ve blogged about this for a while; the error occurs on both 11.2.0.4 and 12c databases.  Note also that you need to take a client patch, as it’s OCI that seems to have the issue.

I come across this all the time at big sites and the fix is great and easy.

2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932000    dbperfrq.c471
    OCI0000178 - Unable to execute - SELECT MAOSTP, MAPA8, MAAN8, MASYNCS FROM UATDTA.F0150  WHERE  ( MAOSTP = :KEY1 AND MAAN8 = :KEY2 )  ORDER BY MAOSTP ASC,MAAN8 ASC,MAPA8 ASC

2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932001    dbperfrq.c477
    OCI0000179 - Error - ORA-03113: end-of-file on communication channel
Process ID: 115202
Session ID: 414 Serial number: 19786
 
2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932002    JDB_DRVM.C1005
    JDB9900401 - Failed to execute db request

2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932003    JTP_CM.C1344
    JDB9900255 - Database connection to F0150 (Business Data - UAT) has been lost.

2508/5060 WRK:DGTEST26_09632238_P01012          Thu May 19 14:15:49.932004    JTP_CM.C1298
    JDB9900256 - Database connection to (Business Data - UAT) has been re-established.

http://shannonscncjdeblog.blogspot.com.au/2016/04/simple-post-about-oracle-clientsnot.html is a good summary about the fix.

Note that I’m currently fixing this at a site using Exadata – same problem, same everything!!

All windows this time, but still getting ORA-03113’s when I load up the system with OATS (Oracle Application Testing Suite).

Remember to NEVER trust windows find:

D:\JDEdwards\E910\log>findstr "ORA-03" *.log
jde_2508.log:   OCI0000179 - Error - ORA-03113: end-of-file on communication channel
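For belt and braces, be explicit with the findstr flags – recursive into subdirectories, case-insensitive, literal match:

findstr /s /i /c:"ORA-03113" *.log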

image

Tuesday 17 May 2016

When the JDE interface is not intuitive enough

What do you mean?  This has never occurred.  Surely, everyone at any “JDE101” course in the world has thought, this is the most intuitive software that has ever been created.  So easy to use… No…  Just me…

We’ve had a situation recently where users (proper licensed users) for HS&E just want to enter their issues using a simple web form.  They don’t want to authenticate, they don’t want to navigate – they just want to type and upload.  Fair enough, really.  I think there are about 1000 people that can enter these incidents – so best we make it easy, lower the cost of training, and keep a layer of abstraction from the ERP so that tools releases and the like will not confuse anyone!

So, the webform takes shape:

image

Nice and easy to use, easy to navigate.

Simply adds incidents into JD Edwards using AIS – yes another use for AIS.

This solution creates a cache every X minutes of all of the lookup fields from various JD Edwards tables.  This is done in batch in the background, so JDE could be down and this solution would continue to run.  If JDE is not available for the final “submit”, the JSON payload is saved off for later execution.
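For flavour, an AIS form service request for this kind of add looks roughly like the following – the application, form, version and control IDs here are invented for illustration:

{
  "formName": "P54HS00_W54HS00A",
  "version": "ZJDE0001",
  "formActions": [
    { "command": "SetControlValue", "controlID": "18", "value": "Slip hazard near loading dock 2" },
    { "command": "DoAction", "controlID": "12" }
  ]
}

Because it’s just JSON, “saving it off for later execution” is nothing more than writing the string to disk and replaying the POST to the AIS formservice endpoint when JDE comes back.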

Pretty neat, hey?  This type of thing could run on a tablet and eventually be made available to the general public in an offline format, batch uploaded to JDE.

This is to demonstrate the power of “Web Forms” and how these can be used to get data into your ERP.

One really nice thing about this is that it’s completely mobile and tablet native.  The code knows the resolution of the device and renders the screen based upon this.

image

As you can see from the above, the forms are also able to use the devices location and also camera functionality to facilitate the data entry process.

Mobile native, intuitive data entry to JD Edwards!  Thanks AIS, thanks Myriad Mobile developers!

Sunday 8 May 2016

integrating UPK and OATS – not what I expected!

I’ve been dreaming about this process for a long time. 

Create detailed training material, then use it for your automated regression testing.  Surely if you run through the scenario in UPK, it could publish the steps in a format that would work with OATS; that script could then be run as an automated functional test in OATS.  This would be really nice.  So I spent some time looking into this and getting it working.

First, record something in UPK.  I chose a basic Address book navigation process:

image

One important item: if you fill in the expected results manually, it will help after you import into OATS – all will be revealed.

image

So my scenario is recorded.  I guess this is pretty handy for documentation, but I have some different recommendations for you: publish this as a test document and also a test case.

image

Specifically choose Oracle Application Testing Suite; note that I cannot choose a file format.  Because the Excel on my machine was 2013, it created .xlsx – which could not be opened by OTM.  So I needed to open the file and save it as .xls (not just rename it).

image

Okay, so now you’ve published your content – ready to import into OTM and run an automated test?  (NO – you will not get an automated test… wait and see what you actually import.)  To be honest, open the Excel file and you’ll see the extent of what you are importing: it’s just a list of steps – nothing else.  No automation, no nothing.  A very basic list of steps for manual regression testing.  Wow!

In OTM go to menu Project -> Import Data, choose your antiquated .xls file, then upload.

Select test type – Manual test

image

Then use the automatch function

image

okay!

image

An awesome list of steps – but look at some of them (thanks to bad page titles): this is not going to help you one little bit.

You can then run the test:

image

What I think is much better, however, is using the published test document as the basis for recording an automated functional test that does the same thing.

Thursday 5 May 2016

auto scaling group in AWS

Another cool exercise while doing the AWS training is the formation of an auto-scaling group.  Despite the fact that the exercise was pretty trivial (in terms of workload), it’s amazing to stand up 5 m4.10xlarges to run a “stress” test in about 15 minutes.

I was able to throw 200 CPUs at this and get the following graph:

image

This is 200 CPUs and 5 x 160GB of RAM – 800GB of RAM – for a demo…  It’s totally amazing to be able to stand this workload up in less than 15 minutes of config.  So I’ve got 5 servers running my CPU-intense workload in under 15 minutes.

My autoscale group worked like a charm – I had it spin up another instance when the CPU got over 60%, which is basically all of the time with my command:

stress --cpu 40 --io 8 --vm 6 --hdd 8 -t 3600
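The console did all of this for me, but for the record the CLI equivalent is only a handful of calls – the names and AMI id below are placeholders:

aws autoscaling create-launch-configuration --launch-configuration-name stress-lc --image-id ami-12345678 --instance-type m4.10xlarge --key-name mykey
aws autoscaling create-auto-scaling-group --auto-scaling-group-name stress-asg --launch-configuration-name stress-lc --min-size 1 --max-size 5 --availability-zones ap-southeast-2a ap-southeast-2b
aws autoscaling put-scaling-policy --auto-scaling-group-name stress-asg --policy-name scale-out --adjustment-type ChangeInCapacity --scaling-adjustment 1

Point a CloudWatch CPUUtilization > 60% alarm at the policy ARN that the last call returns, and you have the behaviour described above.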

So my ELB takes on the new compute each time the CPU threshold is reached:

image

My launch configuration is here:

image

As you see, I’m not being nice at all.

Then my auto scaling group is doing the rest:

image

image

Note that my history is a little “spotty”, as the limits of my account mean I can only run 5 x m4.10xlarge machines.

image

I’m going to get this working with web server and enterprise server pairs.  I’m also going to look at internal load balancing from web server to enterprise server.  With the appropriate affinity, I think I can get all of JDE to scale up and down.  Scaling up for batch in the evening is going to be easy too.  I look forward to seeing if the M4’s are much quicker than the M3’s for the ERP payload.

This flexibility is unparalleled in the physical world – of course a workload like this is hard to conceive of in ERP terms, but it’s incredible.

AWS architect course

I’ve been doing more and more with AWS and I’ll begin to blog more and more about it – it really must be the future of compute, and the ERP workload is perfect for it.  Designing JD Edwards in this elastic environment is something that I’m going to complete over the next 6 months.  I plan on having AMI’s (containers, VPC’s, ELB’s, EC2, S3, AZ’s) that will deliver a completely elastic environment for JD Edwards.  I plan to spin up batch and interactive capacity on demand and also contract when required.  This is going to create a cost effective method of running JD Edwards.  Remember!  Friends don’t let friends buy hardware anymore (unless there is a screen attached to the purchase).

This is more of an interest post – the AWS training is excellent.  They give you credentials and you’re able to spin up EC2 instances using your training account.  You’ll see that I was supposed to create a micro instance, but I decided to create a monster and see if the alarm bells went off.

i2.8xlarge – that’s 32 cores and 244GB of memory, not the 1 core and 1GB of memory I was asked for.

I then downloaded and installed stress via yum.  On the training image that was roughly the following – note that stress lives in the EPEL repository, so enable that first if yum can’t find the package:
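sudo yum install -y stress

Then I ran a stress test across all 32 cores using the command below: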

stress --cpu 32 --timeout 6099

image

As you can see from above, the machine is cooking. 

But can you believe how simple this was?  I stood up a 32-way machine, installed software (via my internet gateway) and loaded it up with my choice of workload in 5 minutes or less.  That’s the entire procurement process in 5 minutes – plus it’s all on the training account budget.

I do hope this workload is fairly anonymous…