Thursday, 29 December 2016

E920 upgrade JDE TCs failing, F98950 R89952450

Getting this in the logs:

2400/6448 WRK:Starting jdeCallObject            Thu Dec 29 14:04:11.762065    jdb_rst.c1243
    JDB9900307 - Failed to find table specifications

2400/6448 WRK:Starting jdeCallObject            Thu Dec 29 14:04:11.762067    jdb_rq1.c1883
    JDB3100007 - Failed to get valid table specifications

2400/6448 WRK:Starting jdeCallObject            Thu Dec 29 14:04:11.762070    tcinit.c1250
    TCE009101 - Couldn't open F952450.

Funny thing is (well, it’s not funny): this is a new path code that is being upgraded – UA920.

This table is new in 920 and should be in central objects.

I can see that it does exist and has the right column count:

select count(1) from qsys2.syscolumns where table_schema = 'COUA920' and table_name = 'F952450';

This did not fail for DV920 or PY920… so we have something specific for UA920.

I enabled logging on the dep server to find out more…

Dec 29 14:04:11.762060 - 2400/6448 WRK:Starting jdeCallObject            Exiting JDB_FreeUser with Success(UserHandle 0D3F98E0)
Dec 29 14:04:11.762061 - 2400/6448 WRK:Starting jdeCallObject            Exited jdeCloseDictionary with DDType 0
Dec 29 14:04:11.762062 - 2400/6448 WRK:Starting jdeCallObject            JDB9900913 - Failed to create global table specs for F952450
Dec 29 14:04:11.762064 - 2400/6448 WRK:Starting jdeCallObject            JDB9900307 - Failed to find table specifications
Dec 29 14:04:11.762066 - 2400/6448 WRK:Starting jdeCallObject            JDB3100007 - Failed to get valid table specifications
Dec 29 14:04:11.762068 - 2400/6448 WRK:Starting jdeCallObject            Exiting JDB_OpenTable(Table = F952450) with Failure
Dec 29 14:04:11.762069 - 2400/6448 WRK:Starting jdeCallObject            TCE009101 - Couldn't open F952450.
Dec 29 14:04:11.762071 - 2400/6448 WRK:Starting jdeCallObject            Entering JDB_CloseTable (hRequest 087A5A40)
Dec 29 14:04:11.762072 - 2400/6448 WRK:Starting jdeCallObject            Entering JDB_CloseTable(Table = F98950)
Dec 29 14:04:11.762073 - 2400/6448 WRK:Starting jdeCallObject            Entering JDB_ClearSequencing (hRequest 087A5A40)
Dec 29 14:04:11.762074 - 2400/6448 WRK:Starting jdeCallObject            Exiting JDB_ClearSequencing with Success

Then I think about the fact that this is a new path code and I look in the spec.ini

D:\JDEdwards\E920\UA920\spec & planner

PLANNER_DataSource=Local - PLANNER Specs
DV920_DataSource=Local - PLANNER Specs
PD920_DataSource=Local - PLANNER Specs
PS920_DataSource=Local - PLANNER Specs
PY920_DataSource=Local - PLANNER Specs

Okay, I think that this might be it.

I add

UA920_DataSource=Local - PLANNER Specs

And try again.
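For reference, this is what the relevant block of my spec.ini looks like after the change – the existing path-code entries shown above, with the new UA920 line added:

```ini
PLANNER_DataSource=Local - PLANNER Specs
DV920_DataSource=Local - PLANNER Specs
PD920_DataSource=Local - PLANNER Specs
PS920_DataSource=Local - PLANNER Specs
PY920_DataSource=Local - PLANNER Specs
UA920_DataSource=Local - PLANNER Specs
```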

Am I confident?  No, I leave logging on…  This is the true definition of CNC confidence with table conversions: if you think you’ve nailed it, you turn logging on in the client jde.ini on the dep server.


I should have been more confident – problem solved.

Thursday, 22 December 2016

Is JDE too slow in the mornings, perhaps it needs a wake up call?

Do you find that JD Edwards is a little tardy in the mornings or on weekends?  Would you like an innovative way to fix this?  I might have an idea for you!

We combine Google Analytics data on your ERP usage with AIS and AWS Lambda to warm up the environment that you want to use.  Why do we use so many pieces to do this?  Because it’s completely dynamic and based upon demand.

As has been discussed before, we’ve configured Google Analytics to record ERP usage and performance.  We extract the most highly used applications:

A quick snippet shows the applications that are used most over the last month.


We have some Lambda code that loads this information dynamically and makes AIS calls to the relevant web server for those top applications.  This ensures that they are cached and ready for use first thing in the morning.  What servers do you need for Lambda?  None!  By definition this is serverless compute.  So we are using no servers to trawl your active ERP usage and then ensure that these applications are ready for action.

I did not write any of this code, but my team of awesome innovation experts have done the hard yards for me!

Personally I think that this is some really innovative use of AWS and AIS to ensure that users have a consistent environment to work in.

You can then use Google Analytics to measure the page load times of the applications on Monday mornings to ensure that the performance is consistent.
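The Lambda side of the idea described above could be sketched roughly like this.  To be clear: the function names, the base URL and the AIS endpoint path here are all assumptions for illustration, not the actual implementation:

```python
# Hypothetical sketch of a warm-up Lambda: read the top applications from
# the event (extracted from analytics data) and hit each one via AIS so
# the web server caches are primed before users arrive.
import urllib.request

AIS_BASE = "https://jde-ais.example.com/jderest"  # hypothetical AIS server URL


def build_warmup_requests(top_apps, base_url=AIS_BASE):
    """Build one AIS call URL per highly used application ID."""
    return [f"{base_url}/v2/appstack?app={app}" for app in top_apps]


def handler(event, context):
    # top applications extracted from the analytics export
    top_apps = event.get("topApps", ["P4210", "P03B102", "P4310"])
    urls = build_warmup_requests(top_apps)
    for url in urls:
        try:
            urllib.request.urlopen(url, timeout=10)  # fire-and-forget warm-up call
        except Exception:
            pass  # a failed warm-up is not fatal
    return {"warmed": len(urls)}
```

Because the app list comes in on the event, the warm-up stays completely demand-driven: change what analytics reports as "top" and the Lambda warms something different.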

Tuesday, 20 December 2016

More Statistics from your Batch Jobs, UBE performance statistics


F986114 is great; it shows you some good statistical data for when the job actually ran and ended, not taking queue time into consideration.

The query below is extracting the UBE performance data between 19/12/2016 and 24/12/2016.  Note that the times written in F986114 are UTC, so in the example below, I needed to add 13 hours to get the NZ timezone (+interval ‘13’ hour) for the data I was reporting on.  You need to work out your own offset.

  from svm900.f986114, ol900.f9860, py900.f983051
  where trim(jcpid) = trim(siobnm) and trim(jcvers) = trim(vrvers) and trim(jcpid) = trim(vrpid) and jcjobsts = 'D'
  and (JCETDTIM + interval '13' hour) < TO_DATE('25122016','DDMMYYYY') and (JCETDTIM + interval '13' hour) >= TO_DATE('19122016', 'DDMMYYYY')
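The UTC-to-local shift that the query does can be sketched in Python too, which is handy for sanity-checking your own offset (13 hours for NZDT in this example):

```python
from datetime import datetime, timedelta

# F986114 stores times in UTC; shift by your local offset before
# comparing against local report dates.
NZ_OFFSET = timedelta(hours=13)


def to_local(utc_dt, offset=NZ_OFFSET):
    """Convert a naive UTC timestamp to local time by adding the offset."""
    return utc_dt + offset


utc_end = datetime(2016, 12, 19, 22, 30)  # job ended 22:30 UTC
local_end = to_local(utc_end)
print(local_end)  # 2016-12-20 11:30:00 -> counted in the 20 Dec local day
```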

You can then put this data into excel and run any number of pivot tables over the top to extract slow jobs / fast jobs etc.
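If you would rather skip Excel, the same slow-job/fast-job analysis can be sketched in plain Python over the exported rows (the column layout and sample values here are just examples):

```python
from collections import defaultdict

# rows as exported from the query above: (report, version, runtime_seconds)
rows = [
    ("R42565", "XJDE0001", 120),
    ("R42565", "XJDE0001", 180),
    ("R09801", "XJDE0002", 30),
]


def average_runtimes(rows):
    """Average runtime per (report, version) pair - the same answer a
    pivot table over the exported data would give you."""
    totals = defaultdict(lambda: [0, 0])  # (total seconds, run count)
    for pid, vers, secs in rows:
        entry = totals[(pid, vers)]
        entry[0] += secs
        entry[1] += 1
    return {key: total / count for key, (total, count) in totals.items()}
```

Sorting the result by value gives you the slowest jobs first, which is usually where the trend analysis starts.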

Regular analysis will allow you to evaluate trends and also work out if you have problems somewhere in JDE.

More accurate automatic data selection entry tips

I’m using R98403G with the results from R9698711 and need to create 500 or so missing tables.  I’ve spoken about this previously.

This time, however, I engaged my trusty “sendKeys” VBS script to pound the data selection into the win32 client, and alas, the application no longer seems to allow the SendKeys functions to work!  Doh…

So, I’m going to use the browser, which does not suffer from the same deprecation in functionality.

I’ve had to modify my script a little bit to ensure that web data selection entry is going to work.

So, I have a spreadsheet which has all of the values that I want to put into data selection.  I’ve created a formula which is going to make this easy to put into my script.


You can see that my formula is creating a massive comma-delimited list of strings in double quotes (“)

Cell B1 = =+""""&A1&""""&","

Cell B2 = =+B1&""""&A2&""""&","

Then you can drag cell B2 to the bottom of the data that you want to add to data selection.

Mine looks like:


Take away the last comma, replace it with a closing ) and add an opening ( at the start.
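If spreadsheet drag-formulas are not your thing, the same quoted list can be generated with a few lines of Python (the table names here are just examples):

```python
def to_selection_list(values):
    """Wrap each value in double quotes, join with commas and add the
    surrounding parentheses, ready to paste into the script's Array(...)."""
    return "(" + ",".join(f'"{v}"' for v in values) + ")"


print(to_selection_list(["F0101", "F0411", "F1208"]))  # ("F0101","F0411","F1208")
```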

Now, the script template:

set objShell = wscript.createobject("WScript.Shell")
' replace this list with the values you need to type
tableList = Array("F0101","F0411","F1208")

' 10 seconds to get your cursor into the data selection text box
wscript.sleep 10000

for each table in tableList
    objshell.sendkeys table
    wscript.sleep 500
    ' ctrl+alt+a adds the entry to the list
    objshell.sendkeys "^%(a)"
    wscript.sleep 1000
next

You need to create a file like “enterDataSelection.vbs” and paste in the above script

So, go to your web browser and get the data selection “list of values” screen ready.


With your cursor in the text box.

The script is going to type the first entry, wait half a second, press ctrl-alt-a, wait 1 second, and then start again on the next item.

You can make this line as long as you need:


Note that you just need to replace the list in brackets with what you need to type.

If you then run the script (right click open) and then give your data selection window the focus (select the text box), the script will start typing / adding the elements.

Try it out with a small amount…

Extra for experts:

AppActivate is a useful function for this work, but the title (which you can use in AppActivate) for this purpose is important.

I really should have some code like the following at the start of my script; it would prevent my script from typing into the wrong window – but I’ve been struggling to find the complete window title…

success = False
Do Until success = True
  success = objshell.AppActivate("Batch Versions - Work With Batch Versions - Available Versions - Google")
  wscript.sleep 1000
Loop
wscript.sleep 100

But, you can get the titles with the below:

C:\Users\shannonm> Tasklist /V |findstr /C:"Batch"
chrome.exe                   10356 Console                    1    194,276 K Running         MITS\ShannonM                                           0:42:27 Batch Versions - Work With Batch Versions - Available Versions - Google
ApplicationFrameHost.exe     11800 Console                    1     32,784 K Running         MITS\ShannonM                                           0:00:06 Batch Versions - Work With Batch Versions - Available Versions ?- Micros

You can see from the above that the full title differs quite a bit between browsers.  AppActivate does not really work with PIDs (for what we want to do), therefore we need to use AppActivate with the full titles from the above.  Note that Tasklist /V is very handy for listing all of the titles.
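Picking the title out of that Tasklist /V output by eye is painful, so here is a small sketch that does it programmatically.  The parsing assumption (title is everything after the h:mm:ss CPU-time column) is mine, based on the sample output above:

```python
import re
import subprocess

# the window title is the text after the CPU-time column (e.g. 0:42:27)
TITLE_RE = re.compile(r"\d+:\d{2}:\d{2}\s+(.+)$")


def window_titles(filter_text, tasklist_output=None):
    """Extract full window titles from `Tasklist /V` output, keeping rows
    whose title contains filter_text.  Pass tasklist_output for testing;
    on Windows it defaults to running the real command."""
    if tasklist_output is None:
        tasklist_output = subprocess.check_output(["tasklist", "/V"], text=True)
    titles = []
    for line in tasklist_output.splitlines():
        m = TITLE_RE.search(line)
        if m and filter_text in m.group(1):
            titles.append(m.group(1))
    return titles
```

The returned strings can be pasted straight into the AppActivate call above.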

Sunday, 18 December 2016

Installing JD Edwards in AWS using RDS

Ever wanted high availability with JD Edwards while using Oracle Standard Edition?  Have you looked at the complications of licensing and recovery and turned away, thinking that things are too difficult?  I know that I have.

AWS RDS for Oracle has changed this dilemma, making it very easy to use Oracle “as a service” – carrying over your Standard Edition database licences as part of Oracle Technology Foundation and creating the durability you need in your implementation.  I’m talking about a completely recoverable and highly available (well, as highly as is needed) JD Edwards implementation using powerful built-in AWS features and functions – as well as some smart architecture.

Myriad IT have been working closely with AWS engineers to write a seamless guide to installing JD Edwards on AWS using RDS to give you all of the above and more.  We’ve created a number of reference architectures built on the architecture that is spelled out in the white paper.  This means that your web servers and enterprise servers are constantly up and running in alternative availability zones.  Your database is running in a single AZ using RDS and will “fail over” to an alternate AZ seamlessly to JD Edwards.  JD Edwards has never been perfect at handling these fail-overs – but a quick restart of ent and web servers (if required) will give you complete functionality AND production scale while handling a DR event.  This also gives you quasi high availability at the same time.

AWS have enabled you to architect a highly available and disaster recoverable JD Edwards environment facilitated by RDS.  The white paper that has been created in conjunction with AWS shows how (with a couple of tweaks) you can simply get JD Edwards running in AWS using RDS – seamlessly.  You can see more details and actually download the white paper here:

This shows you the exact steps for running the platform pack against RDS and using the native Data Pump files and utilities to get JD Edwards working in RDS.  After following the process, you’ll be able to run up your deployment server, run all of your ESUs and get the environment complete in a very small amount of time.  You can then perform any amount of performance testing and take advantage of limitless elasticity to bring your business a very cost efficient, stable and powerful platform.

Myriad IT have taken this concept further than the white paper that you see here: we’ve implemented media objects to sit natively in S3 buckets, and we are using Lambda functions which call AIS forms to “pre-cache” an environment on a Monday morning to take away those slow patches.  We have ELBs in front of load-balanced pigeon-pair web and ent servers to scale when required – and more.

If you are considering dipping your toes into the cloud, our JD Edwards reference architectures, many live clients and implementations will surely make your journey more efficient.

I’ll begin to post more technical details on the implementation of JD Edwards in AWS – in an attempt to try and highlight the efficiencies that you can gain from this implementation type.

We are so proud to be involved in this innovation with such an amazing business partner – AWS!

Sunday, 11 December 2016

OATS JDE regression testing & data banks

This could be one of the most boring posts that I’ve ever done, or you might love it.  It might be exactly what you’ve been looking for if you have done a lot of regression testing using OATS.  Haven’t?  Oh well.

I’ve been helping the team record 80 business scenarios in OATS for automated regression testing.  This is a cool service that will test tools releases and ESUs and give the business a level of confidence that the changes are working, without involving the business.  This is going to be more important with the proliferation of SaaS, but let’s deal with that in another post.

I have a team of people recording the scripts and using OATS.  They’ve done a great job but, as teams do, they are using databanks (sometimes up to 4) and are using the same databanks in multiple scripts.  They are also expecting the current record (for the current iteration) to be passed between the scripts.  Oh dear, I need to invent some magic to make this happen.

Actually, this is a large gap I see with automated regression in OATS: I want to be able to save off the “latest” number or unique identifier or whatever…  I want to save off the last PO # I created and read it back next time the script runs.

So, I’ve put together a couple of simple scripts (functions) that will write and read values in a properties file.  Therefore, I can save off the current pointers to databank iteration values, or I can save any script-related data for next time.  Nice.

// requires java.io.* and java.util.Properties
public int OATSWriteEntry(String KeyToWrite, String ValueToWrite)
{
    Properties prop = new Properties();
    OutputStream output = null;

    try {
        // load the properties file first (path as in my environment;
        // point this at your own properties file)
        FileInputStream in = new FileInputStream("\\\\vlneoats\\OatsTests\\");
        prop.load(in);
        in.close();
        // set the properties value
        output = new FileOutputStream("\\\\vlneoats\\OatsTests\\");
        prop.setProperty(KeyToWrite, ValueToWrite);
        // save properties (no header comment)
        prop.store(output, null);
    } catch (IOException ex) {
        ex.printStackTrace();
    } finally {
        if (output != null) {
            try {
                output.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return 0;
}

and the read function:

public String OATSReadEntry(String KeyToRead) throws Exception
{
    Properties prop = new Properties();
    InputStream input = null;

    try {
        input = new FileInputStream("\\\\vlneoats\\OatsTests\\");
        // load the properties file
        prop.load(input);
        // get the property value and return it
        return prop.getProperty(KeyToRead);
    } catch (IOException io) {
        io.printStackTrace();
    } finally {
        if (input != null) {
            try {
                input.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return "ERROR";
}

I created these in a separate project and called this from all my scripts.

I called it with code like the following, when I wanted to iterate the script counter:


String szTempValue = "";
szTempValue = getScript("ShannonScratch").callFunction("OATSReadEntry", "0501Equipment_Counter").toString();
getVariables().set("CurrentScriptCounter", szTempValue, Variables.Scope.GLOBAL);
getVariables().set("OriginalScriptCounter", szTempValue, Variables.Scope.GLOBAL);
info("Have read value {{CurrentScriptCounter}} as iteration point");
// incrementing counter
int tempCounter = Integer.parseInt(szTempValue) + 1;
getScript("ShannonScratch").callFunction("OATSWriteEntry", "0501Equipment_Counter", Integer.toString(tempCounter));
getVariables().set("CurrentScriptCounter", Integer.toString(tempCounter), Variables.Scope.GLOBAL);
info("Setting current iteration to {{CurrentScriptCounter}}");
// original script
getVariables().set("Todays Date", "{{@today(dd/MM/yyyy)}}", Variables.Scope.GLOBAL);

Apologies for my dodgy java code, this is based upon desperation, not skill (as you can tell).

The above reads an entry from the properties file (0501Equipment_Counter), increments it and then writes the new value back for next time.  You can see that my function library is “ShannonScratch”.

My properties file looks like this:

#Sun Dec 11 12:36:06 EST 2016

So now I can schedule my OATS scripts using OTM, and they remember where they are up to.

JD Edwards and Azure AD Services for SSO via oauth

I cannot believe how much I enjoy logging into JD Edwards when I do not need to enter my password.  I then start to count the $$ people are saving as their entire staff are no longer forgetting their JD Edwards passwords.  That’s right, long usernames and passwords are also completely supported.

Another nice thing is that traditional sign-in also works for the luddites and the people that need to use the thick client.

Some of the uber nerds at Myriad have written some awesome software that allows you to SSO into JD Edwards using Azure AD Services – wow.

When “on prem”,


We get the modified login screen, fancy new button for AD login.

click it and…


you are in – immediately.

We are using the standard long-username mapping functionality in JD Edwards to ensure that it’s all compliant.

sign off:


Then redirected back to the login screen:


Wow, how cool is that.

If you try and login while not in the corporate network


You get a challenge for your domain credentials via Azure

Then you log into JDE

How cool is that?

The other extension of this solution is that JDE is hosted in AWS and Azure AD authentication is of course from another cloud – can you see it all coming together?

Friday, 9 December 2016

JDEaaS or E1aaS–how can it be? JD Edwards as a service

No technical details in this post, more strategy.  I want to comment on the ability for JD Edwards to go to the cloud and become a service.  This is going to let clients put resources into selling more widgets, not building more packages!

Do you want to take away the hassles of managing JD Edwards on a day-to-day basis?  Stop worrying about performance, security, DR and more?  Do you wish you could be provisioned with a URL that you can give to all your users, so that everyone is in a better place?

We are doing this for people now, it’s the new normal.

The journey to the cloud can be super complex; you start to worry about security, high availability, data sovereignty and more – but you do not have to.  You can decide to take on one system at a time.  This can be JD Edwards.  You can get a fully managed service with SLAs that suit your business and a simple PUPM charging method.

We can put you in a tier 1 cloud provider (AWS) and completely manage JD Edwards for you – wherever you are in the world.  We can have distributed backups all around the world in secure locations if this is important to you.  I would not rely on a private cloud, as the extensibility and richness of services will always limit your design.  The reason you want to go to the cloud is to remove constraints.

We can recreate your entire environment with CloudFormation to have you up and running in minutes or hours after the worst disaster.  We can architect this to be highly available and disaster recoverable (automatically).  We can architect scale-up and scale-down of compute resources to save you money and give you the horsepower when you need it.  We have predefined cloud templates that give you a lot of this power today.

The solution will be more secure than what you currently have: there will be zero database access for anyone, there are firewalls between everything, and we can arrange it so that only your traffic gets through to your resources.

We can offer automated testing and also provide you with dashboards of performance and usage – unparalleled insight into the usage and performance of your ERP.  Automated testing is going to become more and more important as SaaS vendors continually provide you with updates (tools releases and upgrades), as the tests will tell you what is wrong in the lower environments.  Don’t worry about all the technical messages though; we’ll handle this.

We’ve been on the forefront of AWS migrations and continue to do this more and more for our clients.

The constant innovation in AWS allows us to solve problems quicker and easier than ever before for our clients.

Some managed service providers are including tools releases, quarterly patching and upgrades in the service - this comes at a cost though.