About Me

I work for Fusion5 Australia. Connect with me on LinkedIn here. I'm raising money for a good cause at the moment – follow the link to donate to the Leukaemia Foundation.

Friday, 17 November 2017

Movember just went high tech!

I'm doing Movember again this year – men's health is a great cause and I like to do my bit.  I think that the Movember movement is slowing down in Australia.

I made my Movember a little more innovative than most – surprise!

I decided to first create a QR code, so that people could easily donate:

That was simple: my donation URL is https://au.movember.com/donate/details?memberId=316906, so I converted it with http://www.qr-code-generator.com/

Cool – so now people can scan and donate. That was easy!

Add some rhyme, and I'm away.

The next part is cooler: I own some Estimote beacons, so why don't I program them to show my donation page?  I need to go to bitly.com to generate a short URL, as I can only save 17 bytes, but that is easy.
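For the curious, that 17-byte limit is easy to sanity-check in shell. This is just a sketch: the short URL below is made up, and it assumes the beacon compresses the "https://" scheme prefix down to a single byte (as the Eddystone-URL format does), leaving roughly 17 bytes for the rest:

```shell
# Hypothetical check that a shortened URL fits an Eddystone-URL beacon frame.
url="https://bit.ly/2zXaYbC"      # made-up short URL
rest="${url#https://}"            # the part the beacon must store verbatim
len=$(printf '%s' "$rest" | wc -c | tr -d ' ')
if [ "$len" -le 17 ]; then
  echo "fits: $len bytes"
else
  echo "too long: $len bytes - shorten it further"
fi
```

If your shortened URL doesn't fit, generate a shorter custom slug before programming the beacon.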

This is the beacon I've put outside my office.  I now get out my Android phone and we have a donation site being pushed out to anyone listening to the physical web.  Fingers crossed that my technology is going to get some donations.

The above is a screenshot from my phone showing the two beacons that I have projecting websites.

Beacons are really cool IoT devices; we are implementing them at a number of clients and integrating them into JDE.

Tuesday, 14 November 2017

Embark on IoT–where do you start?

If I were going to implement some IoT to show the boss, I'd probably use the Orchestrator in JDE.  It's pretty cool and pretty simple, and you could impress the boss fairly easily.  But what if you REALLY wanted to impress the boss?  What if you wanted to be able to support disconnected devices, tonnes of messages – and what about a thing shadow?  All native when looking at the AWS IoT offering.

Local caching?  Look no further than https://aws.amazon.com/greengrass/

Greengrass is like an offline agent for IoT – awesome, and native to the suite.

I'm also unsure how JDE might process millions of devices and trillions of messages, which I know AWS can scale out to.

Connect An IoT Device

Above shows the native consumption of MQTT messages into the AWS engine.
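From the device side, that MQTT ingestion is pleasantly simple. Here is a minimal sketch – the endpoint, topic and certificate file names are assumptions (AWS IoT issues per-device certificates), so the publish itself is commented out:

```shell
# Build a small telemetry payload and (hypothetically) publish it to AWS IoT Core.
payload=$(printf '{"deviceId":"%s","tempC":%s}' "freezer-01" "-18.5")
echo "$payload"
# Requires the Mosquitto clients and the device's AWS IoT certificates:
# mosquitto_pub -h example-ats.iot.ap-southeast-2.amazonaws.com -p 8883 \
#   --cafile AmazonRootCA1.pem --cert device.pem.crt --key private.pem.key \
#   -t 'freezers/freezer-01/telemetry' -m "$payload"
```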

Process IoT Data

You can see that the above is for an autonomous car – forget that, though; it could be a freezer for all I care.  The cool thing is that the data can be processed into a data warehouse using Redshift, or landed in inexpensive big-data processing locations in S3 buckets.  Save it all for later.  This also shows real-time insights using QuickSight, a possible downstream product of big-data analysis, plus ML and AI for predictive maintenance.  This would call orchestrations in JDE (or just AIS calls) to raise work orders and react to breaches of IoT-configured thresholds.
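That last step could look something like the sketch below – an IoT rule action posting a threshold breach to a JDE orchestration over AIS. The host, orchestration name and credentials are all hypothetical, and the curl call is commented out because it needs a live AIS server:

```shell
# Hypothetical orchestration call when an IoT threshold is breached.
body=$(printf '{"equipmentNumber":"%s","reading":%s}' "34665" "98")
echo "$body"
# curl -s -X POST "https://ais.example.com/jderest/orchestrator/CreateWorkOrder" \
#   -H 'Content-Type: application/json' -u jdeuser:password -d "$body"
```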

                A high-level view of AWS IoT

A complete solution is available, as seen above, making a thing shadow a native part of the toolkit.  This is something that is going to be very important with IoT moving forward: being able to interrogate a digital double.  Imagine putting on the VR goggles and seeing the entire physical device as a digital double of any asset that you are maintaining – pointing your virtual hands at any part of the machine and seeing all of the values that are being sent to IoT.  Welcome to the future!

Use JDE for what it’s good at – use well architected integration, use best of breed cloud solutions where appropriate!

Wednesday, 1 November 2017

A really quick Oracle performance test–what did you get?

Ever had a slowdown that you cannot really explain?  I know that I have.

What you always need is a set of baseline tests, things that ground your expectations.

Remember that we’ve provided these sorts of things with ERP analytics (at a high level)

and performance benchmark - http://myriad-it.com/solution/performance-benchmark/ (which I think is really cool).

But let’s take it down another notch, database only!

Imagine that things are slowing down and you want to find the problem.  Performance problems are like a pyramid, something like:


If your hardware is rubbish, everything will be rubbish.

If your database is rubbish, everything will be rubbish…

You see where I'm going.

So, I’d first run some dd commands on the hardware to check disk speeds, I’d check the location of the data disks and then the redo disks.  I check the disk speed where temp is written and swap.  make sure they are all pretty quick.

[root@ronin0-net1 homewood]# dd if=/dev/zero of=speedtest1.dmp oflag=direct conv=notrunc bs=1M count=11200

6527+0 records in

6527+0 records out

6844055552 bytes (6.8 GB) copied, 299.438 seconds, 22.9 MB/s

The above would indicate a VERY large problem.

[root@ronin0 homewood]# dd if=/dev/zero of=speedtest1.dmp oflag=direct conv=notrunc bs=1M count=11200

11200+0 records in

11200+0 records out

11744051200 bytes (12 GB) copied, 25.8044 seconds, 455 MB/s

The above would make you smile!

Then – you’ve tested the performance of a bunch of locations  - happy days.  Now the database.

Once again, simple things for simple people.

create a SQL script (say, shannon.sql) with the following contents:

set echo on
set feedback on
set timing on
spool output.txt
begin
   execute immediate 'drop table testdta.f0101perf';
   execute immediate 'create table testdta.f0101perf as select * from testdta.F0101 where 1 = 0';
   execute immediate 'grant all on testdta.f0101perf to PUBLIC';
   for a in 1..100000 loop
      insert into testdta.f0101perf select * from testdta.F0101 where aban8 = 100;
      commit;
   end loop;
end;
/
spool off
quit;

And run it at the commandline:

C:\Users\shannonm>sqlplus JDE@orcl @shannon.sql

SQL*Plus: Release - Production on Wed Nov 1 14:03:53 2017

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set feedback on
SQL> set timing on
SQL> spool output.txt
SQL> begin
  2    execute immediate 'drop table testdta.f0101perf';
  3    execute immediate 'create table testdta.f0101perf as select * from testdta.F0101 where 1 = 0';
  4    execute immediate 'grant all on testdta.f0101perf to PUBLIC';
  5    for a in 1..100000 loop
  6       insert into testdta.f0101perf select * from testdta.F0101 where aban8 = 100;
  7       commit;
  8    end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:31.75
SQL> quit;
Disconnected from Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


So, now we can open our results file, output.txt, which is written to the directory we ran the script from (again, nothing fancy).  Remember that address book record 100 must exist – I could make that smarter with = (select max(aban8) from crpdta.f0101), but that would be an extra table scan (index and sort) that I did not want to execute.

What does this do?

Creates a copy of F0101 and then inserts 100,000 records into it.

SQL> begin
   2    execute immediate 'drop table testdta.f0101perf';
   3    execute immediate 'create table testdta.f0101perf as select * from testdta.F0101 where 1 = 0';
   4    execute immediate 'grant all on testdta.f0101perf to PUBLIC';
   5    for a in 1..100000 loop
   6        insert into testdta.f0101perf select * from testdta.F0101 where aban8 = 100;
   7        commit;
   8    end loop;
   9  end;
  10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:31.75
SQL> quit;
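As a rough yardstick, you can turn that elapsed time into an insert rate (numbers from the run above):

```shell
# 100,000 looped single-row inserts in 31.75 seconds
awk 'BEGIN { printf "%.0f inserts/sec\n", 100000 / 31.75 }'
```

Run the same script on a couple of environments and you have a simple, comparable database baseline.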

Remember that this is not really testing index creation and index tablespaces, so you might want to make the test a little more realistic – but you get the picture.  It's easy to put a bunch of indexes on the table and go from there.

Then you need to work on installing Performance Benchmark to start getting stats on the other parts of your ERP – oh, and ERP Analytics (https://shannonmoir9.wixsite.com/website/erp-analytics)

Sunday, 22 October 2017

Tools release 9.2 update 2 is GA

There are a heap of cool features – let me summarise them, but do read the source of truth: http://www.oracle.com/us/products/applications/jd-edwards-enterpriseone/jde-ga-10-17-3961047.pdf

  • additions to UXOne
  • mobile time entry – new app
  • mobile inventory transfer and cycle count
  • Supporting MAF 2.4 (but why would you bother?) https://docs-uat.us.oracle.com/middleware/maf240/mobile/develop-maf/whats-new-this-guide-release-2.3.2.htm
  • There are a heap of application enhancements – which is a little strange for something labelled a tools release.  I guess we are once again seeing the execution of continuous delivery
    • Manufacturing Production Execution Process Simplification
    • HCM improvements
    • Finally - Joint Venture Management - Percentage of Ownership and Distributions
    • Capital Asset Management and Service Management
    • Announcing JD Edwards EnterpriseOne Notifications – NOT mobile message notifications. 
      • orchestrator can now process notifications -
      • the notification system will notify the appropriate users via their preferred delivery: within the JD Edwards web client, in the JD Edwards Work Center, or via email or text message.
      • Wow, does this mean perhaps some attempt at the sadly missing workflow engine?
      • Where are the mobile notifications?
      • I have big plans to integrate Microsoft Flow into JD Edwards natively as a fully featured and rich workflow engine
    • JD Edwards EnterpriseOne Orchestrator Enhancements
      • read from external data
      • read from watch lists
      • is this going to be workflow I ask (finally!)
    • Server Manager REST API enhancements.  This is cool if you want to connect SCOM or other management product into SM to manage the organisation.
      • Enterprise Server
      • HTML Server
      • Application Interface Services Server (AIS)
      • Transaction Server (RTE)
      • Business Services Server (BSSV)
      • BI Publisher Server for One View Reporting (OVR)
      • Database Server
    • Enhancements to Simplify Staying Current
      • Anything in this area is good.  You can track whether BSFNs are being called
      • I’d still use our ERP analytics program and augment the information with this.
    • More platform certifications – could there be a more boring list?  (MSFT EDGE!)
      • Oracle Database 
      • Oracle JavaScript Extension Toolkit (JET) 3.1 
      • Oracle Mobile Application Framework (MAF) 2.4 for Mobile Foundation 
      • Microsoft EDGE browser 38


Monday, 16 October 2017

CD3–in action

Continuous delivery is way too real.  How do I know?  I’ve seen it.

Say, for example, you want to look at the release catalogue from Oracle (new from 9.1):

start here:  https://apex.oracle.com/pls/apex/f?p=24153:99:24777358370464:TAB:NO:::&tz=-6%3A00


Choose JDE


Choose compare releases

And now compare applications:


Cool hey?  So you can now choose a month to compare with – not a “dot” release.

Wednesday, 6 September 2017

Bulk version change tips and tricks

Ever needed to create a lot of versions as copies of others?  Ever needed to create versions and also change data selections and processing options?  Say, for example, you opened another DC and wanted to copy all of the config from the previous ones – reuse all of your IP.  Well, do I have some good news for you… well, indifferent news – it can be done.

The first step to getting this working is brainstorming, and one of my favourite people to brainstorm with is Shae.  We can have quick, single-syllable conversations, grunt a bit – but at the end of the day articulate an elegant solution to some amazing technical problems.  I'm very lucky to have peers like this to work with.  Shae came up with the idea of using a par file to complete this task – and that was a great idea!  I can easily create a project with SQL and populate it with all of the versions I need to copy.  I can also create all of the F983051 and central objects records to create the base objects, but I'd need to use load-testing tools or scripts to change all of the POs and data selection.

Shae's idea to use the par file was great – it seemed possible.  The client in question has about 500 versions, all for a particular DC, and I needed to change names, POs and data selections based upon the new name – okay, challenge accepted.

There are heaps of ways of doing this – Java, node.js, Lambda, VBScript – but I went old school: a little bit of sed and awk.

I basically took the par file, sftp'd it to Linux and then ripped it apart.

The structure was not too crazy to deal with, although it did feel like Russian dolls – a zip file in a zip file in a zip file.

There were also some pretty funky things, like Unicode files in the middle rather than normal files, and base64 strings for POs – but nothing was going to stop me.

What I’m going to do is just cut and paste the script here, you’ll get the idea of what needed to be done from the sections and the amazing comments.

In my example the version names, PO’s and data selection all changed from string TT to string GB – so it was really easy to apply these rules through the script.

At the end of the day, this created a separate par file that you can restore to a project with all of the new versions in it!  Really nice.

There is a tiny bit of error handling and other things – but really just showing you what can be done.

Imagine if you needed to change the queue on 100s of versions, or anything like this.  You could use some of the logic below to get it done (or be nice to me).

#!/bin/bash
if [ $# -ne 3 ]
then
     echo "USAGE: $0 <parfile> <fromString> <toString>"
     exit 1
fi

#positional parameters (these assignments are implied by the usage above)
parfile=$1
fromString=$2
toString=$3
_debug=1

parfileNoExt=`echo $parfile | awk -F. '{print $1}'`
expDir=`pwd`/${parfileNoExt}.exp    #absolute working dir to explode the par file into

rm -fR $expDir

#unzip the file to the working dir
unzip $parfile -d $expDir

for file in `ls $expDir/*.par`
do
     dir=`echo $file | awk -F. '{print $1}'`
     unzip -q $file -d $dir
done

# parfile UBEVER_R57000035_PP0001_60_99
#   F983051.xml
#   F983052.xml
#   manifest.xml
#   specs.zip
#    R5700036.PP0002.

#now lets extract the specs zip file for each
find $expDir -name specs.zip -execdir unzip -q \{} \;

#now delete par files and all else
find $expDir -name '*.par' -exec rm \{} \;
find $expDir -name specs.zip -exec rm \{} \;

# now we need to rename directories
if [ $_debug = 1 ]
then
   echo "RENAME DIRS"
fi

cd $expDir
for dir in `ls -d *${fromString}*`
do
   echo $dir
   newname=`echo $dir | sed s/_${fromString}/_${toString}/g`
   newname=`basename "$newname"`
   echo $newname
   cd $expDir
   mv $dir $newname
done

#rename files, generally in the spec dir
#holy crap, that took a long time to encase this with double quotes so as not to lose the
#dodgey versions
if [ $_debug = 1 ]
then
   echo "RENAME FILES"
fi

find $expDir -name "*${fromString}*.xml" -type f | while read file; do
     newfile=`basename "$file"`
     newfile=`echo "$newfile" | sed s/${fromString}/${toString}/2`
     currDir=`dirname "$file"`
     mv "$file" "$currDir/$newfile"
     if [ $? -ne 0 ]
     then
         echo "MOVE ERROR " "FILEFROM:$file:" "FILETO:$currDir/$newfile:"
         sleep 10
     fi
done

if [ $_debug = 1 ]
then
   echo "PROCESSING manifest.xml"
fi

#This is ridiculous - I need to convert manifest.xml
#from utf-16 to utf-8 and sed and then back again
# this is killing me
for file in `find $expDir -name manifest.xml -print`
do
   echo $file
   iconv -f utf-16 -t utf-8 $file | sed s/${fromString}0/${toString}0/g > $file.utf8
   iconv -f utf-8 -t utf-16 $file.utf8 > $file
   rm $file.utf8
done

#okay, now for the contents of the files
set -x
grep -r -l "${fromString}" $expDir | while read file; do
     newfile="${file}.new"
     echo $file "contains $fromString"
     cat "${file}" | sed s/${fromString}/${toString}/g > "${newfile}"
     #note that if you need to compare the internals of the files
     #comment out the following lines.
     rm "$file"
     mv "$newfile" "$file"
     if [ $? -ne 0 ]
     then
         echo "MOVE ERROR " "FROMFILE:$newfile:" "TOFILE:$file:"
         sleep 10
     fi
done

#Need to decode the base64 PO string and replace fromString there too
#find the F983051's
#create a variable for PODATA, check for PP
echo "Processing F983051 VRPODATA"
for file in `find $expDir -name F983051.xml -print`
do
   base64String=`cat $file | tr -s "\n" "@" | xmlstarlet sel -t -v "table/row/col[@name='VRPODATA']" | base64 -d`
   charCount=`echo $base64String | wc -c`
   if [ $charCount -gt 1 ]
   then
     base64String=`echo $base64String | sed s/${fromString}/${toString}/g`
     echo 'changed string:' $base64String
     base64String=`echo $base64String | base64`
     xmlstarlet ed --inplace -u "table/row/col[@name='VRPODATA']" -v $base64String $file
   fi
done

#need to zip everything back up
if [ $_debug = 1 ]
then
   echo "Creating spec.zip"
fi

for dir in `ls -d $expDir/*`
do
     cd $dir
     zip -r specs.zip ./RDASPEC
     zip -r specs.zip ./RDATEXT
     rm -fr ./RDASPEC ./RDATEXT
done

#now we create all of the par files from the dirs under expDir
for dir in `ls -d $expDir/*`
do
     cd $expDir
     zip -r ${dir}.par `basename $dir`
     rm -rf $dir
done

#now the root parfile
cd $expDir
rm -rf ../${parfile}.zip
zip -r ../${parfile}.zip *
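The heart of all that renaming is a single sed rule; applied to a hypothetical version name it looks like this:

```shell
# Swap the "from" string for the "to" string in an object name
# (the version name here is made up for illustration).
echo "UBEVER_R57000035_TT0001_60_99" | sed 's/_TT/_GB/g'
```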

Friday, 1 September 2017

JD Edwards and microservice-based integrations

The cloud is changing our approaches to everything, and so it should.  It gives us so many modern and flexible constructs which can enable faster innovation and agility and deliver value to the business faster.

You can see from my slide below that we should be advocating strategic integrations in our organisations, shown below as a microservice layer.  This single layer gives a consistent, "write the code once" interface for exposing JD Edwards to BOB (Best of Breed) systems.  It also allows generic consumption and exposure of web services, where you do not have to write a lot of JD Edwards code or take on too much technical debt.

If you look at the below, we are exposing an open method of communicating with our “monolithic” and potentially “on prem” services.  This microservice layer can actually be in the cloud (and I would recommend this).  You could choose to use a middleware to expose this layer, or generic pub/sub techniques that are provided to you by all of the standard public cloud providers.


Looking at a little more detail in the below diagram for JDE shows you the modern JDE techniques for achieving this.  You'd wrap AIS calls as STANDARD interactions with standard forms.  Just as BSSV was created to "AddSalesOrder", the same could be done in a microservice, which would be responsible for calling the standard and specific screens in JDE via AIS.  You are therefore abstracting yourself from the AIS layer.  If you needed to augment that canonical with information from another system, you are not getting too invested in JDE – it's all in your microservice layer.
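As a sketch of one such microservice-to-AIS interaction, here is the shape of an AIS form service request being built and (hypothetically) posted.  The host, form name and version are assumptions for illustration, and the call itself is commented out because it needs a live AIS server:

```shell
# Hypothetical microservice step: build and post an AIS form service request.
req=$(printf '{"formName":"%s","version":"%s"}' "P01012_W01012B" "ZJDE0001")
echo "$req"
# curl -s -X POST "https://ais.example.com/jderest/formservice" \
#   -H 'Content-Type: application/json' -d "$req"
```

The microservice owns this payload, so the BOB systems calling it never see AIS at all.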

This also gives you the added benefit of being able to rip and replace any of the pieces of the design, as you’ve created a layer of abstraction for all of your systems – Nice.  Bring on best of breed.

The other cool thing about an approach like this is that you can start to amalgamate your “SaaS silos” which is the modern equivalent of “disconnected data silos”.  If your business is subscribing to SaaS services, you have a standardised approach of being able to get organisational wide benefit from the subscription.

Outbound from JDE, you can see that we are using RTEs.  These might go directly to an AWS SQS queue, or to a Google Cloud Pub/Sub topic or Microsoft Azure cloud services – all could queue these messages.  The beauty of this is that the integration points already exist in JDE as RTEs; you just need to point these queues (or the TXN server) at your middleware or cloud pub/sub service for reliable and fault-tolerant delivery.  You can then have as many microservices as you like subscribe to these messages and perform specific, independent tasks based upon the information coming in.


Wow, JDE has done a great job of letting you innovate at the speed of cloud by giving you some really cool integration methods.  There is nothing stopping you plugging IoT, mobility, integrations, websites, forms and more into JD Edwards simply and easily – while still keeping an extremely robust and secure ERP that ensures master data management and a single source of truth.

This model works on prem, hybrid or complete cloud.

Tuesday, 22 August 2017

Demo of Eddie the JD Edwards bot

I wrote a blog entry about this earlier, but there have been some great advancements since my initial post.  One of the main ones is that the Google Assistant has been released for generic Android, as opposed to only being available on the Pixel.  This is really neat, as we want to use the power of contexts, which was only really available when using the Google Assistant.


You can see from the above that I’m able to chat with the google assistant by simply saying “hello google” to my phone.

Previously I’d get the following interface


So, now we can ask Google to talk to our bot and then begin to give it commands that we've defined with api.ai.

api.ai is then able to turn those commands, contexts and intents into JD Edwards AIS calls using #LAMBDA

From there we are able to then instruct api.ai to give us verbal responses.
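Under the covers, the Lambda function's reply to api.ai is just JSON.  A minimal sketch of the (v1-era) webhook response shape, with a made-up message:

```shell
# Hypothetical api.ai webhook reply so the assistant speaks the JDE result.
printf '{"speech":"%s","displayText":"%s"}\n' \
  "Purchase order approved" "Purchase order approved"
```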

Note also that we are able to natively activate this chat in a number of other integration points, one-click.  So you want to activate chat with JDE using twitter, facebook messenger, slack?  Easy!


Imagine being able to open up some limited customer-service "bot" actions for any of your JD Edwards users.  You could do simple things like:

  • enter timesheets with voice
  • check on order statuses
  • approve POs (of course)
  • enter meter readings (we are doing this).

See a little video below of approving POs.

Thursday, 17 August 2017

Continuous delivery–the journey–some neat tools to assist you

The continuous delivery journey is pretty exciting, but we need to all embrace it to get the most out of it.

Planning your adoption of continuous delivery is really important too: making sure firstly that you are on 9.2, then setting an update cadence that you are happy with, and a schedule that you are going to stick to (for applying ESUs and tools releases).

You need new tools and new thinking to enable this journey, here are a couple that I’m recommending:

ERP analytics

  • Understand your ERP usage better – the programs that you are using and, more importantly, the modifications that you are making.  Subscribe to ERP Analytics from Fusion5 to really understand your users and modifications, and use this data regularly to understand the retrofit and impact analysis of each new round of continuous delivery.


Screenshot above from ERP analytics showing you what applications are being used.

We can see the applications that are being used; we can download this to Excel and cross-reference it with the JD Edwards manifests from the ESUs.  Note that we could use something like Power BI to read this in "real time" – actually read in the ESU release notes, or farm the JDE tables for the objects that are affected – and produce a really neat Power BI dashboard of ACTUAL IMPACT, which allows you to streamline your continuous deployment!

  • Seeing the differences in the code.  Wow, have I got something cool for you.  Imagine that you wanted to see the actual form-control differences in your code between environments?  We've written some pretty nifty code to allow you to do this VERY easily.
    • So, you can see that the output from the above is all of the programs in JDE that you use – simple – export a spreadsheet
    • Now extract all of the changed objects in the ESUs that you want to apply (from impact analysis) – easy!
    • Cross-reference the above so you know what has changed and what has not
    • Now run our code to give you all of the changes between the objects in DV920 vs. PY920
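The cross-reference step in the middle can be as simple as a `comm` between two sorted object lists (the object names and file names here are invented for illustration):

```shell
# Objects your users actually run vs objects the ESU changes.
printf 'P01012\nP4210\nP4310\n' > used.txt    # from ERP analytics export
printf 'P17714\nP4210\n' > esu.txt            # from ESU impact analysis
# Lines common to both files = changed objects you actually use.
comm -12 used.txt esu.txt
```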

What does this magic do?


Basically, you can provide a number of AIS endpoints and this code will spit out ALL of the differences in the objects that you select, in CSV (or JSON) format.

For example: two different AIS sites were compared, and for form W1701A (control number 571) we can see that it exists in site 2 and not in site 1.  We can see that it is Parent Number, data type 9, visible, etc.  Cool?  Yes – this is very cool!

• Think of a comparison for user security, evaluating whether users can see forms or not (editable).  So really – think about this.  You could run this program as a series of different roles and users and actually determine what the users would see on the forms!

• Think of identifying modifications a little better.

• Think of comparing environments that have had ESUs applied.

• Different DDs, vocab overrides and MORE!

We can feed this software (a command-line executable) the results of ERP Analytics to only look at the objects that have changed for a client – homing in on exactly what they need to know to support CD.

I’m sure you might be able to think of other uses, but if you want a copy of a demo – please reach out.

All clients need is an AIS server and we are away.  We can bring an AIS server in (as a VM) if needed and run it off that too.

To get it running:

  • restore the package from the zip file
  • install node from https://nodejs.org/en/download/
  • go to the restore dir with a command prompt
  • run npm i to install the dependencies

C:\temp\compareStuff>node app --help

  Usage: app [options] [command]


    compare <formName> [additionalFormNames...]  Compare two different forms.

    Example compare P4210_W4210A --format csv --out P4210_W4210A.csv


    -h, --help         output usage information

    -V, --version      output the version number

    --format <format>  Specify the output format. Allowed: "csv,json" (default: json)

    --out <file>       Write to a file instead of the command line.

Config is in



Sample csv output is above.

So those are a couple of pretty neat productivity ideas that will get you closer to continuous deployment of Oracle's continuous delivery.

Thursday, 10 August 2017


Do you ever find yourself wishing that there was a cool mobile application out there that you could plug into JD Edwards – something useful and free?  Wait no more, your time is here!

myTagThat is a mobile application that hooks into JD Edwards via AIS.

This is Android-ready and shows how cool the Fusion5 applications are – interactive and useful.

You can search for assets from the main screen


See myTagThat in the top right


See that we also allow for scanning bar codes – making finding your assets easy


Note that is a book I’m reading – but it still scans!

This will use P1701 through AIS to find an asset with that equipment number.

You have some basic search criteria on the screen below



Note that you have the ability to map all of the results returned!



In JDE, this is how things start to look:

It authenticates with JDE authentication and allows you to search for any of your fixed assets (are you using CAM?) 

In JDE, P1701


Find your asset as above and then go to Row –> Locations –> Address Book –> Inquiry


You can see that the mobile application creates a new current record and makes the older ones historical


Wednesday, 2 August 2017

JD Edwards web forms–data entry made easy

Some things in JDE could be made easier.  Some things in JDE should be made easier.

What if you were only entering requisitions?  What if you were only entering HS&E issues?  JD Edwards can be hard to configure and get working for these simple use cases.

You can take a look at our JavaScript, AWS-hosted solution that synchronises data with JD Edwards using AIS.  We cache all of the data locally in AWS and refresh it on a schedule.  We host the website with AWS, and when the submit button is clicked the transaction goes to a queue for entry into JD Edwards.  We reserve some next numbers so that you can find your transaction in JDE.



Now that is a good looking form, compared with:



With this technology we can open up this easy conduit into JD Edwards quickly and easily.  It is hosted too, so you don't need to worry about "punching a hole in the firewall" – we ensure that the backend code connects to your AIS over a VPN and enters the transaction into JDE.

Monday, 31 July 2017

Cannot determine database driver name for driver type "O"

This is quite a specific post.

I've been doing a lot of performance testing lately.  One of the tests that I've been using to extract a little more performance out of JDE is to look at Oracle Database 12c.

I've been testing many permutations and combinations of client and server, but I started to get the error below (after installing JDE on a new enterprise server built from an existing template).

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024551      jdb_drvm.c460
         JDB9900436 - Cannot determine database driver name for driver type "O"

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024587      jdb_omp1.c1928
         JDB9900254 - Failed to initialize driver.

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024603      jtp_cm.c209
         JDB9909002 - Could not init connect.

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024617      jtp_tm.c1140
         JDB9909100 - Get connect info failed: Transaction ID =

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024630      jdb_rq1.c2452
         JDB3100013 - Failed to get connectinfo

This is on an older tools release (EnterpriseOne – see ptf.txt in $SYSTEM/bin32).

I tried 100 different things involving my environment variables, .profile and .bash_profile.  I messed around with a ton of things, but then thought – wait.  This is an older tools release, and I actually put it down on an existing enterprise server with the Oracle Database 12c client (32-bit).  And this client did not exist when this tools release came out (well, it was not supported).

It turns out that the error above occurs because JDE wants to load certain libraries from the Oracle client dir, and it cannot do this from a 12c client.

To get around this, I just installed a copy of the older Oracle client and hooked it up to JD Edwards.  As soon as I did this – voilà! – the enterprise server was working perfectly.  Note also that this client is talking to a 12c database, as there is backward compatibility between client and server.

Another slightly interesting thing here is that all I did was tar up the client dir from another machine and untar it on this one – no installers (because I had no graphical interface for the Oracle installer, and I could not be bothered fighting the responseFile for the next 3 months).  As soon as I sprayed out the dir on the Linux machine, it all just worked!  Just remember that this is a POC machine, so don't stress – I will not run a production environment like this.  It's just good to know.
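That copy amounts to nothing more than tar and untar with `-C`.  Demonstrated here on a scratch directory, since the real paths would be the Oracle client home on each machine:

```shell
# Tar up a directory tree on one machine and spray it out on another
# (simulated locally; the /tmp paths stand in for the client home).
mkdir -p /tmp/src/client32/bin
echo "placeholder" > /tmp/src/client32/bin/sqlplus
tar -czf /tmp/client32.tar.gz -C /tmp/src client32    # on the source machine
mkdir -p /tmp/dest
tar -xzf /tmp/client32.tar.gz -C /tmp/dest            # on the target machine
ls /tmp/dest/client32/bin
```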

At the end of the day, I used a template AWS VM (that was built for another purpose), unzipped the enterprise server install dir (e900) and the Oracle client, updated tnsnames.ora, and the machine just WORKS.

Complete enterprise server in less than 2 hours?  Don’t mind if I do!

Saturday, 29 July 2017

Slightly interesting… How big could my data get?

Forget the custom tables: if you have 13,844,608 rows in your sales history table (F42119), then in Oracle that is going to be about 34GB, so we are talking about roughly 2.4GB per million rows.

This is handy, simple maths for working out data growth and what that might mean to you.  That F0911 is a classic!  306GB for 254 million rows.

. . imported "CRPDTA"."F42119"                           33.72 GB 13844608 rows
. . imported "CRPDTA"."F04572OW"                         11.74 GB 4133179 rows
. . imported "CRPDTA"."F4111"                            139.9 GB 165666358 rows
. . imported "CRPDTA"."F4101Z1"                          7.578 GB 3731963 rows
. . imported "CRPDTA"."F3111"                            71.49 GB 85907305 rows
. . imported "CRPDTA"."F3102"                            20.75 GB 61873382 rows
. . imported "CRPDTA"."F47012"                           2.870 GB 2013678 rows
. . imported "CRPDTA"."F4074"                            70.35 GB 108624053 rows
. . imported "CRPDTA"."F56105"                           12.09 GB 43949816 rows
. . imported "CRPDTA"."F47003"                           19.27 GB 48747556 rows
. . imported "CRPDTA"."F43199"                           22.07 GB 11543632 rows
. . imported "CRPDTA"."F03B11"                           12.09 GB 10304061 rows
. . imported "CRPDTA"."F5646"                            8.429 GB 27358198 rows
. . imported "CRPDTA"."F4211"                            1.155 GB  478334 rows
. . imported "CRPDTA"."F6402"                            6.649 GB 43428152 rows
. . imported "CRPDTA"."F4105"                            20.18 GB 56430529 rows
. . imported "CRPDTA"."F0911"                            306.0 GB 254224657 rows
. . imported "CRPDTA"."F4006"                            3.686 GB 5964283 rows
. . imported "CRPDTA"."F47047"                           14.19 GB 6007023 rows
. . imported "CRPDTA"."F43121"                           28.40 GB 17036864 rows
. . imported "CRPDTA"."F03B13"                           1.600 GB 1842917 rows
. . imported "CRPDTA"."F1632"                            1.647 GB 5088270 rows
. . imported "CRPDTA"."F47036"                           1.628 GB 1808215 rows
. . imported "CRPDTA"."F57205"                           1.630 GB 4249200 rows
. . imported "CRPDTA"."F470371"                          15.32 GB 5804643 rows
. . imported "CRPDTA"."F6411"                            1.625 GB 6558558 rows
. . imported "CRPDTA"."F6412"                            1.605 GB 9334567 rows
. . imported "CRPDTA"."F4079"                            1.532 GB 6868444 rows
. . imported "CRPDTA"."F42420"                           1.514 GB 1392630 rows
. . imported "CRPDTA"."F0101Z2"                          1.331 GB  683139 rows
. . imported "CRPDTA"."F4311"                            13.42 GB 6834435 rows
. . imported "CRPDTA"."F3460"                            1.490 GB 6548503 rows
. . imported "CRPDTA"."F5763"                            3.708 GB 10283632 rows
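The rule-of-thumb maths above is easy to sketch; here is a quick awk helper, with the figures taken from the import log (F42119 and F0911):

```shell
# Storage rate from a Data Pump import line: GB divided by millions of rows.
rate() { awk -v gb="$1" -v rows="$2" 'BEGIN { printf "%.2f GB per million rows\n", gb / (rows / 1000000) }'; }

rate 33.72 13844608    # F42119 - about 2.4GB per million rows
rate 306.0 254224657   # F0911  - about 1.2GB per million rows
```

Multiply the rate by your projected yearly row growth and you have a quick tablespace sizing estimate.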

Friday, 21 July 2017

Generate missing indexes in 1 environment from another–oracle

I’ve done a heap of tuning in production, created a bunch of indexes and I’m pretty happy with how it looks.  Remember that you only need to create the indexes in the database if they are for tuning – they don’t need to be added to the table specs in JDE.

So, how do I easily generate all of the DDL for these indexes and create them in other locations?

I’ll generate the create index statements while reconciling the two environments:

select 'SELECT DBMS_METADATA.GET_DDL(''INDEX'',''' || index_name || ''',''' || owner || ''') || '';'' FROM dual;'
from all_indexes t1
where t1.owner = 'CRPDTA'
and not exists (select 1 from all_indexes t2
                where t2.owner = 'TESTDTA'
                and t1.index_name = t2.index_name);

Which will give you a bunch of results like this:


So whack some SQL*Plus settings on this to trim the output:

set heading off
set feedback off
set long 99999
set pages 0
set lines 1000
set wrap on

And use the run script button in SQL Developer:


You’ll get a pile of output like this:



You can then change the tablespace and owner information and run in your other environments.
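For repeatability, the whole thing can be scripted; a sketch only, untested against a live database – the sqlplus connection string is a placeholder you would fill in:

```shell
# Build a driver script that spools the missing-index DDL to a file.
cat > gen_missing_idx.sql <<'EOF'
set heading off feedback off long 99999 pages 0 lines 1000 wrap on
spool missing_indexes.sql
select 'SELECT DBMS_METADATA.GET_DDL(''INDEX'',''' || index_name || ''',''' || owner || ''') || '';'' FROM dual;'
from all_indexes t1
where t1.owner = 'CRPDTA'
and not exists (select 1 from all_indexes t2
                where t2.owner = 'TESTDTA'
                and t1.index_name = t2.index_name);
spool off
EOF
# Run it against the source database (placeholder credentials):
# sqlplus -s jde/password@JDEPROD @gen_missing_idx.sql
```

missing_indexes.sql then holds the second-stage GET_DDL calls, which you run the same way to get the actual CREATE INDEX statements.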

Thursday, 20 July 2017

want to know more about ASM on the ODA?

Here are a couple of handy commands, especially if you are on an ODA

As root, you can see what space is being used by what database:

[root@sodax6-1 datastore]# oakcli show dbstorage

All the DBs with DB TYPE as non-CDB share the same volumes

DB_NAMES           DB_TYPE    Filesystem                                        Size     Used    Available    AutoExtend Size  DiskGroup
-------            -------    ------------                                    ------    -----    ---------   ----------------   --------
JDEPROD, JDETEST   non-CDB    /u01/app/oracle/oradata/datastore                   31G    16.26G      14.74G              3G        REDO
                               /u02/app/oracle/oradata/datastore                 4496G  4346.01G     149.99G            102G        DATA
                               /u01/app/oracle/fast_recovery_area/datastore      1370G   761.84G     608.16G             36G        RECO

Of course, this is what ACFS thinks:

[grid@sodax6-1 ~]$ df -k
Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/xvda2             57191708   14193400   40093068  27% /
tmpfs                 264586120    1246300  263339820   1% /dev/shm
/dev/xvda1               471012      35731     410961   8% /boot
/dev/xvdb1             96119564   50087440   41149436  55% /u01
/dev/asm/testing-216 1048576000  601013732  447562268  58% /u01/app/sharedrepo/testing
                        32505856   17050504   15455352  53% /u01/app/oracle/oradata/datastore
/dev/asm/acfsvol-49    52428800     194884   52233916   1% /cloudfs
                      1436549120  798850156  637698964  56% /u01/app/oracle/fast_recovery_area/datastore
                      4194304000 1575520568 2618783432  38% /u01/app/sharedrepo/testing2
                      4714397696 4661989408   52408288  99% /u02/app/oracle/oradata/datastore

Now, you might want to take a look at what ASM thinks about this

[grid@sodax6-1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512   4096  4194304  19660800   198252           983040         -392394              0             Y  DATA/
MOUNTED  NORMAL  N         512   4096  4194304   3230720   321792           161536           80128              0             N  RECO/
MOUNTED  HIGH    N         512   4096  4194304    762880   667144           381440           95234              0             N  REDO/
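Those Usable_file_MB figures aren't magic – for NORMAL redundancy ASM reports (Free_MB - Req_mir_free_MB) / 2, and for HIGH it divides by 3. A quick check against the output above:

```shell
# Verify lsdg's usable-space arithmetic: the mirror factor is 2 for
# NORMAL redundancy and 3 for HIGH.
usable() { awk -v free="$1" -v req="$2" -v mir="$3" 'BEGIN { printf "%d\n", (free - req) / mir }'; }

usable 198252 983040 2   # DATA (NORMAL) -> -392394
usable 321792 161536 2   # RECO (NORMAL) -> 80128
usable 667144 381440 3   # REDO (HIGH)   -> 95234
```

The negative DATA figure means there is not enough free space left to restore full redundancy after losing a disk – which is exactly why reclaiming space from the repos below matters.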

A bit more detail, thanks:

[grid@sodax6-1 ~]$ asmcmd volinfo -G DATA -a
Diskgroup Name: DATA

     Volume Name: DATASTORE
      Volume Device: /dev/asm/datastore-216
      State: ENABLED
      Size (MB): 4603904
      Resize Unit (MB): 64
      Redundancy: MIRROR
      Stripe Columns: 8
      Stripe Width (K): 1024
      Usage: ACFS
      Mountpath: /u02/app/oracle/oradata/datastore
      Volume Name: TESTING
      Volume Device: /dev/asm/testing-216
      State: ENABLED
      Size (MB): 1024000
      Resize Unit (MB): 64
      Redundancy: MIRROR
      Stripe Columns: 8
      Stripe Width (K): 1024
      Usage: ACFS
      Mountpath: /u01/app/sharedrepo/testing
      Volume Name: TESTING2
      Volume Device: /dev/asm/testing2-216
      State: ENABLED
      Size (MB): 4096000
      Resize Unit (MB): 64
      Redundancy: MIRROR
      Stripe Columns: 8
      Stripe Width (K): 1024
      Usage: ACFS
      Mountpath: /u01/app/sharedrepo/testing2

So now, I want to resize, as I’ve made my repo TESTING2 too big and I need some more space in my DATASTORE – so…

[grid@sodax6-1 ~]$ acfsutil size -1T /u01/app/sharedrepo/testing2
acfsutil size: new file system size: 3195455668224 (3047424MB)

and you can see that ACFS actually uses the “AutoExtend Size” increment to add to the filesystem when it’s low:

DB_NAMES           DB_TYPE    Filesystem                                        Size     Used    Available    AutoExtend Size  DiskGroup
-------            -------    ------------                                    ------    -----    ---------   ----------------   --------
JDEPROD, JDETEST   non-CDB    /u01/app/oracle/oradata/datastore                   31G    16.26G      14.74G              3G        REDO
                               /u02/app/oracle/oradata/datastore                 4598G  4446.22G     151.78G            102G        DATA
                               /u01/app/oracle/fast_recovery_area/datastore      1370G   761.84G     608.16G             36G        RECO

In my example it’ll add 102GB when low.  So before I resized the /TESTING2 repo, things looked like this:

                      4714397696 4661989408   52408288  99% /u02/app/oracle/oradata/datastore

After resizing

                      4821352448 4662201568  159150880  97% /u02/app/oracle/oradata/datastore

So it’s seen that there is some free space (the 1TB I stole) and has given this back to the data area.
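The before/after df figures line up with that 102G AutoExtend Size exactly – the volume grew by one auto-extend increment:

```shell
# Difference between the two df -k readings for the datastore volume,
# converted from 1K-blocks to GB.
awk 'BEGIN {
    before = 4714397696   # 1K-blocks before the resize
    after  = 4821352448   # 1K-blocks after the resize
    printf "grew by %.0f GB\n", (after - before) / (1024 * 1024)
}'
```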

Note that I could have done this with oakcli resize repo (but I did not know that at the time).

Wednesday, 19 July 2017

EM express for 12c

This is cool, no more emctl start dbconsole

I went snooping around for emctl and did not find one under 12c.

I googled and found this gem: http://www.oracle.com/technetwork/database/manageability/emx-intro-1965965.html#A1 – this is probably all you need, but I needed more.

When I followed the steps, my browsers got security errors.  Interestingly, only an https port came up, not http:


Secure Connection Failed

The connection to sodax6-1.oda.aus.osc:5500 was interrupted while the page was loading.

    The page you are trying to view cannot be shown because the authenticity of the received data could not be verified.
     Please contact the website owners to inform them of this problem.

I checked the ports that were open and found that http was not.

SQL> select dbms_xdb.getHttpPort() from dual;


SQL> select dbms_xdb_config.getHttpsPort() from dual;


So I ran the below:
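A sketch of what that typically looks like – enabling the HTTP listener port via DBMS_XDB_CONFIG from sqlplus; port 8080 is an assumption, pick whatever suits your environment:

```shell
# Write the enable-HTTP script; it would be run as SYSDBA on the instance.
cat > enable_http.sql <<'EOF'
EXEC DBMS_XDB_CONFIG.SETHTTPPORT(8080);
EOF
# sqlplus / as sysdba @enable_http.sql
```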


Then was able to login


ODA goodness–has this thing started to win me over

If you know about ODAs, you probably know why I like the X6 about 100000 times more than anything before it – it all comes down to IOPS.  If you want more than 1500 IOPS consistently and you have a very large database, then you might want to move on from the X5.  The X5 does have some cool stuff to mitigate it (it being the lack of IOPS), but at the end of the day there is limited flash to get that slow SAS data closer to the CPU.

But, the X6 is very fast and very nice and very FLASH

One thing I needed to do was quickly test a 12c database version, and this can be done with a single command [need to be honest here: there is NO graphical interface native on the ODA, so you need to start getting very familiar with oakcli commands].  This has escalated my confidence, though – I’ve started writing ksh scripts and automating everything I need on this machine.

Take a look at the above, 1 oakcli command and we are upgrading to 12C, both RAC nodes – everything.

That is cool!  (PS. I know that I can also do this in AWS RDS – and that is a click – so I guess this is just okay)…

There is no progress indicator, just “It will take a few minutes”.

A little “extra for experts”: do not modify the .bash_profile for oracle on the oda_base.  I had it prompting me for which oracle home I wanted, and this was breaking a bunch of commands – what a dope I am.

I might make another post in 1 hour when this has broken and I’m picking up the pieces…