Wednesday, 6 September 2017

Bulk version change tips and tricks

Ever needed to create a lot of versions as copies of others?  Ever needed to create versions and also change their data selections and processing options?  Say, for example, you opened another DC and wanted to copy all of the configuration from the previous one – reuse all of your IP.  Well, do I have some good news for you – it can be done.

The first step to getting this working is brainstorming, and one of my favourite people to brainstorm with is Shae.  We can have quick, single-syllable conversations, grunt a bit – but at the end of the day articulate an elegant solution to some amazing technical problems.  I’m very lucky to have peers like this to work with.  Shae came up with the idea of using a par file to complete this task – and that was a great idea!  I can easily create a project with SQL and populate it with all of the versions I need to copy.  I can also create all of the F983051 records and central objects to create the base objects, but I’d need to use load-testing tools or scripts to change all of the POs and data selections.

Shae’s idea to use the par file was great – it seemed possible.  The client in question has about 500 versions, all for a particular DC, and I needed to change names, POs and data selections based upon the new naming – okay, challenge accepted.

There are heaps of ways of doing this – Java, node.js, Lambda, VBScript – but I went old school: a little bit of sed and awk.

I basically took the par file, sftp’d it to Linux and then ripped it apart.

The structure was not too crazy to deal with, although it did feel like Russian dolls, where there was a zip file in a zip file in a zip file.

There were also some pretty funky things in there, like UTF-16 files in the middle instead of normal ones, and base64-encoded strings for the POs – but nothing was going to stop me.

What I’m going to do is just paste the script here; you’ll get the idea of what needed to be done from the sections and the amazing comments.

In my example the version names, POs and data selections all changed from the string TT to the string GB – so it was really easy to apply these rules throughout the script.

At the end of the day, this created a separate par file that you can restore to a project with all of the new versions in it!  Really nice.

There is a tiny bit of error handling and other niceties – but really this is just showing you what can be done.

Imagine if you needed to change the queue on hundreds of versions, or anything like this.  You could use some of the logic below to get it done (or be nice to me).
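As a taste of that, a bulk queue change could look something like this hypothetical sketch – the function name and queue codes are made up, and it assumes the queue appears as plain text in the extracted F983051.xml (the PO data itself is base64 encoded, which the full script handles separately):

```shell
# Hypothetical sketch: swap the job queue name in every extracted F983051.xml
# under an exploded par file directory.
changeQueue() {
  expDir=$1; fromQueue=$2; toQueue=$3
  find "$expDir" -name F983051.xml -type f | while read -r file; do
    # write to a temp file first, then swap it in
    sed "s/${fromQueue}/${toQueue}/g" "$file" > "${file}.new" && mv "${file}.new" "$file"
  done
}
```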


#!/bin/bash
if [ $# -ne 3 ]
   then
     echo "USAGE: $0 <parfile> <FROM STRING> <TO STRING>"
     exit 1
fi

_debug=1
workingDir=/tmp
parfile=$1
fromString=$2
toString=$3
parfileNoExt=`echo $parfile | awk -F. '{print $1}'`
expDir=$workingDir/$parfileNoExt

rm -fR $expDir

#unzip the file to the working dir
unzip $parfile -d $expDir

for file in "$expDir"/*.par
   do
     #echo "$file"
     dir=${file%.par}
     #echo "$dir"
     unzip -q "$file" -d "$dir"
done

#parfile
# parfile UBEVER_R57000035_PP0001_60_99
#   F983051.xml
#   F983052.xml
#   manifest.xml
#   specs.zip
#   RDASPEC
#    R5700036.PP0002.1.0.0.0.0.xml
#

#now lets extract the specs zip file for each

find $expDir -name specs.zip -execdir unzip -q \{} \;

#now delete par files and all else

find $expDir -name '*.par' -exec rm \{} \;
find $expDir -name specs.zip -exec rm \{} \;

# now we need to rename directories
if [ $_debug = 1 ]
then
   echo "RENAME DIRS"
fi

cd "$expDir"
for dir in *"${fromString}"*
do
   echo "$dir"
   newname=`echo "$dir" | sed s/_${fromString}/_${toString}/g`
   echo "$newname"
   mv "$dir" "$newname"
done

#rename files, generally in the spec dir
#for file in `find $expDir -name "*${fromString}*.xml" -type f -print`
#holy crap, that took a long time to encase this with double quotes so as not to lose the
#dodgey versions
if [ $_debug = 1 ]
then
   echo "RENAME FILES"
fi

find $expDir -name "*${fromString}*.xml" -type f |while read file; do
     newfile=`basename "$file"`
     newfile=`echo "$newfile" | sed s/${fromString}/${toString}/2`
     currDir=`dirname "$file"`
     mv "$file" "$currDir/$newfile"
     if [ $? -ne 0 ]
       then
         echo "MOVE ERROR " "FILEFROM:$file:" "FILETO:$currDir/$newfile:"
         sleep 10
         exit
     fi
done

#filelist="`find $expDir -name "*${fromString}*.xml" -type f -print`"
#echo $filelist
#for file in $filelist
   #do
     #newfile=`basename "$file"`
     #newfile=`echo "$newfile" | sed s/${fromString}/${toString}/g`
     #currDir=`dirname "$file"`
     #mv "$file" "$currDir/$newfile"
     #if [ $? -ne 0 ]
       #then
         #echo "MOVE ERROR " "FILEFROM:$file:" "FILETO:$currDir/$newfile:"
         #sleep 10
         #exit
     #fi
#done

if [ $_debug = 1 ]
then
   echo "SED CONTENTS OF FILES AND CREATE .NEW"
fi

#This is ridiculous - I need to convert manifest.xml
#from utf-16 to utf-8 and grep and then back again
# this is killing me
echo 'CONVERTING MANIFEST.XML'
for file in `find $expDir -name manifest.xml -print`
do
   echo $file
#  iconv -f utf-16 -t utf-8 $file | sed s/${fromString}[0-9]/${toString}/g > $file.utf8
   iconv -f utf-16 -t utf-8 $file | sed s/${fromString}0/${toString}0/g > $file.utf8
   iconv -f utf-8 -t utf-16 $file.utf8 > $file
   rm $file.utf8
done
  
#okay, now for the contents of the files
set -x
grep -r -l "${fromString}" $expDir | while read file; do
#for file in "`grep -R -l ${fromString} $expDir/*`"
#  do
     newfile=`echo "${file}.new"`
     echo $file "contains $fromString"
     cat "${file}" | sed s/${fromString}/${toString}/g > "${newfile}"
     #note that if you need to compare the internals of the files
     #comment out the following lines.
     rm "$file"
     mv "$newfile" "$file"
     if [ $? -ne 0 ]
       then
         echo "MOVE ERROR " "FROMFILE:$newfile:" "TOFILE:$file:"
         sleep 10
         exit
     fi
done

#Need to decode the base64 PO string and replace fromString there too
#find the F983051's
#create a variable for PODATA, check for PP
echo "Processing F983051 VRPODATA"
for file in `find $expDir -name F983051.xml -print`
do
   base64String=`cat $file | tr -s "\n" "@" | xmlstarlet sel -t -v "table/row/col[@name='VRPODATA']"  | base64 -d`
   charCount=`echo $base64String | wc -c`
   if [ $charCount -gt 1 ]
     then
     base64String=`echo $base64String | sed s/${fromString}/${toString}/g`
     echo 'changed string:' $base64String
     base64String=`echo $base64String | base64`
     xmlstarlet ed --inplace -u "table/row/col[@name='VRPODATA']" -v "$base64String" "$file"
     #just need to run the xmlstarlet ed
   fi
done

# find $expDir -name '*.new' -print | wc -l

#so now we replace the .new with the original
#job done...
#need to zip everything back up

if [ $_debug = 1 ]
then
   echo "Creating spec.zip"
fi

for dir in `ls -d $expDir/*`
   do
     cd $dir
     zip -r specs.zip ./RDASPEC
     zip -r specs.zip ./RDATEXT
     rm -fr ./RDASPEC ./RDATEXT
done

#now we create all of the par files from the dirs under expDir

for dir in `find "$expDir" -mindepth 1 -maxdepth 1 -type d`
   do
     cd $expDir
     zip -r ${dir}.par `basename $dir`
     rm -rf $dir
done

#now the root parfile
cd $expDir
rm -f ../${parfile}.zip
zip -r ../${parfile}.zip *

Friday, 1 September 2017

JD Edwards and microservice based integrations

The cloud is changing our approaches to everything, and so it should.  It gives us so many modern and flexible constructs which can enable faster innovation and agility and deliver value to the business faster.

You can see from my slide below that we should be advocating strategic integrations in our organisations, seen below as a microservice layer.  This single layer gives a consistent “write the code once” interface for exposing JD Edwards to BOB (Best of Breed) systems.  It also allows generic consumption and exposure of web services – without you having to write a lot of JD Edwards code or get into too much technical debt.

If you look at the diagram below, we are exposing an open method of communicating with our “monolithic” and potentially “on-prem” services.  This microservice layer can actually be in the cloud (and I would recommend this).  You could choose to use middleware to expose this layer, or generic pub/sub techniques that are provided by all of the standard public cloud providers.


image


Looking at a little more detail in the diagram below shows you the modern JDE techniques for achieving this.  You’d wrap AIS calls as STANDARD interactions with standard forms.  Just like a BSSV was created to “AddSalesOrder”, the same could be done in a microservice.  This would be responsible for calling the standard and specific screens in JDE via AIS.  You are therefore abstracting yourself from the AIS layer.  If you need to augment that canonical with information from another system, you are not getting too invested in JDE – it’s all in your microservice layer.
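To make that concrete, here is a hedged sketch of what such a microservice’s call into AIS might look like from the command line – the host, credentials and form name are placeholders, and the exact AIS payload shape varies by tools release:

```shell
# Placeholder AIS endpoint -- substitute your own server and environment.
aisHost="https://ais.example.com:9300/jderest"

# Build a minimal formservice request body; "R" asks AIS to read the form.
buildFormRequest() {
  formName=$1; token=$2
  printf '{"token":"%s","formName":"%s","formServiceAction":"R"}' "$token" "$formName"
}

# The microservice would authenticate first, then drive the standard form --
# an "AddSalesOrder" style wrapper over P4210, for example:
#   curl -s "$aisHost/formservice" -H 'Content-Type: application/json' \
#        -d "$(buildFormRequest P4210_W4210A "$token")"
```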

This also gives you the added benefit of being able to rip and replace any of the pieces of the design, as you’ve created a layer of abstraction for all of your systems – Nice.  Bring on best of breed.

The other cool thing about an approach like this is that you can start to amalgamate your “SaaS silos” which is the modern equivalent of “disconnected data silos”.  If your business is subscribing to SaaS services, you have a standardised approach of being able to get organisational wide benefit from the subscription.

Outbound from JDE, you can see that we are using RTEs.  These might go directly to an AWS SQS queue, or to a Google Pub/Sub topic or a Microsoft Azure cloud service – all could queue these messages.  The beauty of this is that the integration points already exist in JDE as RTEs.  You just need to point these queues (or the TXN server) at your middleware or cloud pub/sub service for reliable and fault-tolerant delivery.  You can then have as many microservices as you like subscribe to these messages and perform specific and independent tasks based upon the information coming in.
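For illustration, a subscriber on the AWS side might look like this sketch – the queue URL is a placeholder, and the `<event type="...">` shape in the helper is an assumption about the RTE payload:

```shell
# Placeholder queue that the RTE/TXN server delivers into.
queueUrl="https://sqs.ap-southeast-2.amazonaws.com/123456789012/jde-rte"

# Poll for one message; prints the body and receipt handle for later deletion.
pollOnce() {
  aws sqs receive-message --queue-url "$queueUrl" --max-number-of-messages 1 \
      --query 'Messages[0].[Body,ReceiptHandle]' --output text
}

# Pull the event type out of an RTE payload so each microservice can decide
# whether a message is for it.
eventType() {
  echo "$1" | sed -n 's/.*<event type="\([^"]*\)".*/\1/p'
}
```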


image

Wow, JDE has done a great job of letting you innovate at the speed of cloud by giving you some really cool integration methods.  There is nothing stopping you plugging IoT, mobility, integrations, websites, forms and more into JD Edwards simply and easily – while also giving you an extremely robust and secure ERP ensuring master data management and a single source of truth.

This model works on prem, hybrid or complete cloud.

Tuesday, 22 August 2017

Demo of Eddie the JD Edwards bot

I wrote a blog entry about this earlier, but there have been some great advancements since my initial post.  One of the main ones is that the Google Assistant has been released for generic Android, as opposed to only being available on the Pixel.  This is really neat, as we want to use the power of contexts, which was only really available when using the Google Assistant.

Screenshot_20170822-084614

You can see from the above that I’m able to chat with the google assistant by simply saying “hello google” to my phone.

Previously I’d get the following interface

Screenshot_20170822-085016

So, now we can ask google to talk to our bot and then begin to give it commands that we’ve defined with api.ai.

api.ai is then able to turn those commands, contexts and intentions into JD Edwards AIS calls using #LAMBDA

From there we are able to then instruct api.ai to give us verbal responses.

Note also that we are able to natively activate this chat in a number of other integration points, one-click.  So you want to activate chat with JDE using Twitter, Facebook Messenger or Slack?  Easy!

image

Imagine being able to open up some limited customer-service “bot” actions for any of your JD Edwards users.  You could do simple things like:

  • enter timesheets with voice
  • check on order statuses
  • approve POs (of course)
  • enter meter readings (we are doing this).

See a little video below of approving POs.



Thursday, 17 August 2017

Continuous delivery – the journey – some neat tools to assist you

The continuous delivery journey is pretty exciting, but we need to all embrace it to get the most out of it.

Planning your adoption of continuous delivery is really important too: making sure firstly that you are on 9.2, then setting an update cadence that you are happy with and a schedule that you are going to stick to (a schedule for applying ESUs and tools releases).

You need new tools and new thinking to enable this journey, here are a couple that I’m recommending:

ERP analytics

  • Understand your ERP usage better.  Understand the programs that you are using and, more importantly, the modifications that you are making.  Subscribe to ERP Analytics from Fusion5 to really understand your users and modifications.  Use this data regularly to understand the retrofit and impact analysis of the new round of continuous delivery.

image

Screenshot above from ERP analytics showing you what applications are being used.

We can see the applications that are being used; we can download this to Excel and cross-reference this information with the JD Edwards manifests from the ESUs.  Note that we could use something like Power BI to read this in “realtime” – actually read in the ESU release notes, or farm the JDE tables for what objects are affected, and then produce a really neat Power BI dashboard of ACTUAL IMPACT – which allows you to streamline your continuous deployment!

  • Seeing the differences in the code.  Wow, have I got something cool for you.  Imagine that you wanted to see the actual form-control differences in your code between environments?  We’ve written some pretty nifty code to allow you to do this VERY easily.
    • So, you can see that the output from the above is all of the programs in JDE that you use – simple – export a spreadsheet
    • Now extract all of the changed objects in the ESUs that you want to apply (from impact analysis) – easy!
    • Cross-reference the above so you know what has changed and what has not
    • Now run our code to give you all of the differences between the objects in DV920 vs. PY920
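The cross-reference step above is simple enough to script – a sketch, assuming both inputs are plain lists with one object name per line (the file names are made up):

```shell
# Intersect the objects you actually use (from the ERP Analytics export)
# with the objects an ESU changes (from impact analysis).
impactedObjects() {
  used=$1; changed=$2
  sort -u "$used" -o "$used.sorted"
  sort -u "$changed" -o "$changed.sorted"
  # comm -12 keeps only the lines common to both sorted files
  comm -12 "$used.sorted" "$changed.sorted"
  rm -f "$used.sorted" "$changed.sorted"
}
```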

What does this magic do?

JDEFormCompare

Basically, you can provide a number of AIS endpoints and this code will spit out ALL of the differences in the objects that you select, in CSV (or JSON) format.

For example: two different AIS sites were compared, and for this form W1701A (control number 571) we can see that the control exists in site 2 and not in site 1.  We can see that it is Parent Number, data type 9, visible, etc.  Cool?  Yes – this is very cool!

form

id

property

equal

value0

value1

Different

P1701_W1701A

571

title

FALSE

Null

Parent Number

DIFF

P1701_W1701A

571

presence

FALSE

FALSE

TRUE

DIFF

P1701_W1701A

571

dataType

FALSE

Null

9

DIFF

P1701_W1701A

571

editable

FALSE

Null

TRUE

DIFF

P1701_W1701A

571

longName

FALSE

Null

txtParentNumber_571

DIFF

P1701_W1701A

571

visible

FALSE

Null

TRUE

DIFF

· Think of a comparison for user security, evaluating whether users can see forms or not (visible/editable).  So really, think about this: you could run this program as a series of different roles and users and actually determine what the users would see on the forms!

· Think of identifying modifications a little better.

· Think of comparing environments that have had ESUs applied.

· Different DDs, vocab overrides and MORE!

We can feed this software (a command-line executable) the results of ERP Analytics to only look at the objects that have changed for a client – homing in on exactly what they need to know to support CD.

I’m sure you might be able to think of other uses, but if you want a copy of a demo – please reach out.

All clients need is an AIS server and we are away.  We can bring an AIS server in (as a VM) if needed and run it off that too.

restore the package from the zip file

install node - from here https://nodejs.org/en/download/

go to the restore dir with a command prompt

npm i    (this will install the dependencies)

C:\temp\compareStuff>node app --help

  Usage: app [options] [command]

  Commands:

    compare <formName> [additionalFormNames...]  Compare two different forms.

    Example compare P4210_W4210A --format csv --out P4210_W4210A.csv

  Options:

    -h, --help         output usage information

    -V, --version      output the version number

    --format <format>  Specify the output format. Allowed: "csv,json" (default: json)

    --out <file>       Write to a file instead of the command line.

Config is in

.\config\default.yaml


image

Sample csv output is above.


So that is a couple of pretty neat productivity ideas that will get you closer to continuous deployment of Oracle’s continuous delivery.

Thursday, 10 August 2017

myTagThat


Do you ever find yourself wishing that there was a cool mobile application out there that you could plug into JD Edwards – useful and free?  Wait no more, your time is here!

myTagThat is a mobile application that hooks into JD Edwards via AIS.

It is Android-ready and shows how cool the Fusion5 applications are – interactive and useful.

You can search for assets from the main screen

Screenshot_20170810-182518

See myTagThat in the top right

Screenshot_20170810-182537

See that we also allow for scanning bar codes – making finding your assets easy

Screenshot_20170810-182556

Note that this is a book I’m reading – but it still scans!

This will use P1701 through AIS to find an asset with that equipment number.

You have some basic search criteria on the screen below

Screenshot_20170810-183154

Screenshot_20170810-183200

Note that you have the ability to map all of the results returned!

Screenshot_20170810-183225

Nice!

In JDE, this is how things start to look:

It authenticates with JDE authentication and allows you to search for any of your fixed assets (are you using CAM?) 

In JDE, P1701

image

Find your asset as above and then go to Row -> Locations -> Address Book -> Inquiry

image

You can see that the mobile application creates a new current record and makes the older ones historical

image

Wednesday, 2 August 2017

JD Edwards web forms–data entry made easy

Some things in JDE could be made easier.  Some things in JDE should be made easier.

What if you were only entering requisitions?  What if you were only entering HS&E issues?  JD Edwards can be hard to configure and get working for these simple use cases.

You can take a look at our JavaScript, AWS-hosted solution that synchronises data with JD Edwards using AIS.  We cache all of the data locally in AWS and refresh this on a schedule.  We host the website with AWS, and when the submit button is clicked, the data goes to a queue for entry into JD Edwards.  We reserve some next numbers so that you can find your transaction in JDE.
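As a sketch of that hand-off (the queue URL, payload fields and next-number value are all placeholders – the real shape depends on the form):

```shell
# Placeholder queue between the hosted form and the JDE entry worker.
queueUrl="https://sqs.ap-southeast-2.amazonaws.com/123456789012/hse-submissions"

# Carry the reserved next number with the submission so the user can find
# the resulting transaction in JDE later.
buildSubmission() {
  nextNumber=$1; description=$2
  printf '{"nextNumber":"%s","description":"%s"}' "$nextNumber" "$description"
}

# The worker behind the site would then do something like:
#   submitToQueue "$(buildSubmission 100234 "slip hazard near dock 2")"
submitToQueue() {
  aws sqs send-message --queue-url "$queueUrl" --message-body "$1"
}
```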

https://hsedemo.mye1.com/

image

Now that is a good looking form, compared with:

image

http://e92demo.myriad-it.com:80/jde/ShortcutLauncher?OID=P54HS30_W54HS30C_ZJDE0001

With this technology we can open up an easy conduit into JD Edwards quickly and easily.  It’s hosted too, so you don’t need to worry about “punching a hole in the firewall”.  We can ensure that the backend code connects to your AIS over a VPN and enters the transaction into JDE.

Monday, 31 July 2017

Cannot determine database driver name for driver type "O"

This is quite a specific post.

I’ve been doing a lot of performance testing lately.  One of the tests that I’ve been using to extract a little more performance out of JDE is to look at oracle database 12c. 

I’ve been testing many permutations and combinations of client and server, but I started to get the error below (after installing JDE onto an existing template enterprise server):

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024551      jdb_drvm.c460
         JDB9900436 - Cannot determine database driver name for driver type "O"

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024587      jdb_omp1.c1928
         JDB9900254 - Failed to initialize driver.

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024603      jtp_cm.c209
         JDB9909002 - Could not init connect.

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024617      jtp_tm.c1140
         JDB9909100 - Get connect info failed: Transaction ID =

7045/-170809600 MAIN_THREAD                             Mon Jul 31 09:13:08.024630      jdb_rq1.c2452
         JDB3100013 - Failed to get connectinfo


This is on an older tools release (EnterpriseOne 9.1.3.3, per ptf.txt in $SYSTEM/bin32).

I tried 100 different things involving my environment variables, .profile and .bash_profile.  I messed around with a ton of things, but then thought – wait.  This is a 9.1.3.3 tools release, and I actually put it down on an existing enterprise server with the Oracle Database 12c client (32-bit).  This database did not exist when this tools release came out (well, it was not supported).

It turns out that my error above is because JDE wants to load certain shared libraries from the Oracle client dir, and it cannot do this from a 12c client.
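A quick way to see which client libraries JDE will actually find – a sketch, with typical but not universal paths:

```shell
# Report the Oracle client version from the libclntsh file name under a
# client home (e.g. libclntsh.so.11.1 -> 11.1).
clientVersion() {
  ls "$1"/lib*/libclntsh.so.* 2>/dev/null | sed 's/.*libclntsh\.so\.//' | sort -u
}
# e.g. clientVersion "$ORACLE_HOME"
```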

To get around this, I just installed a new copy of the Oracle client (11.2.0.4) and hooked this up to JD Edwards.  As soon as I did this – voilà! – the enterprise server was working perfectly.  Note also that this 11.2.0.4 client is talking to a 12c database, as older clients remain compatible with newer database servers.

Another slightly interesting thing here is that all I did was tar up the client dir from another machine and untar it on this one – no installers (because I had no graphical interface for the Oracle install, and I also could not be bothered fighting the responseFile for the next 3 months).  As soon as I sprayed out the dir on the Linux machine, it all just worked!  Just remember that this is a POC machine, so don’t stress – I will not run a production environment like this.  It’s just good to know.
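The tar-and-spray approach boils down to something like this (paths are examples; as noted, this is POC-grade, not how you’d build production):

```shell
# Copy a directory tree byte-for-byte under another parent directory,
# preserving the relative layout (same idea as tar-ing up the client home
# on one box and untar-ing it in the same spot on another).
cloneDir() {
  src=$1; destParent=$2
  tar -cf - -C "$(dirname "$src")" "$(basename "$src")" | tar -xf - -C "$destParent"
}
# Across machines it is the same trick with scp in the middle, e.g.:
#   tar -czf client.tar.gz -C /u01/app/oracle product
#   tar -xzf client.tar.gz -C /u01/app/oracle   # on the new machine
```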

At the end of the day, I used a template AWS VM (that was built for another purpose), unzipped the enterprise server install dir (e900) and the Oracle client, updated tnsnames.ora, and the machine just WORKS.

Complete enterprise server in less than 2 hours?  Don’t mind if I do!