About Me

I work for Fusion5 Australia. Connect with me on LinkedIn here. I'm raising money for a good cause at the moment – follow the link to donate to the Leukaemia Foundation.

Friday, 17 November 2017

Movember just went high tech!

I'm doing Movember again this year – men's health is a great cause and I like to do my thing.  I think that the Movember movement is slowing down in Australia.

I made my Movember a little more innovative than most – surprise!

I decided to first create a QR code, so that people could easily donate:

That was simple: my donation URL is https://au.movember.com/donate/details?memberId=316906, so I converted that with http://www.qr-code-generator.com/

Cool, so now people can scan and donate – that was easy!
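
If you'd rather skip the website, the open-source qrencode CLI does the same job locally – a quick sketch, assuming qrencode is installed (the output file name is mine):

# generate a QR code PNG for the donation URL
qrencode -s 8 -l M -o donate-qr.png 'https://au.movember.com/donate/details?memberId=316906'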

Add some rhyme, and I'm away.

The next part is cooler: I own some Estimote beacons, so why don't I program them to broadcast my donation page?  I needed to go to bitly.com to generate a short URL, as I can only save 17 bytes, but that is easy.
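
A quick way to sanity-check that a shortened URL fits the budget before programming the beacon – the bit.ly code below is made up, and note that the physical web's Eddystone-URL format also compresses common prefixes like https:// into a single byte, so a raw byte count is conservative:

# count the bytes in the candidate URL (echo -n drops the trailing newline)
echo -n 'http://bit.ly/2zXmplQ' | wc -c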

This is the beacon I've put outside my office.  I now get out my Android phone and we have a donation site being pushed out to anyone listening to the physical web.  Fingers crossed that my technology is going to get some donations.



The above is a screenshot from my phone showing the two beacons that I have projecting websites.

Beacons are really cool IoT devices; we are implementing them at a number of clients and integrating them into JDE.

Tuesday, 14 November 2017

Embark on IoT–where do you start?

If I was going to implement some IoT to show the boss, I’d probably use the Orchestrator in JDE.  It’s pretty cool, pretty simple, and you could impress the boss fairly easily.  But what if you REALLY wanted to impress the boss?  What if you wanted to support disconnected devices, tonnes of messages, and perhaps a thing shadow?  All of that is native in the AWS IoT offering.

Local caching?  Look no further than https://aws.amazon.com/greengrass/

Greengrass is like an offline agent for IoT – awesome, and native to the suite.

I’m also unsure how JDE might process millions of devices and trillions of messages, which I know AWS can scale out to.

Connect An IoT Device

The above shows the native consumption of MQTT messages into the AWS engine.
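
To make that concrete, any MQTT client can publish straight into AWS IoT over TLS.  Here is a sketch using mosquitto_pub – the endpoint, certificate names, topic and payload are all placeholders:

# publish a JSON telemetry message to an (example) AWS IoT endpoint
mosquitto_pub --cafile root-CA.pem \
  --cert freezer01.cert.pem --key freezer01.private.key \
  --tls-version tlsv1.2 \
  -h a1example-ats.iot.ap-southeast-2.amazonaws.com -p 8883 \
  -i freezer01 -t 'telemetry/freezer01' \
  -m '{"tempC": -18.5, "doorOpen": false}'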


Process IoT Data

You can see that the above is for an autonomous car, but forget that – it could be a freezer for all I care.  The cool thing is that the data can be processed into a data warehouse using Redshift, or even into inexpensive S3 buckets as big data processing locations.  Save it all for later.  This also shows real-time insights using QuickSight, a possible downstream product of big data analysis, plus ML and AI for predictive analytics.  These would call orchestrations in JDE (or just AIS calls) to raise work orders and react to breaches of IoT-configured thresholds.
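
When a threshold is breached, the IoT side only needs to make a single HTTPS call back into JDE.  A sketch of what that might look like – the host, orchestration name and inputs are all made up:

# call a (hypothetical) orchestration on the AIS server when a threshold is breached
curl -X POST 'https://ais.example.com:9302/jderest/orchestrator/IoT_CreateWorkOrder' \
  -H 'Content-Type: application/json' \
  -d '{"username": "JDE", "password": "*****", "deviceId": "FREEZER01", "tempC": "-2.1"}'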



                A high-level view of AWS IoT

A complete solution is available, as seen above, making a thing shadow a native part of the toolkit.  This is something that is going to be very important as IoT moves forward: being able to interrogate a digital double.  Imagine putting on the VR goggles and seeing the entire physical device as a digital double of any asset that you are maintaining, pointing your virtual hands at any part of the machine and seeing all of the values that are being sent to IoT.  Welcome to the future!

Use JDE for what it’s good at – use well-architected integration, and use best-of-breed cloud solutions where appropriate!

Wednesday, 1 November 2017

A really quick Oracle performance test–what did you get?

Ever had a slowdown that you cannot really explain?  I know I have.

What you always need is a set of baseline tests, things that ground your expectations.

Remember that we’ve provided these sorts of things with ERP analytics (at a high level)

and performance benchmark - http://myriad-it.com/solution/performance-benchmark/ (which I think is really cool).

But let’s take it down another notch, database only!

Imagine that things are slowing down and you want to find the problem.  Performance problems are like a pyramid, something like this:

[image: the performance pyramid]

If your hardware is rubbish, everything will be rubbish.

If your database is rubbish, everything will be rubbish…

You see where I’m going.

So, I’d first run some dd commands on the hardware to check disk speeds.  I’d check the location of the data disks and then the redo disks, and I’d check the disk speed where temp and swap are written – make sure they are all pretty quick.


[root@ronin0-net1 homewood]# dd if=/dev/zero of=speedtest1.dmp oflag=direct conv=notrunc bs=1M count=11200

6527+0 records in

6527+0 records out

6844055552 bytes (6.8 GB) copied, 299.438 seconds, 22.9 MB/s

The above would indicate a VERY large problem

[root@ronin0 homewood]# dd if=/dev/zero of=speedtest1.dmp oflag=direct conv=notrunc bs=1M count=11200

11200+0 records in

11200+0 records out

11744051200 bytes (12 GB) copied, 25.8044 seconds, 455 MB/s

The above would make you smile!

Then – you’ve tested the performance of a bunch of locations – happy days.  Now, the database.

Once again, simple things for simple people.

Create a SQL script with the following contents:


set echo on
set feedback on
set timing on
spool output.txt
begin
   -- note: the drop will error on the first run if the table does not exist yet
   execute immediate 'drop table testdta.f0101perf';
   execute immediate 'create table testdta.f0101perf as select * from testdta.F0101 where 1 = 0';
   execute immediate 'grant all on testdta.f0101perf to PUBLIC';
   for a in 1..100000 loop
      insert into testdta.f0101perf select * from testdta.F0101 where aban8 = 100;
      commit;
   end loop;
end;
/
quit;


And run it at the command line:


C:\Users\shannonm>sqlplus JDE@orcl @shannon.sql

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Nov 1 14:03:53 2017

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> set feedback on
SQL> set timing on
SQL> spool output.txt
SQL> begin
   2    execute immediate 'drop table testdta.f0101perf';
   3    execute immediate 'create table testdta.f0101perf as select * from testdt
a.F0101 where 1 = 0';
   4    execute immediate 'grant all on testdta.f0101perf to PUBLIC';
   5    for a in 1..100000 loop
   6       insert into testdta.f0101perf select * from testdta.F0101 where aban8
= 100;
   7       commit;
   8    end loop;
   9  end;
  10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:31.75
SQL> quit;
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64
bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

C:\Users\shannonm>


So now we can open our results file, output.txt, which is placed in the dir we ran the script from (again, nothing fancy).  Remember that address book 100 should exist – I could make that smarter with = (select max(aban8) from testdta.f0101), but that would be an extra table scan (index and sort) that I did not want to execute.

What does this do?

It creates a copy of F0101 and then inserts address book record 100 into it 100,000 times.



Remember that this is not really testing index creation or index tablespaces, so you might want to make the test a little more realistic, but you get the picture.  It’s easy to get a bunch of indexes on the table and go from there.
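
For example, adding a couple of indexes to the copy before the insert loop would exercise index maintenance as well (a sketch – these column lists are illustrative, not the real F0101 index definitions):

create index testdta.f0101perf_a on testdta.f0101perf (aban8);
create index testdta.f0101perf_b on testdta.f0101perf (abalph, aban8);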

Then you need to work on installing Performance Benchmark to start getting the stats on the other parts of your ERP – oh, and ERP analytics (https://shannonmoir9.wixsite.com/website/erp-analytics).

Sunday, 22 October 2017

Tools release 9.2 update 2 is GA


There are a heap of cool features – let me summarise them, but do read the source of truth: http://www.oracle.com/us/products/applications/jd-edwards-enterpriseone/jde-ga-10-17-3961047.pdf

  • Additions to UXOne
  • Mobile time entry – a new app
  • Mobile inventory transfer and cycle count
  • Support for MAF 2.4 (but why would you bother?) https://docs-uat.us.oracle.com/middleware/maf240/mobile/develop-maf/whats-new-this-guide-release-2.3.2.htm
  • There are a heap of application enhancements – which is a little strange when something is labelled a tools release.  I guess we are seeing, once again, the execution of continuous delivery:
    • Manufacturing Production Execution Process Simplification
    • HCM improvements
    • Finally - Joint Venture Management - Percentage of Ownership and Distributions
    • Capital Asset Management and Service Management
  • TOOLS
    • Announcing JD Edwards EnterpriseOne Notifications – NOT mobile message notifications. 
      • The Orchestrator can now process notifications.
      • The notification system will notify the appropriate users via their preferred delivery channel: within the JD Edwards web client, in the JD Edwards Work Center, or via email or text message.
      • Wow, does this mean perhaps some attempt at the sadly missing workflow engine?
      • Where are the mobile notifications?
      • I have big plans to integrate Microsoft Flow into JD Edwards natively as a fully featured and rich workflow engine.
    • JD Edwards EnterpriseOne Orchestrator Enhancements
      • Read from external data
      • Read from watch lists
      • Is this going to be workflow, I ask (finally!)?
    • Server Manager REST API enhancements.  This is cool if you want to connect SCOM or another management product into SM to manage the organisation – see the sketch after this list.
      • Enterprise Server
      • HTML Server
      • Application Interface Services (AIS) Server
      • Transaction Server (RTE)
      • Business Services Server (BSSV)
      • BI Publisher Server for One View Reporting (OVR)
      • Database Server
    • Enhancements to Simplify Staying Current
      • Anything in this area is good.  You can track whether BSFNs are being called.
      • I’d still use our ERP analytics program and augment that information with this.
    • More platform certifications – could there be a more boring list?  (MSFT EDGE!)
      • Oracle Database 12.2.0.1 
      • Oracle JavaScript Extension Toolkit (JET) 3.1 
      • Oracle Mobile Application Framework (MAF) 2.4 for Mobile Foundation 
      • Microsoft EDGE browser 38
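
On the Server Manager REST API: I haven't scripted against it yet, so treat the below as a hypothetical shape of a call rather than the documented interface – the host, port, path and parameter are all placeholders.

# hypothetical query of a managed instance via the SM console's REST API
curl -u jde_admin:Password1 \
  'http://smc.example.com:8999/manage/mgmtrestservice/instanceinfo?instanceName=AIS_SERVER1'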



Monday, 16 October 2017

CD3–in action

Continuous delivery is way too real.  How do I know?  I’ve seen it.

Say, for example, you want to look at the release catalog from Oracle (new from 9.1).

start here:  https://apex.oracle.com/pls/apex/f?p=24153:99:24777358370464:TAB:NO:::&tz=-6%3A00

[screenshot]

Choose JDE

[screenshot]

Choose compare releases

And now compare applications:

[screenshot]

Cool hey?  So you can now choose a month to compare with – not a “dot” release.

Wednesday, 6 September 2017

Bulk version change tips and tricks

Ever needed to create a lot of versions as copies of others?  Ever needed to create versions and also change data selections and processing options?  Say, for example, you opened another DC and wanted to copy all of the config from the previous ones – reuse all of your IP – well, do I have some good news for you… well, indifferent news – it can be done.

The first step to getting this working is brainstorming, and one of my favourite people to brainstorm with is Shae.  We can have quick, single-syllable-word conversations, grunt a bit – but at the end of the day we articulate an elegant solution to some amazing technical problems.  I’m very lucky to have peers like this to work with.  Shae came up with the idea of using a par file to complete this task – and that was a great idea!  I can easily create a project with SQL and populate it with all of the versions I need to copy.  I can also create all of the F983051 records and central objects to create the base objects, but I’d need to use load testing tools or scripts to change all of the POs and data selection.

Shae’s idea to use the par file was great; it seemed possible.  The client in question has about 500 versions, all for a particular DC, and I needed to change names, POs and data selections based upon the new name change – okay, challenge accepted.

There are heaps of ways of doing this – Java, Node.js, Lambda, VBScript – but I went old school: a little bit of sed and awk.

I basically took the par file, sftp’d it to Linux and then ripped it apart.

The structure was not too crazy to deal with, although it did feel like Russian dolls: a zip file in a zip file in a zip file.

There were also some pretty funky things, like Unicode files in the middle (not normal files) and Base64 strings for POs – but nothing was going to stop me.

What I’m going to do is just cut and paste the script here, you’ll get the idea of what needed to be done from the sections and the amazing comments.

In my example the version names, POs and data selection all changed from string TT to string GB – so it was really easy to apply these rules through the script.

At the end of the day, this created a separate par file that you can restore to a project with all of the new versions in it!  Really nice.

There is a tiny bit of error handling and other things – but this is really just showing you what can be done.

Imagine if you needed to change the queue on hundreds of versions or anything like this.  You could use some of the logic below to get it done (or be nice to me).


if [ $# -ne 3 ]
   then
     echo "USAGE: $0 <parfile> <FROM STRING> <TO STRING>"
     exit 1
fi

_debug=1
workingDir=/tmp
parfile=$1
fromString=$2
toString=$3
parfileNoExt=`echo $parfile | awk -F. '{print $1}'`
expDir=$workingDir/$parfileNoExt

rm -fR $expDir

#unzip the file to the working dir
unzip $parfile -d $expDir

for file in `ls $expDir/*.par`
   do
     #echo $file
     dir=`echo $file | awk -F. '{print $1}'`
     #echo $dir
     unzip -q $file -d $dir
done

#parfile
# parfile UBEVER_R57000035_PP0001_60_99
#   F983051.xml
#   F983052.xml
#   manifest.xml
#   specs.zip
#   RDASPEC
#    R5700036.PP0002.1.0.0.0.0.xml
#

#now lets extract the specs zip file for each

find $expDir -name specs.zip -execdir unzip -q \{} \;

#now delete par files and all else

find $expDir -name '*.par' -exec rm \{} \;
find $expDir -name specs.zip -exec rm \{} \;

# now we need to rename directories
if [ $_debug = 1 ]
then
   echo "RENAME DIRS"
fi

cd $expDir
for dir in `ls -d *${fromString}*`
do
   echo $dir
   newname=`echo $dir | sed s/_${fromString}/_${toString}/g`
   newname=`basename "$newname"`
   echo $newname
   cd $expDir
   mv $dir $newname
done

#rename files, generally in the spec dir
#for file in `find $expDir -name "*${fromString}*.xml" -type f -print`
#holy crap, that took a long time to encase this with double quotes so as not to lose the
#dodgey versions
if [ $_debug = 1 ]
then
   echo "RENAME FILES"
fi

find $expDir -name "*${fromString}*.xml" -type f |while read file; do
     newfile=`basename "$file"`
     newfile=`echo "$newfile" | sed s/${fromString}/${toString}/2`
     currDir=`dirname "$file"`
     mv "$file" "$currDir/$newfile"
     if [ $? -ne 0 ]
       then
         echo "MOVE ERROR " "FILEFROM:$file:" "FILETO:$currDir/$newfile:"
         sleep 10
         exit
     fi
done

#filelist="`find $expDir -name "*${fromString}*.xml" -type f -print`"
#echo $filelist
#for file in $filelist
   #do
     #newfile=`basename "$file"`
     #newfile=`echo "$newfile" | sed s/${fromString}/${toString}/g`
     #currDir=`dirname "$file"`
     #mv "$file" "$currDir/$newfile"
     #if [ $? -ne 0 ]
       #then
         #echo "MOVE ERROR " "FILEFROM:$file:" "FILETO:$currDir/$newfile:"
         #sleep 10
         #exit
     #fi
#done

if [ $_debug = 1 ]
then
   echo "SED CONTENTS OF FILES AND CREATE .NEW"
fi

#This is ridiculous - I need to convert manifest.xml
#from utf-16 to utf-8 and grep and then back again
# this is killing me
echo 'CONVERTING MANIFEST.XML'
for file in `find $expDir -name manifest.xml -print`
do
   echo $file
#  iconv -f utf-16 -t utf-8 $file | sed s/${fromString}[0-9]/${toString}/g > $file.utf8
   iconv -f utf-16 -t utf-8 $file | sed s/${fromString}0/${toString}0/g > $file.utf8
   iconv -f utf-8 -t utf-16 $file.utf8 > $file
   rm $file.utf8
done
  
#okay, now for the contents of the files
set -x
grep -r -l "${fromString}" $expDir | while read file; do
#for file in "`grep -R -l ${fromString} $expDir/*`"
#  do
     newfile=`echo "${file}.new"`
     echo $file "contains $fromString"
     cat "${file}" | sed s/${fromString}/${toString}/g > "${newfile}"
     #note that if you need to compare the internals of the files
     #comment out the following lines.
     rm "$file"
     mv "$newfile" "$file"
     if [ $? -ne 0 ]
       then
         echo "MOVE ERROR " "FROMFILE:$newfile:" "TOFILE:$file:"
         sleep 10
         exit
     fi
done

#Need to decode the base64 PO string and replace fromString there too
#find the F983051's
#create a variable for PODATA, check for PP
echo "Processing F983051 VRPODATA"
for file in `find $expDir -name F983051.xml -print`
do
   base64String=`cat $file | tr -s "\n" "@" | xmlstarlet sel -t -v "table/row/col[@name='VRPODATA']"  | base64 -d`
   charCount=`echo $base64String | wc -c`
   if [ $charCount -gt 1 ]
     then
     base64String=`echo $base64String | sed s/${fromString}/${toString}/g`
     echo 'changed string:' $base64String
     base64String=`echo $base64String | base64`
     xmlstarlet ed --inplace -u "table/row/col[@name='VRPODATA']" -v $base64String $file
     #just need to run the xmlstarlet ed
   fi
done

# find $expDir -name '*.new' -print | wc -l

#so now we replace the .new with the original
#job done...
#need to zip everything back up

if [ $_debug = 1 ]
then
   echo "Creating spec.zip"
fi

for dir in `ls -d $expDir/*`
   do
     cd $dir
     zip -r specs.zip ./RDASPEC
     zip -r specs.zip ./RDATEXT
     rm -fr ./RDASPEC ./RDATEXT
done

#now we create all of the par files from the dirs under expDir

for dir in `find $expDir -mindepth 1 -maxdepth 1 -type d`
   do
     cd $expDir
     zip -r ${dir}.par `basename $dir`
     rm -rf $dir
done

#now the root parfile
cd $expDir
rm -rf ../${parfile}.zip
zip -r ../${parfile}.zip *
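
Running it is just the par file plus the from and to strings – assuming you’ve saved the script as, say, bulkver.sh:

# par file in the current directory; rewrites string TT to GB throughout
./bulkver.sh MYPROJECT.par TT GB
# the rebuilt archive lands at /tmp/MYPROJECT.par.zip (workingDir is /tmp)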

Friday, 1 September 2017

JD Edwards and microservice-based integrations

The cloud is changing our approach to everything, and so it should.  It gives us so many modern and flexible constructs that can enable faster innovation and agility, and deliver value to the business faster.

You can see from my slide below that we should be advocating strategic integrations in our organisations, shown below as a microservice layer.  This single layer gives a consistent, “write the code once” interface for exposing JD Edwards to BOB (best of breed) systems.  It also allows generic consumption and exposure of web services, without you having to write a lot of JD Edwards code or get into too much technical debt.

If you look at the below, we are exposing an open method of communicating with our “monolithic” and potentially “on-prem” services.  This microservice layer can actually be in the cloud (and I would recommend this).  You could choose to use middleware to expose this layer, or the generic pub/sub techniques that all of the standard public cloud providers give you.


[image: strategic integration slide – a microservice layer between JD Edwards and best-of-breed systems]


Looking at a little more detail in the diagram below for JDE shows you the modern JDE techniques for achieving this.  You’d wrap AIS calls as STANDARD interactions with standard forms.  Just as a BSSV was created to “AddSalesOrder”, the same could be done in a microservice.  This would be responsible for calling the standard and specific screens in JDE via AIS, so you are abstracting yourself from the AIS layer.  If you needed to augment that canonical with information from another system, you are not getting too invested in JDE – it’s all in your microservice layer.
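
Under the covers, such a microservice is just making JSON-over-HTTPS calls to AIS.  A sketch of the shape of a form service request – the host, form name and action here are placeholders:

# a (placeholder) AIS form service read against a standard JDE form
curl -X POST 'https://ais.example.com:9302/jderest/formservice' \
  -H 'Content-Type: application/json' \
  -d '{"username": "JDE", "password": "*****", "formName": "P01012_W01012B", "formServiceAction": "R"}'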

This also gives you the added benefit of being able to rip and replace any of the pieces of the design, as you’ve created a layer of abstraction for all of your systems – nice.  Bring on best of breed.

The other cool thing about an approach like this is that you can start to amalgamate your “SaaS silos”, which are the modern equivalent of “disconnected data silos”.  If your business is subscribing to SaaS services, you have a standardised approach for getting organisation-wide benefit from the subscription.

Outbound from JDE, you can see that we are using RTEs.  These might go directly to an AWS SQS queue, or to a Google Pub/Sub topic, or to Microsoft Azure cloud services – all of them could queue these messages.  The beauty of this is that the integration points already exist in JDE as RTEs; you just need to point these queues (or the transaction server) at your middleware or cloud pub/sub service for reliable and fault-tolerant delivery.  You can then have as many microservices subscribe to these messages and perform specific, independent tasks based upon the information coming in.
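
To make that concrete with SQS as the example, once the transaction server delivers an RTE message to the queue, any microservice can poll it independently – the queue URL and account number below are placeholders:

# a subscriber polling the (placeholder) queue that RTE messages land on
aws sqs receive-message \
  --queue-url 'https://sqs.ap-southeast-2.amazonaws.com/123456789012/jde-rte-events' \
  --max-number-of-messages 10 --wait-time-seconds 20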


[image: JDE integration detail – AIS calls inbound, RTEs outbound to cloud queues]

Wow, JDE has done a great job of letting you innovate at the speed of cloud by giving you some really cool integration methods.  There is nothing stopping you plugging IoT, mobility, integrations, websites, forms and more into JD Edwards simply and easily, while keeping an extremely robust and secure ERP that ensures master data management and a single source of truth.

This model works on prem, hybrid or completely in the cloud.