Tuesday, 23 November 2021

trust vs. identity in weblogic - cacerts and more

Oh dear, we all need to understand trust and SSL/TLS much better - because let's be honest - we need to configure it all of the time now.  Gone are the good ol' days of plain HTTP:// and SSL offload; we need to be SSL conscious more and more.

From a WLS (server) perspective, trust is all about outbound capability - whether this is to internal machines or external machines.  If you use the default trust store in WLS, then this is going to fall back on the Java JDK cacerts keystore (something like the path shown in the keytool command further below).

Why did that paste as a picture? [it's the new default for blogger when you are pasting more than plain text]

Anyway, back to trust...  If you need to speak to another server using SSL, you will need to trust the CA (certificate authority) that signed that website's certificate.   Browsers do this automatically with a standard list of CAs.  Therefore, testing with the browser on the web server machine does NOT test anything.  You think that firing up Chrome on the web server and navigating to your favourite https:// site will prove that Java, and therefore AIS or JDE, will be fine - no - think again.

You need a couple of extra levels of testing for that kind of trust.  Firstly, find the user that is running the Java processes on the machine and test as that user.  I recommend starting a PowerShell window and using wget (an alias for Invoke-WebRequest); that is awesome for a surface test of any connection or connector that you might be struggling with using AIS.  This is going to be a level 1 test, but guess what - this does not test the cacerts from Java.
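For the Linux-minded (or anywhere curl is available), a comparable level 1 surface test might look like the sketch below. The URL is just an example, and like the PowerShell wget test, this exercises the OS/curl trust store rather than the Java cacerts:

```shell
# Level 1 surface test of an HTTPS endpoint (the URL is just an example).
# This checks reachability and the OS/curl trust store - NOT the Java
# cacerts that JDE/AIS will actually use at runtime.
url="https://www.google.com"
if curl -sS -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" \
     --connect-timeout 5 "$url"
then
  result="surface test OK"
else
  result="surface test FAILED - check DNS, proxy and firewall first"
fi
echo "$result"
```

If this fails, fix the network plumbing before you start blaming certificates.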



The next level of testing you will need to do is using the JDK on the machine.  Make sure that your JDE instance (AIS or HTML) is configured for the correct JKS.  You can see this in the screen grab below.  This instance is using custom identity and default trust (a setup that I recommend).



If you want to see all of the CAs in the list, you can use the standard Java keytool utility.  Note that you can list them without a valid password, but you cannot import.


"C:\Program Files\Java\jdk1.8.0_251\bin\keytool" -list -v  -keystore "C:\Program Files\Java\jdk1.8.0_251\jre\lib\security\cacerts"

Right, so you've made it this far.  How can you test that the Java trusts are set up properly in the JKS?  Write and run a Java program that uses the cacerts to validate certificates.

Create a file called wget.java:

import java.io.*;
import java.net.*;

// Opens the URL given on the command line and prints the body.
// An https:// URL forces the JVM to validate the server certificate
// against its default trust store (cacerts).
public class wget {
  public static void main(String[] args) throws Exception {
    String s;
    BufferedReader r = new BufferedReader(new InputStreamReader(new URL(args[0]).openStream()));
    while ((s = r.readLine()) != null) {
        System.out.println(s);
    }
  }
}

Then compile it and run it:

"C:\Program Files\Java\jdk1.8.0_251\bin\javac" wget.java

"C:\Program Files\Java\jdk1.8.0_251\bin\java" wget https://www.google.com

There you go - you can validate the entire trust config for WebLogic by following its config and implementing the above code.

Identity is different: this is who the server portrays itself to be, and it comes from a certificate with an associated private key.  The private key lets the server prove, during the TLS handshake, that it really owns the certificate; clients check that proof with the public key in the certificate, ensuring that only a server holding the matching private key can answer for that identity.   Many JDE customers load custom certificates into a separate JKS for identity purposes for the server.  These certificates generally contain any aliases that the server wants to use in identifying itself.  Also, if you load a custom cert for custom identity, then you need to ensure that your browsers trust the CA [which could be self signed].

The final point that I would like to raise here is that a self signed certificate is just as secure as a "proper" CA created cert.  It uses the same strength of encryption and can last a little longer (that is awesome)!  So I recommend using self signed certificates for anything internal; worry about proper CAs when you need to go to the internet for things.
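To show how cheap a self-signed certificate is to mint, here is a sketch using OpenSSL (rather than keytool); the CN is made up:

```shell
# Mint a self-signed certificate valid for two years (the CN is made up).
openssl req -x509 -newkey rsa:2048 -nodes -days 730 \
  -keyout internal.key -out internal.crt \
  -subj "/CN=jde.internal.example" 2>/dev/null
# Show what we created: the subject and the expiry date
openssl x509 -in internal.crt -noout -subject -enddate
```

To have Java trust it, you would then import internal.crt into the relevant trust store with keytool -importcert, and do the equivalent for your browsers.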










Thursday, 21 October 2021

Release 22 enhancements - specifically workflow modeller

A pretty healthy release:

Remember that release 22 is, critically, the superset of the applications ASU and tools 9.2.6.  I don't mind the naming standard, as I do spend quite a bit of time reminding my clients that their JDE release is 6 years old.  And that takes me to https://support.oracle.com blah blah.


  • Logic Extensions - these are cool, this is going to be huge
  • Return Associated Descriptions for Data Requests - handy
  • Files as Orchestration Input and Output 
  • Add Current Environment as a System Value in Orchestrations
  • Bypass Warnings During Process Recording
  • Attachments to Messages: Media Objects - this is excellent
  • Orchestrator Support for Media Objects
  • Display Messages in Orchestrator Monitor
  • Service-Only User
  • Workflow Studio
  • Creating External Forms from EnterpriseOne Web Client
  • Launch Orchestrations from Composed EnterpriseOne Page
  • Form Extension Improvements
  • Pop-up Messages for Orchestrations Launched from the EnterpriseOne User Interface
  • Export EnterpriseOne Search Results to Microsoft Excel
  • Bypass UBE Printer Selection Screen
  • Zero Downtime Deployment for Applications
  • JD Edwards Update Manager for Applications
  • Offline Server Manager Monitoring and Notification
  • Automatic Kernel Reconnection
  • Health Check API for a Group of Interdependent Servers
  • AIS REST APIs to Discover Objects and Execute Business Functions
  • Enhanced Logging Capabilities for UBE
  • Simplified UBE Default Queue
  • New UBE job Status for Terminated Jobs
  • Configurable Application and UBE for Server Manager Health checks
  • Enhanced User Security Activity Tracking for Auditing
  • Support for Oracle Autonomous Database on Shared Exadata Infrastructure
  • Improved Business Continuity with Native Oracle Database Connection Management for On-Premise
  • Platform Certifications
Actually, so much great stuff.  Although the Fusion5 team created an AWS architecture that supported zero-downtime deploys, canary deploys, blue/green deploys and more 3 years ago.

Anyway, back to the workflow studio.

Workflow Studio

Everything can be launched from orchestration studio.



But - that is not why I'm here.  I'm here to talk about the new workflow modeller.  The old 32 bit workflow modeller will not work after 9.2.6.

The need for workflow is everywhere - we all need it and want it.

Nearly everything about workflow will continue to work the same way.

  1. Process monitor the same
  2. Distribution lists
  3. Halt tasks
  4. Workflow kernel
  5. Same, same same.


There is a TC that you need to run.  Look out for the ERID3 column: if there are any values that are not 4, you need to run Convert Workflow Specs Stored in F98811 from Xe to B9 (R89811B) locally on the development client.  You'll probably need the latest planner - as we all know, this is where specs come from…  Also, F98811 is in control tables, so you'll need to run this for all environments.  JN19147 is the latest planner that contains this TC.

Think of a workflow as a process that requires human thought, whereas an orchestration or notification can be pretty autonomous.  Human thought, as in an action.  A good clear example that is available out of the box is credit limit approvals.  If a credit limit is changed, just check with the manager before that is done.

You get directed to a common approvals screen, you approve or reject and then a BSFN fires off to run the update, perfect.  



The rules engine seems to be a dramatic improvement from the traditional orchestration decision trees.  It looks to have been lifted from the logic extensions, a much easier and more powerful interface.

Now, we have a pretty basic hybrid solution at the moment, foundational.

You still need a thick client to create DSTRs (data structures), both key and additional data.

You need a thick client to create the workflow in OMW before you can open it in the new modeller.  Then you can design the workflow.

  • You cannot call orchestrations - what??  This felt crazy and wrong.  The first thing I wanted to do was call an orchestration.
  • You cannot call notifications - just old school emails
  • You can activate and deactivate workflows
  • You can send an action or information message


This is like a cloud lift and shift, pretty strict - but it's so much better than the old workflow engine.

Note that you'll still need to initiate processes from code.

Of course you can get creative and create a NER or BSFN to call B98ORCH and call your ORCHs - that would be sweet

I can't wait to integrate workflow into something like IFTTT or Zapier, which is very possible through a connection.  This could open the world of IoT, voice, tweets and more to your JD Edwards robotic capability, Teams notifications and beyond.  Note that this could be SMS, or integrate with your to-do list.  Automation and workflow are going to become more and more important and I cannot wait to see where this is going in future releases.

Awesome progress and I can see where it's heading.  Great work JDE team.




Tuesday, 19 October 2021

patching a tools release

We modify tools releases all of the time.  If you make modifications to your tools, it is generally best to edit the par file, as opposed to hacking into the WebLogic files after you've laid the tools release down.  Why?  Because you are going to forget that you've made changes and deploy over the top - I guarantee you!

So, the par file (as was explained about 8 years ago in this blog) is made up of the following structures:


The following is a Linux script that tears down all of the zip files and then builds them back up, pausing in between.

You do not need to use the entire script, but it does tell you exactly what you need to do to change things and packs up the tools so that you can deploy it.

Upload the file that this creates (as a .par) and ensure that while the script waits for you, you edit

scf-manifest.xml

Remember that the description is the first line that you see in Server Manager, so rename it to include what you have done - then you'll see this in Server Manager.

<scf-manifest targetType="webserver" version="9.2.4.3" componentName="EnterpriseOne HTML Server" description="EnterpriseOne HTML Server 9.2.4.3 04-07-2020_10_22" minimumAgentLevel="112">

Back to my script:
I've left some extra stuff in this script for you.  It shows how to patch files with sed, using input from other files.  You may want to do similar things for your patching requirements.
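The core trick the script uses - sed's `r` command to append the contents of a fragment file after a matching line - can be tried in isolation; the file names here are made up:

```shell
# Demonstrate sed's 'r' command: append a fragment file after a matching line.
# Build a tiny stand-in for html4login.jsp and a fragment to inject.
printf '<HTML>\n</HEAD>\n<BODY>\n' > page.jsp
printf '<!-- injected banner -->\n' > fragment.txt
# After any line starting with </HEAD>, read in the whole fragment file.
sed -i -e '/^<\/HEAD>.*/r fragment.txt' page.jsp
cat page.jsp
```

This is exactly the shape of the html4login.jsp patches below, just with throwaway files.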

Remember: parameter 1 is the complete name and path of the tools release .par file.  The second parameter can be anything, but if it is present the script will wait for you to press a key before it packs everything back up.

You could add your logo copies in here, your colour changes if they are more complex than the UDC in the new tools...  You can do any of your paper fixes too.  Therefore the tools in SM will have EVERYTHING and no more hacking the owl_deploy and stage dirs.

PS.  I just had to do an edit because of the shape of the par files once I was done.  It's all tested and working now.

# note: [ ] does not support &&, so the test is split in two
if [ $# -ne 1 ] && [ $# -ne 2 ]
then
  echo "USAGE: $0 toolsreleasefile.par [confirm]"
  exit 1
else
  TOOLS=$1
  WAIT=0
  if [ $# -eq 2 ]
  then
    WAIT=1
  fi
fi
tmpDir=/tmp/tools$$
mkdir $tmpDir
if [ $? -eq 0 ]
  then
    unzip $TOOLS -d $tmpDir
    echo "tools found in " $tmpDir
else
  echo "could not create dir:" $tmpDir
  exit 1
fi
currDir=`pwd`
unzip $tmpDir/webclient.ear -d $tmpDir/webclient.ear.dir
unzip $tmpDir/webclient.ear.dir/webclient.war -d $tmpDir/webclient.ear.dir/webclient.war.dir
#we have everything exploded - now we need to patch, but let's remove the zip files
rm $tmpDir/webclient.ear
rm $tmpDir/webclient.ear.dir/webclient.war

# strange
# send complete img and share dirs
rm -rf $tmpDir/webclient.ear.dir/webclient.war.dir/share
rm -rf $tmpDir/webclient.ear.dir/webclient.war.dir/img
unzip /home/shannon/Documents/img.zip -d $tmpDir/webclient.ear.dir/webclient.war.dir
unzip /home/shannon/Documents/share.zip -d  $tmpDir/webclient.ear.dir/webclient.war.dir

#okay, now we are in a position to patch and then use the recompress script.

patchMaster=/home/shannon/Documents
cp $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4login.jsp $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4loginbackup.jsp
cp $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4balogout.jsp $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4balogoutbackup.jsp

#now patch - you don't need this for this blog, but the sed
#commands are pretty RAD
sed -i -e '/^<\/HEAD>.*/r /home/shannon/Documents/html4login_append.jsp' $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4login.jsp
sed -i -e '/.*<!-- Display an informative message when the login screen is due.*/r /home/shannon/Documents/html4login_append2.jsp' $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4login.jsp
sed -i '/^<%@ page.*/r /home/shannon/Documents/html4balogin_append.jsp' $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4balogout.jsp
sed -i '/^framebusting.*/r /home/shannon/Documents/html4balogin_append2.jsp' $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4balogout.jsp

diff $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4login.jsp $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4loginbackup.jsp
diff $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4balogout.jsp $tmpDir/webclient.ear.dir/webclient.war.dir/share/html4balogoutbackup.jsp


if [ "${WAIT:-0}" -eq 1 ]
  then
    read  -n 1 -p "Do All your Manual patching! Enter Key When done:" mainmenuinput
fi

#now zip
cd $tmpDir/webclient.ear.dir/webclient.war.dir
zip -r  $tmpDir/webclient.ear.dir/webclient.war ./*
cd $tmpDir/webclient.ear.dir
rm -rf $tmpDir/webclient.ear.dir/webclient.war.dir
zip -r  $tmpDir/webclient.ear ./*
rm -rf $tmpDir/webclient.ear.dir
cd $tmpDir
zip -r $currDir/$TOOLS.new ./*
rm -rf $tmpDir
cd $currDir



Friday, 24 September 2021

Orchestration caching frustration - Cross Reference

Perhaps this is just me and my dodgy use case.

I'm using jitterbit and JD Edwards orchestration to synchronise the address book between MSFT CEC and JD Edwards.  Nice and simple.  

I have a JB developer doing his thing and I said that I would donate my time to do the orchestration development - coz I'm a nice guy.

All started well, and it is not ending well.




A super simple start...  well...  There are no fields in the address book that are long enough to take the 40-ish character CEC unique ID.  I thought that an easy thing to use would be cross reference.  This is designed for this purpose... right?

So I start checking for my new AIS cross reference, if it does not exist - create it with the AN8 record




Easy hey?  Then I do a similar check for edit, and I delete both on a delete!  The perfect crime.



And finally delete



It's all fine until you string it all together.

Add:



Works fine.  Cross reference says it aborted, but it's configured to continue.  This is dodgy.  We can see that the AN8 has been created and now there is a value in the cross reference.


and AN8




I can now run all of my edits, all work well.  Address changes and cross reference picks up the add without an issue.


I run my delete:


I can see that the cross reference has been deleted and the AN8 is gone



See above that 06 is now missing.  BUT - if I try to create it again...  everything craps out, because the value is cached in AIS.



I personally think that is a bug.  If we are using highly dynamic reference data [which is reasonable], then I believe that the cache should stay current with the list.

You need to clear the JAS cache for this to work, which feels crazy to me.

Anyway, this took me quite a while to find, and it makes me think I cannot use cross reference.









Friday, 17 September 2021

Yay - we went live... But how can I measure success?

Go-lives are exciting, but I'm going to cover off go-lives for upgrades, which can be more exciting in some respects.  Why?  Well, everyone knows how to use the system, so it's generally going to be hit pretty hard pretty early.  Your users will also have their specific environment that they want to test, and when you mess up their UDOs and grid formats - you are going to hear about it.

I'm focused on how, as a management team, you know that your upgrade is a success.

I have a number of measures that I try to use to indicate whether a go-live has been successful:

  1. Number of issues is important, but often they do not get raised.
  2. Interactive system usage
    1. how many users, applications, modules and jobs are being run and how that compares with normal usage.
    2. Performance; I want to know what is working well and what needs to be tuned
    3. Time on page - by user and by application
    4. Batch and queue performance and saturation - to ensure that my jobs are not queuing too much
Here are some example reports of these metrics, which allow you to define success, measure and improve.



I find the above report handy, as it shows me a comparison of the last week to the last 50 days...  So I get instant trend information.  You can see that I can drill down to ANY report to quickly see what its runtime was over the last 50 days.  The other nice thing is that the table shows you exactly what is slower and by how much.  I can spend 5 minutes and tell a client all of the jobs that are performing better and which ones they need to focus on.

The next place I look for a comparison is here:


Above I'm able to see my busy jobs and how many times they are running, how much data they are processing or how many seconds they are running for.  I know my go-live date, so that makes things easy too.

I can do this on a queue by queue basis if needed too.

We can also quickly see that users are using all parts of the application that they should be, comparing the previous period - this is user count, pages loaded and time on page.






From a webpage performance point of view, we can report over the actual user experience.  This is going to tell us the page download time, the server response time and the page load time - all critical signals for a healthy system.

All of these reports can be sent to you on a weekly or monthly cadence to ensure that you know exactly how JDE is performing for you.

Thursday, 26 August 2021

Load Testing JD Edwards orchestrations

I've been tasked to help a very large client work out whether they can use JD Edwards orchestration to facilitate transactions from their ecommerce solution.  This is a very common question that I'm being asked more and more.  Customers want to connect their ecommerce solution to JDE for stock availability and also for pricing. 

The problem is that pricing is complex for B2B and a little less complex for B2C - as there is generally only a single price.  When you need to price on volume and customer - then you need to tap into the JD Edwards advanced pricing rules.

How are you going to do this effectively?

My choice is calling the JD Edwards BSFNs for pricing via orchestration.  This is a super easy process.  There are some nasty products out there that masquerade as middleware and expose BSFNs - but you can do this through AIS simply and for free.  The other advantage is that it'll upgrade with your JD Edwards application stack.  I would go native AIS / orchestration every time.

I could do a blog (and might) on how easy it is to create an orchestration that allows you to call a BSFN.  It takes minutes.

My highly complex orchestration in 9.2.5.3

<?xml version='1.0' encoding='UTF-8'?>
<ServiceRequest>
  <omwObjectName>SRE_2108230001F5</omwObjectName>
  <studioVersion>9.2.5.3</studioVersion>
  <name>request_CallB4201500</name>
  <shortDesc>call B4201500</shortDesc>
  <productCode>55</productCode>
  <locale>en</locale>
  <updateTime>1629696722845</updateTime>
  <description></description>
  <group>0</group>
  <appStack>false</appStack>
  <returnFromAllForms>false</returnFromAllForms>
  <bypassER>false</bypassER>
  <serviceRequestSteps>
    <serviceRequestSteps type="customServiceRequest">
      <scriptLanguage>groovy</scriptLanguage>
      <objectName>B4201500</objectName>
      <functionName>CalculateSalesPricesAndCosts</functionName>
      <dataStructureName>D4201500</dataStructureName>
      <bsfnInputs>
        <bsfnInputs>
          <name>szAdjustmentSchedule</name>
          <input>szAdjustmentSchedule</input>
          <defaultValue></defaultValue>
          <controlID>9</controlID>
          <type>String</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>mnAddressNo</name>
          <input>mnAddressNo</input>
          <defaultValue></defaultValue>
          <controlID>10</controlID>
          <type>Numeric</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>mnShipToNo</name>
          <input>mnShipToNo</input>
          <defaultValue></defaultValue>
          <controlID>11</controlID>
          <type>Numeric</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>mnShortItemNo</name>
          <input>mnShortItemNo</input>
          <defaultValue></defaultValue>
          <controlID>12</controlID>
          <type>Numeric</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>szBaseCurrencyCode</name>
          <input></input>
          <defaultValue>USD</defaultValue>
          <controlID>13</controlID>
          <type>String</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>szCustomerCurrencyCode</name>
          <input></input>
          <defaultValue>USD</defaultValue>
          <controlID>14</controlID>
          <type>String</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>cCurrencyConversionMethod</name>
          <input></input>
          <defaultValue>Z</defaultValue>
          <controlID>18</controlID>
          <type>String</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>szBranchPlantDtl</name>
          <input>szBranchPlantDtl</input>
          <defaultValue>M30</defaultValue>
          <controlID>34</controlID>
          <type>String</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>szCompany</name>
          <input>szCompany</input>
          <defaultValue>00411</defaultValue>
          <controlID>43</controlID>
          <type>String</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>mnQtyShipped</name>
          <input>mnQtyShipped</input>
          <defaultValue></defaultValue>
          <controlID>51</controlID>
          <type>Numeric</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>mnQtyOrdered</name>
          <input></input>
          <defaultValue>1</defaultValue>
          <controlID>54</controlID>
          <type>Numeric</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>mnConvFactorTransToPrim</name>
          <input></input>
          <defaultValue>1</defaultValue>
          <controlID>55</controlID>
          <type>Numeric</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>mnConvFactorPricingToPrim</name>
          <input></input>
          <defaultValue>1</defaultValue>
          <controlID>56</controlID>
          <type>Numeric</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>szTransactionUom</name>
          <input></input>
          <defaultValue>EA</defaultValue>
          <controlID>66</controlID>
          <type>String</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>szPricingUom</name>
          <input></input>
          <defaultValue>EA</defaultValue>
          <controlID>67</controlID>
          <type>String</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>jdPriceEffectiveDate</name>
          <input>jdPriceEffectiveDate</input>
          <defaultValue>20210823</defaultValue>
          <controlID>68</controlID>
          <type>Date - yyyyMMdd</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>szCustomerPricingGroup</name>
          <input>szCustomerPricingGroup</input>
          <defaultValue></defaultValue>
          <controlID>73</controlID>
          <type>String</type>
        </bsfnInputs>
        <bsfnInputs>
          <name>szLineType</name>
          <input>szLineType</input>
          <defaultValue></defaultValue>
          <controlID>87</controlID>
          <type>String</type>
        </bsfnInputs>
      </bsfnInputs>
      <bsfnReturnControls>
        <bsfnReturnControls>
          <controlID>20</controlID>
          <variable>mnUnitPrice</variable>
          <title>mnUnitPrice</title>
          <type>Numeric</type>
        </bsfnReturnControls>
        <bsfnReturnControls>
          <controlID>21</controlID>
          <variable>mnExtendedPrice</variable>
          <title>mnExtendedPrice</title>
          <type>Numeric</type>
        </bsfnReturnControls>
        <bsfnReturnControls>
          <controlID>24</controlID>
          <variable>mnListPrice</variable>
          <title>mnListPrice</title>
          <type>Numeric</type>
        </bsfnReturnControls>
        <bsfnReturnControls>
          <controlID>26</controlID>
          <variable>szListPriceUOM</variable>
          <title>szListPriceUOM</title>
          <type>String</type>
        </bsfnReturnControls>
        <bsfnReturnControls>
          <controlID>44</controlID>
          <variable>jdTransactionDate</variable>
          <title>jdTransactionDate</title>
          <type>Date - yyyy-MM-dd</type>
        </bsfnReturnControls>
        <bsfnReturnControls>
          <controlID>66</controlID>
          <variable>szTransactionUom</variable>
          <title>szTransactionUom</title>
          <type>String</type>
        </bsfnReturnControls>
        <bsfnReturnControls>
          <controlID>67</controlID>
          <variable>szPricingUom</variable>
          <title>szPricingUom</title>
          <type>String</type>
        </bsfnReturnControls>
        <bsfnReturnControls>
          <controlID>68</controlID>
          <variable>jdPriceEffectiveDate</variable>
          <title>jdPriceEffectiveDate</title>
          <type>Date - yyyy-MM-dd</type>
        </bsfnReturnControls>
      </bsfnReturnControls>
      <isAsynch>false</isAsynch>
    </serviceRequestSteps>
  </serviceRequestSteps>
</ServiceRequest>



Above is the code for the SR, as you might want to know the parameters that you need to fill out to get this working.

I therefore have the ability to run this with the following active parameters:

{
  "szAdjustmentSchedule": "string",
  "mnAddressNo": "string",
  "mnShipToNo": "string",
  "mnShortItemNo": "string",
  "szBranchPlantDtl": "string",
  "szCompany": "string",
  "mnQtyShipped": "string",
  "jdPriceEffectiveDate": "string",
  "szCustomerPricingGroup": "string",
  "szLineType": "string"
}

Awesome and easy so far.

I can use curl to run this and put a bunch of &'s at the end of the queries to try and fudge some performance statistics, but that is not really going to help anyone.  I want to do a proper job of testing something that is massively parallel - as we also know that advanced pricing - even in a simple form - is going to take too long as a linear transaction.

Here is a complete script that will allow you to call the orchestration that I have created above.

There are a number of nuances in this that you need to get right, and they took me quite a while (lucky you): the use of --compressed, because compression is native at my tools level; the use of --insecure, because I was using the JMeter proxy and did not load the certificate properly; the use of @curlFormat.txt for some nice timing information for your load testing; and the use of a here file to load the JSON into a variable.

Note that the first command is not proxied and the second command is proxied.

data=$(cat <<EOF
{
  "mnAddressNumber": "4242",
  "jdDateEffective": "1/1/2021",
  "mnQtyOrdered": "12",
  "szUnitOfMeasure": "EA",
  "szCostCenter": "M30",
  "szItemNo": "220"
}
EOF
)
echo $data
#there is some native compression there, need to turn that off / account for it
#-w "@curl-format.txt" -o /dev/null
set -x
#curl -v --output - --compressed --request POST \
  #--url https://f5dv.fusion5.cloud/jderest/orchestrator/orch_getPrice2 \
  #--header 'Accept: application/json' \
  #--header 'Authorization: Basic Something==' \
  #--header 'Cache-Control: no-cache' \
  #--header 'Connection: keep-alive' \
  #--header 'Content-Type: application/json' \
  #--header 'Host: f5dv.fusion5.cloud' \
  #--header 'accept-encoding: gzip, deflate' \
  #--header 'cache-control: no-cache' \
  #--data "${data}"
countMax=5
i=0
while [ $i -lt ${countMax} ]
do
curl --proxy localhost:8888 --insecure -o /dev/null -w "@curlFormat.txt" --compressed --request POST \
  --url https://f5dv.fusion5.cloud/jderest/orchestrator/orch_getPrice2 \
  --header 'Accept: application/json' \
  --header 'Authorization: Basic Something==' \
  --header 'Cache-Control: no-cache' \
  --header 'Connection: keep-alive' \
  --header 'Content-Type: application/json' \
  --header 'Host: f5dv.fusion5.cloud' \
  --header 'accept-encoding: gzip, deflate' \
  --header 'cache-control: no-cache' \
  --data "${data}" &
  i=$(($i+1))
done
# wait for all of the background requests to complete
wait

Cool - so I can now do rough testing.

I also created some additional scripts that ripped out the auth token and saved more time there.

Even if I can get 20 items on a page, and my amazing JDE can return a price in .4 of a second...  I need to wait 20 x .4, or 8 seconds, for the thing to complete - it is not going to fly.  We need to move on from a linear equation.
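That linear arithmetic versus the parallel ideal can be sketched with shell background jobs, with sleep standing in for a hypothetical price call:

```shell
# The page needs 20 price lookups. Fired sequentially they cost 20 x t;
# fired in parallel with '&' the wall time collapses towards max(t).
# 'sleep' stands in for a hypothetical curl/orchestration call.
t=1                       # one simulated lookup takes 1 second
start=$(date +%s)
i=0
while [ $i -lt 20 ]
do
  sleep $t &              # stand-in for one orchestration call
  i=$((i+1))
done
wait                      # block until every background job returns
end=$(date +%s)
echo "sequential estimate: $((20*t))s, parallel wall time: $((end-start))s"
```

Twenty one-second "lookups" finish in roughly one second of wall time - the same trick the curl script above plays with its trailing &.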

JMeter to the rescue

Unfortunately (or fortunately) OATS is nearing end of life.  Unfortunately, because I understand the platform and its limitations well and have executed MANY load testing scenarios very successfully.  Fortunately, we were given enough notice, so I was able to skip paying the maintenance bill for the company's subscription this year and find an alternative solution.

JMeter is totally awesome and nerdy - I'm loving it.  It does everything that I need and I can load test JD Edwards [with a number of tweaks] and also importantly orchestrations.

There is a steep learning curve with JMeter, but it's not my first rodeo.


Above are the results for a 20 thread linear load test, finishing in an eye-watering 12 seconds.  I know my internet can be slow, but a customer waiting 12 seconds to render a 20 item page might be too much.

Let's try another step.  How about caching and handle reuse (token specifically)

WOW - 2.8 seconds for 20 calls.  Look at the session initialisation overhead for what we are doing - that is crazy.  It's good to know how long the actual BSFN is taking: .12 of a second.  So we are dealing with .28 of a second of overhead.  Fair enough, I say.

Let's start to look into parallel processing:

What we are doing here is breaking the AIS calling down into 20 separate HTTP calls, or threads.  If there is one thing that the internet is pretty good at, it is threading.




We have 2.05 seconds for the 20 threads to complete.  You can see the distinctions in the start times for the above two scenarios.  Immediately above we can see all 20 threads leave JMeter at the same time, but interestingly they seem to come back one at a time - it seems like we are single threaded somewhere.  Although they all fire off at the same time, each response is only retrieved about .12 seconds [the magic number] after the previous one...  Hmmm - might need to look closer at the parallel processing in WLS and also my server.

But the theory is good (if not great) when I have lots of servers.  So imagine that I had 20 AIS servers: then I'm only going to wait as long as the longest request.  I'm going to use JSESSIONID in my load balancer and not need stateless load balancing in 9.2.5.3, and I'm going to maintain open connections to the AIS servers.  So, when my ecommerce solution requests a lot of pricing or a lot of availability - I'm going to reply in spades!

I'll write more about this load testing exercise and the use of JMeter for your load testing needs.  I don't see a lot of point in having additional tools. 




















Extending JDE to generative AI