Wednesday, 30 August 2023

2iC part 2 - JD Edwards AI assistant via ChatGPT and LangChain and more

I've had my team working on adding value to JD Edwards using ChatGPT, as blogged about previously.  My notes last time showed some basic interactions with ChatGPT - allowing it to create attachments to JD Edwards transactions based upon dynamic queries that are being sent to the generative AI platform.  This was good, but we wanted it to be better.

We've been working out how we can use our own data to help generative AI benefit us.  The smarter people I work with have started to dabble in LangChain.

The "chain" is critical in the definition of this framework - chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted prompt to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.

LangChain is a framework for developing applications powered by language models. It enables applications that are:

Data-aware: connect a language model to other sources of data

Agentic: allow a language model to interact with its environment

The main value props of LangChain are:

Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not

Off-the-shelf chains: a structured assembly of components for accomplishing specific higher-level tasks

Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.
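
To make the chain idea concrete, here is roughly what that PromptTemplate-to-LLM example looks like in Python. This is only a sketch using the LangChain API as it stands at the time of writing - the prompt wording and the question are invented, and it assumes your OpenAI key is in the OPENAI_API_KEY environment variable:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Format the user's input with a PromptTemplate...
prompt = PromptTemplate(
    input_variables=["question"],
    template="You are a JD Edwards assistant. Answer briefly: {question}",
)

# ...then pass the formatted prompt to an LLM.
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
print(chain.run(question="What does the F4211 table hold?"))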

It makes sense to collaborate when trying to get an advantage from AI - and more specifically, generative AI.

We did look at fine-tuning versus embeddings with native ChatGPT, but neither seemed to provide exactly what we needed.

What we decided to do is use LangChain to help us understand more about JDE data.  JDE data comes in many forms and, luckily for us, LangChain can consume numerous common document types as input - such as SQL, CSV and PDF.

This means that we can produce a chain like the one below - by creating a VectorStore of user-specific JDE data, which allows an end user to securely query the data that they have access to, using or chaining the technologies or models of their choice.

So I imagine a time when I can use row & column security from JDE to load a user's VectorStore with all of the data that they have access to, then allow this data to augment the answers to complex and ad hoc queries that a user might have.  Remember that these queries could be structured via APIs or ad hoc via humans.
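
To sketch what that chain might look like (LangChain API at the time of writing; the report file name and question are made up, and FAISS is just one of several vector store options):

from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

# Load and chunk a report this user is allowed to see.
docs = PyPDFLoader("R42565_invoice_print.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Build this user's VectorStore (in practice, persisted per user).
store = FAISS.from_documents(chunks, OpenAIEmbeddings())

# Chain: retrieve the relevant chunks, then let the model answer.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    retriever=store.as_retriever(),
)
print(qa.run("How many unique customers appear in this invoice print run?"))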


What we have done with WSJ and some pretty cool programming and UDOs:


You can see from the above that we are fairly basically programming some User Defined Objects to call orchestrations from WSJ.  What these do is essentially mimic what could be done automatically or programmatically.   These buttons give the end user the ability to create their own VectorStore using LangChain APIs, which then allows them to perform ad hoc generative AI queries against the files.

For example, I can select one of the PDF files and choose the "load cache" button.  This uploads the PDF file to my personal location for additional queries, and it does so securely - my file is not training the ChatGPT model.

I can then query the contents (fairly basically once again - hey be nice... I'm doing all the JDE work here).

I have asked a simple question, but I could have made it quite complex - you can ask it anything.  This has been basically programmed to prompt for input before calling the orchestration to LangChain.


Okay, so wait...  Back the truck up... I've been able to engage LangChain and ChatGPT using orchestration studio to ask ad-hoc questions about either my PDF or CSV output from JDE?  This is amazing.  

I can therefore schedule these queries, or I can keep the [basic] interface ad hoc.

Let's try a little more now.


This is getting cool.  I have a 2017-page report loaded and can ask any queries about this file.  For example, I asked about the unique customers in the invoice print run.

I then asked: "Is there consistent pricing amongst all of the invoices?"


Oh wait - the final result is not great, given the above.  But good to know.

I am super impressed that I can use UDOs and LangChain to get so much more value out of JD Edwards reports.  This is totally the NEXT LEVEL when it comes to extracting value from data.  You can literally ask for insights from your reports and they will be given to you.

This is the best result so far... This shows why you need to start asking questions and knowing about AI...

I asked for a 200 word summary of the report and asked for any anomalies that have been detected:

The report contains multiple invoices from the Worldwide Company to various customers. Each invoice includes details such as the invoice date, customer information, order number, item description, quantity, price, extended price, and tax. The items sold are primarily College Rule Paper.


The invoices are addressed to different customers, including Capital System and ABC Big Box Store, and are shipped to various locations. The invoice dates range from 2016 to 2023, and the prices for the College Rule Paper vary slightly across the invoices.

One anomaly in the data is the reprinting of invoices, which could indicate a correction or duplication. Another anomaly is the presence of unprocessed sales order lines mentioned at the end of the report. This could suggest incomplete transactions or issues in the order processing system.

The report also includes freight charges under the 'Invoice/Type' section, which are not consistently present in all invoices. This inconsistency could be due to different shipping methods or terms agreed with different customers.

Overall, the report provides a comprehensive view of the sales transactions of the Worldwide Company, but the anomalies suggest potential issues that may need further investigation.

You know you want this for your JDE, all you need to do is ask!









Friday, 18 August 2023

Double helix approach to JDE continuous improvement

We will eventually describe how each of the helices above represents a continuous project: one reducing technical debt (making updates easier), the other enhancing how we use JD Edwards.



Why improve continuously?

I think it's well known that a focus on iterative improvement (if you can manage it) will deliver better results with smaller costs and impact of change.  When you are improving continuously, you can train people continuously (a smaller group at a time), you can measure and quantify the improvements that you are seeing, and you get better and better at implementing improvements.

When we talk of improvements, this is simply making end users' jobs easier and more efficient.  We do this in hundreds of different ways, and the JDE toolset allows us to enable much of this efficiency with UDOs or the very flexible development environment - what I'd call a more traditional SDLC (Software Development Life Cycle).  At present, we see that many efficiencies are coming from automation, and the adoption of its bigger brother - enterprise automation.   In essence, the foundation of automation is integration.

Continuous improvement is also baked into the platform.  Though JDE is not SaaS, customer-paced updates - the best part of the SaaS ideal - are a JD Edwards release ideal.  If you are efficient in how you apply the updates, then you can have all of the benefits of SaaS without the sting in the tail.


What's the SaaS sting?

The sting in the tail of SaaS is commitment…  You are committed; there are no more platform decisions…  If you adapt the solution for your business, there is a cost to maintain your customisations over an ever-changing platform - of which you have NO say in the cadence of change or what is going to change.   Some might say that SaaS is great for standard businesses with standard approaches - but buyer beware - everything is subject to change.  The other factor of SaaS is pricing: when you are committed there are not many buttons and levers for reducing your costs.  The perpetual licence allowed you some flex.  If your business was struggling you could choose not to pay maintenance and sweat your IT investments - ahhh - NO LONGER on SaaS.  You cannot sweat anything.

I quite often think that SaaS feels nice and new and shiny…  I need to be honest with you: I do not like looking after servers and internet connections - it's not good for my blood pressure.  But my ideal place is JD Edwards in public cloud…  I can update continuously when I want.  I don't look after tin.  I'm secure, and I can implement the most amazing cloud services to create native data lakes (e.g. our fast-start JDE -> S3 data lake accelerator) - with minimal investment.

SaaS is not going away, because it’s great for large software vendors - talk about lock in… Easy to buy, easy to login…  hard to leave.


Why is integration so important to everything?

Integration is no longer an afterthought; it's a first-class citizen in design.  No new software platform is released without a complete integration solution - generally API based, push / pull where required, and batch as well.

This is an important point: any new SaaS subscription, or any new software, needs integration as a first-class tenet.  You need to be able to report over the data and reconcile the data; you also need to make it part of your master data strategy and understand its place in a transaction timeline.  You probably want to ensure that it can connect to your enterprise reporting suite natively too.

Guess what: JDE has a very mature approach to all of these integration patterns.  We have RTE (Real Time Events) for pushing data.  We have orchestrations for implementing the VERY important "dumb pipes / smart endpoints" pattern - APIs etc.  We have type 3 & 4 JDBC drivers and REST-based (loose) connectivity to the tables.  Finally, the good ol' UBE for batch.  Nice.  We also complement this with the antiquated BSSV [please stop] and EDI.  Pretty neat.  Choose the right tool for the right job.



From <https://docs.oracle.com/en/applications/jd-edwards/interoperability/9.2.x/eotin/jd-edwards-enterpriseone-interoperability.html#u30175980> 
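
To give a feel for the "smart endpoints" pattern, here is a sketch of calling an orchestration over REST via the AIS server.  The host, credentials and orchestration name below are placeholders - check the AIS REST documentation for the exact endpoint shape on your tools release:

import requests

AIS = "https://ais.example.com:9410/jderest"

resp = requests.post(
    f"{AIS}/v3/orchestrator/ORCH_GetItemAvailability",  # hypothetical orchestration
    json={"itemNumber": "210", "branchPlant": "30"},    # orchestration inputs
    auth=("JDE_USER", "JDE_PASSWORD"),                  # basic auth; tokens also supported
    timeout=30,
)
resp.raise_for_status()
print(resp.json())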

Integration is crucial because we are seeing the rise of "best of breed" digital adoption.  Point solutions are deployed in seconds, finely tuned for their purpose, and can integrate natively.  I'm seeing more of my clients choose point solutions, all cloud based, making sure that when implemented they respect master data and security ideals.  Oh yes - this is a huge reason for a strategic approach to integration.  Remember that your enterprise security posture is generally only as strong as its weakest link.

Integration enables automation.  Integration reduces human error.  Integration speeds up business.  Simple.


Summary of above

Of course I'm a JDE guy, so you might take the above with a grain of salt, but what I'm getting at is that JDE is an amazing platform with a written, guaranteed roadmap until 2034, rolling year on year.  It's ready for all integration patterns, you can host it on any hyperscaler, and you can do process improvement at your pace. All good…

Now, finally, what do I mean by Double Helix approach to continuous improvement?



The two strands of the helix represent 2 concurrent ideals / projects for JDE.  Imagine that the blue line is modification reduction (or technical debt reduction) and the second (red) is continuous improvement.

These are parallel streams that JD Edwards customers should dedicate resource to.  At times you may focus only on technical debt reduction and at other times you might focus on process improvement, but they should work hand in hand.  For those of you who do not have a blue strand (you are without modification), you can run a single helix.

Time is the X axis and effort is the Y axis, so you see that you can continually put effort into either stream, knowing that BOTH are contributing to lowering the TCO for JDE and also improving efficiencies in your enterprise.

Process Improvement

The types of projects that make up continuous improvement are:

• User productivity improvement

○ UI / UDO adoption

§ E1 Pages

§ UX One

§ Search Groups

§ Cafe1

§ Manage Queries

§ Oneview reporting

§ Form personalisation

§ Notifications

§ WatchLists

§ Workflow modeler

§ Form Extensions

§ Logic Extensions

○ Enterprise automation

○ Speed and efficiency

○ Maturity model assessments

○ Alert -> Analyse -> Act

○ ERP Insights & JDE usage analysis

○ Implementing AI / ML

• Integration

○ IoT

○ Process

○ Data


Process improvement is important, but more important is the quantification of benefits.  You cannot quantify improvements in productivity without a baseline, which is why we recommend our customers use ERP Insights to understand their JDE usage and use this data to baseline, measure and therefore quantify the productivity improvements in JDE.  All process improvement should make users' jobs easier, reduce errors and processing time, and allow users to do value-add tasks.

The Alert -> Analyse -> Act paradigm helps us determine where the value-add can occur.  We want to implement as much alerting as possible…  This means that the ERP is telling us about problems.  Instead of checking the bottom of a 1,000-page integrity report every month, get notifications of exceptions…  The nice thing about simple alerts is that with enterprise automation we can convert them into workflows / integrations for self-healing…  Acting on an alert can be an easy automation…  Instead of the notification we just mentioned, we might add a journal automatically with an orchestration, or an adjustment to balance our accounts due to rounding errors - all with thresholds and reporting.

The nice thing about doing the above is that it could be a citizen developer doing all of this.  There is no package build or deploy - all of that automation above is done by a power user.  Your user base can spend more time on the analyse part of their jobs, which is proactive, value-add work…  doing things that algorithms and models cannot do.

My recommendation is to use a service like the Fusion5 maturity model, which will put some laser focus on your usage of JD Edwards.  We would want you to have ERP Insights installed first.  We'll put your system usage under the microscope for a given module and make recommendations for process improvement, enhancing your use of Alert -> Analyse -> Act.  We'll find the best places for enterprise automation and recommend the lightest touch to implement the change.  All of which is going to improve your usage of the module and your adoption of JDE enhancements -> which assists you in all other modules.
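
To make the self-healing idea concrete, the "Act" step described above boils down to logic like the sketch below.  In practice this would be a notification triggering an orchestration rather than hand-written code - the orchestration name and threshold here are invented for illustration:

import requests

AIS = "https://ais.example.com:9410/jderest"
THRESHOLD = 0.05  # rounding tolerance in account currency

def act_on_out_of_balance(company: str, amount: float) -> None:
    # Small rounding differences: self-heal with an adjusting journal.
    if abs(amount) <= THRESHOLD:
        requests.post(
            f"{AIS}/v3/orchestrator/ORCH_CreateRoundingJE",  # hypothetical orchestration
            json={"company": company, "amount": amount},
            auth=("JDE_USER", "JDE_PASSWORD"),
            timeout=30,
        ).raise_for_status()
    else:
        # Anything bigger still goes to a human to analyse.
        print(f"Manual review needed: company {company} out by {amount}")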


Technical Debt Reduction


The types of projects that assist with technical debt reduction are:

• Removal of modifications

• Adoption of Mobile

• License analysis

• Automated testing

• Archive Data


I'll put a slightly different slant on this: I want you to read technical debt reduction as "how can I make upgrades faster?".  Everyone understands that the variable component of an upgrade is retrofit - nearly everything else is predictable.  So we need to remove or reduce the impact of retrofit, which is done with technical debt reduction.

It's been said over and over that strategic modifications can stay - no problems.  Bolt on modules are pretty easy to upgrade too, the overhead is knowledge & testing - not too much technical retrofit of modifications.  Changes to core objects though, they need to be kept to a minimum.   

Here is a story that I have been told 50 times…  "Let's do our upgrade 'as is', we need to do it quickly and go live…"  Okay, I say - then we carry forward all of the technical debt at this point in time.  We hardly do any analysis on the modifications: whether one has been superseded by existing functionality, whether there is a smarter way of doing the code (with UDOs), or whether there is a smarter way of implementing the process with deep functional knowledge.   It's all too hard with the pressure of the upgrade project looming…  There is not enough time and it's hard to estimate.

What if you did all of this work up front?  Whether it was module by module or object by object - have a continuous technical debt reduction project running.  Removing modifications, changing processing options… making upgrades (updates) easier…  Then, when you get to your update, you'll be surprised by the efficiency.

Remember that UDOs are not your only option for replacing modifications.  Think about the improvements possible with orchestration and the ability to use modern mobile platforms to deploy your JDE functionality.  JD Edwards can be overwhelming for users that are not logging in all of the time.  Replace this with a simple mobile app (insert favourite mobile app deployment software here) using SSO via JWT to ensure that your data and logic are safe.  The security model being enforced by JDE, and therefore orchestration studio, ensures that you are implementing least-privilege principles over your valuable ERP data.  You also know that it is just going to work after you take release 24 - because of the loose coupling implemented as part of your JDE SDLC.

You know that Oracle tests its software extremely well before it gets shipped, so you know that the standard functionality is going to work.  You need to test the standard code as it applies to your business.  You need to ensure that your loosely coupled UDOs continue to work, and then you can start being smug about keeping JDE up to date on a yearly basis - efficiently and predictably.

You might then start looking at automated regression testing and data archival to speed up some of your other painful go-live activities.





Wednesday, 31 May 2023

The tale of 4 large JDE customers: performance, geography & functional footprint

 

Here is a unique view of 4 different JD Edwards customers and what we know about them; they will remain anonymous, of course.

We are able to help our customers understand their JDE usage from many different perspectives:
  • geographical, this is not just user counts, but looking into the modules and performance around the globe.  This is perfect when looking for process improvements and measuring global initiatives that you put in place.   Reporting like this helps you with internal billing too.
  • license right-sizing - you know who is using JDE, you know the modules, and you know the locations and the time being spent.  This allows you to ensure that you are always compliant.  Perhaps take away access for users that use JDE once a month - which could save you thousands in compliance and licensing.
  • Module usage in terms of location, number of users and programs being used...  Which can turn into analysis of performance and engagement.


Looking at the above, we have 1.1 million JDE page views in a month, generated by 1,106 users in the same period.  A large functional footprint in JDE (409 unique applications being loaded) is in use across many modules, and it is a global footprint too.    We can see that JDE usage is up a massive 10% compared with the previous month - the customer must be busy.  This is an amazing executive snapshot for understanding exactly what JDE is doing for the customer, month on month.

This is also really handy when you know more information.  This is a cloud-hosted client, so we are able to give complete costings for the platform on a per-user basis.


We are looking at about 60% of the usage of the previous sample, with some limited international usage.  We can see that the average page load time is 45% slower for this customer than the one above, and twice as slow as the example below - so there might be some opportunities to make the user experience a little quicker.  Note that this observation is only possible through the analysis of multiple data sets.

When looking for process improvement, let's focus on the highly used apps and talk to the users that use those apps the most.  This is a simple task with all of the information above.

Okay, we are getting bigger, more complex and a lot faster!  But wait, I do not want to spoil the surprise.  We have more apps in use, more locations around the world, and EVERY one of the 1.9 million pages in the last month loaded in less than 0.8 of a second.  Usage is up a massive 22% on the previous month.  I can determine where this has come from, in terms of module and location - which is awesome for the business to see.  They can justify the spend and improvements in the platform based upon actual data.


Finally, we have the big bopper!  6.4 million page loads in a month, 2,400 users and a spread of 636 apps.  All being served up in 0.6 of a second on average.  I hope those end users know how lucky they are.

Also up 30% on the previous month!  WOW.

I also know the backend architecture of this site, so that is handy to justify the performance data.

As you can see, a dashboard on its own might not provide too much information to a business, but anonymous comparisons and industry standards can allow us all to strive to be better.

Fusion5 is almost finished implementing our very popular ERP Insights platform using GA4 and GTM.  This is going to provide BETTER insights to our customers and allow us to tailor the insights for more ERP platforms.  Watch this space.




Monday, 22 May 2023

ChatGPT and JDE - introducing 2iC

I've been in awe of the ChatGPT phenomenon.  It's not my first exposure to AI, I've been following along and implementing basic solutions in this area.  I've done some basic python programming involving training and implementation of iterative learning.  Conceptually I understand what is going on, but I find that ChatGPT has made this much more relevant and almost a commodity now.  Everyone has the ability to "chat" and start to understand the power of AI - even if in this basic use case of large language exchange.

I think that the really cool thing about this is that we are all learning how to ask better questions.  How we can use AI to assist us in our daily activities.  I've used it to summarise and generate data.  Awesome use cases there.  I've used it for programming, it's helped me with python and groovy.   I've asked it about the weather: using https://chat.openai.com/ of course.


If you have a good think about the response above, I think it explains a lot about what ChatGPT can do.  It says that its training only goes up to September 2021 - so it's not a prediction method for random events (like weather).  Sure, it can tell you the average temperature for the month of May in Melbourne, Australia - but it cannot tell you accurately what tomorrow's weather is going to be, because that needs super-current context.

Yes, there are some advanced toggles that will include "google results" with the model for these types of queries - but this information is just not part of the model.

Let's get JDE specific, so I'll try some other queries:


This is a pretty decent summary of the actions to perform.  Incredible that the model has crawled over the JDE manuals to be able to create a response like this.  Of course, we start to think that this is a good way of integrating help into JD Edwards - because all of the content is there.  What if I want a little more, including program names?


Fairly generic again, some additional and decent information.

Right - we are starting to get a feel for what we might be able to do in JD Edwards.  Let's get started.


We start nice and simple: a composite page that allows a user to ask ChatGPT questions.  This is a good way to get your users starting the conversation with ChatGPT.  As we all know, we need to get better at finding the things that AI can do for us as individuals - help us with the mundane.  Probably good to note that you cannot just embed https://openai.com - that will not work, with all sorts of security problems.


We take this to the next level and create a composite page with the same chat interface.

Parametrized URL:



In this example I've preloaded the chat with context sensitive information from the native JD Edwards screen.  I'm passing in the item and the description and a few other details to get ChatGPT to create context sensitive and relevant information - generate a purchase agreement.
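
For the curious, the parametrized URL might be assembled something like this - the page address and parameter name here are hypothetical (whatever your embedded chat page accepts):

from urllib.parse import urlencode

item, description = "210", "Mountain Bike, Red"
prompt = f"Create a purchase agreement for item {item}: {description}"

# The composite page appends this to the embedded chat page's address.
url = "https://chat.example.com/embedded?" + urlencode({"q": prompt})
print(url)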


You can see that it's fairly easy to create the query with context from the webpage and have ChatGPT do all of the hard work.  Thanks - this is a really easy way of getting things done. 


Purchase Agreement

This Purchase Agreement (the “Agreement”) is made and entered into as of this ____ day of ___________, 20___ (the “Effective Date”), by and between ___________________ (the “Buyer”) and ___________________ (the “Seller”).


WHEREAS, the Buyer desires to purchase, and the Seller desires to sell, the following item (the “Item”):


Mountain Bike, Red, Item Number 210


NOW, THEREFORE, in consideration of the mutual promises and covenants set forth herein, the parties agree as follows:


1. Purchase. The Buyer agrees to purchase the Item from the Seller for the purchase price of ___________ (the “Purchase Price”). The Purchase Price shall be paid in full on or before the Effective Date.


2. Delivery. The Seller agrees to deliver the Item to the Buyer on or before the Effective Date. The Seller shall be responsible for all costs associated with the delivery of the Item, including any applicable taxes.


3. Specifications. The Item shall have the following estimated weight and volume: ___________.


4. Warranty. The Seller warrants that the Item shall be free from defects in material and workmanship for a period of ___________ from the date of delivery.


5. Limitation of Liability. The Seller shall not be liable for any damages, losses, or liabilities arising out of the Item’s use, other than those expressly provided for in this Agreement.


6. Governing Law. This Agreement shall be governed by and construed in accordance with the laws of the ___________.


7. Severability. If any provision of this Agreement is held to be invalid or unenforceable, such provision shall be struck and the remaining provisions shall be enforced.


8. Entire Agreement. This Agreement constitutes the entire agreement between the parties and supersedes all prior and contemporaneous agreements, representations, and understandings between the parties.


9. Modification. This Agreement may not be modified or amended except in a writing signed by both parties.


10. Waiver. No waiver of any provision of this Agreement shall be deemed or shall constitute a waiver of any other provision, whether or not similar, nor shall any waiver constitute a continuing waiver. No waiver shall be binding unless executed in writing by the party making the waiver.


IN WITNESS WHEREOF, the parties have executed this Agreement as of the date first set forth above.


BUYER: ___________________


SELLER: ___________________


Now, let's start to get more saucy!


I've created another layout above and added some buttons with form extensions.  Note that EVERYTHING you see here has been done without a package build or deploy.  We are only able to make all of these enhancements to standard JDE because of the improvements in the tooling.

I created 3 buttons and associated orchestrations with them; let me summarise what each of them does.

Create packing instructions:
This is a cool little punch-out to ChatGPT with the details of the item.  We ask ChatGPT for weights and measures information for the item and then ask it to create some packing instructions.  We then attach this content back as a media object.  Pretty nice hey?  The cool thing is that the model often has accurate dimensions, and of course it does help when you have accurate descriptions.


You can see from the above that I have hooked up 3 different orchestrations on this form - for the 3 buttons, of course!


We may have the simplest orchestration in the world.  I've created a connector and connection, and then used these to call the ChatGPT APIs natively.  I did this by fighting the security layer a little bit and using Postman to get it all working in the "lab".  I then created the MO (media object) functions, which are now very easy to use...  So - two steps and I'm interacting with ChatGPT.

Remember, I could use ChatGPT to do quality work on the attachment text.  I could use it to consistently fix the cAsE of my data (how good would that be?).  I simply head to https://api.openai.com/v1/chat/completions and form my query - very simple.  Go here for more information: https://platform.openai.com/docs/api-reference

Once again, with a small amount of work and clicks I can put very specific queries to ChatGPT and have the responses natively in JDE.
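
If you want to prove the call outside of orchestration studio first (as I did with Postman), the raw request is simple enough.  A sketch, assuming your API key is in the OPENAI_API_KEY environment variable:

import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{
            "role": "user",
            "content": "Give me the metric cubic volume, average weight and "
                       "packing instructions for an iPhone 13 Pro Max.",
        }],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])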

Here is a sample response:
Metric Cubic Volume: 
- iPhone 13 Pro Max Dimension: 160.8 mm x 78.1 mm x 7.65 mm 
- Volume = 160.8 x 78.1 x 7.65 = 93,346.908 cubic millimeters 
- Metric Cubic Volume = 93.35 cubic centimeters 

Average Weight: 
- iPhone 13 Pro Max Weight: 238 grams 

Packing Instructions: 
1. Secure the iPhone 13 Pro Max in its original packaging. 
2. Place it in a sturdy cardboard box that is slightly larger than the phone's packaging. 
3. Fill any empty spaces inside the box with bubble wrap or packing peanuts to prevent the phone from shifting. 
4. Seal the box with high-quality packing tape. 
5. Attach the shipping label to the box. 
6. Deliver to the courier service for shipping. 

Remember to handle the phone with care and make sure it is properly secured to prevent any damage during transit.

It's really quite impressive...  But - to continue our learning, here is the output from a VERY similar call:

Unfortunately, as an AI language model, I do not have access to the physical dimensions or weight of unreleased products such as the Samsung Galaxy S22 Ultra at this time. It is recommended to wait until official specifications are released by Samsung before attempting to determine the metric cubic volume or average weight, and appropriate packing instructions for this item.

Once again - shows us that the phone was probably released after the training date.


Button 2 - create image
This shows another strength of ChatGPT: generating or finding images.  You can easily ask it to attach a photo of the item, based upon the description that you give it.



Note that this is a similar orchestration to last time.  We needed an additional URL launcher, as JDE cannot store URLs longer than 256 characters, which was a little annoying.  We create a copy of the object in Azure Blob Storage and serve it up from there.
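
The workaround amounts to something like the sketch below (the container and file names are made up; it uses the azure-storage-blob library):

import requests
from azure.storage.blob import BlobServiceClient

# The model returns a long, signed URL - too long for a JDE media object.
long_url = "https://...generated-image.png"  # placeholder for the >256-char URL
image = requests.get(long_url, timeout=60).content

# Copy it to our own blob storage and use the short, stable URL instead.
svc = BlobServiceClient.from_connection_string("<connection string>")
blob = svc.get_blob_client(container="jde-images", blob="item-210.png")
blob.upload_blob(image, overwrite=True)
print(blob.url)  # short enough for JDE's 256-character limit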



The final button - attach anything
This is all about creative thinking.  If you have an idea about the current transaction and want to write some notes, then you can use this button (or a button like it), which has a query prompt.
When you think of a good attachment for the current transaction in JDE, but cannot come up with all of the words needed - then this is for you.

Whatever you type gets attached as a media object.  Very nice.


The future -
Patent pending, our JDE copilot (named 2iC) is going to be enhanced to allow you to interact with your JDE data and provide you with insights that you've not been able to generate before.  The interactions with 2iC will be grounded in your ERP data [role specific], augmented with your JDE usage information [from ERP Insights] and further enhanced with your training materials...  So interacting with 2iC will be your AI-powered and organisation-grounded insights machine.

At Fusion5 we are working on developing 2iC in this fast-paced environment, making sure that the grounding and associated security models are respected, so that the AI's responses are secure and people can only receive insights from what is available to them.  We are looking to get access to the GPT-3 APIs to further enhance the responses and the size of the payloads available to us.

We all need to start thinking about this quantum leap and bring our users and communities along for the ride - injecting AI into our daily lives, which will hopefully enhance and improve some of the more mundane activities that we do.









Monday, 7 November 2022

JDE Bulk change to BATCH QUEUE in versions

I get asked about this all of the time, and thought that I'd write a few notes on the topic (before I forget).  Unfortunately, the queue of a version is written to central objects - it's not in F983051 in a nice and easy place.

Note that this is going to assist you if you need to find where a batch version's queue actually lives, as it exists in a number of locations that are needed at runtime.  It starts in F98761 central objects, gets copied to the active package build central objects (depending on tools release), then gets converted from there into F989999, and finally ends up in cache.

The value is stored in the F98761 BLOB in both central objects and the deployed package, central objects being the main concern.  Once again, this is the BLOB field:

select * from DV920.f98761DVC920A where RSOBNM = 'R42565' and rsvers = 'TEST';


You could tear this down with some Java (CLOB / BLOB) and little-endian handling to change it in SQL - but that is going to be a little difficult.

select utl_raw.cast_to_varchar2(rsrdablob), rsobnm from DV920.f98761DVC920A where RSOBNM = 'R42565' and rsvers = 'TEST';

You'll see that the utl_raw functions in SQL don't help you enough because of the platform independent byte order.  Doh.

Anyways, let's get around it.

All we need to do is create a project, insert all of our batch versions into the project and then save a par file.  You know that a par file is a series of zip files, like a Russian Doll (jeez, I hope that is not racist).  Once you get to the bottom of your file structures, you'll find an entry like:


Which has the following contents:

<?xml version="1.0" encoding="UTF-8" standalone="true"?>
<RDASPEC xmlns:et="http://peoplesoft.com/e1/metadata/v1.0/erptypes" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns="http://peoplesoft.com/e1/metadata/v1.0" Version="TEST" Report="R42565">
  <RDARecord>
    <RDAReport PaperType="LETTER" SubSysWaitTime="0" NextTextID="965" ScaleFactor="0" LimitRowsValue="0" SourceStyleFlags="0" DSName="R42565" POTemplate="T42565" ReportFlags="0" PropertyFlags="1092">
      <RDAPrintInfo RDAQueueName="QBATCH" RDAServerString1="*NONE" PageOrientation="2" PageSource="15" PageHeight="15840" PageWidth="12240" PageSize="1"/>
      <et:JdeLogFont charSet="1" orientation="0" height="-14"/>
    </RDAReport>
  </RDARecord>
</RDASPEC>

Easy hey?  All you need to do is update the RDAQueueName above [I just write some basic code to do that] - I'm so old I use bash... and sed and awk...  And then BOOM - restore the par file that you pack back up in your scripts:

for file in "$expDir"/*.par
do
    # name the extraction directory after the par file, minus its extension
    dir="${file%.par}"
    unzip -q "$file" -d "$dir"
done

That sort of stuff, just lots more.
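
If bash, sed and awk aren't your thing, the queue-name swap itself amounts to something like this sketch in Python (it assumes the par files have been unpacked as above, that the RDA spec files end in .xml once extracted, and the queue names are just examples):

import pathlib

exp_dir = pathlib.Path("/tmp/parfiles")
old, new = 'RDAQueueName="QBATCH"', 'RDAQueueName="QSALES"'  # example queues

# Walk the extracted tree and swap the attribute in place.
for spec in exp_dir.rglob("*.xml"):
    text = spec.read_text(encoding="utf-8")
    if "RDAQueueName" in text:
        spec.write_text(text.replace(old, new), encoding="utf-8")
        print(f"updated {spec}")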

So, you get back to your fat client, ensure the project is at the correct status and do a restore of the par file.  You can use UTB (as above) to check that the currently deployed package and central objects have been updated...  Nice.  Then you go and run the job - and the queue is still wrong!  Doh.

Referring to the manual (and to me), the advice is to run a quick update package, and all will be good.  But if you cannot wait for that, follow these steps.

Go to P98770 and find the deployed package.

Then write up some SQL like this - where DVC920A is your active full package:

select * from dv920.f989999DVC920A where wboid like '%R42565%';

delete from dv920.f989999DVC920A where wboid like '%R42565%';

commit;

Then you run it again and it still does not work... WHAT!!!

Oh, delete the cache from the web instance!

BOOM!


Now, you can look at the F989999 and see the NEW QUEUE defined

select wboid,utl_raw.cast_to_varchar2(WBJPO) from dv920.f989999DVC920A where wboid = 'RPTJOBQUEUE-R42565-TEST';



Note how consistently inconsistent JDE is: if you run the command above, you'll see that the job queue can be read (and written) by a human.  So if you were a REAL cowboy and had to change the queue on the "superfly" - you might be able to manipulate the F989999 and clear cache only.  I would only attempt that if I was desperate (or in a rush... or wanted a rush)...

I'm still working on the script for the actual bulk queue change, and I might just post it here when it's done.  I believe I've provided everything you need to get this done yourself.



Friday, 14 October 2022

MD5 checksum

Not enough blogging...  I know, I know.

Release 23 might get a blog or two - I'm pretty excited about some of the enhancements.  I'll say more later, but calling an orchestration from the new workflow modeler is a relief!!

Now, back to this post - which is super basic, and we all should know it (and probably do).

Checksums are used to ensure that the file you've downloaded has not been tampered with.  Quite often there will be a checksum next to the download link, so that you can verify the download and ensure that no nasty people have changed the file at any point in time.  Nice hey.

You also probably know that a checksum is basically a fingerprint of the file (a nice and short unique string that is a representation of a much larger piece of data).  So you can MD5 a 10MB file and still only get 32 characters to prove its identity.  And since those 32 characters are hexadecimal (16 possible values each), that gives 16^32 = 2^128 possibilities - a lot of uniqueness.

But recently I've needed to validate that some code we have written is making it to various servers (through many proxies and more security than you can poke a stick at) intact.  So now I'm using checksums.

It's SO simple: 

windoze:

C:\Users\shannon.moir\Downloads>certutil -hashfile myAccess.war MD5

MD5 hash of myAccess.war:

62250ed14d149a7dcb7d35d73519ad06

CertUtil: -hashfile command completed successfully.

C:\Users\shannon.moir\Downloads>where certutil

C:\Windows\System32\certutil.exe

Linux

[ec2-user@F5PLAY1 tmp]$ md5sum myAccess.war

62250ed14d149a7dcb7d35d73519ad06  myAccess.war


Wow - so simple!!!  Therefore I can check git against my customers' / servers' results and know that it's not a corruption in some transmission.
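
And if you want to script the comparison (say, checking every server against git automatically), a few lines of Python do the same job as certutil and md5sum:

import hashlib

def md5sum(path: str) -> str:
    # Hash the file in blocks so large files don't eat memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            h.update(block)
    return h.hexdigest()

print(md5sum("myAccess.war"))  # e.g. 62250ed14d149a7dcb7d35d73519ad06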



Wednesday, 10 August 2022

OIC and JDE - the perfect mix (like oil and water)?

I recently did an OIC bootcamp, oh man.  There was so much I liked about the product.  In terms of an integration platform - it seemed to be really well thought out.  I love the fact that you could add human interfaces to integrations to make decisions.  That's right - web forms and human interaction on the fly for an integration!  That is nice.

I also really liked the native connections to VBCS, and the fact that if you are running cloud ERP or any of the other Oracle cloud products, it seems silly not to use Oracle Integration Cloud as part of (or all of) your integration solution.  So much so that we are strategically recruiting in this space and want to own our customers' cloud ERP integrations.

Though, this is a JDE blog and I want to comment on the JDE connector that is available with OIC...  I was just about to start writing accelerators for linking Cloud ERP data with JDE data.  I was ready to create synchronisation of popular transactions, potentially starting with financials.  This was going to be the foundation of capability, modernising my team's consulting skills from JDE to Cloud ERP.  An accelerator for customers to migrate their data and run things SBS - side by side.

Then, it hit me...




Nice interface...  Love the drag and drop.  Love the fact that the orchestration studio developers MUST be working with the OIC team, because this all looks too cool and too similar!





This is really neat and a well-thought-out design.  Like I said though, similar to orchestration.  And what do people find frustrating about orchestration (especially hard-core developers)?  The lack of smashing a keyboard.  We love writing code, finding code, massive changes...  we love regex...  The above does not give you much love for this.

But, I digress.


I want to create a connection and I need to use an adapter (of course - an API is harder to create), and the generic REST connector looks way too "RESTY" to me (if you know JDE's implementation of REST, you will support me here).

Looking at the details of this connector (can't WAIT to paste in my discovery URL, or perhaps the Swagger or OpenAPI definitions of all the orchestrations I want to call...).

Getting excited...



Nearly there!




Huh??  WSDL??  SOAP...  BSSV... Oh no...  I'm crying all over my plans to take over the world...

I need to get my team to write a connection to REST-ify calls to JDE.  This could be done.

If you need to expose some relative resource URIs and wrap them up into some funky orchestration calls...  Otherwise this is not going to be an amazing integration.


Anyway, I'm going to fight the good fight with the REST connection to see if I can get some orchestration calls working.  I'll be sure to post how I go.

But, OIC - please modernise your connector for JDE to support orchestrations.  PowerApps does it SOOOO nicely.  I know PowerApps is not an integration platform, but you know what I mean.





The risks of containerising JDE