Wednesday, 30 August 2023

2iC part 2 - JD Edwards AI assistant via ChatGPT and LangChain and more

I've had my team working on adding value to JD Edwards using ChatGPT, as blogged about previously.  My notes last time showed some basic interactions with ChatGPT - allowing it to create attachments to JD Edwards transactions based upon dynamic queries sent to the generative AI platform.  This was good, but we wanted it to be better.

We've been working out how we can use our own data to help generative AI benefit us.  The smarter people that I work with have started dabbling in LangChain.

The chain is the critical concept in this framework - chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.
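To make that concrete, here is a minimal sketch of such a chain in Python - the prompt wording and the question are just illustrative:

    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    # A chain in its simplest form: take user input, format it with a
    # PromptTemplate, then pass the formatted prompt to an LLM.
    prompt = PromptTemplate(
        input_variables=["question"],
        template="You are a JD Edwards assistant. Answer this: {question}",
    )
    chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)

    print(chain.run(question="What does sales order status 540 usually mean?"))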

LangChain is a framework for developing applications powered by language models. It enables applications that are:

Data-aware: connect a language model to other sources of data

Agentic: allow a language model to interact with its environment

The main value props of LangChain are:

Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not

Off-the-shelf chains: a structured assembly of components for accomplishing specific higher-level tasks

Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.

It makes sense to collaborate when trying to get an advantage from AI, and more specifically generative AI.

We did look at both fine-tuning and embeddings with native ChatGPT, but neither seemed to provide exactly what we needed.

What we decided to do is use LangChain to help us understand more about JDE data.  JDE data comes in many forms, and lucky for us, LangChain is able to use numerous common document types as input - such as SQL, CSV and PDF.

This means that we can produce a chain like the one below - creating a vector store of user-specific JDE data, which allows an end user to securely query the data they have access to, using or chaining the technologies or models of their choice.
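As a rough sketch of that chain (the report file name is hypothetical, and this assumes the LangChain and OpenAI Python packages):

    from langchain.document_loaders import PyPDFLoader
    from langchain.text_splitter import RecursiveCharacterTextSplitter
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.chains import RetrievalQA
    from langchain.chat_models import ChatOpenAI

    # Load a JDE report (here, a hypothetical invoice print PDF), chunk it,
    # and embed the chunks into a local vector store.
    pages = PyPDFLoader("R42565_invoice_print.pdf").load()
    chunks = RecursiveCharacterTextSplitter(
        chunk_size=1000, chunk_overlap=100
    ).split_documents(pages)
    store = FAISS.from_documents(chunks, OpenAIEmbeddings())

    # Chain the store's retriever to a chat model, so answers are grounded
    # in the user's own JDE data rather than the model's training data.
    qa = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(temperature=0), retriever=store.as_retriever()
    )
    print(qa.run("How many unique customers appear in this invoice run?"))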

So I imagine a time when I can use row & column security from JDE to load a user's vector store with all the data they have access to, then allow this data to augment and answer the complex, ad hoc queries a user might have.  Remember that these queries could be structured via APIs or ad hoc via humans.
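A sketch of what that per-user loading could look like - get_rows_for_user() is hypothetical, standing in for whatever secured data access (a view, an orchestration) applies JDE row & column security before anything is embedded:

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS

    def get_rows_for_user(user_id: str) -> list[dict]:
        # Hypothetical: return only the rows and columns this user's
        # JDE security allows them to see.
        raise NotImplementedError

    def build_user_store(user_id: str) -> FAISS:
        rows = get_rows_for_user(user_id)
        texts = [", ".join(f"{k}={v}" for k, v in row.items()) for row in rows]
        # One store per user: they can only ever query data they can access.
        return FAISS.from_texts(texts, OpenAIEmbeddings())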


What we have done with WSJ (Work With Submitted Jobs) and some pretty cool programming and UDOs:


You can see from the above that we are, fairly basically, programming some User Defined Objects to call orchestrations from WSJ.  What these do is essentially mimic what could be done automatically or programmatically.   These buttons give end users the ability to create their own vector store using LangChain APIs, which then allows them to perform ad hoc generative AI queries against the files.

For example, I can select one of the PDF files and choose the "load cache" button.  This uploads the PDF file to my personal location for additional queries, and it does so securely - my file is not training the ChatGPT model.
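Behind a button like that, the orchestration presumably just calls out to a small service.  A sketch of what the receiving end might look like - the endpoint name, payload and storage layout are all assumptions on my part:

    from fastapi import FastAPI
    from pydantic import BaseModel
    from langchain.document_loaders import PyPDFLoader
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS

    app = FastAPI()

    class LoadCacheRequest(BaseModel):
        user_id: str    # the JDE user, so the cache stays personal
        file_path: str  # the report file selected in WSJ

    @app.post("/load_cache")
    def load_cache(req: LoadCacheRequest):
        # Embed the file into a per-user vector store. The file stays local;
        # nothing here is used to train the underlying model.
        docs = PyPDFLoader(req.file_path).load()
        FAISS.from_documents(docs, OpenAIEmbeddings()).save_local(f"stores/{req.user_id}")
        return {"status": "cached", "user": req.user_id}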

I can then query the contents (fairly basically once again - hey be nice... I'm doing all the JDE work here).

I have asked a simple question, but I could have made it quite complex - you can ask it anything.  This has been basically programmed to prompt for input before calling the orchestration that talks to LangChain.


Okay, so wait...  Back the truck up... I've been able to engage LangChain and ChatGPT using orchestration studio to ask ad-hoc questions about either my PDF or CSV output from JDE?  This is amazing.  

I can therefore schedule these queries, or I can keep the [basic] interface ad hoc.

Let's try a little more now.


This is getting cool.  I have a 2,017-page report loaded and can ask any queries about this file.  For example, I asked about the unique customers in the invoice print run.

I then asked: is there consistent pricing amongst all of the invoices?


Oh wait, the final result is not great, given the above.  But good to know.

I am super impressed that I can use UDOs and LangChain to get so much more value out of JD Edwards reports.  This is totally the NEXT LEVEL when it comes to extracting value from data.  You can literally ask for insights from your reports and they can be given to you.

This is the best result so far... This shows why you need to start asking questions and learning about AI...

I asked for a 200-word summary of the report and asked for any anomalies that have been detected:

The report contains multiple invoices from the Worldwide Company to various customers. Each invoice includes details such as the invoice date, customer information, order number, item description, quantity, price, extended price, and tax. The items sold are primarily College Rule Paper.


The invoices are addressed to different customers, including Capital System and ABC Big Box Store, and are shipped to various locations. The invoice dates range from 2016 to 2023, and the prices for the College Rule Paper vary slightly across the invoices.

One anomaly in the data is the reprinting of invoices, which could indicate a correction or duplication. Another anomaly is the presence of unprocessed sales order lines mentioned at the end of the report. This could suggest incomplete transactions or issues in the order processing system.

The report also includes freight charges under the 'Invoice/Type' section, which are not consistently present in all invoices. This inconsistency could be due to different shipping methods or terms agreed with different customers.

Overall, the report provides a comprehensive view of the sales transactions of the Worldwide Company, but the anomalies suggest potential issues that may need further investigation.

You know you want this for your JDE, all you need to do is ask!









Friday, 18 August 2023

Double helix approach to JDE continuous improvement

We will eventually describe how each of the helices above represents a continuous project: one reducing technical debt (making updates easier), the other enhancing how we use JD Edwards.



Why improve continuously?

I think it's well known that a focus on iterative improvement (if you can manage it) will reveal better results with smaller costs and impact of change.  When you are improving continuously, you can train people continuously (a smaller group), you can measure and quantify the improvements that you are seeing, and you get better and better at implementing improvements.

When we talk of improvements, this is simply making end users' jobs easier and more efficient.  We do this in hundreds of different ways, and the JDE toolset allows us to enable much of this efficiency with UDOs or its very flexible development environment - what I'd call a more traditional SDLC (Software Development Life Cycle).  At present, we see that many efficiencies are coming from automation, and the adoption of its bigger brother - enterprise automation.   In essence, the foundation of automation is integration.

Continuous improvement is also baked into the platform.  Though JDE is not SaaS, customer-paced updates are a JD Edwards release ideal.  If you are efficient in how you apply the updates, then you can have all of the benefits of SaaS without the sting in the tail.


What's the SaaS sting?

The sting in the tail of SaaS is commitment…  You are committed, there are no more platform decisions…  If you adapt the solution for you, there is a cost to maintain your customisations over an ever-changing platform - in which you have NO say in the cadence of change or what is going to change.   Some might say that SaaS is great for standard businesses with standard approaches - but buyer beware - everything is subject to change.  The other factor of SaaS is pricing: when you are committed, there are not many buttons and levers for reducing your pricing.  The perpetual license allowed you some flex in costs.  If your business was struggling, you could choose not to pay maintenance and sweat your IT investments - ahhh - NO LONGER with SaaS.  You cannot sweat anything.

I quite often think that SaaS feels nice and new and shiny…  I need to be honest with you, I do not like looking after servers and internet connections - it's not good for my blood pressure.  But my ideal place is JD Edwards in public cloud…  I can update continuously when I want.  I don't look after tin.  I'm secure and I can implement the most amazing cloud services to create native data lakes (e.g. our fast-start JDE -> S3 data lake accelerator) - with minimal investment.

SaaS is not going away, because it’s great for large software vendors - talk about lock-in… Easy to buy, easy to log in…  hard to leave.


Why is integration so important to everything?

Integration is no longer an afterthought; it's a first-class citizen in design ideals.  No new software platform is released without a complete integration solution.  These are generally API-based, push / pull where required, and also batch.

This is an important point: new SaaS subscriptions, or any new software, need integration as a first-class tenet.  You need to be able to report over the data and reconcile it - you also need to make it part of your master data strategy and understand its place in a transaction timeline.  You probably want to ensure that it can connect to your enterprise reporting suite natively too.

Guess what: JDE has a very mature approach to all these integration patterns.  We have RTE (Real Time Events) for pushing data.  We have orchestrations for implementing the VERY important "dumb pipes / smart endpoints" pattern for integrations - APIs etc.  We have type 3 & 4 JDBC drivers and REST-based (loose) connectivity to the tables.  Finally, the good ol' UBE for batch.  Nice.  We also complement this with the antiquated BSSV [please stop] and EDI.  Pretty neat.  Choose the right tool for the right job.



From <https://docs.oracle.com/en/applications/jd-edwards/interoperability/9.2.x/eotin/jd-edwards-enterpriseone-interoperability.html#u30175980> 
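To illustrate the "dumb pipes / smart endpoints" point: an orchestration is just a REST resource, so any client can call it.  A hedged sketch - the orchestration name and input are made up, and the orchestrator path shown is the AIS server's standard resource, but check your tools release:

    import requests

    AIS = "https://ais.example.com:9302"

    # Call a (hypothetical) orchestration exactly as you would any REST API.
    # The smarts - validation, security, business logic - live inside JDE.
    resp = requests.post(
        f"{AIS}/jderest/v3/orchestrator/ORCH_GetCustomerBalance",
        json={"addressNumber": "4242"},
        auth=("JDEUSER", "password"),  # basic auth; token auth also works
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())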

Integration is crucial because we are seeing the rise of "best of breed" digital adoption.  Point solutions are deployed in seconds, finely tuned for their purpose, and can integrate natively.  I'm seeing more of my clients choose point solutions, all cloud-based, and making sure that when implemented they respect master data and security ideals.  Oh yes - this is a huge reason for a strategic approach to integration.  Remember that your enterprise security posture is generally only as strong as the weakest link.

Integration enables automation.  Integration reduces human error.  Integration speeds up business.  Simple.


Summary of above

Of course I'm a JDE guy, so you might take the above with a grain of salt, but what I'm getting at is that JDE is an amazing platform with a guaranteed written roadmap until 2034, rolling year on year.  It's ready for all integration patterns, you can host it on any hyperscaler, and you can do process improvement at your pace. All good…

Now, finally, what do I mean by Double Helix approach to continuous improvement?



The two strands of the helix represent two concurrent ideals / projects for JDE.  Imagine that the blue line is modification reduction (technical debt reduction) and the second (red) is continuous improvement.

These are dual parallel streams that JD Edwards customers should dedicate resources to.  At times you may only focus on technical debt reduction and at other times you might focus on process improvement, but they should work hand in hand.  For those of you who do not have a blue strand (you are without modification), you can run a single helix.

Time is the X axis and effort is the Y axis, so you see that you can continually put effort into either stream, knowing that BOTH are contributing to lowering the TCO for JDE and also improving efficiencies in your enterprise.

Process Improvement

The types of projects that make up continuous improvement are:

• User productivity improvement

○ UI / UDO adoption

§ E1 Pages

§ UX One

§ Search Groups

§ Cafe1

§ Manage Queries

§ Oneview reporting

§ Form personalisation

§ Notifications

§ WatchLists

§ Workflow modeler

§ Form Extensions

§ Logic Extensions

○ Enterprise automation

○ Speed and efficiency

○ Maturity model assessments

○ Alert -> Analyse -> Act

○ ERP Insights & JDE usage analysis

○ Implementing AI / ML

• Integration

○ IoT

○ Process

○ Data


Process improvement is important, but more important is the quantification of benefits.  You cannot quantify improvements in productivity without a baseline, which is why we recommend our customers use ERP Insights to understand their JDE usage and use this data to baseline, measure, and therefore quantify the productivity improvements in JDE.  All process improvement should make users' jobs easier, reduce errors and processing time, and allow users to do value-add tasks.

The Alert -> Analyse -> Act paradigm helps us determine where the value-add can occur.  We want to implement as much alerting as possible…  This means that the ERP is telling us about problems.  Instead of checking the bottom of a 1,000-page integrity report every month, get notifications of exceptions…  The nice thing about simple alerts is that with enterprise automation we can convert them into workflows / integrations for self-healing…  Acting on an alert can be an easy automation…  Instead of the notification we just mentioned, we might automatically add a journal with an orchestration, or an adjustment to balance our accounts due to rounding errors - all with thresholds and reporting.
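A sketch of that self-healing "Act" step, with assumed names throughout - the orchestration and the threshold are illustrative:

    import requests

    TOLERANCE = 0.05  # only auto-correct small rounding imbalances

    def act_on_integrity_alert(account: str, imbalance: float) -> None:
        if abs(imbalance) <= TOLERANCE:
            # Self-heal: post an adjusting journal via a hypothetical orchestration.
            requests.post(
                "https://ais.example.com:9302/jderest/v3/orchestrator/ORCH_CreateRoundingJE",
                json={"account": account, "amount": -imbalance},
                auth=("JDEUSER", "password"),
                timeout=30,
            ).raise_for_status()
        else:
            # Too big to auto-correct: this is where the human "Analyse" happens.
            print(f"Escalate: {account} is out of balance by {imbalance}")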

The nice thing about doing the above is that it could be a citizen developer doing all of this.  There is no package build or deploy - all of the automation above is done by a power user.  Your user base can spend more time on the analyse part of their jobs, which is proactive, value-add work… doing things that algorithms and models cannot do.

My recommendation is to use a service like the Fusion5 maturity model, which will put some laser focus on your usage of JD Edwards.  We'd probably want you to have ERP Insights installed first.  We'll look at your system usage under the microscope for that module, make recommendations for process improvement, and enhance your use of Alert -> Analyse -> Act.  We'll find the best places for enterprise automation and recommend the lightest touch to implement the change.  All of which is going to improve your usage of the module and of JDE enhancements - which assists you in all other modules.


Technical Debt Reduction


The types of projects that assist with technical debt reduction are:

• Removal of modifications

• Adoption of Mobile

• License analysis

• Automated testing

• Archive Data


I'll put a slightly different slant on this: I want you to read technical debt reduction as "how can I make upgrades faster?"  Everyone understands that the variable component of an upgrade is retrofit - nearly everything else is predictable.  So we need to remove or reduce the impact of retrofit, which is done with technical debt reduction.

It's been said over and over that strategic modifications can stay - no problem.  Bolt-on modules are pretty easy to upgrade too; the overhead is knowledge & testing, not too much technical retrofit of modifications.  Changes to core objects, though, need to be kept to a minimum.

Here is a story that I have been told 50 times…  "Let's do our upgrade 'as is', we need to do it quickly and go live…"  Okay, I say - we carry forward all of the technical debt at this point in time.  We hardly do any analysis on the modifications: whether they have been superseded by existing functionality, whether there is a smarter way of doing the code (with UDOs), or whether there is a smarter way of implementing the process with deep functional knowledge.   It's too hard with the pressure of the upgrade project looming…  There is not enough time and it’s hard to estimate.

What if you did all of this work up front?  Whether it is module by module or object by object, have a continuous technical debt reduction project running: removing modifications, changing processing options… making upgrades (updates) easier…  Then, when you get to your update, you'll be surprised by the efficiency.  Remember that UDOs are not your only option for replacing modifications.  You need to think about the improvements possible with orchestration and the ability to use modern mobile platforms to deploy your JDE functionality.  JD Edwards can be overwhelming for users who do not log in all of the time.  Replace this with a simple mobile app (insert favourite mobile app deployment software here) using SSO via JWT to ensure that your data and logic are safe.  The security model enforced by JDE, and therefore by orchestration studio, ensures that you are implementing least-privilege principles over your valuable ERP data.  You also know that it is just going to work after you take release 24 - because of the loose coupling implemented as part of your JDE SDLC.
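For the mobile piece, a hedged sketch of the calling pattern - the endpoint shape is the same orchestrator resource as above, and JWT support depends on your AIS configuration:

    import requests

    def call_orchestration(jwt_token: str, name: str, inputs: dict) -> dict:
        # The mobile app never touches the database - it passes its SSO-issued
        # JWT, and JDE's own security model governs what the user can do.
        resp = requests.post(
            f"https://ais.example.com:9302/jderest/v3/orchestrator/{name}",
            json=inputs,
            headers={"Authorization": f"Bearer {jwt_token}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()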

You know that Oracle tests their software extremely well before it ships, so you know that the standard functionality is going to work.  You need to test the standard code as it applies to your business, and ensure that your loosely coupled UDOs continue to work - then you can start being smug about keeping JDE up to date on a yearly basis, efficiently and predictably.

You might then start looking at automated regression testing and data archival to speed up some of your other painful go-live activities.




