Wednesday, 26 November 2025

How AI Turned Your Best-Kept Secret Into Your Competitive Advantage


The Hidden Gem Nobody Knew They Had

Let's talk about JD Edwards Orchestrator for a minute.

It's genuinely brilliant technology. Released in 2014, it gave you the ability to compose complex business processes using JDE's building blocks - Business Functions, Form Services, Data Services - all through visual configuration. No custom code. No modifications. Just pure, governed business logic.

You could automate virtually anything: price updates, order processing, journal entries, inventory movements, complex multi-step workflows. And it all ran natively in JDE, respecting security, maintaining audit trails, following your business rules.

The catch? Actually using them.

Sure, tools existed to trigger orchestrations from Excel. They worked. Sort of. For $50K+ and a consultant who had to configure each integration point manually. And once that consultant left, good luck making changes. The result? Most organizations built a handful of orchestrations and then... stopped.

Not because orchestrations weren't powerful. But because the interface between "I need this done" and "the orchestration runs" was too complicated for most users to bridge.

That just changed.


The Modern Take: OAuth 2.0 Security Meets Conversational AI

Here's what just became possible, and you can stand this up in hours:

You: "Process these 500 invoices from the email I just forwarded you."

Copilot: "Authenticated as shannon.moir@fusion5.com.au. Processing AP batch orchestration... Validating vendor codes... Checking GL accounts... Complete. 487 invoices posted under your authority, 13 flagged for review - 8 missing PO numbers, 5 over tolerance."

That sentence just:

  • Authenticated you via OAuth 2.0 (you're already signed into Teams/M365)
  • Extracted the spreadsheet from your email
  • Called your AP processing orchestration
  • Executed it under YOUR user credentials (respecting your JDE security profile)
  • Processed hundreds of transactions
  • Returned results in plain English
  • Created a complete audit trail showing YOU initiated this

Zero clicks. Zero forms. Zero credentials shared. Zero security compromises.

Your orchestration didn't change. The business logic didn't change. JDE didn't change.

You just started having a conversation with your business processes.
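Under the hood, the AI's side of that exchange is one authenticated REST call to AIS. Here is a minimal sketch in Python, assuming the standard AIS Orchestrator endpoint pattern (/jderest/v3/orchestrator/{name}); the host, orchestration name, token and inputs are all placeholders:

```python
import json
import urllib.request

def build_orchestration_request(ais_base_url, orchestration, bearer_token, inputs):
    """Build an authenticated call to the AIS Orchestrator endpoint.

    The user's OAuth bearer token rides in the Authorization header, so
    the orchestration runs under their JDE identity, not a service account.
    """
    url = f"{ais_base_url}/jderest/v3/orchestrator/{orchestration}"
    return urllib.request.Request(
        url,
        data=json.dumps(inputs).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical host, orchestration name and inputs, for illustration only.
req = build_orchestration_request(
    "https://ais.example.com",
    "ap_batch_processing",
    "<token-from-entra-id>",
    {"invoiceFile": "forwarded_batch.xlsx"},
)
```

The important line is the Authorization header: the same token the user already holds from Teams/M365 is what AIS sees, which is why JDE security applies unchanged.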


Why This Time It's Different

The Old Tools: Excel Plugins and Expensive Middleware

If you've been around JDE long enough, you've seen the attempts:

The Excel Add-in Approach (Circa 2016-2020):

  • Install a plugin on every user's machine
  • Map columns to orchestration parameters manually
  • Hope nobody changes the spreadsheet format
  • Pay annual licensing per user
  • Call consultants when it breaks
  • Cost: $50K-$200K for setup, $20K+/year maintenance

It worked for exactly one use case, set up by exactly one consultant, used by exactly one person who was terrified to change anything.

The Custom Integration Approach:

  • Build REST API wrapper around orchestrations
  • Write documentation nobody reads
  • Create training materials nobody watches
  • Maintain custom code forever
  • Cost: $100K+ and 6 months of developer time

Both approaches shared the same fundamental flaw: They required users to understand the technology, not just describe what they needed.

The Modern Approach: Conversational + Secure + Fast

This is different because:

1. OAuth 2.0 Authentication (Real Security)

  • Users authenticate once (they're already signed into Microsoft 365)
  • Every action runs under their actual JDE credentials
  • JDE security profiles apply automatically
  • Complete audit trails show who did what
  • No shared passwords, no service accounts, no security theater

2. Natural Language Interface (Real Usability)

  • Users describe what they need in their own words
  • AI maps that to the right orchestration
  • Parameters get filled from context (email attachments, spreadsheets, previous answers)
  • Results come back in language they understand

3. Hours to Deploy (Real Speed)

  • MCP server deployment: 2-3 hours
  • Orchestration enablement: Automatic (if they exist, they're available)
  • User training: "Just ask for what you need"
  • Total time to first conversation: Same afternoon

4. Zero Modification to JDE (Real Governance)

  • Your orchestrations run exactly as designed
  • Business logic stays in JDE where it belongs
  • No custom code, no modifications, no upgrade blockers
  • IT stays in control of what gets built

This isn't replacing the old tools. This is a completely different paradigm.


Real Scenarios: From "It Takes All Day" to "Ask and It's Done"

Scenario 1: The Monday Morning Vendor Price Update

The Old Reality:

Sarah from Purchasing receives a spreadsheet: 2,847 price updates from a major supplier.

With the Excel plugin tool, she still has to:

  1. Open the special Excel file with the plugin
  2. Make sure her columns match exactly
  3. Click "Validate" and wait
  4. Fix the 47 rows that error out
  5. Click "Submit" and pray
  6. Watch the progress bar for 45 minutes

Usually finishes by lunch. If nothing breaks.

The New Reality:

Sarah types in Teams: "Process the price update spreadsheet from Acme Corp."

Copilot: "Found spreadsheet with 2,847 items. Running vendor_price_update_v2 orchestration under your credentials... Complete in 4 minutes. 2,830 prices updated. 17 exceptions flagged - 12 items not found, 5 prices exceed your authorization threshold (forwarded to Purchasing Manager). Changes logged under your user ID."

Same orchestration. Same business logic. Same security. Different interface.

She's done in 5 minutes instead of 4 hours. And she didn't have to remember how to use the special Excel file.

Scenario 2: The AP Processing That Never Ends

The Old Reality:

Month-end means processing 40-50 journal entry spreadsheets from different departments.

With the old custom REST API integration, IT built a web form. Users still have to:

  1. Log into the custom portal (separate credentials)
  2. Upload their spreadsheet
  3. Wait for email confirmation
  4. Check for errors in a different screen
  5. Re-submit corrections
  6. Repeat until it works

The accounting team blocks out two days for this. Every month.

The New Reality:

"Process all month-end journals from the shared folder using my approval authority."

Copilot: "Found 47 files. Authenticating as robert.chen@company.com... Running journal_validation_posting orchestration on each file... Complete. 45 journals posted (batch ID: JE-2025-11-001), all under your user profile. 2 failed validation - Department 450 account 1234 inactive, Department 320 entries don't balance ($1,250 discrepancy). Details sent to your email. Audit trail complete."

Two days became 15 minutes. The orchestration has sophisticated validation logic that IT built once. Now anyone with proper authority can trigger it by asking.

Scenario 3: Customer Orders That Used to Ruin Fridays

The Old Reality:

Jake's biggest customer emails a weekly order spreadsheet. 300-500 items.

The old Excel plugin required him to:

  1. Download their spreadsheet
  2. Open it in the special template
  3. Map their columns to his columns (because they change their format)
  4. Run validation (15 minutes)
  5. Fix errors
  6. Submit batch
  7. Hope it works

Usually took 3-4 hours every Friday afternoon.

The New Reality:

Friday, 3:00 PM: "Create sales orders from the BigCustomer weekly file. Ship-to their distribution center, requested date next Monday, use our standard pricing agreement."

Copilot: "Processing 467 line items under your sales authority... Running customer_order_import_v3 orchestration... Complete. Orders 450123-450589 created (total $3.2M), all assigned to your user ID for commission tracking. 23 items flagged as below safety stock - PO suggestions generated and sent to Purchasing. Customer confirmation email sent. You're done for the week."

Jake makes his 3:30 PM tee time. The orchestration handles customer-specific item mappings, pricing rules, inventory checks, and order creation. IT built it once. The AI makes it conversational.


The Chaining Effect: Conversations That Compose Business Processes

Here's where it gets transformative.

Once your orchestrations are conversational, you can compose them into workflows without writing new code.

Example: The Supply Chain Cascade

"Check inventory for next week's production schedule, flag anything under safety stock, generate PO suggestions for approved vendors, and send the summary to procurement."

That sentence just:

  1. Called your production_schedule_analysis orchestration
  2. Piped results to inventory_status_check orchestration
  3. Fed those results to smart_po_generation orchestration
  4. Triggered email_procurement_summary notification
  5. All authenticated under your credentials
  6. All audited in JDE

Four orchestrations. Built separately. By different people. For different purposes.

The AI composed them into a workflow because you described the outcome you wanted.

This is English as a programming language. The orchestrations are your functions. The conversation is your code.
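One way to picture that composition: each orchestration is a function, and the conversation supplies the pipeline. A toy sketch using the orchestration names from the example above (the stub implementations are invented; in reality each step would be an AIS call):

```python
from functools import reduce

def production_schedule_analysis(ctx):
    # Stub: in reality this calls the orchestration via AIS.
    return {**ctx, "scheduled_items": ["ITEM-100", "ITEM-200"]}

def inventory_status_check(ctx):
    return {**ctx, "below_safety_stock": ["ITEM-200"]}

def smart_po_generation(ctx):
    return {**ctx, "po_suggestions": [f"PO for {i}" for i in ctx["below_safety_stock"]]}

def email_procurement_summary(ctx):
    return {**ctx, "summary_sent": True}

def run_pipeline(steps, ctx):
    """Feed each orchestration's output into the next - the AI's plan, in code."""
    return reduce(lambda acc, step: step(acc), steps, ctx)

result = run_pipeline(
    [production_schedule_analysis, inventory_status_check,
     smart_po_generation, email_procurement_summary],
    {"week": "2025-W49", "user": "SMOIR"},
)
```

The AI builds the equivalent of run_pipeline on the fly, from your sentence.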

Another Example: The New Hire Cascade

"New employee starts Monday - Emma Wilson, Analyst role, Department 450, reports to badge 10234, standard benefits package, laptop and access to Finance systems."

That cascades through:

  • HR employee_onboarding orchestration (creates master record)
  • IT provisioning_automation orchestration (triggers Azure AD, assigns licenses)
  • Department assignment_workflow orchestration (manager notification, cost center allocation)
  • Benefits enrollment_automation orchestration (adds to next enrollment window)
  • Asset management orchestration (creates laptop request)
  • Email notification orchestration (welcome email to Emma, confirmation to manager)

Six orchestrations. Five different systems. One sentence. Under your authority.

The orchestrations don't know about each other. But the AI knows what each one does and can compose them into an onboarding process.

You just onboarded a digital employee to onboard a real employee.


The Vision: Prescriptive Assistants and Digital Employees

This isn't about replacing your existing tools. This is about creating a new class of worker: prescriptive AI assistants that act as your digital workforce.

Meet Your New Digital Employees

Dana: Your AP Processing Assistant

  • Monitors incoming invoices
  • Knows your approval thresholds
  • Validates against POs automatically
  • Routes exceptions to the right people
  • Runs your AP posting orchestration when everything checks out
  • Available 24/7, never takes vacation, doesn't forget the month-end deadline

Marcus: Your Inventory Management Assistant

  • Watches inventory levels constantly
  • Knows your safety stock rules by item and location
  • Predicts stockouts before they happen
  • Generates PO requisitions using your approved vendor list
  • Routes to the right approver based on dollar threshold
  • Triggers your inventory_replenishment orchestration automatically

Sofia: Your Order Management Assistant

  • Monitors incoming customer orders from all channels
  • Validates against credit limits and inventory
  • Flags orders that need special handling
  • Executes your order_processing orchestration for routine orders
  • Escalates complex orders to humans with full context
  • Learns your customers' patterns and preferences

These aren't chatbots. These aren't RPA bots clicking through screens.

These are digital employees with judgment, context, and the authority to execute your business processes through orchestrations.

The Onboarding Process: Hours, Not Months

Here's what's revolutionary: You can stand up a digital employee in an afternoon.

Morning:

  1. Identify a repetitive process (AP invoice processing, price updates, order entry)
  2. Build or dust off an orchestration (you probably already have one)
  3. Deploy the MCP server (2-3 hours if you're following the guide)
  4. Configure OAuth authentication (already done if you're using M365)

Afternoon:

  1. Test: "Process these test invoices"
  2. Refine: Adjust the orchestration if needed
  3. Document: "Dana handles AP processing for invoices under $10K"
  4. Enable: Users can now ask Dana to process invoices

Next Day:

  • Dana processes 200 invoices before anyone arrives
  • Flags 15 exceptions for human review
  • Sends summary report at 8 AM
  • Your team spends the day on exceptions, not data entry

Total setup time: 4-6 hours.

Compare that to:

  • Custom Excel plugin: 3 months and $50K
  • REST API integration: 6 months and $100K+
  • RPA bot development: 2-3 months and ongoing maintenance nightmares

The Multiplication Effect

Once you have one digital employee working, adding the next one is even faster.

Your orchestrations become a library of capabilities. New assistants can mix and match them.

Month 1: Dana (AP Processing)
Month 2: Marcus (Inventory) reuses some of Dana's notification orchestrations
Month 3: Sofia (Order Management) reuses Dana's validation patterns and Marcus's inventory checks
Month 4: Your team proposes three more assistants because they see what's possible

Within six months, you have a digital workforce handling routine operations while your human team focuses on exceptions, strategy, and growth.

This is the vision: A hybrid workforce where digital employees handle the predictable, and humans handle the exceptional.


The OAuth 2.0 Difference: Security That Actually Works

Let's talk about why this is fundamentally more secure than the old approaches.

Old Approach: Security Theater

Excel plugins: Shared service account, hard-coded credentials, everyone uses the same access level.
Custom APIs: Service account with elevated privileges; hope nobody abuses it.
Web portals: Separate authentication system; users forget passwords, IT resets them constantly.

The result? Either too restrictive (nobody can do their job) or too permissive (everyone has admin rights).

New Approach: Real Security

OAuth 2.0 + Azure Entra ID + JDE Security:

  1. User authenticates once (they're already signed into M365)
  2. Azure validates their identity (your existing MFA, conditional access, all applies)
  3. MCP server receives their token (time-limited, cryptographically signed)
  4. Maps to their JDE user (shannon.moir@fusion5.com.au → SMOIR in JDE)
  5. Orchestration runs under THEIR credentials (with their security profile, their approvals, their limits)
  6. JDE logs it under their user ID (complete audit trail)
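The mapping in step 4 is the only JDE-specific glue. A sketch of one common convention (first initial plus surname, uppercased); your site's mapping may well differ, e.g. a lookup table or an Entra ID attribute:

```python
def map_to_jde_user(upn: str) -> str:
    """Derive a JDE user ID from a Microsoft 365 UPN.

    Illustrative convention only: first initial + surname, uppercased,
    truncated to 10 characters (JDE user IDs are short fixed-width codes).
    """
    local = upn.split("@")[0]            # e.g. shannon.moir
    first, _, last = local.partition(".")
    return (first[:1] + last).upper()[:10]
```

Whatever convention you choose, keep it deterministic and auditable: the whole point is that every orchestration run traces back to exactly one person.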

If you can't do it in JDE, you can't do it through the AI.

Your AP clerk can process invoices under $10K (their authority limit). Your controller can process anything (their authority is higher). Your warehouse worker can check inventory but not change prices (read-only on pricing tables).

The AI doesn't get special privileges. It impersonates the user making the request.

This means:

  • Proper separation of duties
  • Real audit trails
  • No credential sharing
  • No elevation of privilege attacks
  • SOX compliance maintained
  • Your security team can actually sleep at night

What to Actually Watch Out For

Since security is handled properly, here's what you should actually think about:

1. Change Management: Your Team Might Resist the Help

Your most experienced users might be skeptical: "I've been doing this for 15 years. Why do I need AI?"

The answer: You don't need it to do your job. You need it so you can do more than your job.

The AP clerk who's an expert at processing invoices? Now she has time to analyze vendor spend patterns and negotiate better terms.

The inventory manager who knows the system inside-out? Now he can focus on supplier relationships instead of data entry.

Digital employees handle the routine. Humans get promoted to strategic.

2. Over-Automation: Not Everything Should Be Automated

Just because you can automate something doesn't mean you should.

Good candidates for automation:

  • High volume, low complexity (invoice processing, order entry)
  • Rule-based decisions (reorder points, price updates)
  • Data validation and transformation
  • Scheduled, predictable workflows

Bad candidates for automation:

  • Strategic decisions with incomplete information
  • Edge cases requiring human judgment
  • Processes that change frequently (automate after they stabilize)
  • Anything involving complex ethical considerations

The goal isn't zero humans. It's humans working on human problems.

3. Orchestration Quality: Garbage In, Amplified Out

The AI will make your orchestrations 10x more used.

If your orchestration has a bug, you're about to discover it. Fast.

The good news: High usage means fast feedback. You'll improve your orchestrations quickly because you'll see how they're actually being used.

The bad news: You need to be ready to iterate. Don't build the perfect orchestration over 6 months. Build a good one in 2 weeks, deploy it conversationally, learn from usage, improve.

4. Documentation: It Actually Matters Now

Nobody read your orchestration documentation before because nobody used the orchestrations.

Now people will use them constantly. But they won't read documentation - they'll just ask the AI.

Make sure your orchestrations have:

  • Clear names that describe what they do
  • Good descriptions that explain their purpose
  • Defined input parameters with sensible names
  • Expected output documented

The AI uses this to match user requests to the right orchestration. "Process vendor payments" should map to "vendor_payment_batch_v2" not "BSFN_CUSTOM_JOB_17."
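In practice that means treating orchestration metadata like an API contract. A sketch of the kind of descriptor the AI matches requests against (the field names here are illustrative, not the actual Orchestrator Studio schema):

```python
# Illustrative metadata descriptor - not the actual Orchestrator Studio schema.
vendor_payment_batch_v2 = {
    "name": "vendor_payment_batch_v2",
    "description": "Validates and posts a batch of vendor payments. "
                   "Flags rows that fail PO matching or exceed tolerance.",
    "inputs": {
        "paymentFile": "Spreadsheet or CSV of payments to process",
        "postingDate": "Defaults to today if omitted",
    },
    "outputs": {
        "postedCount": "Number of payments posted",
        "exceptions": "Rows flagged for human review, with reasons",
    },
}
```

A user asking to "process vendor payments" matches that description; nothing in "BSFN_CUSTOM_JOB_17" gives the AI anything to match on.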


Getting Started: Your First Digital Employee in One Day

Morning (9 AM - 12 PM): Deploy the Infrastructure

Hour 1-2: Deploy MCP Server

  • Follow the deployment guide (it's actually straightforward)
  • Provision Azure Container App
  • Configure connection to your JDE AIS server
  • Set up API Management gateway

Hour 3: Configure OAuth

  • Register application in Azure Entra ID
  • Set up API permissions
  • Configure token validation
  • Test authentication flow

Total: 3 hours (with the guide, following the steps)

Afternoon (1 PM - 5 PM): Enable Your First Use Case

Hour 4: Inventory Your Orchestrations

  • List what orchestrations you already have
  • Pick one painful process to start with
  • Make sure the orchestration has good metadata

Hour 5: Test Conversationally

  • Connect to Copilot/Teams/Power Platform
  • Try: "Process the test vendor price update file"
  • Verify it calls the right orchestration
  • Check that security and audit trails work

Hour 6-7: Refine and Document

  • Adjust orchestration if needed based on testing
  • Create simple guidance: "Ask Dana to process AP invoices"
  • Test with real users

Hour 8: Deploy to Production

  • Enable for pilot user group
  • Monitor usage and feedback
  • Iterate based on what you learn

Next Day:

  • Your first digital employee is processing real work
  • Users are asking for it conversationally
  • You're collecting data on usage patterns
  • You're already planning the next digital employee

Total time: One day.

Week 2: Add Your Second Digital Employee

It's faster the second time because:

  • Infrastructure is already deployed
  • You understand the patterns
  • Users trust the approach
  • You have orchestrations ready to enable

Time: 2-3 hours to enable another use case.

Month 2: You Have a Digital Workforce

  • 5-10 digital employees handling routine operations
  • Your human team focusing on exceptions and strategy
  • Usage data showing ROI in real-time
  • Business units asking for their own digital assistants

This is the multiplication effect of conversational orchestrations.


The Conclusion: Your Hidden Assets Just Became Your Competitive Advantage

For years, your orchestrations have been your best-kept secret. Powerful capabilities that only a few people knew how to trigger.

That just changed.

Those orchestrations are now conversational. Anyone with the right authority can use them by describing what they need. Your business logic became accessible.

Your digital transformation didn't require replacing JDE. It required giving it a voice.

The competitive advantage isn't the AI. It's the business logic you've already built in JDE, now available to everyone who needs it.

Your competitors are still:

  • Manually entering data
  • Paying consultants $200/hour
  • Waiting months for custom integrations
  • Training users on complex systems

You're having conversations with your business processes.

Welcome to the age of the digital workforce.


Start Tomorrow Morning

9:00 AM: Identify one painful, repetitive process
10:00 AM: Check if you have an orchestration for it (you probably do)
11:00 AM: Deploy the MCP server (following the guide)
2:00 PM: Test conversationally
3:00 PM: Deploy to pilot users
Next Day: Watch your first digital employee process work

Total investment: One day. Return: Immediate and ongoing.

The barrier wasn't the technology. It was the interface.

That barrier just disappeared.


Written by someone who watched brilliant orchestrations sit unused for years because they were "too hard to trigger." Not anymore.

November 2025

Tuesday, 25 November 2025

We Built a World-First: Connecting JD Edwards to AI Agents via MCP

 

And yes, you can now ask an AI "What's our inventory level for part X?" and get a live answer from JDE.


If you've been in the JD Edwards world for any length of time, you've probably had this conversation: "Can we just connect [insert shiny new technology here] to JDE?" And the answer is usually some variation of "Yes, but it'll take 6-12 months and cost more than your first house."

Well, I'm genuinely excited to share something we've been working on at Fusion5 that changes that equation entirely.

The Problem We All Know Too Well

JD Edwards is a phenomenal system of record. It's robust, it's proven, and it holds the truth about your business. But let's be honest — it wasn't built for the age of conversational AI. If you want to:

  • Let a business user ask a quick question about outstanding invoices
  • Have an AI assistant pull live order data for a customer service rep
  • Automate a workflow that needs real-time JDE data

...you've traditionally been looking at custom development, middleware, orchestrations, and a lot of billable hours.

Meanwhile, AI assistants like Microsoft Copilot, Claude, and others are revolutionising how people interact with systems. But they can't just "talk to" JDE. They don't understand PS_TOKENs, Julian dates, or why your customer table is called F0101.

Enter the Model Context Protocol (MCP)

For those who haven't come across it yet, MCP is an open standard developed by Anthropic that essentially creates a universal adapter between AI agents and external systems. Think of it as USB-C for AI — a standardised way for any AI to connect to any data source.

The catch? Nobody had built one for an ERP system. Until now.

What We Built

We've developed what we believe is the world's first MCP server for an ERP platform — specifically for JD Edwards EnterpriseOne.

Without getting into the weeds (this is a blog, not a technical manual), here's what it does:

It translates AI requests into JDE-speak. When an AI agent asks for "customers in Australia with outstanding invoices over $10,000," our MCP server figures out that means querying F0101 for country code AU, joining to F03B11, filtering on open amounts, and handling all the Julian date conversions along the way.

It handles authentication properly. Every action happens under a real user's identity. No shared service accounts, no security shortcuts. Your JDE role-based security still applies — if a user can't see payroll data in JDE, the AI can't retrieve it for them either.

It speaks JDE fluently. We've embedded comprehensive metadata about JDE's tables, fields, and business functions. The system knows that ALPH means "Alphabetic Name" and that F4211 is your Sales Order Detail. This means the AI can understand business terms and translate them correctly.

It covers the full AIS API. Data queries, form services, file attachments, business functions, reports, orchestrations — if JDE's AIS can do it, our MCP server can expose it to AI agents.
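The Julian date conversions mentioned above are a good taste of the JDE-speak involved. JDE stores dates in CYYDDD format: a century flag (0 for 19xx, 1 for 20xx), a two-digit year, then the day of the year. A small conversion sketch:

```python
from datetime import date

def to_jde_julian(d: date) -> int:
    """Convert a calendar date to JDE's CYYDDD Julian format."""
    return (d.year - 1900) * 1000 + d.timetuple().tm_yday

def from_jde_julian(j: int) -> date:
    """Convert a JDE CYYDDD Julian value back to a calendar date."""
    year = 1900 + j // 1000
    return date.fromordinal(date(year, 1, 1).toordinal() + j % 1000 - 1)

print(to_jde_julian(date(2025, 11, 26)))  # 125330
```

The MCP server does this silently in both directions, so neither the AI nor the user ever sees a raw 125330.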

What Does This Actually Look Like?

Here are a few scenarios that are now possible:

Conversational queries: A finance controller asks their AI assistant: "Show me all customers in Australia with outstanding invoices over $10,000." The AI calls our MCP server, which handles the multi-table query, and returns a formatted summary. No JDE screens opened. No SQL written.
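Behind that summary, the MCP server assembles something like an AIS data-service request. A sketch of the request-body builder, assuming the published AIS data service shape (targetName/dataServiceType/query); the table, field aliases and filter values here are illustrative:

```python
def build_dataservice_body(table, fields, conditions):
    """Assemble an AIS data-service request body.

    `conditions` is a list of (fieldAlias, operator, value) tuples.
    Shape follows the AIS data service API; values are illustrative.
    """
    return {
        "targetName": table,
        "targetType": "table",
        "dataServiceType": "BROWSE",
        "returnControlIDs": "|".join(f"{table}.{f}" for f in fields),
        "query": {
            "matchType": "MATCH_ALL",
            "condition": [
                {
                    "controlId": f"{table}.{field}",
                    "operator": op,
                    "value": [{"content": str(val), "specialValueId": "LITERAL"}],
                }
                for field, op, val in conditions
            ],
        },
    }

# Open invoices over $10,000 (field aliases shown for illustration).
body = build_dataservice_body("F03B11", ["AN8", "AAP"], [("AAP", "GREATER", 10000)])
```

The AI never sees this payload; it asks for "outstanding invoices over $10,000" and the server does the translation.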

Report generation via chat: A sales manager says: "Generate a PDF sales report for January 2025." The AI finds the appropriate JDE batch report, submits it with the right parameters, waits for completion, and returns a download link.

Business function execution: A customer service rep asks: "Calculate the shipping cost for sales order 12345 with express delivery." The AI calls the appropriate JDE business function and returns the calculated freight amount — using JDE's own logic, so the numbers are correct.

Data discovery: A power user building a report asks: "What fields are in the Purchase Order header table?" The AI returns a list of fields with their business descriptions, making JDE's data model more accessible.

Why This Matters

JDE becomes AI-enabled without a replacement project. You don't need to migrate to a new ERP or wait for Oracle to build this. Your existing JDE investment gains AI capabilities today.

The UI becomes optional. Not every interaction needs to go through JDE screens. Business users can get what they need conversationally, through Teams, through Power Platform, through whatever interface makes sense for them.

Development time collapses. Integration projects that would have taken months can now be achieved in weeks. The MCP server handles the complexity — you just need to configure and connect.

Security stays intact. This isn't a backdoor into JDE. It's a secure, auditable extension that respects your existing security model.

The Bigger Picture

This is part of a broader shift we're seeing in how enterprises interact with their systems of record. The AI doesn't replace JDE — it makes JDE more accessible, more useful, and more integrated into modern workflows.

For JDE shops that have been worried about being "left behind" in the AI wave, this is significant. Your ERP can now participate in AI-driven processes alongside your newer cloud systems.


What's Next?

We're continuing to enhance the platform — multi-environment support, bulk operations, real-time event feeds, and tighter integrations with Power Platform and Microsoft Copilot are all on the roadmap.

If you're interested in learning more about what this could mean for your organisation, reach out to Fusion5's Innovation Labs. We're genuinely excited about where this is heading.


Shannon Moir is Director of AI at Fusion5. When not connecting legacy systems to futuristic AI, he can occasionally be found explaining to people that F0101 is actually a very sensible name for a table.

Tuesday, 24 June 2025

The risks of containerising JDE

To container or not container?

We've looked into containerising JDE for a long time. We've had it running in the lab, and we've done extensive performance testing too. But we have struggled to make the leap when it comes to our customers' production environments. The technology should be supported, IMHO, but Oracle does not support it: they do not test any of their updates or patches on a containerised implementation. So would I risk my customers' uptime when I cannot get unequivocal support from my primary vendor (Oracle)? Probably not.

Also, you should critically evaluate your WebLogic licensing, as it can get expensive when deployed on the wrong cloud services.

Problem 1: WebLogic licences

1. Oracle Licensing Model for WebLogic

Oracle WebLogic is typically licensed in one of two ways:

  • Per Processor (CPU) License – based on the number of Oracle-licensed processor cores

  • Named User Plus (NUP) – based on the number of users, with minimums tied to processor count

When containerising, Per Processor is the model most affected.


How CPU Count is Calculated in Containers

Oracle’s policy is clear: Oracle does not recognise container limits as a licensing boundary unless you're using an Oracle-approved hard partitioning technology.

This means:

If you deploy WebLogic inside Docker or Kubernetes, Oracle may count all physical CPU cores on the host unless you use a licensing-compliant method to restrict it.

Example:

  • You run WebLogic in a container limited to 2 vCPUs on a VM with 64 cores.

  • Oracle may still require you to license all 64 cores, unless you use an approved virtualisation technology (like Oracle VM or physical partitioning on Oracle SPARC hardware).
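A back-of-envelope calculation makes the exposure concrete. Under the per-processor model, licensable processors = physical cores × core factor (commonly 0.5 for x86 chips, per Oracle's core factor table - confirm your own hardware's factor), and the container's vCPU limit never enters the formula:

```python
def licensable_processors(physical_cores: int, core_factor: float = 0.5) -> float:
    """Per-processor licences Oracle counts on a soft-partitioned host.

    The container's vCPU limit is deliberately absent from this formula:
    under Oracle's partitioning policy it does not reduce the count.
    """
    return physical_cores * core_factor

# The 64-core host from the example above, running a 2-vCPU container:
print(licensable_processors(64))  # 32.0 processor licences, not ~1
```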


Oracle’s Stand on Virtualisation and Containers

Oracle’s Partitioning Policy document explicitly states:

"Oracle does not recognise soft partitioning (e.g., cgroups, Docker limits, Kubernetes node selectors) as a means to limit licensing requirements."

So:

  • Docker/K8s CPU limits do not restrict licensing scope

  • Hard partitioning (e.g., Oracle LDOMs, IBM PowerVM) is required to reduce licensable CPU

In summary:

  • Can you containerise WebLogic? Yes, technically, but licensing must be handled carefully.
  • How is CPU count calculated? Oracle counts all cores on the host unless hard-partitioned using approved methods.
  • What are the risks? Over-licensing or non-compliance in audit scenarios.
  • Best practices? Use OCI or hard-partitioned environments; avoid relying on Docker/K8s limits alone.


If you do not licence WLS with Technology Foundation (and many customers do not), then you cannot use any public Kubernetes or Docker service, as their soft partitioning is not recognised by Oracle. That puts you at risk in a licence audit.

Given the above, you pretty much need to run Docker or Kubernetes on a dedicated host, which erodes much of the availability gain that containers promised in the first place.

Problem 2: You are running an unsupported architecture

I think the risks are most apparent with the latest features, especially the more technical components: zero-downtime package deployments, filesystem integrations, file-naming techniques, and a few other troublesome edge cases that need additional configuration and support. I'd only do it for a customer who didn't think support was important (because they'd stopped paying maintenance, for example).

E1: OCI: Support Statement for Running Containerized JDE on Oracle Cloud Infrastructure (Doc ID 2421088.1)

... While the product development team will be available to actively collaborate with your “containerization of JD Edwards” project, we make no commitments right now that any issue that is specific to containerized deployment will be addressed under standard support model. In other words, if the issue cannot be replicated in a non-containerized environment, the product development team may or may not provide a fix for that...






Tuesday, 4 February 2025

Our AI infused JDE helper - can be yours

For a small monthly cost, we can load all of your JD Edwards manuals into our secure Azure-based vector DB and allow you to have a personalised JDE AI assistant. Forget the old ways of providing training: use all of the assets that you currently own.

Here is how it works - just have a turn:

https://capps-backend-7hl6h2whmhtla.jollyplant-40694b9e.australiaeast.azurecontainerapps.io/#/

It has a chat mode and an "ask a question" mode.

This is a really nice way of getting your JDE users better at prompting AI, which, as we already know, is an important life skill. When you need better information, you'll get better at prompting.

Remember the RISEN acronym:

1. Role

Definition: Clarify the role or perspective the response should take. This can include specifying whether the prompt should be answered from the viewpoint of an expert, a neutral observer, or another defined persona. 

Example: For a prompt aimed at providing investment advice, the role might be defined as that of a financial advisor.

2. Instructions

Definition: Provide clear, direct instructions on what the prompt needs to accomplish. This typically involves stating explicitly what the response should include or address. 

Example: "List the top three risks of investing in emerging markets."

3. Steps

Definition: Outline the steps or the logical sequence in which the response should be structured. This helps in organizing the response in a coherent and logical manner. 

Example: "Start with a brief introduction to emerging markets, followed by a detailed analysis of each identified risk, and conclude with a summary."

4. End goal

Definition: Define the ultimate purpose or the actionable outcome expected from the prompt. This helps in aligning the prompt with the desired outcome or decision-making process. 

Example: "The end goal is to help an investor understand potential challenges in emerging markets to make an informed investment decision."

5. Narrowing

Definition: Narrow the focus of the prompt to avoid broad or overly general responses. This involves setting boundaries or constraints to hone in on the most relevant and specific information. 

Example: "Focus only on economic and political risks, excluding environmental factors."


Final Example Using RISEN

Prompt:

Role: As a financial advisor,

Instructions: provide an analysis of the current risks in investing in emerging markets.

Steps: Begin with a definition of what constitutes an emerging market. List and explain the top three economic and political risks. Use recent data to support your points and conclude with a brief summary of your analysis.

End goal: Enable potential investors to gauge whether investing in emerging markets aligns with their risk tolerance and investment goals.

Narrowing: Limit your discussion to economic and political risks; do not include social or environmental risks.

Final Prompt to the Model: 

"Assuming the role of a financial advisor, provide a comprehensive analysis of the current economic and political risks associated with investing in emerging markets. Start by defining 'emerging markets,' then identify and elaborate on the top three risks, supported by the most recent data. Conclude with a summary that helps potential investors understand these risks in the context of their personal investment strategies. Focus solely on economic and political aspects, excluding any social or environmental considerations."

My final prompt is WAY cooler.  Look how I can coach the model to use my specific JDE instance to coerce any URLs that it replies with!  It's like programming with words...

"Please be as comprehensive as you can be.  Assuming the role of a JDE administrator provide a comprehensive way of preventing users from being able to run certain applications in JDE. Start by describing the different types of security that are available in JD Edwards. conclude with options available to prevent users from running an application.   please provide a shortcut to the JDE work with user/role security application as part of your response.   If there are any URL's in what is returned that contains JDE, please substitute the domain component with https://f5dv.fusion5.cloud:443/jde/ShortcutLauncher?OID=<PROGRAM NAME>. Where <PROGRAM NAME> is the JDE application name, starting with a P."

This structured approach ensures the prompt is clear, focused, and aligned with the intended output, making it a powerful tool for guiding AI or any responsive system.
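If you find yourself writing RISEN prompts often, the structure can be assembled programmatically. A minimal sketch (the function and field names here are just illustrative, not part of any product):

```python
def build_risen_prompt(role, instructions, steps, end_goal, narrowing):
    """Assemble a RISEN-structured prompt from its five parts."""
    return " ".join([
        f"Assuming the role of {role},",
        instructions,
        steps,
        f"End goal: {end_goal}",
        f"Narrowing: {narrowing}",
    ])

# Recreating the financial-advisor example from above
prompt = build_risen_prompt(
    role="a financial advisor",
    instructions="provide an analysis of the current risks in investing in emerging markets.",
    steps="Begin with a definition of an emerging market, then explain the top three risks.",
    end_goal="Enable potential investors to gauge their risk tolerance.",
    narrowing="Limit the discussion to economic and political risks.",
)
print(prompt)
```

The payoff is consistency: every prompt your users send carries all five RISEN parts, so quality doesn't depend on anyone remembering the acronym.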




Remember that you also have a pile of tuning options, such as increasing the reference count.  There is also a way of ensuring you use as much default prompt as your instance can handle, which is how I'd build up my instance.




If you've made it this far - let's be honest - I think you want your own!  Get in contact.


Tuesday, 29 October 2024

Extending JDE to generative AI

I don't do a lot of work these days that would help me create a decent blog, especially one for the CNC audience.  Although I think there are a couple of videos in this that will resonate.

I've been able to use and configure an existing Azure template to provide JDE customers with secure access to their documentation and data via Generative AI.  This project (https://github.com/microsoft/PubSec-Info-Assistant) is available to anyone.  It did take me quite a while to have it built and deployed to my own private repo.  But it's there now!


I could probably make this public, but I have uploaded a bunch of data that should not be made public, so in this instance I'll keep it to myself.

The portal gives you the ability to upload any data; it then vectorises the data and creates what is your own personal RAG-based assistant.  (Vectorising data means converting data - text, images, etc. - into numerical arrays, or vectors, so that algorithms can process and analyse it efficiently, especially in machine learning and deep learning tasks.  RAG, Retrieval-Augmented Generation, combines information retrieval with generative models: it retrieves relevant data from external sources such as databases or documents to enhance the accuracy and context of the generated response, creating answers based on both learned knowledge and up-to-date information.)
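To make the retrieval idea concrete, here is a toy sketch of the RAG retrieval step. Real systems use an embedding model and a vector database; this uses plain term-frequency vectors and cosine similarity just to show the mechanics (the documents and function names are illustrative only):

```python
import math
from collections import Counter

def vectorise(text):
    """Turn text into a term-frequency vector - a toy stand-in for a
    real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A tiny "knowledge base" of uploaded documents
docs = [
    "JDE security workbench controls application access",
    "Orchestrations expose business logic as REST endpoints",
]

def retrieve(question):
    """The R in RAG: return the document most similar to the question.
    The generative model would then answer using this as context."""
    q = vectorise(question)
    return max(docs, key=lambda d: cosine(q, vectorise(d)))

best = retrieve("how do I control user access in security workbench")
```

The retrieved chunk is then stuffed into the model's context window alongside the question, which is why answers can cite your own manuals rather than the model's training data.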

The best way I can understand a technology is to use it in anger.  I uploaded about 250 documents that I had immediate access to, so that I could test the LLM.  It is pretty amazing.  I can ask specific questions about JDE data and get some accurate results.  Some of it is rubbish, but what you learn is that you must get better at prompting.



I think that RISEN is a handy way of remembering how you should prompt.

Anyway, here are a couple of videos that'll show you the portal working over my data and the types of things I was able to ask:

This first video shows the easy process of uploading data:


The second video is all about asking questions:


I hope this makes sense to you, happy to upload more if needed.

I do want you all to know how much this is costing (I'm going to shut it down this week).



So it's costing 2K a month.  That is very reasonable for a fully operational and productive JDE-focussed assistant.  There is some cost fine-tuning that can be done too.




Wednesday, 22 November 2023

Heads up using native Azure token for SSO to JDE

It's cheap - yeah?  Cheerful - yeah... but is using a native Azure token for logging into JDE reliable?  - NO...  

Please read this and understand that you cannot have your JAS servers trusting a rolling cert.  Therefore you need some level of intermediate service that does the auth to Azure and creates a JWT that JDE trusts: https://learn.microsoft.com/en-us/entra/identity-platform/signing-key-rollover

Even if you checked it every 5 minutes (as per the above) and then automatically imported the new certificate into your certificate store (easy), it seems that you need to restart JDE for the new certificate to be loaded - a complete JAS outage.
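The detection side of that 5-minute check is straightforward; it's the restart that hurts. A sketch of the comparison, using illustrative JWKS payloads rather than a live call to the Microsoft metadata endpoint (in practice you would poll the JWKS URI from the Entra OpenID discovery document):

```python
def rolled_keys(cached_jwks, fresh_jwks):
    """Compare the key IDs (kid) in two JWKS documents and report
    additions/removals - a rolled signing key shows up here."""
    cached = {k["kid"] for k in cached_jwks["keys"]}
    fresh = {k["kid"] for k in fresh_jwks["keys"]}
    return {"added": fresh - cached, "removed": cached - fresh}

# Illustrative payloads; real kid values come from Microsoft's metadata
old = {"keys": [{"kid": "abc123"}, {"kid": "def456"}]}
new = {"keys": [{"kid": "def456"}, {"kid": "ghi789"}]}

delta = rolled_keys(old, new)
if delta["added"] or delta["removed"]:
    # Key has rolled: import the new cert into the store (the easy part),
    # then restart JAS so it is actually loaded (the painful part)
    print("signing key rolled:", delta)
```

Spotting the roll is trivial; the problem described above is that acting on it means a JAS outage, at a time Microsoft chooses.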

Extra for experts - even with ephemeral POD servers, which load the latest certificate and import it into the certificate store, we still need to restart JDE or trigger a replacement of all of the servers to allow new logins to use the new Azure certificate.

Note that the certificate roll can happen at ANY time.

The native solution cannot work - might be time to talk to us about myAccess? https://fusion5.com.au/jd-edwards/myaccess/ 






Tuesday, 21 November 2023

Extending JDE integrations beyond JDE

I hear you saying - that just don't make sense...  Unless you are integrating JDE with JDE, you are right: every integration has two end points, and therefore one of them is beyond JDE.

I guess what I want to talk about in this blog post is thinking more broadly about how you solve JD Edwards integration challenges.  I'm going to talk specifically about how Fusion5 uses Bicep-based infrastructure as code to enable quick and easy integration beyond JDE.  I think it's important to think about your integrations beyond JDE and into what we might traditionally have called middleware.

Middleware is a good term for a consistent set of development tools between two or more systems, giving you a consistent method of connecting end points, monitoring integrations and stitching things together.  This space has long been dominated by the likes of MuleSoft, Boomi, Jitterbit and other players, although I think there is a paradigm shift going on at the moment.  What I am seeing is that more and more customers are building their "middleware" using the native cloud services that Azure and AWS provide.  I believe the following drives this decision:

  • Cost.  Put simply, you pay per use and there is no large barrier to getting started.  You can start nice and slow and build out the solution.
  • Access to talent.  Getting a developer who knows Lambda (JS / Python) or Azure Functions is easier, and that skill applies to more situations than a dedicated middleware developer's.
  • All modern end points are open.  Quite often new applications are built on a foundation of APIs with a front end stuck on top.  Integration is therefore not an afterthought; it's actually built into the foundational architecture.  This means that connecting to said "foundational architecture" is easy and well documented - REST saves us a lot of work.
  • Connectors can be seen as a list of limitations.  Too many integration development tasks have led me to believe that, more often than not, an accelerator does not do what a customer wants.  It generally needs to be augmented or changed to get the job done, whereas (per the point above) open end points have this built in.
  • Supports the ideal of smart end points, dumb pipes.  Think about JDE specifically and the Orchestrator Studio.  This makes VERY smart end points, which can then be plugged into dumb pipes.  JDE orchestrations take security, logic and customisations into consideration to present an API which can "do it all".
  • Support for WAF / additional end point security.
  • Connectivity / HA / DR all built into the platform.
  • Security patching is native and frequent.
  • Logging and monitoring (think Azure Monitor, think CloudWatch) can follow enterprise standards - critical for integrations, which are becoming more important than humans.  I don't want to trivialise your existence, but I reckon an integration can pump through more orders than a human any day of the week.  Because of that importance, logging and monitoring must be a first-class citizen.
Wow, they are some good reasons to choose a hyperscaler as your integration platform.  Fusion5 recognised this some time ago and has been working on two different solutions on two different hyperscalers to address this opportunity.  We have our Bicep-based Azure Integration Accelerator: a set of code that accelerates and implements the use of standard Azure services to help get integrations securely from JDE to the internet (and beyond).



Looking at the above, the orange (outer) ring is a middleware made up of our Bicep deployment, using all of the base Azure services to perform integration.  This is NOT just for JDE - it's for anything: a consistent way of exposing JDE data and functionality to the internet.  Remember, this is super important for being secure and for giving customers better ways of authenticating to get JDE data (not just a JDE username or JWT).  Personally, I would not put any WebLogic ports facing the internet (you should see what I can do to the JDE ones...).  So this is how we can consume internet-based integration points for JDE and also expose JDE to the internet.  The other huge advantage of using something in the middle is that JDE does not need to be up all of the time.  You can implement promise-based (asynchronous) integrations, which allow for some level of unavailability, or at least retry ability.

The blue circle is our Fusion5 integration framework, which I have alluded to previously (in my last blog post, for example), where we have put some bells and whistles into JDE to allow you to manage your orchestration-based integrations more consistently.

The white circle is the standard JDE orchestration layer.

Note that orange and blue are not required - you could expose JDE orchestrations directly to the internet.  Not that I would, but you could.



Let's talk a little bit more about the integration components that are created when we deploy our Bicep-based accelerator.

Some of the main components used are APIM, Application Insights and Log Analytics, all built into the deployment.  Of course, things can easily get more complex, with the use of Service Bus to provide more resilience.  However, what you get out of the box is seriously impressive.

 

You can see from the screenshot above the APIM definitions for the end points we are exposing, which act as a simple proxy for orchestrations.  You'll also note that we need to provide a key to be able to use the exposed APIM end point - here:

POST https://apim-poc-tatua-demo-f5dev-0001.azure-api.net/jdeorch/ORCH_EXT_CreateCustomer HTTP/1.1
Host: apim-poc-tatua-demo-f5dev-0001.azure-api.net
Ocp-Apim-Subscription-Key: <Place subscription key here>


Note that this passes all of the parameters through to JDE, but you have the ability to do any manipulation you want.  You could therefore expose true RESTful end points for external parties and then formulate your JDE calls using APIM and Azure Functions.  This allows easy translation of payloads between systems.
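The translation step in the middle might look something like this sketch - plain Python with hypothetical external field names, mapping onto the envelope fields visible in the curl example later in this post (the APIM/Functions plumbing is omitted):

```python
import uuid
from datetime import datetime, timezone

def translate_to_orchestration(external):
    """Map a hypothetical external payload onto the envelope the
    ORCH_EXT_* orchestrations expect.  Envelope field names follow the
    x-* convention shown in the curl example; the inbound 'item',
    'system' and 'branch' keys are invented for illustration."""
    return {
        "x-requestType": "ItemAvailability",
        "x-sourceSystem": external.get("system", "EXTERNAL"),
        "x-sourceTimestamp": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "x-conversationId": str(uuid.uuid4()),
        "x-correlationId": external.get("requestId", str(uuid.uuid4())),
        "inItemNumber": external["item"],
        "inBranchPlant": external.get("branch", ""),
    }

# A webshop-shaped request becomes a JDE-shaped request
payload = translate_to_orchestration({"item": "1001", "system": "WEBSHOP"})
```

APIM then forwards the translated payload to the orchestration end point, so the external party never needs to know what a JDE orchestration payload looks like.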


Above shows application insights configured for the end point


Then, using the Application Insights configuration for the API, we can see the above graphics of performance and errors from all of our calls.  Of course this is also logged in Azure Logs, giving you the ability to query events with SQL-like power.  You can also attach any of these events to end-user alerts - it's got everything you need to build a very mature approach to business as usual.


As you can see from the above, there are many baked-in queries for ripping the logs apart and fine-tuning your monitoring.

Of course, you have the ability to make this as complex or as simple as you want.  But the main thing is that you can consume and produce internet-facing data securely from JDE and other sources.  This data can be asynchronous too.



Above you can see my Postman call, which is heading for an Azure link that has been exposed via APIM.



This hits my APIM which can do anything (at the moment it does very little).

APIM (Azure API Management) can inject credentials, coerce the payload, do subsequent lookups and validation, and then call the actual JDE orchestrations - all while providing logging and debugging.

curl --location 'https://apim-poc-tatua-demo-f5dev-0001.azure-api.net/jdeorch/ORCH_EXT_GetItemAvailability' \
--header 'Ocp-Apim-Subscription-Key: 57c023c9aff049bbfdcggh4e84d638854' \
--header 'Content-Type: application/json' \
--header 'Authorization: Basic U00wMDAwMTpHb29FDS^54Q==' \
--data '{
"x-requestType":"ItemAvailability",
"x-pk":"",
"x-sourceSystem":"JDE",
"x-sourceTimestamp":"2023-10-06T03:58:44.000Z",
"x-conversationId":"c559e622-505f-4e46-bf64-a2734b130a70",
"x-correlationId":"ada827f1-8c25-4e5d-9109-ee422bef5fb5",
"x-sendingSystem":"JDE",
"x-sentTimestamp":"2023-10-06T03:58:44.000Z",
"x-TransactionType":"",
"inItemNumber":"1001",
"inBranchPlant":"",
"inLotNumber":"",
"inPrimaryBin":""
}'


You can see the request above.  Note that I have changed the basic auth and the subscription key - sorry hackers - you cannot use this.

At this stage the request has nothing to do with JDE, it's going to Azure.  Azure makes the decision on what to do with it.

Azure is busy logging the request and giving us performance information on my attempts.


We can see all of the performance and availability information above, this is critical for providing a consistent service to your customers.

Let's take this full circle - what does the orchestration framework have to say about this?  Let's look at the development first:




We start the integration and check GUIDs etc. to make sure this is not a duplicate.  Note that this function is included in the framework.

If that is okay, we work some magic on the media objects, which maintain the logging for integrations (maintaining the media attachment against an Integration Framework conversation).

We then update the conversation to mark where we got up to, and call the actual internal worker orchestration to do the work and create the output.

Finally, we mark the orchestration as complete and attach the output to the logs.  You can see that most of this is just copied, or comes from a template orchestration.
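The EXT wrapper pattern described in the steps above can be sketched in a few lines of Python (the function names and the in-memory set standing in for the framework's conversation table are illustrative only - the real framework persists all of this in JDE):

```python
processed = set()  # stand-in for the framework's conversation table

def run_ext_orchestration(conversation_id, payload, worker):
    """Sketch of the EXT wrapper pattern: duplicate GUID check, status
    markers, call the INT worker orchestration, mark complete."""
    if conversation_id in processed:
        return {"status": "DUPLICATE"}          # GUID already seen - reject
    processed.add(conversation_id)
    log = ["Conversation Status Updated to P : Processing"]
    result = worker(payload)                    # the ORCH_INT_* worker does the work
    log.append("Conversation Status Updated to Y : Success")
    return {"status": "Y", "output": result, "log": log}

# A trivial worker that echoes the item number back
out = run_ext_orchestration(
    "c559e622", {"inItemNumber": "1001"},
    worker=lambda p: {"outItemNumber": p["inItemNumber"]},
)
```

The value of the wrapper is that every integration gets duplicate detection, status tracking and logging for free; only the worker changes per integration.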

P57F5002 is the main workbench for integrations




You can see the three attachments, which contain the JSON input, the output and the integration logs - all created by the framework:








Integration log:
21/11/2023 09:26:23 JDV920: Started : ORCH_EXT_GetItemAvailability
21/11/2023 09:26:23 JDV920: Conversation Status Updated to P : Processing
21/11/2023 09:26:24 JDV920: Conversation Process Status Updated to 105 : Orchestration Processing
21/11/2023 09:26:24 JDV920: Conversation Process Status Updated to 110 : Orchestration INT Process called, Asychronously
21/11/2023 09:26:24 JDV920: Started : ORCH_INT_GetItemAvailabilityProcessor
21/11/2023 09:26:24 JDV920: Started : ORCH_INT_GetItemAvailability
21/11/2023 09:26:24 JDV920: Item Availability Retrieved for Item 1001
21/11/2023 09:26:25 JDV920: End : ORCH_INT_GetItemAvailability
21/11/2023 09:26:25 JDV920: Conversation Status Updated to Y : Success
21/11/2023 09:26:25 JDV920: Conversation Process Status Updated to 400 : Orchestration Successful
21/11/2023 09:26:25 JDV920: End : ORCH_INT_GetItemAvailabilityProcessor
21/11/2023 09:26:25 JDV920: End : ORCH_EXT_GetItemAvailability

Input JSON





Output JSON

{
 "outShortItemNumber": "60003",
 "outBranchPlant": "*",
 "outItemNumber": "1001",
 "ItemAvailability": [
  {
   "outBranchPlant": "10",
   "outLotSerial": "",
   "outOnHand": "3.9000-",
   "outAvailable": "3.9000-",
   "outLocation": ". .",
   "outLotStatusCode": "",
   "outCommitted": "0.0000",
   "outSOWOSoftCommit": "0.0000",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "10",
   "outLotSerial": "",
   "outOnHand": "3.9000-",
   "outAvailable": "3.9000-",
   "outLocation": "TOTAL:",
   "outLotStatusCode": "",
   "outCommitted": "0.0000",
   "outSOWOSoftCommit": "0.0000",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "30",
   "outLotSerial": "",
   "outOnHand": ".0350",
   "outAvailable": ".0178",
   "outLocation": ". .",
   "outLotStatusCode": "",
   "outCommitted": ".0172",
   "outSOWOSoftCommit": ".0003",
   "outSOHardCommit": ".0169",
   "outBackorder": ".0001",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "30",
   "outLotSerial": "",
   "outOnHand": ".0350",
   "outAvailable": ".0178",
   "outLocation": "TOTAL:",
   "outLotStatusCode": "",
   "outCommitted": ".0172",
   "outSOWOSoftCommit": ".0003",
   "outSOHardCommit": ".0169",
   "outBackorder": ".0001",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "70",
   "outLotSerial": "",
   "outOnHand": ".0100",
   "outAvailable": ".0100",
   "outLocation": "",
   "outLotStatusCode": "",
   "outCommitted": "0.0000",
   "outSOWOSoftCommit": ".0063",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "70",
   "outLotSerial": "",
   "outOnHand": ".0100",
   "outAvailable": ".0100",
   "outLocation": "TOTAL:",
   "outLotStatusCode": "",
   "outCommitted": "0.0000",
   "outSOWOSoftCommit": ".0063",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "D30",
   "outLotSerial": "",
   "outOnHand": ".1000",
   "outAvailable": ".1000",
   "outLocation": ".  . .",
   "outLotStatusCode": "",
   "outCommitted": "0.0000",
   "outSOWOSoftCommit": "0.0000",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "D30",
   "outLotSerial": "",
   "outOnHand": ".1000",
   "outAvailable": ".1000",
   "outLocation": "TOTAL:",
   "outLotStatusCode": "",
   "outCommitted": "0.0000",
   "outSOWOSoftCommit": "0.0000",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "M30",
   "outLotSerial": "",
   "outOnHand": "0.0000",
   "outAvailable": "1.0000-",
   "outLocation": ". .",
   "outLotStatusCode": "",
   "outCommitted": "1.0000",
   "outSOWOSoftCommit": "0.0000",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "M30",
   "outLotSerial": "",
   "outOnHand": "0.0000",
   "outAvailable": "1.0000-",
   "outLocation": "TOTAL:",
   "outLotStatusCode": "",
   "outCommitted": "1.0000",
   "outSOWOSoftCommit": "0.0000",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "7600",
   "outLotSerial": "",
   "outOnHand": "0.0000",
   "outAvailable": "0.0000",
   "outLocation": "P",
   "outLotStatusCode": "",
   "outCommitted": "0.0000",
   "outSOWOSoftCommit": "0.0000",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "7600",
   "outLotSerial": "",
   "outOnHand": "0.0000",
   "outAvailable": "0.0000",
   "outLocation": "TOTAL:",
   "outLotStatusCode": "",
   "outCommitted": "0.0000",
   "outSOWOSoftCommit": "0.0000",
   "outSOHardCommit": "0.0000",
   "outBackorder": "0.0000",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  },
  {
   "outBranchPlant": "",
   "outLotSerial": "",
   "outOnHand": "3.7550-",
   "outAvailable": "4.7722-",
   "outLocation": "GRAND TOTAL:",
   "outLotStatusCode": "",
   "outCommitted": "1.0172",
   "outSOWOSoftCommit": ".0066",
   "outSOHardCommit": ".0169",
   "outBackorder": ".0001",
   "outFutureCommit": "0.0000",
   "outOnSOOther1": "0.0000",
   "outOnSOOther2": "0.0000"
  }
 ],
 "out_ErrorFlag": "0"
}