Thursday, 17 December 2020

Performance patterns to analyse after upgrading your database.

19c is going to make everything faster, right?  Enterprise Edition with multiple processes per query is going to dramatically improve JDE performance, right?  Wrong.  In my experience you will see very little in terms of performance gains by using EE, or by upgrading to the latest Oracle database release.  What you will get is patching and better responses to any issues that you might have.

The reason for this is fairly simple: JDE does not generate complex queries – period.  It generally joins a maximum of three tables together and rarely uses aggregate-based queries.  You can imagine that the relational operators that JDE employs on any database are tried and tested.  We are not really trying to push bitmap indexes or massively parallel queries.  I agree that you might get some benefits with some HUGE long-running batch job, but I’d also like to see that in practice.

In my experience, enabling parallelism in an EE database for JDE caused more performance problems than benefits, as the query optimiser wanted to parallelise simple queries and slowed the simple stuff down.  This might have been me doing the wrong things, but I spent quite a bit of time on it…  Anyway, this blog entry is about how I justify to a JD Edwards customer the actual results of a database upgrade.

Batch Performance

Firstly, let's look at the performance from a batch perspective.  The new database has only been live for 2 business days, so the data that we are looking at is limited to 20,000 batch jobs.  Of course this is going to grow rapidly and our comparison is going to be quicker.  I can generally compare the trend analysis for EVERY batch job in a matter of minutes.


Firstly, we can look at yesterday's performance, compare this with the last 30 days and get a % change in that performance.  So really, for these frequently running jobs I know immediately if we have a problem or not.  I can also see that across all of the jobs, we are actually 1.8% better off after the implementation of 19c over the weekend – when comparing the average runtime of nearly 18,000 jobs.  This is a positive result.
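If you wanted to reproduce that comparison yourself, a rough sketch of the calculation is below.  It assumes you have already extracted your job history into a CSV with report name, run date and runtime columns – the file and column names are made up for the example:

# Sketch only: compare yesterday's average runtime per report with the prior 30 days.
# Assumes a CSV extracted from JDE job history with columns: report, run_date, runtime_seconds.
import pandas as pd

df = pd.read_csv("ube_history.csv", parse_dates=["run_date"])
yesterday = df["run_date"].max().normalize()
prior_30 = df[(df["run_date"] >= yesterday - pd.Timedelta(days=30)) & (df["run_date"] < yesterday)]
latest = df[df["run_date"].dt.normalize() == yesterday]

baseline = prior_30.groupby("report")["runtime_seconds"].mean()
current = latest.groupby("report")["runtime_seconds"].mean()
change = ((current - baseline) / baseline * 100).dropna().sort_values()
print(change.round(1))   # negative numbers mean the job got faster after the upgrade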

We can also drill down on any of these results and see how many times a day these jobs are running, how much data they are processing and of course – how long they are taking.



For example, for R47042 we can see that it ran 603 times on the 14th, processed 6.8 rows on average and on average took 33 seconds to run…  This is slightly faster than the average for the last 30 days of 42 seconds.  So looking at the jobs that run the most – the database upgrade is a great success.

Of course, we look through all of the jobs to ensure that there are not any exceptional results or slowdowns – which could mean missing statistics or missing indexes.



We are also able to quickly compare groups of jobs against the previous month to ensure that the performance is consistent.

The next place we look is the batch queue, to ensure that there has not been any excessive waiting or processing.




We can see from the above that QBATCH ran many more jobs than normal, but this is probably the scheduler catching up.

Looking at this data using runtime seconds, we can see that perhaps QBATCH2 was not normal, but not too bad either:



Queue time is very interesting:


We can see that the queues were held on the 3rd and the 12th of December; note the number of seconds that concurrent jobs queued in QBATCH2.

In summary, though – everything is looking normal.



Note that we have the option of graphing any of the above metrics on the chart: jobs run, wait time, run time, rows processed or rows per second.  All of them help determine how things are processing and where you can make some improvements.

Generally I look at the most common jobs (the bread and butter jobs) and ensure that there is no negative impact.  Then some checking of the weekend integrities is also important.  Batch is generally a little easier to manage than the interactive side.

Note that I’m focused on the runtime, but you should also be looking at CPU utilisation and Disk activity.

Interactive Analysis




On aggregate, we can see from the above that the performance might have been impacted negatively from an interactive perspective.  We can see that the last 5 days have been quite a bit slower than has been recorded previously.

You can see that when I choose the last 5 days, I get the following


An average page load time of more than 2 seconds!

If I compare the previous 20 days, the average page load time is 1.8 seconds – so roughly a 10% slowdown – I guess that is not too bad.

We can begin to look at this data from a regional perspective too


Potentially this single table could answer all of your questions about JD Edwards performance and how things are changing month on month or week on week.

We can see how user counts change, how activity changes, and then we can see all of the performance metrics also.

You can see here how many pages are being loaded, the average server response time, the average page load time and also the page download time.  Any changes to any of these metrics are very easy to find and very easy to report on.

Use this data to guarantee performance for your environment and regions.








Monday, 30 November 2020

Open API catalog import from JDE orchestration to Postman

I'm pretty sure I've blogged about how much I love Postman before; it's an amazing tool for kicking off and testing your automation projects.  Being able to see the cURL commands is totally awesome too.

This is a simple blog on how to import ALL your orchestrations (or the ones that you have permission to see) into a single Postman collection.

There is probably a better way of doing this, but because JDE wants an authenticated call to the listing, I could not use the standard Postman import from a URL - as this only seems to support unauthenticated requests.


As you can see, there is an option to import from a link, but you get a little error down the bottom right saying "Import failed, format not recognized".

So, plan B.

First, create a new GET request that looks something like this:

https://f5dv.mye1.com/jderest/v3/open-api-catalog


Once you have done this, ensure that you're authenticating properly to AIS.  I'm using and allowing basic authentication:
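If you would rather script this step than click through Postman, a minimal Python sketch of the same request might look like the following (the host is the example one above and the credentials are obviously placeholders):

# Sketch: fetch the Open API catalog from AIS using basic authentication and save it for import.
import json
import requests

url = "https://f5dv.mye1.com/jderest/v3/open-api-catalog"            # example host from above
resp = requests.get(url, auth=("JDEUSER", "PASSWORD"), timeout=60)   # placeholder credentials
resp.raise_for_status()

with open("jde-open-api-catalog.json", "w") as f:
    json.dump(resp.json(), f, indent=2)   # this is the file you then import into Postman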




Once you send that request, hopefully you get a reply like the one above.

You can use the save button in the middle right of the screen to save this to a local JSON file.



Then in your main postman window, use the import again:



Choose the file that you just saved (the JSON definition)






Choose your file, then you will get this:
Perfect.  Once you hit okay, you'll get a new collection with ALL of your orchestrations.
You will know all of the JSON inputs and all of the standards.  Postman also does some nice stuff with variables and the relative paths for the calls.

This is a really neat way of getting all of your orchestration function definitions in one place.  Postman also makes the documentation process nice and easy!


As you can see above, if you create your scenarios and save the payloads, you can provide some amazing documentation for your orchestrations!












Monday, 23 November 2020

Maturity Matrix for JD Edwards

This is another view of some consulting we are doing at Fusion5, to help our clients understand and implement a continuous improvement process for their JD Edwards.

You can get a maturity assessment at any time to determine whether your modular usage is transactional or transformative.  JD Edwards has evolved over time, but what we often see is that clients are not evolving the way that they are using JD Edwards.

If you implemented JD Edwards 10 – 15 years ago, chances are that you are not making the most of base features and potentially not improving the user experience with all of the new UDOs (User Defined Objects).

At Fusion5 we are spearheading a new way of measuring our clients' maturity in how they have implemented their JD Edwards modules, and then assisting them to get more!  We have a tried and tested method of determining where our clients sit, on a module-by-module basis.  We use a data-centric method, developed over the last 10 years, to provide unequivocal insights into what programs are being used in JD Edwards and therefore reverse-engineer processes.

We can then plug in our dashboards to show you exactly where your implementation sits from a number of perspectives.  We actually give your implementation a score out of 10 for the maturity of the implementation, and then, of course, give you a path / plan to improve that score and the types of benefits that the business will receive.



For example, you can see from the above all of the standard modules in JD Edwards and then which programs in each of the modules are being used.  This immediately gives us a data-based view of the modular usage; we can quickly tell you about any new programs or batch jobs that have been released that you are not using.

We can quickly look at all of the batch jobs as well, to understand if you have processing or queuing problems, or that you might not be running important integrity jobs.



This laser focus on actual usage data makes it VERY easy for us to turn this into process knowledge.  We can then look at key users and key programs for additional analysis.

Remember that this does not stop at programs; this is just the beginning.  Fusion5 can look at individual forms or versions of programs and UBEs – to completely understand what is going on in JD Edwards.

When you support your analysis and decision making with actual data, any changes that you make to the system will be revealed in the ongoing data analysis.  This is where Fusion5 can shine.  We can show you the differences that are being seen in the actual usage of JD Edwards.  We can tell you percentages of improvement, whether that is engagement time, pages loaded or performance of screens.



We have unique insights into daily usage and can track particular modules and particular UDOs to ensure that your users are getting the training and that it’s making a difference.  We can help you develop online content and training materials to ensure that the UDO adoption is working – and we measure the success.

We actively manage and measure this with our customers and coach them through the implementation and measurement.  We can schedule the delivery of all these reports so that you do not need to learn a new BI tool.

Note that Fusion5 provides the data sources, the base reports and the development environment for you to extend the standard reports that we deliver; your options for reporting are unlimited.

Our packages can be a one-off analysis and review, or can be based on a continuous improvement model, where we actively engage with you and provide recommendations on your system continuously.  We can review your UDOs, module usage and performance on a regular basis.  This is a perfect service if you want to upgrade JD Edwards regularly – as we can also help you measure your success.

Wednesday, 4 November 2020

9.2.5 has dropped, what goodies are we getting?

This is basically a cut and paste of the release notes for 9.2.5 from learnjde.com - a great resource.  I'll put a small amount of commentary around this.

Firstly, please plan to be on 64-bit.  This does not have to be a massive project.  You'll just need to ensure that your 3rd party products (if you load them through JDE) can run in a 64-bit environment.  I recall that there are some output management and AP workflow tools that might struggle with 64-bit.  Please do not expect dramatic performance improvements, as there will not be any.  Please do not expect your kernels or UBEs to use more than 4GB of RAM each (if they are, I think you might have other issues...).  This is a good move for security and compliance reasons.

From the horse's mouth:

Beginning with EnterpriseOne Tools Release 9.2.5, JD Edwards is transitioning the Tools Foundation compatibility for 32-bit into the Sustaining Support lifecycle phase:

  • With JD Edwards EnterpriseOne Tools 9.2 Update 5 (9.2.5), Oracle will cease delivery of a 32-bit JD Edwards Tools Foundation for Oracle Solaris and HP-UX.
  • JD Edwards EnterpriseOne Tools 9.2 Update 5 (9.2.5) will be the final Tools Foundation release to be delivered as a 32-bit compiled foundation for Oracle Linux, Red Hat Enterprise Linux, Microsoft Windows, IBM i on Power Systems, and IBM AIX.

Following is a list of all the enhancements.  Guess what?  75% of them are for orchestrations, which is also very nice.  If you are not using orchestrations, start.  If a developer suggests that you should implement a flat file based integration solution - fire them and use orchestrations.  An orchestration with SFTP can upload a CSV and call a UBE if required - it can do everything that old school development can do - but it's done on the glass and does not need package builds.  Start using it!
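As an aside, calling an orchestration from a script is just as easy as calling it from Postman.  Here is a minimal Python sketch using the v3 orchestrator endpoint; the host is the example one from the Postman post above, and the orchestration name, input and credentials are placeholders for your own:

# Sketch: invoke an orchestration over REST instead of building a flat-file integration.
# The orchestration name and inputs are placeholders - use your own from the catalog.
import requests

AIS = "https://f5dv.mye1.com/jderest/v3"   # example AIS server
payload = {"Address_Number": "4242"}       # whatever inputs your orchestration expects

resp = requests.post(AIS + "/orchestrator/ORCH_GetAddressBook",
                     json=payload,
                     auth=("JDEUSER", "PASSWORD"),
                     timeout=120)
resp.raise_for_status()
print(resp.json())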

I cannot use the title Digital Transformation, so I’ve changed it to Platform Modernisation.  Hopefully I can drill down on some of these enhancements when the time is right.

Platform Modernisation

Assertion Framework for Orchestrations

Orchestrations are a powerful way to automate EnterpriseOne transactions and integrate to third-party systems and IoT devices. The integrity of an orchestration has two critical aspects: first, it runs without error, and second, it produces the data that the designer expects. The Assertion Framework enables the orchestration designer to specify, in other words, to "assert" the values that are expected to be produced by an orchestration. For example, the designer might assert that a value is expected to be within a certain range, or even match a specific numeric value. If the orchestration produces a value outside that range, the details of the failed assertion are displayed for investigation. For customers who are considering the use of orchestrations as test cases, the Assertion Framework provides a way to define objective success criteria.

Enhanced Configuration Between Enterprise Servers and AIS Server

This feature provides the system administrator with more control over configuring the associations between EnterpriseOne enterprise servers and Application Interface Services (AIS) servers. Customers who deploy a single enterprise server for all their environments can now associate that enterprise server with AIS servers in multiple environments. This configurability facilitates the segregation of AIS servers across environments, such as development, test, and production, while making it possible for a single enterprise server to serve all the environments.

Allow Variables in REST File Uploads

As part of its ability to invoke third-party services through its REST connector, Orchestrator can also upload various content types, for example, EnterpriseOne media objects and files to a REST-enabled content management system. This enhancement enables the orchestration designer to use variables in the definition of the connector. For example, the name of a file might be represented by a variable, enabling the orchestration to dynamically determine the file to upload, thereby expanding the flexibility of this feature.

Configurable AIS Session Initialization

This feature provides you more choice and control over how system resources are used to initialize user sessions. The Application Interface Services (AIS) server is a powerful framework for exposing EnterpriseOne applications and data as services. In addition to being available to external clients, the AIS server has also been used as part of the internal EnterpriseOne architecture to enable certain functionalities, such as UX One components, EnterpriseOne Search, and form extensions. For users who use these features, the system will establish sessions with both the EnterpriseOne HTML server and the EnterpriseOne AIS server, and each session consumes system resources. For users who do not use these features, the system will not establish an AIS session. This enhancement enables the system administrator to configure user sessions to avoid initializing an AIS session, thus conserving system resources.

Extend EnterpriseOne User Session to Externally Hosted Web Applications

The UX One framework offers EnterpriseOne users a converged, flexible, and configurable user interface for all their enterprise applications. Even web-based third-party applications can be configured into EnterpriseOne pages and external forms. This enhancement further improves the user experience by enabling EnterpriseOne and third-party applications to share certain data, such as session information, to provide a more integrated user experience and ensure efficient use of shared resources.

Optimized Retrieval of Large Data Sets by Orchestrator

Of all the capabilities of Orchestrator, retrieving data from EnterpriseOne tables or applications is among the most common and essential. Some usage patterns entail retrieving data in very small transactional "bursts," for example, to get an inventory count of a single item. Other usage patterns entail retrieving very large data sets, for example, to load or synchronize a complete customer list from EnterpriseOne to a third-party system. This enhancement provides performance optimizations for Orchestrator to be able to retrieve very large data sets—possibly thousands of rows—from EnterpriseOne tables and pass the results in the orchestration output. The orchestration designer may also have the output written to disk and exclude it from the orchestration response to prevent the response from becoming very large.


User Experience

Form Extensibility Improvements

With Tools Release 9.2.5, Form Extensibility has been enhanced with the ability to unhide the business view columns on a grid that have been hidden using Form Design Aid (FDA). This enhancement enables users to use a form extension to unhide those business view columns and add them into a grid without creating a customized form in FDA.

Similar to an existing feature in a personalized form, users can now mark a field as required in a form extension, and this setting will apply to all the versions of the application.

These features significantly reduce the time, effort, and cost required to give end users a streamlined user experience.

Learn more about Form Extensions on the Extensibility page, and other User Defined Objects on the User Defined Objects (UDOs) page on LearnJDE.

Enhanced Search Criteria and Actions for Enterprise Search

JD Edwards EnterpriseOne Search helps users quickly find and act on JD Edwards transactions and data as part of their daily activities. To further increase user productivity, EnterpriseOne Search has been enhanced with the following capabilities:

Exact match search: The system retrieves search results that are an exact match to the keyword that a user enters. For example, if a user wants to search for all the sales orders for red bikes and enters “red bike” as the keyword, the system will display only the orders that contain “red bike” in the description. This capability helps in narrowing down search results, thereby improving user productivity.

Ability to export the search results: Users can export the search results to a .csv file so that they can analyze the data or import the data to a different application outside of EnterpriseOne.

Ability to specify query and personal form for related action: When a Related Action is defined to execute a JD Edwards application, the search designer can now enter a specific query and specify a personal form for the application. Simplification of the application interface and input of associated data improves the user experience.

Improved Help

The application-level Help option on JD Edwards EnterpriseOne forms has been enhanced to enable users to search separately for EnterpriseOne documentation and UPK documentation. This improvement enables users to easily locate specific information that addresses their questions. In addition, JD Edwards now provides direct access to www.LearnJDE.com, the JD Edwards resource library, from the user drop-down menu in the EnterpriseOne menu bar. This improvement provides users quick access to the current collateral and education resources across all areas of JD Edwards.

System Automation

Virtual Batch Queues

JD Edwards batch processes (UBE and reports) continue to be critical for customers’ business processes. With Tools Release 9.2.5, JD Edwards has improved the scalability, flexibility, and availability of JD Edwards UBE batches with Virtual Batch Queues (VBQ). This enhancement reduces the dependency on queues per individual server and gives users the flexibility to run and rerun jobs on a group of congruent batch servers. It further enables users to maximize the system resources by having their UBEs processed by any available server from the batch cluster dynamically. The centralized repository for output ensures easy accessibility to report output no matter when or where the batch was submitted and the output was retrieved. Overall, this feature provides high availability for batch processes, expedites the processing of batch jobs, and helps users achieve scale to ensure performance of batch jobs.

Development Client Simplification

While JD Edwards has delivered frameworks to reduce the need for customizations, there are still some scenarios where customers need to create and maintain customizations. In Tools 9.2.5, JD Edwards has greatly reduced the time, effort, and resources required to install and maintain the development client by removing the requirement for the local database. The removal of the local database reduces storage and memory requirements for a development client, resulting in faster EnterpriseOne client installations. This feature also streamlines Object Management Workbench (OMW) activities, Save/Restore operations, and eliminates the need for a mandatory check-in for objects during package deployment. This enhancement results in improved productivity compared to the previous package installation process. The underlying package build and deployment processes are also streamlined, improving the throughput of the software development life cycle.

Automated Troubleshooting for Kernel Failures

Tools 9.2.5 expedites the troubleshooting process for kernel failures by automatically identifying the kernel failures, capturing the log files along with the problem call stack, and sending an email notification. This automated process identifies the cause for the kernel failure and enables the administrator to configure whom to notify (the corresponding team) so that corrective action is performed promptly. The notification contains all the contextual information, which helps to streamline the time spent in resolving issues and working with Oracle Support.

Web-Enabled Object Management Workbench (OMW)

Tools Release 9.2.5 enhances the web version of OMW to support the management of development objects: applications, UBEs, business functions, and all other development objects. Developers, testers, and administrators can now transfer objects through a browser-based application within the EnterpriseOne web user interface. This feature eliminates the need to access the development client for object transfer, streamlines the life cycle of the OMW projects and objects, and generates the potential for remote and automated processing of OMW projects and objects.

Web-Based Package Build and Deployment

With Tools Release 9.2.5, the key system administration applications for package assembly, package build, and package deployment can be run through the EnterpriseOne web interface. These functions were previously available only through the development client. This enhancement creates opportunities for automating and scheduling package builds and sending the related notifications through orchestrations, further extending the digital platform for JD Edwards system administration tasks.

Security

Continuous Enhancements for a Secure Technology Stack

To ensure security compliance and eliminate vulnerabilities around JD Edwards EnterpriseOne deployments, there is a continuous need to enhance the product security and uplift the components to the latest versions. This Tools release includes the following security enhancement:

Support for long and complex database passwords

Automated TLS Configuration Between Server Manager Console and Agents

This feature simplifies and automates the configuration of Transport Layer Security (TLS)- based communications between the Server Manager console and the Server Manager agents. The Secure Sockets Layer (SSL) provides secure communication between the applications across a network by enabling message encryption, data integrity, and authentication; therefore, it is imperative to keep this component updated and configured to your JD Edwards servers. This feature eliminates the need to run multiple platform-specific manual commands for importing the certificate files into Java (keystore and truststore) files, which can be a complicated and error-prone process. For secure JMX-based communication, both TLS v1.3 (for Oracle WebLogic Server with Java 1.8 update 261 or higher) and TLS v1.2 (for Oracle WebLogic Server and IBM WebSphere Application Server) versions of the TLS protocol are supported.

Open Platforms

Support for 64-bit JD Edwards on UNIX Platforms

To ensure that customers are running their business-critical EnterpriseOne system on a stable, supportable infrastructure, and to enable them to leverage the capabilities of the latest and emerging cloud services, hardware platforms, and supporting technologies, EnterpriseOne deployments must remain compliant with currently available platform stacks. Beginning with EnterpriseOne Tools Release 9.2.5, JD Edwards announces support for running the EnterpriseOne Tools foundation in full 64-bit mode on the below UNIX platforms:

·       Oracle Solaris on SPARC

·       IBM AIX on POWER Systems

·       HP-UX Itanium

The below platforms already support 64-bit JD Edwards:

·       Oracle Linux

·       Microsoft Windows Server

·       IBM i on POWER Systems

·       Red Hat Enterprise Linux

With EnterpriseOne Tools Release 9.2.5, JD Edwards also announces the withdrawal of support for running EnterpriseOne in 32-bit mode on the below platforms:

·       Oracle Solaris on SPARC

·       HP-UX Itanium

Platform Certifications

JD Edwards EnterpriseOne deployments depend on a matrix of interdependent platform components from Oracle and third-party vendors. The product support life cycle of these components is driven by their vendors, creating a continuous need to certify the latest versions of these products to give customers a complete technology stack that is functional, well-performing, and supported by the vendors. This Tools release includes the following platform certifications:

·       Oracle Database 19c:

·       IBM AIX on POWER Systems (64-bit JD Edwards)

·       HP-UX Itanium (64-bit JD Edwards)

·       Oracle Solaris on SPARC (64-bit JD Edwards)

·       Local database for deployment server

·       Oracle Linux 8

·       Oracle SOA Suite 12.2.1.4

·       Microsoft Windows Server 2019 support for deployment server and Development Client

·       Microsoft Edge Chromium Browser 85

·       IBM i 7.4 on POWER Systems

·       IBM MQ Version 9.1

·       Red Hat Enterprise Linux 8

·       Mozilla Firefox 78 ESR

·       Google Chrome 85

Support for the below platforms is withdrawn with Tools Release 9.2.5:

·       Oracle Enterprise Manager 12.1

·       Microsoft Windows Server 2012 R2

·       Microsoft SQL Server 2014

·       Microsoft Edge Browser 42 and 44

·       Apple iOS 11

JD Edwards EnterpriseOne certifications are posted on the Certifications tab in My Oracle Support.

The updated version of JD Edwards EnterpriseOne Platform Statement of Direction is published on My Oracle Support (Document ID 749393.1). See this document for a summary of recent and planned certifications as well as important information about withdrawn certifications.


Tuesday, 20 October 2020

Preparing JDE for the inevitable - Change!

You've got some big JDE changes planned and you want to make sure that you have a baseline of activity and performance (and logs), so that you can compare against this after the change.  What you are really doing is making sure that the feedback loop for change is working, so you can evaluate the change and therefore improve how quickly and accurately you can implement change.

My recommendation is to get prepared at least one week before the change and start gathering the information.  What sort of information is important?

App Servers

  • CPU
  • Disk
  • Memory
  • Logs
  • UBE performance
  • UBE wait time
Web Servers

  • CPU
  • Disk
  • Memory
  • Logs
  • Website performance
Database Servers
  • connections
  • high I/O statements
  • high CPU statements
  • temp usage
  • CPU
  • Memory
  • Disk
In reality, if you are using too much CPU then you probably have something set up wrong in JDE, as I never see JDE using that much CPU - or you really have your hardware maximised (or you are running things on your phone).

Remember that you need a baseline of everything - and this can be done at any time of the day, any day of the week, provided you understand exactly what a "week in the life of" your system looks like.

App Servers

These are easy.  Generally punchy CPU.  Kernels can use a lot of memory.  Some batch jobs are pretty intensive, but you must be running them highly parallel to affect the performance of the machine.  The metrics from the machine are going to be helpful, but I must admit - what you really need to worry about on these machines is the performance of the batch jobs.  

Batch jobs are running functions and if you want a generic health measure, I recommend dividing your batch jobs into 3 categories.

  • Punchy
  • IO intensive
  • CPU intensive
If you can get sample average runtime, rows processed and wait time for the above - you'll be able to determine if the "change" that has been done has had any effect on your environment.




For example, you can extract this information from JDE with statistical queries over F986114 and F986110 - or as a one-off I can create a pretty cool suite of dashboards for you to show you your baseline.
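The shape of those statistical queries is roughly the sketch below.  I have deliberately not hard-coded F986110/F986114 column names, and instead assumed you have flattened them into a view called ube_history with report name, start time and runtime, because the exact columns you want can vary between releases:

# Sketch: average runtime and run count per UBE over the last 30 days.
# Assumes a view 'ube_history(report, start_time, runtime_seconds)' built over F986110/F986114.
import pyodbc   # or the oracledb driver, depending on your database

sql = """
SELECT report,
       COUNT(*)             AS runs,
       AVG(runtime_seconds) AS avg_runtime
FROM   ube_history
WHERE  start_time >= CURRENT_DATE - 30
GROUP  BY report
ORDER  BY avg_runtime DESC
"""

conn = pyodbc.connect("DSN=JDE_PROD")   # placeholder DSN
for report, runs, avg_runtime in conn.execute(sql):
    print(f"{report:<12} {runs:>6} {avg_runtime:8.1f}s")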


You can then take this a step further and understand the runtimes on a daily or hourly basis to ensure that there are not any dangerous trends that you are dealing with.

UBE wait time is an important measure too, as this will show you if your nightly scheduled jobs have moved significantly.  

Web Servers

Web servers use a lot of memory.  Watchlists and other AIS-based options are starting to take their toll on performance.  I certainly recommend separating these out.  There are some good articles on this process.  AIS and Orchestrator can be load balanced from the web server; this is also a good idea IMHO.  Don't just run watchlists locally and expect everything to be peachy.

There have been many improvements in threading in orchestration (read AIS) in the latest tools release (9.2.4.4); if you are running large and complex orchestrations, get on the latest release.

Web servers can get caught up in garbage collection.  You do need to look out for this and make sure that the cycling is not too much.  The JVM can lock up for the time it takes to run the collection - so make sure that your sizing and startup parameters are correct.


I use a graphic like the above to work out the sweet spot for users on a JVM.  There are a lot of dependencies here, but you can be fast with 80 users on a 4GB JVM - I mean 80 active users.  The graph above shows average daily response times (which I could make hourly) and also how many users logged in for that day.  This is really cool for JDE.  I'm actually representing Avg Page Load Time: the average amount of time (in seconds) it takes a page to load, from initiation of the pageview (e.g., a click on a page link) to load completion in the browser.  Avg Page Load Time consists of two components: 1) network and server time, and 2) browser time.

Like my impressive bands?  The two red lines indicate upper and lower performance thresholds.  I'd consider things slow (or too fast - can you be too fast?) if they were outside of those bands.

The graphic above is a summary of > 2 million JDE page loads.  This tells me two things: firstly, that on this site 450 users can perform as well as 40 users - so scalability is not an issue for them; and secondly, that JDE always performs better on the web under some type of load.



The above shows us a pretty cool indication that you get good performance in general, every minute of the day, for up to 20 users in a single minute (this is interesting in itself - we are only recording up to 30 people in a single minute asking for new pages).

Database Server

Once again, the database server is the most critical item in all of the above, as it underpins every single thing you ask JDE for...  But the database statistics alone really tell you nothing without a baseline and all of the "actual" user data that you see above.  When you can equate adequate performance (which I class as < 2 user complaints a day - haha) with a pattern of CPU / memory / poorly performing statements - then you know your database is performing well.

Another critical item for all you people considering cloud is that you need to get your IOPS right.  IOPS cost money in the cloud, so you REALLY need to understand this before committing to an architecture that just won't support what you need.

Log file analysis

I'm lucky enough to have a black belt in awk and grep and also a purple belt in regular expressions.  This allows me to use a Linux VM (which mounts local dirs off my machine) to rip through log files and create a baseline.

Some favs like:

grep -i -r COSE#1000 |grep BSFN | awk -F: '{print $10}' | awk '{print $1}' |sort |uniq -c

     21 ApproveLoad
      4 BatchReviseOnExit
      6 BuildTransToBatchHeadersWrapper
      2 BuildTransWOAcctMasterWorkTable
      1 CheckItemBranch
      1 CreateLoadBegDoc
      3 DeleteOrderAddress
     18 DeletePrinterFile

This will give you a nice list of functions that have timed out, and how many times they have timed out, in the logs that have been given to you.

What I then generally do is place these on a timeline too, because they could all have come from a single event (a bad start or a network problem); you need to get a frequency pattern.
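If you want that frequency pattern without a spreadsheet, a few lines of Python over the same log files will bucket the timeouts by hour.  The COSE#1000 / BSFN markers come from the grep above, but the timestamp regex is an assumption - adjust it to match your own log format:

# Sketch: count BSFN timeout lines per hour of day to see whether they cluster around one event.
import re
from collections import Counter
from pathlib import Path

buckets = Counter()
stamp = re.compile(r"(\d{2}):\d{2}:\d{2}")   # assumed HH:MM:SS somewhere in the line

for logfile in Path("logs").rglob("*.log"):
    for line in logfile.open(errors="ignore"):
        if "COSE#1000" in line and "BSFN" in line:
            m = stamp.search(line)
            if m:
                buckets[m.group(1)] += 1     # bucket by hour of day

for hour, count in sorted(buckets.items()):
    print(f"{hour}:00  {count}")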



So this shows how many errors and when they are occurring.  You can do the same on the enterprise servers.

But, if you are an uber nerd - then you get something like this:

You install CloudWatch agents (for example), on premise or not, and you configure them to "watch" all of the JDE log files [with some smarts].  Then you can report over the top of ALL your logs (which I touch on here: https://shannonscncjdeblog.blogspot.com/2019/08/tip-9-monitor-relentlessly.html).


So what I can do (and you can see from the above) is look at the last year of JDE log files and ask the cloud console to summarise the logs using the regex that I have provided, so I can see whether the incidence of errors has increased or decreased over time (especially important when introducing new technology).

fields @message,@errorcode, @pid, @date, @time
|stats count(*) as errorCount by errorcode, description 
|parse @message /(?<pid>\d{3,})\s(?<date>\S{3}\s\S{3}\s\d{1,})\s(?<time>\d{2}:\d{2}:\d{2}.\d{6})(?<module>.*)\.c\d{3,}\s(?<errorcode>.*)\s-\s(?<description>.*)/
|filter errorcode like /\S{3}\d{7}/
|filter errorcode not like /RUN0000015/
|sort errorCount desc
|limit 25 rows

What I do in the above is carve out the JDE log file lines that I want and create my own custom fields with the results. I then ask the console to display those results.
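If you want to run this kind of Insights query outside the console (say, as part of a scheduled report), boto3 can do it.  This is only a sketch - the log group name and the simplified query are placeholders:

# Sketch: run a CloudWatch Logs Insights query over the JDE log group and print the results.
import time
import boto3

logs = boto3.client("logs")
query = """
fields @message, @timestamp
| filter @message like /COSE#1000/
| stats count(*) as errorCount by bin(1h)
| sort errorCount desc
"""

start = logs.start_query(
    logGroupName="/jde/enterprise-server/logs",   # placeholder log group
    startTime=int(time.time()) - 365 * 86400,     # last year
    endTime=int(time.time()),
    queryString=query,
)

while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(2)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})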

Of course this can be done for ORA- errors or anything.

It gets better: I then just need to create some metrics, and then some alarms, based upon queries like this.

I then get messages when my alerts see too many incidents of particular issues.
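The metric and alarm step is scriptable too.  A rough boto3 sketch follows - the log group, namespace, threshold and SNS topic are all placeholders:

# Sketch: turn a log pattern into a custom metric, then alarm when it fires too often.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

logs.put_metric_filter(
    logGroupName="/jde/enterprise-server/logs",   # placeholder log group
    filterName="jde-bsfn-timeouts",
    filterPattern='"COSE#1000"',                  # count BSFN timeout lines
    metricTransformations=[{
        "metricName": "BsfnTimeouts",
        "metricNamespace": "JDE",
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="jde-bsfn-timeouts-high",
    Namespace="JDE",
    MetricName="BsfnTimeouts",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10,                                 # placeholder threshold
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-southeast-2:123456789012:jde-alerts"],   # placeholder topic
)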

So this is a complete automated logging solution for JD Edwards - whether you are on premise or any cloud - this can work for you.

In conclusion

  • Change is inevitable.
  • Create a baseline of performance
  • Create a baseline of logs
  • Get good at automated instrumentation and Alert->Analyse->Act for your system usage and logs
  • Be generic and automate your monitoring
  • Get the monitoring to do all the hard work


Of course, I have the next level example above, where a customer can select any environment and date range and understand exactly how JD Edwards is performing based upon their business and their particular business KPIs.

Remember, this is the CORE of the feedback loop in your CI/CD pipeline adoption in JDE. You cannot continuously improve without the instrumentation.















 

Extending JDE to generative AI