Friday 21 December 2018

Technical Debt - again

In my last post I showed you some advanced reporting over ERP Analytics data to understand which applications are being used, how often they are being used and who is using them.  This is the start of understanding your JD Edwards modifications and, therefore, your technical debt.

At Fusion5 we are doing lots of upgrades all of the time, so we need to understand our clients' technical debt.  We strive to make every upgrade more cost-efficient and easier.  This is easier said than done, but let me mention a couple of ways in which we do this:

Intelligent and consistent use of category codes for objects.  One of the codes is specifically about retrofit and needs to be completed when the object is created.  This is "retrofit needed" - sounds simple, I know.  But if you create something bespoke that never needs to be retrofitted, the best thing you can do is mark it as such.  That saves a lot of time that would otherwise be spent looking at the object again and again at every upgrade; a quick report like the sketch below shows what has not been classified yet.
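
As a minimal sketch of how you might report on that flag (this assumes a 9.2 Object Librarian schema of OL920, the 55-59 custom system code convention, and RETROFIT_CC is just a placeholder for whichever object category code column you standardise on - check your own table layout):

select SIOBNM, SISY
  from OL920.F9860
 where SISY between '55' and '59'
   and trim(RETROFIT_CC) is null   -- placeholder column: these objects have not been classified yet
 order by SISY, SIOBNM;

Anything that comes back from a query like this is an object somebody will have to puzzle over at the next upgrade.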

Replace modifications with configuration.  UDOs have made this better and easier and continue to do so.  If you are retrofitting and you think "hey, I could do this with a UDO" - please do yourself a favour: configure a UDO and don't touch the code!  Security is also an important concept for developers to understand completely.  Because - guess what?  You can use security to force people to enter something into the QBE line - you don't need code for that (Application Query Security).



Everyone needs to understand UDOs well - we all have a role in simplification.  If you don't know what every one of the UDO types is, you need to learn!

OCMs can be used to force keyed queries.  Wow!!!  Did you know that you can create a specific OCM that forces people to only use keyed fields in the QBE - awesome.  So simple.  I know that there is code out there that enforces this, so this tip is like the security one above.



System enhancement knowledge.  This is harder (it takes time), but knowing how modules have been enhanced over time will hopefully let you retire some custom code.  Oracle does a great job of giving us the power to find this; you just need to know where to look:



Compare releases


Calculate the financial impact.  Once you know all of this, you can start to use a calculator like the one Fusion5 has developed, which will help you understand your technical debt and do research around it.  We have developed a comprehensive suite of reports that allow you to slice and dice your modification data and understand which modifications are going to cost you money and which ones will not.  Here are a couple of screen grabs.  All we need to create your personalised and interactive dashboard is the results of a couple of SQL statements that we provide (or you run our agent - though people don't like running agents).


You can see that I have selected 5 system codes and can see the worst-case and best-case estimates for the retrofit of those 5 system codes.  I can also see how often the apps are used and therefore make an appropriate, finance-based decision on whether each should be kept or not.  You are able to see the cost estimates by object type, system code and more.  Everything can also be downloaded for Excel analysis.
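
To give you a feel for those SQL statements, here is an illustrative sketch (not our actual extract) of the sort of count the dashboard is built from - again assuming an OL920 Object Librarian schema and the 55-59 custom system code convention:

select SISY as system_code, count(*) as custom_objects
  from OL920.F9860
 where SISY between '55' and '59'
 group by SISY
 order by SISY;

The real statements pull more attributes (object type and so on, since the reports slice by them), but the shape is the same: rows out of Object Librarian, blended with the usage data from ERP Analytics.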







Wednesday 19 December 2018

To keep a modification or not - that is the question

The cost of a modification grows and grows.  If you look at your modifications - especially if you are modifying core objects - retrofit is going to keep costing you money at every upgrade.

How can you work out how often your modified code (or custom code, for that matter) is being used?

One method is to use object identification, but this is only part of the story.

You'll see below that ERP Analytics is able to provide you with things like number of sessions, number of unique users, average time on page and total time on page for each of your JD Edwards applications.  This can be broken down by application, form or version - which can help you find out more.

With this information you can see how often your modifications are used, and for how long, and make a call on whether they are worth keeping.


[image: ERP Analytics usage metrics - sessions, unique users and time on page per application]

Our reporting suite allows you to choose date ranges and also system codes to further refine the analysis.

[image: report filters for date range and system code]


You are then able to slice and dice your mods (note that we can identify modified objects too, but this uses data blending with Data Studio) to give you a complete picture:

[image: slicing and dicing modified objects]


Of course, we can augment this list with batch (UBE) usage and then calculate secondary objects from cross-reference to build the complete picture.  You want to narrow down both retrofit and testing scope if you can.


[image: batch usage and cross-reference augmentation]


See below for how we look at queue concurrency and wait times to work out job scheduling opportunities and efficiencies.

[image: queue concurrency and wait times]

Thursday 13 December 2018

JDE scheduler problems

Who loves seeing logs like this for their scheduler kernel?

108/1168     Tue Dec 11 21:49:02.125002        jdbodbc.C7611
       ODB0000164 - STMT:00 [08S01][10054][2] [Microsoft][SQL Server Native Client 11.0]TCP Provider: An existing connection was forcibly closed by the remote host.
108/1168     Tue Dec 11 21:49:02.125003        jdbodbc.C7611
       ODB0000164 - STMT:01 [08S01][10054][2] [Microsoft][SQL Server Native Client 11.0]Communication link failure

108/1168     Tue Dec 11 21:49:02.125004        JDB_DRVM.C998
       JDB9900401 - Failed to execute db request

108/1168     Tue Dec 11 21:49:02.125005        JTP_CM.C1335
       JDB9900255 - Database connection to F98611 (PJDEENT02 - 920 Server Map) has been lost.

108/1168     Tue Dec 11 21:49:02.125006        JTP_CM.C1295
       JDB9900256 - Database connection to (PJDEENT02 - 920 Server Map) has been re-established.

108/1168     Tue Dec 11 21:49:02.125007        jdbodbc.C2702
       ODB0000020 - DBInitRequest failed - lost database connection.

108/1168     Tue Dec 11 21:49:02.125008        JDB_DRVM.C908
       JDB9900168 - Failed to initialize db request

Who loves spending the morning fixing jobs from the night before and moving batch queues and UBEs until things are back to normal?  No one!

Here is something that may help.  I must admit I've got to thank an amazing colleague for this - it's not my SQL, but I do like it.

What you need to do is write a basic shell script (say, one sitting on the enterprise server) that runs this:

select count (*) from SY910.F91300
    where SJSCHJBTYP = '1'
    and SJSCHSTTIME > (select
                      ((extract(day from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00'))*86400+
                        extract(hour from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00'))*3600+
                        extract(minute from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00'))*60+
                        extract(second from (current_timestamp-timestamp '1970-01-01 00:00:00 +00:00')))/60)-60 current_utime_minus_1hour
                        from dual);

If you get 1, that is good; if you get 0, that is bad and you probably need to recycle your scheduler kernel (that control record should change at least every 15 minutes).

So, if you have a script that runs that, you can tell if the kernel is updating the control record...
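
If you would rather eyeball the control record than just count it, the same assumption the query above relies on (SJSCHSTTIME being minutes since the Unix epoch) lets you turn it into a readable timestamp:

select SJSCHJBTYP,
       SJSCHSTTIME,
       timestamp '1970-01-01 00:00:00 +00:00'
         + numtodsinterval(SJSCHSTTIME, 'MINUTE') as control_time
  from SY910.F91300
 where SJSCHJBTYP = '1';

If that time is stuck more than an hour in the past, the kernel has stopped doing its rounds.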

Then you can grep through the logs to find the PID of the scheduler kernel and kill it from the OS.  Then I have a little executable that gives the scheduler kernel a kick in the pants (starts a new one) - and BOOM!  You have a resilient JD Edwards scheduler.




Tuesday 11 December 2018

real-time session information for ALL your JDE users

This post is based upon another YouTube clip, which explains the kind of real-time information that you can extract from JD Edwards using ERP Analytics.

Of course, this is just Google Analytics with some special tuning specific to JDE.

This clip shows you how you can see actual activity in JDE, not just the "people logged in" count from Server Manager.  What I find is that the actual load on the system has very little to do with what SM reports.  SM reports some artificially high numbers - sessions which have not yet timed out, which can include many hours of inactivity.  What GA (Google Analytics) reports is users who have interacted with the browser in the last 5 minutes.  It also gives you real-time pages per minute and pages per second.  Sometimes I wonder how you can run a site (or at least do load testing) without some of these metrics.  I often see 120 people in Server Manager and 35 people online with GA.

Anyway, enjoy the vid - if you have questions, please reach out.


Thursday 6 December 2018

64 bits, not butts!

I've been to Denver and chatted to the team about 64-bit, and they are all pretty nonchalant about the process.  Very confident too - as we all know, it's been baked into the tools for some time; the remaining work is getting it into the BSFNs.

Honestly though, how many of your kernels or UBEs need to address more than 2GB of RAM (or 3GB with PAE, blah blah)?  Not many, I hope!  If you do, there might be some other issues that you have to deal with first.

To me it seems pretty simple too: we activate 64-bit tools and then build a full package using 64-bit compile directives.  We then end up with 64-bit, pathcode-specific DLLs or .so files and away we go.

The thing is, don't forget that you need to go through your code to ensure that it is 64-bit ready.  What does this mean?  I again draw an analogy with char versus wchar - remember the Unicode debacle?  Just think about that once again.  If you use all of the standard JDE mallocs and reallocs, all good; but if you've ventured into the nether regions of memory management (as I regularly do), then there might be a little more polish you need to provide.

This is a good guide with some great samples of problems and their fixes, quite specifically for JDE:
https://www.oracle.com/webfolder/technetwork/tutorials/jdedwards/White%20Papers/jde64bsfn.pdf

In the simplest form, I'll demonstrate 64-bit vs 32-bit with the following code and output.

#include <stdio.h>

int main(void)
{
  int i = 0;
  int *d;

  printf("hello world\n");
  printf("number %d %zu\n", i, sizeof(i));
  d = &i;
  printf("number %d %zu\n", *d, sizeof(d));
  return 0;
}

giving me the output

[holuser@docker ~]$ cc hello.c -m32 -o hello
[holuser@docker ~]$ ./hello
hello world
number 0 4
number 0 4
[holuser@docker ~]$ cc hello.c -m64 -o hello
[holuser@docker ~]$ ./hello
hello world
number 0 4
number 0 8

Wow - what a difference, hey?  If you can't get the 32-bit version to compile, then you are going to need to run this as root:

yum install glibc-devel.i686 libgcc.i686 libstdc++-devel.i686 ncurses-devel.i686 --setopt=protected_multilib=false

The size of the basic pointer is 8 bytes - you can address way more memory.  This is the core of the change to 64 bit and everything flows from the size of the base pointers.

Basically, addresses are 8 bytes, not 4 - which changes pointer arithmetic and a whole heap of downstream things.  So when you are doing pointer arithmetic and other cool things, your code is going to be different.

The sales glossy from Oracle is good; I say get to 64-bit if you can.

1.     Moving to 64-bit enables you to adopt future technology and future-proof your environments. If you do not move to 64-bit, you incur the risk of facing hardware and software obsolescence. The move itself to 64-bit is the cost benefit.
2.     Many vendors of third-party components, such as database drivers and Java, which JD Edwards EnterpriseOne requires, are delivering only 64-bit components. They also have plans in the future to end or only provide limited support of 32-bit components.
3.     It enables JD Edwards to deliver future product innovation and support newer versions of the required technology stack.
4.     There is no impact to your business processes or business data. Transitioning to 64-bit processing is a technical uplift that is managed with the JD Edwards Tools Foundation.

This was stolen directly from https://www.oracle.com/webfolder/technetwork/tutorials/jdedwards/64bit/64_bit_Brief.pdf
  
Okay, so now we know the basics of 64 vs 32 - we need to start coding around it and fixing our code.  You'll know pretty quickly if there are problems; the troubleshooting guide and Google are going to be your friends.

Note that there are currently 294 ESUs and 2219 objects that are related to BSFN compile and function problems - the reach is far.

These are divided into a number of categories, so there might be quite a bit of impact here.

Multi-foundation is painful at the best of times, and this is going to be tough if clients want to do it over a weekend.  I recommend standing up new 64-bit servers and getting rid of the old ones in one go.  Oracle has done some great work to enable this to be done gradually, but I think you should just bash it into prod on new servers once you have done the correct amount of testing.

This is great too: https://docs.oracle.com/cd/E84502_01/learnjde/64bit.html