Monday, 22 February 2021

UBE performance tracking part 8

I do blog about this a lot, I'm sorry.  UBE performance is cool though, and important.

No matter what type of performance tracking you are trying to do, using batch jobs as your benchmark is a very good place to start.  I've mentioned this a number of times, but batch jobs (UBEs) give you a fairly pure performance reading, as they run two tier - unless you've done something crazy like map UBE BSFNs to another server (I really hope you have not done this).

This is a good method of ensuring you have consistency too.  If you are considering any change of technology, consistent UBE performance is going to give you peace of mind.  Finally, UBE performance also gives you some great ideas of where to look for archive and purge opportunities.  I've seen too many Z files grow so large that they have a negative impact on performance.

You can do everything I'm going to show you here without ERPInsights, but our product just makes it all much easier.

You can use SQL over F986110 and F986114 to extract all of the trend data for you, but I'm going to demonstrate how I show clients their performance results.  I'm often engaged during load testing exercises to give really deep insights into queuing, runtime and rows processed data.  Performance testing requires detailed analysis of the batch queues, and you will generally run batch next to your interactive load.  Quite often the interactive performance will fluctuate more than batch - but batch will give you really good pointers into what is going on.
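
If you want to roll your own, something like the following is a starting point.  This is a minimal sketch, not the ERPInsights implementation: it assumes Oracle, that your server map F986110 lives in a schema called SVM920 (substitute your own), and it only looks at same-day jobs to keep the Julian time maths simple.  JCACTDATE/JCACTTIME is the last-activity timestamp, so for completed jobs (status D) this measures total elapsed time - queue plus run.

select jcfndfuf2 as report_version,
       jcsbmdate as julian_date,
       count(1) as jobs_run,
       round(avg(
         (trunc(jcacttime/10000)*3600 + trunc(mod(jcacttime,10000)/100)*60 + mod(jcacttime,100))
       - (trunc(jcsbmtime/10000)*3600 + trunc(mod(jcsbmtime,10000)/100)*60 + mod(jcsbmtime,100))
       )) as avg_elapsed_seconds
from svm920.f986110
where jcjobsts = 'D'
and jcactdate = jcsbmdate
group by jcfndfuf2, jcsbmdate
order by jcsbmdate, avg_elapsed_seconds desc;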

I used to use SQL over F986110 and F986114, but now I just grab a copy of both files and upload them into BigQuery, where I can plug in my insights dashboards.  It only takes a couple of minutes to upload millions of WSJ rows and get a complete understanding of all your batch performance - both holistically and on a job by job and day by day basis.
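
The upload itself is trivial.  As an illustration - assuming you've extracted F986110 to CSV and created a BigQuery dataset called jde_insights (both names are mine, not part of any product):

>bq load --autodetect --source_format=CSV jde_insights.f986110 f986110.csv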




Quickly, the above shows me all of my batch jobs for this month, and how their load profile compares with the previous month.  The report defaults to one month, but this could be 1 week or 1 year.  You can compare rows processed, total runtime or average runtime quickly for any period you choose.  You can see that this sample data set has run nearly 400,000 UBEs in the last month, which is up 15% on the previous period.  We can then compare the average runtime for each individual job to look for changes.
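
In raw SQL terms, that period comparison looks something like the sketch below (again over F986110 in an assumed SVM920 schema).  The JDE Julian date converts to a real date by adding 1900000 and applying the YYYYDDD format mask:

select jcfndfuf2 as report_version,
       sum(case when to_date(to_char(jcsbmdate + 1900000), 'YYYYDDD') >= trunc(sysdate) - 30 then 1 else 0 end) as this_period,
       sum(case when to_date(to_char(jcsbmdate + 1900000), 'YYYYDDD') <  trunc(sysdate) - 30 then 1 else 0 end) as prior_period
from svm920.f986110
where to_date(to_char(jcsbmdate + 1900000), 'YYYYDDD') >= trunc(sysdate) - 60
group by jcfndfuf2
order by this_period desc;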



I also have the ability to plot that data over the last month to see when things have changed.  I can see the rows processed (by the main select loop), the average runtime and the record count (number of jobs processed per day).  I just need to scroll down the list and look at the profile for each job.  This is nice, because I'm seeing the representation of 16,000 rows of data instantly, and understanding the load profile of this job completely.  Another really nice thing about something like the R47042 is that the rows processed data actually represents order volumes - so there is some empirical business data that you can extract from the above.  Of course we don't mind if the job takes twice as long - especially if it's processing twice the data!

The reporting suite contains loads of insights out of the box, and takes moments to navigate.  None of the data is on premise anymore, so there is no performance degradation to JDE when you are pulling the runtime history for 1,000,000 batch jobs.




Some other beneficial reports are shown below:

Reports not selecting rows:

Very handy for making your job schedule more efficient.  Selecting no rows might be normal for some jobs, but it generally indicates some improvements are needed.  How about using Orchestration Studio to check for data and then launch the report, instead of just launching the report regardless?  A sketch of the underlying query follows.
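
The rows-processed metric comes from the F986114/F986114A execution detail, and I haven't verified the column names here - treat the identifiers in angle brackets as placeholders to swap for the real ones in your release.  The shape of the query is the point:

select t1.jcfndfuf2 as report_version, count(1) as zero_row_runs
from svm920.f986110 t1, svm920.f986114 t2
where t1.jcjobnbr = t2.<job_number_column>
and t2.<rows_processed_column> = 0
and t1.jcjobsts = 'D'
group by t1.jcfndfuf2
order by zero_row_runs desc;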


Single job focus

We can see below ALL 16,000 runs of the job in the last month, how many rows were processed and how long the jobs took.  We can definitely see that there was an issue on Feb 12, where all of the jobs spiked upwards.  We can check that against other jobs if we need to.
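
To reproduce that single-job view in SQL (same assumptions as before: Oracle, an SVM920 server map schema, same-day completed jobs only), you'd filter the daily elapsed-time profile down to the one report and eyeball the spike:

select jcsbmdate as julian_date,
       count(1) as runs,
       round(avg(
         (trunc(jcacttime/10000)*3600 + trunc(mod(jcacttime,10000)/100)*60 + mod(jcacttime,100))
       - (trunc(jcsbmtime/10000)*3600 + trunc(mod(jcsbmtime,10000)/100)*60 + mod(jcsbmtime,100))
       )) as avg_elapsed_seconds
from svm920.f986110
where jcfndfuf2 like 'R47042%'
and jcjobsts = 'D'
and jcactdate = jcsbmdate
group by jcsbmdate
order by jcsbmdate;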


Queue Time and Runtime

You can look at any or all jobs at once and see if they are performing normally.  This sheet compares any date range with the previous month and graphs the entire month.  Therefore, after a package build and deploy, you can ensure that your oft-run and much loved UBEs are running in the same performant way you expect!  You can go and pat the developers on the back and tell them what a great job they are doing.
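
A hand-rolled version of that post-deploy check might look like the sketch below (same F986110 assumptions), comparing each report's average elapsed seconds for the last week against the four weeks before it:

with timed as (
  select jcfndfuf2 as rpt,
         to_date(to_char(jcsbmdate + 1900000), 'YYYYDDD') as run_date,
         (trunc(jcacttime/10000)*3600 + trunc(mod(jcacttime,10000)/100)*60 + mod(jcacttime,100))
       - (trunc(jcsbmtime/10000)*3600 + trunc(mod(jcsbmtime,10000)/100)*60 + mod(jcsbmtime,100)) as elapsed
  from svm920.f986110
  where jcjobsts = 'D'
  and jcactdate = jcsbmdate
)
select rpt,
       round(avg(case when run_date >= trunc(sysdate) - 7 then elapsed end)) as avg_since_deploy,
       round(avg(case when run_date <  trunc(sysdate) - 7 then elapsed end)) as avg_baseline
from timed
where run_date >= trunc(sysdate) - 35
group by rpt
order by rpt;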


Remember that all of these reports can be extracted with SQL - but these dashboards make it easy.  All of the information is in JDE, you just need to extract the right stuff.

If you want this at your fingertips, we have a job that you can schedule which updates a cloud datasource (at an interval that you specify).  Therefore you can have up to date dashboards like the above telling you exactly how your batch window is going.  If you like, reach out - or look up this page:  https://www.fusion5.com.au/solutions/enterprise-resource-planning/jd-edwards/erp-insights/

We have some really cool demos, and can show you your data very quickly - to ensure that it's all kept nice and real!  We can also just take a one-off dump of your data and give you access to all of the reports to see how you are performing - no locked in deals.







Wednesday, 17 February 2021

Archiving my JDE data off an AS/400 - backing up tables to CSV from a 400

Let's be honest, if you are running on a 400, your time is running out.  We have a number of very efficient platform migration mechanisms to get you on a more modern platform - and perhaps one that can move to the cloud.

I do like the AS/400, I get on that thing and can do all the admin tasks I need - but it's done.  I have a client that wants to move all their AS/400 data to some "datalake" and then farm it with custom queries.  I do NOT recommend this as an approach, as you will spend all of your time reverse engineering the data and logic to understand what you had.

I recommend putting the data on ice, in a database that you've probably already paid for...  then if you need to access or search the data, use the familiar JDE interface.  Cloud is perfect for this: your footprint in terms of $$ is minuscule and you can bring the system up as and when you need it.  This is the perfect choice for archival or retirement.

Anyway, back to the post.  This is for when I do it next time.

What I will tell you about the script below is that it works - yay!

Save it as a .vbs file (mine is called downloadDataPRODDTA.vbs) and run it with:

>wscript.exe //H:cscript //E:vbscript downloadDataPRODDTA.vbs

The thing about this is that it was running against an old server (oh, did I mention the AS/400?).

Note that my script is a little more complex in terms of the files that it uses for the download, as it is only grabbing the tables with data.  This is easy - you can search my blog for how to create the datacounts file.


drop table proddta.rowcounts;

commit;

create table proddta.rowcounts (table_name varchar (120), AS400_count integer, oracle_count integer) ;

insert into proddta.rowcounts (
select table_name, 0, 0
from qsys2.systables  t1
where table_schema = 'PRODDTA');

Run the results of this query - it generates one UPDATE statement per table:

select 'UPDATE PRODDTA.ROWCOUNTS set AS400_count = ( SELECT COUNT(1) FROM ' || 'PRODDTA.' || table_name || ' ' || ') WHERE table_name = ''' || table_name || ''';' 
from qsys2.systables t1
where table_schema = 'PRODDTA' ;
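
For example, with the address book table F0101 each generated statement looks like:

UPDATE PRODDTA.ROWCOUNTS set AS400_count = ( SELECT COUNT(1) FROM PRODDTA.F0101 ) WHERE table_name = 'F0101';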

The following will give you a nice summary (joining F9860 in OL7333 to pick up the table descriptions):

select table_name, simd, AS400_count
from proddta.rowcounts, ol7333.f9860
where ltrim(rtrim(siobnm)) = ltrim(rtrim(table_name))
and AS400_count > 0
order by AS400_count desc;

Now you are ready to start the export of the tables that have rows.  Note that a few tables contain BLOBs, which do not behave, so they have been excluded - that's the hard-coded exclusion list in the query below.

1.5GB is taking about 15 minutes on this old dog - so it's doing okay.


    schema = "PRODDTA"
    server = "JDE400"
    connectString = "Driver={IBM i Access ODBC Driver};System=JDE400;Data Source=Business Data - PROD;User Id=JDE;Password=jde;"
    dim dbConnIn
    set dbConnIn = wscript.CreateObject("ADODB.Connection")
    dbConnIn.CommandTimeout = 60000
    dbConnIn.Open(connectString)
    dim fso
    set fso = wscript.CreateObject("Scripting.FileSystemObject")

        Dim tmp
        set rs = wscript.CreateObject("ADODB.RecordSet")
        rs.Open "SELECT T1.TABLE_NAME FROM QSYS2/SYSTABLES T1, PRODDTA/ROWCOUNTS T2, OL7333/F9860 T3 WHERE T1.TABLE_NAME = T2.TABLE_NAME AND ltrim(rtrim(t1.table_name)) = ltrim(rtrim(t3.siobnm)) AND  T2.AS400_count > 0 AND T1.TABLE_SCHEMA = '" & schema & "' AND t2.table_name not in ('F99410', 'F09505', 'F03B525') ORDER BY AS400_count asc", dbConnIn
        rs.MoveFirst

    tableName = ""
    

        For Each objField In rs.fields
            While objField > "" 
                tableName = objField.Value
                folder = "E:\Myriad\datadump\" & schema & ""
                If Not fso.FolderExists(folder) Then
                    fso.CreateFolder(folder)
                End If
        'wscript.echo schema & "." & tableName
                Call getData(schema, tableName, dbConnIn, folder)
                rs.MoveNext
            WEND
        Next

        rs.close



FUNCTION getData(schema, tableName, dbConnIn, folder)

        outFile = folder & "\" & tableName & ".csv"

        ' Skip tables we've already downloaded, so the script can be re-run
        If Not fso.FileExists(outFile) Then
            Set rs1 = WScript.CreateObject("ADODB.RecordSet")
            rs1.Open "Select * From " & tableName, dbConnIn

            Set ts = fso.OpenTextFile(outFile, 2, True)

            ' Header row of column names
            line = ""
            For Each tmp2 In rs1.Fields
                line = line & tmp2.Name & ","
            Next
            ts.WriteLine(Left(line, Len(line) - 1))

            ' One CSV line per record, quoting every value and doubling
            ' any embedded double quotes (nulls coerce to empty strings)
            While Not rs1.EOF
                line = ""
                For Each tmp1 In rs1.Fields
                    line = line & """" & Replace(tmp1.Value & "", """", """""") & ""","
                Next
                ts.WriteLine(Left(line, Len(line) - 1))
                rs1.MoveNext
            Wend

            rs1.Close
            ts.Close
        End If

End Function
