Thursday, 29 August 2019

blocking / locking... potato... potato


Blocking is a funny problem and quite often one of the last things that I look for when there are JD Edwards issues.

We've had some recent problems with some serious blocking, but reconciling the client logs with the server logs has made the analysis almost impossible...  What I mean is that there are often client errors with no matching server errors.  The web server will give up, and there is nothing in the enterprise server logs...  until there are IPC errors because the queues are too large [too many BSFNs running in the queue].  But there are lessons in this.

First, we could see that there was an increase in the number of instances of COSE#1000.
ERP Analytics shows us all of the historical details - wow, that is a lot of problems:


Detailed AWS CloudWatch Logs Insights gives us unparalleled capability to query and audit all of our logs:

Can you see what is actually being done here?  It is quite amazing.  We are looking at all logs in the last X [hours|days|minutes], in all log files for 14 servers, enterprise and web…  looking for the relevant client [web server] and server [app server] logs that relate to BSFN errors between the two, then showing this as a timeline.

I think that the big lesson here is adopting a consolidated approach to your logging, like a Splunk-type approach.  If you adopt some consistency, then all of the advantages of monitoring and better global interrogation are opened up to you.

What we have actually done in this instance is use CloudWatch to ingest all of the JD Edwards log files.  We are consolidating Server Manager, WebLogic, enterprise server, system out and /var/log into our own log streams that we can query.

An example of a fairly complex query is below:

**CloudWatch Logs Insights** 
region: ap-southeast-2 
#Here are the log files that I want to query
log-group-names: jde_wlsServerJASLogsout, jde_wlsServerJASLogs, jde_webServerLogs, jde_entServerLogs  
#looking at the last hour
start-time: -3600s 
end-time: 0s 
query-string:
```
#fields I want to look at and display on the console
fields @timestamp, @message, @logStream, @log, BSFN
#search string in the logs - this is a simple example that will only match web server logs
| filter @message like 'COSE#1000'
#create my own fields from the output, with a little bit of regex magic, so I can summarise based upon them
#(the regex was truncated in the original post; this reconstruction extracts the fields shown in the results below)
| parse @message /(?<date>\d{2}\s+\S{3}\s+\d{4})\s+(?<time>\S+)\s+\[(?<errorlevel>\w+)\s*\]\s+-\s+\[(?<module>\w+)\].*BSFN:(?<BSFN>\S+)\s+user:\s*(?<user>\S+)\s+Env:(?<Env>\S+)/
| sort @timestamp asc
| limit 10000
```

And the results:
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|       @timestamp        |                                                                                                                 @message                                                                                                                  |                @logStream                 |              @log              |        BSFN         |    date     |     time     | errorlevel | module  |   user    |   Env    |
|-------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------|--------------------------------|---------------------|-------------|--------------|------------|---------|-----------|----------|
| 2019-08-28 06:15:19.809 | 28 Aug 2019 07:15:19,573 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@3f09721b: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user:USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:15:19,573 | WARN       | RUNTIME | AMRE001 | JPD920UK |
| 2019-08-28 06:24:31.258 | 28 Aug 2019 07:24:31,092 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@2c3526cf: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:24:31,092 | WARN       | RUNTIME | AMI001 | JPD920UK |
| 2019-08-28 06:34:21.978 | 28 Aug 2019 07:34:21,802 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@74a86b0a: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:34:21,802 | WARN       | RUNTIME | AME001 | JPD920UK |
| 2019-08-28 06:42:52.420 | 28 Aug 2019 07:42:52,371 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@12ddb7bb: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6025) SocID:37364 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:42:52,371 | WARN       | RUNTIME | AE001 | JPD920UK |
| 2019-08-28 06:45:25.972 | 28 Aug 2019 07:45:25,747 [WARN  ]  - [RUNTIME]         *ERROR* CallObject@256577d3: COSE#1000 Request timeout: timeout after 90000ms host JDEPROD1:6017(6024) SocID:37846 PID:12066 BSFN:CommitReceiptHeader user: USER Env:JPD920UK  | WEBUK_ip_10_116_22_119i-084b990b0bb60c66e | 202760777498:jde_webServerLogs | CommitReceiptHeader | 28 Aug 2019 | 07:45:25,747 | WARN       | RUNTIME | AM001 | JPD920UK |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Entries like the above can mean many things: network problems, security problems...  But when you look at the server logs and see nothing, start to think about blocking.  Also look at the BSFN summary page for the web server in question; this will show you the BSFN runtimes – max, min and average.  If this shows the functions are generally very fast, then you know that you might have some locking problems.

Now, validate this at the database.  These statements will only return rows while the locking / blocking is happening – so have them ready.  These are also Oracle statements.

At the database:

Show me the blocking:

select
   (select username from v$session where sid=a.sid) blocker,
   a.sid,
   ' is blocking ',
   (select username from v$session where sid=b.sid) blockee,
   b.sid
from
   v$lock a,
   v$lock b
where
   a.block = 1
   and b.request > 0
   and a.id1 = b.id1
   and a.id2 = b.id2;

What are they doing / blocking / locking?

select
   c.owner,
   c.object_name,
   c.object_type,
   b.sid,
   b.serial#,
   b.status,
   b.osuser,
   b.machine
from
   v$locked_object a,
   v$session b,
   dba_objects c
where
   b.sid = a.session_id
   and a.object_id = c.object_id
   and b.sid = 1340;

All in one – with JDE information - machine, process, program and statement:

select
   c.owner,
   c.object_name,
   c.object_type,
   b.sid,
   b.serial#,
   b.status,
   b.osuser,
   b.machine,
   b.process,
   b.program,
   b.sql_id,
   REPLACE(d.SQL_TEXT,CHR(10),'') STMT
from
   v$locked_object a,
   v$session b,
   dba_objects c,
   v$sqltext d
where
   b.sid = a.session_id
   and a.object_id = c.object_id
   and d.address = b.sql_address
   and b.sql_hash_value = d.hash_value
   and b.sid in
      (select a.sid
       from v$lock a, v$lock b
       where a.block = 1
         and b.request > 0
         and a.id1 = b.id1
         and a.id2 = b.id2);

If you see a long list of results (or a single line), you might want to consider your next actions.  Blocking can be pretty normal, but it can also be the result of a deadlock (surely that should be a movie title).  A deadlock occurs when two sessions each hold a lock that the other is waiting on.  Interestingly, I've seen locks last for hours without being deadlocks.  Another very interesting problem that does occur is that a client (JDBC) process can be blocked by a server process (jdenet_k).  I've seen this quite a bit and have even programmed enough bad code to create it myself - ha!

A thing to remember is that if a jdenet_k process has been blocking for > 5 minutes, there is probably something wrong (the client will have given up - the timeout is probably 90 seconds)...  yet the BSFN is still running.  The actual human that tried the first time is probably trying again in another JAS session...  Things could be escalating...  Generally I think you can kill those sessions.  UBEs are a little different - leave them alone.
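
If you do make the call to kill a blocking session, the sid and serial# from the queries above are all you need.  A minimal sketch – the values here are placeholders:

-- kill the blocking session identified above (sid=1340, serial#=4711 are placeholders)
alter system kill session '1340,4711' immediate;

Oracle will roll back the session's transaction and release its locks; the stuck jdenet_k call should finally error out, which beats an escalating pile of blocked sessions.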

If you see Java doing the locking, you need to make the call.  Try to get back to the JDE session that is doing the damage and see what the user is/was doing [they'll generally deny it].  It can be little things that cause big locks.


Monday, 26 August 2019

Tip 10: Continuous improvement

It’s a theme that is not going to go away.  If you have implemented all of the shiny new toys from JD Edwards, then you need to show ROI. 

This is a theme that we are going to hear a lot in the coming years.  Even the way Oracle is releasing JD Edwards functionality follows the ideals of continuous delivery, with the commitment to support release 9.2 until 2030.  We are getting improvements continuously, not in major releases. 

I think there are 3 ways that you can make a difference to your business in relation to adopting continuous improvements with JD Edwards.

Finding and implementing RPA (Robotic Process Automation) Opportunities

There is so much opportunity here, and all of the tools are at your fingertips.  You can use ERP Analytics to find processes (applications / sequences) that are run frequently.  Use this data to go back to the business and observe what the end users are doing.  For instance, if you see that P42101 is being run 2,000 times a day – look for opportunities to improve this process.  This could be EDI, this could be spreadsheet macros that call an orchestration.  What’s an orchestration I hear you ask? 

Orchestration is the ability to turn any piece of JD Edwards functionality into an API – an API that is easy to call and can be authenticated with the user's username and password.  So exposing functionality to an Excel macro would be very easy.  You could write an orchestration to enter a sales order (in my example) and then a smart macro to call the orchestration with the data on the spreadsheet.  It could prompt for a username and password.  If your users are being sent orders in spreadsheets – you may have just increased their productivity and reduced a significant amount of human error.

RPA implementation can be for simple or complex processes.  Look for repetition and eliminate it, as an ex-programmer – if you see repetition in code – there are inefficiencies in that code.  ERP Analytics will then allow you to measure the success of your RPA, as the usage of the applications should go down with good RPA implementation.

Orchestration is free to implement and can make a huge difference to mundane tasks.

Continually optimise your architecture

This may be more relevant for public cloud implementations, but let’s be honest – most are public cloud implementations.  You must continually drive for reduced hosting costs for all of the JD Edwards assets.  Quite often this is difficult, unless you have architected your solution for the cloud, turned the monolithic JD Edwards into an elastic cloud tenant.  This can be done.
Fusion5 has created what we think is a world-first elastic JD Edwards cloud formation for AWS.  This architecture has the ability to expand and contract with load and demand.  We are able to define the rules to create new web servers and new batch servers and then retire them when they are not needed.  This allows our clients to have a very efficient architecture, and if they feel that they are paying too much, we can reduce the size and number of machines accordingly.

Figure 14: Choosing between re-platforming and rehosting can be difficult, a re-platform is going to provide payback over time

A good illustration of the options you have available to you when you migrate is above: a lift and shift [rehost] is a simple project, but it will not allow you to get true cloud benefits from native constructs (cheaper storage, elasticity or additional security).  If you do a re-platform (as I recommend) you can reshape JD Edwards to be a much more flexible cloud tenant.
If you did a rehost, I'd guess you might implement about 8 cloud constructs (EC2, EBS, ALB, multiple AZs, EFS if you are lucky), whereas if you were re-platforming you might use RDS, EC2, EFS, ALB, ASGs, CloudWatch, Step Functions, Route 53, S3, Launch Templates, Target Groups and more!
It is much easier to get savings out of a re-platformed architecture.
At a number of sites I’ve seen savings of more than 50% month on month when we work hard at cloud cost reduction.

Continue to update JD Edwards

Patches for JD Edwards are now continuous, so your adoption should also be continuous.  I recommend making a plan, with a schedule of when you are going to take patches, when you are going to test them and when you are going to put them into prod.  Start simple: choose twice a year and then work backwards for how long you are going to test, how long for retrofit, etc. 
If you've been lucky enough to re-platform (as above) then you are going to have some distinct advantages when it comes to deployment: changes can be deployed and tested much more rapidly and, actually, continuously.  If you have a flexible cloud implementation you could build and deploy an alternate package for production and ease it out into the user community.  Our AWS cloud formation allows us to deploy packages without outages; we can do this on a schedule and therefore allow environments to consume change at their own pace.  If there is an issue, we can back it out immediately and fix it.
Figure 15:  Sample continuous deployment schedule, simplicity is important.
A flexible architecture allows you to be more aggressive with your consumption of change and keep more “up to date” with the latest code from Oracle.



Sunday, 25 August 2019

Tip 9: Monitor relentlessly

Tip 9 and 10 are closely related, but this tip is all about feedback.  If you know things are going well it’s great.  If you know things are going poorly, that is great too – because you can fix it.  The worst case scenario is that things are going pear-shaped in the background and you only hear about it when a user raises a case.  You need to be KPI’d on finding errors before your users – period.

How can you find errors before your users?  Here are a couple of tricks that Fusion5 implements for our clients:
ERP Analytics
Sometimes referred to as the black box for your ERP, we use this to monitor performance and usage of JD Edwards.  It records every page load, by every user – every minute of every day.  This information is incredibly powerful for benchmarking and comparing ANY change in your entire enterprise.
UBE Analytics
Having access to the runtime, rows processed, user and server information for every batch job allows us to continually monitor the critical two tiers of JD Edwards.  Reading this alongside ERP Analytics gives more information on where performance problems might be, and another point of data to compare with.
Log monitoring
Fusion5 has a very advanced cloud formation in AWS which utilises CloudWatch to monitor all log files, UBEs and JD Edwards connections.  This enables us to graph and monitor user connections and concurrent UBEs, and to search ANY logfile for ANY error – ever.  This is a single console for all logs across JD Edwards.  This approach and consistency can be used with many different installation types, not just those on AWS.
DB growth monitoring
Keeping an eye on table by table database growth is critical for understanding if a process has gone rogue.  It’s also critical for maintaining consistent database performance.  Regular rowcount reporting and size reporting will ensure that you can deliver a service level to your users that is acceptable.  Maintenance of your data size is important for costs and restoration times.
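
As a sketch of the kind of table-by-table reporting I mean – assuming an Oracle database and a hypothetical PRODDTA data schema (num_rows is only as fresh as your last statistics gather):

-- size and approximate rowcount per table; capture this regularly and trend it
select s.segment_name table_name,
       round(sum(s.bytes)/1024/1024) size_mb,
       max(t.num_rows) approx_rows
from dba_segments s
left join dba_tables t on t.owner = s.owner and t.table_name = s.segment_name
where s.owner = 'PRODDTA' and s.segment_type = 'TABLE'
group by s.segment_name
order by size_mb desc;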

Figure 12: Sample custom dashboard showing metrics that are 100% relevant for JD Edwards

Figure 13: AWS log insights provides intelligence that would previously be impossible to find.  This shows a graphical representation of errors and type of errors over 8 separate web servers.


Friday, 23 August 2019

Tip 8: Security

Security sometimes takes a back seat, but please, please – don’t let it.  Without exaggeration it’s about 1,000,000 times easier to make security changes before a go-live than after.

Simple things need to be completed and tested before go-live:
  • Complete production security model in place, including row security
  • All table auditing enabled (if you do this)
  • Complex JDE password for database and for JDE
  • Do not use the JDE account as the system account for your users.  Please do not do this; create “jdeuser” or something much better that will not get locked out
  • Check that the password expiry policy for Oracle is not left at the default, or else your system accounts will start locking out (see the sketch after this list)
  • Change your default node configuration, do NOT leave this standard.  This is one of the largest security holes in the suite.
  • LDAP or SSO is critical.  Controlling user access is easiest if someone else is doing it (I find).  So if the desktop team is decommissioning users (oh and changing passwords) this is a big bonus and will save time and money.   The Fusion5 SSO offering is really cool too, especially if you want to use Azure to MFA people under certain criteria – all done by someone else!
  • Make sure that your data is encrypted at rest and in transit
  • Get your security groups tight and your firewalls enabled
  • Default access should be no access
  • Adopt the most stringent security posture your business can afford
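
On the password expiry point above, a minimal sketch – assuming your JDE system accounts sit in the DEFAULT profile (a dedicated profile for service accounts is the cleaner option):

-- check the current expiry policy
select profile, limit from dba_profiles where resource_name = 'PASSWORD_LIFE_TIME';
-- stop the DEFAULT profile expiring your system account passwords
alter profile default limit password_life_time unlimited;
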
Here is an interesting tip: quite often row security can be good for performance.  Why?  Because it ensures that there is a where clause on what is generally an indexed field.  If you are row securing MCU or CO, then the where clause is enforcing less IO and hopefully a quicker result!
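
To make that concrete, here is a hypothetical illustration of the kind of predicate row security appends – the table, columns and values are illustrative only:

-- without row security
select * from PRODDTA.F0911 where GLDCT = 'JE';
-- with row security on business unit (MCU), the generated SQL gains an indexed filter
select * from PRODDTA.F0911 where GLDCT = 'JE' and GLMCU in ('       M30','       M40');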

Thursday, 22 August 2019

Tip 7: test everything when you are doing "mock" go-lives

As I said, if I'm at a go-live, it's not my first rodeo.  Sure, I'll always have a nervous demeanour and perhaps a bit of a sick feeling in my stomach, but I do love it.  I like seeing all of the data and statistics around me that somewhat affirm the planning and effort that have gone into the project.  It's simple, when things go to plan.

Of course when a user does an open find over the F4211 and F42119 using a single non-indexed field and then wants to go to the end of the grid… with probably 20 million rows to be displayed…  I might not have tested that (nor catered for it in the sizing).  Oh, and when it doesn't return and they do it another 10 times (to be sure), that also was not in our test plan.  Nonetheless – there will always be challenges and things unexpected – your job is to reduce the number of them.

Mock go-lives are critical.  They do the following important tasks:

Assign responsibilities to all tasks, both prior to, during and after the upgrade.  Ensure that these are on the run sheet and have all been tested before.
Version your run sheet, make sure all line items are filled out and ensure that there are accurate timings.  You will not get good timings on the first conversion, and perhaps not the second.  Subsequent to that you should be building out exactly how long the conversion is going to take so that you can determine if you need to "look outside the square" when it comes to outage windows.
Make sure that people run integrity reports and check the results every time.  I've been involved in go-lives where an integrity did not match on the go-live weekend – but guess what?  It never balanced in the last 5 mock go-lives – it was not compared.  Getting everyone to run every step is a big lesson.
I only really care about rowcounts, but I know that the business will want integrity reports – so you might want a few.  Summing amount columns or hashing is another way to make technical people really happy (see the sketch after this list).
Ensure that you move some WSJ history too.  Nothing worse than a user logging in and not seeing the report they ran on the Friday before go-live weekend.  Anything you can do to reduce the call volume on the Monday after go-live – you should do it!
Timing, if things are too fast you probably have a problem.  If things are too slow, you probably have a problem.  Make sure that things are predictable. 
Sleep is important; people do not make good decisions under lots of pressure and with a lack of sleep.  Go-lives are tough, and should be, but not at the expense of the team.  Don't let the team drive if they've worked 20 hours – get a local hotel and an Uber.  Plenty of food and drinks for the project team too.
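
On the rowcount and amount-summing point above, a minimal sketch – the source and target schema names (OLDDTA / PRODDTA) are hypothetical, and SDAEXP is the extended price on a sales order line:

-- run against source and target, then compare the pairs of numbers
select count(*) row_count, sum(SDAEXP) amount_checksum from OLDDTA.F4211;
select count(*) row_count, sum(SDAEXP) amount_checksum from PRODDTA.F4211;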

Get a runsheet, live by the runsheet and succeed with the runsheet.  Regular comms are critical – good luck!

Wednesday, 21 August 2019

Tip 6: Load test your interactive activity

You’ve got my full attention in this section, I really enjoy load testing.  Whether this is using OATS or other software, it’s a bit of a passion of mine.

Here is my recipe for success for interactive load testing.  You have your batch results above, so you are pretty confident with database size and hopefully application (logic layer).  We are now going to test the interactive performance of JD Edwards and how the user is going to experience things.

The first question you need to be honest about is the peak number of users that you are going to test.  If Server Manager tells you 150 users are on, how many people would you load test with?  I can tell you – A LOT LESS!  I would test 40 in that scenario, with a wait time of 5 – 8 seconds.  Let me show you why:

Figure 8: Standard ERP Analytics screen showing current activity, both throughput and location


My interactive report says there are 56 users logged into JDE and active in the last 5 minutes.  This is an interactive dashboard that Fusion5 ERP Analytics customers have access to.  You can also see the pages per minute and pages per second.  We are peaking at about 150 a minute in that snapshot, but I can find the peaks over the last 2 months if needed.
Figure 9: Server Manager’s view of the connected world is generally artificially high
Yet Server Manager is trying to tell me that I have 288 users.
Even allowing for my classic rule that AIS connections double the count – that is 144 logged in, but only 58 active in the last minute.

What I’m trying to say here is don’t stress your system with too many users. Tuning for 3 x the worst scenario possible is actually going to slow you down.
Figure 10: ERP Analytics screen showing time of day, page views and performance. Reiterating the fact that a busy server is a fast server!

The graph above is unequivocal in showing that performance is better when pages are busy.  Do not have too many idle web servers because you have catered for 3x the users – your users are actually going to experience worse performance.  This is MORE dramatic as the users drop off.  I see around a 20% performance improvement when a JAS server is loaded and cached up nicely.

Now that you can determine the number of users you need for load testing, you can execute this with the software or services that you have access to.  At Fusion5 we use OATS and can assist with any load testing you need.  We also validate and continually measure interactive performance using ERP Analytics, which can produce all of the graphs that you see above.

Anecdotally, good performance from JD Edwards is when pages load in about 1.1 seconds. 
Figure 11: Another view of performance over time, but separating download time and server response time. The page load is generally in direct correlation to server response time.

We measure and record exactly what the end user experiences.  We can also report on the network traverse time and the server response time.  These are all critical values when determining what you need to fix.  We can run this reporting on different users or geographies too, so you can compare performance in a single city or around the world.

Tuesday, 20 August 2019

Tip 5: Load test your batch activity


Batch activity is really a pure performance comparison, which takes away a potential tier in your traditional 3-tier JD Edwards architecture (web, app & DB).  The nice thing about this is that you are really only testing your batch server and your database server.

JD Edwards (in my mind) submits two types of jobs:
  1. Type 1 runs a series of large SQL statements.  These are generally not complex, as the batch engine’s capacity to run complex statements (even simple aggregates) is not good.  Therefore you are going to get large open selects, which will generally then perform subsequent actions based upon each row that is returned in the main loop (eg. R09705 – Compare Account Balances to Transactions).
  2. Type 2 is a punchy UBE that gets in with some tight data selection, generally runs a pile of BSFNs and then jumps out again (eg. R42565 – Invoice Print).
It’s easy to categorise these jobs because of the amazing job Oracle did with “Execution Detail”, specifically rows processed.
Figure 6: View taken from "Execution Detail" row exit from Work with Submitted Jobs (WSJ)

You can actually databrowse this (V986114A) and see column Alias: PWPRCD, defined as “The number of rows processed by main driver section of the batch job”.  I use this in a lot of SQL around performance, as I can get rows per second for my UBEs – which is a great comparison device.  If you see consistently low numbers here, it is probably a punchy type 2 UBE; lots of rows, probably type 1.
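
A hedged sketch of the sort of SQL I use for this – PWPRCD is confirmed above, but the schema and every other column name here are assumptions, so check your own copy of F986114A / V986114A first:

-- rows per second per job; the PWxxx names other than PWPRCD are illustrative
select PWPID ube, PWVERS version, PWPRCD rows_processed,
       round(PWPRCD / greatest((PWETDTIM - PWSTDTIM) * 86400, 1), 2) rows_per_sec
from SVM920.F986114A
order by rows_per_sec;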

Make sure that you test UBEs in both of the categories that I have listed above.  Some are going to test the database more, some are going to test the CPU on the batch server and some are going to test network I/O.  Make sure that you “tweak” your TCP/IP too, as I have seen this make some impressive differences in batch performance. (search Doc ID 1633930.1 and tweak).

The Fusion5 UBE Analytics suite allows you to do this comparison immediately and gives you some impressive power to compare periods, servers and more.
Figure 7: UBE Analytics summary screen - week on week performance comparison
We can choose a date range for compare and let the system do the rest.

You can see, for each UBE and version combination that has been run for this fictional client in the date range specified, a comparison with the previous period – performance has slowed down in the top 12 rows.  I’d be looking at what has changed!
The UBE Analytics data is not stored in JD Edwards, so you never lose your history.

Monday, 19 August 2019

dbms_output and fflush

This is purely a reminder for me for next time: getting the output from a begin block, vs. not.

It seems that a begin block runs on the server without coming back to the client; this is nice for speed, but crappy for feedback.  You can understand why, too.  If the server is busy running all of your statements, then you are not going to get the output back.

And guess what, heaps of the cool statements can only be run in server-side code, which does make total sense too.  Oh, also there is no fflush with dbms_output – so my blog title is a little misleading. 

It's an important fact to remember: all of your directives like “set echo on” are for the client.  So spool is a SQL*Plus command, not a PL/SQL command – BOOM!  Mind blown!

This post might also help you purge your work centre data, as quite often [for some more than others] you might get a few too many records in this area.


set echo on
set feedback on
set timing on
SET SERVEROUTPUT ON FORMAT WORD_WRAPPED
spool truncateUselessWorkCentre.log
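-- F01131 and its child tables hold the Work Centre (PPAT) messages;
-- F00165/F00166 hold the associated media objects, keyed by text name GT01131.
-- The ZZAN8 values below are the address numbers whose messages are purged.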
DELETE from TWEDTA.F00166 where GTOBNM='GT01131'
AND GTTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));
commit;
DELETE from TWEDTA.F00165 where GDOBNM='GT01131'
AND GDTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));
commit;
DELETE from  TWEDTA.F01131T where
ZCSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));
commit;
begin
   for a in 119200..119228 loop
        dbms_output.put_line('06 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000006 and zmdti = a;
        commit;
     dbms_output.put_line('07 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000007 and zmdti = a;
        commit;
     dbms_output.put_line('13 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000013 and zmdti = a;
        commit;
   end loop;
end;
/
DELETE from  TWEDTA.F01133 where
ZTSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));
commit;
DELETE from  TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013);
commit;
spool off;
quit;


gives the output:
SQL> DELETE from TWEDTA.F00166 where GTOBNM='GT01131'
  2  AND GTTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));

8155 rows deleted.

Elapsed: 00:00:02.75
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> DELETE from TWEDTA.F00165 where GDOBNM='GT01131'
  2  AND GDTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));

8155 rows deleted.

Elapsed: 00:00:04.48
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> DELETE from  TWEDTA.F01131T where
  2  ZCSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));

2330 rows deleted.

Elapsed: 00:00:00.36
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> begin
  2  for a in 119200..119228 loop
  3       dbms_output.put_line('06 about to process date' || to_char(a));
  4       DELETE from  TWEDTA.F01131M where ZMAN8 = 99000006 and zmdti = a;
  5       commit;
  6       dbms_output.put_line('07 about to process date' || to_char(a));
  7       DELETE from  TWEDTA.F01131M where ZMAN8 = 99000007 and zmdti = a;
  8       commit;
  9       dbms_output.put_line('13 about to process date' || to_char(a));
10       DELETE from  TWEDTA.F01131M where ZMAN8 = 99000013 and zmdti = a;
11       commit;
12  end loop;
13  end;
14  /

--but, this just comes at once, as there is no concept of fflush for the server, you need to wait for the code to return

06 about to process date119200                                                 
07 about to process date119200                                                 
13 about to process date119200                                                 
06 about to process date119201                                                 
07 about to process date119201                                                 
13 about to process date119201                                                 
06 about to process date119202                                                 
07 about to process date119202                                                 
13 about to process date119202                                                 
06 about to process date119203                                                 
07 about to process date119203                                                 
13 about to process date119203                                                  
06 about to process date119204                                                 
07 about to process date119204                                                 
13 about to process date119204                                                  
06 about to process date119205                                                 
07 about to process date119205                                                 
13 about to process date119205                                                  
06 about to process date119206                                                 
07 about to process date119206                                                 
13 about to process date119206                                                  
06 about to process date119207                                                 
07 about to process date119207                                                 
13 about to process date119207                                                 
06 about to process date119208                                                 
07 about to process date119208                                                 
13 about to process date119208                                                 
06 about to process date119209                                                 
07 about to process date119209                                                 
13 about to process date119209                                                 
06 about to process date119210                                                  
07 about to process date119210                                                 
13 about to process date119210                                                 
06 about to process date119211                                                 
07 about to process date119211                                                 
13 about to process date119211                                                 
06 about to process date119212                                                 
07 about to process date119212                                                 
13 about to process date119212                                                 
06 about to process date119213                                                 
07 about to process date119213                                                 
13 about to process date119213                                                  
06 about to process date119214                                                 
07 about to process date119214                                                 
13 about to process date119214                                                  
06 about to process date119215                                                 
07 about to process date119215                                                 
13 about to process date119215                                                  
06 about to process date119216                                                 
07 about to process date119216                                                 
13 about to process date119216                                                  
06 about to process date119217                                                 
07 about to process date119217                                                 
13 about to process date119217                                                 
06 about to process date119218                                                 
07 about to process date119218                                                 
13 about to process date119218                                                 
06 about to process date119219                                                 
07 about to process date119219                                                 
13 about to process date119219                                                 
06 about to process date119220                                                 
07 about to process date119220                                                 
13 about to process date119220                                                 
06 about to process date119221                                                 
07 about to process date119221                                                 
13 about to process date119221                                                 
06 about to process date119222                                                 
07 about to process date119222                                                 
13 about to process date119222                                                 
06 about to process date119223                                                  
07 about to process date119223                                                 
13 about to process date119223                                                 
06 about to process date119224                                                  
07 about to process date119224                                                 
13 about to process date119224                                                 
06 about to process date119225                                                  
07 about to process date119225                                                 
13 about to process date119225                                                 
06 about to process date119226                                                  
07 about to process date119226                                                 
13 about to process date119226                                                 
06 about to process date119227                                                 
07 about to process date119227                                                 
13 about to process date119227                                                 
06 about to process date119228                                                 
07 about to process date119228                                                 
13 about to process date119228                                                 

PL/SQL procedure successfully completed.

Elapsed: 00:08:47.17
SQL> DELETE from  TWEDTA.F01133 where
  2  ZTSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013));

10484 rows deleted.

Elapsed: 00:00:00.91
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> DELETE from  TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013);

10484 rows deleted.

Elapsed: 00:00:02.07
SQL> commit;

Commit complete.

Elapsed: 00:00:00.01
SQL> spool off;

Finally, the script is improved to use a dynamic spool file name and relative dates.  Note also that, now that I know what is server code and what is client code, I do some auditing at the top and bottom of the script to improve the results.

set echo on
set feedback on
set timing on
SET SERVEROUTPUT ON FORMAT WORD_WRAPPED

col dt new_value dt
select to_char(sysdate,'YYYYMMDDHH24MISS') dt from dual;

spool truncateUselessWorkCentre_daily_&dt.log

select count(1), 'F00166' from twedta.f00166;
select count(1), 'F00165' from twedta.f00165;
select count(1), 'F01131T' from twedta.f01131T;
select count(1), 'F01131M' from twedta.f01131M;
select count(1), 'F01133' from twedta.f01133;
select count(1), 'F01131' from twedta.f01131;

DECLARE
fromdate number;
todate number;
begin
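   -- JDE "Julian" dates are CYYDDD: (year - 1900) * 1000 + day of year,
   -- hence to_char(sysdate, 'YYYYDDD') - 1900000 yields a JDE date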
   select (to_char(sysdate-6, 'YYYYDDD')-1900000) into fromdate from dual;
   select (to_char(sysdate-3, 'YYYYDDD')-1900000) into todate from dual;
   for a in fromdate..todate loop
      DELETE from TWEDTA.F00166 where GTOBNM='GT01131'
      AND GTTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a);
      commit;
      DELETE from TWEDTA.F00165 where GDOBNM='GT01131'
      AND GDTXKY IN (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a);
      commit;
      DELETE from  TWEDTA.F01131T where
      ZCSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a);
      commit;
        dbms_output.put_line('06 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000006 and zmdti = a;
        commit;
      dbms_output.put_line('07 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000007 and zmdti = a;
        commit;
      dbms_output.put_line('13 about to process date' || to_char(a));
        DELETE from  TWEDTA.F01131M where ZMAN8 = 99000013 and zmdti = a;
        commit;
      DELETE from  TWEDTA.F01133 where
      ZTSERK in (SELECT ZZSERK from TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a);
      commit;
      DELETE from  TWEDTA.F01131 where ZZAN8 in (99000006, 99000007, 99000013) and ZZDTI = a;
      commit;
   end loop;
end;
/

select count(1), 'F00166' from twedta.f00166;
select count(1), 'F00165' from twedta.f00165;
select count(1), 'F01131T' from twedta.f01131T;
select count(1), 'F01131M' from twedta.f01131M;
select count(1), 'F01133' from twedta.f01133;
select count(1), 'F01131' from twedta.f01131;
spool off;
quit;
