Friday, 6 February 2015

Making JD Edwards unicode conversions faster

If you’ve got lots of data, you might have found this post.  If you are attempting to go live on some sort of second-rate cloud, you might also have landed here.  Let me tell you a story and back it up with some numbers.

Once upon a time there was a table in JD Edwards, let’s call it F03B11, that had 46 million rows.  This table took 5.5 hours to convert to unicode using the standard JD Edwards unicode conversion methodology.

A new conversion method got this down to 15 minutes, but what changed?  Everything!

Read on for the details:

The initial timings were attained during an upgrade and platform migration to the cloud.  This poor client was moving from the stability and performance of their impressively specified pSeries to their not so impressive (at first) cloud provider.

This is an Oracle based system at both ends: Oracle on the pSeries and Oracle in the cloud.

I guess this turns out to be a fairly hard-core comparison of cloud vs. physical tin you can touch, big Unix vs. x86, and fibre-attached SAN vs. who knows what you are getting in the cloud (100MB/s NAS, perhaps?).

But you are stuck with your hardware.  You might choose where you run certain items, but in general this piece is fixed.

What can you change to make the process quicker?

Perhaps something like:

  1. create table F0911_UNICODE (… NCHAR NCHAR blah blah)
  2. drop all F0911 indexes
  3. alter table proddta.f0911 drop constraint F0911_PK;
  4. insert into proddta.f0911_unicode select * from proddta.f0911;
  5. commit;
  6. create all F0911 indexes

This is instead of using the stored procedure and function that JD Edwards calls.
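
To give that some shape, here is a minimal sketch of the whole flow.  The abbreviated column list, the hints and the parallel degree of 8 are my own illustrative assumptions, not JD Edwards output; the real unicode DDL comes from a unicode data source, as described below.

  -- Illustrative only: column list abbreviated; CHAR becomes NCHAR
  -- and VARCHAR2 becomes NVARCHAR2 in the generated definition.
  create table proddta.f0911_unicode (
    glkco nchar(5),
    gldct nchar(2)
    -- ...the remaining F0911 columns...
  );

  -- Step 3 from the list: clear the old primary key constraint.
  alter table proddta.f0911 drop constraint f0911_pk;

  -- Let the bulk copy itself run in parallel.
  alter session enable parallel dml;

  -- Direct-path, parallel insert; the hints are my tuning suggestions,
  -- not part of the standard JDE conversion.
  insert /*+ append parallel(8) */ into proddta.f0911_unicode
  select /*+ parallel(8) */ * from proddta.f0911;

  commit;

  -- Rebuild the indexes after the load, when it is far cheaper than
  -- maintaining them row by row during the insert.
  create unique index proddta.f0911_pk
    on proddta.f0911_unicode (gldoc, gldct, glkco /* ...remaining key columns... */)
    parallel 8 nologging;

Presumably you would then finish by renaming f0911 aside and renaming f0911_unicode to f0911 once the row counts match.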

I’ve seen a 25% improvement, and often more, running this manually on the large tables.

It’s easy to get the unicode definitions of the tables: just generate them against a unicode data source (the control tables data source is handy) and then grab the DDL from SQL Developer.

[Screenshot: the generated unicode table DDL in SQL Developer]
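
If you would rather script that step than click through SQL Developer, DBMS_METADATA will hand back the same DDL.  PRODCTL and F0911 below are example owner and table names, so substitute your own unicode data source schema:

  -- Extract the generated unicode table's DDL from the dictionary.
  set long 100000
  select dbms_metadata.get_ddl('TABLE', 'F0911', 'PRODCTL') from dual;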

I generally employ the above for the top 20 tables and use the standard conversion for the rest.
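
Picking that top 20 is a single dictionary query.  A minimal sketch, assuming DBA access and PRODDTA as the data schema:

  -- Largest 20 tables in the data schema: the manual-conversion shortlist.
  select *
  from (
    select segment_name,
           round(bytes / 1024 / 1024 / 1024, 1) as gb
    from   dba_segments
    where  owner = 'PRODDTA'
    and    segment_type = 'TABLE'
    order  by bytes desc
  )
  where rownum <= 20;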

As a side note, the data in this 46 million row F03B11 was 42.6GB in single-byte mode but 71.5GB in unicode.  For this table the unicode copy was roughly 68% larger.
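
You can check the growth for any converted table the same way, assuming the _UNICODE naming from the steps above:

  -- Growth factor of the unicode copy over the single-byte original;
  -- for this F03B11 it works out to 71.5 / 42.6, roughly 1.68.
  select round(
           sum(case when segment_name = 'F03B11_UNICODE' then bytes end) /
           sum(case when segment_name = 'F03B11'         then bytes end), 2
         ) as growth_factor
  from   dba_segments
  where  owner = 'PRODDTA'
  and    segment_name in ('F03B11', 'F03B11_UNICODE');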

1 comment:

Unknown said...

In the step "create table F0911_UNICODE", is this one generated using a unicode data source?

Thanks,
Manoj
