I’m copying a massive amount of data for a project: a direct migration of JD Edwards from World A73 to OneWorld 9.1, with a platform migration in the middle. There are a couple of fortunate things about this. It’s a JD Edwards retirement program, which means the data is static and I have time… But my clients keep telling me (or is it me telling them?) that time is money. So of course I’m doing this in the most efficient way possible.
There is no money for the Oracle Database Gateway for iSeries, which would be nice, so I’m using the heterogeneous database gateway to ODBC and perhaps taking a hit on performance, but not on the back pocket.
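For reference, the database link on the Oracle side ends up looking something like the sketch below. The link name matches the one used in the copy scripts further down, but the credentials and TNS alias are placeholders, and this assumes the dg4odbc / heterogeneous listener configuration is already in place.
-- Hypothetical database link through the ODBC gateway; user, password and the
-- TNS alias (A01PRODDTA) are placeholders, not the real values.
CREATE DATABASE LINK a01proddta
  CONNECT TO "ODBCUSER" IDENTIFIED BY "password"
  USING 'A01PRODDTA';
-- Quick sanity check that the gateway answers before kicking off the big copies.
-- Depending on the ODBC driver, the library/schema qualification may need quoting.
SELECT COUNT(*) FROM FIAPRODDTA.F4111@a01proddta;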
So, I’ve completed all of the World conversions and the P9840TC conversions and am ready for the platform migration piece. I’m saving money on disk by only using single-byte characters, which is going to save me 2TB. Did I tell you there is 5TB of data being retired / archived?
We are running the retired environment as a cloud-based bureau service, so I need to pump all of this data into my E1 tables via the Oracle HG. Sure, I could use R98403, but I cannot wait until 2015 to get the data, so things are getting a little funky.
I’ve generated the table structures with Data Pump export/import from an existing environment, using a tablespace remap and schema remap with METADATA_ONLY.
Sample expdp parameter file for metadata_only (no data in the export file):
schemas=CRPDTA,CRPCTL
#content=data_only
content=METADATA_ONLY
#directory=data_pump_dir
directory=myDpump
dumpfile=CRPDTACTL.dmp
logfile=CRPDTACTL_expdp.log
job_name=CRPDTACTL_expdp
This is the impdp parameter file:
schemas=CRPDTA, CRPCTL
remap_schema=CRPCTL:FIACTL,CRPDTA:FIADTA
remap_tablespace=CRPCTLI:FIACTLI,CRPCTLT:FIACTLT,CRPDTAI:FONFIAI,CRPDTAT:FONFIAT
table_exists_action=skip
#content=data_only
content=metadata_only
directory=myDpump
dumpfile=CRPDTACTL.dmp
logfile=FIA_impdp.log
job_name=FIA_impdp2
#parallel=4
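Once the metadata import finishes, it’s worth a quick sanity check that everything landed in the remapped schema; something along these lines (the counts depend on the JDE release and how many indexes came across):
-- Rough check that the remapped schema picked up the table and index definitions
SELECT COUNT(*) AS tables_created  FROM dba_tables  WHERE owner = 'FIADTA';
SELECT COUNT(*) AS indexes_created FROM dba_indexes WHERE owner = 'FIADTA';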
I’m then importing all of this and dropping the indexes (yes, all of them). You may think that is a little risky (actually – I’m dropping the PKs as well, and I won’t be recreating them either [that’s another story – don’t stress – it’s totally valid]). Then I’m using the HG to populate my tables, and then I’m going to generate the indexes using an impdp with sqlfile=.
Import over the same export dump file with sqlfile=. Note that you need to bin the table_exists_action line – impdp does not like that combination.
schemas=CRPDTA, CRPCTL
remap_schema=CRPCTL:FIACTL,CRPDTA:FIADTA
remap_tablespace=CRPCTLI:FIACTLI,CRPCTLT:FIACTLT,CRPDTAI:FONFIAI,CRPDTAT:FONFIAT
content=metadata_only
directory=myDpump
dumpfile=CRPDTACTL.dmp
logfile=FIADDL_impdp.log
job_name=FIADDL_impdp3
sqlfile=createFIADDL.sql
Note also that the sqlfile gets banged into the directory entry that you added earlier in the DB, with:
create directory myDpump as 'f:\dpump' ;
grant READ,WRITE ON DIRECTORY myDpump TO JDE_ROLE ;
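If you want to double-check where the file is going to land, the directory object is visible in the dictionary:
-- Directory object names are stored in uppercase in the dictionary
SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'MYDPUMP';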
Now for JDE 9.1, this created a 60MB .sql file! Wow, now there is going to be a considerable amount of work to rip this apart to get only the create index commands!
It’s pretty nice that impdp and expdp actually write the file with a bit of order: all the indexes are together and all the tables are together, so it’s easy to rip out the create index statements.
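As an aside, there are a couple of ways to avoid hand-editing the 60MB file at all: re-running the impdp with include=INDEX alongside sqlfile= should write out only the index DDL (I haven’t tried that on this dump), or the DDL can be captured from the dictionary before the indexes are dropped, roughly like this (a sketch, not what I actually ran; the spool path and settings are illustrative):
-- Capture the index DDL from FIADTA before dropping the indexes, so it can be
-- replayed after the data load.
SET LONG 1000000 PAGESIZE 0 LINESIZE 32767 TRIMSPOOL ON
SPOOL F:\fia\createFIAindexes.sql
SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, owner) || ';'
FROM   all_indexes
WHERE  owner = 'FIADTA';
SPOOL OFF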
Generate drop constraint statements:
SELECT 'ALTER TABLE FIADTA.' || table_name || ' DROP CONSTRAINT ' || constraint_name || ';'
from all_constraints
where owner = 'FIADTA' and constraint_name like '%PK' ;
Generate drop index statements:
select 'DROP INDEX FIADTA.' || index_name ||';'
from all_indexes
WHERE owner = 'FIADTA' ;
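Both of those queries just print the statements, so something like the following spools them out to files that can then be run back in (the paths and file names are just examples):
SET PAGESIZE 0 LINESIZE 200 FEEDBACK OFF TRIMSPOOL ON
SPOOL F:\fia\dropFIAconstraints.sql
SELECT 'ALTER TABLE FIADTA.' || table_name || ' DROP CONSTRAINT ' || constraint_name || ';'
FROM all_constraints
WHERE owner = 'FIADTA' AND constraint_name LIKE '%PK';
SPOOL OFF
SPOOL F:\fia\dropFIAindexes.sql
SELECT 'DROP INDEX FIADTA.' || index_name || ';'
FROM all_indexes
WHERE owner = 'FIADTA';
SPOOL OFF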
Then I use 5 separate scripts that are similar to:
spool F:\fia\copyLogs1.log
insert into FIADTA.F4111 select * from FIAPRODDTA.F4111@a01proddta;
commit;
insert into FIADTA.F41051 select * from FIAPRODDTA.F41051@a01proddta;
insert into FIADTA.F3102 select * from FIAPRODDTA.F3102@a01proddta;
commit;
…
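For what it’s worth, the copy statements themselves could also be generated from the dictionary rather than written by hand; a rough generator, assuming the same link name and remote library naming used above, would be:
-- Generate one insert + commit per table; the remote library (FIAPRODDTA) and
-- the link name are assumed to follow the pattern used in the scripts above.
SELECT 'insert into FIADTA.' || table_name ||
       ' select * from FIAPRODDTA.' || table_name || '@a01proddta;' || CHR(10) ||
       'commit;'
FROM   all_tables
WHERE  owner = 'FIADTA'
ORDER  BY table_name;
With all of the indexes and PKs gone, a direct-path insert (/*+ APPEND */ hint) may also be worth trying, although over a gateway link the ODBC fetch is likely to be the bottleneck anyway.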