I know that this is old skool – but I’ve not had time to re-learn dpump, so I’m putting on the glomesh top and nike high tops (and some oakley frogskins) and going old skool on my data refresh knowledge.
You might bang on about a few extra things that dpump might do, but imp and exp are pretty rad (the word 'rad' is from the same era as imp and exp).
I’m also making a big thing about global search and replace, because I can never remember the command.
EXP
exp PD910/PASSWORD@instance PARFILE=expPD910.par
This file contains the following:
TABLES=('F980021', 'F95622', 'F95623', 'F95624', 'F95620', 'F95621', 'F952420', 'F95600', 'F98710', 'F982400', 'F98305V', 'F98306', 'F95625', 'F95626', 'F95627', 'F95628', 'F969861', 'F9698710', 'F9698712', 'F980011', 'F983051', 'F983052', 'F952400', 'F952411', 'F98711', 'F98712', 'F98713', 'F98720', 'F98740', 'F98741', 'F98743', 'F98745', 'F98750', 'F98751', 'F98752', 'F98753', 'F98760', 'F98761', 'F98762', 'F98770', 'F98950', 'F98950BK', 'F98950D', 'F989998', 'F989999')
LOG=exp910.log
FILE=export910_b4golive.dmp
This creates the dumpfile with the data and structures (FILE=) and also a log file (LOG=)
Cool, now I want to import this data into PP910 – yes, I'm refreshing PP central objects with PD central objects.
So I drop all of the indexes in PP910 and truncate all of the tables – simple.
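If you want to script that clean-up rather than do it by hand, something like this rough sketch (run as PP910 in SQL*Plus; spool name is made up, and in practice you would restrict both queries to the F-tables in the list above) generates the statements from the data dictionary:

```sql
-- Sketch: generate the clean-up DDL from the data dictionary.
-- Run as PP910; restrict to the tables in the export list in practice.
SET PAGESIZE 0 FEEDBACK OFF
SPOOL cleanupPP910.sql
SELECT 'DROP INDEX ' || index_name || ';' FROM user_indexes;
SELECT 'TRUNCATE TABLE ' || table_name || ';' FROM user_tables;
SPOOL OFF
```

Then run the spooled cleanupPP910.sql and you're ready to import.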
IMP
Then I IMP the data with the following (note that I don’t want any of the full packages)
imp PP910/PASSWORD@instance
TABLES=('F980021', 'F95622', 'F95623', 'F95624', 'F95620', 'F95621', 'F952420', 'F95600', 'F98710', 'F982400', 'F98305V', 'F98306', 'F95625', 'F95626', 'F95627', 'F95628', 'F969861', 'F9698710', 'F9698712', 'F980011', 'F983051', 'F983052', 'F952400', 'F952411', 'F98711', 'F98712', 'F98713', 'F98720', 'F98740', 'F98741', 'F98743', 'F98745', 'F98750', 'F98751', 'F98752', 'F98753', 'F98760', 'F98761', 'F98762', 'F98770', 'F98950', 'F98950BK', 'F98950D', 'F989998', 'F989999') LOG=impPP910.log FILE=export910_b4golive.dmp ROWS=Y FEEDBACK=10000 IGNORE=Y INDEXES=N
Coolio – see how I did not need to change the schema owner or tablespaces or anything (yet!!!). This will insert all of the data, but now I have to generate the indexes. Note that I get a '.' every time 10000 rows are imported into a table (that's FEEDBACK=10000) – nice and nerdy, Oracle. You need IGNORE=Y to stop it failing because the tables already exist (I truncated them, I did not drop them).
DDL
So, I use an INDEXFILE – run imp again with:
imp PP910/PASSWORD@instance
TABLES=('F980021', 'F95622', 'F95623', 'F95624', 'F95620', 'F95621', 'F952420', 'F95600', 'F98710', 'F982400', 'F98305V', 'F98306', 'F95625', 'F95626', 'F95627', 'F95628', 'F969861', 'F9698710', 'F9698712', 'F980011', 'F983051', 'F983052', 'F952400', 'F952411', 'F98711', 'F98712', 'F98713', 'F98720', 'F98740', 'F98741', 'F98743', 'F98745', 'F98750', 'F98751', 'F98752', 'F98753', 'F98760', 'F98761', 'F98762', 'F98770', 'F98950', 'F98950BK', 'F98950D', 'F989998', 'F989999') LOG=impPP910DDL.log FILE=export910_b4golive.dmp ROWS=N INDEXES=Y INDEXFILE=createIndexes.sql
Note that an INDEXFILE stops anything hitting the database – the DDL gets written to a file instead of being executed. The above example will write the "CREATE INDEX" DDL to a file called createIndexes.sql.
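For the record, the generated file looks roughly like this – imp writes the table DDL out as REM-commented lines and leaves the index DDL live. The index name, columns and tablespace below are made up for illustration, but note the PD910 owner and the NOLOGGING clause, which is what makes the search-and-replace trick work:

```sql
REM  CREATE TABLE "PD910"."F98710" (...) ;
CREATE INDEX "PD910"."F98710_0" ON "F98710" ("THOBNM", "THPID")
  TABLESPACE "PD910I" NOLOGGING ;
```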
I then use vi to change the owner and the tablespaces with:
vi global search and replace (a command I always forget)
:%s/PD910/PP910/g
And then I wang in some parallel action:
:%s/NOLOGGING/PARALLEL 4 NOLOGGING/g
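If you're scripting the refresh rather than editing by hand, sed does the same two substitutions in one hit (GNU sed shown; the file name is from the INDEXFILE step above):

```shell
# Same two substitutions as the vi commands, non-interactively:
#   PD910 -> PP910 fixes the owner and tablespace names;
#   every NOLOGGING gets PARALLEL 4 wanged in front of it.
sed -i -e 's/PD910/PP910/g' \
       -e 's/NOLOGGING/PARALLEL 4 NOLOGGING/g' createIndexes.sql
```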
Woot, these indexes are going to fly!
I then execute the script (something like `sqlplus PP910/PASSWORD@instance @createIndexes.sql`) and I have my indexes in the correct tablespaces in the correct schema.