Tuesday, 19 March 2013

Generic advice about data pump

I've done a lot of data pumping lately with upgrades and other projects.  It's a great utility, but there are always some teething problems / lessons.

Firstly, the estimate is generally fairly good, so if you are getting an estimate of 110GB and you are expecting 30GB - you probably have something wrong.
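If you just want the figure without running the whole export, data pump can do the maths on its own.  A minimal sketch, assuming the same connection details as the examples further down:

expdp system/PassW0rd@e1prod SCHEMAS=PRODDTA ESTIMATE_ONLY=YES

ESTIMATE_ONLY=YES sizes the job (using block counts by default) and writes nothing, so no DUMPFILE is needed.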

Secondly, the use of INCLUDE and EXCLUDE is great with JDE, as this is a poor man's parallelism (Standard Edition cannot parallelise impdp imports), so knowing your large tables allows you to easily perform quasi-parallelisation.  I quite often do F0911 separately (half the database) and then the rest of the tables.  I do this with an EXCLUDE in one par file and an INCLUDE in the other.
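On Windows that can be as simple as kicking off two sessions side by side.  A sketch, where f0911.exp is a made-up name for the second par file holding the INCLUDE side (shown at the end of this post):

start expdp system/PassW0rd@e1prod PARFILE=proddta.exp
start expdp system/PassW0rd@e1prod PARFILE=f0911.exp

Each start opens its own window, so both exports run at the same time into their own dump files.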

Thirdly, I generally use par files.  I like them because I can use them again - it's the developer in me that wants to use any code more than once.  It's also the old man in me that forgets things days after I've done them.

Fourthly - well, I'm sick of starting paragraphs with a [number]ly, so I'll just type.  I did try to write my data to a shared USB drive on another computer (out of my domain).  This proved to be impossible.  expdp seems to spawn a new process which is like a fork from the main Oracle process, therefore inheriting all of the permissions of the main executable.  This means that it cannot see the shares I've manually authenticated to in my current Windows session, and is therefore unable to natively write to the share.  Even if I create the database directory through EM and it prompts me for a username and password, I still cannot use that directory in data pump.  I know that I could start the DB with the OS user that has permissions to the share, but it's another domain - so that ain't gunna work...  That was a time-consuming and painful little lesson - thanks, data pump.
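If you hit the same wall, the path of least resistance is a directory object on a local disk that the Oracle service account can definitely see, then copy the dump file across afterwards.  A sketch, run as a DBA - LOCAL_DUMP_DIR and the path are made-up names:

-- hypothetical local directory; the Windows service account running Oracle must be able to write here
CREATE OR REPLACE DIRECTORY LOCAL_DUMP_DIR AS 'E:\dpdump';
GRANT READ, WRITE ON DIRECTORY LOCAL_DUMP_DIR TO system;

Then DIRECTORY=LOCAL_DUMP_DIR in the par file points the dump at that path.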

The main commands are expdp and impdp.

Quite simply, I run the command as a non-SYSDBA user with a par file:

expdp system/PassW0rd@e1prod PARFILE=proddta.exp

Nice, and the contents of proddta.exp:

DIRECTORY=DATA_PUMP_DIR
DUMPFILE=PRODDTA.DMP
JOB_NAME=PRODDTATDUMP
SCHEMAS=PRODDTA
LOGFILE=proddtadump.log
EXCLUDE=TABLE:"IN ('F0911', 'F43199')"
COMPRESSION=METADATA_ONLY
STATUS=60

So that is saying: grab all of the tables in PRODDTA excluding F0911 and F43199, and bang them into dump file PRODDTA.DMP.  From a DB perspective, use a job called PRODDTATDUMP and log the operation to proddtadump.log in DATA_PUMP_DIR.

It's that simple.
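The import side mirrors it.  A sketch, assuming the dump file has been copied to the target's DATA_PUMP_DIR and that e1test is a made-up TNS alias:

impdp system/PassW0rd@e1test PARFILE=proddta.imp

And a proddta.imp to match:

# sketch import par file - names are illustrative
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=PRODDTA.DMP
JOB_NAME=PRODDTAIMP
SCHEMAS=PRODDTA
LOGFILE=proddtaimp.log
TABLE_EXISTS_ACTION=REPLACE

TABLE_EXISTS_ACTION=REPLACE drops and recreates any table that already exists in the target - handy for refreshes, but make sure that's what you want.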

Have another file that's essentially the same, except it uses INCLUDE, not EXCLUDE - simple!  Something like this:
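A sketch of that second file (f0911.exp here - the dump file, job and log names are made up):

# sketch par file for the big-table half of the export
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=F0911.DMP
JOB_NAME=F0911DUMP
SCHEMAS=PRODDTA
LOGFILE=f0911dump.log
INCLUDE=TABLE:"IN ('F0911', 'F43199')"
COMPRESSION=METADATA_ONLY
STATUS=60

Between the two files, every table in PRODDTA is covered exactly once.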
