Monday 9 February 2015

A long story for a couple of good tips

You can probably tell from my last post (not the most recent one on the blog) that I’m working on making a Unicode conversion go a lot faster.  There are heaps of ways of doing this; I’m lucky enough to be working on a large Oracle site with Enterprise Edition and heaps of CPUs, so I’m able to give the thing a good thrashing too.

So at the end of the day I need to write two different bits of code: one to convert all of the huge tables manually, and another to convert the rest of the tables with the standard OWTBLCONV.  That is the name of the procedure that Oracle gives you to do the conversions for you.  You can just look at it in SQL Developer and see what it does; it’s pretty cool.
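
If you’re stuck on a server without SQL Developer, you can pull the procedure source straight out of the data dictionary instead.  A minimal sketch, reusing the connection details from the conversion script further down and assuming the procedure lives under PRODDTA (which is how it’s called below):

sqlplus -s proddta/J@ck@ss@sp2jde <<'EOF'
set pagesize 0 linesize 200 trimspool on
-- dump the source of the conversion procedure
select text from all_source
where owner = 'PRODDTA'
and name = 'OWTBLCONV'
order by line;
exit
EOF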

Anyway, I’m in unix land and I need a script that will go through all of the tables with CHAR fields and convert them.  Okay, let’s get a list:

select table_name
from all_tables t1
where exists (select 1 from all_tab_columns t2
              where t2.data_type = 'CHAR'
              and t1.table_name = t2.table_name
              and t1.owner = t2.owner)
and t1.owner = 'PRODDTA';
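
To turn that into the flat file the scripts below chew through, you can spool it straight out of SQL*Plus.  A minimal sketch; tables.list is just the name the split command below expects:

sqlplus -s proddta/J@ck@ss@sp2jde <<'EOF'
set heading off feedback off pagesize 0 trimspool on
spool tables.list
select table_name from all_tables t1
where exists (select 1 from all_tab_columns t2
              where t2.data_type = 'CHAR'
              and t1.table_name = t2.table_name
              and t1.owner = t2.owner)
and t1.owner = 'PRODDTA';
spool off
exit
EOF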

Right, that was easy, now which are the big tables again…  Oh yeah, that’s easy:

select segment_name, sum(bytes)
from dba_segments
where owner = 'PRODDTA'
group by segment_name
order by 2 desc;
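
The same view also gives you the short list of monsters to pull out of the list and convert by hand.  A hedged sketch; the 5 GB cut-off is purely illustrative, pick whatever threshold suits your site:

sqlplus -s proddta/J@ck@ss@sp2jde <<'EOF'
set heading off feedback off pagesize 0
-- anything over the threshold gets the manual treatment further down
select segment_name
from dba_segments
where owner = 'PRODDTA'
and segment_type = 'TABLE'
group by segment_name
having sum(bytes) > 5 * 1024 * 1024 * 1024;
exit
EOF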

Right, we are getting somewhere.  Now there are roughly 4,000 tables in PRODDTA that need to be converted to Unicode, so I have a file in unix with 4,000 lines.  I want to make my script multi-threaded, so all I do is use the most awesomely awesome command – split:


split -l 500 tables.list parallel_table


So this gives me as many files as I need, each named starting with “parallel_table”, out of my list of tables “tables.list”.  Yes, it’s that easy:

-rw-r--r--    1 moirs    staff          4138 Feb 09 18:42 parallel_tableaa
-rw-r--r--    1 moirs    staff          4139 Feb 09 18:42 parallel_tableab
-rw-r--r--    1 moirs    staff          4079 Feb 09 18:42 parallel_tableac
-rw-r--r--    1 moirs    staff          3688 Feb 09 18:42 parallel_tablead
-rw-r--r--    1 moirs    staff          3619 Feb 09 18:42 parallel_tableae
-rw-r--r--    1 moirs    staff          3640 Feb 09 18:42 parallel_tableaf
-rw-r--r--    1 moirs    staff          3784 Feb 09 18:42 parallel_tableag
-rw-r--r--    1 moirs    staff          3189 Feb 09 18:42 parallel_tableah

That is so Rad!
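
If you want a quick sanity check before unleashing anything on the database, a line count over the split files does it:

# each file should hold 500 table names, except possibly the last one
wc -l parallel_table*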


I then have a script that I call:

#!/usr/bin/ksh
#mkdir ~/uniconvlogs
if [ $# -ne 1 ]
then
  echo "USAGE: $0 <input file containing list of tables>"
  exit 1
fi
fileinput=$1
if [ ! -f "$fileinput" ]
then
  echo "ERROR: File $1 does not exist"
  exit 1
fi
for tablename in `cat "$fileinput"`
do
  echo "processing $tablename"
  # build a one-off SQL*Plus script for this table, spooling to its own log
  echo "spool /home/moirs/uniconvlogs/$tablename.log" > ~/uniconvlogs/$tablename.sql
  echo "set echo on" >> ~/uniconvlogs/$tablename.sql
  echo "set feedback on" >> ~/uniconvlogs/$tablename.sql
  echo "set timing on" >> ~/uniconvlogs/$tablename.sql
  echo "set pagesize 8000" >> ~/uniconvlogs/$tablename.sql
  echo "set linesize 8000" >> ~/uniconvlogs/$tablename.sql
  echo "set trimspool on" >> ~/uniconvlogs/$tablename.sql
  echo "alter session force parallel DDL PARALLEL 8;" >> ~/uniconvlogs/$tablename.sql
  echo "alter session force parallel DML PARALLEL 8;" >> ~/uniconvlogs/$tablename.sql
  echo "alter session force parallel QUERY PARALLEL 8;" >> ~/uniconvlogs/$tablename.sql
  echo "select to_char(sysdate,'HH24:MI:SS'), to_char(sysdate,'YYYY MM DD') from dual;" >> ~/uniconvlogs/$tablename.sql
  echo "Call ProdDta.OWTBLCONV('PRODDTA','J@ck@ss','PRODDTA','PRODDTA','$tablename',0,0);" >> ~/uniconvlogs/$tablename.sql
  echo "spool off" >> ~/uniconvlogs/$tablename.sql
  echo "exit" >> ~/uniconvlogs/$tablename.sql

  # run the generated script for this table
  sqlplus proddta/J@ck@ss@sp2jde @/home/moirs/uniconvlogs/$tablename.sql
done

And a script that does the multi-threading:

#!/usr/bin/ksh
# launch one conversion stream per split file
for tablelist in ~/parallel_tablea*
do
  convertTables.ksh "$tablelist" &
done

So now I have a complete suite of rapid conversions with eight threads, one per split file.  This is blasting through the workload.
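
One caveat: that wrapper backgrounds the workers and returns straight away.  If you want the shell to hang around until every thread finishes (handy for timing the whole run), a small variant, assuming convertTables.ksh is on your PATH:

#!/usr/bin/ksh
# kick off one conversion stream per split file
for tablelist in ~/parallel_tablea*
do
  convertTables.ksh "$tablelist" &
done
# block until every background conversion completes
wait
echo "all conversion threads finished"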

I’ve already taken the LARGE tables out of the lists; I’m converting those manually with a script like this one for F0101:
-- use the new_val trick to build a dated spool filename
column filename new_val filename;
select './UnicodeConv/F0101_unicode_conversion_' || to_char(sysdate, 'yyyymmdd') || '.log' filename from dual;
spool &filename

set echo on
set feedback on
set timing on
set pagesize 8000
set linesize 8000
set trimspool on
select to_char(sysdate,'HH24:MI:SS') from dual;
select to_char(sysdate,'YYYY MM DD') from dual;


alter session force parallel DDL PARALLEL 8;
alter session force parallel DML PARALLEL 8;
alter session force parallel QUERY PARALLEL 8;

alter table proddta.F0101 rename to F0101_NONUNI;
alter table proddta.F0101_NONUNI drop constraint F0101_PK ;

--stupid, but the indexes keep their original names after the table rename,
--so they have to be dropped to free those names up for the new table
--(an index could be called mittens; it does not need to be named after the table)

DROP INDEX PRODDTA.F0101_0;
DROP INDEX PRODDTA.F0101_10;
DROP INDEX PRODDTA.F0101_11;
DROP INDEX PRODDTA.F0101_12;
DROP INDEX PRODDTA.F0101_13;
DROP INDEX PRODDTA.F0101_14;
DROP INDEX PRODDTA.F0101_15;
DROP INDEX PRODDTA.F0101_16;
DROP INDEX PRODDTA.F0101_2;
DROP INDEX PRODDTA.F0101_3;
DROP INDEX PRODDTA.F0101_4;
DROP INDEX PRODDTA.F0101_5;
DROP INDEX PRODDTA.F0101_6;
DROP INDEX PRODDTA.F0101_7;
DROP INDEX PRODDTA.F0101_8;
DROP INDEX PRODDTA.F0101_9;
DROP INDEX PRODDTA.F0101_CUST_IDX01;

CREATE TABLE "PRODDTA"."F0101"
( "ABAN8" NUMBER,
"ABALKY" NCHAR(20),
"ABTAX" NCHAR(20),
"ABALPH" NCHAR(40),
"ABDC" NCHAR(40),
"ABMCU" NCHAR(12),
"ABSIC" NCHAR(10),
"ABLNGP" NCHAR(2),
"ABAT1" NCHAR(3),
"ABCM" NCHAR(2),
"ABTAXC" NCHAR(1),
"ABAT2" NCHAR(1),
-- ...remaining columns elided here; every CHAR column becomes NCHAR...
"ABACTIN" NCHAR(1),
"ABREVRNG" NCHAR(5),
"ABSYNCS" NUMBER,
"ABPERRS" NUMBER,
"ABCAAD" NUMBER )
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT )
TABLESPACE "PRODDTAT"
PARALLEL 8 ;

select to_char(sysdate,'HH24:MI:SS') from dual;
select to_char(sysdate,'YYYY MM DD') from dual;


INSERT INTO PRODDTA.F0101 SELECT * FROM PRODDTA.F0101_NONUNI ;
commit ;

select to_char(sysdate,'HH24:MI:SS') from dual;
select to_char(sysdate,'YYYY MM DD') from dual;

spool off;
quit;
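
Note that the script stops after the insert, so the primary key and all 17 indexes we dropped still need to be recreated against the new table (and F0101_NONUNI dropped once you’re happy with the row counts).  A hedged sketch of the first one: in JDE the F0101 primary key is on ABAN8, but the PRODDTAI index tablespace is only a guess from the PRODDTAT naming above, and the remaining index definitions have to come from your own saved DDL:

sqlplus -s proddta/J@ck@ss@sp2jde <<'EOF'
-- recreate the unique index, then re-add the primary key constraint
-- (PRODDTAI is an assumed index tablespace; substitute your own)
CREATE UNIQUE INDEX PRODDTA.F0101_PK ON PRODDTA.F0101 (ABAN8)
TABLESPACE PRODDTAI PARALLEL 8;
ALTER TABLE PRODDTA.F0101 ADD CONSTRAINT F0101_PK
PRIMARY KEY (ABAN8) USING INDEX PRODDTA.F0101_PK;
-- ...repeat CREATE INDEX for F0101_0 through F0101_CUST_IDX01 from your DDL...
exit
EOF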
