About Me

I work for Fusion5 Australia. Connect with me on LinkedIn. I'm raising money for a good cause at the moment: please donate to the Leukaemia Foundation.

Friday, 30 June 2017

Forget missing indexes, have you thought about UNUSABLE indexes?

So I did all that good work in https://shannonscncjdeblog.blogspot.com.au/2017/06/jde-slow-missing-indexes-find-it-fast.html, but I found that a bunch of my jobs were still not using indexes.  When I looked into this some more, I saw that the indexes were UNUSABLE!

Wow, deeper into the rabbit hole.

Use the following SQL to find indexes that are not valid.  (Be aware that partitioned indexes report 'N/A' at this level even when healthy; their real status lives in DBA_IND_PARTITIONS.)

select * from all_indexes where status <> 'VALID';

Then use the following to generate the statements to fix them.

select 'ALTER INDEX CRPDTA.' || index_name || ' REBUILD TABLESPACE CRPINDEX2;' FROM ALL_INDEXES where status = 'UNUSABLE';

And run the results, easy!

ALTER INDEX CRPDTA.F07351T_PK REBUILD TABLESPACE CRPINDEX2;
ALTER INDEX CRPDTA.F07351_PK REBUILD TABLESPACE CRPINDEX2;
ALTER INDEX CRPDTA.F07350_PK REBUILD TABLESPACE CRPINDEX2;
ALTER INDEX CRPDTA.F07315_PK REBUILD TABLESPACE CRPINDEX2;
ALTER INDEX CRPDTA.F073111_PK REBUILD TABLESPACE CRPINDEX2;
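The generate-and-run step lends itself to scripting.  A minimal sketch of the generation half (the index names are hard-coded here for illustration; in a real run you would spool them from the ALL_INDEXES query above):

```shell
# Sketch: turn a list of unusable index names into rebuild DDL.
# The CRPDTA owner and CRPINDEX2 tablespace match the examples above;
# in practice the names come from the ALL_INDEXES query.
rebuild_ddl() {
  for idx in "$@"; do
    printf 'ALTER INDEX CRPDTA.%s REBUILD TABLESPACE CRPINDEX2;\n' "$idx"
  done
}

rebuild_ddl F07351T_PK F07351_PK F07350_PK
```

From there it is one more step to spool the output to a file and run it through sqlplus.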

AWR your UBE: performance investigation with JD Edwards (JDE), the Oracle database and UBEs

This is pretty cool, if I do say so myself.

Have you ever wanted a little more Oracle performance information out of your UBEs?  I know logging is good, but AWR is better for performance work.

Here are some basics for you (but this could be taken to a whole other level!).

Firstly, the basics of AWR:

Create a snapshot, run your workload, create another snapshot, and then create a report based upon the first and second snapshots – RAD!

Create a snapshot:

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

Run the UBE:

runube JDE JDE PY900 *ALL $1 $2 QBATCH Interactive Hold Save

Create another snapshot:

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

Let’s put that into a simple Korn shell script:

#!/bin/ksh
# Snapshot, run the UBE under time, snapshot again.
if [ $# -ne 2 ]
  then
   echo "USAGE: $0 REPORT VERSION"
   exit 1
fi
sqlplus JDE/JDE@JDETEST <<EOF
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
select max(snap_id) from dba_hist_snapshot;
quit;
EOF
time runube JDE JDE PY900 *ALL $1 $2 QBATCH Interactive Hold Save
sqlplus JDE/JDE@JDETEST <<EOF
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
select max(snap_id) from dba_hist_snapshot;
quit;
EOF


I like to start simple, then get complex.

This is cool.  It’ll take a snapshot and tell you the ID, run the job and tell you how long it took to run, then snapshot again and tell you the next ID – coolio.

But I want more, and neater.

So now:

#!/bin/ksh
# As above, but with greppable START##/FINISH## markers in the output.
if [ $# -ne 2 ]
  then
   echo "USAGE: $0 REPORT VERSION"
   exit 1
fi
sqlplus JDE/JDE@JDETEST <<EOF
set feedback off
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
select 'START##' || max(snap_id) from dba_hist_snapshot;
quit;
EOF
echo $1_$2 RUNNING##
time runube JDE JDE PY900 *ALL $1 $2 QBATCH Interactive Hold Save
sqlplus JDE/JDE@JDETEST <<EOF
set feedback off
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
select 'FINISH##' || max(snap_id) from dba_hist_snapshot;
quit;
EOF

This is a little easier to parse.  You call it like this:

./runme.ksh R30812 F4109 >> R30812_F4109.out 2>&1

and the output looks like:

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jun 30 14:16:17 2017

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> SQL> SQL>   2    3
'START##'||MAX(SNAP_ID)
-----------------------------------------------
START##146
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
R047001A_S0001 RUNNING##
Using User/Password information on the command line has been deprecated.  Although it will continue to work in this release, it will no longer be available as an option in a future release.  Please switch to using one of the -p -f or -d options instead.

New Usage: runube       <[-p|-P] [-f|-F|-d|-D FilePath] [user password]>
                        <Environment>
                        <Role>
                        <ReportName>
                        <VersionName>
                        <JobQueue>
                        <"Interactive"|"Batch">
                        <"Print"|"Hold">
                        <"Save"|"Delete">
                        [Printer]
        -p|-P                   Prompts for user/password information

        -f|-f FilePath          Reads user/password information from the plain text file that is specified in FilePath.

        -d|-D FilePath          Reads user/password information from the plain text file, and indicates the automatic removal of the file.


real    0m19.343s
user    0m8.120s
sys     0m0.242s

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jun 30 14:16:38 2017

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> SQL> SQL>   2    3
'FINISH##'||MAX(SNAP_ID)
------------------------------------------------
FINISH##147
SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options


You just need to awk/grep through the output for ‘##’ (and grep -v ‘FINISH##’ to isolate the start snap), then feed those snap IDs into the next script.
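A minimal sketch of that parse, with a few sample log lines embedded so the snippet stands alone (in practice you'd read the .out file your redirect created; anchoring on '^START##' skips the quoted SQL header line):

```shell
# Sketch: pull the START/FINISH snap ids out of a captured run log.
# The embedded sample stands in for the real R30812_F4109.out file.
sample='START##146
R30812_F4109 RUNNING##
FINISH##147'

# ^-anchored greps skip the SQL*Plus header line, which also contains ##.
from_snap=$(printf '%s\n' "$sample" | grep '^START##' | awk -F'##' '{print $2}')
to_snap=$(printf '%s\n' "$sample" | grep '^FINISH##' | awk -F'##' '{print $2}')

echo "from=$from_snap to=$to_snap"
```

Those two numbers are exactly what the AWR report script below wants as its arguments.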

Right, but I want to parameterise this – I don’t want to key in the snap IDs by hand.

SELECT output
FROM TABLE(dbms_workload_repository.awr_report_text(
    (select dbid from v$database),
    (select instance_number from v$instance),
    112, 113));

So then I can automate the output too!

./runme.ksh R0010P XJDE0001 >> R0010P_XJDE0001.out 2>&1

The following bad boy will create the AWR report for you.

if [ $# -ne 2 ]
   then
     echo "USAGE: $0 FROMSNAP TOSNAP"
     exit 1
fi

sqlplus JDE/JDE@JDETEST <<EOF
SELECT
     output
  FROM    TABLE(dbms_workload_repository.awr_report_text ((select dbid from v\$database),(select instance_number from v\$instance),$1,$2 ));
quit;
EOF

Putting it all together, you can now grep through the output of your script.

Extra for experts

If you wanted to be totally RAD, you could actually create a script called runube (but make it a Korn shell script).  Basically it would do your snapshot magic and then call the actual runube.  You could put all of the AWR magic in there too, so that you’d have AWRs for all of your reports.  Note that it might be a bit messy because of other transactions, but you’d get the hint about the performance and you’d know which snaps to use for which reports.
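A hedged sketch of that wrapper idea; here a dummy command and a timestamp stand in for the real runube and the sqlplus snapshot calls, so this shows the shape only:

```shell
# Sketch of a wrapper "runube": emit a start marker, run the real command,
# emit a finish marker.  date +%s stands in for the AWR snap id here, and
# echo stands in for the real runube binary.
run_with_markers() {
  echo "START##$(date +%s)"
  "$@"
  rc=$?
  echo "FINISH##$(date +%s)"
  return $rc
}

run_with_markers echo "pretend this is runube"
```

In the real thing, the marker lines would be replaced by the sqlplus snapshot heredocs from the scripts above, and `"$@"` would exec the genuine runube with all its arguments passed through.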

Thursday, 29 June 2017

JDE slow, missing indexes? find it fast… fix it fast!

Here is a basic SQL statement that will tell you if you are missing any indexes (PK or other) in Oracle, based upon your current central objects.

Note that there is a difference in the naming of the unique index (_PK), hence the large union.

select trim(tpobnm) || '_' || tpinid as jdeindex
from py900.f98712
where tpuniq <> 1
and not exists
  (select 1 from all_indexes
   where owner = 'CRPDTA'
   and trim(tpobnm) || '_' || tpinid = index_name)
and exists
  (select 1 from all_tables
   where owner = 'CRPDTA'
   and table_name = trim(tpobnm))
union
select trim(tpobnm) || '_PK' as jdeindex
from py900.f98712
where tpuniq = 1
and not exists
  (select 1 from all_indexes
   where owner = 'CRPDTA'
   and trim(tpobnm) || '_PK' = index_name)
and exists
  (select 1 from all_tables
   where owner = 'CRPDTA'
   and table_name = trim(tpobnm))
order by 1 desc;

The results will tell you quickly what you are missing.  This is a nice quick sanity check.

I can admit that this works…  For me, I see the following in the results:

F00151_PK
F0006_7
F0006_5
F0006_4
F0006_2
F00021_PK

So then I look in SQL Developer and see

image

Then I look in TDA and find:

image

Nice one SQL – missing indexes.

I grab the definitions and create them.  Note that I’m using compression too.


CREATE INDEX "CRPDTA"."F0006_2" ON "CRPDTA"."F0006" ("MCCO")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

CREATE INDEX "CRPDTA"."F0006_4" ON "CRPDTA"."F0006" ("MCSTYL", "MCCO")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

CREATE INDEX "CRPDTA"."F0006_5" ON "CRPDTA"."F0006" ("MCSTYL", "MCFMOD")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

CREATE INDEX "CRPDTA"."F0006_6" ON "CRPDTA"."F0006" ("MCAN8" DESC)
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

CREATE INDEX "CRPDTA"."F0006_7" ON "CRPDTA"."F0006" ("MCCLNU", "MCPCTN", "MCDOCO")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "CRPINDEX2"
PARALLEL COMPRESS 1 ;

And run the SQL again:

No more F0006 – you’d think that it works!

Why did I do this?

Take a look at my IO!  12GB of physical reads a second…  WHAT!  I needed to track this down fast.  I managed to find the problematic SQL and then also noticed that an index was missing on the F42119…  Okay, not a problem.  But I wanted to make sure that there were no other missing indexes.

image

Monday, 26 June 2017

Building an ODA for JDE – import vmtemplate

I’ve been back on the tools “big time”, building a JD Edwards environment on an ODA – a new FLASH based X6-2HA.  This is pretty exciting stuff (for a nerd like me).

Something interesting when installing VMs (which has been a little painful):

you need to run

[root@sodax6-1 testing2]# oakcli import vmtemplate OL7U3 -assembly /OVS/Repositories/testing2/OVM_OL7U3_x86_64_PVHVM.ova -repo testing2 -node 0

Imported VM Template

This is being run from ODA_BASE, but you are specifying the location on DOM0 – WHAT???  Stupid hey?

You are on ODA_BASE and you do df -k:

[root@sodax6-1 testing2]# df -k
Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/xvda2             57191708   12601536   41684932  24% /
tmpfs                 132203976    1246296  130957680   1% /dev/shm
/dev/xvda1               471012      35731     410961   8% /boot
/dev/xvdb1             96119564   33014300   58222576  37% /u01
/dev/asm/acfsvol-49    52428800     194884   52233916   1% /cloudfs
/dev/asm/testing-216 1048576000  298412232  750163768  29% /u01/app/sharedrepo/testing
/dev/asm/testing2-216
                      4194304000 1454789000 2739515000  35% /u01/app/sharedrepo/testing2
/dev/asm/datastore-344
                        32505856   17050960   15454896  53% /u01/app/oracle/oradata/datastore
/dev/asm/datastore-216
                      4393533440 4240976672  152556768  97% /u02/app/oracle/oradata/datastore
/dev/asm/datastore-49
                       530579456  326606008  203973448  62% /u01/app/oracle/fast_recovery_area/datastore

but, from dom0

[root@sodax6-1dom0 testing2]# df -k
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/sda3             19840924   3543148  15273636  19% /
/dev/sda2            428805744 192371988 214300204  48% /OVS
/dev/sda1               497829     44970    427157  10% /boot
tmpfs                  1233052         0   1233052   0% /dev/shm
none                   1233052       112   1232940   1% /var/lib/xenstored
192.168.18.21:/u01/app/sharedrepo/testing
                      1048576000 298412224 750163776  29% /OVS/Repositories/testing
192.168.18.21:/u01/app/sharedrepo/testing2
                      4194304000 1454788992 2739515008  35% /OVS/Repositories/testing2

Make sure that you reference your template as though it’s “hung” off DOM0, not ODA_BASE.

See below; this is stealing a copy of the “JDE in a box” system disk from the JD Edwards templates.  Well, it’s not really stealing, but you can get an older version of the OS this way.  Note also that it needs to be a compressed tarball to work.

[root@sodax6-1 testing2]# oakcli import vmtemplate EL58 -files /OVS/Repositories/testing2/e1_X86_sys_914.tgz -repo testing2 -node 0

A special note:

I used AWS S3 buckets as a temp location (instead of a VPN, because of speed and configuration problems with the VPN).  I was able to get 50MB/sec download from the S3 bucket into the Oracle data centre in Sydney – wow!  That is very impressive.

Friday, 23 June 2017

install AWS command line on linux for data movement

I need to move a lot of data quickly, so I’m going to use an S3 bucket and install the AWS CLI on my Linux hosts to be able to put and get.
Let’s begin the dance…
To use the AWS CLI, you need to install it with pip.
To install pip https://pip.pypa.io//en/latest/installing/#get-pip-py-options you need to wget (or at least copy in) get-pip.py for Python > 2.6.
Remember that you (more than likely) need to set the proxy on your server:
export http_proxy=http://moirs:Password\!@proxy:8080
Then test it
[root@ronin0 ~]# wget www.google.com
--2017-06-23 16:05:57--  http://www.google.com/
Resolving proxy... 10.241.10.79
Connecting to proxy|10.241.10.79|:8080... connected.
Proxy request sent, awaiting response... 302 Found
Location:
http://www.google.co.nz/?gfe_rd=cr&ei=pZNMWeKlFqHM8gfo7LDQCg [following]
--2017-06-23 16:05:57-- 
http://www.google.co.nz/?gfe_rd=cr&ei=pZNMWeKlFqHM8gfo7LDQCg
Connecting to proxy|10.241.10.79|:8080... connected.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `index.html.1'

    [ <=>                                                                                                                        ] 13,313      --.-K/s   in 0.001s
2017-06-23 16:05:57 (15.6 MB/s) - `index.html.1' saved [13313]
Tidy, proxy is good

Try the following to download
wget https://bootstrap.pypa.io/get-pip.py

Now you need to run the get-pip.py script that you’ve copied in
python get-pip.py --proxy="http://moirs:Password\!@proxy:8080"
DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6
Collecting pip
/tmp/tmpHeNyVB/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
/tmp/tmpHeNyVB/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
  Downloading pip-9.0.1-py2.py3-none-any.whl (1.3MB)
    100% |################################| 1.3MB 600kB/s
Collecting setuptools
  Downloading setuptools-36.0.1-py2.py3-none-any.whl (476kB)
    100% |################################| 481kB 1.4MB/s
Collecting wheel
  Downloading wheel-0.29.0-py2.py3-none-any.whl (66kB)
    100% |################################| 71kB 1.3MB/s
Collecting argparse; python_version == "2.6" (from wheel)
  Downloading argparse-1.4.0-py2.py3-none-any.whl
Installing collected packages: pip, setuptools, argparse, wheel
  Found existing installation: argparse 1.2.1
    Uninstalling argparse-1.2.1:
      Successfully uninstalled argparse-1.2.1
Successfully installed argparse-1.4.0 pip-9.0.1 setuptools-36.0.1 wheel-0.29.0
/tmp/tmpHeNyVB/pip.zip/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can
Great, now I can install aws-cli
[root@ronin0 ~]# pip --proxy http://moirs:Password\!@proxy:8080 install awscli
DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6
Collecting awscli
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the SNI (Subject Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/security.html#snimissingwarning.
  SNIMissingWarning
/usr/lib/python2.6/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see
https://urllib3.readthedocs.io/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
  Downloading awscli-1.11.111-py2.py3-none-any.whl (1.2MB)
    100% |################################| 1.2MB 585kB/s
Collecting botocore==1.5.74 (from awscli)
  Downloading botocore-1.5.74-py2.py3-none-any.whl (3.5MB)
    100% |################################| 3.5MB 282kB/s
Collecting rsa<=3.5.0,>=3.1.2 (from awscli)
  Downloading rsa-3.4.2-py2.py3-none-any.whl (46kB)
    100% |################################| 51kB 5.1MB/s
Collecting s3transfer<0.2.0,>=0.1.9 (from awscli)
  Downloading s3transfer-0.1.10-py2.py3-none-any.whl (54kB)
    100% |################################| 61kB 82kB/s
Requirement already satisfied: argparse>=1.1; python_version == "2.6" in /usr/lib/python2.6/site-packages (from awscli)
Collecting docutils>=0.10 (from awscli)
  Downloading docutils-0.13.1-py2-none-any.whl (537kB)
    100% |################################| 542kB 1.3MB/s
Collecting colorama<=0.3.7,>=0.2.5 (from awscli)
  Downloading colorama-0.3.7-py2.py3-none-any.whl
Collecting PyYAML<=3.12,>=3.10 (from awscli)
  Downloading PyYAML-3.12.tar.gz (253kB)
    100% |################################| 256kB 1.9MB/s
Collecting simplejson==3.3.0; python_version == "2.6" (from botocore==1.5.74->awscli)
  Downloading simplejson-3.3.0.tar.gz (67kB)
    100% |################################| 71kB 4.0MB/s
Collecting ordereddict==1.1; python_version == "2.6" (from botocore==1.5.74->awscli)
  Downloading ordereddict-1.1.tar.gz
Collecting python-dateutil<3.0.0,>=2.1 (from botocore==1.5.74->awscli)
  Downloading python_dateutil-2.6.0-py2.py3-none-any.whl (194kB)
    100% |################################| 194kB 2.3MB/s
Collecting jmespath<1.0.0,>=0.7.1 (from botocore==1.5.74->awscli)
  Downloading jmespath-0.9.3-py2.py3-none-any.whl
Collecting pyasn1>=0.1.3 (from rsa<=3.5.0,>=3.1.2->awscli)
  Downloading pyasn1-0.2.3-py2.py3-none-any.whl (53kB)
    100% |################################| 61kB 4.8MB/s
Collecting futures<4.0.0,>=2.2.0; python_version == "2.6" or python_version == "2.7" (from s3transfer<0.2.0,>=0.1.9->awscli)
  Downloading futures-3.1.1-py2-none-any.whl
Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore==1.5.74->awscli)
  Downloading six-1.10.0-py2.py3-none-any.whl
Building wheels for collected packages: PyYAML, simplejson, ordereddict
  Running setup.py bdist_wheel for PyYAML ... done
  Stored in directory: /root/.cache/pip/wheels/2c/f7/79/13f3a12cd723892437c0cfbde1230ab4d82947ff7b3839a4fc
  Running setup.py bdist_wheel for simplejson ... done
  Stored in directory: /root/.cache/pip/wheels/5a/a5/b9/b0c89f0c5c40e2090601173e9b49091d41227c6377020e4e68
  Running setup.py bdist_wheel for ordereddict ... done
  Stored in directory: /root/.cache/pip/wheels/cf/2c/b5/a1bfd8848f7861c1588f1a2dfe88c11cf3ab5073ab7af08bc9
Successfully built PyYAML simplejson ordereddict
Installing collected packages: simplejson, ordereddict, six, python-dateutil, jmespath, docutils, botocore, pyasn1, rsa, futures, s3transfer, colorama, PyYAML, awscli
  Found existing installation: simplejson 2.0.9
    Uninstalling simplejson-2.0.9:
      Successfully uninstalled simplejson-2.0.9
  Found existing installation: ordereddict 1.2
    DEPRECATION: Uninstalling a distutils installed project (ordereddict) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
    Uninstalling ordereddict-1.2:
      Successfully uninstalled ordereddict-1.2
Successfully installed PyYAML-3.12 awscli-1.11.111 botocore-1.5.74 colorama-0.3.7 docutils-0.13.1 futures-3.1.1 jmespath-0.9.3 ordereddict-1.1 pyasn1-0.2.3 python-dateutil-2.6.0 rsa-3.4.2 s3transfer-0.1.10 simplejson-3.3.0 six-1.10.

Now we can run it – but remember the https_proxy environment variable too:
export https_proxy=http://moirs:Password\!@proxy:8080
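Both variables matter: wget and pip honour http_proxy, while the AWS CLI’s HTTPS endpoints need https_proxy.  A tiny sketch (the user:password here is a placeholder; single quotes save escaping the !):

```shell
# Sketch: set both proxy variables before running pip or the aws cli.
# 'user:Secret!' is a placeholder; the single quotes mean the ! needs
# no backslash escaping.
export http_proxy='http://user:Secret!@proxy:8080'
export https_proxy="$http_proxy"
```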
aws configure
AWS Access Key ID [None]: GFDGHTRHBT
AWS Secret Access Key [None]: GFDHGF
Default region name [None]: ap-southeast-2
Default output format [None]:
Note that you get your secret key and access ID from the AWS console for your username.
You are cooking with gas…
aws s3 ls

Thursday, 22 June 2017

Oracle index compression and JDE

Some indexes compress well and some do not – that is my position on index compression, which is about the same as my position on table compression.

There are two types of index compression:

  • Creating an Index Using Prefix Compression
    Creating an index using prefix compression (also known as key compression) eliminates repeated occurrences of key column prefix values. Prefix compression is most useful for non-unique indexes with a large number of duplicates on the leading columns.
  • Creating an Index Using Advanced Index Compression (only available in 12c)
    Creating an index using advanced index compression reduces the size of all supported unique and non-unique indexes. Advanced index compression improves the compression ratios significantly while still providing efficient access to the indexes. Therefore, advanced index compression works well on all supported indexes, including those indexes that are not good candidates for prefix compression.

Take a look at the F0911 data below:

select t1.segment_name, t1.owner, t1.segment_type, t1.tablespace_name, t1.bytes, t2.COMPRESSION
from dba_segments t1, all_indexes t2
where t1.owner in ('TESTDTA', 'ORADTA', 'CRPDTA')
and t2.index_name = t1.segment_name
and t2.owner = t1.owner
and t1.segment_name like 'F0911%'
order by t1.segment_name, t1.owner;

Provides:

Note that in my example CRPDTA and TESTDTA are both the same data and compressed.  ORADTA is an older copy and not compressed.

SEGMENT_NAME     OWNER        SEGMENT_TYPE       TABLESPACE_NAME    BYTES COMPRESS

F0911_10    CRPDTA    INDEX    CRPDTAI    9964158976    ENABLED
F0911_10    ORADTA    INDEX    ORADTAI    34437660672    DISABLED
F0911_10    TESTDTA    INDEX    SSDMAXI    9963700224    ENABLED

F0911_11    CRPDTA    INDEX    CRPDTAI    4324261888    ENABLED
F0911_11    ORADTA    INDEX    ORADTAI    15610150912    DISABLED
F0911_11    TESTDTA    INDEX    SSDMAXI    4312596480    ENABLED

F0911_12    CRPDTA    INDEX    CRPDTAI    4467720192    ENABLED
F0911_12    ORADTA    INDEX    ORADTAI    13323927552    DISABLED
F0911_12    TESTDTA    INDEX    SSDMAXI    4467130368    ENABLED

F0911_13    CRPDTA    INDEX    CRPDTAI    4342087680    ENABLED
F0911_13    ORADTA    INDEX    ORADTAI    13638959104    DISABLED
F0911_13    TESTDTA    INDEX    SSDMAXI    4298702848    ENABLED

F0911_15    CRPDTA    INDEX    CRPDTAI    17101357056    ENABLED
F0911_15    ORADTA    INDEX    ORADTAI    14177140736    DISABLED
F0911_15    TESTDTA    INDEX    SSDMAXI    17108828160    ENABLED


List of all F0911 indexes is below:

image


Interesting hey – some are really good and some are pretty junk at compression.

_10 shrinks from 34.4GB down to under 10GB (about 3.5× smaller) and _11 from 15.6GB to 4.3GB – but look at _15: the compressed version is actually larger than the uncompressed one (that is a little bit of a problem with my dodgy data).
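Those ratios fall straight out of the BYTES column above; a quick sanity check with awk, using the F0911_10 and F0911_15 numbers:

```shell
# Compression ratio for F0911_10: uncompressed (ORADTA) vs compressed (CRPDTA).
awk 'BEGIN { printf "F0911_10 ratio: %.1fx\n", 34437660672 / 9964158976 }'
# F0911_15 goes the other way: the "compressed" copy is bigger than the
# uncompressed one, so the ratio drops below 1.
awk 'BEGIN { printf "F0911_15 ratio: %.1fx\n", 14177140736 / 17101357056 }'
```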

Why is this so?

Looking at the definitions of the indexes:

F0911_10 = GLPOST, GLAID, GLLT, GLCTRY

This is made up of the following columns

image

CRPDTA    F0911_10    CRPDTA    F0911    GLPOST    1    2    1    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLAID    2    16    8    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLLT    3    4    2    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLCTRY    4    22    0    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLFY    5    22    0    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLPN    6    22    0    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLSBL    7    16    8    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLSBLT    8    2    1    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLDGJ    9    22    0    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLASID    10    50    25    ASC
CRPDTA    F0911_10    CRPDTA    F0911    GLBRE    11    2    1    ASC

I feel that this compresses well because the leading (prefix) columns – GLPOST, GLAID, GLLT – contain lots of repeated values, which is exactly what prefix compression eliminates…

Index F0911_15 is rubbish:

CRPDTA    F0911_15    CRPDTA    F0911    GLDCT    1    4    2    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLDOC    2    22    0    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLKCO    3    10    5    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLDGJ    4    22    0    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLLT    5    4    2    ASC
CRPDTA    F0911_15    CRPDTA    F0911    GLEXTL    6    4    2    ASC
CRPDTA    F0911_15    CRPDTA    F0911    SYS_NC00142$    7    34    0    DESC

So, because I’m using standard prefix compression and the leading columns of index _15 (document type and document number) are close to unique, the database cannot find many repeated key prefixes to eliminate, which means it gets next to no benefit out of the compression.  Okay, that is good to know.

Tuesday, 6 June 2017

Integration options for JD Edwards, is there one size fits all?

No there is not.

…  Should I end my post there?  Probably not…

I think that you should begin to consider a single solution for your integrations, as it’s going to become more and more popular.  If SaaS has anything to do with your digital journey – which it does, then you need to start considering orchestration.

Orchestration might not be the correct term; it might be integration enablement, or at least an integration funnel.  What I mean here is that all systems have mechanisms for integration (JDE has heaps: AIS, BSSV, UBE, XMLCallObject, COM… the list goes on), but they are a little proprietary by nature.  Yes, JDE will talk REST, but you have to form the document exactly as JD Edwards requires.  It’s the same for web services – it takes some work.  Without an integration solution, this message transformation must happen at one end of the integration, i.e.

  • Modify JDE to take a generic “RESTful” payload and then use logic in JDE to transform this into something that JDE can consume / produce and reply to.
  • Modify the other “point” system (as this would be point to point) to talk very specifically, as JDE expects, and have JDE reply natively…  Then interpret that native talk for your other point system.

image

In the instance above I’m going to modify JD Edwards.  I’ll write a basic screen with a large text box that someone can call and plonk a JSON document into.  My code will rip this up, act upon it, and then write the results into another text box that the calling system can read.  So we’ve opened up JD Edwards to talk fairly generically, but you must communicate with JDE in a fairly standard format (without needing to know all of the controls etc. in AIS).  Or, if you don’t want to make any mods to the WMS, then you need to write a BSSV or some generic code to listen for the WMS message and then act upon it.

So above we can see that we are writing quite a bit of code, in potentially two solutions, to get an integration going…  You’ll notice that if you had a bunch of skills in the WMS, you might make it do the heavy lifting and keep the code in JDE light touch.

What if there is then another system?

image

Now it starts to get complex.  We are trying to use JDE as the “funnel” and write the logic in JDE for all of the systems, but that is a lot of JD Edwards code, and JDE might not be too good at ripping apart XML and JSON.  This is where orchestration or integration comes into play.

What are your options?  Again, the Gartner Magic Quadrant is going to help you create a shortlist:

image

Jitterbit

Oracle

Dell Boomi

Mulesoft

BizTalk

… and more

I’ve applied some logic to this; these are the solutions that I want to speak about.

What are these going to allow you to do?

You are going to be able to do your integration-intensive work in a platform / environment that is designed to integrate.  You’ll be able to point and click integration items, apply translations, and speak native web services and JSON.  You will be able to do native flat file, read CSV, and talk to databases using JDBC – nice!!  All this within your integration solution – nicer.

image

This means that all of your logic for integrations is in one place.  All of your monitoring and maintenance is in one place.  Your integration SDLC should be more agile than your ERP SDLC, and this can also be managed in a single solution.  Are you ready to swap out any component in your enterprise?  Yes.

This is great – you are not modifying ERP code and WMS code; the integration suite is talking to the default integration points of each solution and doing the mapping and coordination between the two systems.  You can keep the ERP, WMS and CRM standard and do the heavy lifting in integration.

So I do mention certain products, but I think that they’ll all get the job done.  Is one better than the others?  Probably at some things, and not at others.

As a client you need to decide on your price point and tech bias.  If you are a Microsoft shop and want Azure only, BizTalk in Azure is for you.

If you are an Oracle shop, then perhaps Oracle’s Integration Cloud Service is for you: https://cloud.oracle.com/integration

I have a bunch of clients very happy using Jitterbit, which is a great solution for complex requirements.

What if you like to Develop?

My personal favourite is to use some generic “push|pull” cloud based solutions to solve this problem – why?  I have some amazing developers that can get this done easily.  I have the power of AWS and Lambda to communicate securely and quickly, and only pay for the instructions that I run.  I can have subscribers and queues that I configure and own.  Infinitely extensible, highly available and disaster recoverable.  Sure, there is some dev – but I’m leaning on cloud based constructs that can span availability zones and regions if necessary.

I’m a little cloud agnostic, so I’d also look into Google pub/sub https://cloud.google.com/pubsub/docs/overview, which also has amazing extensibility and flexibility in what you can do and how you can do it.  Building a complete integration solution is NOT as hard as you may think.  If you take integration down to the lowest common denominator, pub/sub – push/pull – is all that you need.

What is the secret sauce?

Cloud…  You need to set up your integrations like they were SaaS.  Go through the learning and the pain now and reap the benefits when EVERY piece of software that your organisation uses is SaaS.  Establish your integration framework as if it were “service” based – microservice if you like.  Then you can expose it not only to an integration, but also to a mobile application and an active web page.  If you are exposing a consistent end point, it can be modified behind the scenes without changing the interface to all of the external systems.

INTaaS

Of course!  I think that this is logical if you have a good partner (like Fusion5) and you don’t really want to make all of the technical decisions.  You can just tell your service provider what needs to be connected and they do it.  They manage it, they maintain it.  They provide workflow and a single source of truth for integrations – this is a good model.  It allows you to get back to adding value to the decision making process, knowing that the integrations are just working.

Bullet points to help you make your decision

  • cloud based integration solution
  • ensure you get monitoring and workflow if you want to own the solution and make all the technical decisions
  • ensure that your integration provider will give you this if you do not want to own it
  • everything will be SaaS / service eventually, get ready.
  • reuse your logic, think microservice
  • RTE is an awesome way to get information out of JDE into a queueing service…  Use it!
  • AIS is a must for lightweight integration into JD Edwards – it’s not just for mobile apps
  • Weigh up the cost of technical debt (writing ERP code / WMS code to support an integration) vs. using standard integration points and an orchestration / integration solution…  You might find quite quickly that the subscription costs are less than the technical debt you’ll incur.

Good representation of the digital journey


I’ve been to a number of digital workshops, and it’s been hard to define what an individual digital journey is.  It’s a dilemma, how to articulate “going digital” or a digital transformation, as it’s always a very subjective journey.  I find that trying to put some structure around the definition and the journey that is relevant for certain companies is the best way.  The process below (as originally found in a Microsoft presentation) is a great start.

You can transform your business from the following business areas:

image

So let’s at least start to categorise the elements of a digital journey, then build upon this.

  • Empowering your employees
  • Engaging your customers
  • Transforming your products and services
  • Optimising your operations.

I think we can also cross reference the above with our knowledge of mega-trends and come up with a handy matrix of megatrend within digital category:


image

I think that this categorisation assists with identification of opportunities for innovation in your organisation.

Note that I’ve added a couple of my own for “good luck”, integration and configuration (citizen developer).

I could categorise most of the innovation at Fusion5 with the above.

Thursday, 1 June 2017

oracle OLTP compression, F42199 and the missing compression

It’s like a murder mystery…

I’ve been doing some OLTP compression for a client, because they have too much data – and JDE loves whitespace.

So I whip up some great statements.

-- create an empty copy of F42199
create table TESTDTA.F42199SRM as select * from TESTDTA.F42199 where 1=0;
-- enable OLTP compression on the copy
alter table TESTDTA.F42199SRM compress for OLTP;
-- move it to the target tablespace (rebuilds the segment)
alter table TESTDTA.F42199SRM move tablespace TESTDTAT;
-- set NOLOGGING (only helps direct-path loads)
alter table TESTDTA.F42199SRM NOLOGGING;
-- load the data
insert into TESTDTA.F42199SRM select * from TESTDTA.F42199;
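
Before and after the load, a quick look at dba_segments tells you whether the copy actually shrank (assuming you have access to the DBA views):

```sql
-- Compare the segment sizes of the original and the compressed copy
SELECT segment_name,
       ROUND(bytes / 1024 / 1024 / 1024, 1) AS size_gb
FROM   dba_segments
WHERE  owner = 'TESTDTA'
AND    segment_name IN ('F42199', 'F42199SRM');
```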

All good, let’s check the size of the tables – still 320 GB…

image

How’s this for a table that works (F0911)

image

F0911 went from 381GB to 21GB…  WHAT!!!!  I’ll take a CPU hit any day of the week to be able to load the entire F0911 in 21GB!  The table above shows ORADTA uncompressed and CRPDTA and TESTDTA compressed.

Hmm, so it takes an hour to run… I get no errors…  But my table is still 231GB…  Something is wrong.  Then I remember reading something about compression not working if the table has more than 255 columns, and I see that F42199 has 266…  Doh!!!
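
A quick way to confirm the column count (basic / OLTP table compression does not support tables with more than 255 columns in this release, and F42199 is well over):

```sql
-- Count the columns on F42199 - anything over 255 rules out compression here
SELECT COUNT(*) AS column_count
FROM   all_tab_columns
WHERE  owner = 'TESTDTA'
AND    table_name = 'F42199';
```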

What am I going to do?

It seems simple at first: create an updatable view, nice..  works perfectly except when trying to insert the data.

I create a couple of smaller tables and then define the view to be a select over the top of them.  Then insert all of the data and I’m done!

CREATE TABLE "TESTDTA"."F42199_T1"
   (    "SLKCOO" NCHAR(5) NOT NULL ,
    "SLDOCO" NUMBER NOT NULL ,
    "SLDCTO" NCHAR(2) NOT NULL ,
    "SLLNID" NUMBER NOT NULL ,
    "SLSFXO" NCHAR(3),
    "SLMCU" NCHAR(12),
    "SLCO" NCHAR(5),
    "SLOKCO" NCHAR(5),
    "SLOORN" NCHAR(8),
    "SLOCTO" NCHAR(2),
    "SLOGNO" NUMBER,
    "SLRKCO" NCHAR(5),
    "SLRORN" NCHAR(8),
    "SLRCTO" NCHAR(2),
    "SLRLLN" NUMBER,
    "SLDMCT" NCHAR(12),
    "SLDMCS" NUMBER,
    "SLAN8" NUMBER,
    "SLSHAN" NUMBER,
    "SLPA8" NUMBER,
    "SLDRQJ" NUMBER(6,0),
    "SLTRDJ" NUMBER(6,0),
    "SLPDDJ" NUMBER(6,0),
    "SLADDJ" NUMBER(6,0),
    "SLIVD" NUMBER(6,0),
    "SLCNDJ" NUMBER(6,0),
    "SLDGL" NUMBER(6,0),
    "SLRSDJ" NUMBER(6,0),
    "SLPEFJ" NUMBER(6,0),
    "SLPPDJ" NUMBER(6,0),
    "SLVR01" NCHAR(25),
    "SLVR02" NCHAR(25),
    "SLITM" NUMBER,
    "SLLITM" NCHAR(25),
    "SLAITM" NCHAR(25),
    "SLLOCN" NCHAR(20),
    "SLLOTN" NCHAR(30),
    "SLFRGD" NCHAR(3),
    "SLTHGD" NCHAR(3),
    "SLFRMP" NUMBER,
    "SLTHRP" NUMBER,
    "SLEXDP" NUMBER,
    "SLDSC1" NCHAR(30),
    "SLDSC2" NCHAR(30),
    "SLLNTY" NCHAR(2),
    "SLNXTR" NCHAR(3),
    "SLLTTR" NCHAR(3),
    "SLEMCU" NCHAR(12),
    "SLRLIT" NCHAR(8),
    "SLKTLN" NUMBER,
    "SLCPNT" NUMBER,
    "SLRKIT" NUMBER,
    "SLKTP" NUMBER,
    "SLSRP1" NCHAR(3),
    "SLSRP2" NCHAR(3),
    "SLSRP3" NCHAR(3),
    "SLSRP4" NCHAR(3),
    "SLSRP5" NCHAR(3),
    "SLPRP1" NCHAR(3),
    "SLPRP2" NCHAR(3),
    "SLPRP3" NCHAR(3),
    "SLPRP4" NCHAR(3),
    "SLPRP5" NCHAR(3),
    "SLUOM" NCHAR(2),
    "SLUORG" NUMBER,
    "SLSOQS" NUMBER,
    "SLSOBK" NUMBER,
    "SLSOCN" NUMBER,
    "SLSONE" NUMBER,
    "SLUOPN" NUMBER,
    "SLQTYT" NUMBER,
    "SLQRLV" NUMBER,
    "SLCOMM" NCHAR(1),
    "SLOTQY" NCHAR(1),
    "SLUPRC" NUMBER,
    "SLAEXP" NUMBER,
    "SLAOPN" NUMBER,
    "SLPROV" NCHAR(1),
    "SLTPC" NCHAR(1),
    "SLAPUM" NCHAR(2),
    "SLLPRC" NUMBER,
    "SLUNCS" NUMBER,
    "SLECST" NUMBER,
    "SLCSTO" NCHAR(1),
    "SLTCST" NUMBER,
    "SLINMG" NCHAR(10),
    "SLPTC" NCHAR(3),
    "SLRYIN" NCHAR(1),
    "SLDTBS" NCHAR(1),
    "SLTRDC" NUMBER,
    "SLFUN2" NUMBER,
    "SLASN" NCHAR(8),
    "SLPRGR" NCHAR(8),
    "SLCLVL" NCHAR(3),
    "SLCADC" NUMBER,
    "SLKCO" NCHAR(5),
    "SLDOC" NUMBER,
    "SLDCT" NCHAR(2),
    "SLODOC" NUMBER,
    "SLODCT" NCHAR(2),
    "SLOKC" NCHAR(5),
    "SLPSN" NUMBER,
    "SLDELN" NUMBER,
    "SLTAX1" NCHAR(1),
    "SLTXA1" NCHAR(10),
    "SLEXR1" NCHAR(2),
    "SLATXT" NCHAR(1),
    "SLPRIO" NCHAR(1),
    "SLRESL" NCHAR(1),
    "SLBACK" NCHAR(1),
    "SLSBAL" NCHAR(1),
    "SLAPTS" NCHAR(1),
    "SLLOB" NCHAR(3),
    "SLEUSE" NCHAR(3),
    "SLDTYS" NCHAR(2),
    "SLNTR" NCHAR(2),
    "SLVEND" NUMBER,
    "SLCARS" NUMBER,
    "SLMOT" NCHAR(3),
    "SLROUT" NCHAR(3),
    "SLSTOP" NCHAR(3),
    "SLZON" NCHAR(3),
    "SLCNID" NCHAR(20),
    "SLFRTH" NCHAR(3),
    "SLSHCM" NCHAR(3),
    "SLSHCN" NCHAR(3),
    "SLSERN" NCHAR(30),
    "SLUOM1" NCHAR(2),
    "SLPQOR" NUMBER,
    "SLUOM2" NCHAR(2),
    "SLSQOR" NUMBER,
    "SLUOM4" NCHAR(2),
    "SLITWT" NUMBER,
    "SLWTUM" NCHAR(2),
    "SLITVL" NUMBER,
    "SLVLUM" NCHAR(2),
    "SLRPRC" NCHAR(8),
    "SLORPR" NCHAR(8),
    "SLORP" NCHAR(1),
    "SLCMGP" NCHAR(2),
    "SLGLC" NCHAR(4),
    "SLCTRY" NUMBER,
    "SLFY" NUMBER,
    "SLSO01" NCHAR(1),
    "SLSO02" NCHAR(1),
    "SLSO03" NCHAR(1),
    "SLSO04" NCHAR(1),
    "SLSO05" NCHAR(1),
    "SLSO06" NCHAR(1),
    "SLSO07" NCHAR(1),
    "SLSO08" NCHAR(1),
    "SLSO09" NCHAR(1),
    "SLSO10" NCHAR(1),
    "SLSO11" NCHAR(1),
    "SLSO12" NCHAR(1),
    "SLSO13" NCHAR(1),
    "SLSO14" NCHAR(1),
    "SLSO15" NCHAR(1),
    "SLACOM" NCHAR(1),
    "SLCMCG" NCHAR(8),
    "SLRCD" NCHAR(3),
    "SLGRWT" NUMBER,
    "SLGWUM" NCHAR(2),
    "SLSBL" NCHAR(8),
    "SLSBLT" NCHAR(1),
    "SLLCOD" NCHAR(2),
    "SLUPC1" NCHAR(2),
    "SLUPC2" NCHAR(2),
    "SLUPC3" NCHAR(2),
    "SLSWMS" NCHAR(1),
    "SLUNCD" NCHAR(1),
    "SLCRMD" NCHAR(1),
    "SLCRCD" NCHAR(3),
    "SLCRR" NUMBER,
    "SLFPRC" NUMBER,
    "SLFUP" NUMBER,
    "SLFEA" NUMBER,
    "SLFUC" NUMBER,
    "SLFEC" NUMBER,
    "SLURCD" NCHAR(2),
    "SLURDT" NUMBER(6,0),
    "SLURAT" NUMBER,
    "SLURAB" NUMBER,
    "SLURRF" NCHAR(15),
    "SLTORG" NCHAR(10),
    "SLUSER" NCHAR(10),
    "SLPID" NCHAR(10),
    "SLJOBN" NCHAR(10),
    "SLUPMJ" NUMBER(6,0) NOT NULL ENABLE,
    "SLTDAY" NUMBER NOT NULL ENABLE,
    "SLSO16" NCHAR(1),
    "SLSO17" NCHAR(1),
    "SLSO18" NCHAR(1),
    "SLSO19" NCHAR(1),
    "SLSO20" NCHAR(1),
    "SLIR01" NCHAR(30),
    "SLIR02" NCHAR(30),
    "SLIR03" NCHAR(30),
    "SLIR04" NCHAR(30),
    "SLIR05" NCHAR(30),
    "SLSOOR" NUMBER(15,0),
    "SLVR03" NCHAR(25),
    "SLDEID" NUMBER,
    "SLPSIG" NCHAR(30),
    "SLRLNU" NCHAR(10),
    "SLPMDT" NUMBER,
    "SLRLTM" NUMBER,
    "SLRLDJ" NUMBER(6,0),
    "SLDRQT" NUMBER,
    "SLADTM" NUMBER,
    "SLOPTT" NUMBER,
    "SLPDTT" NUMBER,
    "SLPSTM" NUMBER,
    "SLXDCK" NCHAR(1)
     ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
COMPRESS FOR OLTP NOLOGGING
  STORAGE(INITIAL 4294967296 NEXT 209715200 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SSDMAX"
  PARALLEL ;


  CREATE TABLE "TESTDTA"."F42199_T2"
   (    "SLKCOO" NCHAR(5) NOT NULL ,
    "SLDOCO" NUMBER NOT NULL ,
    "SLDCTO" NCHAR(2) NOT NULL ,
    "SLLNID" NUMBER NOT NULL ,
      "SLUPMJ" NUMBER(6,0) NOT NULL ENABLE,
    "SLTDAY" NUMBER NOT NULL ENABLE,
      "SLXPTY" NUMBER,
    "SLDUAL" NCHAR(1),
    "SLBSC" NCHAR(10),
    "SLCBSC" NCHAR(10),
    "SLCORD" NUMBER,
    "SLDVAN" NUMBER,
    "SLPEND" NCHAR(1),
    "SLRFRV" NCHAR(3),
    "SLMCLN" NUMBER,
    "SLSHPN" NUMBER,
    "SLRSDT" NUMBER,
    "SLPRJM" NUMBER,
    "SLOSEQ" NUMBER,
    "SLMERL" NCHAR(3),
    "SLHOLD" NCHAR(2),
    "SLHDBU" NCHAR(12),
    "SLDMBU" NCHAR(12),
    "SLBCRC" NCHAR(3),
    "SLODLN" NUMBER,
    "SLOPDJ" NUMBER(6,0),
    "SLXKCO" NCHAR(5),
    "SLXORN" NUMBER,
    "SLXCTO" NCHAR(2),
    "SLXLLN" NUMBER,
    "SLXSFX" NCHAR(3),
    "SLPOE" NCHAR(6),
    "SLPMTO" NCHAR(1),
    "SLANBY" NUMBER,
    "SLPMTN" NCHAR(12),
    "SLNUMB" NUMBER,
    "SLAAID" NUMBER,
    "SLPRAN8" NUMBER,
    "SLSPATTN" NCHAR(50),
    "SLPRCIDLN" NUMBER,
    "SLCCIDLN" NUMBER,
    "SLSHCCIDLN" NUMBER,
    "SLOPPID" NUMBER,
    "SLOSTP" NCHAR(3),
    "SLUKID" NUMBER,
    "SLCATNM" NCHAR(30),
    "SLALLOC" NCHAR(1),
    "SLFULPID" NUMBER(15,0),
    "SLALLSTS" NCHAR(30),
    "SLOSCORE" NUMBER,
    "SLOSCOREO" NCHAR(1),
    "SLCMCO" NCHAR(5),
    "SLKITID" NUMBER,
    "SLKITAMTDOM" NUMBER,
    "SLKITAMTFOR" NUMBER,
    "SLKITDIRTY" NCHAR(1),
    "SLOCITT" NCHAR(1),
    "SLOCCARDNO" NUMBER
       ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
COMPRESS FOR OLTP NOLOGGING
  STORAGE(INITIAL 4294967296 NEXT 209715200 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SSDMAX"
  PARALLEL ;
 


  CREATE VIEW TESTDTA.F42199_T3 as
  select t1.SLKCOO ,
t1.SLDOCO ,
t1.SLDCTO ,
t1.SLLNID ,
t1.SLSFXO ,
t1.SLMCU ,
t1.SLCO ,
t1.SLOKCO ,
t1.SLOORN ,
t1.SLOCTO ,
t1.SLOGNO ,
t1.SLRKCO ,
t1.SLRORN ,
t1.SLRCTO ,
t1.SLRLLN ,
t1.SLDMCT ,
t1.SLDMCS ,
t1.SLAN8 ,
t1.SLSHAN ,
t1.SLPA8 ,
t1.SLDRQJ ,
t1.SLTRDJ ,
t1.SLPDDJ ,
t1.SLADDJ ,
t1.SLIVD ,
t1.SLCNDJ ,
t1.SLDGL ,
t1.SLRSDJ ,
t1.SLPEFJ ,
t1.SLPPDJ ,
t1.SLVR01 ,
t1.SLVR02 ,
t1.SLITM ,
t1.SLLITM ,
t1.SLAITM ,
t1.SLLOCN ,
t1.SLLOTN ,
t1.SLFRGD ,
t1.SLTHGD ,
t1.SLFRMP ,
t1.SLTHRP ,
t1.SLEXDP ,
t1.SLDSC1 ,
t1.SLDSC2 ,
t1.SLLNTY ,
t1.SLNXTR ,
t1.SLLTTR ,
t1.SLEMCU ,
t1.SLRLIT ,
t1.SLKTLN ,
t1.SLCPNT ,
t1.SLRKIT ,
t1.SLKTP ,
t1.SLSRP1 ,
t1.SLSRP2 ,
t1.SLSRP3 ,
t1.SLSRP4 ,
t1.SLSRP5 ,
t1.SLPRP1 ,
t1.SLPRP2 ,
t1.SLPRP3 ,
t1.SLPRP4 ,
t1.SLPRP5 ,
t1.SLUOM ,
t1.SLUORG ,
t1.SLSOQS ,
t1.SLSOBK ,
t1.SLSOCN ,
t1.SLSONE ,
t1.SLUOPN ,
t1.SLQTYT ,
t1.SLQRLV ,
t1.SLCOMM ,
t1.SLOTQY ,
t1.SLUPRC ,
t1.SLAEXP ,
t1.SLAOPN ,
t1.SLPROV ,
t1.SLTPC ,
t1.SLAPUM ,
t1.SLLPRC ,
t1.SLUNCS ,
t1.SLECST ,
t1.SLCSTO ,
t1.SLTCST ,
t1.SLINMG ,
t1.SLPTC ,
t1.SLRYIN ,
t1.SLDTBS ,
t1.SLTRDC ,
t1.SLFUN2 ,
t1.SLASN ,
t1.SLPRGR ,
t1.SLCLVL ,
t1.SLCADC ,
t1.SLKCO ,
t1.SLDOC ,
t1.SLDCT ,
t1.SLODOC ,
t1.SLODCT ,
t1.SLOKC ,
t1.SLPSN ,
t1.SLDELN ,
t1.SLTAX1 ,
t1.SLTXA1 ,
t1.SLEXR1 ,
t1.SLATXT ,
t1.SLPRIO ,
t1.SLRESL ,
t1.SLBACK ,
t1.SLSBAL ,
t1.SLAPTS ,
t1.SLLOB ,
t1.SLEUSE ,
t1.SLDTYS ,
t1.SLNTR ,
t1.SLVEND ,
t1.SLCARS ,
t1.SLMOT ,
t1.SLROUT ,
t1.SLSTOP ,
t1.SLZON ,
t1.SLCNID ,
t1.SLFRTH ,
t1.SLSHCM ,
t1.SLSHCN ,
t1.SLSERN ,
t1.SLUOM1 ,
t1.SLPQOR ,
t1.SLUOM2 ,
t1.SLSQOR ,
t1.SLUOM4 ,
t1.SLITWT ,
t1.SLWTUM ,
t1.SLITVL ,
t1.SLVLUM ,
t1.SLRPRC ,
t1.SLORPR ,
t1.SLORP ,
t1.SLCMGP ,
t1.SLGLC ,
t1.SLCTRY ,
t1.SLFY ,
t1.SLSO01 ,
t1.SLSO02 ,
t1.SLSO03 ,
t1.SLSO04 ,
t1.SLSO05 ,
t1.SLSO06 ,
t1.SLSO07 ,
t1.SLSO08 ,
t1.SLSO09 ,
t1.SLSO10 ,
t1.SLSO11 ,
t1.SLSO12 ,
t1.SLSO13 ,
t1.SLSO14 ,
t1.SLSO15 ,
t1.SLACOM ,
t1.SLCMCG ,
t1.SLRCD ,
t1.SLGRWT ,
t1.SLGWUM ,
t1.SLSBL ,
t1.SLSBLT ,
t1.SLLCOD ,
t1.SLUPC1 ,
t1.SLUPC2 ,
t1.SLUPC3 ,
t1.SLSWMS ,
t1.SLUNCD ,
t1.SLCRMD ,
t1.SLCRCD ,
t1.SLCRR ,
t1.SLFPRC ,
t1.SLFUP ,
t1.SLFEA ,
t1.SLFUC ,
t1.SLFEC ,
t1.SLURCD ,
t1.SLURDT ,
t1.SLURAT ,
t1.SLURAB ,
t1.SLURRF ,
t1.SLTORG ,
t1.SLUSER ,
t1.SLPID ,
t1.SLJOBN ,
t1.SLUPMJ ,
t1.SLTDAY ,
t1.SLSO16 ,
t1.SLSO17 ,
t1.SLSO18 ,
t1.SLSO19 ,
t1.SLSO20 ,
t1.SLIR01 ,
t1.SLIR02 ,
t1.SLIR03 ,
t1.SLIR04 ,
t1.SLIR05 ,
t1.SLSOOR ,
t1.SLVR03 ,
t1.SLDEID ,
t1.SLPSIG ,
t1.SLRLNU ,
t1.SLPMDT ,
t1.SLRLTM ,
t1.SLRLDJ ,
t1.SLDRQT ,
t1.SLADTM ,
t1.SLOPTT ,
t1.SLPDTT ,
t1.SLPSTM ,
t1.SLXDCK ,
t2.SLXPTY ,
t2.SLDUAL ,
t2.SLBSC ,
t2.SLCBSC ,
t2.SLCORD ,
t2.SLDVAN ,
t2.SLPEND ,
t2.SLRFRV ,
t2.SLMCLN ,
t2.SLSHPN ,
t2.SLRSDT ,
t2.SLPRJM ,
t2.SLOSEQ ,
t2.SLMERL ,
t2.SLHOLD ,
t2.SLHDBU ,
t2.SLDMBU ,
t2.SLBCRC ,
t2.SLODLN ,
t2.SLOPDJ ,
t2.SLXKCO ,
t2.SLXORN ,
t2.SLXCTO ,
t2.SLXLLN ,
t2.SLXSFX ,
t2.SLPOE ,
t2.SLPMTO ,
t2.SLANBY ,
t2.SLPMTN ,
t2.SLNUMB ,
t2.SLAAID ,
t2.SLPRAN8 ,
t2.SLSPATTN ,
t2.SLPRCIDLN ,
t2.SLCCIDLN ,
t2.SLSHCCIDLN ,
t2.SLOPPID ,
t2.SLOSTP ,
t2.SLUKID ,
t2.SLCATNM ,
t2.SLALLOC ,
t2.SLFULPID ,
t2.SLALLSTS ,
t2.SLOSCORE ,
t2.SLOSCOREO ,
t2.SLCMCO ,
t2.SLKITID ,
t2.SLKITAMTDOM ,
t2.SLKITAMTFOR ,
t2.SLKITDIRTY ,
t2.SLOCITT ,
t2.SLOCCARDNO
FROM TESTDTA.F42199_T1 T1, TESTDTA.F42199_T2 t2
WHERE t1.SLKCOO = t2.SLKCOO
AND t1.SLDOCO = t2.SLDOCO
AND t1.SLDCTO = t2.SLDCTO
AND t1.SLUPMJ = t2.SLUPMJ
AND t1.SLTDAY = t2.SLTDAY
AND t1.SLLNID = t2.SLLNID ;

create unique index TESTDTA.F42199_T1PK  ON TESTDTA.f42199_T1 (SLKCOO,SLDOCO,SLDCTO,SLUPMJ,SLTDAY,SLLNID);
create unique index TESTDTA.F42199_T2PK  ON TESTDTA.f42199_T2 (SLKCOO,SLDOCO,SLDCTO,SLUPMJ,SLTDAY,SLLNID);

That creates my other “table”, but I get the following when I do an insert:

SQL Error: ORA-01776: cannot modify more than one base table through a join view
01776. 00000 -  "cannot modify more than one base table through a join view"
*Cause:    Columns belonging to more than one underlying table were either
           inserted into or updated.
*Action:   Phrase the statement as two or more separate statements.
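
Taking the ORA-01776 action text literally, the fix would be to load each base table with its own INSERT … SELECT rather than going through the view.  A sketch of that (column lists abbreviated here – each insert must name every column of its split table):

```sql
-- Load the first split table directly (abbreviated column list)
INSERT INTO TESTDTA.F42199_T1 (SLKCOO, SLDOCO, SLDCTO, SLLNID, SLUPMJ, SLTDAY /*, ...all other T1 columns */)
SELECT SLKCOO, SLDOCO, SLDCTO, SLLNID, SLUPMJ, SLTDAY /*, ... */
FROM   TESTDTA.F42199;

-- Load the second split table, repeating the join-key columns
INSERT INTO TESTDTA.F42199_T2 (SLKCOO, SLDOCO, SLDCTO, SLLNID, SLUPMJ, SLTDAY, SLXPTY /*, ...all other T2 columns */)
SELECT SLKCOO, SLDOCO, SLDCTO, SLLNID, SLUPMJ, SLTDAY, SLXPTY /*, ... */
FROM   TESTDTA.F42199;

COMMIT;
```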

I thought that I could try an INSTEAD OF trigger, but I don’t care for that approach anymore.

So – it’s time to get creative.

I’m going to select 10 columns where the data is blank and drop them from the table.  Create a new view over that table and select constants for the other values – JOB DONE!

Then I’ll compress the table and all will be good.

I hear you saying “You cannot do that – what if someone wants to use one of those columns?”  I say, 320GB and 80 million rows cannot be wrong!

select count(1) from testdta.f42199 where SLRLTM>0;
select count(1) from testdta.f42199 where SLRLDJ>0 ;
select count(1) from testdta.f42199 where SLDRQT>0 ;
select count(1) from testdta.f42199 where SLADTM>0 ;
select count(1) from testdta.f42199 where SLOPTT>0 ;
select count(1) from testdta.f42199 where SLPDTT>0 ;
select count(1) from testdta.f42199 where SLPSTM>0 ;
select count(1) from testdta.f42199 where SLXPTY>0 ;
select count(1) from testdta.f42199 where SLDEID>0 ;
select count(1) from testdta.f42199 where SLSOOR >0 ;

So I use the above to find some fields that are blank for all 80 million records and will academically remove them from the table.
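
Typing those ten checks by hand gets old quickly – a little shell loop (in the same spirit as the korn shell earlier) will generate a check for any list of candidate columns (the list below is just the set above):

```shell
# Generate a zero-usage check for each candidate column and
# write them to a script you can run in sqlplus
for col in SLRLTM SLRLDJ SLDRQT SLADTM SLOPTT SLPDTT SLPSTM SLXPTY SLDEID SLSOOR
do
  echo "select count(1) from testdta.f42199 where ${col}>0 ;"
done > /tmp/f42199_checks.sql
```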

Create a view that just selects ‘’ or 0 for those columns – job done!
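
To make that concrete, here’s a hypothetical sketch (the slimmed-down table name F42199_SLIM and the exact dropped columns are illustrative only): drop the always-blank columns from the copy, then present them back as constants so the view keeps the full F42199 shape.  One caveat to test: Oracle treats ‘’ as NULL, so check the blank-string columns behave as JDE expects.

```sql
-- Hypothetical: F42199_SLIM is the compressed copy minus the unused columns
CREATE VIEW TESTDTA.F42199_V AS
SELECT SLKCOO, SLDOCO, SLDCTO, SLLNID, SLUPMJ, SLTDAY, -- ...the columns you kept
       0 AS SLRLTM,   -- dropped numeric columns come back as constant 0
       0 AS SLRLDJ,
       0 AS SLDRQT
FROM   TESTDTA.F42199_SLIM;
```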