About Me

I work for Fusion5 Australia. Connect with me on LinkedIn. I'm raising money for a good cause at the moment – please donate to the Leukaemia Foundation.

Wednesday, 30 March 2016

AWS and standard edition oracle

The more I use AWS, the more I think that this is how everyone should be running their IT environment.  So far, so good.

Everything that I've wanted from my environment, AWS has been able to provide.  I've not had problems with anything of significance; it's all been working well.  Quite the opposite of my recent experience with some engineered solutions from a certain vendor.  I still cannot believe that you can buy an engineered solution from a vendor and then need to pay for over 35 days of consulting to put a single database on it – or that it cannot be fixed if the client decides to install the machine themselves.  I'm struggling with the concept of an "appliance", to be honest – but that is a blog for another time.

I've just been unit testing a JD Edwards installation on AWS.  I chose RDS and EE for a reason: I wanted high availability and DR.  I didn't really need a lot of the other functionality (although I do like the performance and statistics features).

I've done interactive load testing and batch load testing, so I know exactly how things perform on this machine with EE – but the client gets free Standard Edition with JD Edwards, so how about I test that?

I can still do a Multi-AZ Standard Edition implementation through the AWS wizards!  So, how cool is this?  I get my high availability and DR with BYOL for the database on AWS.  Could this be serious?  I know I'm really going to have to look into this carefully.

image

So you can see from the above

I’m creating a new database from a snapshot that I took 30 minutes ago, wow, that is cool.

I can also change the instance engine to be standard edition.

So I've created a Standard Edition database from an EE snapshot.

All I need to do is change tnsnames.ora to point to the new Standard Edition database, and then run all of my performance tests again.  The small lesson here: don't create a DB and then think that you can restore a snapshot into it – create the DB from the snapshot.

Remember that you cannot just take a database instance down and bring it back up whenever you want (though you can terminate it).  You need to delete the instance (taking a final snapshot) and then restore from that snapshot if you need it again.  This is awesome for any load testing that you might want to perform, as you can take the database back to the state before the load test was run.
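For reference, the snapshot-then-restore flow above can be sketched with the AWS CLI.  This is only a sketch – the instance and snapshot identifiers are hypothetical, and the engine names are the ones RDS used for Oracle at the time:

```
# Take a final snapshot of the EE instance before deleting it:
aws rds create-db-snapshot \
    --db-instance-identifier jde-prod-ee \
    --db-snapshot-identifier jde-prod-ee-pretest

# Restore it as a new Standard Edition One instance -- the engine on the
# restore can differ from the edition of the source:
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier jde-prod-se \
    --db-snapshot-identifier jde-prod-ee-pretest \
    --engine oracle-se1 \
    --multi-az
```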

To be honest, this might have allowed me to put AWS DBA on my resume.  I can create and restore a highly available database with ease.  This also makes me think that I could re-architect my designs going forward.  I could easily have a prod instance (which contains ORADTA and ORACTL).  I could have a shared instance for ALL other owners (DD, OL, PY, PD, SY etc.) and then instances for each additional environment.  Then, when I needed to do a refresh, all I'd have to do is create a snapshot of prod and restore it to the same name as my target – CLIENT_CRP, for example.  I'd then have a COMPLETE data refresh in a very short amount of time, with only a small outage while I deleted the existing instance and created the new one.

Creating the standard edition database was sooo simple, thanks AWS.

I updated the OMDATB fields in sy910.f98611 and svm910.f98611 (in my new Standard Edition database) – easy.  I created a new TNS entry on all of my machines with the new DB server name.  Note that in AWS your server name is going to be huge; the JDE.INI does not care about that long name – it only uses tnsnames.ora and the DB alias, so that is easy.  You don't really need to fill out the hostname in [DB SYSTEM SETTINGS] – but what you put there gets written to the log files – haha!
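For illustration, the TNS entry might look like the fragment below.  The alias, endpoint and service name here are all hypothetical – the point is that JDE only ever sees the short alias, no matter how long the RDS endpoint is:

```
JDESE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)
      (HOST = mydb.xxxxxxxxxxxx.ap-southeast-2.rds.amazonaws.com)
      (PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = JDEPROD))
  )
```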

I then changed the system tnsnames entry in JAS.INI and JDE.INI to point to the new TNS entry, restarted JDE – and now all my connections are going to the new database.

So I essentially downgraded my EE database to free Oracle licences (included with Oracle Technology Foundation) in about 1 hour.  Completely!

The load test against my standard edition database is going fine too, so it looks like this might be the way forward for this implementation.

image

image

Above is the runtime; each colour is a different script.  I've not got a single error, and I have 50 users pounding the system with 0 delay.

image

Database is busy, but not exceptionally so.

image

You can also see from the above that my machines are busy, but this test is relentless…  I've also got the JDE job scheduler going full speed, so there is no rest for the wicked.  It's really good to see the machines getting used; nothing is worse than bad performance and 0% CPU all over the place.  I have good performance and the CPU is getting used.  I could not ask for more.

Thursday, 24 March 2016

Load test traffic analysis–what goes where

When trying to work out your JD Edwards architecture, you really need to know what goes where in terms of network traffic, so you can design the disk and network layers appropriately.

Wouldn't it be nice to understand the relative volumes of IO and network traffic so that you could provision them properly?  Well, I'm here to help.

When load testing 50 concurrent users with a mixed workload (queries and writes) – very busy – we served 282,000 unique JDE pages in 60 minutes.  This generated 2GB of HTML traffic to and from the HTML servers.

The traffic analysis between the various tiers is below, but before I get to this… what else do these numbers tell you?

For starters, each session averaged 11.4KB/s – that is good to know when sizing remote connections.
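That per-session number is just total traffic divided by users and time; here is a quick back-of-envelope check (the small gap from 11.4KB/s comes down to how the "2GB" was rounded in the first place):

```shell
# 2 GB of HTML traffic, 50 users, 60 minutes
awk 'BEGIN { printf "%.1f KB/s per session\n", (2 * 1024 * 1024) / 50 / 3600 }'
```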

image

My awesome Visio skills are on display above.  Note that the traffic arrows are fairly generic.  Remember that webserver OUT = user HTML in + JDBC in + logic calls out…  It's not just traffic to the end users.  Same with the logic server: most of its traffic goes to the database server, but quite a bit goes to HTML.

Wednesday, 23 March 2016

I’ve downloaded a par file, but the BSFN is going to the WRONG DLL

What should I do?

It's funny, but you cannot easily change the parent library for a BSFN, even if it's brand new to your system.

So, it's time to play SQL cowboy.

select SIPARDLL from ol910.f9860 where siobnm = 'B5600003';

SIPARDLL  
----------
CBUSPART  

1 rows selected

Right, that’s wrong…

image

Matches my fat boy, as above..

update ol910.f9860 set SIPARDLL = 'CCUSTOM' where siobnm = 'B5600003';
commit;

image

yay!

Tuesday, 22 March 2016

Database class and throughput lessons for JDE in AWS

You gotta start somewhere…  We all know that, but where should we start?

This is a question that I had when choosing the database instance class for a POC in AWS.  I needed to start somewhere, and I did not want to spend too much money – although I do pay by the hour…

AWS teaches you to be a little more efficient with things like IO, because you are paying for it and you are limited in about 10 different ways.

image

Look at the graph above: 60MB/s every day of the week – all large reads.

image

The database is recording throughput at 60000 IOPs – but wait, I paid for 3000.

image

AWS is telling me that I'm only using 500 IOPs, but this is a hard ceiling.  It is limiting my performance…  How do I know?

image

This is how, look at my queue depth

image

The above is a little too “choppy” for my liking too, so I need to smooth this out.

image

Across my architecture I don't see much activity at all…  I do not see enough…  Classic bottleneck.

So, I do a deep dive before I ask AWS to raise my IOPS.

DB Instance Classes Optimized for Provisioned IOPS

Class            Dedicated EBS Throughput (Mbps)   Maximum 16k IOPS Rate**   Max Bandwidth (MB/s)**
db.m1.large      500                               4000                      62.5
db.m1.xlarge     1000                              8000                      125
db.m2.2xlarge    500                               4000                      62.5
db.m2.4xlarge    1000                              8000                      125
db.m3.xlarge     500                               4000                      62.5

You gotta start somewhere – remember, I started with the bottom line above.  Look at the Max Bandwidth column – that is ME!!  Adding reads and writes together, I'm hitting this limit pretty hard.

So I need to up my database server class to something with more IO in terms of MB/s, even to get close to 3000 IOPs.  REMEMBER THIS!!  This is an important equation if you are looking at RDS.  Sure, you might think that 3000 IOPs is good when provisioning your server – but if you are limited by this bandwidth, you are high and dry.  (Note that with an 8K block size, Oracle appears to be doing large reads…  We are getting 500 IOPs and 60MB/s, so it seems that Oracle is doing 120K reads??  This seems strange, although I don't doubt the MB/s.)
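The back-of-envelope relationship here is simply throughput = IOPS × average IO size.  Taking the reported figures above at face value:

```shell
# Implied IO size from the graphs above: 60 MB/s at 500 IOPs
awk 'BEGIN { printf "Implied IO size: %.0f KB\n", 60 * 1024 / 500 }'

# Bandwidth that 3000 IOPs of those ~120 KB reads would need -- far
# beyond the 62.5 MB/s cap on this instance class
awk 'BEGIN { printf "Bandwidth needed: %.0f MB/s\n", 3000 * 120 / 1024 }'
```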

I need a class above!

Class            Dedicated EBS Throughput (Mbps)   Maximum 16k IOPS Rate**   Max Bandwidth (MB/s)**
db.m3.2xlarge    1000                              8000                      125

And the hourly pricing for the jump:

Class            Price per hour
db.m3.xlarge     $1.680
db.m3.2xlarge    $3.360

image

Double memory, double cores and double disk throughput

Click a few buttons – and voila!  We are modifying the database to a new instance class.

image

Note that you do lose the database for a short time!  This was not made clear by the wizard…
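For what it's worth, the same class change can be made from the AWS CLI rather than the console (the instance identifier here is hypothetical):

```
aws rds modify-db-instance \
    --db-instance-identifier jde-prod \
    --db-instance-class db.m3.2xlarge \
    --apply-immediately
```

Without --apply-immediately the change waits for the next maintenance window, which may be exactly what you want in production.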

Time (UTC+11)     Event
Mar 22 3:04 PM    DB instance restarted
Mar 22 3:04 PM    Multi-AZ instance failover completed
Mar 22 3:03 PM    Multi-AZ instance failover started
Mar 22 2:53 PM    Applying modification to database instance class
Mar 22 5:07 AM    Finished DB instance backup
Mar 22 5:03 AM    Backing up DB instance

I had a SQL*Plus session open, and I got the following errors midway:

Error starting at line : 12 in command -
select count(1) from proddta.f0911
Error at Command Line : 12 Column : 1
Error report -
SQL Error: No more data to read from socket

Error starting at line : 13 in command -
select count(1) from proddta.F4801
Error at Command Line : 13 Column : 1
Error report -
SQL Error: No more data to read from socket

Now look!!!  The statistics are vastly different.  The database now has enough memory for the SGA, so physical writes are significantly lower – because they do not need to be physical anymore.

image

All fixed with a different class of DB server.

And here are the default performance graphs of a system that is hitting some serious ceilings…

image

Above is the before performance…  All scripts running about the same time 250 seconds (give or take)

image

Here is the exact same payload with a properly sized DB server – the difference is AMAZING!  All scripts average lower than 250 seconds.

Friday, 18 March 2016

I gave my VM more memory, now OATS will not start

It always happens when you don’t want it to happen.

A quick change to get more performance out of my VM for OATS ended up killing it.

image

A quick search of logs shows problems with the following in AdminServer.log

Exception Description: Cannot acquire data source [OATS_common_DS].
Internal Exception: javax.naming.NameNotFoundException: Unable to resolve 'OATS_common_DS'. Resolved ''; remaining name 'OATS_common_DS'
    at org.eclipse.persistence.exceptions.ValidationException.cannotAcquireDataSource(ValidationException.java:497)
    at org.eclipse.persistence.sessions.JNDIConnector.connect(JNDIConnector.java:109)
    at org.eclipse.persistence.sessions.DatasourceLogin.connectToDatasource(DatasourceLogin.java:162)
    at org.eclipse.persistence.internal.sessions.DatabaseSessionImpl.loginAndDetectDatasource(DatabaseSessionImpl.java:584)

Okay, database problems.

oradim.log has:

Thu Mar 17 19:27:31 2016
C:\OracleATS\oxe\app\oracle\product\11.2.0\server\bin\oradim.exe -startup -sid xe -usrpwd *  -log oradim.log -nocheck 0
Thu Mar 17 19:27:31 2016
ORA-01078: failure in processing system parameters
ORA-00838: Specified value of MEMORY_TARGET is too small, needs to be at least 172M


Thu Mar 17 20:15:44 2016
C:\OracleATS\oxe\app\oracle\product\11.2.0\server\bin\oradim.exe -shutdown -sid xe -usrpwd * -shutmode immediate -log oradim.log
Thu Mar 17 20:15:46 2016
ORA-01012: not logged on


Thu Mar 17 20:15:50 2016
C:\OracleATS\oxe\app\oracle\product\11.2.0\server\bin\oradim.exe -startup -sid xe -usrpwd *  -log oradim.log -nocheck 0
Thu Mar 17 20:15:50 2016
ORA-01078: failure in processing system parameters

Okay, this might be simpler than I thought.  I'll edit my memory_target and everything will be sweet again.  Hey, I might give it a bunch more memory.

C:\OracleATS\oxe\app\oracle\product\11.2.0\server\dbs>sqlplus "/ as sysdba"

SQL*Plus: Release 11.2.0.2.0 Production on Thu Mar 17 20:26:16 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL>

Now, tricky…

SQL> create pfile from spfile;
create pfile from spfile
*
ERROR at line 1:
ORA-01565: error in identifying file
'%ORACLE_HOME%\DATABASE\SPFILE%ORACLE_SID%.ORA'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.

To get around this, copy C:\OracleATS\oxe\app\oracle\product\11.2.0\server\dbs\spfilexe.ora to C:\OracleATS\oxe\app\oracle\product\11.2.0\server\database.

Then, after copying spfilexe.ora to the database dir:

SQL> create pfile from spfile;

File created.

Edit this file (initXE.ora in the %ORACLE_HOME%\database dir):

xe.__db_cache_size=29360128
xe.__java_pool_size=4194304
xe.__large_pool_size=4194304
xe.__oracle_base='C:\OracleATS\oxe\app\oracle'#ORACLE_BASE set from environment
xe.__pga_aggregate_target=0
xe.__sga_target=167772160
xe.__shared_io_pool_size=0
xe.__shared_pool_size=121634816
xe.__streams_pool_size=0
*.audit_file_dest='C:\OracleATS\oxe\app\oracle\admin\XE\adump'
*.compatible='11.2.0.0.0'
*.control_files='C:\OracleATS\oxe\app\oracle\oradata\XE\control.dbf'
*.db_name='XE'
*.DB_RECOVERY_FILE_DEST_SIZE=10G
*.DB_RECOVERY_FILE_DEST='C:\OracleATS\oxe\app\oracle\fast_recovery_area'
*.diagnostic_dest='C:\OracleATS\oxe\app\oracle\.'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=XEXDB)'
*.job_queue_processes=4
*.memory_target=400M
*.open_cursors=300
*.processes=250
*.remote_login_passwordfile='EXCLUSIVE'
*.sessions=20
*.shared_servers=4
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'

I increased the memory

SQL> startup pfile='C:\OracleATS\oxe\app\oracle\product\11.2.0\server\database\initXE.ora'
ORACLE instance started.

Total System Global Area  417546240 bytes
Fixed Size                  2253784 bytes
Variable Size             297798696 bytes
Database Buffers          113246208 bytes
Redo Buffers                4247552 bytes
Database mounted.
Database opened.

YAY!

SQL> create spfile from pfile='C:\OracleATS\oxe\app\oracle\product\11.2.0\server\database\initXE.ora';

File created.

SQL>

Just remember to copy the file from the database dir back to the dbs dir (C:\OracleATS\oxe\app\oracle\product\11.2.0\server\dbs), as that is the one used for the initialisation of the local database.

Okay…  can reboot

 

Instead of the following:

Error 404--Not Found

From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1:
10.4.5 404 Not Found

The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.

If the server does not wish to make this information available to the client, the status code 403 (Forbidden) can be used instead. The 410 (Gone) status code SHOULD be used if the server knows, through some internally configurable mechanism, that an old resource is permanently unavailable and has no forwarding address.

When going to http://localhost:8088/olt

image

Thursday, 17 March 2016

sudo: sorry, you must have a tty to run sudo

My e1agent start script is not running on reboot, but it works when I run the service command manually…  What is going on?

This is easy: if you look in your /u01/oracle/jde_home_1/SCFHA/logs/e1agent_0.log file (or equivalent), you might see the text below:

sudo: sorry, you must have a tty to run sudo

Great, this is super simple.

All you need to do is edit /etc/sudoers (ideally with visudo) and find the following:


#
# Disable "ssh hostname sudo <cmd>", because it will show the password in clear.
#         You have to run "ssh -t hostname sudo <cmd>".
#
#Defaults    requiretty

Comment out the requiretty line as above, and now you'll be cooking with gas.
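If you want to see the edit before touching the real file, here is a sketch that makes the change on a scratch copy (on the real system, make the change through visudo rather than editing /etc/sudoers directly; the sed pattern assumes GNU sed):

```shell
# Demonstrate the requiretty comment-out on a scratch copy
f=$(mktemp)
printf 'Defaults    requiretty\n' > "$f"
# Prefix the directive with '#' while preserving the spacing
sed -i 's/^Defaults\([[:space:]]*\)requiretty/#Defaults\1requiretty/' "$f"
cat "$f"    # -> #Defaults    requiretty
rm -f "$f"
```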

Tuesday, 15 March 2016

504 GATEWAY_TIMEOUT errors with JD Edwards in AWS using ELB

We are load testing JD Edwards in AWS and once again getting a lot of 504 errors.  We are using the AWS-provided ELB to load balance JD Edwards between AZs.  10 users are giving us a pile of 504s.

I searched back and remembered that we had the same problem for another client in AWS; another search revealed that I needed to make the following changes in WebLogic on both nodes.

We need to change keep-alive and the connection timeout for JD Edwards to avoid this.  Note that this was slightly different from last time, as we are doing HTTPS at the current site.

Duration


The amount of time this server waits before closing an inactive HTTP connection.

Number of seconds to maintain HTTP keep-alive before timing out the request.

MBean Attribute:
WebServerMBean.KeepAliveSecs

Minimum value: 5

Maximum value: 3600

Secure value: 30

HTTPS Duration


The amount of time this server waits before closing an inactive HTTPS connection.

Number of seconds to maintain HTTPS keep-alive before timing out the request.

MBean Attribute:
WebServerMBean.HttpsKeepAliveSecs

Minimum value: 30

Maximum value: 360

Secure value: 60

image

Connect Timeout


The amount of time that this server should wait to establish an outbound socket connection before timing out. A value of 0 disables server connect timeout.

MBean Attribute:
ServerMBean.ConnectTimeout

Minimum value: 0

Maximum value: 240

image
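If you have to repeat this on both nodes, the console clicks above can also be scripted with WLST (a Jython sketch run via wlst.sh, not runnable standalone – the URL, credentials and server name are hypothetical, and the values shown are the "secure" ones from the help text above, so tune them to whatever cures your 504s):

```
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
# WebServerMBean: KeepAliveSecs / HttpsKeepAliveSecs
cd('/Servers/J2EE_SERVER/WebServer/J2EE_SERVER')
cmo.setKeepAliveSecs(30)
cmo.setHttpsKeepAliveSecs(60)
# ServerMBean: ConnectTimeout (0 disables the server connect timeout)
cd('/Servers/J2EE_SERVER')
cmo.setConnectTimeout(0)
save()
activate()
disconnect()
```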

After making the above changes and bouncing the instance we did not get a single 504!

configure AWS ELB for HTTPS offload with JDE

I'm configuring AWS for a new client implementation, and of course this needs to be highly available and DR-able…  You know what I mean.  My definition of DR is across availability zones – they have separate everything, so that is enough for me.  I don't need cross-region.

I have installed two separate web servers, listening on port 9005 for HTTPS and 9001 for HTTP.  I've configured my AWS ELB to point to these two nodes.  I had to load the SSL cert into AWS so that it can do the cert offload and the HTTPS work.

To get port 9005 working with HTTPS, I needed to do the following on my JD Edwards server (within AWS).  I'm using a port above 1024 because of all the root restrictions on the server for ports below that number.

Note that you need to make the 3 distinct changes listed below.  Also note that this is for the JDE server, not AdminServer as stated in many documents.

image

Note that although I'm doing SSL offload, I still need to configure the AWS ELB to point to a secure port in JDE.  I had a bunch of problems with HTTPS/HTTP redirects – it seems that something does not like holding the HTTPS tunnel…  So, when the ELB redirects to JDE on HTTPS, all is good.

I also had to define a custom affinity rule based upon JSESSIONID for the ELB in AWS.  This allowed JDE to work properly.
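With the classic ELB, that affinity rule can be sketched from the CLI as an application-cookie stickiness policy (the load balancer and policy names here are hypothetical):

```
aws elb create-app-cookie-stickiness-policy \
    --load-balancer-name jde-elb \
    --policy-name jde-jsessionid-affinity \
    --cookie-name JSESSIONID

aws elb set-load-balancer-policies-of-listener \
    --load-balancer-name jde-elb \
    --load-balancer-port 443 \
    --policy-names jde-jsessionid-affinity
```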

image

So now, when I log into my custom URL, I get presented with an HTTPS login screen, and this is maintained in the application.

image

Above are the redirects and where they are going from and to.

Let’s summarise with a bit of a lesson:

Only install a single JVM / endpoint for JD Edwards in WLS in AWS if you are going to use ELB and any sort of elasticity.  Why?  Because the ELB can only really forward to a single port per host…  So cookie-cut your web servers to be awesome at single-port provisioning of E1 (they could do AIS too)…

Friday, 11 March 2016

My (very) basic guide to iptables and jde

I've got my web servers firewalled and locked down, as they are going to be public facing.  I've implemented a bunch of security groups to only allow certain ports to connect, but I'm belt and braces, so I'm also firewalling.

I use the standard iptables functionality, and I do things simply too.  I edit the /etc/sysconfig/iptables file to do my configuration, as I have not really worked out the sequencing of live rule changes.  (Yes, sequencing is important: you must ALLOW then REJECT.  If, in a linear sense, you reject before you allow, nothing is going to get through.)

A sample file from my web server is below.  Note that I’m restricting inbound traffic to this server.

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp --dport 9000 -j ACCEPT
-A INPUT -p tcp --dport 9001 -j ACCEPT
-A INPUT -p tcp --dport 9002 -j ACCEPT
-A INPUT -p tcp --dport 7001 -j ACCEPT
-A INPUT -p tcp --dport 14501 -j ACCEPT
-A INPUT -p tcp --dport 14502 -j ACCEPT
-A INPUT -p tcp --dport 5556 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

COMMIT

You can then save this off and do what you want with it.  This covers AdminServer, NodeManager and my JD Edwards instances, and it also allows the Server Manager traffic.  Note that I could further restrict the source and destination, but I'm not going to do that for now.

service iptables start

service iptables stop

You can list the contents with iptables -L (but the detail is not great, as known ports are listed with their service names).
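A numeric listing avoids that service-name translation:

```
# -n prints ports numerically instead of names like cslistener;
# --line-numbers helps when inserting or deleting individual rules
iptables -L INPUT -n --line-numbers
```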

[root@vltweb01 software]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:cslistener
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:etlservicemgr
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:dynamid
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:afs3-callback
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:14501
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:14502
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:freeciv
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ssh
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

 

Note that all of these need to match your security groups

image

I need security groups because I have a public facing VPC and a private VPC, so I control the traffic between them.

iptables-save > /tmp/iptables.config

Enterprise Server

This is slightly more complicated.  iptables config is below, note the range for jdenet.

# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp --dport 14501 -j ACCEPT
-A INPUT -p tcp --dport 14502 -j ACCEPT
-A INPUT -p tcp --dport 6016:6030 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT

Note that in JDE.INI, I enabled predefinedPorts.

[JDENET]
FilePacketBufferSize=32768
internalQueueTimeOut=30
kernelDelay=0
krnlCoreDump=0
maxFixedDataPackets=2000
maxIPCQueueMsgs=400
maxLenFixedData=16384
maxLenInlineData=4096
maxNumSocketMsgQueue=800
netBroadcastAddress=INADDR_BROADCAST
netChildCheck=5
netCoreDump=0
netShutdownInterval=5
serviceNameListen=6016
serviceNameConnect=6016
maxNetProcesses=10
maxNetConnections=800
maxKernelProcesses=60
maxKernelRanges=34
netTrace=0
enablePredefinedPorts=1

This means that the ports are allocated from 6016

[root@vltent01 ~]# netstat -on |grep 60
tcp        0      0 10.10.20.122:6021           10.10.30.211:33907          ESTABLISHED keepalive (7002.54/0/0)
tcp        0      0 10.10.20.122:6017           10.10.20.122:34597          ESTABLISHED keepalive (5527.98/0/0)
tcp        0      0 10.10.20.122:6022           10.10.30.211:36543          ESTABLISHED keepalive (7002.54/0/0)
tcp        0      0 ::ffff:10.10.20.122:34597   ::ffff:10.10.20.122:6017    ESTABLISHED off (0.00/0/0)
unix  3      [ ]         DGRAM                    9260

Thursday, 10 March 2016

weblogic 12c startNodeManager.sh

Are you getting the following when trying to start nodemanager?

[root@vltweb01 ~]# <Mar 8, 2016 10:12:41 PM EST> <INFO> <Loading identity key store: FileName=/u01/oracle/Oracle/Middleware/Oracle_Home/oracle_common/common/nodemanager/security/DemoIdentity.jks, Type=jks, PassPhraseUsed=true>
<Mar 8, 2016 10:12:41 PM EST> <SEVERE> <Fatal error in NodeManager server>
weblogic.nodemanager.common.ConfigException: Identity key store file not found: /u01/oracle/Oracle/Middleware/Oracle_Home/oracle_common/common/nodemanager/security/DemoIdentity.jks
        at weblogic.nodemanager.server.SSLConfig.loadKeyStoreConfig(SSLConfig.java:225)
        at weblogic.nodemanager.server.SSLConfig.access$000(SSLConfig.java:33)
        at weblogic.nodemanager.server.SSLConfig$1.run(SSLConfig.java:118)
        at java.security.AccessController.doPrivileged(Native Method)
        at weblogic.nodemanager.server.SSLConfig.<init>(SSLConfig.java:115)
        at weblogic.nodemanager.server.NMServer.<init>(NMServer.java:143)
        at weblogic.nodemanager.server.NMServer.main(NMServer.java:527)
        at weblogic.NodeManager.main(NodeManager.java:31)

Then, like me, you are starting the wrong NodeManager.  This is the 11g one; 12c uses the NodeManager in user_projects.

A simple search shows:

[oracle@vltweb01 Oracle_Home]$ find . -name startNodeManager.sh
./inventory/Templates/wlserver/server/bin/startNodeManager.sh
./user_projects/domains/e1_apps/bin/startNodeManager.sh
./wlserver/server/bin/startNodeManager.sh

Don’t use the bottom one, use this one ./user_projects/domains/e1_apps/bin/startNodeManager.sh

Phew, that’s better.

Wednesday, 9 March 2016

ip addresses not host names in SM console

 

image

I get this, and nothing is really working in SM either – I think a bunch of it is related.

So, a Java program to the rescue – jeepers, I'm terrible at Java.  Google to the rescue: http://stackoverflow.com/questions/24057234/getting-ip-address-instead-of-hostname-from-gethostbyname-function-from-inetaddr

import java.net.InetAddress;

class GetHost {

    public static void main(String[] args) throws Exception {

        String hostIp = args[0];
        InetAddress addr = InetAddress.getByName(hostIp);
        String host = addr.getHostName();
        if (host.endsWith(".local")) {
            // strip the trailing ".local" suffix (6 characters)
            int length = host.length();
            System.out.print(host.substring(0, length - 6));
        } else {
            System.out.print(host);
        }
    }
}

>javac getHostName.java

> java GetHost 10.10.10.49

This returns the IP address.

Right, I'm getting somewhere.  This host is not known to DNS, so it spews back the IP.  Okay, this makes sense of a lot of things now.

So… get the machine added to DNS, or use the poor man's solution:

vi /etc/hosts as root

10.10.10.49 vltweb01 vltweb01.vjde.com

Now try the program again.

[oracle@vltweb01 logs]$ java GetHost 10.10.10.49
vltweb01[oracle@vltweb01 logs]$

Cool!  I then changed the order of the names so that the FQDN is returned.
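For the record, the reordering is just putting the FQDN first on the hosts line, since the first name after the IP is what reverse lookups return (hypothetical names as above):

```
# /etc/hosts
10.10.10.49 vltweb01.vjde.com vltweb01
```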

STOP SM

[oracle@vltweb01 bin]$ ./stopAgent
[oracle@vltweb01 bin]$ pwd
/u01/oracle/jde_home_1/SCFHA/bin

Delete from SM console

image

Start it again

image

wait patiently (pounding the F5 key on your browser)

image

Totally cool!  Now I have a pile of other errors to fix, but that is one down!

oracle 32 bit client on linux ent server

I'm using 11.2.0.4, so I downloaded patch set 13390677 (which you can just use as an install, not an upgrade).

image

Great, I don’t need to patch anything, it’s up to date (not 11.2.0.1)

You run the installer, and I needed the following OS bundles; this great post https://gruffdba.wordpress.com/2015/07/18/configuring-a-yum-repository-from-a-linux-install-disc/ helped me a bunch.  Remember that if the installer is asking for i686, this is actually 32-bit.

yum.repos.d]# yum install --enablerepo=dvd \
> binutils-2.* \
> compat-libcap1-1.* \
> compat-libstdc++-33-3.* \
> compat-libstdc++-33-3.*.i686 \
> gcc-4.* \
> gcc-c++-4.* \
> glibc-2.*.i686 \
> glibc-2.* \
> glibc-devel-2.* \
> glibc-devel-2.*.i686 \
> ksh \
> libgcc-4.*.i686 \
> libgcc-4.* \
> libstdc++-4.* \
> libstdc++-4.*.i686 \
> libstdc++-devel-4.* \
> libstdc++-devel-4.*.i686 \
> libaio-0.* \
> libaio-0.*.i686 \
> libaio-devel-0.* \
> libaio-devel-0.*.i686 \
> make-3.* \
> sysstat-9.*

The following environment variables are set as:
    ORACLE_OWNER= jde910
    ORACLE_HOME=  /u01/app/oracleclient32/product/11.2.0/client_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Installing the e1Agent on el6u6 and getting libXp library problems

 

This post basically exists because of the following stack trace.  It occurred because I'm installing software in AWS and used the AMI OL6.7-x86_64-PVM-2015-12-04 (ami-0fb7ef6c) as the base for all my servers.  This seems to be missing a few items, so let me document the journey…

[jde910@vltent01 install]$ ./runInstaller
Starting Oracle Universal Installer...

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-03-09_08-54-36AM. Please wait ...[jde910@vltent01 install]$ Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.

Exception java.lang.UnsatisfiedLinkError: /tmp/OraInstall2016-03-09_08-54-36AM/jdk/jre/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory occurred..
java.lang.UnsatisfiedLinkError: /tmp/OraInstall2016-03-09_08-54-36AM/jdk/jre/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1699)
        at java.lang.Runtime.load0(Runtime.java:770)
        at java.lang.System.load(System.java:1003)
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1803)
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1720)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.awt.Toolkit.loadLibraries(Toolkit.java:1592)
        at java.awt.Toolkit.<clinit>(Toolkit.java:1614)
        at oracle.bali.ewt.olaf.OracleLookAndFeel.<clinit>(OracleLookAndFeel.java:1451)
        at oracle.sysman.oii.oiif.oiifm.OiifmGraphicInterfaceManager._useOracleLookAndFeel(OiifmGraphicInterfaceManager.java:243)
        at oracle.sysman.oii.oiif.oiifm.OiifmGraphicInterfaceManager.<init>(OiifmGraphicInterfaceManager.java:263)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at java.lang.Class.newInstance0(Class.java:355)
        at java.lang.Class.newInstance(Class.java:308)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.createInterfaceManager(OiicSessionInterfaceManager.java:209)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.getInterfaceManager(OiicSessionInterfaceManager.java:243)
        at oracle.sysman.oii.oiic.OiicInstaller.getInterfaceManager(OiicInstaller.java:467)
        at oracle.sysman.oii.oiic.OiicInstaller.runInstaller(OiicInstaller.java:966)
        at oracle.sysman.oii.oiic.OiicInstaller.main(OiicInstaller.java:906)
Exception in thread "main" java.lang.NoClassDefFoundError: Could not initialize class oracle.bali.ewt.olaf.OracleLookAndFeel
        at oracle.sysman.oii.oiif.oiifm.OiifmGraphicInterfaceManager._useOracleLookAndFeel(OiifmGraphicInterfaceManager.java:243)
        at oracle.sysman.oii.oiif.oiifm.OiifmGraphicInterfaceManager.<init>(OiifmGraphicInterfaceManager.java:263)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at java.lang.Class.newInstance0(Class.java:355)
        at java.lang.Class.newInstance(Class.java:308)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.createInterfaceManager(OiicSessionInterfaceManager.java:209)

Starting from the beginning, here is the list of steps that should give you a safe install:

yum install unzip

yum install xterm

yum install glibc.i686

yum install ksh

yum erase libXp libXi libXtst

cd /etc/yum.repos.d

yum install wget

wget http://public-yum.oracle.com/public-yum-el5.repo

Note that when you try to install the i386 packages below, you might get:

warning: rpmts_HdrFromFdno: Header V3 DSA/SHA1 Signature, key ID 1e5e0159: NOKEY
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle


The GPG keys listed for the "Oracle Linux 6Server Latest (x86_64)" repository are already installed but they are not correct for this package.
Check that the correct key URLs are configured for this repository.

If you get the error above, edit the repo file below and set gpgcheck=0:

[root@vltweb02 yum.repos.d]# vi public-yum-el5.repo
[el5_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://yum.oracle.com/repo/OracleLinux/OL5/latest/$basearch/
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
gpgcheck=0
enabled=1

yum install libXp.i386 libXi.i386  libXtst.i386 --setopt=protected_multilib=false
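The manual vi edit above can also be scripted; a sketch, assuming the repo file path used in this post:

```shell
# flip gpgcheck off in a yum repo file; takes the file path as an argument
disable_gpgcheck() {
  sed -i 's/^gpgcheck=1/gpgcheck=0/' "$1"
}

# e.g. disable_gpgcheck /etc/yum.repos.d/public-yum-el5.repo
```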

Install Java as root: rpm -i ./jdk-8u73-linux-x64.rpm

[root@vltent01 software]# rpm -i ./*.rpm
Unpacking JAR files...
        tools.jar...
        plugin.jar...
        javaws.jar...
        deploy.jar...
        rt.jar...
        jsse.jar...
        charsets.jar...
        localedata.jar...
        jfxrt.jar...

useradd oracle
groupadd wls
usermod -a -G wls oracle
groupadd oinstall
usermod -a -G oinstall oracle
groups oracle

su - oracle

cd /u01/software/Disk1/install

chmod 755 *

Then, /u01/software/Disk1/install/runInstaller.sh

maybe you get this:

[jde910@vltent01 install]$ ./runInstaller
bash: ./runInstaller: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

You need, as root (if you've not already done it): yum install glibc.i686
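A quick preflight check can save a failed launch; a hedged sketch (the library paths are assumptions – they vary by release):

```shell
# report whether each file the 32-bit installer needs is present
check_lib() {
  if [ -e "$1" ]; then echo "OK $1"; else echo "MISSING $1"; fi
}

check_lib /lib/ld-linux.so.2        # 32-bit loader, from glibc.i686
check_lib /usr/lib/libXext.so.6     # from libXext.i386 (path assumed)
check_lib /usr/lib/libXtst.so.6     # from libXtst.i386 (path assumed)
```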

now as jde910:

[jde910@vltent01 install]$ which java
/usr/bin/java
[jde910@vltent01 install]$ java -version
java version "1.8.0_73"
Java(TM) SE Runtime Environment (build 1.8.0_73-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.73-b02, mixed mode)

 

And WOW – I finally get an installer window.  This is a conflict between the older installers and the new OS; you need the i386 versions of some of the X11 libraries.

Download and install Java (grab the RPM from http://www.oracle.com/technetwork/java/javase/downloads/java-se-jdk-7-download-432154.html )

./runInstaller

image

Note that I’m managing WLS with this install, so the oracle user is good

image

happy

image

next

image

Enter your deployment server and port (note that this is all in AWS)

image

Note that this failed while I was taking screenshots: I tried to start the agent, and it did not know about /bin/ksh.

I ran yum install ksh,

deleted the old dirs and uninstalled with the Universal Installer,

and then installed again – all good.

 

Note that these changes broke the WLS12C installer.

To fix this I did:

yum erase xterm

yum erase libX*

yum install xterm

yum install libX*

This cleared out the mixed (i386/x86_64) xterm and X11 packages the old e1agent had needed, which were breaking the WLS installer.

>>> Ignoring failure(s) of required prerequisite checks.  Continuing...
Preparing to launch the Oracle Universal Installer from /tmp/OraInstall2016-03-08_06-53-12PM
Log: /tmp/OraInstall2016-03-08_06-53-12PM/install2016-03-08_06-53-12PM.log
X-Server access is denied on host
[Fatal Error] DISPLAY variable set incorrectly: 10.10.20.114:0.0
[Resolution] Verify that your DISPLAY environment variable is set correctly,
and that there is an X11 server on the system. If you are
running the Oracle Installer as a different user or on a different host,
you may need to use the xhost command to ensure that host/user
has permission to write to your display.

Monday, 7 March 2016

EXP-00003: no storage definition found for segment(0, 0)

I experienced this when exporting data from an Oracle database.

I need to use EXP, because Data Pump writes to directories on the database server and I do not have access to the database server.

Once I own the server, I use expdp / impdp - but exp is my friend before then.

EXP-00003: no storage definition found for segment(0, 0)
. . exporting table                       FF34S001
EXP-00003: no storage definition found for segment(0, 0)
. . exporting table                       FF34S002
EXP-00003: no storage definition found for segment(0, 0)
. . exporting table                       FF34S003
EXP-00003: no storage definition found for segment(0, 0)
. . exporting table                       FF34S01W
EXP-00003: no storage definition found for segment(0, 0)
. . exporting table                        T76B001
EXP-00003: no storage definition found for segment(0, 0)
. . exporting table                        T76B002
EXP-00003: no storage definition found for segment(0, 0)

When running the commands, you can see the above errors.

This generally happens for empty tables, so it is not the worst thing in the world, but I want the table definitions.  This is an Oracle bug that seems to affect tables that have never had any rows inserted.
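For what it's worth, the never-inserted-a-row behaviour lines up with 11.2's deferred segment creation (a table gets no segment until its first row, which is what exp trips over).  If you control the source database, a hedged workaround sketch – the table name below is just an example:

```sql
-- stop new tables deferring their segment (does not fix existing tables)
alter system set deferred_segment_creation=false;

-- force a segment for an existing empty table so exp can see it
alter table PRODDTA.F0101 allocate extent;
```

In my case I didn't have server access, so the client patch below was the real fix.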

My client was on 11.2.0.1
H:\data\backups>sqlplus

SQL*Plus: Release 11.2.0.1.0 Production on Mon Mar 7 10:51:53 2016

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter user-name:

I note that the client I'm using is 11.2.0.1 and the database server is 11.2.0.4 on Linux - there is a known bug with this combination of client and server.   Don't bother trying COMPRESS=Y in your command; it is not going to work.

H:\data\backups>exp PRODDTA/PASSWORD@DB01 ROWS=N COMPRESS=Y FILE=PRODDTA_NODATA_COMP.exp

As you can see, I tried with the above.

This sent me on a search for the 11.2.0.4 patch set.

13390677 - 11.2.0.4.0 PATCH SET FOR ORACLE DATABASE SERVER 
Go to MOS for this, use the Patches tab and you'll see a ton of downloads.

Eventually I found the correct client download:

p13390677_112040_WINNT_3of6.zip

Note, that this is 3of6

After downloading and applying this patch, all started working well!  Well, it was not quite that easy.  The exp command was only available in the e1local directory, which was not a client install but a full database install.  The client on the machine was a 32-bit client that did not have the utilities (which exp is part of) installed - so this client had no exp and could not be upgraded.

So, I had to install a new client (using the download above), selecting only the utilities.  I then needed to create network\admin\tnsnames.ora in the new installation dir, and exp worked...  No more errors.
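For reference, the tnsnames.ora entry only needs the basics; a sketch with placeholder host and service values (yours will differ):

```
DB01 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = DB01))
  )
```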




Friday, 4 March 2016

I need to do it, I need to document the undocumented setting – faster JDE on Oracle

I’ve been sworn to secrecy regarding this setting, but it’s gotta come out.  Are you having UBEs with unpredictable performance – fast one minute and slow the next?  Fast for one user and slow for the next?  You need to get into a bit of this action:

alter system set "_optim_peek_user_binds"=FALSE scope=both;

This is for Oracle, as you might be able to tell.

I’ve used this at a number of clients with complete success.  I’ve also worked on an OMCS-managed site, where this is one of their recommendations for JDE – note that you might not find a MOS reference.

There are a number of references on the web.  With bind peeking enabled, Oracle builds the execution plan from the first set of bind values it sees, so the plan you get can depend on whoever ran the statement first; turning it off makes the QEP (query execution plan) more predictable for JDE.
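If you stage parameters in a pfile first, the static form is just the bare line – hidden parameters go in like any other (a sketch, SID placeholder in the filename):

```
# init<SID>.ora fragment - same hidden parameter in static form
_optim_peek_user_binds=FALSE
```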

Let me know if it kicked any goals for you!  How?  Search my blog for F986114 and there are some good statistical queries that you could run to answer this question.

Wednesday, 2 March 2016

cannot start e1Agent / runAgent / startAgent server manager agent for JD Edwards

 

I try to start the e1Agent and I get a message: “ERROR: List of process IDs must follow -p”

This relates to /u01/jde_home/SCFHA/bin/runAgent

you’ll see there is a command “ps -p $PIDFILEINFO”…

This is the command that is breaking, as PIDFILEINFO is set to the contents of the pid file – WOW!!

My pid file was empty, as I had some permissions problems from trying to start the agent as oracle, not jde900 – Doh!

A couple of cheeky chown -R / chgrp -R commands and deletion of the empty /u01/jde_home/SCFHA/agent.pid, and we are cooking with gas.
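The failure mode is easy to guard against; a sketch of the check such a script could do instead of handing ps an empty argument (the pid file path is the one from this post):

```shell
# print the agent's status; refuse to call ps -p with an empty pid
agent_status() {
  pidfile="$1"
  if [ ! -s "$pidfile" ]; then
    echo "pid file missing or empty: $pidfile"
    return 1
  fi
  pid=$(cat "$pidfile")
  if ps -p "$pid" > /dev/null 2>&1; then
    echo "agent running, pid $pid"
  else
    echo "stale pid file, pid $pid not running"
    return 1
  fi
}

# e.g. agent_status /u01/jde_home/SCFHA/agent.pid
```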

Tuesday, 1 March 2016

If you are considering a Surface Book, DON’T DO IT!

I recently bought a Surface Book.  I got the grizzly bear: 16GB of RAM and an i7 processor – cost a fortune.  I wanted a real road warrior.  I do lots with VMs, programming etc.  It needs to perform.

I’m back using my MacBook Pro most of the time; this Surface Book is terrible!

image

Look at the CPU usage above, just using Word, Outlook and Visio…  100%, I cannot do anything…

I cannot tell if it’s Windows 10 or my use of OneDrive or what, but this machine runs like a complete donkey.  Nice that there is one throat to choke, as I have a hardware device, operating system and office suite all from a single vendor – Microsoft.  I’m constantly restarting programs because it’s unusable.  We are back to the early 2000s, restarting Microsoft Word every couple of hours because your document has more than 25 pages.  It’s so frustrating.

image

It’s very standard to have “System and compressed memory” up at 34% the entire day…  which keeps the fan running for the entire day… 

This is the second Surface Book I have had.  I had to take the first one back after 2+ hours on the phone and then repeating the same conversation in the Microsoft shop in Sydney for 2 hours.  The USB devices would constantly connect and disconnect when the Microsoft dock was connected.  It was fine if the Surface Book was plugged into the screen directly (screen separated from keyboard).

I get numerous system crashes, generally every couple of days.  This has been a very disappointing purchase and I would not get another one.

Lesson here, don’t go bleeding edge on these things, let them settle in for a while.

</rant>