Tuesday, 21 May 2013

Sizing JD Edwards for 1000 users: general sizing maxims

It's an interesting conundrum: you need to choose the platform for your JD Edwards solution.  You might be a net-new customer, or you might be upgrading and putting platform choice on the table as an option.  What are you going to choose?  There are so many options out there.

First, start with your database and work outwards.  The database WILL be your bottleneck for scalability and performance, and your disk IO will be the bottleneck for the database server - that is, IF your solution is sized correctly.  JDE does not use many of the advanced database functions; let's be honest, it does not use aggregate functions that often!  So JDE will push the DB disks hard.  Get your system ready for it!
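To put some rough numbers on that, here is a back-of-envelope sketch of the disk IOPS a JDE database server might need to sustain.  Every ratio in it (transactions per user per minute, reads and writes per transaction, peak factor) is an illustrative assumption for a sizing conversation, not a JDE or Oracle figure - substitute measurements from your own workload.

```python
# Back-of-envelope IOPS estimate for a JDE database server.
# All ratios below are illustrative assumptions, NOT official
# JD Edwards or Oracle guidance - replace them with measured values.

def estimate_db_iops(concurrent_users, tx_per_user_per_min=6,
                     reads_per_tx=40, writes_per_tx=10, peak_factor=2.0):
    """Estimate the peak IOPS the database disks must sustain."""
    tx_per_sec = concurrent_users * tx_per_user_per_min / 60.0
    steady_iops = tx_per_sec * (reads_per_tx + writes_per_tx)
    return steady_iops * peak_factor

# 1000 concurrent users with the assumed ratios:
print(round(estimate_db_iops(1000)))  # prints 10000
```

Even with conservative assumptions, 1000 concurrent users lands in territory where SSD or Fusion IO class storage earns its keep.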

If you have more than 1000 concurrent users, then your choices are smaller.  I personally would not put that load on a SQL Server database just yet; I think you have much more scalability with the other options.  What are they?  Oracle or AS/400, of course.  I'm not going into DB2/UDB, as my experiences (of which there are many) might have you looking at other options.

Now, I'm not immediately and directly discounting SQL Server as an option for more than 1000 concurrent users; I just think that it's going to be harder to achieve.  NOT impossible, probably not even "extremely difficult", but harder than with the other options.  I'm also talking about 1000 concurrent users, not named users.

I think that with the right amount of SSD, Fusion IO or fast disk (get data close to the CPU!!), you can make a SQL Server box do almost anything.  Native 64-bit support has taken care of nearly all the scalability issues that one might previously have had with SQL Server.

Let's talk more about the two other main options: Oracle or AS/400.  The adage used to be "no one gets fired for buying IBM" - that's got to be wrong, and getting wronger (like my paradoxical statement) every year.  I'd like to start firing people who choose IBM because they have two staff (who could leave tomorrow) who know their way around an AS/400 - this is not a reason to keep the machine.  We need to make all business decisions based on ROI, not from the heart.  These days we need to future-proof our decisions too, as technology advancements are moving at the speed of light!  Yes, the 400 has sat in a corner and just worked for the last two years - in general, so would any other DB!

I like the way Oracle has essentially decoupled itself from hardware.  You point an application at a listener and away you go - it does NOT matter what CPU architecture is behind the scenes; as long as the data is there, the query will process the same way.  Unfortunately for AS/400 (and SQL Server), there is a limitation on the CPU architecture that is supported behind the scenes.
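That decoupling is visible in how little the application needs to know: just a listener endpoint.  A minimal sketch, using Oracle's EZConnect string format (host:port/service_name) - the host and service names below are hypothetical examples, not values from any real JDE install:

```python
# The application sees only a listener endpoint; whether the server
# behind it is x86, SPARC or POWER is invisible to the connection.
# Host and service names here are hypothetical.

def ez_connect(host, port, service_name):
    """Build an Oracle EZConnect string: host:port/service_name."""
    return f"{host}:{port}/{service_name}"

print(ez_connect("jdedb01.example.com", 1521, "JDEPROD"))
# prints jdedb01.example.com:1521/JDEPROD
```

Swap the hardware underneath, keep the listener endpoint the same, and the application carries on none the wiser.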

Oracle has also hitched itself to hardware in a way that no other vendor can: engineered systems.  It has developed hardware that is designed for one purpose in life - run database queries, run them fast and run them all day long.  These are amazing machines (best viewed as appliances), created and designed with a single purpose - no wonder they do such a good job.  I've worked with a number of engineered systems and they do fly.

A common misconception I hear from many clients considering Oracle (which is free in its Standard Edition form with net-new JD Edwards licensing [SaaS or perpetual]) is that Oracle is too complex - they'll need to employ a DBA, etc.  I'll get up on my soapbox and yell from the rooftops: this is not the case anymore.  The Enterprise Manager suite of programs that comes with 11gR2 is exceptional.  What I like to do is use EM12c to augment the management and monitoring capabilities, which is free for basic functionality.  The combination of these two management methods will see your database running and monitored without human interaction.  Some of my clients have uptime of more than 260 days without the database being touched...  On Windows...  This is not a record (and not the point at which things crashed - it's just an example); this is simply how things are at the moment.  An Oracle database, if sized correctly, will just run.  If you have issues, start pointing the finger at something else.

I do need to go back to the 400.  It's a great machine.  It does a lot of heavy lifting for a lot of clients, and has been doing so for a very long time.  I love the 400; I've had interactions with it for the last 15 years.  I do, however, think that there are better options out there when considering things like:

  • interoperability
  • IaaS options, cloud options
  • HA
  • SAN storage usage
  • supportability
  • hardware agnosticism
  • resource availability

At this stage I seem to be recommending Oracle for the database for systems that support over 1000 concurrent users.  You have a choice of machines that will support this: IBM Power, Sun T5, Sun M-series, engineered systems, x86 (Windows and Linux) - the list goes on.  Look at what Oracle has done with x86 in Exadata and the ODA - exceptional performance from plain old x86 CPUs.  What I'm trying to say here is that you don't need massive RISC-based CPUs to get massive performance anymore.  Let's face it, processing power will be a commodity item, if it is not already.  Reliability is being built into every layer (RAC, virtualisation, etc.), so you're not going to be concerned about CPU architecture for your database, because Oracle can run anywhere (is that like Java, I hear you say?).

Get your data as fast as possible and get it as close as possible to the CPU and you are not going to have performance issues.

Any architecture will support this type of interactive load, given the right number of cores, sockets and RAM.  RAC can see you expanding the solution across any number of chassis.
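The "any number of chassis" part is again just connection plumbing from the application's point of view.  A sketch of a multi-address TNS connect descriptor for a two-node RAC cluster - the node names and service name are hypothetical, and a real cluster would more likely sit behind a SCAN address, but the shape is the same:

```python
# Sketch of a TNS connect descriptor for a two-node RAC cluster.
# Node and service names are hypothetical.  Scaling out to another
# chassis just means another ADDRESS entry - the application keeps
# pointing at the same descriptor.

def rac_descriptor(hosts, port, service_name):
    """Build a load-balancing, failover TNS descriptor for RAC nodes."""
    addresses = "".join(
        f"(ADDRESS=(PROTOCOL=TCP)(HOST={h})(PORT={port}))" for h in hosts
    )
    return ("(DESCRIPTION=(LOAD_BALANCE=ON)(FAILOVER=ON)"
            f"(ADDRESS_LIST={addresses})"
            f"(CONNECT_DATA=(SERVICE_NAME={service_name})))")

print(rac_descriptor(["racnode1", "racnode2"], 1521, "JDEPROD"))
```

One descriptor, two (or ten) chassis behind it - which is exactly why the database layer, not the CPU architecture, is where the sizing effort belongs.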

Once you've decided on the database (you need to decide between Standard Edition and Enterprise Edition - how deep are your pockets?), you can then put some logic processing and web processing around your DB decision - easy!  There are no real licence ramifications for LOADS of CPU on the enterprise server / logic server, so I say go for it.  Web server licensing should be handled under your technology foundation, so you can also "soup up" this layer.  Remember that hardware is cheaper than consulting when you have performance issues.  Get this right up front!

That is my very basic wrap of some key considerations in large JD Edwards sizing and database choices.  The main take-homes are:

  • Any hardware will do the job (within MTRs); some will be harder to make fast than others
  • Oracle DB is not the scary, time-consuming beast that it used to be
  • Get your database server sized right, and get the fastest disk IO possible.  Consider SSD / Fusion IO
  • Build your solution around the database server
  • Build your solution to be future-proof; consider questions like IaaS readiness.

