Monday 18 July 2016

2 weeks of AWS production activity

You don’t just go live in AWS like you would “on prem”; you adapt to your user base and load.

We are in the adaptation phase for one of our clients who has recently gone live with JD Edwards in AWS.  Exciting times for all involved.

Adaptation involves balancing performance and cost effectiveness.  What do I mean by that?  Well: what is the lowest-spec environment that I can run while providing the same end-user experience?  This will save the client money.  I can then convert this spec to a “reserved instance” purchase, which will save even more.
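To make the reserved-instance point concrete, here is a minimal sketch of the arithmetic.  The hourly rates are hypothetical placeholders, not real AWS prices; look up the actual on-demand and reserved rates for your instance type and region on the EC2 pricing pages.

```python
# Illustrative cost comparison: on-demand vs a 1-year reserved instance.
# Rates below are made-up placeholders, not real AWS prices.

HOURS_PER_YEAR = 24 * 365

def annual_cost(hourly_rate: float) -> float:
    """Annual cost of running one instance 24x7 at a given hourly rate."""
    return hourly_rate * HOURS_PER_YEAR

on_demand = annual_cost(0.40)   # hypothetical pay-as-you-go rate (USD/hr)
reserved = annual_cost(0.25)    # hypothetical reserved-instance effective rate

print(f"Annual saving per instance: ${on_demand - reserved:.2f}")
```

Once the environment is right-sized, the same calculation tells you how much further the reservation takes you.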

Look at the CPU utilisation for the environment over the last 2 weeks.

[image: CPU utilisation for the enterprise and web tiers over the last 2 weeks]

Wow, some of the web servers jumped around a bit, but this was a problem with stuck threads in WLS.  We’ve sorted that out and can see that the average CPU utilisation across all enterprise and web tiers is very low.
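That observation drives the right-sizing decision.  A minimal sketch of the logic, assuming you have average and peak CPU figures per host (e.g. pulled from CloudWatch); the thresholds and the sample numbers are assumptions for illustration, not AWS guidance:

```python
# Flag hosts that look over-provisioned on CPU alone.
# Thresholds and sample figures are illustrative assumptions.

def downsize_candidate(avg_cpu: float, peak_cpu: float,
                       avg_threshold: float = 20.0,
                       peak_threshold: float = 60.0) -> bool:
    """True when sustained and peak CPU both sit well under capacity."""
    return avg_cpu < avg_threshold and peak_cpu < peak_threshold

# Two weeks of hypothetical observations: (average %, peak %).
tiers = {
    "web1": (8.5, 45.0),
    "web2": (9.1, 92.0),   # peak spike from the stuck WLS threads
    "ent1": (12.0, 38.0),
}
for host, (avg, peak) in tiers.items():
    verdict = "downsize candidate" if downsize_candidate(avg, peak) else "keep as-is"
    print(host, verdict)
```

Note that web2’s stuck-thread spike would rule it out until the fix is confirmed, which is exactly why you sort out anomalies before resizing.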

So now we know how things look from a CPU point of view.  Let’s look at the application performance so that, if we make changes, we have a measuring stick – or at least something to measure against.

[image: page-load performance across ERP pages, from Google Analytics]

From the above, evaluating data from over 90,000 ERP page loads, we start to form a pattern of performance.  We can of course drill down to the applications themselves:
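A measuring stick only works if you reduce those thousands of samples to a repeatable baseline.  A minimal sketch, using made-up page-load timings of the kind Google Analytics site-speed sampling reports:

```python
# Reduce sampled page-load times to a before/after baseline.
# The timings below are made-up sample data.

from statistics import mean

def percentile(samples, pct):
    """Nearest-rank percentile of a list of load times (ms)."""
    ordered = sorted(samples)
    idx = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[idx]

load_times_ms = [850, 920, 1100, 980, 2400, 760, 1010, 890, 1500, 940]

baseline = {
    "avg_ms": mean(load_times_ms),
    "p50_ms": percentile(load_times_ms, 50),
    "p95_ms": percentile(load_times_ms, 95),
}
print(baseline)
```

Capture the same percentiles after an instance-type change and you have an objective answer to “did the end-user experience degrade?”.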

[image: per-application drill-down of page performance]

This gives us a great idea of the interactive performance.  Batch is really easy too, as we have F986114 (the JD Edwards job control table) to carve up for that.

The combination of Google Analytics and F986114 allows us to change the underlying technology with confidence and ensure that we provide a consistent level of service to our customers whilst improving the price point for the hosted environment.
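Carving up F986114 mostly means turning its date and time columns into job durations.  JD Edwards stores dates in “Julian” CYYDDD form and times as HHMMSS integers; the helper below converts a start/end pair into an elapsed duration.  The sample row is hypothetical, and you should verify the relevant start/end column names against your own F986114 layout before relying on this.

```python
# Convert JDE Julian dates (CYYDDD) and HHMMSS times into durations,
# the building block for batch-performance analysis over F986114.
# The sample values are hypothetical.

from datetime import datetime, timedelta

def jde_to_datetime(jde_date: int, hms: int) -> datetime:
    """Convert a JDE Julian date (CYYDDD) plus an HHMMSS time to a datetime."""
    century = jde_date // 100000          # 0 = 19xx, 1 = 20xx
    yy = (jde_date // 1000) % 100
    ddd = jde_date % 1000
    base = datetime(1900 + century * 100 + yy, 1, 1) + timedelta(days=ddd - 1)
    h, m, s = hms // 10000, (hms // 100) % 100, hms % 100
    return base + timedelta(hours=h, minutes=m, seconds=s)

# Hypothetical UBE run on 18 July 2016 (day 200 of 2016 -> 116200).
start = jde_to_datetime(116200, 93015)   # 09:30:15
end = jde_to_datetime(116200, 94502)     # 09:45:02
print("elapsed:", end - start)
```

Aggregate those elapsed times per report and per queue, and batch gets the same before/after baseline that Google Analytics gives the interactive side.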

We are going to fine-tune this environment, as we have all of the information that we need.  We might change the instance type for the web servers to get a little more bang for buck, and likewise for the enterprise servers.  At this stage, the DB server (which is RDS) might also be made a little smaller, as we are not even close to stretching it.  Watch this space.
