Another cool exercise from the AWS training is creating an auto-scale group. Although the exercise itself was pretty trivial in terms of workload, it's amazing to be able to stand up 5 m4.10xlarges to run a "stress" test in about 15 minutes.
I was able to throw 200 CPUs at this and got the following graph:
This is 200 CPUs and 5 x 160GB = 800GB of RAM for a demo… It's totally amazing to be able to get 5 servers running my CPU-intensive workload with less than 15 minutes of configuration.
stress --cpu 40 --io 8 --vm 6 --hdd 8 -t 3600   # 40 CPU workers, 8 I/O workers, 6 memory workers, 8 disk workers, for 3600 seconds
My autoscale group worked like a charm: I had it spin up another instance whenever CPU went over 60% – which, with the stress command above, is basically all of the time.
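The post doesn't show the exact scaling policy, but wiring up "add an instance when CPU goes over 60%" can be sketched with the AWS CLI roughly as below. All names (`my-asg`, `scale-out`, the alarm name) are placeholders, not the actual configuration from this demo:

```shell
# Simple scaling policy: add one instance to the group when triggered.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name scale-out \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity

# CloudWatch alarm that fires the policy when the group's average
# CPUUtilization stays above 60% for one 5-minute period.
aws cloudwatch put-metric-alarm \
  --alarm-name my-asg-cpu-high \
  --metric-name CPUUtilization \
  --namespace AWS/EC2 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 60 \
  --comparison-operator GreaterThanThreshold \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --alarm-actions <policy-ARN-returned-by-put-scaling-policy>
```

The `put-scaling-policy` call returns the policy ARN, which you then pass as the alarm action so the alarm actually triggers the scale-out.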
So my ELB takes on the new compute each time the CPU threshold is reached:
My launch configuration is here:
As you see, I’m not being nice at all.
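The actual launch configuration isn't shared here, but a generic one for this kind of demo looks roughly like the following – every name and ID below is a placeholder, not the real setup:

```shell
# Launch configuration: which AMI and instance type the autoscale
# group should start, plus a user-data script to kick off the workload.
aws autoscaling create-launch-configuration \
  --launch-configuration-name stress-demo-lc \
  --image-id ami-xxxxxxxx \
  --instance-type m4.10xlarge \
  --key-name my-key \
  --security-groups sg-xxxxxxxx \
  --user-data file://run-stress.sh
```

Here `run-stress.sh` would be a small boot script that installs and runs the `stress` command shown earlier.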
Then my auto scaling group is doing the rest:
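Creating the group itself can be sketched like this (again, `my-asg`, the subnet, and the ELB name are placeholders; the max of 5 matches the account limit mentioned below):

```shell
# Auto scaling group tied to the launch configuration and the ELB,
# capped at 5 instances to stay within the account limit.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-configuration-name stress-demo-lc \
  --min-size 1 \
  --max-size 5 \
  --desired-capacity 1 \
  --vpc-zone-identifier subnet-xxxxxxxx \
  --load-balancer-names my-elb
```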
Note that my history is a little "spotty", as the limits of my account mean I can only run 5 x m4.10xlarge machines.
I'm going to get this working with web server and ent server pairs. I'm also going to look at using an internal LB between the web server and the ent server. With the appropriate affinity, I think I can get all of JDE to scale up and down. Scaling up for batch in the PM is going to be easy too. I look forward to seeing whether the M4s are much quicker than the M3s for the ERP payload.
This flexibility is unparalleled in the physical world – of course, a real ERP workload is harder to characterise than this stress test, but it's incredible.
4 comments:
Hi,
can you please provide the full Launchconfig for autoscaling ?
Thanks
Sorry, that's commercial IP at the moment - when it's in the public domain I'll donate it!
Hi,
thanks for the quick reply. One more question were you able to make autoscaling work on the enterprise servers ?
Thanks