Tuesday 23 August 2016

Are your media objects getting you down?

Media objects can get large and painful; I’ve seen many sites with 100+ GB of MOs, some of which will never be used again.

At Myriad we develop lots of mobile applications, which of course let users attach as many pictures as they want to JDE transactions – and this does cause a bit of bloat.  Our devices natively want to write 12 MP files to the MOs.  This adds up, so we have developed a solution that ensures you don’t lose quality but also don’t use too much expensive storage.

Firstly, we have an automatic job that compresses all of the photos that have been attached as media objects.  This is a fairly simple job with access to the MO queue.  It writes a new entry to the F00165 table pointing to a URL that references an S3 bucket – which holds the original high-quality image.  The bucket policy includes referrer security, so the objects can only be fetched from certain referrers.
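
To make that concrete, here’s a minimal sketch of what such a job might look like, using Pillow for the resize and boto3 for the S3 upload.  The queue path, bucket name, F00165 column list and type code are illustrative assumptions, not our production code.

```python
# Illustrative sketch only -- the queue path, bucket name, F00165 column
# list and type code below are assumptions, not production code.
import os
import boto3
from PIL import Image   # Pillow

QUEUE_DIR = r"\\depsvr\E910\MEDIAOBJ"   # hypothetical MO queue share
BUCKET = "my-jde-mo-archive"            # hypothetical S3 bucket

s3 = boto3.client("s3")

def archive_and_compress(path: str) -> str:
    """Upload the original to S3, then shrink the local copy to 1024x768."""
    key = os.path.basename(path)
    s3.upload_file(path, BUCKET, key)   # original full-resolution image
    img = Image.open(path)
    img.thumbnail((1024, 768))          # resizes in place, keeps aspect ratio
    img.save(path, quality=85)
    return f"https://{BUCKET}.s3.amazonaws.com/{key}"

def add_url_attachment(cursor, obnm, txky, seqn, url):
    """Record a URL-type media object row pointing at the S3 original.
    Column names and the type code vary by release -- placeholders only."""
    cursor.execute(
        "INSERT INTO PRODDTA.F00165 "
        "(GDOBNM, GDTXKY, GDMOSEQN, GDGTMOTYPE, GDQUNAM) "
        "VALUES (?, ?, ?, ?, ?)",
        (obnm, txky, seqn, 5, url),     # 5 = URL type is an assumption
    )
```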

So, each image is automatically resized to 1024x768 – nice!  This looks much better in JD Edwards, in reports, or in anything else that attempts to display these large attachments.  Our media object location (the deployment server out of the box) is no longer bogged down with massive attachments – they all live in a very inexpensive S3 bucket.
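
The referrer lockdown mentioned above boils down to a bucket policy with an aws:Referer condition.  A rough sketch – the bucket name and referrer URL are made up, and note that Referer headers can be spoofed, so treat this as a deterrent rather than hard security:

```python
import json
import boto3

BUCKET = "my-jde-mo-archive"  # hypothetical bucket name

# Allow GETs only when the request carries an approved Referer header.
# Referer headers can be spoofed, so this is a deterrent, not hard security.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetFromKnownReferrers",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {
            "StringLike": {"aws:Referer": ["https://jde.example.com/*"]}
        },
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```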

No quality is lost, as you have a link to the original image.

So, this gives your MOs eleven nines (99.999999999%) of storage durability…  You can restart your deployment server at any time; the originals are S3 objects stored in your (or OUR!) secure AWS account and region.

Q: How durable is Amazon S3?

Amazon S3 Standard and Standard-IA are designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.

As with any environment, best practice is to have a backup and to put safeguards in place against malicious or accidental user errors. For S3 data, that best practice includes secure access permissions, Cross-Region Replication, versioning, and a functioning, regularly tested backup.
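
If you’re already on AWS, the versioning and Cross-Region Replication pieces are a couple of API calls.  A minimal sketch with boto3 – the bucket names and IAM role are placeholders, and both buckets need versioning enabled before replication will work:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-jde-mo-archive"                             # hypothetical source bucket
REPLICA_ARN = "arn:aws:s3:::my-jde-mo-archive-replica"   # hypothetical target bucket
ROLE_ARN = "arn:aws:iam::123456789012:role/s3-crr-role"  # hypothetical IAM role

# Versioning keeps prior copies of every object, so accidental deletes or
# overwrites can be rolled back. It is also a prerequisite for CRR.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Cross-Region Replication: copy every new object to a bucket in another
# region (the target bucket must have versioning enabled too).
s3.put_bucket_replication(
    Bucket=BUCKET,
    ReplicationConfiguration={
        "Role": ROLE_ARN,
        "Rules": [{
            "ID": "replicate-all",
            "Prefix": "",               # replicate everything in the bucket
            "Status": "Enabled",
            "Destination": {"Bucket": REPLICA_ARN},
        }],
    },
)
```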

If you would like a neat solution like this to take away your MO woes, get in contact.

You don’t need to be on AWS for all of this to work.  Note that we are thinking of an extension that uses https://aws.amazon.com/workdocs/ to index this content and let it drill back into JDE – watch this space.
