This post is part of a three-part series on how we migrate content to AWS for Media and Entertainment and Broadcast customers. In this post we look at the final stage of the process, when we integrate AI and a Media Asset Management (MAM) platform.

Part 3 of 3 – AI and MAM integration

Now that the customer’s data is in the cloud, we can start to use AI to further catalogue it. The AI tools can work through the footage and annotate it with relevant metadata, making it easier for the customer to find a particular shot or scene. Even without a specific clip in mind, they can search the archive for all footage matching a profile, for instance, “girl dancing in the rain.” This is made possible by the facial recognition and object detection of Amazon Rekognition, a powerful engine that can analyze stored video libraries at a scale and speed no manual cataloguing effort could match.

We also use Amazon Transcribe to automatically convert speech to text, meaning that the customer can then retrieve videos simply by searching for spoken lines and key phrases. This goes hand in hand with Amazon Textract, a machine learning service that extracts printed and handwritten text from scanned documents and images. The result is a rich bank of searchable metadata that makes the archive much easier to navigate, helps the customer track down footage, and positions them as a media company to commercialize that content or reuse it later on.

The catalogued collection can then be stored in a cloud-based MAM platform for the efficient editing and distribution of the digital media to its target audience. The capabilities customers gain by choosing to move to AWS take their assets to the next level. Tape Ark’s job is to make that transition possible.