Executive Summary

Tape Ark recently completed a project to liberate footage from a collection of broadcast Betacam video tapes held on deteriorating media in offsite storage for over 40 years.

The Challenge

Because the undigitised footage resided on legacy tape with limited accompanying metadata, the TV broadcaster was unable to easily or quickly view, search, or edit the historical footage when the content was required.

The Tape Ark Solution

Using Tape Ark’s scalable, automated mass tape liberation infrastructure and deep technical expertise in legacy data migration, footage from the video collection was carefully extracted from the Betacam tapes to a local disk cache and then converted to a broadcast-quality HD format.

The digitised video was then uploaded to a secure Amazon Web Services (AWS) S3 bucket in the public cloud. The historical footage was finally available for download and use by the TV broadcaster.

With the footage liberated and accessible, Tape Ark applied Amazon Rekognition to automatically split the video into individual clips in high-resolution, broadcast-quality HD. As an additional step, smaller low-resolution proxy clips were also created for internal use, giving the television network rapid access to the clips for viewing and editing in the local studio.
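Low-resolution proxies of this kind are commonly produced with a tool such as ffmpeg. The sketch below builds an illustrative ffmpeg command line for downscaling an HD master to a small proxy; the file names, resolution, and bitrate are assumptions for illustration, not Tape Ark’s actual pipeline settings.

```python
# Sketch: build an ffmpeg command that downscales a broadcast-quality
# HD master into a small proxy clip for quick studio viewing.
# File names, width, and bitrate are illustrative assumptions.

def proxy_command(master: str, proxy: str,
                  width: int = 640, bitrate: str = "800k") -> list[str]:
    """Return an ffmpeg argument list that creates a low-res proxy."""
    return [
        "ffmpeg", "-i", master,
        "-vf", f"scale={width}:-2",   # keep aspect ratio, force even height
        "-b:v", bitrate,              # low video bitrate for fast access
        "-c:a", "aac",
        proxy,
    ]

cmd = proxy_command("clip_0042_hd.mxf", "clip_0042_proxy.mp4")
print(" ".join(cmd))
```

The command list could then be handed to `subprocess.run` once per extracted clip.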

Tangible Results

The television broadcaster can now search their video metadata for footage of, say, a celebrity or politician appearing in a historical clip at a particular location or point in time, and combine this with any other search criteria that may be relevant.




Example of Metadata Enrichment through AI Application

One of the Betacam tapes owned by the TV network contained a clip entitled “Australian Prime Minister’s Christmas Message”.  In the database, the clip had no date, and it was not known which prime minister was delivering the message.  The only way for the broadcaster to find out more was to watch the clip (in fact, all of the clips) and build a more detailed database by hand.

Using AWS AI services, including Rekognition, to analyse the video footage, Tape Ark was able to produce frame-by-frame JSON outputs detailing the faces recognised, a total face count, whether each face was male or female, and the expression on each face (happy, sad, confused, etc.).  It also output a list of each object in the frame and, where possible, the object’s subtype (not just a tree, but the species of tree).
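To make the frame-by-frame output concrete, the sketch below reduces one frame’s face-detection JSON to the kind of record described above. The sample data is invented, but its shape follows Rekognition’s real `DetectFaces` response (`FaceDetails` with `Gender` and `Emotions` fields).

```python
import json

# Sketch: summarise one frame of Rekognition DetectFaces output into a
# face count, gender list, and each face's strongest emotion.
# The sample JSON below is made up for illustration.

sample = json.loads("""
{"FaceDetails": [
  {"Gender": {"Value": "Male", "Confidence": 99.1},
   "Emotions": [{"Type": "HAPPY", "Confidence": 97.3},
                {"Type": "CALM", "Confidence": 2.1}]},
  {"Gender": {"Value": "Female", "Confidence": 98.4},
   "Emotions": [{"Type": "HAPPY", "Confidence": 95.0}]}
]}
""")

def summarise_frame(detect_faces_response: dict) -> dict:
    """Reduce a DetectFaces-style response to a compact frame record."""
    faces = detect_faces_response["FaceDetails"]
    return {
        "face_count": len(faces),
        "genders": [f["Gender"]["Value"] for f in faces],
        "expressions": [
            # pick each face's highest-confidence emotion
            max(f["Emotions"], key=lambda e: e["Confidence"])["Type"]
            for f in faces
        ],
    }

summary = summarise_frame(sample)
print(summary)
```

Accumulating one such record per frame is what turns an unlabelled tape into a searchable database.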

After the work was completed, the TV network could search the same video containing the prime minister using far more advanced search tools and criteria, targeting video that was very specific to their needs.  For example:

“I want to find a video of Australian Prime Minister Tony Abbott, standing next to his wife Margie, mentioning the word Christmas, where he is wearing a blue suit, and his wife looks happy, standing next to an Abies alba species of Christmas tree.”

Example of object, activity and scene detection


Other Applications for this kind of technology include: 

  • Security footage direct ingest and analysis

  • CCTV footage from drilling rigs, police stations, airports, public venues etc. 

  • Underwater ROV footage of pipelines, drilling platform footings, vessel inspections etc.

  • Dashcam video

  • Historical archives – both private and public collections of significance

  • Cinematography collections