Tape Ark liberated 40-year-old footage from a collection of deteriorating Betacam media tapes.
Each aging videotape contained approximately 100 historical news clips from a major Australian television network. The tapes were accompanied by only a minimal metadata catalog, making it difficult to locate relevant clips when the network needed historical footage.
Because the footage was undigitized and resided on legacy tape with little accompanying metadata, the broadcaster could not easily access, search, or edit the historical content when the need arose. Each videotape had to be physically watched to ascertain its content – a manual, time-consuming, and expensive process.
Using Tape Ark’s scalable tape-to-cloud liberation infrastructure and in-depth technical expertise in legacy data migration, the footage was carefully extracted from the Betacam tapes to a local disk cache and converted to a broadcast-quality HD format.
The digitized video was then uploaded to Amazon Web Services (AWS) S3 storage.
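A bulk upload like this is typically scripted against the S3 API. The sketch below is a minimal, hypothetical illustration using boto3; the bucket name, key layout, and storage class are all assumptions, not details from the project.

```python
# Hypothetical sketch: upload digitized Betacam footage from the local
# disk cache to S3. Bucket name and key scheme are illustrative only.

BUCKET = "broadcaster-archive"  # assumption: example bucket name


def clip_key(tape_id: str, clip_no: int) -> str:
    """Build a predictable S3 key so clips stay grouped by source tape."""
    return f"betacam/{tape_id}/clip-{clip_no:04d}.mxf"


def upload_clip(path: str, tape_id: str, clip_no: int) -> None:
    """Upload one digitized clip to S3."""
    import boto3  # deferred import: clip_key() has no AWS dependency

    s3 = boto3.client("s3")
    # Infrequent-access storage suits archival footage read only occasionally.
    s3.upload_file(
        path,
        BUCKET,
        clip_key(tape_id, clip_no),
        ExtraArgs={"StorageClass": "STANDARD_IA"},
    )
```

A consistent key scheme like this is what later makes clip-level search results easy to map back to their source tapes.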
With the data in the cloud, Tape Ark applied Amazon Rekognition to the footage, automatically splitting the video into individual broadcast-quality HD clips. In an additional step, smaller low-resolution proxy clips were created for internal use by the television network, providing rapid access for local viewing and editing.
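Rekognition exposes this kind of shot splitting through its video segment-detection API. The sketch below shows the general pattern with boto3; the polling loop, frame rate, and timecode helper are illustrative assumptions (a production pipeline would use an SNS notification channel rather than polling).

```python
# Minimal sketch of shot detection with Amazon Rekognition (boto3).
import time


def ms_to_timecode(ms: int, fps: int = 25) -> str:
    """Convert a Rekognition timestamp (milliseconds) to HH:MM:SS:FF.
    fps=25 is an assumption (PAL, common in Australian broadcast)."""
    frames = int(ms * fps / 1000)
    ff = frames % fps
    s = frames // fps
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{ff:02d}"


def detect_shots(bucket: str, name: str) -> list:
    """Start a SHOT segment-detection job and poll until it finishes."""
    import boto3  # deferred import: ms_to_timecode() needs no AWS access

    rek = boto3.client("rekognition")
    job = rek.start_segment_detection(
        Video={"S3Object": {"Bucket": bucket, "Name": name}},
        SegmentTypes=["SHOT"],
    )
    while True:
        result = rek.get_segment_detection(JobId=job["JobId"])
        if result["JobStatus"] != "IN_PROGRESS":
            return result.get("Segments", [])
        time.sleep(5)  # simple polling; use SNS notifications in production
```

Each returned segment carries start and end timestamps in milliseconds, which the timecode helper converts into the broadcast-style timecodes an editor would expect.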
With the newly created clips in AWS S3 storage, Tape Ark then used Amazon Rekognition to analyze the video and image content and perform:
- Object, activity & scene detection, identifying thousands of items such as vehicles, pets, species of plants, fashion, cultural icons, furniture, and scenes including known locations.
- Facial recognition to identify celebrities and create an index of detected faces for search and discoverability.
- Facial analysis – locating faces within images and analyzing attributes such as whether the face is smiling, wearing glasses, or has a beard, and whether the eyes are open or closed. The output also included the per-frame coordinates of each person’s left and right eyes, the corners of the mouth, and other detailed facial landmarks.
- Sentiment analysis – detecting human emotions such as happiness, sadness, surprise, or anger.
- Text in image – locating and extracting text within images, including text in natural scenes such as road signs, car registration plates, and t-shirts, as well as captions and news-ticker text within frames.
- Speech to text – converting all spoken words to text and time-coding the text to the frames, so that a search for a particular person saying a word or phrase positions the user immediately at the exact frame of the spoken words.
- The metadata generated by these AI services formed a rich, searchable index and catalog that could be imported into the network’s Media Asset Management (MAM) system.
- The use of this technology delivered a depth of knowledge about the video clips that could never have been achieved before.
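Several of the capabilities above map to Rekognition’s synchronous image APIs when run frame by frame. The sketch below illustrates that pattern with boto3; the confidence threshold and the idea of filtering labels before indexing are my assumptions, not details from the project.

```python
# Sketch of per-frame analysis with Amazon Rekognition image APIs (boto3).
# Responses follow Rekognition's documented JSON shapes.


def confident_names(labels: list, threshold: float = 90.0) -> list:
    """Keep only label names at or above the confidence threshold,
    sorted for stable indexing. `labels` follows Rekognition's
    detect_labels response format: [{"Name": ..., "Confidence": ...}]."""
    return sorted(l["Name"] for l in labels if l["Confidence"] >= threshold)


def analyze_frame(bucket: str, key: str) -> dict:
    """Run label, celebrity, face, and text detection on one frame."""
    import boto3  # deferred import: confident_names() needs no AWS access

    rek = boto3.client("rekognition")
    image = {"S3Object": {"Bucket": bucket, "Name": key}}
    return {
        "labels": rek.detect_labels(Image=image, MaxLabels=50)["Labels"],
        "celebrities": rek.recognize_celebrities(Image=image)["CelebrityFaces"],
        "faces": rek.detect_faces(Image=image, Attributes=["ALL"])["FaceDetails"],
        "text": rek.detect_text(Image=image)["TextDetections"],
    }
```

Filtering on confidence before writing to the catalog keeps low-certainty guesses out of the searchable index.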
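The speech-to-text step described above is the kind of work handled on AWS by Amazon Transcribe, whose output time-codes every word. The sketch below is an illustrative assumption about how such a pipeline could look: the job name, media format, and language code are placeholders, and the search helper simply reads Transcribe’s documented transcript JSON.

```python
# Sketch: transcribing a clip with Amazon Transcribe (boto3) and searching
# the time-coded transcript. Parameters are illustrative assumptions.


def phrase_start_times(items: list, word: str) -> list:
    """Return start times (seconds) at which `word` is spoken.
    `items` follows Transcribe's results["items"] JSON layout."""
    return [
        float(i["start_time"])
        for i in items
        if i.get("type") == "pronunciation"
        and i["alternatives"][0]["content"].lower() == word.lower()
    ]


def transcribe_clip(bucket: str, key: str, job_name: str) -> None:
    """Start an asynchronous transcription job for one clip in S3."""
    import boto3  # deferred import: phrase_start_times() is pure

    transcribe = boto3.client("transcribe")
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="mp4",       # assumption: proxy clips stored as MP4
        LanguageCode="en-AU",    # assumption: Australian English broadcasts
    )
```

Because every word carries a start time, a phrase hit can be translated directly into a playback position, which is what lets a search land the user on the exact frame.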
In today’s digital world, where social media and being first to publish are critical, this search capability can make a massive difference in monetizing the broadcaster’s video collection.
The television broadcaster can now contextualize searches using the new metadata – identifying specific footage of a celebrity or politician at a particular location or point in time, and combining further search terms and criteria to home in on the exact clip.
Very soon after Tape Ark completed this work, a major radio network approached the TV broadcaster to buy the sound bites from the news clips for use on its radio shows. This immediately turned the TV network’s accumulated historical collection of video and audio into a revenue-generating income stream.
Perhaps one of the most material benefits for this customer was cost: each full Betacam videotape equates to approximately 60 GB of data in the cloud, and storing its clips cost a fraction of a cent per clip per month (approximately $0.002). This is significantly cheaper than offsite storage, and it removed the need for the broadcaster to own and maintain legacy video playback devices to access the content. The time to access clips dropped from hours to a few seconds, resulting in a significant overall improvement in newsroom production.
An equally important takeaway from this project was the extent to which our clients learned how vulnerable their historical tape collections are, and the benefits of applying modern technology to them. As a first step, the content needs to be liberated: the number of working tape drives able to read the tapes is shrinking, making this process ever more difficult. Tape media is also deteriorating, and in many cases the tapes are the sole source of irreplaceable content. If a tape deteriorates beyond recovery, the historical video footage is lost forever – before it has a chance to be enriched with AI and emerging technology and put to greater use.