Author
eMAM
Date Published
Apr 1, 2024
Tags
Partnership
Video Search API

EMAM, Inc. and Twelve Labs Inc. will showcase the combined solution at the upcoming National Association of Broadcasters (NAB) Show in Las Vegas, NV, April 14-17, at EMAM booth SL5093 and Twelve Labs booth SU4116.

Facing ever-growing volumes of current and historical content, and with limited time and budget to tag and organize media, organizations need help making sense of and using it all. Twelve Labs provides multimodal AI models for video content, while the eMAM™ platform provides media asset management with integrated workflow process automation. The combined offering empowers organizations to find, create, and use video content easily, efficiently, and affordably.

Twelve Labs builds on its powerful video-language foundation models and tools, including Marengo 2.6 for search and Pegasus-1 for text generation. The system understands videos and individual frames in context, so content can be easily found and used to automatically generate summaries, chapters, titles, and transcripts/captions. Users of the combined system can rely on the traditional eMAM search features and filters, or a new natural language interaction panel, to find the best video segments or whole videos as needed.
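For developers, the same capabilities are exposed through the Twelve Labs API: Marengo powers search over indexed video, and Pegasus powers text generation such as summaries. The snippet below is a minimal sketch assuming the Twelve Labs Python SDK (the `twelvelabs` package); exact method and parameter names vary by SDK version, and the API key, index ID, and video ID are placeholders.

```python
# Minimal sketch of Marengo-backed search and Pegasus-backed generation
# using the Twelve Labs Python SDK (the `twelvelabs` package).
# Method and parameter names may differ between SDK versions;
# the API key, index ID, and video ID below are placeholders.
from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="YOUR_API_KEY")

# Natural-language search across an existing video index.
search_results = client.search.query(
    index_id="YOUR_INDEX_ID",
    query="reporter on location during a storm",
    options=["visual", "conversation"],
)
for clip in search_results.data:
    print(f"{clip.video_id}: {clip.start:.1f}s-{clip.end:.1f}s (score {clip.score})")

# Text generation for a single indexed video, e.g. a summary.
summary = client.generate.summarize(video_id="YOUR_VIDEO_ID", type="summary")
print(summary.summary)
```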

David Miller, EMAM President, shared, "Whether organizations need to make sense of the daily flood of new material, seek to use their poorly structured and tagged archive, or need to instantly find important material for breaking news, powerful AI tools will make every media library more useful and valuable."

Jae Lee, founder and CEO of Twelve Labs, shared, "Manually logging videos is time-consuming and hard to scale. Moreover, object-level tags miss the context needed to add real value to your video. With the power of Twelve Labs' models available in EMAM, organizations are better equipped than ever before to make the most of their rich content."

From the web interface, eMAM users can browse the search results, preview media, and organize them into categories and projects. Non-technical users can create markers, sub-clips, and timelines/sequences to share with editors and designers through the integrated Adobe Creative Cloud and DaVinci Resolve solutions for craft editing.

eMAM can manage the storage and retrieval of original-resolution, mezzanine, and proxy media across any number of cloud and local systems. After review and approval within eMAM or via linked emails, finalized projects can be shared as links for social media or screeners, or delivered as high-resolution media with packaging and delivery to broadcast, newsroom, or streaming/distribution platforms.

About EMAM

The eMAM product line (eMAM Vault, eMAM Publish, eMAM Workgroup, eMAM Enterprise, eMAM SaaS Cloud Service, and eMAM PaaS Cloud Platform) meets the media asset management and workflow management needs of media, government, faith-based, and corporate organizations in local, hybrid, and cloud environments worldwide. The eMAM™ platform has been in continuous development since 2006, with a rich tool set and scores of integrated technology partners. EMAM, Inc. is headquartered in the US with channel partners worldwide.

About Twelve Labs

Twelve Labs makes video instantly and intelligently searchable and understandable. Twelve Labs' state-of-the-art video understanding technology enables the accurate and timely discovery of valuable moments within an organization's vast sea of videos so that users can do and learn more. Leading venture capitalists, technology companies, AI luminaries, and successful founders back the company. Twelve Labs is headquartered in San Francisco, with an APAC office in Seoul. Learn more at twelvelabs.io.

Related Link: https://www.emamsolutions.com


Related articles

Building Advanced Video Understanding Applications: Integrating Twelve Labs Embed API with LanceDB for Multimodal AI

Leverage Twelve Labs Embed API and LanceDB to create AI applications that can process and analyze video content with unprecedented accuracy and efficiency.

James Le, Manish Maheshwari

Advanced Video Search: Leveraging Twelve Labs and Milvus for Semantic Retrieval

Harness the power of Twelve Labs' advanced multimodal embeddings and Milvus' efficient vector database to create a robust video search solution.

James Le, Manish Maheshwari

Building Semantic Video Search with Twelve Labs Embed API and MongoDB Atlas

Learn how to create a powerful semantic video search application by combining Twelve Labs' advanced multimodal embeddings with MongoDB Atlas Vector Search.

James Le, Manish Maheshwari

Unlocking Video Insights: The Power of Phyllo and Twelve Labs Collaboration

The collaboration between Phyllo and Twelve Labs is set to revolutionize how we derive insights from video content on social media.

James Le