Author
Twelve Labs
Date Published
12/12/2024
Tags
Investment
Startup

Databricks, HubSpot Ventures, IQT, SK Telecom, and Snowflake make strategic investments; as the company enters its next phase of growth, Twelve Labs hires Yoon Kim, former CTO of SK Telecom and a leader of Apple's Siri, as President

SAN FRANCISCO, Dec. 12, 2024 /PRNewswire-PRWeb/ -- Twelve Labs, the video understanding company, today announced that three of the world's leading infrastructure providers, Databricks, SK Telecom, and Snowflake Ventures, as well as HubSpot Ventures and In-Q-Tel (IQT), have each made a significant strategic investment in the company. The $30 million in funding underscores the value Twelve Labs delivers to end customers, particularly across the media and entertainment space, including professional sports leagues, major film and production studios, and the world's largest content creators and developers, as well as other businesses with large stores of video content. The news comes alongside the release of its latest video foundation model, Marengo 2.7, which applies an entirely new multi-vector approach to video understanding, yielding unprecedented results.

Twelve Labs will use its latest funding to accelerate development of key initiatives in addition to hiring. The company continues to grow aggressively to meet customer demand and further its R&D efforts. Twelve Labs provides industry-leading video AI solutions designed to unlock the full potential of vast enterprise video archives. Its proprietary multimodal foundation models, Marengo and Pegasus, bring human-like understanding to videos, enabling precise semantic search, summarization, analysis, Q&A, and more.

"We're incredibly excited to partner closely with leading data platforms. It's a no-brainer for us to bring our video foundation models to the largest and most trusted enterprise infrastructure providers." said Jae Lee, CEO of Twelve Labs.

A Milestone in the AI Ecosystem

As part of their respective investments, Databricks and Snowflake will deliver Twelve Labs' capabilities to users through interoperability with their vector databases. This represents an important milestone in the AI ecosystem, validating the value end customers experience with Twelve Labs.

"We're incredibly excited to partner closely with leading data platforms. It's a no-brainer for us to bring our video foundation models to the largest and most trusted enterprise infrastructure providers." said Jae Lee, CEO of Twelve Labs. "This is just the beginning. Today we have interoperability with their leading vector databases and we're looking forward to building out the next generation of AI tooling within these platforms."

Snowflake is developing an advanced integration with Twelve Labs that will leverage Snowflake Cortex AI, Snowflake's fully managed AI service offering a suite of generative AI features. These capabilities can be used to improve the consumer experience, drive monetization through enhanced content analysis, and sharpen personalization and creative visioning with richer video and creative search. Twelve Labs' multimodal video embeddings can be stored in Snowflake with vector data support, enabling advanced video analytics and AI-driven applications, all with the Snowflake AI Data Cloud's built-in security and governance.
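
To make the pattern concrete, here is a minimal Python sketch of storing Twelve Labs video embeddings in a Snowflake VECTOR column and ranking segments with Snowflake's built-in cosine similarity. The table and column names, the 1024-dimension embedding size, and the connection details are illustrative assumptions, not details of the announced integration.

```python
# Illustrative sketch only: assumes a VIDEO_SEGMENTS table, 1024-dim Marengo
# embeddings, and placeholder credentials.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="MY_WH", database="MEDIA_DB", schema="PUBLIC",
)
cur = conn.cursor()

# One row per video segment, with the Twelve Labs embedding in a VECTOR column.
cur.execute("""
    CREATE TABLE IF NOT EXISTS video_segments (
        video_id  STRING,
        start_sec FLOAT,
        end_sec   FLOAT,
        embedding VECTOR(FLOAT, 1024)
    )
""")

# query_embedding would come from the Twelve Labs Embed API for a text or video
# query; here it is a placeholder, inlined as an array literal for simplicity.
query_embedding = [0.01] * 1024
vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

cur.execute(f"""
    SELECT video_id, start_sec, end_sec,
           VECTOR_COSINE_SIMILARITY(embedding, {vec_literal}::VECTOR(FLOAT, 1024)) AS score
    FROM video_segments
    ORDER BY score DESC
    LIMIT 10
""")
for video_id, start_sec, end_sec, score in cur.fetchall():
    print(video_id, start_sec, end_sec, round(score, 3))
```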

"In working with Twelve Labs and our media, sports and advertising customers, there is a strong opportunity to leverage the company's models for advanced video search, creative versioning and video personalization," said Bill Stratton, Global Head of Media, Entertainment & Advertising at Snowflake. "Our investment will unlock even more opportunity for our customers to leverage AI without copying or moving their data and Twelve Labs and Snowflake are committed to solutions that are privacy focused and respect our customers' intellectual property."

Twelve Labs and Databricks developed an integration that specifically reduces development time and resource needs for advanced video applications, enabling complex queries across vast video libraries and enhancing overall workflow efficiency. Through a unified approach to handling multimodal data, users no longer have to juggle separate models for text, image, and audio analysis. Instead, they can work with a single, coherent representation that captures the essence of video content in its entirety. This not only simplifies deployment architecture but also enables more nuanced and context-aware applications, from sophisticated content recommendation systems to advanced video search engines and automated content moderation tools.

Moreover, this integration extends the capabilities of the Databricks Data Intelligence Platform and benefits the Databricks ecosystem, allowing seamless incorporation of video understanding into existing data pipelines and machine learning workflows. Whether companies are developing real-time video analytics, building large-scale content classification systems, or exploring novel applications in Generative AI, this combined solution provides a powerful foundation. It pushes the boundaries of what's possible in video AI, opening up new avenues for innovation and problem-solving.

"Twelve Labs' technology is highly advanced and fills an important gap in the current AI ecosystem. We are excited to back the company building the future of video understanding and are already working on projects that will up-level customer capabilities," said Andrew Ferguson, VP of Databricks Ventures. "Integrating Twelve Labs Embed API with our Mosaic AI Vector Search overcomes the challenge of efficient processing of large-scale video datasets and accurate multimodal content representation. Our work together will help deliver on the promise of data intelligence, and we look forward to enabling additional integrations in the future."

Taking Video Understanding to the Next Level

Upon the close of the latest round of funding and amid exceptional growth and product demand, Twelve Labs has hired Yoon Kim as President and Chief Strategy Officer. Dr. Kim most recently served as Partner at Saehan Ventures, where he was in charge of discovering and fostering promising AI and deep tech startups. Prior to that, he served as CTO at SK Telecom, leading AI innovation. But perhaps Dr. Kim is best known for his pivotal role in the development of Apple's AI assistant, Siri. He joined Apple in 2013 through the acquisition of Novauris Technologies, a pioneer in mobile speech recognition, where he served as CEO. At Apple, he led teams that built speech recognition technologies for Siri and iOS dictation.

"While it is unusual for a company of Twelve Labs' age and stage to hire a President, this move is a testament to the demand we have experienced and the path we see moving forward. Yoon is the right person to help us execute," said Jae Lee, CEO and co-founder of Twelve Labs. "Yoon will be instrumental in driving future growth with key acquisitions, expanding our global presence, and aligning our teams toward ambitious goals. He will also give us a unique talent advantage in South Korea."

With the addition of Dr. Kim, Twelve Labs will solidify its leadership position. Dr. Kim will focus on strengthening the global enterprise market and overseeing global business strategies as well as securing top AI talent.

"Twelve Labs not only possesses world-class multimodal AI technology, but also has an outstanding ability to create real-world value for its customers through rapid innovations," said Dr. Kim. "I believe Twelve Labs is on the verge of establishing clear and sustainable leadership in video understanding AI. Based on my experience as both an entrepreneur and executive in the US, UK and Korea, I'm looking forward to working with global partners in fulfilling our mission of enabling humans and machines to understand all video content in the world and recruiting the best AI talent to join our cause."

Dr. Kim will split time between Twelve Labs' San Francisco and Seoul offices.

To learn more about why so many game-changers are joining Twelve Labs as investors, partners, customers, and team members, please visit twelvelabs.io.

About Twelve Labs

Twelve Labs makes video instantly, intelligently searchable and understandable. Twelve Labs' state-of-the-art video understanding technology enables the accurate and timely discovery of valuable moments within an organization's vast sea of videos so that users can do and learn more. The company is backed by leading venture capitalists, technology companies, AI luminaries, and successful founders. It is headquartered in San Francisco, with an APAC office in Seoul. Learn more at twelvelabs.io.

Media Contact

Amber Moore, Moore Communications, +1 503-943-9381, amber@moorecom2.com

SOURCE Twelve Labs

