Introduction
Are you familiar with the concept of “one source, multi-use”? For content creators and influencers, it's all about maximizing audience engagement by repurposing content across different platforms.
This tool, “Generate Social Posts for your Video”, is specifically designed to assist video content creators in converting their videos into written formats, whether it's a fun Instagram post packed with emojis or a detailed blog entry.
Here's how it works: simply upload your video to the application and select one of the given social media platforms where you want to post. Alternatively, customize your own prompt to describe the written format you want, such as "Write a summary in bullet points." In just a few clicks, your transformed content is ready to go.
Now, let's dive into the step-by-step process of building the app!
Prerequisites
You should have your Twelve Labs API Key. If you don’t have one, visit the Twelve Labs Playground, sign up, and generate your API key.
The repository containing all the files for this app is available on Github.
(Good to Have) Basic knowledge in JavaScript, TypeScript, Node, React, and React Query.
How the App is Structured
The app consists of five main components: GenerateSocialPosts, VideoUrlUploadForm, Video, InputForm, and Result.
GenerateSocialPosts: It serves as the parent container for the other components. It holds key states that are shared with its descendants.
VideoUrlUploadForm: It features a straightforward form for uploading a video file and indexing it using the TwelveLabs API. It displays the video being indexed and provides real-time status updates until the indexing process is complete.
Video: It displays a video based on a given URL. It is utilized within multiple components throughout the application.
InputForm: It consists of preset radio buttons representing major social media platforms and a text area where users can specify the requirements of written content they desire from a video.
Result: This component showcases the generated written content based on the user's selection of a social platform or custom input, leveraging Twelve Labs' '/generate' API.
The app also has a server that holds all the code involving the API calls, plus apiHooks.tsx, a set of custom React Query hooks for managing state, caching, and fetching data.
Now, let’s take a look at how these components work along with the Twelve Labs API.
How the App Works with Twelve Labs API
1 - Showing the Most Recent Video of an Index
This app works with a single video: the most recently uploaded video of an index. Thus, on mount, the app shows the most recent video of a given index by default. Below is how this works.
Get all videos of a given index in App.tsx (GET Videos)
Extract the first video’s id from the response and pass it down to GenerateSocialPosts.tsx
Get details of a video using the video id, extract the video url and pass it down to Video.tsx (GET Video)
So, we make two GET requests to the Twelve Labs API in this flow of fetching the videos and showing the first one on the page. Let's look at each step in detail.
1.1 - Get all videos of a given index in App.tsx
Inside the app, the videos are fetched by calling the React Query hook useGetVideos, which makes a request to the server. The server then makes a GET request to the Twelve Labs API to get all videos of an index. (💡Find details in the API document - GET Videos)
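For reference, the server-side half of this step is just a thin proxy to the '/indexes/:indexId/videos' endpoint. Here is a minimal sketch of the request options it could build; the base URL version and the header values are assumptions, not code from the repo:

```typescript
// Sketch of the options the server could build for the GET Videos call.
// API_BASE_URL and HEADERS mirror the constants used by the routes shown
// later in this post; the version path and key placeholder are assumed.
const API_BASE_URL = "https://api.twelvelabs.io/v1.2";
const HEADERS = { "x-api-key": "<YOUR_API_KEY>", accept: "application/json" };

// Build axios options for GET /indexes/:indexId/videos
function buildGetVideosOptions(indexId: string) {
  return {
    method: "GET",
    url: `${API_BASE_URL}/indexes/${indexId}/videos`,
    headers: { ...HEADERS },
  };
}
```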
1.2 - Extract the first video’s id from the response and pass it down to GenerateSocialPosts.tsx
<GenerateSocialPosts
  index={apiConfig.INDEX_ID}
  videoId={videos.data[0]?._id || null} // passing down the id
  refetchVideos={refetchVideos}
/>
1.3 - Get details of a video using the video id, extract the video url and pass it down to Video.tsx
Similar to the previous step of getting videos, to get the details of a video we use the React Query hook useGetVideo, which makes a request to the server. The server then makes a GET request to the Twelve Labs API to get the details of a specific video. (💡Find details in the API document - GET Video)
/** Get a video of an index */
app.get(
  "/indexes/:indexId/videos/:videoId",
  async (request, response, next) => {
    const indexId = request.params.indexId;
    const videoId = request.params.videoId;
    try {
      const options = {
        method: "GET",
        url: `${API_BASE_URL}/indexes/${indexId}/videos/${videoId}`,
        headers: { ...HEADERS },
      };
      const apiResponse = await axios.request(options);
      response.json(apiResponse.data);
    } catch (error) {
      const status = error.response?.status || 500;
      const message = error.response?.data?.message || "Error Getting a Video";
      return next({ status, message });
    }
  }
);
It returns the details of a video, including the video's URL. You may have noticed that the video URL was not available in the previous GET Videos response. This is precisely why we make the GET Video request here: to dig deeper and extract the video URL.
One more note: since this app only allows a user to upload/index a video file from a local device, the video URL lives inside the “hls” object. (In my previous app, Summarize YouTube Video, the YouTube URL was available inside the “source” object, as that app supported uploading by YouTube URL.)
{video && (
  <Video
    url={video.hls?.video_url} // passing down the url
    width={"381px"}
    height={"214px"}
  />
)}
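The hls-versus-source distinction can be wrapped in a small helper. This is my own sketch, not code from the repo, and the `source.url` field name is an assumption based on the previous app's description:

```typescript
// Hypothetical helper: resolve a playable URL from a video's details.
// Videos indexed from a local file expose their URL under "hls", while
// videos indexed from a YouTube URL expose it under "source" (assumed shape).
interface VideoDetail {
  hls?: { video_url?: string };
  source?: { url?: string };
}

function resolveVideoUrl(video: VideoDetail): string | undefined {
  return video.hls?.video_url ?? video.source?.url;
}
```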
2 - Uploading/Indexing a Video by Video File
In this app, a user can upload and index a video by selecting a video file from their local device. We assume the main user of this application is a video content creator who already has a video on their local device.
Once you submit a video indexing request (we call it a ‘task’), you can receive the progress of the indexing task. I also made the video visible during the indexing process so that a user can confirm and watch it while indexing is in progress. Please note that the video is not instantly available at the beginning of the indexing process; the video URL becomes available roughly halfway through.
Create a video indexing task by a video file in VideoUrlUploadForm.tsx (POST Task)
Receive and show the progress of the indexing task in Task.tsx (GET Task)
Let’s take a look at each step one by one.
2.1 - Create a video indexing task by a video file in VideoUrlUploadForm.tsx
When a user selects a video file from their local device and submits the upload form, the video indexing process starts. indexYouTubeVideo puts together the necessary data for the API request (e.g., language, index id, video file) as a form and makes a POST request to the server. The server then makes a POST request to the Twelve Labs API’s ‘/tasks’ endpoint. (💡Find details in the API document - POST Task)
/** Index a video and return a task ID */
app.post(
  "/index",
  upload.single("video_file"),
  async (request, response, next) => {
    const formData = new FormData();
    // Append data from request.body
    Object.entries(request.body).forEach(([key, value]) => {
      formData.append(key, value);
    });
    const blob = new Blob([request.file.buffer], {
      type: request.file.mimetype,
    });
    formData.append("video_file", blob, request.file.originalname);
    const options = {
      method: "POST",
      url: `${API_BASE_URL}/tasks`,
      headers: {
        "x-api-key": TWELVE_LABS_API_KEY,
        accept: "application/json",
        "Content-Type":
          "multipart/form-data; boundary=---011000010111000001101001",
      },
      data: formData,
    };
    try {
      const apiResponse = await axios.request(options);
      response.json(apiResponse.data);
    } catch (error) {
      const status = error.response?.status || 500;
      const message =
        error.response?.data?.message || "Error indexing a Video";
      return next({ status, message });
    }
  }
);
It returns the id of the video indexing task that has just been created.
{
  "_id": "65e9f732bb29f13bdd6f305a"
}
2.2 - Receive and show the progress of the indexing task in Task.tsx
With the task id returned from the previous step, we can get the details of the task and keep updating the task status for the user. So I’ve set the Task component to render inside the VideoUrlUploadForm whenever there is a task id.
Inside the Task component, we’re receiving the data by using the useGetTask React Query hook which makes a GET request to the server. The server then makes a GET request to the Twelve Labs API as below. (💡Find details in the API document - GET Task)
Unless the “status” is “ready”, the useGetTask hook refetches the data every 5,000 ms so that a user can see the progress of the task in real time. Check how I leveraged the refetchInterval property of useQuery below.
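The refetch logic can be factored into a small predicate handed to useQuery's refetchInterval option. This is a sketch of the pattern, not the exact repo code; the helper name is my own:

```typescript
// Poll every 5 seconds until the task reports "ready", then stop.
// Returning false from refetchInterval disables further refetching.
type Task = { status?: string };

function taskRefetchInterval(task?: Task): number | false {
  return task?.status === "ready" ? false : 5000;
}

// Usage inside a useGetTask hook could look like:
// useQuery(["task", taskId], () => fetchTask(taskId), {
//   refetchInterval: (data) => taskRefetchInterval(data),
// });
```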
3 - Receiving User Inputs and Generating/Showing Results
Now we’re at the core and fun part: generating any text from a video! We receive the user input, then use the Twelve Labs API’s ‘/generate’ endpoint to generate open-ended texts.
Receive a user input either from the preset radio buttons representing major social media platforms or a text area form in InputForm.tsx
Based on the user input, make ‘generate’ API call in Result.tsx (POST Open-ended texts)
Show the results in Result.tsx
Let’s dive into each step.
3.1 - Receive a user input either from the preset radio buttons representing major social media platforms or a text area form in InputForm.tsx
InputForm is a simple form consisting of five radio buttons: four representing Instagram, Facebook, X, and Blog, and a fifth, ‘Others’, where a user can customize and specify what kind of text they want from a video.
If a user selects a preset social platform, the embedded prompt for that platform is set as the prompt, which later invokes the generate API in Result.tsx. If a user selects ‘Others’, the user’s input is set as the prompt.
async function handleSubmit(event: React.FormEvent) {
  event.preventDefault();
  let promptValue = "";
  let platformValue = "";
  if (instagramRef.current?.checked) {
    promptValue =
      "write an Instagram post with emojis, 100 words or less. Do not provide an explanation. Do not provide a summary.";
    platformValue = "Instagram";
  } else if (facebookRef.current?.checked) {
    promptValue =
      "write a Facebook post with emojis, 150 words or less. Do not provide an explanation. Do not provide a summary.";
    platformValue = "Facebook";
  } else if (xRef.current?.checked) {
    promptValue =
      "write an X (formerly Twitter) post with emojis, 50 words or less. Do not provide an explanation. Do not provide a summary.";
    platformValue = "X";
  } else if (blogRef.current?.checked) {
    promptValue =
      "write a blog post with details. Divide sections with subtitles. Do not provide an explanation. Do not provide a summary.";
    platformValue = "Blog";
  } else if (textRadioRef.current?.checked) {
    const inputValue = textAreaRef.current?.value.trim();
    if (inputValue && inputValue.length > 0) {
      promptValue = inputValue;
      platformValue = `"${inputValue}"`;
    } else {
      setShowCheckWarning(true);
      return;
    }
  }
  setPrompt(promptValue);
  setIsSubmitted(true);
  setShowVideoTitle(true);
  setPlatform(platformValue);
}
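The if/else chain above can equivalently be expressed as a lookup table keyed by platform, which keeps the prompts in one place. A sketch of that refactoring (the prompts mirror the presets in the handler; the table and helper names are my own):

```typescript
// Preset prompts keyed by platform; mirrors the branches in handleSubmit.
const PRESET_PROMPTS: Record<string, string> = {
  Instagram:
    "write an Instagram post with emojis, 100 words or less. Do not provide an explanation. Do not provide a summary.",
  Facebook:
    "write a Facebook post with emojis, 150 words or less. Do not provide an explanation. Do not provide a summary.",
  X: "write an X (formerly Twitter) post with emojis, 50 words or less. Do not provide an explanation. Do not provide a summary.",
  Blog: "write a blog post with details. Divide sections with subtitles. Do not provide an explanation. Do not provide a summary.",
};

// Returns the embedded prompt for a preset platform, or undefined for
// unknown platforms (the 'Others' path supplies its own prompt instead).
function promptFor(platform: string): string | undefined {
  return PRESET_PROMPTS[platform];
}
```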
💡 Tips on prompt engineering
Take a look at the embedded prompts I've configured for Instagram, Facebook, and more. Dive into this guide on prompt engineering tailored for Twelve Labs API!
3.2 - Based on the user input, make the ‘/generate’ API call in Result.tsx
When the form has been submitted and the video id and prompt are set, the useGenerate hook is called from Result.tsx. The hook then makes a request to the server, where the API request to the Twelve Labs API is made. (💡Find details in the API document - POST Open-ended texts)
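For reference, the request body sent to the ‘/generate’ endpoint pairs a video id with the prompt. A minimal sketch (the field names follow the POST Open-ended texts docs as I understand them; the helper itself is an illustration, not repo code):

```typescript
// Build the JSON body for the POST Open-ended texts ('/generate') call.
// Field names (video_id, prompt) are taken from the API docs.
function buildGenerateBody(videoId: string, prompt: string) {
  return { video_id: videoId, prompt };
}
```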
It returns an object consisting of a response ‘id’ and ‘data’, which is the generated text based on the provided prompt. Below is an example result when you select ‘Instagram’.
{
  "id": "ea704c10-91bf-4e7b-9941-a323d5fe8970",
  "data": "🌮🍟 Taco Tuesday just got a whole lot better with these loaded wraps! 🥪🌮🍟 Packed with protein, veggies, and all the fixings, these wraps are a delicious and easy way to elevate your Taco Tuesday game! 🌮🍟 #tacotuesday #wraps #proteinpacked #easyrecipe #delicious #foodie #yum"
}
3.3 - Show the results in Result.tsx
Based on the response from the step above, the results are shown in the Result component. To improve the presentation, you can split the returned string by line breaks ("\n") and render each line within its own <p> tag.
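The splitting step can look like this; a minimal sketch, with the helper name my own:

```typescript
// Split generated text on newlines and drop empty lines, so each
// remaining line can be rendered in its own <p> tag.
function splitIntoParagraphs(text: string): string[] {
  return text.split("\n").filter((line) => line.trim().length > 0);
}

// In Result.tsx this could render as:
// {splitIntoParagraphs(data).map((line, i) => <p key={i}>{line}</p>)}
```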
Unlocking the full potential of your video content has never been easier, thanks to Twelve Labs' '/generate' API! With this powerful technology, you can effortlessly transform your videos into captivating social media posts tailored for platforms like Instagram, X, and even blog posts! Whether you're looking to engage your audience on social media or create compelling blog content, the possibilities are endless.
This application is the culmination of a series of tools I've developed, all leveraging Twelve Labs' Generate API: gaining inspiration from other influencers with "Summarize a YouTube Video", effortlessly generating titles, topics, and hashtags for video uploads with "Generate Titles and Hashtags for Your Video", and now repurposing content for various platforms with this tool. It's all about simplifying the content creation process. Feel free to try all three applications!