How do you use AI to build a YouTube video summarizer?

By muntashah

YouTube has become integral to the digital world, hosting video content on almost every topic. This blog covers a method for summarizing YouTube videos with the help of OpenAI. Once you master the technique, you can grasp the essence of any lengthy YouTube video.

In our method, the front end is built in React.js, and an Express server handles the backend requests to the YouTube and OpenAI APIs. The server is needed because the OpenAI library disables API usage in browser-like environments by default, which keeps your API key out of client-side code.

To simplify the process, we have divided it into six easy-to-understand steps. Continue reading to learn how it works!

 

Step 1: Setting Up the React App

You should run the following command in your terminal to create a basic React app:

npx create-react-app youtube-summarizer

This creates a React app with the basic configuration and tooling. Next, install the react-youtube package with the following commands:

cd youtube-summarizer
npm install react-youtube

This package lets us embed and play YouTube videos in the app. The complete code is in the GitHub Repository.

Next, open the src/App.js file. It contains our app’s root component and entry point, and it is where we import React, the react-youtube package, and the stylesheet:

import React, { useState, useEffect, useRef } from 'react';
import YouTube from 'react-youtube';
import './App.css';

Once done, we can move to step 2 of our method.

 

Step 2: Creating the Components for the React App

In this step, we will define the app’s components in src/App.js, below the package imports from step 1.

  1. Create a VideoInput component.
  2. Give it a videoUrl prop as the input value and a setVideoUrl prop; when the input changes, call setVideoUrl to update the value.

You can use the following code to do so. It renders an input field for entering a YouTube video URL and defines the rest of the presentational components we need.

// Setting up the video input component for the summarizer.
const VideoInput = ({ videoUrl, setVideoUrl }) => (
 <input
   type="text"
   placeholder="YouTube Video URL"
   value={videoUrl}
   onChange={(e) => setVideoUrl(e.target.value)}
 />
);
// Next, the prompt input component.
const PromptInput = ({ prompt, setPrompt }) => (
 <input
   type="text"
   placeholder="Enter your summarization prompt"
   value={prompt}
   onChange={(e) => setPrompt(e.target.value)}
 />
);
// Next, the submit button component.
const SubmitButton = ({ loading }) => (
 <button type="submit" disabled={loading}>
   Submit
 </button>
);
// Next, the loading message component.
const LoadingMessage = ({ loading }) => loading && <p>Loading...</p>;
// Next, the error message component.
const ErrorMessage = ({ error }) => error && <p>{error}</p>;
// Next, the YouTube player component.
const YouTubePlayer = ({ videoId }) => <YouTube videoId={videoId} />;
// Finally, the summary component (wrapped in forwardRef so we can scroll to it).
const Summary = React.forwardRef(
 ({ transcriptLoaded, summary }, ref) =>
   transcriptLoaded && (
     <div className="summary" ref={ref}>
       <p>{summary}</p>
     </div>
   ),
);
 

Explanation:

  1. The PromptInput component renders an input field for entering the summarization prompt.
  2. The SubmitButton component renders a submit button labelled Submit, which is disabled while loading is true.
  3. The LoadingMessage component shows a loading indicator while the loading prop is true.
  4. The ErrorMessage component displays an error message when the error prop is set.
  5. The YouTubePlayer component creates a YouTube player using the react-youtube library (with the specified videoId).
  6. The Summary component renders a div with the class summary containing a paragraph with the summary prop as its content; it only appears once the transcript has loaded.
 

Step 3: Orchestrating the Entire Functionality

In step 3, initialize all the needed state variables with the useState hook. They manage the application’s dynamic data: the video URL, video ID, transcript, loading state, error state, summary, user prompt, and form submission state. Also, initialize a ref so we can scroll to the summary component once it loads. Here is the code:

// Main App Component
function App() {
 const [videoUrl, setVideoUrl] = useState("");
 const [videoId, setVideoId] = useState("");
 const [transcript, setTranscript] = useState("");
 const [transcriptLoaded, setTranscriptLoaded] = useState(false);
 const [loading, setLoading] = useState(false);
 const [error, setError] = useState("");
 const [summary, setSummary] = useState("");
 const [prompt, setPrompt] = useState("");
 const [formSubmitted, setFormSubmitted] = useState(false);
 const summaryRef = useRef(null);

Once done, create a form submission handler plus a small helper function. The handler prevents the default form submission behaviour, uses the helper to extract the video ID from the YouTube URL with a regular expression, then stores the video ID in state and marks the form as submitted.

const handleSubmit = async (e) => {
 e.preventDefault();
 const videoId = getVideoIdFromUrl(videoUrl);
 setVideoId(videoId);
 setFormSubmitted(true);
};
const getVideoIdFromUrl = (url) => {
 const regex = /[?&]v=([^&]+)/i;
 // Guard against non-matching URLs so we don't read [1] of null.
 const match = url.match(regex);
 return match ? match[1] : "";
};
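The regex above only recognizes standard watch?v= URLs. If you also want to accept youtu.be short links, a minimal sketch of an extended helper (an assumption beyond the original post, not part of its code) could look like this:

// Hypothetical variant: matches both ?v=VIDEO_ID and youtu.be/VIDEO_ID links.
// YouTube video IDs are 11 characters from A-Z, a-z, 0-9, "-" and "_".
const getVideoIdFromUrlFlexible = (url) => {
 const match = url.match(/(?:[?&]v=|youtu\.be\/)([A-Za-z0-9_-]{11})/);
 return match ? match[1] : "";
};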

You can now use the video ID to fetch the YouTube video transcript inside a useEffect hook that runs whenever the videoId state changes. The process is:

  1. Make a POST request to the /fetchTranscript endpoint with the video ID as the payload.
  2. Update the transcript state with the fetched data.
  3. Set transcriptLoaded to true.

useEffect(() => {
 const fetchTranscript = async () => {
   if (videoId) {
     setLoading(true);
     setError("");
     try {
       const response = await fetch("/fetchTranscript", {
         method: "POST",
         headers: { "Content-Type": "application/json" },
         body: JSON.stringify({ videoId }),
       });
       const data = await response.json();
       setTranscript(data.items[0].snippet.description);
     } catch (err) {
       setError("Error fetching transcript");
     } finally {
       setLoading(false);
       setTranscriptLoaded(true);
     }
   }
 };
 fetchTranscript();
}, [videoId]);
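For reference, the data returned by /fetchTranscript mirrors the YouTube Data API videos.list response, which is why the code reads data.items[0].snippet.description (strictly speaking, the video’s description text, which the app treats as the transcript). An abbreviated sketch of roughly the shape the code relies on, with placeholder values:

{
 "items": [
   {
     "id": "VIDEO_ID",
     "snippet": {
       "title": "Video title",
       "description": "Video description used as the transcript..."
     }
   }
 ]
}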

After you obtain the transcript, use it to generate the video summary with another useEffect hook, which triggers when the transcriptLoaded and formSubmitted states change. It does the following:

  1. Fetches the summary by making a POST request to the /fetchSummary endpoint with the transcript and user prompt as the payload.
  2. Updates the summary state with the fetched data, scrolls to the summary component, and resets the loading and form submission states when the process is complete.

useEffect(() => {
 const fetchSummary = async () => {
   if (transcriptLoaded && formSubmitted) {
     setLoading(true);
     setError("");
     try {
       const response = await fetch("/fetchSummary", {
         method: "POST",
         headers: { "Content-Type": "application/json" },
         body: JSON.stringify({ transcript, prompt }),
       });
       const data = await response.json();
       setSummary(data);
       summaryRef.current.scrollIntoView({ behavior: "smooth" });
     } catch (err) {
       setError("Error fetching summary");
     } finally {
       setLoading(false);
       setFormSubmitted(false);
     }
   }
 };
 fetchSummary();
}, [transcriptLoaded, formSubmitted, transcript, prompt]);
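For reference, the /fetchSummary endpoint defined in Step 5 returns the summary as a bare JSON string, so the data value above is already the text to display. An abbreviated example of a response body (the content is hypothetical):

"A concise paragraph summarizing the main points of the video."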

Finally, render the main structure of the application. The JSX below contains a form with the two input fields and the submit button, plus the components that display the loading message, errors, the YouTube player, and the generated summary. Remember to pass the relevant state variables and functions as props to the child components. Note that this is the return statement of the App function we have been building, not a new component:

 return (
   <div className="app">
     <h1>YouTube Summarizer</h1>
     <form onSubmit={handleSubmit}>
       <VideoInput videoUrl={videoUrl} setVideoUrl={setVideoUrl} />
       <PromptInput prompt={prompt} setPrompt={setPrompt} />
       <SubmitButton loading={loading} />
     </form>
     <LoadingMessage loading={loading} />
     <ErrorMessage error={error} />
     <YouTubePlayer videoId={videoId} />
     <Summary
       transcriptLoaded={transcriptLoaded}
       summary={summary}
       ref={summaryRef}
     />
   </div>
 );
}
export default App;

 

Step 4: Adding Some Style

To control the appearance of the components, open the src/App.css file and write the following styles:

body {
 background-color: #808080; /* grey */
}
.app {
 display: flex;
 flex-direction: column;
 align-items: center;
 justify-content: center;
 padding: 20px;
 font-family: Arial, sans-serif;
}
form {
 display: flex;
 flex-direction: column;
 align-items: center;
 margin-bottom: 20px;
}
input {
 margin: 10px 0;
 padding: 10px;
 width: 300px;
 border: 1px solid #ddd;
 border-radius: 4px;
}
button {
 padding: 10px 20px;
 border: none;
 border-radius: 4px;
 background-color: #007BFF;
 color: white;
 cursor: pointer;
}
button:disabled {
 background-color: #ccc;
 cursor: not-allowed;
}
.summary {
 width: 100%;
 border: 1px solid #ddd;
 border-radius: 4px;
 padding: 10px;
 height: 30vh;
 overflow-y: auto;
}

Explanation:

In our styling code:

  1. The webpage’s overall background colour is set to grey.
  2. The main application container is centered horizontally and vertically with flexbox.
  3. The form elements are arranged in a centered column, and the input fields are styled with comfortable margins, padding, and rounded corners.
  4. The submit button is blue; its disabled state is indicated with a light grey background and a not-allowed cursor.
  5. The summary container gets a border and a fixed height with a vertical scrollbar.

Note: You can adjust the styling to your liking.

Here is an idea of how the final look is going to be:

[Screenshot: the finished YouTube summarizer interface]

Now, let’s move on to the next step: setting up the server that communicates with the YouTube and OpenAI APIs.

 

Step 5: Setting Up the Express Server

To implement a simple Express server, move to the folder that contains your React project and create a new folder next to it for the backend.
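For example, assuming you call the backend folder server (the name is only a placeholder for this sketch):

mkdir server
cd server

Then initialize a new Node.js project inside it: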

npm init -y

Now, install the required packages. Express handles incoming HTTP requests and serves the React front end. Axios makes the HTTP requests to the YouTube Data API to fetch video details, and the openai package calls OpenAI’s API to generate the summaries. Finally, dotenv loads the API keys for YouTube and OpenAI from a .env file, keeping them out of the codebase for security reasons.

npm install express axios openai dotenv

You should now create a new file named server.js and write the backend code in it as follows:

require("dotenv").config();
const express = require("express");
const axios = require("axios");
const path = require("path");
const { OpenAI } = require("openai");
const app = express();
app.use(express.json());
const API_KEY = process.env.YOUTUBE_API_KEY;
const openai = new OpenAI({
 apiKey: process.env.OPENAI_API_KEY,
});
// First, fetch video transcripts
app.post("/fetchTranscript", async (req, res) => {
 const videoId = req.body.videoId;
 try {
    const response = await axios.get(
      `https://www.googleapis.com/youtube/v3/videos?part=snippet&id=${videoId}&key=${API_KEY}`,
    );
   res.json(response.data);
 } catch (error) {
   res.status(500).send("Error fetching transcript");
 }
});
// Second, fetch video summary
app.post("/fetchSummary", async (req, res) => {
 const transcript = req.body.transcript;
 const prompt = req.body.prompt;
 try {
   const chatCompletion = await openai.chat.completions.create({
     model: "gpt-3.5-turbo",
     messages: [
       { role: "user", content: transcript },
       { role: "system", content: `Summarize the video: ${prompt}` },
     ],
   });
   res.json(chatCompletion.choices[0].message.content);
 } catch (error) {
   res.status(500).send("Error fetching summary");
 }
});
// Finally, serve the React build for any other GET requests
app.use(express.static(path.join(__dirname, "../youtube-summarizer/build")));
app.get("*", (req, res) => {
 res.sendFile(
   path.join(__dirname, "../youtube-summarizer/build", "index.html"),
 );
});
const PORT = process.env.PORT || 3009;
app.listen(PORT, () => console.log(`Server listening on port ${PORT}`));

Explanation:

The above server code sets up an Express.js server to handle POST requests. It fetches video transcripts from YouTube (based on video ID) and generates video summaries using OpenAI’s GPT-3.5-turbo model. 

The server thus acts as the bridge between YouTube’s API and OpenAI’s API: it retrieves the relevant data and responds with the generated summary.

Finally, you need to obtain OpenAI and YouTube API keys with the following steps:

To generate the OpenAI API key

Open the OpenAI website -> create an account -> log into your account -> click on your profile name -> access the menu -> select the View API Keys option -> Click on the Create New Secret Key button.

Your OpenAI API key is now generated; copy it and store it somewhere safe, as it is only shown once.

To generate the YouTube API key

Open the Google Cloud Console -> sign in with your Google account -> click Create project (and choose a project name) -> enable billing -> click APIs & Services -> search for YouTube Data API v3 -> click Enable -> go back to APIs & Services -> Create credentials -> choose API key -> select YouTube Data API v3 for the API.

Note: Here, you should choose Public data under restrictions.

Now, click Create and copy the generated YouTube API key.

When you have both keys, create a .env file in the same folder as your server.js file to store the API keys:

YOUTUBE_API_KEY=your_youtube_api_key
OPENAI_API_KEY=your_openai_api_key
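If the project is under version control, it is also worth adding .env to a .gitignore file (a small addition beyond the original post) so the keys are never committed:

# .gitignore — keep secrets and dependencies out of the repository
.env
node_modules/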

It is done :)

Step 6: Show the Results

It is time for the final show. Open a terminal in the React project folder and run:

npm start

Now, start the Express server from its own folder in a separate terminal:

node server.js
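The front end sends relative requests to /fetchTranscript and /fetchSummary, so during development those requests have to be forwarded from the React dev server to the Express server. One way to do this (an assumption, not covered above) is to add a proxy entry to the React app's package.json, assuming the server listens on port 3009 as in server.js:

"proxy": "http://localhost:3009"

Alternatively, run npm run build in the React app and open the Express server's port directly; the static route in server.js already serves the production build.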

Once you've done this, open localhost in your browser. The app can now summarize your desired YouTube video. However, note that an error message is displayed if a transcript cannot be fetched for the video.
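If you want to check the backend on its own, you can also hit its endpoints directly; for example with curl, using a placeholder video ID:

curl -X POST http://localhost:3009/fetchTranscript \
 -H "Content-Type: application/json" \
 -d '{"videoId": "VIDEO_ID"}'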

Example Case Study:

Once all the steps are done:

  1. Choose any long video that you want to summarize.
  2. Enter the video URL in the YouTube Video URL field.
  3. Enter how you want the video summarized in the prompt field; in this case, we ask for the video to be summarized in a single paragraph.
  4. Click the submit button.


The app will respond with the summarized text:

XYZXYZXYZXYZXYZXYZXYZXYZXYZXYZXYZ…

Note: XYZ here is just placeholder text standing in for the actual summary.

And we are done!

 

Summary

AI technology has revolutionised the way people interact with the digital world, and an AI-based YouTube Video Summarizer is one such example. The summarizer integrates the power of React and OpenAI to give users a tool that distils the essence of YouTube videos into concise summaries without losing important details. While we have covered the core functionality of this integration in this blog, you can enhance it however you like.
