
Imagine you're playing rock-paper-scissors with your computer, with your hand gestures recognized through your webcam in real time. How cool would that be? With advancements in machine learning, specifically YOLOv8, you can easily implement this hand gesture recognition project in Python.
In this article, we'll walk through the entire process of setting up the project, including training a model on a custom dataset to recognize hand gestures like rock, paper, and scissors.
If you're curious about using Python for real-time gesture recognition or want to create a fun, interactive project, this tutorial is for you!
Requirements
Before diving into the code, let's ensure Python (3.6 or later) is installed on your computer. If you don't have Python, you can download it for free from https://www.python.org/downloads/.
Now download all the dependencies we require using the following commands:
pip install gitpython>=3.1.30
pip install matplotlib>=3.3
pip install numpy>=1.23.5
pip install opencv-python>=4.1.1
pip install pillow>=10.3.0
pip install psutil
pip install PyYAML>=5.3.1
pip install requests>=2.32.0
pip install scipy>=1.4.1
pip install thop>=0.1.1
pip install torch>=1.8.0
pip install torchvision>=0.9.0
pip install tqdm>=4.64.0
pip install ultralytics>=8.2.34
pip install pandas>=1.1.4
pip install seaborn>=0.11.0
pip install setuptools>=65.5.1
pip install filterpy
pip install scikit-image
pip install lap
pip install cvzone
Note: cvzone is used later in the detection script to draw bounding boxes and labels on the video frames.
Alternative Installation
Installing the above utilities one by one can be tedious. Instead, you can download the 'requirements.txt' file containing all the dependencies listed above. Then simply run the following command, which installs everything in one go.
pip install -r requirements.txt
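If you'd rather create the file yourself, 'requirements.txt' is simply the same package list, one requirement per line. A shortened sketch (the remaining entries follow the same pattern):

gitpython>=3.1.30
matplotlib>=3.3
numpy>=1.23.5
opencv-python>=4.1.1
ultralytics>=8.2.34
cvzone
# ...plus the remaining packages from the list above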
Training of YOLO Model on Custom Dataset
First, we have to train our YOLO model. Please follow the steps below:
Download the Dataset
Download the rock-paper-scissors dataset from roboflow.com.
Now unzip the downloaded dataset. The folder should look like the following:

Training YOLOv8 Model with Custom Dataset using Colab
Open Google Colab, sign in with your Gmail account, and open a new notebook.
Now go to the "Runtime" menu, select "Change runtime type", choose "T4 GPU" as the Hardware accelerator, and save it.
Let's check whether the GPU is available and running properly using the following command:
!nvidia-smi
The output should look like the following:

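If you prefer a check from Python itself, PyTorch (which comes pre-installed on Colab) can confirm that a CUDA device is visible. This is only an optional cross-check:

import torch

# Should print True and the GPU name (e.g. "Tesla T4") when a GPU runtime is active
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))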
Next, install ultralytics in your Colab workspace using the following command:
!pip install ultralytics
Now open your Google Drive and navigate to "My Drive." Create a folder named "Datasets" under "My Drive", and inside the "Datasets" folder create another folder named "RockPaperScissors."
Let's open the unzipped dataset folder, select all the items present there, and drop them into the "RockPaperScissors" folder on Google Drive. The upload may take a while, so wait until it finishes. The final "RockPaperScissors" folder will look like the following:

Now open the "data.yaml" file in a text editor and modify the path variable to "../drive/MyDrive/Datasets/RockPaperScissors". The final "data.yaml" file will look like the following:

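For reference, a typical YOLO "data.yaml" for this dataset looks roughly like the sketch below. The exact train/valid/test entries and class names come from the dataset you downloaded, so treat this as an illustration rather than something to copy verbatim:

path: ../drive/MyDrive/Datasets/RockPaperScissors
train: train/images
val: valid/images
test: test/images

nc: 3
names: ['paper', 'rock', 'scissor']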
Now, let's go back to our Google Colab notebook. You need to mount your Google Drive in Colab. Insert the following command in a new cell and run it:
from google.colab import drive
drive.mount('/content/drive')
You should get a success message like this: "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount('/content/drive', force_remount=True)."
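Before starting the training, it's worth confirming that Colab can actually see the uploaded dataset. A quick directory listing (adjust the path if your folder names differ) should show the train, valid, and test folders along with "data.yaml":

!ls /content/drive/MyDrive/Datasets/RockPaperScissors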
Now we will start training our YOLO model with our custom dataset. Again, create a new cell, insert the command below, and run it.
!yolo task=detect mode=train model=yolov8l.pt data=../content/drive/MyDrive/Datasets/RockPaperScissors/data.yaml epochs=100 imgsz=640
Here, "epochs=100" specifies the number of training epochs. An epoch is one complete pass through the entire training dataset, so the model will be trained for 100 full passes over the data.
"imgsz=640" sets the size of the input images on which the model will be trained. In this case, images will be resized to 640×640 pixels before being fed into the model.
The whole training can take around 1–2 hours, or even more, to complete.
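If you prefer to stay in Python rather than using the yolo command-line interface, the same training run can be launched through the ultralytics Python API. This is an equivalent alternative, not an extra step; adjust the data path to wherever your "data.yaml" actually lives:

from ultralytics import YOLO

# Start from the pretrained YOLOv8-large checkpoint
model = YOLO("yolov8l.pt")

# Train on the custom rock-paper-scissors dataset
model.train(
    data="/content/drive/MyDrive/Datasets/RockPaperScissors/data.yaml",
    epochs=100,   # number of complete passes through the training data
    imgsz=640,    # training image size (640x640 pixels)
)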
After the training is complete, go to the "Files" section in your Colab dashboard and navigate through these folders: "runs" -> "detect" -> "train" -> "weights". Inside the "weights" folder you will find two files, "best.pt" and "last.pt". Download "best.pt" from there.
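Alternatively, you can trigger the download directly from a Colab cell using the google.colab files helper. If your run saved to a numbered folder such as "train2", adjust the path accordingly:

from google.colab import files

# Download the best-performing weights from the training run
files.download('runs/detect/train/weights/best.pt')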
Setting Up the Environment
For this project, create a separate folder named "RockPaperScissors." Under this folder, create another folder named "Weights" to store the custom-trained YOLO model.
Place the Downloaded YOLO Model
In the previous section, we trained our YOLO model on a custom dataset and downloaded a file named "best.pt." Now place this file inside the "Weights" folder.
Create Your Python Script
We're almost done setting up the environment. Now choose your favorite text editor and open the entire project folder "RockPaperScissors." Inside this folder, create a Python program file named "rock_paper_scissors.py." This is where you'll write the code.
Your final project file hierarchy should look like the following:
RockPaperScissors/
├── Weights/
│   └── best.pt
└── rock_paper_scissors.py
The Program
Here's the complete Python program that uses YOLOv8 to detect rock-paper-scissors gestures via your webcam. The program captures live video, performs object detection on each frame, and identifies gestures with confidence scores.
import cv2
import math
import cvzone
import threading
from ultralytics import YOLO

# Load YOLO model with custom weights
yolo_model = YOLO("Weights/best.pt")

# Define class names
class_labels = ['paper', 'rock', 'scissor']

frame = None

def capture_video(video_capture):
    global frame
    while True:
        success, img = video_capture.read()
        if success:
            frame = img

# Open the webcam for gesture detection
video_capture = cv2.VideoCapture(0)

# Start the video capture in a separate thread
capture_thread = threading.Thread(target=capture_video, args=(video_capture,))
capture_thread.daemon = True
capture_thread.start()

while True:
    # Wait until the capture thread has delivered the first frame
    if frame is None:
        continue

    # Perform object detection
    results = yolo_model(frame)

    for r in results:
        boxes = r.boxes
        for box in boxes:
            # Bounding box coordinates
            x1, y1, x2, y2 = box.xyxy[0]
            x1, y1, x2, y2 = int(x1), int(y1), int(x2), int(y2)
            w, h = x2 - x1, y2 - y1

            # Confidence score (two decimal places) and class index
            conf = math.ceil((box.conf[0] * 100)) / 100
            cls = int(box.cls[0])

            if conf > 0.1:
                cvzone.cornerRect(frame, (x1, y1, w, h), t=2)
                cvzone.putTextRect(frame, f'{class_labels[cls]} {conf}',
                                   (x1, y1 - 10), scale=0.8, thickness=1,
                                   colorR=(255, 0, 0))

    # Display the frame with detections
    cv2.imshow("Image", frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
cv2.waitKey(1)
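Once "best.pt" is in the "Weights" folder, run the script from inside the "RockPaperScissors" project folder so that the relative path "Weights/best.pt" resolves correctly:

python rock_paper_scissors.py

A window will open showing your webcam feed with detections; press "q" to quit.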
Explanation of the Code
- Loading the Model: We start by loading the YOLOv8 model with the custom-trained weights saved in Weights/best.pt. This model is trained specifically to detect rock, paper, and scissors gestures.
- Starting Video Capture: We set up the webcam for live video capture using OpenCV. To ensure smooth processing, the video capture runs in a separate thread.
- Running Object Detection: For each video frame, we use YOLOv8 to perform object detection. The model returns bounding boxes around detected gestures with confidence scores. We then draw these bounding boxes on the frame and label each detection with its class (rock, paper, or scissors) and confidence level; a quick way to check that the class label order matches your trained model is shown after this list.
- Displaying the Output: Each processed frame is displayed in a window. The program continuously captures video and updates the display until you press āqā to exit.
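One assumption baked into the script is that the order of class_labels matches the class indices the model was trained with. A quick, optional way to verify this is to print the class names stored in the trained weights (the exact output depends on your dataset's "data.yaml"):

from ultralytics import YOLO

# Inspect the class index-to-name mapping stored in the trained weights
model = YOLO("Weights/best.pt")
print(model.names)   # e.g. {0: 'paper', 1: 'rock', 2: 'scissor'}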
Output
Summary
In this tutorial, we developed a rock-paper-scissors sign detection project using Python, YOLOv8, and OpenCV. We demonstrated how to use YOLOv8 for real-time hand gesture recognition in Python. This is a fantastic introduction to using YOLOv8 with custom datasets for specific object detection tasks.
Whether you're interested in gesture recognition, machine learning, or real-time video processing, this project is a practical example of what's possible with Python and YOLOv8.
Now you're ready to take this project further! Try experimenting with more gestures or enhance the model to recognize complex hand signs. With the skills you've learned, there's no limit to the applications you can create.
For any query related to this project, reach out to me at contact@pyseek.com.
Recommended Article: Create a Finger Counter Using Python, OpenCV & Mediapipe
Happy Coding!