
Software

Computer Vision

Our computer vision program runs on our Raspberry Pi 5 and detects and tracks deer using an AI model. OpenCV (a computer vision library) lets us capture frames from our OV9732 camera, and our YOLO model then runs inference on each frame to check for deer.
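As a rough sketch (not our exact code), the capture-and-inference loop looks something like this. The `ultralytics` package, the `deer.onnx` model path, and the camera index are assumptions:

```python
def detection_center(box):
    """Return the center pixel of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) // 2, (y1 + y2) // 2)

def run(model_path="deer.onnx", camera_index=0):
    """Grab frames with OpenCV and run YOLO inference on each one."""
    import cv2                      # OpenCV, reads frames from the camera
    from ultralytics import YOLO    # loads the trained model (path is hypothetical)
    model = YOLO(model_path)
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame)      # inference on this frame
        for box in results[0].boxes.xyxy.int().tolist():
            print("deer detected at", detection_center(box))
    cap.release()
```

The center point computed here is what eventually gets sent to the Arduino for aiming.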

Custom training an AI model

We trained our custom YOLO (You Only Look Once) model using a structured approach. First, we gathered a diverse dataset of deer images and annotated them to support accurate object detection. Next, we took an off-the-shelf detection model from Ultralytics and custom-trained it on our dataset.

Deer dataset collection

Roboflow is a tool that lets developers annotate images and manage datasets, and it was very useful for preparing our images for machine learning. We started with about 1,200 images, then added preprocessing steps and augmentations such as brightness, color, contrast, and auto-orient to diversify the dataset, which generated many more training images.
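Roboflow applies these augmentations for us, but the idea behind a brightness/contrast augmentation can be illustrated with a tiny sketch (the numbers here are arbitrary examples, not Roboflow's internals):

```python
def adjust_pixel(value, brightness=0, contrast=1.0):
    """Scale a 0-255 pixel by `contrast`, shift it by `brightness`, and clamp."""
    return max(0, min(255, int(value * contrast + brightness)))

def augment_row(row, brightness=0, contrast=1.0):
    """Apply the same brightness/contrast transform to a row of grayscale pixels."""
    return [adjust_pixel(v, brightness, contrast) for v in row]
```

For example, `augment_row([100, 200], brightness=30, contrast=1.2)` returns `[150, 255]`: each copy of an image transformed this way counts as a new training example.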

Training YOLO AI model

For our AI model, we chose the YOLOv8 nano version from Ultralytics, which combines excellent speed with good accuracy for object detection.

To train the model, we used Google Colab, which provides free computing resources such as GPUs and TPUs. This gave us the computing power we needed, because our own computers are not suited to machine learning and training on them would have been far slower.

Training in Google Colab

We installed the necessary packages and libraries in our Google Colab notebook, such as OpenCV and Ultralytics YOLO, so we could train our models. We then pulled our dataset from Roboflow straight into the notebook.

We then began training with that one important line of code. Throughout the process we encountered several software-related challenges: we tested different models and datasets and experimented with various training parameters. Some models ran slowly, while others produced too many false positives, so we continuously refined our approach to improve accuracy and performance.
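That "one important line" is the Ultralytics training call. A hedged sketch, where the dataset path, epoch count, and image size are placeholder values rather than our exact settings:

```python
def train_deer_model(data_yaml="data.yaml", epochs=100, imgsz=640):
    """Fine-tune the YOLOv8 nano checkpoint on a Roboflow-exported dataset.

    `data_yaml`, `epochs`, and `imgsz` are illustrative defaults, not the
    exact parameters we used.
    """
    from ultralytics import YOLO
    model = YOLO("yolov8n.pt")  # pretrained nano checkpoint from Ultralytics
    return model.train(data=data_yaml, epochs=epochs, imgsz=imgsz)
```

Changing models, datasets, and parameters during our experiments mostly meant changing the checkpoint name and the arguments to this one call.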

We converted our model from the original PyTorch format to an optimized ONNX file, which runs faster on our hardware. Even with the optimized model, the system initially achieved only about 5–7 FPS, so we implemented multiprocessing to run key tasks in parallel and keep the system from lagging. With this improvement, our computer vision software now handles multiple operations simultaneously: capturing frames from the camera, performing inference, tracking the detected deer, streaming the results to our web server, and sending pixel coordinates to the Arduino Uno for motor control.
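A minimal sketch of the worker split, assuming detections arrive on a queue as (x1, y1, x2, y2) tuples with a `None` sentinel for shutdown; the real pipeline has more stages (capture, inference, streaming, serial output):

```python
from multiprocessing import Process, Queue

def track_worker(in_q, out_q):
    """Turn detected boxes into center points until a None sentinel arrives."""
    while True:
        box = in_q.get()
        if box is None:            # sentinel: shut this worker down
            out_q.put(None)
            break
        x1, y1, x2, y2 = box
        out_q.put(((x1 + x2) // 2, (y1 + y2) // 2))

def start_tracker(in_q, out_q):
    """Run the tracking stage in its own process so inference never blocks on it."""
    p = Process(target=track_worker, args=(in_q, out_q), daemon=True)
    p.start()
    return p
```

The ONNX conversion itself is a one-line Ultralytics export (`model.export(format="onnx")`); the multiprocessing structure above is what brought the frame rate back up.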

Raspberry Pi 5 and Arduino Uno communication

We programmed our Raspberry Pi 5 to set up serial communication with the Arduino Uno, which controls the deterrent. Our computer vision software takes the coordinates of a deer detected in a frame, calculates the center point, and sends those coordinates to the Arduino microcontroller.
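On the Pi side, the send step might look like this sketch. The `pyserial` package, the port name, the baud rate, and the `x,y\n` message format are all assumptions for illustration:

```python
def format_coords(cx, cy):
    """Encode a center point as a newline-terminated 'x,y' message."""
    return f"{cx},{cy}\n".encode("ascii")

def send_center(cx, cy, port="/dev/ttyACM0", baud=9600):
    """Write one coordinate message to the Arduino over USB serial."""
    import serial  # pyserial; port and baud are hypothetical settings
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(format_coords(cx, cy))
```

A simple newline-terminated text format like this is easy to parse on the Arduino side with its built-in serial reads.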

Deterrence control

The Arduino is the main controller for our laser. It receives raw pixel coordinates from the Raspberry Pi 5 and converts those pixels into stepper-motor movements.

When the Arduino receives these coordinates, it means a detection has occurred, so it turns on the laser. The laser is only active when a valid signal is present and turns off when no tracking is happening. The Arduino is programmed to receive the coordinates, parse them, convert them to steps, and move the laser to the corresponding position in the frame.
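The pixel-to-steps conversion amounts to mapping a pixel's offset from the frame center to an angle, then to a signed motor-step count. Our Arduino does this in its own firmware; sketched here in Python with made-up calibration numbers (frame width, field of view, and steps per revolution are assumptions, not our measured values):

```python
def pixels_to_steps(cx, frame_width=640, fov_deg=60.0, steps_per_rev=1600):
    """Map an x pixel to a signed step offset from the frame's center column.

    Assumes the camera's horizontal field of view spans the frame linearly,
    and that the motor (with microstepping) takes `steps_per_rev` steps per
    full revolution. All defaults are hypothetical calibration values.
    """
    angle_deg = (cx - frame_width / 2) / frame_width * fov_deg
    return round(angle_deg * steps_per_rev / 360)
```

With these example numbers, a deer centered in the frame (x = 320) needs zero steps, while one at the right edge maps to a positive offset; the same mapping applies on the vertical axis.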

We calibrated the system to ensure the laser accurately points at the detected deer. We also used limit switches to home the motors to a known center position so the camera and laser remain aligned along the same axis.
