Thanks to my diverse experiences away from home, I have developed resilience through curiosity, creativity and continuous learning. Here follows a selection of professional and personal projects. Feel free to contact me by email if you would like to know more.


Projects

Camera Focus

Introduction

LMI Technologies designs and builds 3D computer vision solutions for industrial automation. Below is a short overview of the process used to determine camera position and adjust focus.


Disclaimer

Due to intellectual property constraints, this is a condensed overview of the project and its results; many details were intentionally omitted.

Case

Creating a platform to automate the characterization and focus adjustment of stereoscopic sensors.

The following process shows how to adjust focus and determine camera position with a single camera.

Note

The process is performed once for each camera of the stereoscopic pair.

Process

The sensor to be aligned is a stereoscopic camera with projected structured light. The goal is to align the left lens with respect to the right against the reference target. The OpenCV code is available on my GitHub.

  • Camera calibration and intrinsic values
  • Locating camera in space

The Zhang calibration approach:
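
A minimal sketch of Zhang's method with OpenCV, assuming a chessboard target; the pattern size and image folder are placeholders:

```python
import glob

import cv2
import numpy as np

# Zhang's method: observe a planar target at several orientations,
# then solve for the intrinsics and distortion coefficients.
pattern = (9, 6)  # inner chessboard corners (assumed pattern size)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Camera matrix K, distortion coefficients, and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```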

Reference target:

In an industrial environment, the target must remain visible at all times. Here I have simulated camera movement with respect to the target.

Tilt test

Rotation test

Outcome

The target is positioned at a fixed, known distance from the stereoscopic sensor. The position is determined using the sensor’s intrinsic and extrinsic parameters with respect to the target, and feature sharpness serves as a measure of focus.
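
A minimal sketch of both measurements: cv2.solvePnP recovers the extrinsics from matched target points, while the variance of the Laplacian is used here as a stand-in sharpness metric, not necessarily the one used in the project:

```python
import cv2


def camera_pose(obj_pts, img_pts, K, dist):
    """Extrinsics of the camera with respect to the target."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec  # rotation matrix and translation vector


def sharpness(gray_roi):
    """Variance of the Laplacian: higher means sharper features."""
    return cv2.Laplacian(gray_roi, cv2.CV_64F).var()
```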


Automated Quality Inspection

Introduction

Addev Profom Inc. specializes in the converting and distribution of high-performance materials such as adhesives, technical materials and chemicals.

Case

Reducing quality inspections on die cut gaskets using automation and computer vision.

The product is a foam-based gasket with an adhesive backing, delivered to the client on a silicone-coated liner.

The die-cutting process does not allow for the removal of excess material at the indicated locations (red circle).

Gasket on liner

However, gaskets for automotive clients cannot be delivered with excess material remaining. A significant amount of time was spent manually inspecting finished products and removing any excess when necessary. Here is the approach taken to reallocate manual labor to more valuable tasks.

The Cell

The design is an aluminium extrusion robotic cell with a conveyor belt.

The 4-axis robot moves to the location where excess material is found on the product.

The vision camera is located on the roof of the cell and is considered the origin of the system.

Detection process

The robot is positioned relative to the camera.

The steps involved in determining whether excess material is present:

Calibration & Alignment

  • Camera calibration (contrast, white balance)
  • Camera XY-alignment with respect to the robot
  • Robot world calibration with respect to the camera

Video detection

  • Circle detection algorithm within the camera FOV
  • Find the grey level inside each circle (255 → no excess, 0 → excess)
  • Get the circle centre if excess is found
  • If excess is found, stop the conveyor belt (see the sketch below)
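
A minimal sketch of this detection loop, assuming a grayscale frame cropped to the region of interest; the Hough parameters and grey-level threshold are illustrative:

```python
import cv2
import numpy as np


def find_excess(gray):
    """Detect gasket holes and flag any that still contain material."""
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
        param1=100, param2=30, minRadius=10, maxRadius=60)
    hits = []
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            mask = np.zeros_like(gray)
            cv2.circle(mask, (x, y), r, 255, -1)
            level = cv2.mean(gray, mask=mask)[0]
            if level < 128:          # dark circle -> excess material left
                hits.append((x, y))  # centre is reported to the robot
    return hits                      # non-empty -> stop the conveyor belt
```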

Camera region of interest

Excess removal

Disclaimer

Each application is different and the tool is designed accordingly. Due to intellectual property, the tooling is not shareable.

With the conveyor belt stopped, the robot goes to the reported location and pokes the excess out.

A 3D-printed bracket coupled with a poking system was designed and added to the end of the shaft.


Volumetric Video

Introduction

I enjoy playing music and recording my performances. To capture different angles, I usually move a camera to multiple locations. However, this requires precise playing for continuity. This project aims to use a single device to film in 3D, eliminating the need for multiple recordings.

Generate a point cloud stream using a custom stereoscopic camera

Creating a video where the user can change the camera perspective. This is achieved by scanning the scene with a stereoscopic vision system.

Notes

Code and further explanation available on GitHub upon request.

Setup

Two webcams are used, with constant, uniform lighting and a plain backdrop.

Stereo calibration is performed using code similar to that shown in Camera Focus.
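
A minimal sketch of the stereo step, assuming each camera has already been calibrated individually as in Camera Focus:

```python
import cv2


def stereo_rig(obj_pts, left_pts, right_pts, K1, d1, K2, d2, size):
    """Rig pose (R, T) and rectification from per-camera calibrations."""
    # Keep each camera's own intrinsics; solve only for the rig pose.
    rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    # Rectification aligns epipolar lines with image rows so block
    # matching only has to search along one row per pixel.
    R1, R2, P1, P2, Q, *_ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    return R1, R2, P1, P2, Q
```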

Depth

The block matching algorithm is used to calculate the disparity map.

From the disparity map and the camera intrinsic values we deduce the depth map: for a rectified pair, depth = focal length × baseline / disparity.
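
A minimal sketch of the disparity-to-depth computation on a rectified grayscale pair; the matcher parameters are illustrative:

```python
import cv2

# Block matching compares small windows along each row of the rectified
# pair; the horizontal shift of the best match is the disparity.
stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)


def depth_map(left_gray, right_gray, focal_px, baseline_m):
    # StereoBM returns fixed-point disparities scaled by 16.
    disp = stereo.compute(left_gray, right_gray).astype("float32") / 16.0
    disp[disp <= 0] = 0.1                 # avoid division by zero on gaps
    return focal_px * baseline_m / disp   # depth = f * B / d
```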

Seen from the left camera, depth map and original video composited.

Details

My computer is very slow and doesn’t have a GPU, so the depth map was rendered using Google Colab cloud GPUs.

Point Cloud

From the depth map, I obtain a point cloud with OpenCV’s 3D re-projection method.
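
A minimal sketch of the re-projection using cv2.reprojectImageTo3D, which takes the disparity map and the Q matrix produced by the rectification step:

```python
import cv2


def to_point_cloud(disparity, Q, left_bgr):
    """Re-project disparity into 3D; colour each point from the left view."""
    points = cv2.reprojectImageTo3D(disparity, Q)
    colors = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2RGB)
    mask = disparity > disparity.min()  # drop invalid/background points
    return points[mask], colors[mask]
```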

Observation

Block matching does not perform well on low-contrast features.

Many points are missing due to occlusion and low-quality cameras.

My face scan

Improved point cloud

Observation

The point cloud is cleaner than the first attempt, with fewer gaps.

More scans

Guitar close up

Distant scan

Next steps

  • Write an algorithm to stream .ply files (see the sketch below)
  • Fill gaps in the point cloud
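
For the first item, a minimal sketch that writes one point cloud frame as an ASCII .ply file; a stream would emit one numbered file per frame (hypothetical naming scheme):

```python
import numpy as np


def write_ply(path, points, colors):
    """Write one frame of the point cloud, e.g. frame_0042.ply."""
    header = (
        "ply\nformat ascii 1.0\n"
        f"element vertex {len(points)}\n"
        "property float x\nproperty float y\nproperty float z\n"
        "property uchar red\nproperty uchar green\nproperty uchar blue\n"
        "end_header\n")
    data = np.hstack([points, colors])
    with open(path, "w") as f:
        f.write(header)
        np.savetxt(f, data, fmt="%f %f %f %d %d %d")
```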

OCR Video

Introduction

Reusing the homography code written for Camera Focus to perform live text recognition from a video.

Live video text recognition with OCR result overlay.

Setup

Extracting text from a book cover with Tesseract.

Detail

Code available on my GitHub

Isolating the book from the background and applying polar (Hough) line detection.
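
A minimal sketch of the recognition and overlay step, assuming the pytesseract wrapper; the confidence threshold is illustrative:

```python
import cv2
import pytesseract


def ocr_overlay(frame):
    """Run Tesseract on a frame and draw boxes around recognized words."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    data = pytesseract.image_to_data(gray, output_type=pytesseract.Output.DICT)
    for i, word in enumerate(data["text"]):
        if word.strip() and float(data["conf"][i]) > 60:
            x, y, w, h = (data[k][i] for k in ("left", "top", "width", "height"))
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 1)
            cv2.putText(frame, word, (x, y - 4),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame
```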

Result

There is an underlying bug in the code that creates these glitches; I’m still working on it. The text recognition algorithm also needs tuning.



Pose Estimation

Introduction

Using pose estimation to determine movement accuracy.

Real-time pose estimation applied to dance moves.

Setup

Dance moves are repeated twice to establish a comparison, and the relative movement is measured using OpenCV and MediaPipe.

Detail

Code available on my GitHub

Red lines represent the displacement with respect to the attempted pose (frames on the right).
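
A minimal sketch of the comparison, assuming MediaPipe Pose on two time-aligned frames; averaging the per-landmark distances is my simplification:

```python
import cv2
import mediapipe as mp
import numpy as np

pose = mp.solutions.pose.Pose(static_image_mode=False)


def landmarks(frame_bgr):
    """Normalized (x, y) pose landmarks for one frame, or None."""
    res = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not res.pose_landmarks:
        return None
    return np.array([(lm.x, lm.y) for lm in res.pose_landmarks.landmark])


def displacement(ref_frame, attempt_frame):
    """Mean per-landmark distance between the reference and the attempt."""
    a, b = landmarks(ref_frame), landmarks(attempt_frame)
    if a is None or b is None:
        return None
    return float(np.linalg.norm(a - b, axis=1).mean())
```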

Graph

Accuracy over time

This is still a work in progress.

Result

Dance moves and relative position

Next

  • Find the subject’s centre of gravity
  • Adjust for multiple angles

Plant Stop Motion

Introduction

This is an IoT project with a Raspberry Pi and a Philips Hue light: using computer vision to autonomously take photos of a growing plant.

Taking pictures of a plant to create stop-motion footage.

Setup

Multiple pictures are taken automatically during the day.

Detail

Code available on my GitHub

Pictures are taken according to local sunrise and sunset times. A Bluetooth remote-controlled light is turned on after sunset. Minor image processing and histogram matching are applied.
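
A minimal sketch of the scheduling and colour normalization, assuming the astral and scikit-image packages; the location and reference frame path are placeholders:

```python
from datetime import date

import cv2
from astral import LocationInfo
from astral.sun import sun
from skimage.exposure import match_histograms

# Shoot only between local sunrise and sunset (coordinates assumed).
city = LocationInfo("Montreal", "Canada", "America/Toronto", 45.5, -73.6)
times = sun(city.observer, date=date.today(), tzinfo=city.timezone)
sunrise, sunset = times["sunrise"], times["sunset"]

# Histogram matching: align each frame's colours to a reference frame.
reference = cv2.imread("frames/frame_0000.jpg")  # hypothetical first frame


def normalize(frame):
    return match_histograms(frame, reference, channel_axis=-1)
```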

Raw and processed images

Result

It was difficult to automate the exposure adjustment due to sunlight coming from all directions in the afternoon. The first frame is the colour balance reference for all the frames.
