Slow progress on my SLAM by blob-finding project

Jacob David C. Cunningham
4 min read · Jan 15, 2023


A navigation platform using a Pi Zero 2 W, 8MP Pi Camera 2, VL53L0X, TFmini-S and an MPU9250.

GitHub Repo

I have been working on this project in my free time; it's one of those projects that just drags on since it has so many tasks that are not trivial to me. I recently had to relearn y = mx + b lol, in order to later find the intersection point of two lines.

Why

In a previous post I showed the blob-finding process. A month ago I built a web-based remote control with a live video stream. I found a tutorial on streaming the Pi camera feed to the web and paired it with WebSocket control.

Note: the purple patch is from the TFmini-S "lidar"

Anyway, I discovered how tiny the FOV is, which led me to use a panorama. Initially I tried to do the stitching myself, but it is not easy. There is more to it than just scaling, rotating, and trying to align the images; more than likely you also have to deform/skew each image, say around a sphere, to get the points to line up. In the end I went with what OpenCV provides (Stitcher) and it is amazing (generally it just works).
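For reference, the basic Stitcher call is only a few lines. This is a minimal sketch, not the code from my repo; the file names are placeholders:

```python
# Minimal OpenCV Stitcher sketch: feed it overlapping frames, get a panorama back.
import cv2

images = [cv2.imread(f"frame_{i}.jpg") for i in range(5)]  # placeholder names

stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("pano.jpg", pano)
else:
    print(f"Stitching failed with status {status}")
```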

In my case, since my environment has many random tiny objects, more images are better; I am taking 15. Unfortunately, my current method for producing these panoramas is slow. Ideally you would take snapshots from a live video feed, but I'm capturing images individually, stitching the top, middle, and bottom sets, rotating them, and then stitching those results together to arrive at the final panorama.
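That two-pass flow looks roughly like the sketch below. It assumes a 3×5 grid of frames with placeholder file names, and rotates the row panoramas so they can be fed back through the same stitcher:

```python
# Sketch of the row-then-column stitching pass described above.
import cv2

def stitch(images):
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, pano = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitch failed with status {status}")
    return pano

# Stitch each horizontal set (top, middle, bottom) of 5 frames on its own.
rows = []
for r in range(3):
    frames = [cv2.imread(f"row{r}_col{c}.jpg") for c in range(5)]  # placeholders
    rows.append(stitch(frames))

# Rotate the row panoramas 90 degrees, stitch them as another horizontal
# sequence, then rotate the result back upright.
rotated = [cv2.rotate(p, cv2.ROTATE_90_CLOCKWISE) for p in rows]
final = cv2.rotate(stitch(rotated), cv2.ROTATE_90_COUNTERCLOCKWISE)
cv2.imwrite("panorama.jpg", final)
```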

I need a way to find the center of the camera FOV for the beam pointing, so I added a red crosshair, which gets deformed by whatever the Stitcher algorithm decides to do. You could dig into the Stitcher code and track the center pixel of image [0, 2], but I'm not doing that here.
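Drawing the crosshair itself is the easy part; something along these lines, where the frame name and arm length are just placeholders:

```python
# Overlay a red "X" through the center of the frame before stitching, so its
# (deformed) location can be recovered later in the panorama.
import cv2

img = cv2.imread("row0_col2.jpg")  # center frame (placeholder name)
h, w = img.shape[:2]
cx, cy = w // 2, h // 2
arm = 300  # half-length of each diagonal in pixels (arbitrary here)

red = (0, 0, 255)  # BGR
cv2.line(img, (cx - arm, cy - arm), (cx + arm, cy + arm), red, 2)
cv2.line(img, (cx - arm, cy + arm), (cx + arm, cy - arm), red, 2)
cv2.imwrite("row0_col2_crosshair.jpg", img)
```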

The 15 images on the left become the pano on the right; you can see how the red crosshair is deformed. All lines start out very long, like the bottom-left corner line.

What I’ve been working on lately is finding the intersection of that red cross. It is not a perfect solution, and you may have too much red in an image… in which case you could try other colors (a longer process).

The presence of this crosshair does affect the pano generation process, so the result shifts.

I know that this crosshair appears mostly in the upper half of the panorama and that it starts about 30% in from the left. I take 50 px column slices from 30% of the width across to 70%. Once you have the points found in those slices, you can get the slope-intercept formulas for the two diagonals and figure out where they intersect.
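The math for that last step is the y = mx + b from earlier. A small sketch, assuming you already have two sampled points on each diagonal (e.g. the red pixel centroids from two of the column slices); the coordinates below are made up:

```python
# Fit y = m*x + b through two points on each diagonal, then solve for the crossing.
def line_from_points(p1, p2):
    """Return (m, b) for the line y = m*x + b through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

def intersect(line1, line2):
    """Solve m1*x + b1 = m2*x + b2 for the intersection point."""
    (m1, b1), (m2, b2) = line1, line2
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# Hypothetical pixel coordinates sampled from the slices
down_right = line_from_points((900, 120), (1200, 310))   # "\" diagonal
up_right   = line_from_points((900, 400), (1200, 205))   # "/" diagonal

cx, cy = intersect(down_right, up_right)
print(f"camera FOV center in the panorama: ({cx:.0f}, {cy:.0f})")
```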

That gives you the center of the camera FOV at image [0, 2] out of the 15.

Anyway, the process is just a slog. More info below. It’s possible this whole thing doesn’t work in the end; there are a lot of sources of error and it’s not real time.

The tasks to complete follow this flow:

  • Find the center of the red X automatically
  • Do the HSV mask and blob finding automatically (a rough sketch of this step follows the list)
  • Point the depth-probing beams at the centroids of those blobs, accounting for areas already scanned
  • Keep the 3D point coordinates and apply motion from the IMU as the robot moves
  • Build a map / correct it from different perspectives, then use those known object locations to plan future motions
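For the HSV mask / blob-finding item, this is roughly what that step could look like with OpenCV contours; the HSV bounds and minimum blob area are placeholders that would need tuning for the actual scene:

```python
# Rough sketch: HSV mask -> contours -> blob centroids to aim the depth sensors at.
import cv2
import numpy as np

img = cv2.imread("panorama.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder range: keep anything reasonably saturated and bright,
# i.e. "object-like" against a dull background.
lower = np.array([0, 80, 80])
upper = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Clean up speckle, then find connected blobs and their centroids.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

centroids = []
for c in contours:
    if cv2.contourArea(c) < 100:  # ignore tiny specks (threshold is arbitrary)
        continue
    m = cv2.moments(c)
    centroids.append((int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])))

print(f"{len(centroids)} blob centroids found: {centroids}")
```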

I don’t know when I’ll complete this; I have other projects and a job, but it’s fun to work on.

Update

I did succeed in writing the find-crosshair-center code described above. I ended up using white. The lines are drawn 2 px wide, but they end up at least 3 px wide when diagonal.

See the implementation here.

I tried red, green, and white for the diagonal color.

Videos
