Janky navigation with blob finding and depth probing (devlog)

Jacob David C. Cunningham
8 min read · Aug 21, 2022


This post is related to my current SLAM project. See the old one if interested. Note: I am a noob.

Background

I have developed a newer navigation unit which has some sensors, namely a camera, a ToF ranging sensor, a lidar sensor, and an IMU. So now I have to take that and figure out my surroundings and navigate. The IMU is at the center of the pan/tilt intersection; supposedly it'll take the place of rotary encoders.

Note I am coming from the web dev industry so I have no background in this, what I’m doing is just freeform building/look up what I need as I go.

This navigation unit then steers the robotic platform it sits on, which it is electronically separate from, e.g. it controls it over a WebSocket. See more info here. Both units are on the same WiFi network.

Example image depth probing

Here is a photo from the navigation unit on my desk

Note the faint pink lidar pattern near the middle of the frame under the window sill. The TFmini-s is apparently not a “true” lidar but that’s what it is marketed as.

In this case, what I’m trying to depth probe is the monitor stand (to the right, black vertical rectangle).

Here you can see the dimensions of my setup. This did change (I moved it in from the left by 2" so it's 5" away instead of 7"). My expected distance when the sensors point/probe at the monitor stand is 13.75" after moving the nav unit. I moved it because the FOV of the camera was narrower than I expected, so the stand was barely in frame.

Now we apply a mask to find the parts of the image that are black. My initial attempt failed outright, so I looked around and found this. It yields this for my picture. Now I'll split it into quadrants.

This is actually not right since the white box and the yellow Post-it note are both gone
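For reference, the mask step looks roughly like this in OpenCV. The value cutoff (everything darker than about V=60 counts as black) is my own guess that I'm tuning by eye, not the exact numbers from the post I found.

```python
import cv2
import numpy as np

img = cv2.imread("desk.jpg")             # photo from the nav unit (hypothetical filename)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# "Black" = any hue/saturation, low value. The upper V bound is a guess.
lower = np.array([0, 0, 0])
upper = np.array([179, 255, 60])
mask = cv2.inRange(hsv, lower, upper)    # 255 where the pixel counts as black

cv2.imwrite("mask.png", mask)
```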

Then I point the ranging sensor beam(s) at the centroids of the black regions.

Then I translate that into the servos moving/pointing in those directions to sample. I'll confirm the pixel aiming is working by drawing a point on the picture at the determined centroid with code.
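A minimal sketch of that confirmation step, assuming the mask from above: compute a centroid from image moments and draw a dot on the photo at that pixel so I can eyeball whether the aim point makes sense.

```python
import cv2

# mask: binary "black pixels" image from the previous step; img: the original photo
M = cv2.moments(mask, binaryImage=True)
if M["m00"] > 0:
    cx = int(M["m10"] / M["m00"])   # centroid x in pixels
    cy = int(M["m01"] / M["m00"])   # centroid y in pixels
    cv2.circle(img, (cx, cy), 8, (0, 0, 255), -1)   # filled red dot at the aim point
    cv2.imwrite("aim_check.png", img)
```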

Using a 3D modeling program helps for visualizing stuff.

SketchUp. The cyan rectangle would be the monitor stand

Here you can see the difference between these two sensors which is the size of the beam FOV.

ToF VL53L0X is 25 degrees FOV while the TFmini-s (right) is 2 degrees

You can see how the larger FOV can be problematic, because if you're trying to gauge the depth of a specific item and something else gets hit/registers a return, then your depth map is not correct. But the large FOV is a good immediate check for whether something is close.

There is a problem I have not mentioned which is depending on how far away something is, the angle to point at it changes… and that’s not something you can easily tell from a single picture.

You can see here what I mean. These two rectangles are the same size on the same axis, but the angle from a central point (bottom middle) to their midpoints changes. I believe this is perspective. There is a relationship there that I can figure out.

This is the test environment (me being lazy)

Eventually it will be the nav unit on the robot (higher off the ground plane), and I will do a bunch of tests on the floor… later on it will send the images it analyzes to a web interface and show me the applied masks so I can see why it chose to turn in a certain direction.

Overall the plan will be to check various HSV ranges to make sure the entire photo is covered.

Something like this… where the cropped images are divided and then scanned. Find the colors, blobs, centroids, depth map… then treat them as cubes in space. The IMU tracks the robot's progress (accel/vel/pos). My apt thankfully (?) has crap everywhere, so navigation has to be better than just an ultrasonic bump sensor on a perfectly flat wall.
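What I mean by covering the whole photo is roughly the sketch below: OR together the masks from a handful of HSV ranges and check what fraction of the cropped image got claimed. The ranges here are placeholder guesses, not calibrated values.

```python
import cv2
import numpy as np

# Placeholder HSV ranges (lower, upper) -- guesses, not calibrated
HSV_RANGES = {
    "black":  ((0, 0, 0),    (179, 255, 60)),
    "white":  ((0, 0, 200),  (179, 40, 255)),
    "yellow": ((20, 80, 80), (35, 255, 255)),
}

def coverage(quadrant_bgr):
    """Fraction of the quadrant claimed by at least one color bucket."""
    hsv = cv2.cvtColor(quadrant_bgr, cv2.COLOR_BGR2HSV)
    combined = np.zeros(hsv.shape[:2], dtype=np.uint8)
    for name, (lo, hi) in HSV_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        combined = cv2.bitwise_or(combined, mask)
    return cv2.countNonZero(combined) / combined.size
```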

I could also “cheat” where I only scan things at a certain distance. I expect them to be idk a foot or less away. The angle would be calibrated for that hmm…

The process I’m describing here is manual, but it will be automatic using 1D/2D histogram plots (headless though). How well it works is something else. I’m struggling to keep motivation here. Saturday for me.
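The automatic version would probably start with something like this: a headless 1D hue histogram via cv2.calcHist, then take the busiest bins as candidate color ranges to scan. Just a sketch of the idea, not the final logic.

```python
import cv2
import numpy as np

def dominant_hue_bins(bgr, bins=18, top_k=3):
    """Return the top_k hue bins by pixel count (each bin spans 180/bins hue units)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180]).flatten()
    order = np.argsort(hist)[::-1][:top_k]
    width = 180 // bins
    return [(int(b) * width, (int(b) + 1) * width) for b in order]
```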

Also, at this time I don't have the servo commands down yet, e.g. "turn right 30 degrees". I gotta figure that out.

Actual implementation

The middle image uses the THRESH_BINARY thresholding technique; the last image is what I would expect to be grouped

Anyway this is going to need work, this is like a super crappy version 1.
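For reference, a minimal sketch of that THRESH_BINARY step; the cutoff of 60 is a guess I'm tuning by eye.

```python
import cv2

# img: the photo (or one quadrant of it) as a BGR array
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# THRESH_BINARY: pixels brighter than the cutoff go to 255, darker go to 0,
# so the black regions I care about end up as the 0 side of the result.
_, thresh = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
```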

Then you find the centroids, here just taking the centers of the red rectangles.
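In code that's roughly the sketch below: find contours in the mask, take their bounding boxes, and use the box centers. The 5,000-pixel minimum area is the same cutoff I talk about next and is very much a tunable guess.

```python
import cv2

def blob_centers(mask, min_area=5000):
    """Bounding-box centers of blobs in a binary mask, skipping small noise."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:
            centers.append((x + w // 2, y + h // 2))
    return centers
```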

Translate to angles; of course this is a quadrant, so the sensor bed plane is going to point to the top-left. I tried the full image and it's not good… it misses large patches of black pixels. I'll try using the quad method for now. In theory, running it in parallel would save time, but this is not a fast processing platform. It's okay to stop, take a photo (several seconds), analyze (several more seconds), etc…

So in one quadrant a minimum grouping of 5,000 pixels works well, but in this quadrant it's bad.

So that'll have to get adjusted dynamically; it'll do something like "majority of pixels checked? no… change parameters" until every part is analyzed. Of course you have to include other colors as well, not just black, though in this particular test environment I'm specifically looking for the black monitor stand. The absolute worst case you can check is the entire pixel set, e.g. width * height of the quadrant.
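The dynamic adjustment I have in mind is roughly this loop: keep relaxing the minimum blob size until the accepted blobs explain most of the masked pixels. The 80% target and the halving schedule are arbitrary placeholders.

```python
import cv2

def adjust_until_covered(mask, target=0.8):
    """Relax the minimum blob size until accepted blobs explain most of the masked pixels."""
    mask_pixels = max(cv2.countNonZero(mask), 1)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    min_area = 5000
    while min_area >= 1:
        kept = [c for c in contours if cv2.contourArea(c) >= min_area]
        explained = sum(cv2.contourArea(c) for c in kept) / mask_pixels
        if explained >= target:
            return kept, min_area
        min_area //= 2   # "majority of pixels checked? no... change parameters"
    return contours, 1
```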

The result above is still a fail though; there's clearly a giant area of black but it's not picking it up.

Ooh I have an idea, I can put padding around it. Technically cheating but… dang no improvement. I’m gonna have to improve this it sucks.

Idk, I could split it even further down. I'm thinking about this approach where you just drop any pixels that are not black. It takes a long time to iterate over pixels though, unless you use some tricks with numpy or something bitwise.
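This is the kind of numpy trick I mean: one vectorized comparison over the whole array instead of a Python loop over every pixel, which on a slow board is the difference between milliseconds and many seconds.

```python
import numpy as np

# gray: 2D uint8 array (grayscale photo or quadrant)
# Slow way (don't do this on a small board): a Python loop over every pixel
# for y in range(gray.shape[0]):
#     for x in range(gray.shape[1]):
#         if gray[y, x] >= 60:
#             gray[y, x] = 0
# Fast way: one vectorized comparison; anything not "black" gets dropped to 0
black_only = np.where(gray < 60, gray, 0).astype(np.uint8)
```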

Here's a pretty neat trick I learned a long time ago for SketchUp. You can import a photo of something and scale the photo to reality using the Tape Measure tool.

So I take a photo of a piece of paper (known size) and bring it into SketchUp, then I can do the FOV stuff in 3D. Of course I have the photo aligned by its base above, but the photo should go down more. I also imported it flipped above, but I can just rotate the image on the Z axis. The visible lidar patch is what I can use to vertically align the image with the camera. On the TFmini-s the smaller hole is the projector.

So it’s more like this.

Oh yeah… the other thing is you can just take the corners of the picture and “equidistantly” join them to the center point of the camera lens.

This is also what I can use to physically model perspective and figure out a formula to guess what angle something is at.

Yeah, there's a relationship: at 16.5" away, the photo above spans 22" by 16.5". There are gonna be some translations… if a blob in the photo is some size in pixels, you could estimate its size in inches provided you have a known value like depth.
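As a rough worked example of that relationship, using the numbers above (which assume my FOV scaling is right): at 16.5" of depth the frame spans about 22", so each pixel is 22 / image_width inches at that depth, and the scale grows linearly with depth. The 640-pixel width is an assumption, not my actual capture setting.

```python
import math

REF_DEPTH_IN = 16.5      # measured: at this depth...
REF_FRAME_W_IN = 22.0    # ...the camera frame spans about this many inches
IMAGE_W_PX = 640         # assumed capture width

def blob_width_inches(blob_w_px, depth_in):
    """Estimate a blob's real width from its pixel width and a known depth."""
    inches_per_px_at_ref = REF_FRAME_W_IN / IMAGE_W_PX
    return blob_w_px * inches_per_px_at_ref * (depth_in / REF_DEPTH_IN)

# The same measurements also imply a horizontal FOV of about 67 degrees
hfov_deg = 2 * math.degrees(math.atan((REF_FRAME_W_IN / 2) / REF_DEPTH_IN))
```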

Servo pointing (depth probe)

Assuming I have found some blobs to point at, I’ll come up with some functions to point the servo at the desired area.
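They'll probably look something like this: turn a centroid's offset from the image center into pan/tilt angles with a simple pinhole model. The ~67 degree horizontal FOV comes from the paper measurement above; the vertical FOV and image size are assumptions I still need to verify.

```python
import math

IMAGE_W_PX, IMAGE_H_PX = 640, 480   # assumed capture size
HFOV_DEG = 67.0                     # from the paper/SketchUp measurement above
VFOV_DEG = 53.0                     # assumed, not measured yet

def pixel_to_pan_tilt(cx, cy):
    """Pan/tilt angles (degrees) to aim the sensor bed at pixel (cx, cy).
    Positive pan = right, positive tilt = up; ignores lens distortion."""
    fx = (IMAGE_W_PX / 2) / math.tan(math.radians(HFOV_DEG / 2))   # focal length in px
    fy = (IMAGE_H_PX / 2) / math.tan(math.radians(VFOV_DEG / 2))
    pan = math.degrees(math.atan((cx - IMAGE_W_PX / 2) / fx))
    tilt = math.degrees(math.atan((IMAGE_H_PX / 2 - cy) / fy))
    return pan, tilt
```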

Another example photo with SketchUp determined angle

I’m not gonna spend too much time explaining this because I need more time to derive it as a function that I can just call.

I scaled the image based on known dimensions (might be incorrect with regard to FOV/lens) and then rotated the sensor assembly until it hit the same axis as the center of the orange box.

At this time I don't have a function to turn an angle into the right PWM start/end range, but I'll guess for this case using a visual reference.

The sensor assembly is moved to the left, but it would go directly to the middle/where the angles come from

Yeah… I thought it would be like 100 PW units = 10 degrees but it does not seem like it.

Also the depth measurements are not good… the side-by-side offset sensors either increase or decrease the measured distance when measuring at an angle, so I'll have to factor that out. Each sensor is also better or worse at close versus long range, and their error goes both ways, e.g. the measured distance is too short or too far… so yeah, that'll need work.

Rotate from 0 to 16 degrees

Well… 1500 µs is supposedly the center pulse width for a servo, so if 16 degrees in a direction is roughly hmm… well here it's 210 µs more, but the value… 1670 µs seems close, idk, could be coincidence, I'm not a PWM expert.
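If the usual hobby-servo convention holds (1500 µs center, roughly 1000 to 2000 µs across the travel), the mapping would be something like this sketch. The scale factor is a placeholder I still have to calibrate against the real servo.

```python
CENTER_US = 1500        # typical hobby-servo center pulse width, in microseconds
US_PER_DEGREE = 11.1    # placeholder scale, needs calibration

def angle_to_pulse_us(angle_deg):
    """Pulse width (microseconds) for an angle offset from center."""
    return round(CENTER_US + angle_deg * US_PER_DEGREE)

# Example: 16 degrees -> about 1678 us with this placeholder scale
```

For what it's worth, ~11 µs per degree is what you'd get if the servo swept roughly 90 degrees across the 1000 to 2000 µs band, and 16 degrees then lands around 1678 µs, close to the 1670 I saw, but that could easily be coincidence until I calibrate it properly.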

There’s no real conclusion here, just yeah… the project goes on.

Video
