Closed loop using computer vision (idea)

Hello! I’m not sure if this idea makes sense, but I wanted to share it. I’m sure I’m not the first, but I’d be curious whether anybody has tried this for 3D printers or robots.

Instead of closing the loop at the motor, you could close it at the end effector of your robot or 3D printer.

A high-speed camera (e.g. on a Raspberry Pi) could watch how the end effector / hot end of the 3D printer moves and determine its position with relative precision (how precise?).
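As a minimal sketch of what "determine the position" could mean, here is a thresholded-centroid marker finder using only NumPy. The threshold, frame size, and marker layout are all illustrative assumptions; in practice you would more likely use a fiducial system like OpenCV's ArUco markers, which are far more robust to lighting and perspective.

```python
import numpy as np

def marker_centroid(frame, threshold=128):
    """Locate a bright registration marker in a grayscale frame.

    Returns the (x, y) centroid of all pixels above `threshold`,
    or None if no marker is visible.
    """
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

# Synthetic 100x100 frame with a bright 5x5 marker at x=40..44, y=60..64
frame = np.zeros((100, 100), dtype=np.uint8)
frame[60:65, 40:45] = 255

cx, cy = marker_centroid(frame)
print(cx, cy)  # → 42.0 62.0
```

Running this on every camera frame gives a stream of observed end-effector positions that the controller can compare against the commanded path.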

So this would take into account any imperfections of the gantry, like those from linear rails, pulleys, or belts, or any give in the structure.

The controller software could have a kinematic model of the structure, “learn” these imperfections, and anticipate and compensate for them.
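One simple form this "learning" could take is fitting an affine error model from calibration data: command a set of XY targets, observe where the camera says the tool actually ended up, then pre-distort future commands to cancel the error. The `gantry` function below is a made-up stand-in for a real machine with a slight skew and offset; everything here is a sketch, not a specific controller's API.

```python
import numpy as np

# Hypothetical calibration data: commanded XY targets vs. the positions
# the camera actually observed (the gantry sags and skews slightly).
commanded = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)

def gantry(p):
    """Stand-in for the real, imperfect machine (skew + offset)."""
    skew = np.array([[1.0, 0.02], [0.0, 0.98]])
    return p @ skew.T + np.array([0.5, -0.3])

observed = gantry(commanded)

# Fit an affine model: observed ≈ commanded @ M + b (least squares)
X = np.hstack([commanded, np.ones((len(commanded), 1))])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)
M, b = coef[:2], coef[2]

# To land on a desired point, invert the model and command the
# pre-distorted target instead.
target = np.array([80.0, 80.0])
pre = np.linalg.solve(M.T, target - b)
print(gantry(pre))  # lands on [80. 80.] up to float error
```

A real controller would refit this model continuously from the vision feedback rather than from a one-off calibration pass, and the error model would likely need to be nonlinear (belt stretch, backlash) rather than affine.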

This is what humans and animals do all the time. Our kinematics are far from precise, yet with our wobbly hardware we are capable of far exceeding what a robot could do, because we learned to compensate. For a machine, this would only be possible using computer vision and some sort of clever algorithm or machine learning.

This might enable the design of much simpler and cheaper-to-build robots to do work we spend a lot of money on now.

If the end effector has registration points that are reasonably large (e.g. 10 pixels) then the positioning can be relatively precise. I’m not sure about the lag between image acquisition, analysis, and motor commands, though.

For 3D printers, which are already exceedingly accurate, I find it hard to imagine how this is going to be an improvement, unless you’ve got some kind of new ‘wobbly’ platform that is far cheaper and faster to produce, or that could be entirely printed by the printer itself with nothing else required except the extruder, servos, and controller silicon.

This does sound like exactly what I want for a robot weeder. The tractor design gets a lot simpler if it doesn’t matter whether it wobbles a lot going over a rough field: you simply correct the position of the weed-puller end effector based on the same input that identifies the target weed.
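The correction described here is essentially visual servoing. A toy sketch, under the assumption that the camera sees both the weed and the puller tip in the same image (so the error signal is just their pixel offset), with random wobble injected each frame:

```python
import numpy as np

weed = np.array([320.0, 240.0])   # weed position in camera pixels
tool = np.array([100.0, 400.0])   # puller tip, initially far off
gain = 0.5                        # proportional gain

for step in range(20):
    # Rough-field wobble perturbs where the tip actually sits this frame
    wobble = np.random.default_rng(step).normal(0.0, 2.0, size=2)
    error = weed - (tool + wobble)   # measured entirely in image space
    tool += gain * error             # proportional correction

print(f"residual error: {np.linalg.norm(weed - tool):.1f} px")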


You’re probably right: a $200 printer, if properly tuned, is already plenty precise, and the problems lie more in the mechanical engineering of extruder speed and mechanisms. So economies of scale have probably made this idea not economically viable for normal 3D printing.

For 6-DOF 3D printing with (cheap) robot arms this could still be interesting. Maybe even required?

E3D made a toolhead for their toolchanger that combines additive and subtractive manufacturing using a tiny milling bit, which lets you make precise parts. Maybe advances there could be used to make linear rails and parallel robots that may have good repeatability but not precision.

But the proper use case is probably in robotics with SLAM.
