Hello! I’m not sure if this idea makes sense, but I wanted to share it. I’m sure I’m not the first to think of it, but I’d be curious whether anybody has tried this on 3D printers or robots.
Instead of closing the control loop at the motor, you could try to close it at the end effector of your robot or 3D printer.
A high-speed camera (e.g. on a Raspberry Pi) could watch how the end effector / hot end of the 3D printer moves and determine its position with relative precision (how precise, though?).
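To make the idea concrete, here is a minimal sketch of the tracking step, assuming the end effector carries a bright fiducial marker and we get grayscale frames as NumPy arrays (the threshold and marker setup are illustrative assumptions, not a real printer setup):

```python
import numpy as np

def marker_centroid(gray, threshold=200):
    """Estimate the sub-pixel image position of a bright fiducial marker.

    `gray` is a 2-D array of pixel intensities (one camera frame).
    Pixels at or above `threshold` are treated as the marker; their
    intensity-weighted centroid gives a sub-pixel (x, y) estimate.
    """
    mask = gray >= threshold
    if not mask.any():
        return None  # marker not visible in this frame
    ys, xs = np.nonzero(mask)
    weights = gray[ys, xs].astype(float)
    # Weighting by intensity pushes the estimate below one-pixel granularity.
    return float(np.average(xs, weights=weights)), float(np.average(ys, weights=weights))

# Synthetic frame: a bright 3x3 blob centered at x=12, y=7.
frame = np.zeros((20, 20))
frame[6:9, 11:14] = 255
print(marker_centroid(frame))  # → (12.0, 7.0)
```

The intensity-weighted centroid is the simplest way to get sub-pixel resolution; a real pipeline would add lens-distortion correction and a mapping from pixels to bed coordinates.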
This would take into account any imperfections of the gantry, such as play in the linear rails, pulleys, and belts, or any give in the structure.
The controller software could have a kinematic model of the structure, “learn” these imperfections, and then anticipate and compensate for them.
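One very simple version of that "learning" is to fit a correction from commanded positions to camera-observed positions and then invert it when issuing commands. The sketch below assumes a purely linear distortion (scale error, skew, offset) and uses simulated calibration data; real gantry errors would need a richer model, but the idea is the same:

```python
import numpy as np

# Hypothetical calibration data: commanded XY positions vs. where the
# camera actually saw the end effector (all numbers are illustrative).
commanded = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
# Simulated gantry with a slight scale error, a little skew, and an offset.
true_A = np.array([[1.02, 0.01],
                   [0.00, 0.98]])
true_b = np.array([0.5, -0.3])
observed = commanded @ true_A.T + true_b

# Fit observed ≈ A @ commanded + b by least squares.
X = np.hstack([commanded, np.ones((len(commanded), 1))])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)
A_fit, b_fit = coef[:2].T, coef[2]

def compensate(target):
    """Find the command that the fitted model maps onto `target`."""
    return np.linalg.solve(A_fit, np.asarray(target, float) - b_fit)

# Sending the compensated command lands on the intended point
# despite the simulated distortion.
cmd = compensate([50, 50])
print(cmd @ A_fit.T + b_fit)
```

A lookup table or a small neural network could replace the linear fit for position-dependent errors like belt stretch or rail wobble.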
This is what humans and animals do all the time. Our kinematics are far from precise, yet with our wobbly hardware we far exceed what a robot could do, because we have learned to compensate. For a machine, this would only be possible with computer vision and some sort of clever algorithm or machine learning.
This might enable the design of much simpler, cheaper robots to do the work we currently spend a lot of money on.
If the end effector has registration points that are reasonably large (e.g. 10 pixels across), then the positioning can be fairly precise. I’m not sure about the lag between image acquisition, analysis, and motor commands, though.
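Some back-of-envelope numbers for the precision and lag questions, with every input an assumption rather than a measurement (field of view, frame rate, and processing times are just plausible values for a Raspberry Pi class camera):

```python
# Resolution: how many mm does one pixel cover?
fov_mm = 220          # assumed camera field of view spanning the bed, in mm
sensor_px = 1640      # assumed horizontal resolution of the camera mode
mm_per_px = fov_mm / sensor_px
print(f"{mm_per_px:.3f} mm per pixel")

# A ~10 px marker with intensity-weighted centroiding can plausibly
# reach ~0.1 px repeatability under good lighting:
subpixel_noise_px = 0.1
print(f"~{subpixel_noise_px * mm_per_px * 1000:.0f} micron positioning noise")

# Latency budget: one frame interval plus processing and command delay.
frame_ms = 1000 / 90  # assumed 90 fps camera mode
processing_ms = 5     # assumed vision pipeline time
command_ms = 2        # assumed command transmission / step generation delay
print(f"~{frame_ms + processing_ms + command_ms:.1f} ms loop latency")
```

Under these assumptions you get roughly 0.13 mm per pixel, on the order of tens of microns of noise after sub-pixel centroiding, and a loop latency near 20 ms, which suggests the camera loop is better suited to slow correction of drift than to high-bandwidth motor control.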