Yes, I am a bit afraid that the term is not really correct. In industrial planning "Takt" is also used in English, but presumably in electronics it is not …
So "clock line" would have been more correct, but that also misses part of the meaning here: it is really an additional time reference, used neither for the operation of the processor nor for the serial line (although in theory one of those could perhaps be misused for this purpose).
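To make the idea concrete, here is a minimal sketch (in Python, with purely hypothetical names, not any real firmware's API) of how such an extra time reference could work: the master toggles a shared sync line at a modest rate, each axis controller counts the edges, and buffered commands are executed at agreed tick numbers, so the axes stay coordinated even though the serial link itself is asynchronous.

```python
from collections import deque

class AxisController:
    """Hypothetical axis controller driven by a shared sync ("Takt") line."""
    def __init__(self):
        self.tick = 0            # edges counted on the shared sync line
        self.buffer = deque()    # (execute_at_tick, position) pairs
        self.position = 0.0

    def receive(self, execute_at_tick, position):
        """Command arrives over serial at an arbitrary, uncontrolled time."""
        self.buffer.append((execute_at_tick, position))

    def on_sync_edge(self):
        """Called on every edge of the shared sync line."""
        self.tick += 1
        while self.buffer and self.buffer[0][0] <= self.tick:
            _, self.position = self.buffer.popleft()

# Two axes receive their commands at different times over serial, but both
# execute on the same tick, keeping the motion coordinated.
x, y = AxisController(), AxisController()
x.receive(3, 1.0)
y.receive(3, 2.0)
for _ in range(3):
    x.on_sync_edge()
    y.on_sync_edge()
print(x.position, y.position)
```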
As I said, I would assume a good frequency for normal operation would be around or below 1 kHz.

[quote=“dlang, post:19, topic:83”]
I’m not worried about the need for something to break things down into chunks that take the same amount of time on all axes. This is a very similar problem to how to draw curves. Every CNC machine I know already has to break any curve down into a series of straight line segments. It’s either done at the CAM stage and shows up in the g-code as a ton of tiny line segments, or if the g-code uses G2/G3, it’s done in the g-code interpreter. In either case, it’s not a problem unless you have pauses between steps.
[/quote]
You are right with this statement. The reasons I would try to avoid this are the following. First (and most obvious), the number of commands to be issued increases drastically, especially if you implement higher-order acceleration schemes (which I find an appealing concept, although their usefulness is neither proven nor disproven for me yet). More importantly, I think it makes sense to do the transformation from arc to line segments at the lowest possible level, to preserve as much accuracy as possible. For controllers unable to do this it could be done at the master, but on the ODrive, for example, it should be done on the controller itself, because I assume the controller knows best how accurately it can follow.
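As a sketch of what that lowest-level transformation looks like (a hypothetical helper, not any real firmware's API): the arc is split so that the chord of each segment deviates from the true arc by at most a given tolerance, using e = r·(1 − cos(θ/2)) for the chord error of a segment spanning angle θ. The tighter the tolerance the controller can afford, the more segments it generates, which is exactly why doing it at the lowest level preserves accuracy.

```python
import math

def arc_to_segments(cx, cy, radius, start_angle, end_angle, max_chord_error):
    """Split an arc into line-segment endpoints with bounded chord error."""
    # largest segment angle whose chord deviates from the arc by at most e:
    # e = r * (1 - cos(theta/2))  ->  theta = 2 * acos(1 - e / r)
    theta_max = 2.0 * math.acos(max(1.0 - max_chord_error / radius, -1.0))
    sweep = end_angle - start_angle
    n = max(1, math.ceil(abs(sweep) / theta_max))   # number of segments
    return [(cx + radius * math.cos(start_angle + sweep * i / n),
             cy + radius * math.sin(start_angle + sweep * i / n))
            for i in range(n + 1)]

# quarter circle of radius 10 with 0.01 chord tolerance
pts = arc_to_segments(0.0, 0.0, 10.0, 0.0, math.pi / 2, 0.01)
print(len(pts))
```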
On this point I may have been a bit imprecise again. You are totally right that the actual planning cannot happen at two different levels, and it should be at the highest possible one. Actually, I lack experience on how effectively this could be achieved with only a serial interface and without the mentioned external time reference. The idea of the look-ahead on the controller side (you can also call it a buffer) is more focused on the layout of the position or speed regulator, allowing it to compensate for errors differently depending on the future conditions. (This probably does not really work with a single-layer standard PID scheme, but could be more helpful when higher-order or multi-layer regulators are applied.)
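A toy illustration (Python, idealized velocity-mode plant, made-up gains) of what the buffered look-ahead buys the regulator: with the future setpoints known, a feedforward term derived from the commanded rate removes the tracking lag that a purely error-driven term leaves behind on a ramp.

```python
def simulate(kp, kff, dt=0.001, n=1000):
    """Track a 100 units/s ramp with an ideal velocity-mode plant."""
    sp = [100.0 * i * dt for i in range(n + 1)]    # buffered trajectory
    pos = 0.0
    for k in range(n):
        err = sp[k] - pos
        v_ff = (sp[k + 1] - sp[k]) / dt            # known only with look-ahead
        pos += (kp * err + kff * v_ff) * dt        # P term + rate feedforward
    return sp[n] - pos                             # remaining tracking lag

lag_reactive = simulate(kp=20.0, kff=0.0)    # error-driven only
lag_lookahead = simulate(kp=20.0, kff=1.0)   # plus feedforward from the buffer
print(lag_reactive, lag_lookahead)
```

With a pure P term the ramp is followed with a constant lag of rate/kp; once the buffered rate is fed forward, that lag essentially disappears.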
Actually, some time ago I was following some discussions in the Mechaduino forum, but I haven't for a while, so maybe they have overcome these issues by now. It was quite obvious, though, that the PID tuning was extremely hard, and from what I understood there were mainly two issues causing it:
1. The AMS encoder used seems to have quite high latency, at least depending on the output mode chosen (this is only my impression; I never tested or compared them, although they look very promising on the specs).
2. The change of nominal position with step/dir input. Since especially the I-term evolves over time and is quite helpful for long-term stable positioning, it is often reset when the nominal changes, as part of the anti-windup strategy. If a known rate of change for the nominal existed, the strategy could change on this point, allowing for more effective regulation and use of all terms. With step/dir, however, you can only derive the rate of change from the past, which may be completely wrong.
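To illustrate that second point with a toy simulation (Python; made-up gains, idealized velocity plant with a constant load disturbance, not any real controller's code): a PI loop that resets its integrator whenever the nominal changes, as it effectively must with step/dir, keeps a much larger tracking error on a moving setpoint than one that knows the commanded rate, keeps its integrator, and feeds the rate forward.

```python
class PIAxis:
    """Minimal PI position loop; names and gains are illustrative."""
    def __init__(self, kp, ki, dt, reset_on_change):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.reset_on_change = reset_on_change   # step/dir-style anti-windup
        self.integ, self.last_sp = 0.0, 0.0

    def step(self, sp, pos, rate_ff=0.0):
        if self.reset_on_change and sp != self.last_sp:
            self.integ = 0.0                     # nominal changed: drop I-term
        self.last_sp = sp
        err = sp - pos
        self.integ += err * self.dt
        return self.kp * err + self.ki * self.integ + rate_ff

def run(ctrl, rate_ff, rate=50.0, dist=10.0, dt=0.001, n=2000):
    pos = 0.0
    for k in range(n):
        sp = rate * k * dt                       # moving nominal, every sample
        u = ctrl.step(sp, pos, rate_ff)
        pos += (u - dist) * dt                   # velocity plant + load
    return sp - pos                              # final tracking error

# step/dir style: setpoint jumps, integrator reset, rate unknown
err_stepdir = run(PIAxis(20.0, 50.0, 0.001, reset_on_change=True), rate_ff=0.0)
# known trajectory: integrator kept, commanded rate fed forward
err_known = run(PIAxis(20.0, 50.0, 0.001, reset_on_change=False), rate_ff=50.0)
print(err_stepdir, err_known)
```

The reset version never accumulates the I-term it needs against the constant load, so it drags a persistent error; with the known rate, all terms stay in play, as argued above.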