Realtime mixed-mode control of multiple ODrive boards

Hi there,

I am building a drum-playing robot that will eventually incorporate 10 motors or so. I plan to use either a Teensy or a Raspberry Pi to decode MIDI signals and distribute instructions to the appropriate motor. I have been researching the implementation details, and I’m hoping to solicit your advice on a few issues. I also have a couple of specific ODrive questions.

  1. Time Synchronization: There are a few issues here. First off, the time it takes a drum stick to travel depends on the speed (loudness) of the hit. I plan to characterize these times using a piezo sensor on the drum head, and look up (or calculate) the appropriate delay for each motion. But I don’t think that maintaining the timing on the MIDI controller and broadcasting move commands over USB will provide sufficient timing accuracy, especially as the number of motors increases; humans can perceive timing differences between sounds separated by only a few milliseconds. Is there any clock-synchronization capability in the existing firmware? Has anyone successfully implemented coordinated, multi-axis movement with more than one ODrive? Do you know what the existing latency is, and how much it fluctuates? Does UART offer any advantages over USB for realtime control?

  2. I haven’t found much documentation on the “feed-forward” arguments in the Python library, but I suspect they may be appropriate for the motion control I’m looking for. In particular, I want to drive the drum stick towards the drum head at a specific speed (velocity control), send the motor into free rotation just before it hits the drum head to allow for a natural bounce (torque goes to zero at a specific position), and then retract the stick to the starting position (position control). The documentation encourages picking one control mode (Position/Velocity/Torque). Am I correct in understanding that position mode offers control over all three, with the feed-forward values setting limits on torque and velocity when moving to a specific position? If a feed-forward value of 0 means “no limit”, how can I set the motor to free-wheel (0 torque)? Can control modes be changed quickly between moves?

I’m a few years into what I’m calling “my 10-year project,” so I know I’ve got a lot of work ahead of me. I was well on my way to writing my own Teensy-based motor controller for a brushed-DC implementation when I discovered ODrive, and couldn’t pass up the opportunity to use high-torque, quiet, brushless motors. As for my background: I’ve got some embedded programming experience, but I’m not an expert. I have the summer off (my day job is teaching physics), so I’m hoping to make good progress on this robotics project over the next few months. My ODrive just arrived today, so my gears are turning and I’m excited to get to work!


This sounds like a really awesome project! Thanks for taking the time to write out your question so clearly.

There are a few different ways to go about implementing this, on a spectrum from quick-and-dirty to what will work really well. Since you mention this is a long-term project for you, I will elaborate on what I think will be the best solution.

Summary: Give the ODrives the parameters for a mixed-mode motion profile to follow over USB or UART, then fire the motion off with a digital sync signal.

I would recommend executing the motion on the ODrive rather than streaming realtime positions over a communication interface. The update rate will be much higher and jitter-free, hence the motion will be much smoother. The downside is that you have to write a trajectory tracker yourself. We do have a planned feature to add an official one, but we are not there yet.
I think that once you get used to the organization of the code and have the dev environment up and running, running the trajectory directly in the controller object on the ODrive is actually easier than doing it on an external device over a communication interface.

As you describe it, you have a mixed-mode motion profile that you need to execute. To start simple, we can use a constant-acceleration motion profile and parameterise it with things such as start position and acceleration; the start time is given by the digital sync signal. We can simply say that the retracting motion is exactly the same as the attacking one but reversed in time, or you could give it different parameters if you like.
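To make that concrete, here is a minimal sketch of evaluating a constant-acceleration profile at time t after the sync edge. The struct and field names are purely illustrative, not existing firmware symbols:

```cpp
// Minimal sketch of a constant-acceleration attack profile.
// All names here are illustrative, not firmware symbols.
struct ConstAccelProfile {
    float start_pos;   // stick resting position
    float accel;       // constant acceleration towards the head [units/s^2]
    float impact_pos;  // position at which the stick meets the drum head

    // Position and velocity setpoints t seconds after the sync edge.
    void evaluate(float t, float& pos, float& vel) const {
        pos = start_pos + 0.5f * accel * t * t;
        vel = accel * t;
        // Clamp once the impact position is reached; the retract phase
        // could reuse the same profile mirrored in time.
        if ((accel > 0.0f && pos > impact_pos) || (accel < 0.0f && pos < impact_pos)) {
            pos = impact_pos;
        }
    }
};
```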

You were correct in identifying that we can kind of do both velocity control and torque control while in position control mode. However, the feed-forward terms are not limits: they are feed-forwards of the setpoint. That is, if you have time-varying setpoints, they let the controller anticipate how the setpoint is moving, and hence stay on track rather than lag behind the fed trajectory.
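In other words, a rough paraphrase of the cascaded control law looks like the sketch below. This is not the exact firmware code; the variable names just mirror the ODrive parameters to show where the feed-forward terms enter:

```cpp
// Simplified paraphrase of the cascaded position controller (not the exact
// firmware code). The feed-forward terms are added on top of the feedback
// terms, they do not limit anything.
float position_control_law(float pos_setpoint, float vel_setpoint, float current_setpoint,
                           float pos_estimate, float vel_estimate,
                           float pos_gain, float vel_gain, float vel_integrator) {
    // Velocity feed-forward: tells the velocity loop how fast the setpoint is moving.
    float vel_des = vel_setpoint + pos_gain * (pos_setpoint - pos_estimate);
    // Current (torque) feed-forward: anticipates the torque the trajectory demands.
    float Iq = current_setpoint + vel_gain * (vel_des - vel_estimate) + vel_integrator;
    return Iq;  // commanded motor current, proportional to torque
}
```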

You can smoothly interpolate between position control and velocity control by ramping pos_gain between nominal and zero. More importantly, you can ramp between tracking your trajectory and full compliance (zero torque) by setting vel_integrator_gain to zero for all time and ramping vel_gain between nominal and zero.
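As a sketch of how that blending might look in code (illustrative names only; blend = 1 means full position tracking, blend = 0 means zero torque):

```cpp
// Blend between position tracking and full compliance by scaling the gains.
// Illustrative only; vel_integrator_gain is assumed to be left at zero for
// this application, so no integrator wind-up has to be handled.
struct Gains { float pos_gain; float vel_gain; };

Gains blend_gains(float blend, float nominal_pos_gain, float nominal_vel_gain) {
    return Gains{ blend * nominal_pos_gain, blend * nominal_vel_gain };
}
```

During the hit you would ramp blend from 1 down to 0 just before impact so the stick can bounce, and back up to 1 for the retract.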

Here is a picture:

The tasks we need to do are:

  1. Add the relevant parameters to controller.hpp. You need to add them as member variables, and also add the protocol definitions so they are available for writing. Look at pos_setpoint for an example.
  2. Make a new function Controller::track_trajectory, and call it at the beginning of Controller::update. This runs with a period of current_meas_period, a variable that’s available in the code; it’s about 100 microseconds (roughly 10 kHz). You can use a simple counter variable and this period to form your timebase.
  3. In Controller::track_trajectory implement all the logic required to generate the trajectories shown in the picture. You can directly write to pos_setpoint, vel_setpoint, and vel_gain.
  4. Make a bool variable called trajectory_active or something like that, and set up a callback function, say trajectory_sync_cb(). Then use GPIO_subscribe to register this callback to a rising edge of a GPIO pin. Check out the index pulse callback code for reference (see also the sketch after this list).
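Putting the steps above together, a rough skeleton of the firmware side might look like the following. The state and function names follow the suggestions above, but the exact member names, the hook point in Controller::update, and the GPIO_subscribe signature may differ in your firmware version; treat this as a shape to adapt, not drop-in code.

```cpp
// Rough, self-contained skeleton of the trajectory tracker described above.
// In the real firmware this state and these functions would live on the
// Controller object (controller.hpp / controller.cpp).
#include <cstdint>

struct TrajectoryState {
    // Step 1: parameters exposed over the protocol so the host can set them.
    float start_pos = 0.0f;          // stick resting position
    float accel = 0.0f;              // attack acceleration
    float impact_pos = 0.0f;         // position where torque should drop to zero
    bool trajectory_active = false;  // step 4: set true by the sync callback
    uint32_t tick = 0;               // timebase counter, incremented each update
};

// Steps 2 and 3: called at the start of Controller::update, once per
// current_meas_period (~100 us). Writes the setpoints and gain for this tick.
void track_trajectory(TrajectoryState& s, float current_meas_period,
                      float& pos_setpoint, float& vel_setpoint,
                      float& vel_gain, float nominal_vel_gain) {
    if (!s.trajectory_active)
        return;

    float t = s.tick * current_meas_period;  // seconds since the sync edge
    s.tick++;

    // Attack phase: constant-acceleration profile (see the sketch earlier).
    pos_setpoint = s.start_pos + 0.5f * s.accel * t * t;
    vel_setpoint = s.accel * t;
    vel_gain = nominal_vel_gain;

    // Near impact_pos: ramp vel_gain (and pos_gain) towards zero so the stick
    // can bounce freely, then ramp back up and retract to start_pos.
    // (Ramp and retract logic omitted for brevity; when the motion is done,
    //  set s.trajectory_active = false and s.tick = 0.)
}

// Step 4: sync input. Registered to a rising edge on a GPIO pin, e.g. with
// GPIO_subscribe(port, pin, GPIO_PULLDOWN, trajectory_sync_cb, &state);
// check the index pulse callback for the exact signature in your version.
void trajectory_sync_cb(void* ctx) {
    TrajectoryState* s = static_cast<TrajectoryState*>(ctx);
    s->tick = 0;
    s->trajectory_active = true;
}
```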

I understand this is a metric crapton of info to take in, and it may take you a while to fully digest it. I’d be happy to schedule a call to go over some of the ideas and advice in detail.

Cheers.


Wow! Thank you for the detailed and encouraging response.

I agree that the trajectory tracker as you describe it should be straightforward to implement. I’ve started looking through the code and have a good idea of what I need to do, and I will have some time to write code next week. I don’t have any questions at this point, but I will likely take you up on your offer of a phone call once I have specific questions about implementation details and best practices.

My current uncertainties are application-specific: I need a better understanding of the ideal motion for the range of loud and soft hits and how to switch between them. For example, a human player keeps the sticks close to the drum when playing quietly, but perhaps the robot would be better off maintaining a consistent start position, approaching the drumhead at a consistent rate, and decelerating before impact to produce a quiet hit. I need to decide whether the trajectory should vary with playing speed (as a human player’s does), or whether it’s acceptable to simply vary the repeat rate of a single least-time trajectory.

Furthermore, MIDI is designed to control instruments that can produce tones of arbitrary length (note on followed by note off). This isn’t a problem for playing a stored MIDI file, as I can look ahead to the next hit and anticipate how much time is available for the whole hit motion. But I also want the capability to use the robot as a live instrument, receiving input from a MIDI device like an iPad or keyboard. It’s difficult to optimize for this situation since you never know what’s coming next. I also need to accommodate the variations introduced by the different motors, drumsticks, and bounce responses of each drum to ensure simultaneous impact across the whole instrument.

These design considerations all require careful thought. So, I’m doing some deep thinking. I want to have a good sense of how the trajectory/trajectories can be efficiently and appropriately parameterized before implementing them in code.

As I said, I don’t think I have any immediate questions for you. Thanks again for your extraordinarily helpful response. My long post here is both an update on my progress and a tool for organizing the various loose ends in my head. I’m excited to get to work.


I guess the hardest part is getting used to the organization of the ODrive code, because I personally found it more comfortable to calculate the trajectory and then stream positioning commands. But it still feels inappropriate; this should be implemented in firmware logic.

I wish I understood the ODrive’s internal logic well enough to port my work into the firmware and offer a pull request.

Edit: thanks Oskar for this very detailed answer.

Hi, has anyone made progress towards mixed-mode control with the ODrive? I’m also starting a project that will involve an “impact” sequence: velocity control towards the impact, torque control during, and position control to return to the original position.

Are Oskar’s suggestions still the recommended way to approach this?