You’re quite right, there is no /fundamental/ reason that a motor drive cannot be implemented in pure software on some general-purpose computer. The main reason we use microcontrollers, as you guessed, is real-time performance, particularly low latency (the time it takes to react to an input, typically on the order of microseconds for a motor control loop) and determinism, i.e. low “jitter”: the standard deviation of the time a given piece of code might take to execute, given interrupts and other factors.
High-throughput general-purpose desktop CPUs like x86 are utterly horrendous for latency, because of their heavy use of caching, virtual memory, branch prediction, low-level firmware interrupts, and other “features” which mean that whatever software you write, the OS, firmware, or hardware can screw up your timing. They may be fast in terms of throughput (though power-hungry and expensive), but they are terrible for latency, and that latency is unpredictable, i.e. they have high jitter.
To make matters worse: modern Intel x86 chips have a nasty thing called the Intel Management Engine, a background OS baked into the chip that you cannot disable, and which may interrupt your code even if it is written at “kernel” level. It basically means that x86 is useless for any hard real-time application.
As to why Intel ME exists: Ask the CIA.
Whereas on a microcontroller like an STM32, we can be certain that the assembly instructions we compile and load will never be interrupted or preempted by anything we didn’t configure ourselves, so we can know for certain how long each piece of code will take to run. That is essential for a digital control system implementing PID control loops, but it is also safety-critical for any software-controlled inverter: if the software were to lock up at any point, it could start a fire.
There’s also a second reason: an STM32F4 microcontroller, as used in ODrive, costs about USD $3, whereas even a low-end Intel chip costs around $50.
Third reason: Intel chips don’t really have GPIO. They are so power-hungry that any spare pins are used for power delivery, which can reach hundreds of amps on a high-end Intel chip. So any GPIO is abstracted via PCI Express to smaller microcontrollers running their own firmware, much like the one on an ODrive.
That said, it’s possible that you could control FET gates and current-sense ADCs from your Intel chip, but you’d be wasting GHz-capable pins on kHz signals, and there’s always the possibility (and likelihood) that the system-level complexity of the x86 platform will come back to bite you.