Can anyone explain why encoders aren’t the same as buttons?
Seems to me that a step encoder is simply a knob that sends one digital (non-analogue) pulse per click when turned in one direction, and a different pulse when turned the other way. Why is it that a lot of MIDI brains can't accept encoders as simply two digital buttons? (Or three buttons, if the encoder has a momentary push button.)
Is it to do with the acceleration applied when you turn an encoder quickly? I thought that was handled in software?
Physically (electronically) there is no reason two digital button inputs can't read an encoder, but the code needs to be changed to decode the inputs correctly. The developer then needs some way for the user to switch between the inputs acting as buttons and as encoders, without adding much cost to the product, and that is also easy for non-techs to understand. Those last two are the problems…
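To illustrate the "decode the inputs correctly" part: a minimal sketch of how firmware could read the encoder's two contacts as if they were two button inputs, using a standard quadrature transition table. The function and type names here are hypothetical, not from any particular MIDI brain's firmware:

```c
#include <stdint.h>

/* Maps (previous AB state << 2 | current AB state) to a step of
   -1, 0, or +1. Invalid transitions (both bits changing at once,
   e.g. from contact bounce) decode as 0, so glitches are ignored. */
static const int8_t QUAD_TABLE[16] = {
     0, -1, +1,  0,
    +1,  0,  0, -1,
    -1,  0,  0, +1,
     0, +1, -1,  0
};

typedef struct {
    uint8_t prev;      /* last 2-bit AB state */
    int32_t position;  /* accumulated step count */
} EncoderState;

/* Feed the current logic levels of the two "button" inputs.
   Returns -1, 0, or +1 for this transition (sign convention is
   arbitrary; swap A and B to flip it). */
int encoder_step(EncoderState *e, int a, int b)
{
    uint8_t cur = (uint8_t)((a << 1) | b);
    int step = QUAD_TABLE[(e->prev << 2) | cur];
    e->prev = cur;
    e->position += step;
    return step;
}
```

The hardware reads are identical to polling two buttons; the only difference is this little state machine interpreting the *order* in which the contacts change, which is what encodes the direction of rotation.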
“It’s a pulse switch which only contacts in the direction of rotation.
so now there is a solution to the 'Encoder' problem. You can send either two repeated keystrokes or repeat two separate joystick button inputs.
you get Repeated Joystick Button Presses based on the direction you turn the switch. Opens up all kinds of possibilities doesn’t it.”
found here
looks like these guys have the same problems as we have…
Most encoders spit out digital data in a form known as Gray code.
Gray code is a kind of scalable digital format in which adjacent values differ by only one bit; among other things, it lets the receiving end of an absolute encoder know where in the 360° rotation the shaft is facing.
It's quite useful here and there, like for knowing where you left a control from one session to the next, etc.
For most purposes though, especially ours, it's just a pain in the back.
I guess one could fairly simply create a bit of logic to bypass this, though. I might look into it, but business calls as always.
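That "bit of logic" really is small. For an absolute encoder reporting its position as parallel Gray-code bits, the conversion to and from plain binary is a few XORs; this is a generic sketch, not tied to any specific encoder part:

```c
#include <stdint.h>

/* Convert a Gray-coded value (e.g. the parallel output of an absolute
   encoder) to plain binary. Each binary bit is the XOR of all Gray
   bits at or above that position. */
uint16_t gray_to_binary(uint16_t gray)
{
    uint16_t bin = gray;
    while (gray >>= 1)
        bin ^= gray;
    return bin;
}

/* The reverse: binary to Gray. Adjacent binary values produce Gray
   codes differing in exactly one bit, which is why a read taken
   mid-transition is off by at most one position. */
uint16_t binary_to_gray(uint16_t bin)
{
    return bin ^ (bin >> 1);
}
```

That single-bit-change property is the whole point of Gray code on an encoder: you never get the wild misreads that plain binary gives when several bits change at slightly different instants.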