Early versions of the T-Stick DMI included only one type of inertial sensor: 3-axis accelerometers, used for sensing both movement and orientation with respect to gravity. The major gestures and postures used are shake, jab, tilt (elevation), and roll. Of these, jabbing has the closest relationship with timing controllability, since it is used as a sound-excitation gesture; future revisions will likely detect jabs on-board the microcontroller and send the data immediately. Shaking is calculated by leaky integration of the combined accelerometer signals after the gravity component has been removed with a high-pass filter.
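As a rough illustration, the shake extraction might be sketched like this: a one-pole high-pass filter removes the slowly-varying gravity component, and a leaky integrator accumulates what remains. This is a hypothetical single-axis sketch with illustrative coefficients, not the T-Stick's actual code.

```c
#include <math.h>

static float accel_lp = 0.0f;   /* low-pass state (gravity estimate) */
static float shake = 0.0f;      /* leaky-integrated shake energy */

float update_shake(float accel_raw)
{
    const float alpha = 0.1f;   /* how quickly the gravity estimate tracks */
    const float leak  = 0.99f;  /* integrator leak factor */

    /* one-pole low-pass tracks gravity; subtracting it high-passes the signal */
    accel_lp = alpha * accel_raw + (1.0f - alpha) * accel_lp;
    float highpassed = accel_raw - accel_lp;

    /* leaky integration of the motion energy */
    shake = leak * shake + fabsf(highpassed);
    return shake;
}
```

Holding the sensor still drives the output toward zero, while oscillating input makes it grow, which is the behaviour wanted for a shake parameter.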
This situation was far from ideal. The elevation and roll information was extremely noisy, since accelerometers sense acceleration due to movement of the interface as well as the contribution from gravity. In processing, a balance must be struck between fast response with noisy data and slow response with stable data. This means that orientation cannot be mapped to obvious sound parameters without obvious unintended consequences. The orientation and movement of the T-Stick are visually very striking, and as such beg to be mapped to similarly dramatic sonic parameters.
Additionally, the accelerometers are unable to measure azimuth – rotation around a vertical axis in the world frame (around the gravity vector for example). This means that jabbing and shaking gestures are calculated in the instrument frame, and jabbing towards the audience, for example, cannot be detected as distinct from any other jab at the same elevation.
This limitation was revisited when an opportunity arose to develop a new version of the T-Stick hardware for use in controlling sound spatialization in a piece for dancer and ‘cellist (Duo pour un violoncelle et un danseur by Sean Ferguson and Isabelle Van Grimde/Corps Secrets). For this piece, calculation of the pointing direction of the T-Stick was required, as the dancer was supposed to steer sounds around a ring of loudspeakers by pointing. A three-axis magnetometer (magnetic-field sensor) was integrated into the sensing hardware, resulting in the T-Stick measuring two different vectors: gravity and the Earth’s magnetic field (in the absence of a stronger magnetic field or shielding). Assuming that the two vectors are different (that we are not at one of the Earth’s magnetic poles), there is only one possible orientation of the T-Stick that corresponds to a given measurement, and so elevation, roll, and azimuth can all be calculated. It is fairly simple to apply a 2-axis rotation derived from accelerometer data to the magnetic field vector to derive azimuth.
Of course this assumes perfect sensors – in real life magnetometers are notoriously noisy, since they are affected by any local disturbances in the Earth’s magnetic field caused by electrical cables, transformers, metal building structures, and natural anomalies. Also, as mentioned above, the rotation of the gravity vector using accelerometer data is complicated by accelerations due to other forces. Misestimation of the gravity vector will result in a faulty rotation of the magnetic field vector, and thus a faulty calculation of azimuth. Even worse, affordable magnetometers are very slow – the magnetometer used for the Spatialization T-Stick (or “SpaT-Stick”) has a maximum sample rate of 5 Hz. For the purposes of the piece, smoothed polar trajectories were drawn between orientation samples to prevent the spatialized sound from jumping from one position to another, but if the azimuth was rotated through more than 180 degrees in less than 200 ms the trajectory would be interpolated in the opposite direction to that performed. This restricted the speed of the gestures permitted to the performer.
The next step in the evolution of this system is presented here. It involves the addition of rate-gyroscopes, changing to an improved representation of orientation, and the exploration of adaptive filters for combining sensor signals (sensor fusion), reducing noise, and improving latency.
The first addition to the system is another sensor: the rate-gyroscope. This sensor measures angular velocity around one axis, so we will use three of them oriented orthogonally. For prototyping purposes, all of the sensors (accelerometers, magnetometers, and gyroscopes) have been connected on a prototyping breadboard along with an Arduino Mini for sampling (figure below). The gyroscopes used here operate much faster than the magnetometers, and could obviously be used alone for mapping synthesizer control to angular velocity. Integrating the gyroscope data, however, can provide an estimate of orientation separate from that calculated using the accelerometer and magnetometer data, and unaffected by either linear acceleration or local magnetic field distortion.
This combination of 3-axis accelerometers, magnetometers and gyroscopes in a single package is commonly called an Attitude and Heading Reference System (AHRS). These systems are used for aircraft stabilization and navigation, automobile navigation systems, and for inertial motion capture systems for virtual and augmented reality (G. F. Welch, “History: The use of the Kalman filter for human motion tracking in virtual reality,” Presence, vol. 18, pp. 72–91, February 2009).
Unfortunately, common vibrating-mass gyroscopes suffer from bias drift, especially when the ambient temperature changes. Bias is the voltage which represents zero velocity, and which is subtracted from each sample to calculate the true velocity – if this value changes, then our velocity integration will also be biased, and will slowly increase or decrease even if the sensor remains stationary. Gyroscopes of much higher quality exist, and are used for dead-reckoning in Inertial Navigation Systems (INS) in missiles for example. The prices of these sensors, however, make them unsuitable for prototyping digital musical instruments.
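To get a feel for the scale of the problem, here is a small hypothetical illustration: a bias misestimate of just 0.1 degrees per second, integrated at 100 Hz, accumulates 6 degrees of error per minute on a perfectly stationary sensor.

```c
/* Integrate a constant bias error over time, as happens when a
   stationary gyro's zero-velocity voltage is misestimated.
   All numbers here are illustrative. */
float drift_after(float bias_error_dps, float sample_rate_hz, float seconds)
{
    float angle = 0.0f;
    float dt = 1.0f / sample_rate_hz;
    long n = (long)(seconds * sample_rate_hz);
    for (long i = 0; i < n; ++i)
        angle += bias_error_dps * dt;  /* stationary sensor: only bias remains */
    return angle;
}
```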
A simple and obvious solution is to combine the integrated gyroscope data and the orientation calculated from the other sensors directly, using the gyroscope data to provide fast response and correcting it whenever a new magnetometer measurement is available. This approach still exhibits several problems beyond the abruptness of the corrections: the orientation information used for the correction is still noisy, and is adversely influenced by linear accelerations of the interface and by local magnetic field distortion.
Representing Orientation using Quaternions
Traditionally the most common method for representing orientation is Euler angles, which describe the orientation using three successive rotations. Different conventions exist for the order of these rotations; Euler’s convention was to rotate first about the z-axis, then about the new x-axis, and finally about the new z-axis. A problem called gimbal lock can arise when using 3-angle representations of rotation, preventing a rotation from being properly executed.
Unit quaternions are another way of representing orientation, discovered by Sir William Rowan Hamilton in 1843. One advantage of the quaternion representation is that gimbal lock is not possible. Another advantage is the ease with which successive rotations can be applied. For a current orientation quaternion qold and a rotation quaternion qrotation to be applied, the rotation can be applied in the world frame by
qnew = qrotationqold
or in the body frame by
qnew = qoldqrotation
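For reference, the quaternion (Hamilton) product used in these compositions can be sketched in C, assuming a (w, x, y, z) component ordering:

```c
typedef struct { float w, x, y, z; } quat;

/* Hamilton product a*b: with unit quaternions, composing rotations
   as in the equations above. */
quat quat_mult(quat a, quat b)
{
    quat q;
    q.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    q.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    q.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    q.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return q;
}
```

As a sanity check, composing two 90-degree rotations about z yields a 180-degree rotation about z.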
This makes the gyroscope integration much easier: since the gyroscope signal is not referenced to an outside source, the integration must be done in the body frame. Calculating the orientation quaternion can be done fairly simply. Starting with accelerometer measurements a = (ax, ay, az) and magnetometer measurements m = (mx, my, mz):

We can easily calculate elevation θ and roll φ (assuming the accelerometer reads (0, 0, g) at rest):

θ = atan2(−ax, √(ay² + az²))
φ = atan2(ay, az)

but azimuth ψ is slightly more difficult. Converting to quaternions, the elevation quaternion qe and roll quaternion qr can be calculated:

qe = (cos(θ/2), 0, sin(θ/2), 0)
qr = (cos(φ/2), sin(φ/2), 0, 0)

At this point we can use the elevation and roll quaternions to rotate the magnetic field vector to match the orientation. If we use a normalized version of the magnetic field vector m̂ then its pure vector quaternion is simply qm = (0, m̂x, m̂y, m̂z), and the rotation is accomplished by

qm′ = (qeqr) qm (qeqr)*

The azimuth ψ lies in the x and y quaternion terms and can be calculated using ψ = atan2(−qm′y, qm′x). The azimuth quaternion is then

qa = (cos(ψ/2), 0, 0, sin(ψ/2))

and the total orientation quaternion is given by

q = qaqeqr
Simple Complementary Filter
Now that we have a better representation of orientation, the only thing left is to combine our various sensor signals to generate a better estimate of orientation.
The orientation estimate described above (calculated from accelerometer and magnetometer measurements) suffers from errors whenever the sensors are moved; averaged over time the accelerometer signal accurately measures the acceleration vector due to gravity, but over short time scales may be completely thrown off by accelerations from sensor movement. Considered in the frequency domain, our estimate will be accurate at low frequencies, but noisy at higher frequencies. We could smooth it efficiently by using an exponential moving average:
accel_mag_ema = a * accel_mag + (1 - a) * accel_mag_ema;
Here a is a coefficient that sets the weight of the new measurement against the smoothed estimate – the optimal value of this coefficient will depend on the sample rate of our sensors and the frequency at which the sensor will be moved. Notice that we are using the past sample of accel_mag_ema in this equation: each update results in a weighted sum of the current measurement and the past value.
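If it helps when choosing a, the coefficient can be related to a familiar smoothing time constant. This is a hypothetical helper, not part of the original code:

```c
/* For a one-pole exponential moving average updated every dt seconds,
   a = dt / (tau + dt) gives a smoothing time constant of roughly tau
   seconds (the step response reaches ~63% of a change after tau). */
float ema_coeff(float tau, float dt)
{
    return dt / (tau + dt);
}
```

For example, a 1-second time constant at a 100 Hz sample rate gives a ≈ 0.01.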
The measurements taken from the rate gyroscopes, on the other hand, give us an egocentric indication of angular velocity, but as described above the gyros suffer from very low-frequency noise (which we called bias drift) which will need to be corrected. Orientation estimates calculated from the gyro signals thus exhibit the opposite tendency to the accelerometer+magnetometer estimate: they are accurate at high frequencies but noisy at low frequencies. We will deal with the bias drift in a similar fashion to the accelerometer+magnetometer estimate, only this time using a high-pass filter:
gyro_ema = b * gyro_raw + (1 - b) * gyro_ema;
gyro_highpass = gyro_raw - gyro_ema;
The weight coefficient b should be set very low in this case, since the bias will drift over the course of minutes or hours (say over at least 100000 samples) and we don’t want gyro_ema to be influenced much by movement of the sensors. Setting b very low may result in a problem converging on the correct bias when starting up our algorithm, however, so we will perform a little trick: use the exponential moving deviation of the gyro signals to indicate when the instrument is being moved, and increase the coefficient b when it is “safe”:
gyro_ema = b * gyro_raw + (1 - b) * gyro_ema;
gyro_emd = c * abs(gyro_raw - gyro_ema) + (1 - c) * gyro_emd;
d = (gyro_emd > thresh) ? low_val : high_val;
gyro_bias = d * gyro_raw + (1 - d) * gyro_bias;
Our two estimates are complementary – by combining them correctly we should be able to generate a single estimate that is better than either on its own. A Kalman filter is well suited to this task, but since we wish to run our code on a small microcontroller we will use a complementary filter instead. Since we needed to integrate the angular velocity measurements anyway, we can simply add them to our smoothed estimate above.
orient = accel_mag_ema + gyro_raw - gyro_bias;
The pseudocode taken together forms a (very) simple complementary filter, since it combines a low-passed version of the accelerometer+magnetometer estimate with a high-passed version of the gyroscope estimate.
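Assembled into one routine, a single-axis version of this filter might look like the following sketch (scalar angles rather than quaternions, for clarity; the coefficient value is illustrative):

```c
/* One-axis complementary filter: integrate the bias-corrected gyro
   for fast response, then pull the estimate gently toward the noisy
   but drift-free accelerometer+magnetometer angle. */
typedef struct {
    float orient;     /* fused orientation estimate */
    float gyro_bias;  /* slowly-adapted gyro bias */
} cfilter;

void cfilter_update(cfilter *f, float am_angle, float gyro, float dt)
{
    const float a = 0.02f;  /* trust in the (noisy) accel+mag estimate */

    /* high-frequency path: integrate angular velocity */
    f->orient += (gyro - f->gyro_bias) * dt;

    /* low-frequency path: blend toward the accel+mag estimate */
    f->orient = a * am_angle + (1.0f - a) * f->orient;
}
```

With zero angular velocity and a constant accelerometer+magnetometer angle, the estimate converges to that angle; during fast movement the gyro term dominates.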
One More Thing
All of the example pseudocode given above assumes a linear interpolation between the two orientation estimates, but since we are using quaternions to represent orientation we will need to use spherical linear interpolation instead (i.e. we want to interpolate our orientations along the surface of the unit-quaternion hypersphere rather than along the chord between two points on that hypersphere). In our case we will use an algorithm called SLERP (Spherical Linear intERPolation). We will also use quaternion multiplication.
For more information, check out the code on GitHub written for Arduino and the CKDevices Mongoose IMU (but easily adaptable to other platforms and sensors). To finish, here’s a video demonstrating tracking the orientation and shape of a deformable object using two IMUs and the sensor fusion code posted above: