I made this video today to quickly document some work I’ve been doing using multiple inertial measurement units (IMUs) for tracking the orientation, bending, and twisting of deformable objects. Inertial measurement units generally contain 3-axis accelerometers, gyroscopes, and magnetometers – their data can be cleverly combined to obtain an estimate of orientation (as long as the IMU is being used in an environment with constant gravity and magnetic field, and is not in free fall).
This particular object has two Mongoose IMUs embedded inside it, running custom firmware I wrote that performs complementary filtering/sensor fusion to estimate orientation.
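To give a rough idea of what that filtering does, here is a minimal single-axis sketch in Python (not the actual Mongoose firmware; the blend factor alpha, the sample readings, and the axis conventions are all illustrative assumptions). The integrated gyroscope gives a fast but drifting estimate, while the accelerometer’s gravity vector gives a noisy but drift-free tilt reference; the filter blends the two.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One step of a single-axis complementary filter.

    High-pass the integrated gyro (fast, but drifts) and low-pass the
    accelerometer tilt (noisy, but drift-free), blended by alpha.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

def accel_pitch(ax, ay, az):
    # The gravity direction gives an absolute (if noisy) tilt reference.
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

# Made-up readings: (ax, ay, az) in m/s^2 and pitch-axis gyro in rad/s.
pitch = 0.0
for ax, ay, az, gyro_y in [(0.1, 0.0, 9.8, 0.02), (0.2, 0.0, 9.7, 0.03)]:
    pitch = complementary_filter(pitch, gyro_y, accel_pitch(ax, ay, az), dt=0.01)
```

In the full 3-D case the same idea applies to a quaternion estimate, with the magnetometer supplying the heading reference – which is why the constant-gravity, constant-magnetic-field assumption above matters.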
How can two IMUs sense all points of the object? I guess you use some motion sensors to sense deformation at each point of the object.
Only the two ends of the object have sensors in them; all the other points in the visualization are interpolated from that data.
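For anyone curious how that interpolation might look: here is a hypothetical sketch (not the actual visualization code; the endpoint quaternions and sample count are made up) that slerps between the orientation estimates from the two end IMUs to get orientations for the in-between points.

```python
import math
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:        # flip to take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:     # nearly parallel: plain lerp is safe
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = math.acos(dot)
    return (math.sin((1 - t) * theta) * q0
            + math.sin(t * theta) * q1) / math.sin(theta)

# Hypothetical endpoint orientations (w, x, y, z): identity at one end,
# a twist about the object's long axis at the other.
q_a = np.array([1.0, 0.0, 0.0, 0.0])
q_b = np.array([math.cos(0.5), 0.0, 0.0, math.sin(0.5)])

# Interpolated orientations for the in-between points of the visualization.
frames = [slerp(q_a, q_b, t) for t in np.linspace(0.0, 1.0, 8)]
```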
Hi, great visualization. I am working on my bachelor thesis, where I am using three IMUs for deflection measurements. Can you recommend some publications by you (or anything else you like) on this sort of filtering/sensor fusion technique, or on the theory behind obtaining orientation?
Any input is greatly appreciated, thank you.
Thanks Baldur 🙂
Here’s an old writeup on the sensor fusion used for the T-Stick. It explains how and why I used a complementary filter to estimate orientation.