That Sounds Right:
Auditory Self-Supervision for
Dynamic Robot Manipulation

Learning to produce contact-rich, dynamic behaviors from raw sensory data has been a longstanding challenge in robotics. Prominent approaches primarily focus on visual or tactile sensing; unfortunately, the former fails to capture high-frequency interaction, while the latter can be too delicate for large-scale data collection. In this work, we propose 'Audio Robot Learning' (AuRL), a data-centric approach to dynamic manipulation that uses an often-ignored source of information: sound. We first collect a dataset of 25k interaction-sound pairs across five dynamic tasks using commodity contact microphones. Then, given this data, we leverage self-supervised learning to accelerate behavior prediction from sound. Our experiments indicate that this self-supervised 'pretraining' is crucial to achieving high performance, yielding a 34.5% lower MSE than plain supervised learning and a 54.3% lower MSE than visual training. Importantly, we find that when asked to generate desired sound profiles, online rollouts of our models on a UR10 robot produce dynamic behavior that achieves an average 11.5% improvement over supervised learning on audio similarity metrics.

Dataset

We collect a dataset of 25k interaction-sound pairs across five dynamic tasks using commodity contact microphones placed on and around the robot.
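For concreteness, below is a minimal sketch of how such interaction-sound pairs could be organized for training. The file layout, field names, and action encoding are illustrative assumptions, not the released dataset's actual schema.

```python
# Illustrative sketch: pairing contact-microphone recordings with the actions
# that produced them. The directory layout and JSON schema are assumptions.
import json
from pathlib import Path

import torch
import torchaudio
from torch.utils.data import Dataset


class InteractionSoundDataset(Dataset):
    """Pairs a contact-microphone recording with the action that produced it."""

    def __init__(self, root: str):
        self.root = Path(root)
        # Assumed layout: <root>/<episode_id>.wav and <root>/<episode_id>.json,
        # where the JSON file holds the behavior-primitive parameters.
        self.episodes = sorted(p.stem for p in self.root.glob("*.wav"))

    def __len__(self) -> int:
        return len(self.episodes)

    def __getitem__(self, idx: int):
        stem = self.episodes[idx]
        waveform, sample_rate = torchaudio.load(str(self.root / f"{stem}.wav"))
        with open(self.root / f"{stem}.json") as f:
            action = torch.tensor(json.load(f)["action"], dtype=torch.float32)
        return waveform, sample_rate, action
```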


Learning Dynamic Skills from Sound

To learn behaviors with AuRL, we first transform the raw audio waveform into a Mel spectrogram. Then, to learn good representations, we pretrain on our audio data with the self-supervised BYOL algorithm. Finally, on top of the self-supervised representations, we train a linear model to predict the behavior primitives by minimizing the MSE loss between the predicted and ground-truth actions using simple supervised training.
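The pipeline above can be summarized in the following sketch, which assumes a small CNN encoder, spectrogram masking as the BYOL augmentation, and a frozen encoder with a linear action head. The architecture sizes, augmentations, and hyperparameters are illustrative guesses rather than the paper's exact configuration.

```python
# Sketch of the AuRL recipe: Mel spectrograms -> BYOL pretraining -> linear head.
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchaudio

# 1) Raw waveform (B, 1, T) -> Mel spectrogram (B, 1, n_mels, frames).
to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

# Two augmented "views" for BYOL; frequency/time masking is one common audio
# choice (the paper's actual augmentations may differ).
augment = nn.Sequential(
    torchaudio.transforms.FrequencyMasking(freq_mask_param=12),
    torchaudio.transforms.TimeMasking(time_mask_param=24),
)


def make_encoder(out_dim: int = 128) -> nn.Module:
    """Small CNN mapping a (B, 1, n_mels, frames) spectrogram to a feature vector."""
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim),
    )


def mlp(in_dim: int, out_dim: int) -> nn.Module:
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))


online_encoder, online_projector, predictor = make_encoder(), mlp(128, 64), mlp(64, 64)
target_encoder = copy.deepcopy(online_encoder)
target_projector = copy.deepcopy(online_projector)


def byol_loss(spec: torch.Tensor) -> torch.Tensor:
    """BYOL objective: online predictions should match EMA-target projections."""
    v1, v2 = augment(spec), augment(spec)
    p1 = predictor(online_projector(online_encoder(v1)))
    p2 = predictor(online_projector(online_encoder(v2)))
    with torch.no_grad():
        t1 = target_projector(target_encoder(v1))
        t2 = target_projector(target_encoder(v2))
    loss = 2 - 2 * F.cosine_similarity(p1, t2, dim=-1).mean()
    return loss + 2 - 2 * F.cosine_similarity(p2, t1, dim=-1).mean()


@torch.no_grad()
def ema_update(tau: float = 0.99) -> None:
    """Called after each optimizer step so the target slowly tracks the online net."""
    for target, online in [(target_encoder, online_encoder),
                           (target_projector, online_projector)]:
        for t, o in zip(target.parameters(), online.parameters()):
            t.mul_(tau).add_((1 - tau) * o)


# 2) After pretraining, freeze the encoder and fit a linear head on its
#    features with an MSE loss against the recorded actions.
action_head = nn.Linear(128, 4)  # action dimension depends on the primitive


def behavior_loss(spec: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        features = online_encoder(spec)
    return F.mse_loss(action_head(features), action)
```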

Results

Our key results are as follows:

  • Self-supervised training with AuRL outperforms plain supervised training, achieving a 34.5% lower MSE. Importantly, we also outperform methods that use visual information instead of audio.
  • In our real-robot experiments, AuRL achieves an 11.5% lower distance between the desired and generated audio (a sketch of one such distance appears after this list).
  • In settings with limited training data, self-supervised pretraining significantly outperforms regular supervised training.
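
As referenced in the second result above, one illustrative way to measure the distance between desired and generated audio is an MSE over log-Mel spectrograms. The paper's exact audio similarity metric is not specified here, so the following is a stand-in sketch.

```python
# Illustrative audio distance: MSE between log-Mel spectrograms (lower is better).
import torch
import torchaudio


def audio_distance(desired: torch.Tensor, generated: torch.Tensor,
                   sample_rate: int = 16000) -> torch.Tensor:
    """Both inputs are mono waveforms of equal length."""
    to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)
    to_db = torchaudio.transforms.AmplitudeToDB()
    d, g = to_db(to_mel(desired)), to_db(to_mel(generated))
    return torch.mean((d - g) ** 2)
```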