A haptic bracelet that turns the hand into an instrument played via touch.
Stejara Dinulescu, Fall 2021. MAT240A Final Project.
The sense of touch conveys semantic and emotional information during social interaction, with an expressivity and nuance that parallel speech and musical composition. We present a wrist-worn haptic device that digitally encodes tactile gestures (taps, slides, pinches, etc.) across the whole surface of the hand via remote, minimal acoustic sensing, leaving the hand free for manual interaction. By sonifying the captured mechanical signals generated by the spatiotemporal waves that propagate across the skin during touch contact, our wearable device transforms the hand into a musical instrument.
-
10/28: Utilizing the current device setup (four 3-axis analog accelerometers connected to two Adafruit Feather M0 WiFi boards, depicted in the image above), read serial input via Max's serial object. See SerialAccelerometerInput.maxpat and the SerialAccelerometerIn + Accelerometer.h Arduino files, located in the SerialAccelerometerIn folder.
- Following the Communications Tutorial 2: Serial Communication Max tutorial, I created a Max patch with the serial object, passing it the port that the Adafruit device was connected to in the Arduino IDE.
- I was able to read numbers from the microcontroller, but the string I was sending serially via Arduino was not being parsed properly on the other end. I looked into the itoa and fromsymbol objects from the tutorial and other references, and found that itoa did what I needed. However, its chunking parameter made parsing difficult, so I ended up changing how I send values serially from Arduino and using the fromsymbol object instead.
- However, halfway through creating/exploring this patch, my devices stopped appearing in Arduino's ports list. Neither Feather M0 board was listed on my MacBook or my Windows 10 office computer, despite manually resetting the boards. Neither Stack Overflow nor the Adafruit forums were fruitful.
- --> Updating my MacBook fixed this issue (still using the Teensy moving forward)
- I also found other cables that carry data (not just power), which allowed me to connect the microcontroller to the computer. The cables I was previously using may have broken, but that seems relatively unlikely.
- In the meantime, I ordered a Teensy board to try using it as a MIDI input. It arrives on 11/01.
-
Utilize Teensy to transmit accelerometer data to Max (11/01-11/03). Setup pictured below with 2 accelerometers.
- Get the Teensy registered by the computer (works now on Mac and Windows) as a MIDI or HID device, and/or via a serial port in Arduino
- A macOS update (to the new Monterey OS) fixed the problem I was having with the Teensy (or any other microcontroller) not being recognized via the USB port
- Receive messages in Max
- Display raw accelerometer data (x, y, z data per accelerometer --> 2 accelerometers --> 6 floats per sample read) in Max
- Issue --> sample drops due to needing to bang the serial object to read from the serial port. Moved back to the old Python script for reading serial input and sending OSC data to Max (see the sketch after this list).
- OSC messaging from the Python script to Max works (11/03-11/04) --> I am filling a buffer with the samples read in via OSC; however, I am still experiencing sample drops. This makes the buffer look like a series of discrete signals (i.e., periods of touch on the accelerometer vs. no touch).
- 11/06 --> I just needed to change some parameters of the buffer object to visualize the signal better. Now, you can clearly see when each accelerometer is tapped, as well as the resulting mechanical reverberations. This can now be sonified!
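A minimal sketch of the serial-to-OSC bridge described above (not the original script; the port path, baud rate, OSC port, OSC address, and comma-separated line format are all assumptions):

```python
# Read comma-separated accelerometer samples from the Teensy's serial port
# and forward each one to Max as an OSC message.
import serial                                  # pyserial
from pythonosc.udp_client import SimpleUDPClient

PORT = "/dev/cu.usbmodem14101"                 # hypothetical Teensy port name
BAUD = 115200                                  # assumed baud rate
client = SimpleUDPClient("127.0.0.1", 7400)    # matched by [udpreceive 7400] in Max

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            values = [float(v) for v in line.split(",")]
        except ValueError:
            continue                           # skip partial or garbled lines
        if len(values) == 6:                   # 2 accelerometers * (x, y, z)
            client.send_message("/accel", values)
```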
-
Sonifying signals/controlling continuous and discrete synthesis processes
- Using line~ and mc.play~, I am able to play back all three channels of signal from each recorded buffer (11/07). This is a discrete playback method, which will be used for gesture detection and playback/control.
- How can accelerometer signals drive/shape continuous audio output? First, I tried a simple sine oscillator, varying pitch with signal change (11/07-11/08).
- 11/10: I can successfully sonify signals continuously (as the accelerometers detect motion), as well as play back from a buffer discretely. The 3-axis signals are normalized and mapped to a desired frequency range (a sketch of this mapping follows below). I use the mc.cycle~ object to treat each axis as a "voice", combining them all in the output. Will look into varying sonic properties per accelerometer channel for more complex voices.
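A rough sketch of that normalize-and-map step, assuming readings of roughly ±2 g and a target range of 200-800 Hz (neither value comes from the actual patch):

```python
# Map one axis of an accelerometer reading to a frequency for its oscillator "voice".
def accel_to_freq(sample, lo=200.0, hi=800.0, max_g=2.0):
    norm = min(abs(sample) / max_g, 1.0)       # normalize magnitude to [0, 1]
    return lo + norm * (hi - lo)               # linear map into the pitch range

# One frequency per axis, e.g. for a single (x, y, z) sample:
freqs = [accel_to_freq(v) for v in (0.12, -0.85, 1.4)]
```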
- 11/15-16: how can I introduce silence into the system? There should be no sound when there is no touch contact.
- Capturing values for the "zero" position -- i.e. when the hand is flat and held out, ready for touch contact
- Incoming sample values are subtracted from the zero value to get a relative change in accelerometer readings
- This change is then sonified --> this works; however, slight movements trigger sound. (Karl's recommendation -- use a continuous "zero" mapping that updates as the hand changes, plus a threshold for when to sound; a sketch of this follows below.)
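A minimal sketch of the zero subtraction plus a sounding threshold, along the lines of Karl's recommendation (the threshold and the drift rate are placeholders, not tuned values from the patch):

```python
import numpy as np

zero = np.zeros(3)            # captured with the hand flat and at rest
THRESH = 0.05                 # below this relative change, stay silent

def process(sample, alpha=0.001):
    """Return the relative change to sonify, or None for silence."""
    global zero
    sample = np.asarray(sample, dtype=float)
    delta = sample - zero                       # change relative to the rest pose
    zero = (1 - alpha) * zero + alpha * sample  # slowly track the current hand pose
    if np.linalg.norm(delta) < THRESH:
        return None                             # no touch --> no sound
    return delta
```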
- 11/15-16: how to sonify taps on the hand as impulses? I need an envelope control that is triggered upon touch contact (Karl's suggestion -- onset detection to trigger envelope)
- 11/17-19: onset detection implementation from in-class example (and Music Visualizer)
- 11/20-21: refined envelope triggering and added reverb (a rough sketch of onset-triggered enveloping follows below)
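A rough stand-in for the onset-triggered envelope (the in-class detector is more involved; window size, threshold, and decay time here are assumptions):

```python
import numpy as np

def detect_onsets(signal, win=64, thresh=4.0):
    """Return sample indices where short-term energy jumps by a factor of `thresh`."""
    signal = np.asarray(signal, dtype=float)
    onsets, prev = [], 1e-9
    for i in range(0, len(signal) - win, win):
        energy = float(np.mean(signal[i:i + win] ** 2))
        if energy / prev > thresh:
            onsets.append(i)
        prev = max(energy, 1e-9)
    return onsets

def envelope(n_samples, sr=1000, decay_s=0.3):
    """Exponential decay envelope triggered at an onset."""
    t = np.arange(n_samples) / sr
    return np.exp(-t / decay_s)
```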
- 11/29: storing audio signal into buffer for gesture detection
- My first approach was to store the one-channel signal (recorded a bit before the onset) further down the pipeline, at the point where I convert to frequency and create a signal that is then modulated and sonified. I thought I could store that created 1-channel signal and send it through a moving average filter (similar to my previous signal-processing pipeline).
- However, Max objects update at their fastest rate of 20 ms, which is too slow compared to my sample rate, so I lose samples when visualizing in the buffer. Next approach: use the onset detector to capture the current sample number, then read from the original stored buffer starting 50 samples before the detected onset (see the sketch below). This means piping everything through another instance of my sonification objects, but it lets me grab the whole signal without losing samples.
- 11/29: This method works; I now have touch gestures stored in a buffer, for both accelerometers, which can be sent to a classifier.
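A sketch of this second approach in plain Python: keep writing into a ring buffer and, when the onset detector reports a sample index, copy out the gesture starting 50 samples before it (buffer and gesture lengths are assumptions):

```python
import numpy as np

BUF_LEN = 4096
ring = np.zeros(BUF_LEN)
write_pos = 0

def write_sample(x):
    """Continuously record incoming samples into the ring buffer."""
    global write_pos
    ring[write_pos % BUF_LEN] = x
    write_pos += 1

def grab_gesture(onset_index, length=512, pre=50):
    """Read `length` samples starting `pre` samples before the detected onset."""
    start = onset_index - pre
    idx = np.arange(start, start + length) % BUF_LEN
    return ring[idx]
```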
- 12/02-12/03: data preprocessing (sketched below)
- demean the data
- moving average of window size 5
- lowpass filter at 250 Hz
- normalize to 1 across 2 channels (mc.normalize~), where each accelerometer is one channel
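A sketch of this preprocessing chain using scipy (the Butterworth order, the 1 kHz sample rate, and per-channel normalization are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(x, fs=1000):
    x = np.asarray(x, dtype=float)
    x = x - np.mean(x)                               # demean
    x = np.convolve(x, np.ones(5) / 5, mode="same")  # moving average, window size 5
    b, a = butter(4, 250, btype="low", fs=fs)        # lowpass at 250 Hz
    x = filtfilt(b, a, x)
    return x / (np.max(np.abs(x)) + 1e-12)           # normalize to 1
```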
- 12/03: feature extraction (sketched below)
- mean absolute deviation
- peak to peak
- spectral centroid (using gen~.centroid and gen~.pfft patcher examples)
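The same three features in plain Python (the spectral centroid here uses a plain FFT rather than the gen~/pfft~ patchers; the 1 kHz sample rate is an assumption):

```python
import numpy as np

def extract_features(x, fs=1000):
    x = np.asarray(x, dtype=float)
    mad = np.mean(np.abs(x - np.mean(x)))            # mean absolute deviation
    p2p = np.max(x) - np.min(x)                      # peak to peak
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)  # spectral centroid
    return mad, p2p, centroid
```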
- 12/03: brainstorming how to use ml.svm object
- Meeting with Karl 12/04. Make my own normalization function to match the signal processing from prior work, and move forward with getting training data into the ml.svm classifier.
- 12/06: signal processing and classification
- Made my own normalization function using a gen~ codebox to match prior work
- Tried to write some temporary data to the classifier, but it kept giving me a path error that I could not solve. Decided to go directly to bringing the data into Max from Python via OSC messaging.
- Wrote a Python script to read training data from trainingdata.csv and send it to Max via OSC messaging; decoded and parsed the OSC messages in Max (see the sketch after this list).
- Trained the ml.svm classifier and tested it on one of the rows of data to see if I get the right class output.
- Sliding zero function --> every time a gesture finishes writing to the buffer, the zero position is reset
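A rough sketch of the training-data bridge (trainingdata.csv is from the log above; the OSC port, address, and label-then-features row layout are assumptions):

```python
import csv
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7401)    # matched by [udpreceive 7401] in Max

with open("trainingdata.csv") as f:
    for row in csv.reader(f):
        label, *feats = row                    # assume the class label comes first
        client.send_message("/training", [label] + [float(v) for v in feats])
```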
-
12/06 - 12/07: Sonify features extracted from gesture
- Features extracted include spectral centroid (frequency domain), peak to peak difference (time domain), and mean absolute deviation (time domain)
- mc wasn't working here, so I had to change approach and extract features per channel using encapsulation
- Extract more features for classification (I was only able to implement 3 of the 7 features that we use in the paper; the remaining features expand into more because some contain five bins each). Because of this, classification cannot work on just 3 features with my current training method.
- Classification
- Ability to classify from all features extracted
- Trigger/modulate sound based on classified gestures
- Tuning sound parameters of the system
- Make gestures repeatable for easy use as an instrument