Here’s an example of how a clinical research team studying stuttering respiratory patterns and other biosignals used VivoSense® software to integrate data from multiple wearable physiological sensors, manage signal artifacts, and produce a robust data analysis.
Stuttering is a speech disorder that impacts over 70 million people worldwide. While more common in childhood, for some, it is a life-long disorder. There is no cure, but there are therapeutic processes that can help reduce the severity and improve the flow of speech.
The research team of Dr. Bruno Villegas, of the Laboratory of Biomechanics and Applied Robotics, conducted a multisensor research study to differentiate speech conditions for stuttering diagnosis and after therapeutic interventions. In their research, Monitoring of Respiratory Patterns and Biosignals During Speech from Adults Who Stutter (AWS) and Do Not Stutter (AWNS): A Comparative Analysis, they evaluated how the average phase angle (the angular measure of the synchrony between the chest and abdominal respiratory volume waveforms) and other physiological biosignals correlate during the speech of adults who stutter and those who do not.
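To make the phase-angle measure concrete, here is a minimal, hypothetical sketch (not the study’s or VivoSense®’s actual algorithm) that estimates the angle between two synthetic respiratory waveforms. It relies on the fact that, for zero-mean sinusoidal signals, the normalized zero-lag correlation equals the cosine of the phase angle between them:

```python
import numpy as np

def phase_angle_deg(ribcage, abdomen):
    """Estimate the phase angle (degrees) between two respiratory
    waveforms from their normalized zero-lag cross-correlation.
    For sinusoidal signals, that correlation equals cos(phi)."""
    rc = ribcage - ribcage.mean()
    ab = abdomen - abdomen.mean()
    corr = np.dot(rc, ab) / (np.linalg.norm(rc) * np.linalg.norm(ab))
    return np.degrees(np.arccos(np.clip(corr, -1.0, 1.0)))

# Synthetic breathing: the abdomen lags the ribcage by 30 degrees
t = np.linspace(0, 10, 2000)            # ~10 s of samples
rc = np.sin(2 * np.pi * 0.3 * t)        # ribcage band, 0.3 Hz breathing
ab = np.sin(2 * np.pi * 0.3 * t - np.radians(30))
angle = phase_angle_deg(rc, ab)          # close to 30 degrees
```

A phase angle near 0° indicates synchronous chest and abdominal movement; larger angles indicate thoraco-abdominal asynchrony.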
Dr. Villegas’ team used VivoSense® data visualization and analytics software to ingest and synchronize physiological data from wearable sensors. They collected data from multiple physiological signals:
- Respiratory Inductance Plethysmography (RIP) to derive the angular measure of synchrony between the chest and abdominal respiratory volumes, as well as respiratory flow and volume.
- Pulse Photoplethysmography (PPG) to derive Heart Rate (HR) and Pulse Oximetry (SpO2).
- Galvanic Skin Response (GSR) to derive changes in emotional state.
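As a rough illustration of how a PPG waveform yields heart rate, here is a hypothetical sketch (not VivoSense®’s detection logic) that counts pulse peaks in a synthetic signal and converts the mean inter-beat interval to beats per minute:

```python
import numpy as np

def heart_rate_bpm(ppg, fs):
    """Estimate mean heart rate from a PPG waveform by locating
    local maxima above the signal mean (a crude peak detector)."""
    above = ppg > ppg.mean()
    local_max = (ppg[1:-1] > ppg[:-2]) & (ppg[1:-1] >= ppg[2:])
    peaks = np.flatnonzero(local_max & above[1:-1]) + 1
    if peaks.size < 2:
        return 0.0
    ibi = np.diff(peaks) / fs            # inter-beat intervals (seconds)
    return 60.0 / ibi.mean()

fs = 100                                  # assumed 100 Hz sampling rate
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)         # synthetic pulse at 1.2 Hz (72 bpm)
hr = heart_rate_bpm(ppg, fs)              # close to 72
```

Real PPG requires bandpass filtering and motion-artifact rejection before peak detection; this sketch skips those steps for clarity.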
For analysis of respiratory patterns during the speech of subjects who stutter, there is consensus among researchers that Respiratory Inductance Plethysmography (RIP) is the preferred method of respiratory data collection. RIP is not effort dependent and is noninvasive, making it a simple way to assess natural breathing and airflow to the lungs over long periods. It is becoming more prominent as a diagnostic tool, as well as a means of discovering unique respiratory diagnostic indexes in clinical studies.
RIP data was imported into VivoSense® software, calibrated with the least-squares calibration method, and manually synchronized to the start times of the other imported data. The independently collected data streams are integrated into one user-friendly platform so that the physiological measures derived in VivoSense® can be visualized and analyzed in relation to one another.
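The idea behind least-squares RIP calibration is to find gains for the ribcage and abdomen bands so that their weighted sum best matches a reference volume signal (for example, a brief spirometer recording). A minimal sketch on synthetic data, not VivoSense®’s implementation, might look like this:

```python
import numpy as np

# Hypothetical calibration: fit gains k_rc, k_ab so the weighted sum of
# the ribcage and abdomen RIP bands best matches a reference volume
# signal in the least-squares sense.
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 3000)
rc = np.sin(2 * np.pi * 0.25 * t)                  # ribcage band (a.u.)
ab = 0.8 * np.sin(2 * np.pi * 0.25 * t - 0.3)      # abdomen band (a.u.)
ref = 1.5 * rc + 2.0 * ab + rng.normal(0, 0.01, t.size)  # reference volume

A = np.column_stack([rc, ab])
(k_rc, k_ab), *_ = np.linalg.lstsq(A, ref, rcond=None)
volume = k_rc * rc + k_ab * ab     # calibrated volume estimate
```

Once the gains are fixed, the same weighted sum converts the raw band signals into a continuous tidal-volume estimate for the rest of the recording.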
Data Quality Control
The researchers used VivoSense® automatic artifact management tools to ensure that their analysis used only quality data. They reviewed all raw wearable sensor data for artifact (noise) and managed it with a set of tools including:
- Event (such as breath or heartbeat) detection or removal.
- Interpolation or smoothing over artifactual data.
- Removal of unusable artifactual data.
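To illustrate the interpolation step in the list above, here is a hypothetical sketch (not VivoSense®’s artifact-management tooling) that flags samples with an extreme robust z-score and bridges them with linear interpolation from the surrounding clean samples:

```python
import numpy as np

def interpolate_artifacts(signal, z_thresh=4.0):
    """Replace samples whose robust (MAD-based) z-score exceeds
    z_thresh with linear interpolation from clean neighbors."""
    x = signal.astype(float).copy()
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1.0   # guard against zero MAD
    bad = np.abs(x - med) / (1.4826 * mad) > z_thresh
    idx = np.arange(x.size)
    x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return x

clean = np.sin(np.linspace(0, 4 * np.pi, 400))
noisy = clean.copy()
noisy[[50, 200, 310]] += 10.0                 # injected motion spikes
repaired = interpolate_artifacts(noisy)
```

Interpolation is appropriate for brief, isolated spikes; longer stretches of corrupted signal are better removed outright, as the third bullet describes.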
The expiratory volume during speech in reading (ESR), when most stuttering events occur, was extracted from the respiratory data. In many clinical trials that use RIP-collected data, VivoSense® software is integral to analyzing and isolating specific metrics from raw respiratory data.
The research team used data graphics and video observation to manually determine the initial and final time of each exhalation. A qualified speech-language pathologist examined whether the speech during each exhalation was fluent or contained a stuttering block, a repetition, a prolongation, or more than one stutter episode. Only exhalations with a single stutter block and fluent-speech exhalations were analyzed. This approach was used to label and validate the stuttering events so that they could be used in further analysis.
The research team compared the linear regression of speech exhalations because, during fluent speech, the exhalation volume decreases with a nearly constant slope. They evaluated the exhalation graphs to determine the distance between the actual volume waveform and the linear regression applied to it during fluent speech and during a stuttering block.
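The regression-distance idea can be sketched as follows. This is a hypothetical toy example on synthetic exhalations, not the study’s actual data or code: a line is fit to the expiratory volume of one exhalation, and the maximum deviation of the waveform from that line serves as a simple marker of non-fluent speech:

```python
import numpy as np

def max_regression_distance(t, volume):
    """Max absolute distance between the volume waveform and its
    least-squares regression line over one exhalation."""
    slope, intercept = np.polyfit(t, volume, 1)
    return np.max(np.abs(volume - (slope * t + intercept)))

t = np.linspace(0, 3, 300)                    # one 3-second exhalation
fluent = 1.0 - 0.3 * t                        # near-constant-slope decay
# A stuttering block adds bumps to the mid-portion of the exhalation
blocked = fluent + 0.08 * np.sin(2 * np.pi * 2 * t) * (t > 1) * (t < 2)

d_fluent = max_regression_distance(t, fluent)    # near zero
d_blocked = max_regression_distance(t, blocked)  # clearly larger
```

A fluent exhalation hugs its regression line, so the distance stays near zero; the oscillation introduced by a block pushes the distance well above it.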
The comparative analysis showed a significant difference between the stuttering and non-stuttering groups in the expiratory volume during speech: the number of peaks and the ESR amplitude relative to the applied linear regression were much higher in stuttered speech. As a result, respiratory pattern signals can be used to differentiate block segments from fluent speech segments during a standardized reading task.
Using VivoSense® data visualization and analytics software, Dr. Villegas’ research team was able to integrate a multitude of wearable sensor types and apply detailed cardio-respiratory analysis and visualization tools. This rich toolset allowed the researchers to manage artifact data, visualize the signals, and import events from video, so that every step of the research project could be managed appropriately.