Smartphone-based system for learning and inferring hearing aid settings
- The ever-growing number of amplification parameters available in modern digital hearing aids has brought a corresponding increase in the complexity of the fitting procedure and the difficulty of prescribing the best settings. Trainable hearing aids - that is, instruments that adjust some settings over time based on input received from individual users - have been commercially available since 2006. Previous research has shown that hearing aid wearers can successfully train their instruments' gain-frequency response and compression parameters in everyday situations. However, hearing aids have relatively slow processing speed, small data storage capabilities, and a limited user interface, important factors that hinder their ability to be trained.

In this dissertation, we address these limitations by proposing a hearing system, combining hearing aids with a smartphone, that introduces additional possibilities for training and personalization. Our system, named the Hearing Aid Learning and Inference Controller (HALIC), comprises two hearing aids, a smartphone, and a body-worn gateway to connect them wirelessly. HALIC leverages a smartphone's built-in sensors to gather additional relevant information, implements computation-intensive machine learning algorithms to improve its performance over time, and includes an intuitive user interface that lets individual users express their preferences via listening evaluations.

HALIC has several important components. We developed a smartphone-based sound environment classifier using time- and frequency-based features and a hierarchical classification system. We employed penalized logistic regression detectors and a music detector to classify the environmental sound as Quiet, Speech in Quiet, Noise, Speech in Noise, Music, or Party. We made use of the phone's geolocation capabilities and accelerometer to classify the current location, first broadly as Place or Route, and then with greater granularity.
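At inference time, a penalized logistic regression detector reduces to a sigmoid over a linear score (the penalty only affects how weights were fit during training), and a hierarchical classifier of the kind described above can be sketched as a cascade of such binary detectors. The level threshold, feature vector, class hierarchy, and weights below are hypothetical placeholders for illustration, not the dissertation's actual values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def detect(features, weights, bias, threshold=0.5):
    # A trained penalized logistic regression detector applied at
    # inference time: linear score -> sigmoid -> binary decision.
    z = bias + sum(w * f for w, f in zip(weights, features))
    return sigmoid(z) >= threshold

def classify_environment(features, level_db, detectors):
    # Hypothetical hierarchy: the overall sound level gates "Quiet",
    # then binary music/speech/noise detectors combine into the
    # remaining classes. The real hierarchy may differ.
    if level_db < 40:  # assumed quiet threshold, in dB SPL
        return "Quiet"
    music = detect(features, *detectors["music"])
    speech = detect(features, *detectors["speech"])
    noise = detect(features, *detectors["noise"])
    if music and speech:
        return "Party"
    if music:
        return "Music"
    if speech:
        return "Speech in Noise" if noise else "Speech in Quiet"
    return "Noise"

# Toy single-feature detectors: (weights, bias) pairs with made-up values.
EXAMPLE_DETECTORS = {
    "music": ([5.0], 0.0),
    "speech": ([-5.0], 0.0),
    "noise": ([-5.0], 0.0),
}
```

With `EXAMPLE_DETECTORS`, a loud frame whose feature fires only the music detector is labeled "Music", while any frame below the quiet threshold is labeled "Quiet" regardless of its features.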
Furthermore, we explored two different types of listening evaluations, each representative of a graphical user interface style, that individuals could use to train their system: the A/B Test and the Self-Adjustment Screen. The A/B Test is a standard method of comparing two variants, applied here in a novel way to compare two hearing aid settings, A and B. The user listened to both settings and gave a subjective, relative evaluation ("A is better," "B is better," or "no difference"). Alternatively, the Self-Adjustment Screen allowed users to toggle buttons until they found their absolute preferred setting. Whereas the A/B Test concealed the names of the settings, the Self-Adjustment Screen explicitly labeled the settings, making them visible to the users. Lastly, we implemented a knowledge-based agent to build Trained Models (AB Model and SA Model) and infer the best possible setting from those available. To explore the benefits of self-training with a smartphone-based hearing system, we conducted an experimental within-subjects study in which we confined the space of settings to microphone modes (directional and omnidirectional), noise reduction states (off and on), and certain programs (General, Music, and Party). The study aimed to investigate whether users could train the HALIC hearing system, using a smartphone in real-world situations, to provide settings preferred over settings presented by an "untrained system"; that is, the manufacturer's algorithm for automatically selecting microphone mode and noise reduction state based on the acoustic environment. During the training phase, lasting approximately four weeks, HALIC built trained models based on individual listening preferences and context, including environmental sound class, sound level, location, and hour of day. The result of each A/B Test informed the AB Model, while the result of each Self-Adjustment Screen informed the SA Model.
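One simple way to realize a model trained from A/B Test outcomes is a context-indexed tally of which setting won each comparison, with inference returning the setting that has won most often in the current context. The class below is a hypothetical illustration of that idea, not the dissertation's actual knowledge-based agent; the context keys and setting names are made up:

```python
from collections import defaultdict

class ABModel:
    """Toy preference model: per-context win counts from A/B Tests."""

    def __init__(self):
        # context -> setting -> number of A/B Test wins
        self.wins = defaultdict(lambda: defaultdict(int))

    def record(self, context, setting_a, setting_b, verdict):
        # "no difference" leaves both tallies unchanged.
        if verdict == "A is better":
            self.wins[context][setting_a] += 1
        elif verdict == "B is better":
            self.wins[context][setting_b] += 1

    def best_setting(self, context, default):
        # Inference: return the setting with the most wins in this
        # context, falling back to a default in unseen contexts.
        tallies = self.wins.get(context)
        if not tallies:
            return default
        return max(tallies, key=tallies.get)
```

A real agent would also fold in the SA Model and richer context (sound level, location, hour of day), but the same record-then-infer structure applies.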
In the subsequent two-week validation phase, and unbeknownst to participants, the meaning of settings A and B in each A/B Test changed. One setting (randomly chosen as A or B) was selected by the untrained system, while the other was selected by the "trained system," whereby HALIC predicted the optimal setting (among available choices) using an inference engine that considered the trained model and current context. We used validation-phase data to determine whether participants could show a preference for microphone mode and noise reduction settings predicted by the inference engine ("trained settings") over those suggested by the hearing aids' untrained system ("untrained settings"). Sixteen participants (10 male, 6 female) with moderate-to-severe hearing loss and ranging in age from 22 to 79 years (M = 55.5 years) took part in this study. Fifteen had at least 6 months of experience wearing hearing aids, and 14 had previous experience using smartphones. They were selected from the population of four audiology offices in the San Francisco Bay Area. Results showed that the 15 participants with valid data preferred trained settings over untrained settings (p < 0.05). Individually, 7 of 15 participants showed a significant preference for trained settings, while one participant had a significant preference for untrained settings. Among participants who completed more than 150 training-phase listening evaluations, 80% (4/5) had a significant preference for the trained settings. We also explored setting preferences as a function of sound environment and sound level. Generally, pooling data across participants did not reveal any significant patterns, whereas breaking down data by participant showed that individual preferences could be strong and idiosyncratic. The obtained data, as well as subjective participant feedback, indicate that smartphones could become viable tools to train hearing aids.
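The abstract does not state which statistical test produced the group-level p-value. An exact binomial sign test is one standard way a preference count of this kind could be assessed; the sketch below is an illustration of that test, not a claim about the dissertation's actual analysis:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    # Probability of exactly k "prefers trained" outcomes in n
    # participants under the null hypothesis of no preference.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def sign_test_p(k, n):
    # Exact two-sided sign test: under H0 each participant is
    # equally likely to prefer either condition (p = 0.5).
    lower = sum(binom_pmf(i, n) for i in range(0, k + 1))
    upper = sum(binom_pmf(i, n) for i in range(k, n + 1))
    return min(1.0, 2 * min(lower, upper))
```

For example, if 12 of 15 participants preferred one condition, this test yields p ≈ 0.035, below the conventional 0.05 threshold, whereas an 8-of-15 split is not significant.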
Individuals who are tech-savvy and have milder hearing loss seem well suited to take advantage of the benefits offered by training with a smartphone. These results, using microphone directionality modes and noise reduction states, are a first step in demonstrating how smartphones could take personalization beyond the audiogram and fitting rationales.
|Stanford University, Department of Mechanical Engineering.
|Leifer, Larry J
|Cutkosky, Mark R
|Submitted to the Department of Mechanical Engineering.
|Thesis (Ph.D.)--Stanford University, 2015.
- © 2015 by Gabriel Aldaz
- This work is licensed under a Creative Commons Attribution Non Commercial 3.0 Unported license (CC BY-NC).