
Speech provides a window to brain health

Technology developed by 2 ASU professors uses small disturbances in speech to assess neurological diseases



November 19, 2019

Speech is one of those human abilities — like riding a bike — that comes so naturally once mastered it’s easily taken for granted. In reality, the mental and oral gymnastics required to produce the correct sounds in the correct order to communicate a thought accurately make for a complex process with plenty of opportunities for things to go wrong.

“Generating speech — clean, intelligible speech that we can understand — that's really hard,” said Arizona State University Associate Professor Visar Berisha. “It requires very precise motor control. It requires structuring sentences that are sometimes complicated, depending on what you're trying to convey, and doing it on the fly. So if there's any disturbance, any change, to something that's happening in the brain, it becomes apparent. That's why people start slurring their words when they drink too much.”

Or when they have a neurological disorder.

Berisha, who has a joint appointment in the College of Health Solutions in speech and hearing sciences and the School of Electrical, Computer and Energy Engineering, is working with College of Health Solutions Associate Dean and Professor Julie Liss to hijack those small disturbances and use them as a window to brain health.

The pair recently received a National Science Foundation Small Business Innovation Research Phase II grant to fund their company Aural Analytics, which uses a technology platform developed through research conducted at ASU to detect characteristic changes in speech patterns that appear at the earliest stages of neurological diseases. The goal of the technology and its commercial application is tracking and assessment of disease progression, and eventually, early detection.

Since its inception, Aural Analytics has raised a total of $4.3 million in funding that includes grants and investment from venture capitalists. This most recent NSF small business grant will allow the company to expand its technology platform to include a portal that collects speech samples from anywhere in the world through a downloadable app and analyzes them in real time.

Meeting of the minds

Liss, whose background is in speech-language pathology and speech neuroscience, has always been interested in improving quality of life for those with communication disorders secondary to neurological disease. That first requires understanding the ways in which their speech is degraded, so much of Liss’ research has focused on acoustic analysis to characterize how different types of brain damage or disease processes manifest in speech signals.

The process involved bringing patients who had been diagnosed with neurological diseases into her lab, where they would enter a sound booth and provide speech samples that would later be digitized and analyzed to create a sort of key for which speech disturbances were indicative of which diseases. But it took time.

“Even though we recognized it was a powerful tool, we really didn’t have a way to make it scalable or useful in a broader domain,” Liss said.

Meanwhile, Berisha, an engineer, had independently become fascinated with the clinical implications of speech analysis when he learned of Liss’ work.

“The definition of an engineer is someone who takes all of the knowledge they learn about the natural world — physics and biology and chemistry and the mathematical sciences — and invents and designs new things,” he said. “This is sort of a perfect application for that … being able to reverse engineer what's happening in the brain just from the way that someone speaks.”

He reached out to Liss and, with their powers combined, the technology that eventually became the heart of Aural Analytics was born. Within a year, they had a funded grant through the NSF and industry experts were encouraging them in “not so subtle ways” to commercialize their tech.

Getting down to business

Grasping the business of running a business presented a steep learning curve for both, and Wikipedia came in handy often. Even more helpful was the training they received through the NSF Innovation Corps (I-Corps) program, a requirement of their initial grant, which they describe as “a boot camp to bring academics along in the business space.”

From there, Liss and Berisha joined Venture Devils, an ASU program that aids entrepreneurs’ success by connecting them with mentors, funding opportunities and development spaces. In 2015, longtime entrepreneur Daniel Jones was assigned as their mentor. All agree it was a serendipitous meeting of minds.


From left to right: Aural Analytics CEO Daniel Jones and co-founders Julie Liss and Visar Berisha at their headquarters at SkySong, the ASU Scottsdale Innovation Center. Photo by Deanna Dent/ASU Now

“Sparks flew from day one,” Jones said. “They’re at the top of their fields, they’re very respected and yet very down to earth and sort of hungry to learn and open to feedback and humble, which is something I'm particularly fond of — humility in leadership. So in many ways, just interpersonally, we vibed from the beginning.”

Since then, the company has grown by leaps and bounds, moving from its initial headquarters in a small office on the second floor of a building at SkySong, the ASU Scottsdale Innovation Center, to 1951@SkySong, a 6,500-square-foot co-working space, and finally to its current solo digs on the ground floor of Building 4, complete with a kitchen, lounge area, open-concept office area and two smaller rooms that double as meeting rooms and offices.

One of those offices belongs to Jones, who believed in the company so much that he jumped on board full time as CEO, something Liss and Berisha say they needed badly.

“We had some success but what was missing was someone to take control of it and move it to the next phase,” Berisha said. “Neither of us were interested in leaving academia, and we're still not interested in leaving academia.”

They needed someone willing to completely dedicate themselves to the company who also understood what they were trying to do.

“What we're doing is not easily understandable,” Liss said. “When we first started, people were like, ‘What? You’re doing speech therapy?’ They couldn’t quite understand that speech was being used for a purpose that was beyond just the speech production. And (Daniel) got it like that.”

Moving on up

The company now employs a team of roughly 30 people, including four ASU alums, three of whom are former students of Berisha’s who now apply what they learned in their computer science and electrical engineering courses as software engineers and algorithm developers.

Aural Analytics’ technology is currently being used in clinical trials through a collaboration with Dr. Jeremy Shefner, chair of neurology at the Barrow Neurological Institute and an expert on ALS. Shefner met Liss at an ALS symposium and immediately recognized the unique value of her work.

“I was attracted to the idea that you could quantitatively assess a variety of measures of speech and do it in a way that’s easy for patients and could show changes in a very sensitive way,” Shefner said. “There are other groups looking at speech analytics technology, but at this point, I think Aural Analytics’ tech is clearly more advanced.”

So advanced that everything can be done from the patient’s home. Trial participants simply download the Aural Analytics app to their phones, then record themselves performing a variety of articulation and cognitive assessments. Those with conditions that manifest as a deterioration of motor skills, such as ALS or Parkinson’s disease, may be asked to repeat a single phrase several times, while those with conditions that manifest as a deterioration of cognitive ability, such as Alzheimer’s disease, may be asked to recall details from a story they read.

Because it’s so easy for patients to use, clinicians can sample speech far more frequently, tracking how patients progress from day to day rather than month to month or quarter to quarter. The technology also improves on previous assessment methods, which often relied only on patient questionnaires.

“The best way to characterize it right now is that we're trying to measure something that was very difficult to measure before,” Berisha said. “For example, we’re looking at how precise is your articulation, how quickly are you speaking, what is the quality of the vibration of the vocal folds. Eventually this will turn into diagnostics but right now, we sort of term it as a thermometer for speech. It's a way of objectively measuring different clinically relevant changes in speech.”
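To make that concrete, here is a minimal, illustrative sketch of the kinds of measurements Berisha describes, written in Python with the open-source librosa library. The recording name, the onset-based speaking-rate proxy and the pitch-variability measure are assumptions for the example, not Aural Analytics’ actual algorithms.

import librosa
import numpy as np

def speech_measures(path):
    # Illustrative proxies only; not Aural Analytics' method.
    y, sr = librosa.load(path, sr=16000)      # load recording as mono samples
    duration = librosa.get_duration(y=y, sr=sr)

    # Speaking-rate proxy: acoustic onsets (roughly, syllable starts) per second.
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    rate = len(onsets) / duration if duration > 0 else 0.0

    # Vocal-fold stability proxy: variability of the fundamental frequency (f0)
    # across voiced frames, estimated with the pYIN pitch tracker.
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    f0_cv = float(np.std(f0) / np.mean(f0)) if f0.size else float("nan")

    return {"duration_s": duration, "onsets_per_s": rate, "f0_cv": f0_cv}

print(speech_measures("sample.wav"))  # "sample.wav" is a hypothetical recording

Loosely speaking, a higher f0 coefficient of variation suggests less stable vocal-fold vibration; clinical-grade analytics of the kind the company builds measure such signals with far more rigor and validation.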


Aural Analytics co-founder Julie Liss brainstorms with her team of software engineers, many of whom are ASU alums.

In the future, Berisha and Liss do hope the technology can be used for early detection in patients identified as being at risk for neurological diseases, such as those with genetic predispositions or those who have endured concussions.

“With respect to concussion, if your son plays football or your daughter plays soccer, or any sport, then before the season starts, perhaps you could test them for a baseline,” Berisha said. “And then immediately following a concussion, you'd test them again and you’d be able to see how it affected their speech,” which in turn reflects how it may have affected their brain health.

Berisha and Liss have plans to collaborate with researchers at ASU’s Global Sport Institute to dive deeper into the effects of concussion. But they’ve already used their technology to test the speech of some pretty famous subjects. In 2017, they garnered media attention when a study of Muhammad Ali’s recorded speech, conducted with the Aural Analytics technology, found signs of slowed and slurred speech years before he was diagnosed with Parkinson’s disease.

While their tech is still progressing through the FDA clearance process, it is already being used on four continents in six languages, and they hope to release an in-clinic version of the app in 2020.

It’s a long road, but every step of it serves to make the technology more sound.

“In order to get to the point where you can have someone speak into a machine and be able to say, ‘You've got Parkinson's disease,’ or ‘You've got ALS,’ you need massive, massive, massive amounts of clinical data,” Liss said. “So we're making our way down that road but we're also providing value on the way, rather than just waiting for the eventuality of diagnostics.”

After a decade of working together, Liss and Berisha are still excited to go to work each morning. And they attribute much of their success to the interdisciplinary nature of their work, which they say ASU has helped facilitate.

Berisha’s dual appointment in the College of Health Solutions and the Ira A. Fulton Schools of Engineering allowed them to merge their labs and co-mentor students in varying disciplines.

“Students are learning multiple skill sets that typically don't overlap until doctoral training,” Liss said.

In addition, Berisha credits the engineering school with being especially forward-thinking when it comes to entrepreneurship, in two ways: First, the Fulton Schools offer an entrepreneurial professorship program that provides some release from teaching, as well as funding to keep professors’ labs operational while they focus on entrepreneurial pursuits. Second, the schools modified their tenure criteria to include entrepreneurship, which previously could have been seen as a distraction from obtaining tenure.

That, along with the host of other ASU resources they’ve benefited from, has made it possible to get where they are today.

“We pinch ourselves every day,” Liss said.

Top photo: ASU College of Health Solutions Associate Dean and Professor of speech and hearing science Julie Liss and Associate Professor Visar Berisha, who has a joint appointment in the College of Health Solutions and the Ira A. Fulton Schools of Engineering, review the latest updates to the technology they developed for their company Aural Analytics, which detects characteristic changes in speech patterns that appear at the earliest stages of neurological diseases. Photo by Deanna Dent/ASU Now
