Speech recognition tool can predict worsening congestion in HF

Early findings suggest that language changes in stable patients can portend hospitalizations, providing a window of opportunity to intervene.

MADRID, Spain — There is more encouraging, albeit preliminary, evidence that a smartphone app using speech analysis technology can help identify heart failure patients at risk of impending decompensation leading to hospitalization.

According to William Abraham, MD (Ohio State University, Columbus), who presented the Cordio HearO community study during a late-breaking clinical trial session at the 2022 Congress of the European Society of Cardiology for Heart Failure (ESC HF), the technology has the potential to reduce hospitalizations for acute decompensated heart failure (ADHF) and improve patients’ quality of life, while saving on the costs of unnecessary hospitalizations.

“The idea really came from clinical observations that in patients who you see coming into the clinic or being admitted to hospital with decompensated heart failure, you can hear changes in their voice – their breathing, their speech, it’s just different,” Abraham told TCTMD. “So the idea is, could you use speech processing and analysis algorithms to detect these changes earlier, before you can actually hear them? This is intended to provide an early warning of decompensating heart failure.”

Other studies using implantable hemodynamic monitors have established that patients actually start getting congested about 2 to 3 weeks before they need acute care, Abraham continued.

“It’s not like all of a sudden they develop pulmonary edema overnight and end up in the hospital. It’s kind of a slow process of decompensation,” he explained. “Our hope was that through speech analysis we could detect these changes very early and provide a very simple, noninvasive tool to identify patients who are at risk of worsening heart failure or being hospitalized for heart failure.”

Changes in lung fluid

The Cordio HearO speech analysis technology uses a patient-specific model to detect changes in lung fluid, which can lead to a number of physiological changes, including swelling of the soft tissues of the vocal tract as well as swelling of the vocal cords.

Abraham and his colleagues have previously reported results from the ACUTE study using the same technology, showing that automated speech analysis could differentiate between a “wet” clinical state during hospitalization for ADHF and a “dry” clinical state at hospital discharge. The current study focuses on stable patients who recorded daily speech samples that were then analyzed retrospectively to see whether detectable changes predicted subsequent decompensation and hospitalization.

The Cordio HearO community study set out to analyze patient speech data during periods of HF remission to see whether impending deterioration could be detected in speech patterns and, if so, when. The study has recruited 430 patients with congestive heart failure to date, who record five sentences into a smartphone app each morning.

In the preliminary analysis reported by Abraham at ESC HF, 460,000 recordings were analyzed from 180 patients who, over a 2-year period, experienced a total of 49 heart failure decompensation events, 39 of which were first events.

As Abraham showed here, about half of the 180 Israeli patients enrolled so far are women, and the majority were Hebrew or Russian speakers, with a minority speaking Arabic or English. Only 15% had a reduced ejection fraction.

When the first events of HF decompensation – hospitalization or change of medication – were analyzed against the speech-detected changes, the speech analysis algorithm successfully predicted an event in 82% of cases, while the system delivered a false negative in 18% of patients.

A sensitivity of 82% for a noninvasive technology is “really fantastic if you put it in the context of today’s standard of care, which is to monitor daily weight change, which only has a sensitivity of about 10% to 20% to predict heart failure hospitalization,” Abraham told TCTMD.

A false positive occurred about once every 5 months per patient, Abraham said, suggesting the app may also detect changes in lung status unrelated to heart failure. He noted, however, that a similar activation of the healthcare cascade can occur when patients report symptoms that do not ultimately turn out to be indicative of worsening heart failure.

Of note, the system was able to predict subsequent HF events about 18 days in advance, he pointed out, “which agrees very well with what we know about the time course of decompensation and suggests that the speech analysis processing application could detect worsening heart failure events well in advance. This can give us a window of opportunity to intervene and prevent these worsening heart failure events.”

In general, Abraham said, patients had positive experiences with the app, with about four in five being highly compliant with their daily recordings. A similar phenomenon was seen with CardioMEMS, which allowed patients to be “more deeply engaged” with the technology and therefore, he told TCTMD, “do a better job of taking care of themselves. I think they also feel like they are regaining a bit more control over their disease.”

The next stage of this technology will be an interventional study to determine whether acting on system alerts with treatment modifications could help avoid hospital admissions. “This approach has the potential to reduce ADHF hospitalizations and improve patient quality of life and economic outcomes, but of course we need to show this now in larger randomized clinical studies,” Abraham said.

Speaking with TCTMD, Abraham pointed out that a wide range of speech recognition and processing technologies are currently being investigated, both in the field of heart failure and beyond, calling it an exciting research area. As for this specific technology, “this is the first time we’ve studied this technology in the outpatient setting” for patients with heart failure, he noted, “which is really exciting.”

After his late-breaking presentation on Saturday, panelists and audience members asked whether he had investigated whether the voice-analysis technology worked better or worse with different accents or languages, or whether factors like humidity and temperature, which could affect speech, made a difference.

Addressing the issue of accents and languages first, Abraham noted that a trial is just beginning in the United States, so it will be interesting to see how the technology performs with, for example, southern US accents compared with northern ones. “I think it worked well in several different languages, so it will probably work well in different dialects in the United States and other geographies as well,” he predicted.

As for how climate or geography might influence interpretation, “that’s a great question,” he said. “As far as I know, it works the same in different kinds of environmental conditions,” Abraham continued. “But don’t take that answer to the bank, because I don’t know if it’s well researched. I will have to study this for you.”
