The Pros and Cons of Using AI-Based Mental Health Tools


The COVID-19 pandemic has underlined the need for mental health resources and treatment. But underinvestment in mental health care has been pervasive, not just in the United States but around the world, and the data paints a grim picture: less than half of adults with mental illness receive treatment. There is a shortage of qualified therapists and psychiatrists. For those who do have access to care, the wait for an appointment can stretch for months. Many mental health issues go undiagnosed or are diagnosed long after they first appear.

In this context, there is great interest in digital mental health tools. They are seen as low-cost options that can expand access to mental health care around the world. They may be able to identify symptoms, enabling early diagnosis or flagging disorders that would otherwise go undiagnosed. One of the barriers to seeking treatment is the stigma associated with mental disorders, and digital tools can reduce that stigma. And where no therapist speaks a patient’s native language, digital tools can provide care in that language.

Unsurprisingly, a wide variety of digital, and often artificial intelligence (AI)-based, mental health and wellness tools are being marketed, ranging from simple mindfulness and meditation apps to therapy apps positioned as adjuncts (or even alternatives) to actual, in-person therapy. Generally, these latter options come in two forms (although they are not explicitly labeled as such, for regulatory reasons): psychotherapy chatbots (or interactive tools) and digital phenotypes. Woebot Health, Wysa, and myCompass are examples of interactive tools, and Cogito Companion, StudentLife, EmotionSense, MOSS, and Biewe are examples of digital phenotypes (many of which are research projects rather than commercial applications, but are driven by key stakeholders such as insurers, employers, and governments). What these two types of applications have in common is that their clinical effectiveness has not been established by long-term studies. Yet digital tools, especially psychotherapy chatbots, have already gained millions of users.

Psychotherapy chatbots

Apps like myCompass and Woebot Health are intended to treat mild to moderate depression, stress, and anxiety. By sending text prompts and emails, myCompass encourages users to self-monitor their moods, behaviors, and lifestyle changes. Similarly, Woebot Health uses daily short chats, videos, games, and mood tracking, and challenges people to examine their thoughts. These chatbots are essentially cognitive behavioral therapy (CBT) tools in digital form.

CBT is an evidence-based approach used by mental health professionals. A central tenet of CBT is that an individual’s reaction to an adverse event, not just the event itself, plays a key role in their mental well-being. Based on this idea, CBT therapists train a patient to observe their reactions and mental states and to reorient or reframe their responses to be less negative and more realistic. During CBT, patients record their thoughts (e.g., negative feelings) and are challenged repeatedly, over several sessions, to reframe them, so that the new way of thinking is reinforced and becomes habitual. Not surprisingly, CBT requires repeated interactions between patient and therapist to be effective.
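
To make that record-and-reframe loop concrete, here is a minimal, purely illustrative sketch of how a digital CBT tool might model a thought record and prompt a user to reframe it. The structure, field names, and prompts are hypothetical assumptions for illustration, not the actual design of any product mentioned in this article.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ThoughtRecord:
    """One CBT-style thought record: a triggering event, the user's
    automatic reaction, and (eventually) a reframed alternative."""
    event: str                   # what happened
    automatic_thought: str       # the user's initial reaction
    reframe: str | None = None   # filled in after the reframing step
    logged_at: datetime = field(default_factory=datetime.now)

# Hypothetical reframing prompts a chatbot might cycle through.
REFRAME_PROMPTS = [
    "What evidence supports this thought? What evidence contradicts it?",
    "How might a friend view this same situation?",
    "What is a more balanced way to describe what happened?",
]

def reframe_session(record: ThoughtRecord) -> ThoughtRecord:
    """Walk the user through reframing one recorded thought.
    In a real app this dialogue would run in a chat UI; input() stands in."""
    print(f"Event: {record.event}")
    print(f"You thought: {record.automatic_thought}")
    for prompt in REFRAME_PROMPTS:
        print(prompt)
    record.reframe = input("Write a more balanced version of that thought: ")
    return record

if __name__ == "__main__":
    r = ThoughtRecord(
        event="My message to a friend went unanswered all day",
        automatic_thought="They must be angry with me",
    )
    reframe_session(r)
```

Repeated over many such records, this reframing step is what the daily “short chats” in these apps are approximating.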

Chatbots attempt to implement CBT interventions (originally designed for in-person sessions) digitally, but there are some challenges, including the following:

  • CBT is typically delivered through sessions lasting 30 minutes to an hour. With digital apps, users spend less time per session but engage more frequently. Will CBT be effective under that pattern?
  • A therapist observes many contextual cues and adapts sessions accordingly. Would a digital assistant be up to the task?
  • The effectiveness of therapy depends on the level of trust and rapport established between counselor and patient. Can this trust be reproduced by digital applications? Even if they can be programmed to seem caring, in the end that is a simulation of empathy rather than true empathy, isn’t it?

One of the potential benefits of digital apps is that they can assemble a more detailed picture of a user’s mental state from daily (or even more granular) logs, compared to the less frequent weekly or monthly self-reporting that is the norm for in-person CBT. But that assumes users will dutifully record and report their moods and emotional triggers in the app, and even then the subjective bias of self-reporting remains. Psychotherapy chatbots certainly have a role to play in mental health interventions, but their clinical effectiveness must first be proven, and best practices for digital CBT must then be codified.

Digital phenotypes

Another active area of research and interest relates to digital phenotypes in mental health. Digital phenotypes refer to AI models that infer a user’s mental states, emotions, and behaviors from data collected on their smartphone. Given the ubiquity of smartphones and the amount of time users spend on them, a digital phenotype can be very useful in establishing baseline behaviors at the individual level. For example, sleep cycles, speech patterns, social interactions, cognitive functioning, physical movement, and several other facets can all be inferred by analyzing data from smartphones and wearable devices.

Two types of data serve as inputs to a digital phenotype: active data and passive data. Active data refers to data provided by users in response to nudges, prompts, and questions while using the mental health app. Passive data is collected in the background as users go about their daily business. For example, the number of steps taken in a day, the number of hours slept, the time spent in apps, the time spent on phone calls, and all digital interactions (clicks, taps, and scrolls on the phone) are automatically recorded. A user’s call data, SMS data, social data, and activity data contain many clues about their mental state.
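
To make the distinction concrete, here is a minimal sketch of how an app might represent the two streams. The field names and example values are hypothetical, not any vendor’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ActiveSignal:
    """Data the user supplies in response to a nudge or prompt."""
    timestamp: datetime
    prompt: str      # e.g., "How is your mood right now?"
    response: str    # the user's self-reported answer

@dataclass
class PassiveSignal:
    """Data collected in the background as the user goes about their day."""
    timestamp: datetime
    source: str      # e.g., "pedometer", "screen", "call_log", "gps"
    metric: str      # e.g., "steps", "hours_slept", "call_minutes"
    value: float

# Example records of each kind.
mood = ActiveSignal(datetime.now(), "How is your mood right now?", "a bit anxious")
steps = PassiveSignal(datetime.now(), "pedometer", "steps", 4200.0)
```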

Once basic behavioral patterns are established during an initial period of use, any deviation can trigger an alert that someone may be going through a rough patch. Smartphone sensors such as GPS, the accelerometer, the keyboard, and the microphone can pick up changes in speech patterns, activity rhythms, and so on, and can be used to detect possible depressive tendencies.
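
One common and deliberately simplified way to operationalize “deviation from baseline” is a z-score check: estimate a metric’s mean and spread over the initial period, then flag new readings that fall far outside it. The sketch below assumes hours of sleep as the metric and a threshold of 2.5 standard deviations; both are illustrative choices, not clinically validated rules.

```python
import statistics

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Mean and standard deviation of a metric over the initial period."""
    return statistics.mean(history), statistics.stdev(history)

def deviation_alert(value: float, baseline: tuple[float, float],
                    threshold: float = 2.5) -> bool:
    """Flag a reading more than `threshold` standard deviations away
    from the user's own baseline (threshold chosen for illustration)."""
    mean, stdev = baseline
    if stdev == 0:
        return False
    return abs(value - mean) / stdev > threshold

# Example: hours slept per night during a two-week baseline period.
sleep_history = [7.2, 6.8, 7.5, 7.0, 6.9, 7.3, 7.1,
                 6.7, 7.4, 7.0, 7.2, 6.8, 7.1, 7.3]
baseline = build_baseline(sleep_history)

print(deviation_alert(7.0, baseline))  # False: within the normal range
print(deviation_alert(3.5, baseline))  # True: a sharp drop worth a check-in
```

A production system would fuse many such signals and be far more careful about thresholds; as the concerns below note, a naive rule like this is exactly where false positives come from.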

Such cues can be used to personalize a digital CBT application or can lead to a recommendation to see a human therapist. Advocates of digital phenotyping in mental health argue that smartphone data can lead to earlier and more accurate diagnoses than traditional approaches. Simply put, data and AI can unlock better diagnoses, better treatments, and better care.

Concerns about digital phenotypes

Digital phenotypes raise several concerns (perhaps even more so than CBT chatbots), including the following:

  • The user data collected is highly sensitive, yet many current apps are vague about their data protection, data privacy, and data sharing practices. In addition, a high level of data security must be in place to guard against breaches and attacks.
  • If, as current thinking holds, mental health disorders lie on a spectrum, it is difficult to decide where to draw the line between classifying an individual as ill or not. This raises the specters of subjectivity (in a seemingly objective, data-driven approach), false negatives, and false positives.
  • False negatives deprive deserving people of access to the treatments they need.
  • False positives can be triggered by a temporary change in a user’s activity or habits for innocuous reasons (e.g., a bout of the flu). If that data is saved and shared with other parties, it may become a permanent part of the person’s record.
  • The AI technologies that digital phenotypes rely on, such as natural language processing (NLP) and speech recognition, only work well for certain languages and regions. NLP performs well for English and a dozen other languages, but does not reach the same level of accuracy for most of the world’s languages. The same goes for speech recognition.
  • Cultural norms and individual differences play a role in the effectiveness of treatment; accounting for them is something a trained practitioner is good at. Phenotypes that work in one context do not necessarily work in a different context or culture.

In summary, digital tools and AI hold great promise for the diagnosis and treatment of mental health conditions, but there is still a long way to go. We need higher clinical standards, regulatory oversight, strong privacy and data protection practices, transparency about AI methods, compliance with responsible AI principles, and validation through large-scale clinical trials. Mental health is too important an area for a “move fast and break things” attitude. We must proceed with caution, given what is at stake.
