Overview

Artificial intelligence (AI) is a buzzword, widely considered the "next big thing" in health care, and it is touted as holding tremendous promise for health and mental healthcare systems. In general, many healthcare and mental health professionals are not familiar with AI, are distrustful of it, or are skeptical of its application in health and mental health care. The purpose of this course is to familiarize participants with AI and its potential applications in health and mental health care. The merits and limitations of AI will be reviewed, as will factors that facilitate and impede the integration of AI into service delivery. Ethical concerns will also be addressed.

Education Category: Management
Release Date: 04/01/2025
Expiration Date: 03/31/2028

Table of Contents

Audience

This course is designed for dental professionals who interact with AI technologies.

Accreditations & Approvals

NetCE Nationally Approved PACE Program Provider for FAGD/MAGD credit. Approval does not imply acceptance by any regulatory authority or AGD endorsement. 10/1/2021 to 9/30/2027 Provider ID #217994. NetCE is an ADA CERP Recognized Provider. ADA CERP is a service of the American Dental Association to assist dental professionals in identifying quality providers of continuing dental education. ADA CERP does not approve or endorse individual courses or instructors, nor does it imply acceptance of credit hours by boards of dentistry. Concerns or complaints about a CE provider may be directed to the provider or to ADA CERP at www.ada.org/cerp. NetCE is approved as a provider of continuing education by the Florida Board of Dentistry, Provider #50-2405. NetCE is a Registered Provider with the Dental Board of California. Provider Number RP3841. Completion of this course does not constitute authorization for the attendee to perform any services that he or she is not legally authorized to perform based on his or her license or permit type.

Designations of Credit

NetCE designates this activity for 3 continuing education credits. AGD Subject Code 550. This course meets the Dental Board of California's requirements for 3 unit(s) of continuing education. Dental Board of California course #03-3841-25462.

Course Objective

The purpose of the course is to familiarize participants with AI and its potential applications in health and mental health care.

Learning Objectives

Upon completion of this course, you should be able to:

  1. Outline the development and potential uses of AI in the context of health and mental health.
  2. Outline practice considerations related to AI use, including barriers, limitations, ethical concerns, and collaboration.
  3. Assess the limitations and the barriers of AI in day-to-day practice for providers.
  4. Explain ethical concerns of the use of AI in health and mental health care.
  5. Discuss collaborations between AI and practitioners.

Faculty

Alice Yick Flanagan, PhD, MSW, received her Master's in Social Work from Columbia University, School of Social Work. She has clinical experience in mental health in correctional settings, psychiatric hospitals, and community health centers. In 1997, she received her PhD from UCLA, School of Public Policy and Social Research. Dr. Yick Flanagan completed a year-long post-doctoral fellowship at Hunter College, School of Social Work, in 1999. In that year, she taught the course Research Methods and Violence Against Women to master's degree students and conducted qualitative research studies on death and dying in Chinese American families.

Previously a faculty member at Capella University and Northcentral University, Dr. Yick Flanagan is currently a contributing faculty member at Walden University, School of Social Work, and a dissertation chair at Grand Canyon University, College of Doctoral Studies, working with Industrial Organizational Psychology doctoral students. She also serves as a consultant/subject matter expert for the New York City Board of Education and for publishing companies developing online curricula, including practice MCAT questions in psychology and sociology. Her research focuses on culture and mental health in ethnic minority communities.

Faculty Disclosure

Contributing faculty, Alice Yick Flanagan, PhD, MSW, has disclosed no relevant financial relationship with any product manufacturer or service provider mentioned.

Division Planner

Mark J. Szarejko, DDS, FAGD

Division Planner Disclosure

The division planner has disclosed no relevant financial relationship with any product manufacturer or service provider mentioned.

Director of Development and Academic Affairs

Sarah Campbell

Director Disclosure Statement

The Director of Development and Academic Affairs has disclosed no relevant financial relationship with any product manufacturer or service provider mentioned.

About the Sponsor

The purpose of NetCE is to provide challenging curricula to assist healthcare professionals to raise their levels of expertise while fulfilling their continuing education requirements, thereby improving the quality of healthcare.

Our contributing faculty members have taken care to ensure that the information and recommendations are accurate and compatible with the standards generally accepted at the time of publication. The publisher disclaims any liability, loss or damage incurred as a consequence, directly or indirectly, of the use and application of any of the contents. Participants are cautioned about the potential risk of using limited knowledge when integrating new techniques into practice.

Disclosure Statement

It is the policy of NetCE not to accept commercial support. Furthermore, commercial interests are prohibited from distributing or providing access to this activity to learners.

Technical Requirements

Supported browsers for Windows include Microsoft Internet Explorer 9.0 and up, Mozilla Firefox 3.0 and up, Opera 9.0 and up, and Google Chrome. Supported browsers for Macintosh include Safari, Mozilla Firefox 3.0 and up, Opera 9.0 and up, and Google Chrome. Other operating systems and browsers that include complete implementations of ECMAScript edition 3 and CSS 2.0 may work, but are not supported. Supported browsers must utilize the TLS encryption protocol v1.1 or v1.2 in order to connect to pages that require a secured HTTPS connection. TLS v1.0 is not supported.

Implicit Bias in Health Care

The role of implicit biases in healthcare outcomes has become a concern, as there is some evidence that implicit biases influence professionals' attitudes toward and interactions with patients, quality of care, diagnoses, and treatment decisions, contributing to health disparities. This may produce differences in help-seeking, diagnoses, and ultimately treatments and interventions. Implicit biases may also unwittingly produce professional behaviors, attitudes, and interactions that reduce patients' trust and comfort with their provider, leading to earlier termination of visits and/or reduced adherence and follow-up. Disadvantaged groups are marginalized in the healthcare system and vulnerable on multiple levels; health professionals' implicit biases can further exacerbate these existing disadvantages.

Interventions or strategies designed to reduce implicit bias may be categorized as change-based or control-based. Change-based interventions focus on reducing or changing cognitive associations underlying implicit biases. These interventions might include challenging stereotypes. Conversely, control-based interventions involve reducing the effects of the implicit bias on the individual's behaviors. These strategies include increasing awareness of biased thoughts and responses. The two types of interventions are not mutually exclusive and may be used synergistically.

#51450: AI in Health Care

INTRODUCTION

Artificial intelligence, or AI, is an umbrella term that refers to a field of computer science that focuses on the use of computers, technology, and intelligence systems to simulate human thinking activities and execute human-level problem-solving [1,2]. AI has been employed to improve diagnosis, facilitate and expedite the discovery of new drugs, organize and manage healthcare data, and perform surgery [3]. In the area of mental health care, AI can be used to enhance the patient experience by automating schedule reminders, sending out personalized health tips and recommendations, offering chatbots and virtual assistants to talk to individuals who may be in distress or in crisis, and performing risk assessments [3,4].

Despite AI's growth, many health and mental health professionals are not familiar with AI applications and how they can be specifically integrated and applied in their day-to-day practices. Furthermore, it is often viewed with ambivalence.

The goal of the course is to provide health and mental healthcare professionals with a basic overview of artificial intelligence and its current applications in the health and mental health sectors. A discussion of the facilitators of and barriers to its use will be presented. In addition, the array of limitations and challenges presented by AI technology will be explored.

AN OVERVIEW OF ARTIFICIAL INTELLIGENCE (AI)

DEFINITION

As noted, AI is an umbrella term that generally refers to the use of computers, technology, and intelligence systems to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making [1,2,5]. AI technology gives computers cognitive abilities to complete tasks that humans generally perform, such as perceiving, learning, recognizing, predicting, and generating rules [6]. For example, when the AI application ChatGPT is provided with a prompt (e.g., "Outline the differences between ice cream and sorbet"), it will generate the outline from patterns learned during training (supplemented, in some versions, by live web search). It can also help authors with their writing tone. For example, an author can upload several paragraphs to ChatGPT, which can then edit the paragraphs so that they sound less technical and more informal and humorous. Voice-activated searches on smartphones are also an example of AI: a search task that once required human performance can now be conducted via technology activated by voice. It is estimated that half of cell phone users use voice-activated searches daily [7].
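To make this concrete, the brief Python sketch below shows how such a prompt can be sent to a chat model programmatically. This is a minimal illustration only: it assumes the openai Python package is installed and an OPENAI_API_KEY environment variable is set, and the model name shown is illustrative rather than a recommendation.

    # Minimal sketch of prompting a chat model. Assumes the `openai`
    # package is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "user",
             "content": "Outline the differences between ice cream and sorbet."},
        ],
    )

    print(response.choices[0].message.content)  # the generated outline

In practice, most users interact with this same prompt-and-response pattern through the ChatGPT web interface rather than through code.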

MARKET OVERVIEW

The AI market is projected to grow to $407 billion by 2027, a huge increase compared with the $86 billion revenue reported in 2022 [7]. In 2024, the AI in healthcare market was estimated to be worth $14.9 billion; by 2030, it is projected to grow to $164 billion [8]. In 2020, 90% of healthcare organizations indicated they use AI for automation [9]. In the area of mental health, the use of AI is also expected to increase rapidly. In 2023, in the United States, mental health app market revenue was estimated to be $5.72 billion, with projected growth to $16.5 billion by 2030 [10]. It is expected that hospitals and clinics will be the largest users of AI in the mental health market, primarily driven by the prevalence rates of mental health conditions and the need for diagnoses and treatments [11]. As of 2024, North America is the dominant player in the global AI market [11].

HISTORICAL EVOLUTION OF AI USE

On the surface, it seems as if AI only recently emerged in health and mental health care, but this is not true. In 1950, Alan Turing proposed a test for intelligence in a computer, requiring that a human be unable to distinguish the machine from another human on the basis of replies to questions put to both [12]. This has become known as the Turing test [13]. In 1951, Christopher Strachey developed the first AI program, and it was John McCarthy who subsequently employed the term artificial intelligence to refer to the use of science and engineering to make machines intelligent [13,14]. In 1956, McCarthy organized the Dartmouth Summer Conference on AI, which drew leading scientists, researchers, mathematicians, and engineers to start a dialogue about AI and its practical uses. Some say that this conference laid the foundation for the field of AI [12]. In terms of initial applications of AI, the first industrial robot appeared on General Motors' assembly line in 1961, and a few years later, the first chatbot, ELIZA, developed by Joseph Weizenbaum at MIT, emerged on the scene [13]. ELIZA followed the tenets of Carl Rogers' person-centered therapy, rephrasing and repeating users' statements in order to mimic a human conversation [15]. Over the years, other chatbots followed, including A.L.I.C.E. and Apple's Siri, which laid the foundation for AI personal assistants [15].

Many historians have labeled the 30-year period between 1970 and 2000 as encompassing the AI winters, times of fewer AI developments [13]. The first AI winter occurred in the late 1970s, during which time many doubted the practical uses of AI. The second AI winter occurred during the late 1980s into the early 1990s, when an AI lag was prompted by the financial cost of AI development and the maintenance of expert digital databases [13]. However, there were still some noteworthy developments. Deep Blue, a chess-playing computer, played against the world chess champion in 1997 and won [16]. In 2011, Watson, another computer, won a Jeopardy game played against the two leading rival players [16].

In 2000, the da Vinci Surgical System was introduced as the first robotic surgical platform approved by the U.S. Food and Drug Administration (FDA) [17]. Between 2010 and 2020, electronic health records and large-scale healthcare databases became more prominent, and the question of how to leverage AI with health data emerged. As noted, Apple introduced Siri, a virtual assistant, on iPhones in 2011, and in 2014, Alexa, another virtual assistant, was released by Amazon [13]. Today, ChatGPT and Copilot are used in many arenas, and this has led to the surfacing of many potential implications and ethical conflicts associated with AI use in various fields and disciplines, including health and mental health care. One of the main concerns is that many AI applications (e.g., ChatGPT) are not Health Insurance Portability and Accountability Act (HIPAA) compliant, and content shared with AI applications is generally added to the associated database and shared.

ATTITUDES TOWARD AI

Overall, AI is viewed with ambivalence and concern. A bit more than half of Americans are concerned about the use of AI in day-to-day life; only 10% express more excitement than concern, and 36% are ambivalent [18]. As of 2023, an estimated 58% of adults in the United States had heard of ChatGPT, while 42% were not familiar with it at all [18]. Pew Research conducted surveys about the public's attitudes toward and knowledge of AI in 2022 and 2023. Those who had heard about AI in 2022 were 16 points more likely to express more concern than excitement about AI one year later [19]. Another survey, conducted in the United Kingdom, found that twice as many of those surveyed thought AI would bring more benefits than risks (28% vs. 14%). Almost 33% believed it would have a positive impact in the area of health care [20]. In a separate survey study with 427 U.S. participants, respondents were concerned that AI proliferation would result in the loss of human contact and personal interactions with healthcare providers, with clinicians ending up as passive decision-makers [21].

In general, it appears that many health and mental health professionals are not familiar with AI, are distrustful of AI, and are perhaps even skeptical of its application in health and mental health care. Elsevier, a leading academic publishing company, conducted a global online survey with a sample of 1,999 clinicians and researchers from 123 countries [22]. This study found that 54% of respondents had used AI, and 31% had utilized it for work. Slightly more respondents in China (39%) reported using AI applications compared with the United States (30%) or India (22%) [22]. When asked to identify an AI product, respondents most frequently cited ChatGPT [22]. In a 2024 national Chinese study involving 1,243 nursing students, nursing professionals, and other healthcare professionals, 57% reported very little knowledge about AI, and almost 66% indicated that they were not familiar with the role of AI in nursing [2]. Almost all (95%) believed that there could be imminent concerns with AI in health care and that more work will be needed in the area of AI ethics in health care.

Most individuals, as well as health and mental health providers, prefer to have a human provider, even when told that AI could perform the same task better or more accurately. While many professionals appreciate the benefits of AI, they believe it should be used as a supplement to a human provider [23]. Overall, while there is a good amount of empirical research on the role of AI in health care and medicine, much of the work is descriptive and experimental [24]. In a bibliometric analysis of the 100 most commonly cited articles about AI in medicine, researchers found that the majority were published after 2000 and that oncology appears to be at the forefront in implementing AI in practice, with cardiovascular medicine lagging [25].

BENEFITS OF AI IN HEALTH AND MENTAL HEALTH CARE

Many have posited that AI will revolutionize the health and mental health fields, offering multiple benefits. Its potential benefits are briefly summarized here; however, specific applications will be discussed in the next section, which will further illustrate AI's practical uses.

INCREASE SERVICE ACCESS

AI-based delivery of health and mental health services can promote equity and justice, particularly for underserved and marginalized populations that may not have easy access to services. These technologies could enhance remote monitoring and the provision of telehealth and mental health services [26]. Some have argued that AI could play a role in democratizing health and mental health care and reducing disparities, long wait lists, costs, and other systemic barriers [27].

ENHANCED DIAGNOSTIC ACCURACY

AI has been found to increase the accuracy of detection and identification of various conditions, which can then facilitate improved prevention or early intervention efforts [23].

PERSONALIZED TREATMENT AND SUPPORT

AI can tailor treatments and interventions to individual patients'/clients' needs based on the synthesis of clinical data, history, and demographics [14]. Individuals can also receive personalized reminders and support to assist in treatment adherence.

PROMOTE PUBLIC HEALTH SURVEILLANCE

AI can rapidly aggregate, synthesize, and analyze large public health datasets to track and predict health and mental health trends. This can be used to guide decisions related to public health policy and prevention efforts [14].

INCREASE EFFICIENCY AND PRODUCTIVITY

AI platforms can automate and streamline clinical day-to-day administrative tasks, reducing costs and time burdens [28].

IMPROVED DATA ANALYSIS

AI platforms can also rapidly aggregate large volumes of data from multiple datasets. This can be used to identify patterns, quickly obtain the most up-to-date evidence-based recommendations, and implement these approaches in practice [4,29].

APPLICATIONS OF AI IN HEALTH AND MENTAL HEALTH CARE

In 2020, 90% of healthcare organizations indicated they use AI for automation; this has likely increased in the years since [9]. As noted, AI has been employed to improve diagnosis, facilitate and expedite the discovery of new drugs, organize and manage healthcare data, and perform surgery [3]. The consensus is that AI applications have the potential to improve health and mental healthcare delivery, reduce costs, and enhance efficiency and possibly efficacy from the diagnosis through treatment/intervention stages. This section will explore a few examples. When specific AI applications are mentioned, the examples are used for illustrative purposes only and do not indicate endorsement.

ASSESSING, DIAGNOSING, AND INTERPRETING IMAGES

AI has been used to assess, diagnose, and interpret health and mental health conditions via a variety of methods, but perhaps the most obvious is through the analysis of imaging. AI applications can analyze images such as x-rays, computed tomography (CT) scans, and magnetic resonance imaging (MRI) to detect abnormal patterns. This approach can decrease human error and support more accurate diagnoses. For example, in a dataset of mammograms, an AI system was able to interpret the mammograms and make diagnoses of breast cancer, with a reduction in false positives of 5.7% and in false negatives of 9.4% [14]. In this case, the AI system was more sensitive and accurate than the radiologists (90% vs. 78% accuracy rates, respectively) [14]. In a systematic review and meta-analysis, researchers concluded that AI demonstrated a high level of accuracy in detecting lung cancer, which has positive implications for early diagnosis [30]. AI platforms can also assist with the differential diagnosis of disorders that present with similar clinical symptoms, such as types of dementia or depression. AI systems can help to differentiate between these disorders by examining brain imaging and structural MRI scans [31]. Ultimately, accurate diagnosis is the linchpin of effective treatment planning, and the use of AI technology to improve diagnosis is an exciting advancement on this front.
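As a simplified, hypothetical sketch of how an imaging classifier produces a probability for an abnormal finding, the Python example below scores a single image with an untrained stand-in network built from PyTorch/torchvision. It is not the mammography system described above; the file name and the two-class output are assumptions for illustration.

    # Hypothetical sketch: scoring one image with a two-class model.
    # The network here is untrained, so its output is meaningless;
    # a real diagnostic system is trained and validated on clinical data.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    model = models.resnet18()  # architecture only, no trained weights
    model.fc = torch.nn.Linear(model.fc.in_features, 2)  # [normal, abnormal]
    model.eval()

    image = preprocess(Image.open("scan.png").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)

    print(f"P(abnormal) = {probs[0, 1].item():.2f}")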

Voice data can be analyzed using AI technology, examining volume, tone, breathing patterns, and vocal cord vibrations to assess and screen for type 2 diabetes, stroke, Parkinson disease, depression, post-traumatic stress disorder (PTSD), schizophrenia, heart conditions, larynx cancer, speech disorders, and autism [32]. In these cases, the human voice can serve as a digital biomarker for disorders that affect the voice. In one systematic literature review involving 145 studies, Parkinson disease was the condition most commonly studied using voice data as a digital biomarker [33]. However, the researchers found that the studies included in their review had limited and unbalanced datasets and focused primarily on diagnostic detection rather than longitudinal monitoring [33].
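The following minimal Python sketch illustrates the general idea of converting a voice recording into numeric features (here, mel-frequency cepstral coefficients computed with the librosa library) that a screening classifier could take as input. The file name and feature choices are illustrative assumptions, not the methods of the studies cited above.

    # Sketch: turn a voice recording into a fixed-length feature vector
    # of the sort a screening classifier could use as a digital biomarker.
    import numpy as np
    import librosa

    signal, sr = librosa.load("voice_sample.wav", sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)

    # Summarize each coefficient over time (mean and spread)
    features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
    print(features.shape)  # (26,): the input to a downstream classifier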

AI-integrated virtual realities can mimic real life and provide health and mental health practitioners an opportunity to virtually assess patients/clients for various health and mental health conditions. When patients/clients enter the virtual environment, they respond and behave as they would in real life, and practitioners can observe and measure physiological changes and symptoms, such as anxiety, fear, paranoia, and other emotional reactions [34].

AI-trained chatbots can perform triage to assess whether certain symptoms and/or the severity of symptoms warrant emergent care. The chatbot can determine whether the patient must see a professional and book an appointment [35].
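A toy, rule-based sketch of this kind of triage logic appears below. The symptom rules and thresholds are invented for illustration and are not clinical guidance; production triage chatbots rely on far more sophisticated, validated models.

    # Toy rule-based triage of the kind a chatbot might apply.
    # All rules here are invented for illustration only.
    def triage(symptoms: set) -> str:
        emergent = {"chest pain", "shortness of breath", "suicidal thoughts"}
        if symptoms & emergent:
            return "emergent: route to immediate care"
        if len(symptoms) >= 3:
            return "urgent: book an appointment within 48 hours"
        return "routine: self-care guidance and follow-up"

    print(triage({"cough", "fever"}))  # routine: self-care guidance and follow-up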

Risk assessments are critical in health and mental health systems, and AI technologies are being employed in this arena as well. The Mayo Clinic has developed an AI model that can detect a weak heart pump in patients with no symptoms. The model can also estimate a patient's risk of stroke or myocardial infarction in the years to come [36].

PREDICTIVE ANALYSIS

Clinical prediction and analysis integrated with AI algorithms can rapidly analyze data in order to identify patterns and correlations with precision and accuracy [37]. As a result, AI can predict patients' risks of readmission, relapse, and/or complications. AI algorithms are able to analyze medical histories, demographics, patient/client records and charts, and lifestyle information to predict health and mental health problems with some precision [14]. This technology can forecast disease and psychosocial problems and identify at-risk populations to inform public health surveillance initiatives [37]. One example is UCLA's California Policy Lab, where researchers aggregated data from 90,000 individuals who were users of various social services. Their AI algorithm was able to predict who might end up unhoused [38].
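To show the general shape of such a prediction model, the following Python sketch trains a logistic regression readmission-risk model on synthetic data using scikit-learn. All features, data, and outcomes are invented for the example; a real model would use validated clinical variables and careful evaluation.

    # Illustrative readmission-risk model on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))  # e.g., age, prior admissions, etc.
    y = (X[:, 1] + rng.normal(size=1000) > 1).astype(int)  # synthetic outcome

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    risk = model.predict_proba(X_test)[:, 1]  # probability of readmission
    print(f"AUC = {roc_auc_score(y_test, risk):.2f}")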

AI has also been used to analyze content from social media posts, blogs, online forums, and other online sources to assess mental health public sentiment and trends [39]. At the University of Denver, School of Social Work, an AI algorithm was developed to predict substance use disorder by analyzing Facebook posts. The algorithm was 80% accurate in predicting substance use, compared with 30% accuracy using traditional statistical models [40].

Personal sensing, defined as collecting and analyzing data from sensors embedded in the context of daily life with the aim of identifying human behaviors, thoughts, feelings, and traits, often with the aid of AI, has the potential to predict, measure, and monitor individuals' mental health [41]. For example, AI technology can evaluate content from social media posts and data from smart watches, smartphones, and other wearable health devices to identify behavioral patterns and changes and correlate them with specific health and mental health conditions [35,41]. Natural language processing algorithms can monitor conversations from text messages, emails, and social media posts to identify key words and changes in language semantics and syntax (e.g., length of messages, longer intervals between texts, posts, or calls) that might indicate an increased risk of depression, suicide, or anxiety [35,41].
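The toy Python sketch below illustrates the kinds of simple signals such monitoring might track: message length, intervals between messages, and keyword counts. The messages, keywords, and metrics are invented for illustration and would not, on their own, constitute a validated screening method.

    # Toy example of message-pattern signals a personal-sensing
    # system might compute; all content here is invented.
    from datetime import datetime

    messages = [
        ("2024-05-01 09:00", "Had a great morning walk!"),
        ("2024-05-03 22:15", "tired. can't sleep again"),
        ("2024-05-07 23:40", "everything feels pointless"),
    ]

    risk_words = {"tired", "pointless", "hopeless"}  # illustrative only
    times = [datetime.strptime(t, "%Y-%m-%d %H:%M") for t, _ in messages]

    gaps = [(b - a).days for a, b in zip(times, times[1:])]
    lengths = [len(text.split()) for _, text in messages]
    hits = sum(w in text.lower() for _, text in messages for w in risk_words)

    print(f"average gap: {sum(gaps) / len(gaps):.1f} days")
    print(f"average length: {sum(lengths) / len(lengths):.1f} words")
    print(f"keyword hits: {hits}")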

TREATMENT AND INTERVENTION PLANNING

Today, AI is widely used for the delivery of care and treatment/intervention planning. For example, AI is incorporated into robotic systems that work alongside surgeons performing complicated surgical procedures. The da Vinci Surgical System provides real-time guidance to surgeons to increase accuracy and precision in cardiac, gynecological, pediatric, urologic, and general surgeries [17]. As of 2025, approximately 75% of prostate cancer surgeries are performed using the da Vinci system.

Robots have also been utilized to assist patients requiring rehabilitation, given the rising costs of hiring a professional caregiver to help with activities of daily living for patients with dementia or those recovering from stroke or other health conditions [42]. Because of their memory loss, patients with Alzheimer disease or other dementias are at increased risk of malnutrition and dehydration. With the help of AI-driven robots, caregivers can monitor patients' behaviors in real time and generate audio reminders [42].

Virtual reality interventions using AI platforms have been used in both the mental health and physical rehabilitation fields. Immersive body system feedback powered by AI can provide a tracking system for movements and engage patients in physical activity [42]. Personalization is often key to physical therapy planning, and these tools allow patients to receive real-time guidance, education, resources, and support. Furthermore, patients who may have limited access to services due to geographic or transportation constraints can use these interventions in their home.

AI can be used to predict the treatments or interventions that are likely to be successful for an individual by analyzing available data, circumstances, clinical history, and treatment context [23]. AI algorithms can provide risk estimates for each treatment option and, based on decision models, offer recommendations for the most appropriate approach to treatment for the specific patient [43]. As discussed, chatbots are able to collect patient/client data, assess symptoms, and then synthesize these data to recommend treatments and interventions. If a patient presents an imminent safety risk, the chatbot can notify a health or mental health professional for immediate intervention [4]. The AI-driven small-data platform CURATE.AI is one example of an AI system that evaluates individual patient data to customize dose recommendations [44,45]. These tools do not remove final decision-making from the clinician but act as supplements to traditional practice [45].

Mental health digital apps supported by AI-automated conversational agents can provide emotional support and psychoeducation, with immediate real-time interaction for individuals who may be fearful, embarrassed, and/or reluctant to speak to a real-life counselor or therapist [46]. One example is Youper, an AI-driven app developed by a psychiatrist that uses a decision tree for each user and chatbots to provide emotional support using cognitive-behavioral therapy techniques to manage depression, anxiety, and other emotional issues [47,48]. Woebot is another example of an AI-powered app; it delivers cognitive-behavioral counseling through AI conversations using a chatbot. It can track mood patterns by analyzing voice tones and tailoring interactions to the client [47]. Woebot has also been adapted for use in the management of substance use disorders by adding components of motivational interviewing, dialectical behavioral therapy, psychoeducation, and craving and pain monitoring [46]. Woebot has been examined for preliminary feasibility, efficacy, and acceptability [46]. Using a pre- and post-test design with 101 users with an average age of 38 years, researchers found that confidence to resist substances increased and cravings, past-month substance use, and anxiety decreased in Woebot users compared with usual care.

Practitioners can incorporate AI-facilitated treatments, interventions, and therapies to supplement traditional treatments/interventions. These AI-driven treatments have been defined as "digital and fully automated," delivered through a conversational interface in real time, and personalized and tailored to an individual's needs [48]. It is critical to emphasize that AI is not a substitute for a practitioner's expertise, skills, and clinical judgment [42].

PROMOTE PATIENT MONITORING AND SUPPORT

Chatbots, virtual assistants, and assistive robots can also be used to enhance the patient/client experience by automating schedule reminders, sending out personalized health tips and recommendations, and monitoring progress and treatment response and adherence [3,4,5]. Virtual assistants can assume some of the administrative responsibilities of healthcare providers by reminding patients of their appointments and treatment plans, as well as collecting patient information to monitor progress and sending progress check-in reports to their provider(s) [14]. Assistive robots can serve as helpers and companions to certain populations, particularly those with limited mobility or disabilities who might benefit from increased companionship and support to reduce loneliness [35]. For example, pet bots have been created with sensors to respond to touch, sounds, and visual cues [35].

AUTOMATING OF CLINICAL PROCESSES

As noted, AI platforms can be used to reduce the burden of daily administrative procedures, such as billing, authorizations, and charting [23]. Information from provider notes can be extracted in order to assign medical codes for billing, authorization approvals, and insurance claim processing [28,35]. Other administrative tasks that AI may help to automate and streamline include [28]:

  • Scheduling appointments

  • Managing patient records and documents by organizing, categorizing, and processing them

  • Billing

  • Sending reminders and communications

  • Improving data security and compliance by monitoring for data breaches and reviewing administrative processes to ensure compliance with regulations and laws

ANALYSIS OF DATA AND EVIDENCE-BASED LITERATURE

AI applications are able to aggregate, analyze, and interpret large amounts of data and literature to offer evidence-based recommendations [4,12]. Because it can be challenging for practitioners to sift through, aggregate, synthesize, and analyze multiple and large data sources, AI technology can be used to rapidly retrieve clinical literature and information from different sources in order to recommend treatment or diagnostic approaches [14]. AI applications are able to graph information from multiple databases quickly, which can give practitioners an accurate overview of the available knowledge base [49]. AI can even conduct social network analyses to identify a patient's social support network, allowing the practitioner to assess and determine how to use or enhance this social support system [49].
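As a brief illustration of what a social network analysis of a support system can look like computationally, the following Python sketch uses the networkx library to compute a simple centrality measure; the people and ties are invented for the example.

    # Sketch: a patient's support network as a graph, with degree
    # centrality indicating who is most connected. Data are invented.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("patient", "sister"), ("patient", "neighbor"),
        ("sister", "mother"), ("patient", "support group"),
    ])

    centrality = nx.degree_centrality(G)
    print(sorted(centrality.items(), key=lambda kv: -kv[1]))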

EDUCATION AND TRAINING

AI has potential application in the education of both patients/clients and practitioners. Using chatbots and other AI assistants, health- and/or mental health-related materials can be generated and easily disseminated for patient education, disease prevention, and awareness building. Chatbots can be programmed to answer patients'/clients' questions quickly, and in certain circumstances, individuals may feel less embarrassed seeking guidance for conditions that may be perceived as stigmatizing [4,14].

Education, training, and continuing education can be enhanced using AI platforms. Consider new medical students who can practice certain clinical skills with AI avatars in real-life simulated situations, or mental health counselor trainees practicing assessment and rapport-building skills with virtual clients [35]. A systematic review examined the role of AI in medical education, including potential strengths and limitations [50]. Researchers analyzed articles published between 2017 and 2022, and a total of 25 articles met the inclusion criteria. Overall, current application of AI in medical education was focused primarily on specialty training and continuing education. AI application has been employed in radiology, surgery, cardiology, diagnostics, and dentistry. Accreditation standards have generally not yet addressed the appropriate use of AI applications, and more empirical research is needed to examine its effectiveness [50].

BARRIERS, LIMITATIONS, AND ETHICAL CONSTRAINTS

Although there are clear promises of AI applications, many have expressed concerns about the use of AI in health and mental healthcare service delivery.

PRIVACY, SECURITY, AND CONFIDENTIALITY ISSUES

Health and mental health data are sensitive, and there are concerns about how secure data are when using AI-driven platforms. One of the main questions is the extent to which there are risks of security breaches that compromise privacy and confidentiality [4]. Patients'/clients' health records, therapy/counseling session notes, clinical histories, and behavioral data are required to comply with HIPAA regulations in order to ensure patient/client confidentiality [51]. In datasets, patient data are required to be de-identified; however, sophisticated algorithms are able to re-identify the data [52]. In one study, an algorithm was able to re-identify data from 85% of adults and 70% of children in a physical activity cohort study [53].

Informed Consent

At the heart of informed consent and client autonomy is the ability of clinicians to provide sufficient and understandable information to allow individuals to make an informed decision about their care. Patients/clients should be informed about the potential risks and benefits of AI in the delivery of their care. For example, AI algorithms can generate or perpetuate biases, create hallucinations (inaccuracies when the AI misperceives patterns), and introduce cyber-risks (e.g., data breaches) [54,55,56]. When a practitioner has employed AI for clinical decision-making without oversight, the patient/client should be made aware of this [56]. Furthermore, they should understand how their clinical/medical/behavioral data may be used beyond their clinical treatment planning [4,56].

PERPETUATING BIAS

AI algorithms are trained on existing datasets, and the validity and reliability of the original data are influenced by the original data collection procedures [57]. If the original data are not representative of certain demographic groups and the algorithm ultimately makes inaccurate predictions for those groups, health and mental health disparities, biases, and inequities can be perpetuated [4,23,58]. This is known as algorithmic bias, which has been defined as "the instances when the application of an algorithm compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation to amplify them and adversely impact inequities in health systems" [59]. Some have advocated for rigorous peer review of AI algorithms to help counteract this potential problem [55]. The following strategies have been proposed to address algorithmic bias (a brief illustration of testing outcomes across subgroups follows the list) [60]:

  • Including experts who are diversely represented to review AI algorithms

  • Employing methods to manage situations in which there is not enough available information

  • Introducing algorithms gradually in order to test their outcomes

  • Creating mechanisms to collect feedback
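As a minimal illustration of the kind of outcome testing these strategies call for, the following Python sketch compares a hypothetical model's accuracy across two synthetic demographic groups using scikit-learn. All data are simulated, and a real audit would use validated outcomes and more rigorous fairness metrics.

    # Sketch: auditing accuracy by subgroup to surface algorithmic bias.
    # Groups, labels, and errors are all simulated for illustration.
    import numpy as np
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    groups = rng.choice(["A", "B"], size=500)
    y_true = rng.integers(0, 2, size=500)

    # Simulate a model that is systematically worse for group B
    error_rate = np.where(groups == "B", 0.35, 0.10)
    y_pred = np.where(rng.random(500) < error_rate, 1 - y_true, y_true)

    for g in ["A", "B"]:
        mask = groups == g
        print(f"group {g}: accuracy = "
              f"{accuracy_score(y_true[mask], y_pred[mask]):.2f}")

A gap of this size between groups would warrant retraining on more representative data before deployment.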

LESS THAN OPTIMAL SOLUTION TO REDUCING DISPARITIES

Although AI has been proposed as a potential solution to reduce or mitigate health and mental health disparities, this may be an oversimplistic view. Many macro systemic factors will continue to exist and will exacerbate issues related to the digital divide, which continues to affect the general utilization of AI [27]. As of 2024, 80% of U.S. adults had broadband Internet access at home. However, there are significant differences among racial/ethnic groups. While 83% of White American and 84% of Asian American adults have home broadband access, the figure falls to 68% for Black adults and 75% for Hispanic adults [61]. It is plausible that AI will actually increase disparities, because the demographics of existing users will be reflected back to users [27].

LACK OF TRANSPARENCY, ACCOUNTABILITY, AND POLICIES/REGULATIONS

AI algorithms have been compared to black boxes, because it is not always clear how an algorithm was designed, developed, tested, and validated [58]. Private commercial companies are often wary of releasing too much information and risking their intellectual property [62]. This runs counter to the transparency regarding the validity, reliability, safety, and risks of AI-derived clinical information that health and mental health professionals require in order to make sound clinical decisions for their patients/clients. Similarly, patients/clients should understand the nature of the information in order to make informed decisions about their care [58]. In a 2024 study assessing public documentation on 14 AI-based radiology products, researchers employed a self-designed instrument to measure the transparency of the AI products in different domains, including ethics, safety and risks, training of the AI algorithm, and performance limitations [58]. Overall, they found very little publicly available information about the products, particularly on safety and risks. Most of the products did not disclose how the AI was trained or evaluated for algorithmic biases. Unfortunately, because AI technology has grown and changed so rapidly, regulations and policies have not kept pace.

RISKS/SAFETY, PROFESSIONAL LIABILITY, AND COMPETENCE

Data error is a challenge to the integrity of AI applications. If there are any algorithmic inaccuracies and/or biases, false clinical decisions could be made, resulting in harm to patients/clients. This, in turn, can impact professional liability [4]. Hallucinations are relatively common with most AI applications; these events occur when AI software misperceives patterns that do not exist and then produces nonsensical, inaccurate, or fabricated data and outcomes [63]. Ultimately, hallucinations can result in misdiagnoses, medical errors, inaccuracies in medical documentation, and data inaccuracies, leading to faulty and potentially harmful conclusions [54]. The following strategies may be incorporated in order to build trust and mitigate the impact of AI hallucinations [54]:

  • Continually validating and evaluating AI-powered platforms to detect hallucinations, employing diverse datasets

  • Ensuring human oversight, with healthcare and mental health professionals reviewing AI results for clinical accuracy

  • Providing training and continuing education to practitioners about the use of AI

  • Delineating explainable and transparent AI models and protocols so practitioners understand how AI algorithms are developed

  • Establishing ethical and legal governance committees to provide oversight

Questions about liability may arise with the involvement of AI, including who or what is liable and accountable for outcome(s) [4,54]. Because AI is new and developing quickly, many health and mental healthcare professionals have not been fully trained on how to utilize AI, nor are they fully aware of the latest evidence-based practice guidelines [55]. This raises the ethical issue of professional competency. The heart of clinical decision-making and judgment has traditionally revolved around the expertise of the professional, and a shift to an AI "expert" system requires practitioners to be fully trained and to feel competent and comfortable with the operation of AI systems [26]. Practitioners who are interested in using AI in their clinical practice may refer to the Association for the Advancement of Artificial Intelligence (https://aaai.org) for guidance and to keep up to date on new developments [55].

OVER-RELIANCE ON AI AND PERCEPTION OF CLIENT/PATIENT ABANDONMENT

On a macro level, some have argued that overreliance on AI tools in health and mental health can result in practitioners becoming complacent in continuing to build and enhance their skills [62]. In other words, this could ultimately lead to the de-skilling of the health and mental healthcare workforce [62]. Some providers may also feel pressure to follow AI recommendations, instead of using the AI recommendations to supplement their clinical judgment [64].

On a micro level, patients/clients whose providers use AI extensively may feel that their practitioners are abandoning them [55]. While chatbots can offer immediate support, some patients/clients will prefer direct human interaction that cannot be obtained from a bot. At the heart of the provider-patient/client relationship is human empathy, conveyed through verbal and nonverbal communication and human interaction. Ultimately, AI does not have the emotional intelligence needed to employ clinical wisdom, intuition, and empathy [27]. An over-reliance on AI can lead to an impersonal and dehumanizing experience for patients/clients [64].

COLLABORATION BETWEEN PRACTITIONERS AND AI

Some are skeptical of the use of AI in the health and mental health fields, and others are wholly enthusiastic. The reality is that optimal outcomes will come from a partnership of AI and practitioners. In an ideal world, AI would serve as a complement to humans, enhancing and supporting clinical decision-making with practitioners providing oversight [29]. The American Medical Association coined the term "augmented intelligence" to refer to this ideal relationship between humans and AI applications [36]. The following ideal outcomes can result from an AI partnership with practitioners [27]:

  • Combined insights to better understand the needs of the patient/client

  • Timely interventions, with the practitioner providing oversight to ensure the interventions are tailored to specific needs and handle new issues that emerge in real time

  • Combined AI efficiency and practitioner empathy, helping to promote patient/client trust

  • Reduced diagnostic errors, combining the precision of AI with the clinical wisdom and intuition of the practitioner

  • Improved ethical decision-making, combining AI's objective analysis with the practitioner's experience, wisdom, and insights

However, this partnership is still tenuous. Some research indicates that team members' sense of trust does not necessarily grow over time in human-AI teams (HATs or HAIT) [65]. Incorporating AI team members does not necessarily enhance communication and coordination, perhaps because human team members have excessively high expectations of AI team members. Alternatively, human team members may lack the training and understanding necessary for clear and optimal communication and collaboration with AI team members [65]. Ultimately, more training and continuing education are needed to improve the relationship between AI and practitioners and to explore how specific skills can merge and complement each other [27].

CONCLUSION

The technology of AI is growing at a rapid pace, and it can be difficult to keep up. While many remain skeptical and mistrustful of the application of AI in day-to-day life, practitioners in the health and mental health fields are called on to recognize that AI will continue to be a part of the professional landscape. In order for a positive, synergistic relationship to develop between AI and practitioners, practitioners require an understanding of AI systems and an opportunity to take part in AI-powered clinical decision-making, rather than passively relegating clinical practice to AI. All professionals should feel empowered to take control of and supervise AI systems. Future research is needed to evaluate the effectiveness of AI-driven assessments, diagnoses, treatments, and interventions, so that there is a larger body of evidence to guide practice.

Works Cited

1. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Fam Med Prim Care. 2019;8(7):2328-2331.

2. Wang X, Fei F, Wei J, et al. Knowledge and attitudes toward artificial intelligence in nursing among various categories of professionals in China: a cross-sectional study. Front Public Health. 2024;12:1433252.

3. Daley S. AI in Healthcare: Uses, Examples and Benefits. Built In. Available at https://builtin.com/artificial-intelligence/artificial-intelligence-healthcare. Last accessed March 26, 2025.

4. Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: A narrative review. Front Digit Health. 2024;6.

5. Rebelo AD, Verboom DE, Rebelo dos Santos N, de Graaf JW. The impact of artificial intelligence on the tasks of mental healthcare workers: a scoping review. Comput Hum Behav Artif Hum. 2023;1(2):100008.

6. Marsden P. Artificial Intelligence Defined: Useful List of Popular Definitions from Business and Science. Available at https://digitalwellbeing.org/artificial-intelligence-defined-useful-list-of-popular-definitions-from-business-and-science. Last accessed March 26, 2025.

7. Haan K. 24 Top AI Statistics and Trends in 2024. Available at https://www.forbes.com/advisor/business/ai-statistics. Last accessed March 26, 2025.

8. MarketsandMarkets. Artificial Intelligence (AI) in Healthcare Market Worth $164.16 Billion by 2030. Available at https://www.marketsandmarkets.com/PressReleases/artificial-intelligence-healthcare.asp. Last accessed March 26, 2025.

9. Statista. Awareness and Adoption of AI and Automation in Healthcare Worldwide in 2019 and 2020. Available at https://www.statista.com/statistics/1223613/state-of-healthcare-automation-worldwide. Last accessed March 26, 2025.

10. Lavrentyeva Y. The Big Promise AI Holds for Mental Health. Available at https://itrexgroup.com/blog/ai-mental-health-examples-trends. Last accessed March 26, 2025.

11. Polaris Market Research. Artificial Intelligence (AI) in Mental Health Market Share, Size, Trends, Industry Analysis Report. Available at https://www.polarismarketresearch.com/industry-analysis/artificial-intelligence-ai-in-mental-health-market. Last accessed March 26, 2025.

12. Hirani R, Noruzi K, Khuram H, et al. Artificial intelligence and healthcare: a journey through history, present innovations, and future possibilities. Life. 2024;14(5):557.

13. Kaul V, Enslin S, Gross SA. History of artificial intelligence in medicine. Gastrointest Endosc. 2020;92(4):807-812.

14. Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23:689.

15. Law M. From ELIZA to ChatGPT: The Evolution of Chatbots Technology. Available at https://technologymagazine.com/articles/from-eliza-to-chatgpt-the-evolution-of-chatbots-technology. Last accessed March 26, 2025.

16. La Rose D. From Checkers to Chess: A Brief History of IBM AI. Available at https://www.ibm.com/blog/from-checkers-to-chess-a-brief-history-of-ibm-ai. Last accessed March 26, 2025.

17. UC Health. About the daVinci Surgical System. Available at https://www.uchealth.com/services/robotic-surgery/patient-information/davinci-surgical-system. Last accessed March 26, 2025.

18. Faverio M, Tyson A. What the Data Says about Americans' Views of Artificial Intelligence. Available at https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence. Last accessed March 26, 2025.

19. Tyson A, Kikuchi E. Growing Public Concern about the Role of Artificial Intelligence in Daily Life. Available at https://www.pewresearch.org/short-reads/2023/08/28/growing-public-concern-about-the-role-of-artificial-intelligence-in-daily-life. Last accessed March 26, 2025.

20. Office of National Statistics. Public Awareness, Opinions and Expectations about Artificial Intelligence: July to October 2023. Available at https://www.ons.gov.uk/businessindustryandtrade/itandinternetindustry/articles/publicawarenessopinionsandexpectationsaboutartificialintelligence/julytooctober2023. Last accessed March 26, 2025.

21. Esmaeilzadeh P. Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives. BMC Med Inform Decis Mak. 2020;20(1):170.

22. Elsevier. Insights 2024: Attitudes Toward AI. Available at https://www.elsevier.com/insights/attitudes-toward-ai. Last accessed March 26, 2025.

23. Akinrinmade AO, et al. Artificial intelligence in healthcare: perception and reality. Cureus. 2023;15(9):e45594.

24. Fritsch SJ, et al. Attitudes and perception of artificial intelligence in healthcare: a cross-sectional survey among patients. Digit Health. 2022;8(8).

25. Sreedharan S, Mian M, Robertson RA, Yang N. The top 100 most cited articles in medical artificial intelligence: a bibliometric analysis. J Med Artif Intell. 2020;3(3).

26. Galijasevic T, Škarić M, Podolski E, Mustač F, Matovinović M, Marčinko D. New Breakthroughs in AI Chatbots and Their Potential in Mental Health Services. Available at https://ieeexplore.ieee.org/document/10612516. Last accessed March 26, 2025.

27. Babu A, Joseph AP. Artificial intelligence in mental healthcare: transformative potential vs. the necessity of human interaction. Front Psychol. 2024;15.

28. Varnosfaderani SM, Forouzanfar M. The role of AI in hospitals and clinics: transforming healthcare in the 21st century. Bioengineering. 2024;11(4):337.

29. Sharma M, Savage C, Nair M, Larsson I, Svedberg P, Nygren JM. Artificial intelligence applications in health care practice: scoping review. J Med Internet Res. 2022;24(10):e40238.

30. Kanan M, Alharbi H, Alotaibi N, et al. AI-driven models for diagnosing and predicting outcomes in lung cancer: a systematic review and meta-analysis. Cancers. 2024;16(3):674.

31. Lee EE, Torous J, De Choudhury M, et al. Artificial intelligence for mental health care: clinical applications, barriers, facilitators, and artificial wisdom. Biol Psychiatry Cogn Neurosci Neuroimaging. 2021;6(9):856-864.

32. Listen up! AI is diagnosing patients through voice notes. Indian Pract. 2024;77(9):10.

33. Idrisoglu A, Dallora AL, Anderberg P, Berglund JS. Applied machine learning techniques to diagnose voice-affecting conditions and disorders: systematic literature review. J Med Internet Res. 2023;25:e46105.

34. Bell IH, Nicholas J, Alvarez-Jimenez M, Thompson A, Valmaggia L. Virtual reality as a clinical tool in mental health research and practice. Dialogues Clin Neurosci. 2020;22(2):169-177.

35. Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. In: Bohr A, Memarzadeh K (eds). Artificial Intelligence in Healthcare. St. Louis, MO: Elsevier; 2020: 25-60.

36. Mayo Clinic Press. AI in Healthcare: The Future of Patient Care and Health Management. Available at https://mcpress.mayoclinic.org/healthy-aging/ai-in-healthcare-the-future-of-patient-care-and-health-management. Last accessed March 26, 2025.

37. Khalifa M, Albadawy M. Artificial intelligence for clinical prediction: exploring key domains and essential functions. Comput Methods Programs Biomed Update. 2024;5.

38. Kendall M. This California County is Testing AI's Ability to Prevent Homelessness. Available at https://calmatters.org/housing/homelessness/2024/03/california-homeless-los-angeles-ai. Last accessed March 26, 2025.

39. Supriya MS, Aniket A, Rakshitha NM, Aakanksha J, Kenith P. AI-Powered Mental Health Diagnosis: A Comprehensive Exploration of Machine and Deep Learning Techniques. Available at https://ieeexplore.ieee.org/document/10515610. Last accessed March 26, 2025.

40. University of Denver. AI for the Public Good. Available at https://stories.du.edu/magazine-winter-24/university-of-denver-magazine-winter24/features/ai-for-the-public-good/index.html. Last accessed March 26, 2025.

41. Minerva F, Giubilini A. Is AI the future of mental healthcare? Topoi. 2023;42(3):1-9.

42. Bint Khalid U, Naeem M, Stasolla F, Syed MH, Abbas M, Coronato A. Impact of AI-powered solutions in rehabilitation process: recent improvements and future trends. Int J Gen Med. 2024;17:943–969.

43. Antel R, Abbasgholizadeh-Rahimi S, Guadagno E, Harley JM, Poenaru D. The use of artificial intelligence and virtual reality in doctor-patient risk communication: a scoping review. Patient Educ Couns. 2022;105(10):2038-3050.

44. AI Singapore. CURATE.AI: Personalised Pharmacological Intervention Through Small Data Phenotypic Optimisation. Available at https://aisingapore.org/tech-offers/curate-ai-personalised-pharmocological-intervention-through-small-data-phenotypic-optimisation. Last accessed March 26, 2025.

45. Blasiak A, Khong J, Kee T. CURATE. AI: optimizing personalized treatment/medicines with artificial intelligence. SLAS Technol. 2020;25(2):95-105.

46. Prochaska JJ, Vogel EA, Chieng A, et al. A therapeutic relational agent for reducing problematic substance use (Woebot): development and usability study. J Med Internet Res. 2021;23(3):e24850.

47. Mavila R, Jaiswal S, Naswa R, Yuwen W, Erdly B, Si D. iCare: An AI-Powered Virtual Assistant for Mental Health. Available at https://www.computer.org/csdl/proceedings-article/ichi/2024/837300a466/1ZCgVXU6VwI. Last accessed March 26, 2025.

48. Mehta A, Niles AN, Vargas JH, Marafon T, Couto DD, Gross JJ. Acceptability and effectiveness of artificial intelligence therapy for anxiety and depression (Youper): longitudinal observational study. J Med Internet Res. 2021;23(6):e26771.

49. Sharma MK, Nachappa MN, Kumar R. Personalized Treatment Recommendations for Mental Health Disorders Using AI and Big Healthcare Data. Available at https://ieeexplore.ieee.org/document/10455991. Last accessed March 26, 2025.

50. Sun L, Yin C, Xu Q, Zhao W. Artificial intelligence for healthcare and medical education: a systematic review. Am J Transl Res. 2023;15(7):4820-4828.

51. Olawade DB, Wada OZ, Odetayo A, David-Olawade AC, Asaolu F, Eberhardt J. Enhancing mental health with artificial intelligence: current trends and future prospects. J Med Surg Public Health. 2024;3.

52. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics. 2021;22(1):122.

53. Na L, Yang C, Lo C-C, Zhao F, Fukuoka Y, Aswani A. Feasibility of reidentifying individuals in large national physical activity data sets from which protected health information has been removed with use of machine learning. JAMA Netw Open. 2018;1(8):e186040.

54. Gondode P, Duggal S, Mahor V. Artificial intelligence hallucinations in anaesthesia: causes, consequences and countermeasures. Indian J Anaesth. 2024;68(7):658-661.

55. Reamer F. Artificial intelligence in social work: emerging ethical issues. Int J Soc Work Values Ethics. 2023;20(2):52-71.

56. Ursin F, Timmermann C, Orzechowski M, Steger F. Diagnosing diabetic retinopathy with artificial intelligence: what information should be included to ensure ethical informed consent? Front Med. 2021;8.

57. Khan B, Fatima H, Qureshi A, et al. Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomed Mater Devices. 2023;1-8.

58. Fehr J, Citro B, Malpani R, Lippert C, Madai VI. A trustworthy AI reality-check: the lack of transparency of artificial intelligence products in healthcare. Front Digit Health. 2024;6.

59. Panch T, Mattie H, Atun R. Artificial intelligence and algorithmic bias: implications for health systems. J Glob Health. 2019;9(2).

60. Colón-Rodríguez CJ. Shedding Light on Healthcare Algorithmic and Artificial Intelligence Bias. Available at https://minorityhealth.hhs.gov/news/shedding-light-healthcare-algorithmic-and-artificial-intelligence-bias. Last accessed January 15, 2025.

61. Pew Research Center. Americans' Use of Mobile Technology and Home Broadband. Available at https://www.pewresearch.org/internet/2024/01/31/americans-use-of-mobile-technology-and-home-broadband. Last accessed March 26, 2025.

62. Avula VCR, Amalakanti S. Artificial intelligence in psychiatry, present trends, and challenges: An updated review. Arch Ment Health. 2024;25(1):85-90.

63. MIT Sloan Teaching and Learning Technologies. When AI Gets It Wrong: Addressing AI Hallucinations and Bias. Available at https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias. Last accessed March 26, 2025.

64. Akingbola A, Adeleke O, Idris A, Adewole O, Adegbesan A. Artificial intelligence and the dehumanization of patient care. J Med Surg Public Health. 2024;3.

65. Schmutz JB, Outland N, Kerstan S, Georganta E, Ulfert A-S. AI-teaming: redefining collaboration in the digital era. Curr Opin Psychol. 2024;58.


Copyright © 2025 NetCE, PO Box 997571, Sacramento, CA 95899-7571
Mention of commercial products does not indicate endorsement.