Study Points
The test questions associated with each learning objective are reviewed below.
- Outline the development and potential uses of AI in the context of health and mental health.
- Outline practice considerations related to AI use, including barriers, limitations, ethical concerns, and collaboration.
- Assess the limitations and the barriers of AI in day-to-day practice for providers.
- Explain ethical concerns of the use of AI in health and mental health care.
- Discuss collaborations between AI and practitioners.
What was the name of the first chatbot?
On the surface, it seems as if AI only recently emerged in health and mental health care, but this is not true. In the 1950s, Alan Turing developed a test for intelligence in a computer, requiring that a human be unable to distinguish the machine from another human using the replies to questions put to both [12]. This has become known as the Turing test [13]. In 1951, Christopher Strachey developed the first AI program, and it was John McCarthy who later employed the term artificial intelligence to refer to the use of science and engineering to make machines intelligent [13,14]. In 1956, McCarthy organized the Dartmouth Summer Conference on AI, which drew leading scientists, researchers, mathematicians, and engineers to start a dialogue about AI and its practical uses. Some say that this conference laid the foundation for the field of AI [12]. In terms of initial applications of AI, the first industrial robot appeared on General Motors' assembly line in 1961, and three years later the first chatbot, named ELIZA, emerged on the scene; it was developed by Joseph Weizenbaum at MIT [13]. ELIZA followed the tenets of Carl Rogers' person-centered therapy, rephrasing and repeating users' statements to mimic human conversation [15]. Over the years, other chatbots followed, including A.L.I.C.E. and Apple's Siri, which laid the foundation for AI personal assistants [15].
Which of the following is NOT an example of AI technology?
Between 2010 and 2020, electronic health records and large-scale healthcare databases became more prominent, and the question of how to leverage AI with health data emerged. In 2000, the da Vinci Surgical System was introduced as the first robotic surgical platform approved by the FDA [17]. As noted, Apple introduced Siri, a virtual assistant, into iPhones in 2011, and in 2014 Alexa, another virtual assistant, was released by Amazon [13]. Today, ChatGPT and Copilot are used in many arenas, and this has raised many potential implications and ethical conflicts associated with AI use in various fields/disciplines, including health and mental health care. One of the main concerns is that many AI applications (e.g., ChatGPT) are not Health Insurance Portability and Accountability Act (HIPAA) compliant, and content shared with these applications is generally added to the associated database and shared.
Which of the following best describes how the general public feels about AI technology?
In general, it appears that many health and mental health professionals are not familiar with AI, are distrustful of AI, and are perhaps even skeptical of its application in health and mental health care. Elsevier, a leading academic publishing company, conducted a global online survey with a sample of 1,999 respondents comprised of clinicians and researchers from 123 countries [22]. This study found that 54% of respondents had used AI, and 31% had utilized it for work. A slightly larger proportion of respondents in China (39%) reported using AI applications compared with the United States (30%) or India (22%) [22]. When asked to identify an AI product, ChatGPT was the most frequently cited [22]. In a 2024 national Chinese study involving 1,243 nursing students, nursing professionals, and other healthcare professionals, 57% reported very little knowledge about AI and almost 66% indicated that they were not familiar with the role of AI in nursing [2]. Almost all (95%) believed that there could be imminent concerns with AI in health care and that more work would be needed in the area of ethics in AI in health care.
Dr. A decides to use AI technology to do initial triaging of patients to determine if a patient needs to be seen by a physician. What type of AI technology might be useful in this scenario?
AI-trained chatbots can perform triage to assess if certain symptoms and/or the severity of symptoms warrant emergent care. The chatbot can determine if the patient must see a professional and book an appointment [35].
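The decision logic such a triage chatbot applies can be pictured with a small rule-based sketch. The symptom list, severity scale, and thresholds below are invented for illustration; production triage tools rely on clinically validated protocols and far richer models.

```python
# Minimal rule-based triage sketch (illustrative only; symptoms, severity
# scale, and thresholds are hypothetical, not a clinical protocol).
EMERGENT_SYMPTOMS = {"chest pain", "difficulty breathing", "suicidal thoughts"}

def triage(symptoms, severity):
    """Return a routing decision from reported symptoms and a 1-10 severity."""
    if EMERGENT_SYMPTOMS & set(symptoms) or severity >= 8:
        return "refer to emergent care"
    if severity >= 4:
        return "book appointment with a clinician"
    return "offer self-care guidance and continue monitoring"

print(triage(["headache"], 5))    # book appointment with a clinician
print(triage(["chest pain"], 3))  # refer to emergent care
```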
An AI system tracked content from individuals' smart watches and smartphones to detect behavioral patterns that might be correlated with certain health conditions. What is this an example of?
Personal sensing, defined as collecting and analyzing data from sensors embedded in the context of daily life with the aim of identifying human behaviors, thoughts, feelings, and traits, often with the aid of AI, has the potential to predict, measure, and monitor individuals' mental health [41]. For example, AI technology can evaluate content from social media posts and data from smart watches, smartphones, and other wearable health devices to identify behavioral patterns and changes and correlate them with specific health and mental health conditions [35,41]. Natural language processing algorithms can monitor conversations from text messages, emails, and social media posts to identify key words and changes in language semantics and syntax (e.g., length of messages, longer intervals between texts, posts, or calls) that might indicate an increased risk of depression, suicide, or anxiety [35,41].
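To make the kinds of signals described above concrete, the following sketch computes two of the simplest: average message length and average time between messages. The data format and features are assumptions for illustration; real personal-sensing systems pair such features with clinically validated models rather than using them in isolation.

```python
from datetime import datetime
from statistics import mean

def messaging_features(messages):
    """Compute simple messaging-pattern features (illustrative only).

    `messages` is a hypothetical list of (timestamp, text) tuples drawn from
    texts, emails, or posts a person has consented to share.
    """
    messages = sorted(messages, key=lambda m: m[0])
    lengths = [len(text.split()) for _, text in messages]
    gaps_hours = [
        (t2 - t1).total_seconds() / 3600
        for (t1, _), (t2, _) in zip(messages, messages[1:])
    ]
    return {
        "avg_message_length_words": mean(lengths) if lengths else 0.0,
        "avg_hours_between_messages": mean(gaps_hours) if gaps_hours else 0.0,
        "message_count": len(messages),
    }

# Shorter messages and longer gaps over time are the sorts of changes a
# monitoring model might track; interpretation requires clinical oversight.
history = [
    (datetime(2024, 5, 1, 9, 0), "Heading out for a run, talk later!"),
    (datetime(2024, 5, 1, 21, 30), "ok"),
    (datetime(2024, 5, 3, 23, 45), "tired"),
]
print(messaging_features(history))
```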
Mrs. G, 83 years of age, tells the social worker that she is extremely lonely alone at home. The social worker believes getting a cat could be helpful, and although Mrs. G indicates an animal would be fun, she does not feel prepared to take care of an animal. What AI technology might be an option for this patient?
Chatbots, virtual assistants, and assistive robots can also be used to enhance the patient/client experience by automating schedule reminders, sending out personalized health tips and recommendations, and monitoring progress and treatment response and adherence [3,4,5]. Virtual assistants can assume some of the administrative responsibilities of healthcare providers by reminding patients of their appointments and their treatment plan, as well as collecting patient information to monitor progress and sending progress check-in reports to their provider(s) [14]. Assistive robots can serve as helpers and companions to certain populations, particularly those with limited mobility or disabilities who might benefit from increased companionship and support to reduce loneliness [35]. For example, pet bots have been created with sensors to respond to touch, sounds, and visual cues [35].
If a practitioner decides to use AI to develop interventions for a patient/client, what information should the informed consent cover?
At the heart of informed consent and client autonomy is the ability of clinicians to provide sufficient and understandable information to allow individuals to make an informed decision about their care. Patients/clients should be informed about the potential risks and benefits of AI in the delivery of their care. For example, AI algorithms can generate or perpetuate biases, create hallucinations (inaccuracies when the AI misperceives patterns), and introduce cyber-risks (e.g., data breaches) [54,55,56]. When a practitioner has employed AI for clinical decision-making without oversight, the patient/client should be made aware of this [56]. Furthermore, they should understand how their clinical/medical/behavioral data may be used beyond their clinical treatment planning [4,56].
An algorithm is trained predominantly on chest x-ray data from male patients. What could potentially result from this approach to training?
AI algorithms are trained on existing datasets, and the validity and reliability of the original data are influenced by the original data collection procedures [57]. If the original data are not representative of certain demographic groups and the algorithm ultimately makes inaccurate predictions, health and mental health disparities, biases, and inequities can be perpetuated [4,23,58]. This is known as algorithmic bias, which has been defined as "the instances when the application of an algorithm compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation to amplify them and adversely impact inequities in health systems" [59]. Some have advocated for rigorous peer review feedback of AI algorithms to help counteract this potential problem [55]. The following strategies have been proposed to address algorithmic bias (a brief illustrative sketch follows the list) [60]:
Including experts who are diversely represented to review AI algorithms
Employing methods to manage situations in which there is not enough available information
Introducing algorithms gradually in order to test their outcomes
Creating mechanisms to collect feedback
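One concrete way reviewers can act on these strategies is a subgroup audit: comparing a model's accuracy across demographic groups to surface disparities like the chest x-ray example above. The record format and metric below are assumptions for illustration; real audits use multiple fairness metrics and adequate sample sizes.

```python
from collections import defaultdict

def audit_by_group(records):
    """Report per-group accuracy so reviewers can spot algorithmic bias.

    `records` is a hypothetical list of dicts with keys 'group' (e.g., patient
    sex), 'prediction', and 'label'. Illustrative only.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for r in records:
        counts[r["group"]][1] += 1
        if r["prediction"] == r["label"]:
            counts[r["group"]][0] += 1
    return {group: correct / total for group, (correct, total) in counts.items()}

# A model trained mostly on male chest x-rays may look accurate overall while
# underperforming for female patients; per-group accuracy makes that visible.
results = [
    {"group": "male", "prediction": 1, "label": 1},
    {"group": "male", "prediction": 0, "label": 0},
    {"group": "female", "prediction": 1, "label": 0},
    {"group": "female", "prediction": 0, "label": 0},
]
print(audit_by_group(results))  # e.g., {'male': 1.0, 'female': 0.5}
```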
What is the underlying cause of AI hallucinations?
Data error is a challenge to the integrity of AI applications. If there are any algorithmic inaccuracies and/or biases, false clinical decisions could be made and result in harm to patients/clients. This in turn can impact professional liability [4]. Hallucinations are relatively common with most AI applications; these events occur when AI software misperceives patterns that do not exist and then produces nonsensical, inaccurate, and fabricated data and outcomes [63]. Ultimately, hallucinations can result in misdiagnoses, medical errors, inaccuracies in medical documentation, and data inaccuracies, leading to faulty and potentially harmful conclusions [54]. The following strategies may be incorporated in order to build trust and mitigate the impact of AI hallucinations [54]:
Continually validate and evaluate AI-powered platforms to detect hallucinations, and employ diverse datasets
Ensure human oversight by healthcare and mental health professionals who review AI results for clinical accuracy
Provide training and continuing education to practitioners about the use of AI
Delineate explainable and transparent AI models and protocols so practitioners understand how AI algorithms are developed
Establish ethical and legal governance committees to provide oversight
What does augmented intelligence refer to?
Some are skeptical of the use of AI in the health and mental health fields, and others are wholly enthusiastic. The reality is that optimal outcomes will come from a partnership between AI and practitioners. In an ideal world, AI would serve as a complement to humans, enhancing and supporting clinical decision-making, with practitioners providing oversight [29]. The American Medical Association coined the term "augmented intelligence" to refer to this ideal relationship between humans and AI applications [36]. The following ideal outcomes can result from an AI partnership with practitioners [27]:
Combined insights to better understand the needs of the patient/client
Timely interventions, with the practitioner providing oversight to ensure the interventions are tailored to specific needs and to handle new issues that emerge in real time
Greater patient/client trust, promoted by combining AI efficiency with practitioner empathy
Reduced diagnostic errors, achieved by combining the precision of AI with the clinical wisdom and intuition of the practitioner
Improved ethical decision-making, combining AI's objective analysis with the practitioner's experience, wisdom, and insights