
June 30, 2025
Author: Patricia Hernández Valdez
Introduction
In recent years, artificial intelligence (AI) has revolutionized various areas of human life, including the field of mental health. This has raised concerns about its impact on psychological practice: does AI represent a supportive tool or a genuine threat to the daily practice of Psychology?
Development
AI, defined as the ability of computational systems to perform tasks that typically require human intelligence, has quickly been incorporated into applications related to Psychology. Chatbots, online therapy platforms, emotion detection algorithms, and automated diagnostic systems are some examples of the growing use of these technologies. This trend has opened previously unimaginable possibilities to expand coverage, personalize interventions, and generate large-scale clinical data. However, it has also sparked debates about its reliability, ethical impact, and influence on the therapeutic relationship.
One of the main arguments in favor of using AI in Psychology is its ability to democratize access to mental health services. In countries with few specialists, AI-based tools can provide a provisional solution to serve vulnerable populations. Additionally, since they do not require constant human intervention, these platforms offer more immediate, continuous, and cost-effective care (Fiske, Henningsen & Buyx, 2019).
However, the use of AI in Psychology is not without risks. One major concern is the dehumanization of clinical practice, as the therapist-patient relationship relies on empathy, trust, and the complex interpretation of verbal and non-verbal language, silences, context, and emotions. These variables are difficult to encode into an algorithm; indeed, studies have shown that users may experience feelings of emotional emptiness or dissatisfaction when interacting with automated platforms, compromising therapeutic effectiveness and long-term positive impact (Sanches et al., 2021).
Another risk is the misuse of sensitive (i.e., confidential) data. Digital tools collect personal, psychological, and behavioral information that, if placed in the wrong hands, can be used for commercial, discriminatory, or even criminal purposes. The lack of clear regulation regarding the handling of this data undermines the confidentiality principle, one of the ethical pillars of Psychology (APA, 2017).
From the professional practice perspective, AI could replace certain routine functions, such as administering psychometric tests or collecting patient histories. However, this does not imply the disappearance of Psychology, but rather a transformation of the psychologist's role. Rather than focusing on administrative or repetitive tasks, the professional will be able to focus on complex clinical decision-making, designing specialized interventions, and providing personalized care, thereby increasing therapeutic efficacy without compromising the professional's role.
On a theoretical level, the rise of AI also invites us to rethink fundamental concepts. For example, what do we understand by mind, consciousness, or emotion? Can these human experiences be reproduced or simulated by a machine? Psychology, as the science of behavior and mental processes, is therefore compelled to engage with other disciplines such as neuroscience, ethics, philosophy, and computer science to better understand the limits and scope of artificial intelligence.
Based on the above, it is essential to promote an interdisciplinary approach that includes active participation of psychologists in the design, validation, and regulation of digital tools. This ensures that technological solutions respect the fundamental principles of the profession: human dignity, autonomy, confidentiality, informed consent, and non-maleficence.
Meanwhile, the field of online psychotherapy, which has grown significantly since the COVID-19 pandemic, has shown that technology can be a powerful ally. This modality, however, requires specific training, clear ethical frameworks, and constant monitoring of service quality. In this sense, AI can serve as a complementary tool to improve treatment efficiency, but it should never replace human intervention.
Finally, the digital divide must also be considered. Many individuals in need of psychological care lack access to technology or are digitally illiterate. This could exacerbate inequalities in access to mental health services if automated tools are favored without considering the socio-economic and cultural context of the population (WHO, 2022).
Conclusions
Artificial intelligence is not inherently a threat to Psychology, but its uncritical and unregulated incorporation could have negative consequences for both users and professionals. Far from viewing AI as a threat, it must be understood as a tool that, when used properly, can enhance psychological work and expand its social impact. However, this requires a humanistic, ethical, and person-centered approach.
The key is not to lose sight of the relational nature of Psychology. Technology can facilitate processes, but it cannot replace the human capacity to understand, empathize, and build connections. The psychologist of the future will need to master the clinical and ethical foundations of their profession, as well as understand the scope and limitations of digital tools. The integration of AI must be carried out with a critical and reflective mindset that prioritizes the quality of care and the rights of individuals.
Ultimately, the true danger is not technology itself, but the lack of preparation to use it with judgment, sensitivity, and responsibility. Psychology is called to lead this transition, bringing a complex, interdisciplinary, and deeply human perspective.
Patty Hernández Valdez is a psychologist, expert in legal psychology, and bioethicist. She coordinates the Master's program in Online Bioethics Studies at Universidad Anáhuac México. She has been a speaker at various international conferences and an author of publications on bioethics, mental health, and human rights.
The opinions expressed in this blog are the sole responsibility of the author and do not necessarily represent the official stance of CADEBI. As an institution committed to inclusion and plural dialogue, CADEBI promotes and disseminates a diversity of voices and perspectives, believing that respectful and critical exchange enriches our academic and formative work. We value and encourage all comments, responses, or constructive criticisms you wish to share.
More information:
Centro Anáhuac de Desarrollo Estratégico en Bioética (CADEBI)
Dr. Alejandro Sánchez Guerrero
alejandro.sanchezg@anahuac.mx