De-Villainising AI: Charting a Responsible Path in Mental Health Care

The concept of Artificial Intelligence (AI) often finds itself at the centre of controversy and scepticism. Surveys from 17 countries show that 61% of people are hesitant or unwilling to trust AI, with 73% highlighting potential risks such as cybersecurity threats, misuse of AI technology, and job displacement.1

This apprehension is more acute in areas like mental health care, where personal connection and trust are crucial. One study found that only 14.1% of participants (aged 18 and over, M = 31.76 years) trusted AI-based psychotherapy systems to protect their data securely.2 Yet 55% of the same participants preferred AI-based psychotherapy, appreciating the freedom to talk openly about sensitive topics, the around-the-clock availability, and the ability to interact remotely.2 This juxtaposition highlights the need to blend AI carefully into mental health care, balancing the advantages of AI-enhanced care with a commitment to addressing its risks.

Cam AI commits to ethical practices and strong data privacy in mental health care. This blog post explores our dedication to responsible AI and user privacy, showcasing how we aim to enhance mental health support responsibly.

Using AI in the field of mental health

In the realm of mental health, AI plays a crucial role: it gives us tools to experiment with better ways of offering support in a scientifically controlled manner.

AI can offer support 24/7, addressing the shortage of human therapists available to provide timely therapy. Furthermore, the ability to type rather than speak aloud to a therapist, whether in person or via a video call, offers greater privacy and helps mitigate fears of judgement or stigma. Additionally, Cam AI’s service is free for young people, making therapeutic support accessible to a broader segment of the younger population.

However, the primary aim is not to replace therapists with AI, but to foster a complementary relationship that leverages the strengths of both. By integrating AI, we make the therapeutic alliance and approach widely available to young people who will benefit from help navigating in-the-moment issues, while still upholding the professional and empathetic standards of human therapists.

This approach creates a supplemental service that enriches traditional therapy, making high-quality mental health care more adaptable and more widely attainable.

What makes our AI more appropriate than ChatGPT?

Our AI differs significantly from ChatGPT because we’re in complete control of what our service says. ChatGPT acts like a ‘stochastic parrot’, mimicking language patterns without understanding them, which makes its responses unpredictable. OpenAI, the creator of ChatGPT, leaves responsibility for its output with users, suggesting that anything the model generates should be attributed to the person who prompted it.

Our foundation, on the other hand, is built on training data derived from authentic therapy sessions between human clients and therapists, ensuring that every response is rooted in genuine therapeutic interactions. Qualified psychologists oversee the responses, combining structured knowledge—such as the principles of Cognitive Behavioural Therapy (CBT) and client history—with statistical analysis to craft responses that are both relevant and empathetic.
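To make the idea concrete, here is a minimal Python sketch of retrieval over a corpus of therapist-authored replies. Every name in it (`CandidateResponse`, `select_response`, the `relevance_score` model) is a hypothetical stand-in for the components described above, not Cam AI’s actual code:

```python
from dataclasses import dataclass
from typing import Callable

# Every name here is an illustrative stand-in, not Cam AI's actual code.
@dataclass
class CandidateResponse:
    text: str
    cbt_technique: str      # e.g. "cognitive restructuring"
    source_session_id: str  # the real therapy session it was drawn from

def select_response(
    candidates: list[CandidateResponse],
    relevance_score: Callable[[CandidateResponse], float],
) -> CandidateResponse:
    """Return the therapist-authored candidate judged most relevant to the
    current client context by a separate (assumed) statistical model."""
    return max(candidates, key=relevance_score)
```

Because each candidate carries a source session identifier, the lineage described in the next paragraph travels with every response.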

Transparency is the cornerstone of our approach. We do not simply state that content is AI-generated; we provide a clear lineage for each response, tracing it back to human-generated content from real therapy sessions. This ensures that our AI facilitates the delivery of appropriate and effective support, tailored to the specific needs of clients based on variables such as age, self-identified gender, and therapy style.

While we acknowledge the inherent risks in AI-generated responses, we mitigate these concerns by openly communicating the likelihood that a qualified therapist would offer a similar response, alongside an assessment of the potential impact of any inaccuracies.
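As a minimal sketch of how such a disclosure could travel with each response (the field names and rendering below are illustrative assumptions, not our actual schema):

```python
from dataclasses import dataclass

@dataclass
class ResponseDisclosure:
    """Transparency metadata attached to a single AI-generated reply."""
    response_text: str
    source_session_id: str       # lineage back to a real therapy session
    therapist_likelihood: float  # estimated probability that a qualified
                                 # therapist would give a similar reply
    impact_if_wrong: str         # e.g. "low", "moderate", "high"

def format_disclosure(d: ResponseDisclosure) -> str:
    """Render the note shown to the user alongside the response."""
    return (
        f"{d.response_text}\n"
        f"[Traced to session {d.source_session_id}; "
        f"{d.therapist_likelihood:.0%} likelihood a qualified therapist "
        f"would respond similarly; potential impact of error: {d.impact_if_wrong}]"
    )
```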

Safeguarding your privacy with chatbots

In all cases, your data is safeguarded within cloud-based systems fortified by multiple layers of protection, including multi-factor authentication, token-based access, and federated identity verification. For customised therapy, we rely exclusively on non-personally-identifiable (non-PII) data, such as metadata and the content of your conversations with our chatbots.
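As an illustration of the kind of safeguard this implies (a minimal sketch, not our production pipeline, using toy patterns), conversation text can be scrubbed of obvious PII before it is stored or analysed. A real deployment would rely on a vetted PII-detection service rather than the regular expressions shown here:

```python
import re

# Toy patterns for illustration only; real PII detection is far broader.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(message: str) -> str:
    """Replace detected PII spans with placeholder tokens before storage."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{label} removed]", message)
    return message

print(redact_pii("Email me at alex@example.com or call +44 7700 900123"))
# -> Email me at [email removed] or call [phone removed]
```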

Overall, Cam AI upholds the principles of responsible AI utilisation while safeguarding the privacy of user data.

References

  1. Trust in Artificial Intelligence | Global Insights 2023. (2023, February 22). KPMG. Retrieved March 8, 2024, from https://kpmg.com/au/en/home/insights/2023/02/trust-in-ai-global-insights-2023.html
  2. Aktan, M. E., Turhan, Z., & Dolu, İ. (2022). Attitudes and perspectives towards the preferences for artificial intelligence in psychotherapy. Computers in Human Behavior, 133, 107273. https://doi.org/10.1016/j.chb.2022.107273

Image Source: ChatGPT (DALL-E)

Author: Julie Wang, Cam AI Volunteer

I am a first-year undergraduate student at the University of Cambridge studying Psychological and Behavioural Sciences. I am currently volunteering as an assistant at Cam AI, reading papers about AI and mental health, engaging in outreach activities, and writing blog posts. I am curious about the ways in which AI can enhance the mental health services provided to humans, and am very excited to be part of this team.
