Research

My research can be divided into two main domains: (1) studying close relationship processes and their impact on relationship and mental health and (2) digital health interventions. These domains synergistically inform my goal of improving individual and relational well-being through innovative, technology-driven solutions.

As a licensed relationship therapist and academic researcher, my aim is to bridge the gap between clinical practice and research. I am particularly interested in using technology to improve psychological interventions and to better understand user experience and engagement. For example, in collaboration with a relationship therapy application Blueheart, I developed a digital intervention to help improve emotional and physical intimacy for couples. I found that the treatment was effective, but the drop-out rate was high in line with findings from most other digital intervention studies. The difficulty with user engagement has prompted me to look into alternative options to improve digital health interventions.

This led me to explore whether chatbots (i.e., computer programs designed to engage with a human) powered by generative artificial intelligence could be used to improve user engagement. Chatbots can personalise the experience to the user, they have been shown to facilitate the creation of a therapeutic alliance, and they can increase accountability, all of which are lacking in traditional digital interventions. As such, they provide a promising future avenue for developing digital interventions.

In a recent study I showed that humans found relationship advice created by chatbots more empathic and helpful compared to relationship experts. In another study participants interacted in a single session therapeutic intervention with GPT-4 and perceived the chatbot as human-like, empathic, and helpful for their issue.

Studying AI for interventions provides an exciting and innovative area for research but it also raises many ethical questions around bias, fairness, and patient safety. I am currently working on several projects to address how we can use AI chatbots in a safe and ethical way to improve access to relationship interventions for a wider population including underserved communities who traditionally may not have had access to relationship support. You can read more about the different projects below with examples.

Comparison between chatbots and experts

Across three different studies, we have shown that participants rate chatbot responses to relationship-related questions higher in empathy and helpfulness compared to experts and often cannot accurately tell which responses are written by AI and which by experts. Finally, if they think a response is written by an AI, they rate the responses lower. You can read more about the results here and here. In the future, the plan is to also compare peer-responses as forums such as Reddit are very popular in seeking relationship advice.

“I think my partner may be cheating on me. What should I do?”

If you suspect your partner is cheating, it’s crucial to approach the situation with understanding, empathy, and a calm mindset. Communication is key in any relationship, so start by expressing your concerns to your partner openly and honestly, but without blame or accusations. Frame your concerns as observations and feelings. For example, you might say, “I’ve noticed that you’ve been acting differently lately and it’s making me feel anxious. Can we talk about it?” It’s important not to rush to conclusions, but rather express your feelings and allow your partner to share their perspective. Keep in mind, your partner may be going through a stressful time or facing issues that you’re not aware of. If your concerns persist, seeking help from a professional counselor or therapist may be helpful to facilitate communication and work through these issues. – GPT-4


Single session intervention with Amanda

In this project, 20 participants interacted in a single session intervention with Amanda about a specific relationship issue they had. The participants found the chatbot overall human-like, empathic, and helpful. We also found that the chatbot overall did well in using therapeutic skills and made no errors in language or understanding. You can read more about the results here. In the future, the plan is to run a larger study with more people to compare the single session chatbot intervention with traditional interventions.

Audio example of a session with Amanda the chatbot:


Attitudes toward Chatbots

In order for chatbot interventions to be feasible, people need to be willing to engage with a chatbot. People’s attitudes toward chatbots are often mixed with some being more positive while others being rather negative. In a recent study, we found that when participants engaged with a human-like and empathic chatbot, they generally felt more positive toward a chatbot but without prior engagement, baseline attitudes were often negative. You can read more about the results here.


Risk assessment capabilities of chatbots

One of the important questions related to chatbots in providing interventions is how chatbots identify, assess, and manage situations where the user is potentially in danger, for example because of thoughts of suicide, domestic violence, or excessive substance abuse. I have several ongoing projects in which we are looking at how different chatbots recognise risk from vignettes or from discussions with users. More information to follow shortly.