1. Introduction
Advances in computational capacity, data collection and machine learning are driving increasing interest in artificial intelligence (AI), as reflected by a recent surge in funding and research. AI has many potential applications within medicine. It is being used to automate the UK's NHS 111 triage service [Reference Burgess1] and can detect retinal pathology as effectively as consultant ophthalmologists [Reference Fauw, Ledsam, Romera-Paredes, Nikolov, Tomasev and Blackwell2]. However, before AI can safely have an impact within psychiatry, several issues must be considered, including capacity and consent, data security and patient privacy, and clinical governance.
2. Diagnostic tools
One challenge for clinicians in making diagnoses is that patient interactions offer only a snapshot of an individual's mental state, yet mood disorders are dynamic in nature and fluctuate over time. At present, psychiatric assessment includes observation of the patient's mental state and subjective self-report questionnaires, such as the GAD-7 for anxiety and the PHQ-9 for depression. These methods are subjective, difficult to repeat and can be time-consuming. AI may enable additional methods, such as audio and video analysis, which offer greater objectivity and may have better predictive value. IBM Research developed a machine learning speech classifier with 79% accuracy in predicting psychosis onset in individuals at clinical high risk [Reference Corcoran, Carrillo, Fernández-Slezak, Bedi, Klim and Javitt3]. Another group showed that computer vision can detect attention-deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD) with 96% accuracy from video analysis of a person's behaviour [Reference Jaiswal, Valstar, Gillott and Daley4]. As well as assisting with diagnosis, such tools may also help monitor patients' progress in both inpatient and outpatient settings.
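To make the general shape of such a system concrete, the sketch below trains a simple binary classifier over speech-derived features. It is purely illustrative: the feature names, sample size and randomly generated data are hypothetical, and this is not the published pipeline from any study cited above.

```python
# Illustrative sketch only: a binary classifier over hypothetical
# speech-derived features. Data are randomly generated, so the
# resulting accuracy is meaningless; only the workflow is of interest.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n = 200  # hypothetical number of transcribed patient interviews
# Hypothetical feature columns, e.g. semantic coherence,
# maximum phrase length, determiner usage rate.
X = rng.normal(size=(n, 3))
y = rng.integers(0, 2, size=n)  # 1 = later transitioned to psychosis

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```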
3. Monitoring
All but the most severe psychiatric care takes place in an outpatient setting, so additional monitoring in the community would offer substantial benefit. Early detection and prevention of relapse can have a significant impact on outcomes.
One approach to monitoring mood and mental health outside the clinical setting is written records kept by patients, such as mood diaries. These are prone to recall bias and compliance issues. Apps have been developed which actively prompt users to answer questions on their mood, sleeping patterns and other relevant areas, although some of the same issues persist.
AI may be combined with sensors and smartphone applications to enable increased monitoring in the community. Cogito, a Boston-based AI company, developed the app "Companion", which records behavioural indicators via smartphones. In a study of 73 people with either PTSD or depression, the number of outgoing calls, the count of unique numbers texted, the absolute distance travelled, dynamic variation of the voice, speaking rate and voice quality were all good predictors of symptoms of depression and PTSD [Reference Place, Blanch-Hartigan, Rubin, Gorrostieta, Mead and Kane5]. As well as behavioural indicators, physiological data may provide further objective measures of mental health. In a group of 251 college students, Sano et al. found that skin conductance and temperature classified students into high/low stress groups with 78% accuracy and into high/low mental health groups with 86% accuracy [Reference Sano, Taylor, McHill, Phillips, Barger and Klerman6].
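The sketch below shows how such a physiological classifier might look in outline. It is not the method of Sano et al.: the signal scales are assumed and the data and labels are synthetic, generated only so the example runs end to end.

```python
# Illustrative sketch, not the Sano et al. method: classifying
# high/low stress from two physiological signals using synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 251  # mirrors the study's sample size; data here are synthetic
skin_conductance = rng.normal(5.0, 1.5, n)   # microsiemens (assumed scale)
skin_temperature = rng.normal(33.0, 0.8, n)  # degrees Celsius (assumed scale)
X = np.column_stack([skin_conductance, skin_temperature])
# Synthetic label loosely tied to one feature, purely for demonstration.
y = (skin_conductance + rng.normal(0, 1, n) > 5.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {accuracy_score(y_te, model.predict(X_te)):.2f}")
```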
Another aspect of monitoring is medication adherence, which is an issue in all chronic health problems and may be particularly so in mental health conditions. Estimates of non-adherence to antipsychotic medication range from 20% to 89% among patients with schizophrenia or bipolar disorder [Reference Stentzel, van den Berg, Schulze, Schwaneberg, Radicke and Langosch7]. Several smartphone applications aim to improve adherence by giving reminders and helping patients keep track of their medications. Machine learning may facilitate continual improvement of such applications, including tailoring to the individual, to maximise their effect on medication adherence; a hypothetical sketch of such tailoring follows.
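One simple way such per-patient tailoring could work in principle is an epsilon-greedy choice among candidate reminder times, learning which slot a given patient actually responds to. This is entirely hypothetical and not drawn from any cited application.

```python
# Hypothetical sketch of per-patient tailoring: an epsilon-greedy
# choice among candidate reminder times, updated from whether the
# patient confirmed taking their medication.
import random

times = ["08:00", "13:00", "20:00"]  # candidate reminder slots
sent = {t: 0 for t in times}         # reminders delivered per slot
taken = {t: 0 for t in times}        # confirmed doses per slot

def choose_time(epsilon: float = 0.1) -> str:
    """Mostly exploit the best-performing slot; occasionally explore."""
    if random.random() < epsilon or all(v == 0 for v in sent.values()):
        return random.choice(times)
    return max(times, key=lambda t: taken[t] / sent[t] if sent[t] else 0.0)

def record(slot: str, confirmed: bool) -> None:
    """Update the per-slot statistics after each reminder."""
    sent[slot] += 1
    if confirmed:
        taken[slot] += 1

# Example interaction: send one reminder and log the outcome.
slot = choose_time()
record(slot, confirmed=True)  # e.g. the patient tapped "taken" in the app
```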
4. Treatment
Artificial intelligence can enable existing treatments to be provided through novel means, which may increase availability and effectiveness.
4.1 CBT chatbots
Internet-based cognitive behavioural therapy (CBT) has been offered since the 1990s but has been characterised by low adherence. The development of CBT chatbots, which mimic normal conversational style to deliver CBT, may increase adherence and offer other advantages. Fitzpatrick et al. found that one such chatbot, named ‘Woebot’, decreased both depression and anxiety in college students over a two-week course [Reference Fitzpatrick, Darcy and Vierhile8].
The optimal use of such chatbots remains unclear given the preliminary stage of research. While some early studies suggest improvement in anxiety and depression [Reference Fitzpatrick, Darcy and Vierhile8], one study found an increase in anxiety and alcohol consumption among Japanese workers, which the authors attributed to heightened awareness of pathological thought and drinking behaviour [9].
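For readers unfamiliar with how such systems are built, the toy sketch below shows the general shape of a rule-based chatbot turn that mirrors one CBT technique (flagging possible cognitive distortions and prompting the user to examine the evidence). The patterns and wording are invented; this is not how Woebot or any cited system is implemented.

```python
# Toy rule-based sketch of a CBT-style chatbot turn. The distortion
# cues and responses are invented for illustration only.
DISTORTION_PATTERNS = {
    "catastrophising": ["always", "never", "ruined", "disaster"],
    "mind-reading": ["they think", "everyone thinks", "she must think"],
}

def respond(user_message: str) -> str:
    """Return a CBT-flavoured reply based on simple keyword matching."""
    text = user_message.lower()
    for distortion, cues in DISTORTION_PATTERNS.items():
        if any(cue in text for cue in cues):
            return (f"That sounds hard. I noticed a thought pattern CBT "
                    f"calls {distortion}. What evidence supports this "
                    f"thought, and what evidence goes against it?")
    return "Thanks for sharing. Could you tell me more about how that felt?"

print(respond("I always mess everything up"))
```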
4.2 Accessibility, de-stigmatisation and personalisation
In 45% of the world there is less than one psychiatrist per 100,000 people, yet over half of the world's population now owns a smartphone. A single AI system can be used by very large populations, provided they have access to the required hardware, which is increasingly ubiquitous. Such technology could increase accessibility by reducing lengthy and costly travel to centrally located mental health clinics. It may enable over-burdened mental health professionals to extend the reach of their services. It could also help people whose conditions restrict their ability to travel, such as generalised anxiety disorder, agoraphobia or physical health problems.
AI may also increase accessibility by circumventing some of the stigma surrounding mental illness. For example, chatbots may avoid such stigma as they are not part of a wider social construct with all the associated cultural norms and expectations. They are more likely to be perceived as non-judgmental, non-opinionated and overall neutral.
AI may enable greater personalisation of care. For example, more sophisticated analysis of large volumes of data may enable better prediction of therapeutic response and side effect profile from different medications. This would reduce the need to perform multiple trials of different medications.
5. Re-balancing clinician workload
AI may drastically re-balance a clinician’s workload, giving them more time to interact with patients and thereby improving the quality of care. This is particularly true within psychiatry because it is a text-heavy field: psychiatrists spend considerable time reading previous notes to build an accurate picture of a patient’s history. Natural Language Processing (NLP) is an area of AI concerned with extracting meaningful information from human language. NLP could be used to summarise the most important data from a patient’s electronic health records, providing a succinct summary at the beginning of a consultation. For example, a timeline of mental state examinations and the associated treatment regimen could be produced, as sketched below. NLP could provide still more powerful analysis if combined with AI analysis of speech and video to produce a short summary of a patient’s mental state, which could be a useful supplement to the psychiatrist’s mental state examination. Such an algorithm would be objective and not subject to inter-clinician variability.
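A minimal sketch of such a timeline extraction is given below. It uses simple regular expressions over invented note text with an assumed "MSE:" heading convention; a production system would use a proper clinical NLP pipeline rather than pattern matching.

```python
# Minimal sketch: pulling a dated mental-state timeline out of
# free-text notes with regular expressions. The note format and
# contents are invented for illustration.
import re

notes = """
2023-01-10 MSE: low mood, poor eye contact. Started sertraline 50mg.
2023-02-14 MSE: mood improving, good rapport. Sertraline continued.
2023-04-02 MSE: euthymic, reactive affect. Dose unchanged.
"""

# Capture an ISO date followed by the mental state examination summary.
pattern = re.compile(r"(\d{4}-\d{2}-\d{2})\s+MSE:\s*(.+)")
timeline = pattern.findall(notes)

for date, summary in timeline:
    print(f"{date}: {summary}")
```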
6. Data security, privacy and consent
Healthcare data is sensitive, and mental health data particularly so, given the high risk of stigmatisation and discrimination should it be disclosed. For this new technology to be effective the public must embrace it, and that requires the healthcare profession to earn their trust. There have been high-profile cases involving misuse of personal data, such as the Facebook-Cambridge Analytica scandal; similar events within healthcare could have serious consequences.
Mental illness can affect capacity and thus the ability to provide consent, and capacity can fluctuate over time. A patient may initially give consent for passive monitoring, for example, but if they later lose capacity due to worsening mental health, it is unclear whether their initial consent remains valid. AI would also require patients to consent to the collection of much greater amounts of data; those who consent to written medical notes may be less willing to consent to video, audio and other forms of data being stored.
7. Clinical governance
Mobile apps are regulated far less stringently than medical products and treatments. For example, over 47,000 mental health apps were on sale to US consumers in 2015. The majority had not been validated, and of the few that had, validation was primarily via small-scale, short-term pilot studies. The use of unvalidated smartphone apps poses risks to patients through poor-quality information and potentially harmful recommendations. Many mental health apps may not be classed as ‘medical products’ because they focus more on self-management and lifestyle.
Clinical governance in this area is important to prevent unregulated use and potential harm. In the UK, the NHS has made efforts to introduce this by creating the “NHS Digital Apps Library”, which awards an ‘NHS Approved’ label to apps with sufficient evidence of effectiveness and safety.
It should not be assumed that new technologies will necessarily have a positive effect; they can plausibly cause harm. For example, while some people feel more empowered to control their illness through passive monitoring, others may feel overwhelmed by the additional responsibility, or perceive it as a constant reminder that they are ill [Reference Glenn and Monteith10]. Some technologies may benefit certain subpopulations yet harm others, and these distinctions must be established through thorough research.
8. Conclusion
The successful integration of AI into healthcare could dramatically improve quality of care. In psychiatry, new tools for diagnosis, monitoring and treatment may improve patient outcomes and re-balance clinician workload. While there is great potential, numerous risks and challenges will arise. These will require careful navigation to ensure the successful implementation of this new technology.
Conflicts of interest
Christopher Lovejoy and Varun Buch are employees of Cera Care. Mahiben Maruthappu is an investor and employee of Cera Care. Cera Care is a domiciliary care provider conducting research into how artificial intelligence can be used to improve the care delivered to elderly people living at home. This work received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.