
A call for open data to develop mental health digital biomarkers

Published online by Cambridge University Press:  03 March 2022

Daniel A. Adler*
Affiliation:
Cornell Tech, USA
Fei Wang
Affiliation:
Department of Population Health Sciences, Weill Cornell Medicine, New York, New York, USA
David C. Mohr
Affiliation:
Center for Behavioral Intervention Technologies and Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
Deborah Estrin
Affiliation:
Cornell Tech, USA
Cecilia Livesey
Affiliation:
Department of Psychiatry, University of Pennsylvania, Philadelphia, Pennsylvania, USA
Tanzeem Choudhury
Affiliation:
Cornell Tech, USA
*
Correspondence: Daniel A. Adler. Email: [email protected]

Abstract

Digital biomarkers of mental health, created using data extracted from everyday technologies including smartphones, wearable devices, social media and computer interactions, have the opportunity to revolutionise mental health diagnosis and treatment by providing near-continuous, unobtrusive and remote measures of behaviours associated with mental health symptoms. Machine learning models process data traces from these technologies to identify digital biomarkers. In this editorial, we caution clinicians against using digital biomarkers in practice until models are assessed for equitable predictions (‘model equity’) across demographically diverse patients at scale, behaviours over time, and data types extracted from different devices and platforms. We posit that it will be difficult for any individual clinic or large-scale study to assess and ensure model equity, and we instead call for the creation of a repository of open de-identified data for digital biomarker development.

Type
Editorial
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists

Over the past decade, numerous studies have explored the use of data collected from ubiquitous, everyday technologies, including smartphones, wearables, social media and computer interactions (e.g. keyboard keystrokes), to remotely and continuously measure individuals’ physiology and behaviour within the fabric of their lives. Machine learning models process these data to identify digital biomarkers associated with symptoms of depression, schizophrenia and bipolar disorder.[1] Mental health disorders commonly remain undiagnosed or misdiagnosed, potentially owing to their heterogeneity in presentation or because non-mental health clinicians (e.g. primary care physicians), often the entry point to care, are less equipped for symptom assessment. Access to unobtrusively and remotely collected biomarkers can reduce the friction of integrating physiological and behavioural data, critical to understanding everyday mental health, into patient care.

Tseng et al provide an example of digital biomarker identification for predicting schizophrenia symptom changes.[2] Smartphone sensing data were collected longitudinally from 61 patients with schizophrenia over the course of a year, and schizophrenia symptoms were self-reported by patients every 2–3 days. Behavioural features were calculated from the raw sensing data, and machine learning models were trained to predict self-reported symptoms from these features. Statistical techniques then uncovered the behavioural features with the greatest influence on model predictions; these became candidate digital biomarkers. Although this specific example stems from the research literature, medical centres have also established ‘digital clinics’ to explore how digital biomarkers can best inform care.[3]
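To make this pipeline concrete, the sketch below walks through the same three steps on synthetic data: daily behavioural features are derived from sensing streams, a model is fit to predict self-reported symptom severity, and permutation importance surfaces candidate digital biomarkers. The feature names, data and model choice are illustrative assumptions, not the actual features or models used by Tseng et al.

```python
# Minimal sketch of a digital-biomarker pipeline: behavioural features in,
# self-reported symptom scores out, feature importance as candidate biomarkers.
# Column names and the synthetic data are illustrative, not from Tseng et al.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_obs = 500  # e.g. person-days of smartphone sensing

# Hypothetical behavioural features aggregated per day from raw sensor streams.
features = pd.DataFrame({
    "sleep_duration_h": rng.normal(7, 1.5, n_obs),
    "screen_unlocks": rng.poisson(60, n_obs),
    "distance_travelled_km": rng.gamma(2, 3, n_obs),
    "call_count": rng.poisson(4, n_obs),
})
# Hypothetical self-reported symptom severity (higher = worse).
symptoms = (
    10
    - 0.8 * features["sleep_duration_h"]
    - 0.2 * features["distance_travelled_km"]
    + rng.normal(0, 1, n_obs)
)

X_train, X_test, y_train, y_test = train_test_split(features, symptoms, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Features whose permutation most degrades held-out performance become
# candidate digital biomarkers.
imp = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in sorted(zip(features.columns, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```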

Despite the promise of digital biomarkers, it is difficult to collect both the digital data and the symptom outcome data required for digital biomarker construction at scale, and to ensure that models make consistent, equitable predictions within any single study or clinic. We define machine learning model equity as equal and accurate performance (e.g. with respect to sensitivity, specificity and positive predictive value) of a model across diverse patient subgroups over time.[4] Until model equity is assessed, introducing digital biomarkers into care is tenuous. We first describe three components of population diversity most relevant to digital biomarker model equity and explain why it is challenging for any single clinic or longitudinal study to collect data capturing all of these components (Fig. 1). We then hypothesise that an open data repository created to develop mental health digital biomarkers may solve these challenges.
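As a concrete illustration of this definition, the sketch below computes sensitivity, specificity and positive predictive value separately for each demographic subgroup from a set of binary predictions; large gaps between subgroups would signal an equity problem. The subgroup labels, outcomes and predictions are synthetic placeholders, not data from any study cited here.

```python
# Illustrative 'model equity' check: the same classifier's sensitivity,
# specificity and positive predictive value computed per demographic subgroup.
import numpy as np
import pandas as pd

def equity_report(df: pd.DataFrame) -> pd.DataFrame:
    """Per-subgroup sensitivity, specificity and PPV for binary predictions."""
    rows = []
    for group, g in df.groupby("subgroup"):
        tp = ((g.y_pred == 1) & (g.y_true == 1)).sum()
        tn = ((g.y_pred == 0) & (g.y_true == 0)).sum()
        fp = ((g.y_pred == 1) & (g.y_true == 0)).sum()
        fn = ((g.y_pred == 0) & (g.y_true == 1)).sum()
        rows.append({
            "subgroup": group,
            "sensitivity": tp / (tp + fn) if tp + fn else np.nan,
            "specificity": tn / (tn + fp) if tn + fp else np.nan,
            "ppv": tp / (tp + fp) if tp + fp else np.nan,
            "n": len(g),
        })
    return pd.DataFrame(rows)

# Synthetic example: a model that happens to perform worse for one subgroup.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "subgroup": rng.choice(["A", "B"], 1000),
    "y_true": rng.integers(0, 2, 1000),
})
noise = np.where(df.subgroup == "A", 0.1, 0.3)  # subgroup B gets noisier predictions
df["y_pred"] = np.where(rng.random(1000) < noise, 1 - df.y_true, df.y_true)
print(equity_report(df))
```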

Fig. 1 Data collection challenges that reduce the ability of any single clinic or longitudinal study to assess and ensure the model equity of digital biomarkers, and how an open data repository may address these challenges.

Digital biomarker data collection challenges

Demographic diversity at scale

Although previous digital biomarker research has engaged with clinical populations, it is difficult for any single clinic to recruit demographically diverse patients for digital biomarker research at scale. Clinics have easiest access to geographically collocated patients already in care and often recruit current patients into studies for digital biomarker creation.[1] For example, Henson et al recruited 83 patients with schizophrenia in the greater Boston area to create digital biomarkers associated with psychotic relapse.[5] Jacobson et al recruited 23 patients with a diagnosed mood disorder (major depressive disorder, bipolar I, bipolar II) in treatment to identify digital biomarkers associated with disorder diagnosis and changes in symptom severity.[6] Recent work by Müller et al cautions against using digital biomarkers validated only within small, homogeneous populations in practice: machine learning models trained using GPS-mobility features to predict depression symptoms had varying generalisation accuracy across distinct patient subgroups in a larger sample.[7] Thus, machine learning model equity is not ensured unless digital biomarkers are validated against data collected across demographically diverse populations.
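The kind of check Müller et al describe can be illustrated with a small sketch: a model is trained on data from one subgroup and then evaluated on held-out samples from each subgroup in turn, revealing how far performance degrades outside the training population. The features, subgroup structure and data below are synthetic assumptions, not the GPS-mobility features from that study.

```python
# Sketch of an out-of-subgroup generalisation check: fit on one subgroup,
# evaluate on each subgroup separately. All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(5)

def make_subgroup(n: int, effect: float) -> tuple[np.ndarray, np.ndarray]:
    """Synthetic mobility-like features with a subgroup-specific feature-outcome link."""
    X = rng.normal(size=(n, 4))
    y = (effect * X[:, 0] + rng.normal(0, 1, n) > 0).astype(int)
    return X, y

X_train, y_train = make_subgroup(400, 1.5)      # train on subgroup A only
model = LogisticRegression().fit(X_train, y_train)

# The strength (and sign) of the feature-outcome relationship differs by subgroup.
eval_groups = {"A": make_subgroup(200, 1.5),
               "B": make_subgroup(200, 0.5),
               "C": make_subgroup(200, -0.5)}
for name, (X, y) in eval_groups.items():
    acc = balanced_accuracy_score(y, model.predict(X))
    print(f"balanced accuracy on subgroup {name}: {acc:.2f}")
```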

Changing behaviours over time

Aggregate human behaviour changes over time, requiring long-term collection of digital sensor and symptom outcome data so that digital biomarkers can be updated. Long-term symptom outcome data are costly for any single clinic to collect. Digital biomarker creation requires frequent symptom sampling, typically collected through patient self-reports. Frequent self-reporting of symptoms is arduous for the patient, and thus researchers often create incentives for participation. For example, in the work by Ben-Zeev et al to develop digital biomarkers of schizophrenia, researchers collected a ten-question self-report from patients three times per week for an entire year.[8] To encourage engagement, researchers may compensate patients per self-report collected, increasing compensation over time to retain participation.[9] Thus, studies are careful not to overburden patients, often limiting per-patient data collection to short time periods, from 2 weeks to 3 months.[6,9] Digital biomarker studies could rely on outcomes collected during clinical visits and recorded within medical claims databases or electronic health records (EHRs) instead of self-reports; however, medical claims and EHRs, although valuable in many contexts, provide only episodic outcome measures. Developing digital biomarkers from episodic outcomes undercuts their potential for more frequent assessment.

Owing to limited longitudinal data collection, digital biomarkers are unlikely to generalise across time as aggregate population behaviour changes. For example, the COVID-19 pandemic isolated individuals, and studies conducted during the pandemic may uncover novel relationships among movement, social behaviour and mental health. In addition, in any given year, human behaviour and mental health exhibit seasonal patterns. Machine learning models therefore need to be trained using data that capture both seasonality and aggregate changes over time. In machine learning, this shift in the relationship between input data and outcomes is called concept drift.[10] Concept drift cannot be solved with large-scale studies isolated to a specific time period, and machine learning models may not remain equitable for an individual over time unless that individual's behaviour remains consistent.
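A simple way to see concept drift in this context is to train a model on one time period and track its performance on successive later periods; the sketch below does this with synthetic data in which the behaviour-outcome relationship gradually weakens, so the held-out AUC declines. The window structure, features and decay pattern are illustrative assumptions.

```python
# Sketch of concept drift: a model trained on an early period is evaluated on
# later periods where the learned behaviour-outcome relationship has decayed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_per_period = 300
periods = 4

# Train on the first period, where the first feature drives the outcome.
X0 = rng.normal(size=(n_per_period, 3))
y0 = (X0[:, 0] + rng.normal(0, 0.5, n_per_period) > 0).astype(int)
model = LogisticRegression().fit(X0, y0)

aucs = []
for t in range(1, periods):
    Xt = rng.normal(size=(n_per_period, 3))
    signal = max(1.0 - 0.3 * t, 0.0)  # the old relationship decays over time
    yt = (signal * Xt[:, 0] + rng.normal(0, 0.5, n_per_period) > 0).astype(int)
    aucs.append(roc_auc_score(yt, model.decision_function(Xt)))

print("AUC by later period:", [round(a, 2) for a in aucs])  # declines as drift grows
```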

Data type diversity

Individuals engage with a variety of platforms and devices, and access is uneven: for example, smartphones and wearables are more ubiquitous in advanced economies.[11] Data types are also inconsistent within a specific technology: Google and Apple, the companies that develop the two predominant smartphone operating systems used worldwide, have different policies on which sensor data types can be collected from smartphones running their operating systems. In addition, measurement errors may vary across mobile sensor hardware, and sensors evolve over time. Returning to the cited examples, Ben-Zeev et al and Meyerhoff et al collected smartphone sensing data exclusively from Android smartphones, whereas Jacobson et al collected wristwatch actigraphy data.[6,8,9]

For digital biomarker studies leveraging social media data, platforms host many types of media (e.g. text versus photos). Platforms such as Instagram focus on photo and video sharing, whereas Reddit hosts primarily text-based comment threads. Thus, an individual displays different types of behaviour across different social media platforms, and may actively use only a subset of existing platforms. These differences manifest within platform-specific digital biomarkers. Birnbaum et al used Facebook data to identify digital biomarkers associated with relapse in schizophrenia, including increases in co-tagging in photos and friending on the platform.[12] Twitter data are predominantly text-based compared with Facebook data, and Saha et al relied on tweet keywords to identify associations with antidepressant side-effects.[13] Equitable models need to be exposed to data types across devices and platforms, or they will cater only to the subsets of the population who engage with a particular device or platform. Although a single clinic can validate digital biomarkers across multiple data types, the heterogeneity of devices and platforms internationally, many of which are country specific, makes universal assessment infeasible within a clinic's local population.

Why open data?

The model equity challenges described above can only be solved if machine learning models are trained using data collected from demographically diverse populations at scale and over time, and are exposed to many different data types. As argued above, it is impractical for any single clinic or large-scale study to collect data that meaningfully capture these heterogeneities. Instead, we propose that researchers should publish and pool de-identified data to create digital biomarkers. This practice of publishing data-sets for shared use is called open data.

Suppose an open data repository for mental health digital biomarker construction exists. Clinicians and researchers internationally collect data from local communities and add de-identified data to the repository. This repository is updated multiple times per year from these diverse data sources, capturing behavioural changes due to seasonality. Over time, the collected data capture generational changes in behaviours and behavioural changes due to world events (e.g. the COVID-19 pandemic). New smartphones, wearables and computers are released each year, and social media platforms evolve. Researchers assess whether trained machine learning models remain accurate using data collected from newer devices and platform updates. Machine learning models trained with data from the repository are published, to be continuously evaluated and improved by the community. Clinicians serving a specific community hypothesise that current models underperform for their patients, and a paper is published validating this hypothesis. Funding is then directed to improve model performance within this population. We call for the creation of an open data repository for mental health digital biomarkers so that this vision of equitable model development and continuous validation can be realised.

Privacy considerations

Privacy considerations need to be addressed in parallel with open data repository creation. Many of the data types collected to create mental health digital biomarkers are highly revealing. For example, GPS-derived biomarkers can indicate both mobility and social behaviour, which are associated with changes in the severity of depression and bipolar disorder symptoms.[1] Collecting GPS data poses a high risk of re-identification. In addition, although individuals may have publicly facing social media profiles, users do not engage with social media with the intention of their data being collected and analysed for mental health measurement. Concerns around re-identification are pressing owing to the stigma surrounding mental illness, and legal protections for individuals who share de-identified digital biomarkers need to be upheld, given that re-identification may expose individuals involved in sensitive situations (e.g. substance misuse).

We believe that privacy is not an impediment to open data but a necessary consideration, one that opens interesting, unexplored research directions. First, the open data repository could be restricted to focus research on less identifiable yet clinically relevant data types, for example activity and sleep. Second, the community could investigate using privacy-preserving techniques during data collection or model building. Machine learning techniques, including differential privacy and federated learning, attempt to limit potential re-identification while preserving the utility of the extracted data.[14] Investing in privacy research is beneficial both for protecting users who contribute to the open data repository and for the future of data-driven health research broadly.
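As one toy illustration of the privacy-preserving direction, the sketch below applies the Laplace mechanism from differential privacy: calibrated noise is added to an aggregate statistic before it is shared, bounding what the release reveals about any single contributor. The step-count data, clipping range and privacy budget are illustrative assumptions, and a real deployment would require careful sensitivity analysis and privacy-budget accounting.

```python
# Minimal differential-privacy sketch (not a production mechanism): the Laplace
# mechanism adds noise scaled to sensitivity/epsilon to an aggregate statistic
# before it leaves the data contributor.
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release true_value with Laplace noise scaled to sensitivity / epsilon."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(3)
# Hypothetical per-participant daily step counts, clipped to a known range so
# that one participant can change the mean by at most `sensitivity`.
steps = np.clip(rng.normal(7000, 2500, size=200), 0, 20000)
sensitivity = 20000 / len(steps)   # max influence of one record on the mean
epsilon = 1.0                      # privacy budget (smaller = more private)

private_mean = laplace_mechanism(steps.mean(), sensitivity, epsilon, rng)
print(f"true mean: {steps.mean():.0f}, privately released mean: {private_mean:.0f}")
```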

Building an open data repository for mental health digital biomarkers

How do we begin to build a repository? Even with de-identification, anonymisation is not guaranteed, and thus safeguards should exist to enforce ethical data usage. For example, developers of open data platforms often impose transactional costs in exchange for data access. Sage Bionetworks’ Synapse is an open healthcare data-sharing platform that requires new users to undergo training, agree to and take a quiz on governance policies, and publicly disclose research motives prior to accessing data on the platform.[15]

We call on clinicians, researchers and industry experts in psychiatry, psychology, ubiquitous computing, machine learning and privacy to create a working group and decide to either use an existing platform or create a new open data platform for development of mental health digital biomarkers. In parallel, the working group should establish privacy and data standards. In the interim, the coauthors have created a webpage and provided links to two open de-identified data-sets collected by a subset of the coauthors and collaborators across Dartmouth College, the University of Washington and Northwell Health. The data-sets include smartphone sensing data and mental health outcome measures self-reported by study participants and are hosted on the Precision Behavioral Health at Cornell Tech website (https://pbh.tech.cornell.edu/data.html). We plan to update this site as more relevant data-sets become publicly available and then migrate the hosted data to the agreed-upon platform.

Limitations

An open data repository can enable equitable digital biomarker development only if there are targeted investments to either collect or contribute data representing the different types of diversity (demographic, time, data type) described. In addition, although an open data repository will magnify existing practical challenges with respect to combining multimodal data for predictive modelling – with modalities sampled at different frequencies, carrying different statistical properties – it will also enable further research to solve these challenges, ultimately creating more robust digital biomarkers. Note that an open data repository will not resolve broader systemic biases in healthcare, which may manifest and be amplified in collected data.
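To illustrate the multimodal-alignment challenge mentioned above, the sketch below resamples minute-level wearable data to daily summaries and joins them with daily smartphone aggregates and sparser self-reports; choices such as the aggregation window and how sparse outcomes are carried forward materially affect the resulting models. All column names, sampling rates and values are hypothetical.

```python
# Illustrative alignment of modalities sampled at different frequencies:
# minute-level heart rate resampled to daily summaries, joined to daily step
# counts and sparse self-reports. Column names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
days = pd.date_range("2022-01-01", periods=14, freq="D")

# Wearable heart rate sampled every minute.
hr_index = pd.date_range("2022-01-01", periods=14 * 24 * 60, freq="min")
heart_rate = pd.Series(rng.normal(70, 8, len(hr_index)), index=hr_index)

# Smartphone step counts aggregated daily.
steps = pd.Series(rng.poisson(6000, len(days)), index=days, name="steps")

# Self-reported symptom scores collected only every few days.
report_days = days[::3]
phq = pd.Series(rng.integers(0, 27, len(report_days)), index=report_days, name="phq")

daily = pd.DataFrame({
    "hr_mean": heart_rate.resample("D").mean(),
    "hr_std": heart_rate.resample("D").std(),
    "steps": steps,
})
# Carry each self-report forward until the next one, a common (and debatable)
# simplification when outcomes are sampled more sparsely than sensor data.
daily["phq"] = phq.reindex(daily.index).ffill()
print(daily.head())
```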

Clinical benefit and conclusion

Assessing machine learning model equity will increase clinician trust in the use of digital biomarkers for more efficient, effective and targeted care delivery, ultimately improving clinical outcomes and reducing provider burnout. Increased clinician engagement during repository creation will focus research on data types most relevant to symptom management, and sustained engagement will lead to more seamless integration of digital biomarkers into patient visits and clinical workflows.

Author contributions

D.A. conducted the literature survey and wrote the original draft. F.W. and T.C. provided supervision. D.A., F.W., D.C.M., D.E., C.L. and T.C. participated in reviewing and editing the manuscript.

Funding

D.C.M. acknowledges support from NIMH grant R01MH111610. F.W. acknowledges support from NSF 1750326 and NIH RF1AG072449. T.C. acknowledges support from the NIH, which supported some of the randomised controlled trials discussed within the article. D.A. is supported by the National Science Foundation Graduate Research Fellowship under grant no. DGE-1650441. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Declaration of interest

D.A. is co-employed by UnitedHealth Group, outside the submitted work. T.C. is a co-founder and equity holder of HealthRhythms, Inc., is co-employed by UnitedHealth Group, and has received grants from Click Therapeutics related to digital therapeutics, outside the submitted work. D.A. and T.C. hold pending patents (PCT/US21/52203; 17/551,994) related to material discussed in the surveyed literature. D.C.M. has accepted honoraria and consulting fees from Apple, Inc., Otsuka Pharmaceuticals, Pear Therapeutics and the One Mind Foundation, and royalties from Oxford Press, and has an ownership interest in Adaptive Health, Inc. C.L. is on the advisory board for Compass Pathways and is co-employed by UnitedHealth Group, outside the submitted work.

References

1. Mohr DC, Zhang M, Schueller SM. Personal sensing: understanding mental health using ubiquitous sensors and machine learning. Annu Rev Clin Psychol 2017; 13: 23–47.
2. Tseng VW-S, Sano A, Ben-Zeev D, Brian R, Campbell AT, Hauser M, et al. Using behavioral rhythms and multi-task learning to predict fine-grained symptoms of schizophrenia. Sci Rep 2020; 10: 15100.
3. Rodriguez-Villa E, Rauseo-Ricupero N, Camacho E, Wisniewski H, Keshavan M, Torous J. The digital clinic: implementing technology and augmenting care for mental health. Gen Hosp Psychiatry 2020; 66: 59–66.
4. Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring fairness in machine learning to advance health equity. Ann Intern Med 2018; 169: 866–72.
5. Henson P, D'Mello R, Vaidyam A, Keshavan M, Torous J. Anomaly detection to predict relapse risk in schizophrenia. Transl Psychiatry 2021; 11: 1–6.
6. Jacobson NC, Weingarden H, Wilhelm S. Digital biomarkers of mood disorders and symptom change. NPJ Digit Med 2019; 2: 1–3.
7. Müller SR, Chen XL, Peters H, Chaintreau A, Matz SC. Depression predictions from GPS-based mobility do not generalize well to large demographically heterogeneous samples. Sci Rep 2021; 11: 14007.
8. Ben-Zeev D, Brian R, Wang R, Wang W, Campbell AT, Aung MSH, et al. CrossCheck: integrating self-report, behavioral sensing, and smartphone use to identify digital indicators of psychotic relapse. Psychiatr Rehabil J 2017; 40: 266–75.
9. Meyerhoff J, Liu T, Kording KP, Ungar LH, Kaiser SM, Karr CJ, et al. Evaluation of changes in depression, anxiety, and social anxiety using smartphone sensor features: longitudinal cohort study. J Med Internet Res 2021; 23: e22844.
10. Žliobaitė I. Learning under concept drift: an overview. arXiv [Preprint] 2010. Available from: http://arxiv.org/abs/1010.4784 [cited 23 Apr 2021].
11. Silver L. Smartphone Ownership is Growing Rapidly Around the World, but Not Always Equally. Pew Research Center, 2019. Available from: https://www.pewresearch.org/global/2019/02/05/smartphone-ownership-is-growing-rapidly-around-the-world-but-not-always-equally/ [cited 28 Jul 2021].
12. Birnbaum ML, Ernala SK, Rizvi AF, Arenare E, Van Meter AR, De Choudhury M, et al. Detecting relapse in youth with psychotic disorders utilizing patient-generated and patient-contributed digital data from Facebook. NPJ Schizophr 2019; 5: 1–9.
13. Saha K, Torous J, Kiciman E, De Choudhury M. Understanding side effects of antidepressants: large-scale longitudinal study on social media data. JMIR Ment Health 2021; 8: e26589.
14. Xu J, Glicksberg BS, Su C, Walker P, Bian J, Wang F. Federated learning for healthcare informatics. J Healthc Inform Res 2020; 5: 1–19.
15. Grayson S, Suver C, Wilbanks J, Doerr M. Open data sharing in the 21st century: Sage Bionetworks' qualified research program and its application in mHealth data release. SSRN Electron J 2019. Available from: https://doi.org/10.2139/ssrn.3502410.
