Expanding horizons in reinforcement learning for curious exploration and creative planning
Published online by Cambridge University Press: 21 May 2024
Abstract
Curiosity and creativity are expressions of the trade-off between leveraging that with which we are familiar or seeking out novelty. Through the computational lens of reinforcement learning, we describe how formulating the value of information seeking and generation via their complementary effects on planning horizons formally captures a range of solutions to striking this balance.
Type: Open Peer Commentary
Copyright © The Author(s), 2024. Published by Cambridge University Press
References
Addicott, M. A., Pearson, J. M., Sweitzer, M. M., Barack, D. L., & Platt, M. L. (2017). A primer on foraging and the explore/exploit trade-off for psychiatry research. Neuropsychopharmacology, 42(10), 1931–1939.
Aru, J., Drüke, M., Pikamäe, J., & Larkum, M. E. (2023). Mental navigation and the neural mechanisms of insight. Trends in Neurosciences, 46(2), 100–109.
Botvinick, M. M. (2012). Hierarchical reinforcement learning and decision making. Current Opinion in Neurobiology, 22(6), 956–962.
Botvinick, M. M., Niv, Y., & Barto, A. G. (2009). Hierarchically organized behavior and its neural foundations: A reinforcement learning perspective. Cognition, 113(3), 262–280.
Correa, C. G., Ho, M. K., Callaway, F., Daw, N. D., & Griffiths, T. L. (2023). Humans decompose tasks by trading off utility and computational cost. PLoS Computational Biology, 19(6), e1011087.
Cover, T. M., & Thomas, J. A. (1991). Elements of information theory (pp. 336–373). Wiley.
Dubey, R., & Griffiths, T. L. (2020). Reconciling novelty and complexity through a rational analysis of curiosity. Psychological Review, 127(3), 455–476. http://dx.doi.org/10.1037/rev0000175
Eysenbach, B., Gupta, A., Ibarz, J., & Levine, S. (2018). Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070.
Fox, R., Pakman, A., & Tishby, N. (2015). Taming the noise in reinforcement learning via soft updates. arXiv preprint arXiv:1512.08562.
Gershman, S. J., & Niv, Y. (2015). Novelty and inductive generalization in human reinforcement learning. Topics in Cognitive Science, 7(3), 391–415.
Gottlieb, J., Oudeyer, P.-Y., Lopes, M., & Baranes, A. (2013). Information-seeking, curiosity, and attention: Computational and neural mechanisms. Trends in Cognitive Sciences, 17(11), 585–593.
Gruber, M. J., & Ranganath, C. (2019). How curiosity enhances hippocampus-dependent memory: The prediction, appraisal, curiosity, and exploration (PACE) framework. Trends in Cognitive Sciences, 23(12), 1014–1025.
Harada, T. (2020). The effects of risk-taking, exploitation, and exploration on creativity. PLoS ONE, 15(7), e0235698.
Harhen, N. C., & Bornstein, A. M. (2023). Overharvesting in human patch foraging reflects rational structure learning and adaptive planning. Proceedings of the National Academy of Sciences, 120(13), e2216524120.
Jach, H. K., Cools, R., Frisvold, A., Grubb, M., Hartley, C. A., & Hartman, J. (2023). Curiosity in cognitive science and personality psychology: Individual differences in information demand have a low dimensional structure that is predicted by personality traits. PsyArXiv.
Jiang, N., Kulesza, A., Singh, S., & Lewis, R. (2015). The dependence of effective planning horizon on model accuracy. In E. Elkind, G. Weiss, P. Yolum, & R. H. Bordini (Eds.), Proceedings of the 2015 international conference on autonomous agents and multiagent systems (pp. 1181–1189). International Foundation for Autonomous Agents and Multiagent Systems.
Kaelbling, L. P., Littman, M. L., & Cassandra, A. R. (1998). Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1–2), 99–134.
Kashdan, T. B., Stiksma, M. C., Disabato, D. J., McKnight, P. E., Bekier, J., Kaji, J., & Lazarus, R. (2018). The five-dimensional curiosity scale: Capturing the bandwidth of curiosity and identifying four unique subgroups of curious people. Journal of Research in Personality, 73, 130–149.
Kauvar, I., Doyle, C., Zhou, L., & Haber, N. (2023). Curious replay for model-based adaptation.
Kobayashi, K., Ravaioli, S., Baranès, A., Woodford, M., & Gottlieb, J. (2019). Diverse motives for human curiosity. Nature Human Behaviour, 3(6), 587–595. http://dx.doi.org/10.1038/s41562-019-0589-3
Kruglanski, A. W., & Webster, D. M. (2018). Motivated closing of the mind: "Seizing" and "freezing." In A. W. Kruglanski (Ed.), The motivated mind (pp. 60–103). Routledge.
Lai, L., & Gershman, S. J. (2021). Policy compression: An information bottleneck in action selection. In Psychology of learning and motivation (Vol. 74, pp. 195–232). Elsevier.
Liquin, E. G., & Gopnik, A. (2022). Children are more exploratory and learn more than adults in an approach-avoid task. Cognition, 218, 104940. http://dx.doi.org/10.1016/j.cognition.2021.104940
Litman, J. A. (2008). Interest and deprivation factors of epistemic curiosity. Personality and Individual Differences, 44(7), 1585–1595.
Lydon-Staley, D. M., Zhou, D., Blevins, A. S., Zurn, P., & Bassett, D. S. (2021). Hunters, busybodies and the knowledge network building associated with deprivation curiosity. Nature Human Behaviour, 5(3), 327–336.
Mack, M. L., Preston, A. R., & Love, B. C. (2020). Ventromedial prefrontal cortex compression during concept learning. Nature Communications, 11(1), 46.
Masís, J., Chapman, T., Rhee, J. Y., Cox, D. D., & Saxe, A. M. (2023). Strategically managing learning during perceptual decision making. eLife, 12, e64978.
Molinaro, G., Cogliati Dezza, I., Bühler, S. K., Moutsiana, C., & Sharot, T. (2023). Multifaceted information-seeking motives in children. Nature Communications, 14(1), 611. http://dx.doi.org/10.1038/s41467-023-40971-x
Momennejad, I. (2020). Learning structures: Predictive representations, replay, and generalization. Current Opinion in Behavioral Sciences, 32, 155–166.
Nussenbaum, K., Martin, R. E., Maulhardt, S., Yang, Y. J., Bizzell-Hatcher, G., Bhatt, N. S., Koenig, M., Rosenbaum, G. M., O'Doherty, J. P., Cockburn, J., & Hartley, C. A. (2023). Novelty and uncertainty differentially drive exploration across development. eLife, 12, 595. http://dx.doi.org/10.7554/eLife.84260
Oudeyer, P.-Y., & Kaplan, F. (2007). What is intrinsic motivation? A typology of computational approaches. Frontiers in Neurorobotics, 1, 6.
Patankar, S. P., Zhou, D., Lynn, C. W., Kim, J. Z., Ouellet, M., Ju, H., … Bassett, D. S. (2023). Curiosity as filling, compressing, and reconfiguring knowledge networks. Collective Intelligence, 2(4), 26339137231207633.
Rmus, M., Ritz, H., Hunter, L. E., Bornstein, A. M., & Shenhav, A. (2022). Humans can navigate complex graph structures acquired during latent learning. Cognition, 225, 105103.
Rubin, J., Shamir, O., & Tishby, N. (2012). Trading value and information in MDPs. In Decision making with imperfect decision makers (pp. 57–74). Springer.
Schacter, D. L., & Addis, D. R. (2007). The cognitive neuroscience of constructive memory: Remembering the past and imagining the future. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1481), 773–786.
Schapiro, A. C., McDevitt, E. A., Rogers, T. T., Mednick, S. C., & Norman, K. A. (2018). Human hippocampal replay during rest prioritizes weakly learned information and predicts memory performance. Nature Communications, 9(1), 3920.
Schapiro, A. C., Rogers, T. T., Cordova, N. I., Turk-Browne, N. B., & Botvinick, M. M. (2013). Neural representations of events arise from temporal community structure. Nature Neuroscience, 16(4), 486–492.
Schmidhuber, J. (2008). Driven by compression progress: A simple principle explains essential aspects of subjective beauty, novelty, surprise, interestingness, attention, curiosity, creativity, art, science, music, jokes. In G. Pezzulo, M. V. Butz, O. Sigaud, & G. Baldassarre (Eds.), Workshop on anticipatory behavior in adaptive learning systems (pp. 48–76). Springer.
Schulz, L. E., & Bonawitz, E. B. (2007). Serious fun: Preschoolers engage in more exploratory play when evidence is confounded. Developmental Psychology, 43(4), 1045–1050. http://dx.doi.org/10.1037/0012-1649.43.4.1045
Shannon, C. E. (1948). A mathematical theory of communication. The Bell System Technical Journal, 27(3), 379–423.
Stachenfeld, K. L., Botvinick, M. M., & Gershman, S. J. (2017). The hippocampus as a predictive map. Nature Neuroscience, 20(11), 1643–1653.
Sternberg, R. J., & Lubart, T. I. (1996). Investing in creativity. American Psychologist, 51(7), 677.
Sutton, R. S., Precup, D., & Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1–2), 181–211.
Tang, H., Houthooft, R., Foote, D., Stooke, A., Xi Chen, O., Duan, Y., … Abbeel, P. (2017). #Exploration: A study of count-based exploration for deep reinforcement learning. Advances in Neural Information Processing Systems, 30, 2753–2762.
Wade, S., & Kidd, C. (2019). The role of prior knowledge and curiosity in learning. Psychonomic Bulletin & Review, 26(4), 1377–1387. http://dx.doi.org/10.3758/s13423-019-01598-6
Wilson, R., Bonawitz, E., Costa, V. D., & Ebitz, R. B. (2021). Balancing exploration and exploitation with information and randomization. Current Opinion in Behavioral Sciences, 38, 49–56.
Wilson, R., Wang, S., Sadeghiyeh, H., & Cohen, J. D. (2020). Deep exploration as a unifying account of explore-exploit behavior.
Wittmann, B. C., Bunzeck, N., Dolan, R. J., & Düzel, E. (2007). Anticipation of novelty recruits reward system and hippocampus while promoting recollection. Neuroimage, 38(1), 194–202.
Wittmann, B. C., Daw, N. D., Seymour, B., & Dolan, R. J. (2008). Striatal activity underlies novelty-based choice in humans. Neuron, 58(6), 967–973.
Yoo, J., Bornstein, A., & Chrastil, E. R. (2023). Cognitive graphs: Representational substrates for planning.
Zedelius, C. M., Gross, M. E., & Schooler, J. W. (2022). Inquisitive but not discerning: Deprivation curiosity is associated with excessive openness to inaccurate information. Journal of Research in Personality, 98, 104227.
Zhou, D., Kim, J. Z., Pines, A. R., Sydnor, V. J., Roalf, D. R., Detre, J. A., … Bassett, D. S. (2022). Compression supports low-dimensional representations of behavior across neural circuits. bioRxiv, 2022–11.
Zhou, D., Lydon-Staley, D. M., Zurn, P., & Bassett, D. S. (2020). The growth and form of knowledge networks by kinesthetic curiosity. Current Opinion in Behavioral Sciences, 35, 125–134.
Zhou, D., Patankar, S., Lydon-Staley, D. M., Zurn, P., Gerlach, M., & Bassett, D. S. (2023). Architectural styles of curiosity in global Wikipedia mobile app readership. PsyArXiv.
Zurn, P. (2021). Curiosity: An affect of resistance. Theory & Event, 24(2), 611–617.
Ivancovsky et al. propose fruitful connections between curiosity and creativity under an exploration–exploitation trade-off. The explore–exploit trade-off is the decision between a familiar option with known value and an unfamiliar option with unknown or uncertain value (Addicott, Pearson, Sweitzer, Barack, & Platt, 2017). Choosing unfamiliar options risks time, energy, and foregone reward in return for information (Rubin, Shamir, & Tishby, 2012).
These ideas have a long history in reinforcement learning. For example, novelty seeking is important for preventing failures of learning in which suboptimal solutions are settled on prematurely (Fox, Pakman, & Tishby, 2015). Despite these benefits, seeking novel information can also carry a high cost when it forgoes familiar opportunities and accrues a burdensome amount of information (Wilson, Bonawitz, Costa, & Ebitz, 2021). Thus, one must manage costs by taking "sensible risks," balancing exploration to learn novel information about the environment against accruing increasingly complex information for the different tasks at hand (Sternberg & Lubart, 1996). One way to encourage such risk taking is to use heuristics that locally track what has and has not been seen (Tang et al., 2017; Wittmann, Bunzeck, Dolan, & Düzel, 2007; Wittmann, Daw, Seymour, & Dolan, 2008). By contrast, preferring familiarity can manifest as a form of perseverative information seeking associated with deprivation curiosity (Lydon-Staley, Zhou, Blevins, Zurn, & Bassett, 2021), a drive to reduce uncertainty and acquire missing information (Kashdan et al., 2018; Litman, 2008).
This preference for familiarity is more prevalent in people with greater depressed mood and anxiety (Zhou et al., 2023), and may be an important heuristic strategy for reducing uncertainty so that future-oriented decisions become more reliable (Harhen & Bornstein, 2023; Jiang, Kulesza, Singh, & Lewis, 2015). However, in large environments such local heuristics are impoverished, particularly when higher-order associations are needed for planning. This need for richer measurements motivates the use of network science tools to formalize both local and global relationships as internal representations of the environment (Yoo, Bornstein, & Chrastil, 2023; Zhou, Lydon-Staley, Zurn, & Bassett, 2020). Thus, we propose expansions of the novelty-seeking model using reinforcement learning approaches to exploration and network science perspectives on information complexity and compression.
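The local count-based heuristics described above can be made concrete with a minimal sketch: a bonus that decays with a state's visit count is added to the reward, so unfamiliar options are temporarily overvalued. The inverse-square-root decay and the weight `beta` below are illustrative assumptions of ours, not quantities taken from any of the cited models.

```python
import math

def novelty_bonus(base_reward, visit_count, beta=0.5):
    """Count-based exploration bonus in the spirit of Tang et al. (2017).

    The bonus shrinks as a state's visit count grows, so novelty is
    rewarded early and fades with familiarity. `beta` is an illustrative
    exploration weight.
    """
    return base_reward + beta / math.sqrt(visit_count + 1)

# A never-visited option outvalues an equally rewarding familiar one.
novel = novelty_bonus(base_reward=1.0, visit_count=0)      # 1.5
familiar = novelty_bonus(base_reward=1.0, visit_count=99)  # 1.05
```

Because the bonus depends only on local visit counts, an agent using it need not represent anything about the global structure of the environment, which is precisely the limitation raised above.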
Ivancovsky et al. rightly note that curiosity and creativity must involve a dynamic policy of behavior that adaptively alternates between modes of exploration and exploitation. Reinforcement learning approaches reveal what behavior pattern, or policy, is appropriate for a given task and environment, for instance one adapted to the sparsity of rewarding solutions (Gershman & Niv, 2015). To this end, the target article describes the reinforcement learning approach of Harada (2020). Notably, however, that study reported that divergent and convergent thinking measures of creativity, and the personality trait of openness to experience (a proxy for being "inventive/curious"), were not robustly associated with exploration and exploitation behavior under model-free reinforcement learning (Harada, 2020). This finding and other work (Jach et al., 2023; Molinaro et al., 2023) highlight the need to understand creativity via more sophisticated models of the value of exploration.
The value of information is sometimes treated as a simple heuristic that predisposes choices toward exploration (Gottlieb, Oudeyer, Lopes, & Baranes, 2013), but it can also be formally expanded as the change in expected future value that results from increasing certainty over representations of the environment and the sequence of choices (Kaelbling, Littman, & Cassandra, 1998). These planning and policy iteration approaches aim for more global knowledge about the environment, and thereby differ from local count-based reward functions that encourage exploration (Masís, Chapman, Rhee, Cox, & Saxe, 2023; Oudeyer & Kaplan, 2007; Tang et al., 2017; Wittmann et al., 2008). Here we focus on approaches that balance the increased long-run discounted expected value of knowledge against the cost of sampling (exploration) (Kaelbling et al., 1998). The focus of choice thus shifts from an explore-or-exploit distinction to the iterative improvement of knowledge of the environment by testing predictions and simulations of future outcomes under a given action policy (Gruber & Ranganath, 2019; Kobayashi, Ravaioli, Baranès, Woodford, & Gottlieb, 2019; Wilson, Wang, Sadeghiyeh, & Cohen, 2020; Dubey & Griffiths, 2020; Liquin & Gopnik, 2022).
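This formal notion of the value of information can be illustrated in a one-shot choice: it is the gain in expected value from resolving uncertainty before choosing, rather than choosing on expectations alone. The sketch below is a textbook expected-value-of-perfect-information calculation under our own toy assumptions, not the partially observable formulation of Kaelbling et al. (1998).

```python
def expected_value_of_information(known_value, unknown_outcomes):
    """Gain in expected value from observing an uncertain option's value
    before choosing between it and a known option.

    unknown_outcomes: list of (probability, value) pairs for the
    uncertain option.
    """
    # Acting without information: choose the better option in expectation.
    mean_unknown = sum(p * v for p, v in unknown_outcomes)
    value_uninformed = max(known_value, mean_unknown)
    # Acting with information: observe the uncertain value, then choose.
    value_informed = sum(p * max(known_value, v) for p, v in unknown_outcomes)
    return value_informed - value_uninformed

# Exploring is worth something only when the unknown option might beat
# the known one: here, (0.5*0.5 + 0.5*1.0) - max(0.5, 0.5) = 0.25.
gain = expected_value_of_information(known_value=0.5,
                                     unknown_outcomes=[(0.5, 0.0), (0.5, 1.0)])
```

When the known option dominates every possible outcome of the unknown one, the gain is zero, so information seeking earns no value and exploitation is rational.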
We describe two areas of future research. First, creative insights can emerge from expanded planning horizons. Planning is commonly implemented as search over a decision tree, where expanding the horizon entails a deeper search. When the internal representation of the causal structure of the environment is accurate, longer planning horizons are useful. However, when the representation is incomplete, a shorter planning horizon compresses the policy space and prevents overfitting to past observations (Jiang et al., 2015). Humans can search over more complex structures in knowledge representations (Yoo et al., 2023). Such knowledge may be more modular and compressible, allowing for grouped representations of a more diverse chain of actions (Lai & Gershman, 2021; Momennejad, 2020; Patankar et al., 2023; Schapiro, Rogers, Cordova, Turk-Browne, & Botvinick, 2013; Stachenfeld, Botvinick, & Gershman, 2017). The ability to use more complex knowledge structures may involve a spatial-like ability to navigate those structures (Rmus, Ritz, Hunter, Bornstein, & Shenhav, 2022), as well as a metacognitive ability to balance knowledge uncertainty with deeper planning (Schulz & Bonawitz, 2007; Wade & Kidd, 2019; Nussenbaum et al., 2023).
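The effect of horizon depth on tree search can be sketched on a toy deterministic model, which we assume here to be accurate; the state names and reward values are our own illustrative choices. A shallow horizon settles for a small familiar reward, while a deeper one discovers a delayed, larger reward.

```python
def plan_value(model, state, horizon):
    """Depth-limited tree search: the value of `state` looking `horizon`
    steps ahead under an internal model.

    `model[state]` maps each action to a (reward, next_state) pair; this
    deterministic format is an illustrative assumption, not a general MDP.
    """
    if horizon == 0:
        return 0.0
    return max(reward + plan_value(model, nxt, horizon - 1)
               for reward, nxt in model[state].values())

# A toy chain where a distant large reward is visible only to deeper planners.
model = {
    "start": {"stay": (0.1, "start"), "go": (0.0, "mid")},
    "mid":   {"stay": (0.0, "mid"),   "go": (1.0, "start")},
}
shallow = plan_value(model, "start", horizon=1)  # 0.1: small reward wins
deep = plan_value(model, "start", horizon=2)     # 1.0: delayed reward found
```

The same mechanism cuts the other way when the model is wrong: deeper search then compounds the model's errors, which is the sense in which a shorter horizon regularizes planning (Jiang et al., 2015).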
Indeed, a form of mental navigation that spans diverse spaces has been proposed to be linked with both creativity and curiosity (Aru, Drüke, Pikamäe, & Larkum, 2023; Eysenbach, Gupta, Ibarz, & Levine, 2018; Zhou et al., 2023). Although such diversity and depth can decrease knowledge uncertainty, they come at the cost of the time and computational resources needed to accrue and update information. This computational cost motivates the second direction of research.
Second, creatively recombining knowledge benefits from unlearning or updating outdated knowledge. This form of creativity complements a type of curiosity characterized by deconstructing and rebuilding current structures (Zurn, 2021). When an agent seizes on a supposedly optimal choice that is actually suboptimal, future resources must be spent unlearning those experiences (Fox et al., 2015), a problem that deprivation curiosity can exacerbate (Kruglanski & Webster, 2018; Zedelius, Gross, & Schooler, 2022). One solution is to aim for simpler, compressed policies by chunking actions (Lai & Gershman, 2021). Compression smartly discards some information in order to redescribe the remainder efficiently, such as describing an elephant and a chicken with one joint description rather than describing each alone (Cover & Thomas, 1991; Mack, Preston, & Love, 2020). To modulate the planning horizon, policies could be compressed to increase certainty, albeit over an impoverished model. This idea is related to strategically decomposing, aggregating, and reducing sequences of actions into a hierarchy of "options" (Botvinick, Niv, & Barto, 2009; Sutton, Precup, & Singh, 1999) to balance the growing cost of planning (Botvinick, 2012; Correa, Ho, Callaway, Daw, & Griffiths, 2023). It also relates to a computational form of curiosity that involves improving the prediction of expected long-term value (Gruber & Ranganath, 2019; Schmidhuber, 2008).
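A minimal way to see how chunking compresses a policy is to group repeated primitive actions into (action, count) chunks. This run-length grouping is only a stand-in of ours for the far richer hierarchical options of Sutton et al. (1999), but it shows the trade: a shorter description at the cost of a coarser action vocabulary.

```python
from itertools import groupby

def chunk_policy(actions):
    """Compress an action sequence by grouping consecutive repeats into
    (action, count) chunks -- a run-length stand-in for hierarchical
    "options," used only to illustrate policy compression.
    """
    return [(a, len(list(run))) for a, run in groupby(actions)]

plan = ["left"] * 4 + ["up"] * 3 + ["left"] * 2
chunks = chunk_policy(plan)
# chunks == [("left", 4), ("up", 3), ("left", 2)]: nine primitive
# actions redescribed by three chunks.
```

A planner operating over the three chunks searches a much smaller space than one operating over the nine primitive actions, at the price of being unable to interrupt a chunk midway.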
Prediction is related to compression because the best compressor is the true data-generating model, and the true data-generating model is the most predictive (Shannon, 1948). Notably, neural activity has been measured to be most compressed in the default-mode network (Mack et al., 2020; Zhou et al., 2022), a network of regions central to the proposed novelty-seeking model. Default-mode activity is also associated with the simulation of hypothetical episodes (Schacter & Addis, 2007) and the replay of episodic memories (Schapiro, McDevitt, Rogers, Mednick, & Norman, 2018), which can help an agent plan or update actions from new experiences (Kauvar, Doyle, Zhou, & Haber, 2023; Wilson et al., 2020).
In conclusion, curiosity can be thought of computationally as the actions taken to justify expanding one's planning horizon. The consequent cost of increased complexity can be managed by creatively compressing action policies, which further supports the pursuit of long-term goals.
Financial support
D. Z. acknowledges funding from the George E. Hewitt Foundation for Medical Research. A. M. B. acknowledges funding from NINDS R01NS119468 (PI: E.R. Chrastil) and NIMH R01MH128306 (PI: M.A. Yassa).
Competing interests
None.