The nervous system consists not only of neurons but also of other cell types, such as glial cells, which can be modelled using the same principles as neurons. The extracellular space (ECS) contains ions and molecules that affect the activity of both neurons and glial cells, as does the transport of signalling molecules, oxygen and cell nutrients through the irregular ECS landscape. This chapter shows how to model such influences using a formalism that combines diffusion and electrical drift. This formalism also explains the formation of dense, nanometre-thick ion layers around membranes (Debye layers). When ion transport in the ECS stems from electrical drift alone, the formalism reduces to volume conductor theory, which is commonly used to model electrical potentials around cells in the ECS. Finally, the chapter outlines how to model ionic and molecular dynamics not only in the ECS, but across the entire brain tissue comprising neurons, glial cells and blood vessels.
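As a pointer to the underlying physics (standard electrodiffusion theory, not notation quoted from the chapter), the flux of an ion species combines a diffusive term and an electrical drift term via the Nernst–Planck equation:

```latex
% Nernst--Planck flux for ion species k: diffusion plus electrical drift.
% Standard symbols: D_k diffusion constant, c_k concentration, z_k valence,
% e elementary charge, \phi electric potential, k_B T thermal energy.
\[
  \mathbf{j}_k = -D_k \nabla c_k \;-\; \frac{D_k z_k e}{k_B T}\, c_k \nabla\phi .
\]
% With \nabla c_k = 0 (drift only), summing z_k e \mathbf{j}_k over species
% yields an Ohmic current density, recovering volume conductor theory.
```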
In this book, we have aimed to explain the principles of computational neuroscience by showing how the underlying mechanisms are modelled, together with critical accounts of examples of their use. In some chapters, we have placed the modelling work described in its historical context where we felt this would be interesting and useful. We now make some brief comments about where the field of computational neuroscience came from and where it might be going.
Candidate models for how neurons or networks operate must be validated against experimental data. For this, it is necessary to have a good model of the measurement itself. For example, to compare model predictions from cortical networks with electrical signals recorded by electrodes placed on the cortical surface or the scalp, the so-called volume conductor theory is required to make a proper quantitative link between network activity and the measured signals. Here we describe the physics and modelling of electric, magnetic and other measurements of brain activity. The physical principles behind electric and magnetic stimulation of brain tissue are the same as those governing electric and magnetic measurements, and are also outlined.
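As an illustration of what volume conductor theory provides (a standard textbook result, not a formula taken from this chapter), the extracellular potential generated by a set of compartmental membrane currents in an infinite homogeneous medium is:

```latex
% Point-source forward model: potential at position r from N compartments
% with membrane currents I_n(t) located at positions r_n, in a homogeneous,
% isotropic volume conductor of conductivity \sigma.
\[
  \phi(\mathbf{r}, t) = \frac{1}{4\pi\sigma}
      \sum_{n=1}^{N} \frac{I_n(t)}{\lvert \mathbf{r} - \mathbf{r}_n \rvert}.
\]
```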
This book is about how to construct and use computational models of specific parts of the nervous system, such as a neuron, a part of a neuron or a network of neurons, as well as their measurable signals. It is designed to be read by people from a wide range of backgrounds from the neurobiological, physical and computational sciences. The word ‘model’ can mean different things in different disciplines, and even researchers in the same field may disagree on the nuances of its meaning. For example, to biologists, this term can mean ‘animal model’. In particle physics, the ‘standard model’ is a step towards a complete theory of fundamental particles and interactions. We therefore attempt to clarify what we mean by modelling and computational models in the context of neuroscience. We discuss what might be called the philosophy of modelling: general issues in computational modelling that recur throughout the book.
Plasticity in the nervous system describes its ability to adapt to change, whether in response to new information, fluctuations in the internal environment or external injury. In each case, computational models at different levels of detail are required. Given that memory traces are stored in modifiable synapses, modelling the storage and retrieval of information requires models of the modifiable synapse and of a network of neurons. We discuss the processing ability of the network as a whole, given a particular, less detailed model of the mechanism for synaptic modification. Neurons also exhibit homeostatic plasticity: the ability to maintain their firing activity in response to a fluctuating environment. This can involve modulation of intrinsic membrane currents as well as synaptic plasticity, and it must work in concert with synaptic plasticity for learning and memory, enabling neural networks to retain and recall stored information whilst remaining responsive to new information.
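A minimal sketch of this interplay pairs a Hebbian weight update with multiplicative homeostatic scaling in a single rate-based unit; all names and parameter values below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Illustrative sketch: Hebbian plasticity plus homeostatic synaptic scaling
# in one rate-based unit. Parameter values are arbitrary assumptions.
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, size=50)            # synaptic weights
eta, target_rate, tau_h = 0.01, 5.0, 100.0    # learning rate, target, scaling time

for step in range(1000):
    x = rng.poisson(2.0, size=50).astype(float)  # presynaptic firing rates
    y = w @ x / len(w)                           # postsynaptic firing rate
    w += eta * y * x                             # Hebbian potentiation
    # Homeostatic scaling: multiplicatively nudge all weights so that the
    # unit's output rate relaxes towards its target.
    w *= 1.0 + (target_rate - y) / (tau_h * target_rate)
    w = np.clip(w, 0.0, None)                    # keep weights non-negative
```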
In this chapter, a range of models with less detail than those in previous chapters is considered. These simplified neuron models are particularly useful for incorporating into networks, as they are computationally more efficient and can sometimes be analysed mathematically. Reduced compartmental models can be derived from large compartmental models by lumping compartments together. Additionally, the number of gating variables can be reduced whilst retaining much of a model's dynamical flavour. These approaches make it easier to analyse the function of a model using the mathematics of dynamical systems. In the yet simpler integrate-and-fire model, first introduced earlier in the book and elaborated on in this chapter, there are no gating variables; action potentials are produced when the membrane potential crosses a threshold. At the simplest end of the spectrum, rate-based models communicate via firing rates rather than individual spikes.
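A minimal leaky integrate-and-fire sketch illustrates threshold-triggered spiking without gating variables; the parameter values are illustrative assumptions, not quoted from the chapter:

```python
# Minimal leaky integrate-and-fire sketch (illustrative parameter values).
dt, tau_m = 0.1, 10.0                             # time step, membrane time constant (ms)
v_rest, v_reset, v_thresh = -70.0, -75.0, -55.0   # resting, reset, threshold (mV)
r_m, i_ext = 10.0, 2.0                            # input resistance (MOhm), current (nA)

v, spikes = v_rest, []
for step in range(int(1000 / dt)):                # 1 s of simulated time
    # Forward-Euler step of dv/dt = (-(v - v_rest) + r_m * i_ext) / tau_m
    v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
    if v >= v_thresh:                             # threshold crossing -> spike
        spikes.append(step * dt)
        v = v_reset                               # reset; no explicit AP shape
print(f"{len(spikes)} spikes in 1 s")
```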
The membrane potential of a neuron varies widely across its spatial extent. The membrane may have spatially distinct distributions of ion channels, and synaptic inputs arrive at different dendritic locations and propagate to the cell body. The membrane potential also varies along axons as the action potential propagates. We therefore need neuron models that include spatial, as well as temporal, dimensions. The most common approach is compartmental modelling, in which the spatial extent of a neuron is approximated by a series of small compartments, each assumed to be isopotential. In limited cases of simple neuron geometry, analytical solutions for the membrane potential at any point along a neuron can be obtained through cable theory. We describe both modelling approaches here. Two case studies demonstrate the power of compartmental modelling: (1) action potential propagation along axons; and (2) synaptic signal integration in pyramidal cell dendrites.
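For orientation, the passive cable equation underlying both approaches, in standard notation (not quoted from the chapter):

```latex
% Passive cable equation: V is the membrane potential relative to rest,
% \lambda the length constant and \tau = R_m C_m the membrane time constant.
\[
  \lambda^2 \frac{\partial^2 V}{\partial x^2}
    = \tau \frac{\partial V}{\partial t} + V,
  \qquad
  \lambda = \sqrt{\frac{d\, R_m}{4 R_a}} ,
\]
% where d is the cable diameter, R_m the specific membrane resistance and
% R_a the axial resistivity. Compartmental models discretise this equation
% in space, one isopotential compartment per grid point.
```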
This chapter presents the first quantitative model of active membrane properties, the Hodgkin–Huxley model, which was used to calculate the form of action potentials in the squid giant axon. Our step-by-step account of the construction of the model shows how Hodgkin and Huxley used the voltage clamp method to produce the experimental data required to construct mathematical descriptions of how the sodium, potassium and leak currents depend on the membrane potential. Simulations of the model produce action potentials similar to experimentally recorded ones and account for the threshold and refractory effects observed experimentally. Whilst subsequent experiments have uncovered limitations in the Hodgkin–Huxley model's descriptions of the currents carried by different ions, the Hodgkin–Huxley formalism remains a useful and popular technique for modelling channel types.
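The core of the model is the well-known current-balance equation with voltage-dependent gating variables, reproduced here in its standard form for orientation:

```latex
% Hodgkin--Huxley membrane equation: sodium, potassium and leak currents.
\[
  C_m \frac{dV}{dt}
    = -\bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}})
      - \bar{g}_{\mathrm{K}}\, n^4 (V - E_{\mathrm{K}})
      - g_L (V - E_L) + I ,
\]
% with each gating variable x \in \{m, h, n\} obeying first-order kinetics
% dx/dt = \alpha_x(V)(1 - x) - \beta_x(V)\, x, the rate functions
% \alpha_x and \beta_x being fitted to voltage clamp data.
```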
There are many types of active ion channel beyond the sodium and potassium voltage-gated channels of the squid giant axon studied in the previous chapter, including channels gated by ligands such as calcium. This chapter presents methods for modelling the kinetics of any voltage-gated or ligand-gated ion channel. Hodgkin and Huxley's formulation of independent gating particles can be extended to describe many types of ion channel. This formulation is the foundation for thermodynamic models, which provide functional forms for the rate coefficients derived from basic physical principles. To improve on the fits to data offered by models with independent gating particles, the more flexible Markov models are introduced. When and how to interpret kinetic schemes probabilistically to model the stochastic behaviour of single ion channels is considered. Experimental techniques for characterising channels are outlined, and an overview of the biophysics of channels relevant to modelling is given.
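As a toy illustration of the probabilistic interpretation of a kinetic scheme, the sketch below simulates a single two-state channel C ⇌ O under voltage clamp; the rate expressions are illustrative thermodynamic-style forms, not fits to any real channel:

```python
import numpy as np

# Illustrative sketch: stochastic simulation of one two-state channel
# C <-> O with voltage-dependent rates alpha(V) and beta(V).
rng = np.random.default_rng(1)

def alpha(v):   # opening rate (1/ms); rises with depolarisation (assumed form)
    return 0.1 * np.exp(v / 20.0)

def beta(v):    # closing rate (1/ms); falls with depolarisation (assumed form)
    return 0.1 * np.exp(-v / 20.0)

dt, v = 0.01, 10.0                      # time step (ms), clamped potential (mV)
open_state, open_time = False, 0.0
for _ in range(int(100 / dt)):          # 100 ms at fixed voltage
    # Transition probability in one small time step: rate * dt
    p = (beta(v) if open_state else alpha(v)) * dt
    if rng.random() < p:
        open_state = not open_state
    open_time += dt if open_state else 0.0
print(f"open fraction ~ {open_time / 100:.2f}, "
      f"steady state = {alpha(v) / (alpha(v) + beta(v)):.2f}")
```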
Presenting the fundamental algorithms and data structures that power bioinformatics workflows, this book covers a range of topics from the foundations of sequence analysis (alignments and hidden Markov models) to classical index structures (k-mer indexes, suffix arrays, and suffix trees), Burrows–Wheeler indexes, graph algorithms, network flows, and a number of advanced omics applications. The chapters feature numerous examples, algorithm visualizations, and exercises, providing graduate students, researchers, and practitioners with a powerful algorithmic toolkit for the applications of high-throughput sequencing. An accompanying website (www.genome-scale.info) offers supporting teaching material. The second edition strengthens the toolkit by covering minimizers and other advanced data structures and their use in emerging pangenomics approaches.
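As a small taste of the index structures covered, here is a minimal Python sketch of a k-mer index supporting exact pattern matching; the code is illustrative and not taken from the book:

```python
from collections import defaultdict

# Minimal k-mer index: map every k-mer to its start positions in the text,
# then answer exact-match queries of length >= k by verifying candidates.
def build_kmer_index(text: str, k: int) -> dict:
    index = defaultdict(list)
    for i in range(len(text) - k + 1):
        index[text[i:i + k]].append(i)
    return index

def find(text: str, index: dict, k: int, pattern: str) -> list:
    # Candidate positions come from the pattern's first k-mer; verify each.
    return [i for i in index.get(pattern[:k], [])
            if text.startswith(pattern, i)]

genome = "ACGTACGTGACG"
idx = build_kmer_index(genome, k=3)
print(find(genome, idx, 3, "ACGT"))   # -> [0, 4]
```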
Edited by
Xiuzhen Huang, Cedars-Sinai Medical Center, Los Angeles; Jason H. Moore, Cedars-Sinai Medical Center, Los Angeles; Yu Zhang, Trinity University, Texas
This chapter explores the ethical status of a presumed artificially intelligent machine (AI) and the grounds for such status from the perspective of ethical philosophy. The chapter explicitly repudiates anthropocentric claims about an AI and assumptions about its capabilities or the motivations for its existence or actions. It also avoids speculation about whether or when such AI capabilities may come about. Instead, it provides a framework for building the vocabulary and concepts needed to analyse important ethical issues for AI from first principles. The philosophical foundations of ethics considered here are nihilism, divine command, consequentialism, deontology, and virtue ethics.
Bioinformatics is one of the fastest-growing fields of the twenty-first century. Over the last few decades, studies of biology have moved from low-throughput, hands-on experiments to computational analyses of the increasingly complex tree of life. Alongside this change come multiple challenges. The first lies in interdisciplinary collaboration: the current collaboration model is still bounded by individual disciplines and is far from seamless. The second is big data: how do we extract useful information from the haystack of big data? The third is human infrastructure: we need to educate the next generation of scientists, as early as possible, to solve interdisciplinary and complex biology problems with computational resources. To address these challenges, we propose a No-Boundary Thinking approach to teaching the next generation of scientists, and we present three No-Boundary Thinking teaching and research models to explain it. All of them are embedded in undergraduate computer science curricula.
Pharmacogenomics is the study of genetic factors that influence drug response. It combines pharmacology and genomics to identify genetic predictors of variability in drug response that can be used to maximize drug efficacy while minimizing drug toxicity, tailoring drug therapy to individual patients and thus improving patient care and reducing healthcare costs. In this chapter we review the current state of pharmacogenomic research and clinical practice. Recent research, methods, and resources for pharmacogenomics are reviewed in detail. We discuss the advantages of and challenges in pharmacogenomic studies. We elaborate on the barriers to clinical translation of pharmacogenetic discoveries and the efforts of various institutions and consortia to mitigate these barriers. We also discuss applications and clinical translation of pharmacogenomic research moving forward, along with the social, ethical, and economic issues that require attention. We conclude by previewing how the use of big data, multi-omics data, advanced computing technology, and statistical methods by scientists working across disciplinary boundaries, together with the efforts of government organizations, clinicians, and patients, could lead to successful and clinically translatable pharmacogenomic discoveries, ushering in an era of precision medicine.
The goal of this chapter is to explore and review the role of artificial intelligence (AI) in scientific discovery from data. Specifically, we present AI as a useful tool for advancing a No-Boundary Thinking (NBT) approach to bioinformatics and biomedical informatics. NBT is an agnostic methodology for scientific discovery and education that accesses, integrates, and synthesizes data, information, and knowledge from all disciplines to define important problems, leading to innovative and significant questions that can subsequently be addressed by individuals or collaborative teams with diverse expertise. Given this definition, AI is uniquely poised to advance NBT as it has the potential to employ data science for discovery by using information and knowledge from multiple disciplines. We present three recent AI approaches to data analysis that each contribute to a foundation for an NBT research strategy by either incorporating expert knowledge, automating machine learning, or both. We end with a vision for fully automating the discovery process while embracing NBT.