About ten years ago, reports of lab-grown “mini brains” or “brains in a dish” appeared in the media, falling somewhere between the curious and the alarming. These reports were triggered by a new method for growing three-dimensional neural tissue from human stem cells that recapitulates, to some degree, the early development of brain tissue. Despite their relatively small size and other limitations, such model systems capture in part the structure and functions of regions of the human brain and can also be combined to form so-called assembloids.
Traditional methods for studying both development and diseases of the human brain have relied mostly on animal models and, insofar as humans could be used, brain tissue from the dead and brain imaging of the living. All of these methods have well-known limitations: Animal brains, in particular rodent brains, differ significantly from human brains; postmortem human brain tissue is simply dead; and there are both technical and ethical limits to what sort of experiments can be conducted using brain imaging, not to mention invasive brain research. Two-dimensional neural cell cultures certainly have their uses and may, in some cases, be sufficient or even have advantages over cerebral organoids, but they lack, above all, the three-dimensional structural features of organoids. Cerebral organoids have enormous potential for research on brain disorders, as well as brain development.
Although some researchers have used the terms “mini brain” and “brain in a dish” to refer to brain organoids, there are concerns—expressed within the scientific community as well—that these terms wrongly suggest that these entities are miniature versions of the complete brain. The appeal of these terms is that they are simple and media-friendly, and arguably more useful for raising public interest in organoid research than the technical and opaque term “organoid.” In academic contexts, the term “organoid” has been preferred, although it also suggests a miniature organ. The first reports of three-dimensional brain tissue grown from stem cells used the terms “neural cell balls,” “brain balls,” or simply “balls,” but these terms have not caught on. That is not the end of the terminological difficulties, though. Whatever the context, there is no consensus on which term to pair with “organoids”: are they brain, cerebral, or neural? The difference may seem inconsequential, and indeed these terms are sometimes used interchangeably, but there are reasonable people who have a strong preference for one term over the others. Here we use “cerebral organoids” because its usage is considerably more common than “neural organoids.”
Research on and use of cerebral organoids raise a host of ethical issues ranging from traditional research ethics questions, including informed consent, biobanking, and data protection, to issues of clinical translation concerning, for example, the uncertain evidence base for first-in-human trials or the risk of damaging a patient’s brain or altering that patient’s personality with organoid transplants. The long-standing debate about the sources of cells for stem cell and organoid research and questions concerning the moral status of cerebral organoids and neural chimeras feature prominently in the ethical debate on cerebral organoids as well.
The alarming aspect of the terms “mini brain” and “brain in a dish” was the misleading implication that complete brains (in miniature) were being grown, brains that might be experiencing the horror of being conscious while completely cut off from the external world. The immediate ethical issue here, however, is not so much the potential horror of a locked-in consciousness as it is responsible and effective science communication. Nonetheless, the ethical issue that has received the most attention is the possibility that cerebral organoids might indeed develop sentience, that is, the ability to experience pleasure and pain, or that they might even develop some degree or form of consciousness. Early on it was pointed out that without sensory input and, perhaps, motor output, there could be no sentience, let alone consciousness. Since then, cerebral organoids have been grown that respond to light, they have been connected with muscle, and they have even been taught to play the simple video game Pong. That is considerable interaction with the external world. Being able to interact with the external world may be a necessary condition for the emergence of consciousness, but it is not sufficient. There is no indication that cerebral organoids currently have the size or complexity to develop sentience or consciousness, and clearly they have never been part of a complete, living being capable of having the social experiences and history that some argue are necessary for the development of anything approaching the human sort of consciousness.
If it turned out at some point that cerebral organoids had developed sentience or consciousness, the ethical difficulty might appear not to be insurmountable (if we ignore how we got there). It seems that the sentient cerebral organoid would deserve at least the ethical protection of nonhuman animals used in research, and the conscious cerebral organoid would deserve the ethical protection of vulnerable human research subjects—which would in the latter case exclude most if not all research uses of such cerebral organoids, and arguably prohibit their creation in the first place. The details of these ethical parallels, including their relevance, implications, and limitations, would still need to be worked out.
These simple parallels break down quickly in the case of intermediate forms of consciousness, as opposed to full human-like consciousness. In her contribution to this symposium, Karola Kreitmair draws out the complexities in the relationship between consciousness, moral status, and adequate research protection. She argues that we may not be able to know enough about the relevant aspects of organoid consciousness to determine such organoids’ moral status and, consequently, the research protections they are due. In the case of research into brain disorders, however, the very features needed to make the model serve its purpose—for example, susceptibility to stress—might make its use appear or become ethically problematic, as Katherine Bassil and Dorothee Horstkötter discuss in their case study of research on stress-related mechanisms and disorders using cerebral organoids and chimeras.
At the center of the ethical issue concerning the potential consciousness of cerebral organoids is the question of how we could know whether an organoid has developed consciousness in the first place. How could we possibly detect or measure consciousness? Alex McKeown argues that research on cerebral organoids faces a dilemma: scientists have ethical reasons to refrain from investigating the development of consciousness in cerebral organoids (by intentionally producing conscious organoids), but at the same time the lack of knowledge about the development of consciousness in cerebral organoids may lead them to create conscious entities accidentally and thereby cause suffering and harm. The problem of how to detect consciousness in nonbehavioral entities is aggravated by the absence of any agreement on the meaning of consciousness. How can we find consciousness when we do not even agree on what we are looking for, even in healthy human adults, let alone in novel beings in the laboratory? Increasing the size and complexity of cerebral organoids, connecting different cerebral organoids in assembloids, implanting cerebral organoids in animal brains, and creating “organoid intelligence” systems may all lead to situations in which sentience or consciousness emerges in some form. We have to prepare for the risk that consciousness may emerge in cerebral organoids at some point, while not forgetting that the question of consciousness is not the only relevant ethical issue.
Part of the appeal of cerebral organoids is that they may replace animals for some research uses. This appeal is both scientific and ethical. In terms of science, the use of cerebral organoids may produce results that are more applicable to humans than results from animal models. In terms of ethics, the suffering of the research animal is avoided through the replacement with cerebral organoids. There are, however, concerns that the use of organoids will add to, rather than replace, the use of animal models, or even require additional, parallel research on animals.
Cerebral organoids have been implanted in animals, for example rats, where they integrated unexpectedly well into the host’s brain and even affected its behavior. This raises issues both about conferring higher, even in some sense human-like, cognitive abilities on an animal and about animal welfare. There are ethical worries about humanizing the animal brain, even if this does not go so far as creating human-like consciousness. The growing use of neural chimeras in organoid research also raises the question of whether traditional frameworks of animal ethics are suitable for addressing the related ethical issues. Andrew J. Barnhart and Kris Dierickx explore this question by applying the Six Principles framework to two recent case studies involving xenotransplantation of cerebral organoids.
Cerebral organoids, neural chimeras, and interfaces of brain organoids with computer technology lead us to explore novel beings and their potential forms of consciousness, which in turn raise fascinating epistemological and ethical issues, including questions about moral status and legal protections. At the same time, there are disagreements about the moral and legal protection of nonhuman animals that do have a moral status. In his discussion of the case of Happy the elephant, whose moral status was acknowledged by the same court that denied Happy legal protection because of the elephant’s legal status as property, Joshua Jowitt calls for regulatory frameworks for the governance of research on cerebral organoids that avoid similar contradictions and rule out unethical practices should such organoids pass the threshold of consciousness.
Acknowledgements
The authors gratefully acknowledge the support of the German Federal Ministry of Education and Research (BMBF, project number 01GP2183), the Dr. Kurt und Irmgard Meister-Stiftung, and the Hans Gottschalk-Stiftung.
Competing interest
The authors declare that they have no conflict of interest.