Published online by Cambridge University Press: 10 September 2018
Brain–computer interfaces (BCIs) are essentially driven by algorithms; however, the ethical role of these algorithms has so far been neglected in the ethical assessment of BCIs. The goal of this article is therefore twofold. First, it examines whether (and how) the problems related to the ethics of BCIs (e.g., responsibility) can be better grasped with the help of existing work on the ethics of algorithms. Second, it explores what kinds of solutions are available in that body of scholarship, and how these solutions relate to some of the ethical questions surrounding BCIs. In short, the article asks what lessons can be learned about the ethics of BCIs by looking at the ethics of algorithms. To achieve these goals, the article proceeds as follows. First, it gives a brief introduction to the algorithmic background of BCIs. Second, it sketches the debate about epistemic concerns in the ethics of algorithms. Finally, it transfers this debate to the ethics of BCIs.
The authors thank Mary Clare O’Donnell for her valuable support in preparing the manuscript.
1. Burwell, S, Sample, M, Racine, E. Ethical aspects of brain computer interfaces: A scoping review. BMC Medical Ethics. 2017;18(60):1–11.
2. Grübler, G, Hildt, E, eds. Brain–Computer Interfaces in Their Ethical, Social and Cultural Contexts. Dordrecht: Springer; 2014.
3. Tamburrini, G. Brain to computer communication: Ethical perspectives on interaction models. Neuroethics. 2009;2(3):137–49.
4. See note 1, Burwell et al. 2017.
5. Clausen, J. Man, machine and in between. Nature. 2009;457(7233):1080–1.
6. Grübler, G. Beyond the responsibility gap. Discussion note on responsibility and liability in the use of brain–computer interfaces. AI & Society. 2011;26:377–82.
7. Grübler, G. Shared control—shared responsibility? International Journal of Bioelectromagnetism. 2011;13(1):56–7.
8. Haselager, P. Did I do that? Brain–computer interfacing and the sense of agency. Minds & Machines. 2013;23(3):405–18.
9. Matthias, A. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology. 2004;6(3):175–83.
10. Holm, S, Voo, TC. Brain–machine interfaces and personal responsibility for action – maybe not as complicated after all. Studies in Ethics, Law, and Technology. 2010;4(3):7.
11. Lucivero, F, Tamburrini, G. Ethical monitoring of brain–machine interfaces. AI & Society. 2008;22(3):449–60.
12. Mittelstadt, BD, Allo, P, Taddeo, M, Wachter, S, Floridi, L. The ethics of algorithms: Mapping the debate. Big Data & Society. 2016;3(2):1–21.
13. Hill, RK. What an algorithm is. Philosophy & Technology. 2016;29(1):35–59.
14. The article can thus be considered as realizing the idea that “[d]esigning imprecise regulation that treats decision-making algorithms, AI, and robotics separately is dangerous,” and that “[c]oncerns about fairness, transparency, interpretability, and accountability are equivalent, have the same genesis, and must be addressed together, regardless of the mix of hardware, software, and data involved”; see note 12, Mittelstadt et al. 2016.
15. See note 8, Haselager 2013.
16. McFarland, DJ, Wolpaw, JR. EEG-based brain–computer interfaces. Current Opinion in Biomedical Engineering. 2017;4:194–200.
17. Lotte, F, Congedo, M, Lécuyer, A, Lamarche, F, Arnaldi, B. A review of classification algorithms for EEG-based brain–computer interfaces. Journal of Neural Engineering. 2007;4(2):R1–R13.
18. Bauer, W, Vukelic, M. Forschungsprojekt EMOIO. In: Neugebauer, R, ed. Digitalisierung Schlüsseltechnologien für Wirtschaft & Gesellschaft. Berlin & Heidelberg: Springer Verlag; 2018:135–51.
19. See note 16, McFarland, Wolpaw 2017.
20. See note 16, McFarland, Wolpaw 2017; note 18, Bauer, Vukelic 2018.
21. See note 8, Haselager 2013.
22. Yuste, R, Goering, S, Agüera y Arcas, B, Bi, G, Carmena, JM, Carter, A, et al. Four ethical priorities for neurotechnologies and AI. Nature. 2017;551(7679):159–63.
23. See note 12, Mittelstadt et al. 2016.
24. See note 12, Mittelstadt et al. 2016.
25. See note 12, Mittelstadt et al. 2016.
26. O’Neil, C. Weapons of Math Destruction. How Big Data Increases Inequality and Threatens Democracy. New York: Crown; 2016.
27. See note 12, Mittelstadt et al. 2016.
28. van Wel, L, Royakkers, L. Ethical issues in web data mining. Ethics and Information Technology. 2004;6(2):129–40.
29. Eubanks, V. Automating Inequality. How High-Tech Tools Profile, Police, and Punish the Poor. New York: St. Martin’s Press; 2017.
30. Kroll, J, Huey, J, Barocas, S, Felten, EW, Reidenberg, JW, Robinson, DG, et al. Accountable algorithms. University of Pennsylvania Law Review. 2017;165:633–705.
31. de Laat, PB. Algorithmic decision-making based on machine learning from Big Data: Can transparency restore accountability? Philosophy & Technology. 2017.
32. Sandvig C, Hamilton K, Karahalios K, Langbort C. Auditing algorithms: Research methods for detecting discrimination on Internet platforms. Presented at Data and Discrimination: Converting Critical Concerns into Productive Inquiry, a preconference at the 64th Annual Meeting of the International Communication Association, May 22, 2014, Seattle, WA.
33. Tutt, A. An FDA for algorithms. Administrative Law Review. 2017;69(1):83–123.
34. Burrell, J. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society. 2016;3(1):1–12.
35. Lepri, B, Oliver, N, Letouze, E, Pentland, A, Vinck, P. Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology. 2017 [epub ahead of print].
36. See note 35, Lepri et al. 2017.
37. Romei, A, Ruggieri, S. A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review. 2014;29(5):582–638.
38. See note 12, Mittelstadt et al. 2016.
39. Malle, BF. Moral competence in robots? In: Seibt, J, Hakli, R, Norskov, M, eds. Sociable Robots and the Future of Social Relations: Proceedings of Robo-Philosophy 2014. Amsterdam: IOS Press; 2014:189–98.
40. Malle, BF. Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology. 2016;18(4):243–56.
41. Anderson, M, Anderson, SL. Machine ethics: Creating an ethical intelligent agent. AI Magazine. 2007;28(4):15–26.
42. See note 8, Haselager 2013.
43. Glannon, W. Ethical issues with brain–computer interfaces. Frontiers in Neuroscience. 2014;8:1–3.
44. See note 12, Mittelstadt et al. 2016.
45. See note 18, Bauer, Vukelic 2018.
46. See note 3, Tamburrini 2009.
47. See note 34, Burrell 2016.
48. Beckmann, M, Pies, I. The constitution of responsibility: Toward an ordonomic framework for interpreting (corporate social) responsibility in different social settings. In: Luetge, C, Mukerji, N, eds. Order Ethics: An Ethical Framework for the Social Market Economy. Dordrecht: Springer; 2016:221–50.
49. Scherer, AG, Rasche, A, Palazzo, G, Spicer, A. Managing for political corporate social responsibility: New challenges and directions for PCSR 2.0. Journal of Management Studies. 2016;53(3):273–98.