
Merging man and machine: A legal assessment of brain–computer interfaces in armed conflict

Published online by Cambridge University Press:  31 March 2025

Denise Koecke*
Affiliation:
Law Student and Graduate in International Relations, European University Viadrina, Frankfurt (Oder), Germany

Abstract

Imagine a future where man and machine become one on the battlefield, where soldiers direct weapon systems through a neural implant. Research advances on brain–computer interfaces (BCIs) may eventually allow such control of arms at the speed of thought. This article sketches two modes of BCI-controlled weapon systems. In Mode A (active BCI), the soldier opens fire by actively imagining that he is pushing a button with his hand. By contrast, Mode B (reactive BCI) captures the neural signals evoked instantly after the operator spots a target, before he becomes consciously aware of it. If he deems the target lawful, the brain signal is translated into a command to fire. Arguing that such man–machine collaboration transforms the operating soldier into a means of warfare, this article conducts a weapon review in line with Article 36 of Additional Protocol I (AP I) to answer the question of whether BCIs can be lawfully used to control weapons in international armed conflict. Consequently, the two set-ups are reviewed for their compliance with the customary targeting principles of international humanitarian law. Since Mode B casts doubt on the amount of control that the soldier retains over his targeting decision, the concept of meaningful human control is transposed from the debate on lethal autonomous weapon systems and applied to BCIs. It is found that reactive BCIs cannot be meaningfully controlled and thus violate the principles of distinction and proportionality. Hence, reactive BCIs are unlawful under Article 36 of AP I.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the International Committee of the Red Cross.

Introduction

The idea of merging man and machine is no longer confined to the realms of science fiction. Research advances on brain–computer interfaces (BCIs) have already made it possible for a paralyzed man to drive a Formula One car using only his thoughts,Footnote 1 and will likely engender a paradigm shift in how humans interact with technological gadgets.Footnote 2

BCIs establish a connection between the brain, a computer able to decode its neural signals, and a physical device controlled thereby. Contemporary applications focus mainly on the mental command of prostheses, but such man–machine collaborations are projected to dominate military systems by the early 2030s.Footnote 3 Most notably, the United States’ Defense Advanced Research Projects Agency (DARPA) has declared an interest in their development.Footnote 4 In fact, the US is currently considering export controls on BCIs, fearing an “impact on U.S. national security”.Footnote 5 The technology could serve, inter alia, the purpose of remote weapon control,Footnote 6 effectively merging man and machine to combine their strengths on the battlefield.Footnote 7 One such endeavour has already fathered a US patent,Footnote 8 while the governments of China, Russia and France have equally been reported to be researching and developing BCIs.Footnote 9

While scholars of various disciplines deem BCI-controlled weapons feasible,Footnote 10 few legal academics have contemplated their application in warfare, and none have differentiated between the specific BCI types or conducted a comprehensive analysis of their legality. Considering the technology’s encroachment on the military, this article sets out to fill the gap and determine whether soldiers can lawfully use BCIs to control weapons under international humanitarian law (IHL). For this purpose, a review on legality under Article 36 of Additional Protocol I (AP I) will be the core of the assessment, which is hence restricted to international armed conflict.

The article begins with a brief overview of the types and functioning of BCIs. It then outlines two set-ups for BCI-administered weapon control. In the first, Mode A, the operating soldier consciously identifies and engages targets, whereas in the second, Mode B, the commander selects targets subconsciously. In either scenario, it will be shown that the operating soldier transforms into a means of warfare, as part of a weapon system. This novel status brings the entire man–machine collaboration within the scope of Article 36 of AP I. Subsequently, the two BCIs are evaluated on their adherence to the customary targeting principles of IHL. Mode B in particular, which impedes the exercise of conscious control over targeting decisions, challenges the operator’s ability to ensure compliance with distinction and proportionality. Hence, the concept of meaningful human control (MHC) is borrowed from the debate on lethal autonomous weapon systems to discuss the degree of oversight required for the BCI’s lawfulness. To this end, considerations of precaution and military necessity are drawn upon, before settling on an “on-the-loop standard” of MHC. For Mode B’s analysis, a model is presented which aims to ascertain whether the subconscious BCI operator can predictably ensure that lawful targeting decisions are made. These insights will finally allow a conclusion on the legality of either form of BCI.

Technological framework: Weapon control through BCIs

Overview and categorization of BCIs

Before discussing the legal merits, BCI technology – in its two methods of controlling weapons, as outlined above – needs to be briefly sketched. The human brain operates on neural signals, spikes of electrical activity, which can be captured by a neural implant,Footnote 11 interpreted by a computer and translated into commands that control an external device.Footnote 12 This entire set-up of person and machine is referred to as the brain–computer interface.

BCIs fall into two categories: active and reactive. Active BCIs require the user to intentionally focus on a command, so as to generate “spontaneous signals” that are subsequently deciphered by the computer – for example, as the desire to move one’s hand to the right (“conscious command”).Footnote 13 In contrast, reactive BCIs pick up “evoked signals” which are generated subconsciously – preceding the user’s awareness – in reaction to external stimuli (“subconscious command”).Footnote 14 Commonly, these signals are elicited by rapid serial visual presentation (RSVP), where the individual is shown images at 200- to 500-millisecond intervalsFootnote 15 with the task of spotting a predefined object therein.Footnote 16 The recognition of the object evokes a neural response in the user’s brain that the BCI then captures and may use to launch an operational command.Footnote 17
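To make the mechanism more tangible, the following simplified sketch illustrates an RSVP detection loop of the kind just described. It is purely illustrative: the signal source, threshold value, timing and function names are hypothetical placeholders rather than an actual BCI library or decoding model.

```python
# Purely illustrative sketch of a reactive-BCI RSVP loop; all values and
# function names are hypothetical placeholders, not a real neural-decoding API.
import random
import time

RSVP_INTERVAL_S = 0.3    # images presented at 200-500 millisecond intervals (here 300 ms)
EVOKED_THRESHOLD = 0.7   # hypothetical amplitude indicating recognition of the object


def read_evoked_response() -> float:
    """Stand-in for the neural implant: amplitude of the evoked signal
    recorded shortly after an image is shown."""
    return random.random()


def rsvp_detection(images: list[str]) -> list[str]:
    """Present images via rapid serial visual presentation and return those
    whose evoked response suggests the user spotted the predefined object."""
    recognized = []
    for image in images:
        # here the image would be displayed to the user
        time.sleep(RSVP_INTERVAL_S)
        if read_evoked_response() > EVOKED_THRESHOLD:
            recognized.append(image)  # this signal may be translated into an operational command
    return recognized


if __name__ == "__main__":
    print(rsvp_detection(["frame_01", "frame_02", "frame_03"]))
```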

Military BCIs: Two modes of weapon control

Both active and reactive BCIs can be used to control weapons. This section of the article presents two different set-ups that can be conceived towards this end, informed partially by feasibility studiesFootnote 18 and by discussions among academics of various disciplines. The two modes shall serve as non-exhaustive, illustrative examples of conceivable BCI-directed weapon control.

Mode A: Conscious command through active BCI

In the first scenario, Mode A,Footnote 19 an unmanned ground vehicle (UGV), mounted with a machine gun, is remotely controlled via BCI. The vehicle comes with a 360-degree camera, which the operating soldier can move around to scan the environment for lawful targets.Footnote 20 After he identifies a target, the operator gives an explicit command to open fire by actively imagining that he is pushing a button with his right hand.Footnote 21 The computer recognizes the neural signals generated by the combatant mentally “pressing the button” and authorizes the UGV to strike. While Mode A comes with the benefit of the warfighter’s immersion despite his remoteness from the battlefield, it can only accelerate the target engagement to some extent, since he must still make the conscious effort to launch an attack.Footnote 22

Mode B: Subconscious command through reactive BCI

Mode B works differently.Footnote 23 This set-up takes advantage of the fact that about 250 to 500 milliseconds after visual perception, the soldier’s brain will have already, subconsciously, reacted to the target stimulusFootnote 24 – much faster than any computer could, and before he becomes consciously aware of it.Footnote 25 To make use of these subconsciously evoked signals, the warfighter is presented with images of the UGV’s surroundings via RSVP.Footnote 26 His task remains to identify and shoot lawful targets. Recognizing the neural signals evoked whenever he subconsciously spots such a target, the BCI will instantly instruct the machine gun to fire. In this way, the commander’s consciousness is bypassed in order to attack much more quickly. In contrast to Mode A, where the operator must willingly decide to strike, Mode B does not require his decision but only his brain’s instantaneous reaction. Thus, every delay between target perception and engagement is eliminated. Nonetheless, the soldier remains capable of withdrawing from the BCI control at any point.
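The operative difference between the two modes, namely where the engagement command originates, can be summarized in a brief sketch. Again, this is a hedged illustration under simplified assumptions; the decoding functions and the fire command are invented stubs, not a description of any existing weapon-control interface.

```python
# Hedged illustration of the "moment of command" in Modes A and B.
# All functions are hypothetical stubs, not an actual weapon-control API.
import random


def decode_motor_imagery() -> bool:
    """Mode A stub: True only if the operator actively imagines pressing the button."""
    return random.random() > 0.9


def evoked_target_response() -> bool:
    """Mode B stub: True if an evoked potential indicates that the brain spotted a
    target, roughly 250-500 ms after perception and before conscious awareness."""
    return random.random() > 0.9


def fire(reason: str) -> None:
    print(f"UGV machine gun engages ({reason})")


def mode_a_step() -> None:
    # The operator has observed the scene, identified the target and consciously
    # decided to attack; only the imagined "button press" then triggers fire.
    if decode_motor_imagery():
        fire("conscious command")


def mode_b_step() -> None:
    # The evoked signal itself is the engagement command: no later confirmation
    # step exists in which the operator could still veto the attack.
    if evoked_target_response():
        fire("subconscious command")


if __name__ == "__main__":
    mode_a_step()
    mode_b_step()
```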

While the two modes diverge in their manner of targeting, either BCI creates a secure link between the operator and the UGV, which merits discussion as to whether the operator’s legal status is altered. Consequently, it is necessary to evaluate whether the commanding soldier falls under the scope of Article 36 of AP I as a new weapon or means of warfare.

Legal status of the BCI-controlling soldier

The soldier as a weapon

First, it shall be considered whether the soldier could be deemed to have turned into a weapon. In the absence of consensus, the definition of a weapon as “an offensive capability that can be applied to a military object or enemy combatant” is widely shared.Footnote 27 This wording indicates that a weapon is an instrument, a tool, something that “is applied”. Article 36 of AP I substantiates this by referring to a weapon’s employment as “its” employment. Chircop and Liivoja, explicitly treating the question of whether BCI-enhanced warfighters can be considered weapons, thus propose the definition of “an instrument through which an offensive capability manifests and by which it can be applied”.Footnote 28

Adopting this definition henceforth, it seems questionable to regard the BCI operator as a mere instrument. The soldier’s status boils down to whether his employment of the technology takes him beyond the “tipping point” at which he is transformed to the extent of becoming the weapon instead of simply using the weapon. Considering, by analogy, a gun fused to a combatant’s hand, it seems obvious that the gun remains a weapon and the soldier retains his human status.Footnote 29 The situation is less clear for BCIs. Mehlman, Lin and Abney have argued that there is only one way to avoid this slippery slope: to consider soldiers – enhanced or not – as weapons to begin with.Footnote 30 That proposal has earned them firm criticism,Footnote 31 as it would degrade the human soldier to a mere instrument of combat,Footnote 32 running contrary to the humanitarian spirit of IHL.Footnote 33

These reflections point to an answer on the warfighter’s status in Mode A. The human operator can only be considered a weapon if the BCI fully objectifies him,Footnote 34 but one can hardly argue that this tipping point is reached in Mode A, where the commander preserves full freedom of action – he observes the environment, spots potential targets, determines their status and decides to engage. Just as the soldier is not the gun fused to his hand, he is not the instrument; rather, he uses the BCI to manifest the machine gun’s – the instrument’s – offensive capability.

The reactive BCI, meanwhile, turns the soldier’s subconscious neural response into an instruction to attack. His brain, in this set-up, functions like a processing machine, similar to a computer.Footnote 35 In Kantian terms, one could doubt that the warfighter was still treated as an end in himself rather than merely as a means to an end. Despite his brain’s function, the operator is not fully objectifiedFootnote 36 – not fully deprived of his freedom of action. The soldier remains physically conscious during operations and able to terminate the BCI control at any time. Objects do not possess such agency. Additionally, the operator conducts an assessment that draws from the pool of his human experience: from his intellectual understanding of lawful targeting and his knowledge of combat. Such a task cannot be performed by things. While it is true that the operator deliberately relinquishes conscious command by lending his brain to the target engagement process, the human appropriates the machine for this purpose, not vice versa. The operator uses the weapon (the machine gun); he is not used by it.

The soldier as a means of warfare

Definition of means of warfare and weapon system

Since the soldier is not turned into a weapon, he could alternatively be subsumed under means of warfare. The text of AP I grants only a vague starting point for a definition. By virtue of its construction, Article 35 of AP I suggests that means of warfare refers to “weapons, projectiles and materials”. In fact, while means incorporate weapons, it is sensible to perceive means more broadly in order to avoid redundancy in the text of Article 36.Footnote 37 Accordingly, means of warfare can be defined as “weapon systems or platforms employed for the purposes of attack”, including “associated equipment used directly to deliver force during hostilities”.Footnote 38 For the same reasons as mentioned above, the warfighter does not qualify as “associated equipment”; however, he could be part of a weapon system.

The term “weapon system” is not found explicitly in AP I, but is commonly defined as

the weapon itself and those components required for its operation …[,] limited to those components or technologies having a direct injury or damaging effect on people or property … and all other devices and technologies that are physically destructive or injury producing.Footnote 39

This notion, though, seems under-inclusive as it only relates to weapon-associated inputs delivering harm (e.g. munitions or projectiles). It refers exclusively to the physical impact of the force delivered, not to the source of how force is administered. As such, one would exclude all new manners of weapon control from the duty of review as long as they relied on established physical components executing the damage. Consequently, it would – for the sake of Article 36 of AP I – make no difference if a soldier were to use a hand-held machine gun to aim and fire at targets or if the machine gun were to be mounted on top of a vehicle, where it spins uncontrollably, while the vehicle is blindly remote-controlled. The vehicle itself, the manner of how the machine gun is attached to it and the remote control do not themselves produce injuries – the only component directly delivering harm remains the exact same machine gun bullets. It should be clear that the two scenarios ought to be evaluated very differently under the auspices of distinction and proportionality. The term “weapon system” was crafted precisely to capture additional factors of how a weapon is put to use and to scrutinize the entire set-up of how force within such a system – during normal or expected useFootnote 40 – is delivered.Footnote 41 Such understanding reflects the term “means of warfare” as referring to a process, a technique, a mechanism to administer harm – including both the harm-delivering process and the actual harm.

In this spirit, Schmitt defines a weapon system as “a combination of one or more weapons with all related equipment, materials, services, personnel, and means of delivery and deployment … required for self-sufficiency”.Footnote 42 What this definition has in common with the one cited above is that they both conceptualize a weapon system as a weapon needing additional input to function self-sufficiently – i.e., to produce damaging effects. Schmitt’s understanding, however, considers all of the elements that are required to operate a given weapon as part of the weapon system. This enables us to review every part needed to deliver harm (even if it does not do so eo ipso) holistically as one means of warfare under Article 36 of AP I.

The operator as part of a weapon system

While such inputs have traditionally been technological components, the novel man–machine collaboration presents us with a human inseparably connected to the damaging effect of the UGV’s machine gun. Only by him lending the powers of his brain via the neural implant, and thereby giving the mental engagement command, can the BCI-controlled UGV fulfil its purpose. By comparison, a conventional weapon system is self-sufficient without human input in terms of how it mechanically functions and processes information; all that is needed from the operator is for him to launch the engagement signal (by pressing the button). The BCI, au contraire, relies on the operator to perform duties of transmitting information. Unlike a physical button pressed for target engagement, the functioning of the weapon system relies on the trigger in the operator’s mind. One could argue that its functioning does not actually differ from a conventional weapon system, at least for Mode A: in both cases, the soldier must willingly decide to fire and use his body to give that command. Possibly, this detaches him from the weapon system’s structure and makes him a user instead of part of a weapon system, just as if he were to fire a regular rifle. However, the difference is precisely that the rifle functions self-sufficiently once the mechanical process of striking a target is initiated – i.e., when the soldier pulls the trigger. The BCI, on the other hand, only works because the operator invites the targeting process into his body. For this purpose, the soldier’s brain occupies a function equal to that of a mechanical component in a conventional weapon system, being either the button to engage (Mode A) or both the button and the processing machineFootnote 43 (Mode B). Without him, the offensive capability could not manifest,Footnote 44 placing the BCI operator within the system of how the delivery of force is conceived and administered.Footnote 45 Differently put, if a rifle is taken by the enemy, it can easily be used, by anybody, to fire back at its owner – but when the BCI operator is captured, the man–machine collaboration cannot be used in the same fashion, since its workings depend on the operator’s mental command. With the BCI, as opposed to the rifle, the soldier is – via his neural implant – fused to the delivery of force, without which the weapon system is not self-sufficient. Not just anyone can launch the offensive power; it must be the operator with whom the targeting system maintains a neural connection. The soldier therefore qualifies as part of a weapon system and – as such – as a means of warfare.Footnote 46

The soldier’s combatant status as part of a weapon system

Commander and BCI cannot be viewed separately, since the neural implant creates a permanent bridge between the former and the armed vehicle. However, this does not mean that – by virtue of attaining the status of a means of warfare – the BCI-controlling soldier ceases to be a combatant or, generally speaking, loses his respective status. Though operator and BCI are incapable of delivering harm independently of each other,Footnote 47 the merger between them does not undermine the formerly held status of either the UGV and its gun or the soldier.

While the machine gun remains a weapon, the operator’s status could only dissipate if the set-up reduced him to a weapon – a mere instrument.Footnote 48 This is not the case, as explained above. In contrast, a weapon system comprises multiple, separable parts. It is a weapon plus delimited (human) components connected to its employment or delivery of force. Thus, if the warfighter does not turn into a weapon himself, precisely because he maintains the freedom of action that is preconditional to his status as combatant, one cannot argue that the BCI strips him of his combatant privilege. Therefore, the operator does not lose his status, but takes on the dual role of both combatant and means of warfare.

With regard to the operator’s status, then, it can be concluded that the freedom of action which the soldier preserves in both modes prevents him from becoming an objectified instrument – a weapon. Either mode turns the operator into an indispensable part of a weapon system (comprising him, his neural implant, the computer, the UGV and the machine gun), and as such, he takes on a dual role of combatant and means of warfare. Article 36 of AP I is therefore the relevant provision for assessing the lawfulness of the entire weapon system – the active or reactive BCI. The soldier’s dual status within that system remains relevant and forms the foundation for the following assessment.

Lawfulness of BCI-controlled weapon systems: Article 36 review

Article 36 of AP I obliges the Contracting Parties to review new means of warfare. Such an assessment arguably constitutes customary international law,Footnote 49 serving as a preliminary check of a weapon system’s lawfulness per se – independently of the operational context.Footnote 50 If one or both modes of BCI violated the provisions of the Protocol,Footnote 51 their use would be illegal ab initio.Footnote 52 For this purpose, Modes A and B ought to be evaluated in line with the “normal or expected” circumstances of employment,Footnote 53 each on its own merits. Following the International Committee of the Red Cross’s (ICRC) Guide to the Legal Review of New Weapons, Means and Methods of Warfare (ICRC Review Guide),Footnote 54 the BCI must a priori be capable of applying targeting law – i.e., it must be compatible with IHL’s targeting principles, all of which are customary.Footnote 55

Targeting principles

Since the prohibition against causing superfluous injury or unnecessary suffering (AP I Article 35(2), reaffirming the 1907 Hague Convention IV Articles 22 and 23(e)) is not affected in the scenarios under discussion here (assuming the BCI-controlled machine gun shoots regular bullets), only the principles of distinction and proportionality must be scrutinized.

Distinction

The principle of distinction – enshrined in Article 48 of AP I in its elementary form – prohibits targeting civilian persons (AP I Articles 51(2) and 51(6)) or objects (AP I Article 52(1)). Attacks shall not be indiscriminate (AP I Article 51(4)) and must be directed solely against military objectives (AP I Article 52(2)). The machine gun is not an indiscriminate weapon; during normal and expected use, it is directed against specific persons. Hence, it allows the attacker to only strike lawful targets (AP I Article 51(4)(a)–(b)), thereby limiting its impact to military objectives (AP I Article 51(4)(c)). In the case of BCIs, the only questionable element is whether the weapon’s mental control permits its operator to exercise that distinction.

Indeed, Mode A’s conscious operator can instruct the UGV to carry out lawful attacks, but whether his counterpart in Mode B can do so equally is more obscure. The soldier must not merely identify regular combatants (AP I Article 43(2), 1907 Hague Convention IV Article 3, Geneva Convention III Article 4(A)(1)) as lawful targets, but must also distinguish between e.g. irregular armed forces (AP I Article 43(3), Geneva Convention III Article 4(A)(2)), civilians directly participating in hostilities (AP I Article 51(3), Geneva Convention IV Article 5) and civilians not qualifying as such (AP I Article 50(1), Article 3 common to the four Geneva Conventions). Prior RSVP studies show that the participants’ assessments can be erroneous,Footnote 56 and even when deciding consciously, warfighters may not always grasp such battlefield complexities when making quick decisions.Footnote 57 Sharkey gives the example of “an ununiformed man firing a rifle in the vicinity of an army platoon [who] may be deemed to be a hostile target … [even though] he had actually just killed a wolf that had taken one of his goats”.Footnote 58 Nevertheless, the active BCI operator can hesitate, reflect and observe his surroundings before firing. Thus, he can discriminate as well as any combatant may during combat. In Mode B, however, the neural signal evoked once the warfighter views the gunman could instantly launch a command to strike that gunman. Owing to the immediacy of his subconscious response, the soldier cannot consciously foresee or control whether an engagement command will be given at any particular moment. This casts serious doubt on the operator’s ability to adhere to the principle of distinction.

Proportionality

The proportionality principle precludes attacks which may be expected to cause incidental civilian harm that is excessive in relation to the concrete and direct military advantage anticipated (AP I Articles 51(5)(b) and 57(2)(a)(iii)). Given the principle’s context-dependent nature, some academics oppose its forming part of the weapon review procedure.Footnote 59 However, as endorsed by others,Footnote 60 the ICRC has included the notion in its Review Guide, stating that the characteristics and foreseeable effects of the weapon system must, per se, allow for its proportionate use.Footnote 61

Mode A does not preclude the operator from respecting the principle of proportionality, since the conscious mental targeting renders him no less competent to balance “the foreseeable extent of incidental or collateral civilian casualties or damage” and “the relative importance of the military objective as a target”.Footnote 62 Once again, however, the reactive BCI – Mode B – is contentious in this regard. Noll categorically rules out the idea that a subconscious combatant could comply with the principle of proportionality.Footnote 63 He advances that a proportionality assessment requires the combatant to enter into an inner dialectic, use language and resort to his reason;Footnote 64 otherwise, “IHL cannot be applied”.Footnote 65

In fact, the operator’s subconscious response does not seem to reflect a rational judgement grounded on conflicting normative factors. On the other hand, soldiers in the heat of combat do not rationally ponder a targeting decision either; they do not apply the law through verbalization and discussion,Footnote 66 as demanded by Noll. After all, proportionality comes with a “fairly broad margin of judgment”Footnote 67 and is foremost “a question of common sense and good faith”Footnote 68 – the BCI commander should merely avoid attacks that can be expected to excessively tilt the balance towards military needs (AP I Article 57(2)(b)).Footnote 69 Such an estimation may be possible to make subconsciously, but even if it is, the soldier cannot guide, predict or veto the result. Any conclusion on his capacity to comply with proportionality will be contingent on whether one can predict a proportionate outcome.

Accordingly, inasmuch as Mode A’s deliberate targeting does not differ from conventional remote systems,Footnote 70 it can obey the customary targeting principles. Hence, the active BCI is lawful under Article 36 of AP I. We are left questioning Mode B’s legality on the grounds of distinction and proportionality. By virtue of his subconscious target assessment, the operator does not control the “moment of command” – the moment the computer transmits the engagement decision to the UGV. Thus, he can neither exercise reason with regard to, nor predict or veto, his targeting choice. Without a further understanding of the nature of the control that he must exert over his subconscious, any assertion on the lawfulness of the targeting outcome would be guesswork. The question emerges as to whether control in general, and control over the moment of command in particular, bears legal significance under Article 36 of AP I and therefore constitutes a ground for challenging Mode B’s lawfulness.

Meaningful human control

Legal value: Transposition from the LAWS debate

The legal merit of control has been central to the lively debate on lethal autonomous weapon systems (LAWS),Footnote 71 from which the concept of meaningful human control emerged. While its exact scope is intensely debated and will be examined further below, generally speaking, MHC is construed as the minimum threshold of human oversight over target selection and engagement. Human intervention is not a sufficient condition thereto; the intervention must be meaningful.Footnote 72 In both LAWS and BCIs, some degree of human input is still present – at least during the development and engineering phase. However, both technologies spark the question of whether this reduced amount of human agency – compared to conventional weapon systems – is problematic. Mode B illustrates this point: despite human presence throughout the entire targeting process, the soldier lacks control over the moment of command. The reactive BCI’s mode of operation hereby shares the legal uncertainties of LAWS (i.e., whether it can apply targeting law). As a first step in addressing this issue, the merits of the MHC standard and how it relates to the reactive BCI shall be considered.

The notion of MHC cannot be found in the text of AP I; accordingly, it does not constitute a separate criterion for a weapon system’s lawfulness.Footnote 73 Rather, MHC over a means of warfare must be implicitly given in order for the means to respect the targeting principles, as human oversight lies at the core of a weapon system’s capacity to safeguard distinction and proportionality.Footnote 74 The active BCI reveals this aspect: because the operator makes conscious decisions, he is presumed to be capable of exercising distinction and proportionality. Conscious decision-making is hereby indicative of human agency – i.e., the capacity to ensure, through a volitional act, that one’s intentions manifest in the real world.Footnote 75 In Mode A, the conscious warfighter can ensure that his intention (to engage lawful targets) manifests on the battlefield (by him actively instructing the UGV to engage a lawful target).

Along these lines, MHC acts as a “fail-safe” to attest the meaningful presence of human agency where control is not exercised at the level of operational command.Footnote 76 Thus, the concept can be used as shorthand for a weapon system’s general capacity to apply targeting law – i.e., for our purposes, legality under Article 36 of AP I.Footnote 77 In Mode B, the soldier lacks control over the moment of command. At stake is hence the question of whether the amount of agency he retains over the engagement decision can ensure that his intention – to comply with the principles of distinction and proportionality – manifests. In other words, it is uncertain whether the operator’s subconscious control can satisfy the MHC standard.

Based on the similarity of control issues in LAWS and Mode B BCIs, it seems viable to transpose the concept of MHC to such BCIs.Footnote 78 However, some caution should be advised: within the LAWS domain, MHC relates to human control over a weapon system (such as a UGV), but this does not capture our subject of review. What is questionable in Mode B is not the control exerted over the weapon mounted on the UGV, but the control, if any, that the soldier wields over his subconscious. In light of the operator’s dual status as both combatant and means of warfare (see above), he retains the responsibilities of a combatant – to ensure that the principles of warfare extend to his targeting decisions by controlling the trigger in his mind.

For the purposes of this review, MHC builds the bridge between the human agentic capacity to ensure application of the law and the law’s actual application. As such, it acts as shorthand for the capacity to ensure lawfulness, in the absence of control at the operational stage. Having established its legal value for the Article 36 procedure, we now need to evaluate what MHC amounts to. Once the necessary degree of control – the minimum threshold for the warfighter’s conscious oversight – is identified, Mode B can be subsumed thereunder. Doing so will establish whether the lack of control over the moment of command prevents Mode B’s lawfulness.

Content of MHC: Out-of-the-loop versus on-the-loop standard

There is no consensus on the content of MHC. Human control can take different forms and include different factors at different stages of development, operations and targeting. Scholarly opinions on what level is regarded as “meaningful” fall into three categories: the standards of “in the loop”, “on the loop” and “out of the loop”. Respectively, these describe situations in which the operator either gives the engagement command (in the loop), only supervises the engagement command (on the loop), or does not need to be implicated in the engagement command at all (out of the loop).Footnote 79 Applying the trichotomy by analogy,Footnote 80 the appropriate standard for Mode B will now be examined.

If one were to adopt an out-of-the-loop standard, the human control during a weapon system’s design could be sufficient to ensure its compliance with the targeting principles.Footnote 81 In a reactive BCI, it is the operator’s presence that brings human agency to the system. Therefore, the system’s actual performance, as determined through testing, poses the sole relevant criterion for legality under Article 36 of AP I.Footnote 82

Such testing would be realized in a simulation, where the BCI-enhanced soldier subconsciously identifies targets and afterwards reviews the situation in a conscious state of mind.Footnote 83 If his subconscious responses aligned with his posterior conscious judgement in every instance (or most instances), MHC over the target assessment could be attested. Following this argument, the operator could rule out that he was ever (or frequently) compelled to correct the instruction transmitted to the UGV. Hence, he would not need control over the moment of command, since he could a priori ensure that the attack satisfied the principles of distinction and proportionality, based on the track record of his subconscious.
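In practical terms, such a test boils down to measuring how often the subconscious engagement decisions match the operator’s later conscious judgement of the same situations. A minimal sketch of that comparison, with invented data and making no claim about realistic BCI accuracy figures, might look as follows.

```python
# Illustrative calculation of the agreement rate between subconscious RSVP
# decisions and the operator's posterior conscious review; the data are invented.

def agreement_rate(subconscious: list[bool], conscious_review: list[bool]) -> float:
    """Fraction of trials in which the subconscious engagement decision matched
    the operator's later conscious judgement of the same situation."""
    assert len(subconscious) == len(conscious_review) and subconscious
    matches = sum(s == c for s, c in zip(subconscious, conscious_review))
    return matches / len(subconscious)


if __name__ == "__main__":
    # True = "engage", False = "do not engage"; hypothetical simulation results
    subconscious_decisions = [True, False, True, True, False, True, False, True]
    conscious_judgements = [True, False, True, False, False, True, False, True]
    print(f"Agreement rate: {agreement_rate(subconscious_decisions, conscious_judgements):.0%}")
    # one disagreement out of eight trials
```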

However, the on-the-loop standard, to which the majority of scholars subscribe, opposes this approach. In order to be on the loop, the operator is required (at least) to oversee the targeting decision, with the possibility of intervening;Footnote 84 hence, the BCI commander – in his function as a conscious combatant – would need to similarly supervise his subconscious. Such supervision is deemed necessary to ensure a predictable outcome.Footnote 85 Invoking predictability to mark the minimum threshold of human agency is sensible. If the BCI-controlling soldier wants to act lawfully (by applying distinction and proportionality in line with the rules regulating the conduct of hostilities), he must be able to ensure that the UGV only engages lawful targets. Otherwise, the argument goes, he cannot meaningfully control his means of warfare. MHC is thus the idea that intention X leads – with reasonable certainty – to outcome Y. If the operator has MHC and harbours the intention to engage lawful targets, he predictably engages lawful targets.Footnote 86 In other words, to have MHC in a reactive BCI requires the soldier’s subconscious targeting to be predictably lawful.

The ICRC Review Guide reflects this rationale by referring to the importance of testing a means of warfare on the “reliability of the targeting mechanism”Footnote 87 and whether its “foreseeable effects are capable of being limited to the target [distinction] or being controlled in time or space [proportionality]”.Footnote 88 This suggests that a degree of foresight of the weapon system’s ramifications must be given for its lawfulness.Footnote 89 Therefore, the predictability of a weapon system to comply with the targeting principles can be seen as an overarching consideration of the Article 36 procedure.Footnote 90

Having outlined the rationale of predictability and how it underpins MHC, we must now assess the out-of-the-loop standard against it. After all, one could argue that the BCI operator, by means of testing, can predict – with reasonable certainty – discriminate and proportionate outcomes. As stated, however, there is the possibility of failure. The out-of-the-loop standard is criticized precisely because one cannot rule out unlawful results in all situations arising during normal and expected useFootnote 91 – namely, in situations that would have been decided differently (lawfully) under the soldier’s conscious control over the moment of command.

The risk of failing to adhere to the targeting principles may frustrate a reasonable level of predictability in the context of Mode B’s operation. IHL’s fundamental balancing act between military necessity and humanity shall guide the following analysis. In fact, the requirement to ensure a predictable outcome can be seen as a child of the precautionary principle,Footnote 92 as reflected in its wording: a party is to refrain from attacks expected to cause excessive civilian death or damage (AP I Article 57(2)(a)(iii)). This expectation implies an element of uncertainty, but such uncertainty – the risk of incidental civilian harm – ought to be avoided, or in any event minimized, inter alia by taking all feasible precautions (AP I Article 57(2)(a)(ii)).

The duty to take feasible precautions (AP I Article 57) is not confined to the operational stage; it is not only a matter of the law of targeting. Rather, it informs the weapon law assessment under Article 36 of AP I for the same reasons as the principle of proportionality.Footnote 93 Addressing the criticism of establishing predictability by testing, the out-of-the-loop standard needs to be confronted with the obligation to take constant care to spare the civilian population in the conduct of military operations (AP I Article 57(1)).Footnote 94 The norm does not refer to attacks (defined in AP I Article 49(1)) – unlike the ensuing paragraphs. However, military operations are wider than attacks, while encompassing them, and hence refer to situations in and outside of active combat.Footnote 95 Moreover, constant care denotes a continuous, ubiquitous effort, both before and after the outbreak of hostilities.Footnote 96 It cannot be limited to the pre-deployment stage – i.e., by merely testing a weapon system once and, afterwards, neglecting all feasible measures to verify that only lawful targets are struck (AP I Article 57(2)(a)(i)). The notion of “feasible” as used in Article 57 of AP I signifies “those precautions which are practicable or practically possible taking into account all circumstances ruling at the time, including humanitarian and military considerations”.Footnote 97

On this point, there is a radical difference between reactive BCIs and LAWS. One may argue that an autonomous weapon system could outperform humans and better comply with the duty to take feasible precautions. The same cannot be said for Mode B BCIs. The reactive BCI cannot adhere to the targeting principles better than its active counterpart does. At most, the soldier in Mode B performs as well as the one in Mode A. Hence, to spare the civilian population from (the risk of) harm, the feasible measure is rather obvious: letting the same human operator decide consciously. If one were to employ a reactive BCI regardless (being familiar with the system’s lack of control over the moment of command), the precautionary principle would be violated – that is, unless the military necessity of using Mode B’s ultra-rapid targeting mechanism rendered alternative means not practicable or practically possible.

Ultimately, the military advantage must be examined in line with the “circumstances ruling at the time”Footnote 98 – meaning the circumstances in a future where BCI technology has materialized. In anticipation, one can consider the argument that the increased speed of battlefield operations prescribes a need to “tilt the balance between necessity and humanity”Footnote 99 in order for humans to keep pace.Footnote 100 Under this view, a military disadvantage could arise from not being able to maximize the targeting speed if Mode B were to be precluded. However, the argument does not hold for two reasons. Firstly, the accelerated pace affects both sides of the conflict, and therefore does not pose a relative disadvantage to any one party. If reactive BCIs were deemed unlawful, the adversary would not be allowed to use the technology either, curbing the upper limits of reaction speed on both sides to those of conscious human- or computer-based decision-making. Secondly, the rhythm of combat is not an external variable, but will be determined by the means of warfare chosen by an armed force. Consequently, a party cannot raise the increased pace – caused by its proper choice to employ Mode B – as a justification to employ Mode B (venire contra factum proprium).

In light of the foregoing, it appears feasible to employ an alternative means of target identification, as the military advantage gained by using Mode B does not outweigh the duty to take constant care. The precautionary rule thus reinforces and vindicates the required level of predictability. A proportionate standard of predictability is not met solely by testing the reactive BCI’s performance prior to deployment: military necessity does not overturn the need to ensure Mode B’s predictable compliance with the targeting principles beyond a dry run. As a matter of proportionality, an out-of-the-loop standard of MHC – pertaining to the reactive BCI – must be rejected. Rather, an on-the-loop standard shall lay the foundation for the remaining assessment. Under this standard, MHC – commonly construed as calling for operational oversight – would be achieved in Mode B if the warfighter could supervise his subconscious.

On-the-loop standard: Application to Mode B

What such supervision encompasses will be spelled out in the following. This will allow us to finally gauge whether the operator in a reactive BCI exerts MHC even though he cannot control the moment of command. As a shorthand, the presence of MHC will determine Mode B’s lawfulness.

From the myriad of academic contributions on the subject, Kwik has distilled the content of an on-the-loop standard of MHC in a comparative study.Footnote 101 The ensuing model features three elements (awareness, weaponeering and context control), the cumulative impact of which translates directly into the “overarching goal-element”Footnote 102 of predictability. Each component shall be applied to the man–machine collaboration in Mode B.

Starting with awareness, the soldier must follow the operational context, including its military goal and stakes.Footnote 103 Moreover, he needs to have a thorough grasp of how the BCI functions and how his subconscious interacts with it.Footnote 104 Thus, he ought to understand which neural signals trigger the targeting mechanism and how, if at all, he can influence them. Via simulation-based testing, he must become cognizant of the situations (both combat- and mindset-related) in which his subconscious response is accurate and of where errors occur. If one had adopted an out-of-the-loop standard, one would simply refer to any reliable test results at this stage. All aspects of awareness control can be met by Mode B.

Building on prior awareness, weaponeering describes the possibility of controlling whether to use a means of warfare in the first place.Footnote 105 The soldier needs to gauge whether the reactive BCI, particularly the subconscious targeting mechanism, is suitable in a given context, and deploy it selectively in those circumstances.Footnote 106 Since he preserves his freedom of action (see above), the combatant may shut down the BCI at any time. He can further make the above determinations after sufficient testing and mission briefing, which renders weaponeering control unproblematic.

Lastly, after deployment and over the entire course of an operation, the commander needs to exercise context control. On the one hand, he must have the possibility of adjusting the weapon system’s parameters (e.g. imposing a limit on the number of targets that the machine gun can shoot within a given time frame).Footnote 107 Since the BCI-enhanced combatant can consciously react to changing contexts, no issue arises. On the other hand, he needs to uphold a “persistent connection”Footnote 108 with the means of warfare, translating into his capacity to supervise and abort an attack. This challenges Mode B: as an expression of being on the loop vis-à-vis his subconscious, the warfighter must be able to veto a targeting decision once made. In line with the discussion above and the functioning of reactive BCIs generally, this is impossible: the evoked potential picked up by the neural implant, then translated into a command to engage, constitutes the earliest recognizable indicator of the soldier’s response, preceding his awareness and leaving him no opportunity to intervene.Footnote 109 The missing control over the moment of command effectively prevents the combatant from wielding his veto.

The merit of “veto control”Footnote 110 ought to be regarded as an expression of the human agency that lies at the core of legal reasoning.Footnote 111 This last verification before opening fire – whether the target is actively participating in hostilities or merely shooting a wolfFootnote 112 – ties into Noll’s idea of genuinely applying the law through an inner dialectic.Footnote 113 Only when the operator can “take a remote perspective”Footnote 114 vis-à-vis his actions and is “reason-responsive”Footnote 115 can his human agency permit him to weigh and balance military necessity and humanity, and only then can a prediction of lawfulness be issued. In the absence of such supervision, the soldier in Mode B – as a combatant – does not have MHC over his subconscious as part of the weapon system. The reactive BCI therefore violates the principles of distinction and proportionality during its normal or expected use of controlling an armed UGV when reviewed under Article 36 of AP I. The subconsciously controlled BCI (Mode B) is thus unlawful ab initio.

Conclusion

In this article, two types of BCI-controlled weapon systems were presented: Mode A (active BCI with conscious command) and Mode B (reactive BCI with subconscious command). The soldier operating either set-up assumes a dual status under IHL, as both a combatant and a means of warfare (part of a weapon system). Thus, the entire man–machine collaboration was subjected to a review on its lawfulness under Article 36 of AP I, requiring its capacity to respect – especially – the principles of distinction and proportionality.

Mode A poses no issues of legality. In contrast, Mode B – the reactive BCI – undermines the operator’s control over the moment of command. Thus, the concept of MHC was adapted to inspect the oversight that he must exert over his subconscious. Predictability was identified as an overarching requirement under Article 36 of AP I and as the rationale behind MHC. It serves to ensure that the outcome of an engagement decision adheres – with reasonable certainty – to the targeting principles. After balancing the precautionary duty and military necessity, pre-deployment testing of the reactive BCI (out-of-the-loop standard) was found insufficient to assure predictability. Instead, the operator should be able to supervise his subconscious (on-the-loop standard) so as to cancel an attack if needed. This ability to veto was found to be absent in Mode B, thereby undermining MHC – i.e., the BCI’s predictable compliance with distinction and proportionality.

On these grounds, it is concluded that a soldier can lawfully use a BCI to control a weapon under IHL if he controls that weapon in a conscious state of mind via an active BCI. The use of a subconsciously commanded reactive BCI is unlawful under Article 36 of AP I.

Lastly, academic rigour demands that we lay down some caveats for these findings. Firstly, this article depicts two models (Mode A and Mode B) as ideal-typical use cases for remote weapon control through BCIs, though the dichotomy of active versus reactive BCIs is somewhat simplified.Footnote 116 Furthermore, no absolute level of MHC and of being on the loop – against which every weapon system can be held – was established. Instead, a relative standard of ensuring reasonably predictable outcomes was presented, grounding the Article 36 review on considerations of proportionality. One can criticize reading military necessity and precautions into a level of MHC that renders a weapon system unlawful ab initio (weapons law). Alternatively, one could advance that these aspects need only be assessed in the context of a specific attack (law of targeting). Following this view, the risks and uncertainties of the reactive BCI would not restrain its lawful acquisition and development, but would solely call for constant care at the deployment stage. Nevertheless, this far-reaching take on Article 57(1) of AP I and on predictability under Article 36 of AP I represents the author’s best attempt at good-faith interpretation of the law. Maybe one ought to accept that the protection of humanity in armed conflict advises us to rather err on the side of caution.

Additionally, all those factors of BCI command, alternative means and battlefield challenges that tie into the legality of reactive BCIs (even of acquiring and developing them) are presumed at this point. In fact, BCI-controlled weapon systems could eventually fall into the category of technologies that were proclaimed to be groundbreaking junctures but ultimately never saw the light of day.Footnote 117 Given the pending development of BCIs and the secret nature of military research, some level of assumption is unavoidable.Footnote 118 While at this point it cannot be entirely excluded that this article is dedicated to a future that will never emerge, there is ample reason to believe that such – or similar – systems will shape the battlefields of tomorrow. Reflections on their legality and the technology’s handling in general should thus, before long, make their way onto the agenda of researchers, military professionals and policy-makers alike.

Among the plethora of legal challenges arising from BCIs, this article has addressed only a small fraction. Particularly, contemplations on accountability for mental acts under international criminal law will be needed. No matter the research angle through which one approaches BCIs, however, the reflections inevitably touch on core considerations of cognition, reasoning and human decision-making. Human agency amidst the targeting process has been characterized as the rationale behind MHC. It entails the possibility of bringing, and the obligation to bring, a battlefield decision under the scrutiny of one’s reason. In the end, when faced with a decision that determines life or death, veto control – the ability to hesitate just one more instance, to reconsider, to change one’s mind – puts the human in meaningful human control. Regardless of what novelties the future may bring, we owe it to ourselves and to the conscience of humanity to guard what is quintessentially human in the merger of man and machine.

Footnotes

*

The author would like to thank Dominik Steiger, Sabine von Schorlemer, Mara Ebbers and William Boothby for their insightful and helpful comments on earlier versions of this article.

The advice, opinions and statements contained in this article are those of the author/s and do not necessarily reflect the views of the ICRC. The ICRC does not necessarily represent or endorse the accuracy or reliability of any advice, opinion, statement or other information provided in this article.

References

1 Emotiv, “EMOTIV x Rodrigo Hubner Mendes – Driving F1 Car Just by Thinking”, YouTube, 18 August 2017, available at: www.youtube.com/watch?v=NhmXaeaHkDc (all internet references were accessed in February 2025).

2 Patrick Cutter, The Shape of Things to Come: The Military Benefits of the Brain-Computer Interface in 2040, Defense Technical Information Center, 2015, available at: https://apps.dtic.mil/sti/citations/AD1012768.

3 Adrian Czech, “Brain-Computer Interface Use to Control Military Weapons and Tools”, in Szczepan Paszkiel (ed.), Control, Computer Engineering and Neuroscience, Springer International, Cham, 2021, p. 197; Peter Emanuel, Cyborg Soldier 2050: Human/Machine Fusion and the Implications for the Future of the DoD, US Army Combat Capabilities Development Command Chemical Biological Center, 2019, p. 7, available at: https://apps.dtic.mil/sti/citations/AD1083010.

4 DARPA, “Six Paths to the Nonsurgical Future of Brain-Machine Interfaces”, available at: www.darpa.mil/news/2019/nonsurgical-brain-machine-interfaces; Robbin A. Miranda et al., “DARPA-Funded Efforts in the Development of Novel Brain–Computer Interface Technologies”, Journal of Neuroscience Methods, Vol. 244, 2015, p. 52.

5 US Office of the Federal Register, “Request for Comments Concerning the Imposition of Export Controls on Certain Brain-Computer Interface (BCI) Emerging Technology”, 2021, available at: www.federalregister.gov/documents/2021/10/26/2021-23256/request-for-comments-concerning-the-imposition-of-export-controls-on-certain-brain-computer.

6 Patrick Tucker, “It’s Now Possible to Telepathically Communicate with a Drone Swarm”, Defense One, 6 September 2018, available at: www.defenseone.com/technology/2018/09/its-now-possible-telepathically-communicate-drone-swarm/151068/; DARPA, “N3: Next-Generation Nonsurgical Neurotechnology”, available at: www.darpa.mil/program/next-generation-nonsurgical-neurotechnology.

7 William M. Curlin, Human Cognitive Enhancement: Ethical Implications for Airman-Machine Teaming, Air War College, Maxwell AFB, 2017, p. 11, available at: https://apps.dtic.mil/sti/citations/AD1041955; P. Cutter, above note 2, p. 4; Jonathan Moreno et al., “The Ethics of AI-Assisted Warfighter Enhancement Research and Experimentation: Historical Perspectives and Ethical Challenges”, Frontiers in Big Data, Vol. 5, 2022, p. 4.

8 The US granted a patent jointly to DARPA and Duke University for an “apparatus for acquiring and transmitting neural signals”, for controlling, inter alia, “weapons or weapon systems, [and] robots or robot systems”. See Patrick D. Wolf, Miguel A. L. Nicolelis, James C. Morizio and John K. Chapin, “Apparatus for Acquiring and Transmitting Neural Signals and Related Methods”, US Patent Application 20050090756, 28 April 2005, available at: www.freepatentsonline.com/y2005/0090756.html.

9 Bryan T. Stinchfield, “The Military and Commercial Development of Brain–Computer Interfaces: International (In)Security with Brain-Machine Teaming”, Defense and Security Analysis, Vol. 39, No. 2, 2023, p. 6.

10 Anika Binnendijk, Timothy Marler and Elizabeth M. Bartels, Brain-Computer Interfaces: U.S. Military Applications and Implications: An Initial Assessment, RAND Corporation, Santa Monica, CA, 2020, pp. 13–14; A. Czech, above note 3, p. 197; P. Emanuel, above note 3, p. 7; Eric Jensen, “The Future of the Law of Armed Conflict: Ostriches, Butterflies and Nanobots”, Michigan Journal of International Law, Vol. 35, No. 2, 2014, p. 288; Sin-Kon Kim, Sang-Pil Cheon and Jung-Ho Eom, “A Leading Cyber Warfare Strategy According to the Evolution of Cyber Technology after the Fourth Industrial Revolution”, International Journal of Advanced Computer Research, Vol. 9, No. 40, 2019, p. 74; Margaret Kosal and Joy Putney, “Neurotechnology and International Security: Predicting Commercial and Military Adoption of Brain-Computer Interfaces (BCIs) in the United States and China”, Politics and the Life Sciences, Vol. 42, No. 1, 2023, p. 7; Armin Krishnan, Military Neuroscience and the Coming Age of Neurowarfare, Routledge, New York, 2018, p. 65; Calum MacKellar, Cyborg Mind: What Brain–Computer and Mind–Cyberspace Interfaces Mean for Cyberneuroethics, Berghahn Books, New York and Oxford, 2019, p. 82; Jonathan D. Moreno, Mind Wars: Brain Science and the Military in the 21st Century, Bellevue Literary Press, New York, 2012, pp. 53–59; Fiachra O’Brolchain and Bert Gordijn, “Brain–Computer Interfaces and User Responsibility”, in Gerd Grübler and Elisabeth Hildt (eds), Brain-Computer Interfaces in Their Ethical, Social and Cultural Contexts, Springer, Dordrecht, 2014, p. 169; B. T. Stinchfield, above note 9, p. 5; Irene Tracey and Rod Flower, “The Warrior in the Machine: Neuroscience Goes to War”, Nature Reviews Neuroscience, Vol. 15, No. 12, 2014, pp. 830–831.

11 Such a neural implant would form part of a (semi-)invasive BCI. Generally, one can distinguish between invasive, semi-invasive and non-invasive BCIs. For invasive BCIs, electrodes are implanted in the cortical layers of the brain. Semi-invasive BCIs likewise require surgery, but the neural implant does not sit directly in the brain; it is placed under the scalp. By contrast, non-invasive BCIs obtain electrical signals from outside the scalp, which significantly decreases signal quality. For this reason, invasive and semi-invasive set-ups perform far better and are deemed apt for more intricate tasks. Hadeel Alharbi, “Identifying Thematics in a Brain-Computer Interface Research”, Computational Intelligence and Neuroscience, 2023; A. Czech, above note 3; P. Emanuel, above note 3; Brandon J. King, Gemma J. M. Read and Paul M. Salmon, “The Risks Associated with the Use of Brain-Computer Interfaces: A Systematic Review”, International Journal of Human–Computer Interaction, Vol. 40, No. 2, 2022.

12 A. Czech, above note 3.

13 H. Alharbi, above note 11, p. 4; Rabie Ramadan and Athanasios Vasilakos, “Brain Computer Interface: Control Signals Review”, Neurocomputing, Vol. 223, 2016, p. 33; Wenchang Zhang, Chuanqi Tan, Fuchun Sun, Hang Wu and Bo Zhang, “A Review of EEG-Based Brain-Computer Interface Systems Design”, Brain Science Advances, Vol. 4, No. 2, 2018, p. 157.

14 H. Alharbi, above note 11, p. 4; R. Ramadan and A. Vasilakos, above note 13, p. 33; W. Zhang et al., above note 13, p. 157.

15 Jon Touryan, Anthony J. Ries, Paul Weber and Laurie Gibson, “Integration of Automated Neural Processing into an Army-Relevant Multitasking Simulation Environment”, in Dylan D. Schmorrow and Cali M. Fidopiastis (eds), Foundations of Augmented Cognition, Springer, Berlin and Heidelberg, 2013.

16 Pengbo Fan et al., “A Novel SSVEP-BCI Approach Combining Visual Detection and Tracking for Dynamic Target Selection”, IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), 2021; Li-Wei Ko et al., “SSVEP-Assisted RSVP Brain-Computer Interface Paradigm for Multi-Target Classification”, Journal of Neural Engineering, Vol. 18, No. 1, 2021; Amar Marathe et al., Heterogeneous Systems for Information-Variable Environments (HIVE), US Army Research Laboratory, Aberdeen Proving Ground, MD, 2017, available at: https://apps.dtic.mil/sti/pdfs/AD1035246.pdf.

17 F. O’Brolchain and B. Gordijn, above note 10, p. 167; J. Touryan et al., above note 15.

18 A. Marathe et al., above note 16; J. Touryan et al., above note 15.

19 A similar set-up to Mode A has been referred to by Guy W. Eden, “Targeting Mr. Roboto: Distinguishing Humanity in Brain-Computer Interfaces”, Military Law Review, Vol. 228, No. 2, 2020, pp. 30–32; P. Emanuel, above note 3, p. 7; S.-K. Kim, S.-P. Cheon and J.-H. Eom, above note 10, p. 74; Rain Liivoja, “Being More than You Can Be: Enhancement of Warfighters and the Law of Armed Conflict”, in Matthew C. Waxman and Thomas W. Oakley (eds), The Future Law of Armed Conflict, Oxford University Press, Oxford, 2022, p. 89; C. MacKellar, above note 10, p. 82; Gregor Noll, “Weaponising Neurotechnology: International Humanitarian Law and the Loss of Language”, London Review of International Law, Vol. 2, No. 2, 2014, pp. 205–206; Gregor Noll, “War by Algorithm: The End of Law?”, in Max Liljefors, Gregor Noll and Daniel Steuer (eds), War and Algorithm, Rowman & Littlefield, London, 2019, pp. 86–87; F. O’Brolchain and B. Gordijn, above note 10, p. 163; Carolyn Sharp, “Cognitively Enhanced Humans as Both Warfighters and Weapons of War”, University of Florida Journal of Law and Public Policy, Vol. 32, No. 2, 2022, p. 318; B. T. Stinchfield, above note 9, p. 5; Stephen White, “Brave New World: Neurowarfare and the Limits of International Humanitarian Law”, Cornell International Law Journal, Vol. 41, No. 1, 2008, p. 182; Yan Xiaodong, Chu Kaixuan, Zhao Liyang, Chang Tianqing and Zhang Jie, “Application of Man-Machine Hybrid Intelligence in Ground Unmanned Combat”, 2022 International Conference on Industrial Automation, Robotics and Control Engineering (IARCE), 2022; Jian Yang, Ruina Dang, Tao Luo and Jin Liu, “The Development Status and Trends of Unmanned Ground Vehicle Control System”, 2015 IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems (CYBER), 2015.

20 See J. Touryan et al., above note 15, pp. 775–776.

21 See e.g. Guglielmo Tamburrini, “Brain to Computer Communication: Ethical Perspectives on Interaction Models”, Neuroethics, Vol. 2, No. 3, 2009, p. 140; W. Zhang et al., above note 13, p. 157.

22 G. Noll, “Weaponising Neurotechnology”, above note 19, p. 206.

23 A similar set-up to Mode B has been referred to by A. Binnendijk, T. Marler and E. M. Bartels, above note 10, pp. 13–14; G. W. Eden, above note 19, p. 12; A. Krishnan, above note 10, p. 65; Jonathan D. Moreno, Mind Wars: Brain Research and National Defence, 2nd ed., Dana Press, New York and Chicago, IL, 2006, pp. 39–40; G. Noll, “Weaponising Neurotechnology”, above note 19, pp. 206–207; F. O’Brolchain and B. Gordijn, above note 10, p. 176; Jean-Marc Rickli and Marcello Ienca, “The Security and Military Implications of Neurotechnology and Artificial Intelligence”, in Orsolya Friedrich, Andreas Wolkenstein, Christoph Bublitz, Ralf J. Jox and Eric Racine (eds), Clinical Neurotechnology Meets Artificial Intelligence: Philosophical, Ethical, Legal and Social Implications, Springer Nature Switzerland, 2021, p. 209, available at: https://doi.org/10.1007/978-3-030-64590-8; I. Tracey and R. Flower, above note 10, pp. 830–831; S. White, above note 19, pp. 182–183.

24 L.-W. Ko et al., above note 16, p. 2.

25 G. Noll, “Weaponising Neurotechnology”, above note 19, p. 207.

26 G. Noll, “War by Algorithm”, above note 19, pp. 86–87; J. Touryan et al., above note 15, pp. 775–781.

27 This definition was introduced by Justin McClelland, “The Review of Weapons in Accordance with Article 36 of Additional Protocol I”, International Review of the Red Cross, Vol. 85, No. 850, 2003, p. 404. It has been endorsed by e.g. William H. Boothby, Conflict Law: The Influence of New Weapons Technology, Human Rights and Emerging Actors, T. M. C. Asser Press, The Hague, 2014, p. 169; Steven Haines, “The Developing Law of Weapons”, in Andrew Clapham and Paola Gaeta (eds), The Oxford Handbook of International Law in Armed Conflict, Oxford University Press, Oxford, 2014, p. 276.

28 Luke Chircop and Rain Liivoja, “Are Enhanced Warfighters Weapons, Means, or Methods of Warfare?”, International Law Studies, Vol. 94, 2018, p. 176.

29 Ibid., p. 180.

30 Maxwell Mehlman, Patrick Lin and Keith Abney, Enhanced Warfighters: Risk, Ethics and Policy, Case Research Paper Series in Legal Studies, Working Paper 2013–2, Case Western Reserve University School of Law, Cleveland, OH, 2013, pp. 29–30.

31 L. Chircop and R. Liivoja, above note 28, p. 177; Heather Harrison Dinniss and Jann K. Kleffner, “Soldier 2.0: Military Human Enhancement and International Law”, International Law Studies, Vol. 92, 2016, p. 438.

32 L. Chircop and R. Liivoja, above note 28, p. 177.

33 G. W. Eden, above note 19, p. 32.

34 L. Chircop and R. Liivoja, above note 28, p. 177; G. W. Eden, above note 19, pp. 28–29; H. Harrison Dinniss and J. K. Kleffner, above note 31, p. 438.

35 G. W. Eden, above note 19, p. 28.

36 Ibid., p. 29.

37 William H. Boothby, Weapons and the Law of Armed Conflict, 2nd ed., Oxford University Press, Oxford, 2016, p. 4; S. Haines, above note 27, pp. 276–277; J. McClelland, above note 27, pp. 405–406.

38 Harvard Program on Humanitarian Policy and Conflict Research, HPCR Manual on International Law Applicable to Air and Missile Warfare, Cambridge University Press, Cambridge, 2013, p. xxiv. This definition has been endorsed by e.g. W. H. Boothby, above note 37, p. 5; L. Chircop and R. Liivoja, above note 28, p. 178; H. Harrison Dinniss and J. K. Kleffner, above note 31, p. 437.

39 Thompson Chengeta, “Are Autonomous Weapons Systems the Subject of Article 36 of Additional Protocol I to the Geneva Conventions”, UC Davis Journal of International Law and Policy, Vol. 23, No. 1, 2017, p. 73; Department of the Army, Army Regulation 27-53: Review of Legality of Weapons under International Law, Washington, DC, 1979, p. 1, available at: https://irp.fas.org/doddir/army/ar27-53.pdf; International Committee of the Red Cross (ICRC), A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures to Implement Article 36 of Additional Protocol I of 1977, reproduced in International Review of the Red Cross, Vol. 88, No. 864, 2006 (ICRC Review Guide), p. 937 fn. 17; W. Hays Parks, “Conventional Weapons and Weapons Reviews”, Yearbook of International Humanitarian Law, Vol. 8, 2005, p. 115.

40 Yves Sandoz, Christophe Swinarski and Bruno Zimmermann (eds), Commentary on the Additional Protocols, ICRC, Geneva, 1987 (ICRC Commentary on the APs), para. 1469.

41 Ibid., para. 1402.

42 Michael N. Schmitt, “War, Technology and the Law of Armed Conflict”, in Anthony M. Helm (ed.), The Law of War in the 21st Century: Weaponry and the Use of Force, International Law Studies, Vol. 82, Naval War College, Newport, RI, 2006, p. 142. This definition is endorsed by e.g. William H. Boothby, “Methods and Means of Cyber Warfare”, International Law Studies, Vol. 89, 2013, p. 388; L. Chircop and R. Liivoja, above note 28, p. 178; Jonathan D. Herbach, “Into the Caves of Steel: Precaution, Cognition and Robotic Weapon Systems under the International Law of Armed Conflict”, Amsterdam Law Forum, Vol. 4, No. 3, 2012, p. 5.

43 G. W. Eden, above note 19, p. 28.

44 L. Chircop and R. Liivoja, above note 28, p. 180.

45 See Klaudia Klonowska, Article 36: Review of AI Decision-Support Systems and Other Emerging Technologies of Warfare, Asser Research Paper 2021–02, T. M. C. Asser Institute for International and European Law, 2021, p. 10, available at: https://papers.ssrn.com/abstract=3823881: “[T]echnologies that themselves are not weapons but nevertheless are integral to the capacity of a weapon inflicting harm or damage should be reviewed under article 36.” Similarly, a system used to “analyse target data and then provide a target solution or profile” falls under means and methods of warfare as “an integral part of the targeting decision process”: J. McClelland, above note 27, pp. 405–406.

46 See, in affirmation, L. Chircop and R. Liivoja, above note 28, p. 180.

47 C. Sharp, above note 19, p. 326.

48 Ibid.

49 W. H. Boothby, above note 27, p. 170; Stuart Casey-Maslen, “Weapons”, in Ben Saul and Dapo Akande (eds), The Oxford Guide to International Humanitarian Law, Oxford University Press, Oxford, 2020, p. 274; ICRC Review Guide, above note 39, p. 933.

50 ICRC Review Guide, above note 39, p. 933.

51 Article 36 of AP I requires a review not merely against the Protocol’s provisions, but also against all other applicable rules of international law. Those rules encompass custom, human rights law and treaty law (S. Casey-Maslen, above note 49, p. 274). However, assessing disarmament and human rights law would exceed the scope of this article.

52 ICRC Review Guide, above note 39, p. 933.

53 ICRC Commentary on the APs, above note 40, para. 1469.

54 ICRC Review Guide, above note 39, p. 52.

55 For the prohibition against superfluous injury and unnecessary suffering, see International Court of Justice (ICJ), Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, 8 July 1996, ICJ Reports 1996, paras 78–79 (Nuclear Weapons Advisory Opinion); Jean-Marie Henckaerts and Louise Doswald-Beck (eds), Customary International Humanitarian Law, Vol. 1: Rules, Cambridge University Press, Cambridge, 2005 (ICRC Customary Law Study), Rule 70, available at: https://ihl-databases.icrc.org/en/customary-ihl/rules. For the principle of distinction, see Nuclear Weapons Advisory Opinion, paras 78–79; ICRC Customary Law Study, Rule 1. For the principle of proportionality, see ICRC Customary Law Study, Rule 14.

56 Participants have been shown to fare well in RSVP tasks when they do not need to distinguish between lawful and unlawful targets – i.e., when the task is simply to spot a combatant who is either present or absent in the image: J. Touryan et al., above note 15. However, a 2017 study that asked participants to engage only lawful targets – i.e., to engage a person carrying a gun while sparing a person without one – observed median error rates of between 5% and 12%, depending on the specifics of the set-up: A. Marathe et al., above note 16.

57 Noel Sharkey, “Towards a Principle for the Human Supervisory Control of Robot Weapons”, Politica & Società, No. 2, 2014, p. 314.

58 Ibid.

59 W. H. Boothby, above note 37, p. 67; Michael N. Schmitt and Jeffrey Thurnher, “‘Out of the Loop’: Autonomous Weapon Systems and the Law of Armed Conflict”, Harvard National Security Journal, Vol. 4, No. 2, 2013, p. 274.

60 Vincent Boulanin and Maaike Verbruggen, Article 36 Reviews: Dealing with the Challenges Posed by Emerging Technologies, Stockholm International Peace Research Institute, 2017, p. 22; S. Haines, above note 27, pp. 288–289; Kathleen Lawand, “Reviewing the Legality of New Weapons, Means and Methods of Warfare”, International Review of the Red Cross, Vol. 88, No. 864, 2006, pp. 927–929.

61 ICRC Review Guide, above note 39, p. 943.

62 Michael Bothe, Karl Josef Partsch and Waldemar A. Solf, New Rules for Victims of Armed Conflicts: Commentary on the Two 1977 Protocols Additional to the Geneva Conventions of 1949, Vol. 2, Martinus Nijhoff, Leiden, 2013, p. 351.

63 G. Noll, “Weaponising Neurotechnology”, above note 19.

64 Ibid., pp. 214–215.

65 Ibid., p. 215.

66 Ibid., p. 223.

67 ICRC Commentary on the APs, above note 40, para. 2210.

68 Ibid., para. 2198.

69 Ibid., para. 2213.

70 G. W. Eden, above note 19, p. 31.

71 Lethal autonomous weapon systems are defined as weapon systems “with autonomy in the ‘critical functions’ of acquiring, tracking, selecting and attacking targets”. ICRC, Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects: Expert Meeting, Geneva, 26 March 2014, p. 7.

72 Peter Asaro, “On Banning Autonomous Weapon Systems: Human Rights, Automation and the Dehumanization of Lethal Decision-making”, International Review of the Red Cross, Vol. 94, No. 886, 2012, p. 695; Bérénice Boutin and Taylor Woodcock, “Aspects of Realizing (Meaningful) Human Control: A Legal Perspective”, in Robin Geiß and Henning Lahmann (eds), Research Handbook on Warfare and Artificial Intelligence, Edward Elgar, Cheltenham, 2024; Convention on Certain Conventional Weapons (CCW) Meeting of Experts on LAWS, Report of the Informal Meeting of Experts on LAWS under the Chair’s Personal Responsibility, 11 April 2016, para. 38; Linda Eggert, “Rethinking ‘Meaningful Human Control’”, in Jan Maarten Schraagen (ed.), Responsible Use of AI in Military Systems, CRC Press, Boca Raton, FL, and Abingdon, 2024, p. 213; Anna-Katharina Ferl, “Imagining Meaningful Human Control: Autonomous Weapons and the (De-)Legitimisation of Future Warfare”, Global Society, Vol. 38, No. 1, 2024, p. 140; Michael Horowitz and Paul Scharre, Meaningful Human Control in Weapon Systems: A Primer, CNAS Working Papers, Center for a New American Security, 2015, p. 10, available at: www.cnas.org/publications/reports/meaningful-human-control-in-weapon-systems-a-primer.

73 Dan Saxon, Fighting Machines: Autonomous Weapons and Human Dignity, University of Pennsylvania Press, Philadelphia, PA, 2022, p. 59.

74 See ICRC, “Statement: Expert Meeting on Lethal Autonomous Weapons Systems”, 15 November 2017, available at: www.icrc.org/en/document/expert-meeting-lethal-autonomous-weapons-systems. In this statement, the ICRC holds that “the rules on the conduct of hostilities are addressed to those who plan, decide upon, and carry out an attack”. Therefore, “compliance with these legal obligations would require that combatants retain a minimum level of human control over the use of weapon systems to carry out attacks in armed conflict”. See also P. Asaro, above note 72, p. 78; Eric Talbot Jensen, “The (Erroneous) Requirement for Human Judgment (and Error) in the Law of Armed Conflict”, International Law Studies, Vol. 96, 2020.

75 Patrick Haggard, “Human Volition: Towards a Neuroscience of Will”, Nature Reviews Neuroscience, Vol. 9, No. 12, 2008, p. 935; C. MacKellar, above note 10, p. 120.

76 Daniele Amoroso and Guglielmo Tamburrini, “Toward a Normative Model of Meaningful Human Control over Weapons Systems”, Ethics and International Affairs, Vol. 35, No. 2, 2021.

77 Jonathan Kwik, “A Practicable Operationalisation of Meaningful Human Control”, Laws, Vol. 11, No. 43, 2022, p. 3.

78 See Henning Lahmann, “The Future Digital Battlefield and Challenges for Humanitarian Protection: A Primer”, Working Paper, Geneva Academy of International Humanitarian Law and Human Rights, Geneva, 2022, p. 27, arguing that MHC should not be confined to the realm of LAWS but should be understood more broadly.

79 Ioannis Kalpouzos, “Double Elevation: Autonomous Weapons and the Search for an Irreducible Law of War”, Leiden Journal of International Law, Vol. 33, No. 2, 2020, p. 292; Duncan MacIntosh, “Fire and Forget: A Moral Defense of the Use of Autonomous Weapons in War and Peace”, in Jai Galliott, Duncan MacIntosh and Jens D. Ohlin (eds), Lethal Autonomous Weapons: Re-examining the Law and Ethics of Robotic Warfare, Oxford University Press, Oxford, 2021, pp. 14–18; Michael N. Schmitt, “International Humanitarian Law and the Conduct of Hostilities”, in B. Saul and D. Akande (eds), above note 49, pp. 172–173.

80 Invoking the trichotomy of in/on/out of the loop does not fit BCIs neatly. The word “loop” in this context refers to the “observe–orient–decide–act” (OODA) loop: see David J. Bryant, “Rethinking OODA: Toward a Modern Cognitive Framework of Command Decision Making”, Military Psychology, Vol. 18, No. 3, 2006, p. 183. The fact that the BCI operator always observes, orients, decides and acts renders this trichotomy inaccurate. Nonetheless, the concept can be applied by analogy insofar as it refers to (conscious) human control over the decision to engage. A differentiation between “on the loop” and “in the loop” is nevertheless nonsensical for the reactive BCI, as the decision to engage always traces back to the operator – be it to his conscious or his subconscious mind. Unlike LAWS, BCI command can thus be classified dichotomously: the commanding soldier makes either conscious or subconscious decisions. Therefore, the ensuing analysis will exclude the in-the-loop standard – not least since Mode A is already an emanation of it.

81 E. T. Jensen, above note 74, p. 57; Tim McFarland, “Autonomous Weapons and Human Control”, Humanitarian Law and Policy Blog, 18 July 2018, available at: https://blogs.icrc.org/law-and-policy/2018/07/18/autonomous-weapons-and-human-control/; Marco Sassòli, “Autonomous Weapons and International Humanitarian Law: Advantages, Open Technical Questions and Legal Issues to be Clarified”, International Law Studies, Vol. 90, No. 1, 2014, p. 309; M. N. Schmitt and J. Thurnher, above note 59, p. 280.

82 Kenneth Anderson, Daniel Reisner and Matthew C. Waxman, “Adapting the Law of Armed Conflict to Autonomous Weapon Systems”, International Law Studies, Vol. 90, 2014, p. 11; Pablo Kalmanovitz, “Judgment, Liability and the Risks of Riskless Warfare”, in Nehal Bhuta, Susanne Beck, Robin Geiß, Hin-Yan Liu and Claus Kreß (eds), Autonomous Weapons Systems: Law, Ethics, Policy, Cambridge University Press, Cambridge, 2016, p. 18.

83 G. Noll, “Weaponising Neurotechnology”, above note 19, p. 217.

84 Daniele Amoroso and Guglielmo Tamburrini, “The Ethical and Legal Case against Autonomy in Weapons Systems”, Global Jurist, Vol. 18, No. 1, 2018; D. Amoroso and G. Tamburrini, above note 76; P. Asaro, above note 72; Rebecca Crootof, “A Meaningful Floor for ‘Meaningful Human Control’”, Temple International and Comparative Law Journal, Vol. 30, 2016; Neil Davison, “A Legal Perspective: Autonomous Weapon Systems under International Humanitarian Law”, UNODA Occasional Papers, No. 30, 2018, p. 5; Human Rights Watch, Losing Humanity: The Case Against Killer Robots, 5 December 2012, available at: https://reliefweb.int/report/world/losing-humanity-case-against-killer-robots; D. Saxon, above note 73; Paul Scharre, Army of None: Autonomous Weapons and the Future of War, W. W. Norton, New York and London, 2018; Amanda Sharkey, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology, Vol. 21, No. 2, 2019; Karolina Zawieska, “An Ethical Perspective on Autonomous Weapon Systems”, UNODA Occasional Papers, No. 30, 2018, p. 49.

85 Article 36, Meaningful Human Control, Artificial Intelligence and Autonomous Weapons, Briefing Paper for Delegates at the CCW Meeting of Experts on LAWS, 11 April 2016, p. 2, available at: https://article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.pdf; N. Davison, above note 84, pp. 15–16; Amanda M. Eklund, Meaningful Human Control of Autonomous Weapon Systems, Totalförsvarets Forskningsinstitut, 2020, p. 15, available at: http://umu.diva-portal.org/smash/record.jsf?pid=diva2%3A1420989&dswid=5805; J. Kwik, above note 77, pp. 5–6; D. Saxon, above note 73, p. 43.

86 See ICRC, Ethics and Autonomous Weapon Systems: An Ethical Basis for Human Control?, Geneva, 3 April 2018, p. 14, describing predictability as a “means of connecting human agency and intent with the eventual outcome and consequences of the machine’s operation”. See, similarly, Article 36, above note 85.

87 ICRC Review Guide, above note 39, p. 946.

88 Ibid.

89 William H. Boothby, “Some Legal Challenges Posed by Remote Attack”, International Review of the Red Cross, Vol. 94, No. 886, 2012, p. 585; K. Lawand, above note 60, pp. 927–928.

90 V. Boulanin and M. Verbruggen, above note 60, pp. 18, 23; N. Davison, above note 84, pp. 15–16.

91 D. Amoroso and G. Tamburrini, above note 76, p. 253.

92 J. D. Herbach, above note 42; Nwamaka A. Iguh and Florence C. Akubuilo, “The Position of International Humanitarian Law on the Use of Combat Drones in Armed Conflict”, Unizik Law Journal, Vol. 18, 2022, p. 100; Michael N. Schmitt and Michael Schauss, “Uncertainty in the Law of Targeting: Towards a Cognitive Framework”, Harvard National Security Journal, Vol. 10, No. 1, 2019, pp. 177–180.

93 D. Amoroso and G. Tamburrini, above note 76, p. 253; V. Boulanin and M. Verbruggen, above note 60, p. 20; N. Davison, above note 84, p. 8; S. Haines, above note 27, pp. 286–287; ICRC Review Guide, above note 39, p. 943; K. Lawand, above note 60, p. 928.

94 M. N. Schmitt and M. Schauss, above note 92, pp. 179–180; ICRC Commentary on the APs, above note 40, para. 2191.

95 W. H. Boothby, above note 89, p. 585; Jean-François Quéguiner, “Precautions under the Law Governing the Conduct of Hostilities”, International Review of the Red Cross, Vol. 88, No. 864, 2006, p. 797; ICRC Commentary on the APs, above note 40, para. 2191.

96 Geoffrey S. Corn and James A. Schoettler, “Targeting and Civilian Risk Mitigation: The Essential Role of Precautionary Measures”, Military Law Review, Vol. 223, No. 4, 2015, p. 794; Asaf Lubin, The Duty of Constant Care and Data Protection in War, Indiana Legal Studies Research Paper No. 473, Indiana University Maurer School of Law, Bloomington, IN, 2022, p. 12, available at: https://papers.ssrn.com/abstract=4012023.

97 ICRC Commentary on the APs, above note 40, para. 2198.

98 Ibid.

99 D. Saxon, above note 73, p. 46.

100 Kenneth Anderson and Matthew Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can, Hoover Institution Jean Perkins Task Force on National Security and Law Essay Series, Stanford University, Stanford, CA, 2013, p. 5, available at: https://scholarship.law.columbia.edu/faculty_scholarship/1803; Michael C. Haas and Sophie-Charlotte Fischer, “The Evolution of Targeted Killing Practices: Autonomous Weapons, Future Conflict and the International Order”, Contemporary Security Policy, Vol. 38, No. 2, 2017, p. 297; James Johnson, “Artificial Intelligence: A Threat to Strategic Stability”, Strategic Studies Quarterly, Vol. 14, No. 1, 2020, p. 30; D. Saxon, above note 73, p. 64.

101 J. Kwik, above note 77.

102 A. M. Eklund, above note 85, p. 30.

103 J. Kwik, above note 77, pp. 9–11.

104 Ibid.

105 Ibid., pp. 11–12.

106 Ibid.

107 Ibid., pp. 12–14.

108 Ibid., p. 13.

109 Walter Glannon, “Ethical Issues in Neuroprosthetics”, Journal of Neural Engineering, Vol. 13, No. 2, 2016, p. 11; F. O’Brolchain and B. Gordijn, above note 10, p. 169; Stephen Rainey, Hannah Maslen and Julian Savulescu, “When Thinking Is Doing: Responsibility for BCI-Mediated Action”, AJOB Neuroscience, Vol. 11, No. 1, 2020, p. 53.

110 Steffen Steinert, Christoph Bublitz, Ralf Jox and Orsolya Friedrich, “Doing Things with Thoughts: Brain-Computer Interfaces and Disembodied Agency”, Philosophy and Technology, Vol. 32, No. 3, 2019, p. 471.

111 Ozlem Ulgen, “Kantian Ethics in the Age of Artificial Intelligence and Robotics”, Questions of International Law, Vol. 43, 2017, p. 69.

112 N. Sharkey, above note 57, p. 314.

113 G. Noll, “Weaponising Neurotechnology”, above note 19, pp. 214–215.

114 Jos de Mul and Bibi van den Berg, “Remote Control: Human Autonomy in the Age of Computer-Mediated Agency”, in Mireille Hildebrandt and Antoinette Rouvroy (eds), Law, Human Agency and Autonomic Computing, Routledge, Abingdon, 2011, p. 52.

115 Filippo Santoni de Sio and Jeroen van den Hoven, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI, Vol. 5, 2018, p. 5.

116 Current research is moving towards integrating spontaneous and evoked signals in so-called “hybrid systems”. See A. Czech, above note 3, p. 196; R. Ramadan and A. Vasilakos, above note 13, p. 33; W. Zhang et al., above note 13, p. 157.

117 B. T. Stinchfield, above note 9, p. 14.

118 G. Noll, “Weaponising Neurotechnology”, above note 19, p. 208; S. White, above note 19, p. 178.