It is increasingly clear that autonomous agents can commit international crimes such as torture and genocide. This article aims to construct a concept of ‘electronic liability’ for such crimes. It will argue that it is not sufficient to hold the persons or programmers behind autonomous agents liable; it should also be possible to hold liable the autonomous agents that commit international crimes themselves. It will examine the ways in which legal personality can be attributed to machines and argue that, if there is a continuum of potential subjects of international criminal law (ICL), then the case for electronic personhood and the liability of machines is as compelling as that for other non-humans such as corporate entities and animals. Finally, it will argue that the International Criminal Court (ICC) will be able to prosecute international crimes committed by autonomous agents meaningfully only if it is willing to accommodate strict liability and other faultless models of liability that have so far been anathema to international criminal justice.