3 - Opacity
from Part I - Challenges
Published online by Cambridge University Press: 15 July 2021
Summary
As computer programs become ever more complex, the ability of non-specialists to understand them diminishes. Opacity may also be built into programs by companies seeking to protect proprietary interests. Both such systems are capable of being explained, albeit with recourse to experts or an order to reveal their internal workings. Yet a third kind of system may be naturally opaque: some machine learning techniques are difficult or impossible to explain in a manner that humans can comprehend. This raises concerns when the process by which a decision is made is as important as the decision itself. For example, a sentencing algorithm might produce a ‘just’ outcome for a class of convicted persons. Unless the justness of that outcome for an individual defendant can be explained in court, however, it is, quite rightly, subject to legal challenge. Separate concerns are raised by the prospect that AI systems may mask or reify discriminatory practices or outcomes.
We, the Robots? Regulating Artificial Intelligence and the Limits of the Law, pp. 63–82
Publisher: Cambridge University Press
Print publication year: 2021