Why AI’s lack of motivation doesn’t preclude legal explainability
Acknowledgements
This op-ed arose from the workshop on Automation in Legal Decision-Making: Regulation, Technology, Fairness, organised on 14 November 2024 at the University of Copenhagen under the aegis of the LEGALESE project. The workshop aimed to inspire critical thinking about the growing role of automation in legal decision-making and its implications for fairness and explainability. Participants from multiple disciplines were invited to contribute and discuss blog posts reflecting on a range of pressing themes, including the role of human oversight, the challenges of explainability, and the importance of interdisciplinary dialogue for effective and responsible innovation. By engaging with both cautionary tales and constructive approaches, the event fostered a space for discussion on how legal, technical, and societal perspectives can come together to shape ‘fair’ ADM practices.
Funding: The authors received funding from the Innovation Fund Denmark, grant no 0175-00011A.
Within the debate over AI and legal explainability, it has been argued that black box AI should not be used for legal decision-making because such AI cannot provide adequate explanations of its decisions, since it does not anchor those explanations in motivations (Sarid and Ben-Zvi, 2023; Rudin, 2019). This conclusion – in part – disagrees with an argument called the universal inscrutability argument (UIA). The UIA claims that since human brains are as inaccessible as AI black boxes – in that we do not really know in any mechanistic detail how either brains or LLMs work when making (legal) decisions – we owe no additional commitment to explainability for such decisions beyond the already existing legal explainability requirement. The criticism argues that the UIA neglects the importance of normative justifications and the motivating reasons for decisions. We suggest there might be a different interpretation of what the UIA claims about explanations and their function within legal systems. That is, we suggest that the UIA should be seen as capturing the legal explainability requirement by emphasising the institutional robustness needed to guard against an overreliance on persuasive yet false explanations, regardless of the substrate of the explainer (Olsen et al., 2021).
Critique of the UIA
One version of this critique states:
“Universal inscrutability arguments might incorrectly assume that biochemical causal explanations do (or could do) relevant work in justifying administrative decisions, when, in fact, they do not. This is because biochemical causal explanations can only explain decision-making as it relates to the physical state of a specific brain.” (Sarid and Ben-Zvi, 2023)
We believe this characterisation misrepresents the UIA’s position. The UIA is in complete agreement that the functional explanation for a legal decision is one that deals with normative justifications, not a reductive causal diagram. The critics further maintain that access to motivations is a crucial requirement for legal explainability:
“Administrative law focuses on motivating reasons, ie, factors that count in favour of the prediction at hand and which were actually considered by the relevant agent. But as far as motivating reasons go, humans are not ‘black boxes’ – it is very common to ask what reasons motivate a given person’s prediction, to accept their answers as revealing their justifying reasons, and to scrutinise that decision on the basis of the answers given.” (Sarid and Ben-Zvi, 2023)
Between motivation and justification
This position rightly suggests that justifying reasons are what legal explanations are supposed to provide. However, the critics then posit that justificatory reasoning can only emerge from “motivations”, and that since black box AI does not have motivations, such AI cannot provide justificatory reasons and hence cannot meet the legal explainability requirement. We do not think there is such a link between motivation and justification. Just because an AI system cannot be said to have “motivations”, it does not follow that it cannot produce justificatory reasons for a decision (prediction). In fact, and this is the heart of the UIA, if an AI system can do this as well as humans, then it does not matter that these reasons cannot be ascribed to some individual’s motivation. What matters is the quality of the justificatory reasons, and only that. The reason is that motivation lives in the black box of the human mind. It may even be the case that the motivations and causes leading to a decision are entirely unrelated to what actually justifies the decision legally.
Finally, we wish to point out that the critics misplace the very target of what needs to be explained. The point of explanation in administrative law is to justify a decision and its process in a specific scenario (Cohen, 2010; Waldron, 2023). The point is not to explain a decisional agent. If motivation has a role to play in administrative law, what counts is the institutional motivation, not the motivation of any given person within that institution. We must therefore speak of the relevant institution rather than the “relevant agent”. While natural persons do engage in role agency (acting as officials) by operationalising their natural cognitive competences in the service of the institution, they will need to reconstruct the institutional motivations – not their own personal motivations – if they are called upon to elaborate on the justifying reasons for the decision. Hence, we do not consider it accurate to say that the law is interested in the motivating reasons of natural persons. It is instead interested in the justifying reasons of the institution.
Conclusion
In this sense there is much overlap between what we and those critical of the UIA are searching for: decisions that are justified by reasons which assess how well a decision comports with what the law provides. When justificatory reasoning serves as the defining characteristic of explainability, and when explanations are evaluated through a Turing test-like approach, black box AI can plausibly meet these standards (Olsen et al., 2021).
In fact, we may go further and suggest that black box AI may be preferable to black box humans, and not only for reasons of efficiency, speed, accuracy, and equality across time and space (though those are all positives). Humans have a tendency to overvalue and retrofit explanations to their own behaviour and decisions, whether or not those explanations accord with causal reality or with an institutional fidelity to the law. This human fallibility, even towards understanding our own decisions, is one of the major reasons we invented explainable and reviewable decisions as a due process rule in the first place (Huq, 2020; Shapiro, 2002).






