Computing the law: beyond code transparency

An epistemological approach.

Alina Mierlusa
Department of Philosophy, UAB

Abstract

This proposal discusses code transparency and its practices (open source) in relation to current efforts to compute the law.

The law has been a central philosophical concept since antiquity. The way law is written, interpreted or enforced has tremendous ethical, political and ontological consequences.

As different institutional processes, including those within juridical institutions, are being automated, the question arises whether law could be written in code, that is, in a formal or specification language, so that it can be computed.

This process raises several issues, such as public responsibility, algorithmic fairness and, more generally, making the code, data and processes publicly available.

Extending the successful practices of open source and code transparency to law-computation processes may be a good solution. However, we argue that this cannot be a simple copy-paste of principles; it requires a further epistemological approach.

Motivation

  1. Acknowledging the importance that open source, as a practice, has had in more traditional software development and at a wider cultural level.
  2. The notion of law is central in philosophy, at the epistemological, ontological and political levels. An epistemological approach to law computation fills the gap between the technical and the political (decision making).

State of the question

Computing the law goes beyond the traditional software paradigm.

  1. AI techniques involve different epistemological layers: modelization/formalization and computation (code and data).
  2. Current proposals for public certification: legal programming languages partially make the modelization/formalization level more understandable to the general public (a minimal sketch follows this list).
  3. Open source: legal (licenses), logistics (version-control systems, code transparency practices, etc.), governance (decision making).

Open source practice is still limited: releasing code does not by itself make a system trustworthy. Public/open data is still an issue.
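
To make point 2 above more concrete, the following is a minimal, hypothetical sketch of what encoding a single statutory rule as executable, inspectable code could look like. The rule, the names and the thresholds are invented for illustration and do not correspond to any existing legal programming language or statute.

```python
# Hypothetical sketch: a single (invented) statutory rule encoded as
# executable code, so that the formalization step is open to inspection.
# The rule, names and thresholds are illustrative assumptions, not real law.

from dataclasses import dataclass


@dataclass
class Household:
    annual_income: float   # declared yearly income
    dependents: int        # number of dependent persons


INCOME_THRESHOLD = 20_000.0         # invented eligibility ceiling
PER_DEPENDENT_ALLOWANCE = 2_500.0   # invented per-dependent adjustment


def eligible_for_benefit(h: Household) -> bool:
    """Return True if the household falls under the (invented) income ceiling.

    Encoding the rule this way exposes every assumption (threshold,
    per-dependent adjustment) to public reading, versioning and audit.
    """
    ceiling = INCOME_THRESHOLD + PER_DEPENDENT_ALLOWANCE * h.dependents
    return h.annual_income <= ceiling


if __name__ == "__main__":
    print(eligible_for_benefit(Household(annual_income=23_000, dependents=2)))  # True
    print(eligible_for_benefit(Household(annual_income=30_000, dependents=1)))  # False
```

Even this toy example shows both sides of the argument: the formalized rule can be read, versioned and audited, yet nothing in the code itself guarantees that the formalization is faithful to the statute, which is precisely why releasing code alone does not establish trust.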

Proposal

Expand the current discussion around "explainability of AI".

Define and compare interpretability, explainability and code transparency.

Explaining how a machine makes decisions is fundamental, but still limited...

Give primacy to data: how it is generated and how it feeds the model (a minimal sketch follows).
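
As a minimal illustration of this primacy of data (a hypothetical sketch, with all names and numbers invented): the decision rule below is fully open and trivially readable, yet the outcome it produces is determined entirely by the data used to derive its threshold.

```python
# Hypothetical sketch: the same fully transparent decision rule produces
# different outcomes depending on the data it is fitted on.
# The data sets and the applicant score are invented for illustration.

from statistics import mean


def fit_threshold(historical_scores: list[float]) -> float:
    """Derive a cut-off as the mean of past scores (a deliberately simple choice)."""
    return mean(historical_scores)


def decide(score: float, threshold: float) -> str:
    """Open, one-line decision rule: accept at or above the threshold."""
    return "accept" if score >= threshold else "reject"


if __name__ == "__main__":
    data_a = [4.0, 5.0, 6.0]   # one (invented) historical data set
    data_b = [7.0, 8.0, 9.0]   # another (invented) historical data set
    applicant = 6.5

    # Same code, same applicant, different data -> different decision.
    print(decide(applicant, fit_threshold(data_a)))  # accept (threshold 5.0)
    print(decide(applicant, fit_threshold(data_b)))  # reject (threshold 8.0)
```

Publishing the decision function is therefore not enough; how the historical data was generated, selected and fed into the model is where the epistemologically decisive choices lie.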

Discussion

Discussions on formalization and automation of decision making processes are not new.

Important epistemological debates on these issues took place by the mid-20th century.

These discussions are often assigned to "continental philosophy", but they are of great benefit for approaching current challenges in AI.

One of the central themes in 20th-century "French theory"/post-structuralism is the critique of the self-sufficiency of structure.

An epistemological approach to the problem: the need to focus on the conditions of possibility of the structure, namely, the data sets.