At Blue J, we're delighted to offer this blog series providing an exclusive sneak preview of the highly anticipated book, The Legal Singularity. The book explores how society can harness artificial intelligence (AI) to make the law "radically better" and achieve a vision of the law as fully comprehensive and predictable. One illustration of our commitment to advancing the legal industry is the development of Ask Blue J, our upcoming AI-powered tax Q&A bot similar to ChatGPT. We're thrilled to be a part of this exciting future and look forward to continuing the conversation with you.
Exploration of Chapter 9: Towards Ethical and Equitable Legal Prediction
If you missed our summary of the preceding chapter, we encourage you to read it here.
The ninth chapter of The Legal Singularity highlights how algorithmic decision-making (ADM) tools can perpetuate inequality and emphasizes the need for their responsible development. Rather than calling for these tools to be abandoned, the authors argue that confronting the potential harms of ADM can advance equality and fairness, particularly where our innate biases make such progress difficult to achieve.
The authors introduce a framework for evaluating the social problems related to AI and ADM, categorizing them into two types: reflection and amplification problems, and techno-epistemic problems. Reflection and amplification problems project current or historical social issues into the future, potentially reifying or accelerating harmful biases. Techno-epistemic problems, by contrast, either are unique to AI-enabled prediction or are created by the use of AI in legal settings, posing new challenges for the legal landscape.
The decontextualization of data, and of legal data in particular, is critical to understanding these problems. When legal data is reduced to simplistic two-dimensional representations and isolated from its social, political, and economic context, it can reproduce harmful hierarchies. As a result, systems may not only mirror existing social biases but also exacerbate them, spreading and embedding those biases in institutions under the cover of technology's perceived objectivity.
Algorithmic affirmative action, which involves manipulating data to counteract discrimination, is a popular proposed solution to decontextualization. The authors find this approach problematic for two reasons. First, the data is already distorted and stripped of context, making it difficult to determine how much correction is needed and increasing the likelihood of errors in the system. Second, manipulating data undermines the empirical evaluation of predictive tools, yielding systems that produce socially desirable outcomes but no longer accurately reflect the underlying data.
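To make the idea of "manipulating data" concrete, here is a minimal sketch of one common pre-processing technique of this kind: reweighing training examples so that a protected attribute becomes statistically independent of the outcome label before a model is trained. This example is our own illustration, not drawn from the book; the toy records, group names, and labels are entirely hypothetical.

```python
from collections import Counter

# Toy training records: (protected_group, outcome_label).
# Both the groups and the labels here are purely hypothetical.
records = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

n = len(records)
group_counts = Counter(g for g, _ in records)   # counts per group
label_counts = Counter(y for _, y in records)   # counts per label
pair_counts = Counter(records)                  # counts per (group, label) pair

# Reweighing: give each (group, label) pair the weight
#   w = P(group) * P(label) / P(group, label)
# so that, after weighting, group membership and outcome are independent.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}

for (g, y), w in sorted(weights.items()):
    print(f"group={g}, label={y}: weight={w:.2f}")
```

Running this gives underrepresented pairs weights above 1.0 and overrepresented pairs weights below 1.0. The model is, in effect, told the world is other than the data says it is, which is precisely the trade-off the authors flag: the resulting system can no longer be evaluated purely against the historical data.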
While some argue that achieving fairness through ADM is impossible as long as social issues are reflected in historical training data, the authors consider this view to be too fatalistic. The decontextualization problem arises when algorithm designers rely on limited data inputs. Therefore, a viable solution is to improve algorithmic design by taking into account a wider range of social context and extra-legal considerations.
Blue J embraces its algorithmic responsibility and strives to avoid creating or perpetuating bias. The data used to generate predictions and to inform the responses of our upcoming AI-powered tax Q&A bot, Ask Blue J, is closely monitored and tested. Moreover, Blue J adheres to human rights norms to prevent negative impacts on parties based on protected characteristics, ensuring a non-discriminatory approach towards achieving the legal singularity.
Stay tuned for our final blog post of the series, which will summarize the main objectives of The Legal Singularity and provide some closing thoughts on the relationship between AI and the law.
The Legal Singularity (published with University of Toronto Press) is set for release in July 2023.
Sign up for the Blue J newsletter today.
Whether you have questions or are interested in booking a demo, we would love to hear from you.