We are on a mission to simplify research & analysis

Law is only becoming more complex; it’s time for better tools for legal analysis


At Blue J, we leverage the power of artificial intelligence and industry-leading legal expertise to help you deliver the answers and insights you need to be successful.

Our platform is designed to improve productivity while enabling your team to focus on higher-value tasks.

University of Toronto

It all started at the University of Toronto in 2014

Our CEO, Benjamin Alarie, was the Associate Dean of the Faculty of Law when he was invited to judge an IBM Watson competition.

He became fascinated by the possibilities of applying AI to tax law and by the opportunity to use machine learning to predict outcomes and make recommendations.

By 2015, we had built our first prototype. In 2016, early-adopter firms began piloting Blue J. Shortly thereafter, we started selling our tax product commercially before expanding into employment, HR, and US law applications.

Leadership team

Benjamin Alarie

Chief Executive Officer

Brett Janssen

Chief Technology Officer

Avi Brudner

Chief Operating Officer

Albert Yoon

Co-Founder

Anthony Niblett

Co-Founder

Suzanne Gratch

VP, Finance

Adam Haines

VP, Product & Customer Experience

Rose Duggan

Director, People & Culture

Peter van Hezewyk

VP, Marketing

Abdi Aidid

Legal Innovation Strategist

Blue J’s charter on algorithmic responsibility

Adopted on February 14th, 2020

Avoid creating or reinforcing inappropriate bias

We rely on established human rights norms for determining what could be inappropriate bias. In particular, we proactively seek to identify and avoid unjust impacts on parties on the basis of protected characteristics such as race, national or ethnic origin, color, religion or creed, age, sex, sexual orientation, gender identity or expression, family status or disability.

Provide transparent data and predictions

We strive for transparency so that our users know how we make predictions. We use algorithms that can be explained to and understood by our users. We are transparent about where our data comes from and provide meaningful explanations for how results are reached. We give users descriptions of the variables that we use in our algorithms.

Improve the human condition

Advances in machine learning will have transformative impacts on how many decisions are made. We promote algorithmic tools that have the potential to improve the lives of individuals and to help institutions function more efficiently, fairly, and transparently.

Insist on high standards of accuracy

We strive to ensure that our products are accurate. Our products must meet internal accuracy standards before they are released to ensure that predictions are of the highest quality.

Continuously monitor outcomes

We monitor the outcomes of algorithmic predictions to ensure continued accuracy and to guard against unintended outcomes. We are mindful of the potential impact of feedback loops and have measures in place to avoid them. We conduct regular testing to ensure that the data used to generate algorithmic predictions are relevant, accurate, and up-to-date.

Allow for human oversight and control

Our algorithms are subject to human oversight and control. Our predictions are designed to assist humans with decision-making. Decisions are made with a human-in-the-loop, and our systems allow for human intervention. To aid in this, we collect feedback from users and respond promptly to reports of unexpected outcomes.

Invest in data quality

We invest in quality at every stage of our processes to produce the most reliable predictions possible. We verify source data and validate models using best-in-class methods.

Safeguard privacy and security

Our users’ privacy is one of our highest priorities. We use current industry-standard encryption technology to safeguard users’ information.