Artificial Intelligence and Professional Regulation: Use of AI by Regulators
This blog is Part 3 of our series, Artificial Intelligence and Professional Regulation.
AI continues to evolve and reshape all aspects of society. Its growing proliferation raises some interesting questions in the context of professional regulation, including:
Should regulators use AI in the course of carrying out their statutory duties, and in particular in making decisions about professionals?
If so, what standards, expectations, or restrictions should regulators implement for their own use of AI?
Benefits and Risks
The use of AI by regulators has the potential to yield significant benefits, including:
Increased Efficiency: AI can automate routine and repetitive tasks, improving the efficiency of a regulator’s processes. Given the legislated deadlines and the volume of investigations and proceedings that many regulators face, this could be a considerable improvement.
Enhanced Accuracy: AI systems can analyze vast amounts of data with high precision, which can help to minimize human errors.
Improved Decision-Making: AI can process complex datasets and generate insights that might not be immediately apparent to humans. Improved pattern recognition and insight generation could assist regulators in making operational decisions, such as those concerning the allocation of resources.
The use of AI by regulators also poses significant risks:
Bias and Fairness: AI systems base their analysis on the data they are trained on. Where that data contains biases, using AI in decision-making can perpetuate or amplify them, creating a risk of unfair or discriminatory outcomes. This could violate a professional’s right to procedural fairness, which includes the right to an unbiased decision.
Hallucinations: AI tools can generate incorrect or misleading legal precedents, case law, or interpretations, as well as inaccurate summaries or analyses, commonly referred to as “hallucinations”.
Accountability: When AI systems make decisions or recommendations, it can be unclear who is responsible for those outcomes. Some generative AI is “unexplainable”, meaning that there is no way to see inside the black box and understand how the AI generated its response. The use of unexplainable AI in administrative decision-making raises procedural fairness concerns, as it could be found to violate the right to reasons.
Case Law and Commentary
While few cases have considered the use of AI by regulators, Haghshenas v. Canada, 2023 FC 464 confirmed that regulators are permitted to use AI as a decision-making aid. The case was an application for judicial review challenging an immigration officer’s decision to deny the Applicant a work permit. The Applicant argued that the decision was not made by the immigration officer, but rather by “Chinook”, an AI program based on Microsoft Excel that was developed and employed by the federal government.
The Federal Court rejected this argument, finding that while the immigration officer inputs data into Chinook and reviews Chinook’s output analysis, the ultimate decision is made by the officer, not by Chinook. The case stands for the proposition that AI may be used as an aid in administrative decision-making, so long as the ultimate decision is made by a human.
Regulators developing a policy on the use of AI may find helpful guidance in the principles the Federal Court has published with respect to its own use of AI. The Federal Court committed not to use AI or automated decision-making tools in judgments or orders without first engaging in public consultation, and, before implementing any specific use of AI, to consult with relevant stakeholders where the tool may affect the profession or the public.
The Federal Court also committed to the following seven key principles in its implementation of AI:
Accountability: Being accountable to the public for any potential use of AI in its decision-making function.
Respect for fundamental rights: Ensuring the use of AI does not undermine judicial independence, access to justice, or fundamental rights, such as the right to a fair hearing.
Non-discrimination: Ensuring AI use does not reproduce or aggravate discrimination.
Accuracy: Using verified and certified data sources for any data processing, whether for judicial decisions or for purely administrative purposes.
Transparency: Authorizing external audits of any AI-assisted data processing methods used.
Cybersecurity: Storing and managing data in a secure technological environment that protects the confidentiality, privacy, provenance, and purpose of the data managed.
“Human in the loop”: Ensuring that all AI-generated outputs are verified by a human.
Conclusion
Given the significant potential benefits that AI can bring, it is reasonable, and likely even advisable, for regulators to explore its implementation. Any implementation, however, must be approached with caution, keeping in mind the risks and best practices outlined above. For assistance in developing or drafting a use-of-AI policy, please contact us.
To read more from our blog series on Artificial Intelligence and Professional Regulation, check out our blog posts on Using AI to Take Chart Notes and Best Practices in Charting.