Skip to main content

Algorithmic Entity Accountability

Objective Ethics of Algorithmically Managed Legal Entities

Published onAug 15, 2022
Algorithmic Entity Accountability

Under the new Wyoming law enabling Decentralized Autonomous Organizations to form as legal entities, it is possible to create an algorithmically managed LLC. One issue this raises is whether an algorithm can perform the functions legally required of an LLC manager. Specifically, under Wyoming statute, an LLC manager “owes to the company and…the other members the fiduciary duties of loyalty and care …” [FN 1].

Meanwhile, earlier this month the United Kingdom published a new and innovative policy direction for developing a regulatory framework intended to foster the safe and effective use of AI “to offer legal [and] financial advice”  [FN 3]. As with the requirements for LLC managers, both lawyers and financial advisors also owe fiduciary duties to clients. Wisely, the UK policy would take an incremental approach to such legal reforms by, among other things, targeting authorization for AI to perform such functions only in narrowly defined and approved contexts and by ensuring the oversight by accountable people. Specifically, the UK policy would require “…accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person - whether corporate or natural” [FN 2]. However, when a “legal person” may itself be an algorithm running an LLC, this safeguard can no longer assume such a legal entity has the adult human capacity of legal and financial professionals to comprehend and correctly exercise fiduciary duties owed.

And so, real and timely questions are arising as to whether or how an AI or other algorithm can minimally be capable of the analysis and judgments necessary to understand and correctly perform such fiduciary duties.

One potential approach to addressing these questions in this emerging legal, regulatory, and public policy context is to start by applying the same tests applied to human professionals, namely professional ethics exams. Such exams offer a relatively objective and longitudinally calibrated basis for testing and evaluating whether the test taker comprehends and can correctly identify when and how to discharge their fiduciary duties.

In the United States, we use an exam called the Multistate Professional Responsibility Examination (MPRE) [FN 3]. Can relevant parts of this exam serve as the basis for a new kind of test suite that can be applied to algorithms as a way to evaluate their minimum capability and fitness to discharge fiduciary duties? There are, for instance, some examples of IQ tests being administered to AI [FN 4]. Could a test harness of some kind be developed to apply existing fiduciary duties exams to the algorithms that would operate a legal entity, provide legal counsel, or offer financial advice?


Update 1: Project Overview and Testimony Videos

Fiduciary AI Test Harness - Project Prospective

Part 1: Flash talk overview of this project, presented at Legal Hackers International Summit, Brooklyn, NY, September 10th, 2022.

Part 2: Testimony on this project, presented to Wyoming Legislature, September 19th, 2022.


NOTE: The above is a rough draft of a blog post that will be refined over time. This version is offered for feedback, critique, and in hopes of catalyzing idea flow on the topic. The current version of this post may be found at: https://www.civics.com/pub/algorithmic-accountability


FOOTNOTES

FN 1: See https://law.justia.com/codes/wyoming/2015/title-17/chapter-29/article-4/section-17-29-409 but note that while this Wyoming legal requirement is typical under the laws of many states, some other states permit the duties to be waived.

FN 2: See https://www.gov.uk/government/publications/establishing-a-pro-innovation-approach-to-regulating-ai/establishing-a-pro-innovation-approach-to-regulating-ai-policy-statement

FN 3: See https://en.wikipedia.org/wiki/Multistate_Professional_Responsibility_Examination and this example practice exam https://www.law.uh.edu/faculty/adjunct/dstevenson/QUESTIONS%20-%20for%20class.pdf and this affirmative statement of the underlying rules https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/model_rules_of_professional_conduct_table_of_contents/

FN 4: See https://www.ijcai.org/proceedings/2019/0846.pdf and https://spectrum.ieee.org/how-do-you-test-the-iq-of-ai

Comments
0
comment
No comments here
Why not start the discussion?