The Journal of Things We Like (Lots)
Roberto Tallarita, AI Is Testing the Limits of Corporate Governance, Harv. Bus. Rev. (Dec. 05, 2023).

Roberto Tallarita’s recent Harvard Business Review article, “AI Is Testing the Limits of Corporate Governance,” insightfully discusses the upheaval at OpenAI last November, when its CEO, Sam Altman, was temporarily ousted by the board, a move quickly reversed to thwart his potential departure to Microsoft with key team members.

Tallarita’s piece exposes the inadequacies of traditional corporate governance mechanisms in managing the unique challenges posed by artificial intelligence (AI). His evaluation of the OpenAI board’s actions rests on two key observations. First, he asserts that conventional corporate governance design is ill-equipped to mitigate the existential risks associated with AI. This shortcoming arises from a fundamental clash between the pursuit of profit and societal goals. In scenarios where financial incentives are as compelling as they have been for a disruptive entity like OpenAI, profit motives are likely to take precedence. Notably, OpenAI diverged from typical governance by securing investments for an entity fully controlled by a nonprofit, a rare approach in the tech sector.

Tallarita’s second point is that, even when an AI firm’s governance is customized to counter profit motives, the pull towards profit remains strong absent carefully crafted rules. The transaction planners at OpenAI did not go far enough, possibly because, as Tallarita suggests, they had no real incentive to craft an airtight prohibition on pursuing profits; after all, investors had made large commitments. As a result, while investors could not formally have a say in the firm, they could, as Microsoft was planning to do, hire Sam Altman directly, which, as Tallarita points out, is essentially akin to “buying” OpenAI without paying its shareholders. This will always be a problem whenever a firm is heavily constrained in its governance design but its talent and knowledge can easily be “acquired and redeployed free from these constraints.” There is more: even if one wrote a perfect contract that fully prevented the company from pursuing profits, Tallarita believes that, in equilibrium, such a company would be much less successful at attracting new capital than firms that remain ambiguous about whether profits could be attained. This is because, obviously, investors are after profits. Hence, even if we all agree that AI should not be developed to cause existential harm to society, because of an inherent “race to profits” driven by how capital markets work, we cannot look to corporate governance to solve AI-related externalities.

This does not mean that corporate governance solutions for AI firms are irrelevant. Tallarita suggests that corporate governance experts should keep experimenting to find ways to combine profit and safety; arguably not an easy task, but an inescapable one. He believes that retaining the profit motive in AI holds more promise than attempting to curb greed and ambition. While he does not offer an overall roadmap to achieve this goal, he suggests that board composition should become a top priority and that AI companies should appoint directors with more diverse viewpoints and greater cognitive distance than ordinary companies, with boardroom norms rewarding time commitment and robust discussion.

While this type of arrangement would lead to improvement, Tallarita warns that corporate governance is ultimately an ineffective policy tool to counter the existential risk posed by AI. Drawing on incomplete contracts theory, he posits that the main safety valve corporate governance offers, assigning residual rights of control over certain assets to one party when an unforeseen circumstance arises, may not work with AI. “[W]hat happens if the AI becomes uncontrollable?” He suggests that the AI firm will have a hard time turning off the machine. Because we cannot rely on corporate action, he recommends deploying “extraordinary legal controls . . . of the kind used to regulate nuclear proliferation or biohazard.” True, “good corporate governance can help in the transitional phase, [but] the government should quickly recognize its inevitable role in AI safety and step up to the historic task.”

Implications for AI regulation

First, the piece is informative and timely for its AI-related implications. As we are still in the initial phases of AI development, the OpenAI board debacle is instructive for future regulatory endeavors. Despite the uneasy relationship between the tech sector and regulation, especially in the U.S., this is hardly the field where U.S. policymakers can hide behind the false choice between digital regulation and innovation. Given AI’s global impact, ideally, a multilateral approach would be most effective. For now, only the EU AI Act imposes stringent regulation. However, it remains unclear whether its unilateral and extraterritorial measures will foster cooperation or, conversely, create regulatory antagonism.

Implications for corporate governance more generally

Tallarita’s commentary examines the role of corporate governance in AI, concluding that it offers limited solutions, especially in mitigating the technology’s inherent risks. His article also highlights the inherent limitations of relying on corporate governance and private mechanisms to address significant societal challenges. This is a point well worth making, for the limitations are daunting. To see why, consider the practical mechanics of relying primarily on private ordering and corporate governance to solve our society’s problems. We could leave our existing governance arrangements in place and remit implementation to the discretion of the board of directors, continuing to rely on an incomplete contracting framework. Alternatively, we could write more complete contracts, thereby imposing social directives on the board.

There are precise reasons why incomplete contracts work reasonably well in generating value for investors. Boards benefit from discretion and minimal judicial oversight, especially in decisions without conflicts of interest or changes in control. This latitude exists largely because managerial and investor interests often align, a result of pressures and scrutiny from capital, labor, and corporate control markets. Additionally, executive compensation structures provide strong incentives to maximize shareholder value. However, replicating this alignment towards non-profit goals presents challenges. Typically, managers are rewarded more for increasing shareholder value than for achieving other objectives. Despite ESG-focused experts’ attempts to rectify this, existing compensation models still struggle to advance broader societal or environmental goals (as Tallarita himself and Lucian Bebchuk note elsewhere). If alignment does not work, board control cannot be relied upon, which is exactly what Tallarita warns about with respect to AI risk.

Therefore, in the absence of breakthroughs on how to recalibrate managerial incentives (via compensation or otherwise), the only viable way to adapt corporate governance towards the societal goals we want corporations to pursue is to write more specific contracts. Whether this strategy will succeed hinges largely on corporate law practitioners. However, a persistent concern is ensuring fair and robust representation of societal interests at the negotiation table. Tallarita’s article suggests that corporate lawyers, owing to their allegiance to paying clients like management or investors, may not effectively champion these interests. Moreover, there is a noticeable absence of precedents to guide us. Sure, we must experiment, but how? Via stakeholder-appointed directors, possibly with veto power over certain sensitive matters? Adopting bonding mechanisms such as green pills to protect the climate? Providing standing to sue derivatively to certain classes of stakeholders? Recalibrating executive compensation? Explicitly expanding fiduciary duties and limiting exculpatory provisions? The list could go on, and these questions are likely to persist, presenting ongoing challenges for corporate planners. Certainly, Tallarita’s stimulating work will come in handy.

Cite as: Matteo Gatti, What Corporate Governance for AI?, JOTWELL (June 13, 2024) (reviewing Roberto Tallarita, AI Is Testing the Limits of Corporate Governance, Harv. Bus. Rev. (Dec. 05, 2023)), https://corp.jotwell.com/what-corporate-governance-for-ai/.