One of the most consequential challenges confronting corporate governance in the near term will be its ability to exercise informed oversight of the application of artificial intelligence (“AI”) within the organization. This challenge will arise regardless of the industry sector in which the company operates, and regardless of how it applies AI in its operations.
The essence of the challenge is the rapidly emerging conflict between the perceived societal and commercial benefits arising from AI implementation, and the societal and institutional risks arising from its use. The need to address the challenge is urgent; the competing interests of benefit and risk are hurtling at each other at hypersonic speed.
While the challenge is certain to arise at some point at the government/regulatory level, it is likely to arise more immediately at the corporate, operational level. And the governing board, with its strategic and risk management portfolios, is the most appropriate platform from which companies may resolve the challenge for the benefit of all corporate constituencies.
Nowhere is this risk/benefit conflict better demonstrated than in the health care sector, which is widely acknowledged for leveraging research and innovation to achieve advances and efficiencies in patient care and treatment.
A recent feature in The Wall Street Journal reflected on the “vast array” of AI projects currently being pursued by hospitals and health systems. These projects range from the use of AI algorithms to process mountains of data available in electronic medical records; to enhancements in radiology imaging that can lead to early detection and more responsive treatment; to reducing infection and cardiac arrest/stroke risks in the emergency room and intensive care unit. As seen through this lens, AI offers exciting promise for improving patient care and safety.
Yet, as the Journal feature noted, this great promise is offset in part by algorithmic flaws and other undetected risks or application failures that could result in severe patient harm. These concerns are exacerbated by limitations on the ability of researchers, clinicians and administrators to promptly identify and respond to breakdowns in AI application. Moreover, the sheer volume of available AI models may overwhelm the ability of staff to effectively understand and manage them in the interest of the patient.
Then, of course, there is the broader range of AI concerns with accuracy and fairness that may arise regardless of industry sector, such as concerns with privacy protection and with endemic racial, gender and age biases that could carry significant personal costs.
But legitimate concerns regarding AI management, failure and risk—and the need for government regulation—are often countered by similarly legitimate concerns with the need for moderation in risk management and regulation. Commercial thought leaders view technology as a leading source of competitive advantage in the global marketplace and an emerging pillar of the US economy. AI innovation is seen as an important component of that technology.
In that context, timid, circumspect or excessive efforts to review, evaluate and regulate AI applications are seen by innovation supporters as particularly harmful, to the extent that they may unintentionally frustrate the realization of the social promise of AI. For that reason there are persistent concerns that risk intolerance not become the standard for reviewing and approving AI development.
The importance of this risk/benefit conflict has been recognized by several leading US commercial policy organizations. For example, earlier this year the prominent Business Roundtable introduced its “Roadmap for Responsible Artificial Intelligence,” a set of principles intended to guide businesses in their implementation of “responsible” AI. These include recommendations on adapting existing governance structures to account for AI.
Similarly, the well-known US Chamber of Commerce has launched an “Artificial Intelligence (AI) Commission on Competition, Inclusion and Innovation,” intended to advance US leadership in the use and regulation of AI technology. The expectation is that the Commission will recommend “durable, bipartisan AI policy solutions” intended to support innovation while fostering fairness in the deployment of AI.
It is against this backdrop that the board should confront its role with respect to AI oversight, and its ability to resolve the risk/benefit conflict as it relates to AI use within the company. While the board may seek input from AI experts and proponents on the management team (eg, the CIO), it should not delegate to them the ultimate resolution of the conflict. It must serve as a “fair broker” in resolving the competing interests.
Yes, AI is a highly complex topic and yes, it may be very difficult for the average director to grasp the nuances and risks of AI implementation. But AI is a critical component of every company’s strategic vision, and oversight of that vision is a primary function of governance. Indeed, the board’s basic sense of judgment and ability to “see the whole field” well positions it to navigate the company between the Scylla of reckless innovation and the Charybdis of stifling circumspection.