
Senior industry leaders need to learn about AI


The writer is a member of Epstein Becker Green in the Washington, DC, office and serves as Chairman of the Board and Chief Data Scientist of EBG Advisors, Inc.


Imagine this. You are President of the United States. It's your dream job, because you have more power than anyone else in the world, and nobody ever criticises you. It's nothing but four years of Nirvana (the transcendent state, not the Seattle grunge band).


In walks your Secretary of State to tell you about a new policy that the US adopted to impose sanctions on France to address the fact that their wine tastes too good. Apparently, the French didn't take it very well, and are retaliating with their own sanctions. You ask who put the US policy in place, and the Secretary explains that it was Jake, a junior analyst on the France desk. You ask why such an important decision was made by a junior analyst, and the Secretary explains, "He knows French."


Honestly, this happens every day in corporate America, but instead of US/France sanctions, it's the adoption of algorithms that perform important business functions and that, when done incorrectly, can lead to liability. For example, when algorithms that facilitate the selection of qualified individuals for employment, promotion, or credit, or for the provision of medical care, government services, or even entrance into office buildings, are created in a way that could lead to adverse disparate impact on racial and ethnic minorities, women, or other protected groups, such algorithms, not surprisingly, may violate the law. Further, quite apart from discriminatory impact, algorithms that simply do not work as intended can cause injury and give rise to actionable claims.
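
To make the disparate-impact point concrete, here is a minimal sketch, in Python and with hypothetical numbers, of the "four-fifths rule" screen that US enforcement agencies commonly apply as a first test for adverse impact in selection procedures: a protected group's selection rate below roughly 80 per cent of the most-favoured group's is conventionally treated as evidence of adverse impact.

    # Illustrative sketch of the EEOC "four-fifths rule" screen for
    # adverse disparate impact; the applicant and selection counts
    # below are hypothetical.
    def selection_rate(selected: int, applicants: int) -> float:
        return selected / applicants

    reference_rate = selection_rate(selected=30, applicants=100)
    protected_rate = selection_rate(selected=18, applicants=100)

    impact_ratio = protected_rate / reference_rate  # 0.18 / 0.30 = 0.6
    if impact_ratio < 0.8:
        # Below the four-fifths threshold: the algorithm's selections
        # warrant closer legal and statistical scrutiny.
        print(f"Potential adverse impact: ratio = {impact_ratio:.2f}")

A passing ratio does not immunise an algorithm, and a failing one is not conclusive proof of discrimination, but a leader who understands this simple arithmetic can at least ask the right first question.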


In either event, there is substantial risk of federal or state enforcement action. Consider:


• The US Equal Employment Opportunity Commission and similar agencies have explained that deficient AI can violate employment discrimination laws (Commissioner Keith Sonderling spoke at a September 1 webinar, sponsored by the EEOC's Chicago, Houston and Miami Districts, on "The EEO Implications of Using Artificial Intelligence and Machine Learning in Employment Decisions").


• Defective algorithms can violate federal and state fair credit and consumer protection laws. For example, according to a Federal Trade Commission report, "Big Data: A Tool for Inclusion or Exclusion?", "one credit card company settled FTC allegations that it failed to disclose its practice of rating consumers as having a greater credit risk because they used their cards to pay for marriage counseling, therapy, or tire-repair services, based on its experiences with other consumers and their repayment histories."


• Poorly designed and inadequately tested algorithms used by customers can result in class-action product liability suits and in litigation brought by the government or by private attorneys general.


• Algorithms that are not transparent, or that deceptively mislead consumers in advertising, can run afoul of various federal and state unfair trade practice prohibitions.


• Delegation of data preservation and access to deficient AI, or the improper use of private data, can implicate federal law, state law (e.g., the California Privacy Rights Act), and even international law (the EU's General Data Protection Regulation), and result in ruinous fines.


• Erroneous AI decision-making regarding government claims submission, e.g., with respect to health care reimbursement and government contracts, could result in treble-damage liability under laws like the federal False Claims Act.


• Algorithms that drive medical practice, if insufficiently designed and tested, can violate US Food and Drug Administration requirements and lead to claims ranging from unlawful discrimination to medical malpractice.


Andrew Smith, then Director of the FTC's Bureau of Consumer Protection, perhaps best summed up the government's view of AI in an April 8, 2020, blog post, explaining that "the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability."


And these are just examples of legal liability. I have left out obvious business risks, including reputational harm of the kind Microsoft suffered with its Tay chatbot, which, shortly after launch, was coerced by users into racist rants on Twitter.


But here's the truly disturbing part. When a problem like these arises, it becomes apparent that the algorithm was put in place by Jake in the IT department. Why? Because he knows Python.


Senior industry leaders, including in-house counsel, presumably have qualifications for making important decisions that involve complex strategy and entail bet-the-farm outcomes. They have years of experience, and that experience typically covers a broad range of scenarios producing a wide range of risks that need to be navigated. But all too often today, that expertise does not include understanding AI and the risks associated with it. Senior leaders and counsel shy away because a reasonable understanding requires some measure of math, statistics and computer science. And that is scary stuff to someone who's decades beyond school.


I am 60 years old, and I faced that problem. In my day job, I advise those developing or using AI on legal and regulatory requirements. Unfortunately, though, I didn't understand how these algorithms really worked. To address that deficit, I went back to the University of Michigan for an online Master of Applied Data Science. I had never written a line of code in my life.


That was almost three years ago. I will graduate this December.


Going back to school at an advanced age can be terrifying. At times, I also found it humiliating. I clearly knew less than most of my classmates (who were, by the way, roughly the same age as my children). But I do have something that most millennials don't — bad knees. That meant I could sit in a chair for endless hours and work on homework without the temptation to do something else. And the pandemic helped by keeping me at home.


In order to participate in any discussion, you need to understand the vocabulary. Like any technical subject, data science certainly has its share of esoteric terminology and acronyms. It's important both to know what the words mean and to develop an intuitive understanding, so that you can ask intelligent questions. - Reuters

