54% of people who intend to use AI have their sights set on the generative side, anticipating tools like ChatGPT and its capacity to deliver paragraphs or pages of text from well-worded prompts.
Artificial intelligence has been great for everything, except autocorrecting my name. Even when the software knows that I am a human and a woman, it still tries to turn me from Paola to Paul or transform me into a koala. Like it or not, AI is here to stay.
AI has been an active part of our lives for longer than we realize. It’s that way with technologies that are suddenly branded as “new.” Most of our “new” tech has been a decade or more in the making. And by the time it’s everywhere, it has already been in many places for two or three decades.
If you want to know how long artificial intelligence has been in use and in what capacities, just ask AI.
Most chatbots point to 1956 as AI’s founding year. Ideas and applications came of age during the 1970s and 1980s. During the 1990s, as the internet was expanding—after it, too, had been in the making for decades—machine learning took off and AI accelerated. So, before it was everywhere, AI was here.
Artificial intelligence is becoming increasingly ubiquitous in business. As we travel into this new world of apps and algorithms that make decisions and perform background and critical services, we’re bringing along many undesirable stowaways.
AI is picking up our biases. It’s even reviving biases we called out a while ago and thought we had all but eradicated. If business leaders need to know a few things about AI, the propensity for bias should top the list.
If for no other reason, leaders should come up to speed on AI bias because a backlash of litigation is beginning to engulf prominent AI businesses. These companies’ AI tools are tainted with many of the same prejudices that protective legislation was written to address. The algorithms powering the tools under scrutiny are often rolling back the calendar and reintroducing discrimination against groups whose victimization required civil rights legislation. How?
If an AI tool simply did what it was told (programmed) to do, then AI behaving badly would point to some bad lines in the code. We would wonder who wrote them in the first place, then breathe a sigh of relief because that sounds like an easy fix. But that’s not how artificial intelligence works.
Remember machine learning? AI learns. And unless it is extensively trained and exposed to diverse data, it will absorb and apply the worst that our societies have had to offer. Left unchecked, AI can resurrect many of the trends and practices that we thought we had left behind. And it’s already happening.
Leaders whose organizations build or use AI should pay particular attention. And that’s everyone.
AI has the capacity to detect and eliminate or circumvent bias. This quality has to be trained into the tool. Initial programming has to be repeatedly layered over with exposure to diverse scenarios, individuals, and datasets. Without intensive training, testing, and revision, AI picks up on patterns and integrates them as conveniences and shortcuts. AI, after all, is part of our automation toolbox.
We expect people to simplify their work, especially routine tasks. AI does the same thing. The bad behavior of a bank employee who uses gender or race to make loan determinations carries over into any AI trained on those decisions. And here is the thing: bias may be more visible in human-to-human encounters and less detectable in human-to-machine or machine-to-machine encounters. But the data exposes the behaviors.
AI will only be as good as its training and periodic review. Once leaders begin to think of AI tools as wholly digital employees, leaving them to work unchecked will no longer be an option. We regularly inspect tools of automation in industrial and manufacturing settings, but the unseen nature of AI can give it an out-of-sight, out-of-mind quality. Abandoning that way of seeing AI may be the best decision leaders can make.
Monitoring outcomes across every use of their AI tools and acting promptly on evidence of bias could have spared several companies time, money, and litigation.
Wherever bias operated in the past or lingers in the present, algorithmic bias has slipped in. Look at AI in employment, housing, education, healthcare, banking, credit, and finance and you’ll find biased algorithms and AI tools becoming bad actors.
It’s simple. If these tools “see” the usual percentages of individuals occupying certain roles in the datasets they are fed, they begin to make assumptions about who should be where. The tool is simplifying, and results may be discriminatory.
A UC Berkeley study of discrimination in consumer lending found that biased mortgage algorithms cost Black and Brown borrowers $765 million more per year in interest than White borrowers. Without proactive steps to address these patterned issues, businesses may build AI tools that revictimize the most vulnerable populations.
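To see the mechanism at work, consider a minimal sketch in Python. It uses entirely synthetic data and hypothetical feature names (income, zip_code_index); nothing here reflects any real lender’s data or system. The point is that even when the protected attribute is excluded from the model, a correlated proxy feature lets the model rediscover the historical bias baked into its training labels.

```python
# A minimal sketch (synthetic data, hypothetical feature names) showing how
# a model trained on historically skewed outcomes reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute: 0 or 1. It is never given to the model directly.
group = rng.integers(0, 2, size=n)

# "zip_code_index" is a proxy feature that correlates with group membership,
# the way neighborhood often does in real lending data.
zip_code_index = group + rng.normal(0, 0.5, size=n)
income = rng.normal(50, 10, size=n)

# Historical approvals were biased: at identical incomes, group 1 had lower
# odds of approval. The labels encode the old discrimination.
logit = 0.08 * (income - 50) - 1.2 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train only on the "neutral" features -- the protected attribute is excluded.
X = np.column_stack([income, zip_code_index])
model = LogisticRegression().fit(X, approved)

# The model still approves group 1 less often: it has rediscovered the bias
# through the proxy feature.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

Running the sketch prints a noticeably lower approval rate for group 1, even though the model never saw the group label. The proxy did the work, which is exactly why “we removed race and gender from the inputs” is not a defense.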
Delegating tasks to AI tools and leaving them unsupervised may invite correction through litigation. Proactive correction means retraining. Leaders can adopt policies that detect and mitigate biases in their AI tools.
Many strategies can reduce the likelihood of biases taking hold. Detection is a multigroup effort.
AI tools have to be rigorously and exhaustively tested by diverse groups. Tools that operate self-driving vehicles have to learn to recognize a fully representative array of pedestrians, including children and individuals with different mobilities and movement patterns. ChatGPT has to learn that Australians are not automatically bad tenants.
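One concrete detection policy is to audit outcomes by group as decisions are logged. The sketch below is hypothetical (the function and field names are invented), but the screen it applies is real: the four-fifths (80%) rule that U.S. regulators have long used to flag disparate impact. If any group’s selection rate falls below 80% of the most-favored group’s rate, the tool gets flagged for review.

```python
# A minimal outcome-audit sketch, assuming each decision is logged with the
# applicant's group. Applies the four-fifths (80%) rule as a screen for
# disparate impact; names here are hypothetical.
from collections import defaultdict

def disparate_impact_ratios(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)

    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Ratio of each group's selection rate to the most-favored group's rate.
    return {g: rate / best for g, rate in rates.items()}

log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
for group, ratio in disparate_impact_ratios(log).items():
    flag = "  <-- below 0.8, investigate" if ratio < 0.8 else ""
    print(f"group {group}: impact ratio {ratio:.2f}{flag}")
```

A check this simple is not a legal determination, but running it on every batch of real decisions turns “monitor outcomes” from a slogan into a routine that surfaces problems before a plaintiff’s lawyer does.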
AI does not know anything new. It automates and packages our best and worst thinking. Fortunately, AI is still young and teachable. Industry leaders can fix these issues. Leaders should be excited about the opportunity to help shape how this not-so-new technology treats people as it shapes our future.
1. AI bias may not indicate a company’s direct or intentional malice, but businesses and related players can get drawn into the backlash against a piece of software.
2. Develop and maintain an understanding of AI bias as part of the obligation to protect your organization from litigation or reputational damage.
3. Make sure you know the history of an AI tool and are aware of any reports of bias before deciding to use it anywhere in your business.