How much power are organizations and their leaders willing to hand over to the machines? Quite a bit, actually – but no one is ready to hand over the keys to their entire operations to artificial intelligence. At issue is the data that fuels AI decisions and actions, as well as concerns about erroneous “black-box” decisions and bias. The motto for business leaders is trust, but verify.
In a recent survey, while 61% of executives said they “fully trust” the reliability and validity of their AI outputs, 40% do not believe their company’s data is yet ready to achieve accurate outcomes. The survey, conducted for Teradata by NewtonX this spring, finds the most important factors behind trust in AI are reliable and validated outcomes (52%), consistency or repeatability of results (45%), and the brand of the company that built the AI (35%).
How much, then, are executives willing to hand over to AI? The technology is not ready for lights-out operations just yet, industry leaders state. “In general, we believe that there needs to be a human in the loop to review and advance AI generated recommendations — be they related to predictive maintenance, logistics optimization, supply chain optimization, production optimization, fraud detection, whatever,” says Tom Siebel, CEO of C3 AI, and founder of Siebel Systems, now a part of Oracle. “We believe the responsible deployment of AI in the enterprise will require human supervision today and in the future.”
Despite immense pressure to unlock potential productivity and efficiency gains, “tread carefully with AI solutions,” advises Binny Gill, founder and CEO of Kognitos. There are well-documented instances of hallucinations with generative AI, he notes, and many organizations “also have deep-seated trust issues with AI as a whole.”
Currently, trust in AI outputs, whether operational or generative, “is limited,” agrees Junaid Saiyed, CTO at Alation. “In the case of AI-suggested outputs, there is potential for contextual misunderstanding, biased results, or hallucinations.” The ability to deliver greater trust depends on “the governance of the data used, and the level of risk involved.”
While errors in an email campaign may be negligible, “human oversight is critical in high-stakes industries like insurance, where algorithmic bias or exposure to PII can lead to severe repercussions,” Saiyed adds.
How can AI proponents begin to build the trust needed for more autonomous, but still human-guided, systems?
To address the trust issues around AI, companies need to be clear, open, and guarded about their use of AI in decisions, Siebel says. “All AI-generated recommended actions should include a complete evidence package that explains the basis for the recommendation and the underlying factors that contribute to the recommendation,” he urges. “The workflow should require an explicit human approval to advance any recommendation into action.”
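In code, the gate Siebel describes might look something like the minimal sketch below, where a recommendation carries its evidence package and the workflow refuses to act without an explicit human sign-off. All names here are illustrative assumptions, not C3 AI’s actual API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """An AI-generated recommended action plus the evidence behind it."""
    action: str                          # e.g. "replace bearing on pump 7"
    rationale: str                       # plain-language basis for the recommendation
    contributing_factors: dict           # underlying signals and their values
    approved_by: Optional[str] = None    # set only by an explicit human decision

def execute(rec: Recommendation) -> None:
    # Hypothetical downstream executor; reached only after approval.
    print(f"Executing '{rec.action}', approved by {rec.approved_by}")

def advance(rec: Recommendation, reviewer: str, approve: bool) -> bool:
    """Workflow gate: no recommendation becomes an action without sign-off."""
    if approve:
        rec.approved_by = reviewer
        execute(rec)
        return True
    return False   # rejected recommendations never reach execution

rec = Recommendation(
    action="reorder part #A-113",
    rationale="Forecast shows stock-out in 9 days",
    contributing_factors={"lead_time_days": 14, "on_hand_units": 40},
)
advance(rec, reviewer="ops.manager@example.com", approve=True)
```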
For an illustration of the need for human-guided AI interactions, look to the experiences with self-driving cars, says Gill. “The driver may trust the AI in the car, but the ability to take over control of the steering wheel when the machine is stuck or when the driver is not comfortable is a must.” Similarly, Gill continues, “a better steering wheel will be required for businesses. Business users need to be able to review how AI is going to affect the world around them — their financial books, email, or business processes — before the actions happen.”
The best way to achieve this “is for the AI engine to propose a plan in natural language that the human can review before handing it off to a diligent AI system,” Gill adds.
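Gill’s “better steering wheel” could be sketched as a propose-review-execute loop: the plan is surfaced in plain language, along with the systems each step would touch, and nothing runs until the human agrees. This is a hypothetical illustration, not Kognitos’s implementation:

```python
# The AI proposes a plan in plain English; the human sees exactly what it
# would affect before handing it off to the execution engine.
def propose_plan() -> list[dict]:
    # Stand-in for an AI planner; each step names its real-world effect.
    return [
        {"step": "Draft refund email to customer #4821", "touches": "email"},
        {"step": "Credit $120 to invoice INV-9917", "touches": "financial books"},
    ]

def run_step(step: dict) -> None:
    print(f"Executing: {step['step']}")

def review_and_run(plan: list[dict]) -> None:
    for i, step in enumerate(plan, 1):
        print(f"{i}. {step['step']}  (affects: {step['touches']})")
    if input("Hand this plan to the execution engine? [y/N] ").lower() == "y":
        for step in plan:
            run_step(step)
    else:
        print("Plan discarded; nothing was changed.")

review_and_run(propose_plan())
```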
This is scalable and sustainable, says Siebel, noting that within C3’s transactions, “hundreds of AI-recommended actions are reviewed and rejected by human intervention every day; thousands — perhaps tens of thousands — are reviewed and promoted into action after management review.”
The tagline for AI-enhanced processes should be “machine suggested, human verified,” Saiyed says. This requires “clear monitoring roles and transparency in AI models for easy interpretation. Regular audits are crucial to correct errors and biases, while robust feedback mechanisms facilitate continuous improvement.”
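One way to make “machine suggested, human verified” auditable is to log every suggestion alongside the human verdict, so regular audits can measure how often, and where, the machine is overruled. A minimal, illustrative sketch, with invented names and data:

```python
from collections import Counter
from datetime import datetime, timezone

audit_log: list[dict] = []   # in practice: durable, append-only storage

def record_verdict(suggestion: str, model: str, verdict: str, reviewer: str) -> None:
    """Log a machine suggestion together with its human verification."""
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "suggestion": suggestion,
        "verdict": verdict,      # "accepted" | "rejected" | "corrected"
        "reviewer": reviewer,
    })

def audit_summary() -> Counter:
    """Regular audits: how often is the machine overruled?"""
    return Counter(entry["verdict"] for entry in audit_log)

record_verdict("flag claim #771 as fraud", "fraud-v3", "rejected", "analyst.kim")
record_verdict("flag claim #772 as fraud", "fraud-v3", "accepted", "analyst.kim")
print(audit_summary())   # e.g. Counter({'rejected': 1, 'accepted': 1})
```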
Tracking data lineage is also critical, especially for compliance with regulations such as GDPR, CCPA, HIPAA, and emerging AI rules, Saiyed adds. “Organizations must unify and trust data quality, focusing on metrics like freshness, accuracy, and completeness. This foundation is essential for building reliable AI models, mitigating risks, and empowering users to leverage trusted data effectively.”
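A hedged illustration of what such lineage tracking and quality gating might look like, with hypothetical freshness and completeness thresholds, follows:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical lineage record: where a dataset came from, what was done to
# it (useful for GDPR/HIPAA review), and how fresh and complete it is.
dataset = {
    "name": "claims_2024",
    "source": "warehouse.claims_raw",         # upstream table, for lineage tracing
    "transforms": ["pii_masked", "deduped"],   # steps applied along the way
    "last_refreshed": datetime.now(timezone.utc) - timedelta(hours=6),
    "null_fraction": 0.02,                     # completeness proxy
}

def fit_for_training(ds: dict, max_age_hours: int = 24, max_nulls: float = 0.05) -> bool:
    """Quality gate: block stale or incomplete data from reaching a model."""
    fresh = datetime.now(timezone.utc) - ds["last_refreshed"] < timedelta(hours=max_age_hours)
    complete = ds["null_fraction"] <= max_nulls
    return fresh and complete

if fit_for_training(dataset):
    print(f"{dataset['name']} cleared for model training")
else:
    print(f"{dataset['name']} failed freshness/completeness checks")
```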
Finally, every human close to the AI process should have direct authority to revise or stop an AI transaction. “Our AI tools can be consistently reviewed and overridden by humans, whether through GenAI applications that propose responses or AI-driven search engines,” says Saiyed. “This ongoing human oversight ensures that AI is a supportive tool rather than an infallible authority.”
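A simple pattern for that kind of direct authority is a stop signal the transaction checks between steps, so a human can halt it mid-flight. The sketch below uses Python’s standard threading primitives; the scenario is invented for illustration:

```python
import threading

halt = threading.Event()   # any authorized human can set this at any time

def run_transaction(steps: list[str]) -> None:
    """Execute an AI-driven transaction, checking for a human stop between steps."""
    for step in steps:
        if halt.is_set():
            print("Halted by human override; remaining steps skipped.")
            return
        print(f"AI step: {step}")
    print("Transaction completed.")

run_transaction(["score claim", "draft payout"])    # runs to completion
halt.set()                                          # a human pulls the cord
run_transaction(["score claim", "issue payment"])   # stopped before anything runs
```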
Across any process or task, “there will always be a responsible human,” says Gill. “In the case of an airplane flying on autopilot it is the human pilot. In the case of a car, it is the person in the driver’s seat. In the case of a factory with a large assembly line, it is the worker on the floor who is overseeing the widget quality. In the case of business processes, it is the person who is the subject matter expert and is tasked with handling the cases when Trusted AI cannot make the right decisions and seeks help.”