Todd Moore is Vice President of Encryption Solutions at Thales Cloud Protection & Licensing.
It’s clear that generative AI isn’t just the next big thing; it looks to have real staying power. Generative AI answers straightforward questions, creates imagery and produces new written and audio artifacts. Many of its potential use cases are still in the early stages, with new ones emerging daily.
Historically, AI technologies were classifiers; for example, systems could be trained to distinguish between two different images. Generative AI, by contrast, is an umbrella term for processes that automatically create something that doesn’t yet exist in the real world, based on the data they’ve been trained on. It’s this creative capability that’s set to be transformative.
Wide-Ranging Impacts
Gartner analysts have predicted that “more than 30%—up from zero today—of new drugs and materials [could] be systematically discovered using generative AI techniques” by 2025. Salesforce analysts, meanwhile, found that 67% of senior IT leaders globally are prioritizing generative AI for their businesses between now and mid-2023. We’ve already seen examples of AI technology being developed to help narrow down potential drug formulations and match drugs to patients. The automation and decision-making power of generative AI is also likely to drive advances in marketing and communications, software development and other areas where the ability to autonomously complete business and IT processes could be hugely beneficial.

With this power comes great responsibility. Although the potential use cases offer all kinds of opportunities, they could also have disastrous consequences in the wrong hands. Many organizations have already issued blanket prohibitions, or at least guidelines, on the use of generative AI, encouraging staff to exercise caution and be mindful of copyright, accuracy and data privacy concerns. Bad actors, too, are closely analyzing these technologies to see how they can benefit from this disruption.
Legislative Responses
Governments worldwide are already taking steps to establish guidelines around usage. The U.S. is currently working through responses to a consultation on what a framework should look like, while meeting with industry leaders and key players to discuss the potential risks. The EU is working on similar proposals for a code of conduct. In the meantime, some countries have moved to ban platforms temporarily while these frameworks are confirmed, with the Italian government ordering ChatGPT’s developer OpenAI to temporarily stop processing Italian users’ data until GDPR compliance is reached.
Although generative AI tools have been around for some time, only recently has this technology been made widely available for anyone to use. The pace at which the technology and its use cases are advancing makes it a challenging environment in which to issue guidelines and legislation, but understanding both the risks and the rewards can unlock significant business value.
From Breach Risks To Trust Concerns
The main concerns around generative AI boil down to trust and security. Because AI models depend entirely on the data they’re trained on, they can easily produce outputs that are biased or factually incorrect. These errors can be difficult to spot, and they can inadvertently further entrench biases that unfortunately already exist within society. In response, as many as 79% of those surveyed within organizations across all levels support AI regulation, according to BCG, with 71% believing that the rewards outweigh the risks.
AI-generated outputs can also make cybercrime more lucrative and convincing, whether that means launching a social engineering attack, fine-tuning malware code to make it harder to detect or using AI to generate and share guidelines, advice and tutorials among cybercriminals. Interestingly, the defense against these risks may come from AI itself: other tools can perform linguistic analysis and syntax detection to reverse-engineer text, imagery and video and flag suspect content.
Establish Clear Usage Policies
As businesses experiment with implementing AI models, they can’t simply put them in place and then forget about them. Ongoing work is required to review the decisions these models make and to ensure harmful outputs and toxicity are minimized. That means establishing clear principles around usage to make ethical development a reality.
For example, humans need to be involved in reviewing all the datasets and documents used to train models, as well as in removing biased and false elements. Businesses should only use data that customers share proactively or that the business collects directly; otherwise, they risk undermining accuracy and trust.
Alongside any work to implement generative AI models of their own, organizations also have a responsibility to educate their workforce and customers about the cyber risks these technologies can pose. Everyone needs to stay mindful of impersonation scams, phishing and other techniques that generative AI has the potential to make even more convincing.
Generative AI is now mainstream, and we’ve only just begun to realize the impact it will have on our lives. This period of rapid transformation makes it tempting for businesses to jump on the bandwagon quickly, but without clear frameworks around development and usage, the business consequences could be disastrous. Businesses and consumers must stay vigilant to new threats and formalize their policies on generative AI. Only then will they be able to walk the line between usefulness and hindrance.