In an era marked by rapid technological advancement, government agencies face critical decisions when it comes to adopting emerging technologies. Agencies need to weigh the benefits against the associated risks to determine how best to proceed. Few technologies are taking the world by storm these days like Large Language Models (LLMs), with their seemingly endless use cases. But what are the potential risks of this emerging technology?
The United States Patent and Trademark Office (USPTO), the agency charged with safeguarding intellectual property rights, recently made a significant decision: it declined to adopt generative AI and Large Language Models (LLMs), including ChatGPT. At the April 2023 GovFuture Forum event at George Mason University (GMU) in the Washington, DC region, Scott Beliveau, Branch Chief of Advanced Analytics and Acting Director of Data Architecture in the Office of the Chief Technology Officer (OCTO) at the USPTO, shared his agency’s perspective on AI and LLMs and its current posture regarding their use within government activities. As a follow-up to the event, Scott was interviewed on a GovFuture podcast, where he delved deeper into some of the topics around the use of AI in government.
The Growing Use of AI
It should come as no surprise that AI is everywhere. In fact, most of us use AI on a daily basis, from the products recommended to us, to the shows we watch, to how we drive from point A to point B. It’s even used to help draft emails and write articles! Scott says that “Artificial Intelligence (AI) has the potential to provide tremendous societal and economic benefits and to foster a new wave of innovation and creativity. AI now appears in 18 percent of all new utility patent applications and in more than 50 percent of all the applications that we examine at the USPTO. AI-generated content and other emerging technologies, however, can pose novel challenges and opportunities in both IP policy and the tools used to deliver reliable intellectual property rights. Consequently, our general posture (at the USPTO) has been to take a measured approach by actively engaging and seeking feedback from the broader innovation community and experts in AI on IP policy issues. We also recognize that policy issues will arise in the future that we cannot yet imagine. With these engagements, we strive to continue fostering the impressive breakthroughs in AI and other emerging technology through our world-class intellectual property system.”
Generative AI and LLMs offer undeniable benefits in efficiency and accuracy, but their use also introduces risks. Transparency concerns arise from the lack of clarity surrounding AI algorithms and the biases potentially embedded within them. The truthfulness of AI-generated content and the potential for misinformation pose further challenges, particularly within the patent examination process. For these reasons, the USPTO is taking a cautious approach to this technology.
Navigating Trust, Transparency, and Truthfulness
One of the big takeaways from the podcast interview concerned navigating trust, transparency, and truthfulness in government. Government agencies have an obligation to ensure the highest level of integrity, fairness, and accountability, especially when dealing with intellectual property matters. Given the potential limitations and biases inherent in AI systems, maintaining public trust and transparency becomes paramount.
Scott shared that he tries to “categorize the risks (of AI and emerging technologies) into three buckets: one being trust, another being transparency, and the third being truthfulness. All of us in the public sector have different roles, missions, and responsibilities, and lives are sometimes at stake when it comes to making decisions. In making those decisions and looking at things like large language models, oftentimes what we’ll see is that the models may be full of bias. They may be trained on data whose provenance we don’t know; it may not come from reputable sources, and it could be either intentionally or unintentionally malicious. But in our role as public servants, it’s extremely important for us to maintain the trust of the public, so that when we say we think there’s a storm coming, or that this particular drug is beneficial, you’re able to trust that message and trust that source.
The second part of that is transparency. When we’re making decisions in public, and in the IP system fundamentally, everything rests on trust and transparency: in exchange for explaining your idea you get an exclusive right, and then people build upon that idea, so you really need to know what goes into it. It’s having the openness to say, yes, not only do I trust this, but I can track, understand, and follow the facts that led to that particular decision.
And then finally is truthfulness. A lot of the models that we’ve seen are getting better every day, and they sound truthful. They sound very truthful, and are sometimes very convincing, versus something like the Eliza chatbot back in the ’70s and ’80s, which sounded a little quirkier. So the risk is really knowing whether a piece of information, a decision, or the basis for a recommendation has a solid foundation under it.”
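Scott’s ELIZA comparison is worth making concrete. Weizenbaum’s 1966 chatbot had no model of truth at all; its entire “intelligence” was a short list of pattern-matching rewrite rules, which is why its replies sounded quirky and formulaic. The minimal Python sketch below illustrates that mechanism (the rules and phrasings are invented for illustration, not taken from the original ELIZA script):

```python
import re
import random

# A handful of ELIZA-style rules: match a phrase in the user's input and
# echo part of it back inside a canned question. There is no knowledge or
# reasoning here, only regular-expression substitution.
RULES = [
    (re.compile(r"\bI need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I),
     ["Is that the real reason?"]),
]

def eliza_reply(utterance: str) -> str:
    """Return a rule-based reply, or a stock prompt if no rule matches."""
    for pattern, responses in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(responses).format(match.group(1).rstrip(".!?"))
    return "Please tell me more."

print(eliza_reply("I need a faster way to search prior art."))
# Possible output: "Why do you need a faster way to search prior art?"
```

The point of the contrast is that ELIZA’s shallowness was obvious on its face, while a modern LLM produces fluent, confident prose whose factual basis is far harder to audit, which is exactly the truthfulness risk Scott describes.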
The USPTO’s decision not to adopt generative AI and LLMs is rooted in safeguarding trust, transparency, and truthfulness. By forgoing these technologies, the USPTO demonstrates a commitment to ensuring fairness and avoiding potential biases or uncertainties associated with AI systems. This choice also reinforces the agency’s dedication to maintaining public confidence in the patent examination process.
Read the full article here