Teresa Tung is Chief Technologist at Accenture Cloud First.
Generative AI, the technology behind applications like ChatGPT, is taking the world by storm. We’ve reached a tipping point in the way the public views artificial intelligence.
Even among professionals working in the field for years, there’s a definite sense of “before and after” ChatGPT. A seismic shift has taken place, and nothing will be quite the same.
Reimagining Software Development
Now, the question is how businesses are going to apply this technology. One of the first areas where we’re seeing an impact is the software development lifecycle (SDLC).
Use cases across the SDLC abound, from code generation and analysis to incident detection and resolution to generating system documentation. Beyond custom software, generative AI can also be applied to managed services and the configuration of packaged software.
At Accenture, we’re actively exploring this domain. Our clients and our workforce stand to gain significantly by deploying generative AI in a way that makes software development and management better.
Three Questions Of Trust
However, if this technology is really going to reinvent the way we create and manage software, we need to be able to trust it. That means the large language models (LLMs) that power generative AI must be reliable, secure and responsible. This raises three key questions.
1. Accuracy. Can we trust the outputs we get from generative AI enough to make them usable in day-to-day work? For SDLC use cases, accuracy requires having the right architecture in place to capture context for the LLMs—knowledge of our software code, systems and practices—and to integrate insights into our tools and processes.
2. Security. Can we trust the technology from a cybersecurity and data privacy perspective, especially as the risk landscape evolves?
3. Responsibility. Can we trust that using generative AI within the enterprise won’t open up unforeseen legal or ethical risks? Understanding the vulnerabilities in the underlying IP and data used to train the LLM is key—whether it’s one we build or one that is pretrained.
Three Categories For Consumption
To address these questions, we need to evaluate the potential ways generative AI models can be consumed. Think of it as a distinction between buying, boosting and building.
While there will be quick wins for code generation with out-of-the-box solutions today, we’ll see more customized generative AI-powered co-pilots bear fruit over the next year or so. Then, there’s the longer-term potential to rethink the SDLC end-to-end through the creation of our own custom models. Unlocking future phases starts with understanding the considerations related to trust across these categories.
Now: Buy
There are many ready-made generative AI services that primarily address the routine parts of SDLC work—testing, writing documentation, generating boilerplate code snippets and modules, etc.
These services package together all of the layers of a generative AI solution, from the front-end application through to the underlying foundation model. The speed to value is high.
For example, with popular code development tools like Amazon CodeWhisperer, GitHub Copilot and Tabnine, the LLM is already built into an integrated development environment (IDE). These tools can generate parts of the code and recommend tasks to accelerate development, improve code quality and ensure adherence to development standards. Our own experiments have yielded higher productivity and improved developer satisfaction by automating mundane tasks.
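For a sense of what that looks like in practice, consider a hypothetical completion (our own illustration, not output from any particular tool): the developer types only the signature and docstring below, and the assistant proposes the body.

    # The developer writes the signature and docstring; the assistant
    # suggests the rest. (Illustrative sketch, not actual tool output.)
    def dedupe_preserve_order(items: list[str]) -> list[str]:
        """Remove duplicates from a list, preserving first-seen order."""
        seen: set[str] = set()
        result: list[str] = []
        for item in items:
            if item not in seen:
                seen.add(item)
                result.append(item)
        return result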
The trade-off is less control. When using any LLM, companies need to consider mitigation strategies for dealing with potential bias in public training data. Within these tools, “bias” might mean better support for more popular languages like Java, JavaScript and Python over specialized languages like C++, SQL, COBOL or Elixir.
Companies also need to be aware of the potential for IP infringement from code baked into the model’s training data. Additionally, there’s the risk of proprietary data and IP leakage when using a managed model, which is why many companies will opt for a private deployment.
Next: Boost
Increasingly, companies are starting to customize generative AI by taking an existing model and fine-tuning it to fit specific use cases. Examples across the SDLC include generating first drafts of code and documents as well as creating a “coach on your shoulder” advisor to help developers upskill and provide guidance on system and code specifics.
To take advantage of these AI-powered co-pilots, companies will need to have basic development standards and a knowledge base to customize the LLM. Even better would be an automation framework that helps build and maintain the knowledge base, deploy the insights and monitor usage.
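As a minimal sketch of how such a co-pilot might be wired up, the snippet below retrieves the most relevant entries from an internal knowledge base and folds them into the model’s prompt. It assumes the open-source sentence-transformers library; ask_llm() is a hypothetical stand-in for whichever managed or private LLM endpoint a company chooses.

    from sentence_transformers import SentenceTransformer, util

    embedder = SentenceTransformer("all-MiniLM-L6-v2")

    # Internal knowledge base: coding standards, system documentation, etc.
    knowledge_base = [
        "All public APIs must be versioned under /v1, /v2, ...",
        "Database access goes through the shared repository layer.",
        "Every new module requires unit tests before merging.",
    ]
    kb_embeddings = embedder.encode(knowledge_base, convert_to_tensor=True)

    def coach(question: str, top_k: int = 2) -> str:
        """Answer a developer's question, grounded in internal standards."""
        q_emb = embedder.encode(question, convert_to_tensor=True)
        hits = util.semantic_search(q_emb, kb_embeddings, top_k=top_k)[0]
        context = "\n".join(knowledge_base[h["corpus_id"]] for h in hits)
        prompt = ("Using only this internal guidance:\n" + context +
                  "\n\nAdvise the developer: " + question)
        return ask_llm(prompt)  # hypothetical call to the chosen LLM endpoint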
Here, the company takes more ownership of risk mitigation. This approach also provides a path to a model that’s more customized (e.g., with their specific tool sets, code standards and documentation formats) to suit business needs.
Later: Build
Eventually, some organizations will take this even further and build their own model, trained on their own data, that’s totally under their own control. Given the data, compute and expertise required, most that choose this route will start with an open-source, pretrained LLM, adding their own domain data to the corpus. This approach gives a company the most control (and the associated responsibility) to create highly customized solutions while reducing the risks around IP infringement and bias from public data.
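As a rough sketch of what this path involves, continued pretraining of an open model on a domain corpus can look like the following. It assumes the Hugging Face transformers and datasets libraries; the base model name and data file are placeholders, not recommendations.

    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    base = "EleutherAI/pythia-160m"  # placeholder open-source base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(base)

    # Domain corpus: internal code and documentation, one sample per line.
    data = load_dataset("text", data_files={"train": "domain_corpus.txt"})
    tokenized = data["train"].map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="custom-code-model",
                               num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()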
For example, we’re investigating emerging LLMs to begin creating our own custom code models. We’re also looking at emerging frameworks that dynamically generate and carry out multiple steps of a process. This can make software development more accessible for experienced and junior developers alike, and even for business users.
Trust Is Paramount
In domains like software development, generative AI’s potential to deliver rapid innovation and efficiency is immense. Maintaining trust is going to be critical. By understanding how the core enablers of that trust—architecture, security, responsibility—will vary as deployment approaches mature, we can ensure employees, customers and businesses can all reap value from this exciting technology.