3 things to consider with generative AI


By Suma Nallapati


Artificial intelligence (AI) is by no means brand-new. Scientists have been transfixed by the concept of artificially intelligent machines for at least a hundred years. Yet the mainstream conversation around this set of technologies makes it seem like the rapid advances in AI capabilities came out of nowhere. That's certainly not the case. The newfound attention stems from a heightened focus on a specific, groundbreaking form of AI: generative AI, which can produce various types of content, such as text, imagery, audio, and synthetic data.


Generative AI exploded in popularity late last year, due in large part to the virality of tools like ChatGPT. The chatbot, developed by startup OpenAI and released to the public in November 2022, has been embraced at a scale that speaks for itself: estimates show ChatGPT reached 100 million monthly active users in January, making it the fastest-growing consumer application in history.

Internet memes aside, the sheer influence of AI-powered tools like ChatGPT is nothing to joke about. The potential benefits are vast, from improving existing workflows and simplifying processes, to automating manual tasks, to summarizing complex information. And it can benefit industries from healthcare to manufacturing. At the same time, proceeding with caution is absolutely warranted. As diginomica recently warned, generative AI could “lead to a pandemic of deliberate misinformation as well as accidental error.” 

The debate over generative AI's use is ongoing. Many find themselves with more questions than answers when trying to assess the full scope of its potential impact on how our society operates, today and in the future. So much is still unknown, but one thing is certain: the right structures, processes, and expertise need to be in place to adequately support how businesses operate against the backdrop of the ever-evolving AI revolution. The uptick in generative AI's popularity has made this loud and clear. Here are some considerations for organizations looking to do just that.


Practice good governance

Companies seeking to implement generative AI must exercise strong governance practices and adhere to well-defined guidelines. As with any emerging technology, privacy, data security, algorithmic transparency within AI models, and cybersecurity vulnerabilities must all be top of mind to mitigate risks related to ethical and legal compliance.

According to Info-Tech’s Infrastructure and Operations Priorities 2023 report, both AI governance and data governance were among the top six priorities for technology leaders to consider as they navigate between threats and opportunities in the year ahead.

Enterprises must store data with an abundance of caution. There should be a designated data steward and a comprehensive, organized architecture. To maintain strong oversight and reduce operational inefficiencies, enterprises need to be aware of every area and silo that generative AI permeates, and make sure each is governed consistently.


Legislation also has an important role to play. Because policymakers lag behind the widespread adoption of generative AI, there is not yet a clear-cut model for compliance and governance standards. This puts the burden on IT departments to define their own risk management and governance models.

Minimize bias

Generative AI is simply a tool, and like any other tool, it can be used for good or bad. As with earlier iterations of AI, generative AI may exacerbate systemic biases, and this will be an evergreen hurdle. 

Companies must be rigorous in avoiding bias when collecting data to train their models. This means consciously curating and cleaning data sets to account for any inherent biases and making them as representative of the entire human population as possible.


Model design and development can also betray the makers' personal biases. Strategies such as regularly auditing AI model outputs, collaborating with diverse sources, and applying machine-learning bias-mitigation techniques must be layered together, with redundancy as a core pillar. Though these lessons have been understood for years in machine learning, tackling bias is still no easy task, and it will require developers' continued diligence to sustainably maintain their models' accuracy.

We’ve already seen efforts to mitigate some of these issues for traditional AI, and it’s imperative this trend continues for generative AI. For example, Included developed an AI platform that analyzes a company’s employee data and pulls insights about diversity, equity, and inclusion (DEI) for metrics like pay, promotions, and employee lifecycle. This illustrates how companies can use AI to tackle bias, in this case DEI in the workplace.

Execute with intention

As with any new technology, generative AI should not be adopted for its own sake, but with intentionality and only after a rigorous cost-benefit analysis.


Companies weighing the potential benefits of investing in generative AI should begin by identifying areas where it could alleviate stakeholder pain. People’s needs must be at the forefront, and there should be a thoughtful roadmap illustrating how generative AI would improve outcomes.

This could range from driving operational efficiencies that increase productivity to building new consumer products from AI model outputs. A prime example of the former is venture capital firm SignalFire, which uses AI for a variety of back-office functions, such as turning term sheets into long-form legal documents. Microsoft and Google recently launched AI-boosted search platforms; Bing's AI chatbot and Bard can automate a company's outreach efforts, freeing employees to work on more important tasks.

The launches of these products were not without challenges. Bard incorrectly answered a question, leading to a viral fiasco, and Microsoft's Bing chatbot made headlines after several strange interactions with users. Microsoft responded swiftly by setting limits on its AI chatbot. While AI will undoubtedly play a powerful role in our society, generative AI products require rigorous testing before coming to market. Companies should take heed from these situations: once they are confident in the strategic benefits, they should make sure the rollout delivers a thoroughly tested, trustworthy product.


Suma Nallapati is CIO of Insight Enterprises.

Fast Company
