
Six best practices for developing enterprise usage policies for generative AI

PHPz | 2023-04-28


Generative AI is an AI technology that has recently attracted wide attention; it uses unsupervised and semi-supervised algorithms to learn from existing material (such as text, audio, video, images, and code) and generate new content. Uses of this branch of AI are exploding: organizations are applying generative AI to serve customers better, get more value from existing data, improve operational efficiency, and much more.

But like other emerging technologies, generative AI is not without significant risks and challenges. According to a recent Salesforce survey of senior IT leaders, 79% of respondents believe generative AI technology may introduce security risks, 73% are concerned it may produce biased results, and 59% believe its output can be inaccurate. There are also legal issues to consider, especially when externally used generated content turns out to be unverified or inaccurate, is copyrighted, or originates from a competitor.

ChatGPT itself, for example, will tell you: "My responses are generated based on patterns and correlations learned from a large text dataset, and I do not have the ability to verify that all sources cited in that dataset are accurate or trustworthy."

The legal risks alone are extensive. According to the non-profit organization Tech Policy Press, they include risks related to contracts, cybersecurity, data privacy, deceptive trade practices, discrimination, disinformation, ethics, intellectual property, and validation.

In fact, your organization may already have many employees experimenting with generative AI, and as this activity moves from experimentation into real-world use, it is important to take proactive steps before unintended consequences occur.

Cassie Kozyrkov, chief decision scientist at Google, put it this way: "AI-generated code is great when it works, but it doesn't always work, so don't forget to test ChatGPT's output before copying and pasting it anywhere important."
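As a concrete illustration of that advice, here is a minimal sketch in Python, where the hypothetical parse_iso_date function stands in for any snippet an AI assistant might produce, showing how generated code can be put under test before it is reused:

```python
import unittest
from datetime import date


# Hypothetical function pasted in from a generative AI assistant.
# Nothing here is trusted until the tests below pass.
def parse_iso_date(text: str) -> date:
    year, month, day = (int(part) for part in text.split("-"))
    return date(year, month, day)


class TestGeneratedCode(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(parse_iso_date("2023-04-28"), date(2023, 4, 28))

    def test_rejects_garbage(self):
        # Edge cases are where AI-generated code most often fails.
        with self.assertRaises(ValueError):
            parse_iso_date("not-a-date")


if __name__ == "__main__":
    unittest.main()
```

The specific function and tests are illustrative; the point is simply that generated code should earn its way into the codebase the same way human-written code does.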

Enterprise usage policies and related training can help employees understand some of the risks and pitfalls of the technology, and give them rules and recommendations for getting the most out of it, maximizing business value without putting the organization at risk.

With that in mind, here are six best practices for developing policies for your enterprise’s use of generative AI.

Determine the scope of your policy - The first step in developing a usage policy is to decide what it covers. For example, will it apply to all forms of AI or only to generative AI? Targeting only generative AI can be a useful approach, since it addresses large language models such as ChatGPT without having to touch the many other AI technologies in use. How to establish AI governance for that broader domain is a separate matter, and there are hundreds of resources available online.

Involve all relevant stakeholders across the organization - This may include HR, legal, sales, marketing, business development, operations, and IT. Each team may use the technology for different purposes, and misuse of its output may carry different consequences for each. Involving IT and innovation teams shows that the policy is not just a restriction imposed from a risk-management perspective, but a balanced set of recommendations designed to maximize productivity and business benefit while managing business risk.

Consider current and future uses of generative AI - Work with all stakeholders to itemize the internal and external use cases currently in play, as well as those envisioned for the future; each can help inform policy development and ensure the relevant areas are covered. For example, if you have seen a proposal team (including contractors) experimenting with generative AI to draft content, or a product team generating creative marketing copy, then you know there may be downstream intellectual property risks if the output infringes on someone else's rights.

Cover inputs, usage, and outputs - When developing the policy, think through and cover what information goes into the system, how the generative AI system is used, and how the information it outputs is subsequently used. Focus on internal and external use cases and everything in between. For example, requiring that all AI-generated content be labeled, even for internal use, ensures transparency and avoids confusion with human-generated content; it can also help prevent that content from being accidentally repurposed for external use, or from being acted on as true and accurate without verification.
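To make the labeling idea concrete, here is a minimal sketch in Python; the LabeledContent wrapper and its review rule are illustrative assumptions for this article, not part of any specific product or the policies it describes:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class LabeledContent:
    """Wraps a piece of text with provenance metadata, per a usage policy."""
    text: str
    generated_by: str               # e.g. the model or tool that produced it
    reviewed_by: str | None = None  # human reviewer, if any
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def for_external_use(self) -> str:
        # Illustrative policy rule: unreviewed AI output must not leave the organization.
        if self.reviewed_by is None:
            raise ValueError("AI-generated content requires human review before external use")
        return f"{self.text}\n\n[AI-assisted content, generated by {self.generated_by}]"


# Usage sketch: an internal draft is fine, external release requires review.
draft = LabeledContent(text="Q3 product overview ...", generated_by="LLM assistant")
draft.reviewed_by = "proposal team lead"
print(draft.for_external_use())
```

However it is implemented, the design goal is the same: provenance travels with the content, so the labeling requirement does not depend on anyone's memory.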

Share it widely throughout the organization - Since policies are often quickly forgotten, or never read at all, it is important to provide appropriate training and education around the policy; this can include producing training videos and hosting live sessions. Live Q&A sessions with representatives from IT, innovation, legal, marketing, proposal, or other relevant teams, for example, can help employees understand the opportunities and challenges ahead. Be sure to include plenty of examples to make the material relatable, such as citing a major legal case when one arises.

Keep the document up to date - Like all policy documents, this one needs to be kept current, updated at an appropriate pace as new uses, external market conditions, and development requirements emerge. Having all stakeholders sign off on the policy, or incorporating it into an existing policy manual signed by the CEO, shows that it has senior-level approval and matters to the organization. The policy should be just one component of your broader governance approach, whether for generative AI specifically, AI technology in general, or technology governance overall.

This is not legal advice, and your legal and HR departments should take the lead in approving and disseminating the policy, but I hope this provides some useful starting points. Much like your corporate social media policy a decade ago, spending time on this now will help you reduce surprises and evolving risks in the years to come.

