
Is responsible AI a technical or business issue?


Artificial intelligence, and ChatGPT in particular, is being applied around the world. The potential for AI to be misused or abused is high, and that risk must be taken seriously. At the same time, AI brings a range of potential benefits to society and individuals.

Thanks to ChatGPT, artificial intelligence has become a hot topic. People and organizations have begun to consider its myriad use cases, but there are also concerns about potential risks and limitations. With the rapid implementation of artificial intelligence, responsible artificial intelligence (RAI) has come to the forefront, and many companies are questioning whether it is a technology or a business issue.

According to a white paper released by the MIT Sloan School of Management in September 2022, the world is entering a period in which AI failures are beginning to multiply and the first wave of AI-related regulations is coming online. While both developments lend urgency to responsible AI programs, the companies leading in responsible AI are not primarily driven by regulation or other external pressures. Instead, the researchers recommend that leaders approach responsible AI from a strategic perspective, emphasizing their organization's external stakeholders, broader long-term goals and values, leadership priorities, and social responsibilities.

This is consistent with the view that responsible AI is both a technical and a business issue. Clearly the underlying issues lie within the AI technology itself, so that is front and center. But the reality is that the standards for what is and is not acceptable in AI are not clear.

For example, people agree that AI needs to be "fair," but whose definition of "fair" should be used? That is a decision each business must make for itself, and it becomes hard once you get into the details.
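To make that concrete, here is a minimal sketch, using entirely made-up decision records, of how two common fairness definitions can disagree on the same system: demographic parity asks whether groups are selected at equal rates, while equal opportunity asks whether qualified candidates are selected at equal rates.

```python
# A minimal sketch of why "fair" has competing definitions: on the same
# hypothetical decisions, demographic parity (equal selection rates) and
# equal opportunity (equal true-positive rates among the qualified) can
# disagree. All data here is made up for illustration.

def rate(values):
    """Fraction of 1s in a list of 0/1 values."""
    return sum(values) / len(values) if values else 0.0

# Hypothetical records: (group, qualified, selected)
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 0),
]

for group in ("A", "B"):
    selected = [s for g, q, s in records if g == group]
    qualified_selected = [s for g, q, s in records if g == group and q == 1]
    print(f"group {group}: selection rate={rate(selected):.2f}, "
          f"true-positive rate={rate(qualified_selected):.2f}")

# Both groups have a true-positive rate of 1.00 (equal opportunity holds),
# yet selection rates differ (0.75 vs 0.25), violating demographic parity;
# the same system is "fair" by one definition and not by the other.
```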

The "Technical and Business Issues" approach is an important one because most people only evaluate the technical aspects. Assessing and fully automating responsible AI from both a business and technical perspective can help bridge the gap between the two. This is especially true for heavily regulated industries. The NIST Artificial Intelligence Framework, released just last week, provides helpful guidelines to help organizations assess and address their needs for responsible artificial intelligence.

What is responsible artificial intelligence?

AI can discriminate and create bias. AI models trained on data that contains inherent biases can perpetuate existing biases in society. For example, if a computer vision system is trained mostly on images of white people, it may be less accurate at identifying people of other races. Likewise, AI algorithms used in recruiting may be biased because they are trained on resume datasets from past hires, which may skew along gender or racial lines.
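A minimal sketch of how such bias can be surfaced in practice, assuming hypothetical labels, predictions, and group assignments: compute the model's accuracy separately per group and look for gaps.

```python
# A minimal sketch of a per-group error audit: comparing a model's accuracy
# separately for each demographic group on held-out data. Labels,
# predictions, and group assignments here are all hypothetical.

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy computed separately for each group label."""
    stats = {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (truth == pred), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

y_true = [1, 0, 1, 1, 0, 1, 0, 1]   # ground-truth labels
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]   # model outputs
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy_by_group(y_true, y_pred, groups))
# {'A': 1.0, 'B': 0.25}: a skewed training set typically shows up as a
# large accuracy gap like this between well- and under-represented groups.
```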

Responsible AI is an approach to artificial intelligence (AI) that seeks to ensure that AI systems are used ethically and responsibly. This approach is based on the idea that AI should be used to benefit people and society, and that ethical, legal and regulatory considerations must be taken into account. Responsible AI involves the use of transparency, accountability, fairness and safety measures to ensure responsible use of AI systems. These could include the use of AI auditing and monitoring, developing ethical codes of conduct, using data privacy and security measures, and taking steps to ensure that AI is used in a human rights-compliant manner.
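As a concrete illustration of the auditing and monitoring measures mentioned above, here is a minimal sketch that wraps model predictions so every decision leaves an audit record. The field names and in-memory log are assumptions for illustration; a production system would write to durable, access-controlled storage.

```python
# A minimal sketch of AI audit logging: every prediction appends a record
# that reviewers can inspect later. Field names are illustrative.

import json
import time
import uuid

def audited_predict(model, features, audit_log):
    """Run a prediction and append an audit record for later review."""
    prediction = model(features)
    audit_log.append({
        "id": str(uuid.uuid4()),       # unique record id for traceability
        "timestamp": time.time(),      # when the decision was made
        "inputs": features,            # what the model saw
        "prediction": prediction,      # what it decided
        "model_version": "demo-v1",    # hypothetical version tag
    })
    return prediction

# Toy stand-in for a real model: approve when score exceeds 0.5.
toy_model = lambda f: int(f["score"] > 0.5)

log = []
audited_predict(toy_model, {"score": 0.72}, log)
print(json.dumps(log, indent=2))
```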

Where is the need for responsible AI greatest?

Early adopters of AI include banking/finance, insurance, healthcare, and other heavily regulated industries, as well as telecommunications and heavily consumer-facing industries (retail, hospitality/travel, etc.). Breaking it down by industry:

• Banking/Finance: AI can process large amounts of customer data to better understand customer needs and preferences, which can then be used to improve the customer experience and provide more tailored services. AI can also be used to identify fraud and suspicious activity (see the sketch after this list), automate processes, and provide more accurate and timely financial advice.

• Insurance: Artificial intelligence can be used to better understand customer data and behavior to provide more personalized insurance coverage and pricing. AI can also be used to automate claims processes and streamline customer service operations.

• Healthcare: Artificial intelligence can identify patterns in medical data that can be used to diagnose disease, predict health outcomes, and provide personalized treatment plans. AI can also be used to automate administrative and operational tasks such as patient scheduling and insurance processing.

• Telecommunications: Artificial intelligence can provide better customer service by analyzing customer data and understanding customer needs and preferences. AI can also be used to automate customer service processes, such as troubleshooting and billing.

• Retail: Artificial intelligence can personalize the customer experience by analyzing customer data and understanding customer needs and preferences. AI can also be used to automate inventory management and customer service operations.

• Hotel/Travel: Artificial intelligence can be used to automate customer service processes such as online booking and customer support. AI can also be used to analyze customer data and provide personalized recommendations.
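As referenced in the banking item above, here is a minimal sketch of rule-based fraud flagging. The rules, thresholds, and transaction fields are hypothetical; real systems combine learned models with rules and route flagged cases to human reviewers.

```python
# A minimal sketch of the fraud-flagging use case from the banking item
# above. Rules and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    hour: int  # local time, 0-23

def fraud_score(txn, home_country="US", typical_max=2000.0):
    """Score a transaction; higher means more suspicious."""
    score = 0.0
    if txn.amount > typical_max:
        score += 0.5               # unusually large amount
    if txn.country != home_country:
        score += 0.3               # out-of-pattern location
    if txn.hour < 5:
        score += 0.2               # odd-hours activity
    return score

txn = Transaction(amount=3500.0, country="FR", hour=3)
if fraud_score(txn) >= 0.8:
    print("flagged for human review")  # this example scores 1.0
```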

How to regulate responsible artificial intelligence?

Government regulation of artificial intelligence is the set of rules implemented by governments to ensure that the development and use of AI is safe, ethical, and legal. Regulations vary from country to country, but they generally involve setting ethical, safety, and security standards and establishing legal liability for any harm caused by AI systems. Government regulators may also require developers to receive training on safety and security protocols and to ensure their products are designed with best practices in mind. Additionally, governments may provide incentives for companies to create AI systems that benefit society, such as those that help combat climate change.

By incorporating a security regulatory framework into their responsible AI plans, companies can ensure that their AI systems meet necessary standards and regulations while reducing the risk of data breaches and other security issues. This is an important step on the journey to responsible AI, as it helps ensure organizations can manage their AI systems in a responsible and safe manner. In addition, the security regulatory framework can serve as a guide to help organizations identify and implement best practices for using artificial intelligence technologies such as machine learning and deep learning. In summary, responsible AI is as much a technical issue as it is a business issue.

A security regulatory framework can help organizations assess and address their responsible AI needs while providing a set of standards, guidelines, and best practices to help ensure their AI systems are safe, legal, and compliant with regulations. Early adopters of security regulatory frameworks include heavily regulated industries and those that are heavily consumer-oriented.

A mundane new world?

Artificial intelligence is still a relatively new technology, and most use cases currently focus on more practical applications, such as predictive analytics, natural language processing and machine learning. While a “brave new world” scenario is certainly possible, many current AI-driven applications are designed to improve existing systems and processes, rather than disrupt them.

Responsible artificial intelligence is as much a technical issue as it is a business issue. As technology advances, businesses must consider the ethical implications of using artificial intelligence and other automated systems in their operations. They must consider how these technologies will impact their customers and employees, and how they can use them responsibly to protect data and privacy. Additionally, when using artificial intelligence and other automated systems, businesses must ensure compliance with applicable laws and regulations and be aware of the potential risks of using such technologies.

The future of responsible artificial intelligence is bright. As technology continues to evolve, businesses are beginning to realize the importance of ethical AI and incorporate it into their operations. Responsible AI is becoming increasingly important for businesses to ensure the decisions they make are ethical and fair. AI can be used to create products that are transparent and explainable, while also taking into account the human and ethical impact of decisions. Additionally, responsible AI can be used to automate processes, helping businesses make decisions faster, with less risk, and with greater accuracy. As technology continues to advance, businesses will increasingly rely on responsible AI to make decisions and create products that are safe, reliable, and good for customers and the world.
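As a small illustration of the transparency and explainability mentioned above: for a simple linear scoring model, each feature's contribution to a decision can be reported alongside the score. The weights and features below are made up for the sketch.

```python
# A minimal sketch of decision explanation for a linear scoring model:
# report each feature's contribution alongside the overall score.
# Weights and feature values are hypothetical.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "tenure_years": 0.2}

def explain(features):
    """Return the overall score and each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = explain({"income": 1.2, "debt_ratio": 0.8, "tenure_years": 3.0})
print(f"score = {score:.2f}")
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
# Sorting by absolute contribution surfaces the biggest drivers first,
# giving a human-readable account of why the model scored as it did.
```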

The potential misuse or abuse of artificial intelligence (AI) poses risks that must be taken seriously. However, AI also brings a range of potential benefits to society and individuals, and it is important to remember that the degree of danger from AI depends on the intentions of the people using it.

