Responsible artificial intelligence is valuable
A new IDC white paper sponsored by Credo AI, an AI governance software company, reveals that one of the biggest opportunities for B2B enterprises in 2023 is the responsible adoption of artificial intelligence.
This global study shows that organizations that adopt an AI-first, ethics-forward approach can expect significant improvements across the business. A range of indicators is expected to improve by approximately 22-29% year over year, including revenue, customer satisfaction, operational sustainability, and profits, along with reduced business risk. The survey aims to provide valuable insights into the state of responsible AI adoption among B2B businesses and to identify key challenges and opportunities.
Every business organization faces an increasingly complex and competitive reality, and embracing new technologies is essential to thriving in today's business environment.
The survey highlights the critical role artificial intelligence will play for B2B enterprises in 2023 and beyond. Executives express a desire for their organizations to adopt responsible AI, ranking customer satisfaction (30%), increased sustainability (30%) and increased profits (25%) as the most important expected business benefits.
However, despite this strong belief in the positive applications of artificial intelligence, the survey also shows that many executives have reservations or lack confidence in moving forward with AI development and implementation. Only 39% of respondents reported high confidence in their ability to build and use AI ethically, responsibly and compliantly, 33% were somewhat confident but had reservations, and 27% had low confidence.
Credo AI Founder and CEO Navrina Singh said: "Organizations around the world are eager to harness the power of artificial intelligence, especially generative AI, and they recognize the importance of adopting these technologies to unlock lasting return on investment. However, there are still significant challenges to overcome, particularly when it comes to building confidence in AI and ensuring compliance with regulations. This survey aims to help organizations identify these challenges and provides actionable insights for implementing responsible AI practices now."
Despite AI's clear benefits, many companies have yet to fully embrace it. Survey respondents said they were concerned about the negative impacts of AI implemented without responsible governance. The main concerns are loss of private data (31%), hidden costs (29%) and reduced customer trust (26%).
Globally, the Chief Information Officer (CIO) is the primary player in an organization’s responsible AI strategy. They can help businesses ensure their AI systems produce fair results, protect privacy, and comply with regulations.
CIOs surveyed most often named the EU Artificial Intelligence Act as the most critical regulation for their upcoming implementation efforts (42%), followed by the UK AI White Paper (37%) and the US Privacy Protection Act (29%).
As the use of artificial intelligence increases, a large number of new regulations will be developed to address its potential negative impacts. The current survey indicates that the EU Artificial Intelligence Act is considered the most critical AI regulation for organizations to comply with, and its provisions and requirements are widely regarded as the global benchmark for responsible AI implementation.
Responsible AI can help companies ensure that their AI systems produce fair results, protect privacy, and comply with regulations. As a result, businesses can improve customer experience, increase trust in the brand, and build a positive reputation as a responsible organization.
When asked what components their organization's AI governance structure would include, respondents prioritized security, risk management, regulatory guidance and compliance (45%), followed by technology selection, standardization and architecture (43%).
Responsibly implementing and scaling AI is a daunting task that requires input from multiple stakeholders across an organization and its ecosystem. Success will be defined not only by alignment with ethical and legal standards, but also by successfully integrating AI with the different software systems in use. This aspect of governance is an area ripe for innovation.
"Responsible AI is the future of the industry and offers a wealth of opportunities for organizations," said Ritu Jyoti, global AI research leader for IDC's Artificial Intelligence and Automation research practice. "Companies that prioritize ethics and compliance in their AI practices today will be better positioned tomorrow to benefit from improved customer satisfaction, sustainable operations, and increased profits. Now is the time to take action to ensure better outcomes for both businesses and customers."
This Credo AI-sponsored survey was conducted by IDC in the fourth quarter of 2022. More than 500 respondents from B2B companies around the world participated in the survey.
Credo AI is an AI governance software platform that provides scenario-driven governance and risk assessment to ensure AI is fair, compliant, safe, reliable, auditable and people-centered throughout the AI lifecycle. The platform enables organizations to embrace advanced artificial intelligence with confidence, maximizing its potential while minimizing associated risks. Credo AI's customers include retail, finance and banking, insurance, defense and high-tech companies.