Europe’s new artificial intelligence bill will strengthen ethical review
As the EU moves toward implementing its Artificial Intelligence Act, ethical issues in AI such as bias, transparency, and explainability are becoming increasingly important. The Act will regulate the use of artificial intelligence and machine learning technologies across all industries, and AI experts say now is a good time for AI users to familiarize themselves with these ethical concepts.
Europe’s latest version of the Artificial Intelligence Act, introduced last year, is moving quickly through the review process and could be implemented as early as 2023. While the law is still being developed, the European Commission appears ready to make strides in regulating artificial intelligence.
For example, the law will set new requirements for the use of artificial intelligence systems and ban certain use cases entirely. Users of so-called high-risk AI systems, such as those in self-driving cars and in decision-support systems for education, immigration, and employment, will be required to conduct impact assessments and audits of their AI applications. Some AI use cases will be closely tracked in databases, while others will require sign-off from an external auditor before they can be used.
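For illustration, the tiered approach can be summarized in a small data structure. The Python sketch below follows the four risk levels in the Commission’s 2021 proposal (unacceptable, high, limited, minimal); the obligations and example use cases are simplified paraphrases for orientation, not legal text.

```python
# Illustrative summary of the draft AI Act's four risk tiers (2021 proposal).
# The obligations and examples are simplified; consult the Act for legal scope.
RISK_TIERS = {
    "unacceptable": {
        "obligation": "banned outright",
        "examples": ["social scoring by public authorities"],
    },
    "high": {
        "obligation": "impact assessments, audits, registration in an EU database",
        "examples": ["hiring and education decision support", "self-driving systems"],
    },
    "limited": {
        "obligation": "transparency duties (e.g., disclosing that users face an AI)",
        "examples": ["chatbots"],
    },
    "minimal": {
        "obligation": "no new obligations",
        "examples": ["spam filters"],
    },
}

for tier, info in RISK_TIERS.items():
    print(f"{tier}: {info['obligation']}")
```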
Nick Carrel, director of data analytics consulting at EPAM Systems, a software engineering firm based in Newtown, Pa., said there is strong demand for explainability and interpretability as part of MLOps and data science consulting engagements. The EU’s Artificial Intelligence Act is also pushing companies to seek insights and answers about ethical AI, he said.
“There’s a lot of demand right now for what’s called MLOps, which is the science of operationalizing machine learning models. We very much see ethical AI as one of the key foundations of that process,” Carrel said. “We have additional requests from customers... as they learn about the EU legislation coming into force around artificial intelligence systems at the end of this year, and they want to be prepared.”
Interpretability and explainability are separate but related concepts. A model’s interpretability refers to the extent to which humans can understand and predict the decisions it will make, while explainability refers to the ability to accurately describe how the model actually works. You can have one without the other, says Andrey Derevyanka, head of data science and machine learning at EPAM Systems.
"Imagine you are doing some experiment, maybe some chemistry experiment mixing two liquids. This experiment is open to interpretation because, you see what you are doing here. You take a item, plus another item and we get the result,” Derevyanka said. "But for this experiment to be interpretable, you need to know the chemical reaction, you need to know how the reaction is created, how it works, and you need to know the internal details of the process."
Deep learning models in particular, Derevyanka said, are interpretable but not explainable. “You have a black box, and it works in a certain way, but you don’t know what’s inside,” he said. “But you can interpret it: if you give this input, you get this output.”
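To make the distinction concrete, here is a minimal Python sketch using scikit-learn (our choice; the article names no tools, and the model and data are hypothetical). The small neural network below is interpretable in Derevyanka’s sense: you can feed it inputs and observe outputs while its internals remain a black box. Permutation importance is then applied as one common post-hoc step toward an explanation.

```python
# A minimal sketch of the black-box point, using scikit-learn.
# The model and data here are hypothetical illustrations, not EPAM's setup.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X, y)

# Interpretable in the loose sense: given this input, you get this output,
# even though the internal computation remains a black box.
print(model.predict(X[:3]))

# One common post-hoc step toward explainability: measure how much each
# feature matters by permuting it and observing the drop in model score.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Permutation importance only approximates an explanation; it ranks features by their effect on accuracy without revealing the network’s internal mechanics, which is exactly the gap the article describes.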
Bias is another important topic when it comes to ethical AI. It’s impossible to completely eliminate bias from data, but it’s important for organizations to work to eliminate bias from AI models, said Umit Cakmak, head of the data and AI practice at EPAM Systems.
“These things have to be analyzed over time,” Cakmak said. “It’s a process, because bias is baked into historical data. There’s no way to clean bias out of the data. So as a business you have to set up specific processes so that your decisions get better over time, which will improve the quality of your data, so you will be less and less biased over time.”
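Cakmak’s point, that bias is managed as a recurring process rather than removed in one pass, can be sketched as a periodic fairness check. In the hypothetical Python sketch below, a demographic parity gap (one standard fairness metric) is computed for successive quarterly snapshots; the data, metric choice, and threshold are illustrative assumptions, not EPAM’s actual process.

```python
# Hypothetical sketch of tracking a fairness metric across data snapshots,
# illustrating the "process over time" point. Not EPAM's actual tooling.
import numpy as np

def demographic_parity_diff(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(0)
THRESHOLD = 0.05  # hypothetical tolerance set by a governance policy

for quarter in ["2022-Q1", "2022-Q2", "2022-Q3"]:
    # Stand-ins for each quarter's model predictions and protected attribute.
    y_pred = rng.integers(0, 2, size=1000)
    group = rng.integers(0, 2, size=1000)
    gap = demographic_parity_diff(y_pred, group)
    status = "OK" if gap <= THRESHOLD else "REVIEW"
    print(f"{quarter}: parity gap {gap:.3f} [{status}]")
```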
The EU’s Artificial Intelligence Act would classify uses of artificial intelligence by risk level. It is important to be able to trust that AI models won’t make wrong decisions based on biased data.
Cakmak said there are many examples in the literature of data bias leaking into automated decision-making systems, including racial bias showing up in models used to evaluate employee performance or to screen job applicants’ resumes. Being able to show how a model reaches its conclusions is important for demonstrating that steps have been taken to eliminate data bias.
Cakmak recalls how a lack of explainability led a healthcare company to abandon an AI system developed for cancer diagnosis. "AI worked to some extent, but then the project was canceled because they couldn't build trust and confidence in the algorithm," he said. “If you can’t explain why the outcome is happening, then you can’t proceed with treatment.”
EPAM Systems helps companies implement artificial intelligence in a trustworthy way. The company typically follows a specific set of guidelines, starting with how to collect data, to how to prepare a machine learning model, to how to validate and interpret the model. Ensuring that AI teams successfully pass and document these checks, or "quality gates," is an important element of ethical AI, Cakmak said.
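As a rough illustration of what such “quality gates” might look like in code, the hypothetical Python sketch below runs a pipeline through named gates, documenting each pass or fail before allowing the next stage. The gate names and checks are inferred from the description above; they are not EPAM’s actual guidelines.

```python
# Hypothetical "quality gate" harness: each stage is checked and documented
# before the pipeline may proceed. Inferred from the article, not EPAM's code.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class QualityGate:
    name: str
    check: Callable[[], bool]

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, gate: str, passed: bool) -> None:
        self.entries.append({"gate": gate, "passed": passed})

def run_gates(gates: list[QualityGate], log: AuditLog) -> bool:
    for gate in gates:
        passed = gate.check()
        log.record(gate.name, passed)   # every result is documented
        if not passed:
            return False                # a failed gate halts the pipeline
    return True

# The checks are stand-ins; real gates would test data provenance,
# validation metrics, bias thresholds, explainability reports, etc.
gates = [
    QualityGate("data_collection", lambda: True),
    QualityGate("model_validation", lambda: True),
    QualityGate("model_interpretation", lambda: True),
]
log = AuditLog()
print("released" if run_gates(gates, log) else "blocked", log.entries)
```

The design choice worth noting is the audit log: documenting every gate result, not just passing them, is what makes the process demonstrable to an external auditor.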
Steven Mills, chief AI ethics officer for Global GAMMA at Boston Consulting Group, said the largest and best-run companies are already aware of the need for responsible AI.
However, as the AI Bill gets closer to becoming law, we will see more companies around the world accelerate their responsible AI projects to ensure they do not fall foul of the changing regulatory environment and new expectations.
“There are a lot of companies that have started implementing AI and are realizing that they haven’t anticipated all the potential unintended consequences as well as they’d like, and that they need to address that as quickly as possible,” Mills said. “The most important thing is that people don’t want to be haphazard about how they apply it.”
The pressure to implement AI in an ethical way comes from the top of organizations. In some cases, it comes from outside investors who don’t want their investment put at risk by AI being used in a bad way, Mills said.
"We're seeing a trend where investors, whether they're public companies or venture funds, want to make sure AI is built responsibly," he said. "It may not be obvious. It's It may not be obvious to everyone. But behind the scenes, some of these VC firms are thinking about where they are putting their money to make sure these startups are doing things the right way."
While the details of the Artificial Intelligence Act are still vague at the moment, Carrel said, the law has the potential to clarify the use of artificial intelligence, which would benefit both companies and consumers.
“My first reaction was that this was going to be very rigorous,” said Carrel, who implemented machine learning models in the financial services industry before joining EPAM Systems. “I’d been trying to push the boundaries of financial services decision-making for years, and all of a sudden there’s legislation coming out that would undermine the work we’d done.”
But the more he looked at the pending law, the more he liked what he saw.
"I think this will also gradually increase public confidence in the use of artificial intelligence in different industries," Carrel said. "Legislation says you have to register high-risk artificial intelligence systems in the EU, which means you Know that somewhere there will be a very clear list of every AI high-risk system in use. This gives auditors a lot of power, which means naughty boys and bad players will gradually be punished, and hopefully over time we will create more opportunities for those who want to use AI and machine learning for better causes. People leave behind best practices – the responsible way. ”