How to make artificial intelligence explainable?
Wolfgang Berner, chief technology officer of RegTech company Hawk AI, said that explainability is particularly critical for many applications.
Berner said: “In highly regulated areas such as anti-money laundering, it is entirely appropriate to demand transparency and explainability in the use of artificial intelligence. When an AI’s decisions are too disconnected from the original data and the way the algorithm works lacks transparency, concerns about the classic ‘black box AI’ arise.”
Hawk AI believes that a high level of transparency is the key to compliance, trust, and acceptance in the industry. For the company, the need for AI explainability therefore goes far beyond purely regulatory requirements.
With intelligible artificial intelligence, financial institutions can oversee and control even complex models such as neural networks. For Hawk AI, explainability consists of two aspects: what is the rationale for an individual decision driven by the AI, and how are the algorithms behind the AI developed?
Hawk AI said: “For Hawk AI, this is clear – only what is technically explainable will ultimately be accepted. The exact criteria for a decision, the statistical probability of certain risks, and the composition of the algorithm are as important as complete documentation of the AI decision-making process. It is also important that all of this is expressed in clear and understandable language rather than purely technical terms.”
The company believes that every detail and every data source has to be verifiable – for example, whether certain values are significantly higher or lower compared to a specific peer group. Why the AI assumes certain expected values, and how these values relate to each other, must be transparent. The data picture must be so clear that a compliance officer would reach the same decision from the same data.
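A peer-group comparison of this kind can be illustrated with a short sketch. This is not Hawk AI’s actual method – the function names, the z-score approach, and the threshold of three standard deviations are all assumptions chosen for illustration – but it shows how a decision can be returned together with the raw numbers and a plain-language reason, so a compliance officer can reproduce it.

```python
from statistics import mean, stdev

def peer_deviation(value: float, peer_values: list[float]) -> float:
    """Z-score of an account's value against its peer group."""
    mu = mean(peer_values)
    sigma = stdev(peer_values)
    return (value - mu) / sigma if sigma else 0.0

def explain(account_id: str, value: float, peer_values: list[float],
            threshold: float = 3.0) -> dict:
    """Return the decision alongside the data it was based on and a
    human-readable rationale (illustrative structure, not a real API)."""
    z = peer_deviation(value, peer_values)
    return {
        "account": account_id,
        "value": value,
        "peer_mean": round(mean(peer_values), 2),
        "z_score": round(z, 2),
        "flagged": abs(z) >= threshold,
        "reason": (f"Monthly volume is {z:+.1f} standard deviations "
                   f"from the peer-group mean"),
    }
```

Because the output carries the peer mean, the z-score, and the threshold that triggered the flag, the same record doubles as documentation of the decision.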
Additionally, consistent feedback and validation processes help continuously improve decisions – so the AI learns directly from the compliance team’s decisions and can better support them in the future.
Hawk AI noted that the AI must not only be transparent at the outset of its application – since it improves independently through exposure to new data, these optimizations must remain understandable as well. To this end, the company says that every change to the AI is documented in the software and requires explicit approval. The AI therefore never evolves without the compliance team being able to understand and control it.
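An approval-gated change process like the one described can be sketched as follows. The class and method names are hypothetical and not Hawk AI’s software; the sketch only demonstrates the pattern: a proposed model change is logged but has no effect until a named reviewer explicitly approves it.

```python
from datetime import datetime, timezone

class AuditedModel:
    """Illustrative model whose parameters change only via a
    propose-then-approve workflow, with every step logged."""

    def __init__(self, threshold: float):
        self.threshold = threshold   # currently active (approved) parameter
        self.pending = None          # proposed update awaiting review
        self.audit_log = []          # append-only history of all changes

    def _log(self, action: str, value: float, note: str) -> None:
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action, "threshold": value, "note": note,
        })

    def propose_update(self, new_threshold: float, rationale: str) -> None:
        """Record a proposed change; the active model is untouched."""
        self.pending = new_threshold
        self._log("proposed", new_threshold, rationale)

    def approve(self, reviewer: str) -> None:
        """Only an explicit approval activates the pending change."""
        if self.pending is None:
            raise ValueError("no pending update to approve")
        self.threshold = self.pending
        self._log("approved", self.threshold, f"approved by {reviewer}")
        self.pending = None
```

The audit log gives the compliance team a complete, timestamped record of what changed, when, why, and who signed off.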
Hawk AI concluded: “AI anti-money laundering is ready – with Hawk AI, it is transparent and secure.” For these reasons, Hawk AI speaks of “white box AI”: in contrast to “black box AI,” its technology is completely understandable to compliance teams. As a result, the company says, its software provides complete control and security. The application of artificial intelligence in finance is revolutionizing the fight against financial crime.
“Technology-driven anti-money laundering not only significantly outperforms traditional systems in terms of efficiency and effectiveness, but is also particularly forward-looking due to its ability to learn from patterns of criminal behavior. Therefore, in the long term, the use of artificial intelligence to fight financial crime will become an industry standard. This technology has been proven in practice for many years. Even in very large financial institutions it is already in use today, or at least established in initial pilots.”
Hawk AI has partnered with Diebold Nixdorf, a leader in enabling connected commerce in finance and retail, to expand the reach of the former’s AML solutions.