The European Union is working on legislation on artificial intelligence (AI). The American information technology company IBM is arguing for ‘precision regulations’ and proper coordination of European and American regulations.
Over the years, IBM has developed a distinct vision of AI regulation and has not hesitated to make it public. ‘As early as January 2020 we argued for a risk-based approach. The European Commission is following that too,’ says Christina Montgomery, chief privacy officer and vice president at IBM. At the same time, she sees room for improvement.
- Europe is working on legislation for artificial intelligence. It focuses on the risks.
- IBM also advocates an approach that regulates the highest risks.
- It must be clear where the responsibility lies.
- Transatlantic cooperation is important, IBM emphasizes. Especially for small and medium-sized companies.
In its approach, the Commission focuses on the risks associated with the use of AI. The red list covers AI applications that enable broad tracking of people, mapping of their behavior and manipulation of their decisions. Those applications are prohibited; only military applications are exempt.
A second list covers high-risk applications in key sectors such as health, climate, robotics, government and policing. These must be approved before they can be placed on the European market. Data in an AI system can reinforce prejudice or discrimination, for example by systematically selecting men in job applications, or by refusing credit to certain minority groups. The third category is a list of low-risk applications, such as chatbots.
IBM provides general-purpose AI technology, Montgomery explains. Artificial intelligence has a long life cycle and is more complex than many other technologies. It is therefore crucial to determine carefully who is responsible for which part of the chain, and the European texts could use more clarity here. ‘The responsibility should lie with those closest to the risk, who can best manage it,’ says Montgomery.
Montgomery agrees with the Commission’s aim of emphasizing human oversight. ‘Transparency in the use of AI is important. Our research department has worked for years on tools to detect bias and to explain results.’
‘We use AI to improve human decisions, not to replace humans. For an application in which AI decides who gets a certain job, there must at least be human oversight.’
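The kind of bias-detection tooling Montgomery describes can be illustrated with a minimal sketch. It compares selection rates between two groups in a hiring scenario; all data, names and the 0.8 threshold (the so-called four-fifths rule) are illustrative assumptions, not IBM’s actual tools:

```python
def selection_rate(decisions):
    """Fraction of positive (selected/approved) decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    A value below 0.8 is a common flag for possible bias."""
    lower, higher = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lower / higher

# Hypothetical screening outcomes: 1 = selected, 0 = rejected
men = [1, 1, 1, 0, 1, 1, 0, 1]      # selection rate 0.75
women = [1, 0, 0, 1, 0, 0, 1, 0]    # selection rate 0.375

ratio = disparate_impact_ratio(men, women)
print(round(ratio, 2))  # 0.5, below the 0.8 flag, so a human should review
```

A check like this does not prove discrimination; it flags a disparity that, as the article argues, a human supervisor should then examine.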
Another tricky example is facial recognition. ‘The general principles are that an application must be fair and explainable, and that only the minimum of necessary data may be used.’ That led IBM to refuse to provide facial recognition technology for detecting Covid-19. ‘There are other ways to do that than taking a photo and running it through AI, because those photos can be kept and processed for other purposes.’ On that point, IBM takes the same stance as European regulators.
‘We also support the European approach of not fully regulating low-risk AI applications but demanding transparency instead.’ Montgomery gives the example of chatbots: they are low risk, but people have a right to know that they are talking to a chatbot and not to another person.
‘AI has a lot of potential for the good of society. It is important that regulations do not hinder innovation. We favor a precision approach in which only the highest-risk applications are regulated.’
Montgomery welcomes Europe setting a standard for AI. ‘But transatlantic cooperation is very important at this stage. Without mutual coordination you get complexity, and that is bad for small and medium-sized companies.’
It will be at least a few years before the new European regulations come into force. In the meantime, companies using AI can prepare; a mechanism is needed to assign responsibility. At IBM, that takes the form of a council of about 25 members. ‘It is deliberately very diverse in composition, with people from different sides of the company: lawyers, HR people, salespeople and data scientists. The goal is to support a culture of trusted AI across the company.’