
Executive Interview: Paul Nemitz, Principal Adviser on Justice Policy for the European Commission, Brussels 

Proposed AI Act for Europe Would Set Rules, Increase Scrutiny of Practices 

Paul Nemitz, the Principal Adviser on Justice Policy in the European Commission, is well known to US big tech companies. An experienced lawyer, Nemitz leads public policy projects for the European Commission. He was the lead director responsible for putting the General Data Protection Regulation (GDPR), covering privacy and data protection, in place in 2018. Today he is a leading figure in the development of the recently announced European regulations on AI. (See AI Trends April 22.) He recently spent a few minutes talking with AI Trends Editor John P. Desmond about the impact of the EU's proposed new rules on AI.

[Ed. Note: Nemitz is a speaker at the AI World Executive Summit: The Future of AI, to be held virtually on July 14, 2021.]  

AI Trends: The European Commission on April 21 released proposed regulations governing the use of AI in a legal framework proposal being called the Artificial Intelligence Act. What was your role in the development of this proposal, and what is the goal? 

Paul Nemitz, Principal Adviser on Justice Policy for the European Commission, Brussels

Paul Nemitz: As the principal advisor on justice policy, I deal with a lot of issues in the triangle between law, democracy, and technology. And so my input to this proposal was of a strategic nature. The purpose of this proposal is on one hand to create and strengthen the European internal market on AI, but also to manage risks relating to AI. In particular, risks relating to fundamental rights, what you would call constitutional rights or civil liberties of people, and the rule of law.  

The draft proposal has far-reaching implications for big tech companies including Google, Facebook, Microsoft, Amazon, and IBM, all of which have invested substantially in AI development. What is your hope for how the big tech companies will respond?

We have already received a lot of positive responses to this proposal. I think any tech company that has a responsible attitude toward innovation and engineering also has a responsible attitude toward the safety of its products, toward democracy functioning well, and toward the fundamental rights of people being respected. Such companies will take this proposal up constructively, because this is not a proposal which hinders the technology. It's a proposal which makes this technology safe and trustworthy. And I would think that those who go along with this constructively will have, in the long term, a much more sustainable profit perspective than those who fight the principles of responsible innovation and responsible engineering.

Are the proposed rules subject to change as you work through the approval process? How long is that expected to take? 

Yes, it’s a legislative process in our democracy, namely the European Parliament, which is elected by the people, and the Council of Ministers, which represents the governments of the 27 member states of the EU. This legislative process is a defining process for cutting-edge issues. Certainly we will see some new ideas and some better ideas coming out of these deliberations. Experience shows that legal instruments look different at the end of the process than they look at the beginning. To give an example, the General Data Protection Regulation (GDPR) had just under 4,000 amendments to work through in the European Parliament, before it was adopted. And that process took six years. 

I would think here, it will not take that long. It will probably take two years and there will certainly be changes. And in some cases, certainly better solutions than what has been proposed so far. 

The EU has for the past decade been an aggressive watchdog of the tech industry, with policies such as the GDPR around data privacy becoming blueprints for other nations. Is that the hope for the AI Act? What moves the EU to put itself into this watchdog role? 

It was never our intention with GDPR to conquer the world. The motivation for GDPR, and it is a similar motivation for AI, is that we want to make sure that our people can benefit from the data economy and from high technology, but in a way which secures the good functioning of our democracy, a good respect for individuals' fundamental rights, and a good respect for the rule of law. We want to make sure that technology also operates within those worlds, and that nothing is done by technology or AI which would be illegal for individual human beings to do. This is the motivation.

We have other proposals on the table which are more related to competition aspects. But this proposal basically is one which serves to give a frame to AI as a technology, which we believe will be as ubiquitous as electricity, that is, present everywhere. And it's a very powerful technology which can bring great improvements, great public-interest services, and great productivity gains, and which also contains risks. These risks need to be mitigated and managed.

Is there a risk that the European AI companies will be at a disadvantage operating under the proposal? 

No, I don't think so at all, because these rules will apply to any AI which enters our market. So there will be a level playing field. It doesn't matter whether the AI comes from outside or inside the European Union. It is the same with GDPR. Also, by having one rule, we create a common market of 27 member states for these types of products.

If we did not do this, we would have 27 different sets of rules, and that would be much worse for both our own companies and companies from the United States. They can now more easily sell and make money in the whole of Europe, rather than having to do it differently in each of the 27 member states.

The AI Act proposes a European AI Board, made up of regulators from each member country. How do you envision that board working? Would it be handling complaints made about inappropriate ways AI might be used? 

Policing compliance with these rules will be handled in large part by the national regulators, and only in very limited circumstances at the EU level. The board will have an advisory function to the European Commission in the development of policy and the implementation of this regulation.

Some have said the proposal is too vague in certain areas, which could lead to legal disputes. Who decides, for example, if the use of an AI system is detrimental to one or more groups? 

That is the nature of the law, as it is crafted in language. And the first ones to interpret this law are those who have to comply with it, namely the companies who produce AI or put it on the market, supported by their lawyers. Then, if they have doubts, they can of course be in a dialogue with the authorities responsible for policing the implementation. And eventually, remaining issues will be resolved by the courts.

Now, let me say something about technology-neutral regulation. We need to put rules in place in this world of fast-paced innovation, in technology but also in business models, which are open enough in terms of their language that they don't become meaningless tomorrow. What does this mean? It means we cannot use the buzzwords of the day, but must instead use language of a conceptual nature, which can be reinterpreted as the technology develops and as business models develop. And so there's a certain tradeoff between openness for future innovation in the legal text versus legal certainty today. And I'm sure that in the legislative deliberations, the right balance between these two important objectives will be found.

Do you have an example of a system that can cause harm by manipulating behavior? 

Let's take a very practical example of what happened in the Cambridge Analytica case. People were, without knowing it themselves, manipulated in terms of what they saw on screen and how they were targeted for election campaign messages. The messages were tailor-made for them, rather than the key message of the political party being spread evenly to everybody. So this type of manipulative nudging for elections undermined the ability of individuals to decide on their political preferences, because it distorted what they saw of the political party up for election. It undermined the good functioning of democracy and is an example where harm was done.

What is your view of US Section 230, which says information services providers shall not be treated as publishers of information that originates from content providers?

Now we are leaving the AI regulation and turning to another legislative proposal, called the Digital Services Act (DSA), which is about the behavior and responsibility of platforms. This is where the parallel to Section 230 in the US lies. Section 230 is an old law, introduced in the US as part of the Communications Decency Act passed in 1996. In Europe, a similar provision was introduced in Article 14 of the E-Commerce Directive adopted in 2000, basically copied from the US. The discussion today, in the US and in Europe, is about whether it continues to be right to say that platforms, even the biggest platforms, carry no responsibility whatsoever for the content that third parties put onto them, whether communications, videos, writing, or pictures. This is an issue shared on both sides of the Atlantic.

So it's a great demonstration that we actually have common problems in the digital economy. On both sides of the Atlantic, legislative discussions are underway to move forward, to ensure a greater degree of responsibility is taken by the big platforms for what's happening on their networks. These networks, like YouTube, like Twitter, like Facebook, are now used by more than 40% of the population in the US and the EU to form their political opinions. We have networks that spread child pornography, terrorist recruitment content, or for that matter propaganda coming from foreign countries, financed and organized by states. We also have systematic false messages, fake news, and fabricated fantasy stories gaslighting our impression of our society.

This is an important responsibility, and the DSA serves to strengthen the mechanisms of responsibility which the platforms, in particular, will be subject to. The US discussion on Section 230 very much goes in the same direction. I hope that on both sides of the Atlantic, we will come to solutions which converge. But one thing is clear: the old recipes, which were basically meant as a subsidy to help the growth of the nascent internet industry, cannot be the same at a time when the internet companies that provide these platforms are the biggest companies on the stock exchange. They must carry much greater responsibility, and they have the means to carry out this responsibility because they are highly profitable.

Google has announced the phaseout of cookies in its browser in 2022. Does this move Google in what is, in your view, the right direction on data privacy? What more could Google do in this area?

I can't comment too much on individual company policies, but one thing is clear. Google has not said that it will stop collecting personal data on people, profiling people, and making money in this way. So basically, the "stalker economy" business model, as Al Gore has called it, continues. Google has realized that it has so many visitors directly on its own website premises, like in Google Search or on YouTube, that even without following people by means of cookies to other websites and around the web, it can still assemble a huge amount of personal information about people. And it may also have other means to track people's behavior offline and online. It is public knowledge that Google also buys data from a lot of other sources, like credit card data. And buying Fitbit will provide a lot of personal health data and behavioral data on people.

And Google has location data provided through the Android mobile phone system and its mapping system, Google Maps. So I wouldn't say that Google has become a data protection and privacy hero by doing away with cookies. But it's good if companies feel the pressure to have more respect for people's personal data and privacy. And in the end, the question will be whether a business model which relies on totally stripping down individuals, basically making them naked before the algorithm in order to be able to sell advertising, is sustainable in the long run. I don't think it is.

 

Read the draft proposal of the Artificial Intelligence Act of the European Commission. 

