Interview Christiaan van Veen

Christiaan van Veen is the Director of the Digital Welfare States and Human Rights Project at the Center for Human Rights and Global Justice, which is based at New York University School of Law. Van Veen’s work focuses on the impact of state digitalization on human rights, including in the area of social protection and assistance. Van Veen has extensive experience in the field of international human rights law. He previously served as a special advisor on new technologies and human rights to the United Nations Special Rapporteur on extreme poverty and human rights. Van Veen has also been a consultant for the Office of the United Nations High Commissioner for Human Rights. He has undertaken numerous human rights fact-finding missions to countries around the world, including Chile, Romania, Mauritania, China, Saudi Arabia, the United States and the United Kingdom.

  1. What are the latest trends in the digitalization of welfare states?

Governments in the Global North have been experimenting with the use of digital technologies and data in the context of their welfare states for decades now. Earlier digitalization efforts in the welfare state focused mostly on streamlining internal processes, such as information management. We then saw a trend toward moving existing government services online and the introduction and updating of government websites and portals, often under the heading of ‘e-government’. In recent years, there has been much talk of, and many related government reports and strategies about, ‘digital government’ and ‘digital transformation’. What we are witnessing is that digital technologies are becoming integral to key stages and functions of social protection and assistance programs, from enrollment and registration to decision-making, communication with beneficiaries, enforcement and investigation of risks and needs, as well as actual service delivery. Governments are also increasingly aiming to connect separate ‘data silos’ and make use of the large quantities of digital data available in the private sector. More recently, we also see that governments are increasingly experimenting with the use of Artificial Intelligence tools, including in the welfare state, although these are still relatively early days.

  2. What are the three biggest challenges in the digitalization of welfare states in terms of respect of ethics and human rights?

My work relates to the risks and opportunities of digital welfare states from the perspective of human rights. Three major challenges from that perspective relate to democratic oversight, transparency and accountability. First, technological innovation in social protection systems is still often perceived as a technocratic fix with overwhelmingly positive or neutral implications for the human rights of individuals. However, digitalization is driven by political preferences and has unequal impacts on individuals, including in relation to their rights. Second, and relatedly, many of the digitalization projects within government, including in the welfare area, happen under the political radar and with limited involvement of legislatures, media and the general public. Oftentimes, the technology itself is difficult to understand or otherwise shielded from scrutiny. Third, this means that those responsible for digital innovation in social protection and assistance are not often held to account, even when rights violations may be at stake. Because many of these developments happen away from the spotlight, or sometimes even explicitly in secret, it is exceedingly difficult for individuals who have been affected to realize their right to a remedy when their rights and interests are harmed.

  3. How can we ensure the digitalization of social services remains ethical?

Let me first comment on the framing of this question. Because much technological innovation comes from the private sector, there has been fierce resistance from this corner against any regulation of digital technologies. Big Tech and other parts of the technology industry have, once their products and services came under more intense scrutiny, proposed that self-regulation, guidelines and ethical values regulate the technologies they produce. In short, the fact that ‘ethics’ are often invoked in discussions about digitalization is an outcome of this corporate resistance to regulation and not a neutral framing.

Slowly, for instance in the recent White Paper on Artificial Intelligence by the European Commission, we see a shift in the debate towards regulating technology via law and towards underlining its human rights impact rather than its ethical impact. That makes a difference because human rights are legal norms and can be invoked before national and international courts and other accountability mechanisms. Ethics, on the other hand, are not legal norms; they are often ill-defined and mostly lack attached accountability mechanisms. I am not against ethical digitalization, obviously, but I am more interested in helping to ensure that technology complies with public regulation and does not contribute to human rights violations.

To ensure that the digitalization of social services complies with existing human rights law, we need to be aware of the challenges involving democratic oversight, transparency and accountability mentioned above. There appears to be a gradual shift in that regard, with increasing attention from parliaments, media, other oversight bodies and the general public for the fact that individual rights are at stake. Human rights law needs to be taken into account at every stage, from political decision-making to implementation. There is also an important procedural dimension here: to make sure human rights are taken into account, the individuals and groups affected by a technological innovation must be consulted every step of the way, from the political decision-making stage to the actual implementation.