The AI research in the area of peace, justice and security at each of the universities in Zuid-Holland complements the AI research being performed by the other two. Three researchers explain.
Bram Klievink works at Leiden University’s Campus The Hague as Professor of Digitalisation and Public Policy. His collaborative work on AI in Zuid-Holland began with a research project at the intersection of cybersecurity and governance, together with his TU Delft colleague Michel van Eeten. They worked with the AIVD on research into identifying and understanding digital threats, for instance by developing new instruments to measure the threat level of critical networks and to recognise attacker behaviour.
Now, Klievink aims to further integrate Leiden’s governance and policy expertise with Delft’s socio-technical take on cybersecurity. The universities also work together on other research into the use of algorithms in a governance context.
Klievink gives the example of the LDE Centre for BOLD Cities, where the three universities are researching how Big, Open and Linked Data (BOLD) can work in, with and for cities. Klievink: ‘The centre brings together, among others, a sociological perspective, urban studies, public administration, media studies and technology. We have developed a joint multidisciplinary minor, which started last autumn.’
Klievink believes the collaboration in his field between the three Zuid-Holland universities is successful because there are similarities and differences in their expertise and focal points. ‘Rotterdam is strong in law and corporate aspects. And of course Delft is known for technology expertise, which means both fundamental work on AI and its applications. What people are less aware of is that Delft also knows a lot about governance and ethics, which is what Leiden has a name for. Alongside this expertise in governance, policy, law and normative aspects, Leiden is also strong in technology.’ He adds that such a general organisational outline fails to do justice to the knowledge possessed by all the specific groups and researchers at the universities.
Delft professor of cybersecurity Michel van Eeten explains how researchers at the different universities also bump into one another outside their universities. ‘I’m on the Cyber Security Board with Bibi van den Berg, professor of cybersecurity governance at Leiden, for instance.’
Rotterdam professor of law and economics Klaus Heine sees his fundamental research, where he compares data with nuclear energy, as complementary to that of the other universities. ‘That too is valuable. Obviously we exchange techniques with Delft University of Technology, for instance. This is something management should want more of. Then we could be world leading.’
Below, more about Heine’s work, and that of Michel van Eeten and Bram Klievink. They all conduct AI-related research in the field of peace, justice and security.
How to encourage businesses to make the internet secure
TU Delft – Michel van Eeten
Professor of cybersecurity Michel van Eeten’s work sometimes gives businesses the heebie-jeebies. ‘Take hosting companies: they’re often fairly positive about their network security, sometimes undeservedly. For the Ministry of Justice, we researched the hosting of images of child sexual abuse. How good are hosting companies at monitoring whether their customers post such images, and what do they do if that does happen?’
No need to name and shame
Under political pressure the name was published of the company that proved to have hosted over 90 percent of the material. Van Eeten, who specialises in cybersecurity governance, sees no need for such naming and shaming. There are more effective ways to solve the problems, he says.
The recurring theme in Van Eeten’s work is examining the relationship between technology and behaviour within the scope of cyber risks. ‘We look for vulnerabilities in systems that are linked to the internet, and then look at how we can contact the owners of these systems in the event of any leaks and what will spur them or the internet provider on to solve these problems.’ Van Eeten and his team develop and use techniques to measure whether it is safe to use connected devices and whether data travels safely over the digital highway.
Two companies responsible for hacked devices
Van Eeten’s team carried out a large-scale analysis of cameras, digital video recorders, thermostats and other connected devices. ‘The Internet of Things is sometimes very insecure and susceptible to hackers and malware.’ He made an important discovery. ‘There are tens of thousands of manufacturers, but just two of them, important players, proved responsible for half of all the hacked devices.’
Step one is for the Ministry of Economic Affairs and Climate Policy to send a letter to such companies. Talking to manufacturers solves a lot of problems already, says Van Eeten: ‘The threat of the EU blocking these companies from the European market, which would cost them one of their biggest markets, should only be brought in later.’
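The kind of concentration Van Eeten describes can be sketched in a few lines. The vendor names and counts below are invented for illustration; the point is only how one would measure what share of hacked devices the two biggest manufacturers account for.

```python
from collections import Counter

# Hypothetical per-manufacturer counts of hacked devices (invented numbers,
# chosen so that two vendors account for half of the total).
hacked = Counter({"VendorA": 280, "VendorB": 240,
                  "VendorC": 65, "VendorD": 65, "VendorE": 65, "VendorF": 65,
                  "VendorG": 65, "VendorH": 65, "VendorI": 65, "VendorJ": 65})

total = sum(hacked.values())
top_two = sum(count for _, count in hacked.most_common(2))
share = top_two / total

print(f"Top two vendors account for {share:.0%} of hacked devices")
# Top two vendors account for 50% of hacked devices
```

With a long tail of thousands of small manufacturers, a measurement like this is what makes the targeted approach plausible: a letter to two companies can address half the problem.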
What big data and nuclear power have in common
Erasmus University Rotterdam – Klaus Heine
Europe is lagging behind the US and China when it comes to technology because privacy is important to us. Professor of law and economics Klaus Heine looks for creative ways to make this a unique selling point. ‘The EU could actually be the safest and most attractive ecosystem for human-centred AI.’
When Facebook purchased WhatsApp this was no problem according to competition law: WhatsApp was only a small company. But all that data… That was not a concern of this branch of law. Klaus Heine, who researches big data and privacy, believes this should have been viewed from the perspective of property law. ‘This related on the one hand to a technology and on the other to big data, which is in effect a kind of fuel.’
This is Heine’s trademark: coming up with the most exciting comparisons, for instance with the past. ‘Around 1950 we faced a similar challenge with nuclear power to the one we face now with big data. On the one hand there was the technology of nuclear physics and on the other the access to nuclear fuel, the radioactive material. The question was: how to make this technology safe but useful to society. The solution that still works today is that nuclear power stations are private but the fuel belongs to government.’
The European Atomic Energy Community (Euratom) harmonises the member states’ research programmes for the development and peaceful use of nuclear power. Euratom is a source of inspiration for Heine. ‘Facebook, Amazon and Google are allowed to use their data technology as long as a kind of Euratom decides whether what they do is in the interest of society. This is how Europe could be the place to be for safe new technology.’
Heine: ‘With cybersecurity issues I always try to work out where in society you can see something similar at play. When Maastricht University was held to ransom with ransomware, I could see an analogy with a flood when the dyke breaks. Then the army is called in. In such a scenario I can imagine volunteers and commandos coming to repair the infrastructure. The framework for a kind of cyber militia has already been discussed, by the US Department of Defense, for instance.’
‘The government’s biggest AI challenge is that no system is ever neutral’
Leiden University – Bram Klievink
Using artificial intelligence is more complicated for the government than for companies. Professor of public administration Bram Klievink and his colleagues identify the problems and find solutions for digitalisation in public policy.
‘If half of the books that Amazon recommends to you aren’t interesting, it’s not really an issue. But if the government makes mistakes in just one-tenth of a percent of cases, this can be very serious; for example, when it tries to identify fraudsters, as the Dutch government’s System Risk Indicator (SyRI) did,’ explains Bram Klievink. The system, used since 2014 to prevent fraudulent benefit claims, created risk profiles on the basis of data about fines, compliance and education, among other factors. Although your data remained encrypted and anonymous until you emerged as a potential fraudster, the court decided that the violation of the right to a private life was too great. It shows that although the government has considerable scope, it is more restricted than, let’s say, Facebook.
Policy analysis with social media data
Klievink and his team are also researching whether social media data are useful for analysing the effects of policies. There are many technical possibilities: ‘You can do sentiment analysis, for example, and try to assess the level of support for policies. But when you use a technique like that, you always make choices,’ Klievink explains. ‘And even minor choices and deliberations can have unexpected or unintended consequences for how the outcomes will be used.’ If you analyse Twitter data, for example, you have to set the point at which your system identifies an account as a human or a bot. Is it ten tweets a day or a hundred? And how many conversation topics does your topic model distinguish? Will it be five large but general topics, or do you choose a refined model with twenty more specific topics? ‘These choices are never neutral, but we can’t avoid making them.’
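How much such a threshold choice matters can be shown with a toy sketch. The accounts and tweet rates below are invented; the only point is that the same account is labelled differently under two defensible threshold settings, which changes whose voices feed into the downstream sentiment analysis.

```python
# Hypothetical accounts with their average tweets per day (invented data).
accounts = {"alice": 4, "news_feed": 35, "auto_poster": 250, "bob": 12}

def classify(accounts, bot_threshold):
    """Label an account a bot when its daily tweet rate exceeds the threshold."""
    return {name: ("bot" if rate > bot_threshold else "human")
            for name, rate in accounts.items()}

strict = classify(accounts, bot_threshold=10)    # ten tweets a day
lenient = classify(accounts, bot_threshold=100)  # a hundred tweets a day

# The same account flips between "bot" and "human" depending on the setting.
print(strict["news_feed"], lenient["news_feed"])  # bot human
```

Under the strict setting, three of the four accounts are excluded as bots; under the lenient one, only one is. Neither setting is wrong, which is exactly the non-neutral choice Klievink describes.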
Decision-makers and technicians
Dilemmas relating to these choices will often stay hidden, because policy-makers and the technicians who create the systems don’t speak each other’s language. ‘The AI specialist often has technical and methodological expertise, but lacks the necessary content expertise to foresee the consequences of the choices that are made. Conversely, the policy-maker often doesn’t know what knobs the technician can turn, exactly what their settings are, and what this means for the outcomes.’ Klievink therefore concludes that the collaboration between people from diverse disciplines working on public AI projects can never be close enough.