Open letter on the feasibility of “Chat Control”: Assessments from a scientific point of view

Source: https://www.ins.jku.at/chatcontrol/


Update: A parallel initiative addresses the EU institutions directly and is available in English as the CSA Academia Open Letter. Since very similar arguments were formulated independently and in parallel, the two letters reinforce each other.

The EU Commission initiative discussed under the name “Chat Control” – the suspicionless monitoring of various communication channels to detect child sexual abuse material, terrorist or other “undesirable” content, including attempts at early detection (e.g. the “grooming” of minors through trust-building text messages), made mandatory for mobile devices and communication services – has recently been expanded to cover direct audio communication as well. Some states, including Austria and Germany, have already publicly declared that they will not support this initiative for suspicionless surveillance. Civil society and children’s rights organizations have likewise rejected the approach as excessive and, at the same time, ineffective. Recently, even the legal service of the EU Council of Ministers found it incompatible with European fundamental rights. Regardless of this, the draft is being tightened further and extended to additional channels – in the latest version even to audio messages and calls. The approach appears to be coordinated with corresponding efforts in the US (the “EARN IT” and “STOP CSAM” Acts) and the UK (the “Online Safety Bill”).

As scientists actively researching various areas of this topic, we therefore state clearly: this proposal cannot be implemented securely and effectively. No foreseeable development of the relevant technologies would make such an implementation technically possible. Moreover, in our assessment, the hoped-for effects of these surveillance measures will not materialize. The legislative initiative therefore misses its target, is socio-politically dangerous, and would permanently damage the security of our communication channels for the majority of the population.

The main arguments against the feasibility of “Chat Control” have already been raised many times. In the following, we discuss them specifically at the interdisciplinary intersection of artificial intelligence (AI), security (information security / technical data protection), and law.

Our concerns are:

  1. Security: a) Encryption is the best method for security on the internet; successful attacks are almost always due to faulty software. b) Systematic and automated monitoring (i.e. “scanning”) of encrypted content is technically possible only if the security achieved through encryption is massively undermined, which brings considerable additional risks. c) A legal obligation to integrate such scanners would make secure digital communication in the EU unavailable to the majority of the population, while having little effect on criminal communication.
  2. AI: a) Automated classification of content, including methods based on machine learning, is inherently error-prone and will in this case lead to high false-positive rates. b) Scanning methods that run on end devices open up additional attack possibilities, up to and including the extraction of potentially illegal training material.
  3. Law: a) A sensible demarcation from explicitly permitted uses of specific content – for example in education or for criticism and parody – does not appear to be automatable. b) The massive encroachment on fundamental rights by such an instrument of mass surveillance is not proportionate and would cause great collateral damage in society.

In detail, these concerns are based on the following scientifically recognized facts:

  1. Security
    1. Encryption using modern methods is an indispensable foundation for practically all technical mechanisms that maintain security and data protection on the internet. It currently protects communication on the internet as the cornerstone of everyday services, up to and including critical infrastructure such as telephone, electricity and water networks, hospitals, etc. Experts place significantly more trust in good encryption methods than in other security mechanisms. The generally poor average quality of software is, above all, the reason for the many publicly known security incidents; improving this situation therefore relies primarily on encryption.
    2. Automatic monitoring (“scanning”) of correctly encrypted content is, according to the current state of knowledge, not effectively possible. Procedures such as Fully Homomorphic Encryption (FHE) are currently not suitable for this application: neither are the schemes capable of this, nor is the necessary computing power realistically available. Rapid improvement is not foreseeable here either.
    3. For these reasons, earlier attempts to ban or restrict end-to-end encryption were mostly abandoned quickly at the international level. The current Chat Control push instead aims to have monitoring functionality built into end devices in the form of scanning modules (“client-side scanning”, CSS), which scan the plaintext content before secure encryption or after secure decryption. Providers of communication services would be legally obliged to implement this for all content. Since this is not in the core interest of such organizations and entails implementation and operating effort as well as increased technical complexity, it cannot be assumed that such scanners would be introduced voluntarily – in contrast to scanning on the server side.
    4. Secure messengers such as Signal, Threema and WhatsApp have already publicly announced that they will not implement such client-side scanners but will instead withdraw from the affected regions. Depending on the use case, this has different implications: (i) (adult) criminals will simply communicate with each other via “non-compliant” messenger services and continue to benefit from secure encryption; the extra effort, for example installing apps on Android via sideloading when they are not available in the local app stores, is no significant hurdle for criminal actors. (ii) Criminals communicate with potential future victims via popular platforms, which would be the target of the mandatory surveillance measures under discussion; here it can be assumed that informed criminals will quickly lure their victims to alternative but still widely accepted channels such as Signal that are not covered by the monitoring. (iii) Participants exchange problematic material without being aware that they are committing a crime; this case would be reported automatically and could also lead to the unintentional criminalization of minors. The restrictions would therefore primarily affect the broad – and blameless – mass of the population. It would also be illusory to think that, without built-in monitoring, secure encryption could still be rolled back: tools like Signal, Tor, Cwtch, Briar and many others are widely available as open source and operate beyond central control. Knowledge of secure encryption is common knowledge and can no longer be suppressed. There is no effective way to technically block the use of strong encryption other than client-side scanning (CSS), and if surveillance measures are mandated in messengers, only criminals whose actual crimes outweigh the violation of the surveillance obligation will retain their privacy.
    5. Furthermore, the complex implementation required by the proposed scanner modules creates additional security problems that do not exist today. On the one hand, the scanners are new software components that will, in turn, be vulnerable. On the other hand, the Chat Control proposals consistently assume that the scanner modules themselves remain confidential, both because they would be trained on content whose mere possession is punishable (and be built into the messenger app) and because an extracted scanner can be used to test evasion methods. It is an illusion, however, that machine-learning models or other scanner modules distributed to billions of devices under the control of end users could ever be kept secret. Apple demonstrated this with its “NeuralHash” module for CSAM detection, which was extracted from the corresponding iOS versions almost immediately and is thus openly available. The assumption in the Chat Control proposals that these scanner modules could be kept confidential is therefore completely unfounded; corresponding data leaks are almost unavoidable.
  2. Artificial Intelligence
    1. We must assume that machine-learning (ML) models on end devices cannot, in principle, be kept completely secret. This contrasts with server-side scanning, which is currently legally possible and actively practiced by various providers on content that is not end-to-end encrypted. ML models on the server side can be reasonably protected from extraction with the current state of the art and are less the focus of this discussion.
    2. A general problem with all ML-based filters is misclassification: known “undesirable” material with small changes is not recognized as such (a “false negative” or “false non-match”). For parts of the proposal it is currently unknown how ML models are supposed to recognize complex, previously unseen material in changing contexts (e.g. “grooming” in text chats) with even approximate accuracy; high false-negative rates are therefore likely. In terms of risk, however, it is considerably more serious when harmless material is classified as “undesirable” (a “false positive”, “false match” or “collision”). Such errors can be reduced but can never be ruled out in principle. Besides falsely accusing uninvolved persons, false positives also generate (potentially very) many spurious reports for the investigative authorities, which already lack the resources to follow up on reports.
    3. The assumed open availability of the ML models also creates various new attack possibilities. Using the example of Apple’s NeuralHash, random collisions were found very quickly, and programs to generate arbitrary collisions between images were freely released. This method, also known as “malicious collisions”, uses so-called adversarial attacks against the neural network and enables attackers to deliberately make harmless material register as a “match” in the ML model and thus be classified as “undesirable”. In this way, innocent people can be deliberately harmed and placed under suspicion by automatic false reports – without any illegal action by either the attacked person or the attacker.
    4. The open availability of the models can also be exploited for so-called “training input recovery” to extract (at least partially) the content used for training from the ML model. For prohibited content (e.g. child sexual abuse material), this poses another massive problem and can further increase the harm to those affected, since their sensitive data (e.g. abuse images used for training) can continue to circulate. Because of these and other problems, Apple, for example, withdrew its proposal. We note that this last danger does not arise with server-side scanning by ML models; it is newly introduced by the Chat Control proposal’s client-side scanner.
  3. Legal Aspects
    1. The right to privacy is a fundamental right that may be interfered with only under very strict conditions. Whoever exercises this fundamental right must not be suspected from the outset of wanting to hide something criminal. The often-used phrase “If you have nothing to hide, you have nothing to fear!” denies people the exercise of their fundamental rights and promotes totalitarian surveillance tendencies. The use of Chat Control would fuel exactly this.
    2. The field of terrorism in particular overlaps, in its breadth, with political activity and freedom of expression. Against precisely this background, the “pre-emptive criminalization” that has increasingly taken place in recent years under the guise of fighting terrorism is viewed particularly critically. Chat Control measures point in the same direction: they can severely curtail this fundamental right and bring politically critical people into the focus of criminal prosecution. The resulting severe restriction of politically critical activity hinders the further development of democracy and risks fostering radicalized underground movements.
    3. Law and the social sciences include researching criminal phenomena and questioning regulatory mechanisms. From this perspective, scientific discourse itself runs the risk of being flagged as “suspicious” by Chat Control and thus indirectly restricted. The possible stigmatization of critical legal and social science stands in tension with the freedom of science, whose further development also requires research independent of the mainstream.
    4. Education requires raising young people to be critically aware, which includes conveying facts about terrorism. With Chat Control in use, providing such teaching material could bring teachers into the focus of criminal prosecution. The same applies to addressing sexual abuse, so that control measures could further taboo this sensitive subject, even though “self-empowerment mechanisms” are supposed to be promoted.
    5. Interventions in fundamental rights must always be suitable and proportionate, even when made in the context of criminal prosecution. The technical considerations presented show that Chat Control does not meet these requirements. Such measures therefore lack any legal or ethical legitimacy.
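
To make the limitation described in point 1.2 concrete: widely deployed encryption schemes support at most isolated algebraic operations on ciphertexts. The following minimal Python sketch (textbook RSA with toy parameters, deliberately insecure) illustrates such a partially homomorphic property; content scanning would instead require arbitrary computation over ciphertexts, i.e. fully homomorphic encryption, at a cost that is currently orders of magnitude too high for messaging scale.

```python
# Textbook RSA with tiny demo parameters (NOT secure), shown only to
# illustrate a *partially* homomorphic property: multiplying ciphertexts
# multiplies the underlying plaintexts. Scanning encrypted content would
# require arbitrary computation over ciphertexts (fully homomorphic
# encryption), which is currently far too slow for deployment at scale.
p, q, e = 61, 53, 17
n = p * q                          # public modulus
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 12
product_ct = (enc(a) * enc(b)) % n     # operate on ciphertexts only
assert dec(product_ct) == (a * b) % n  # decrypts to 84: homomorphic multiply
```

Note that even this single supported operation (multiplication) says nothing about running a classifier or comparison under encryption, which is what “scanning encrypted content” would actually demand.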
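
The false-positive concern from point 2.2 can be quantified with a simple base-rate calculation. All numbers below are assumptions chosen purely for illustration, not measured values:

```python
# Back-of-the-envelope estimate of report quality under mass scanning.
# All numbers are illustrative assumptions, not measured values.

def expected_reports(n_messages, prevalence, detection_rate, false_positive_rate):
    """Expected true and false reports when scanning n_messages."""
    illegal = n_messages * prevalence
    legal = n_messages - illegal
    true_reports = illegal * detection_rate          # correctly flagged
    false_reports = legal * false_positive_rate      # innocent content flagged
    return true_reports, false_reports

# Assumed: 1 billion messages per day, 1 in a million is illegal,
# a scanner with a 99% detection rate and a 0.1% false-positive rate.
true_r, false_r = expected_reports(1_000_000_000, 1e-6, 0.99, 0.001)
precision = true_r / (true_r + false_r)

print(f"true reports/day:  {true_r:,.0f}")                    # about 990
print(f"false reports/day: {false_r:,.0f}")                   # about 1,000,000
print(f"share of reports that are correct: {precision:.2%}")  # about 0.10%
```

Even with a scanner far better than anything demonstrated so far, roughly a thousand false reports would be generated for every correct one, because harmless messages outnumber illegal ones so massively.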
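
The “malicious collisions” described in point 2.3 can also be illustrated with a toy model. The sketch below uses a random linear projection as a stand-in for a perceptual hash (a real system such as NeuralHash is a deep network, but the principle of small targeted perturbations that force a chosen hash is the same); all names and parameters here are invented for the illustration:

```python
import random

random.seed(0)

DIM, BITS = 64, 16

# Toy stand-in for a perceptual hash: the sign pattern of random linear
# projections. Real scanners use deep networks; the attack principle
# (small targeted perturbations) is the same.
W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def toy_hash(x):
    return tuple(1 if dot(w, x) > 0 else 0 for w in W)

def forge_collision(image, target_hash, step=0.01, max_iter=20000):
    """Perturb `image` step by step until its toy hash equals target_hash."""
    x = list(image)
    for _ in range(max_iter):
        h = toy_hash(x)
        if h == target_hash:
            return x
        # nudge along the projection vector of the first mismatched bit
        i = next(k for k, (a, b) in enumerate(zip(h, target_hash)) if a != b)
        sign = 1.0 if target_hash[i] == 1 else -1.0
        for j in range(DIM):
            x[j] += sign * step * W[i][j]
    raise RuntimeError("no collision found")

blocklisted = [random.gauss(0, 1) for _ in range(DIM)]  # hash on a blocklist
harmless = [random.gauss(0, 1) for _ in range(DIM)]     # attacker's input

forged = forge_collision(harmless, toy_hash(blocklisted))
assert toy_hash(forged) == toy_hash(blocklisted)  # now reported as a "match"

max_change = max(abs(a - b) for a, b in zip(forged, harmless))
print("collision forged; max per-element change:", round(max_change, 3))
```

An attacker who can forge such collisions can make entirely harmless images trigger automatic reports against their recipient, which is exactly the targeted-false-accusation scenario described above.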

In summary, the current proposal for Chat Control legislation is technically unsound from both a security and an AI point of view, and highly problematic and excessive from a legal point of view. The Chat Control push brings significantly greater dangers for the general public than possible improvements for those affected and should therefore be rejected.

Instead, existing options for human-initiated reporting of potentially problematic material by recipients, already possible in various messenger services, should be strengthened and made even more accessible. It should be considered whether anonymous reporting channels for illegal material could be created and made easily reachable from within messengers. Existing law-enforcement options, such as the monitoring of social media or open chat groups by police officers and the legally authorized analysis of suspects’ smartphones, can continue to be used.

For more detailed information, please contact:

Security issues:
Univ.-Prof. Dr.
René Mayrhofer

+43 732 2468-4121

rm@ins.jku.at

AI questions:
DI Dr.
Bernhard Nessler

+43 732 2468-4489

nessler@ml.jku.at

Questions of law:
Univ.-Prof. Dr.
Alois Birklbauer

+43 732 2468-7447

alois.birklbauer@jku.at

Signatories

  • AI Austria,
    association for the promotion of artificial intelligence in Austria, Wollzeile 24/12, 1010 Vienna
  • Austrian Society for Artificial Intelligence (ASAI),
    association for the promotion of scientific research in the field of AI in Austria
  • Univ.-Prof. Dr. Alois Birklbauer, JKU Linz
    (Head of the Practice Unit for Criminal Law and Medical Criminal Law)
  • Ass.-Prof. Dr. Maria Eichlseder, Graz University of Technology
  • Univ.-Prof. Dr. Sepp Hochreiter, JKU Linz
    (Head of the Institute for Machine Learning, Head of the LIT AI Lab)
  • Dr. Tobias Höller, JKU Linz
    (post-doc at the Institute for Networks and Security)
  • FH-Prof. DI Peter Kieseberg, St. Pölten University of Applied Sciences
    (Head of the Institute for IT Security Research)
  • Dr. Brigitte Krenn, Austrian Research Institute for Artificial Intelligence
    (Board Member, Austrian Society for Artificial Intelligence)
  • Univ.-Prof. Dr. Matteo Maffei, TU Vienna
    (Head of the Security and Privacy Research Department, Co-Head of the TU Vienna Cyber Security Center)
  • Univ.-Prof. Dr. Stefan Mangard, TU Graz
    (Head of the Institute for Applied Information Processing and Communication Technology)
  • Univ.-Prof. Dr. René Mayrhofer, JKU Linz
    (Head of the Institute for Networks and Security, Co-Head of the LIT Secure and Correct Systems Lab)
  • DI Dr. Bernhard Nessler, JKU Linz/SCCH
    (Vice President of the Austrian Society for Artificial Intelligence)
  • Univ.-Prof. Dr. Christian Rechberger, Graz University of Technology
  • Dr. Michael Roland, JKU Linz
    (post-doc at the Institute for Networks and Security)
  • a.Univ.-Prof. Dr. Johannes Sametinger, JKU Linz
    (Institute for Business Informatics – Software Engineering, LIT Secure and Correct Systems Lab)
  • Univ.-Prof. DI Georg Weissenbacher, DPhil (Oxon), TU Vienna
    (Professor of Rigorous Systems Engineering)

Published on 07/04/2023
