In today’s security research, numerous pressing questions arise that cannot be addressed by established methods of IT security but should nevertheless be considered in the same context. A well-known example is the handling of disinformation. The phenomenon itself is not an IT security issue, but rather a political, sociological or journalistic one.

However, information technology has acted as an accelerant here: it simplifies the creation of such content through large language models and deepfakes, accelerates its spread through social networks and messengers, and amplifies its effect through botnets. As a result, dealing with disinformation on a technical level has also become an issue of IT security in the broader sense.


Detecting Disinformation

Detecting texts that carry disinformation with methods of natural language processing (NLP) and machine learning, but also with conventional methods of computational linguistics, is a common strategy for handling large volumes of text automatically. These tools efficiently pre-select or filter messages; those that are potentially disinformation are then examined in more detail by human experts.
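
The following is a minimal sketch of such a pre-filtering step, assuming a small labelled corpus and a simple TF-IDF representation with a logistic-regression classifier from scikit-learn; the example texts and labels are invented for illustration, and real systems rely on much larger corpora and stronger models such as transformers.

```python
# Minimal sketch of an NLP pre-filter for disinformation. The texts and labels
# below are toy data; labels: 1 = potentially disinformation, 0 = benign.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Miracle cure suppressed by the government, share before it is deleted!",
    "City council publishes the minutes of yesterday's budget meeting.",
    "Secret study proves the outbreak was planned, media stays silent.",
    "Weather service expects rain and moderate wind for the weekend.",
]
labels = [1, 0, 1, 0]

# Cheap first-stage filter: TF-IDF features plus logistic regression
pre_filter = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
pre_filter.fit(texts, labels)

# Messages scored above a threshold are forwarded to human fact-checkers
incoming = "Leaked memo shows the election result was decided in advance."
score = pre_filter.predict_proba([incoming])[0][1]
print("flag for manual review" if score > 0.5 else "pass", f"(score={score:.2f})")
```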

Similarly, image recognition tools are used to detect a strategy frequently employed in disinformation, in which images are taken out of context and presented as supposed evidence for a false claim. Detecting deepfakes and other types of artificially generated multimedia that alter a person's depicted behavior or spoken words in a video is another common example.
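
A minimal sketch of one building block for spotting such out-of-context reuse is a perceptual fingerprint, here a simple average hash computed with Pillow; the file paths and the reference database are assumptions, and production systems use more robust fingerprints and large-scale indexes.

```python
# Minimal sketch of near-duplicate image matching via a perceptual average hash,
# using only Pillow; the file paths and the reference database are hypothetical.
from PIL import Image

def average_hash(path, size=8):
    """64-bit fingerprint: downscale, convert to grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Images that fact-checkers have already documented in their original context
reference_db = {name: average_hash(name) for name in ["archive/original_photo.jpg"]}

# A newly shared image is compared against the database; a small Hamming distance
# suggests the same picture, possibly re-encoded, slightly cropped or recaptioned.
query = average_hash("incoming/shared_image.jpg")
for name, fingerprint in reference_db.items():
    if hamming(query, fingerprint) <= 10:
        print(f"possible out-of-context reuse of {name}")
```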

Finally, analyzing the infrastructures that spread and amplify disinformation, especially in messenger services and social networks, e.g. botnets and combinations of different social media channels, is also a technical aspect of disinformation detection.
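
One simple amplification signal such an analysis can look for is many distinct accounts posting near-identical messages within a short time window. The sketch below assumes hypothetical post records; real analyses combine many such signals with account metadata and graph-based methods.

```python
# Minimal sketch of one amplification signal: many distinct accounts posting the
# same (normalised) message within a short time window. The post records are
# hypothetical toy data.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [  # (account, timestamp, text)
    ("acct_01", datetime(2024, 5, 1, 12, 0, 5), "Breaking: the dam has burst!!!"),
    ("acct_02", datetime(2024, 5, 1, 12, 0, 9), "breaking the dam has burst"),
    ("acct_03", datetime(2024, 5, 1, 12, 0, 12), "Breaking: the dam has burst"),
    ("acct_99", datetime(2024, 5, 1, 18, 30, 0), "Nice sunset at the lake today."),
]

def normalise(text):
    # Lowercase, strip punctuation, collapse whitespace
    return " ".join("".join(c for c in text.lower() if c.isalnum() or c.isspace()).split())

WINDOW = timedelta(minutes=5)
MIN_ACCOUNTS = 3

groups = defaultdict(list)
for account, timestamp, text in posts:
    groups[normalise(text)].append((timestamp, account))

for text, items in groups.items():
    items.sort()
    accounts = {account for _, account in items}
    if len(accounts) >= MIN_ACCOUNTS and items[-1][0] - items[0][0] <= WINDOW:
        print(f"possible coordinated amplification: {len(accounts)} accounts posted '{text}'")
```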


Malicious Content Detection and Filtering in General

Disinformation is just one example of many where technical approaches are used to detect or filter content. Similar methods are also used in the technical implementation of politically controversial upload filters.

Upload filters aim to prevent the distribution of questionable content while it is being transferred to a platform on the internet. A conceivable approach here is to efficiently recognize content that has previously been stored in a database; in the particular case of images and videos, image recognition tools can be used. Another example is combating cyberbullying, where the goal is to recognize and block nude images of people (especially minors) that have been created or shared without consent.
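
A minimal sketch of the database-lookup step of such a filter is given below, checking the SHA-256 fingerprint of an incoming file against a blocklist of previously reported content; the blocklist entry and the file path are placeholders. Exact hashes only catch byte-identical copies, which is why deployed systems also rely on perceptual fingerprints such as the one sketched above.

```python
# Minimal sketch of the database-lookup step of an upload filter: the incoming
# file's SHA-256 fingerprint is compared against a blocklist of previously
# reported content. The blocklist entry and the file path are placeholders.
import hashlib

blocklist = {"0" * 64}  # placeholder fingerprint; real entries come from reporting hotlines

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 fingerprint of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def allow_upload(path):
    """Reject the upload if its fingerprint matches a known reported file."""
    return sha256_of(path) not in blocklist

print("accepted" if allow_upload("uploads/incoming_video.mp4") else "rejected")
```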


Challenges arising from the misuse of data and its unlawful dissemination, be it text, audio, image or video, occur in many scenarios and are repeatedly addressed with similar or identical methods. In REVISE, various tools from the field of IT security are applied to meet these challenges. Potential research goals are:

Goals

Goal 1

Detection of Artificially Generated Multimedia

Deepfakes and voice cloning are the products of AI algorithms that learn to imitate people's voices and facial expressions. As such, they can be used to discredit public figures and undermine the democratic decision-making process. Current deepfakes are so realistic that they can fool humans and software-based recognition techniques alike, hence the need for more robust detection methods.

Goal 2

Combating Disinformation

Spreading false information, whether inadvertently or deliberately, can have severe personal and societal consequences, ranging from the polarization of public opinion to loss of life or even military conflict. Automatic and reliable methods to find and counteract disinformation are therefore urgently needed.

Goal 3

Protection of Minors

Another challenge is protecting minors from, for example, cyberbullying or cybergrooming. The task here is to develop protective mechanisms that on the one hand improve the protection of minors, but on the other hand do not restrict people's rights and personal development.