Projects in REVISE
CRISIS: Cross-Domain Disinformation Analysis
News is increasingly spread and consumed via social media. Since most posts do not undergo any verification process prior to publication, there is a substantial risk that they contain false information. The CRISIS project aims to examine social media posts for disinformation, where the information may take the form of text, images, videos, and audio recordings. Various machine learning methods are to be employed in order to
- trace the dissemination paths of (dis)information and identify themes and trends (Social Media Analytics),
- recognize maliciously "recycled" content, trace it back to its original source, and match circulating information against previously conducted fact-checks (Semantic Similarity Analysis; see the sketch after this list),
- and support the manual fact-checking process by preselecting media, or specific sections of media, that are particularly worth checking (Check-Worthiness Analysis).
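As an illustration of the Semantic Similarity Analysis mentioned above, the following minimal sketch matches post texts against an archive of fact-checks via embedding cosine similarity. It assumes the sentence-transformers library and the all-MiniLM-L6-v2 model; the library, the model, and the example data are illustrative choices, not components named by the project.

```python
# Minimal sketch, assuming the sentence-transformers library and the
# all-MiniLM-L6-v2 model; neither is specified in the project itself.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical post texts and a small archive of fact-checked claims.
posts = [
    "Drinking hot water cures the virus within hours.",
    "The city council approved the new bike lanes yesterday.",
]
fact_checks = [
    "Claim that hot beverages cure viral infections rated FALSE.",
    "Claim that garlic prevents infection rated FALSE.",
]

post_emb = model.encode(posts, convert_to_tensor=True)
check_emb = model.encode(fact_checks, convert_to_tensor=True)

# Cosine similarity between every post and every archived fact-check;
# a high score suggests the post repeats an already debunked claim.
scores = util.cos_sim(post_emb, check_emb)
for i, post in enumerate(posts):
    best = scores[i].argmax().item()
    print(f"{post!r} -> closest fact-check: {fact_checks[best]!r} "
          f"(similarity {scores[i][best].item():.2f})")
```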
The results will be integrated into a demonstrator that will support practitioners, such as journalists and fact-checkers, in investigating and identifying disinformation.
DREAM: Deepfake REcognition and Artificial Media
The DREAM project studies methods for recognizing and identifying synthesized or manipulated media content that has been generated with artificial intelligence. A special focus is placed on detecting manipulations across media types, namely images, videos, and audio, that are generated with the intent to impersonate. So-called deepfakes use "deep learning" to automatically replace faces appearing in images or videos with the face of an arbitrary person. Images can be generated by text-to-image synthesis methods such as DALL-E, Stable Diffusion, or Midjourney. For videos, face-swapping or facial-reenactment techniques such as "lip-sync attacks" can be used. For audio data, the voice of a specific target person is imitated, e.g. through voice conversion or text-to-speech synthesis, so that words can be put into that person's mouth. To gain a better understanding of multimodal manipulations, the project also involves generating these types of fake media.
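One common approach to detecting such manipulated images is to fine-tune a pretrained CNN as a binary real-vs-fake classifier. The sketch below illustrates that idea under assumptions not made in the project description: the ResNet-18 backbone, PyTorch, and the dummy batch are illustrative choices, not the project's method.

```python
# Illustrative sketch of image-manipulation detection as binary
# classification; architecture and training details are assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace the head
# with a single logit for the "manipulated" class.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step; labels are 1.0 for fake, 0.0 for real."""
    optimizer.zero_grad()
    logits = backbone(images).squeeze(1)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch standing in for a labeled real/fake face dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,)).float()
print(f"loss: {train_step(images, labels):.4f}")
```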