Social media platforms are a significant source of blatant propaganda and misinformation. Recent advances in video and photo editing, as well as artificial intelligence techniques, have also made it much easier to tamper with audio and visual files, as with so-called deepfakes, which combine and overlay images, audio and video clips to create compilations that look like real recordings.
Researchers from the K-riptography and Information Security for Open Networks (KISON) and Communication Networks & Social Change (CNSC) groups of the Internet Interdisciplinary Institute (IN3) at the Universitat Oberta de Catalunya (UOC) have launched a project to develop technologies that, using data-hiding techniques, will help users automatically distinguish between original and tampered multimedia content. DISSIMILAR is an international initiative led by the UOC that also includes researchers from the Warsaw University of Technology in Poland and Okayama University in Japan.
"The project has two objectives: first, to provide content creators with tools to watermark their creations, making any modification easily detectable; and second, to provide social media users with tools to detect fake digital content, based on state-of-the-art signal processing and machine learning techniques," explained Professor David Megías, KISON lead researcher and member of the IN3. In addition, DISSIMILAR aims to include "the cultural dimension and the viewpoint of the end user throughout the entire project," from the design of the tools to user testing at its different stages.
Biases are dangerous
There are currently two approaches to fake news detection. First, there are fully automated options based on machine learning, of which (at the moment) only a few projects exist. Second, there are platforms with human intervention, such as those run by Twitter and Facebook, which require people to take part in assessing whether specific content is genuine or fake. According to David Megías, this centralized solution may be affected by "various biases" and may encourage censorship. "We believe that an objective assessment based on technological tools might be a better option, provided that users have the final say in deciding whether or not to trust specific content," he says.
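To make the first, fully automated approach concrete, the following is a minimal sketch of a bag-of-words Naive Bayes text classifier, the simplest form of the machine-learning detection the article mentions. The training headlines and labels are invented purely for illustration and are not from the DISSIMILAR project.

```python
# Toy machine-learning fake-news classifier (Naive Bayes over bags of words).
# All training examples below are made up for illustration.
from collections import Counter
import math

train = [
    ("scientists publish peer reviewed study on vaccines", "real"),
    ("official agency confirms election results", "real"),
    ("miracle cure doctors dont want you to know", "fake"),
    ("shocking secret they are hiding from you", "fake"),
]

# Count word frequencies per class.
word_counts = {"real": Counter(), "fake": Counter()}
class_totals = Counter()
for text, label in train:
    word_counts[label].update(text.split())
    class_totals[label] += 1

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text: str) -> str:
    """Pick the class with the highest log-posterior, using add-one smoothing."""
    scores = {}
    for label, counts in word_counts.items():
        logp = math.log(class_totals[label] / sum(class_totals.values()))
        total = sum(counts.values())
        for w in text.split():
            logp += math.log((counts[w] + 1) / (total + len(vocab)))
        scores[label] = logp
    return max(scores, key=scores.get)

print(classify("shocking miracle cure hiding from doctors"))  # → fake
```

Real systems replace the toy counts with large labeled corpora and far richer features, but the decision rule — score each class and pick the most probable — is the same.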
Verifying multimedia files automatically
Watermarking encompasses a set of data-hiding techniques that embed imperceptible information in the original file in order to verify a multimedia file "easily and automatically". "It can be used to validate content by confirming, for example, that a video or photo was distributed by an official news agency; it can also serve as an authentication mark, which would be removed if the content were modified, or to trace the origin of the data. In other words, it can determine whether the source of the information (for example, a Twitter account) is spreading fake content," Megías explained.
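The idea above can be illustrated with the simplest data-hiding scheme there is: a least-significant-bit (LSB) watermark. This is only a hedged sketch of the general technique, not the project's actual method, and the function names and sample data are hypothetical. An authentic copy yields the embedded mark on extraction; editing the pixels destroys it.

```python
# Minimal LSB watermark sketch: hide one bit of the mark in the lowest bit of
# each "pixel" byte. A fragile mark like this breaks when the file is modified,
# which is exactly what makes tampering detectable.
import os

def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Hide the bits of `mark` in the least-significant bits of the pixels."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    out = bytearray(pixels)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit   # overwrite one LSB per pixel byte
    return out

def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Read back `length` bytes of watermark from the LSBs."""
    chars = []
    for i in range(length):
        byte = 0
        for bit_pos in range(8):
            byte = (byte << 1) | (pixels[i * 8 + bit_pos] & 1)
        chars.append(byte)
    return bytes(chars)

image = bytearray(os.urandom(256))            # stand-in for raw image pixels
marked = embed_watermark(image, b"UOC-2022")
assert extract_watermark(marked, 8) == b"UOC-2022"    # authentic copy verifies

tampered = bytearray(marked)
tampered[:16] = bytes(16)                     # simulate editing part of the image
assert extract_watermark(tampered, 8) != b"UOC-2022"  # alteration breaks the mark
```

Production schemes hide the mark more robustly (for example in frequency-domain coefficients) and cryptographically bind it to the source, but the verification logic — extract and compare — follows the same pattern.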