Exploration of JPEG Fake Media
Recent advances in media manipulation, particularly deep-learning-based approaches, can produce near-realistic media content that is almost indistinguishable from authentic content to the human eye. These developments open opportunities for the creative production of new content in the entertainment and art industries. However, they also raise the risk of the spread of manipulated media such as ‘deepfakes’, which may lead to copyright infringement, social unrest, the spread of rumours for political gain, or the encouragement of hate crimes.
Clear annotation of media manipulations is considered a crucial element in many usage scenarios. In malicious scenarios, however, the intention is rather to hide the mere existence of such manipulations. This has already prompted various governmental organisations to define new legislation, and companies (in particular social media platforms and news outlets) to develop mechanisms that can detect and annotate manipulated media content when it is shared. There is therefore a clear need for standardisation related to media content and associated metadata. The JPEG Committee is interested in exploring whether a JPEG standard can facilitate secure and reliable annotation of media modifications, in both good-faith and malicious usage scenarios.
At its 95th online meeting, the JPEG Committee released a Final Call for Proposals (CfP) for JPEG Fake Media, along with the associated “Use Cases and Requirements for JPEG Fake Media” document.
Interested parties are invited to join the JPEG Fake Media mailing list and regularly consult the JPEG.org website for the latest news.