Exploration on JPEG Fake Media
Recent advances in media manipulation, particularly deep-learning-based approaches, can produce near-realistic media content that is almost indistinguishable from authentic content to the human eye. These developments open opportunities for the creative production of new content in the entertainment and art industries. However, they also carry the risk of spreading manipulated media such as ‘deepfakes’. This may lead to copyright infringement, social unrest, the spread of rumours for political gain, or the incitement of hate crimes.
Clear annotation of media manipulations is considered a crucial element in many usage scenarios. In malicious scenarios, however, the intention is rather to hide the very existence of such manipulations. This has already prompted various governmental organisations to draft new legislation, and companies (in particular social media platforms and news outlets) to develop mechanisms that can detect and annotate manipulated media content when it is shared. There is therefore a clear need for standardisation related to media content and its associated metadata. The JPEG Committee is interested in exploring whether a JPEG standard can facilitate the secure and reliable annotation of media modifications, in both good-faith and malicious usage scenarios.
An initial draft document, “JPEG Fake Media: Context, Use Cases and Requirements v0.1”, delineates the context and presents the use cases and requirements identified so far. The JPEG Committee invites experts to provide feedback on the document and to propose additional use cases.
Interested parties are invited to join the JPEG Fake Media mailing list and to consult the JPEG.org website regularly for the latest news.