JPEG Fake Media - Context, Use Cases and Requirements
Recent advances in media manipulation, particularly deep-learning-based approaches, can produce near-realistic media content that is almost indistinguishable from authentic content to the human eye. While these developments open opportunities for the production of new types of media content useful to the entertainment industry, they also risk the spread of doctored media (e.g., ‘deepfakes’), leading to copyright infringement, social unrest, the spread of rumours for political gain, or the encouragement of hate crimes.
Declaring media manipulations is important in many usage scenarios, such as version control or traceability; however, such declarations are not always made, particularly when the intention is to hide the very existence of the manipulation. This has already led various governmental organizations to plan new legislation, and companies (especially social media platforms and news outlets) to develop mechanisms that clearly detect and annotate manipulated media content when it is shared. Thus, there is a clear need for standardisation of media content and its associated metadata. The JPEG Committee is interested in exploring whether a JPEG standard can facilitate secure and reliable annotation of media modifications, in both good-faith and malicious usage scenarios.
An initial draft document, “JPEG Fake Media: Context, Use Cases and Requirements v0.1”, delineates the context and presents the initially identified use cases and requirements. The JPEG Committee calls on experts to provide feedback on the document and to propose additional use cases.
Interested parties are invited to join the JPEG Fake Media mailing list and to consult the JPEG.org website regularly for the latest news.