Trust in the digital world during the time of deepfakes
What would your elderly father’s response be if he received an emergency video message from you requesting a large sum of money? With rapid advances in AI, the normal human reaction to such situations can easily be exploited through the creation of deepfakes.
The threat from deepfakes is undoubtedly going to rise in 2024. The Union government has already sent an advisory to social media intermediaries asking them to strengthen their systems for detecting and taking down deepfakes, and reports suggest that the ministry of electronics and IT is considering amendments to the IT Rules to include specific obligations to contain the deepfake menace.
It was in 2017 that deepfake content made its first appearance, with a Reddit user named “deepfakes” posting fake videos of celebrities. Over the years, these videos have become increasingly realistic and deceptive. Between 2019 and 2020, the volume of deepfake content online increased by over 900%, and some forecasts predict that as much as 90% of online content may be synthetically generated by 2026.
The biggest societal harm from deepfakes is the erosion of trust in the information ecosystem. Not knowing who or what to believe can do unimaginable damage to human interactions. In India, while no legislation specifically governs deepfakes, existing laws such as the IT Act and the IPC already criminalise online impersonation, the malicious use of communication devices, and the publication of obscene material.
Social media platforms are also obligated under the IT Rules to take down misinformation and impersonating content; failing to do so means losing their “safe harbour” protection and becoming liable for the harm that ensues. Unfortunately, it is challenging to execute what the law demands. First, identifying deepfakes is a massive technical challenge.
Currently available options, namely AI-powered detection and watermarking/labelling techniques, are inconsistent and inaccurate. Notably, OpenAI withdrew its own AI-detection tool in July 2023, citing “low accuracy”. Second, the technologies used to create deepfakes have positive uses as well. For instance, the same technologies can be used to augment accessibility tools for persons with disabilities, deployed in the entertainment industry for special effects, and even used in the education sector.
Essentially, this means that not every piece of digitally edited content is harmful, which further complicates the job of content moderation. Third, the sheer volume of content uploaded every second makes meaningful human oversight difficult. In the US, President Joe Biden signed an executive order in October 2023 to address AI risks.
Under this order, the department of commerce is creating standards for labelling AI-generated content. Separately, states like California and Texas have passed laws criminalising the dissemination of deepfake videos that influence elections, while Virginia penalises the distribution of non-consensual deepfake pornography.
In Europe, the Artificial Intelligence Act will categorise AI systems into four risk tiers: unacceptable, high, limited, and minimal or no risk. Notably, AI systems that generate or manipulate image, audio or video content (i.e., deepfakes) will be subjected to transparency obligations. Work is also underway to accurately trace the origins of synthetic media.
One such attempt, by the Coalition for Content Provenance and Authenticity (C2PA), aims to cryptographically link each piece of media with its origin and editing history. However, the challenge with C2PA’s approach lies in the adoption of these standards by devices and editing tools, without which unlabelled AI-generated content will continue to deceive.
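To illustrate the underlying idea, here is a minimal sketch of content provenance in Python. It is not the actual C2PA specification; the manifest format and function names are illustrative assumptions, and it uses the general-purpose `cryptography` library. The principle is the same: a creator signs a hash of the media together with its edit history, so any later tampering invalidates the signature.

```python
# Illustrative sketch of content provenance (not the actual C2PA spec):
# a creator signs a hash of the media bytes plus its edit history,
# so any subsequent tampering invalidates the signature.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_manifest(media: bytes, edits: list[str], key: Ed25519PrivateKey) -> dict:
    """Bind the media content and its editing history to the creator's key."""
    manifest = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "edits": edits,  # e.g. ["captured", "cropped", "colour-corrected"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload)}

def verify_manifest(media: bytes, signed: dict, public_key) -> bool:
    """Re-hash the media and check the signature; False signals tampering."""
    if hashlib.sha256(media).hexdigest() != signed["manifest"]["sha256"]:
        return False
    payload = json.dumps(signed["manifest"], sort_keys=True).encode()
    try:
        public_key.verify(signed["signature"], payload)
        return True
    except InvalidSignature:
        return False

# A creator signs at capture time; a platform verifies on upload.
key = Ed25519PrivateKey.generate()
media = b"...raw image bytes..."
signed = sign_manifest(media, ["captured"], key)
assert verify_manifest(media, signed, key.public_key())
assert not verify_manifest(media + b"!", signed, key.public_key())
```

The hard part, as noted above, is not the cryptography but getting cameras and editing tools to produce and preserve such manifests in the first place.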
While watermarking and labelling may help, what we need urgently is a focused attempt to reduce the circulation of deepfake content. Slowing down the circulation of flagged content until its veracity is confirmed can be crucial in preventing real-world harm. This is where intermediaries such as social media platforms can intervene more effectively.
If an uploaded piece of content is detected to be AI-modified or is flagged by users, platforms should mark it for review before allowing unchecked distribution (a minimal sketch of such a gate follows this paragraph). Finally, fostering media literacy, so that people understand the threat of misinformation and become more conscious consumers of information, is the need of the hour.
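A minimal sketch of such a review gate, in Python; the flag threshold and content states are hypothetical assumptions, not any platform’s actual policy:

```python
# Hypothetical sketch of a "friction before distribution" gate:
# content detected as AI-modified, or flagged by enough users, is
# held for review instead of being distributed immediately.
from dataclasses import dataclass

FLAG_THRESHOLD = 3  # assumed value; a real platform would tune this

@dataclass
class Content:
    content_id: str
    ai_modified: bool = False  # e.g. from a (fallible) detector or watermark check
    user_flags: int = 0
    state: str = "distributed"

def review_gate(item: Content) -> Content:
    """Limit the reach of suspicious content until its veracity is confirmed."""
    if item.ai_modified or item.user_flags >= FLAG_THRESHOLD:
        item.state = "held_for_review"
    return item

post = review_gate(Content("video-123", ai_modified=True))
print(post.state)  # held_for_review
```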
Navigating the new digital era, where “seeing is no longer believing”, is challenging. We need a multi-pronged regulatory approach that nudges all actors not only to detect and prevent the circulation of deepfake content but also to engage with it more wisely. Anything less is unlikely to preserve our trust in the digital world.
Rohit Kumar is founding partner and Mahwash Fatima is a senior analyst at The Quantum Hub, a public policy firm. The views expressed are personal.