Meta to operationalise Elections Centre to curb misinformation
New Delhi: Facebook and Instagram-owner Meta on Tuesday vowed to crack down on misinformation and misuse of AI-generated content during the general elections in India, saying it will remove content that suppresses voting or incites violence, and will use its network of fact-checkers to label content that is fake, altered or manipulated.
Meta said it will operationalise an India-specific Elections Operations Centre, bringing together experts from across the organisation to identify potential threats and put specific mitigations in place across its apps and technologies in real time.
The social media giant said it is building tools to label AI-generated images from Google, OpenAI, Microsoft, and others that users post to Facebook, Instagram and Threads, as it pledged its commitment to election integrity efforts ahead of the Lok Sabha polls.
The general election is scheduled to be held in seven phases starting from April 19, and results will be declared on June 4.
Meta said it firmly believes it is important for people to know when photorealistic content has been created using AI, adding that it already labels photorealistic images created using 'Meta AI' with visible markers on the images, as well as invisible watermarks and metadata embedded within the image files.
"We are also building tools to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram and Threads," Meta said in a blogpost titled 'How Meta Is Preparing For Indian General Elections 2024'.
"We are dedicated to the responsible use of new technologies like GenAI and collaborating with industry stakeholders on technical standards for AI detection, as well as combating the spread of deceptive AI content in elections through the Tech Accord," it
said.
The social media giant said the India-specific Elections Operations Centre will bring together experts from its intelligence, data science, engineering, research, operations, content policy and legal teams to identify and mitigate threats across its apps and technologies in real time.
Meta said that starting this year, it also requires advertisers globally to disclose, in certain cases, when they use AI or digital methods to create or alter a political or social issue ad. This applies if the ad contains a photorealistic image or video, or realistic-sounding audio, digitally created or altered to depict a real person saying or doing something they did not say or do; if it depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen; or if it depicts a realistic event that allegedly occurred but is not a true image, video or audio recording of the event.
As the world's largest democracy prepares for the general elections, Meta said it will continue its efforts to limit misinformation, remove voter interference, and enhance transparency and accountability on its platforms to support free and fair elections.
Meta said it is closely engaged with the Election Commission of India through the Voluntary Code of Ethics it joined in 2019, which gives the commission a high-priority channel to flag unlawful content to the company.