Deepfake
AI-generated morphed videos and images pose a serious threat to relationships, finances, and even elections. How do we stay safe?
Reality TV: love it or hate it, you will watch it. I, for one, do follow it (primarily international shows, as most Indian reality TV is cringeworthy). Reality TV is a guilty pleasure, an ideal mode of entertainment after a hard day's work when the mind refuses to be used for even one more minute. It is also a voyeuristic intrusion into the private lives of people, providing a 'fly on the wall' perspective on society's evolving habits, behaviours, and values. The most interesting reality programmes showcase a contemporary world where relationship and couple goals have changed drastically even as people desperately try to hold on to traditional promises of love and commitment. From those who refuse monogamous relationships to those who are scared to get married, today's OTT platforms are replete with jaw-dropping shows, many of which promote deception, lies, and unscrupulous behaviour. And as reality TV reflects the world we live in, keeping pace with modern times, there is a new Spanish show on Netflix called 'Falso Amor' (Deep Fake Love). Five couples test their love and commitment on the show while living with singles, even as deepfake technology and AI (Artificial Intelligence) manipulate real-life situations; the couples are then asked to vote on whether what they have seen their partner doing is true or false. All for a cash prize, of course.
The pitfalls of deepfake technology are scary. 'Deep Fake Love' exposes the repercussions of AI and deepfake technology on human relationships. Critics have called the show cruel, devious, and hugely problematic for the way it emotionally tortures participants by showing their partners in sexually provocative situations that may or may not be real. For me, the greatest takeaway from the show is the danger of deepfake technology, whose misuse is already making its presence felt in India and around the world. A recent picture of Pope Francis in a white Balenciaga jacket, or, closer home, a fake photo of the protesting wrestlers smiling instead of wearing their original grave expressions, shows how advanced this technology is and the ramifications it carries.
Till a few years ago, image and video morphing were being used to generate low-quality fake pornographic content of celebrities. Today, however, access to and use of deepfake technology are far more mainstream. Earlier this year, business tycoon Anand Mahindra shared an AI-generated deepfake video that showed a man turning into Virat Kohli, Shahrukh Khan, and Robert Downey Jr. In February, the Indian government instructed all social media platforms to remove deepfake images and act within 24 hours of receiving a complaint. This month, a 73-year-old man from Kerala fell victim to deepfake technology and was scammed out of Rs 40,000; the scamster used a deepfake video call to convince him that he was an old acquaintance in urgent need of financial assistance. Cyber fraud is reaching epidemic proportions in India, with almost everyone having a sob story of being defrauded. As many as 57 per cent of all frauds in India happen online, and as per a PwC report, 92 per cent of customer frauds were carried out through payments (credit cards and wallets), 42 per cent through account or identity takeovers, 25 per cent through cloning or theft, and 19 per cent through synthetic IDs. News reports suggest that, along with this, 45 per cent of frauds faced by enterprise platforms happened due to malware, 26 per cent due to phishing, and 25 per cent due to ransomware.
Next year's Lok Sabha elections will be a challenging time for the authorities, as deepfake technology poses a threat that can cause serious trouble, stoke communal tensions, and alter public perception. Chief Election Commissioner Rajiv Kumar has raised concerns over the abuse of technology and the use of deepfakes to build fallacious narratives during elections.
It is not easy for the human eye to spot deepfakes, even as the technology trains itself to get ever closer to the original. However, experts have shared some tips to spot a fake. Online tools can detect edited or photoshopped images; careful examination can reveal anomalies (the Pope Francis image, for example, had several inconsistencies and distortions that were apparent on closer inspection); cloning of smaller objects in a picture can be caught; and observing shadows, or the lack of them, is another useful way to catch a deepfake. But in a world where people rely on WhatsApp for knowledge and news rather than on more reliable sources, most will not bother to check whether an image or video is a deepfake, and therein lies the vulnerability of human society.
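For readers curious about what the 'online tools' mentioned above actually do, below is a minimal sketch of one common technique, error-level analysis: the image is re-saved as a JPEG and compared with the original, and regions that were edited or pasted in often recompress differently and show up as bright patches. This is an illustration under stated assumptions, not a reliable deepfake detector; it assumes the Pillow library is installed, and the file names are hypothetical.

```python
from PIL import Image, ImageChops
import io

def error_level_analysis(path, quality=90):
    """Re-save the image as JPEG and return an amplified difference image.
    Edited or pasted-in regions often recompress differently from the rest
    of the picture, so they tend to stand out as brighter patches."""
    original = Image.open(path).convert("RGB")

    # Re-save the image to an in-memory JPEG at a known quality level
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Per-pixel difference between the original and the re-saved copy
    diff = ImageChops.difference(original, resaved)

    # Scale the difference so subtle compression artefacts become visible
    extrema = diff.getextrema()
    max_diff = max(channel[1] for channel in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda p: p * scale)

if __name__ == "__main__":
    # Hypothetical file names, for illustration only
    ela_image = error_level_analysis("suspect_photo.jpg")
    ela_image.save("suspect_photo_ela.png")
```

A bright, uneven patch in the output is only a hint to look closer, not proof of tampering; genuine photos can also show compression quirks, which is why such checks work best alongside the manual inspection described above.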
The writer is an author and media entrepreneur. Views expressed are personal