MillenniumPost
Editorial

Perturbing loopholes

Technological advancement is a phenomenon that has run parallel to the progress of humanity. The face of the globe has been transformed over the centuries, and at the root of this change has undoubtedly been technology. Essentially, one after another, humans have invented extensions of their own body parts to amplify their capacities. If walking and running depended on legs, humans invented the wheel to multiply their speed manifold. If speech was a human attribute, humans found an answer in loudspeakers and a range of broadcasting and telecasting devices. Humans have even built prototypes of many of their own organs, from kidneys to the brain.

What Artificial Intelligence is capable of is generating an avatar of humans, one that embodies not just thinking but also an unparalleled capacity to mimic values like empathy and attributes like logical reasoning and creativity. Artificial Intelligence is not just another technology; it appears to encompass all previous achievements within a single landscape. The once-irrefutable belief that machines could never replicate core human capacities has now become at least debatable.

Yet, for all the tremendous abilities they come with, AI-driven chatbots are arguably loaded with a high potential for risk and nefarious use. The initial reservations put forth by critics were warded off by AI chatbot companies, which claimed to have incorporated a range of guardrails to prevent misuse of the bots, such as eliciting bomb-making instructions or generating hate content. Humanity's trust in technology has been so profound that such assurances from tech companies are accepted readily. However, what researchers at Carnegie Mellon University, Pittsburgh, and the Center for AI Safety, San Francisco, have claimed is baffling. They have demonstrated that by merely appending a string of characters to the end of a user query, the safety guardrails can be broken, provoking the chatbot to produce harmful content, misinformation or hate speech. More disturbingly, such breaches can be engineered in a virtually 'unlimited' number of ways.

This new claim by AI researchers is horrifying, but not completely shocking. More than a revelation, it is a validation of what may already have been suspected. News reports suggest that a bot named FraudGPT has lately been circulating on the dark web. It is known to facilitate various criminal activities, including the crafting of cracking tools, phishing emails and more. This AI-powered bot can generate malicious code, design evasive malware, identify leaks and exploit vulnerabilities.

Moreover, it is an established fact that chatbots rely on algorithms and machine-learning models to function. If these models are vulnerable to manipulation, attackers can feed malicious input to a chatbot and trigger unexpected behaviours or responses. Terms like 'model poisoning' and 'adversarial attacks' have already gained traction: the former refers to corrupting a model through tampered training data, while the latter means supplying carefully crafted inputs at the point of use that push a model into inaccurate or harmful responses.
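To make the second idea concrete, here is a minimal, purely illustrative sketch in Python. The 'model' is a toy linear filter invented for this example, not any real chatbot's guardrail; it shows how an attacker who knows a model's parameters can compute a tiny, targeted change to an input that flips the model's decision.

```python
# Illustrative sketch of an adversarial attack on a hypothetical toy model;
# no real chatbot works this way, but the principle carries over.
import numpy as np

rng = np.random.default_rng(0)

# Toy "safety filter": flag the input as harmful when the score w . x > 0.
w = rng.normal(size=8)                  # the model's (known) weights
x = rng.normal(size=8)                  # a harmful request, as a feature vector
x = x - w * ((w @ x) - 1.0) / (w @ w)   # adjust x so its score is exactly +1

def verdict(v: np.ndarray) -> str:
    return "blocked" if w @ v > 0 else "allowed"

print("original request: ", verdict(x))      # blocked, as intended

# The attack: nudge every coordinate against the gradient of the score
# (here simply sign(w)), just far enough to drag the score from +1 to -1.
epsilon = 2.0 / np.abs(w).sum()
x_adv = x - epsilon * np.sign(w)

print("perturbed request:", verdict(x_adv))   # allowed: the filter is bypassed
```

The analogy to the Carnegie Mellon finding is loose: in the real attacks, the 'nudge' takes the form of an oddly chosen suffix appended to the text of the query, discovered by automated search rather than by a hand-derived formula.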
It is also possible that employees or contractors with access to a chatbot's development or operational environment could intentionally or inadvertently cause security breaches. Moreover, many chatbots integrate with various APIs to access external services and data; if these APIs have security flaws, attackers might exploit them to gain unauthorised access or execute code on the server side. One may argue, and rightly so, that chatbot developers must focus on encrypting sensitive data, securing APIs, conducting regular security audits and continuously monitoring for suspicious activity, among other things. These are crucial measures, but they are not sufficient on their own.
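For the sake of illustration, here is an equally minimal sketch of just one of those measures, encryption of sensitive data at rest. It uses Python's widely used cryptography package; the transcript and the key handling are assumptions made up for this example, not a production design.

```python
# Illustrative only: encrypting a sensitive chat transcript before storage,
# using the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in practice, fetch from a secrets manager;
fernet = Fernet(key)              # never hard-code or commit the key

transcript = b"user: my card number is ..."   # hypothetical sensitive log line
token = fernet.encrypt(transcript)            # ciphertext, safe to store

assert fernet.decrypt(token) == transcript    # readable only with the key
```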
It is high time that governments assume effective control of the situation and place the necessary regulations on this fast-emerging industry. Any laxity in approach could be detrimental to public interest and safety. It may be recalled that Sam Altman, who heads the company behind one of the most prominent AI chatbots, has himself called for international rules on generative AI. National governments must take a proactive approach in this direction if cyberspace, the nuclear domain and other critical areas are to be safeguarded.
