Transcending algorithms
Without human support, AI tools can prove confusing when rendering services to consumers, particularly seniors, laypersons, and the differently-abled, necessitating regulations that mandate human alternatives

Businesses are meant to make things easier for customers, and maintaining good customer service is essential. Certain AI tools, including ChatGPT, when deployed without adequate human support and response, are creating confusion for many. Senior citizens, laypersons, and differently-abled consumers, in particular, are often confined to age-old methods of purchasing and are unable to use some of these new technologies.
Applications like ChatGPT, when used by businesses as the primary channel for complaint resolution without appropriate human backup, are inconvenient for these categories of consumers. Such AI models should at best be an option or an alternative to a human response, not the primary or only mode of resolution. The fact is that many people around us are unable to use such tools. While AI chat modules can hold a fairly natural conversation, they lack the emotional intelligence and understanding of a human mind. And although tools like ChatGPT use simple language and can follow basic human language patterns, interacting with them can still be confusing. Not everyone is proficient enough in language to frame queries well, which leads to irrelevant results and can be frustrating, especially for the technologically challenged.
Additionally, since chat modules and other AI tools are still learning, they tend to fabricate facts or provide irrelevant information, apart from creating misunderstandings among users. In particular, they still struggle with context and complex queries, making them inconvenient even for tech-savvy people. ChatGPT should never be used without a human alternative, as doing so is adverse to the interests of a large number of consumers. Its confusing responses could, in the long run, also expose businesses to consumer disputes, apart from eroding their customer base. It can also be used to manipulate unsuspecting consumers into making decisions they might not otherwise make, without their realising the consequences. For senior citizens, who may have physical limitations such as impaired vision or other age-related issues, navigating these interfaces can be particularly difficult.
In light of the above, it is important to have regulations mandating alternate human support, such as a telephonic helpline, alongside ChatGPT, together with provisions directing that such helpline numbers be prominently displayed on websites and other channels.
Currently, the AI-related laws in India and some other countries are not adequate in this respect. India's framework includes legislation such as the Information Technology Act 2000. The Act provides a legal framework, inter alia, for electronic governance by recognising electronic records and digital signatures. It also defines cyber-crimes, prescribes penalties for them, and established a Cyber Appellate Tribunal to resolve disputes and address related issues. Additionally, rules framed under the Act, such as the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules 2011, prescribe security practices for handling users' sensitive personal information, while allied rules prohibit specified content on the internet, regulate cyber cafés, and, through the Electronic Service Delivery Rules, mandate the electronic delivery of applications, certificates, licences, and the like. The Act is set to be replaced by the Digital India Act 2023, currently in draft form and intended as an updated framework.
The Digital Personal Data Protection Act 2023, which is yet to come into force, aims to provide for the processing of digital personal data in a manner that recognises both the right of individuals to protect their personal data and the need to process such data for lawful purposes and related matters. In addition to these laws, the Ministry of Electronics and Information Technology issued an advisory in March 2024 modifying its approach to AI. The Securities and Exchange Board of India also issued a circular in 2019 concerning AI usage, in exercise of the powers conferred under Section 11(1) of the Securities and Exchange Board of India Act, 1992, to protect the interests of investors in securities and to promote the development of, and regulate, the securities market. In the health sector, the Ministry of Health and Family Welfare has proposed establishing the National Digital Health Authority (NeHA), responsible for developing India's Integrated Health Information System, as mentioned in the National Health Policy 2017.
There is an urgent need for mandatory regulations to address the discrimination and confusion that AI-based modules, including ChatGPT, cause among consumers when used without adequate human support. Individuals, owing to age, health, and language-related issues, struggle to understand concepts central to AI. This can heighten feelings of frustration and helplessness, besides resulting in wasted time and sometimes even money. Such users may develop a sense of exclusion in the long run. Therefore, clear legislation that takes every kind of individual into account is needed.
No doubt, Artificial Intelligence has revolutionised many facets of modern life, introducing enhancements that promise greater efficiency, convenience, and innovation. However, as we move into an increasingly AI-driven future, it must be ensured that no one is left behind.
The writer is a practising Advocate in the Supreme Court and the High Court of Delhi. Views expressed are personal.