Defending Minds From Algorithms
As AI companions grow emotionally persuasive, India must recognise mental integrity as a constitutional safeguard against subtle algorithmic influence shaping beliefs, choices and autonomy
Large Language Model (LLM)-based chatbots are no longer mere tools meant to be used intermittently. With their pervasiveness, conversational sophistication, and deliberately ‘human-like’ features, they increasingly operate as ‘partners’ that are emotionally salient and active across personal, intimate, and professional life.
They have many beneficial impacts, such as enhancing productivity and creativity, expanding access to knowledge, supporting learning and decision-making, offering companionship, and lowering barriers to professional and personal assistance. However, with their growing use, their impact on user agency and psyche must also be carefully evaluated.
In recent months, multiple documented incidents globally have shown that these systems can affect this agency. In some instances, they have reinforced distorted beliefs and contributed to serious psychological distress, raising concerns that such harms are no longer isolated. More broadly, worries have grown about design choices that encourage emotional reliance and habitual use, even when they may not support user well-being. This creates a ‘steering effect’: the AI does not just predict user behaviour, but actively shapes it through repetitive, emotive reinforcement.
In such contexts, where AI systems increasingly function as ‘partners’, the appropriate frame extends beyond accuracy and privacy to include mental integrity: the idea that a person’s cognitive and psychological self-governance should remain protected as these technologies evolve. Recognising this interest is not about constraining progress, but about fostering responsible innovation that builds durable trust and supports the long-term legitimacy of this rapidly advancing field.
The right to mental integrity
While the right to mental integrity is not new, it may need repurposing for the age of AI ‘partners’. In European instruments, mental integrity is most explicitly protected in Article 3 of the EU Charter, and in the ECHR system, it is typically addressed through Article 8’s protection of private life, including psychological integrity. In practice, many leading cases arise in medical or coercive settings (including forced treatment), and Article 8 claims generally require a sufficiently serious adverse impact on a person’s physical or psychological integrity. Additionally, while this right is acknowledged in some other jurisdictions, there is ongoing debate about the extent to which it protects individuals from non-physical forms of interference, such as psychological manipulation.
In recent years, extensive literature has emerged arguing that the right needs to be reimagined to address a new class of ‘soft’ interferences that alter the mind. This reimagined right would prohibit interventions that bypass a user’s conscious control or reasoning capacities, using techniques that manipulate the mind not by force, but by surreptitiously altering mental states in ways the subject cannot detect or resist.
AI ‘partners’ call for a limited update to existing legal frameworks, which were not built to address quieter forms of algorithmic influence. Systems that simulate companionship may not involve physical coercion or cause immediate clinical harm, and so can fall outside traditional ideas of mental integrity. Yet given the emotional connection and human-like interaction such systems foster, cognitive autonomy may still be affected if they gradually shape a user’s beliefs or reinforce harmful views, even without bodily interference.
The right in India
Justice K.S. Puttaswamy v. Union of India is well known for having established the fundamental right to privacy in India. Puttaswamy opens an important constitutional path towards ‘mental integrity’ by tying privacy to dignity, autonomy, and decisional freedom. But its vocabulary has largely developed through concerns about intrusion, collection, disclosure, and surveillance: in short, protection from access and exposure. That framework does not yet fully grapple with a newer category of harm – technologies that do not merely know our choices, but systematically shape them. Persuasive AI ‘partners’, and in the near future, neurotechnologies, can personalise influence and bypass reflective deliberation, even without any classic ‘privacy breach’. The current focus on ‘informational privacy’ fails to address this ‘decisional interference’, where the harm is not that data was taken, but that the user’s very will was subtly bypassed.
Puttaswamy is most fully developed in contexts where mental harms arise from coercion, bodily restraint, compelled disclosure, or the chilling effect of surveillance. While the judgment is philosophically expansive and deliberately future-facing, its application has thus far been clearer in cases involving intrusion or constraint than in those involving more diffuse forms of influence on mental agency. The harder technological case is direct interference with the conditions of mental agency itself: continuous micro-targeting and closed-loop feedback that modulate attention, emotion, and impulse over time. These practices can erode mental integrity even where formal consent exists and deception is hard to prove.
Finally, the principal threat today comes from privately owned systems. A right that is enforceable primarily against the State risks being underinclusive. Data protection, consumer protection, tort, and contract laws help, but they are piecemeal and often misaimed, focused on data processing, consent formality, deception, or ex post loss. They do not always reach more subtle forms of behavioural influence and cumulative erosion of agency. A separate right to mental integrity should not only supply the missing normative anchor and justify design-stage duties and remedies, but also meaningfully bind private actors, not just the State. Recognising this right as having ‘horizontal’ application would empower citizens to hold tech conglomerates accountable for predatory design choices that compromise cognitive sovereignty.
A ‘Summit’ moment for safeguarding individual agency
As India convenes the India AI Impact Summit 2026, a rare agenda-setting moment to shape global expectations for safe, trustworthy AI in the world’s largest democracy, it should use that platform to recognise the right to mental integrity explicitly. Doing so need not be framed as a constraint on innovation, but as a clarity-giving constitutional anchor that guides design choices, safeguards user agency, and strengthens public trust as AI ‘partners’ become part of everyday life.
Views expressed are personal. Krishna Deo Singh Chauhan is an Associate Professor and Assistant Dean, and Sidharth Chauhan is an Associate Professor and Associate Dean, both at Jindal Global Law School