
LK Academy

Model conduct: On India, AI use

December 30, 2025

India has been regulating Artificial Intelligence (AI) use by expecting due diligence from platforms under the IT Act and Rules, by supervising the financial sector, and through the privacy and data protection Rules. It does not yet have a consumer safety regime that articulates the state’s duty of care. China pitched such a regime with draft rules unveiled last week, which target emotionally interactive services and propose to require companies to warn users against excessive use and to intervene when they detect signs of extreme emotional states. While these rules seem justified in targeting psychological dependence, which general rules about unlawful content do not address, they may also prove harsh, because expecting providers to identify users’ emotional states can incentivise more intimate monitoring. India’s posture is less intrusive but also less complete, because it banks on existing laws. It regulates adjacent risks but has not articulated a duty of care vis-à-vis AI product safety, especially for psychological harms. MeitY has used the IT Rules to push platforms to curb deepfakes and fraud and to define and label “synthetically generated” content. Financial regulators have also adopted structural measures, with the RBI setting expectations for governing model risk in credit and developing its FREE-AI framework, and SEBI pushing for clear accountability on how regulated entities use AI tools. While some of these measures are preemptive, MeitY has been largely reactive.

India has a large ecosystem that adopts models but is far behind the U.S. and China in building frontier models of its own. In this context, it should beware the pitfalls of ‘regulate first, build later’, especially since domestic capacity is lacking. A more practical approach might be to consider how it can nurture a frontier model while governing the overall use of models, many of which will remain privately built and foreign for a while, inside Indian markets. On the first count, India can focus on improving access to computational resources, upskilling the workforce, increasing public procurement, and translating research to industry, while sidestepping the trap of paralysis by consensus, which could deepen dependency. On the second, India should consider regulating downstream use more assertively without choking upstream capability. It can do this by adding obligations to existing privacy and consumer protection rules for companies deploying products in high-risk contexts, regulating how they monitor and respond to a model’s behaviour, for example, by expecting companies to submit incident reports rather than requiring them to monitor users’ emotions. This way, India can write rules for how Indians use models without assuming that the global technology trajectory will rearrange itself to match its preferences.

Overall Analysis

The editorial examines India’s current approach to regulating Artificial Intelligence and finds it cautious but incomplete. It begins by outlining how India relies on existing legal frameworks—such as the IT Act, financial regulations, and data protection rules—to manage AI-related risks. The author contrasts this with China’s recently proposed AI regulations, which impose a direct duty of care on platforms, especially concerning emotionally interactive AI services. This comparison helps frame India’s regulatory posture as less intrusive but also less comprehensive.

The editorial carefully balances approval and criticism. While it acknowledges that China’s rules address psychological harms that general content regulation may overlook, it also warns that such measures could encourage excessive surveillance. In contrast, India’s reliance on due diligence obligations is portrayed as restrained but insufficient, particularly because it lacks a clearly articulated consumer safety regime for AI-driven psychological risks. The language here is measured and analytical, avoiding alarmism while highlighting regulatory gaps.

In the second part, the focus shifts to India’s strategic position in the global AI ecosystem. The author notes that India is a major adopter of AI but lags behind the U.S. and China in developing frontier models. Against this backdrop, the editorial cautions against a “regulate first, build later” approach, arguing that premature regulation could stifle domestic innovation. The phrase “paralysis by consensus” underscores the risk of over-deliberation slowing capacity building.

The concluding argument proposes a balanced path: India should invest in building domestic AI capacity while regulating downstream, high-risk applications more assertively. Rather than intrusive monitoring of users’ emotional states, the editorial recommends practical obligations such as incident reporting and accountability for AI behaviour. Overall, the article presents regulation as a tool to shape responsible use without undermining innovation or assuming global technological alignment with Indian preferences.

Important Vocabulary (5)

  1. Due diligence – reasonable steps taken to avoid harm or comply with regulations.
  2. Intrusive – excessively interfering or invasive.
  3. Frontier models – advanced, cutting-edge AI systems at the leading edge of research.
  4. Downstream use – application and deployment of technology after it has been developed.
  5. Paralysis by consensus – inability to act due to excessive consultation or agreement-seeking.

Conclusion & Tone

The editorial concludes that India needs a nuanced AI governance framework—one that strengthens consumer protection and accountability without hindering innovation or domestic capacity building. It advocates regulating how AI is used in high-risk contexts rather than policing users themselves.

Tone: Balanced, pragmatic, and forward-looking, with a strong emphasis on policy realism and strategic restraint.
