The Death of the “Black Box”: Why AI Terminations Are a Legal Minefield for Indian Employers (2026 Update)
- reetika72
- 5 days ago
- 4 min read
A major tech firm in Gurugram dismisses a senior analyst. The termination letter cites a “Low Productivity Score” generated by an automated performance tool. When the employee asks why the score dropped despite consistently meeting targets, HR responds: “The algorithm’s logic is proprietary.”
In 2026, this response is no longer just an example of poor management—it potentially violates specific statutory provisions under Indian law. What was once defended as “technology-driven efficiency” is now being tested against transparency, consent, and fairness. Below is a breakdown of the exact legal risks that make so-called “Black Box” firings increasingly indefensible.
What Is the “Black Box” Problem?
In technical terms, a Black Box refers to an AI system whose internal logic—how it converts raw data into a final decision—is not explainable to humans.
The Inputs: Emails, keystrokes, meeting hours, login data, and code commits
The Box: Complex neural networks that weigh these factors in opaque ways, often beyond full developer comprehension
The Output: A command—“Terminate,” “Promote,” or “Flag”
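To make the opacity concrete, here is a toy scorer in Python. Every feature name, weight, and threshold below is invented for illustration; the point is only that a verdict pops out with no recoverable reasoning attached:

```python
import numpy as np

def opaque_score(features: np.ndarray, layers: list) -> float:
    """A toy neural network: every layer mixes every input with every
    other, so the final score carries no human-readable rationale."""
    x = features
    for w in layers:
        x = np.tanh(w @ x)
    return float(x.mean())

# Hypothetical inputs: emails/week, keystrokes/hr, meeting hrs, logins, commits
employee = np.array([120.0, 310.0, 14.5, 22.0, 9.0])

# Random weights stand in for whatever the vendor's model learned
rng = np.random.default_rng(seed=7)
layers = [rng.normal(size=(8, 5)), rng.normal(size=(8, 8)), rng.normal(size=(1, 8))]

score = opaque_score(employee, layers)
# The system emits a verdict, not a reason; nothing above can be
# translated into "which target did the employee actually miss?"
print("Flag for review" if score < 0 else "Clear", round(score, 3))
```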
The legal problem is simple: if you terminate an employee based solely on the output, without understanding or being able to explain the reasoning inside the “Box,” you cannot defend the decision in court. In 2026, “the computer said so” is no longer a valid legal defence in India.
The Legal Landscape: Three Key Risks for Employers
1. The DPDP Act, 2023 (Operationalised Late 2025)
The Digital Personal Data Protection (DPDP) Act, 2023 is now fully operational, and it presents the most immediate compliance risk for AI-driven HR systems.
Consent & Purpose Limitation: If an AI system processes employee emails, logs, or chats to assess performance, the employer becomes a Data Fiduciary and the employee a Data Principal. Under Section 6(1), such processing requires explicit and purpose-specific consent.
Crucially, data collected for one purpose cannot be reused for another. An employment contract signed in 2023 consenting to “data processing for security or IT monitoring” does not authorise “AI-based performance scoring” in 2026. Processing data for an unstated purpose—contrary to the Notice under Section 5—renders the entire exercise illegal.
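As a thought experiment, purpose limitation can even be enforced as an engineering control. The sketch below is illustrative only; the ConsentRecord shape and the purpose labels are hypothetical, not taken from the Act:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    data_principal: str
    consented_purposes: set = field(default_factory=set)

class PurposeLimitationError(Exception):
    """Raised when processing exceeds the purposes consented to."""

def process(record: ConsentRecord, data_field: str, purpose: str) -> str:
    # Refuse any processing whose purpose was never consented to.
    if purpose not in record.consented_purposes:
        raise PurposeLimitationError(
            f"{record.data_principal} never consented to '{purpose}'")
    return f"processed {data_field} for {purpose}"

# A 2023-era consent covering IT monitoring only:
emp = ConsentRecord("analyst_42", {"it_security_monitoring"})
print(process(emp, "email_metadata", "it_security_monitoring"))  # allowed

try:
    process(emp, "email_metadata", "ai_performance_scoring")     # new purpose
except PurposeLimitationError as e:
    print("Blocked:", e)  # the 2026 reuse fails the 2023 consent
```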
Right to Grievance Redressal: Employees have a statutory right to seek explanations regarding how their personal data was processed. If an employee asks, “How did your AI analyse my email metadata to conclude I am inefficient?” and the response is “It’s a black box,” the employer is failing this obligation.
Such failure can be escalated to the Data Protection Board, which has the power to levy penalties for lack of transparency and redressal mechanisms.
2. MeitY’s AI Governance Guidelines
The Ministry of Electronics and IT (MeitY) has significantly raised the compliance bar through its AI due-diligence advisories.
The March 1, 2024 advisory explicitly warns that AI systems must not permit bias or discrimination. Although it is an advisory issued under the IT Rules rather than binding legislation, courts can draw on such guidelines when assessing whether an organisation met the required standard of care.
If an HR algorithm disproportionately penalises women due to maternity gaps, medical leave, or flexible work arrangements, the employer may be acting contrary to government-mandated due diligence—opening the door to negligence and discrimination claims.
3. Labour Court Precedent: The Requirement of “Reasoned Orders”
Indian labour jurisprudence, particularly under the Industrial Disputes Act, 1947, has long required that termination for non-performance be supported by a reasoned order: documented failures, prior warnings, and a genuine opportunity to improve.
An AI score is a conclusion, not a reason. If the employer cannot translate that score into specific, provable incidents of underperformance, labour courts are likely to view the termination as arbitrary and illegal.
The Global Warning: What Indian MNCs Must Know
For Indian companies operating internationally, the compliance risk multiplies.
EU AI Act (2026): AI used for employment and workforce management is classified as High Risk. Mandatory human oversight applies, with fines up to €15 million or 3% of global turnover—and up to €35 million or 7% for prohibited practices such as workplace emotion recognition. Source: EU AI Act Compliance Overview
United States: New York City’s Local Law 144 mandates annual bias audits for Automated Employment Decision Tools (AEDTs), with public disclosure of the results, and similar state-level rules are emerging.
The global direction is clear: opaque AI in employment decisions is no longer tolerated.
The Safe Path Forward: A Compliance Checklist for 2026
To use AI in HR without inviting litigation, Indian employers must adopt a Human-in-the-Loop (HITL) framework.
1. Update Privacy Notices
Employment contracts and notices must explicitly state that personal data will be used for automated performance assessment. Vague “company policy” clauses are unlikely to survive a DPDP challenge.
2. Follow the “Researcher, Not Judge” Rule
Treat AI outputs as indicators, not final decisions; a minimal workflow sketch follows the list below.
High Risk: Auto-generating termination letters based solely on AI scores
Compliant: AI flags low performance → a human reviews underlying data → verifies missed targets → initiates a Performance Improvement Plan (PIP)
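Here is that compliant path as a minimal gate, assuming hypothetical names throughout (the PerformanceFlag record, the 0.4 threshold, the routing labels):

```python
from dataclasses import dataclass

@dataclass
class PerformanceFlag:
    employee_id: str
    ai_score: float        # the model's conclusion, not a reason
    evidence: list         # underlying records a human must verify

def handle_flag(flag: PerformanceFlag) -> str:
    """The AI only flags; every consequential step stays human-owned."""
    if flag.ai_score >= 0.4:            # hypothetical threshold
        return "no_action"
    if not flag.evidence:
        # A score with no provable incidents behind it dies here:
        # "the algorithm said so" never reaches a termination letter.
        return "dismiss_flag"
    # A human reviews the evidence, verifies the missed targets,
    # and only then initiates a Performance Improvement Plan (PIP).
    return "human_review_then_pip"

flag = PerformanceFlag("E-1042", 0.31, ["Q3 target missed", "Q4 target missed"])
print(handle_flag(flag))   # -> human_review_then_pip
```

The design point is that the model can only open a review, never close one: the decision record that eventually reaches a labour court is written by a human, citing incidents a human verified.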
3. Audit for Bias
Does the system penalise legitimate leave or flexible work? Does reduced login activity automatically translate to “low engagement”? If context is missing, discrimination risk is high.
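One concrete audit check is the “four-fifths” adverse-impact test borrowed from US employment practice: compare each group’s rate of favourable outcomes against the best-off group, and investigate anything below 0.8. The group labels and counts below are entirely hypothetical:

```python
# Outcomes from one audit period: (group, flagged_as_low_performer)
outcomes = (
    [("took_maternity_leave", True)] * 9 + [("took_maternity_leave", False)] * 11
    + [("no_leave", True)] * 12 + [("no_leave", False)] * 68
)

def favourable_rate(records, group: str) -> float:
    """Share of a group NOT flagged as a low performer."""
    flags = [flagged for g, flagged in records if g == group]
    return sum(1 for f in flags if not f) / len(flags)

rates = {g: favourable_rate(outcomes, g) for g, _ in set(outcomes)}
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    # Under the four-fifths rule, a ratio below 0.8 signals possible
    # disparate impact: the model needs an investigation, not a rollout.
    note = "  <- investigate" if ratio < 0.8 else ""
    print(f"{group}: favourable rate {rate:.2f}, impact ratio {ratio:.2f}{note}")
```

On these made-up numbers, employees who took maternity leave clear the tool only 55% of the time against 85% for everyone else, an impact ratio of roughly 0.65, well under the 0.8 line.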
Conclusion: AI Is a Tool—You Are the Master
The era of the HR “Black Box” is ending, not because the technology failed, but because the law has caught up. In 2026, the smartest organisations are those that use AI to augment human judgment, not replace it.
Don’t let an algorithm sign your termination letters.