AI AND HUMAN RIGHTS by Adv. Trupti Ravindra Raut and Adv. Nikita Kailas Bhujbal

ABSTRACT
This article focuses on AI and fundamental rights. Artificial Intelligence is rapidly transforming various aspects of human life, raising constitutional concerns about fundamental rights, and AI systems increasingly shape the relationship between governments and people in society. AI-driven surveillance threatens privacy, while algorithmic biases can lead to discrimination and violations of the right to equality. This paper explores the constitutional dimensions of AI's impact on fundamental rights, analyses how legal frameworks can evolve to safeguard individual freedoms while embracing technological advancement, and examines international perspectives on AI governance to ensure that AI aligns with constitutional principles.
INTRODUCTION
Artificial Intelligence (AI) plays a crucial role in human life. As technology changes rapidly, history shifts course when touched by scientific vision. AI is one such technical field that is transforming human society into one of robots and machines. What exactly is AI? AI includes machine learning, natural language processing, big data analytics, algorithms, and much more. However, just as human intelligence is marked by intrinsic bias in decision-making, such characteristics can also be found in AI products built on human-created intelligence. These phenomena of bias and discrimination, rooted in a cluster of technologies and embedded in social systems, are a threat to universal human rights. Indeed, AI disproportionately affects the human rights of vulnerable individuals and groups by facilitating discrimination, thus creating a new form of oppression rooted in technology.
At the same time, AI also offers advantages: it has the potential to enhance human rights by facilitating access to information, justice, and medical care. However, concerns have been raised regarding privacy breaches, discriminatory practices, and the risks associated with mass surveillance. The legal landscape surrounding AI and human rights is complex, requiring regulatory interventions to mitigate risks and promote ethical AI development. AI therefore has both positive and negative impacts on individuals' fundamental freedoms within the purview of international and domestic legal frameworks.
CONCEPT OF ARTIFICIAL INTELLIGENCE (AI)
Artificial Intelligence is a technological innovation that involves the simulation of human cognitive functions in machines, enabling them to process information, learn, and execute decision-making functions. AI encompasses machine learning, natural language processing, computer vision, and robotics, with its applications spanning across multiple industries[1]. AI systems rely on vast datasets to identify patterns, automate processes, and optimize decision-making. The regulation of AI is of growing concern, as its deployment must align with human rights laws, ethical principles, and statutory obligations. Governments and policymakers must develop legal frameworks to ensure transparency, fairness, and accountability in AI decision-making processes.
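To make the idea of pattern-based decision-making concrete, the short sketch below (written in Python with the open-source scikit-learn library) shows a toy model that learns a decision rule from a handful of hypothetical past loan decisions. The data, features, and setting are invented purely for illustration and do not describe any real system discussed in this article.

```python
# Illustrative sketch only: a toy machine-learning model that "learns" a
# decision rule from historical examples. The data below is entirely
# hypothetical and far smaller than the vast datasets real AI systems use.
from sklearn.linear_model import LogisticRegression

# Each row: [applicant income (in lakhs), years of credit history]
X_train = [[2, 1], [3, 2], [6, 5], [8, 7], [10, 9], [4, 1]]
# 1 = loan approved in the past, 0 = loan rejected in the past
y_train = [0, 0, 1, 1, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)          # identify patterns in past decisions

new_applicant = [[5, 3]]
print(model.predict(new_applicant))          # automated decision: 0 or 1
print(model.predict_proba(new_applicant))    # probability behind the decision
```

Because the model simply reproduces the patterns present in its training data, any bias embedded in past decisions is carried forward into new ones, which is precisely the fairness and transparency concern that motivates regulation.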
CONCEPT OF HUMAN RIGHTS
Human rights constitute fundamental entitlements that are inherent to all individuals, irrespective of nationality, race, gender, or social status. These rights are enshrined in national constitutions, statutory laws, and international treaties. Civil, political, economic, social, and cultural rights, including the right to privacy, freedom of expression, and access to healthcare, form the foundation of human rights protections[2]. The legal recognition of human rights is grounded in the principles of dignity, equality, and due process, ensuring individuals’ protection against state and non-state actors. AI technologies must be developed and implemented in a manner that upholds these principles, preventing unlawful discrimination and safeguarding individual freedoms.[3]
DEVELOPMENT OF ARTIFICIAL INTELLIGENCE (AI)
The conceptualization of AI can be traced back to mid-20th-century academic discourse, with foundational contributions from Alan Turing and John McCarthy. The term “Artificial Intelligence” was officially introduced in 1956 at the Dartmouth Conference[4]. AI development has progressed through iterative phases, from early expert systems to contemporary deep learning applications. Early AI research focused on rule-based systems, but modern AI leverages vast computational power and big data analytics to achieve unprecedented levels of automation and accuracy. Legal considerations regarding AI’s use and regulation have emerged as integral aspects of policymaking, given AI’s growing influence over critical sectors such as criminal justice, healthcare, and financial services.[5]
ARTIFICIAL INTELLIGENCE (AI) AND FUNDAMENTAL RIGHTS
1. The Need for AI Regulation Through a Human Rights Lens
Artificial Intelligence (AI) has the potential to revolutionize various aspects of human life, from strategic forecasting to democratizing access to knowledge. However, in order to harness these benefits, it is crucial to establish regulatory frameworks that prioritize human rights.[6]
1. Risk-Based Regulation: Focuses on self-regulation and self-assessment by AI developers, transferring significant responsibility to the private sector. This approach often results in regulatory gaps.
2. Human Rights-Based Regulation: Embeds human rights principles throughout AI’s lifecycle, from data collection to deployment, ensuring that AI does not reinforce discrimination, bias, or authoritarian governance.
3. Bias and Discrimination: AI algorithms have been found to exhibit biases in employment, law enforcement, and financial decisions, leading to potential breaches of anti-discrimination laws and human rights statutes.
4. Job Displacement: The automation of labour-intensive tasks raises concerns regarding economic rights and employment protections under labour laws and international human rights frameworks. Governments must implement policies to reskill and upskill workers to ensure economic security and workforce adaptability.
AI regulation must be people-centric, placing human rights at the core to prevent potential misuse. If not properly regulated, AI can widen inequalities, strengthen surveillance, and even undermine fundamental freedoms.[7]
2. Threats AI Poses to Fundamental Rights and Privacy
While AI presents numerous opportunities, it also comes with significant risks to fundamental rights. These include:
1. Mass Surveillance and Privacy Violations
AI-powered facial recognition and data collection tools can lead to mass surveillance, eroding privacy rights.
2. Bias and Discrimination in AI Systems
AI models used in criminal justice, hiring, and law enforcement have been shown to reinforce biases, disproportionately affecting marginalized communities.
3. Lethal Autonomous Weapons and Societal Control
AI can be misused for authoritarian governance, leading to mass control, censorship, and even military applications.
4. Right to Privacy and Data Protection
AI-driven mass surveillance, facial recognition, and data collection infringe upon personal privacy, leading to unregulated state and corporate monitoring.[8]
5. Right to Equality and Non-Discrimination
AI algorithms in criminal justice, education, healthcare, and job recruitment often reinforce existing biases, making discrimination nearly invisible yet deeply impactful.
The impact of AI on fundamental rights is not just a future concern; it is already happening today. Governments and corporations must take proactive measures to prevent further harm.[9]
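To illustrate how such bias can be surfaced in practice, the hypothetical sketch below applies a simple disparate-impact check to the outcomes of an imagined AI hiring tool. The groups, outcomes, and the 80% threshold (drawn from the four-fifths rule used in some anti-discrimination guidance) are included only as an example, not as a statement of any particular legal standard.

```python
# Illustrative sketch only: a simple disparate-impact check on the outcomes
# of a hypothetical AI hiring tool. Group labels and outcomes are invented.
from collections import defaultdict

# Each record: (applicant group, 1 if the AI shortlisted the applicant else 0)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    selected[group] += outcome

rates = {g: selected[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# The "four-fifths rule" used in some anti-discrimination guidance: flag the
# system if one group's selection rate is below 80% of the highest group's rate.
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Potential disparate impact against {group}: {rate:.2f} vs {highest:.2f}")
```

A check of this kind is only a first screen; a full human rights impact assessment of the sort discussed in the next section would also examine the training data, the model's design, and the remedies available to affected persons.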
3. Balancing AI Innovation with Accountability and Oversight
To ensure AI benefits society without infringing on fundamental rights, strict transparency and accountability measures must be implemented.
1. Mandatory Human Rights Impact Assessments
AI systems should be evaluated before, during, and after deployment to assess their impact on fundamental rights.
2. Independent Oversight Mechanisms
AI decision-making processes must be transparent, with clear avenues for addressing grievances and legal remedies for those affected.
3. Banning AI Systems That Violate Human Rights
Technologies that cannot comply with international human rights laws should be suspended or banned until adequate safeguards are in place.[10]
By enforcing strict regulations, governments can prevent AI from being misused while still encouraging responsible innovation.
4. Ethical and Legal Challenges in AI Regulation
While AI has transformative potential, the lack of human rights-focused regulation has enabled corporate exploitation and the manipulation of democracy and free speech. Governments have been slow to establish AI laws, allowing corporations to operate with minimal accountability while profiting from AI's unchecked expansion. AI-generated deepfakes, disinformation, and automated censorship threaten democratic processes and suppress freedom of expression.[11]
5. The Role of Governments and International Bodies in AI Governance
Governments and international organizations must take urgent action to regulate AI effectively. The United Nations (UN) can play a crucial role by:
1. Consolidating AI Governance Frameworks: Establishing universal regulations aligned with human rights principles.
2. Ensuring Inclusive Decision-Making: Bringing marginalized communities, women, and minority groups into discussions on AI governance.
3. Exploring an International Advisory Body: A dedicated global body could provide recommendations on AI governance and align regulatory standards with fundamental rights.[12]
Strengthening AI Oversight at the International Level
Human rights bodies and governments must enforce strict regulations that align AI governance with international human rights law. Tech companies must be held accountable for AI-related harm, with regulations requiring ethical AI deployment and transparency in AI decision-making. The world delayed action on climate change; it cannot afford to make the same mistake with AI. Regulatory frameworks must be established now to ensure AI serves humanity without compromising fundamental freedoms.
6. Limitations and Risks
In September 2021, the U.N. High Commissioner for Human Rights, Michelle Bachelet, gave an impassioned speech to the Council of Europe's Committee on Legal Affairs and Human Rights, urging states to stop the use of AI until appropriate safeguards can be put in place to prevent human rights violations.
A) The risk of misuse
The risk of misusing AI technologies in ways that violate basic human rights is not unique to the private sector. In fact, from a practical and policy standpoint, there is potentially an even larger risk of misuse by states when states are positioned as both the regulator and the user of these technologies.
B) AI and the risk of human rights abuses
States are the primary violators of human rights, making it critical that any system that allows (and encourages) states to use AI and machine learning technologies is on high alert for potential risks associated with their use. There is always a risk that states will end up turning around and using the same technology to further human rights violations or evade responsibility and accountability.[13]
C) Lack of Accountability
The opaque nature of AI decision-making creates challenges in establishing legal liability for wrongful outcomes, particularly in sectors such as criminal justice and financial services. Regulatory bodies must enforce transparency mandates and legal accountability for AI-generated decisions.
AI as a Tool for Expanding Language Access
AI-driven speech-to-text and translation services enhance real-time language access, crucial for international human rights. These tools can process multilingual audio, video, and text, aiding rights monitoring and advocacy. AI-powered chatbots help bridge language gaps, enabling real-time interviews with rights abuse victims and compiling multilingual data.
Examples
1. The Supreme Court of India has used AI to translate judgments into regional languages, but accuracy issues exist. A notable error translated "leave granted" as "holiday approved".[14]
2. The High Court of Kerala uses the AI tool 'Anuvadini' for translating judgments into Malayalam, improving accessibility but requiring human oversight for accuracy.[15]
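As a rough illustration of why such human oversight matters, the sketch below pairs machine translation with a human review step. It uses the open-source Hugging Face transformers library and a publicly available English-to-Hindi model chosen only as a stand-in; it is not the software actually deployed by the Supreme Court or the Kerala High Court.

```python
# Illustrative sketch only: machine translation with a human-review step.
# The model name is a publicly available example and is NOT the tool used by
# the Indian courts; it stands in for any translation model.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-hi")

judgment_excerpt = "Leave granted. The appeal is allowed."
draft = translator(judgment_excerpt)[0]["translation_text"]

print("Machine draft:", draft)
# Human oversight: a reviewer checks legal terms of art such as
# "leave granted", which a literal translation can render as "holiday approved".
approved = input("Approve draft? (y/n): ")
if approved.strip().lower() != "y":
    print("Draft sent back for human correction.")
```

The point of the workflow is that a trained translator remains in the loop, precisely because terms of art are where literal machine output tends to go wrong.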
In the case of Delfi AS v. Estonia (European Court of Human Rights, 2015), Delfi AS, an Estonian news website, published an article that received anonymous comments containing hate speech and threats. The company was sued after failing to remove these comments promptly. Delfi argued that holding it liable violated its right to free expression under Article 10 of the European Convention on Human Rights. "The European Court of Human Rights ruled that Estonia was justified in holding Delfi liable. The court emphasized that online platforms must take reasonable steps to prevent harmful content, even if it comes from users."
ROLE OF ARTIFICIAL INTELLIGENCE (AI) IN INDIA
Artificial Intelligence (AI) is growing rapidly in India and influencing many areas of life, including citizens' privacy as well as law and governance. AI brings many benefits, such as improving healthcare and business operations, but it also raises concerns about privacy, fairness, and accountability. India has seen some important developments relating to artificial intelligence and citizens' right to privacy.
1. The Digital Personal Data Protection Act, 2023
The Digital Personal Data Protection Act, 2023, known as the DPDP Act, 2023, is a new law in India that protects people's personal data. The DPDP Act, 2023 ensures that companies and the government handle personal information responsibly. The Act includes some important key points:
1. People must give clear permission before their personal data is collected.
2. Companies must protect personal data and not misuse it.
3. If data is misused or leaked, the company responsible will face strict penalties.
4. There is a system for people to file complaints if their data is mishandled.
This Act is important because AI systems often use personal data, and this law ensures that people's information is protected.[16]
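As a purely hypothetical sketch of the consent requirement described above, the code below gates the processing of personal data on a person's recorded consent for a specific purpose. The function names, records, and purposes are invented for illustration and do not represent any official DPDP compliance mechanism.

```python
# Hypothetical sketch only: gating the processing of personal data on the
# clear consent the DPDP Act, 2023 requires. Names and records are invented;
# this is not an official or complete compliance mechanism.

consent_register = {
    # user_id -> set of purposes the person has explicitly agreed to
    "user_101": {"loan_processing"},
    "user_102": set(),  # no consent given
}

def process_personal_data(user_id: str, purpose: str) -> str:
    """Process data only if the person consented to this specific purpose."""
    if purpose in consent_register.get(user_id, set()):
        return f"Processing data of {user_id} for {purpose}."
    # Without consent, the request is refused so it can be reviewed later.
    return f"Refused: {user_id} has not consented to {purpose}."

print(process_personal_data("user_101", "loan_processing"))
print(process_personal_data("user_102", "marketing"))
```

The point of the sketch is the ordering: consent is checked before any processing occurs, and refusals can be recorded so that the complaint mechanism contemplated by the Act has something to review.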
2. UNDP Report on AI and Human Rights in India (2022)
The United Nations Development Programme (UNDP) published a report in 2022 about AI's impact on human rights in India. The report studied how AI is used in different industries:
a) In Healthcare – AI helps doctors diagnose diseases faster.
b) In Finance – AI detects fraud and improves banking services.
c) In Business – AI helps businesses understand customer preferences.
d) In Employment – People share their personal data with AI systems when seeking job opportunities, and the large amount of information AI collects can be misused.
The report suggests that government regulations and company policies should focus on fairness and privacy when using AI.
3. Anil Kapoor’s Legal Case on AI and Personal Rights
Bollywood actor Anil Kapoor won a major legal case in India in 2024. The case was about AI-generated fake images and voices. Some people were using AI to create fake videos and pictures of him without permission.
The High Court of Delhi ruled that no one can use a person's face, voice, or image without their permission, and that AI cannot be used to create fake videos or deepfakes that damage a person's reputation. This decision is important because it helps protect celebrities, influencers, and ordinary people from AI-driven identity theft and misinformation.[17]
CONCLUSION
AI is rapidly evolving and expanding into the human rights domain, raising concerns but also offering promising opportunities. This article explores how AI can help secure rights and hold governments accountable for violations. Traditional regulatory frameworks struggle to keep pace, especially at the international level. A proposed evaluation framework can guide state and private actors, helping civil society organizations assess AI's risks, benefits, and necessary safeguards. Even where self-evaluation is lacking, this framework can serve as a tool for accountability. With the proper legal and regulatory constraints in place, AI has the potential to positively enhance human rights protections.
References
1. Universal Declaration of Human Rights (UDHR), United Nations, 1948.
2. International Covenant on Civil and Political Rights (ICCPR), United Nations, 1966.
3. Council of Europe’s Committee on Legal Affairs, September 2021.
4. The Digital Personal Data Protection Act, 2023
5. https://www.privacyinternational.org
6. https://www.futureoflife.org
7. https://www.un.org/en/about-us/universal-declaration-of-human-rights
8. https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx
9. https://ec.europa.eu/digital-strategy/en/white-paper-artificial-intelligence
[1] United Nations Educational, Scientific and Cultural Organization (UNESCO), Recommendation on the Ethics of Artificial Intelligence, 2021.
[2] Universal Declaration of Human Rights (UDHR), United Nations, 1948.
[3] International Covenant on Civil and Political Rights (ICCPR), United Nations, 1966.
[4] McCarthy, John, What is Artificial Intelligence?, Stanford University, 2007.
[5] European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust, 2020.
[6] European Court of Human Rights, Delfi AS v. Estonia, Application no. 64569/09.
[7] Binns, Reuben, Human Rights, Ethical Principles, and the Regulation of AI, Oxford Internet Institute, 2021.
[8] Michelle Bachelet, UN High Commissioner for Human Rights, Speech on AI and Human Rights, Council of Europe's Committee on Legal Affairs, September 2021.
[9] Privacy International, The Rise of AI Surveillance: A Privacy Threat, 2022.
[10] European Parliament, Resolution on Artificial Intelligence in Criminal Law and its Use by the Police and Judicial Authorities, 2021.
[11] Binns, Reuben & Veale, Michael, Fairness and Accountability in Machine Learning, Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, 2019.
[12] United Nations Development Programme (UNDP), AI and Human Rights in India Report, 2022.
[13] High-Level Expert Group on AI, Ethics Guidelines for Trustworthy Artificial Intelligence, European Commission, 2019.
[14] Supreme Court of India, AI-Driven Judgment Translation Initiative, accessed 2024.
[15] High Court of Kerala, Implementation of AI Tool 'Anuvadini' for Judgment Translation.
[16] Digital Personal Data Protection Act, 2023, Government of India.
[17] Anil Kapoor v. AI-Generated Content, High Court of Delhi, 2024 Judgment on Deepfake Protection.