By Anushka Ashar

Human Rights Risk due to AI: A Policy Review



If you are interested in applying to the GGI Impact Fellowship, you can access our application link here.


1. Introduction


As technology develops and improves, it inevitably intersects with human life and comes to play a vital role in how people interact and go about their daily lives. Once again, we find ourselves at the cusp of technological change, where the pace of everyday life shifts as scientific vision intervenes. Machine learning and artificial intelligence have the potential to effect revolutionary changes in the world. In simple terms, however, AI is human-created intelligence, and the intrinsic biases of human decision-making eventually seep into AI, robots and anything else humans create. This phenomenon of prejudice and discrimination, rooted in social systems and embedded in technology, presents a new challenge: a threat to universal human rights. Movements around data privacy and cybersecurity are already underway.


Given how far AI has come in recent years, assuring its safe use is an uphill task. AI disproportionately affects the human rights of vulnerable individuals and groups by facilitating discrimination, creating a new form of oppression rooted in technology. This report on AI and its effect on human rights tries to identify the various consequences of AI's development, such as layoffs in the workforce, discrimination and bias due to AI, and AI's effect on decision-making. We have also tried to identify the government policies in India that are meant to mitigate these threats to modern civilization. Human rights have always been a sensitive issue. The bias is not created automatically; humans built it in while creating AI, and moving forward it becomes even more important to mitigate and regulate AI's implications for human rights.



2. Impact of AI-driven corporate practices resulting in human rights violations


2.1 Companies, AI & Hiring


Current recruiting processes involving AI-based Applicant Tracking Systems (ATS) are often criticized. While hiring decisions should be based on the quality of an applicant's work experience and personality traits, ATS technology reduces an applicant's CV to a simple ‘yes or no’ based purely on a keyword search, as the sketch below illustrates.
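To make that criticism concrete, here is a minimal Python sketch of a keyword-only screen. The keyword list, cut-off and resume text are invented for illustration; this is not the logic of any real ATS product.

```python
# Hypothetical sketch of how a keyword-based ATS filter reduces a CV to yes/no.
# The keyword list, threshold and resume text are illustrative assumptions only.

REQUIRED_KEYWORDS = {"python", "machine learning", "sql", "agile"}
MIN_MATCHES = 3  # assumed cut-off

def screen_resume(resume_text: str) -> bool:
    """Return True ('yes') if enough required keywords appear, else False ('no')."""
    text = resume_text.lower()
    matches = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return matches >= MIN_MATCHES

resume = "Data analyst with 5 years of SQL, Python and machine learning experience."
print(screen_resume(resume))  # True -- the candidate passes on keywords alone
```

Nothing in this filter looks at the quality of the experience described; a well-suited candidate who phrases things differently is simply rejected.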


Some of the issues and arguments associated with use of AI in hiring are:

  • Human qualities such as empathy and contextual understanding are practically impossible to replace with software.

  • AI tools signify technological innovation but are criticized for lacking scientifically derived methods or research programmes. It is therefore unclear whether the models the AI tools use to assess candidates’ applications rest on valid underlying hypotheses.

  • Learned biases against particular genders and races, and/or against applicants with disabilities.

  • Candidate privacy may be violated: data relating to personal attributes and affiliations, lifestyle and social media presence could be used by the AI models.

  • Gender bias: a distinct pattern observed in AI-based systems is bias against women.

  • Firstly, regarding naming patterns in the text used to train ML models: in an analysis of the terms girl(s) and boy(s) in a corpus of British, American and New Zealand English, the term ‘girl’ was found to be three times more likely to be used to describe an adult woman than ‘boy’ was to refer to an adult man.

  • Furthermore, with respect to how women are described compared to men, research has shown that girls and boys are represented differently, with girls more often objectified and portrayed in more negative contexts. The adjectives used to describe women in the training data can therefore shape the behaviour of the AI tools.

  • Lastly, regarding how often women appear in text at all: in an analysis of the British National Corpus, ‘Mr.’ occurs more often than ‘Mrs.’, ‘Miss’ and ‘Ms.’ combined. Major discrepancies in mentions of women are observed in business literature, newspaper articles, tabloids and similar texts. A simple corpus audit of the kind sketched after this list can surface such skews before the text is used as training data.
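The following Python sketch illustrates that kind of audit: counting gendered titles and terms in a text sample before it is used as ML training data. The tiny sample string is an assumption standing in for a full corpus such as the British National Corpus.

```python
# Minimal corpus-audit sketch: count gendered terms in a text sample.
# The sample text below is an illustrative stand-in, not a real corpus.

import re
from collections import Counter

sample_corpus = """
Mr. Shah met Mrs. Rao and Ms. Iyer. The girls in the office joined later.
Mr. Khan said the boy would follow.
"""

tokens = re.findall(r"[A-Za-z]+\.?", sample_corpus)
counts = Counter(t.lower().rstrip(".") for t in tokens)

titles_female = counts["mrs"] + counts["miss"] + counts["ms"]
print("Mr.:", counts["mr"], "| Mrs.+Miss+Ms.:", titles_female)
print("girl(s):", counts["girl"] + counts["girls"],
      "| boy(s):", counts["boy"] + counts["boys"])
```

Run over a real corpus, counts like these make the representation gap measurable before the data ever reaches a hiring model.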


2.2 Companies, AI & Layoffs


In the age of technological advancement and the advent of AI, automation and fierce global competition are two major forces transforming the modern-day workplace.


To keep pace with the changes that technologies such as AI bring to the workplace, firms often resort to episodic restructuring and routine layoffs. This in turn leads to unemployment caused by technological advancement, or "technological unemployment", a phrase popularized by John Maynard Keynes in the 1930s.


The two major impacts of these layoff practices on a company are damaged employee engagement and decreased profitability. Among other effects, one key observation showed that after layoffs, the surviving employees experienced a 20% decline in job performance. Layoffs have also led to decreased innovation (a study of one Fortune 500 tech firm by Teresa Amabile at Harvard Business School found that after the firm cut its staff by 15%, the number of new inventions it produced fell 24%), a decline in company reputation (E. Geoffrey Love and Matthew S. Kraatz of the University of Illinois at Urbana–Champaign found that companies that carried out layoffs slipped in Fortune's ranking of most admired companies) and ruptured ties between salespeople and customers.


Alternatives to layoffs: companies need a clear and concise methodology to avoid layoffs, for which they must answer three broad and important questions:

  • How will the company plan for workforce change on an ongoing basis?

  • Who will be accountable for managing and supervising the measures taken?

  • What metrics should the company use to determine whether their actions are effective?

By addressing these key questions, companies can devise strategies to overcome periodic layoffs of their employees.


2.3 Corporates & Human Rights Policy


Corporates in the major industries affected by AI typically have an existing or newly established Human Rights Policy.


Technology leaders such as Amazon, Google, Microsoft, Apple and Facebook; Indian technology leaders such as Infosys, TCS and Wipro; global financial services corporations like JPMC, Goldman Sachs and Morgan Stanley; the Indian fintech corporation Paytm; global pharmaceutical companies such as J&J, Novartis, Pfizer and Roche; and Indian pharma companies Cipla and Biocon are among the industry leaders to have incorporated a Human Rights Policy.


2.4 Data Scarcity is Real


Data is crucial for building AI/ML systems. However, data scarcity is an issue that corporations frequently face.


Data scarcity arises when (a) there is a limited amount, or a complete lack, of labeled training data, or (b) the data is imbalanced, meaning there is a lack of data for a given label compared to the other labels.
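As a minimal illustration of case (b), the Python sketch below counts labels in a toy dataset and flags any label falling below an assumed 10% share; the label distribution and threshold are invented for illustration.

```python
# Sketch: detect label imbalance (one flavour of data scarcity) in a labeled set.
# The label list and the 10% threshold are illustrative assumptions.

from collections import Counter

labels = ["fraud"] * 30 + ["legitimate"] * 970  # assumed toy label distribution

counts = Counter(labels)
total = sum(counts.values())
for label, n in counts.items():
    share = n / total
    flag = "  <-- under-represented" if share < 0.10 else ""
    print(f"{label:12s} {n:5d} ({share:.1%}){flag}")
```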


Larger technology companies tend to have access to abundant data but often encounter data imbalance. Smaller technology companies, on the other hand, typically suffer from a scarcity of labeled training data. Corporations must therefore employ newer methods, such as incremental learning, federated learning, transfer learning and self-supervised learning, to address data scarcity.
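As one hedged illustration of these methods, the sketch below shows transfer learning with PyTorch and torchvision (assumed installed, torchvision 0.13 or later): a network pre-trained on abundant public data is frozen and only a small new head is trained on the scarce in-house data. The class count and the dummy batch are placeholders, not a real dataset.

```python
# Transfer-learning sketch: reuse ImageNet-pretrained weights, train only a new head.
# Assumes torch and torchvision >= 0.13; NUM_CLASSES and the batch are placeholders.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed number of labels in the small in-house dataset

# 1. Start from weights learned on abundant public data (ImageNet).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# 2. Freeze the pre-trained layers so the scarce data only trains the new head.
for param in backbone.parameters():
    param.requires_grad = False

# 3. Replace the final layer with a head sized for the new task.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# 4. One illustrative training step on a dummy batch (stand-in for real data).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step done, loss = {loss.item():.3f}")
```

Because only the small head is trained, far fewer labeled examples are needed than when training a network from scratch.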


To overcome data scarcity, synthetic datasets are often employed. A synthetic dataset resembles the real dataset; it is produced by learning the statistical properties of the real data. For example, in the financial sector, getting more customer checking-account data to feed a model would normally require more customers to open accounts and then a long wait while they build up transaction histories. With synthetic data, however, the existing customer base can be analysed and new checking accounts with associated usage synthesized, allowing models to use this data right away.
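A minimal sketch of this idea, assuming NumPy and a toy stand-in for the real account table, is to learn simple statistical properties (mean and covariance) of the existing data and sample new, artificial rows from them; production tools use far richer generative models.

```python
# Sketch: synthesize new "checking account" rows from the statistics of real rows.
# The toy "real" table below is an assumption; no real customer data is involved.

import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in features per account: [monthly_deposits, avg_balance, txn_count]
real = rng.normal(loc=[2, 55_000, 40], scale=[0.5, 15_000, 10], size=(500, 3))

mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# Draw synthetic accounts that mimic the real distribution without copying any row.
synthetic = rng.multivariate_normal(mean, cov, size=1000)
print("synthetic sample:", np.round(synthetic[0], 1))
```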


Where data is inadequate, therefore, corporations are suspected of breaching users' privacy to extract data using powerful AI models.


One major concern with synthetic data is that it may fail to replicate signals present in the original dataset, or conversely, may introduce signals that do not exist in it. Overfitting may also result if a small dataset is used to generate a much larger synthetic one.


One notable case of synthetic data generation is Generative Adversarial Networks (GANs), currently among the most widely promoted neural network frameworks. If trained well, they are extremely good at generating realistic-looking synthetic data.

Widely circulated images of people who do not exist, generated with GAN-based deepfake technology, show just how good GANs have become at generating synthetic data.
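The toy sketch below, assuming PyTorch is installed, shows the adversarial training loop in miniature: the "real data" here is a simple one-dimensional distribution rather than face images, but the generator-versus-discriminator mechanics are the same.

```python
# Toy GAN sketch: a generator learns to match a 1-D Gaussian that stands in for
# "real data". Purely illustrative; face-generating GANs use the same loop at scale.

import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # real samples ~ N(4, 1.5)
    fake = generator(torch.randn(64, 8))    # generator's attempts

    # Discriminator: label real as 1, fake as 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator: try to make the discriminator call fakes "real".
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
print(f"synthetic mean ~ {samples.mean():.2f}, std ~ {samples.std():.2f} (target: 4.0, 1.5)")
```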



2.5 Privacy - A Global Threat


Data privacy and cybersecurity are of utmost importance to major corporations. Some of the most common threats and associated data-privacy breaches result from cyberattacks such as phishing (reported in roughly 90% of cyberattacks), malware, ransomware (where SMEs are at greatest risk), AI-powered DDoS attacks and cloud-computing vulnerabilities. Insider threats are another class of security risk faced by corporations.


3. Humanitarian impact of AI on the end-users in India


3.1 AI’s Impact on Decision Making


The existing approach to AI policy is both short-sighted and counter-productive: it fails to meaningfully address the ethical, social and technical limitations that undergird the use of AI technologies.


India is an important jurisdiction to consider for a number of reasons: its sheer size and burgeoning AI industry make it an influential power, and sectoral challenges have been identified in its policy-making processes.


On the concern around AI systems, the report recognizes that bias is embedded in data and that such bias may be reinforced over time. It recommends that one possible way to deal with this is to "identify the in-built biases and assess their impact, and in turn find ways to reduce the bias". However, instead of treating AI as a purely mathematical model, the report falls short of presenting it as a socio-technical system.


The potential risks and limitations of data-driven decision-making, and their ethical and social impacts, need to become a central consideration in India's AI policy development. India's National Data Sharing and Accessibility Policy (NDSAP) contemplates sharing of non-sensitive data generated using public funds through the Open Data Platform, but this has had only moderate success in solving the data parity problem, as the majority of quality data in India remains restricted to the private sector.


3.2 AI-based monitoring violates data privacy and personal information


The challenge of security is bigger than it may sound. An excellent illustration of the lack of anonymity in today's online world is a revision of the famous 1993 New Yorker cartoon produced by the political advertising company Campaign Grid. It shows all the information that Campaign Grid can know about an internet user, including precise personal details such as age, address, profession, economic status, political affiliation and likely shopping and travel plans. This draws a sharp contrast with the early years of the internet, when users' identities could easily be concealed. The Cambridge Analytica and Facebook scandal shows that stricter laws are needed for social media companies' data handling and sharing practices, because they maintain a repository of personal data on millions of users, and unethical use of such data can give a stakeholder undue influence in moulding public opinion.


In the absence of an express law, various intermittent judicial and legislative developments dictate data protection in India. The IT Act, 2000, which deals with cybercrime and e-commerce, provides under Section 43A for the Information Technology (IT) Rules, 2011, which prescribe 'reasonable security practices' for handling 'sensitive personal data or information'. These rules impose various limits on how organizations may collect, use, retain and disclose the personal data of individuals, and require them to have a privacy policy. Like all rules, however, they have loopholes: they apply only to corporates and become powerless when a government agency seeks the information. By leaving the government outside their ambit, the IT Rules give citizens only marginal control over their personal information. A re-examination of the efficiency of our laws was prompted when a petition was filed in 2012 accusing the Aadhaar scheme of violating the right to privacy. In the landmark Puttaswamy judgment of August 2017, the Supreme Court declared the 'right to privacy' a fundamental right of all citizens. Informational privacy was held to be an intrinsic part of privacy, and an expert committee under the chairmanship of Justice B.N. Srikrishna was set up to draw out a data protection framework for India.


3.3 Individuals with low skills suffering job loss due to the AI transformation


According to the International Labour Organization (ILO), 60 percent of formal employment in India relies on middle-skill or blue-collar jobs, including clerical, sales, service, skilled agricultural and trade-related work, all of which are prone to automation. For example, job losses in India's IT sector alone are reported to have reached 1,000 over the past year, particularly due to the integration of advanced technologies such as AI and machine learning, even as the sector is expected to grow to an industry of two million workers in the near future. Yet relative to what robots are doing to less-skilled sectors, this is a drop in the bucket. Large e-commerce warehouses that a few years ago were staffed by armies of workers are now run with around 200 robots built by GreyOrange, a corporation based in Gurugram, India. Unlike humans, these autonomous mobile robots can move around lifting and stacking boxes 24 hours a day and can reduce manpower requirements by up to 80%.


3.4 AI has made human connections more distant


Human connection is an intuitive need to build social rapport with others. In today's digital era we are missing out on true connection, intimate conversation and an empathetic heart when asking others about their well-being. It is nearly impossible to recreate online the kind of social rapport and human connection we get offline. The world has become a global village, and our day begins and ends with work emails, social media feeds, laptop screens, mobile devices and instant messaging platforms. Round the clock we skip from app to app, longing for true human connection and bonds while staying stuck behind the perceived safety of our screens. AI is increasingly making choices for us, suggesting videos on YouTube or Netflix or queuing up a pre-made playlist on Spotify. Our primary research reflects that 46.2% of respondents now use chatbot conversations while shopping online or booking an appointment, and 59.7% often respond to LinkedIn messages and emails using smart replies.

Many people believe that AI is enhancing human capacities, but some predict the opposite: that people's deepening dependence on machine-driven networks is eroding their ability to think for themselves, to take decisions and actions independently of automated systems, and to interact and communicate effectively with others. Futurologist Ian Pearson predicts that by 2050, human relationships with robots will outnumber those between humans.


The documentary "Hi, AI" does an incredible job of observing human-robot relationships of all kinds without prejudice or bias. In the near future we may see Alexa take a humanoid form and act as a nanny, or as a companion keeping a loved one company in hospital. Our primary research also reflects that 45.4% of respondents are satisfied using Alexa or Siri to schedule a task or search the web. At the same time, 47% of Americans claim they feel lonely, so there may be space for robots to address this modern-day epidemic of loneliness.


4. Government policies to regulate the adverse impact of AI on human rights in India


Artificial intelligence is gaining ground and has become an emerging focus area of policy development in India. The Indian government has put the adoption, development and promotion of AI high on its list of priorities, on the premise that AI will make people's lives easier. However, the National AI Policy framed by NITI Aayog presents a mixed picture: it attempts to regulate the adverse impacts of AI on human rights in some places but misses out on others.


India has a rich and diverse background comprising various ethnic, gender, religious and linguistic groups. Historically, discrimination along these same lines was rampant in Indian society and still runs deep, and welcoming AI into every field may exacerbate it. What if a Muslim man, despite a solid economic background, is denied a loan; or a qualified woman is given less preference than her male counterparts; or a Dalit faces a tougher punishment for the same crime for which someone from a general caste receives a watered-down sentence? Since machine learning draws on previously stored inputs and collates them into large datasets, its decisions can reflect the same prejudice; 'algorithmic bias' is the term used for this phenomenon. Moreover, coding is done by humans, who may subconsciously carry such notions and feed them to the computer. Thirdly, biases can arise from incomplete data: if a criminal database is dominated by people of a certain colour, gender or appearance, the chances of wrongful matches rise. The National AI Policy does not cater to the historical bias and discrimination prevalent in Indian society.
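The sketch below is a purely illustrative toy example, using synthetic data and assuming NumPy and scikit-learn are installed; it is not drawn from the policy or any real dataset. It shows the mechanism in miniature: a model trained on historical lending records that already encode discrimination reproduces that discrimination even for applicants with identical incomes.

```python
# Toy sketch of algorithmic bias: biased historical approvals -> biased predictions.
# All data is synthetic and illustrative; 'group' is a stand-in protected attribute.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)          # 0 = majority, 1 = minority (illustrative)
income = rng.normal(50, 10, n)

# Historical approvals: identical incomes, but the minority group was approved less.
approved = (income + rng.normal(0, 5, n) - 8 * group) > 48

model = LogisticRegression().fit(np.column_stack([income, group]), approved)

test_income = np.full(100, 55.0)
for g in (0, 1):
    rate = model.predict(np.column_stack([test_income, np.full(100, g)])).mean()
    print(f"approval rate at identical income, group {g}: {rate:.0%}")
```

The model has learned nothing but the historical pattern, yet it now disadvantages the minority group at equal income, which is exactly the harm the policy needs to anticipate.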


Further, we are all aware that data has become the new currency, the fuel for a nation's machinery. But the question is: to what extent? The concerns are three-pronged. Firstly, AI is highly data-reliant and may use personal data. Secondly, data might be used without the individual's consent. Lastly, sensitive information may be discernible from a system's outputs. All three amount to gross violations of the right to privacy under Article 21 of the Indian Constitution. Technologies like deepfakes, which put a person's face onto another body, put human rights in peril, and countries like China are using such technologies for mass surveillance. The Indian government is increasingly resorting to facial recognition of criminals, which captures sensitive information such as iris scans and fingerprints, and a bill was recently proposed to capture the DNA information of criminals. Apps like Aarogya Setu, though they helped track the transmission of the virus, combine health data with digital surveillance in a dangerous mix that puts our human rights and privacy at risk. NITI Aayog has framed a strategy called 'Responsible AI for All', but no national regulation exists as of now, and the B.N. Srikrishna committee's Data Protection Bill, 2018 is yet to become an act. Thus, the National Policy falls short on addressing the question of privacy.


Another area of concern is whether the National AI Policy will assure fair treatment and equity for all. While the policy names the differently abled and "accessible technology" as a focus area, it might still miss out on sections of the population. Take beneficiary selection for government schemes: if beneficiaries are not identified correctly, they lose out on the schemes and are excluded from their benefits. Senior citizens as a group may be excluded because AI-based tools rely on fingerprints and similar features for beneficiary identification, which devices may not read well at an advanced age. This happened in the state of Jharkhand, where a few elderly beneficiaries could not get their rations under the National Food Security Act, 2013 because the devices could not read their fingerprints. It is thus a mixed bag: in some sectors the policy holds promising features and in others it falls short, and since AI implementation is still building up, its impact will have to be watched in real time.


The National Policy on AI does, however, aim to ensure social and inclusive distribution of public goods like education and health. "AIforAll will aim at enhancing and empowering human capabilities to address the challenges of access, affordability, shortage and inconsistency of skilled expertise; AIforAll will focus on ensuring prosperity for all and for achieving the greater good", states NITI Aayog. The policy includes a special segment, the National Program for Government Schools: Responsible AI for Youth, which seeks to demystify AI for young people, equip them with skill sets, democratize access to AI tools, and train them to create meaningful social-impact solutions. Given smartphone penetration in India of around 60%, AI may genuinely expand access to education for students who could not otherwise attend. Issues such as language barriers, high drop-out rates (especially among girls) and out-of-pocket expenditure (OOPE) could be eased through AI-based learning. Similarly, India, with a population of over 1.3 billion, faces an acute shortage of doctors, so AI can prove essential to healthcare. The use of AI in healthcare is commonly divided into three broad categories: descriptive, predictive and prescriptive. During the lockdown, apps such as Aarogya Setu, Sandhane (for tracing COVID-19 in rural and remote areas) and Sahyog helped in further investigation, and AI-based tools were even used to enforce quarantine and social distancing.


Even though India is a data-dense country and has the National Data Sharing and Accessibility Policy in place, we do not have robust, comprehensive open datasets across sectors and fields. Most startups turn to open datasets in the US and Europe to develop prototypes, which is especially problematic because the demographic representation in those datasets differs significantly, resulting in solutions trained on a different demography that then require retraining on Indian data. Although AI is technology-agnostic, for many data-analysis use cases demographically different training data is far from ideal. Robust open datasets are the only way to enable access for the masses, particularly for small startups as they build prototypes. Ryan Calo calls this "an issue of data parity", where only a few well-established leaders in the field have the ability to acquire and build datasets. This is particularly true for categories such as health, employment and financial data, the very sectors in which AI has the greatest potential in India. India is one of the fastest-growing adopters of artificial intelligence; a recent study estimates that between 2021 and 2026 the industry will grow at a CAGR of 35.1%. The National Roadmap for Artificial Intelligence by NITI Aayog proposes the creation of a National AI Marketplace consisting of a data marketplace, a data-annotation marketplace and a deployable-model/solutions marketplace. The biggest justification for AI innovation as a legitimate objective of public policy is its promised impact on improving people's lives by helping to solve some of the world's greatest challenges and inefficiencies, emerging as a transformative technology.


5. Recommendations


1. One possible recommendation for reducing gender bias in hiring is to lengthen the shortlist of potential candidates. This simple step has shown impressive results in a study published in the Harvard Business Review. Participants were asked to make an informal shortlist of the three best candidates for a given role, in this case the CEO of a leading technology company. As the popular verdict predicts, male candidates far outnumbered female candidates (a women-to-men ratio of 1:6). The participants were then asked to make a longer list of six candidates, and the extended shortlist contained 44% more female candidates than the original (a women-to-men ratio of 1:4).

2. Two main strategies that a company can employ to avoid layoffs are to retrain and to retain.


Retain: Many companies practice "rank and yank" layoffs to dismiss weaker employees; such practices undermine the effective management of employees. The 5R strategy of Responsibility, Respect, Reward, Revenue sharing and Relaxation time shows strong results for employee retention. Other strategies, such as providing feedback, have also proven effective: a Harvard Business Review study suggests that the ideal ratio of positive to corrective suggestions is 5.6 to 1. Effective inclusion of employees has been shown to improve performance by as much as 3.8 times.


Retrain: Training a company's existing talent is a feasible strategy. Ways to build effective retraining programmes include experiential learning, where studies show 33% of employees prefer hands-on training modules, with proven results in up-skilling employees and raising productivity; a combination of online and in-person training, where an InterCall survey found that 50 percent of employees believed in-person training helped them retain information; and learning at one's own pace, where a 2015 Harvard Business School study found that participants who were asked to stop and reflect on a task they had just performed improved at greater rates than participants who simply practised the task and rushed through complicated concepts.


3. Many terms and conditions contain clauses that grant access to the end user's personal data, but because of their length and poor legibility, consumers neglect to read them and agree blindly. There needs to be a mandate ensuring that the terms and conditions shown before downloading or using an app or website are precise, short and clearly highlighted. Our primary research reflects this: 67.3% of respondents do not read the terms and conditions while installing a new application on their phone. A quick Q&A on the T&C before using the application would further ensure that the consumer has understood them. This would raise the goodwill and reputation of the company, make consumers aware of what they are signing up to, and ensure they are well versed in data-privacy policies. As per our survey, 61.5% of respondents are not aware of the laws pertaining to data privacy and 46.2% have experienced a breach of their personal data.


4. Various marketing strategies influence consumers' decisions. Our survey shows that 84.6% of respondents always receive product recommendations on social media platforms based on their browsing history, and 61.5% almost always see their Google searches reflected on YouTube, Facebook and Instagram. These figures suggest that consumer choices are heavily influenced by the products and services surfacing in their social media feeds; indeed, 71.5% of respondents say they prefer a personalized shopping experience on online retailers, with product recommendations. To limit undue influence on consumer decisions, there should be a mandate that information must be passed on in the form in which it reached a particular stakeholder, not bent to influence the consumer's decision. For example, information received on a social media platform should be forwarded or marketed in the same form in which it reached that person. Strict policies should be put in place so that any alteration of received information intended to influence a person's decisions is penalized, since it is no longer very difficult to identify the source of a change in information.


5. To address algorithmic bias, policymakers should formulate an algorithmic transparency bill. A separate bill is required because the proposed Personal Data Protection Bill, 2018 does not mandate that computer-driven actions be explainable. Data and algorithms need to be open to public scrutiny.


6. In order to facilitate innovation and encourage growth, the central and state governments need to actively pursue and implement the National Data Sharing and Accessibility Policy. Gaining access to data raises its own questions of ownership, privacy, security, accuracy and completeness, but clean, accurate and appropriately curated data is needed for training AI algorithms, and government bodies are often the gatekeepers of this data.


6. Case Study


Similar steps have been taken in countries across the globe. In Europe, for example, solely automated decisions are prohibited where they could have a legal impact on the individual. The US Congress introduced the Algorithmic Accountability Act in April 2019, and the New York City Council introduced an algorithmic transparency bill in 2017. There is an immediate need for such a bill in India, since several states are already using computer models for law enforcement: Rajasthan, Punjab and Uttarakhand use facial recognition software for criminal-records purposes, while Delhi and Maharashtra use predictive policing techniques. Secondly, the Indian government's National AI Policy does little to answer the privacy concerns attached to the usage of data, and the B.N. Srikrishna committee's Personal Data Protection Bill, 2018 is yet to become an act. A regulatory framework for the responsible management of data is needed. On April 21, 2021 the European Union proposed a Regulation on AI whose main aim is to safeguard users' fundamental rights.

Meet The Thought Leaders


Shatakshi Sharma has been a management consultant with BCG and is Co-Founder of the Global Governance Initiative. She has been felicitated with the Economic Times Most Promising Women Leader Award, 2021 and was a LinkedIn Top Voice, 2021.

Prior to graduate school at ISB, she was a Strategic Advisor with the Government of India, where she drove good governance initiatives. She was also felicitated with a National Young Achiever Award for Nation Building. She is a part-time blogger with her popular series, MBA in 2 Minutes.


Naman Shrivastava is the Co-Founder of Global Governance Initiative. He has previously worked as a Strategy Consultant in the Government of India and is working at the United Nations - Office of Internal Oversight Services. Naman is also a recipient of the prestigious Harry Ratliffe Memorial Prize - awarded by the Fletcher Alumni of Color Executive Board. He has been part of speaking engagements at International forums such as the World Economic Forum, UN South-South Cooperation etc. His experience has been at the intersection of Management Consulting, Political Consulting, and Social entrepreneurship.


Karan Patel (he/him) is an undergraduate from IIT Madras. He is currently employed with Teachmint, an ed-tech start-up, on their strategy team. Prior to Teachmint, he worked at Dalberg Advisors as an analyst, where he worked with multilaterals and international foundations in the gender, education and energy sectors. He has also interned at MIT Sloan, Qualcomm and IIM Ahmedabad, giving him a plethora of experience in the corporate and academic worlds. He also started his own venture in hyperlocal air-quality monitoring. Karan is an avid sportsperson and a masala chai fanatic.


Meet The Authors (GGI Fellows)


Omkar Parulekar is a post graduate student from St. Xavier’s College, Mumbai. He holds an undergraduate degree in Microbiology. He is currently a research trainee at the National Facility for Biopharmaceuticals, Mumbai. He has research internship experience across domains of Biology, Physics, Material Sciences & Nanotechnology. He is a sports enthusiast and is currently getting formally trained in tennis. In his free time, Omkar enjoys watching films, learning a foreign language, or reading a book.


Darshita is a commerce graduate from Gargi College, University of Delhi. She is currently working at EY as an Assurance Associate with the EMEIA Private Equity Team. She is a major proponent of giving back to society and making others’ lives better. In her current organization, she is volunteering in EY Ripples Initiatives, Corporate Responsibility Program whilst contributing to EY’s mission of impacting one billion lives by 2030. Darshita is very inclined towards the rich culture and heritage of India. She is a Kathak Dancer of Jaipur Gharana and has completed her Visharad from Bhatkhande Sangeet Vidyapith, Lucknow.


Isabelle is a social entrepreneur and an intersectional environmentalist. Isabelle is a commerce undergraduate from Stella Maris College and has 4 years of progressive audit experience with US based clients at Deloitte. She is currently a Social Impact Fellow with Genpact-KEF and has been aligned with an NGO leading change in the legislative rights space in India. Isabelle co-founded the women’s initiative to tackle the waste management problem in Kerala by upcycling tailoring waste into educational toys, catering to the needs of local anganwadis, with the help of kudumbashree women. She is a green junkie for all things sustainable and an incurable gastronome and avid traveller when not at work.


Athullya is a research scholar and a Doctoral Fellow with the Indian Council of Social Science Research, currently working with adolescent survivors of child sexual abuse. Previously, she has worked with startups that provided psychosocial interventions to children diagnosed with developmental disorders and to elderly people diagnosed with dementia. She started off as a Research Associate with IIT Bombay and has a Master's in Clinical Psychology from Christ University and a Bachelor's degree from Osmania University. She is trained in Carnatic music, Bharatnatyam and the Indian musical instrument, the veena. She is a movie, web series, stand-up comedy and podcast enthusiast, fond of experimenting with new recipes, exploring eateries, travelling, reading and writing.


Dopal is a Masters’ student studying Politics with a specialization in International Relations at the Jawaharlal Nehru University, New Delhi. She holds an undergraduate degree in Economics. She is currently a Policy Fellow at the Young Leaders for Active Citizenship. To understand the experiences at the grassroot level, in the domain of governance, she interned with various think tanks like Grassroots Advocacy and Research Movement, Think India, etc. She has also co-authored a research paper titled “Comparative Analysis of the Status of Women in Pre Taliban and Post Taliban Government 2.0”published at the IJPSL. When not engaged in policy matters, she spends time reading books, painting and engaging in dialogue on mental health.


Souptik is a Master's in Management student at Cranfield University in the United Kingdom. Prior to this, he worked in a strategy role for a startup based out of Bangalore. Souptik has also worked in risk consulting at KPMG. He actively participates in voluntary work and has worked in multiple capacities for various NGOs and not-for-profit organisations. During his free time he enjoys reading and driving, especially to unwind, and he is a massive Formula 1 enthusiast. Souptik is a hip-hop dancer and also enjoys activities such as cycling, hiking, trekking and cooking.



