Volume 89: Rights, Returns, and Responsibility: Can They Coexist in Tech Investment?
About ImpactPHL Perspectives:
ImpactPHL Perspectives is a multi-part content series that explores the many facets of the impact economy in Greater Philadelphia from the perspectives of its doers, movers, shakers, and agents of change. Each volume is written directly by a leader in this space to discuss best practices and share lessons learned while challenging our assumptions about financial and impact returns. For more thought leadership like this, check out the full catalog of ImpactPHL Perspectives.
Anita Dorett, Audrey Mocle, Kindra Mohr; Private Capital & Tech Accountability Working Group
Ever since OpenAI first released ChatGPT to the public in late 2022, artificial intelligence (AI) has been top of mind for everyone, including consumers, business leaders, policymakers, and investors. The proportion of companies adopting AI in at least one business function jumped from 55 percent in 2023 to 72 percent in 2024, and the share of businesses using generative AI climbed even higher earlier this year, according to McKinsey. From online platforms to healthcare providers to car manufacturers, organizations are investing heavily in AI to enhance performance and boost workplace productivity. Reflecting this trend, private AI investment in the United States soared to $109.1 billion in 2024, nearly 12 times China’s $9.3 billion and 24 times the U.K.’s $4.5 billion. Generative AI alone accounted for $33.9 billion in global private investment, an 18.7% increase over the previous year.
Artificial intelligence—computer systems that simulate human learning and perform complex tasks that normally require human reasoning—is not itself a new technology; industries from finance to healthcare to housing to defense have used forms of algorithmic decision-making for years. What ChatGPT’s debut brought to the fore is generative AI: deep learning models that can generate high-quality text, images, audio, code, and other content from simple prompts. Large language models (LLMs), currently the leading approach to generative AI, are trained on massive datasets to learn patterns and relationships, and then generate original content that resembles their training data.
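For readers less familiar with the mechanics, the following is a minimal illustrative sketch of that prompt-in, content-out pattern. It assumes the open-source Hugging Face transformers library and the small GPT-2 model, chosen purely for accessibility; commercial generative AI systems are vastly larger, but the basic interaction is the same.

```python
# A minimal sketch of prompt-driven text generation, assuming the
# open-source Hugging Face "transformers" library (pip install transformers)
# and the small GPT-2 model. Production LLMs are far larger, but the
# pattern is the same: the model continues a prompt with new content
# that resembles its training data.
from transformers import pipeline

# Load a pretrained language model wrapped in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt, one predicted token at a time.
output = generator("Responsible investment in AI means", max_new_tokens=40)
print(output[0]["generated_text"])
```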
The advent of this technology has brought both promise and peril. Leaving aside broader concerns about AI’s potential to disempower humanity, the development and application of LLMs are already having adverse impacts on people and society. Generative AI tools are fueling disinformation and hate speech and amplifying existing data privacy concerns. They are being marketed as a means of controlling and eventually replacing entire workforces, and low-wage workers supporting generative AI’s development are being exploited. The rapid build-out of data centers undergirding the generative AI boom is exacerbating climate change and disrupting local communities. Particularly concerning, generative AI is already being deployed in active military conflicts to surveil and target citizens, and may be used to operate autonomous weapons.
“The consensus from the session was the urgent need for responsible AI global standards to inform the development of investor frameworks to discern responsible AI opportunities and decision-making that will also govern ongoing investment stewardship across both public and private markets.”
The Private Capital & Tech Accountability Working Group
In response to this rapidly evolving context, rife with risks and seemingly full of investment opportunities, a coalition of leading organizations—including the Business & Human Rights Resource Centre (BHRRC), BSR, Empower, Heartland Initiative, Investor Alliance for Human Rights, and Open MIC—launched the Private Capital & Tech Accountability Working Group in 2022. Building on an investor session at RightsCon on rights-respecting technology and Open MIC’s work through the NetGain Partnership to identify finance-focused strategies for the digital age, the working group was formed to raise awareness among investors of the negative consequences associated with generative AI investments and how to address them, as well as to strengthen the role of civil society in advocating for accountability within tech-related investments.
Together, our working group brings a diverse range of tools, from investigative research and investor education to advocacy, human rights monitoring, and advisory support for tech companies and investors. The working group’s goal is to bridge the accountability gap at the intersection of technology, finance, and human rights—an area still underserved but increasingly critical. By aligning our complementary expertise, the working group is helping to shape a more responsible investment landscape in tech.
Following seed funding from Aspiration Tech and engagement with different stakeholders, the working group convened in Philadelphia to map out a shared vision. We narrowed our focus to generative AI, which we view as a transformative technology that has been dominated by private capital-backed companies. With additional funding from the Ford Foundation, we devised a strategy to build the capacity of investors and civil society to identify and engage these companies on the risks this new technology poses to human rights and society.
The Risk Landscape
Each working group member has closely followed investment trends and the potential risks associated with tech developments over the years, and we continue to see a proliferation of human rights risks associated with AI. As AI systems increasingly make decisions that directly impact people’s lives, such as who gets hired, who is approved for a loan, or who receives certain types of medical care, the potential for these systems to replicate and amplify existing biases becomes a growing concern.
Algorithm-driven decisions, especially in critical areas like hiring, lending, law enforcement, and healthcare, can exacerbate systemic discrimination if not carefully monitored. This often stems from biased or incomplete datasets, a lack of diversity within AI product development teams, and limited human oversight during design and deployment. Because these systems frequently operate with minimal transparency, it is difficult for individuals to contest, or even understand, decisions that may deeply affect their lives. Moreover, the use of AI systems in surveillance and data collection raises concerns about privacy and freedom of expression, particularly in contexts with weak regulatory oversight.
The Investor Alliance for Human Rights, which helps investors better understand, invest in, and engage with tech and other portfolio companies whose products and services are powered by AI and generative AI, had the opportunity to dive into these topics as a panelist in the “Setting Boundaries: Investing Responsibly in AI” session at the Total Impact Summit 2024. While the impact and sustainable investing ecosystem is eager to harness the power, benefits, and investment returns that AI and generative AI promise, the discussion centered on the need for investors to assess the technology’s risks and on the importance of rigorous standards and robust human rights due diligence processes to ensure the integrity of AI and generative AI development and use.
This led to a lively and interactive conversation—highlighted in Technical.ly—among the ImpactPHL investor audience and the panelists. It ranged from high-risk AI systems and uses that threaten our safety and security and undermine civil liberties and democratic processes, to what is needed to address and mitigate such risks, including setting investor expectations for what “responsible” or “trustworthy” AI should encompass. This was followed by a robust discussion on the need for global AI standards and engagement with policymakers on legislative measures, as well as incentives to steer the development and deployment of trustworthy AI. The consensus from the session was the urgent need for responsible AI global standards to inform the development of investor frameworks to discern responsible AI opportunities and decision-making that will also govern ongoing investment stewardship across both public and private markets.
A Human Rights-Based Approach for Investors and Their Investees
So, how should investors respond to this landscape? A new briefing for investors provides a path forward. It explores these risks in depth, examining how AI and generative AI may impact human rights, governance, and business practices, while offering insights and practical recommendations for companies and investors. It provides actionable guidance for the responsible development, deployment, and use of AI and generative AI in line with the UN Guiding Principles on Business and Human Rights (UNGPs).
“Impact investors should also work with portfolio companies that develop, deploy, or rely on generative AI for their products and services to build their capacity and adopt UNGPs-aligned systems and processes to proactively manage and safeguard against human rights risks and impacts.”
As this briefing indicates, to understand and address the risks and impacts associated with generative AI, investors should look to the UNGPs, the authoritative global standard on the business responsibility to respect human rights. The UNGPs apply a ‘risk to people’ lens grounded in the ‘do no harm’ principle that is fundamental to impact investing, where the objective is not only to generate returns but also to achieve positive, measurable outcomes for people and the environment.
The UNGPs provide investors and issuers with a framework to identify, prevent, mitigate, and account for negative human rights impacts throughout the investment lifecycle, from pre-investment to exit. A UNGPs-aligned approach involves putting robust policies and governance over human rights issues in place, conducting ongoing human rights due diligence, facilitating and providing access to remedy when harm occurs, and engaging meaningfully with stakeholders, including workers, customers, and others who may be negatively affected by the generative AI value chain.
Impact investors should also work with portfolio companies that develop, deploy, or rely on generative AI for their products and services to build their capacity and adopt UNGPs-aligned systems and processes to proactively manage and safeguard against human rights risks and impacts. As a starting point, this includes considering the following key questions about the company’s human rights performance with regard to generative AI—and helping to close gaps where they may exist. While some of these questions are more relevant for later-stage investments, they nonetheless provide guidance on the human rights fundamentals that investors should expect as companies mature.
Does the company embed ethical and human rights considerations in a Responsible AI approach? How has the company embedded human rights into its generative AI governance, risk mitigation strategies, and product policy development and enforcement, as well as relevant technical teams and functions?
Has the company assessed the human rights impacts associated with generative AI and developed measures to address them? Has the company worked through a ‘future scenario’ analysis to explore negative outcomes and relevant human rights risk mitigation strategies?
Does the company incorporate human rights risks and impacts in its disclosure practices?
Does the company have remedy preparedness and response plans and processes in place in case harm occurs? How would these work in practice?
How is the company engaging with stakeholders who may be negatively affected by the generative AI value chain to better understand risks and how to address them?
For impact investors—and investors more generally—understanding and taking steps to address human rights risks and impacts associated with generative AI in their portfolios are critical for long-term, sustainable value creation. As investors consider the opportunities associated with generative AI, our working group aims to help them navigate the risks and pitfalls inherent in a rapidly evolving technology through a human rights-based approach. For impact investors that seek to drive positive change through generative AI, we see this approach as not only complementary to that goal but essential to achieving it.
Anita Dorett, Director of the Investor Alliance for Human Rights, a collective action platform for responsible investment that is grounded in respect for people’s fundamental rights.
Audrey Mocle, Deputy Director at Open MIC, a nonprofit organization that helps investors assess and address the impacts of digital and emerging technologies on society.
Kindra Mohr, Business and Human Rights Attorney who leads Business for Social Responsibility’s (BSR) work at the intersection of finance, social impact, and human rights.