Large institutional investors, increasingly concerned about the human rights implications of software, are pressing technology firms to accept responsibility for misuse of their products.
The Collective Impact Coalition for Digital Inclusion, a group of 32 financial institutions with $6.9tn under management that includes Aviva Investors, Fidelity and HSBC Asset Management, is among the leaders of the drive to encourage technology companies to adopt ethical AI.
Aviva Investors has met with chipmakers and tech companies in recent months to warn about human rights risks associated with AI, including surveillance, discrimination, unauthorised facial recognition and mass layoffs.
Louise Piffaut of Aviva Investors said that meetings on the subject had “accelerated” because of concerns about generative AI such as ChatGPT. As with any company it engages with, Aviva Investors may vote against management at annual general meetings, or raise concerns with regulators, if engagement fails.
Piffaut explained that it is easy for companies to deflect responsibility by saying, “It’s not my problem if someone misuses my product.” That, she said, is when the conversation becomes more difficult.
Jefferies, the investment bank, said last week that AI could replace climate change as the “new big thing” for responsible investors.
The coalition’s increased activity comes just two months after Nicolai Tangen, chief executive of Norway’s $1.4tn oil fund, announced that it would establish guidelines on how the 9,000 companies it invests in should use artificial intelligence “ethically”, while demanding more regulation of the rapidly growing sector.
Aviva Investors holds a small stake in Taiwan Semiconductor Manufacturing Company (TSMC), the world’s largest contract chipmaker, which has seen rising demand for advanced AI chips such as those used to train models like ChatGPT.
Alphabet, Microsoft, Samsung Electronics and Tencent Holdings are among the other tech companies in which it holds stakes.
The asset manager also meets with companies in the consumer, media and industrial sectors to press them to commit to retraining employees, rather than dismissing them, where jobs are at risk of being eliminated by AI-related efficiencies.
Jenn-Hui Tan, head of stewardship and sustainable investing at Fidelity, said that concerns about social issues such as “privacy, algorithmic bias, and job security” had given way to “actual existential fears for the future of democracy and humanity”.
Tan said the group had met with companies in these fields to discuss such concerns, and would consider divestment if it felt there was not enough progress.
Legal & General Investment Management, the UK’s largest asset manager, has stewardship codes covering deforestation, arms supply and other issues, and is now working on a similar document on artificial intelligence.
Kieron Boyle, chief executive of the Impact Investing Institute, a UK government-funded think-tank, said an “increasing” number of impact investors were worried about AI’s potential to shrink entry-level jobs for women and minorities across industries, setting workforce diversity back by years.
Richard Gardiner, EU public policy lead at the World Benchmarking Alliance, the Dutch non-profit that launched the Collective Impact Coalition, said the group was pushing tech companies to scrutinise their entire supply chains to avoid ethical and regulatory risk. Investors such as Aviva, he said, may be worried about being held responsible for human rights violations by their investee companies if they do not act.
He added: “If you create a bullet which does nothing when you hold it, but shoots someone else when you place it in their hand, how do you track the usage of the product?” Investors, he said, want to know that standards exist in case they become liable.
As of March, only 44 of the 200 tech companies assessed in a WBA survey had published an ethical framework for artificial intelligence.
The alliance said a few companies showed evidence of good practice: all employees in the Sony group must follow ethics guidelines on AI; Vodafone offers customers a right to redress if they feel they have been treated unfairly by an AI decision; and Deutsche Telekom has a kill switch that can shut down its AI systems at any time.
Regulators have also been pushing for technology and financial companies to be held accountable.
Under the EU’s Corporate Due Diligence Directive, currently being negotiated between member states, the European Parliament and the European Commission, companies such as chipmakers are expected to be required to take into account human rights risks across their value chains.
The OECD has updated its voluntary guidelines for multinationals to state that tech companies must try to avoid harming the environment or society with their products, including those linked to artificial intelligence.