ESG investors who bet big on tech are gripped by an ‘AI blowback’ anxiety

Fund managers who focus on environmental, social, and governance (ESG) investing have turned to big technology companies as a way to make low-carbon investments with high returns. Now, however, they are growing worried about the sector’s use of artificial intelligence.
AI exposure is now a “short-term risk” for investors, according to Marcel Stotzel, a London-based portfolio manager at Fidelity.

Stotzel says he is “worried” about an AI blowback, which he defines as an unexpected incident that triggers a major market decline. It takes only one such incident, he said, to have a material impact.

Stotzel points to AI-equipped fighter jets with the ability to self-learn. He said Fidelity has been talking with the companies developing these technologies about safety features, such as a kill switch.

After embracing tech in a major way, the ESG investing sector may be more vulnerable to these risks than others. Funds with an explicit environmental, social, and governance goal hold more assets in the tech sector than in any other. The world’s largest ESG exchange-traded fund is dominated by the tech sector, led by Apple Inc., Microsoft Corp., Amazon.com Inc. and Nvidia Corp.

These companies are at the forefront of developing AI. Recently, tensions over the pace and direction of the industry’s development erupted into public view: OpenAI, which launched ChatGPT a year earlier, fired its chief executive, Sam Altman, then quickly rehired him, causing a frenzy.

OpenAI’s ambitions were a source of internal disagreement, primarily over the technology’s potential risks to society. Altman’s reinstatement puts the company back on track with his growth plans, including faster commercialization.

Microsoft, Amazon and Meta Platforms Inc. have all agreed to implement voluntary safeguards intended to reduce abuse of, and bias in, AI.

Stotzel says he is less concerned about the risks posed by small AI startups than those posed by the tech giants. The biggest companies, he said, are likely to cause the most harm.

Other investors share these concerns. The New York City Employees’ Retirement System is “actively” monitoring how its portfolio companies use AI, according to a spokesperson for the $248 billion plan. Generation Investment Management, the firm co-founded by former US vice president Al Gore, told clients it is stepping up its research into generative AI and speaks daily with the companies in which it invests about the risks, as well as the opportunities, the technology presents.

Norway’s $1.4 trillion sovereign wealth fund has told boards and companies that AI poses “severe risks and unknowns”.

Analysts at UBS Group AG estimate that when OpenAI launched ChatGPT in November last year, it became the fastest-growing internet application ever, with 13 million people using it daily by January. Against this backdrop, the share prices of tech giants developing or supporting similar technology have soared in 2023.

Crystal Geng, an ESG analyst at BNP Paribas Asset Management in Hong Kong, says the lack of regulation, and of meaningful historical data on how AI assets may perform over time, should give investors pause.

“We do not have the tools or methodologies to quantify the risk,” she said. BNP Paribas asks portfolio companies to estimate how many jobs may be lost to the rise of technologies such as ChatGPT. “I’ve not seen a single company that could give me an accurate number,” Geng said.

Jonas Kron, chief advocacy officer at Boston-based Trillium Asset Management, helped push Apple and Meta’s Facebook to include privacy in their board charters. Kron has been pressing tech companies to better explain their use of AI. Earlier this year, Trillium filed a shareholder proposal with Alphabet, the parent company of Google, asking for more information about its AI algorithms.

Kron stated that AI poses a governance risk to investors. He also noted that even OpenAI’s Altman has encouraged lawmakers to impose regulations.

AI, if left unchecked, could reinforce discrimination, especially in health care, and has the potential to amplify racial, gender and age biases. It also poses a threat to privacy by allowing personal data to be misused.

According to one database that tracks misuse of the technology, AI incidents and controversies have increased 26-fold since 2012.

Investors have submitted resolutions calling for greater transparency around AI algorithms. The AFL-CIO Equity Index Fund, which oversees $12 billion in union pensions, has asked companies including Netflix Inc. and Walt Disney Co. whether they have policies to protect workers, consumers, and the public from AI risks.
Carin Zelenko, director of capital strategies at the AFL-CIO in Washington, said the concerns include discrimination and bias against employees, misinformation during elections, and mass layoffs due to automation. She added that Hollywood writers’ and actors’ worries about AI contributed to their high-profile strikes this year.

She said that the experience “just heightened awareness” of how important this issue was in every business.

Post Disclaimer

The following content has been published by Stockmark.IT. All information utilised in the creation of this communication has been gathered from publicly available sources that we consider reliable. Nevertheless, we cannot guarantee the accuracy or completeness of this communication.

This communication is intended solely for informational purposes and should not be construed as an offer, recommendation, solicitation, inducement, or invitation by or on behalf of the Company or any affiliates to engage in any investment activities. The opinions and views expressed by the authors are their own and do not necessarily reflect those of the Company, its affiliates, or any other third party.

The services and products mentioned in this communication may not be suitable for all recipients. By continuing to read this website and its content, you agree to the terms of this disclaimer.