ESG meets AI: can machines help to tell your sustainability story?

Make no mistake. Artificial intelligence (AI) is not a silver bullet – least of all for impactful corporate communications.

That said, it can assist in turning complex, often dry, financial information into a more compact and compelling message. At a time when ESG comms have never been more of a tightrope walk, turning to time-saving tech is an increasingly attractive option.

But given that AI also brings ethical dilemmas, from bias and inaccuracies to energy usage, how can we manage the risks? And what is the best way to reap the benefits, without falling foul of AI’s dark side?

Let’s explore…

Rationale are experts in brand and content strategy and ESG storytelling for businesses operating in highly regulated industries, with complex multi-audience messages to deliver. We can help take away the fear, so you can have confidence in speaking out about your ESG activity. It’s the frontier of competitive advantage for those who get it right.

Bite-sized ESG messaging

AI is already changing the way financial institutions (FIs) communicate their ESG goals and achievements. Some banks and asset managers are using large language models (LLMs) – such as ChatGPT and Claude – to supercharge their sustainability storytelling, and they're doing it as you read this.

In fact, AI’s ability to rapidly summarise complex information is particularly powerful for FIs, given the growing regulatory and stakeholder demand for transparency and actionable insights.

Think about it: what used to be a 150-page ESG impact report filled with data tables and technical jargon can now be transformed (quite literally at the click of a button) into bite-sized, easy-to-understand insights, thanks to LLMs. And every marketer knows that shorter, more focused content formats are increasingly popular among audiences who want valuable information quickly.

Instead of overwhelming stakeholders with a mountain of dense information, AI can highlight the most relevant facts and illustrate real-world outcomes of sustainability efforts in an instant.

Balancing authenticity with AI efficiency

But as AI breathes new life into ESG storytelling, it’s essential to look beyond its (undeniable) convenience and preserve the human touch at every step. This is the cardinal rule.

AI-driven communication can fall flat if it feels robotic or disconnected. A recent study found that over half of consumers (52%) disengage if they suspect content is AI-generated.

So, regardless of time constraints on your team, using LLMs to generate ESG communications shouldn’t mean replacing human insight – it should mean enhancing it. Stakeholders respond to authenticity. They need to believe in the ethical backbone of your organisation, and no algorithm can convey that on its own.

The trick is blending AI’s high-tech benefits with the depth of human insights and expertise. Here are five common missteps to avoid along the way:

AI pitfalls to watch out for

1. Historical stereotypes: beating the bias

AI models learn from historical data, and if that data contains biases (research suggests around 40% of AI models do), AI will replicate them. This can be especially problematic when communicating ESG topics, where fairness, diversity, equity, and inclusion (DEI) are paramount.

Having a human editor review all AI-generated content for inclusivity and fairness is therefore non-negotiable.

2. Dream on: managing AI hallucinations

When dealing with ESG data, credibility is everything. But LLMs have a big downside: they can generate false or fabricated information, and sound very credible in the process. Often, they will also misread documents and pull out incorrect statistics.

These types of mistakes could severely damage your organisation’s reputation and trustworthiness. So, AI-generated ‘facts’ should never be taken at face value. Comms teams using AI must verify all data before publication. This includes cross-referencing AI-generated claims with trusted sources. 

3. Personality vacuum: deviations from tone and style 

Even the most sophisticated LLMs can produce sterile content, totally devoid of brand personality. When dealing with sensitive or emotionally charged ESG topics, this clinical approach can really jar, and ultimately deter the audience.

Investing some time to train your LLM, or create a custom GPT, can help, as can feeding in comprehensive style and tone of voice guidelines. But the best advice is only to use AI-generated content as a draft or starting point, never the final content asset! Always have your team, or agency, refine AI outputs to ensure they embody your brand’s personality and values. 

4. Not green or clean: energy consumption 

It might not seem like a priority for comms professionals, but concern is growing around the energy usage of LLMs. Training a model like GPT-3 (there is no publicly available data on the latest version) is estimated to use roughly as much electricity as 130 US homes would consume in a year. So, you’d need to stream around 1,625,000 hours of Netflix to equal the energy required to train GPT-3.
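The comparison above can be sanity-checked with some back-of-envelope arithmetic. The figures below are rough public estimates, not official data: around 1,287 MWh to train GPT-3, roughly 10,600 kWh per year for an average US household, and about 0.8 kWh per hour of video streaming.

```python
# Back-of-envelope check of the GPT-3 energy comparison.
# All three inputs are rough published estimates (assumptions).
GPT3_TRAINING_MWH = 1287        # estimated GPT-3 training energy (MWh)
US_HOME_KWH_PER_YEAR = 10_600   # approx. average US household use (kWh/yr)
STREAMING_KWH_PER_HOUR = 0.8    # rough estimate per hour of streaming

training_kwh = GPT3_TRAINING_MWH * 1000

homes = training_kwh / US_HOME_KWH_PER_YEAR
streaming_hours = training_kwh / STREAMING_KWH_PER_HOUR

print(f"~{homes:.0f} US homes powered for a year")
print(f"~{streaming_hours:,.0f} hours of streaming")
```

With these inputs the maths lands at roughly 120 homes and 1.6 million streaming hours – the same order of magnitude as the figures quoted above, which vary depending on the estimates used.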

And as AI models become more sophisticated, energy usage and carbon emissions are only increasing – which seems counterintuitive when promoting sustainability, especially net zero goals.

To help tackle this issue, comms leaders can encourage the use of AI models designed to minimise energy consumption. Or advocate internally for data centres powered by renewable energy, for example. 

5. Sustainability sensitivities: data privacy concerns

While AI systems thrive on data, handling sensitive sustainability information can create privacy risks. This data might include details about employee demographics, community impact, or supply chain practices – information that could be sensitive if mishandled. 

Prioritise data privacy by anonymising data and following strict governance practices, in collaboration with your IT function. Also be transparent with stakeholders about data usage to reinforce trust and align with ESG values.

Taking a human + AI approach

While AI undoubtedly gives FIs the ability to process and present ESG data more powerfully, the pitfalls speak loud and clear: the human element is irreplaceable. It brings the purpose, vision, and the ethical judgement needed for genuine impact. 

At Rationale, we believe AI should only act as an enabler, making complex ESG stories more accessible. It is not a replacement for thoughtful human connection – which is how trust and credibility are built. 

In other words, the future of ESG communication lies in a partnership between humans and AI, where each enhances the other. But the human element will always be where the true magic lies, not in an algorithm. When it comes to effective storytelling, there is no substitute for originality, heart, and soul.

Additional tips for making the most of AI in ESG communications

1. Set clear ethical guidelines

Put transparent rules in place for AI usage, specifying the types of ESG-related content AI can generate and where human oversight is essential. 

2. Incorporate different perspectives

Involve diverse people in the content review process. Varied viewpoints can help identify and correct biases that might slip through AI algorithms. 

3. Invest in continuous learning

AI technology evolves rapidly. Stay up to date with the latest advancements and ensure your team is trained to use AI tools effectively (including how to ensure data privacy). 

4. Monitor for feedback and adjust

Pay attention to how stakeholders respond to your AI-enhanced communications. If you receive feedback, act swiftly to address any issues and improve future outputs.

Want to see how Rationale can support your ESG communications, with a human-first approach? Read about our four-step framework for fearless sustainability messaging in our recent blog, or drop us a line to learn how we can bring your ESG story to life. We’re here to help!
