Generative AI and Sustainability Post: Part 1 of ?

By popular demand - the first in a series. Today's topic: Setting the table for a larger discussion around using LLMs/"AI" to address sustainability problems

Quick Announcements

Hi there! Here are a couple of quick announcements before we dig in:

  1. I’ll be in <Book of Mormon voice> Orlando on October 2 at the E-Scrap Conference, running a panel with some great leaders and innovators on Digital Product Passports (DPPs) and how they might help drive sustainable outcomes for electronic devices and much more in the near future (if you’ve never heard of a DPP, I describe it as ‘like a barcode or UPC, but for sustainability-relevant data’). If you’re reading this in your email inbox, hit reply and let me know if you’re interested in receiving a short DPP primer that I’ve written for that conference (or email us at sustainabilityatthefrontier<at>gmail<dot>com). Also - say hi if you’re at the conference!

  2. I’ll be in New Haven the first week of November and am delighted to be delivering two lectures, one to Prof. Yuan Yao’s Industrial Ecology class on Next-Generation Corporate Sustainability Action, and one to my mentor and doctoral committee advisor Prof. Dan Esty’s Net Zero Pathways course on Emerging Data and AI Frameworks Driving Deep Decarbonization. Reply to this email if you’re in New Haven or NYC and want to get together.

A big Thank You to those of you who voted in the poll in our last newsletter - the feedback was unanimous in favor of a post on GenAI. There’s so much to cover that we’ll only scratch the surface in this initial post; I expect at least a couple of follow-ons in the coming weeks and months. Enjoy!

If You Read Nothing Else in this Post:

After more than two years of using various generative AI (genAI) applications as a small part of the flow of my work, I argue that a fairly narrow range of genAI applications can be useful for sustainability professionals. Sustainability teams at corporations and investment funds - the people charged with developing right-sized sustainability strategies and then executing the projects that deliver on them - are often under-enabled, under-funded, and short-staffed, so I believe it’s in the interest of most sustainability professionals to reach some minimum level of competence in using genAI in the flow of their work. I’ve found that an initial on-ramp of about 15 hours of directed, role-specific practice with a leading genAI tool, coupled with following heavy genAI users for inspiration and awareness of new capabilities, can produce a real efficiency gain and unlock some compelling uses whose benefits, in my view, outweigh the drawbacks. This is a “table-setting” post where I lay out some of the issues and challenges around genAI, share some perspectives I’ve gleaned as a moderate-to-heavy user over the past couple of years, and tee up future posts by pointing to use cases that are well-suited to genAI’s current capabilities.

What this Post Is Not (Yet)

Discussions around large language models (LLMs) - the models underpinning popular genAI tools - are fraught with tension, and with good reason. There are serious ethical concerns (e.g., intellectual property scraped from the internet without creators’ consent to train LLMs), environmental concerns (we will pick apart and clarify what’s known here in a future post, but briefly, the issues center on water and energy use by the data centers where LLMs are trained and run), financial concerns (e.g., whether the growth-at-all-costs ethos of large technology companies is economically sustainable), and practical usage concerns (e.g., are genAI tools even good enough to be helpful?).

I’m not an expert ethicist, so I don’t think I’m in a great position to comment deeply on that topic within these pages. Admittedly, I go through waves, and using genAI tools feels ickier at some times than others (e.g., the “ick factor” was high recently when LinkedIn announced that, by default, everyone’s data would be used to train AI models unless a user navigates several screens to opt out). Where I’ve landed on using genAI looks something like this:

“Past and ongoing actions by tech companies putting out genAI tools are pretty shady and might be damaging to a large number of people and organizations, but if I continue believing my limited use and application of the technology to accelerate good sustainability- and climate-focused work is a net positive, I’ll continue using these tools. I reserve the right to update this based on new information that emerges about genAI’s environmental or societal or financial effects.”

-Jon’s stance on AI and genAI as of Oct 1, 2024 (subject to change!)

Similarly, I won’t dive into the myriad financial concerns here, but for those interested I recommend Ed Zitron’s newsletter, which goes into painful detail about the ways in which technology companies are doing things wrong and may be creating some dire consequences for themselves and the broader economy as they chase growth and returns associated with AI and genAI. I’ll link to his newsletter at the end of this post.

Finally, this post will not touch on the environmental concerns surrounding the current and potentially increased use and training of LLMs. I’ve gathered and pored over a ton of papers that have come out over the past couple of years and will say that (i) the target is moving a lot because of the evolution of various LLMs, and (ii) it’s still tricky to pin down a simple answer to “so how bad is it for the environment if I use a genAI tool?” because there’s still a lot of opacity around the specific effects of, say, a single prompt to generate an image. I’ll warn you - the story is complicated, but in the spirit of what we try to do in this newsletter, I’ll do my best to demystify and boil down what we know into some valuable chunks of information - in a later post.

Why Should We Trust You on GenAI, Jon?

As a space getting a ton of time, attention, and investment, AI and genAI attract an accompanying amount of grift, just like other supposedly revolutionary technology platforms or applications. It was only a few years ago that non-fungible tokens (NFTs) were a thing (one of my favorite grift stories around this is linked at the end of this post)! I’ll admit it - it’s tough to separate the wheat from the chaff regarding genAI and its capabilities, and I hear the same sentiment from sustainability and other professionals I frequently speak with on the topic. Will it materially transform the way we all do work now and forever? Is it an incredibly hollow and useless technology that amounts to nothing more than a parlor trick? Would the money poured into developing and scaling the tech have been better off set ablaze?

I fall somewhere in the middle of the Useless-to-Transformational scale of opinion around genAI. For me and the kind of work that I do, I think my opinion on the topic is valid because I’ve put in the hands-on-keyboard work to see for myself the handful of ways genAI tools are helpful (and the far larger number of ways they are not applicable or useful). I was selected for GPT-3 access sometime in early 2022, before its public release, and I’ve probably spent an average of 1-2 hours weekly since then on deliberate practice with various genAI tools. I’ve used each of the major models, and the ones I subscribe to at a given time change as the tools evolve (over the years, I’ve been a paid subscriber to OpenAI’s models, Anthropic’s Claude, Google’s Gemini, and Perplexity’s tool). Keep in mind that I am not an unbiased messenger on this matter: I’ve had the opportunity to give talks to thousands of leaders on the promise and peril of genAI in recent years, and with my new firm I advise corporates on how to right-size their use of and investment in technology, including genAI, and will continue to do so. But those who know me can attest that (perhaps to a fault) I weigh multiple sides of an issue and am strongly willing to update my thinking when I get new information.

Hopefully this backstory gives you confidence that the remainder of this post - and future related posts - is well-informed, grounded in my experience using these tools and in a lot of reading, speaking, and listening to other professionals in the space. I’ll present a few cases where genAI has been uniquely helpful for me, along with a brief narrative or example illustrating each concept. I’m planning a future post that focuses more on ‘the how’ and brings additional examples to life.

GenAI for Sustainability: Brief Discussion of Use Cases, with Details Coming in Future Post(s)

As promised up front, I’m calling this a table-setting post. I’m already at 2,000 words! In this section, I’ll provide example use cases where I’ve found genAI to be uniquely useful and a helpful complement to my normal workflow.

  1. Large-document research and review. There are many examples, but a classic one is reviewing new legislation to understand if and how it applies to your company. New regulations can be hundreds of pages long, and they are often accompanied by large sets of supporting documents (e.g., background on the legislation, summaries of meetings with stakeholders, etc.). Trying to glean insights from thousands of pages of documents is daunting for even the most skilled researchers. I’ve found that a good prompting framework - in particular, articulating your critical questions in advance - can materially reduce the time it takes to get up to speed on a new topic, especially detail-oriented material like new regulations and legislation (a minimal sketch of this question-first framing appears after this list). It varies by topic, but my level of effort is probably reduced by 20-30% in some (but not all) cases when I’m diving into a new sustainability-related topic and need to research core documents to inform the questions I ask or my overall understanding.

  2. Conversation and negotiation scenarios. The voice-conversation capability (I’ve used ChatGPT’s so far) offers some unique and under-discussed benefits of genAI. Imagine you’re applying for a new corporate sustainability job: you could upload your resume and the position description to ChatGPT, prompt it in the voice-conversation modality to play the hiring manager, and have a mock interview at the drop of a hat. Alternatively, say you’re entering a negotiation (with your manager to discuss a raise or promotion, with a client over contract terms, etc.) - the voice conversation can be given a persona (e.g., “you’re the world’s toughest negotiator”) and you can hold a back-and-forth conversation to refine your pitch. It’s a compelling way to work out talking points or to practice explaining something simply. To me this is largely a “net new” capability unlocked by genAI - most of us don’t have the luxury of dialing up an expert in <name your topic> who will give us undivided attention and play a specific role just to help us brainstorm or practice. There’s an odd feeling, too, when you “beat” the genAI in a negotiation. One of my favorite variants of this use case involves preparing for a conference presentation: I assign ChatGPT the role of a formidable graduate school adviser, and it proceeds to relentlessly question the accuracy and importance of my topic, requiring me to think on my feet and examine the topic from multiple angles. I’ve had plenty of (human) sparring partners like this in the past, but there’s something super convenient about getting your practice in at any time.

  3. Creating topic-specific ‘podcasts’. Google has a new-ish tool called NotebookLM which, along with summarization and other features common to leading genAI tools, has a unique feature that lets you quickly and automatically create a podcast-style audio file on topics of your choice, informed by documents you upload. Let’s say you want to understand more about a start-up’s technology, and you have access to a white paper they wrote, the company’s pitch deck, and a couple of media articles on the company. Upload these documents to NotebookLM and it will almost instantly create a podcast-style audio track featuring two people speaking back and forth about their contents. The feature is a little eerie, but I can vouch for how convenient the format is, and you can see it being especially useful for people who learn best by listening.
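To make the first use case more concrete, here is a minimal sketch - not my exact workflow - of what ‘articulating your critical questions in advance’ can look like when scripted against an OpenAI-compatible chat endpoint from R (my preferred language). The model name, input file, and questions are illustrative placeholders; it assumes an API key is stored in the OPENAI_API_KEY environment variable and that the httr2 package is installed.

```r
library(httr2)

# Write down your critical questions before you touch the model.
critical_questions <- c(
  "Does this regulation apply to companies of our size and sector?",
  "What disclosures are required, and on what timeline?",
  "Which sections mention penalties or enforcement mechanisms?"
)

# A plain-text excerpt of the regulation; a real workflow would chunk a long PDF.
reg_text <- paste(readLines("regulation_excerpt.txt"), collapse = "\n")

prompt <- paste0(
  "You are assisting a corporate sustainability analyst. ",
  "Answer each question using only the regulation text below, citing section numbers. ",
  "If the text does not answer a question, say so explicitly.\n\n",
  "QUESTIONS:\n", paste("-", critical_questions, collapse = "\n"),
  "\n\nREGULATION TEXT:\n", reg_text
)

resp <- request("https://api.openai.com/v1/chat/completions") |>
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
  req_body_json(list(
    model = "gpt-4o-mini",  # illustrative; swap in whichever model you subscribe to
    messages = list(list(role = "user", content = prompt))
  )) |>
  req_perform()

cat(resp_body_json(resp)$choices[[1]]$message$content)
```

The same question-first framing works just as well pasted into a chat interface; the point is deciding what you need to learn before handing the model a thousand pages.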

A Related Note on Data Analysis and Predictive Analytics

I will probably explore these areas in a separate post, but I want to provide some commentary on the applications of genAI for data analysis and the subtopic of predictive analytics.

On data analysis - in brief, I perform moderate to complex data analysis as part of the projects and advisory work I deliver through my company, and data-driven work has been a key part of my professional “secret sauce” for a long time. At this point, I only trust genAI tools to do a limited set of tasks for me, mainly getting the ball rolling on a coding task (R is my preferred language); a brief illustrative sketch of what I mean follows below. I’ve thrown simple to moderate tasks at genAI and frequently get erroneous, frustrating, or just plain weird responses. Keep in mind that I’ve mainly used genAI tools this way as an individual user of the Pro (i.e., paid) versions of the leading models - in other words, best-in-class tools - and back when I worked at Salesforce and had access to enterprise tools, the results were not any more trustworthy. Research scientists deep in AI and machine learning have reported similar benefits and cautions when using genAI for data work.
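To illustrate what ‘getting the ball rolling’ means in practice, here is the kind of starter scaffolding I might ask a genAI tool to draft in R. The input file, column names, and checks are hypothetical (this is not actual client code); the value is that the model produces a serviceable skeleton in seconds, and the obligation is that I verify every line before it informs real analysis.

```r
library(readr)
library(dplyr)

# Hypothetical input: one row per facility per month of utility data.
facility_energy <- read_csv("facility_energy.csv")

# Roll monthly records up to annual totals per facility.
annual_summary <- facility_energy |>
  group_by(facility_id, year) |>
  summarise(
    total_kwh    = sum(electricity_kwh, na.rm = TRUE),
    total_therms = sum(natural_gas_therms, na.rm = TRUE),
    .groups = "drop"
  ) |>
  arrange(facility_id, year)

# Checks I run by hand on any genAI-drafted code before trusting it:
# row counts, missing values, and unit assumptions.
print(dim(annual_summary))
print(summarise(facility_energy,
                missing_kwh    = sum(is.na(electricity_kwh)),
                missing_therms = sum(is.na(natural_gas_therms))))
```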

As for predictive analytics - I do not have much faith in an AI or genAI tool to do predictive analytics any better or faster than I could using standard methods and professional judgment. A recently published book by a couple of prominent academic figures in AI at Princeton essentially concludes that “predictive AI doesn’t work, fundamentally, any better than flipping a coin, and we don’t see how it could substantially improve in the future.” Pretty damning. As with other developments around genAI, I’ll keep a “watching and learning” approach and update my thinking based on hype-free evidence and experience.

Related GenAI Links of Possible Interest

Below are links I’ve referenced in this post. The first two are good bookends for people who want to explore the pessimist and optimist/utilitarian views on genAI.

Do you know anyone who may enjoy this post? If viewing this in your email client, share this post with a friend by clicking below!

If you’re viewing this online, simply copy this link and email or post it to those who may enjoy the newsletter. Thank you so much for reading Sustainability at the Frontier. We’ll see you next time.

🧑‍🏭  Should We Work Together? — My new company, Apex Catalytic, advises leading corporates and impact investment funds on a low-friction retainer or project basis. Let my unique sustainability-driving experiences as an engineer, software leader, impact investor, and educator help you and your team move farther, faster. [Click to See if We May be a Fit]

📣 We’d Love to Hear from You! — If you view this in your email app, reply to send us your questions, comments, or feedback - we’d love to hear from you. If you’re viewing this online, reach out at sustainabilityatthefrontier <at> gmail <dot> com, or connect with us on LinkedIn.