- Sustainability at the Frontier by Jon Powell
Four Principles for Interrogating New Climate Science Research
Preamble
Hi there! Summer is almost over here in the US, and I plan to resume posting roughly biweekly. My posting hiatus was partly influenced by ideas popularized by Prof. Cal Newport, which I’ll link to at the end of this post. Highly recommended. Also - at the end of this post, I’ve put in a poll on whether I should post on practical uses of Generative AI and Sustainability - I’d love it if you weighed in! Today, we have a pretty meta post. Normally, we dive into new research and put that work in a broader, more practical context for our readers, who mainly consist of investors, sustainability leaders and practitioners, and researchers. Realizing that I take for granted the tips and tricks I use to sift through a lot of research in my day-to-day work, I thought it would be helpful to convey some principles I’ve found useful in today’s post. Enjoy!
If You Read Nothing Else in this Post:
Academic research is critical to understanding various aspects of our changing climate and the related technical, economic, and policy dimensions of addressing or adapting to climate change. However, professionals in important practitioner roles (e.g., corporate sustainability leaders, investors, and policymakers) often lack training to effectively assess the quality or implications of new scientific research. Here, I’m presenting four principles, informed by my 20+ years as an academic researcher, investor, sustainability practitioner, and invited reviewer for several prominent scientific journals, that can hopefully serve as a helpful guide to a range of our readers (note that these principles are helpful even if you just want a more useful way to review and understand new scientific developments).
The four principles are: (1) Always source and review the original work, (2) Evaluate journal quality, (3) Align the paper’s methods and/or results to your work or interest, and (4) Confirm the scope of the research and its conclusions. Those who adopt these principles could help to achieve a positive, two-sided effect: (i) raising the profile of important, high-impact climate research and (ii) enhancing the implementation of important scientific research, bringing more rigorous scientific grounding across a range of practical sustainability-related contexts in the real world.
Principle 1: Always Source and Review the Original Work
The odds are that you will become aware of new sustainability-related academic research (or any academic research, for that matter) through traditional media or social media outlets. This is fine, as it would be hard for even the most dedicated of us to rigorously track publications from several individual research journals. Some estimates suggest there are more than 30,000 scientific journals; multiply that by the number of original articles in each one, and the volume of published research quickly gets out of hand.
Why is it important to review the original work rather than relying on the headline and short article that summarizes what the new research actually says? Suppose your goal is to deeply understand and perhaps incorporate findings from the latest research into your work, or to update your way of thinking, or any of the other reasons you might be interested in new scientific research. In that case, I’d suggest one or more of the following factors are good reasons to seek out the original source:
Media coverage of the work likely won’t be done by someone with a deep science background. Thus, it’s likely that key points - and even the study’s central conclusion - may get lost in translation. Being a scientist isn’t a prerequisite for creating good-quality media coverage of research, but it certainly helps (I’ll give an example below).
There may be space constraints preventing the writer from digging into details or highlighting essential nuances of the scientific work. While it may take 1,000 words to do a new, complicated piece of scientific work justice, the writer may only have a 500-word allotment, so it’s easy to see how essential bits can get cut.
Writing about new research can sound more exciting (and get more clicks) if you conveniently leave out important context. Have you ever followed the excellent Twitter account called @justsayinmice? This account links to media coverage of new scientific studies that all conveniently leave out an important point: that the experiments were done on mice, not people. Example: “Exercise during pregnancy protects children from obesity, study finds.” Yep, the study in question was done on mice, not people. Context matters!
Story Time: When the findings and conclusions from a research paper I wrote got twisted and butchered by numerous media outlets. In 2015 I was fortunate enough to have a research paper published in a leading scientific journal, which got a lot of media coverage. I even hung out with Ira Flatow on NPR’s Science Friday. It was an incredible experience that I continue to appreciate to this day. But the downside to the story is that nearly every media outlet that covered the paper got fundamental facts completely wrong and, in many cases, put in conclusions or observations that were not mentioned (or even implied) in my work. Even the journal itself deeply mischaracterized the findings when it highlighted my paper in the monthly hardcopy version. My story is not an anomaly, and most of the papers that I dig into in this newsletter, I’ve found, likewise have press coverage that includes errors and pretty serious mischaracterizations. Thus, to reinforce Principle 1: Go to the source to avoid being misled or even absorbing information about the work that is incorrect.
OK, so what’s the best way to get a copy of the actual research work? To start, if you are made aware of interesting new research through a media outlet, seek out a link that takes you to the journal's website where the work was published. Unfortunately, such links are not always included in media coverage, and in those cases, you can probably Google around to find the source paper.
When you arrive at the journal’s website, you’ll either find that the research paper is Open Access (i.e., free to download for anyone, so you simply download at that time) or it is behind a paywall and it will have a price tag of $30+. Seriously. Universities pay large sums to academic publishers, so if you’re a student on a University network, you can likely access the article with no issues because the University pays for the access. However, far more people are not at a university, so how can we easily (and freely) access the article that’s behind a paywall? It’s easy.
When you’re on the webpage for the research paper, you’ll usually see something like this:

Getting a copy of a new research article is as simple as sending an email to the “corresponding author,” which is usually designated by an icon like the one shown here.
Find out who the corresponding author is and write a quick email, something like “Hi <esteemed researcher> - Could you please send me a PDF of the full article you recently published <link to the paper’s webpage>? I’m looking forward to reading it, and congratulations on the publication!”. Researchers love it when people are interested in their work, and I appreciated it even more when non-academics would reach out to me.
Principle 2: Evaluate Journal Quality
Not all scientific research is equally good or impactful. The fact that a “study shows” a finding or a result doesn’t signal importance (or even truth). When reviewing new research, a helpful data point is the quality of the journal where the work was published. There are a few ways to quickly gauge journal quality - none perfect, but they are suitable for orienting yourself to factors like the timeliness, impact, and/or importance of new research.
Method to Understand Journal Quality #1: Look up the journal’s metrics on a ranking aggregator like Scimago. This site is a good way to understand traditional academic metrics like citation counts - an imperfect indicator of a research paper’s impact that tallies how many other papers have cited it. Journal rankings like this are imperfect and can be a function of several factors (not only objective quality but also the scientific discipline covered by the journal, the number of scientists working in that discipline, how many papers the journal publishes in a given time period, and more).

Websites like Scimago provide an easy-to-navigate database of relevant metrics. Examples from the above include H-Index (i.e., the journal has x papers that have each been cited at least x times - so, in the image above, the journal Ca-A Cancer Journal for Clinicians has 211 papers that have each been cited at least 211 times), Total Cites (the total number of other research papers that have cited work published in that journal), and some other averages.
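To make the H-Index definition concrete, here’s a minimal sketch of how the metric is computed. The citation counts below are made up for illustration - they aren’t real journal data:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    citations = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(citations, start=1):
        if count >= rank:
            h = rank  # this paper (and every one ranked above it) has >= rank citations
        else:
            break
    return h

# Hypothetical journal with five papers cited 10, 8, 5, 4, and 3 times:
print(h_index([10, 8, 5, 4, 3]))  # 4 -> four papers have at least 4 citations each
```

The same calculation applies whether you’re ranking a journal, a researcher, or a whole institution - it’s just a way of balancing productivity (number of papers) against impact (citations per paper).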
Method to Understand Journal Quality #2: Google “<name of the academic journal> impact factor”. It's another imperfect measure, but if the impact factor is greater than 5, then it’s probably a decent journal.
Assessing journal quality is an inexact science, and the rankings and data points you get from the above two methods are imperfect. But it is essential to at least glance at these figures because, in general, it’s far harder to publish a paper in a leading journal than in a less reputable one. That’s the case for a few reasons. Typically, great journals have editors and reviewers who are leaders in their academic discipline, and with that comes a higher bar for assessing the importance of new research work and a higher bar for “novelty,” a squishy but critical element of publishing any new research. When a research team submits an article to a journal, the first decision lies with an editor, who can reject the paper outright or allow it to move to the next stage, where it is peer reviewed by other academics and experts in the topic covered by the new research. The peer reviewers then evaluate the paper and provide a rating, and the paper either moves to publication or is rejected after that process plays out. Generally, the better journals are more selective, while weaker journals have higher acceptance rates (here’s a study by academic publisher Elsevier showing that the average acceptance rate of submitted papers across 2,300 academic journals was 32%, with a staggering range of 1% to 93%).
All of this to say - the goal of doing a quick evaluation of journal quality should allow you to have in the back of your mind as you read the work, “Hm, these authors went through the gauntlet to get this work published; it’s probably an important advancement” (in the case of work published in a better journal), or the converse “Hm, I can’t find any ratings on this journal, or its impact factor is low…this work probably got published without much academic scrutiny, so I should be cautious of trusting the work or its overall scientific importance.”
Principle 3: Actively Read to Align the Paper to Your Work or Interest
This is another way of saying: read the paper actively, paying attention to its methods, results, and conclusions. If you intend to read a new scientific paper to keep up with the latest advances or to advance specific work in your job, this principle can ensure you come away with something tangible or actionable. A nice article here gives some specific suggestions on actively reading scientific articles, but it boils down to reading the paper in a very self-centered way and paying attention to details in the study’s methods (e.g., to see whether a technique, algorithm, software program, or experimental design could be adapted for a problem you’re working on) or results (e.g., whether data presented in one of the main results can bolster a technical or business case you’re making internally for a sustainability initiative).
Principle 4: Confirm the Scope of the Research and its Conclusions
This principle goes hand in hand with Principle 3 described above. New research normally has a defined scope, and it should be spelled out clearly in the paper. Reviewing and understanding the specific scope and context of the authors' results and conclusions is an important but sometimes overlooked aspect of scientific research.
Anyone who’s taken introductory algebra should know about the dangers of extrapolating too far from actual data. If we have limited observations or data points, conclusions should only be confined to what the data say. We get into trouble if we extrapolate too far from a limited data set. I’ll illustrate my point using the story of the Chicago Cubs’ own Tuffy Rhodes from 1994:

Finally, my opportunity to shoehorn the story of Tuffy Rhodes into this newsletter has arrived. GIF courtesy MLB’s YouTube page.
Tuffy hit three home runs on Major League Baseball’s Opening Day in 1994. There are 162 games in a Major League Baseball season, not counting the playoffs. Would it be appropriate to extrapolate Tuffy’s result to say he’d hit 3 × 162 = 486 home runs that season? Of course not (he ended the 1994 season with eight total home runs if you’re curious).
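The back-of-the-envelope extrapolation above - a deliberately naive projection, not a real forecasting method - looks like this in code:

```python
# Naive linear extrapolation from a single Opening Day data point
games_observed = 1
home_runs_observed = 3
season_games = 162

projected = (home_runs_observed / games_observed) * season_games
print(projected)  # 486.0 - wildly off from Tuffy's actual 8 home runs
```

One data point tells you almost nothing about the underlying rate, which is exactly why a paper’s conclusions should stay confined to the range of conditions its data actually cover.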
So, putting this principle in practical terms - when reviewing a research paper, keep an eye out for the context and the specific language the authors use to characterize their work, which is usually found in the Methods section and the Conclusions (or Implications) section of the paper. If you’re using the active reading approach described earlier, noting the scope and limitations of the work will be important when you incorporate or otherwise carry anything from this research work into your own.
Concluding Remarks
I’ve presented four easy-to-follow principles that you can use to improve the way you review, absorb, and incorporate new scientific findings into your work or everyday life: (1) Always Source and Review the Original Work, (2) Evaluate Journal Quality, (3) Actively Read to Align the Paper to Your Work or Interest, and (4) Confirm the Scope of the Research and its Conclusions. I suggest you try implementing these principles the next time an article about new scientific research catches your eye, and send me an email to let me know if this approach was helpful to you.
Addendum: Link of Possible Interest
I mentioned at the beginning of this post that I intentionally slowed the pace of posting here this summer to prioritize other projects and time off, actions informed by ideas popularized by professor and author Cal Newport. Specifically, his new book Slow Productivity discusses a few principles that resonated with me, namely “Work at a Natural Pace,” “Do Fewer Things,” and “Obsess Over Quality.” Naturally, there are pros and cons to principles and frameworks like this. Still, I suggest checking this book out if you’re like me and trying to build several ambitious things while reserving enough free time to pursue enriching “non-work” things.
Do you know anyone who may enjoy this post? If viewing this in your email client, share this post with a friend by clicking below!
If you’re viewing this online, simply copy this link and email or post it to those who may enjoy the newsletter. Thank you so much for reading Sustainability at the Frontier. We’ll see you next time.
🧑🏭 Should We Work Together? — My new company, Apex Catalytic, advises leading corporates and impact investment funds on a low-friction retainer or project basis. Let my unique sustainability-driving experiences as an engineer, software leader, impact investor, and educator help you and your team move farther, faster. [Click to See if We May be a Fit]
📣 We’d Love to Hear from You! — If you view this in your email app, reply to send us your questions, comments, or feedback - we’d love to hear from you. If you’re viewing this online, reach out at sustainabilityatthefrontier <at> gmail <dot> com, or connect with us on LinkedIn.