Matt Rozsa of Salon.com asked me to comment on ‘addiction as a metaphor for our ecologically unsustainable consumption patterns’ for a story he was writing. Unfortunately, I was too late for his story, so I publish here a lightly edited version of the largely substance-free content I sent to him.
We all have emotional voids that we are trying to fill with consumer products.
We are all constantly bombarded with advertising, much of it telling us that if we want to be held in high regard by others, we need to buy some fancy car or expensive trinkets — if we want to be loved by others, we need to buy more and more costly consumer products.
If we were all constantly bombarded by advertising telling us that we could fill emotional voids and achieve social status by consuming heroin, we would all be heroin addicts by now.
Imagine how many heroin addicts there would be if, as we walked down the street, there were shop after shop with alluring displays of heroin in various forms, advertising 50% off this week only.
The promotion of consumerism is as dangerous at a global level as the promotion of heroin is at an individual level.
It is one thing to be in poverty, meeting real needs (shelter, food, clothing, etc.) with increased consumption.
It is another thing entirely to be living a life of affluence, attempting to get another shot of dopamine through impulse buying.
We are embedded in a world in which we are encouraged at every turn to sink ever deeper into our addiction to consumer products in a futile attempt to fill our emotional emptiness.
But consumer products do improve my life. Some consumer products do bring me real joy. For example, I love my bass guitar and my motorcycle.
But when I look at all the junk in my closets and garage, I see wasteful and unsuccessful efforts to solve problems with consumer products that consumer products cannot solve.
We need to find some way of consuming only that which will bring us real joy, and look within ourselves, and to family, friends and lovers, to fill our emotional voids.
This is an edited transcript of my Carl Sagan Lecture on 10 Dec 2018 at the American Geophysical Union 2018 Fall Meeting. The transcript has been edited, so in some places it represents what I meant to say, rather than what I did say.
Introduction by Ariel Anbar:
I’m Ariel Anbar. I’m the president of the Biogeosciences Section of AGU, which has the honor of organizing the Sagan Lecture this year. The lecture is shared between the Biogeosciences and Planetary Sciences sections: in even-numbered years Biogeosciences hosts it, and in odd-numbered years Planetary Sciences hosts it. And so, on behalf of both sections and the leadership of both sections:
I want to welcome you here. The Sagan Lecture has mostly focused on other worlds, reflecting Carl Sagan’s identity as a planetary scientist and astrobiologist, but Carl Sagan was as passionate about the future of life on this world as he was about the search for life on others. And he saw these as related questions. That’s nowhere better expressed than in his book “Pale Blue Dot: A Vision of the Human Future in Space”. He was inspired by the image of Earth taken by the Voyager 1 spacecraft as it passed beyond the orbit of Neptune, and here’s what he wrote:
The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment the Earth is where we make our stand.
It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known.
And it’s in that spirit that I’m thrilled to introduce Ken Caldeira from the Carnegie Institution for Science to give the 2018 Carl Sagan Lecture.
Many astrobiologists know Ken for a highly influential paper that he wrote with Jim Kasting in Nature back in 1992 titled “The Lifespan of the Biosphere Revisited”. In that paper, Ken and Jim Kasting predicted that we have about a billion years to go before Earth can no longer support a plant-based biosphere.
Since that time Ken has focused attention on the less distant future. He’s become a pioneer studying the environmental consequences of climate change and how we might avoid it. Notably, he was one of the first to point out the challenge of ocean acidification. More recently, he’s grappled with how humans might respond to the climate challenge, delving into energy transitions and even climate engineering.
Throughout, Ken has been an inspiration for his combination of creativity and clear thinking and his willingness to focus on key challenges. For this he became an AGU Fellow in 2010.
But he also became one of the more influential science voices reaching across and beyond the traditional science community. And to me, there’s no better example of that than a blog post in 2016 by Bill Gates, who, as some of you know, reads widely, talks to people widely, and blogs prolifically. Gates wrote a blog post in which he described Ken as “my amazing teacher” on matters of climate and energy. And so today we have the good fortune to welcome and honor Ken as our amazing teacher.
So with that, Ken Caldeira.
Lecture by Ken Caldeira:
Hi. First I’d like to thank Ariel and the Biogeosciences and Astrobiology groups for inviting me to do this lecture. And it’s certainly an honor.
And there’s no image in this talk that’s in any of my other talks, so I rapidly tried to throw something together for this.
I was panicking. I’ve been thinking about this talk ever since Ariel asked me to do it having no idea what in the world I was going to talk about. I even expressed my panic on Twitter and got some suggestions but, anyway, here we go.
But as I was googling around looking for things to talk about, I found a reading list of Carl Sagan’s from 1954, when he was 18 years old. And below this list of outside reading, there were his course readings. This was for a single quarter of the year.
He’s 18 years old and he’s reading “The Immoralist” by Gide. He’s reading Shakespeare’s “Julius Caesar” and a couple of books of Plato. First of all, I don’t want to compare myself to Carl Sagan, but it reminds me a little bit of my reading when I was in high school. And it just speaks to the depth of his interests, and that it’s not just about science; there’s some merging of science and the humanities to be a full human being.
Scientifically, he’s known for a number of things, and I’m sure David Grinspoon and others could expand on this more, but one was the abiotic synthesis of amino acids.
But he had a long career at Cornell as a scientist and obviously like all of us had a personal life in addition to a scientific life.
My first connection, the first time Carl Sagan penetrated my mind, was this book, “Dragons of Eden”.
It’s forty years ago or so that this book was published, and some of my memory of it is forty years old. I didn’t go back and reread it, but what I remember from this book is that he was writing about how we have this lizard brain that’s our basic emotional structure of fears and desires and hunger and so on, and then over this lizard brain we have this neocortex that’s our super-ego or more rational decision-making overlay. Maybe it’s also going back almost to the Freudian id and super-ego, but putting it in evolutionary terms: we have this basal brain and its overlay. And to me this was really remarkable.
I don’t have any first-hand evidence, but I’ve been told that he was able to write these books essentially dictating paragraph after well-formed paragraph and then getting back the notes of what he dictated and just making minor corrections on that. And I don’t know if that story is true but even if it’s partially true there’s obviously a mind that’s able to think coherently about a wide diversity of issues. And so this mind that’s willing to think about astrobiology and so on but also write books about evolution of consciousness is really amazing.
I was testing out some ideas in my department for this talk and have some speculations about evolution of consciousness, but we’ll see if we get to it. My department mates strongly suggested that I not talk about it.
After reading Dragons of Eden, the next thing was Cosmos. It was more or less around 1980, and this galvanized not only me but the entire country to be thrilled about space travel and space exploration. And this was at an important time, because through the 60s there was all of this hullabaloo around landing on the moon, and then by 1980 there was low interest in space travel. Carl Sagan almost single-handedly generated enthusiasm for space exploration among broad swaths of the population and, in general, a broader curiosity and quest for knowledge.
I remember also at that time there were two quotes from Carl Sagan that stuck with me, quotes that I didn’t even realize he was the one who said. I’ve said these things to other people not knowing who said them originally.
And this one I like because I’m always feeling like I have a gut feeling of what’s right or wrong, and then there’s this famous quote from Carl Sagan:
“But I try not to think with my gut. If I’m serious about understanding the world, thinking with anything besides my brain, as tempting as that might be, is likely to get me into trouble.” — Carl Sagan
This is certainly true, and it would probably be good if some of our political leaders would adopt this thought process.
The other quote that I didn’t realize is attributed to Carl Sagan is:
“Extraordinary claims require extraordinary evidence.” — Carl Sagan
which is something I frequently say. In fact, I said this in a review I did recently without knowing it was a Carl Sagan quote.
And of course no brilliant comment like this springs from nowhere. Any time you say anything, other people have said something similar earlier, so there are earlier claims to this type of quote. One early version is from Laplace, who, in a less pithy way, said basically the same thing.
[“The weight of evidence for an extraordinary claim must be proportioned to its strangeness.” –Laplace. Hume put it similarly: “In our reasonings concerning matter of fact, there are all imaginable degrees of assurance, from the highest certainty to the lowest species of moral evidence. A wise man, therefore, proportions his belief to the evidence.” –Hume]
One important thing that I alluded to is that Carl Sagan wasn’t only interested in astronomy and astrobiology, but also in how well people were living here on Earth. And he was a believer in the power of curiosity, the power of knowledge, and the power of science. There’s this quote:
“Science is the golden road out of poverty and backwardness for emerging nations. The corollary, one that the United States sometimes fails to grasp, is that abandoning science is the road back into poverty and backwardness.” — Carl Sagan
This statement has perhaps greater resonance today than it did when it was first uttered. This confidence that it was through science and technology and understanding that we were going to solve our problems is an important message for today.
Again, this seems to apply more to our current political leadership than it did back when Carl was around:
“Widespread intellectual and moral docility may be convenient for leaders in the short term, but it is suicidal for nations in the long term.
One of the criteria for national leadership should therefore be a talent for understanding, encouraging, and making constructive use of vigorous criticism.” — Carl Sagan
Certainly our current political leadership is far from this.
Another thing, and I’ve seen this unfortunately too many times and one sees it increasingly as one grows older, is that we are intellectual and social organisms but we’re also biological organisms. And eventually our biological functioning, our homeostatic systems, fail. And for Carl Sagan it was a failure to make red blood cells and that led him to a premature death.
I hadn’t recalled that he had died at such a young age. When I was 20, 62 didn’t seem like such a young age, but now that I’m here, 62 seems way, way, way too early. It’s just a tragedy, and one wonders what he would have done if he had had another 30 years or so.
So with that I’m going to step out of this Carl Sagan review section and go into a more question-asking discussion.
I went on Twitter and just said, ‘Oh, I’m panicking. What should I talk about?’ And one of the postdocs in my group responded, and it doesn’t really fit in my talk, but I thought it would be worth throwing out to ponder. She said, “We’re spending all this effort to search for life on other planets, and meanwhile we’re destroying all these ecosystems here on earth.” I’m just going to throw that out here, because I don’t know how to deal with it other than to say that these two quests are not zero-sum, and that appreciating life here on earth is not inconsistent with searching for life on other planets. We need to embrace both of these objectives, or certainly we need to embrace the objective of not destroying things here on earth while at least thinking about our broader context. I thought this was worth taking note of.
But the basic theme I wanted to talk about is this question: “Can organisms be wildly successful at planetary scale without destroying the conditions that allow them to succeed?”
For astrobiology, this is a question about the probability of finding advanced life on other planets. Is advanced life necessarily short-lived because it develops technologies and produces wastes that ultimately make that class of organisms unable to persist on the planet? Maybe advanced technological societies are very ephemeral, and it’s not possible for them to be sustained for a long amount of time. But maybe it is possible to sustain them. I will come back to this question. I just want you to have this in your mind as the framing question that is important for astrobiology. But, obviously, for all of us living on the planet this is a central question.
I was fortunate enough to go to New York University for a PhD, which maybe is not one of the premier places, but at the time Tyler Volk was there (who is in the audience) and Marty Hoffert was running the department and Brian O’Neill was there and Francesco Tubiello and a few other people who are in the audience today.
The first thing I did when I got there was model a thing called Ecosphere. It was a glass ball with some water in it, and it had brine shrimp, some algae, and bacteria. The idea was that you would put it in your window and it would be a materially closed system but open to energy: for a long time it would recycle all of its material, be energetically open, and you’d have a closed ecosystem. Tyler at the same time was working on closed ecosystems for missions to Mars. Obviously there’s some cometary material and other things coming into the planet, but more or less, at planetary scale, that Ecosphere is a metaphor for the planet.
Another way of looking at it: if we want to have an advanced industrial society that doesn’t accumulate wastes in the environment, we might need to think about whether we can make our industrial ecology more like Ecosphere.
There was a paper on this Ecosphere in a journal called Ecological Modelling. As a Master’s student, I coded that up and then tried to put an evolutionary overlay onto it. What if we had different plankton and different bacteria competing with each other? How would evolution work in such a thing? That ended up being a paper I did in Nature on evolutionary pressures on planktonic dimethyl sulfide emission.
Can we operate our modern industrial society closer to this materially closed system but be energetically open?
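As a toy illustration of that materially-closed, energetically-open idea (the pools and rate constants here are my own hypothetical choices, not the published Ecosphere model): material shuttles between a nutrient pool, algae, and brine shrimp, driven by light, but the total amount of material inside the glass never changes.

```python
def step(nutrient, algae, shrimp, light=1.0):
    """Advance the toy Ecosphere one time step.

    Energy (light) flows through the system, but material only moves
    between the three pools, so the total is conserved exactly.
    """
    uptake  = 0.2 * light * nutrient   # photosynthesis: nutrient -> algae
    grazing = 0.1 * algae              # brine shrimp eat algae: algae -> shrimp
    decay   = 0.05 * shrimp            # respiration and death: shrimp -> nutrient
    return (nutrient - uptake + decay,
            algae + uptake - grazing,
            shrimp + grazing - decay)

state = (1.0, 0.5, 0.2)   # arbitrary initial pools
total = sum(state)
for _ in range(1000):
    state = step(*state)

print(abs(sum(state) - total) < 1e-9)  # prints True: material is conserved
```

Every transfer appears once as a loss and once as a gain, which is what makes the system materially closed; the `light` parameter is the energetic opening.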
One of my big influences through this time was Marty Hoffert. I remember at that time (this was now the 1980s) that Berner et al. had come out with the BLAG model, and Walker, Hays, and Kasting had come out with the WHAK model. There was this idea of silicate rock weathering controlling atmospheric CO2 concentrations.
My understanding is that this hypothesis came from Jim Kasting, who was doing a model of oxygen on the early Earth. He had to assume some background temperature conditions, and so he came up with this idea that maybe the Urey reactions would control the temperature. Jim Walker led that study and Jim Kasting ended up being last author. This gave some idea that there was some consistency to how planets operate and regulate their temperature.
Back at NYU in the 80s, we were thinking, “Oh, if we could only make a model where you’d say what’s the composition of the star, and what are the compositions of the planets, and it would tell you whether you’d have plate tectonics” – the idea of having one unified model that could give you Mars, Earth, or Venus. Also around that time there were questions about the greenhouse effect: how strong would it be, and when would we see it? The strongest evidence in support of the greenhouse effect was that you couldn’t understand the climates of Mars and Venus without looking at the role of CO2 in the climate system.
I remember these 1D models of Martian CO2 concentrations with the CO2 going out at the poles and that sort of thing. That was the beginning for me of looking at earth science as a subset of planetary science.
The other thing that Marty said, which I think is really right though it is controversial among economists, is that different fields like to see themselves as the primary science and every other field as derivative from them. Obviously the physicists have a good claim to being the fundamental science, but economists like to think that everything is really economics and everything’s a subset of economics.
Marty Hoffert used to say, “economics is the study of allocation of scarce resources by one species on the third planet orbiting some minor star in some galaxy that’s basically ignorable.” Economics is an important science but it’s a branch of behavioral biology. We need to take their mathematics with a grain of salt.
Sorry for making this a little autobiographical but I then went to Penn State and this is where I got more connected up with astrobiology because I had an opportunity to work with Jim Kasting.
Jim was super great, one of the greatest people I’ve ever had the pleasure to work with, because Jim was somebody who would get more excited about my ideas than I would. There’s really nothing better in a collaborator than to have somebody tell you your ideas are good, because most people are telling you your ideas are boring and not worth working on, whereas Jim would be like, “Oh, that’s great.” And so we did things like a paper on the early Earth being susceptible to CO2 clouds. Was there a metastable state of the early Earth? And then we extended Jim Lovelock’s work on the lifespan of the biosphere (Ariel alluded to this).
Also Jim brought me to a conference at NASA Ames where I got to meet Carl Sagan. This was my one and only meeting with Carl Sagan. I remember at the time (I know this is maybe not a flattering thing) that he seemed to me a lot more like Mr. Rogers than I had anticipated.
Jim was in a geology department, and he said, “Look, the Earth is a planet, and earth science is a branch of planetary science, which is a branch of astronomy. And so this Geosciences Department is a sort of astronomy department – an astronomy department focused on a narrow subset of the universe.” Earth science is a branch of planetary science, and it’s about how this planet functions as a planet in some vast universe. That is a very different perspective from that of people who start at very small spatial and temporal scales.
In the 1980s, Jim Lovelock wrote a book about Gaia – about the earth being a homeostatic self-controlling system. And here’s a quote by Carl Sagan that’s in the same direction:
“What a marvelous cooperative arrangement – plants and animals each inhaling each other’s exhalations, a kind of planet-wide mutual mouth-to-stoma resuscitation, the entire elegant cycle powered by a star 150 million kilometers away.” — Carl Sagan
This idea is where I started in graduate school because we were heavily influenced by this Gaia idea. Really, it didn’t make much sense to me because I don’t think plants and animals are cooperating. The plant that gets eaten by an animal was not in a cooperative relationship.
I wrote a paper that was in a Gaia volume from a meeting in San Diego. The way I looked at Gaia was that if you have a system that’s dominated by positive feedbacks, it’s necessarily an unstable system: it just blows up and converts to some other state. So stable systems, by the very nature of their being stable, are stable because they’re dominated by negative feedbacks.
Let’s say you had random different amplifiers and made a million different systems. You’d find that the population of persistent ones are the ones dominated by negative feedbacks. And because biology is so big on this planet, some of those systems are going to have biological mechanisms, so it just makes sense that this planet is going to be dominated by negative feedback systems, and that many of those will incorporate biology. It has nothing to do with teleology or goal-directedness.
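That selection argument is easy to simulate. Here is a minimal sketch (my own toy setup, not anything from the talk or the paper): generate many systems with random feedback strengths, perturb each one, and keep the ones that don’t blow up.

```python
import random

def persists(feedback, steps=2000, blowup=1e6):
    """Apply x -> x + feedback * x repeatedly to a unit perturbation.

    Net negative feedback damps the perturbation; net positive feedback
    amplifies it until the system 'blows up' and converts to some other state.
    """
    x = 1.0
    for _ in range(steps):
        x += feedback * x
        if abs(x) > blowup:
            return False
    return True

random.seed(42)
systems = [random.uniform(-1.0, 1.0) for _ in range(1000)]
survivors = [f for f in systems if persists(f)]

# The surviving population is dominated by negative feedbacks: every
# survivor's feedback is negative or vanishingly close to zero.
print(len(survivors) > 0, max(survivors) < 0.01)  # prints: True True
```

No survivor is selected *for* anything; the persistent population is simply what is left over, which is the whole point about teleology.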
One of the main examples is the rise of atmospheric oxygen a couple of billion years ago. We had anaerobes on this planet; they produced oxygen as a waste product, and eventually the surface of the earth oxidized (I guess maybe the upper mantle oxidized too) and oxygen started accumulating in the atmosphere. And conditions were created that made those organisms unable to live in the environment that had facilitated their evolution.
One of the questions is: Is this going to be our fate also? Is that the way it is for creatures on planets – that if they’re wildly successful they have waste products and eventually those waste products accumulate in the atmosphere or in the environment and then create conditions that don’t allow those organisms to persist anymore? A reasonable first assumption is that this is the way planets with life work – that they produce wastes and eventually produce conditions that are not conducive to their survival.
Of course you can say these anaerobes were highly successful, because they’ve created all of us. We are carrying around a bunch of anaerobes in our guts, and they’re also in the soils and so on. But is this going to be our fate also – that we’re going to be in some future dome because we’ve destroyed the atmosphere and the waters, and now we create some special environments that we live in?
We are back to this question: Can a civilization be materially closed and energetically open and persist indefinitely? I think the answer has to be ‘no’ because there’s no perfect recycling of materials. There will always need to be material input and material output.
But ‘indefinitely’ is a rather strong word. The question is: can we do it on the billion-year time scale? And I think the answer is that if we’re smart, we could. It’s not gonna be perfectly closed, and we can’t last forever, but it can last long enough. And in a world where looking 10 years ahead is kind of long distance, worrying about billion-year time scales is maybe not something yet in the political system. We can’t last indefinitely, but we could last on astronomical timescales.
One of the challenges for doing this is: Can evolution create organisms that can deal with fitness effects that will manifest only in future generations?
Evolution works at the organism level: does that organism get to reproduce and produce viable offspring? We’re in a situation now where, if we are all just local optimizers, that’s not going to work. And so this transition toward long-term sustainability depends on organisms worrying not so much about their narrow fitness as about the fitness of the group. And this is where you get back to evolutionary theory.
I’m gonna do the “consciousness trophic levels argument”.
When the cow wants to eat a blade of grass, why doesn’t that blade of grass run away from the cow?
The answer is energetics. There’s just too low an energy density to sunlight, and too low a conversion efficiency of photosynthesis, for plants to have a high-energy lifestyle; so they need to have a low-energy lifestyle and not be very motile. This is just a conjecture.
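A rough back-of-the-envelope check on that conjecture (the figures here are my own ballpark assumptions, not numbers from the talk):

```python
# Ballpark figures (assumptions): time-averaged sunlight at the surface,
# net photosynthetic efficiency, and the metabolic power of a fleeing animal.
sunlight_flux = 200.0     # W per square meter, time-averaged at the surface
photo_efficiency = 0.01   # ~1% net conversion of sunlight to chemical energy
leaf_area = 1.0           # square meters of leaf, a generously sized plant

plant_power = sunlight_flux * photo_efficiency * leaf_area  # ~2 W

runner_power = 500.0      # W, rough metabolic output of a mammal fleeing a predator

# Even a generous plant harvests orders of magnitude less power than a
# fleeing animal burns, so running away is simply not on the energy budget.
print(runner_power / plant_power)  # prints 250.0
```

The exact numbers don’t matter; the gap of a couple of orders of magnitude is what rules out a high-energy, motile lifestyle for a sunlight-powered organism.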
Something I did before I went to graduate school was a neural model of the sea slug Aplysia. You could see from the wiring of its nervous system that it basically has sensors on both sides. If it’s sensing more light to the right, the neurons go to the muscles on the left side of its body, and it moves a little faster on the left and turns toward the light. You can understand its basic behavior patterns just from the wiring diagram of its nervous system.
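That wiring-diagram point can be sketched in a few lines (a hypothetical two-sensor, two-muscle caricature, not the actual Aplysia model):

```python
def slug_step(left_light, right_light):
    """One time step of a cross-wired light follower.

    Each sensor excites the muscle on the opposite side: more light on
    the right speeds up the left side, so the body turns to the right,
    toward the light. No mental model of the world is needed.
    """
    left_speed = right_light    # right sensor -> left-side muscles
    right_speed = left_light    # left sensor -> right-side muscles
    return left_speed, right_speed

# Light is brighter on the right, so the left side moves faster
# and the animal turns toward the light.
left, right = slug_step(left_light=0.2, right_light=0.8)
print(left > right)  # prints True
```

The behavior falls directly out of the wiring, which is the contrast with the higher-trophic-level animals discussed next.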
But if you start getting to higher trophic levels – say a pack of lions or dogs trying to chase down a highly motile animal – now you have to think: What’s that animal going to do? What am I going to do in response to what it does? How are other organisms in my social group going to respond to that? And so you start needing to model the system as: I have a mental model of the world. I’m representing myself as an actor in that world. I’m representing other creatures as actors that are making decisions, and I’m trying to think of what-ifs and counterfactuals.
It is through this process of representing the world as a model, the self as an actor in the model, and other minds as actors in the model, that we get to consciousness.
Bees are at the opposite end. Bees are eating things that are pretty stable. Pollen is not running away from them, so bees don’t have to do this mental modeling of what other agents are going to do. Bees have evolved in a way that they really do have this sort of group optimization. Obviously there are genetic reasons why. And so the question is: Can we use our brains to do more of this group optimization, and not that short-term, local-in-space-and-time optimization?
Depending on how the time was going I was going to go into a bunch of little side things but things got too long …
Consciousness and trophic levels — which people want me not to do, but I do think the basic point there is that to evolve consciousness on a planet you have to get to a point where there are several trophic levels, because you need really high-energy organisms pursuing other high-energy organisms, organisms that need to keep mental models of the situation representing the self as an actor. That’s where consciousness comes from.
Statistics of impacts and mass extinctions — I’ve been doing a bunch of work with Mike Rampino over the years, and I never know what to think about this. If you look at the mass extinctions on Earth, and even the secondary ones, and then you look at the statistics of impacts and things like flood basalts, all these things do seem correlated (Mike is convinced, and he’s convinced me, even though I don’t really like it). I was always wondering how much of this comes out of cherry-picking datasets. We have probably done at least a half dozen papers on these correlations over the years; Mike is usually coming up with the numbers and I’m doing the statistics, and I’m always a little skeptical of the whole thing, but the same numbers keep popping out of different data sets. It does seem that there’s at least some extraterrestrial pacemaker to some punctuated events in Earth history, and that mass extinction events with extraterrestrial causation, or at least where the extraterrestrial component is a substantial factor, seem to be a characteristic of life on this planet and likely on other planets.
One of the things that has come up in this context: we were doing some work on the Great Barrier Reef looking at the effects of ocean acidification on coral reefs. Elizabeth Kolbert came out and visited us, and we ended up as a chapter in her Pulitzer Prize-winning book, “The Sixth Extinction”. One of the questions is: Are we right now facing anything at the scale of the five previous extinctions? Despite liking Elizabeth and liking her book, I don’t think what we’re doing now is anywhere near the scale of the end-Cretaceous extinction event. What we’re seeing is a terrible loss of biodiversity, but a lot of it is the bigger, more charismatic stuff, stuff with economic value, or stuff that’s not widespread. I think it’s tragic. I just don’t think it’s anywhere close to the scale of the end-Cretaceous extinction.
Life outside of traditional habitable zones — One of the things that’s really been interesting recently … Back when I was working with Jim Kasting, the focus was always on habitable zones and how silicate weathering feedbacks affect the habitable zones. People didn’t think about how tidal forces could heat the moons of the outer planets, and how those tidal forces could make liquid water deep in the solar system. One of the exciting things that’s happened in this whole field since then is this expansion of the notion of what a habitable zone is, and how there are energy sources other than the star that can support life. Whether that could support anything more advanced than bacterial life is a question.
I’m just gonna run to the end because I’m running out of time.
How hard is it to destroy modern civilization? — This shows up in the global change discussion a lot. There are a lot of people who think that global warming is an existential threat to modern civilization. And other people think we’re going to just muddle through – that it’s going to be a cost on society, and it will be existential for some people who lose their livelihoods or lose their lives, but for civilization as a whole it’s a challenge, not an existential threat. I tend to be on that side of things.
It’s a little bit like the extinctions. It’s tragic and unnecessary but not an existential threat. To some people and some communities, yes, but not to humanity.
How hard is it to kill off all life on Earth? — Once we were in some meeting and the question came up of how hard it would be to kill off all life on Earth. I think that one’s hard unless you melt the planet, because you have the deep biosphere; you’ve got life all over the place. If you had a Cretaceous-type impact, you could maybe kill off modern civilization, but it’s really hard to kill off life on Earth without melting the entire planet.
Again, this question here: Can organisms be wildly successful at planetary scale without destroying the conditions that allowed them to succeed?
And the answer is that in most cases organisms would be expected to destroy the conditions that allowed the organisms to succeed but this is not a necessary outcome. And we’re in a special position to affect the answer to this question.
Lovelock was wrong, but we can make him right. Lovelock had this idea that there are all kinds of biological negative feedbacks in the system, and that biology is operating this system in a way that keeps conditions good for life on this planet. Lovelock was wrong. There is no teleology. There’s no goal-directedness to how the planet functions.
But now we have these brains that model ourselves as actors, and we think of counterfactuals and the consequences of our actions. We have the ability to operate this planet in a goal-directed way.
And being a risk-averse person my goal-directed way of operating this planet is to interfere with natural systems as little as possible. The more we pull back from interfering with natural systems, the more likely we are to persist.
But people can disagree. There are some people who want to terraform Earth and make it nicer. But the main challenge is to make Lovelock right – to operate this planet in a teleological way.
Now coming back to some quotes from Carl Sagan: “Our passion for learning … is our tool for survival.” –Carl Sagan
We learn about this planet, about how it functions. And then we can start operating it a little more cleverly.
“You know about the concern that chlorofluorocarbons are depleting the ozone layer; and that carbon dioxide and methane and other greenhouse gases are producing global warming, …
Who knows what other challenges we are posing to this vulnerable layer of air that we haven’t been wise enough to foresee?” — Carl Sagan
Our department hopefully is going to hire some new people, and one of the things I’ve been arguing about exactly where we should hire is reflected in this quote from Carl Sagan. Over the last century we’ve worried about lead in gasoline. We’ve worried about chlorofluorocarbons destroying the ozone layer, about CO2 from our fuels altering climate, about pesticides, and so on.
When we solve the climate problem, that is not the last thing. If we solve the climate problem, and we made it so pesticides didn’t kill off all the insects, and got rid of the last of the CFCs, and so on, something else is going to bite us down the road. Who’s thinking about what comes after the climate problem? What’s the next barrier that civilization is going to run into? We need to be thinking about this now.
Let’s say it was 1918 instead of 2018, and you asked what science we could have done in 1918 that would make the world a better place today. I’m doing energy system forecasting now; energy system forecasting in 1918 would have been a complete waste of time. You wouldn’t have foreseen wind, solar, nuclear, or the rise of automobiles. But coming up with new materials (along with, obviously, health and education) would have been great: silicon chips, carbon nanofibers, all these things. You would also have needed to couple that with life cycle analysis, so that when we release these new materials into the environment, we understand their long-term effects. Environmental studies like that, had they been done in 1918, could have protected children from getting lead in their brains. They could have protected us from climate change. This anticipatory science, asking what materials we can produce and then what happens when those materials are released into the environment, is critical.
“Our species needs, and deserves, a citizenry with minds wide awake and a basic understanding of how the world works.” — Carl Sagan
Another thing that Carl Sagan pointed out is that democracy depends on an educated population. We obviously don’t have an educated population right now.
I’ve had people email me telling me that I’m a technologist and how bad I am for believing in technology. And I say, “Look, you’re using a computer to tell me that technology is bad.” People assume a cellphone just works, as if that’s not technology; technology is that scary thing.
“We have also arranged things so that almost no one understands science and technology.
This is a prescription for disaster.
We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces.” — Carl Sagan
Unless people understand something about science, we’re not going to be able to deal with our problems properly. We need a population that understands how the world works and can vote appropriately. And this is the centerpiece of my talk: Can we live on this planet a long time? Can we get past this tendency of evolution to optimize fitness just one generation forward? Can we make Lovelock right? Can we operate this planet for the long term?
“While our behavior is still significantly controlled by our genetic inheritance, we have, through our brains, a much richer opportunity to blaze new behavioral and cultural pathways on short timescales. ” — Carl Sagan
This is what we really need to be doing with our science and with our lives.
Ariel had this quote, so I’m not going to go through it again, but it is about looking at our planet as one of many planets in the universe and realizing that we are on this Ecosphere – the spaceship Earth – and we need to try to help life as we like it persist.
Just to remind you of this other quote: “Extinction is the rule. Survival is the exception.” — Carl Sagan
We’re an exceptional species but we need to work at it.
Note that questions were not caught by the transcription, so these are lightly edited versions of Ken Caldeira’s answers to questions.
I said two things that were contradictory: one is that we have to learn how to run the planet, and the other is that my bias is towards interfering in natural systems as little as possible.
I don’t see that as a contradiction in that my feeling is that unless you really understand complex systems well, interference in them is likely to produce unanticipated consequences and is dangerous. If the natural system in which we evolved is providing us a pretty good home then maybe a risk-averse way to run that planet is to let that natural system go on.
I did some work with Edward Teller, and he wasn’t worried so much about global warming as he was about going into the next ice age. He asked whether, for the next ice age, we could engineer our way out of it. Obviously this kind of interference is very dangerous.
Let’s say the Sun heats up, and now we’re not worrying about the next decades but about, say, a billion years. We could put particles at the L1 point between the Earth and the Sun, or in orbit around the Earth, or in the stratosphere, and reflect additional sunlight away from the Earth and extend the lifespan of the biosphere.
So I think right now our best course is to minimize intervention in the system, but eventually it might be in people’s interest to take some active role.
But right now, keeping our hands off the rudder is the best course of action. And right now, unfortunately, we’re intervening in the system without understanding, or with understanding and without caring, and we have to stop doing that.
Back to economics a little bit …
We evolved as local optimizers, but we are heavily culturally influenced. Camus, whom Sagan was reading, had written about imagining Sisyphus happy pushing that stone up the hill. And you wonder about the people who built the Notre Dame cathedral as a multi-generational project that was aspirational towards some idea of permanence. Were these just serfs working on the thing because they needed money for food, or did lugging those stones and building Notre Dame give people meaning? We can get collective meaning out of a project that would be positive for all of humanity, in a way that economics and even evolutionary theory, with their emphasis on self-interest and narrow personal gain, miss …
I think a lot of us are motivated by approval of our peers, by wanting a feeling of meaning in our lives and so on. And not everything we do is narrowly self-interested. And maybe if in our culture we tried to emphasize more doing things for the public good that maybe more people would start doing things for the public good.
I don’t know how much time we have but okay.
I think intelligence is pretty easy to evolve. I read this nice book by Frans de Waal, “Are We Smart Enough to Know How Smart Animals Are?”, and a main point is that brains have a cost. They require a lot of energy, so those resources can’t be used for anything else. Frans de Waal said basically that we have the brains that maximize our fitness. If you want to look at an interesting case, look at the octopus, because most other intelligent organisms are vertebrates, and we have come from the same line of brain function; our brain architectures are the same. So octopuses are interesting to look at because they’re invertebrates. They have a distributed brain, so they can tell one of their arms to explore over there, and the actual detailed exploration will be done by the intelligence of that arm. It will be done in the arm rather than in central processing.
But octopuses only live a year or two. They’re carnivores. They invade disturbed places, and so they need to go in with that dexterity, figure out how to adapt to a new situation, and, having intelligence, know how to get prey. That’s where they need to think about what-ifs: with the prey.
The fact is that on this planet right now some forms of intelligence have developed in both vertebrates and invertebrates, and that’s just at this time now…
Why aren’t octopuses more intelligent? Because it wouldn’t improve their fitness to be more intelligent.
It tends to be carnivores and social animals; social carnivores are the intelligent animals, because they need to coordinate with other beings and they need to go after motile organisms.
Anytime you have a high number of trophic levels and social organization, you’re likely to get consciousness.
Unfortunately, there is another session coming in here; somebody is waiting to use the room …
I don’t think we have time to go into that, so I’m happy to talk to you afterwards, but unfortunately I don’t have concrete ideas on what to do there, other than what I said: be broadly educated and creative.
Oh, the Sun: it’s stellar evolution. The Sun is getting hotter, and eventually we’ll lose our liquid water.
Scientists need a system to help them find the papers that are really worth taking the time to read carefully.
Right now, working scientists and those who would like to follow scientific literature have difficulty wading through the thousands of papers that are published every day to get to the papers that are worth reading.
The problem is caused by the emphasis on quantitative publication-based metrics to assess scientific productivity. These metrics give authors incentive to publish many papers describing micro-advances, and to divide a single integrated study into several papers. (These metrics also provide incentives to add co-authors who have contributed little, but that is another story.)
Working scientists, and people who would like to follow the work of scientists, need help.
The following proposal is a rough sketch and not all of the details have been worked out.
Moss, Wunderlich Park
The basic idea is to create an online platform that would help people to understand what they should read to be up on the scientific conversation in a scientific topic area or sub-discipline.
The platform would be about recommending reading.
It would not be about criticizing content that is found in the literature, and it is not about saying what not to read. Aside from looking at the statistics of recommendations, the only action someone can take is to recommend a paper to people interested in a topic area.
It is inspired by things like Stack Exchange and Reddit. In the ideal set-up, perhaps on some platform similar to Google Scholar, there would be a way to tell the system that you recommend people interested in, for example, metamorphic petrology to take the time to read this paper.
Key would be enabling the sorting of recommendations in different ways.
Disciplinary expertise. Recommendations from different people could be weighted differently depending on how many (weighted) recommendations their own work has gotten within that topic-area (sub-discipline). So, a metamorphic petrologist whose work has gotten many recommendations would have more influence in ranking of papers within the metamorphic petrology topic area.
Different time periods. One could look at recommendations as a time series, and use the net present value of weighted recommendation-instances to sort reading recommendations. If the user wanted to see what the most important papers were on the decadal time scale, they could discount on a decadal time scale; if the user wanted to see what the most important papers were over the last weeks, they could discount on a one-week time scale. The time discounting could also be used to reduce the weight of recommendations from people who make recommendations very frequently.
Sorting and searching. While the institution that hosts the database should provide basic search functions, if the resulting database is open access, as it should be, many people could provide services filtering and sorting results in different ways. One could imagine constructing associations between topic areas by looking at papers recommended in more than one topic area, and searching for important papers to read based on those associations.
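The time-discounted weighting sketched above is easy to make concrete. The following is only an illustration of the idea; the names (`Recommendation`, `score_paper`) and the exponential-decay form of the discounting are my own assumptions, not part of any existing platform.

```python
import math
from dataclasses import dataclass

@dataclass
class Recommendation:
    recommender_weight: float  # influence earned from recommendations of the recommender's own work
    age_days: float            # how long ago this recommendation was made

def score_paper(recs, discount_days):
    """Sum of recommendations, each exponentially discounted by its age.
    A short discount time scale surfaces recently recommended papers;
    a long one surfaces long-standing important papers."""
    return sum(
        r.recommender_weight * math.exp(-r.age_days / discount_days)
        for r in recs
    )

recs = [Recommendation(1.0, 2), Recommendation(3.0, 400)]
weekly = score_paper(recs, discount_days=7)      # the year-old recommendation barely counts
decadal = score_paper(recs, discount_days=3650)  # both recommendations count nearly fully
```

Sorting papers by `score_paper` with different `discount_days` values would give the "last week" versus "last decade" views described above, from the same underlying recommendation data.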
— How should other aspects of the platform be designed, including how to create topic areas within the system?
— To what extent can or should anonymity be provided?
— How can we design a system such that when people try to game the system, they are doing what is best for the system?
Of course, proposals like this suffer from a chicken-and-egg problem. If everyone were already using a system like this, the system would be useful and busy scientists would have an incentive to use it. But if nobody is using the system, then nobody has an incentive to contribute to it. Therefore, a system like this would need to be initiated by people with some standing, perhaps Google, professional associations, or national academies.
It would be great if there were some kind of community-wide reading recommendation service with the granularity to be useful even on topics of extremely narrow interest.
This rant is from an email sent to my research group. It seems that some of us have been asking questions like, “What can I do with a climate model that has not already been done?” If this is the question we are asking, then we are asking the wrong question.
Most people in the world are focused on solving pressing problems (how to provide for their families, how to get access to health care, etc). Most people are faced with pressing problems that they have to solve, not problems they choose to solve.
Some people approach their scientific or technical work choosing to focus on pressing problems (“What can I be doing to most effectively help a transition to a clean energy system?”) but other people approach their work thinking, “I have a hammer; are there any nails around that I might be able to hammer on?” — Or even worse, “Are there any nails around that other people have already whacked at, but that I might be able to give another whack or two?”
If you are not working on a problem that you feel is important and pressing, then you are probably working on the wrong problem. (The reason the problem is important could be for fundamental scientific understanding, and not necessarily utilitarian concern.)
It is important to start with the problem, not the tool.
Once you have identified the problem, then your experience with specific tools might inform how you can most effectively contribute to problem solution, but the starting point should be the problem, not the tool.
An intermediate position is to ask: What are the important problems that this tool could contribute to solving? Realistically, this is where we are with much of our work.
The main point is: If you are having trouble finding important problems to address with the tools you already know how to use, that is probably a sign that it is time to learn to use new tools. (This is why I have been learning about economics and energy system modeling.)
You should not just address ever more arcane and irrelevant problems using the tools you already know how to use.
The world is replete with pressing problems. If you are not working on at least one of these problems, there is a good chance you are wasting your time and you should be doing something else.
If you have recently gotten your PhD or will get your PhD within the next year or two, and are interested in trying to address important problems using new tools or approaches, please apply for a postdoc job in my group.
I woke up this morning to read The Federalist quoting me out of context, putting words in my mouth that I did say but wish I had worded more carefully. For those not familiar with The Federalist, it is a right-wing online magazine.
“This opens up the possibility that we could stabilize the climate for affordable amounts of money without changing the entire energy system or changing everyone’s behavior,” Ken Caldeira, a senior scientist at the Carnegie Institution for Science, told The Atlantic.
Here is the full email I sent to Robinson Meyer, writer for The Atlantic:
I am no expert in systems costing, but I read the paper as saying that Direct Air Capture of carbon dioxide would cost somewhere in the range of $100 to $250 per ton.
If these costs are real, it is an important result.
If you look at this paper (and this is what I could find quickly on the web)
Carbon prices projected for this century look like this for 2 C stabilization from a business-as-usual scenario:
If you notice, by the end of the century, these integrated assessment models project carbon prices of many hundreds if not thousands of dollars per ton CO2.
The IPCC estimated that these levels of carbon prices could shave 5% off of global GDP.
The result of David Keith and colleagues suggests that carbon prices could never go above the $100 to $250 range per ton CO2, because it would be economic to capture CO2 from air at that price.
This suggests that the hardest to decarbonize parts of the economy (e.g., steel, cement manufacture, long-distance air travel, etc) might continue just as they are now, and we just pay for CO2 removal.
To put these prices in context, $100 per ton CO2 works out to about $1 per gallon of gasoline. This suggests that a fee of somewhere between $1 and $2.50 per gallon would allow people to drive their ordinary cars, and we could just suck the CO2 out of the atmosphere later.
This opens up the possibility that we could stabilize climate for affordable amounts of money without changing the entire energy system or changing everyone’s behavior.
To give more context, global CO2 emissions are something like 36 GtCO2 per year. If we were to remove all of that with air capture at $100 per ton CO2, that works out to $3.6 trillion per year.
Depending on how you count things, global GDP is somewhere in the neighborhood of $75 to $110 trillion. So, to remove all of this CO2 would be something like 3 to 5% of global GDP (if the $100 per ton number is right). This puts an upper bound on how expensive it could be to solve the climate problem, because there are lots of ways to reduce emissions for less than $100 per ton.
In any case, it makes it much easier to deal with the hardest to decarbonize parts of the economy.
Again, this is all with the caveat that I am no expert in costing of engineering systems. But, if this paper is correct, the result seems important to me.
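For anyone who wants to check the arithmetic in the email above, here it is in a few lines. The figure of roughly 8.9 kg of CO2 per gallon of gasoline is my assumption (a standard approximate value for combustion of a gallon of gasoline), not a number from the Keith et al. paper.

```python
cost_per_ton_co2 = 100.0        # USD, low end of the paper's $100-$250 range
kg_co2_per_gallon = 8.9         # assumed CO2 from burning one gallon of gasoline

# Capture fee per gallon: roughly $1 at $100 per ton CO2
fee_per_gallon = cost_per_ton_co2 / 1000.0 * kg_co2_per_gallon

# Removing all global emissions: 36 GtCO2/yr at $100/ton
global_emissions_gt = 36.0
total_cost_trillions = global_emissions_gt * 1e9 * cost_per_ton_co2 / 1e12

# As a share of a $75-$110 trillion global GDP: roughly 3% to 5%
gdp_share_low = total_cost_trillions / 110.0
gdp_share_high = total_cost_trillions / 75.0
```

Running this gives a fee of about $0.89 per gallon, a total of $3.6 trillion per year, and a GDP share between roughly 3% and 5%, matching the figures in the email.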
It is always going to be easier and cheaper to avoid making a mess than to clean up one we have already made. It is easier to remove carbon dioxide from a smokestack, where the exhaust is 10 percent carbon dioxide, than from the atmosphere, which is 0.04 percent carbon dioxide.
When the Constitution of the United States of America was written, it seemed inconceivable that people would be released from slavery or that women would vote. Just a few years before gay marriage became the law of the land, it would have been impossible to predict such a sweeping change in social attitudes. For us to even have a chance of addressing the climate problem, we’ll need another huge change in public attitudes. It will need to be simply unacceptable to build things with smokestacks or tailpipes that dump waste into the air. This change could happen.
The point with my poorly worded quote was not that we don’t need revolutionary changes in our energy system, but that there are some very hard-to-deal-with sources of CO2 emission, like long-distance aviation, that could be addressed by using hydrocarbon fuels coupled with contemporaneous capture of CO2 by devices like that being investigated by David Keith and colleagues.
As recently as 1 June 2018, I wrote an email to Peter Frumhoff of the Union of Concerned Scientists, urging that organization to put out a statement saying:
Today’s emissions policies should be based on the assumption that most [of] our CO2 emissions will remain in the environment for hundreds of thousands of years. Emissions policies should not be made on the assumption that future generations will clean up our mess using carbon dioxide removal technologies and approaches.
There is a big difference in using direct air capture of CO2 to offset contemporaneous emissions and using direct air capture of CO2 to argue that we can continue emitting CO2 today in the hopes that someone else will clean up our mess in the future.
As a little egomaniacal side note, I would like to point out that Caldeira and Rampino (1990) may be the first paper to point out the approximately 300,000 year time scale for removal of atmospheric CO2 concentration perturbations by silicate rock weathering. This estimate has held up pretty well over the last decades.
What are the lessons learned?
When speaking or writing an email to a journalist, think about how each sentence could read when taken out of context. Even if you trust the journalist to represent your views well (and I think Robinson Meyer did an excellent job), somebody can later take a carelessly worded statement and use it out of context.
Also, we are busy, and when requests come in, we often try to respond with something quickly so we can get back to our day jobs (which in my case happens to be scientific and technical research). I should slow down a little bit and take the time needed to write more careful prose.
So, what do you do when a poorly expressed idea is quoted out of context by people with a political agenda?
My answer: “Write a blog post about it, and then Tweet and move on.”
My postdocs and I are having a discussion about how to be more efficient in producing high-impact papers in quality peer-reviewed journals. I sent the steps in my preferred process to them, which are repeated below.
Photo by Jess Barker
Steps are similar for the observationally-based work we do. The main difference is that obtaining additional observations is usually much harder than performing additional model simulations.
Steps to writing a scientific paper
1. Play until you stumble on something of interest. Obtain initially promising results. Alternatively, think about what paper people would find useful that you could write but has not yet been written.
2. Write a provisional draft abstract for the proposed paper. This defines the problem, the scope of work, the expected results, and why it is important or interesting. What is the main point of the study and why should anyone care? This is a good time to start thinking about the target journal.
3. Write the introduction of the proposed paper. This forces you to do a literature review and understand what else is out there. It also forces you to write up the problem statement while you still think the problem is important. Usually, by the end of the study, the result seems trivial and obvious, and the problem unimportant.
4. Do additional simulations, measurements, analyses, etc, needed to test out the basic hypothesis and produce data for tables and figures. Attempt to get enough of a mechanistic understanding so that the central result starts to seem trivial and obvious.
5. Create rough drafts of figures. Make an abundance of figures, assuming that some will be in the main paper, some in the supporting material, some for talks, and some not used at all. Make preliminary decision of what figures will be in the main paper.
6. Write first draft of paper around figures. Do this before iterating on figure improvement. The standard outline is: Abstract, Introduction, Methods, Results, Discussion, Conclusions. The Results section should describe the results produced by the model. Usually, the Discussion section should discuss the relevance of those model results to the real world. Sometimes, the exposition is smoother if results in a sequence are each presented in turn and then discussed. This is OK if care is taken to be clear about when you are referring to the model and when you are referring to the real world.
7. Write figure captions. Figure captions are often among the parts of the paper read by the broadest audience. Place in figure caption a one sentence statement of the main point you expect the reader to derive from looking at the figure. Sometimes editors pull this sentence out, but they often leave it in. In any case, you should understand the main point of each figure.
8. Iterate improvement of the draft of the paper and main paper figures until the process starts to asymptote. Do additional simulations and make additional figures as necessary. Take care to make your figures beautiful. Beautiful figures not only communicate scientific content well to a broad audience, but also communicate that you care about your work and strive for a high level of excellence. Consider target journal guidelines and what should go in the supporting material and what should be in the main body of the paper.
9. Wherever possible, replace jargon and acronyms with ordinary English. Insofar as it is possible, improve felicity of expression. Write good prose. This is especially important in the abstract, first and last paragraphs, and figure captions.
10. Before submission, double check that the main story of the paper can be obtained by reading (1) the abstract, (2) the first paragraph, (3) the last paragraph, and (4) the figure captions. This is already more than what most ‘readers’ of your paper will actually read. Only experts will read the entire paper. Most readers will just want the idea of the paper and the basic results.
11. Make sure all codes, intermediate data, etc, are packaged up in a single directory. This is done both to facilitate making modifications later, and also so as to provide maximum transparency into and reproducibility of the scientific process.
12. Write cover letter to editor and submit. Stress the new finding and to whom this finding will be of interest. Suggest knowledgeable reviewers who you have not collaborated with recently. If you have written papers on related topics, people who have cited your previous papers would be good candidate reviewers.
Key is to have rough figures and a rough draft on paper early. It is much easier to improve existing text and figures than to start with a blank page.
Also key is recognizing when your manuscript is beginning to asymptote. A sloppy error-filled manuscript will give reviewers the feeling that your work is sloppy. However, perfectionism can mean low productivity. Striking the correct balance is hard.
Another thing is to do Step One 20 times. If you have 20 ideas for papers you can pick the best one. If you have only one idea, it is unlikely to be a great idea. People who have only one idea at a time tend to write papers that are footnotes to their previous papers, and then have careers that descend into meaningless detail that nobody cares about.
You might also want to take a look at this advice on writing scientific papers from George M. Whitesides, and this advice on the 5 most pivotal paragraphs in a scientific paper by Brian McGill.
Figure 2 from Winkelmann et al. (2015) indicating how much Antarctic ice loss is projected to occur as a result of different amounts of cumulative carbon dioxide emission, over the next one, three and ten millennia. Note that 10,000 GtC of cumulative emissions results in about 60 m (about 200 ft) of sea-level rise over the long term (taking additional contributions from Greenland and mountain glaciers into account).
If we divide 24,064,000 km3 by 10,000 GtC, assume the density of the ice is 1 kg per liter, and do the appropriate unit conversions, we can conclude that each kg of carbon emitted as CO2 will ultimately melt about 2,400 kg of ice. This is a huge number.
Another way of expressing this is that each pound of carbon released to the atmosphere as CO2 is likely to end up melting more than a ton of glacial ice.
Often, people like to think in units of tons or kg of CO2 instead of tons or kg of carbon. In these units, each kg of CO2 ultimately melts about 650 kg of glacial ice.
Per capita CO2 emissions in the United States are around 16 tons per year, which works out to about 1.8 kg (about 4 pounds) of CO2 per hour per American. This is more than twice the per capita emission rate of Europe and about twenty times the per capita emission rate of sub-Saharan Africa.
If I am an average American, the CO2 emissions that I produce each year (by participating in the broader economy) will be responsible for melting about 10,000 tons of Antarctic ice, adding about 10,000 cubic meters of fresh water to the volume of the oceans.
That works out to more than a ton of Antarctic ice loss for each hour of CO2 emissions from an average American. Every minute, we emit enough CO2 to add another five gallons of water to the oceans through glacial ice melt.
If you do the units conversion, this means that each American on average emits enough CO2 every 3 seconds to ultimately add about another liter of water to the oceans. The Europeans emit enough CO2 to add another liter to sea-level rise every 8 seconds, and the sub-Saharan Africans add a liter of seawater’s worth of CO2 emissions every minute.
In my freezer, there is an ice cube tray with 16 smallish ice cubes. The ice cubes in this tray all together had a mass of 345 g, or about 1/3 of a kg. That means that I am responsible for, every second, emitting enough CO2 to melt about an ice-cube-tray’s worth of Antarctica.
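The arithmetic above is easy to reproduce. This sketch uses the same assumptions as the text: ice density of 1 kg per liter, 24,064,000 km3 of eventual ice loss for 10,000 GtC of cumulative emissions, and about 1.8 kg of CO2 emitted per hour by an average American.

```python
ice_volume_km3 = 24_064_000     # long-term Antarctic ice loss (Winkelmann et al., 2015)
emissions_gtc = 10_000          # cumulative carbon emissions, GtC

# 1 km3 = 1e12 liters = 1e12 kg at the assumed density of 1 kg/L
ice_kg = ice_volume_km3 * 1e12
carbon_kg = emissions_gtc * 1e12          # 1 GtC = 1e12 kg

ice_per_kg_c = ice_kg / carbon_kg         # about 2,400 kg ice per kg carbon
ice_per_kg_co2 = ice_per_kg_c * 12 / 44   # about 650 kg ice per kg CO2

# An average American at ~1.8 kg CO2 per hour melts over a ton of ice per hour
us_kg_co2_per_hour = 1.8
ice_melt_kg_per_hour = us_kg_co2_per_hour * ice_per_kg_co2
```

The factor of 12/44 converts from mass of carbon to mass of CO2 (the molecular weight of CO2 is 44, of which 12 is carbon), which is why the "per kg CO2" figure is smaller than the "per kg carbon" figure.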
Economists often like to think in terms of the “carbon intensity of our economy,” meaning how much CO2 we emit per dollar of value produced or consumed. We can also think about the “ice intensity of our economy”: How much ice is melted per dollar of value produced or consumed?
Admittedly, by the time scales of our ordinary activities, ice sheets take a long time to melt. The melting caused by a CO2 emission today will extend out over thousands of years.
There are complex moral questions related to balancing short-term and long-term interests. Not everyone thinks we should be taking the long-term melting of Antarctica into account.
However, if the ancient Romans had undergone an industrial revolution similar to ours and fueled a century or two of economic development using fossil-fuels with disposal of the waste CO2 in the atmosphere, sea level today would be rising about 3 cm each year (more than an inch a year) due to the long-term effects of their emissions on the great ice sheets.
If their scientists had told them of the long-term consequences, but they had nevertheless decided to neglect those consequences so that they could be a few percent richer in the short term, I imagine that we would take a fairly dim view of their moral standing.
This is from an email sent today to colleagues in my department:
Postdocs in my lab either have gotten or may be about to get more permanent employment, which puts me in the position of constantly trying to recruit great people.
If you know people who are really good and who are going to get their PhD degrees within a year or two (or have gotten their degree within the past year or two), please forward this email to them.
I really don’t care about people’s domain knowledge. I look to see that they are smart, productive, creative, able to complete projects, can write, can speak, can do math, etc. Smart people can learn the relevant facts quickly.
We are a good place for people who want to understand the big picture, and who will not get lost investigating interesting but ultimately unimportant detail.
Ability to demonstrate an interest in the challenges associated with a clean energy system transition is important, but experience addressing these challenges is not important.
Two postdocs in my group engaged in geophysical modeling may move on this year, so there is space for at least two people who want to understand limits on and opportunities for clean energy systems from a geophysical perspective.
I am trying to build up our idealized energy-system-modeling effort, so there is room to hire a few people there. There is also room for people who want to do idealized economic analysis related to development and decarbonization.
On a different topic, we have now published two Nature papers that represent the culmination of our ocean acidification-related work on coral reefs in Australia (Albright et al., 2016, 2018). While I am not actively recruiting in this area, if a postdoc candidate has a great idea on how to carry this work forward, and wants to lead the project, I can make room for such a person.
In short, I would appreciate it if you would use your networks to help me find good people who are interested in topics that my group is interested in. We are open to hiring non-traditional candidates who have interest, but lack experience, in these topic areas.
We recently published a paper that does a very simple analysis of meeting electricity demand using solar and wind generation only, in addition to some form of energy storage. We looked at the relationships between fraction of electricity demand satisfied and the amounts of wind, solar, and electricity storage capacity deployed.
Our main conclusion is that, because of geophysically forced variability in wind and solar generation, the fraction of electricity demand satisfied grows fairly linearly with deployed capacity up to about 80% of annually averaged electricity demand, but that beyond this level of penetration the amount of additional wind and solar generation capacity, or of electricity storage, needed rises sharply.
Obviously, people have addressed this problem with more complete models. Notable examples are the NREL Renewable Electricity Futures Study and the NOAA study (MacDonald, Clack et al., 2016). These studies concluded that it would be possible to eliminate about 80% of emissions from the U.S. electric sector using grid-interconnected wind and solar power. In contrast, other studies (e.g., Jacobson et al., 2015) have concluded that far deeper penetration of intermittent renewables is feasible.
What is the purpose of writing a paper that uses a toy model to analyze a highly simplified system?
Fig. 1b from Shaner et al. (E&ES, 2018), illustrating variability in wind and solar resources averaged over the entire contiguous United States, based on 36 years of weather data. Also shown is electricity demand for a single year.
The purpose of our paper is to look at fundamental constraints that geophysics places on delivery of energy from intermittent renewable sources. For some specified amount of demand and specified amount of wind and solar capacity, the gap between energy generation and electricity demand can be calculated. This gap would need to be made up by some combination of (1) other forms of dispatchable power such as natural gas, (2) electricity storage, for example as in batteries or pumped hydro storage, or (3) reducing electricity loads or shifting them in time. This simple geophysically-based calculation makes it clear how big a gap would need to be filled.
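The gap calculation described here can be sketched in a few lines. What follows is a hypothetical toy, not the paper's actual source code (which is in its Supplemental Information): given hourly series of demand and wind/solar generation, a simple lossless storage model yields the fraction of demand met. The sinusoidal profiles are made-up stand-ins for real weather and load data.

```python
import numpy as np

# Toy sketch of the gap calculation: hourly demand and generation series
# (normalized so that mean demand = 1) are fed through a lossless storage
# model. The profiles below are made-up stand-ins, not the paper's inputs.

def fraction_demand_met(demand, generation, storage_capacity_hours):
    store = 0.0   # energy in storage, in mean-demand-hours
    unmet = 0.0   # accumulated unserved energy
    for d, g in zip(demand, generation):
        surplus = g - d
        if surplus >= 0:
            store = min(store + surplus, storage_capacity_hours)  # charge; spill excess
        else:
            discharge = min(-surplus, store)   # discharge what storage holds
            store -= discharge
            unmet += (-surplus) - discharge    # remaining gap goes unserved
    return 1.0 - unmet / demand.sum()

hours = np.arange(24 * 365)
demand = 1.0 + 0.1 * np.sin(2 * np.pi * hours / 24)          # daily load cycle
solar = np.pi / 2 * np.maximum(0.0, np.sin(2 * np.pi * (hours - 6) / 24))  # daytime only
print(fraction_demand_met(demand, solar, storage_capacity_hours=0.0))
print(fraction_demand_met(demand, solar, storage_capacity_hours=12.0))
```

Increasing `storage_capacity_hours` raises the fraction of demand met, which is the generation/storage trade-off the paper quantifies with real weather data.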
Our simulations correspond to a situation in which there is an ideal and perfect continental-scale electricity grid, so we are assuming lossless electricity transmission. We also assume that batteries are 100% efficient. We are considering a spherical cow.
Part of the issue with the more complicated studies is that the models are black boxes: one essentially has to trust the authors that everything inside the black box is OK, and that all assumptions have been adequately explained. [Note that Clack et al. (2015) do describe the model and assumptions used in MacDonald, Clack et al. (2016) in detail, and that the NREL study also contains substantial methodological detail.]
In contrast, because we are using a toy model, we can include the entire source code for our toy model in the Supplemental Information to our paper. And all of our input data is from publicly available sources. So you don’t have to trust us. You can look at our code and see what we did. If you don’t like our assumptions, modify the assumptions in our code and explore for yourself. (If you want the time series data that we used, please feel free to request them from me.)
Our key results are summarized in our Fig. 3:
Figure 3 | Changes in the amount of demand met as a function of energy storage capacity (0-32 days) and generation.
The two columns of Fig. 3 show the same data: the left column is on linear scales; the right column has a log scale on the horizontal axis. [In a wind/solar/storage-only system, meeting 99.9% of demand is equivalent to about 8.76 hours of blackout per year, and 99.99% is equivalent to about 53 minutes of blackout per year.]
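The bracketed reliability figures follow directly from the 8,760 hours in a non-leap year:

```python
# Converting reliability percentages into equivalent hours of unserved
# demand (blackout) per year, using 8,760 hours per non-leap year.
hours_per_year = 8760
for reliability in (0.999, 0.9999):
    blackout_hours = (1 - reliability) * hours_per_year
    print(f"{reliability:.2%} -> {blackout_hours:.2f} h/yr "
          f"({blackout_hours * 60:.0f} min/yr)")
```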
The left column of Fig. 3 shows, for various mixes of wind and solar, that the fraction of electricity demand that is met by introducing intermittent renewables at first goes up linearly — if you increase the amount of solar and/or wind power by 10%, the amount of generation goes up by about 10%, and is relatively insensitive to assumptions about electricity storage.
From the right column of Fig. 3, it can be seen that as the fraction of electricity demand satisfied by solar and/or wind exceeds about 80%, the amount of generation and/or electricity storage required increases sharply. It should be noted that even in the cases in which intermittent renewables supply 80% of electricity on the annual average, there are still times when wind and solar provide very little power; if blackouts are to be avoided, the gap-filling dispatchable electricity service must be sized nearly as large as the entire electricity system.
This ‘consider a spherical cow’ approach shows that satisfying nearly all electricity demand with wind and solar (and electricity storage) will be extremely difficult given the variability and intermittency in wind and solar resources.
On the other hand, if we could get enough energy storage (or its equivalent in load shifting) to satisfy several weeks of total U.S. electricity demand, then mixes of wind and solar might do a great job of meeting all U.S. electricity demand. [Look at the dark green lines in the three middle panels in the right column of Fig. 3.] This is more-or-less the solution that Jacobson et al. (2015) got for the electric sector in that work.
Our study, using very simple models and a very transparent approach, is broadly consistent with the findings of the NREL, NOAA, and Jacobson et al. (2015) studies, which were done using much more comprehensive, but less transparent, models. Our results also suggest that a primary factor differentiating the Jacobson et al. (2015) study from the NREL and NOAA studies is its assumption that large amounts of energy storage would be available. (The NOAA study showed that one could reduce emissions from the electric sector by 80% with wind and solar and without storage, provided sufficient back-up power was available from natural gas or some other dispatchable electricity generator.)
All of these studies share common ground. They all indicate that lots more wind and solar power could be deployed today and this would reduce greenhouse gas emissions. Controversies about how to handle the end game should not overly influence our opening moves.
There are still questions regarding whether future near-zero emission energy systems will be based on centralized dispatchable (e.g., nuclear and fossil with CCS) or distributed intermittent (e.g., wind and solar) electricity generation. Nevertheless, the climate problem is serious enough that for now we might want to consider an ‘all of the above’ strategy, and deploy as fast as we can the most economically efficient and environmentally acceptable energy generation technologies that are available today.
If energy storage is abundant, then that storage can fill the gap between intermittent electricity generation (wind and solar) and variable electricity demand. Jacobson et al. (PNAS, 2015) filled this gap, in part, by assuming that huge amounts of hydropower would be available.
The realism of these energy storage assumptions was questioned by Clack et al. (PNAS, 2017), who went further and asserted that Jacobson et al. (PNAS, 2015) contained modeling errors. A key issue centers on the capacity of hydroelectric plants: the huge amount of hydro capacity used by Jacobson et al. (PNAS, 2015) is necessary to achieve their result, yet seems inconsistent with the information provided in their tables.
Clack et al. (PNAS, 2017) in their Fig. 1, reproduced Fig. 4b from Jacobson et al. (2015), over a caption containing the following text:
This figure (figure 4B from ref. 11) shows hydropower supply rates peaking at nearly 1,300 GW, despite the fact that the proposal calls for less than 150 GW hydropower capacity. This discrepancy indicates a major error in their analysis.
(A dispatch of 1 TWh/hr is equivalent to dispatch at the rate of 1000 GW.)
Since the publication of Clack et al. (PNAS, 2017), Jacobson has asserted that the apparent inconsistency between what is shown in Fig. 4b of Jacobson et al. (PNAS, 2015) and the numbers appearing in their text and tables was in fact intentional, and thus that no error was made. Mark Z. Jacobson went so far as to claim that the statement that there was a major error in the analysis constituted an act of defamation that should be adjudicated in a court of law.
The litigious activities of Mark Z. Jacobson (hereafter, MZJ) have made people wary of openly criticizing his work.
I was sent a Powerpoint presentation looking into the claims of Jacobson et al. (PNAS, 2015) with respect to this hydropower question, but the sender was fearful of retribution should this be published with full attribution. I said I would take the work and edit it to my liking and publish it here as a blog post, if the primary author would agree. The primary author wishes to remain anonymous.
I would like to stress here that this hydro question is not a nit-picking side-point. In the Jacobson et al. (PNAS, 2015) work, they needed the huge amount of dispatchable power represented by this dramatic expansion of hydro capacity to fill the gap between intermittent renewable electricity generation and variable electricity demand.
In the text below, Jacobson et al. (E&ES, 2015) refers to:
Jacobson MZ, et al. (2015) 100% clean and renewable wind, water, and sunlight (WWS) all-sector energy roadmaps for the 50 United States. Energy Environ Sci 8:2093–2117.
Jacobson et al (PNAS, 2015) refers to:
Jacobson MZ, Delucchi MA, Cameron MA, Frew BA (2015) Low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes. Proc Natl Acad Sci USA 112:15060–15065.
and Clack et al (PNAS, 2017) refers to:
Clack CTM, Qvist SA, Apt J, Bazilian M, Brandt AR, Caldeira K, Davis SJ, Diakov V, Handschy MA, Hines PDH, Jaramillo P, Kammen DM, Long JCS, Morgan MG, Reed A, Sivaram V, Sweeney J, Tynan GR, Victor DG, Weyant JP, Whitacre JF (2017) Evaluation of a proposal for reliable low-cost grid power with 100% wind, water, and solar. Proc Natl Acad Sci USA, DOI: 10.1073/pnas.1610381114.
Jacobson et al. (E&ES, 2015) serves as the primary basis of the capacity numbers in Jacobson et al. (PNAS, 2015)
May 25, 2015: Mark Z. Jacobson et al. publish paper in Energy & Environmental Science (hereafter E&ES), providing a “roadmap” for the United States to achieve 100% of energy supply from “wind, water, and sunlight (WWS).”
To demonstrate that the roadmaps in Jacobson et al. (E&ES, 2015) can reliably match energy supply and demand at all times, that study cites a forthcoming study (Ref. 2) that uses a “grid integration model.”
Ref. 2 is the then-forthcoming PNAS paper, “A low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes,” at that time “in review” at PNAS.
This establishes the link between the two papers:
(1) The E&ES paper provides the “roadmap” describing the mix of renewable energy resources needed to supply the US;
(2) The PNAS paper then attempts to demonstrate the operational reliability of this mix of resources.
Jacobson et al. (E&ES, 2015) makes it clear that ‘capacity’ refers to ‘name-plate capacity’
Table 2 of the E&ES paper explicitly describes the “rated power” and “name-plate capacity” of all renewable energy and energy storage devices installed in the 100% WWS roadmap for the United States. Both of these terms refer to the maximum instantaneous power that a power plant can produce at any given moment. These are not descriptions of average output, and nowhere in the table’s lengthy description does Jacobson et al. (E&ES, 2015) claim that hydroelectric power is described differently in this table than the other resources.
The table states that the total nameplate capacity, or maximum rated power output, of hydroelectric generators in Jacobson et al. (E&ES, 2015) is 91,650 megawatts (MW). In addition, column 5 states that 95.87% of this final installed capacity was already installed in 2013. Only 3 additional new hydroelectric plants, at a size of 1,300 MW each, for a total addition of 3,900 MW over existing hydroelectric capacity, are included in Jacobson et al. (E&ES, 2015).
Jacobson et al. (E&ES, 2015) describes hydro capacity assumptions in some detail
Section 5.4 of the E&ES paper provides additional textual description of the WWS roadmap’s assumptions regarding hydroelectric capacity.
The text states that the total existing hydroelectric power capacity assumed in the WWS roadmap is 87.86 gigawatts (GW; note 1 GW = 1,000 MW).
It further states that only three new dams in Alaska with a total capacity of 3.8 GW are included in the final hydroelectric capacity in the WWS roadmap.
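These two Section 5.4 figures can be checked against the Table 2 total quoted earlier; they agree to within rounding:

```python
# Cross-checking the E&ES hydro capacity numbers quoted above (all in GW).
existing_capacity = 87.86   # existing U.S. hydro capacity (Section 5.4)
new_alaska_dams = 3.8       # three new dams in Alaska (Section 5.4)
total_roadmap_capacity = existing_capacity + new_alaska_dams
# ~91.66 GW, matching Table 2's 91.65 GW nameplate total up to rounding.
print(round(total_roadmap_capacity, 2))
```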
Note that throughout this text, Jacobson et al. (E&ES, 2015) distinguish between “delivered power,” a measure of average annual power generation, and “total capacity,” a measure of maximum instantaneous power production capability. It is this latter “total capacity” figure of 87.86 GW that matches the “name-plate capacity” in Table 2 of the 100% WWS roadmap for 2050.
The text explicitly states that the average delivered power from hydroelectric generators is 47.84 GW on average in 2050.
In Jacobson et al. (E&ES, 2015), the authors state both the maximum power production capability from hydroelectric power assumed in the WWS roadmap and distinguish this from the separately reported average delivered power from these facilities over the course of a year.
Most of the capacity numbers appearing in Jacobson et al. (2015) come from the U.S. Energy Information Administration, which defines what is meant by the capacity represented by its numbers:
Generator nameplate capacity (installed): The maximum rated output of a generator, prime mover, or other electric power production equipment under specific conditions designated by the manufacturer. Installed generator nameplate capacity is commonly expressed in megawatts (MW) and is usually indicated on a nameplate physically attached to the generator.
Generator capacity: The maximum output, commonly expressed in megawatts (MW), that generating equipment can supply to system load, adjusted for ambient conditions.
The remainder of Section 5.4 discusses several possible ways in which additional hydroelectric power capacity could be added in the United States without additional environmental impact, in case it is not possible to increase the average power production from existing hydroelectric dams, as Jacobson et al. (E&ES, 2015) assume is possible.
This text describes the potential to add power generation turbines to existing unpowered dams and cites a reference estimating a maximum of 12 GW of additional such capacity possible in the contiguous 48 states.
The text also describes the potential for new low-power and small hydroelectric dams, citing a reference estimating that 30-100 GW of average delivered power could be developed, or roughly 60-200 GW of total maximum power capacity at Jacobson et al.'s (E&ES, 2015) assumed average production of 52.5% of maximum power for each hydroelectric generator.
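The conversion from average delivered power to implied nameplate capacity is a single division by the assumed 52.5% capacity factor:

```python
# Converting average delivered power (GW) to implied nameplate capacity (GW)
# using the paper's assumed hydro capacity factor of 52.5%.
capacity_factor = 0.525
for delivered_gw in (30, 100):
    capacity_gw = delivered_gw / capacity_factor
    print(delivered_gw, "->", round(capacity_gw))  # ~57 and ~190, i.e. roughly 60-200 GW
```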
Nowhere in this lengthy discussion of the total hydroelectric capacity assumed in the WWS roadmap and additional possible sources of hydroelectric capacity does Jacobson et al. (E&ES, 2015) mention the possibility of adding over 1,000 GW of additional generating capacity to existing dams by adding new turbines.
The May 2015 E&ES paper by MZJ et al. explicitly states that the maximum possible instantaneous power production capacity of hydroelectric generators in the 100% WWS roadmap for the 50 U.S. states is 91.65 GW.
Jacobson et al. (E&ES, 2015) also explicitly distinguishes maximum power capacity from average delivered power in several instances. The latter is reported as 47.84 GW on average in 2050 for the 50 U.S. states.
Additionally, the authors explicitly state that 3.8 GW of the total hydro capacity in the 50 state WWS roadmap comes from new dams in Alaska. This is in addition to 0.438 GW of existing hydro capacity in Alaska and Hawaii as reported in the paper’s Fig. 4. This is important to note, because Alaska and Hawaii are excluded from the simulations in Jacobson et al. (PNAS, 2015).
The E&ES companion paper to Jacobson et al. (PNAS, 2015) therefore explicitly establishes that the maximum possible power capacity that could be included in the PNAS paper for the contiguous 48 U.S. states is 87.412 GW (i.e., 91.65 GW in the 100% WWS roadmap for the 50 U.S. states, less 3.8 GW of new hydropower dams in Alaska and 0.438 GW of existing hydro capacity in Alaska and Hawaii).
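The 87.412 GW figure is a simple subtraction using the numbers already quoted from the E&ES paper:

```python
# Deriving the maximum hydro capacity available to the PNAS (48-state) study
# from the E&ES 50-state roadmap numbers (all in GW).
roadmap_50_states = 91.65   # total nameplate hydro capacity, E&ES Table 2
new_alaska_dams = 3.8       # new dams in Alaska (excluded from the PNAS study)
existing_ak_hi = 0.438      # existing Alaska & Hawaii capacity (also excluded)
conus_capacity = roadmap_50_states - new_alaska_dams - existing_ak_hi
print(round(conus_capacity, 3))  # 87.412 GW
```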
Summary of key relevant facts about Jacobson et al. (E&ES, 2015)
In summary, the May 2015 Jacobson et al. (E&ES, 2015) paper establishes several facts:
The E&ES paper explicitly states that the maximum possible instantaneous power production capacity of hydroelectric generators in the 100% WWS roadmap for the 50 U.S. states is 91.65 GW (inclusive of imported hydroelectric power from Canada).
The E&ES paper also explicitly distinguishes maximum power capacity from average delivered power. The later is reported as 47.84 GW on average in 2050 for the 50 U.S. states.
The E&ES paper explicitly states that 3.8 GW of the total hydropower capacity in the 50 state WWS roadmap comes from new dams in Alaska and reports that existing capacity in Alaska and Hawaii totals 0.438 GW. This is relevant, because Alaska and Hawaii are excluded from the simulations in the Jacobson et al. (PNAS, 2015) which focuses on the contiguous 48 U.S. states.
The E&ES companion paper to Jacobson et al. (PNAS, 2015) therefore explicitly establishes that the maximum possible power capacity that could be included in the PNAS paper in the contiguous 48 U.S. states is no more than 87.412 GW.
Nowhere in Jacobson et al. (E&ES, 2015) do the authors discuss or contemplate adding more than 1,000 GW of generating capacity to existing hydropower facilities by adding new turbines and penstocks. In contrast, the paper explicitly discusses several other possible ways to add a much more modest capacity of no more than 200 GW of generating capacity by constructing new low-power and small hydroelectric dams.
Jacobson et al. (E&ES, 2015) establishes that Jacobson et al. (PNAS, 2015) is a companion to the E&ES paper, and that the purpose of the PNAS paper is to confirm that the total installed capacity of renewable energy generators and energy storage devices described in the E&ES paper's 100% WWS roadmap can reliably match total energy production and total energy demand at all times. The total installed capacities for each resource described in the E&ES paper, including hydroelectric generation, therefore form the basis for the assumed maximum generating capacities in the PNAS paper.
Jacobson et al. (PNAS, 2015) relies on hydro capacity numbers from Jacobson et al. (E&ES, 2015)
December 8, 2015: The paper “Low-cost solution to the grid reliability problem with 100% penetration of intermittent wind, water, and solar for all purposes” by Jacobson et al. (PNAS, 2015) is published in PNAS as the companion to the May 2015 Jacobson et al. (E&ES, 2015) paper.
Jacobson et al. (PNAS, 2015) describes existing (year-2010) hydro capacity as 87.86 GW.
The text further establishes that the installed capacities for each generator type for the continental United States (abbreviated “CONUS” in the text) are based on ref. 22, which is Jacobson et al. (E&ES, 2015).
“Installed capacity” is a term of art referring to maximum possible power production, not average generation. The paper’s “Materials and Methods” section states that the “installed capacities” of each renewable generator type are described in Table S2 of the Supplemental Information of Jacobson et al. (PNAS, 2015).
Table S2 of the Supplemental Information for the PNAS paper explicitly states the “installed capacity” or maximum possible power generation of each resource type in the Continental United States used in the study.
The explanatory text for this paper again establishes that all installed capacities for all resources except solar thermal and concentrating solar power (abbreviated “CSP” in the text) are taken from Jacobson et al. (E&ES, 2015), adjusted to exclude Hawaii and Alaska. Jacobson et al. (E&ES, 2015) is ref. 4 in the Supplemental Information for Jacobson et al. (PNAS, 2015).
Total installed hydroelectric capacity in Table S2 of Jacobson et al. (PNAS, 2015) is stated as 87.48 GW. This is close to the 87.412 GW of total nameplate power capacity of hydroelectric generators in the 50 U.S. states roadmap, less the new hydro dams in Alaska and existing hydropower capacity in Alaska and Hawaii.
Footnote 4 notes that hydro is limited by ‘annual power supply’ but does not mention that instantaneous generation of electricity is also limited by hydro capacity.
Additionally, columns 5 & 6 of Table S2 separately state the “rated capacity” per device and the total number of existing and new devices in 2050 for each resource.
“Rated capacity” is a term of art referring to the maximum possible instantaneous power production for a power plant.
The rated capacity for each hydroelectric device or facility is stated as 1,300 MW, and the total number of hydroelectric devices is stated as 67.3. This yields essentially 87,480 MW, or 87.48 GW, the installed capacity reported for hydroelectric power in column 3 (67.3 is 87,480 MW / 1,300 MW, rounded). This provides further corroboration that the 87.48 GW of installed capacity reported refers to the maximum rated power generation capability of all hydroelectric generators in the simulation, not their average generation, as MZJ asserts.
Nowhere in this table, its explanatory text in the Supplemental Information, or the main text of the PNAS paper do the authors state that they assume the addition of more than 1,000 GW of hydroelectric generating turbines to existing hydroelectric facilities, as MZJ would later assert.
In contrast, the table establishes that total installed hydroelectric capacity in the continental United States is assumed to increase from 87.42 GW in 2013 to 87.48 GW in 2050, an increase of only 0.06 GW (60 MW).
The hydro power capacity represented in the Jacobson et al. (PNAS, 2015) tables is inconsistent with the amount of hydro capacity used in their simulations
Despite explicitly stating that the maximum rated capacity for all hydropower generators in the PNAS paper’s WWS system for the 48 continental United States is 87.48 GW, Fig. 4 of Jacobson et al. (PNAS, 2015) shows hydropower facilities generating more than 1,000 GW of power output sustained over several hours on the depicted days.
Examination of the detailed LOADMATCH simulation results (available from MZJ upon request) reveals that the maximum instantaneous power generation from hydropower facilities in the simulations performed for Jacobson et al. (PNAS, 2015) is 1,348 GW, or 1,260.5 GW more (about 15 times more) than the maximum rated capacity reported in Table S2.
It is therefore clear that the LOADMATCH model does not constrain maximum generation from hydropower facilities to the 87.48 GW of maximum rated power capacity stated in Table S2.
(Note that hydropower facilities also dispatch at 0 GW for many hours of the simulation. It therefore appears that the LOADMATCH model applies neither a maximum generation constraint of 87.48 GW nor any kind of plausible minimum generation constraint for hydropower facilities.)
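The constraint at issue is simple to state in code. The sketch below is a hypothetical illustration, not LOADMATCH's actual code: `residual_load` stands for the load left over after wind and solar in some hour, and the constrained version simply caps dispatch at rated capacity.

```python
# Hypothetical sketch of a maximum-generation (capacity) constraint on hydro
# dispatch. This is NOT LOADMATCH code; 'residual_load' is the load remaining
# after wind and solar in a given hour, in GW.

HYDRO_CAPACITY_GW = 87.48   # maximum rated hydro capacity from Table S2

def hydro_dispatch_unconstrained(residual_load):
    # No capacity cap: hydro simply fills whatever gap remains.
    return max(0.0, residual_load)

def hydro_dispatch_constrained(residual_load):
    # With the cap: dispatch can never exceed rated capacity; any remaining
    # gap must be met by storage, other generators, or go unserved.
    return min(max(0.0, residual_load), HYDRO_CAPACITY_GW)

for gap_gw in (50.0, 500.0, 1348.0):
    print(gap_gw, hydro_dispatch_unconstrained(gap_gw),
          hydro_dispatch_constrained(gap_gw))
```

With the cap in place, a 1,348 GW residual load could draw at most 87.48 GW from hydro, which is why the presence or absence of this one constraint matters so much to the simulation results.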
Summary of key facts related to hydro capacity in Jacobson et al. (PNAS, 2015)
In summary, the December 8, 2015 PNAS paper establishes the following facts:
The installed capacity used in the simulations in Jacobson et al. (PNAS, 2015) is reported in Table S2 of the Supplemental Information for that paper. The total installed hydroelectric capacity or maximum possible power generation reported in Table S2 is stated as 87.48 GW.
This maximum capacity figure is also separately corroborated by multiplying the rated power generating capacity per device by the total number of devices reported in Table S2, which also yields a maximum rated power production from all hydroelectric generators of 87.48 GW.
Table S2 states that the authors assume only 0.06 GW of additional hydroelectric power capacity is added between 2013 and 2050.
Nowhere in the text of Jacobson et al. (PNAS, 2015), its Supplemental Information document, or the explanatory text for Table S2 do the authors state that the term “installed capacity” or “rated capacity per device” for each resource reported in the table is used in any other way than the standard terms of art indicating maximum power generation capability. Nor do the authors establish that total installed capacity of hydroelectric generation is described differently in this table than the other resources and refers instead to average annual delivered power as MZJ claims.
Jacobson et al. (PNAS, 2015) also references and uses Jacobson et al. (E&ES, 2015) to establish the installed power generating capacity of each resource in the simulations performed in the PNAS paper, with the explicit exception of solar thermal and concentrating solar power. The maximum rated power from hydroelectric generation reported in Table S2, 87.48 GW, is consistent (within 68 MW) with the 87.412 GW of name-plate generating capacity reported in the E&ES paper for the 50 U.S. states, less three new hydropower dams in Alaska and existing hydro capacity in Alaska and Hawaii. Recall also that the average delivered power from hydroelectric generators was explicitly and separately stated in the E&ES paper as 47.84 GW for the 50 U.S. states, and is therefore no more than 47 GW for the 48 continental U.S. states. The reported “installed capacities” for hydroelectric generation in PNAS Table S2 are therefore entirely consistent with the “name-plate capacity” reported in the E&ES paper, and are not consistent with the average delivered power from hydroelectric generation reported there.
Despite establishing a maximum rated power capacity of 87.48 GW, the simulations performed for Jacobson et al. (PNAS, 2015) dispatch hydropower at as much as 1,348 GW, or 1,260.5 GW more than the maximum rated capacity reported in Table S2.
Given available information in the published papers, a reasonable reader should interpret the “installed capacity” or “rated capacity” figures explicitly reported in Table S2 of the Jacobson et al. (2015) paper as referring to maximum generating capacity, because that is the definition used by the studies reported on in the table.
The assertion that the 1,348 GW of maximum hydro generation used in the LOADMATCH simulations for the PNAS paper constitutes an intentional but entirely unstated assumption, rather than a modeling error (e.g., a failure to impose a suitable capacity constraint on maximum hydro generation in each time period), is, as we understand it, the primary basis for MZJ's lawsuit alleging that Christopher Clack and the National Academy of Sciences (publisher of PNAS) intentionally misrepresented his work and thus defamed him.
A reading of the E&ES and PNAS papers establishes that MZJ et al. did not omit an explicit description of the total rated power capacity of hydroelectric facilities. In point of fact, the authors establish in multiple ways that the maximum power capacity for hydroelectric facilities in the PNAS WWS study for the 48 continental United States is 87.48 GW, not the 1,348 GW actually dispatched by the LOADMATCH model.
Thus, the information in the E&ES and PNAS papers does not appear to be consistent with MZJ's assertion that he and his coauthors intentionally meant to add more than 1,000 GW of generating capacity to existing hydropower facilities in their model. (It is outside the scope of this analysis to discuss the plausibility of adding more than 1,000 GW of hydro capacity to existing dams.) Nor does the available evidence indicate that they intentionally assumed more than 1,000 GW of additional hydro capacity and then simply failed to disclose this assumption at any point in either of the two papers. Such a failure to explicitly describe so large and substantively important an assumption to readers and peer reviewers might itself constitute a breach of academic standards.
The operation of the LOADMATCH model is inconsistent with the maximum power generating capacity of hydropower facilities explicitly stated in Jacobson et al. (PNAS, 2015) and in the companion paper, Jacobson et al. (E&ES, 2015) upon which the generating capacities are based. Whether you call failure to impose a suitable capacity constraint on maximum hydro generation in each time period a “modeling error” is up to you, but that would seem to be an entirely reasonable interpretation based on the available facts.