A proposal for recommending and ranking scientific literature

This blog post was motivated by a Twitter conversation involving Andy Revkin.

Scientists need a system to help them find the papers that are really worth taking the time to read carefully.

Right now, working scientists and those who would like to follow scientific literature have difficulty wading through the thousands of papers that are published every day to get to the papers that are worth reading.

The problem is caused by the emphasis on quantitative publication-based metrics to assess scientific productivity. These metrics give authors an incentive to publish many papers describing micro-advances, and to divide a single integrated study into several papers. (These metrics also provide incentives to add co-authors who have contributed little, but that is another story.)

Working scientists, and people who would like to follow the work of scientists, need help.

The following proposal is a rough sketch and not all of the details have been worked out.

[Photo: Moss, Wunderlich Park.]

The basic idea is to create an online platform that would help people to understand what they should read to be up on the scientific conversation in a scientific topic area or sub-discipline.

The platform would be about recommending reading.

It would not be about criticizing content in the literature, and it would not be about saying what not to read. Aside from viewing recommendation statistics, the only action a user could take would be to recommend a paper to people interested in a topic area.

It is inspired by things like Stack Exchange and Reddit. In the ideal set-up, perhaps on some platform similar to Google Scholar, there would be a way to tell the system that you recommend that people interested in, for example, metamorphic petrology take the time to read a particular paper.

The key would be enabling recommendations to be sorted in different ways.

Disciplinary expertise. Recommendations from different people could be weighted differently depending on how many (weighted) recommendations their own work has received within that topic area (sub-discipline). So, a metamorphic petrologist whose work has received many recommendations would have more influence on the ranking of papers within the metamorphic petrology topic area.

Different time periods. One could look at recommendations as a time series, and use the net present value of weighted recommendation instances to sort reading recommendations. If the user wanted to see what the most important papers were on the decadal time scale, they could discount on a decadal time scale. If the user wanted to see what the most important papers were over the last few weeks, they could discount on a one-week time scale. The time discounting could also be used to reduce the weight of recommendations from people who make recommendations very frequently.

Sorting and searching. While the institution that hosts the database should provide basic search functions, if the resulting database is open access, as it should be, many people could provide services that filter and sort results in different ways. One could imagine constructing associations between topic areas by looking at papers recommended in more than one topic area, and searching for important papers to read based on those associations.
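To make the expertise weighting and time discounting concrete, here is a minimal sketch in Python. The exponential half-life discounting and the per-recommender weights are illustrative assumptions of mine, not a worked-out design for the platform:

```python
import math

def recommendation_score(rec_times, rec_weights, now, half_life):
    """Time-discounted, expertise-weighted score for one paper.

    rec_times:   time of each recommendation (e.g., Unix seconds)
    rec_weights: influence of each recommender, e.g., derived from the
                 weighted recommendations their own work has received
                 in this topic area
    half_life:   discounting time scale, in the same units as rec_times
                 (about a week to surface recent papers, about a decade
                 to surface the classics)
    """
    decay = math.log(2) / half_life
    return sum(w * math.exp(-decay * (now - t))
               for t, w in zip(rec_times, rec_weights))

# Example: one paper recommended a day ago and a year ago, ranked with
# a one-week half-life; the recent recommendation dominates.
DAY = 86_400
score = recommendation_score([0, 364 * DAY], [1.0, 1.0],
                             now=365 * DAY, half_life=7 * DAY)
print(f"{score:.3f}")
```

Ranking papers within a topic area would then amount to sorting by this score, computed with a one-week half-life to surface what is hot now, or a decadal half-life to surface the classics.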

Questions remain:

— How should other aspects of the platform be designed, including how to create topic areas within the system?

— To what extent can or should anonymity be provided?

— How can we design a system such that when people try to game the system, they are doing what is best for the system?

Of course, proposals like this suffer from a chicken-and-egg problem. If everyone were already using a system like this, the system would be useful and busy scientists would have an incentive to use it. But if nobody is using the system, then nobody has an incentive to contribute to it. Therefore, a system like this would need to be initiated by people with some standing, perhaps Google, professional associations, or national academies.

It would be great if there were some kind of community-wide reading recommendation service with the granularity to be useful even on topics of extremely narrow interest.


On choosing problems to work on

This rant is from an email sent to my research group. It seems that some of us have been asking questions like, “What can I do with a climate model that has not already been done?” If this is the question we are asking, then we are asking the wrong question.

[Photo: Carnegie postdocs Clara Garcia-Sanchez and Anna Possner using fluid dynamical models to investigate geophysical limits to wind power.]

Most people in the world are focused on solving pressing problems (how to provide for their families, how to get access to health care, etc.). Most people are faced with pressing problems that they have to solve, not problems they choose to solve.

Some people approach their scientific or technical work choosing to focus on pressing problems (“What can I be doing to most effectively help a transition to a clean energy system?”) but other people approach their work thinking, “I have a hammer; are there any nails around that I might be able to hammer on?” — Or even worse, “Are there any nails around that other people have already whacked at, but that I might be able to give another whack or two?”

If you are not working on a problem that you feel is important and pressing, then you are probably working on the wrong problem. (The reason the problem is important could be for fundamental scientific understanding, and not necessarily utilitarian concern.)

It is important to start with the problem, not the tool.

Once you have identified the problem, then your experience with specific tools might inform how you can most effectively contribute to problem solution, but the starting point should be the problem, not the tool.


An intermediate position is to ask: What are the important problems that this tool could contribute to solving? Realistically, this is where we are with much of our work.

The main point is: If you are having trouble finding important problems to address with the tools you already know how to use, that is probably a sign that it is time to learn to use new tools. (This is why I have been learning about economics and energy system modeling.)

You should not just address ever more arcane and irrelevant problems using the tools you already know how to use.


In short:

The world is replete with pressing problems. If you are not working on at least one of these problems, there is a good chance you are wasting your time and you should be doing something else.


If you have recently gotten your PhD or will get your PhD within the next year or two, and are interested in trying to address important problems using new tools or approaches, please apply for a postdoc job in my group.



What do you do when a poorly expressed idea is quoted out of context by people with a political agenda?

I woke up this morning to read The Federalist quoting me out of context, putting words in my mouth that I did say but wish I had worded more carefully. For those not familiar with The Federalist, it is a right-wing online magazine.

The paragraph in question was:

“This opens up the possibility that we could stabilize the climate for affordable amounts of money without changing the entire energy system or changing everyone’s behavior,” Ken Caldeira, a senior scientist at the Carnegie Institution for Science, told The Atlantic.

Here is the full email I sent to Robinson Meyer, writer for The Atlantic:

Rob,

I am no expert in systems costing, but I read the paper as saying that Direct Air Capture of carbon dioxide would cost somewhere in the range of $100 to $250 per ton.

If these costs are real, it is an important result.

If you look at this paper (and this is what I could find quickly on the web)

https://static1.squarespace.com/static/54ff9c5ce4b0a53decccfb4c/t/592bd365414fb5ddd39de548/1496044396189/Guivarch%2C+Rogelj+-+Carbon+prices+2C.pdf

Carbon prices projected for this century look like this for 2 C stabilization from a business-as-usual scenario:

[Figure: projected carbon prices for 2 C stabilization, from the linked Guivarch and Rogelj paper.]

If you notice, by the end of the century, these integrated assessment models project carbon prices of many hundreds if not thousands of dollars per ton CO2.

The IPCC estimated that these levels of carbon prices could shave 5% off of global GDP.

The results of David Keith and colleagues suggest that carbon prices could never go above the $100 to $250 per ton CO2 range, because it would be economic to capture CO2 from air at that price.

This suggests that the hardest to decarbonize parts of the economy (e.g., steel, cement manufacture, long-distance air travel, etc) might continue just as they are now, and we just pay for CO2 removal.

To put these prices in context, $100 per ton CO2 works out to about $1 per gallon of gasoline. This suggests that a fee of somewhere between $1 and $2.50 per gallon would allow people to drive their ordinary cars, and we could just suck the CO2 out of the atmosphere later.

This opens up the possibility that we could stabilize climate for affordable amounts of money without changing the entire energy system or changing everyone’s behavior.

To give more context, global CO2 emissions are something like 36 GtCO2 per year. If we were to remove all of that with air capture at $100 per ton CO2, that works out to $3.6 trillion.

Depending on how you count things, global GDP is somewhere in the neighborhood of $75 to $110 trillion. So, to remove all of this CO2 would cost something like 3 to 5% of global GDP (if the $100 per ton number is right). This puts an upper bound on how expensive it could be to solve the climate problem, because there are lots of ways to reduce emissions for less than $100 per ton.

In any case, it makes it much easier to deal with the hardest to decarbonize parts of the economy.

Again, this is all with the caveat that I am no expert in costing of engineering systems. But, if this paper is correct, the result seems important to me.

Best,
Ken
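Stepping out of the email for a moment: the back-of-envelope arithmetic above is easy to check. Here is a minimal sketch; the only number not taken from the email is the assumed figure of roughly 8.9 kg of CO2 released per gallon of gasoline burned, which is an assumption I am adding here.

```python
# Check the $-per-gallon, total-cost, and %-of-GDP numbers in the email.
KG_CO2_PER_GALLON = 8.9      # assumed: ~8.9 kg CO2 per gallon of gasoline

for price in (100, 250):     # $ per ton CO2, from Keith et al.
    print(f"${price}/tCO2 -> ${price * KG_CO2_PER_GALLON / 1000:.2f}/gallon")

emissions_t = 36e9           # 36 GtCO2/yr of global emissions, in tons
cost = emissions_t * 100 / 1e12
print(f"Removing everything at $100/tCO2: ${cost:.1f} trillion/yr")

for gdp in (75, 110):        # global GDP range, $ trillion
    print(f"  ... {100 * cost / gdp:.0f}% of ${gdp} trillion GDP")
```

This reproduces the numbers in the email: roughly $0.89 to $2.23 per gallon, $3.6 trillion per year, and 3 to 5% of global GDP.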

My colleagues and I have been spending a lot of time thinking about how to decarbonize the hardest-to-decarbonize parts of the energy system. We have a paper in press on this very topic, which we expect out later this month.

My positions are fairly well known. In MIT Technology Review, I wrote in 2015:

It is always going to be easier and cheaper to avoid making a mess than to clean up one we have already made. It is easier to remove carbon dioxide from a smokestack, where the exhaust is 10 percent carbon dioxide, than from the atmosphere, which is 0.04 percent carbon dioxide.

In that piece, I went on to write:

When the Constitution of the United States of America was written, it seemed inconceivable that people would be released from slavery or that women would vote. Just a few years before gay marriage became the law of the land, it would have been impossible to predict such a sweeping change in social attitudes. For us to even have a chance of addressing the climate problem, we’ll need another huge change in public attitudes. It will need to be simply unacceptable to build things with smokestacks or tailpipes that dump waste into the air. This change could happen.

The point of my poorly worded quote was not that we don’t need revolutionary changes in our energy system, but that there are some very hard-to-deal-with sources of CO2 emissions, like long-distance aviation, that could be addressed by using hydrocarbon fuels coupled with contemporaneous capture of CO2 by devices like those being investigated by David Keith and colleagues.

As recently as 1 June 2018, I wrote an email to Peter Frumhoff of the Union of Concerned Scientists, urging that organization to put out a statement saying:

Today’s emissions policies should be based on the assumption that most [of] our CO2 emissions will remain in the environment for hundreds of thousands of years. Emissions policies should not be made on the assumption that future generations will clean up our mess using carbon dioxide removal technologies and approaches.

There is a big difference in using direct air capture of CO2 to offset contemporaneous emissions and using direct air capture of CO2 to argue that we can continue emitting CO2 today in the hopes that someone else will clean up our mess in the future.


As a little egomaniacal side note, I would like to point out that Caldeira and Rampino (1990) may be the first paper to point out the approximately 300,000-year time scale for removal of atmospheric CO2 concentration perturbations by silicate rock weathering. This estimate has held up pretty well over the intervening decades.



What are the lessons learned?

When speaking or writing an email to a journalist, think about how each sentence could read when taken out of context. Even if you trust the journalist to represent your views well (and I think Robinson Meyer did an excellent job), somebody can later take a carelessly worded statement and use it out of context.

Also, we are busy, and when requests come in, we often try to respond with something quickly so we can get back to our day jobs (which in my case happens to be scientific and technical research). I should slow down a little bit and take the time needed to write more careful prose.


So, what do you do when a poorly expressed idea is quoted out of context by people with a political agenda?

My answer: “Write a blog post about it, and then Tweet and move on.”


Steps to writing a scientific paper based on model results

My postdocs and I are having a discussion about how to be more efficient in producing high-impact papers in quality peer-reviewed journals. I sent the steps in my preferred process to them, which are repeated below.

[Photo by Jess Barker.]

The steps are similar for the observationally based work we do. The main difference is that obtaining additional observations is usually much harder than performing additional model simulations.

Steps to writing a scientific paper

1. Play until you stumble on something of interest. Obtain initially promising results. Alternatively, think about what paper people would find useful that you could write but has not yet been written.

2. Write a provisional draft abstract for the proposed paper. This defines the problem, the scope of work, the expected results, and why it is important or interesting. What is the main point of the study and why should anyone care? This is a good time to start thinking about the target journal.

3. Write the introduction of the proposed paper. This forces you to do a literature review and understand what else is out there. It also forces you to write up the problem statement while you still think the problem is important. Usually, by the end of the study, the result seems trivial and obvious, and the problem unimportant.

4. Do additional simulations, measurements, analyses, etc., needed to test the basic hypothesis and produce data for tables and figures. Attempt to get enough of a mechanistic understanding so that the central result starts to seem trivial and obvious.

5. Create rough drafts of figures. Make an abundance of figures, assuming that some will be in the main paper, some in the supporting material, some in talks, and some not used at all. Make a preliminary decision about which figures will be in the main paper.

6. Write the first draft of the paper around the figures. Do this before iterating on figure improvement. The standard outline is: Abstract, Introduction, Methods, Results, Discussion, Conclusions. The Results section should describe the results produced by the model. Usually, the Discussion section should discuss the relevance of those model results to the real world. Sometimes the exposition is smoother if each result in a sequence is presented and then discussed in turn. This is OK if care is taken to be clear about when you are referring to the model and when you are referring to the real world.

7. Write figure captions. Figure captions are often among the parts of the paper read by the broadest audience. Place in figure caption a one sentence statement of the main point you expect the reader to derive from looking at the figure. Sometimes editors pull this sentence out, but they often leave it in. In any case, you should understand the main point of each figure.

8. Iterate improvement of the draft of the paper and main paper figures until the process starts to asymptote. Do additional simulations and make additional figures as necessary. Take care to make your figures beautiful. Beautiful figures not only communicate scientific content well to a broad audience, but also communicate that you care about your work and strive for a high level of excellence. Consider target journal guidelines and what should go in the supporting material and what should be in the main body of the paper.

9. Wherever possible, replace jargon and acronyms with ordinary English. Insofar as it is possible, improve felicity of expression. Write good prose. This is especially important in the abstract, first and last paragraphs, and figure captions.

10. Before submission, double check that the main story of the paper can be obtained by reading (1) the abstract, (2) the first paragraph, (3) the last paragraph, and (4) the figure captions. This is already more than what most ‘readers’ of your paper will actually read. Only experts will read the entire paper. Most readers will just want the idea of the paper and the basic results.

11. Make sure all codes, intermediate data, etc., are packaged up in a single directory (see the sketch after this list). This is done both to facilitate making modifications later and to provide maximum transparency into, and reproducibility of, the scientific process.

12. Write cover letter to editor and submit. Stress the new finding and to whom this finding will be of interest. Suggest knowledgeable reviewers who you have not collaborated with recently. If you have written papers on related topics, people who have cited your previous papers would be good candidate reviewers.
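Regarding step 11, here is a minimal sketch of what such a packaging check might look like. The directory layout is an illustrative assumption, not a prescribed standard:

```python
from pathlib import Path

# One illustrative layout: everything needed to redo the paper in one place.
EXPECTED = ["code", "data/raw", "data/intermediate", "figures", "manuscript"]

def check_paper_package(root):
    """Print any expected subdirectory that is missing from the package."""
    for sub in EXPECTED:
        if not (Path(root) / sub).is_dir():
            print(f"missing: {root}/{sub}")

check_paper_package("2018_my_paper")  # hypothetical paper directory
```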


The key is to have rough figures and a rough draft on paper early. It is much easier to improve existing text and figures than to start with a blank page.

Also key is recognizing when your manuscript is beginning to asymptote. A sloppy, error-filled manuscript will give reviewers the feeling that your work is sloppy. However, perfectionism can mean low productivity. Striking the correct balance is hard.

Another thing is to do Step One 20 times. If you have 20 ideas for papers you can pick the best one. If you have only one idea, it is unlikely to be a great idea. People who have only one idea at a time tend to write papers that are footnotes to their previous papers, and then have careers that descend into meaningless detail that nobody cares about.

You might also want to take a look at this advice on writing scientific papers from George M. Whitesides, and this advice on the 5 most pivotal paragraphs in a scientific paper by Brian McGill.


How much ice is melted by each carbon dioxide emission?

I am refining and extending a back-of-envelope calculation here that I did for an interesting discussion on the Carbon Dioxide Removal Google group about Marzeion et al. (2018), which concluded that mountain glaciers contribute about 15 kg of ice melt for each kg of CO2 released.

[Figure 2 from Winkelmann et al. (2015), indicating how much Antarctic ice loss is projected to occur as a result of different amounts of cumulative carbon dioxide emission over the next one, three, and ten millennia. Note that 10,000 GtC of cumulative emissions results in about 60 m (about 200 ft) of sea-level rise over the long term (taking additional contributions from Greenland and mountain glaciers into account).]

According to the USGS, there are 24,064,000 km3 of ice and snow in the world.

According to Winkelmann et al. (2015), it would take about 10,000 GtC to melt (nearly) all of this ice.

If we divide 24,064,000 km3  by 10,000 GtC, assume the density of the ice is 1 kg per liter, and do the appropriate unit conversions, we can conclude that each kg of carbon emitted as CO2 will ultimately melt about 2,400 kg of ice.  This is a huge number.

Another way of expressing this is that each pound of carbon released to the atmosphere as CO2 is likely to end up melting more than a ton of glacial ice.

Often, people like to think in units of tons or kg of CO2 instead of tons or kg of carbon. In these units, each kg of CO2 ultimately melts about 650 kg of glacial ice.
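Spelling the conversion out as a minimal sketch (assuming, as above, an ice density of 1 kg per liter; real glacial ice is closer to 0.9 kg per liter, which does not matter at this level of precision):

```python
ICE_KM3 = 24_064_000        # global ice and snow (USGS), km3
EMISSIONS_GTC = 10_000      # cumulative emissions that melt (nearly) all of it

kg_ice = ICE_KM3 * 1e12     # 1 km3 = 1e12 L = 1e12 kg at 1 kg per liter
kg_c = EMISSIONS_GTC * 1e12 # 1 GtC = 1e12 kg of carbon

per_kg_c = kg_ice / kg_c            # ice melted per kg of carbon
per_kg_co2 = per_kg_c * 12 / 44     # ice melted per kg of CO2
print(f"{per_kg_c:,.0f} kg ice per kg C")      # ~2,400
print(f"{per_kg_co2:,.0f} kg ice per kg CO2")  # ~650
```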


Each American emits on average about 16 tons of CO2 to the atmosphere each year, primarily from the burning of coal, oil and gas, and atmospheric release of the resulting waste CO2.

This works out to about 1.8 kg (about 4 pounds) of CO2 per hour per American. This is more than twice the per capita emission rate of Europe and about twenty times the per capita emission rate for sub-Saharan Africa.

If I am an average American, the CO2 emissions that I produce each year (by participating in the broader economy) will be responsible for melting about 10,000 tons of Antarctic ice, adding about 10,000 cubic meters of fresh water to the volume of the oceans.

That works out to more than a ton of Antarctic ice loss for each hour of CO2 emissions from an average American. Every minute, we emit enough CO2 to add another five gallons of water to the oceans through glacial ice melt.

If you do the unit conversions, this means that each American on average emits enough CO2 every 3 seconds to ultimately add about another liter of water to the oceans. Europeans emit enough CO2 to add another liter every 8 seconds, and sub-Saharan Africans emit enough to add a liter every minute.

In my freezer, there is an ice cube tray with 16 smallish ice cubes. The ice cubes in this tray all together have a mass of 345 g, or about 1/3 of a kg. That means that I am responsible, every second, for emitting enough CO2 to melt about an ice-cube-tray’s worth of Antarctica.
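The chain of per-capita numbers above can be checked the same way; a minimal sketch, taking the roughly 650 kg of eventual ice melt per kg of CO2 derived above:

```python
ICE_PER_KG_CO2 = 650          # kg of eventual ice melt per kg CO2 (above)
US_T_CO2_PER_YR = 16.0        # average American emissions, t CO2/yr
SECONDS_PER_YEAR = 365.25 * 24 * 3600

kg_co2_per_hour = US_T_CO2_PER_YR * 1000 / 8760
ice_t_per_year = US_T_CO2_PER_YR * ICE_PER_KG_CO2
ice_kg_per_second = ice_t_per_year * 1000 / SECONDS_PER_YEAR

print(f"{kg_co2_per_hour:.1f} kg CO2 per hour")              # ~1.8 (about 4 lb)
print(f"{ice_t_per_year:,.0f} t of ice per year")            # ~10,000
print(f"{ice_kg_per_second * 60:.0f} kg of ice per minute")  # ~20 (about 5 gal)
print(f"{ice_kg_per_second:.2f} kg of ice per second")       # ~0.33: one tray
```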


Economists often like to think in terms of the “carbon intensity of our economy,” meaning how much CO2 we emit per dollar of value produced or consumed. We can also think about the “ice intensity of our economy”: How much ice is melted per dollar of value produced or consumed?

In the United States, per capita GDP is a little less than $60,000 per year. Given that our per capita CO2 emissions will ultimately melt about 10,000 tons of ice, this means that, on average, for every $6 we spend in our economy, we are melting another ton of ice.

In the European Union, per capita GDP is a little over $32,000 per year. If you do the math, this works out to a ton of ice ultimately melted for every $8 (7 euros) spent in their economy.

Sub-Saharan Africa has a per capita GDP a little over $1400 per year. Their per capita GDP is about 1/40th of per capita GDP in the US, but their per capita emissions are about 1/20th of ours. This means that on average, for every $3 spent in Sub-Saharan Africa, about one ton of ice will ultimately be melted.
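And the ice-intensity numbers; a minimal sketch in which the EU and sub-Saharan African per capita emission rates are back-calculated from the approximate ratios quoted above:

```python
ICE_PER_KG_CO2 = 650          # kg of ice per kg CO2, so t of ice per t CO2
regions = {                   # (per capita GDP $/yr, t CO2/yr per capita)
    "United States":      (60_000, 16.0),
    "European Union":     (32_000, 6.5),   # a bit under half the US rate
    "Sub-Saharan Africa": (1_400, 0.8),    # about 1/20 the US rate
}
for name, (gdp, t_co2) in regions.items():
    ice_tons = t_co2 * ICE_PER_KG_CO2      # tons of eventual ice melt per year
    print(f"{name}: ~${gdp / ice_tons:.0f} per ton of ice melted")
```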


Admittedly, on the time scales of our ordinary activities, ice sheets take a long time to melt. The melting caused by a CO2 emission today will extend out over thousands of years.

There are complex moral questions related to balancing short-term and long-term interests. Not everyone thinks we should be taking the long-term melting of Antarctica into account.

However, if the ancient Romans had undergone an industrial revolution similar to ours and fueled a century or two of economic development using fossil fuels, with disposal of the waste CO2 in the atmosphere, sea level today would be rising about 3 cm each year (more than an inch a year) due to the long-term effects of their emissions on the great ice sheets.

If their scientists had told them of the long-term consequences, but they had nevertheless decided to neglect those consequences so that they could be a few percent richer in the short term, I imagine that we would take a fairly dim view of their moral standing.


Post updated 26 March 2018.


Looking for postdocs wanting to help facilitate a transition to a near-zero emission energy system

This is from an email sent today to colleagues in my department:

Folks,

Postdocs in my lab either have gotten or may be about to get more permanent employment, which puts me in the position of constantly trying to recruit great people.

If you know people who are really good and who are going to get their PhD degrees within a year or two (or have gotten their degree within the past year or two), please forward this email to them.

I really don’t care about people’s domain knowledge. I look to see that they are smart, productive, and creative, and that they can complete projects, write, speak, do math, etc. Smart people can learn the relevant facts quickly.

We are a good place for people who want to understand the big picture, and who will not get lost investigating interesting but ultimately unimportant detail.

Ability to demonstrate an interest in the challenges associated with a clean energy system transition is important, but experience addressing these challenges is not important.

Two postdocs in my group engaged in geophysical modeling may move on this year, so there is space for at least two people who want to understand limits on and opportunities for clean energy systems from a geophysical perspective.

I am trying to build up our idealized energy-system-modeling effort, so there is room to hire a few people there. There is also room for people who want to do idealized economic analysis related to development and decarbonization.

On a different topic, we now have two Nature papers that represent the culmination of our ocean acidification-related work on coral reefs in Australia (Albright et al., 2016, 2018). While I am not actively recruiting in this area, if there were a postdoc candidate who had a great idea about how to carry this work forward, and who would want to lead the project, I could make room for such a person.

In short, I would appreciate it if you would use your networks to help me find good people who are interested in topics that my group is interested in. We are open to hiring non-traditional candidates who have interest, but lack experience, in these topic areas.

The job postings can be reached through this link: http://carnegieenergyinnovation.org/index.php/jobs/

Best,
Ken


Geophysical constraints on the reliability of solar and wind power in the United States

We recently published a paper that does a very simple analysis of meeting electricity demand using only solar and wind generation plus some form of energy storage. We looked at the relationship between the fraction of electricity demand satisfied and the amounts of wind, solar, and electricity storage capacity deployed.

M.R. Shaner, S.J. Davis, N.S. Lewis and K. Caldeira. Geophysical constraints on the reliability of solar and wind power in the United States. Energy & Environmental Science, DOI: 10.1039/C7EE03029K (2018).  (Please email for a copy if you can’t get through the paywall.)

Our main conclusion is that, because of geophysically forced variability in wind and solar generation, the amount of electricity demand satisfied using wind and solar resources scales fairly linearly with deployed capacity up to about 80% of annually averaged electricity demand, but beyond this level of penetration, the amount of added wind and solar generation capacity or electricity storage needed rises sharply.

Obviously, people have addressed this problem with more complete models. Notable examples are the NREL Renewable Electricity Futures Study and the NOAA study (MacDonald, Clack et al., 2016). These studies concluded that it would be possible to eliminate about 80% of emissions from the U.S. electric sector using grid-interconnected wind and solar power. In contrast, other studies (e.g., Jacobson et al., 2015) have concluded that far deeper penetration of intermittent renewables is feasible.

What is the purpose of writing a paper that uses a toy model to analyze a highly simplified system?

[Fig. 1b from Shaner et al. (E&ES, 2018), illustrating variability in wind and solar resources averaged over the entire contiguous United States, based on 36 years of weather data. Also shown is electricity demand for a single year.]

The purpose of our paper is to look at fundamental constraints that geophysics places on delivery of energy from intermittent renewable sources.  For some specified amount of demand and specified amount of wind and solar capacity, the gap between energy generation and electricity demand can be calculated. This gap would need to be made up by some combination of (1) other forms of dispatchable power such as natural gas, (2) electricity storage, for example as in batteries or pumped hydro storage, or (3) reducing electricity loads or shifting them in time. This simple geophysically-based calculation makes it clear how big a gap would need to be filled.

Our simulations correspond to the situation in which there is an ideal, continental-scale electricity grid, so we assume perfect electricity transmission. We also assume that batteries are 100% efficient. We are considering a spherical cow.

Part of the issue with the more complicated studies is that the models are black boxes: one essentially has to trust the authors that everything is OK inside the black box and that all assumptions have been adequately explained. [Note that Clack et al. (2015) do describe the model and assumptions used in MacDonald, Clack et al. (2016) in detail, and that the NREL study also contains substantial methodological detail.]

In contrast, because we are using a toy model, we can include the entire source code for our toy model in the Supplemental Information to our paper. And all of our input data is from publicly available sources. So you don’t have to trust us. You can look at our code and see what we did. If you don’t like our assumptions, modify the assumptions in our code and explore for yourself. (If you want the time series data that we used, please feel free to request them from me.)
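To give a flavor of the calculation, here is a toy dispatch loop in Python. This is a sketch of the general approach, not the paper's actual code (which is in the Supplemental Information); it assumes hourly capacity-factor and demand series, lossless storage, and perfect transmission:

```python
import numpy as np

def fraction_of_demand_met(wind_cf, solar_cf, demand,
                           wind_cap, solar_cap, storage_max):
    """Fraction of total demand met by wind + solar + ideal storage.

    wind_cf, solar_cf: hourly capacity factors (0..1), numpy arrays
    demand:            hourly electricity demand, same length
    wind_cap, solar_cap: installed capacities, in units of demand
    storage_max:       energy capacity of the (100% efficient) store
    """
    storage, unmet = storage_max, 0.0   # start with a full store
    for w, s, d in zip(wind_cf, solar_cf, demand):
        surplus = wind_cap * w + solar_cap * s - d
        if surplus >= 0:
            storage = min(storage_max, storage + surplus)  # charge, spill rest
        else:
            draw = min(storage, -surplus)                  # discharge
            storage -= draw
            unmet += -surplus - draw                       # unserved energy
    return 1.0 - unmet / float(np.sum(demand))
```

Sweeping wind_cap, solar_cap, and storage_max over a multi-decade weather record and recording the fraction of demand met is, in essence, how curves like those in Fig. 3 are generated.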

Our key results are summarized in our Fig. 3:

[Figure 3 | Changes in the amount of demand met as a function of energy storage capacity (0-32 days) and generation.]

The two columns of Fig. 3 show the same data: the left column is on linear scales; the right column has a log scale on the horizontal axis. [In a wind/solar/storage-only system, meeting 99.9% of demand is equivalent to about 8.76 hours of blackout per year, and 99.99% is equivalent to about 53 minutes of blackout per year.]

The left column of Fig. 3 shows, for various mixes of wind and solar, that the fraction of electricity demand that is met by introducing intermittent renewables at first goes up linearly — if you increase the amount of solar and/or wind power by 10%, the amount of generation goes up by about 10%, and is relatively insensitive to assumptions about electricity storage.

From the right column of Fig. 3, it can be seen that as the fraction of electricity demand satisfied by solar and/or wind exceeds about 80%, the amount of generation and/or the amount of electricity storage required increases sharply. It should be noted that even in the cases in which 80% of electricity is supplied by intermittent renewables on the annual average, there are still times when wind and solar are providing very little power, and if blackouts are to be avoided, the gap-filling dispatchable electricity service must be sized nearly as large as the entire electricity system.

This ‘consider a spherical cow’ approach shows that satisfying nearly all electricity demand with wind and solar (and electricity storage) will be extremely difficult given the variability and intermittency in wind and solar resources.

On the other hand, if we could get enough energy storage (or its equivalent in load shifting) to satisfy several weeks of total U.S. electricity demand, then mixes of wind and solar might do a great job of meeting all U.S. electricity demand. [Look at the dark green lines in the three middle panels in the right column of Fig. 3.] This is more or less the solution that Jacobson et al. (2015) got for the electric sector in that work.

Our study, using very simple models and a very transparent approach, is broadly consistent with the findings of the NREL, NOAA, and Jacobson et al. (2015) studies, which were done using much more comprehensive, but less transparent, models. Our results also suggest that a primary factor differentiating the NREL and NOAA studies from the Jacobson et al. (2015) study is that Jacobson et al. (2015) assume the availability of large amounts of energy storage. (The NOAA study showed that one could reduce emissions from the electric sector by 80% with wind and solar and without storage if sufficient back-up power were available from natural gas or some other dispatchable electricity generator.)

All of these studies share common ground. They all indicate that lots more wind and solar power could be deployed today and this would reduce greenhouse gas emissions. Controversies about how to handle the end game should not overly influence our opening moves.

There are still questions regarding whether future near-zero emission energy systems will be based on centralized dispatchable (e.g., nuclear and fossil with CCS) or distributed intermittent (e.g., wind and solar) electricity generation. Nevertheless, the climate problem is serious enough that for now we might want to consider an ‘all of the above’ strategy, and deploy as fast as we can the most economically efficient and environmentally acceptable energy generation technologies that are available today.
