Endangerment and Cause or Contribute Findings for Greenhouse Gases under the Clean Air Act—EPA's Denial of Petitions for Reconsideration, Volume 1: Climate Science and Data Issues Raised by Petitioners
On This Page:
- 1.0 Climate Science and Data Issues Raised by Petitioners
- 1.1 Validity of Paleoclimate Temperature Reconstructions and Related Issues
- 1.1.1 The "Divergence Issue" in Paleoclimate Reconstructions
- 1.1.2 Background
- 1.1.3 Assessment of the Evidence Provided by Petitioners Regarding the Climate Reconstructions
- 1.1.4 Assessment of the Evidence Provided by the Petitioners of Intentional Data Manipulation Regarding Tree Ring Data and the MWP
- 1.1.5 Assessment of Petitioners' Argument That the MWP May Have Been Warmer Than Present Temperatures
- 1.1.6 Assessment of the Petitioners' Argument that Questions About the MWP and Paleoclimate Reconstructions Limit Current Ability to Attribute Present Warming to Humans
- 1.1.7 Summary
- 1.2 Attribution of Recent Temperature Trends and Models
- 1.2.1 The Validity of the Human "Fingerprint" in the Vertical Temperature Structure of the Atmosphere Over the Tropics
- 1.2.2 Global Temperature Trends Over the Last Decade and Implications for Attribution of These Trends to GHGs
- 1.2.3 Climate Model Issues Raised by Petitions
- 1.2.4 Summary
- 1.3 Validity of the HadCRUT Temperature Record
- 1.3.1 Overview
- 1.3.2 Scientific Background on Surface Temperature Records and Underlying Datasets
- 1.3.3 Responses to Petitioners' Arguments Regarding the Validity of CRU Data
- 1.3.4 Claims of Flawed Approach to Correct for Urban Heat Island (UHI) Effects
- 1.3.5 Alleged Dependence of IPCC Conclusions on the HadCRUT temperature record
- 1.3.6 Summary
- 1.4 Validity of NOAA and NASA Temperature Records
- 1.4.1 Overview
- 1.4.2 Background on the Collection and Analysis of Surface Temperature Data
- 1.4.3 The Petitioners' Arguments and EPA Responses
- 1.4.3.1 Assessment of Issues Related to Alleged Station Dropout and Inappropriate Extrapolation
- 1.4.3.2 Issues Raised With Respect to Adjustments for the UHI Effect
- 1.4.3.3 Issues Raised in Additional Literature Provided by Petitioners
- 1.4.3.4 Petitioners’ Allegations Regarding Data Adjustments at Specific Stations
- 1.4.3.5 Allegations Regarding the Independence of NOAA, NASA, and HadCRUT Temperature Records
- 1.4.3.6 Additional Issues Regarding Allegations of Manipulation of Data
- 1.4.4 Summary
- 1.5 Implications of New Studies and Data Submitted by the Petitioners
- 1.5.1 Overview
- 1.5.2 Implications of a New Study on Stratospheric Water Vapor
- 1.5.3 Implications of Material Indicating That CO2 Is Not Well Mixed in the Atmosphere and That the Airborne Fraction of CO2 Has Not Changed
- 1.5.4 Implications of New Tropical Cyclone Studies
- 1.5.5 Implications of New Studies on the Statistical Significance of Increases in Antarctic Sea Ice
- 1.5.6 Implications of Recent Data on Observational Snow Cover Trends
- 1.5.7 Petitioners Claim That EPA Ignored a Satellite Dataset
- 1.5.8 Summary
- 1.1 Validity of Paleoclimate Temperature Reconstructions and Related Issues
Acronyms and Abbreviations
|Acronym||Definition|
|AIRS||Atmospheric Infrared Sounder|
|AMS||American Meteorological Society|
|AOGCM||Atmosphere-Ocean General Circulation Models|
|AR4||IPCC Fourth Assessment Report|
|BOM||Australian Bureau of Meteorology|
|CCSP||U.S. Climate Change Science Program|
|CDIAC||Carbon Dioxide Information Analysis Center|
|CMIP||Coupled Model Intercomparison Project|
|CRU||Climatic Research Unit|
|EPA||U.S. Environmental Protection Agency|
|FOIA||Freedom of Information Act|
|GHCN||Global Historical Climatology Network|
|GISS||Goddard Institute for Space Studies (NASA)|
|GSL||Global Snow Laboratory (Rutgers University)|
|IEA||Institute of Economic Analysis|
|IISD||International Institute for Sustainable Development|
|IJC||International Journal of Climatology|
|IPCC||Intergovernmental Panel on Climate Change|
|IQA||Information Quality Act|
|LIA||Little Ice Age|
|MET||United Kingdom Meteorological Office|
|MWP||Medieval Warm Period|
|NASA||National Aeronautics and Space Administration|
|NCAR||National Center for Atmospheric Research|
|NCDC||National Climatic Data Center|
|NIWA||National Institute of Water & Atmospheric Research|
|NMS||National Meteorological Station|
|NOAA||National Oceanic and Atmospheric Administration|
|NRC||National Research Council|
|NSIDC||U.S. National Snow and Ice Data Center|
|ORNL||Oak Ridge National Laboratory|
|PBL||Netherlands Environmental Assessment Agency|
|RTC||Response to Comments|
|RTP||Response to Petitions|
|SAB||Scientific Advisory Board|
|SALR||saturated adiabatic lapse rate|
|TSD||Technical Support Document|
|UAH||University of Alabama - Huntsville|
|UCAR||University Corporation for Atmospheric Research|
|UHI||urban heat island|
|USGCRP||U.S. Global Change Research Program|
|USHCN||United States Historical Climatology Network|
|W/m2||watts per square meter|
|WMO||World Meteorological Organization|
|WWR||World Weather Reports|
A number of petitioners (the Coalition for Responsible Regulation, Peabody Energy, the Southeastern Legal Foundation, the State of Texas, and the Competitive Enterprise Institute) challenge the validity of the paleoclimate reconstructions of the temperatures of the past 1,000 and 10,000 years, as summarized in the assessment literature.1 Many of the issues they raise are similar to comments EPA received on the proposed Endangerment Finding, although in some cases the petitioners draw additional support from more recent information released in the media. In particular, several petitioners use e-mails involving scientists at the Climatic Research Unit (CRU) of the University of East Anglia in the United Kingdom that were made public in November 2009. Based on statements in these e-mails, along with other sources, petitioners specifically raise the possibility that the Medieval Warm Period (MWP) or the early Holocene (about 10,000 to 8,000 years ago) may have been warmer than the present. Additionally, they claim that the e-mails demonstrate deliberate, inappropriate manipulation of the data. Petitioners claim that by showing that recent warmth may not have been ‘unprecedented,’ they undermine the attribution of recent warming to human causes and, therefore, show that the Findings are flawed.
EPA’s review of the arguments raised by the petitioners, as well as EPA’s review of the totality of the CRU e-mails and other documents, finds that the uncertainties about paleoclimate reconstructions highlighted by the petitioners were recognized and appropriately discussed in the Intergovernmental Panel on Climate Change (IPCC) and other assessment documents, as well as in the EPA Technical Support Document (TSD). EPA addressed the uncertainties involved in historical temperature reconstructions in the Response to Comments (RTC) document, as this was an issue raised by a number of commenters. Petitioners fail to acknowledge or engage substantively with the scope and comprehensiveness of the discussion of this issue, both in the assessment literature and by EPA. Further, they fail to recognize or discuss the other evidence relied on in assessing the conclusions drawn from the entire body of paleoclimate evidence. As a result, the petitioners’ arguments rely on innuendo and speculation with little scientific support or argumentation.
When viewed as a whole, the paleoclimate analysis contributes to our understanding of the climate system. These analyses provide supporting evidence that current warming is attributable to anthropogenic greenhouse gases (GHGs). The uncertainties concerning the precise temperature and climatic influences in the paleoclimate reconstructions are clearly recognized, acknowledged, and taken into account in the conclusions of the assessment literature and in EPA’s Endangerment Finding. Contrary to petitioners’ claims, EPA has not treated this evidence as ‘compelling’ but has instead weighed it appropriately given the uncertainties involved. Whether previous warm historical episodes were possibly slightly warmer globally than the present or, as the majority of the science suggests, slightly cooler on a global average than the present, does not materially change the understanding of the climate provided by the paleoclimate evidence.
We respond to petitioners’ arguments in the following subsections. Subsection 1.1.2 provides background on paleoclimate temperature reconstructions as context for the more technical responses that follow. Subsection 1.1.3 addresses issues including the ‘divergence issue’ concerning recent tree ring data. Subsection 1.1.4 addresses the arguments over manipulation of data, including tree ring data. Subsection 1.1.5 addresses arguments regarding the relative warmth of the MWP and the present day. Subsection 1.1.6 addresses arguments on the role of paleoclimate reconstructions in attribution of observed temperature changes to GHGs. Subsection 1.1.7 summarizes EPA’s conclusions on the petitioners’ arguments.
To evaluate the technical details of the petitioners’ arguments, it is useful to describe what paleoclimate temperature reconstructions are and how they are used to improve understanding of past and present climate change. Direct surface temperature observations have global coverage over approximately the last 130 to 150 years. To determine temperatures in time periods before the instrumental record, climate scientists must use indirect methods (‘proxies’). These indirect methods include examining tree rings; sediment cores for pollen and plankton records; atomic isotope and chemical compound ratios in corals and other marine organisms; glacier extent and oxygen isotope ratios in glacial ice; and subsurface ground temperatures. Determining temperatures from these proxies is not always straightforward. For example, tree growth depends not only on temperature but also on water availability, CO2 concentrations in the air, and pollution exposure. Marine organism records can be influenced by salinity changes, glacier records can be influenced by precipitation changes, and so forth. Many of these proxies (including tree rings) are calibrated against the instrumental temperature record for the period where the datasets overlap. The statistical relationship found between the proxy and regional temperatures over the past 150 years is then used to extrapolate over the hundreds or thousands of years before instrumental records. Examining more data sources and more proxies provides more certainty in the reconstructions than would be achieved using a single proxy study with fewer data sources, even if some of the included proxies have only a weak relationship with temperature.
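The calibrate-then-extrapolate step described above can be illustrated with a minimal sketch. All data below are synthetic and all parameter values hypothetical; this is not the method of any particular reconstruction study, only the general idea of fitting a proxy against the instrumental overlap period and applying the fitted relationship to pre-instrumental proxy values:

```python
# Illustrative sketch (synthetic data, hypothetical parameters): calibrate a
# tree-ring proxy against an overlapping instrumental record with ordinary
# least squares, then use the fitted relationship to estimate temperatures
# for years before the instrumental period.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical overlap period: 150 years of instrumental temperature
# anomalies (degC) and a ring-width index that tracks temperature noisily.
years_instr = np.arange(1850, 2000)
temp_instr = 0.005 * (years_instr - 1850) + rng.normal(0, 0.1, years_instr.size)
ring_width = 1.0 + 0.5 * temp_instr + rng.normal(0, 0.05, years_instr.size)

# Calibration: fit temperature as a linear function of ring width.
slope, intercept = np.polyfit(ring_width, temp_instr, 1)

# "Reconstruction": apply the fitted relationship to pre-instrumental
# ring-width values (hypothetical proxy measurements).
ring_width_past = np.array([0.95, 1.02, 1.10])
temp_reconstructed = slope * ring_width_past + intercept
print(temp_reconstructed)
```

The sketch also makes the divergence concern concrete: the approach is only valid if the proxy–temperature relationship fitted over the overlap period also held in the past.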
Researchers combine a number of different proxies from around the world to develop their temperature reconstructions of the past. Different researchers use different subsets of data—some rely on only one proxy, others use multi-proxy reconstructions. Fewer proxies are available for examination as scientists extend their research further back in the past, and the uncertainty of conclusions about past surface temperatures becomes larger. These reconstructions contribute to our understanding of historical temperatures and variability and enable comparison of present day changes to changes in the past 1,000 years, as well as allowing testing of climate models and our understanding of how the climate system responded to historical conditions.
The ‘divergence’ issue refers to a certain subset of the tree ring records whose growth has not correlated with temperature change in recent decades. Basically, in recent years, where the regional instrumental record shows warming, some tree ring proxies do not show a corresponding growth rate. This is not a new issue; one of the earliest papers on divergence was published by CRU researchers in 1998 in the journal Nature (Briffa et al., 1998). More recently, reports of the IPCC (Jansen et al., 2007) and National Research Council (NRC) (2006) cite a number of studies that have explored this topic, assessing potential explanations ranging from ozone depletion to specific regional changes in drought and precipitation. EPA’s TSD discusses this issue and includes the NRC (2006) multiproxy reconstruction in Figure 4.3. As discussed in the TSD, and in more detail in Volume 2 of the RTC document, NRC (2006) found divergence in some trees north of 55 degrees latitude. The TSD summarizes the NRC (2006) and IPCC conclusions on paleoclimate, including important discussions of the uncertainty in the reconstructions, stating on page 32: “Considering this study and additional research, the IPCC (2007d) concluded: ‘Paleoclimatic information supports the interpretation that the warmth of the last half century is unusual in at least the previous 1,300 years.’ However, like NRC (2006b), IPCC cautions that uncertainty is significant prior to 1600 (Jansen et al., 2007).”
Some researchers are concerned about the implications of this divergence in the use of tree rings for reconstructing historical warm periods; other researchers note that because the divergence only applies to trees north of 55 degrees latitude, and a similar difference in behavior between northern and southern trees was not seen before the last few decades, it is reasonable to conclude that the divergence is a unique phenomenon of the last few decades.
Placing the paleoclimate work into the broader climate science context, the TSD cites the U.S. Global Change Research Program (USGCRP) statement that ‘The second line of evidence arises from indirect, historical estimates of past climate changes that suggest that the changes in global surface temperature over the last several decades are unusual (Karl et al., 2009).’ The phrase in Karl et al. regarding ‘indirect historical estimates’ refers to the paleoclimate reconstructions based on proxies. Following Karl’s statement, the unusual nature of the current warming in the context of the past 1,000 years contributes to one of the lines of evidence supporting the attribution of current warming to human activities. Note that ‘unusual’ does not mean unprecedented, and past warming must be considered in the light of what we know about past climatic forcings such as solar and volcanic activity. Additionally, in the IPCC chapter on attribution, Hegerl et al. (2007) state that ‘[a]nalyses of palaeoclimate data have increased confidence in the role of external influences on climate.’ Hegerl et al. are stating that paleoclimate information improves our understanding of the difference between how the climate responds to external changes, such as changes in solar radiation, orbital characteristics, GHG concentrations, or atmospheric loadings of aerosols (such as from volcanic eruptions), compared to internal changes such as El Niño events.
Several petitioners (the Coalition for Responsible Regulation, Peabody Energy, the Southeastern Legal Foundation, the State of Texas, and Competitive Enterprise Institute) argue that the ‘divergence’ issue in the tree ring records ‘makes comparing the temperatures during the MWP [Medieval Warm Period] with recent observed temperatures virtually impossible’ and that these concerns were suppressed in IPCC reports.
As noted in the background on paleoclimate above, the divergence issue (between certain tree rings and recent temperature changes) is not yet a fully explained phenomenon, and the e-mails quoted by the petitioners recognize these uncertainties. However, these uncertainties are not new. The science community has long been aware of the tree ring divergence issue, as well as other issues on the certainty of proxy reconstructions. After thorough evaluations, both the IPCC and NRC properly recognized these issues and appropriately reflected the uncertainty in their scientific conclusions, as did EPA. Thus, these uncertainties were fully presented in the assessment literature. In Response 2-64 and 2-67 of the RTC document, EPA addressed divergence and uncertainty in paleoclimate reconstructions, stating:
Some temperature reconstructions end in 1980 or earlier because of a well-recognized (in the assessment literature) ‘divergence’ problem, where some tree ring records present temperature trends that do not correlate well with the instrumental (thermometer) records. Explanations for the divergence discussed in NRC (2006b) include water (i.e., drought stress) becoming a limiting factor, increasing winter precipitation (leading to delayed snowmelt), greater ultraviolet radiation (resulting from ozone depletion), or bias in instrumental temperature. However, NRC (2006b) notes a number of tree-ring records have not been impacted by divergence, and it is primarily concentrated north of 55° latitude. The IPCC (Jansen et al., 2007) notes it is not even ubiquitous in that region.
Specifically, the NRC also found that “Elevational treeline sites in Mongolia (D’Arrigo et al., 2001) and the European Alps (Büntgen et al., 2005) are not affected by ‘divergence.’ This geographic separation was confirmed by Cook et al. (2004)” (NRC, 2006). The significance of these studies is that they demonstrate that divergence is not an issue with all tree ring proxies, much less the many non-tree ring proxies, such as retreating glaciers, sediment cores, corals, and other data sources.
It is important to recognize that the assessment reports looked at all of the proxy evidence (e.g., see Fig 4.3 in the TSD [U.S. EPA, 2009]) as well as other paleoclimate evidence, such as the clear record that warmer periods in the past coincide with periods with higher levels of CO2. The assessment reports rely on this entire body of evidence to support their conclusions, correctly reflecting any uncertainty expressed in the literature, while petitioners discuss only one part of this evidence—the tree ring divergence for trees in some locations—and do not contest any of the other parts of the entire body of evidence. Petitioners also do not attempt to show that the entire body of evidence supports a different conclusion than that drawn by the assessment reports and by EPA.
Peabody Energy cites papers by D’Arrigo et al. (2008), Esper and Frank (2009), and Loehle (2009) on the subject of divergence. Loehle (2009) presents an analysis on the implications of divergence being a result of non-linear responses to temperature. Peabody Energy quotes D’Arrigo’s paper as stating that the divergence issues impede ‘a robust comparison of recent warming during the anthropogenic period with past natural climate episodes such as the Medieval Warm Period,’ and states that the Esper and Frank paper ‘concluded that the divergence problem ‘is of importance.’’
EPA has reviewed the petitioners’ submission of D’Arrigo et al. (2008), Esper and Frank (2009), and Loehle (2009) and finds that it was not impracticable to raise the objection during the public comment period and that the reasons for the objection did not arise between June 24, 2009, and February 16, 2010. Petitioners could have submitted these studies during the comment period on the proposed Endangerment Finding. Although, in most cases, the petitioners provide excerpts from the CRU e-mails in support of their assertions, EPA’s review has determined that this evidence does not support their allegations, and that the information submitted by petitioners on these topics was available before the comment period for the Endangerment Finding. Petitioners have not shown why it would have been impractical for them to have submitted these studies then. Indeed, similar points were already raised, and responded to, in the RTC. Despite the fact that these objections fail to meet the statutory timeframe for evidence supporting a petition for reconsideration, we briefly explain why, contrary to petitioners’ allegation, they fail to call into question the Finding.
Though the petitioners cite D’Arrigo et al. (2008) to support their contention that the divergence problem undermines paleoclimate reconstructions, D’Arrigo et al. (2008) recognize the possibility that divergence is a uniquely modern phenomenon, though they do include caveats on that conclusion: ‘Although limited evidence suggests that the divergence may be anthropogenic in nature and restricted to the recent decades of the 20th century, more research is needed to confirm these observations.’ Similarly, the quote from Esper and Frank (2009) that the petitioners select comes from a context-setting paragraph and is not a conclusion of the paper, despite the contention of the petitioner. Esper and Frank also note that the divergence phenomenon is ‘widely perceived’ and the ‘potential consequences discussed (e.g., IPCC, 2007),’ providing confirmation that researchers in the field accept the assessment literature conclusions. Indeed, more recent work by Esper (Esper et al., 2010) may have found that some analysis methods do not show the same divergence phenomenon when used with high-latitude Siberian trees.
Loehle (2009) is a more theoretical study examining the implications for reconstructions if ‘divergence’ results from a non-linear response of trees to warming. He shows that if trees respond quadratically to warming rather than linearly, then it is possible that reconstructions using those trees would not reproduce some historical warm periods. However, these questions are not new: the possibility of such a non-linear response was addressed in a qualitative form by the NRC (2006). Additionally, some reconstructions have examined the effect of not including any tree rings whatsoever and still find that modern warming is slightly larger than other events in the past millennium (Mann et al., 2008).
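Loehle’s general point can be illustrated with a toy calculation (an assumption-based sketch of the quadratic-response idea, not the actual model in Loehle (2009)): if ring growth peaks at some optimum temperature and declines on either side, a warm excursion and a cool excursion can produce identical ring widths, so a calibration that assumes ‘wider ring = warmer’ cannot recover the warm period.

```python
# Toy illustration of a quadratic (inverted-U) growth response.
# The optimum of 1.0 degC and the anomaly values are hypothetical.

def ring_growth(temp_anomaly, optimum=1.0):
    """Growth index that declines on either side of an optimum temperature."""
    return 1.0 - (temp_anomaly - optimum) ** 2

cool_year, warm_year = 0.5, 1.5   # anomalies equidistant from the optimum
print(ring_growth(cool_year))     # 0.75
print(ring_growth(warm_year))     # 0.75 -- identical growth, so a linear
                                  # inversion cannot tell these years apart
```

Because the two years are indistinguishable in such a proxy, a reconstruction built on these trees would tend to flatten past warm peaks, which is the scenario Loehle examines.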
This is an area of ongoing research, but petitioners present no evidence that demonstrates that the treatment of this issue by EPA failed to address the uncertainties involved in these reconstructions.
Peabody Energy raises objections to the IPCC Fourth Assessment Report (AR4) treatment of Holocene temperature variation, to the sentence in the TSD referring to reconstructions of this period, and to the discussion in the RTC document on the subject. Peabody Energy claims that certain e-mails raise doubts about IPCC conclusions regarding the early Holocene (about 10,000 to 8,000 years ago). Peabody Energy states that ‘Obviously, warmer temperatures in the early Holocene when GHG concentrations were lower than those today would seem to further undermine the conclusion that today’s temperatures are unprecedented and, therefore, must be the result of anthropogenic GHG emissions.’ Peabody Energy quotes several e-mails from IPCC authors discussing two papers (Porter, 2000 and Thompson et al., 2006) that were not included in the AR4. Peabody Energy states that these e-mails support its claims that the IPCC omitted references and skewed the discussion of this issue in the AR4, and that the two papers demonstrate that there was simultaneous warmth in the tropics and the Southern Hemisphere during the early Holocene, in contrast to the IPCC conclusions.
For example, a petitioner quotes the following from Olga Solomina from July 2006:
I attach here a version of glacier box and suggestions (in red) how to include there the reference to the new Thompson et al., 2006 paper.
In this relation - I am getting more and more concern about our statement that the Early Holocene was cool in the tropics - this paper shows that it was, actually, warm - ice core evidences+glaciers were smaller than now in the tropical Andes. The glaciers in the Southern Hemisphere (Porter, 2000, review paper) were also smaller than at least in the Neoglacial. We do not cite Porter’s paper for the reason that we actually do not know how to explain this - orbital reason does not work for the SH, but if we do cite it (which is fair) we have to say that during the Early to Mid Holocene glaciers were smaller than later in both Northern, and Southern Hemisphere, including the tropics, which would contradict to our statement in the Holocene chapter and the bullet. It is probably too late to rise these questions, but still just to draw your attention.2
The petitioner further quotes Valerie Masson-Delmotte on uncertainties in Holocene land reconstructions:
It seems to me that there is still a large uncertainty about the temperature versus precipitation effect on these tropical glaciers. Other indications from South America are related to lake levels with contrasted views in the low versus highlands.
Several references suggest that there is the end of a wet period after the early Holocene in tropical south America; this is expected to induce an increase of 18O signals. One review was conducted several years ago within the PEPI project (http://wwwpaztcn.wr.usgs.gov/pcaw/ and references herein).
I think that the state of the art is that we have no reliable proxy record that is sensivite to temperature only on the tropical lands for the Holocene; therefore the statement that was written for the Holocene was based on areas of the tropical oceans where SST reconstructions were published.
Do we have to write more explicitely about the uncertainty?3
The petitioner objects that ‘Ultimately, the Coordinating Lead Author of Chapter 6, Eystein Jansen, decided, contrary to the contentions of Thompson et al. and Foster, that the early Holocene evidence from the tropics and Southern Hemisphere is not a reliable temperature indicator, and to just leave the text the way it was — that is, without including the published evidence from Thompson et al. (2006), Foster (2000), or any discussion surrounding these studies that runs contrary to the orbital hypothesis,’ including Jansen’s e-mail:
I agree with Valerie that the ice core evidence is ambiguous. I would personally place more weight on the alkenone data, which is a reasonable well calibrated SST proxy. Foraminifer transfer function based SSTs and some Mg/Ca results that are available suggest a similar picture as far as I know. Of course it is possible and plausible that the tropical oceans are behaving in a non consistent manner and not all areas are showing the same signal, but a sizeable portion appear to do so in order to conclude as we do in the chapter in my opinion. Some signals may be due to changes in in trade wind induced coastal upwelling strength, but there are enough cores with alkenone data outside of these areas. If we were to say more about the uncertainties it may be the fact that proxies are seasonally skewed.
My conclusion is to let the chapter say what we say at the moment.4
The petitioner raises three main claims: that the e-mails demonstrate bias in writing the IPCC chapters, that the omitted references demonstrate global early Holocene warmth, and that early Holocene warmth raises doubt about attribution of current warming. On the first claim, our analysis finds that the e-mails are examples of legitimate scientific discussion between the authors trying to determine what is appropriate for inclusion in a chapter and that several of the issues discussed in these e-mails were indeed included in the final chapter even if the two papers in question were not directly referenced. On the second claim, we note that there is uncertainty regarding early Holocene warmth, that this is still an area of ongoing research, and that EPA made no claims about Holocene warmth in the TSD or the Finding. On the third claim, early Holocene temperature reconstructions are not central to arguments regarding attribution. Additionally, the Thompson paper in question makes statements consistent with the EPA TSD regarding the effects of human activities.
With regard to Peabody Energy’s assertions about the Holocene based on glacier literature from Porter (2000) and Thompson et al. (2006), the final version of the IPCC chapter (Jansen et al., 2007) discusses early Holocene reconstructions without referring to either paper. Thompson et al. (2006) finds that glacier data suggests tropical Pacific warmth between 11,000 and 8,000 years ago, in apparent contrast to Figure 6.9 in the IPCC chapter, which shows a cooler tropical Pacific in that time period. The e-mail discussion between Eystein Jansen (Professor of Marine Biology, University of Bergen, Norway), Olga Solomina (Senior Scientist, Institute of Geology, Russian Academy of Sciences), and Valérie Masson-Delmotte (Laboratoire des Sciences du Climat et de l’Environnement, France) cited by the petitioners includes the reasoning of the e-mail authors that there is more confidence in using marine sediment and shell data for determining ocean temperatures than there is using glacier oxygen isotope data relied upon in the Thompson paper, because the glacier data can also be influenced by changes in precipitation. This kind of ongoing discussion regarding which data best reflects past temperatures is appropriate for lead authors of a scientific assessment, and reflects scientific dialogue attempting to determine the most appropriate answer.
A very recent paper (Leduc et al., 2010) finds cool early Holocene tropical oceans based on alkenone data but an apparently contradictory result based on Mg/Ca data. A possible explanation suggested by this paper was that one proxy was capturing winter trends, and the other proxy summer trends, and there are orbital reasons to think that the trends might be different. Interestingly, this is consistent with Jansen’s statement in the quoted e-mail that the proxies may be seasonally skewed. This is clearly an area of ongoing research.
Contrary to the assertion of the petitioners that there was no discussion included in the chapter that ‘runs contrary to the orbital hypothesis,’ Chapter 6 of Working Group I (Jansen et al., 2007) discusses that there are some Holocene results that are not fully explained by the orbital hypothesis, stating: ‘At high southern latitudes, the early warm period cannot be explained by a linear response to local summer insolation changes (see Box 6.1), suggesting large-scale reorganization of latitudinal heat transport.’ Similarly, Box 6.3 discusses tropical glacier retreat stating ‘tropics indicate short, or in places even absent, glaciers between 11 and 5 ka,’ consistent with both Porter and Thompson.
Therefore, the quoted e-mails demonstrate that the authors were appropriately attempting to explore this issue and use their best scientific reasoning to determine which proxies were most appropriate for use in their assessment. One of the e-mails even presciently identified a possible source of seasonal skew which was a core finding of a paper published four years later. Evidence that ‘runs contrary to the orbital hypothesis’ was in fact included in the final version of the chapter, as was the issue about absent glaciers.
The Endangerment Finding does not directly address the early Holocene. The TSD statement that ‘current data limitations limit the ability to determine if there were multi-decadal periods of global warmth compared to the last half of the 20th century prior to about 1,000 years ago’ remains accurate after considering these two papers on Holocene glaciers. The RTC document (RTC Response 3-55) also quotes this statement from the TSD in response to a comment regarding early Holocene temperatures. The TSD also stated (quoting the NRC report) that ‘Very little confidence can be assigned to statements concerning the hemispheric mean or global mean surface temperature prior to about 900 A.D. because of sparse data coverage and because the uncertainties associated with proxy data and the methods used to analyze and combine them are larger than during more recent time periods.’ Again, nothing that the petitioner has provided suggests that this statement about uncertainty regarding early Holocene data is incorrect. Therefore, the issues raised by the petitioner do not cast doubt on or change any of the reasoning or the scientific basis for the Administrator’s Findings.
With regard to the implications of early Holocene temperature reconstructions for the attribution of recent temperature changes to human influences, we quote the NRC again:
Surface temperature reconstructions have the potential to provide independent information about climate sensitivity and about the natural variability of the climate system that can be compared with estimates based on theoretical calculations and climate models, as well as other empirical data. However, large-scale surface temperature reconstructions for the last 2,000 years are not the primary evidence for the widely accepted views that global warming is occurring, that human activities are contributing, at least in part, to this warming, and that the Earth will continue to warm over the next century. (NRC, 2006)
Just as temperature reconstructions of the last 2,000 years are not the primary evidence for anthropogenic climate change, neither are temperature reconstructions of 8,000 years ago. Because climate forcings in the early Holocene were very different from present, whether the early Holocene was slightly warmer or cooler than today (an uncertainty recognized in the TSD and assessment literature) does not prove or disprove the anthropogenic nature of recent warming. Additionally, the major conclusions of Thompson et al. (2006) are consistent with those of the TSD on attribution. In fact, with regard to recent glacier retreat, the conclusion of the paper states:
"These observations suggest that within a century human activities may have nudged global-scale climate conditions closer to those that prevailed before 5,000 yr ago, during the early Holocene. If this is the case, then Earth's currently retreating glaciers may signal that the climate system has exceeded a critical threshold... "
Therefore, the e-mails quoted by the petitioners demonstrate that the authors were making deliberate and appropriate choices in writing their chapter that reflected their ongoing dialogue over the best understanding of the science. Several of the issues identified by petitioners were indeed included in the chapter. While there is continuing uncertainty in reconstructions of early Holocene temperatures, as reflected even in the most recent papers, the EPA TSD makes no statements inconsistent with this uncertainty. Finally, early Holocene reconstructions are not central to the determination of attribution.
Referring to previous discussions about Holocene temperatures, the petitioner states ‘Thus, it is clear that the IPCC AR4 is not an accurate assessment of the scientific literature, but instead includes only a selection of the literature that supports a particular viewpoint — one either held by the chapter IPCC authors, or which had been dictated to them by more influential IPCC authorities. This is perhaps the type of behavior that Chapter 6 Lead Author Briffa was referring to when he told a colleague ‘I tried hard to balance the needs of the science and the IPCC, which were not always the same.’
Response 1-3 above demonstrates that the e-mail authors were engaging in a dialogue over the best understanding of the science, and that the chapter did include uncertainties and conclusions consistent with the omitted papers. Response 2-30 in Subsection 2.2 of this Response to Petitions (RTP) document shows how the quote on the ‘needs of the science’ was taken out of context by the petitioners, and that the work by Briffa was complimented on how it managed to ‘convey the science accurately.’
The Competitive Enterprise Institute refers to a presentation by Don Easterbrook at the Geological Society of America (Easterbrook, 2009), stating Professor Easterbrook ‘demonstrated how tree ring data from Russia, which show a cooling after 1961, were artfully truncated in graphs presented in IPCC publications.’ The petitioner claims ‘This truncation gave the false impression that the tree ring data agree with reported late 20th Century surface temperature data, when in fact they did not. This artful deceit, now exposed, indicates that the IPCC Assessment Report 4 (AR4) is scientifically questionable. The CRU e-mails leaked in November confirmed that this deception was deliberate.’
EPA has reviewed the petitioner’s information and has determined that the graph provided by the petitioner from Professor Easterbrook’s talk is not from the AR4, but rather from the Third Assessment Report, which was published in 2001 and is no longer the most recent IPCC assessment report. In the AR4, the latest data on this issue was assessed and depicted in Figure 6.10. In this figure, three out of the 12 reconstructions terminate in 1960 (as noted in Table 6.1 in the same chapter), because the reconstructions in the underlying papers also terminate in 1960. Six of the 12 reconstructions extend to 1990 or later, again based on the time period covered by the relevant dataset.
EPA’s TSD actually uses a figure from the NRC (2006), which shows six reconstructions, one of which terminates in 1960. Because these assessment reports are showing the entirety of the data represented in the underlying literature, there is no evidence of any ‘artful deceit,’ nor is this evidence that the AR4 is ‘scientifically questionable.’
Peabody Energy also quotes e-mail exchanges by Richard Alley and by John Mitchell before the release of the NRC (2006) report regarding divergence. For example, Peabody Energy cites the following excerpt from an e-mail by Richard Alley of Penn State to Jonathan Overpeck of the University of Arizona, written on March 11, 2006:
My impression is that, for good reasons, the US NRC panel looking at the record of temperatures over the last millennium or two is not going to strongly endorse the ability of proxies to detect warming above the level of a millennium ago, and that a careful re-examination of the Chapter 6 wording and its representation in the TS and SPM would be wise. . .
These considerations do somewhat affect the confidence that can be attached to the best estimate of recent warmth versus that of a millennium ago. If the paleoclimatic data could be confidently be interpreted as paleotemperatures, then joining the paleoclimatic and instrumental records would be appropriate, and the recent warmth would clearly be anomalous over the last millennium and beyond. By demonstrating that some tree-ring series chosen for temperature sensitivity are not fully reflecting temperature changes, the divergence issue widens the error bars and so reduces confidence in the comparison between recent and earlier warmth5
Peabody Energy also quotes Professor John Mitchell from the UK Met Office as stating:
There needs to be a clear statement of why the instrumental and proxy data are shown on the same graph. The issue of why we dont show the proxy data for the last few decades (they dont show continued warming) but assume that they are valid for early warm periods needs to be explained
I have not had time to check the original chapter, but the comments give the impression that the recent 50 yr warming is unprecedented over the last 500years (seems reasonable) and elsewhere over the last 1000years (less clear).6
Finally, Peabody Energy quotes an e-mail by Keith Briffa which stated ‘I know there is pressure to present a nice tidy story as regards 'apparent unprecedented warming in a thousand years or more in the proxy data' but in reality the situation is not quite so simple.’7
Peabody Energy claims that ‘Taken together, the foregoing emails present a record of serious and sustained doubt about the validity of the proxy record, and particularly, as developed from tree rings. The divergence problem plainly concerned a number of respected researchers leading them to question not only the continued use of tree ring data in the science of paleoclimatology, but also the key theory based on that data: that recent warming of the 20th century is truly unprecedented and unmatched over a period of at least 1000 years.’
The petitioner’s arguments are flawed for several reasons discussed previously. First, not all tree ring records demonstrate divergence, only a subset, mainly in far northern latitudes. Second, while this divergence does raise potential issues about the applicability of those trees for reconstructions of warm eras, there are also a number of potential reasons for this divergence that are due to anthropogenic influences and that therefore would not reduce the value of these tree rings for reconstructions. Third, the TAR and AR4 do not solely rely on tree ring records in their assessment of temperatures in the past 1000 years. There are a number of other proxies that have been used. Mann et al. (2008) shows proxy reconstructions that do not use any tree rings at all.
Finally, the EPA TSD quotes both the NRC and the AR4 on the significant uncertainties involved in reconstructions before 1600. The TSD never claims that the current warming is ‘truly unprecedented in 1000 years,’ and neither does the AR4. What the IPCC AR4 says (Jansen et al., 2007) is:
The TAR pointed to the ‘exceptional warmth of the late 20th century, relative to the past 1,000 years.’ Subsequent evidence has strengthened this conclusion. It is very likely that average Northern Hemisphere temperatures during the second half of the 20th century were higher than for any other 50-year period in the last 500 years. It is also likely that this 50-year period was the warmest Northern Hemisphere period in the last 1.3 kyr, and that this warmth was more widespread than during any other 50-year period in the last 1.3 kyr. These conclusions are most robust for summer in extratropical land areas, and for more recent periods because of poor early data coverage.
The AR4 does not use the term ‘unprecedented’ in this statement, nor with respect to temperature in its Summary for Policymakers, and it notes that uncertainty increases before the ‘more recent periods.’
In addition to these issues, the petitioners do not quote the full context of the e-mails and instead cherry-pick only segments. For example, in the above e-mail quoted from Richard Alley, he provides a long and detailed assessment of many of the issues involved in divergence. The petitioners do not, for example, quote his statement that ‘[t]hese considerations do not affect the best estimate that recent warmth is greater than that of a millennium ago; the central estimate from proxy data of latter-twentieth-century warmth is still above that of a millennium ago, with greater spatial coherence recently in the signal.’8 John Mitchell and Jonathan Overpeck are discussing how to address issues of uncertainty: Jonathan Overpeck forwarded these concerns from Mitchell to Keith Briffa and Tim Osborn so that they would be addressed in the IPCC chapter, and as noted above, the IPCC chapter includes a discussion of the relevant uncertainties and the divergence issue. Michael Mann’s response to Briffa’s arguments was that ‘Keith and Phil have both raised some very good points’9, and many of Briffa’s arguments were included in the final version of the IPCC AR4 chapter. For example, Keith Briffa argues in his e-mail that ‘I prefer a Figure that shows a multitude of reconstructions,’ and the figure in the chapter includes a multitude of reconstructions. The discussion of the MWP in the IPCC AR4 chapter (Jansen et al. 2007) heavily referenced and was consistent with Briffa’s own research, concluding that:
The evidence currently available indicates that NH mean temperatures during medieval times (950—1100) were indeed warm in a 2-kyr context and even warmer in relation to the less sparse but still limited evidence of widespread average cool conditions in the 17th century (Osborn and Briffa, 2006). However, the evidence is not sufficient to support a conclusion that hemispheric mean temperatures were as warm, or the extent of warm regions as expansive, as those in the 20th century as a whole, during any period in medieval times (Jones et al., 2001; Bradley et al., 2003a,b; Osborn and Briffa, 2006).
Therefore, the petitioners do not appropriately characterize the e-mails which they are excerpting, and the fundamental issues of uncertainty related to historical reconstructions were properly reflected in the EPA TSD and the Endangerment Findings.
Peabody Energy quotes a 1999 e-mail in which Cook (Professor at the Lamont-Doherty Earth Observatory) asks Briffa (Professor at CRU), ‘Also, there is no evidence for a decline or loss of temperature response in your data in the post-1950s (I assume that you didn’t apply a bodge here)’10 in relation to a discrepancy between a Nature paper published at that time by Vaganov (1999) and an earlier Nature paper by Briffa (1998). Peabody Energy states that ‘By saying he assumed Briffa had not applied a ‘bodge,’ he seemed to be asking Briffa to confirm that Briffa had not masked divergence in the data through statistical legerdemain.’
In this 1999 e-mail, two scientists are discussing the results of papers that are now more than a decade old. The earlier Nature paper by Briffa (1998) (on which Vaganov was also a co-author) clearly stated in its abstract, ‘During the second half of the twentieth century, the decadal-scale trends in wood density and summer temperatures have increasingly diverged as wood density has progressively fallen.’ This statement is clearly flagging the divergence issue and rebuts the petitioner’s claim that Briffa et al. ‘masked divergence’ in their paper.
The paper by Vaganov explores the possibility that later spring snowmelt due to increased winter precipitation might explain the divergence observed in the data. In the e-mail discussion with Cook, Briffa is doubtful that this is the answer (though he does not rule it out) because his data do not show an effect of high precipitation and because the precipitation data in the region is ‘dubious.’ Cook notes that Vaganov used a different, more distant city from the tree ring site for the meteorological data for his paper than Briffa had used in the original, which may explain this difference.
This e-mail chain is clearly a professional discussion between colleagues attempting to understand the implications of newer literature. There is no evidence of any inappropriate behavior on the part of the scientists involved.
The Independent Climate Change E-Mail Review (The Independent Climate Change Email Inquiry, 2010) discussed the term ‘bodge’ in their report, finding that the term referred to an ad hoc adjustment that was later validated:
The term ‘bodging’ has been used, including by Briffa himself, to refer to a procedure he adopted in 1992. The ‘bodge’ refers to the upward adjustment of the low-frequency behaviour of the density signal after 1750, to make it agree with the width signal. This ad hoc process was based on the conjecture that the width signal was correct. There is nothing whatsoever underhand or unusual with this type of procedure, and it was fully described in the paper. The interpretation of the results is simply subject to this caveat. The conjecture was later validated when it was shown to be an effect due to the standardisation technique adopted in 1992. Briffa referred to it as a ‘bodge’ in a private e-mail in the way that many researchers might have done when corresponding with colleagues. We find it unreasonable that this issue, pertaining to a publication in 1992, should continue to be misrepresented widely to imply some sort of wrongdoing or sloppy science.
Peabody Energy quotes from a July 2000 discussion between Bradley (Professor at UMass Amherst) and Frank Oldfield (Executive Director of the International Geosphere-Biosphere Programme Past Global Changes Core Project), in which they express concern that the divergence issue could ‘become a foothold for climate change skeptics.’
The petitioner highlights part of the e-mail from Bradley, ‘Indeed, in the verification period, the biggest ‘miss’ was an apparently very warm year in the late 19th century that we did not get right at all. This makes criticisms of the ‘antis’ difficult to respond to (they have not yet risen to this level of sophistication, but they are ‘on the scent’). Furthermore, it may be that Mann et al simply don’t have the long-term trend right, due to underestimation of low frequency info.’11
The petitioner states that this quote shows ‘complete disregard of objective science in Bradley’s statement that the ‘antis’ had not yet realized a possible flaw in the reconstructions but were ‘on the scent.’ Bradley appears much more interested in preserving the views of himself and his colleagues than in an objective and transparent scientific process.’
Additionally, the petitioner highlights another July 2000 quote from Bradley that stated ‘The results were good, giving me confidence that if we had a comparable proxy data set for post-1980 (we don’t!) our proxy-based reconstruction would capture that period well. Unfortunately, the proxy network we used has not been updated, and furthermore there are many/some/tree ring sites where there has been a ‘decoupling’ between the long-term relationship between climate and tree growth, so that things fall apart in recent decades.... this makes it very difficult to demonstrate what I just claimed.’12
As noted in the latter quote provided by the petitioner, Bradley postulated that if more data for the period post-1980 existed, it would support his existing conclusions. This suggests that he believed that his conclusions are sound, though he recognized that the divergence issue limits the number of useable sites where his theory could be tested. Regarding discussion of the ‘antis,’ there is no evidence of any attempt to hide or ignore uncertainties by Bradley or his colleagues.
To the contrary, the quote from the petitioner is embedded in a larger paragraph that begins, ‘But there are real questions to be asked of the paleo reconstruction.’ Furthermore, the e-mail ends, ‘In Ch 7 we will try to discuss some of these issues, in the limited space available. Perhaps the best thing at this stage is to simply point out the inherent uncertainties and point the way towards how these uncertainties can be reduced.’13 Neither of these statements suggests that Bradley was merely ‘interested in preserving the views of himself and his colleagues,’ and both support the contention that he was indeed interested in ‘an objective and transparent scientific process.’ The petitioner selectively lifts words or phrases from e-mail correspondence to make unsupported, discrediting accusations.
Peabody Energy claims that ‘The bona fides of McIntyre and McKitrick should have become evident after their 2003 paper and follow-up papers were published in peer-reviewed scientific journals. Moreover, the independent Wegman Report subsequently confirmed the validity of their critiques of the hockey stick analysis and the methodological errors in that analysis that they brought to light, and the NRC report also confirmed that it was not confident of the central conclusions of the hockey stick paper.’ The Southeastern Legal Foundation also cites McIntyre, McKitrick, and the Wegman Report in order to criticize the ‘hockey stick affair’.
The petitioners are critiquing a 1998 paper by Michael Mann (‘the hockey stick paper’). This paper broke new ground in terms of temperature reconstructions of the past several hundred years and therefore attracted a lot of attention. Indeed, the petitioners focus on this 1998 paper despite the fact that many other reconstructions have been published in the decade since the original paper and several additional assessments have been conducted (including NRC (2006), which focused specifically on the divergence issue and new literature since Mann’s paper). The TSD relied on the NRC (2006) report and the IPCC AR4, which were comprehensive assessments of the published literature, not only the early papers examined by the ad hoc Wegman report (2006) and McIntyre and McKitrick (2003).
We also note that there have been a number of peer-reviewed critiques and discussions of the McIntyre and McKitrick analyses (e.g., Rutherford et al. 2005, Juckes et al. 2007, von Storch and Zorita 2005, Huybers 2005, Wahl and Amman 2007). These papers question the validity of some aspects of the McIntyre and McKitrick critiques and find that correcting for other valid aspects of the critiques have ‘no significant effects on the reconstruction itself’ (Wahl and Amman, 2007).
Therefore, the petitioners have overstated the implications of the ‘hockey stick’ critiques, in terms of both the validity of the critiques and the reliance of EPA on the original ‘hockey stick’ papers.
Peabody Energy states that ‘In the fall of last year, as a result of McIntyre’s requests for data, the authors of a paper published in Science were forced to admit that they had flipped a data set upside down,’ referring to Kaufman et al. (2009).
A correction to Kaufman et al. (2009) was recently published in Science (Kaufman, 2010). This correction updated some of the proxies to ‘conform to the interpretations of the original authors’ and acknowledged the contributions of ‘H. McCulloch and others who have pointed out errors and have offered suggestions.’
However, the correction also stated that ‘[t]he original conclusions of the paper have been strengthened as a result,’ and ‘[t]he primary trends of the Arctic temperature reconstruction, however, are not changed, including the millennial-scale summer cooling that was reversed by strong warming during the 20th century and (on the basis of the instrumental record) continued through the last decade.’
‘I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) and from 1961 for Keith’s to hide the decline.’14
The petitioners claim that this e-mail ‘evidences CRU staff’s effort to deliberately manipulate data to yield desired results. The Southeastern Legal Foundation states that some responses to this text reached the conclusion that the statement shows ‘an attempt to cook the books to conceal the fact that the famous ‘hockey stick’ is a manipulated, misleading barrel of scientific nonsense.’
Based on our review of this string of e-mails, it appears that the quote citing ‘Mike’s Nature trick’ referred to a graph prepared for the front cover of a WMO report, ‘WMO Statement on the Status of the Global Climate in 1999,’ which was under development at the time (WMO, 2000). While few details were provided about the cover graphic, the caption and text box referring to it discussed the uncertainties involved in historical temperature reconstructions. The figure developed for this WMO report is unrelated to the IPCC process. While the graph would have been clearer about its sources had it overlaid the instrumental record in a separate color rather than merging the instrumental and proxy records together, this is an issue for WMO to address, considering the purpose of the report and the role of the graph on the cover. However, the graph and the method used to prepare the graph bear no relationship to the detailed technical discussion found in the assessment reports which EPA relied upon in the development of the TSD, which addressed all of the evidence and discussed the issue of tree ring data in the context of the entire body of paleoclimate evidence.
In fact, the evidence shows that the research community was fully aware of these issues and was not hiding or concealing them. The figure as developed for the WMO report was not used by the IPCC. Rather, the Third Assessment Report, published in 2001 (IPCC, 2001), had a full paragraph on ‘important caveats to be kept in mind’ regarding paleoclimate reconstructions that use tree rings. The paragraph included a discussion of the divergence issue and concluded that tree rings were best used as one of multiple proxies rather than being the sole source for a climate reconstruction. The AR4, published in 2007, addressed the divergence issue in a paragraph that began ‘Several analyses of ring width and ring density chronologies, with otherwise well established sensitivity to temperature, have shown that they do not emulate the general warming trend evident in instrumental temperature records over recent decades.’ Figure 6.10 in the AR4 and Figure S-1 in the NRC report (which was replicated in Figure 4.3 in the EPA TSD) both clearly show a separate line for the instrumental temperature record and the lines provided by proxy reconstructions, unlike the WMO figure. Hence, the IPCC and additional assessment literature relied on by EPA transparently document, illustrate, and discuss the divergence issue, as did EPA in Volume 2 of the RTC document. The evidence (the CRU e-mails) the petitioners cite on this issue has no relevance to the Administrator’s Findings.
The petitioners highlight the use of the word ‘trick.’ Contrary to the responses described by the Southeastern Legal Foundation, more formal reviews did not find that this phrase indicated an ‘attempt to cook the books’. The UK Science & Technology Committee (2010) reviewed this specific e-mail in its investigation of the disclosure of climate data from the CRU. This investigation concluded: ‘We are content that the phrases such as ‘trick’ or ‘hiding the decline’ were colloquial terms used in private e-mails, and the balance of evidence is that they were not part of a systematic attempt to mislead.’
As part of a set of e-mails that purportedly ‘confirm certain scientists’ efforts to ‘artificially adjust’ data,’ the Coalition for Responsible Regulation quotes from a 1996 e-mail by Gary Funkhouser of the University of Arizona: ‘I really wish I could be more positive about the Kyrgyzstan material, but I swear I pulled every trick out of my sleeve trying to milk something out of that.’
The Funkhouser quote regarding ‘trying to milk something out of that’ does not appear to be an example of an effort to ‘artificially adjust’ data. The full e-mail contains quotes such as ‘The data’s tempting but there’s too much variation even within stands’ and ‘Not having seen the sites I can only speculate, but I’d be optimistic if someone could get back there and spend more time collecting samples, particularly at the upper elevations.’15 Thus, this e-mail is most likely an example of a researcher who was aware of the limits of the data, was committed to high-quality data, and did not draw stronger conclusions than the data warranted. We further note that this e-mail is almost 14 years old and has no relevance to the literature assessed in the reports upon which EPA relied or to the Finding.
As part of a set of e-mails that purportedly ‘confirm certain scientists’ efforts to ‘artificially adjust’ data,’ the Coalition for Responsible Regulation quotes from a 2006 e-mail from Keith Briffa (a scientist at CRU) noting that ‘the PC1 time series in the Mann et al. analysis was adjusted to reduce the positive slope in the last 150 years,’ and that ‘this adjustment was arbitrary and the link between Bristlecone pine growth and CO2 is, at the very least, arguable.’16
The Briffa e-mail does not appear to raise questions about artificially adjusting the data in the analysis. While Briffa appears to disagree with the adjustments used by Mann, he does note that the Mann adjustment was ‘following an earlier paper by Lamarche et al.’17 based on CO2 fertilization estimates and therefore is consistent with the literature. Disagreement among scientists is part of the normal scientific process and is not evidence of deceit.
As part of a set of e-mails that purportedly ‘confirm certain scientists’ efforts to ‘artificially adjust’ data,’ the Coalition for Responsible Regulation quotes from a 2003 e-mail from Michael Mann (then of the University of Virginia) stating that ‘it would be nice to try to ‘contain’ the putative ‘MWP’ [Medieval Warm Period].’18
With regard to the Mann quote about containing the MWP, petitioners do not explain why the word ‘contain’ is not a reference to timeframe rather than, as they imply, data manipulation. The full quote is: ‘I think that trying to adopt a timeframe of 2K [2000 years], rather than the usual 1K [1000 years], addresses a good earlier point that Peck made w/ regard to the memo, that it would be nice to try to ‘contain’ the putative MWP.’19 A 1,000-year-long reconstruction would only cover (i.e., ‘contain’) part of the MWP, whereas 2,000 years would contain the entire MWP. In sum, although petitioners read into the quotes evidence of artificial adjustment of data, when examined in their entirety the quotes do not support this conclusion.
Petitioners make a number of arguments claiming that the MWP may have been warmer than present temperatures. For example, petitioners claim that the divergence issue undermines our understanding of climate change, because ‘if tree rings missed the warming of the latter half of the 20th century, they may also have missed the warming in the MWP.’ They claim that all proxy data are unreliable for use in any time period. For example, Peabody Energy states that the effect of the divergence between tree ring proxy data and recent temperature data is to ‘reduce the mean and range of reconstructed values compared to what they actually were,’ thereby making comparisons between the MWP and recent observed temperatures ‘virtually impossible.’
Petitioners ascribe great significance to the reconstructions of paleoclimate and their importance to our understanding of anthropogenic global warming and claim this evidence was misused by the IPCC. For example, Peabody Energy states that because both the IPCC Third and Fourth Assessment Reports relied on tree rings ‘to significantly downplay the MWP and LIA [Little Ice Age],’ this casts doubt on ‘their conclusions that current warming is likely unprecedented in 1000 years.’ Southeastern Legal Foundation claims that ‘[p]rior naturally occurring episodes of warming and cooling present a problem of proof for the promoters of catastrophic anthropogenic global warming (‘AGW’), namely how to explain that modern warming is caused by man when prior episodes of equal or greater warming self-evidently were not.’
The Finding states that ‘We agree there was a Medieval Warm Period in many regions but find the evidence is insufficient to assess whether it was globally coherent. Our review of the available evidence suggests that Northern Hemisphere temperatures in the MWP were probably between 0.1°C and 0.2°C below the 1961—1990 mean and significantly below the level shown by instrumental data after 1980. However, we note significant uncertainty in the temperature record prior to 1600 A.D.’
The TSD addresses paleoclimate in more depth, finding that ‘Less confidence can be placed in large-scale surface temperature reconstructions for the period from 900 to 1600 A.D’ based on the NRC (2006) report, and further stating that ‘Considering this study and additional research, the IPCC (2007c) concluded: ‘Paleoclimatic information supports the interpretation that the warmth of the last half century is unusual in at least the previous 1,300 years.’ However, like NRC (2006), IPCC cautions that uncertainty is significant prior to 1600 (Jansen et al., 2007).’
With regard to the claim that the IPCC AR4 ‘downplays’ the MWP and LIA, Jansen et al. (2007) clearly acknowledge the existence of and analyze the best estimates of the magnitude of both periods, noting the uncertainty due to limitations of past data, and conclude that
The evidence currently available indicates that NH mean temperatures during medieval times (950—1100) were indeed warm in a 2-kyr context and even warmer in relation to the less sparse but still limited evidence of widespread average cool conditions in the 17th century (Osborn and Briffa, 2006). However, the evidence is not sufficient to support a conclusion that hemispheric mean temperatures were as warm, or the extent of warm regions as expansive, as those in the 20th century as a whole, during any period in medieval times (Jones et al., 2001; Bradley et al., 2003a,b; Osborn and Briffa, 2006).
Therefore, the TSD appropriately highlighted the uncertainty that was reflected in both the NRC report and the IPCC statements. The NRC and IPCC assessed the best available science and came to conclusions that appropriately reflected the underlying literature. The petitioners raise nothing new that was not already addressed by existing characterizations of the uncertainties involved in paleoclimate reconstructions made by the IPCC, NRC, and EPA in the TSD.
Petitioners present a study by Loehle and McCulloch (2008) that calculated a 2,000-year reconstruction without using tree rings. Petitioners claim that this study shows that the MWP was as warm as the late 20th century.
EPA has reviewed the petitioners’ submission of Loehle and McCulloch (2008) and finds that it was not impracticable to raise the objection during the public comment period and that the reasons for the objection did not arise between June 24, 2009, and February 16, 2010. Petitioners could have submitted this study during the comment period on the proposed Endangerment Finding. Although, in most cases, the petitioners provide excerpts from the CRU e-mails in support of their assertions, EPA’s review has determined that this evidence does not support their allegations, and that the information submitted by petitioners on these topics was available well before the comment period for the Endangerment Finding. Petitioners have not shown why it would have been impractical for them to have submitted this study then. Indeed, similar points were already raised, and responded to, in the RTC. Despite the fact that these objections fail to meet the statutory timeframe for evidence supporting a petition for reconsideration, we briefly explain why, contrary to petitioners’ allegation, they fail to call into question the Finding.
The petitioners presented a reconstruction from Loehle and McCulloch (2008) that claimed that without using tree rings they could show that the average of the warmest three decades of the MWP was a little warmer (though not in a statistically significant sense) than the three decades ending in 2006. The paper uses the straight average of 18 proxies, apparently with no attempt to weight the proxies to take into account the geographic distribution of the sites or the strength of their ability to detect temperature changes. In contrast, Mann et al. (2008) presented reconstructions both with and without tree rings, using geographic and other weighting corrections, and unlike Loehle and McCulloch (2008), they found that ‘Recent warmth appears anomalous for at least the past 1,300 years whether or not tree-ring data are used.’ We acknowledge that this is an area of ongoing research and that disagreements among researchers are to be expected, and as noted, the assessment literature and the TSD appropriately characterize the Medieval Warm Period and the uncertainties associated with reconstructing temperatures prior to 1600.
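The weighting issue described above can be sketched in a few lines. This is a toy illustration under stated assumptions (synthetic data, hypothetical site latitudes, a simple cosine-latitude weight), not the actual method of Loehle and McCulloch (2008) or Mann et al. (2008):

```python
import numpy as np

# Toy illustration of unweighted vs. area-weighted proxy averaging.
# All data below are synthetic and the latitudes are hypothetical.

rng = np.random.default_rng(0)
lats = np.array([70, 68, 65, 60, 30, 10])       # hypothetical site latitudes (deg N)
proxies = rng.normal(0.0, 0.5, size=(6, 100))   # synthetic anomaly series, 100 years
proxies[:4] += 0.3                              # suppose the clustered high-latitude
                                                # sites happen to run warm

straight_mean = proxies.mean(axis=0)            # equal weight per site

w = np.cos(np.deg2rad(lats))                    # crude area weight by latitude
w /= w.sum()
weighted_mean = w @ proxies                     # down-weights the clustered sites

# By construction, the clustered warm sites pull the straight average up
# relative to the area-weighted one.
print(straight_mean.mean(), weighted_mean.mean())
```

When several proxy sites cluster in one region, an unweighted mean lets that region dominate the hemispheric estimate; even a crude area weight changes the answer, which is why the choice of weighting scheme matters when comparing reconstructions.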
The Southeastern Legal Foundation claims that ‘The importance of these proxy reconstructions to the AGW conjecture can hardly be overstated. Since the Climategate documents gravely impeach the validity and reliability of these reconstructions, they are obviously of central relevance to the Endangerment Finding.’ Peabody Energy claims that the doubts raised in the CRU e-mails would make it harder to attribute recent warmth to human activity. The petitioners claim that if the current warming is not unprecedented, this undermines our ability to determine that the recent warming is due to human influences.
Peabody Energy suggests that EPA relies on reconstructions of historic temperatures over the last 1,000 to 2,000 years as one of the three lines of evidence supporting the statement that GHGs are the root cause of recently observed climate change, but that neither the IPCC nor NRC provide ‘compelling’ evidence that the temperatures of the last several decades are unusual in that context. Peabody Energy further refers to the CRU e-mails discussing uncertainties in historic temperatures, including the divergence in the tree-ring temperature reconstruction in the late part of the 20th century.
The assertions that proxy reconstructions are of ‘central relevance to the Endangerment Finding’ and that the importance of these reconstructions ‘can hardly be overstated’ are inconsistent with the scientific literature on attribution. On this issue, the TSD states that the unusual nature of global surface temperature changes in the past decades is one line of evidence contributing to the confidence in statements about attribution (pg. 47), relying on Karl et al. (2009). The Finding makes the same argument (page 66523). The TSD also notes in Box 5.1 that the IPCC AR4 found that paleoclimate analyses increased confidence in the role of external influences on climate, relying on Hegerl et al. (2007). However, the relative warmth of the MWP and Holocene are just part of the paleoclimate data referred to by Karl et al. and Hegerl et al., and historical paleoclimate data is only one line, and not the primary line, of evidence supporting attribution.
According to Hegerl et al. (2007), paleoclimate data is used to ‘test understanding of the climate response to external forcings’ (where external forcings include volcanic eruptions, solar variability, and changes in GHGs, but not internal variability such as El Niño events). Hegerl et al. note that analyses of the past 1,000 years focus on responses to changes in solar radiation and volcanism. The other time periods mentioned in the chapter are the mid-Holocene (6,000 years ago) and the last glacial maximum (21,000 years ago). Hegerl et al. state that the similarity between modeled and reconstructed temperatures over the past 1,000 years increases confidence that the inability to simulate modern warming without anthropogenic forcing shows the impact of human activity. However, they also note that the uncertainty in both temperature reconstructions and the solar and volcanic forcings makes it difficult to fully assess models in this way. For example, a reconstruction that showed a colder MWP simultaneous with a warmer sun would be harder to explain than a warmer MWP.
The NRC (2006) explicitly addressed this issue at length:
Surface temperature reconstructions have the potential to provide independent information about climate sensitivity and about the natural variability of the climate system that can be compared with estimates based on theoretical calculations and climate models, as well as other empirical data. However, large-scale surface temperature reconstructions for the last 2,000 years are not the primary evidence for the widely accepted views that global warming is occurring, that human activities are contributing, at least in part, to this warming, and that the Earth will continue to warm over the next century. The primary evidence for these conclusions (see, e.g., NRC 2001) includes:
- Measurements showing large increases in carbon dioxide and other greenhouse gases beginning in the middle of the 19th century, instrumental measurements of upward temperature trends and concomitant changes in a host of proxy indicators over the last century.
- Simple radiative transfer calculations of the forcing associated with increasing greenhouse gas concentrations together with reasonable assumptions about the sign and magnitude of climate feedbacks.
- Numerical experiments performed with state-of-the-art climate models.
Supporting evidence includes:
- The observed global cooling in response to volcanic eruptions is consistent with sensitivity estimates based on climate models.
- Proxy evidence concerning the atmospheric cooling in response to the increased ice cover and the decreased atmospheric carbon dioxide concentrations at the time of the last glacial maximum is consistent with sensitivity estimates based on climate models.
- Documentation that the recent warming has been a nearly worldwide phenomenon.
- The stratosphere has cooled and the oceans have warmed in a manner that is consistent with the predicted spatial and temporal pattern of greenhouse warming.
Surface temperature reconstructions for the last 2,000 years are consistent with other evidence of global climate change and can be considered as additional supporting evidence. In particular, the numerous indications that recent warmth is unprecedented for at least the last 400 years and potentially the last several millennia, in combination with estimates of external climate forcing variations over the same period, support the conclusion that human activities are responsible for much of the recent warming. However, the uncertainties in the reconstructions of surface temperature and external forcings for the period prior to the instrumental record render this evidence less conclusive than the other lines of evidence cited above. It should also be noted that the scientific consensus regarding human-induced global warming would not be substantively altered if, for example, the global mean surface temperature 1,000 years ago was found to be as warm as it is today.
It is important to point out that, in drawing the conclusion that multiple lines of evidence indicate GHGs are the root cause of recently observed climate change, EPA does not claim that any individual line of evidence by itself necessarily forms the ‘compelling’ evidence of human-induced climate change. Rather, the multiple lines of evidence collectively form the compelling evidence.
EPA did not state that the reconstructions of prior temperatures alone provide evidence that the recent warming is unusual, nor that the evidence that recent warming is unusual is ‘compelling’ as stated by the petitioners. Instead, EPA appropriately reflected the uncertainty raised in the scientific literature in its statements, as described in Subsection 1.1.2 and many responses to comments in Subsection 1.1.3 of this Response to Petitions (RTP) document. The TSD appropriately highlighted the uncertainty that was in both the NRC report and the IPCC statements. EPA properly found that this evidence ‘supports’ the interpretation that the recent warming is unusual. The arguments raised by Petitioners about the MWP and about other issues related to temperature reconstructions do not change or undermine the basis for this conclusion.
In addition, even if the global mean temperature was as warm or warmer during the MWP as it is today (which is not consistent with the best estimates of the scientific assessments, nor supported by petitioners’ arguments), recent warming would still be ‘unusual,’ even if not ‘unprecedented.’ EPA never stated that recent warming was ‘unprecedented.’ In fact, the TSD reserved the use of the term ‘unprecedented’ for the rate of increase of radiative forcing in the past 10,000 years and for the likely impacts on ecosystems due to a combination of changes in climate and other global change drivers over the next 100 years, should GHG emissions continue at or above current rates.
Additionally, uncertainty over the exact temperatures in the MWP does not change the fact that other critical lines of evidence strongly support the view that GHGs are the root cause of recent warming: our basic physical understanding of how GHGs trap heat, how the climate system responds to an increase in GHGs, and how other human and natural factors influence climate; and the broad, qualitative consistency between observed changes in climate and computer model simulations of how climate would be expected to change in response to human activities. The NRC report ‘Advancing the Science of Climate Change’ (NRC, 2010) summarizes the many lines of evidence that support the conclusion that most of the observed warming over at least the last several decades is due to human activities:
- Both the basic physics of the greenhouse effect and more detailed calculations using sophisticated models of atmospheric radiative transfer indicate that increases in atmospheric GHGs should lead to warming of the Earth’s surface and lower atmosphere.
- Earth’s surface temperature has unequivocally risen over the past 100 years, to levels not seen in at least several hundred years and possibly much longer, at the same time that human activities have resulted in sharp increases in CO2 and other GHGs [as discussed above].
- Detailed observations of temperatures, GHG increases, and other climate forcing factors from an array of instruments, including Earth-orbiting satellites, reveal an unambiguous correspondence between human-induced GHG increases and planetary warming over at least the past three decades, in addition to substantial year-to-year climate variability.
- The vertical pattern of atmospheric temperature change over the past few decades, with warming in the lower atmosphere and cooling in the stratosphere, is consistent with the pattern expected due to GHG increases and inconsistent with the pattern expected if other climate forcing agents (e.g., changes in solar activity) were responsible.
- Estimates of changes in temperature and forcing factors over the first seven decades of the 20th century are slightly more uncertain and also reveal significant decadal-scale variability, but nonetheless indicate a consistent relationship between long-term temperature trends and estimated forcing by human activities.
- The horizontal pattern of observed temperature change over the past century, with stronger warming over land areas and higher latitudes, is consistent with the pattern of change expected from a persistent positive climate forcing.
- Detailed numerical model simulations of the climate system are able to reproduce the observed spatial and temporal pattern of warming when anthropogenic GHG emissions and aerosols are included in the simulation, but not when only natural climate forcing factors are included.
- Both climate model simulations and reconstructions of temperature variations over the past several centuries indicate that the current warming trend cannot be attributed to natural variability in the climate system.
- Estimates of the climate forcing and temperature changes on a range of timescales, from the several years following volcanic eruptions to the 100,000+ year Ice Age cycles, yield estimates of climate sensitivity that are consistent with the magnitude of observed climate change and estimated climate forcing.
- Finally, there is not any compelling evidence for other possible explanations of the observed warming, such as changes in solar activity, changes in cosmic ray flux, natural climate variability, or release of heat stored in the deep ocean or other climate system components.
Another example of evidence supporting human influence on recent warming given by the TSD is that observations of natural influences such as solar variability and historical volcanism over the past century are not consistent with the pattern of observed temperature trends. In general, Karl et al. (2009) found that a number of aspects of modeled and theoretically predicted responses to human-induced climate change were consistent with observations. As Karl et al. state:
This conclusion rests on multiple lines of evidence. Like the warming ‘signal’ that has gradually emerged from the ‘noise’ of natural climate variability, the scientific evidence for a human influence on global climate has accumulated over the past several decades, from many hundreds of studies. No single study is a ‘smoking gun.’ Nor has any single study or combination of studies undermined the large body of evidence supporting the conclusion that human activity is the primary driver of recent warming.
The IPCC, NRC, and TSD appropriately reflected the uncertainty involved in temperature reconstructions, including those for the MWP and the early Holocene. This involved considering the entire body of evidence, including the kinds of evidence and arguments presented by petitioners. Petitioners’ evidence and arguments do not warrant any revisions to these conclusions and their related caveats on degree of certainty. In general, petitioners have not considered the breadth of evidence on these issues and the clear recognition and documentation of the uncertainty concerning temperature reconstructions of the past. They have instead relied upon a limited selection of e-mails, studies, and other evidence that does not warrant the broad conclusions they have drawn.
The issue of divergence of some tree ring records was discussed in depth in the assessment literature and in the RTC document, and the petitioners have not shown that these discussions were incorrect or incomplete. While the graph used for the cover of a 1999 World Meteorological Organization (WMO) report did not disclose this divergence, the graph is both outdated and unrelated to any of the assessment reports or the Findings. The quotes provided by the petitioners as examples of ‘deliberate manipulation’ or ‘artificial adjustments’ are taken out of context and do not support this view.
The petitioners have not shown that the uncertainty concerning the temperature during historical time periods is any different or higher than the degree of uncertainty clearly recognized in the assessment reports and considered by the Administrator in the Endangerment Finding. EPA stated that ‘We agree there was a Medieval Warm Period in many regions but find the evidence is insufficient to assess whether it was globally coherent. Our review of the available evidence suggests that Northern Hemisphere temperatures in the MWP were probably between 0.1°C and 0.2°C below the 1961–1990 mean and significantly below the level shown by instrumental data after 1980. However, we note significant uncertainty in the temperature record prior to 1600 A.D.’ The assessment literature did not conclude that the current warming is definitively unique or unprecedented, but that it is unusual, properly accounting for the uncertainty associated with temperature reconstructions. The petitioners have provided no evidence that indicates a different conclusion is appropriate in light of all of the scientific evidence, or that the Administrator failed to properly account for this uncertainty. The broad conclusions drawn by petitioners are not warranted by the limited evidence they rely upon.
The CRU e-mails and other evidence do not materially change or warrant any less reliance on the paleoclimate temperature reconstructions. Petitioners’ objections to certain aspects of the science behind paleoclimate reconstructions are without merit, as discussed above. Petitioners fail to consider or contest the other scientific bases for the temperature reconstructions, and fail to provide arguments that take into consideration the entire body of evidence. Petitioners also mischaracterize the limited role that paleoclimate temperature reconstructions play in attributing recent warming to atmospheric concentrations of GHGs. EPA and the assessment literature consider this one of three lines of evidence, and largely as supporting evidence, in light of the recognized uncertainties in this area of science. Petitioners' arguments do not support changing the degree of reliance EPA properly placed on this one line of evidence.
The CRU e-mails and other evidence also do not materially change or warrant any less reliance on the other important lines of evidence linking GHGs and climate change: our basic physical understanding of the effects of changing GHG concentrations and other factors; the broad, qualitative consistency between observed changes in climate and the computer model simulations of how climate would be expected to change in response to human activities; and other important evidence of an anthropogenic fingerprint in the observed warming. All of the lines of evidence cited in the TSD concerning attribution, including paleoclimate temperature reconstructions, properly reflect the literature with respect to the degree of certainty involved in each, and the scientific conclusion on attribution drawn from them is based on consideration of the entire body of evidence, not on any single element. Petitioners do not attempt to assess this entire body of evidence and do not show that, when considered as a whole, it leads to a different conclusion on attribution from that found by EPA.
Supporting the attribution of GHGs to climate change is the vertical pattern of temperature change that has been observed in recent decades, as briefly described earlier in this volume and discussed in Section 5 of the TSD. ‘Fingerprint studies’ are often used to assess and identify the human contribution to observed changes in the vertical temperature structure of the atmosphere. Fingerprint studies compare the patterns of observed temperature changes with results from models that account for GHG levels and determine whether or not the similarities could have occurred by chance. The assessment literature documents the identification of these fingerprints—where observations and models match and similarities are not expected to have occurred by chance—which the TSD summarizes as follows:
Fingerprint studies have identified GHG and sulfate aerosol signals in observed surface temperature records, a stratospheric ozone depletion signal in stratospheric temperatures, and the combined effects of these forcing agents in the vertical structure of atmospheric temperature changes (Karl et al., 2006).
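The logic of a fingerprint comparison can be illustrated with a toy calculation (hypothetical patterns and synthetic noise, not the method of Karl et al. (2006) or any actual detection-and-attribution study): correlate an ‘observed’ change pattern with a model-predicted pattern, then ask how often noise alone matches the pattern as well.

```python
import numpy as np

# Toy illustration of the fingerprint logic: how often does noise alone
# correlate with a model-predicted pattern as strongly as the observations do?
# All quantities here are synthetic and purely illustrative.

rng = np.random.default_rng(1)
n_points = 500                                   # hypothetical spatial grid points

fingerprint = rng.normal(size=n_points)          # hypothetical model-predicted pattern
observed = 0.6 * fingerprint + rng.normal(size=n_points)  # pattern plus weather noise

r_obs = np.corrcoef(observed, fingerprint)[0, 1]

# Null distribution: correlation of the fingerprint with pure noise fields
r_null = [np.corrcoef(rng.normal(size=n_points), fingerprint)[0, 1]
          for _ in range(1000)]
p_value = np.mean(np.abs(r_null) >= abs(r_obs))

print(round(r_obs, 2), p_value)
```

In this synthetic setup the observed pattern correlates far more strongly with the fingerprint than any noise realization does, which is the sense in which a match is ‘not expected to have occurred by chance.’ Real studies use physically based noise estimates from control model runs rather than white noise.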
In many parts of the world, the anthropogenic fingerprint has been unambiguous. However, the fingerprint in the tropics only recently has become apparent, as the TSD explains:
However, an important inconsistency may have been identified in the tropics. In the tropics, most observational data sets show more warming at the surface than in the troposphere, while almost all model simulations have larger warming aloft than at the surface (Karl et al., 2006). Karl et al. (2009) state that when uncertainties in models and observations are properly accounted for, newer observational data sets are in agreement with climate model results.
EPA references Karl et al. (2009) (the USGCRP 2009 assessment), which synthesizes the results of several new studies to conclude that models are in general agreement with observations in the tropical troposphere. EPA referred to these same studies in Volume 3 of the RTC document, in response to public comment.
Peabody Energy asserts that the CRU e-mails call into question the reliability of studies supporting the existence of a human (i.e., anthropogenic GHG) ‘fingerprint’ in the vertical temperature structure of the atmosphere (i.e., in the troposphere) over the tropics. Peabody Energy claims that the CRU e-mails reveal that the authors of fingerprint studies published their results in an ‘inappropriate and indeed unethical way.’ Peabody Energy also suggests that the lack of an anthropogenic fingerprint in the tropics would challenge the assessment literature finding of consistency between models and observations, which supports a primary line of evidence linking GHGs and climate change.
Specifically, Peabody Energy claims the authors of a significant article (Santer et al., 2008) supporting the existence of an anthropogenic tropical fingerprint, along with some other scientists with whom Santer corresponded, exerted improper influence to delay the print publication of a paper by Douglass et al. (2007) that did not identify a tropical fingerprint. Peabody Energy’s chief allegation is that the print publication of Douglass et al. in the International Journal of Climatology (IJC), a paper which was originally published online in December 2007, was stalled until November 2008 to coincide with publication of Santer et al. (2008), a paper which rebutted the Douglass et al. findings. According to Peabody Energy, the simultaneous print publication of Santer et al. (2008) and Douglass et al. (2007) prevented Douglass et al. from having an opportunity for a simultaneous response to Santer’s paper. The petitioner claims, ‘The gamesmanship behind this strategy diverted the process of scientific inquiry from its proper path and tainted the material on which the Agency now seeks to rely.’ Peabody Energy claims that dozens of e-mails document inappropriate conduct on the part of the authors and editors of the journals that published the studies supporting the presence of a tropical fingerprint.
Peabody Energy does not offer any substantive basis for questioning the technical validity of the studies cited by EPA (and USGCRP), which rebut Douglass et al. The Santer et al. study convincingly detailed a number of errors in the Douglass et al. study, the most blatant of which was the inappropriate calculation of the uncertainty range of the model estimates, which was too narrow by a factor of almost five in Douglass et al. (2007). Peabody’s claims of lack of reliability of the studies at issue are without scientific basis.
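The statistical point behind this criticism can be sketched as follows, using illustrative numbers rather than values from either paper: a consistency range built on the standard error of the ensemble mean is far narrower than one built on the ensemble spread itself.

```python
import math

# Hedged sketch of the statistical issue (numbers are illustrative, not taken
# from either paper). Testing whether one observed trend is consistent with an
# ensemble of model trends:
#
#   - using the ensemble spread sigma asks "could the observation have come
#     from the model distribution?"
#   - using the standard error of the ensemble MEAN (sigma / sqrt(N)) asks
#     "is the observation equal to the average model?", a far stricter test.

n_models = 22                             # hypothetical ensemble size
sigma = 0.10                              # hypothetical inter-model spread (deg C/decade)

spread_range = sigma                      # range relevant to consistency with the models
sem_range = sigma / math.sqrt(n_models)   # much narrower standard-error-based range

print(spread_range / sem_range)           # sqrt(22) ~ 4.69: narrower by almost five
```

With an ensemble of roughly 20 runs, the standard-error-based range is narrower by almost a factor of five, so observations can appear ‘inconsistent’ with models under that test even when they fall comfortably within the spread of individual model runs.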
We also note that more than 18 months have passed since publication of Santer et al. (2008). Certainly, Douglass et al. have had ample opportunity to publish a rebuttal and defend their analysis during this time, either as a reply to Santer et al. in the IJC or as a new analysis in a different journal. Such a step is common in areas of active scientific discourse, where scientists go back and forth in the literature until the issues are resolved. However, to our knowledge, Douglass et al. have not responded with a scientific response that has been accepted for publication in a peer-reviewed journal.
Peabody instead relies upon an assertion of lack of scientific credibility because of the timing of the article, not its substance. The timing of the study’s publication bears no relationship to its scientific or technical validity. At most, Peabody is concerned that the paper publication of an article already available online occurred at the same time as a rebuttal. The lack of an opportunity for a simultaneous response is unrelated to the scientific merit of the study, especially when it appears no response has been forthcoming. There is no scientific merit to Peabody’s argument.
Furthermore, Peabody Energy is incorrect in asserting that EPA relied solely on Santer et al. to rebut Douglass et al. (2007). In response to public comment, RTC (3-7) also cites studies by Haimberger et al. (2008) and Allen and Sherwood (2008), which were published in other journals (Journal of Climate and Nature Geoscience, respectively) and assessed by USGCRP.
Peabody Energy attempts to implicate several of the authors of the Haimberger et al. and Allen and Sherwood studies by linking them to the IJC dispute, because several of the authors were copied on some of the CRU e-mails in which the Douglass et al. (2007) results are discussed. However, Peabody Energy provides no evidence these authors were in any way involved in the publication timing of either the Douglass et al. or Santer et al. paper in the IJC.
In summary, questions raised by petitioners concerning the timing of the paper publication of papers in the IJC do not provide any substantive basis for questioning the validity of the scientific conclusions of the studies at issue, or of the assessment literature (i.e., Karl et al., 2009) and EPA on this issue. For more detailed discussion of the CRU e-mails pertaining to the publication of studies on the existence of a human fingerprint in the vertical temperature structure of the atmosphere over the tropics and what the e-mails signify, refer to Subsection 3.3.3 in Volume 3 of this RTP document.
Peabody Energy suggests that Ben Santer, a research scientist at Lawrence Livermore Laboratory and lead author of the Santer et al. (2008) study discussed above, improperly accuses Douglass et al. of neglecting datasets that do not suggest a discrepancy between observations and models in the vertical structure of the atmosphere in the tropics. The petitioner quotes an e-mail from Melissa Free, co-author of a paper acknowledging the existence of a discrepancy between observations and models, which was also the basic conclusion of Douglass et al. (2007). Free’s e-mail states: ‘What about the implications of a real model-observation difference for upper-air trends? Is this really so dire?’ [20]
Peabody Energy quotes Santer’s response: ‘What is dire is Douglass et al.’s willful neglect of any observational datasets that do not support their arguments.’ [21] According to Peabody Energy, Santer was referring to two radiosonde datasets (RAOBCORE v1.3 and v1.4) that were excluded from the analysis in Douglass et al. (2007). Peabody Energy states that Douglass et al. (2007) explained in an addendum that these datasets are faulty.
The comment by Santer about Douglass’ neglect of datasets was an informal communication with colleagues, which was written in December 2007, before Douglass et al. submitted the addendum to which Peabody Energy refers; the addendum was submitted in January 2008. We note that this addendum was not accepted for publication and seems to be available only via the author’s personal website. The formal rebuttal to Douglass et al. remains the peer-reviewed paper by Santer et al., which states:
Our results contradict a recent claim that all simulated temperature trends in the tropical troposphere and in tropical lapse rates are inconsistent with observations. This claim was based on use of older radiosonde and satellite datasets, and on two methodological errors: the neglect of observational trend uncertainties introduced by interannual climate variability, and application of an inappropriate statistical ‘consistency test.’
As stated in the preceding response, to date, Douglass et al. have not published a rebuttal to Santer et al.’s formal critique. Furthermore, the quote from Melissa Free does not acknowledge the existence of a discrepancy between observations and models; it is raising the possibility as a hypothetical.
According to the latest National Oceanic and Atmospheric Administration (NOAA) data (NOAA, 2010a), the decade spanning 2000–2009 was substantially warmer than the prior decade (1990–1999). Every year from 2000 to 2009 was warmer than the 1990–1999 average. Using data available through 2008, the TSD in Box 4.1 states that ‘the warming rate in the last 30-year period (averaging about 0.30°F [0.17°C] per decade) is the greatest in the observed record.’ However, depending on one’s choice of temperature record and the start and stop dates chosen for computing a trend in that record, it is possible to demonstrate a slowdown in the rate of warming over the last 10 years or so relative to the rate of warming observed over the last several decades.
The assessment literature, as summarized in the TSD, emphasizes ‘that year-to-year fluctuations in natural weather and climate patterns can produce a period that does not follow the long-term trend’ (Karl et al., 2009). More recent assessment literature, not available for inclusion in the Endangerment TSD, states (NRC, 2010), ‘it is not appropriate to look at only a short period of the overall record (such as changes over just the last five or ten years) to infer major changes in the trajectory of global warming.’
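The point about short windows can be illustrated with synthetic data: a series with a steady underlying trend comparable to recent decades, plus year-to-year noise, routinely contains 10-year stretches whose fitted trend is flat or even negative. This sketch uses made-up numbers, not an actual temperature record.

```python
import numpy as np

# Synthetic illustration: a steady 0.17 C/decade trend plus year-to-year
# noise still yields widely scattered 10-year trends, which is why short
# windows are a poor guide to the long-term trajectory.

rng = np.random.default_rng(42)
years = np.arange(1970, 2010)
temps = 0.017 * (years - years[0]) + rng.normal(0.0, 0.1, size=years.size)

def trend_per_decade(x, y):
    # ordinary least-squares slope, scaled to deg C per decade
    return 10.0 * np.polyfit(x, y, 1)[0]

long_term = trend_per_decade(years, temps)
decadal = [trend_per_decade(years[i:i + 10], temps[i:i + 10])
           for i in range(years.size - 10 + 1)]

print(round(long_term, 3))          # recovers the underlying trend closely
print(min(decadal), max(decadal))   # 10-year trends scatter widely around it
```

The full-record fit recovers the underlying trend closely, while the 10-year fits scatter widely around it, mirroring the assessment literature's caution about inferring trajectory changes from five- or ten-year spans.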
The responses in this section address petitioners’ arguments about the meaning and implications of a possible slowdown in the rate of warming.
1.2.2.1 Assessment of Arguments Regarding Global Temperature Trends Over the Last Decade and Implications for Attribution of These Trends to GHGs
Peabody Energy alleges that the ‘CRU material’ contradicts EPA’s assertion that the results of climate models constitute a third line of evidence that can be relied on to attribute climate change to anthropogenic GHG emissions. To support this conclusion, Peabody Energy refers to CRU e-mails in which scientists discuss the possible slowdown in warming over the last 10 years or so.
This issue is not new and was the subject of public comment during development of EPA’s Findings. As discussed in the TSD, analysis of surface and lower tropospheric temperature data over the last 10 or so years indicates that the rate of warming may have temporarily slowed, although the magnitude of the slowdown varies depending on dataset analyzed and choice of start date. Contrary to the argument of the petitioner, such a possible slowdown does not ‘eviscerate’ the scientific support for climate change, which is a long-term process. As explained in Box 4.1 of the TSD:
It is important to recognize that year-to-year fluctuations in natural weather and climate patterns can produce a period that does not follow the long-term trend (Karl et al., 2009). Thus, each year will not necessarily be warmer than every year before it, though the long-term warming trend continues (Karl et al., 2009).
In addition, the recent NRC (2010) states: ‘Individual years, or even individual decades, can deviate from the long term trend due to natural climate variability. Thus, it is not appropriate to look at only a short period of the overall record (such as changes over just the last five or ten years) to infer major changes in the trajectory of global warming.’
As demonstrated by the detailed responses below, the petitioner’s evidence does not warrant the conclusion that current explanations for the possible slowdown in warming are invalid, or that this slowdown undermines our confidence in the utility of climate models for either attributing or projecting climate change over appropriately long timescales.
In support of the argument that the short-term slowdown in warming undermines the science supporting the Findings, Peabody Energy specifically highlights an e-mail from climate scientist Kevin Trenberth (Head of the Climate Analysis Section at the National Center for Atmospheric Research) stating, ‘The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t.’ On this basis, Peabody Energy asserts that ‘Trenberth was unconvinced that the recent lack of warming was consistent with the scientific understanding of the climate system on which the models are based.’ Peabody Energy concludes that ‘Trenberth’s statement eviscerates the grounds for EPA’s Endangerment Finding’ and posits that ‘if, as Trenberth says, the science is too uncertain to determine whether GHG reductions will produce a measurable climate response, then there is no basis to regulate and no basis to express confidence that anthropogenic GHG emissions are primarily responsible for the warming of the last several decades.’
Peabody Energy’s interpretation of the Trenberth e-mail is a profound misinterpretation of the context and meaning of his statement. It is not in any sense an admission that the science underlying the climate models is compromised. In response to questions about the context of the quote, Trenberth provided the following clarification on the website of his employer, the University Corporation for Atmospheric Research (Trenberth, 2010):
It is amazing to see this particular quote lambasted so often. It stems from a paper I published this year bemoaning our inability to effectively monitor the energy flows associated with short-term climate variability. It is quite clear from the paper that I was not questioning the link between anthropogenic greenhouse gas emissions and warming, or even suggesting that recent temperatures are unusual in the context of short-term natural variability.
This paper tracks the effects of the changing Sun, how much heat went into the land, ocean, melting Arctic sea ice, melting Greenland and Antarctica, and changes in clouds, along with changes in greenhouse gases. We can track this well for 1993 to 2003, but not for 2004 to 2008. It does NOT mean that global warming is not happening.
Furthermore, the paper Trenberth references in this clarification states the following (Trenberth, 2009):
The present-day climate is changing mainly in response to human-induced changes in the composition of the atmosphere as increases in greenhouse gases promote warming.
Peabody Energy’s assertion that Trenberth was implying that the ‘science is too uncertain to determine whether GHG reductions will produce a measurable climate response’ is a gross mischaracterization of the meaning and significance of both the quote and Trenberth’s position. Trenberth was not questioning the validity of climate models used for attribution and projections. He was identifying a gap in the Earth-observing system which, if filled, would improve our understanding of short-term variations in climate.
Petitioners (Competitive Enterprise Institute and Southeastern Legal Foundation) refer to a recent interview between CRU scientist Phil Jones and the BBC in which temperature trends are discussed, with an emphasis on the last seven to 15 years (Harrabin, 2010). They highlight the question put to Jones - ‘Do you agree that from 1995 to the present there has been no statistically-significant global warming?’ - to which he responds ‘Yes.’ The Competitive Enterprise Institute parenthetically mentions that Jones states that there had indeed been warming during the period but that it was not statistically significant. Based on this statement from Phil Jones, Southeastern Legal Foundation concludes: ‘Under the circumstances, it is apparent that there is no need to rush headlong into a potentially irrational and arbitrary decision [a reference to the need to regulate GHGs].’ Additionally, the Competitive Enterprise Institute notes that, in this interview, Jones indicates that there has been statistically insignificant cooling since 2002, appearing to contradict a statement (Reuters, 2008) he made on how the 2001-2007 period was warmer than the previous decade.
The Competitive Enterprise Institute and Southeastern Legal Foundation are not presenting new information or making a new argument. The Jones statement does not substantively add to what we have already responded to in the RTC (e.g., see RTC 2-41). The TSD is very clear that some datasets show a possible slowdown in the warming trend in the last decade or so. Box 4.1 of the TSD states:
Though most of the warmest years on record have occurred in the last decade in all available datasets, according to an analysis of the HadCRUT dataset in the ‘State of the Climate in 2008’ report (Peterson and Baringer, 2009), the rate of warming has, for a short time, slowed. The temperature trend calculated for January 1999 to December 2008 was about +0.13°F (+0.07°C) per decade, which is less than the 0.32°F (0.18°C) per decade trend recorded between 1979 and 2005 (or 0.30°F [0.17°C] per decade for 1980 to 2008 as stated above). However, NOAA [the National Oceanic and Atmospheric Administration] (NOAA, 2009a) and NASA [the National Aeronautics and Space Administration] (NASA, 2009) trends do not show the same marked slowdown for the 1999-2008 period. The NOAA trend was ~0.21°F (0.12°C) per decade while the NASA trend was ~0.34°F (0.19°C) per decade. The variability among datasets is a reflection of fewer data points and some differences in dataset methodologies. Analysis of trends for the years 2000, 2001, and 2002 through 2008 indicates a rather flat trend, with slight warming or cooling depending on choice of dataset and start date. It is important to recognize that year-to-year fluctuations in natural weather and climate patterns can produce a period that does not follow the long-term trend (Karl et al., 2009). Thus, each year will not necessarily be warmer than every year before it, though the long-term warming trend continues (Karl et al., 2009).
In other words, the fact that the warming rate may have temporarily slowed was recognized and taken into full account in the endangerment record.
In addition, the petitioner only includes part of Jones’ full response to the BBC’s interview question. Here are the question and complete response (Harrabin, 2010):
[BBC] - Do you agree that from 1995 to the present there has been no statistically-significant global warming?
[Jones] Yes, but only just. I also calculated the trend for the period 1995 to 2009. This trend (0.12°C per decade) is positive, but not significant at the 95% significance level. The positive trend is quite close to the significance level. Achieving statistical significance in scientific terms is much more likely for longer periods, and much less likely for shorter periods.
As the complete response indicates, the Competitive Enterprise Institute and Southeastern Legal Foundation fail to mention that Jones found the warming nearly statistically significant and that he explains the limitations of analyzing temperature trends over short time periods. The petitioners’ implication that the lack of statistically significant warming over a few years somehow negates the rationale for action is not supported by the science or Dr. Jones’ statements.
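Jones’ point that statistical significance is ‘much more likely for longer periods’ can be illustrated with a simple Monte Carlo sketch. The numbers below are illustrative assumptions (a true trend of 0.012°C per year, roughly his 0.12°C per decade, with invented interannual noise), not the actual HadCRUT data or Jones’ calculation:

```python
import numpy as np

def trend_t_stat(x, y):
    """OLS slope of a linear trend and its t-statistic."""
    n = len(x)
    xc = x - x.mean()
    slope = (xc @ y) / (xc @ xc)
    resid = y - (y.mean() + slope * xc)
    se = np.sqrt((resid @ resid) / (n - 2) / (xc @ xc))
    return slope, slope / se

rng = np.random.default_rng(0)

def significance_rate(n_years, trials=2000):
    """Fraction of simulated records whose trend is significant (|t| > ~2)."""
    x = np.arange(n_years, dtype=float)
    hits = 0
    for _ in range(trials):
        # Identical true warming in every trial; only the noise differs.
        y = 0.012 * x + rng.normal(0.0, 0.1, n_years)
        _, t = trend_t_stat(x, y)
        hits += t > 2.0  # ~95% two-sided threshold, used approximately
    return hits / trials

print(significance_rate(15))  # 15-year records: often not significant
print(significance_rate(50))  # 50-year records: almost always significant
```

With the same underlying warming rate, a 15-year record frequently fails the significance test while a 50-year record virtually never does, which is the statistical behavior Jones describes.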
Regarding the ‘cooling’ Jones mentions since 2002, as the Competitive Enterprise Institute itself notes, it is statistically insignificant. Furthermore, it is analyzed over a completely inappropriate time period for discerning long-term trends, as stated in both the scientific literature and the endangerment record. The fact that Jones computed a negative temperature trend over this very short period does not contradict the statement he had made the year before (in 2008) about the 2001-2007 period, which the news service Reuters described as follows (Reuters, 2008):
Underscoring an underlying rise in temperatures, British forecaster Phil Jones said 2001-07, with an average of 0.44 Celsius above the 1961-90 world average of 14 degrees, was 0.21 degrees warmer than the corresponding values for 1991-2000.
In this Reuters article, Jones was referring to absolute temperature levels in 2001-2007 compared to absolute temperatures in 1991-2000, rather than the 2002-2009 time series discussed in the BBC interview. Further, a very recent analysis by NOAA’s National Climatic Data Center (NCDC) (NOAA, 2010b), using its own temperature record, corroborates Jones’ statement. The NOAA analysis indicates that the 2000-2009 period was substantially warmer than the prior decade. Importantly, every year from 2000 to 2009 was warmer than the 1990-1999 average, as shown in the figure below.
To summarize, the TSD and endangerment record address short-term trends in the temperature record in a manner that is entirely consistent with the statements in Jones’ BBC interview. The petitioners have not provided any new information that alters our understanding of temperatures over the last 10 to 15 years, when temperatures have been the warmest in the instrumental record.
From the BBC interview discussed in Response 1-22, the Competitive Enterprise Institute and Southeastern Legal Foundation also refer to Jones’ statement that for the periods 1860 to 1880, 1910 to 1940, 1975 to 1998, and 1975 to 2009 ‘the warming rates’ are similar and not statistically significantly different from each other. The Competitive Enterprise Institute concludes:
If there has been no change in warming rates, this contradicts one of EPA’s basic contentions. During this same period, atmospheric levels of carbon dioxide and other greenhouse gases dramatically increased - according to EPA, to ‘essentially unprecedented levels.’ 74 FR 66,517. Yet if increasing levels of these gases did not produce a clear acceleration of warming, then the role of these gases as a major driver of temperature becomes even more dubious.
The warming rates for those four periods are similar, but it is important to note that these four periods are of different lengths (21, 31, 24, and 35 years), making trend comparisons incongruent. The TSD, comparing periods of equal lengths, notes that the recent warming rates are the greatest on record, though we note they are closely followed by (and therefore similar to) earlier periods (TSD Box 4.1):
The warming rate in the last 10 30-year periods (averaging about 0.30°F [0.17°C] per decade) is the greatest in the observed record, followed closely by the warming rate (averaging about 0.25°F [0.14°C] per decade) observed during a number of 30-year periods spanning the 1910s to the 1940s.
The Competitive Enterprise Institute’s assertion that similarity in warming rates between the present and past contradicts attribution of recent warming to anthropogenic GHGs is not supported by scientific analysis and ignores the scientific assessments on which EPA relied. In contrast to petitioners, the TSD accurately characterizes current scientific understanding, which does not look at GHGs in isolation but considers them in the context of other drivers of warming or cooling. For example, Section 5(a) of the TSD states:
The IPCC (Hegerl et al., 2007) finds that anthropogenic GHG emissions were one of the influences contributing to temperature rise during the early part of the 20th century along with increasing solar output and a relative lack of volcanic activity. During the 1950s and 1960s, when temperature leveled off, increases in aerosols from fossil fuels and other sources are thought to have cooled the planet. For example, the eruption of Mt. Agung in 1963 put large quantities of reflective dust into the atmosphere. The rapid warming since the 1970s has occurred in a period when the increase in GHGs has dominated over all other factors (Hegerl et al., 2007).
Section 5 of the TSD provides substantial additional information to support this statement. Multiple responses (e.g., 1-24, 1-25, 1-27, 1-28, and 1-29) in this RTP discuss the possible slowing of the warming rate in the last decade and implications for attribution to anthropogenic GHGs.
Thus, the Competitive Enterprise Institute provides no new information to support its conclusion that the role of GHGs in temperature change is ‘dubious.’ The issues they raise are not new, their argument improperly looks at GHGs in isolation, and these issues were fully addressed in the Endangerment Finding, TSD, and RTC document.
Peabody Energy contends that EPA’s understanding of the causes of the warming in the latter part of the 20th century is contradicted by the lack of warming over the 1998—2008 period. They assert that EPA did not cite any natural forces (such as a volcanic eruption) that might explain the lack of warming during this period, a period when atmospheric GHG concentrations continued to increase.
The possible slowdown in warming from 1998 to 2008 does not contradict understanding of the causes of warming over the latter part of the 20th century. EPA is very clear about the well-known scientific limitations of taking a short snapshot of the temperature record and attempting to draw any conclusion from it, with respect to either natural variability or the human influence on climate. As stated in RTC 3-4:
... drawing conclusions from short time-scales is of limited value. Directly comparing global GHG emissions with global temperatures on decadal or shorter time-scales must consider all plausible variations and other existing non-linear inter-relationships.
We note there are other 10-year periods in the temperature record that show no trend even though they are embedded within a longer period of substantial overall warming (Easterling and Wehner, 2009). Therefore, it is clear that temperatures do not rise monotonically despite the continuing increase of GHG concentrations. Observations over such short periods, examined in isolation, may be misleading in the interpretation of the longer-term trend in temperatures.
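The existence of flat decades embedded in a warming record can be demonstrated with a small simulation in the spirit of the Easterling and Wehner analysis. The trend and noise values below are illustrative assumptions, not output from any climate model or observed dataset:

```python
import numpy as np

rng = np.random.default_rng(42)

def flat_decade_fraction(trials=500, n_years=100, trend=0.02, noise_sd=0.15):
    """Fraction of 10-year windows with a zero or negative trend, across
    simulated century-long series that all warm steadily overall."""
    x = np.arange(10, dtype=float)
    flat = total = 0
    for _ in range(trials):
        # Steady warming plus year-to-year noise (illustrative values).
        y = trend * np.arange(n_years) + rng.normal(0.0, noise_sd, n_years)
        for start in range(n_years - 9):
            slope = np.polyfit(x, y[start:start + 10], 1)[0]
            flat += slope <= 0.0
            total += 1
    return flat / total

# A non-trivial share of decades show no warming at all, even though
# every simulated series warms by about 2 degrees over the century.
print(flat_decade_fraction())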
In spite of the possible slowdown in warming over the last 10 or so years, TSD Box 4.1 documents that the overall rate of warming is about 0.29-0.30°F per decade over the last 30 years and 0.24-0.25°F per decade over the last 50 years.
Peabody Energy correctly states that EPA does not cite specific natural factors that might explain temperature trends over the last decade. This is because attribution of trends over short timescales is more difficult and is an emerging area of research. The paper by Solomon et al. (2009) discussed in Response 1-81 of this volume provides a plausible explanation related to changes in stratospheric water vapor to explain at least some of the trend, but concludes that more research is required for understanding temperature changes on short time scales. Additionally, in Response 1-27 for example, we acknowledge gaps in describing the causes of short-term variability but emphasize that these gaps do not undermine the case for linking longer term changes in temperature to GHGs.
The TSD, citing the assessment literature, summarizes in Section 5(a) the significant advances in scientific understanding that have been made in linking long-term temperature trends to GHGs:
The increased confidence in the GHG contribution to the observed warming results from (Hegerl et al., 2007):
- An expanded and improved range of observations allowing attribution of warming to be more fully addressed jointly with other changes in the climate system.
- Improvements in the simulation of many aspects of present mean climate and its variability on seasonal to inter-decadal time scales.
- More detailed representations of processes related to aerosol and other forcings in models.
- Simulations of 20th-century climate change that use many more models and much more complete anthropogenic and natural forcings.
- Multi-model ensembles that increase confidence in attribution results by providing an improved representation of model uncertainty.
To summarize, the possible slowdown in warming over the last decade or so does not undermine the linkage between GHGs and temperature over sufficiently long timescales.
Peabody Energy asserts that EPA’s explanation for the slowdown in warming (that there is enough natural variability in the climate system to accommodate the lack of warming over a short time period) is not consistent with the following statement from the Academies of Science for the G8+5 countries, which EPA cites in RTC 1-43:
... climate change is happening even faster than previously estimated; global CO2 emissions since 2000 have been higher than even the highest predictions, Arctic sea ice has been melting at rates much faster than predicted, and the rise in the sea level has become more rapid. (National Academies, 2009)
Peabody Energy states:
EPA cannot have it both ways. It cannot insist that the cessation of warming over the last eleven years must be the result of poorly understood natural variability; whereas, other climate phenomena during that period conclusively demonstrate man’s impact on the climate and are not explained by the same variability.
Moreover, EPA’s discussion of the period over which warming must be sustained to provide confidence that the warming is not natural is confusing. According to EPA, ‘[b]oth the IPCC and the TSD note that ‘difficulties remain in attributing temperature changes on smaller than continental scales and over time scales of less than fifty years,’ and that with limited exceptions attribution at these scales has not yet been established.’ Since the warming of the second period lasted only about thirty years before it ceased, and since the plus-fifty year warming trend of the 20th century includes a period in which the warming did not result primarily from anthropogenic GHGs, it would appear that EPA cannot justify its conclusion that the warming of the last thirty years can be definitively attributed to anthropogenic GHGs.
The above statement from the Academies of Science for the G8+5 countries cited by EPA in RTC 1-43 (National Academies, 2009) describes trends in GHG emissions, Arctic sea ice, and sea level but does not address attribution, whether over a short time period or a longer time period. The statement is provided in the context of listing many major national and international scientific societies and academies that have endorsed or expressed support for the findings and conclusions of the assessment literature. It is not scientific information synthesized for the TSD or the Findings.
This Academies of Science statement does not address attribution and was not used to support arguments pertaining to attribution, and EPA is not ‘having it both ways’. Peabody Energy correctly articulates our characterization of the attribution issue when it states:
According to EPA, ‘[b]oth the IPCC and the TSD note that ‘difficulties remain in attributing temperature changes on smaller than continental scales and over time scales of less than fifty years,’ and that with limited exceptions attribution at these scales has not yet been established.’
There is no contradiction or inconsistency in the view that the warming rate may have slowed in the last decade or so and the fact that the long-term warming trend has been attributed to anthropogenic GHGs, as described in Response 1-24. Our statements about temperature trends and their relationship to the observed increases in GHGs are internally consistent within the endangerment record, and consistent with the assessment literature.
Peabody Energy refers to the following CRU e-mail, written by climate scientist Kevin Trenberth in October 2009, to suggest that the climate community could never demonstrate that reducing GHG emissions will reduce warming:
[t]he fact that we can not account for what is happening in the climate system [the lack of warming] makes any consideration of geoengineering quite hopeless as we will never be able to tell if it is successful or not!
Peabody Energy asserts that Trenberth’s reference to ‘geoengineering’ includes reducing GHG emissions, citing a letter Trenberth wrote in Physics Today in February 2009 (Trenberth, 2009), stating:
The ethical questions associated with climate manipulation loom so large that some forms of geoengineering are simply unacceptable. The forms that are acceptable include those that reduce emissions and mitigate the rates of change or reduce the amount of carbon dioxide in the atmosphere.
Based on Trenberth’s expressed doubts about the ability to evaluate geoengineering, Peabody Energy claims:
‘Trenberth stated that the flaws in the climate community’s understanding of climatic forces that are exposed by the lack of warming is so fundamental—and the extent of natural variability must be so great—that it could never be demonstrated that reducing GHG emissions will reduce warming.
Peabody Energy’s conclusions are speculative and unsupportable, and are based on selectively linking disparate statements made at different times by Trenberth. Examining each statement in context makes clear that he does not agree with the views Peabody Energy ascribes to him.
Although Trenberth included GHG emissions reductions as an example of a geoengineering method in the Physics Today letter, that does not mean he is referring to GHG reductions as ‘geoengineering’ within the context of the statement, ‘we will never be able to tell if it is successful or not,’ from the CRU material.
Generally, ‘geoengineering’ does not refer to reductions in anthropogenic GHG emissions but rather to alternative options for countering the warming effects of GHGs. The IPCC’s discussion of geoengineering (Barker et al., 2007) covers options such as iron and nitrogen fertilization of the oceans (to take up CO2) and deployment of technologies that reduce the amount of sunlight absorbed by the Earth’s system (e.g., injecting reflective aerosols into the stratosphere and installing a deflector system in space to block sunlight). In fact, in a recent paper in Science authored by Trenberth and a colleague (Trenberth and Fasullo, 2010), Trenberth describes geoengineering accordingly:
Proposals for addressing global warming now include geoengineering, whereby tiny particles are injected into the stratosphere to emulate the cooling effects of stratospheric aerosol of a volcanic eruption.
Furthermore, as discussed in Response 1-21, Trenberth makes clear that his comments in this context are in reference to his frustration about gaps in the observing system that make it difficult to understand the Earth’s energy balance, and hence climate variability, on short timescales. Trenberth and Fasullo (2010) specifically mention the importance of understanding the flow of energy in the Earth’s system for tracking the effectiveness of potential geoengineering projects:
Implicitly, such proposals assume understanding and control of the energy flow, which requires detailed tracking of energy within the climate system.
Thus, EPA finds no evidence to suggest that Trenberth intended to imply that reducing GHG emissions will not reduce warming. In fact, Trenberth’s July 2008 testimony (Trenberth, 2008) before the U.S. Senate Committee on Environment and Public Works suggests the opposite:
I believe that mitigation actions are certainly needed to significantly reduce the build-up of greenhouse gases in the atmosphere and lessen the magnitude and rate of climate change. Action taken now to reduce significantly the build-up of greenhouse gases in the atmosphere will lessen the magnitude and rates of climate change. In fact I believe there is a crisis of lack of adequate action in this regard.
Trenberth’s public statements and recent published work very strongly suggest that the Trenberth statement cited by Peabody Energy does not mean it is impossible to know if GHG reductions will reduce warming, but simply that better understanding of the energy budget is needed if we are to track potential geoengineering projects (where ‘geoengineering’ is referring to injecting particles into the atmosphere).
Peabody Energy raises a number of methodological concerns with the Easterling and Wehner (2009) and Knight et al. (2009) analyses and concludes that ‘the models demonstrably do not take account of the current no-warming trend.’
As discussed in Response 1-20, the NRC (2010) states ‘it is not appropriate to look at only a short period of the overall record (such as changes over just the last five or ten years) to infer major changes in the trajectory of global warming.’ See other prior discussions (e.g., Response 1-22) concerning whether there is a slowdown in the rate of warming over a limited number of recent years. Furthermore, in the TSD, EPA cites the NOAA ‘State of the Climate in 2008’ report (Knight et al., 2009), which finds that climate models possess internal mechanisms of variability capable of reproducing the possible current slowdown in global temperature rise. Further, in a response to public comments on the issue (2-41), we refer to Easterling and Wehner (2009), who find that short-term trends in long-term time series occasionally run counter to the overall trend.
As a general matter, the assessment literature is clear that the ability (or lack of ability) of climate models to reproduce a short-term slowdown in warming in no way invalidates or changes their reliability for attributing or projecting long-term changes in global climate variables resulting from sustained anthropogenic forcing of the climate system. The analysis of long-term trends is their primary purpose, rather than projecting year-to-year changes over a period of a decade or so or less. As the NRC (2010) states: ‘Robust analyses of global climate change thus tend to focus on trends of at least several decades.’ Our responses to the specific issues raised by Peabody Energy regarding Easterling and Wehner (2009), Knight et al. (2009), and related issues are discussed below.
Peabody Energy contends that the failure of the planet to warm over the last 11 years raises serious questions about the accuracy of computer models, specifically challenging the reliance on the studies of Easterling and Wehner (2009) and Knight et al. (2009), which support the notion that climate models can produce short periods of temperature trends that run counter to the overall long-term warming trend.
Peabody Energy expresses several concerns about Easterling and Wehner (2009), asserting that:
- The examples of pauses from warming in the past (running counter to the long-term trend) from 1977 to 1985 and from 1981 to 1989 are shorter than the pause in warming from 1998 to 2008. Peabody Energy writes that ‘no global warming during the eleven-year period 1998-2008 is a much more unlikely event than no global warming during the nine-year periods, 1975-1983 and 1981-1989.’
- The period of no warming from 1977 to 1985 was driven by the eruption of El Chichón in 1981.
- Easterling and Wehner make a factual error, stating there was no warming from 1981 to 1989 because the Goddard Institute for Space Studies (GISS) and CRU datasets indicate warming of +0.08°C per decade.
- [T]he probability of a ten-year period of no warming, determined by Easterling and Wehner from climate model projections to be 10%, is too large. If the actual emissions were used to drive the climate models, the probability of occurrence of a ten-year period of no trend would be smaller, and the mismatch between model expectations and observations would be higher.
- The distribution of models used in the study disproportionately included models that were more likely to simulate a pause in the warming, biasing the results.
Regarding Knight et al., Peabody Energy then contends that:
- Knight et al. did not compare observed trends with the results of the model when run with an emissions scenario that is closest to observed emissions.
- Knight et al. changed various parameters within the climate model that do not occur in nature; the actual variability is only produced by a single set of physics.
Peabody Energy concludes that these two studies were designed and conducted in such a way as to lead to an artificially inflated probability of occurrence of low (or no) trends in global temperatures over short time periods and thus, provide false confidence in the ability of the climate models to replicate temperature trends over such short time periods.
As we have discussed in previous responses (e.g., 1-24 and 1-25), the ability of climate models to project and attribute global temperature trends over short time periods is limited, but this does not undermine their reliability for longer-term projection and attribution of temperature trends to GHGs. It is well recognized that climate models do a better job of simulating global changes over longer time scales, when short-term ‘noise’ in the climate system due to year-to-year climate variability (e.g., El Niño or La Niña events) tends to become smoothed.
However, some of the limitations of modeling short-term variations in global climate can be overcome by running multiple simulations in which the input parameters are altered slightly to produce a range of plausible results. This is exactly what Easterling and Wehner and Knight et al. did to test whether models could simulate periods of no warming under a range of conditions; both are relatively new studies published during and after the Proposed Endangerment Finding public comment period (refer to Response 1-29 for discussion of the timing of these publications).
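The perturbed-parameter approach described above can be sketched with a toy example. The ‘model’ here is a minimal zero-dimensional energy-balance equation (dT/dt = (F - λT)/C) with invented parameter values; it is not the model used by either study, only an illustration of how varying uncertain parameters yields a range of plausible outcomes:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_model(forcing, lam, heat_cap, dt=1.0):
    """Integrate the toy energy-balance model dT/dt = (F - lam*T)/C."""
    temps = np.zeros(len(forcing))
    for i in range(1, len(forcing)):
        temps[i] = temps[i - 1] + dt * (forcing[i - 1] - lam * temps[i - 1]) / heat_cap
    return temps

years = 100
forcing = np.linspace(0.0, 3.0, years)  # steadily rising forcing (W/m2)

# Perturbed-parameter ensemble: each run perturbs the uncertain feedback
# parameter and effective heat capacity (illustrative values).
ensemble = []
for _ in range(50):
    lam = rng.normal(1.2, 0.2)       # feedback parameter (W/m2 per degree)
    heat_cap = rng.normal(8.0, 1.0)  # effective heat capacity
    ensemble.append(run_model(forcing, lam, heat_cap))
ensemble = np.array(ensemble)

# The ensemble yields a spread of plausible outcomes rather than one value.
print(ensemble[:, -1].min(), ensemble[:, -1].max())
```

Every ensemble member warms under the same forcing, but the final temperatures span a range, which is the point of the technique: multiple runs with slightly varied parameters characterize the uncertainty rather than committing to a single trajectory.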
With respect to Peabody Energy’s criticisms of Easterling and Wehner, we note the following:
- When Peabody Energy asserts that an 11-year pause in warming (from 1998 to 2008) is much less likely than 9-year pauses in warming (from 1977 to 1985 and 1981 to 1989), Peabody Energy fails to mention that Easterling and Wehner indicate that the lack of warming from 1998 to 2008 is largely related to the start and end date chosen. Easterling and Wehner write: ‘if we drop 1998 and fit the trend to the period 1999-2008 we indeed get a strong, statistically significant positive trend.’
- Peabody Energy notes that the pause in warming from 1977 to 1985 was affected by the 1981 eruption of El Chichón; it is well recognized that volcanic eruptions are one kind of natural variability that can temporarily slow the rate of warming. Just as that eruption caused a slowdown in warming, there are other drivers of variability within the Earth system, such as La Niña (or El Niño) events, that may slow (or steepen) GHG-induced warming over short time periods. Peabody Energy does not explain why this specific example of a temporary slowdown in warming embedded within a longer-term warming trend is significant or why it represents any sort of flaw in the Easterling and Wehner study.
- Though Peabody Energy claims that Easterling and Wehner make a factual error in computing no warming from 1981 to 1989, Easterling and Wehner do not say there was no warming, just that there was no statistically significant warming (i.e., ‘no trend’).
- In indicating that Easterling and Wehner did not use a sufficiently aggressive emissions scenario in its simulations, Peabody Energy fails to mention that Easterling and Wehner defend their choice of emissions scenario as follows: ‘The A2 scenario postulates a ‘business as usual’ future with little reduction in anthropogenic emissions resulting in large greenhouse gas concentrations by the end of the 21st century.’ In other words, Easterling and Wehner chose an aggressive emissions scenario. While it may be true that the current rate of increase in GHG emissions is higher than that scenario, as discussed in Section 6 of the TSD, a range of future emissions scenarios are plausible and will depend on assumptions regarding population and economic growth, implementation of policies, technology change and adoption, and other factors.
- Peabody Energy’s assertion that the models chosen by Easterling and Wehner disproportionately included models more likely to simulate a slowdown is not substantiated. Easterling and Wehner state that they used all available simulations in the Climate Model Intercomparison Project (CMIP) database. Peabody Energy states that ‘the models with the greater number of runs [in the database]—particularly those models with a large degree of internal noise—had a greater influence on the overall distribution’ but provides no additional evidence to support this assertion.
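As a purely illustrative aside (not part of the Easterling and Wehner analysis or of the EPA record), the endpoint sensitivity described in the first bullet above can be sketched with synthetic data in a few lines of Python. The 0.02 degrees/yr trend, the size of the 1998-style warm outlier, and the noise-free series are all assumptions chosen only to make the arithmetic transparent:

```python
import numpy as np

# Purely illustrative synthetic series (not real data): a steady underlying
# warming trend of 0.02 degrees/yr, with an anomalously warm first year
# standing in for the strong 1998 El Nino. No noise is added, so the effect
# of the chosen start year is easy to isolate.
years = np.arange(1998, 2009)            # 1998..2008 inclusive
temps = 0.02 * (years - 1998)
temps[0] += 0.25                         # warm outlier at the start year

def ols_slope(x, y):
    """Ordinary least-squares linear trend (degrees per year)."""
    return np.polyfit(x, y, 1)[0]

slope_incl_1998 = ols_slope(years, temps)          # window starts at outlier
slope_from_1999 = ols_slope(years[1:], temps[1:])  # drop 1998, as in the quote

# Starting the fit at the anomalously warm year cuts the fitted trend by
# more than half, even though the underlying warming is unchanged.
print(round(slope_from_1999, 3))                 # -> 0.02
print(slope_incl_1998 < slope_from_1999 / 2)     # -> True
```

Because the fit is ordinary least squares, a single anomalously warm value at the start of a short window exerts strong leverage on the slope; starting the window one year later removes that leverage, which is the statistical point in the quotation above.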
With respect to Peabody Energy’s criticisms of Knight et al., we note the following:
- Just as in the Easterling and Wehner study, Peabody Energy criticizes the choice of the emissions scenario in the Knight et al. study as being too conservative. However, Knight et al. indicate that they use ‘several’ scenarios from the IPCC Special Report on Emissions Scenarios, which is appropriate to reflect the range of plausible futures.
- Peabody Energy’s claim that Knight et al.’s technique of varying model parameters is inappropriate because only a single set of physics applies in nature is based on the flawed assumption that modelers can perfectly replicate nature in their model settings. Modelers vary these parameters to capture the uncertainty in their modeling of the science. Through multiple model runs with these varying parameters, the model output can reflect the whole range of plausible results rather than a single value in which there would be limited confidence. Furthermore, Peabody Energy’s assertion that producing multiple simulations artificially inflates the probability of the occurrence of low trends in global temperatures is false. Increasing the number of simulations should equally increase the number of results that show both large and small trends.
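The role of multiple simulations described above can be illustrated with a toy ensemble. This is a hedged sketch only: the trend, noise level, and ensemble size are arbitrary assumptions, and no real climate model is involved.

```python
import numpy as np

# Toy stand-in for a multi-run modeling experiment (not a real climate model):
# every run shares the same steady forced warming of 0.02 degrees/yr, but each
# has its own realization of year-to-year internal variability. All numbers
# here are arbitrary assumptions chosen for illustration.
def run_ensemble(n_runs, n_years=100, trend=0.02, noise_sd=0.1, seed=42):
    rng = np.random.default_rng(seed)
    t = np.arange(n_years)
    return trend * t + rng.normal(0.0, noise_sd, (n_runs, n_years))

def decade_trends(runs):
    """OLS slope of every overlapping 10-year window in every run."""
    x = np.arange(10)
    slopes = []
    for run in runs:
        for start in range(run.size - 9):
            slopes.append(np.polyfit(x, run[start:start + 10], 1)[0])
    return np.array(slopes)

slopes = decade_trends(run_ensemble(n_runs=50))
frac_flat = np.mean(slopes <= 0)

# Even though every run warms over the long term, some 10-year windows show
# no trend or slight cooling -- the behavior the studies' experiments exhibit.
print(f"fraction of flat-or-cooling decades: {frac_flat:.3f}")
```

Note that adding more runs does not bias the distribution toward low trends: it simply samples the same distribution more densely, adding high-trend and low-trend windows in the same proportion.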
To summarize, we find Peabody Energy’s criticisms of the Easterling and Wehner and Knight et al. studies misplaced, unsubstantiated, and/or flawed. These studies demonstrate that temporary slowdowns in the rate of warming have occurred in the recent past during a period of sustained long-term warming, and they conduct legitimate modeling experiments showing that models can and do produce sustained multi-year periods of reduced warming, no trend, and/or even slight cooling embedded within the longer-term warming projected in the next century.
Peabody Energy notes that the Easterling and Wehner study (discussed in Response 1-28) was published just before the public comment deadline, such that the public did not have an opportunity to fully comment on it. The petitioner also notes that the Knight et al. paper was published after the public comment deadline, so the public did not have an opportunity to comment on it. Finally, the petitioner notes that the Easterling and Wehner and Knight et al. studies were published too recently for them to have received extensive analysis.
Peabody Energy’s fundamental concern about these studies, namely that they allegedly may overstate how well models simulate or explain short-term changes in temperature, does not undermine the basis for the Findings or justify their reconsideration. We have discussed above why EPA concluded that the Easterling and Wehner and Knight et al. results are legitimate. However, these studies are not materially important to the Findings. The TSD acknowledges there are ‘difficulties’ in explaining the causes of short-term temperature change, stating in Section 5(a): ‘The IPCC (Hegerl et al., 2007) cautions that difficulties remain in attributing temperature changes over time scales of less than 50 years.’ Thus, the amount of time the public had to comment on these studies as well as the concerns raised about them are insignificant. The issue of the ability of the models to project temperatures over short time periods was in fact highlighted in the TSD and subject to public comment. EPA’s response to comments included an appropriate discussion of studies that are relevant to these issues.
Finally, while the Easterling and Wehner and Knight et al. studies are relatively recent, they have been highly visible and subject to significant scrutiny. The Easterling and Wehner study was cited in the 2009 report, Global Climate Change Impacts in the United States (published in June 2009) — a major assessment report of the USGCRP (Karl et al., 2009). The Knight et al. study was incorporated into NOAA’s peer-reviewed State of the Climate 2008 report (issued in August 2009), published as a special supplement to the Bulletin of the American Meteorological Society (AMS) and disseminated to all AMS members (NOAA, 2009). Most recently, Easterling and Wehner and Knight et al. were cited in the NRC assessment, ‘Advancing the Science of Climate Change’ (NRC, 2010). We are not aware of any published criticisms of these studies.
These studies, and the petitioner’s criticisms of them, do not change the degree of uncertainty expressed in the assessment reports and identified by EPA as associated with the use of temperature data over these kinds of short time periods. Confidence in linking observed climate change to GHGs is not based on modeling analysis of short-term temperature trends but rather modeling analysis of longer-term temperature trends and additional lines of evidence. In summary, the results of these studies are not central to the Findings, and do not change the scientific position expressed in both the proposed and final Findings that temperatures over time periods as short as those at issue have only limited relevance to determining the existence of long-term temperature trends or attributing such trends to their causes.
23 CEI provided the following link in its petition, but this link does not seem to work: http://www.news24.com/Content/SciTech/News/1132/1249c274c6df42cca1302d82e4236ef6/11-01-2008-06-57/Earth_still_warming#. We believe an active link for the article cited is http://www.news24.com/SciTech/News/Earth-still-warming-20080111.
27 El Niño is characterized by unusually warm ocean temperatures in the Equatorial Pacific, as opposed to La Niña, which is characterized by unusually cold ocean temperatures in the Equatorial Pacific. El Niño is an oscillation of the ocean-atmosphere system in the tropical Pacific having important consequences for weather around the globe.
Petitioner Arthur Randol states that the CRU e-mails ‘reveal serious questions about the validity of the IPCC models and the modelers,’ and therefore about EPA’s reliance upon models. Arthur Randol also quotes a number of EPA and IPCC sources discussing model uncertainties (e.g., ‘What does the accuracy of a climate model’s simulation of past or contemporary climate say about the accuracy of its projections of climate change? This question is just beginning to be addressed.’) (Randall, 2007).
EPA responded to comments on computer modeling issues at length in the RTC document (Volume 4.1), in terms of both their uses and limitations. Response 4-1 of the RTC is provided here, as it is the general response to the petitioner’s issues already contained within the RTC document. Responses to specific information provided by the petitioner are provided below. Response 4-1 stated:
First, models are not the foundation of climate science, rather they are the tools used to better understand information and data from multiple sources and disciplines. Paleoclimate data, basic theory, observations of climate changes, and other branches of climate science together have provided (and continue to provide) the basis for key findings in the assessment literature. Indeed, research long before the advent of the computer found that the climate should respond to increased CO2 concentrations. Recently, scientists have used paleoclimate data about historical analogues such as the last interglacial and glacial maximum to estimate climate sensitivity, sea level response to temperature change, and other important climatic variables (Jansen et al., 2007, Hegerl et al., 2007). Computer modeling is, of course, important because it improves refinement of predictions, attributions, and analysis of non-linear interactions of a complex system, and thus climate models will continue to play a major role in understanding and projecting the future of the climate system. However, the characterization of a number of commenters that the projection and attribution findings of the IPCC, the U.S. Global Change Research Program (USGCRP), and others are supported only by the output of models is not accurate.
With respect to the issues commenters raised concerning flaws in the models used for projections and attribution studies, it is well recognized that models are representations of complex systems and may not be able to perfectly represent all interactions in the system being modeled. For example, clouds are difficult to model explicitly because the physics involved in cloud formation occurs at scales smaller than the resolution of most climate models, so their effects must be approximated. Although model-based results are subject to some degree of inherent uncertainty, as reflected in the assessment literature and in the TSD, these uncertainties are acknowledged; uncertainties do not mean that the models are fatally flawed or unreliable representations of the climate system. Climate models have been demonstrated to successfully simulate a number of climatic properties, as documented in the IPCC, CCSP, NRC, and USGCRP reports on which the TSD primarily relies.
Absolute certainty is not required, and in fact, the TSD summarizes both the important role and the limitations of models in Section 6(b), quoting Meehl et al. (2007):
[C]onfidence in models comes from their physical basis, and their skill in representing observed climate and past climate changes. Models have proven to be extremely important tools for simulating and understanding climate, and there is considerable confidence that they are able to provide credible quantitative estimates of future climate change, particularly at larger scales. Models continue to have significant limitations, such as in their representation of clouds, which lead to uncertainties in the magnitude and timing, as well as regional details, of predicted climate change. Nevertheless, over several decades of model development, they have consistently provided a robust and unambiguous picture of significant climate warming in response to increasing greenhouse gases.
Karl et al. (2009) reaches a similar conclusion, stating:
All of the models used in this work [Karl et al., 2009] have imperfections in their representation of the complexities of the ‘real world’ climate system. These are due to both limits in our understanding of the climate system, and in our ability to represent its complex behavior with available computer resources. Despite this, models are extremely useful, for a number of reasons.
First, despite remaining imperfections, the current generation of climate models accurately portrays many important aspects of today’s weather patterns and climate. Models are constantly being improved, and are routinely tested against many observations of Earth’s climate system. Second, the fingerprint work shows that models capture not only our present-day climate, but also key features of the observed climate changes over the past century. Third, many of the large-scale observed climate changes (such as the warming of the surface and troposphere, and the increase in the amount of moisture in the atmosphere) are driven by very basic physics, which is well-represented in models. Fourth, climate models can be used to predict changes in climate that can be verified in the real world. Examples include the short-term global cooling subsequent to the eruption of Mount Pinatubo and the stratospheric cooling with increasing carbon dioxide. Finally, models are the only tools that exist for trying to understand the climate changes likely to be experienced over the course of this century. No period in Earth’s geological history provides an exact analogue for the climate conditions that will unfold in the coming decades.
A CCSP report (2008a) assessed model strengths and limitations in detail, and the introduction states:
Scientists extensively use mathematical models of Earth’s climate, executed on the most powerful computers available, to examine hypotheses about past and present-day climates. Development of climate models is fully consistent with approaches being taken in many other fields of science dealing with very complex systems. These climate simulations provide a framework within which enhanced understanding of climate relevant processes, along with improved observations, are merged into coherent projections of future climate change.
The science of climate modeling has matured through finer spatial resolution, the inclusion of a greater number of physical processes, and comparison to a rapidly expanding array of observations. These models have important strengths and limitations. They successfully simulate a growing set of processes and phenomena; this set intersects with, but does not fully cover, the set of processes and phenomena of central importance for attribution of past climate changes and the projection of future changes.
The consensus of the assessment literature is that models serve a useful purpose within the field of climate science, successfully modeling a number of processes. This is true despite the complexity of the system that was mentioned by one commenter. Although there are a number of limitations and uncertainties, they are accounted for, on a global scale, by providing a range of global mean temperature estimates for any given emission scenario (e.g., Figure 10.26, Meehl et al., 2007). This range, though important, still enables robust conclusions about the impact of GHGs on temperature.
A number of specific comments on model flaws are addressed by responses within this volume. It is EPA’s conclusion that the models have demonstrated the ability to accurately simulate many key aspects of climate, and in light of the key conclusions of the IPCC, CCSP, and USGCRP in regard to model skill and limitations, we have determined that it is fully appropriate to report model-based projections and attribution studies in the TSD.
The Figure you sent is very deceptive. As an example, historical runs with PCM look as though they match observation—but the match is a fluke. PCM has no indirect aerosol forcing and a low climate sensitivity—compensating errors. In my (perhaps too harsh) view, there have been a number of dishonest presentations of model results by individual authors and by IPCC. This is why I still use results from MAGICC to compare with observed temperatures. At least here I can assess how sensitive matches are to sensitivity and forcing assumptions/uncertainties. Tom [Wigley].28
The figure referred to in this e-mail is an informal figure developed by Gavin Schmidt for posting at the blog ‘realclimate’ and is related to the perception that the climate was warming more slowly than the models had predicted. This figure does not appear in the IPCC or even the published literature. Gavin Schmidt replies later in the e-mail chain that the reason for using the full CMIP3 archive (including the PCM historical runs referred to by Wigley) as the basis for the figure, rather than the MAGICC model developed by Wigley, is that the behavior being explored by this figure is related to unforced variability, which requires the more complex models. MAGICC, a simpler model, does not display unforced variability. This is a normal discussion between scientists about how best to explore a given question. While Wigley objects to one specific model (PCM), the reasoning behind using the full ensemble of different models (including PCM) is that the ensemble often demonstrates better attributes overall than any of the individual models within the ensemble (each of which has its own strengths and weaknesses). This feature of model ensembles was discussed in the paper ‘How well do coupled models simulate today’s climate?’ (Reichler and Kim, 2008). The paper concluded, based on the authors’ analysis of the CMIP3 model ensemble compared to earlier efforts, that ‘Both improved performance and more physical formulation suggest that an increasing level of confidence can be placed in model-based predictions of climate.’
In the e-mail discussion, Wigley also promotes a couple of different methods of his own to explain the observed variability in the rate of warming. One method used by Wigley involves comparing an artificial distribution of unforced variability to the difference between the observed trend and the best estimate of a trend without variability, rather than using the CMIP3 archive to determine the variability. Another method involves using known sources of variability like the solar cycle, ENSO, or volcanoes to explain recent temperature trends. Using both these methods, Wigley also finds that the recent rate of warming is consistent with the models:
At the risk of overload, here are some notes of mine on the recent lack of warming. I look at this in two ways. The first is to look at the difference between the observed and expected anthropogenic trend relative to the pdf for unforced variability. The second is to remove ENSO, volcanoes and TSI variations from the observed data. Both methods show that what we are seeing is not unusual. The second method leaves a significant warming over the past decade. These sums complement Kevin’s energy work. Kevin [Trenberth] says ... "The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t". I do not agree with this.29
This e-mail from Tom Wigley does not reveal serious questions about the validity of models as a whole, nor does it demonstrate that the PCM model should not be used. It reflects a discussion over alternate ways to analyze one specific question.
The question I want to raise is not related to the very important dialogue on how to handle the errors and the statistics, but rather how to think about the models. The attached paper by Forster et al. appeared recently in GRL. It taught me something I didn’t realize, namely that ozone losses and accompanying temperature trends at higher altitudes can strongly affect low altitudes, through the influence of downwelling longwave.
No global general circulation model can possibly be expected to simulate this correctly unless it has interactive ozone, or prescribes an observed tropical ozone trend. The AR4 models did not include this, and any ‘discrepancies’ are not relevant at all to the issue of the fidelity of those models for global warming.30
Response (1-32): We note, with respect to this e-mail from Susan Solomon, that she specifically stated that the issues raised were ‘not relevant at all to the issue of the fidelity of those models for global warming.’31 This means that this issue does not raise doubts about conclusions based on using these models for long-term global warming studies. In addition, the fact that there are phenomena not captured by the models should not be surprising, as described above. The question is, how much will the inclusion of these phenomena alter the conclusions reached from using the model? The particular phenomenon described by Solomon is the possibility that cooling of the stratosphere due to ozone depletion may also lead to cooling of some parts of the troposphere: Solomon suggests that this could further explain any discrepancies between the observed warming in the tropical troposphere and theoretically predicted warming (the ‘hot spot’). The comment at the start of this e-mail fragment about ‘the errors and the statistics’ is a reference to the Douglass (2007) paper on the ‘hot spot.’ The errors in the Douglass paper were the subject of a paper (Santer, 2008) that is discussed in Responses 1-18 and 1-19 of this RTP document. Our review finds that the petitioner may have misunderstood the e-mail. The e-mail does not challenge the IPCC results; if anything, it suggests a way in which some existing discrepancies might be further narrowed.
Maybe we need to step back and rephrase the question in terms of the physics rather than aiming solely to rebutt Douglass et al? In this case the key physical questions in my view would be:
Is there really a stratospheric radiative influence? If so, how low does it go? What is the cause? Are the numbers consistent with the underlying governing physics or simply an artifact of residual obs errors?
Can any models show trend behaviour that deviates from a SALR on multi-decadal timescales? If so, what is it about the model that causes this effect? Physics? Forcings? Phasing of natural variability? Is it also true on shorter timescales in this model?
I think in the future the Forster et al paper may be seen as the more scientifically significant result when Douglass et al is no longer cared about ...32
This quote is also related to discussion about the ‘hot spot’ in the troposphere, as was the quote discussed in Response 1-32. The e-mail’s author indicates that there are more significant scientific questions involved on this topic than would be addressed merely by rebutting Douglass et al. (2007). The discrepancy that scientists are interested in is the possible difference between the theory of saturated adiabatic lapse rate (SALR) and the observations of the tropical tropospheric temperatures. While the Santer (2008) rebuttal of the Douglass et al. study resolved this discrepancy in large part, there is still the interesting question of whether a model could demonstrate a way in which the troposphere would not warm as much as the surface, in contradiction to the basic theory. The Santer (2008) paper is discussed in Responses 1-18 and 1-19 of this RTP document. This quote is not about flawed models, but rather, about determining whether models can be used to explore potentially interesting behavior about the vertical temperature structure of the troposphere.
For example, we can not advise the policymakers about re-building the city of New Orleans - or more generally about the habitability of the Gulf-Coast - using climate models which have serious deficiencies in simulating the strength, frequency and tracks of hurricanes.
It is not without precedent that quite deficient climate models are used by large communities simply because it is convenient to use them. It is self-evident that if a coarse resolution IPCC model does not correctly capture the large-scale mean and transient response, a high-resolution regional model, forced by the lateral boundary conditions from the coarse model, can not improve the response. Considering the important role of multi-scale interactions and feedbacks in the climate system, it is essential that the IPCC-class global models themselves be run at sufficiently high resolution.33
The petitioner also quotes the EPA TSD, underlining, among other statements, the following sentence:
Uncertainty also increases as the temporal scales move away from present, either backward, but more importantly forward in time.
In the e-mail from Shukla quoted by the petitioner there is also the statement, ‘in fact many of us were arguing for stronger language with a higher level of confidence at the last meetings of the LAs’ [i.e., at the previous IPCC AR4 lead author meetings],34 and a note that Shukla was not objecting to the use of model results for discussions of mitigation, but rather to their use for providing information that is useful for adaptation. The statement on hurricanes is consistent with the assessment of hurricane projections in Section 6(e) of the TSD. Moreover, the fact that the ‘chaotic variability’ of weather (required for predicting hurricane tracks) is different from climatic changes is well understood (see Response 4-2 of the RTC document). The uncertainties in regional modeling are well understood and summarized in the TSD (as quoted by the petitioner).
The TSD does not rely on downscaled modeling (where a high resolution model is run for a small area within a larger lower resolution model), including the kind critiqued in the e-mail. However, we note that within the quote provided by the petitioner, there is no statement that downscaling cannot work, merely that the global models in which the downscaled models are embedded should be run at an appropriate resolution.
In sum, the quoted e-mail argued for stronger language with a higher level of confidence; the uncertainties involved with regional modeling were well represented in the TSD; and the specific issues highlighted in the e-mail by the petitioner were both consistent with the TSD and not germane to the Findings. Therefore, we find that the petitioner has not raised any issues that would result in a change of any conclusions in the TSD or the RTCs.
Arthur Randol claims that ‘the CRU email reveal serious questions about the validity of the IPCC models and the modelers,’ based on the e-mails discussed above and also on an e-mail from Ben Santer. The Santer e-mail discusses using a subset of models to examine certain climatic features, stating:
We’ve had fun computing a whole range of metrics that might be used to define such a subset of ‘better’ models. The ultimate goal is to determine the sensitivity of our water vapor D&A results to model quality. I think that this kind of analysis will be unavoidable in the multi-model world in which we now live. Given substantial inter-model differences in simulation quality, ‘one model, one vote’ is probably not the best policy for D&A work!35
The e-mail concludes with ‘The results are fascinating, and show (at least for water vapor and SST) that every model has its own individual strengths and weaknesses. It is difficult to identify a subset of models that CONSISTENTLY does well in many different regions and over a range of different timescales.’36
That different models have different strengths and weaknesses is well-understood and as described above was discussed in the assessment reports and the Findings. This is why conclusions in the literature are considered more robust when multiple models exhibit similar behavior and the conclusions don’t rely on just one model or one line of evidence. Parallel to the conclusions of Santer in this e-mail, Reichler and Kim (2008) published a paper which showed that it is often difficult to identify one best model, and that using the average of multiple models often produces better results than any model individually over a range of metrics. Therefore, the issue that models have differential abilities to simulate different processes is well documented in the literature and the assessment reports, and was taken into consideration when considering model results for purposes of developing an Endangerment Finding. The commenter does not explain how these recognized differences between models impacts the validity of the conclusions drawn in the assessment reports and in EPA’s Endangerment Finding, which recognized and took into account the differences, strengths, and weaknesses of the various climate models.
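The Reichler and Kim result referenced here, that a multi-model average often verifies better than any single model, can be illustrated with a deliberately simplified toy example. The ‘models’ below are just a true curve plus independent random errors, an assumption that overstates how independent real model errors are, so this is a sketch of the statistical mechanism only, not of the published analysis:

```python
import numpy as np

# Deliberately simplified toy example (not Reichler and Kim's analysis): ten
# "models" see the same true curve, but each carries its own independent
# error. Averaging across models cancels much of that independent error, so
# the multi-model mean verifies better than any single model here.
rng = np.random.default_rng(7)

truth = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))       # stand-in climatology
models = truth + rng.normal(0.0, 0.3, (10, truth.size))  # 10 models, indep. errors

def rmse(a, b):
    """Root-mean-square error between two arrays."""
    return np.sqrt(np.mean((a - b) ** 2))

individual = np.array([rmse(m, truth) for m in models])
ensemble_mean_err = rmse(models.mean(axis=0), truth)

print(ensemble_mean_err < individual.min())   # -> True
```

In reality model errors are partly shared, so the cancellation is weaker than in this idealized case, but the qualitative point is the same: averaging across models damps the errors that individual models do not share.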
Petitioners attempt to challenge key scientific information supporting the Endangerment Finding pertaining to temperature trends in the vertical structure of the atmosphere, the rate of warming over the last 10 or so years at the surface, the ability to attribute observed temperature trends (both in the vertical structure of the atmosphere and at the surface) to increasing GHG concentrations, and the general ability of models to replicate what has been observed and then make valid projections. Petitioners refer to material in the CRU e-mails in some cases to support their assertions.
We find the following:
- The claim that the CRU e-mails provide evidence that there is no human fingerprint in the vertical structure of the atmosphere in the tropics is not supported by scientific evidence but rather by speculative assertions pertaining to the timing of the publication of a journal article.
- Petitioners overstate the significance of a possible slowdown in the rate of warming in the last 10 or so years; misinterpret statements on the issue from prominent scientists, as well as published literature; and rely upon an inappropriate examination of a short period of the temperature record to draw broad, conclusive statements about global warming and the ability to attribute it to human activities, an issue which requires analysis of data over several decades and consideration of a wide body of evidence.
- The discussions among scientists in the CRU e-mails do not undermine the validity of climate models or raise any new issues of their reliability. The discussions revolve around the strengths and weaknesses of individual models, which have been long understood by the scientific community and were reflected in and taken into account in the assessment literature and in EPA’s endangerment record. The e-mails do not change the appropriate degree of certainty that should be associated with the results obtained from such models, including the increased degree of certainty when multiple different models are used to evaluate an issue, as reflected in the assessment reports and EPA’s Endangerment Finding.
A number of petitioners challenge the validity of the surface temperature record developed by scientists at CRU. Many of their issues were raised during notice and comment on the Endangerment Findings, although in some cases the petitioners claim additional support from more recent information released in the media. In particular, several petitioners reference the CRU e-mails that were made public in November 2009. Based on statements in the e-mails, petitioners question the validity of the HadCRUT temperature record and other associated temperature records.
As described in more detail in this section, the petitioners make a number of claims involving the HadCRUT temperature record, including alleged destruction of data; inappropriate data manipulation; selective use of data from Russian weather stations to create a biased global temperature record; and inappropriate corrections for urban heat island (UHI) effect, particularly in China.
The petitioners argue that these errors demonstrate that the CRU datasets in general, and the HadCRUT temperature record in particular, are flawed and unreliable. In some cases, they argue that errors in unrelated datasets maintained by CRU also undermine the HadCRUT temperature record. They then argue that these datasets and temperature records are a critical underpinning of the IPCC assessment reports, and that questions about the quality of the CRU datasets must inevitably raise questions about the validity of the IPCC’s conclusions. Finally, they argue that EPA’s Endangerment Finding must be reconsidered as a result of these questions, because EPA relied on the IPCC findings in reaching its conclusions.
EPA’s analysis indicates that the issues and evidence presented by the petitioners do not support the conclusions they draw. First, we note that many of the issues raised by petitioners were already addressed in EPA’s RTC document, such as the alleged destruction of data and the alleged problems involving Chinese UHI analyses. As explained in more detail below, other issues identified by the petitioners involve code that has not been shown to have been used in a public product; speculative analyses of Russian data that do not have any effect on modern temperature trends; and objections to logs of quality control efforts (related to another CRU dataset, but not the HadCRUT temperature record that is the main issue in the petitioners’ allegations), with no showing of any bias in the final products.
We note that the HadCRUT temperature record is one of three surface temperature records relied on in the AR4 and the EPA Findings. There are also many other records of warming, such as satellite and other records of atmospheric temperatures, as well as evidence of warming in many aspects of the environment. These multiple and independent sources of physical evidence of warming provide a comprehensive and consistent picture of warming. Petitioners fail to address this body of evidence, and instead, as discussed below, make inappropriate and exaggerated claims concerning one of the three surface temperature records.
For all of the temperature records referenced in the IPCC’s AR4 and the TSD, including the HadCRUT temperature record, there are known to be both strengths and uncertainties with respect to data coverage, instrumental error, and other issues. There is no assertion that historical temperatures are precisely known. The size of the error ranges shown in one paper (Brohan et al. 2006) on the HadCRUT temperature record, for example, appropriately indicates the uncertainty of that temperature reconstruction. The petitioners have not shown that the existing error range is too small, nor have they presented any analysis indicating that these error bounds should be adjusted. In addition, as described in responses throughout this section, their accusations of bias are unfounded.
In reaching the Endangerment Finding, the Administrator took the recognized degree of certainty of the underlying scientific evidence into consideration when making her decision. For many reasons that we discuss below, EPA strongly disputes the arguments that the HadCRUT temperature record or the CRU datasets serve as ‘the only’ or ‘the most important’ basis for 1) determinations pertaining to warming in recent decades, 2) all analyses of anthropogenic global warming, and 3) IPCC’s models of future warming.
EPA’s review indicates that neither the petitioners’ claims of error or bias nor their broad and unfounded conclusions about the validity of EPA’s scientific findings are supported by the evidence that they present.
Note: petitioners’ arguments regarding the three surface temperature records together are addressed in Section 1.4 of this document.
Before the detailed response to petitioners' arguments, we provide some background on the development and use of the major surface temperature records and datasets. We list the records and datasets in the following text boxes. The CRU temperature record and datasets are described in this section, and the NOAA and NASA temperature records and datasets are described in Section 1.4 of this document.
At CRU (University of East Anglia)
CRUTEM3
Temperature record: land only, based on multiple sources including unadjusted GHCN (see below)
HadCRUT3
Temperature record: land and ocean, based on CRUTEM3 plus ocean data
CRU TS2.1 and TS3.0
Multiple climate elements (temperature, rainfall, cloudiness) at a high resolution
At NOAA (NCDC)
GHCN (Global Historical Climate Network)
Unadjusted: Data collected from a number of weather stations, used by NASA and CRU in developing their temperature records.
USHCN (U.S. Historical Climate Network)
Adjusted: Uses various techniques to correct for time of observation and corrections based on neighboring stations.
At NASA GISS
Referred to in other documents as the GISS temperature record, or GISTEMP. Available as a land only temperature record or as land plus ocean. Based on unadjusted GHCN, adjusted USHCN, and Antarctic station records.
Other data sources
Monthly reports from national meteorological stations organized through WMO
World Weather Records (WWR)
Station data from around the world, published decadally for most of the century. WWR was originally published by the Smithsonian Institution, later by the U.S. Weather Bureau and other organizations, and was digitized by the National Center for Atmospheric Research (NCAR).
Monitoring the changes in the surface temperature of the Earth is one of several key components of studying climate change. Surface temperature is not the only metric of a changing climate. Receding glaciers, sea level rise, changes in Arctic ice, tropospheric and stratospheric temperatures, ocean heat content, changes in precipitation and hurricanes, movement of plants, and a number of other indicators are also valuable physical parameters to monitor and evaluate observed changes of the Earth’s climate. The surface temperature record is but one of many sources of evidence leading to the conclusion that global warming is occurring.
Surface temperature records are built on the data collected from thousands of weather stations around the globe, as well as sea surface temperature records taken by ships crossing the ocean on different routes. Some of this data goes back for well over 100 years. These weather stations and the data collected by the stations were not originally intended to be used for climate monitoring, and therefore, a number of adjustments often have to be made to the data they have collected over time. Typical adjustments include corrections for the introduction of artificial biases, such as changes in the time of day that observations were made, changes in station locations, changes in the type of measurement instrument or temperature recording method, and addressing the effects of UHIs (the increased temperature observed within cities compared to rural locations). For example, sea surface temperatures used to be recorded by dipping a canvas or wooden bucket into the sea and checking the temperature after the bucket had been brought on board the ship. More recently, these temperatures are recorded in intake valves of ships. Studies have shown that these two methods result in different temperatures for measurements made at the same time and place. Thus, a long-term climate record that includes a transition from one method to the other requires an adjustment so that the whole time series consistently reflects the actual changes in temperature trends, rather than artifacts of different data collection methodologies.
We note that the petitioners have raised no objections concerning the sea surface temperature record, despite the fact that the oceans cover about 70% of the Earth’s surface, and therefore, ocean surface temperature trends are the dominant component of any global temperature trend analyses. The corrections described above to adjust for the bucket to intake valve transition (documented in Rayner et al., 2005 and Folland and Parker, 1995) result in a dataset that shows less warming than the raw data would suggest. Folland and Parker noted that the first, crude attempts to correct for this bias in the 1980s involved adding 0.3 degrees to all data before 1940; this is the largest single source of bias correction involved in the global surface temperature record. Even after this correction, the sea surface temperature record shows a clear warming signal over both the past several decades and the past century. Petitioners ignore this aspect of the surface temperature record.
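As an illustration of the kind of correction described above, the following sketch (with invented numbers; this is not CRU’s or the Met Office’s actual code) applies the crude 1980s-style adjustment that Folland and Parker describe, adding a constant offset to pre-1940 bucket measurements:

```python
# Illustrative sketch of a constant bias correction for the bucket-to-intake
# transition. The function name, cutoff, and sample values are assumptions
# for demonstration; the published corrections are more sophisticated.

def apply_bucket_correction(records, cutoff_year=1940, offset_c=0.3):
    """records: list of (year, sst_celsius) tuples.
    Returns a new list with pre-cutoff bucket readings raised by offset_c."""
    return [
        (year, sst + offset_c) if year < cutoff_year else (year, sst)
        for year, sst in records
    ]

raw = [(1935, 17.2), (1939, 17.4), (1945, 17.9)]
corrected = apply_bucket_correction(raw)
# Pre-1940 bucket readings are raised by 0.3 C; later intake readings are unchanged.
```

Note that, as the text explains, this correction reduces the apparent warming relative to the raw data, because it raises the early part of the record.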
Other routine quality control measures involve checking for and deleting data that are shown to be duplicative. This can occur, for example, when data are obtained from different sources, and some stations are included in both datasets. Often, such duplicate stations are identified because two records have the same WMO identifier. In some cases, weather stations in different countries, or at different points in time, will have inconsistent measurement methodologies or formats for reporting data. When this occurs, the data must be adjusted to ensure that the observed results reflect actual temperature change and not methodological differences. Further, the recording stations are not evenly distributed around the planet. The methodology for building up a global average temperature record must ensure that areas with many stations are not overrepresented in the record and areas with few stations are not underrepresented. In this case, available data from existing recording stations need to be averaged in areas with high station coverage and extrapolated for areas with poorer coverage. Overall, the kinds of adjustments made to the underlying raw data are designed so that the results of analyses of the dataset reflect as much as possible the actual direction and magnitude of any change in surface temperature and do not reflect other changes, such as in measurement devices or in the location of stations, as if they were changes in surface temperature.
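The gridding step described above can be sketched as follows; the function name, cell size handling, and station values are illustrative only, not the actual CRU implementation:

```python
# Hedged sketch of gridding: stations are binned into grid cells and averaged
# within each cell first, so a region dense with stations contributes one
# value per cell rather than one value per station.

from collections import defaultdict

def grid_average(stations, cell_deg=5.0):
    """stations: list of (lat, lon, anomaly). Returns {cell: mean anomaly}."""
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        cells[cell].append(anom)
    return {cell: sum(v) / len(v) for cell, v in cells.items()}

stations = [
    (51.0, 0.5, 0.4), (51.5, 1.0, 0.6),   # two nearby stations, same cell
    (-20.0, 130.0, 0.2),                  # one station in a sparsely covered cell
]
cell_means = grid_average(stations)
# The dense pair collapses to a single cell value; the remote station keeps its own.
```

Averaging within cells before averaging across cells is what prevents well-instrumented regions from dominating the global mean.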
In general, the method for estimating temperature trends does not attempt to determine the average surface temperature of the Earth, but rather, to determine if temperatures have changed over time, both regionally and globally. The difference between the temperatures at a given time period compared to a reference period is known as the temperature ‘anomaly.’ Anomalies or equivalent methods are used for long-term trend analyses that combine multiple stations. Temperature anomalies are superior to absolute temperatures for climate analysis because absolute temperatures can vary significantly over small distances, whereas anomalies are better correlated with large regional changes. For example, two nearby stations—one on top of a mountain and one in the neighboring valley—will likely have different absolute temperatures but similar anomalies (changes in temperature) over time, as a shift in climate will cause both temperatures to drop or both temperatures to rise in concert. Additionally, use of anomalies, rather than absolute temperatures, allows for determination of temperature trends, even when the data availability changes (see Subsection 1.4.3.1 on ‘station drop-out’ for an explanation of why this matters).
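A minimal sketch of the anomaly calculation, using invented numbers for the mountain-and-valley example above (not any actual station data):

```python
# Each station is expressed as a departure from its own mean over a reference
# period. A mountain and a valley station have very different absolute
# temperatures but show similar anomalies when the regional climate shifts.
# All values below are synthetic.

def anomalies(series, ref_years):
    """series: {year: temperature}. Anomaly = temperature minus the
    station's own mean over the reference years."""
    ref_mean = sum(series[y] for y in ref_years) / len(ref_years)
    return {year: t - ref_mean for year, t in series.items()}

mountain = {1961: 2.0, 1962: 2.2, 1963: 1.8, 1990: 2.7}
valley   = {1961: 12.1, 1962: 12.2, 1963: 11.7, 1990: 12.7}

ref = [1961, 1962, 1963]
m_anom = anomalies(mountain, ref)[1990]
v_anom = anomalies(valley, ref)[1990]
# Absolute temperatures differ by about 10 C, yet both 1990 anomalies are +0.7 C.
```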
The process by which the HadCRUT global surface temperature record is collected and computed is described in a series of publications from CRU scientists, such as Jones (1988), Jones (1994), Jones and Moberg (2003), and Brohan et al. (2006). The earliest versions of these temperature records primarily relied on the World Weather Records. The Jones (1988) work included 1,873 stations (mostly in the Northern Hemisphere), plus 16 stations in Antarctica. The 1994 update added a number of additional stations that had been collected through the efforts of Karl et al. (1993) and incorporated some methodological improvements. Even with these improvements in spatial coverage and accuracy, the hemispheric averages were nearly unchanged. Jones attributed this to the robustness of the hemispheric mean temperature series and the high correlation of anomaly measurements over large areas. He estimated that for a hemispheric average, only 100 well-placed stations would be necessary to give a robust result. However, for estimating continental or smaller regional-scale temperature trends, more temperature records would be necessary.
The Jones and Moberg (2003) paper reported the inclusion of the GHCN dataset (described in Subsection 1.4.2 of this document on NOAA and NCDC data); improved 18th and 19th century records; and integrated data from individual National Meteorological Services (NMSs) in countries around the world, data exchanged over the WMO CLIMAT network and other global datasets covering the years after 1981, and data from Antarctica. The 2003 temperature record was based on data from 5,159 stations. Jones and Moberg (2003) describes the ‘laborious and time consuming’ analysis that was required when certain stations existed in both the original Jones temperature record and in the new data that was acquired from other sources, and the approach taken in such cases to determine how to appropriately ‘homogenize’ the data. This ‘homogenization’ involves adjusting data to correct for discontinuities in the data due to nonclimatic effects. For example, in 1994 the Australian NMS switched from computing the ‘daily mean’ temperature as the average of each day’s maximum and minimum temperatures to computing it as the average of temperatures recorded at three-hour intervals throughout the day. These two methods yield different results for the same date, and without an adjustment to correct for the change in average temperature before and after the switch, the apparent temperature trends would reflect the change in methodology in addition to any actual climatic change. Other quality control methods involved examining every outlier (defined as a value more than five standard deviations from the station mean) and determining if there was a reason that the data point was so far off from the other data from that station. Jones and Moberg (2003) compared the CRU analysis to the NOAA and NASA analyses as well as the older Jones study (1994), and found that the Northern Hemisphere patterns remained remarkably similar in all four series.
There were some larger differences in the Southern Hemisphere because fewer stations were located there, and therefore, differences in methodologies used for the different temperature records (in terms of how to include Antarctica and South Pacific islands) could lead to more differences in the Southern Hemispheric record than were seen in the Northern Hemisphere.
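The five-standard-deviation outlier screen that Jones and Moberg (2003) describe can be sketched as follows; this is a hypothetical re-implementation with synthetic data, not the CRU code, and in practice flagged values were examined by hand rather than automatically deleted:

```python
# Flag values more than n_sigma standard deviations from the station mean.
# Function name and data are illustrative assumptions.

import statistics

def flag_outliers(values, n_sigma=5.0):
    """Return indices of values more than n_sigma std devs from the mean."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mean) > n_sigma * sd]

# Fifty ordinary monthly anomalies plus one wild value, e.g. a transcription
# error recording 15.0 instead of 1.5.
obs = [0.2 * ((i % 5) - 2) for i in range(50)]   # values cycle through -0.4..0.4
obs.append(15.0)
flagged = flag_outliers(obs)   # only the index of the wild value is returned
```

A flagged point is then checked against the station’s history and neighboring stations before any correction is made.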
The final published update (Brohan et al., 2006) added a few more stations, deleted some stations where there were duplicates, and performed additional quality control checks. This update also describes three kinds of errors that are incorporated into an error analysis: station error due to bad measurements or adjustments to an individual station, sampling error because of too little coverage, and large-scale bias error due to changes in methodologies (like the Australian NMS switch or the bucket to ship intake switch) (see Brohan et al. Figure). We note that while bad measurements or adjustments to an individual station can lead to incorrect assessments of very local trends, on a large scale, these kinds of random errors will, on average, cancel out. Large-scale bias errors have the potential to be more important for large-scale trends, but the major sources of biases have all been examined, and corrections for these biases have been incorporated into the temperature records and uncertainty estimations. Petitioners have not identified new sources of large-scale bias that are not already considered and addressed by the scientific literature.
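The point that random station errors largely cancel in a large-area average can be demonstrated with a toy simulation (synthetic numbers, not real station data):

```python
# Toy demonstration: independent random errors at individual stations shrink
# roughly as 1/sqrt(n) when averaged over a large network, unlike a
# systematic bias, which would survive averaging.

import random

random.seed(0)                 # fixed seed so the example is reproducible
true_anomaly = 0.5             # assumed 'true' large-area anomaly, in degrees C
n_stations = 10_000
# each station reports the true anomaly plus independent noise of sd 1.0
reports = [true_anomaly + random.gauss(0.0, 1.0) for _ in range(n_stations)]
network_mean = sum(reports) / n_stations
# With 10,000 stations the network mean lies within a few hundredths of a
# degree of the true value, even though individual errors are of order 1 C.
```

This is why the text distinguishes random station error, which averages out, from large-scale bias, which must be identified and corrected explicitly.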
Another graph is included in Brohan et al. (2006) that summarizes all the homogenization adjustments that were made to the data, showing a fairly symmetric distribution, although with slightly more adjustments that cooled older data (attributed to a number of stations moving from inside a city to an out of town airport). The paper also discusses how the uncertainty estimates take into account the potential magnitude of the effects of urbanization (UHIs) and changes in thermometer methodologies. Continued updates to the HadCRUT methodology, along with links to the data, are reported on the UK Met office website (UK Met Office, 2010a).
These data, along with a number of other temperature records (such as the NOAA and NCDC temperature records), are used to analyze long-term trends to compare to both model simulations and satellite trends. The recent warming shown in HadCRUT and other temperature records has contributed to statements by the IPCC that ‘warming in the climate system is unequivocal’ (IPCC 2007a), among other statements.
In addition to the HadCRUT temperature record, CRU also maintains a dataset whose current version is known as TS3.0. TS2.1, also referred to in some of the CRU e-mails, is the older version of this dataset. The TS datasets include various temperature metrics and data on precipitation, wet-day frequency, frost-day frequency, vapour pressure, and cloud cover. The gridding of these data is at a high resolution (half a degree) compared to the 5-degree resolution of the HadCRUT temperature record, and is described in Mitchell and Jones (2005). These data have been used by many researchers, in combination with other datasets that measure the same variables, to analyze long-term trends and compare to model simulations. The TS2.1 dataset is referred to twice in the IPCC AR4 in relation to historical precipitation data. However, almost all of the references to global temperatures over time that refer to CRU data refer to the HadCRUT temperature record. The HadCRUT temperature record is distinct from the TS datasets. Compared to HadCRUT, the TS datasets include many climatic variables besides temperature, are gridded at a finer resolution, and are updated only once every several years rather than every month. While the TS2.1 dataset did use the CRUTEM dataset from Jones and Moberg (2003) for temperature data, it also included temperature data from five other sources. Arguments made by petitioners about the TS datasets are not relevant to the HadCRUT temperature record.
Petitioners, including the Competitive Enterprise Institute, the Pacific Legal Foundation, Peabody Energy, the Southeastern Legal Foundation, the State of Texas, and the Coalition for Responsible Regulation, raise a number of concerns regarding the data held and analyzed by CRU. The petitioners present five major arguments regarding the validity of the CRU data. The first argument is that alleged destruction of raw data for the HadCRUT temperature record renders the scientific data on surface temperature worthless and makes replication of temperature trends impossible. The second argument alleges that comments within code and log files are evidence of manipulation that ‘undercuts the credibility of CRU databases.’ The third argument refers to a report from the Moscow-based Institute of Economic Analysis (IEA) (Pivovarova, 2009) that claims to show that the Russian stations used in the HadCRUT temperature record were selectively chosen to show increased warming. The fourth argument refers to a dispute about a 1990 paper coauthored by Phil Jones (Jones et al., 1990) and claims that the IPCC improperly relied on this study for its conclusions about the magnitude of the UHI effect. Finally, petitioners argue that the CRU data is the primary basis for the conclusion of ‘unprecedented’ warming and the foundation of anthropogenic global warming analyses.
The following subsections address each of these five arguments. The alleged data destruction was addressed in the RTC document. No original raw data were destroyed; the original raw data are and remain the property of the NMSs that collected them and can be obtained from the original stations (or national meteorological offices), subject to their procedures. With respect to the code and log files, the quotes selected by the petitioners are not evidence of deliberate data manipulation and are mainly the result of quality control processes determining how to best address the vast quantities of data collected from different sources. The IEA analysis of Russian temperature stations shows little divergence between the HadCRUT dataset and its own analysis after 1950, and while there is some divergence in the pre-1950 timeframe, the IEA analysis is not credible because it uses improper methodologies to do the comparison. Jones et al. (1990) was addressed in the RTC document and is not a central component of the IPCC analysis of UHI effects. Finally, the HadCRUT temperature record does not serve as the only or most important basis for recent warming or other anthropogenic global warming analyses. It is one of several temperature records used in analyses of global warming, along with numerous other analyses of physical evidence of warming.
Several petitioners claim that CRU has not kept raw data collected from the surface weather stations, only the homogenized and value-added data (e.g., the data corrected for station moves and measurement changes), and that therefore, the evidence for warming in the past century is questionable. For example, the Competitive Enterprise Institute objects that CRU has been involved in ‘data dumping’ the original raw data for its datasets and claims that this action ‘renders independent review and verification of the 150-plus year temperature trends published by the Hadley Center-CRU impossible - a clear violation of basic principles of science and was found by the British Information Commissioner’s Office to be a violation of the British Freedom of Information Act.’ Along similar lines, the State of Texas argues that ‘revelations that CRU data was destroyed, lost, or simply withheld indicate a different, but equally serious, problem: that the data can neither confirm nor deny how quickly, how far, for how long, or even, in some cases, whether, temperatures have risen.’ The Southeastern Legal Foundation cited a newspaper article claiming that ‘Shortly after the Climategate documents became public, CRU was forced to admit on or about November 29, 2009 that much of their original data has been destroyed’ (Leake, 2009).
Petitioners assert that these alleged actions by CRU make EPA’s ‘claim of warming trends scientifically indefensible’ (Competitive Enterprise Institute) and render the CRU data ‘scientifically worthless’ (State of Texas).
Peabody Energy states:
EPA was aware at the time it published the Endangerment Finding that some of the raw data used in HadCRUT3 had been destroyed but dismissed concerns about the reliability of these records on essentially three grounds. First, it uncritically adopted CRU’s public position that since (a) ninety-five percent of the raw data has long been available to researchers on the Global Historical Climatology Network (‘GHCN’) and (b) CRU’s adjustment methodology is available, then (c) researchers could independently replicate and confirm the appropriateness of CRU’s adjustments. But as shown in section VI(E) below, the reason CRU and independent researchers cannot replicate CRU’s adjustments is because CRU can no longer determine which data from the GHCN it used. EPA’s unwillingness to make its own assessment of this issue further illustrates that EPA has ceded its own judgment to third parties.
CRU has not retained all of the raw weather station data collected during development of the HadCRUT temperature record, and CRU has openly acknowledged this point and provided a detailed explanation for its handling of the data. In fact, this issue was raised in public comment on the proposed Endangerment Finding and is addressed at length in Volume 2 (Response 2-39) of the EPA RTC document. In that response, we explained that:
CRU has stated (see Appendix B) that they do not hold the original raw data but only the value added (i.e., quality-controlled and homogenized) data for ‘some sites,’ and refers readers to a long list of peer-reviewed references that describe how the value-added data were generated. To the extent possible, CRU has made all data for the HadCRUT record available (see Appendix A). In certain instances, data may be unavailable to the public due to constraints of the arrangements made in obtaining the data between CRU and other governments/organizations, as described in Appendix B.
CRU’s statement on this issue is available on their website (CRU, 2010).
The fact that CRU has not retained all the raw data does not interfere in any way with replication of or development of independent estimates of global or regional surface temperature history. The vast majority of the raw global weather station data is available from the GHCN and other public data sources. It is therefore possible to generate an independent estimate of global temperatures, as GISS, NCDC, and other groups have done. While attempting an exact replication of a particular analysis could be useful in understanding how it works and possibly identifying minor bugs, this kind of exact replication is certainly not necessary. Indeed, for a result to be robust, it should yield a very similar answer, even if there are differences in the method of the underlying analysis. As discussed in Section 1.4 of this document, NOAA and NASA analyses of global surface temperature records provide such evidence. These analyses find temperature increases of similar magnitude and thus, strongly support the conclusion that the CRU analysis is robust. Similarly, as described in the background above, the major conclusions about warming based on the HadCRUT temperature record have remained robust, even as more data were integrated and methodologies refined over a period of two decades. Additionally, CRU has posted a graph showing that there is very little difference in global average surface temperatures resulting from using the 80% of the station data that is publicly available compared to the full dataset (MetOffice Data Subset Figure; UK Met Office, 2010c). At smaller spatial scales there may be differences between the full and partial datasets, but at the global scale, this comparison confirms that the trend is very robust and reliable.
Petitioners have not conducted an independent analysis to determine global temperature trends, although the data are available, and they do not provide any global analysis that yields a different result. They have provided no evidence that an additional or different analysis using the publicly available temperature data would yield a result that differs substantively from the warming over the century reflected in the HadCRUT and other analyses of global surface temperature.
In contrast, the recent Independent Climate Change E-mails Review (2010) managed to write computer code from scratch in the space of two days that produced results similar to the HadCRUT temperature record and other independent analyses, working with publicly accessible data. This independent effort included the two key characteristics of any global temperature analysis: use of anomalies (or an equivalent method) in order to focus on the change in temperature rather than the absolute temperature, and use of gridding (or an equivalent method) to take into account the spatial distribution of the data. In the process, the Review determined that:
The exercise and comparison of all figures demonstrates that:
- Any independent researcher may freely obtain the primary station data. It is impossible for any group to withhold data.
- It is impossible for any group to tamper improperly with data unless they have done so to the GHCN and NCAR (and presumably the NMO [National Meteorological Office]) sources themselves.
- The steps needed to create a temperature trend are straightforward to implement.
- The computer code necessary is straightforward to write from scratch and could easily be done by any competent programmer.
- The shape obtained in all cases is very similar; in other words, if one does the same thing with the same data one gets very similar results.
- The result does not depend significantly on the exact list of stations.
- Adjustments make little difference.
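The Review’s exercise, anomalies plus gridding, can be sketched in miniature as follows; all names, station locations, and values here are illustrative, not the Review’s actual code, and a real analysis would also weight grid cells by area:

```python
# Minimal sketch of a from-scratch global temperature analysis: compute each
# station's anomalies against its own reference mean, average anomalies
# within grid cells, then average the cells into a global series.

from collections import defaultdict

def global_series(stations, cell_deg=5.0):
    """stations: list of (lat, lon, {year: temp}).
    Returns {year: global mean anomaly}."""
    cells = defaultdict(lambda: defaultdict(list))
    for lat, lon, series in stations:
        ref_mean = sum(series.values()) / len(series)     # station's own baseline
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        for year, t in series.items():
            cells[cell][year].append(t - ref_mean)        # anomaly, not absolute T
    # cell means first, so dense regions are not overrepresented; then an
    # unweighted mean over cells (area weighting omitted for brevity)
    years = defaultdict(list)
    for cell_years in cells.values():
        for year, anoms in cell_years.items():
            years[year].append(sum(anoms) / len(anoms))
    return {year: sum(v) / len(v) for year, v in sorted(years.items())}

stations = [
    (51.0, 0.0, {2000: 10.0, 2001: 10.4}),
    (52.0, 1.0, {2000: 8.0, 2001: 8.4}),     # same cell, different absolute T
    (-30.0, 140.0, {2000: 20.1, 2001: 20.3}),
]
gs = global_series(stations)
# The two same-cell stations collapse to one cell value before the global mean.
```

As the Review found, a procedure of this shape yields a very similar result regardless of the exact station list, because the trend signal is shared across stations.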
Studies that use consistent methods for determining annual anomalies and spatial interpolation weighting factors produce results broadly consistent with the HadCRUT temperature record. Contrary to the assertions of petitioners, EPA has not ceded its judgment to third parties or uncritically adopted positions by other organizations; it has assessed the major surface temperature records and the methodologies of creating them and determined that the global trends found in these temperature records are robust. Contrary to the implications by the petitioners, it is not necessary to perfectly duplicate the HadCRUT temperature record to the precise decimal point in order to make scientifically useful conclusions from replication exercises.
In addition, the limited amount of raw data that is no longer in CRU’s possession does not adversely affect the scientific usefulness of the surface temperature analyses and results prepared by CRU. There is a long record of peer-reviewed publications explaining the process of preparing the raw data for analysis, presenting the results of the analyses, and evaluating the magnitude of various possible sources of error. It is an unwarranted leap in logic to assume these analyses have no merit just because a small percentage of the underlying raw data is no longer in CRU’s possession.
In summary, we find that the petitioners’ arguments were previously addressed in EPA’s RTC document, and they have shown no evidence that CRU’s lack of a small portion of the raw temperature data impedes the ability to check whether the publicly available data give results consistent with the HadCRUT temperature record, or that it changes the scientific validity of the analyses performed by CRU.
The Coalition for Responsible Regulation claims that ‘According to CRU’s own programming staff, the CRU temperature data were ‘fudged,’ noting that ‘the file BRIFFA_SEPT98_E.PRO reveals that the programmer ‘Appl[ied] a VERYARTIFICIAL correction for decline!!,’ and literally labeled several of the adjustments as ‘fudge factors.’’
Based on the comments in the code, BRIFFA_SEPT98_E.PRO appears to be a provisional test program, with comments in capital letters to remind the programmer to replace the temporary fudge factors with more valid adjustments before the code is used for public products. Additionally, although the petitioners claim that this code shows that ‘CRU temperature data were ‘fudged,’ it appears that the fudge factor in question is related to the divergence issue discussed in Section 1.1 of this document, and therefore the fudge factor is not related to the HadCRUT temperature record. There is no evidence or reason to believe that the BRIFFA_SEPT98_E.PRO code was actually used for any public final product. On its face the ‘Artificial Correction’ was designed as a temporary adjustment.
The State of Texas refers to ‘one notable email’ in which ‘a CRU staff member discuss a ‘trick’ to ‘hide the decline’ in CRU temperature data sets from 1981-2000,’ and concludes that ‘Such emails show that the CRU did not simply gather raw temperature data, enter it into computer programs, and produce conclusions based on collated raw data. Instead, the CRU gathered temperature data and manipulated it to produce a result that was sometimes different from the result that the raw data would have produced.’ The State of Texas states that ‘Consequently, the British Meteorological Office (the ‘MET’) has announced that it will reexamine 160 years of climate data, attributing the need to reexamine the data to a ‘lack of public confidence based on the leaked e-mails.’’ The Southeastern Legal Foundation also brought up the MET reexamination in its petition.
The State of Texas is referring to an e-mail that addressed paleoclimate temperature reconstructions, not the modern surface temperature record. As discussed in Subsection 1.1.4, the ‘decline’ referenced in this e-mail was a reference to the divergence between some tree ring records and the instrumental record. This divergence was recognized in the published literature, and in some cases the authors of these publications were the same people who were the authors of the e-mails. This is unrelated to the HadCRUT temperature record.
In addition, the petitioner objects to the ‘manipulation’ of raw temperature data but has produced no evidence or argument that CRU made any inappropriate adjustments to the data in the process of producing the HadCRUT temperature record. The adjustments to the raw data are made in order to make the resulting temperature record a more accurate representation of the actual large-scale historical temperature changes. There is no scientific merit to petitioners’ allegations that the adjusted data is faulty because it gives different results than the unadjusted data. That ignores the scientific basis for making the adjustments. EPA addresses these issues in Subsection 1.3.2 on the Scientific Background on the HadCRUT Temperature Record.
The MET announced a reexamination of climate data, but creating an investigative body to reexamine the data is not evidence that the original conclusions are in error. Indeed, the proposal from the MET (UK Met Office, 2010b) stated, ‘[i]t is important to emphasize that we do not anticipate any substantial changes in the resulting global and continental-scale multi-decadal trends. This effort will ensure that the datasets are completely robust and that all methods are transparent.’ There have been several investigations of CRU and other climate researchers, and to date, none of the released conclusions of these investigations have found evidence of inappropriate data manipulation, nor has any evidence been shown that the HadCRUT temperature record is not robust. The existence of similar trends in the NASA and NOAA records, along with other physical signs of warming, such as glacial melt, satellite records, and ocean heating, strongly supports the robustness of the HadCRUT temperature record, and it is all of these sources that provide the comprehensive evidence that has led to the conclusion that warming in the climate system is unequivocal.
The State of Texas also references the investigation into CRU by the British House of Commons, stating ‘On December 2, 2009, Science and Technology Committee Chairman Phil Willis wrote to the Vice Chancellor of East Anglia University asking for an explanation of CRU’s conduct and expressing concern about allegations that CRU data may have been manipulated or deleted in order to produce evidence on global warming.’
The UK House of Commons Science and Technology Committee has released the report (2010) from its independent investigation and found no evidence of manipulations or deletions to produce evidence on global warming. See the Decision Document, as well as Subsections 1.3.5 and 18.104.22.168.
Additionally, the ‘Report of the International Panel set up by the University of East Anglia to examine the research of the Climatic Research Unit’ was issued recently (University of East Anglia, 2010a), and found that:
We saw no evidence of any deliberate scientific malpractice in any of the work of the Climatic Research Unit and had it been there we believe that it is likely that we would have detected it. Rather we found a small group of dedicated if slightly disorganised researchers who were ill-prepared for being the focus of public attention. As with many small research groups their internal procedures were rather informal.
The Pacific Legal Foundation claims that ‘The e-mails may suggest that the authors manipulated and ‘massaged’ the data to strengthen the case in favor of unprecedented global warming, and to suppress their own data if it called global warming into question. See e-mails 0938018124, 0843161829, 0939154709, 0942777075, and 1059664704.’37
The petitioners do not state what portions of these selected e-mails are evidence for their claim of manipulation. Most of these e-mails were written in 1999. The first e-mail (0938018124)38, for example, is from Michael Mann to Keith Briffa, Chris Folland, and Phil Jones and includes in the first paragraph a statement that is directly contrary to the assertion of the petitioners. This paragraph demonstrates appropriate acknowledgment of limitations and avoidance of overconfidence:
And I should point out that Chris, through no fault of his own, but probably through ME not conveying my thoughts very clearly to the others, definitely overstates any singular confidence I have in my own (Mann et al) series. I believe strongly that the strength in our discussion will be the fact that certain key features of past climate estimates are robust among a number of quasi-independent and truly independent estimates, each of which is not without its own limitations and potential biases. And I certainly don’t want to abuse my lead authorship by advocating my own work.39
E-mail 084316182940 is from Gary Funkhouser, about a certain Kyrgyzstan tree ring dataset from which he was unable to make robust conclusions. We respond to the issues in this e-mail in Response 1-12 in Subsection 1.1.4 of this RTP document, and find that the scientist is following proper procedures and not using data unless the data meets statistical requirements. The e-mail ends with Funkhouser hoping that someone will take some more data from the site to get better statistics.
E-mail 093915470941 contains data from a temperature reconstruction from Tim Osborn to Mike Mann and others. This e-mail contains a discussion of the post-1960 non-temperature signal that has been discussed in Section 1.1 on ‘divergence’, but again this e-mail contains no evidence of improperly massaging data:
Keith has asked me to send you a timeseries for the IPCC multi-proxy reconstruction figure, to replace the one you currently have. The data are attached to this e-mail. They go from 1402 to 1995, although we usually stop the series in 1960 because of the recent non-temperature signal that is superimposed on the tree-ring data that we use. I haven’t put a 40-yr smoothing through them - I thought it best if you were to do this to ensure the same filter was used for all curves.42
E-mail 094277707543 is from 1999 and similarly refers to the divergence issue, and has been quoted extensively because it used the term ‘Mike’s Nature trick’. We address this e-mail in Response 1-11 in Subsection 1.1.4 of this document, and find that this e-mail refers to a graph used in a 1999 WMO report which is unrelated to the IPCC and that the word ‘trick’ does not imply deceit. The underlying issues involved with divergence are addressed in Section 1.1 of this document.
E-mail 105966470444 is the e-mail from Mike Mann discussing ‘calibration residuals.’ We respond to these issues in Response 1-80 in Section 22.214.171.124 where it is shown that the uncertainties arising from these residuals were discussed in the published papers by Dr. Mann.
None of these e-mails provide evidence of or support the conclusion that the e-mail authors conducted inappropriate ‘massaging’ or manipulating of the data or suppressed their own data.
37 E-mail files 0938018124.txt (September 22, 1999) page 203 line 12, 0843161829.txt (September 19, 2006) page 11 line 4, 0939154709.txt (October 5, 1999) page 229 line 32, 0942777075.txt (November 16, 1999) page 243 line 31, and 1059664704.txt (July 31, 2003) page 558 line 6 of PDF version entitled: CRU Emails 1996-2009.pdf
The petitioners submitted a large number of quotes from a 300-page, 90,000-word document named HARRY_READ_ME.txt45. The HARRY_READ_ME.txt debugging notes are a record of ‘Harry’s’46 attempt to update the CRU TS2.1 product to TS3.0 during the years 2006 to 2009 by merging six years of additional data (covering 2003 to 2008) to an old dataset running until 2002, and migrating the code to a new computer system at the same time. As noted in the science background in Subsection 1.3.2 of this document, CRU TS2.1 and 3.0 are different from the HadCRUT temperature record that is referred to in the EPA TSD. Arguments made by petitioners about the TS datasets are not relevant to the HadCRUT temperature record.
These quotes mainly fall into the following categories:
- Expressions of frustration with documentation or system processes that do not indicate any problems in the underlying data.
- Notes about imperfections in the data received from various organizations that make it difficult to integrate the new data with the old database. The problems that arise from this integration are resolved one by one throughout the time period covered by the file.
- Expressions of frustration based on introducing errors into code which then have to be fixed. Bugs are almost always introduced in the coding process, and the key is to find them and fix them. The file includes evidence that this is being done.
- A few unresolved data issues arise at the end of the log file, and there is no indication in the log file of how these imperfections were fixed. This does not mean that these issues were not fixed between the version of the log file that was released with the CRU e-mails and the final TS3.0 version. Additionally, none of these issues appear to involve large-scale errors that would introduce bias, but rather unresolved inconsistencies in the data of one or two stations for a subset of the climate variables being recorded.
In sum, the HARRY_READ_ME.txt is a long, multi-year log of expressed frustrations with old codes, merging datasets from different sources, changing computer systems, and the arduous process of performing quality control. This file reflects both specific technical terminology and the personality of the researcher. Taken out of context, as done by petitioners, the quotes from the file can provide misleading impressions. As the files show, the datasets are not perfect, but that is not a surprise. As exemplified by the Brohan et al. (2006) uncertainty analysis discussed in Subsection 1.3.2 (Background), it is well known that errors and uncertainties arise in the underlying raw data set for a number of reasons. Importantly, the petitioners have not identified any errors likely to bias the large-scale trends analyzed with the TS dataset. As discussed, measurement or other errors at individual stations matter for very local trends but tend to cancel out when averaged over large scales. Moreover, EPA was unable to identify any quotes among the many dozens submitted that indicated any deliberate attempt to bias the data (nor did EPA find any such evidence in its analysis of the underlying HARRY_READ_ME document).
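The statistical point about station errors canceling over large scales can be illustrated with a minimal simulation. The anomaly value and error range below are hypothetical, chosen only for illustration, and are not drawn from CRU data:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical: every station sees the same true regional anomaly of
# 0.5 degC, plus an independent measurement error of up to +/-1.0 degC.
TRUE_ANOMALY = 0.5

def mean_abs_error(n_stations, n_trials=2000):
    """Typical error of the regional mean as the station count grows."""
    total = 0.0
    for _ in range(n_trials):
        readings = [TRUE_ANOMALY + random.uniform(-1.0, 1.0)
                    for _ in range(n_stations)]
        regional_mean = sum(readings) / n_stations
        total += abs(regional_mean - TRUE_ANOMALY)
    return total / n_trials

# Individual stations can be off by up to 1 degC, but the error of the
# large-scale average shrinks roughly as 1/sqrt(N).
assert mean_abs_error(100) < mean_abs_error(1)
```

With these hypothetical numbers, a single station is typically off by about 0.5 degC, while an average over 100 stations is typically off by only a few hundredths of a degree, which is the sense in which random station errors matter for local trends but largely cancel at large scales.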
The evidence shows that ‘Harry’ was working on identifying and solving these errors in order to improve on the CRU TS2.1 dataset, much like the HadCRUT temperature record has gone through multiple iterations and improvements. Again, the petitioners have shown no evidence of bias or of errors that are of sufficient magnitude to compromise the climate signals in the various climate variables included in the TS dataset.
The CRU TS2.1 dataset was not used in the IPCC for evaluating the trends in surface temperature. The HadCRUT temperature record was used for that, and it is a separate and distinct dataset from TS2.1. The TS2.1 products are only directly cited in Working Group I of the IPCC AR4 in two places: in Figure 3.12/Table 3.4 as one example out of several of a global land precipitation dataset (Trenberth et al., 2007), and in Figure 9.19 for comparing Sahel precipitation trends to model simulations (Hegerl et al., 2007). Therefore, in addition to a lack of evidence that the contents of the HARRY_READ_ME.txt file materially change any large-scale trends, analyses of TS2.1 data in most cases are presented alongside other independent data that support the same conclusions.
The TS datasets in particular involved up to nine different climate variables collected from thousands of weather stations from dozens of different countries covering a century of time. One problem in particular involving WMO codes that appears in many of the quotes highlighted by petitioners was explicitly highlighted in the paper published about TS2.1, as well as in other papers in the literature as discussed in Response 1-46. None of the quotes highlighted indicated any problems that are outside the bounds of the known uncertainties and imperfections in the data that are discussed in the literature.
This quote is followed, much later in the file, by a discussion from ‘Harry’ of trying to get this code working on the computer system of a client (the British Atmospheric Data Center): ‘Next problem - station counts. I had this working fine in CRU - here it’s insisting on stopping indefinitely at January 1957.’ The first part of the quote shows that ‘Harry’ did fix the station count problem. The last part of this quote refers to additional frustrations involved in getting the code to work on an unfamiliar computer system, which is a fairly typical occurrence when moving from one system to another.
The paragraphs preceding and following the quote clearly indicate that this problem was identified by an automated quality control routine, that ‘Harry’ followed up by e-mailing the Australia Bureau of Meteorology, and that he was able to resolve the problem by determining that the 1962 to 1993 portion of the COBAR AIRPORT AWS station was actually from the Cobar MO station.
Southeastern Legal Foundation provides a quote: ‘Well, dtr2cld is not the world’s most complicated program. Wheras cloudreg is, and I immediately found a mistake! Scanning forward to 1951 was done with a loop that, for completely unfathomable reasons, didn’t include months! So we read 50 grids instead of 600!!!’
Examination of this file shows that this quote refers to a piece of code that ‘Harry’ had just written, and the quote was immediately followed by the comment, ‘Running with those bits fixed improved matters somewhat,’ showing that the goal of the exercise was to identify and address issues with the merging/updating of the data system, and that the problems were addressed.
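The loop error described in the quote, scanning by year only and so reading 50 grids instead of 600, can be reconstructed in miniature. This is a hypothetical sketch, not CRU's actual code:

```python
# Hypothetical reconstruction of the bug: 50 years of monthly grids
# (600 grids in total), but the scanning loop iterates over years only.
YEARS, MONTHS = 50, 12
grids = list(range(YEARS * MONTHS))  # stand-in for 600 monthly grids

def read_buggy():
    # The loop "didn't include months", so only one grid per year is read.
    return [grids[year] for year in range(YEARS)]

def read_fixed():
    # Corrected scan: one grid for each month of each year.
    return [grids[year * MONTHS + month]
            for year in range(YEARS) for month in range(MONTHS)]

assert len(read_buggy()) == 50    # "we read 50 grids"
assert len(read_fixed()) == 600   # "instead of 600"
```

As the response notes, ‘Harry’ found this class of mistake in code he had just written and fixed it, which is the normal course of debugging.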
Another quote Southeastern Legal Foundation provides is: ‘I was expecting that maybe the latter contained 94-00 normals, what I wasn’t expecting was that they are in % x10 not %! Unbelievable - even here the conventions have not been followed. It’s botch after botch after botch. Modified the conversion program to process either kind of normals line.’
In this case, the problem was merely that some input data had different units than other input data, and that ‘Harry’ developed a routine to address this. The resolution of the problem (modifying the conversion program) is included within the quote provided by the petitioner.
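The fix described here amounts to a simple unit conversion. A minimal sketch follows; the function name and values are illustrative assumptions, not CRU's code:

```python
def normalise_percent(value, in_tenths=False):
    """Return a cloud normal in percent; some sources supplied values
    in tenths of a percent ('% x10') rather than in percent."""
    return value / 10.0 if in_tenths else float(value)

# A normal stored as 755 in '% x10' units is really 75.5%.
assert normalise_percent(755, in_tenths=True) == 75.5
assert normalise_percent(75.5) == 75.5
```

Once the conversion program detects, or is told, which convention a source follows, either kind of normals line yields the same value, which is exactly the resolution described in the quote (‘Modified the conversion program to process either kind of normals line’).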
Peabody Energy claims that if adjustments are made to a methodology of developing a temperature record after peer review, then the resulting temperature record no longer qualifies as peer-reviewed itself, stating ‘If CRU cannot replicate its adjustments (as in the case with their HadCRUT3 data), or if adjustments were made after the publication of peer-reviewed methodology (as in the case with their TS2.1/3.0 data), there is no basis to say that the adjustments, in fact, were properly applied. CRU – and EPA – in essence are saying ‘trust me,’ but any basis to do so seems to have collapsed with the Harry_Read_Me files.’ Peabody Energy further argues that any dataset that is updated or changed cannot count as peer reviewed, stating ‘Moreover, as is evident in several emails, the CRU temperature data set is in a constant state of flux, with the data set constantly being modified both in terms of the inclusion/exclusion of data as well as the methods employed. Thus, the datasets now available have not been directly peer-reviewed.’
The petitioner misunderstands how the peer-review process is applied to the creation of temperature records and similar datasets. The papers published by CRU over the years detail the sources and methods used to generate the temperature record (as summarized in Subsection 1.3.2). It is expected that new data will be incorporated as they become available, and that minor corrections and updates will likewise be made, without triggering any need for a new peer review. If any of this new data or updates resulted in substantial changes to the final output or the methodology, then that might call for further review. However, as has been noted repeatedly, the HadCRUT temperature record is both reliable and robust, with similar trends resulting from the analyses of NOAA, NASA, and other groups. The evidence suggests that these temperature records and other datasets are being produced in accordance with the peer-reviewed papers in which they are described, where these peer-reviewed papers discuss the limitations of the underlying data records and other uncertainties. The petitioner has not shown evidence of any changes that are substantial enough to warrant a new peer-review process.
The Harry_Read_Me file is not related to the HadCRUT temperature record, but rather to the creation of TS3.0, as explained in Subsection 1.3.2. TS3.0 (or Time-Series 3.0) is a separate dataset produced by CRU, with different stations, spatial resolution, and set of variables. Regardless, the notion that CRU and EPA are saying ‘trust me’ with regard to the surface temperature record is flawed; the GHCN data is publicly available for any individual or researcher to use to generate his or her own global temperature record. The petitioners have not provided or referenced any credible analysis of the GHCN data that reaches a different conclusion than those of NOAA, NASA, and the CRU.
The Coalition for Responsible Regulation, Peabody Energy, and Pacific Legal Foundation all provide sections of a quote from the HARRY_READ_ME file: ‘What the hell is supposed to happen here? Oh yeah - there is no ‘supposed’, I can make it up. So I have :-)’ The Coalition for Responsible Regulation claims that ‘The programmer asserts in the HARRY READ ME.txt file that, in the absence of data from the period 1990 through 2003, he made up what to do’ and states ‘EPA cannot support a scientific judgment premised on ‘very artificial,’ ‘fudged,’ and ‘made up’ data. Because these Disclosures demonstrate that at least some of the temperature data managed and maintained by CRU were ‘fudged’ or ‘made up,’ EPA must reconsider its Finding and demonstrate that the data upon which it relied are valid and not corrupted.’
Related quotes about WMO codes are provided by the above petitioners and the Southeastern Legal Foundation: ‘You can’t imagine what this has cost me - to actually allow the operator to assign false WMO codes!! But what else is there [in] such situations? Especially when dealing with a ‘Master’ database of dubious provenance (which, er, they all are and always will be),’ ‘So with a somewhat cynical shrug, I added the nuclear option - to match every WMO possible, and turn the rest into new stations ... In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don’t think people care enough to fix `em, and it’s the main reason the project is nearly a year late,’ and ‘False codes will be obtained by multiplying the legitimate code (5 digits) by 100, then adding 1 at a time until a number is found with no matches in the database. THIS IS NOT PERFECT but as there is no central repository for WMO codes - especially made-up ones—we’ll have to chance duplicating one that’s present in one of the other databases. In any case, anyone comparing WMO codes between databases - something I’ve studiously avoided doing except for tmin/tmax where I had to - will be treating the false codes with suspicion anyway. Hopefully.’
The Coalition for Responsible Regulation claims that these quotes are evidence of ‘deliberately fabricating non-existent temperature-recording stations’ and that ‘This kind of data manipulation, including creation of false temperature stations, undercuts the credibility of the CRU databases and requires EPA to reconsider its Endangerment Finding.’
Petitioners mischaracterize the e-mails by ignoring the context of the legitimate quality control process that they reflect. The problems encountered in merging WMO station data from different countries and time periods have been recognized and discussed in the peer-reviewed literature (two examples are quoted in this response), and the quality control measures taken here are consistent with those discussed in the literature.
These quotes address a specific issue where the programmer had written an automatic quality control routine to be used when merging incoming data provided by national weather organizations to the existing data in the database. As mentioned in the background discussion above, data are assumed to be from the same weather station if their WMO codes match, but sometimes issues arise that cast doubt on this assumption. To address these situations, the programmer developed a routine that ran prior to merging the data that checked for sufficient correlation (e.g., seasonal patterns) between the new and the old data. If the data were correlated, the merge would proceed. However, in cases where the correlation routine finds something anomalous, the program warns the user that an inconsistency exists and provides the user with three options:
You have failed a match despite the WMO codes matching.
This must be resolved!! Please choose one:
1. Match them after all.
2. Leave the existing station alone, and discard the update.
3. Give existing station a false code, and make the update the new WMO station.
Therefore, this quality control check is ensuring that for any data that seems odd, a user will conduct a manual review and determine whether the best option is to assume that both datasets are from the same station or whether the new station should be discarded or turned into its own station. We note here that the use of ‘false codes’ is not ‘falsification,’ as inferred by the Pacific Legal Foundation, but rather a way to address the problem that a few stations in the new data set had the same WMO code as stations in the old data set, despite the fact that quality control routines determined that the data from these stations had different statistical characteristics.
This is not ‘making up’ data. No data have been created; the only thing ‘made up’ is an identification number associated with the data (the WMO code), because the previous identification number was already in use. The petitioners did not highlight these additional quotes that show the fuller context of the now oft-cited ‘I can make it up’ quote.
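The false-code scheme quoted by petitioners (multiply the legitimate 5-digit code by 100, then add 1 at a time until the result is unused) can be sketched as follows. The station codes below are hypothetical, and this is an illustration of the described procedure, not CRU's actual code:

```python
def make_false_code(wmo_code, codes_in_use):
    """Assign a non-colliding identifier to a record whose WMO code is
    already taken: multiply the 5-digit code by 100, then add 1 at a
    time until an unused number is found. Only the label changes; no
    temperature values are created or altered."""
    candidate = wmo_code * 100
    while candidate in codes_in_use:
        candidate += 1
    return candidate

in_use = {37720, 3772000, 3772001}  # hypothetical codes already assigned
assert make_false_code(37720, in_use) == 3772002
# The 7-digit result cannot collide with any real 5-digit WMO code in
# this database, so the conflicting record is kept distinct rather than
# being wrongly merged into an unrelated station.
```

This makes concrete why the ‘false code’ is an internal bookkeeping label for keeping records separate, not a fabricated observation.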
Moreover, the problems involved with WMO codes are not new and are well documented in the published literature. As explained in the published paper on the CRU TS2.1 database (e.g., Mitchell and Jones, 2005), there are challenges involved in matching WMO codes.
However, not all sources attach WMO codes to their data, and not all stations have been assigned WMO codes, so additional information was used: location, name and country. Each additional station was compared with the stations already in the database, both to avoid unnecessary duplication and to ensure that each station record is as complete as possible.
If an additional station was already present in the database, then the two records were compared. The comparison was based on any available overlap between the records; if none was available, then an attempt was made to construct a reference series that overlapped both records (as in Section 2.4). If an overlap was found, then it was used to alter the statistical characteristics of the additional station to match those of the existing record, using the method in Section 2.4.4; the two records were then merged. If no overlap was found, then the records were assumed to be for different stations, because of the possibility of the two records having different normals.
Where the sources were very recent (CLIMAT and MCDW) the additional station was assumed to be the same without the above data check. This was justified because the normals from these sources were likely to be the same as the post-adjustment normals from other sources. This assumption was necessary for some climate variables (notably wet days) for which overlaps with stations from other sources were very rare; without it the normals could be calculated for very few recent data.
As demonstrated by this quote from the published literature, the standard procedure was to assume that a new record was a different station if the data did not have sufficient overlap to merge the records with confidence. This is what ‘Harry’ was doing when he was ‘making up’ a new WMO code.
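The decision logic in the quoted passage can be sketched as follows. The function names, the use of mean and spread as the ‘statistical characteristics,’ and the data are illustrative simplifications and are not the actual method of Mitchell and Jones’ Section 2.4:

```python
def stats(values):
    """Mean and a simple spread measure for a list of values."""
    n = len(values)
    mean = sum(values) / n
    spread = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return mean, spread

def merge_or_separate(existing, update, overlap):
    """If the records overlap, rescale the update to match the existing
    record's statistics and merge; with no usable overlap, treat the
    update as a different station (the conservative default)."""
    if not overlap:
        return 'separate station'
    old_mean, old_spread = stats([existing[k] for k in overlap])
    new_mean, new_spread = stats([update[k] for k in overlap])
    scale = old_spread / new_spread
    existing.update({k: old_mean + (v - new_mean) * scale
                     for k, v in update.items()})
    return 'merged'

record = {1901: 10.0, 1902: 12.0}               # hypothetical annual values
incoming = {1901: 20.0, 1902: 24.0, 1903: 22.0}
assert merge_or_separate(record, incoming, [1901, 1902]) == 'merged'
assert record[1903] == 11.0                     # adjusted to the old scale
assert merge_or_separate({}, {1904: 1.0}, []) == 'separate station'
```

The last branch is the point at issue: where no overlap supports a confident merge, the record is treated as a new, separately labeled station rather than being forced into an existing one.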
Peterson, Daan, and Jones (1997) also addressed some of the challenges involved when gathering data from many independent national services based on WMO codes, and the well understood imperfections in this procedure:
These four sources total 15,433 stations. But how many are unique? To answer this question, all the stations needed to be merged into a single list, as shown in Fig. 2. This was not an easy task: in the four sources, latitudes and longitudes were recorded in hundredths of degrees, tenths of degrees, and minutes, depending on the source. Therefore, station locations seldom agreed exactly. In addition, stations often move slightly over time so data from earlier sources might not have the current location. Not only did the spelling of station names change over time, but in many cases, particularly in newly independent countries, the entire station name changed. These problems are compounded by keypunch errors that occur when keying in hard-to-read metadata from handwritten RCS lists. WMO station numbers also change with time, not only for individual stations. Sometimes a country will change all their WMO numbers at one time. Since most of the metadata flags were keyed to WMO numbers, the final station list was merged with the current WMO Volume A station list to tag each station with the appropriate WMO number whenever possible.
The result was a list of 8653 unique stations, but the list is far from perfect. It is very likely that some errors in merging remain. An example is the assigning of a WMO number from a short-term synoptic reporting airport station to a nearby long-term climate station with a similar name. Also, normals stations were treated as if they started in 1961, which is often erroneous but was the best that could be done with the information available. Therefore, the list should be carefully inspected for accuracy by individuals from each country that has stations initially selected for the GSN.
This quote from Peterson et al. (1997) is a perfect example of the difficulties involved in dealing with data from different time periods and different nations, which results in many of the frustrations expressed throughout the HARRY_READ_ME file. These kinds of difficulties would not lead to the kind of large-scale bias errors with the potential to impact large-scale trends that were discussed in Subsection 1.3.2 of this document.
As stated before, the ‘Harry’ quality control program gives the user several options of how to treat the data. The three options are to merge the new and old data together in one station, to discard the new data, or to give the old data a ‘made-up’ station number and use the existing WMO code for the new data. ‘Making up’ a station number, and assigning it a code that is clearly not a standard WMO code, is a straightforward attempt to properly treat imperfect data in the case where the user of the code concludes that this is a better solution than either merging the data or throwing away the new data. Therefore, this is not a case of ‘fudged’ or ‘made-up’ data as claimed by the petitioners, but rather a legitimate attempt to address imperfect input data.
The scientists referred to "false codes" and running programs to "allow bad databases to pass unnoticed, and good data bases to become bad," see Appendix A, 15-16, leading to the possible inference that the CRU scientists may have falsified or manipulated the data. The fact that the British Information Commissioner’s Office has found that CRU violated British law by failing to make disclosures under British freedom of information laws lends substantial credence to the proposition that CRU engaged in at least some wrongdoing in connection with the CRU Data. See Guy Chazan, U.K. Says University Broke Law on Turning Over Data, Wall St. J., Jan. 29, 2010, at A8. See also David Derbyshire, Scientists broke the law by hiding climate change data: But legal loophole means they won’t be prosecuted, Daily Mail, Jan. 30, 2010.
The statement about ‘false codes’ is addressed in Response 1-46 above which demonstrates that this statement does not refer to falsification of data.
The issues involved with the British freedom of information laws are unrelated to the issues involved in the HARRY_READ_ME file. Freedom of information issues are addressed in Volume 3, Section 3.4 of the RTP document.
OH F[---] THIS. It’s Sunday evening, I’ve worked all weekend, and just when I thought it was done I’m hitting yet another problem that’s based on the hopeless state of our databases. There is no uniform data integrity, it’s just a catalogue of issues that continues to grow as they’re found.
The petitioner claims that this shows that ‘the CRU programmer admits the CRU data are without integrity.’
The programmer is upset because one dataset has a resolution of 2.5 degrees of latitude and longitude, but the programmer wanted to do an analysis comparing this dataset to another dataset with a resolution of 0.5 degrees. Data integrity in this case refers to the fact that the historical dataset used different procedures for calculating rain days before 1990 and from 1990 onwards, and therefore, ‘Harry’ needs to add a conditional statement to his code to address this difference. ‘No uniform data integrity’ does not mean ‘without integrity’; it just means that different data have been treated in different ways. This makes a programmer’s job more difficult, but it is not a major flaw in the data. The petitioner clearly misconstrues terminology commonly used by computer programmers. In addition, this dataset and analysis are unrelated to the HadCRUT temperature record. The petitioner provides no support for the assertion that ‘the CRU data’ in general are without integrity.
But what are all those monthly files? DON’T KNOW, UNDOCUMENTED. Wherever I look, there are data files, no info about what they are other than their names. And that’s useless... take the above example, the filenames in the _mon and _ann directories are identical, but the contents are not. And the only difference is that one directory is apparently `monthly’ and the other `annual’ - yet both contain monthly files.
In this quote, ‘Harry’ is complaining about the way certain intermediate data products (monthly files) were stored. There is no evidence that these intermediate data products were ever meant to be used again, but ‘Harry’ needed to go back to these because of a coefficients file. This coefficients file was eventually reconstructed. Therefore, this quote is not evidence of flaws in the output of either the older product (TS2.1) or the newer product (TS3.0) produced by CRU. Neither of these products is related to the dataset used for the HadCRUT temperature record.
Back to the gridding. I am seriously worried that our flagship gridded data product is produced by Delaunay triangulation - apparently linear as well. As far as I can see, this renders the [weather] station counts totally meaningless. It also means that we cannot say exactly how the gridded data is arrived at from a statistical perspective - since we’re using an off-the-shelf product that isn’t documented sufficiently to say that. . . . Was too much effort expended on homogenisation, that there wasn’t enough time to write a gridding procedure? Of course, it’s too late for me to fix it too. Meh.
This quote refers to a quality control problem with the CRU TS dataset that is identified and later resolved. Delaunay triangulation is an alternate method for determining global maps from an uneven distribution of stations. One method used to produce a global map is to create a set of square grid cells and average together the stations within each grid cell before averaging all the grid cells together. The Delaunay method instead makes triangles of different sizes with stations at the vertices and weights the stations by the size of the triangles on which they are located. The use of an ‘off-the-shelf product’ means that CRU did not have access to the code that was used for the Delaunay triangulation method, and there might be subtle differences depending on exactly how the method is implemented. Petitioners provide no evidence that the ‘off-the-shelf product’ is inaccurate or that Delaunay triangulation is an inappropriate method.
‘Harry’ does use a different product to determine station counts, because the Delaunay triangulation method is inappropriate for that use. He also refers to a functional gridding program that he wrote: ‘Got the gridding working, I think. IDL of course.’
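The grid-cell approach described above can be sketched as follows. The cell size and station values are illustrative, and real gridded products involve further steps (anomalies, weighting, homogenization) beyond this minimal sketch:

```python
def grid_average(stations, cell_size=5.0):
    """stations: list of (lat, lon, value). Average the stations within
    each square cell first, then average the cell means, so that a
    dense cluster of stations does not dominate a sparse region."""
    cells = {}
    for lat, lon, value in stations:
        key = (int(lat // cell_size), int(lon // cell_size))
        cells.setdefault(key, []).append(value)
    cell_means = [sum(vals) / len(vals) for vals in cells.values()]
    return sum(cell_means) / len(cell_means)

stations = [(51.1, 0.2, 1.0), (51.2, 0.3, 1.0), (51.3, 0.4, 1.0),  # one cell
            (10.0, 10.0, 3.0)]                                     # another
assert grid_average(stations) == 2.0  # three clustered stations count once
```

Delaunay triangulation addresses the same uneven-coverage problem differently, by weighting stations at the vertices of triangles of varying size; as noted above, petitioners offer no evidence that either approach is inappropriate.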
I am seriously close to giving up, again. The history of this is so complex that I can’t get far enough into it before my head hurts and I have to stop. Each parameter has a tortuous history of manual and semi-automated interventions that I simply cannot just go back to early versions and run the update prog.
This quote from the HARRY_READ_ME file expresses frustration that CRU TS2.1 was built by using both manual and automated procedures, making it more difficult to integrate new data. Good programming practice would have standardized the procedures. However, there is no evidence that this mix of manual and automated procedures produces incorrect results. Therefore, this quote is not evidence of flaws in the output of either the older product (TS2.1) or the newer product (TS3.0) produced by CRU. It is only evidence of the frustration generated by implementing an arduous quality control process.
The Southeastern Legal Foundation claims that the ‘HARRY_READ_ME.txt’ file reveals ‘an appalling series of manipulations of the data and the code, with no provisions for review, no QA/QC on the data or the code, no double-check on data integrity. As has been noted by several commentators, the HARRY_READ_ME.txt file really is the ‘smoking gun’ of the AGW argument.’
As demonstrated by the responses in this section, these arguments by the petitioner are gross assertions without scientific merit. Indeed, in contrast to the assertion of the petitioner that this file included no QA/QC or double-checks on data integrity, the entire HARRY_READ_ME file is a massive and comprehensive QA/QC endeavor. There is no ‘appalling series of manipulations.’ Instead, there is clear evidence of a researcher attempting to do the best job possible of bringing together large disparate databases into one product with as much integrity as possible.
With respect to the ‘smoking gun’ claim, as noted before, nothing in the HARRY_READ_ME file deals with issues in the data set used for the HadCRUT temperature record. The TS products that are the focus of this file are not related to the HadCRUT temperature record. Thus, we conclude that there is no ‘smoking gun’ here.
The Pacific Legal Foundation referenced the HARRY_READ_ME file and stated that ‘A complete analysis of the programming routines within this file and their implications would reasonably require more than five days, which is the number of days between the time EPA received the supplement to the petition to reopen the public comments based upon the CRU data, and the time that the Endangerment Finding was finalized.’
The CRU data were made public on November 19th, which was more than five days before the Endangerment Finding was finalized. This was enough time for EPA’s initial review to determine that the e-mails and data were not likely to materially change the Endangerment Finding. EPA’s Response 11-2 of the RTC document addressed this issue:
On December 2, 2009, another commenter (11537) submitted a supplement to an October 5th petition asking EPA to reopen the comment period in light of what they alleged was new information. This commenter claims that the recent disclosure of hacked e-mail messages and documents from the Climate Research Unit (CRU) of East Anglia University in the United Kingdom undermines the Intergovernmental Panel on Climate Change (IPCC) science and assessment process upon which the Technical Support Document (TSD) and the Findings primarily rely.
From our review, it appears that the scientific issues raised in the e-mails were also raised in public comments. In fact, we believe that the public comments submitted on our Proposed Findings are more comprehensive than the discussions of the issues in the e-mails because many commenters expended considerable effort to describe their point of view and concerns to us, and several also provided supporting literature and data. In preparing the Finding, EPA has addressed many of the issues raised in the hacked e-mails, among the hundreds of issues raised by commenters. Our responses are fully documented, transparent and available to the public. The science on which the Administrator has based her determinations regarding the endangerment of both public health and welfare, and the process EPA has undertaken, have been fully open and transparent.
Some groups have also used the hacked e-mails to attack the credibility of the IPCC process and its findings. We received many comments on the process used to develop, review and approve or accept IPCC reports; see our responses on these issues in Volume 1 of this Response to Comments document. The disclosure of the private communications of a few individual scientists, among the hundreds of scientists that have participated in the development of the IPCC reports and the thousands that have developed the literature that was assessed, provides no evidence that contradicts the key conclusions and basic science underlying climate change. As IPCC Chairman Rajendra K. Pachauri recently stated:
IPCC relies entirely on peer reviewed literature in carrying out its assessment and follows a process that renders it unlikely that any peer reviewed piece of literature, however contrary to the views of any individual author, would be left out. The entire report writing process of the IPCC is subjected to extensive and repeated review by experts as well as governments. Consequently, there is at every stage full opportunity for experts in the field to draw attention to any piece of literature and its basic findings that would ensure inclusion of a wide range of views. There is, therefore, no possibility of exclusion of any contrarian views, if they have been published in established journals or other publications which are peer reviewed.
We note that many of the concerns about the emails appear to be based on a misunderstanding of the importance of certain issues and how concerns raised about a specific issue or study relate to the fundamental conclusions reached through the assessment of hundreds of scientific studies; in other words, a misunderstanding that results in an exaggeration of the importance of these issues. Our responses on these specific issues are provided in the relevant volumes of the Response to Comments document. As an example of overstatements regarding the events at CRU, many commenters and others have implied that the CRU dataset on global surface temperatures is ‘unique’ or ‘the most important’ or even ‘the basis for virtually all peer-reviewed literature.’ These statements are incorrect. In fact, as we discuss in detail in Volume 2 of the Response to Comments document, the CRU data set is not unique (in fact, the other two data sets are developed by NASA and NOAA), and there are multiple lines of evidence that support the conclusions reached in the assessment literature upon which we primarily rely.
Additionally, the HARRY_READ_ME file is entirely concerned with the TS2.1 and TS3.0 products, which are not germane to the Endangerment Finding. The in-depth analysis of the e-mails and files provided in the Decision and this RTP document indicates that EPA was correct in its conclusions, reflected in the response above, that there were no issues in the CRU e-mails or the HARRY_READ_ME file that would materially change the Endangerment Finding.
The Pacific Legal Foundation claims that ‘The ‘HARRY READ ME’ file may suggest that the data selected, used, and analyzed by CRU not only had substantial gaps but was also potentially irreparably compromised. See Appendix A, 10-21.’ Peabody Energy and other petitioners also provided a large number of quotes.
Appendix A of Pacific Legal Foundation’s petition contains a large number of excerpts from HARRY_READ_ME. Most of the substantive excerpts are addressed in this section of the RTP. The majority of the comments included in the appendix provided by Pacific Legal Foundation are merely expressions of frustration or notes on the programming process that do not indicate ‘substantial gaps’ or ‘irreparably compromised’ data. In addition, as noted previously, this does not relate to the dataset used for the HadCRUT temperature record.
A number of quotes provided by the petitioners are completely innocuous, such as a complaint about the number of files involved, a quote about naming two files with the same name, and a quote indicating ‘Harry’s’ intention to start work on the final product. A large number of quotes bemoan inadequate documentation for the original code, but this is not an indication of ‘substantial gaps’ or ‘irreparably compromised’ data. One quote addresses a network crash: ‘The biggest immediate problem was the loss of an hour’s edits to the program, when the network died,’ which is clearly not a problem related to the data produced by CRU. In some cases, the petitioners even quote statements that indicate success, such as ‘So let’s say, anomalies are done. Hurrah. Onwards, plenty more to do!’
A large number of quotes address trivial errors that were quickly addressed. For example, one quote is the result of confusion about what number indicates missing data: ‘Harry’ had been assuming that ‘-9999’ indicated missing data, but the code for missing data was actually ‘-1999.’ Running an analysis using the wrong missing data code resulted in ‘Harry’s’ comment ‘Oh Tim what have you done, man’. But this was followed almost immediately by his parenthetical realization about the difference in codes. Another quote, ‘It’s botch after botch after botch,’ refers to a file that had different units than the programmer had been expecting. Once he corrected his code to account for the proper units, this was not a problem.
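The missing-data-code mix-up described above is easy to demonstrate. The following sketch uses invented readings and sentinel values; it simply shows why filtering on the wrong code produces obviously non-physical averages that are caught immediately, as happened here.

```python
# Sketch of the sentinel-value mix-up: if code filters on the wrong
# missing-data code, the sentinel leaks into the average. Values and
# codes here are illustrative, not taken from CRU's actual files.

def mean_valid(values, sentinel):
    """Average the readings, excluding the missing-data sentinel."""
    valid = [v for v in values if v != sentinel]
    return sum(valid) / len(valid)

readings = [12.0, -1999, 14.0]           # -1999 marks missing data
print(mean_valid(readings, sentinel=-1999))  # 13.0 (correct)
print(mean_valid(readings, sentinel=-9999))  # about -657.7: the sentinel
                                             # leaks in, an obvious error
```

An average of roughly -658 degrees is immediately recognizable as wrong, which is why this class of error tends to be caught during quality control rather than propagating silently.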
One quote provided was ‘Oh, GOD, if I could start this project again and actually argue the case for junking the inherited program suite.’ But the very next line of the document (notably not highlighted by the petitioners) is ‘OK.. the .ann file was simply that it refuses to overwrite any existing one. Meh.’ Again, this quote reflects frustration because the programmer was unfamiliar with the code, but this is not a problem with the data. The solution to the problem of the program not overwriting files is merely to allow overwriting of existing files, and HARRY_READ_ME clearly indicates that ‘Harry’ figured out and fixed the problem immediately.
A series of quotes result from the attempt by ‘Harry’ to work on the precipitation database: e.g., ‘If the latest precipitation database file contained a fatal data error ... then surely it has been altered since Tim last used it to produce the precipitation grids?’ In this case, the fatal data error was that one month of one year of data at one station had been corrupted and was providing a non-physical value. When that data point was removed, the program was able to read the data file. However, it appears that TS2.1 had used a different file without this corrupted data bit, and similarly, ‘Harry’ appears to use a different version of the file later. Therefore, these quotes are about an inconsequential file (another example of messy, poorly documented code, but no evidence of flaws in the TS2.1 or TS3.0 output).
A large number of quotes result from ‘Harry’s’ work on merging the Australian Bureau of Meteorology (BOM) data with the CRU data. Because the Australian BOM changed formatting and coding for a number of their stations, this was a challenging process. The file indicates that ‘Harry’ worked with the Australian BOM to resolve the majority of these issues. This is another example showing that the CRU researchers were working on quality control to properly merge a set of challenging datasets and succeeding in resolving a number of tricky issues.
A number of quotes refer to an issue with Russian ‘rain day’ data from one or two stations, where quality control procedures found indications that there are internal inconsistencies between data before a ‘missing block’ in the 1980s and data after that missing block. This particular error in the data for these one or two stations may never have been resolved (it is unclear from the debugging file). For these stations, ‘Harry’ stated, ‘although I can match WMO codes (or should be able to), I must check that the data correlate adequately,’ indicating that he was doing the appropriate quality control for dealing with data provided by outside organizations. A few imperfect stations in a large dataset do not indicate that the data is ‘irreparably compromised.’ Imperfect data is accounted for in the uncertainty analyses applied to the dataset. As stated in New et al. (1999), ‘Isolated errors and subtle inhomogeneities not detected during quality control do not have a significant effect at the regional scale.’
In sum, petitioners have assembled a lengthy collection of quotes that do not support the broad conclusions they draw.
The programmer is complaining because some of the data is stored with longitudes from 0 to 360 rather than the more standard longitudes that run from -180 (W) to +180 (E). In the next line of the file, the programmer discusses his solution to the formatting issue (‘So, I wrote ‘revlons.for’’) which changes the data from one format to the other. Again, this does not indicate any flaws with the data or the products of this work, and the file indicates that the issue ‘Harry’ encountered was solved, as were almost all the other problems identified in the HADRY_READ_ME file. In addition, this does not relate to the HadCRUT temperature data set used for a long-term temperature record.
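The longitude reformatting at issue is a routine one-line conversion. The ‘revlons.for’ routine mentioned in the file is Fortran; the following is an illustrative Python equivalent, not the actual CRU code.

```python
# Sketch of the longitude conversion: map a 0-360 longitude onto the
# conventional -180 (W) to +180 (E) range. Illustrative only.

def to_signed_lon(lon):
    """Map a longitude in [0, 360) to the (-180, 180] convention."""
    return lon - 360.0 if lon > 180.0 else lon

print(to_signed_lon(350.0))  # -10.0 (10 degrees west)
print(to_signed_lon(90.0))   # 90.0 (unchanged)
```

Conversions of this kind change only the labeling of coordinates, not the underlying station values, which is why the quote reflects a formatting nuisance rather than a data problem.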
This quote appears to be referring to an intermediate database (postdating the CRU TS2.1 product, as it includes more recent data than what was included in TS2.1), which had been assembled without checking for duplicate stations, and which apparently didn’t always have the same number of stations recording maximum daily temperatures and minimum daily temperatures. The programmer resolved both these problems by producing TS3.0. Therefore, this problem was not present in TS2.1 and was solved for TS3.0. In any case, these problems, if left uncorrected, would have led to missing or duplicate data, but there is no evidence that these problems would have led to any kind of bias.
Peabody Energy states that it ‘does not have the resources to try to untangle the coder notes and the TS2.1/3.0 data files to determine the magnitude of the errors in those files. Given the state of the record at this point, the TS2.1/3.0 dataset and the HadCRUT3 cannot legitimately be relied on nor can any study or modeling which utilized these data.’
HadCRUT3 and the TS2.1/3.0 datasets are different products. Peabody Energy and other commenters have not demonstrated that there are any problems with these databases that materially affect any conclusions based on the data. Imperfections with individual stations are to be expected when data is being collected from thousands of stations throughout dozens of nations, but the large-scale patterns are still robust (especially after quality control). While the code used to generate TS2.1 could have been better documented and organized, the petitioners have not shown that the output of this code was flawed or that the quality control improvements by the programmer ‘Harry’ were not successful. The scientific conclusions drawn from use of these datasets by the research community are consistent with conclusions drawn from other similar datasets, as well as other sources of data on climatic changes over time. Therefore, the conclusion by Peabody Energy that these datasets cannot be relied on is not supported by our analysis of the underlying files.
The Southeastern Legal Foundation claims that ‘The entire HARRY_READ_ME.txt file is so riddled with such damning admissions that there’s really little point in highlighting them all. The key conclusion is inescapable: no reasonable person can conclude that the models that CRU produced, and the datasets on which they were based, meet basic scientific standards for reliability, and therefore the IPCC analyses and projections cannot be said to be reliable, and therefore EPA’s incorporation of IPCC’s findings and conclusions must be reconsidered. Put differently, there is zero chance that CRU’s work could survive a Daubert challenge15 that was armed with the Climategate documents. Since CRU’s work would be inadmissible in federal court as junk science, it should not be relied upon by the EPA as the foundation of the most far-reaching and consequential action in its history.’
While the petitioner has quoted from HARRY_READ_ME at length, none of the statements in this file are ‘damning.’ This file has demonstrated that the first version of the code was poorly documented, and that the process of integrating data from a large group of datasets provided by a number of different nations and sources is a challenging process. It also has demonstrated that CRU put a high value on tracking down and resolving inconsistencies in the data and being as complete as possible. The petitioners have not shown that the issues raised in the internal quality control process documented in this README file would have resulted in any major change in the TS2.1 product, or that they were not addressed properly in creating the TS3.0 product. Therefore, the characterization of this product as ‘junk science’ has no factual backing.
Moreover, the petitioner is making an unfounded leap to conclude that any problems involved with the TS2.1 and 3.0 products are also reflected in other CRU products. The TS product upgrade was unrelated to the HadCRUT temperature record. While the HadCRUT temperature record and the TS products include the GHCN temperature data set in common, there is no evidence that any of the issues discussed in the ‘Harry’ file were related to the GHCN data set. CRU does not produce any models, counter to the claim of the petitioner. And finally, IPCC analyses and projections are based on a wide range of different sources and datasets that have produced consistent results, not just the CRU output. Therefore, the petitioner is making an unjustified leap to assume that any problems in the TS2.1 and TS3.0 product, or even CRU products as a whole, would undermine the conclusions of the IPCC, CCSP, National Academies, and other organizations that have all found after extensive assessments that there are robust conclusions about a number of aspects of the climate system. These robust conclusions are the ones that have formed the basis of EPA’s Findings, and nothing presented by the petitioner has materially undermined these conclusions.
The reference to Daubert is unavailing. As described above, there is no basis to characterize this as "junk science." In addition, Daubert discusses the Federal Rule of Evidence 702, which establishes a threshold inquiry for introduction of scientific evidence at a trial, a formal adjudicatory proceeding. Even if that were applicable here, the criteria would be readily met:
"If scientific, technical, or other specialized knowledge will assist the trier of fact to understand the evidence or to determine a fact in issue" an expert "may testify thereto." (Emphasis added.) The subject of an expert's testimony must be "scientific . . . knowledge." The adjective "scientific" implies a grounding in the methods and procedures of science. Similarly, the word "knowledge" connotes more than subjective belief or unsupported speculation. The term "applies to any body of known facts or to any body of ideas inferred from such facts or accepted as truths on good grounds." Webster's Third New International Dictionary 1252 (1986). Of course, it would be unreasonable to conclude that the subject of scientific testimony must be "known" to a certainty; arguably, there are no certainties in science. See, e.g., Brief for Nicolaas Bloembergen et al. as Amici Curiae 9 ("Indeed, scientists do not assert that they know what is immutably 'true' -- they are committed to searching for new, temporary, theories to explain, as best they can, phenomena"); Brief for American Association for the Advancement of Science et al. as Amici Curiae 7-8 ("Science is not an encyclopedic body of knowledge about the universe. Instead, it represents a process for proposing and refining theoretical explanations about the world that are subject to further testing and refinement" (emphasis in original)). But, in order to qualify as "scientific knowledge," an inference or assertion must be derived by the scientific method. Proposed testimony must be supported by appropriate validation -- i.e., "good grounds," based on what is known. In short, the requirement that an expert's testimony pertain to "scientific knowledge" establishes a standard of evidentiary reliability. 509 U.S. 579, 589-90.
45 The complete text of this document has been placed in the docket for this action. See Docket ID No. EPA-HQ-OAR-2009-0171 at www.regulations.gov.
Petitioners state that the ‘Moscow-based Institute of Economic Analysis (IEA) claims the UK Met Office’s Hadley Center for Climate Change tampered with Russian meteorological station data that did not support the anthropogenic global warming theory.’ The Southeastern Legal Foundation further states
The Institute for Economic Analysis in Moscow prepared a report showing that selective use of only 25% of the available Russian stations in the HadCRUT dataset imparted a warming bias of 0.64 C greater than the trend calculated using all available data. D’Aleo & Watts (2010) at p. 16. This results from the disproportionate use of more southern and urban stations, and from interpolating or ‘infilling’ data from these warmer stations to colder areas for which actually available data was eschewed. Id. at 16-17. Russia represents 11.5% of the Earth’s land mass, so this is a significant issue.
Petitioners claim that CRU selectively chose Russian data stations to create a biased dataset that would show more warming than the full dataset. Petitioners provide a link to a translation (hosted at a blog) of the original report written in Russian by the IEA in Moscow (Pivovarova, 2009).
EPA has carefully examined this document and finds that it does not support the petitioners’ claims. The Moscow IEA temperature record derived from the full set of Russian stations agrees well with the record derived from the set of HadCRUT stations after 1955, and the difference between the records derived from the two datasets is mainly in the 1850 to 1950 section of the record, with the HadCRUT temperature record showing more warming over the period, but a smaller peak in 1940.47 The validity of the IEA analysis is suspect, however, due to flaws in the methodology it used. IEA averaged together the 90 grid cells containing temperature data from HadCRUT and compared it to an average of the 152 cells containing temperature data from the full Russian set. A proper comparison would involve using the same kind of geographic infilling used by HadCRUT in calculating temperatures across all of Russia on both the 90-cell and 152-cell datasets. Because HadCRUT extrapolates temperatures from cells with data to some cells without data, when computing a regional or global average HadCRUT does not necessarily weight cells evenly; a cell with data that is adjacent to a cell without data will be used to help ‘fill’ that cell in (along with other cells adjacent to the no-data cell), and therefore, this cell will be weighted more heavily than a cell that is surrounded by other cells that have their own data. The geographic weighting algorithm is more important when data is sparse, as it is in the decades before 1955 where the IEA finds that their analysis of the full station set diverges from their analysis using the HadCRUT stations (and where the Brohan et al. 2006 figure in Subsection 1.3.2 shows that there is more uncertainty in the temperature record). Because of the simplistic and flawed methodological approach taken by IEA, its analysis is a faulty ‘apples and oranges’ comparison of the two datasets.
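The effect of infilling on cell weights described above can be illustrated with a toy one-dimensional example. This is a minimal sketch with invented cell values, not the actual HadCRUT weighting algorithm, which operates on a two-dimensional grid with its own published infilling rules.

```python
# Sketch of why infilling changes effective cell weights: cells without
# data are filled from neighbouring data cells before averaging, so a
# data cell bordering an empty cell counts more in the regional mean
# than one surrounded by other data cells. Values are invented.

def mean_with_infill(cells):
    """cells: list of values or None. Fill each None with the mean of
    its nearest data neighbours, then average everything."""
    filled = []
    for i, v in enumerate(cells):
        if v is not None:
            filled.append(v)
        else:
            neighbours = [c for c in (cells[i - 1] if i > 0 else None,
                                      cells[i + 1] if i < len(cells) - 1 else None)
                          if c is not None]
            filled.append(sum(neighbours) / len(neighbours))
    return sum(filled) / len(filled)

cells = [10.0, None, 20.0, 30.0]  # one empty cell in the region
flat = (10.0 + 20.0 + 30.0) / 3   # 20.0: flat average of data cells only
print(mean_with_infill(cells))    # 18.75: neighbours of the gap weigh more
```

Comparing the two averages (20.0 versus 18.75) shows why a flat average of data cells, as in the IEA analysis, is not directly comparable to a regional mean computed with infilling, particularly in decades when data is sparse.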
We also note that there has been no evidence provided that the station selectivity was a result of choices made by CRU. Because CRU depends on a number of other data sources, such as WMO reports, World Weather Records, GHCN, and NMS, it is possible that the HadCRUT dataset did not use all available stations because the data sources it was relying on also did not include these additional Russian stations. CRU also uses several metrics to ensure that data can be used, such as the quantity of data available during the baseline period. The petitioners did not assess whether these additional Russian stations were provided to CRU, or whether these stations met the criteria laid out in published papers. Thus, there is no support for the claim that CRU was biased in its use of the Russian data.
Moreover, in contrast to the petitioners’ assertion that the HadCRUT data show excessive warming over Russia, a discussion of a preliminary analysis reported by the UK Met Office stated: ‘The ECMWF [European Centre for Medium-Range Weather Forecasts] analysis shows that in data-sparse regions such as Russia, Africa and Canada, warming over land is more extreme than in regions sampled by HadCRUT’ (UK Met Office, 2009). If the data-sparse regions actually show a greater warming trend, this means that the approach for data infill taken by CRU in developing the HadCRUT temperature record may understate the recent warming over Russia, contrary to the assumption of petitioners.
In summary, petitioners’ evidence does not support the conclusion that HadCRUT selectively picked stations. Their analysis comparing the HadCRUT data and the full Russian dataset is flawed, and the differences that they show with the flawed analysis mainly appear before 1950. Other analysis by the ECMWF suggests that HadCRUT may actually be underestimating recent warming in the region.
47 Figure 8 and Figure 9 in the original document at http://www.iea.ru/article/kioto_order/15.12.2009.pdf or in the unofficial English translation at http://climateaudit.files.wordpress.com/2009/12/iea1.pdf.
1.3.4 Claims of Flawed Approach to Correct for Urban Heat Island (UHI) Effects
Petitioners take significant issue with UHI corrections in the light of a dispute about the possible relocation of certain Chinese weather stations that were used in a study by Jones et al. (1990). Petitioners argue that this study was challenged as fraudulent by Doug Keenan, an ‘amateur climate analyst’ (as described by the State of Texas). The State of Texas asserts:
‘The history of where the weather stations were sited was central to the 1990 paper because it concluded that the warmer temperatures in China were caused by climate change rather than the heat-island effect of growing cities’ and that ‘[t]he Fourth Assessment relied on the Jones-Wang study to support the conclusion that ‘any urban-related trend’ in global temperatures was ‘an order of magnitude smaller’ than other trends.’
The State of Texas further alleges that ‘Although the university found ‘evidence of the alleged fabrication of results,’ it exonerated Wang.’ The State of Texas finally notes that ‘Ironically, Phil Jones submitted a report to the Journal of Geophysical Research re-examining temperatures in eastern China. His report concluded that not only was the urban heat effect not ‘negligible’ it could account for 40% of the warming shown in the study.’
Petitioners raise the treatment of UHI effects in the HadCRUT dataset as another example of alleged data manipulation. EPA has long been aware of the UHI issue and it is not new. There is substantial research underway to better determine the scope and magnitude of UHI effects and to improve methodologies for minimizing these effects in surface temperature records. We addressed UHI issues in responses 2-28 through 2-30 of the RTC document. Response 2-30, in particular, addresses the Jones et al. (1990) paper to which the petitioners refer, in the context of UHI corrections in the global dataset:
‘Though the assessment literature continues to cite Jones et al. (1990), we dispute that it is solely ‘relied upon.’ Many subsequent studies (Peterson et al., 1999; Peterson, 2003; Parker, 2004; Peterson and Owen, 2005; Parker, 2006) have found supporting results (as discussed in response 2-28). In IPCC (Trenberth et al., 2007), Jones et al. (1990) is referenced as just one of a number of studies which support the Trenberth et al. (2007) conclusion that ‘urban heat island effects are real but local, and have not biased the large-scale trends’ which is summarized in the TSD.’
As noted in Response 2-30, Jones et al. (1990) is referenced, but it is not the only study that the IPCC AR4 referenced with respect to UHI adjustments. We also note that satellite records are not susceptible to UHI and globally show similar trends to the HadCRUT and other records of land-based measurements over their overlapping time period.
There is no merit in the petitioners’ claim that in a later paper published in the Journal of Geophysical Research (Jones et al., 2008), Jones found different results than in his previous study. This paper actually concluded that ‘accounting for site moves has no impact on the results given by Jones et al. (1990),’ which is not supportive of the petitioners’ claim regarding the results of the study. Jones et al. (2008) did find that changes in UHI effects contributed to measured temperature trends in China over the full time period up to 2004, but this was because of data not available for the original study. They also found that UHIs did not contribute to measured temperature trends in other urban areas such as London and Vienna. So the claim that UHI effects bias the temperature record upward in all urban areas is not supported.
We also note that this issue has been reviewed by other organizations. The University of East Anglia has also issued a statement on this controversy (University of East Anglia, 2010b), stating that the accuracy of the data was confirmed in a later paper and that the findings of the 1990 paper with regard to UHI effects were confirmed by other papers used by the IPCC. In addition, contrary to the assertion by the State of Texas that ‘the university found ‘evidence of the alleged fabrication of results,’’ the statement from the University of Albany (as quoted in the news article used by the State of Texas as a reference (Pearce, 2010) and the linked copy of the confidential letter from the university) was ‘it found ‘no evidence of the alleged fabrication of results’ and exonerated him.’ Therefore, the State of Texas is misquoting its source and asserting the exact opposite.
In sum, contrary to the petitioners’ claims, as discussed in the TSD and the RTC for the Finding, UHI effects are not inappropriately treated or masked but instead are appropriately accounted for using widely accepted and transparent methods. The petitioners’ evidence and arguments do not show that the original Jones paper was mistaken or fraudulent, nor have they addressed the many papers on UHI effects in the subsequent two decades. We reviewed and responded to these same issues in Volume 2 of the RTC document, and based on the available literature, we found that the UHI effect was small on a global scale.
1.3.5 Alleged Dependence of IPCC Conclusions on the HadCRUT temperature record
Petitioners ascribe great significance to the alleged issues with the HadCRUT temperature record, claiming that it is the primary or core support for IPCC conclusions on current warming, attribution, and projections of future warming. For example, the Coalition for Responsible Regulation asserts:
the CRU data form the primary basis for EPA’s determination that ‘unprecedented’ warming has occurred in recent decades.
Similarly, Southeastern Legal Foundation states:
The CRU dataset constitutes one of the most important sets of information on which all analyses of anthropogenic global warming (‘AGW’) are based. In addition, the model used by CRU formed the basis for IPCC’s models of future global warming.
EPA finds no merit in the view that the HadCRUT temperature record is the primary or core support on which our understanding of climate change rests. As we have previously discussed in the TSD and RTC document, the IPCC and USGCRP have concluded that warming of the climate system in recent decades is ‘unequivocal.’ This conclusion is not drawn from any one source of data, but is based on a review of multiple sources of data and information, including the HadCRUT temperature record as well as additional temperature records from other sources and numerous other independent indicators of global warming. All of the different elements of this body of scientific evidence have their strengths and weaknesses, and these are discussed and explained in the TSD and the RTC. However, it is this entire body of evidence considered together that is the primary and core support for EPA’s conclusions on warming. The overall consistency of the evidence of warming across multiple types of evidence is central to this conclusion. Petitioners have failed to consider or rebut this body of evidence, and instead have focused on arguments aimed at one source of surface temperature record. As shown above, their arguments and evidence concerning the HadCRUT temperature record are misplaced and do not warrant any change in the weight placed on this one part of a much broader body of evidence.
This consistency across various kinds of evidence of warming also supports the robustness of the evidence at issue here. As documented in Section 4(b) of the TSD, NOAA (from NCDC) and NASA temperature records (from GISS) show nearly identical warming trends to the HadCRUT temperature record, despite different analysis methodologies. As discussed below, satellite data and other methods of determining atmospheric temperature trends are also consistent with the HadCRUT and other surface temperature records, and the sources of atmospheric data are independent of the datasets relied upon for surface temperature records.
Contrary to the Coalition for Responsible Regulation’s assertion, EPA does not refer to the warming observed in recent decades as being ‘unprecedented.’ We do note the following facts, which are consistent across the NOAA, NASA, and HadCRUT temperature records in the TSD’s Box 4.1:
- Eight of the 10 warmest years on record have occurred since 2001.
- The 10 warmest years [on record] have all occurred in the past 12 years.
- The 20 warmest years [on record] have all occurred since 1981.
Significantly, entirely independent records of lower tropospheric temperature measured by both weather balloons (also known as radiosondes) and satellites (from the University of Alabama and Remote Sensing Systems) in recent decades demonstrate strong agreement with the HadCRUT surface temperature record as well as NOAA’s and NASA’s. Additional independently monitored indicators of global warming discussed in the TSD include:
- Increasing global ocean heat content (Section 4(f) of the TSD)
- Rising global sea levels (Section 4(f) of the TSD)
- Shrinking glaciers worldwide (Section 4(i) of the TSD)
- Changes in biological systems, including poleward and elevational range shifts of flora and fauna; the earlier onset of spring events, migration, and lengthening of the growing season; and changes in abundance of certain species (Section 4(i) of the TSD)
Reaffirming this evidence, NOAA’s State of the Climate in 2009 report (Kennedy et al., 2010) states:
The IPCC conclusion (Alley et al. 2007) that ‘warming of the climate system is unequivocal’ does not rest solely upon LSAT [land surface air temperature] records. These constitute only one line of evidence among many, for example: uptake of heat by the oceans, melting of land ice such as glaciers, the associated rise in sea level, and increased atmospheric surface humidity (Figure 2.5) [as shown below]. If the land surface records were systematically flawed and the globe had not really warmed, then it would be almost impossible to explain the concurrent changes in this wide range of indicators produced by many independent groups. The observed changes in a broad range of indicators provide a self-consistent story of a warming world.
The figure below (Figure 2.5 from Kennedy et al., 2010) illustrates a range of indicators that would be expected to correlate strongly with the surface temperature record. Note that stratospheric cooling is an expected consequence of GHG increases.
In this figure, the indicators that we would expect to trend upward with surface temperatures (e.g., sea surface temperature, specific humidity) do so, and those that one would expect to trend downward (e.g., stratospheric temperature, sea-ice extent) also behave as the science predicts. It is this large body of evidence that supports the conclusion that there is an unambiguous warming trend over the last 100 years, with an increase in the rate of warming over the past 30 years.
The investigation into the CRU e-mails and the validity of the HadCRUT surface temperature record conducted by the British Parliament’s House of Commons Science and Technology Committee reached a conclusion that is consistent with our own assessment of these issues (UK Parliament, 2009):
We have established to the extent that a limited inquiry of this nature can, that the NCDC/NOAA and GISS/NASA datasets measuring temperature changes on land and at sea have arrived at similar conclusions using similar data to that used by CRU, but using independently devised methodologies. We have further identified that there are two other datasets (University of Alabama and Remote Sensing Systems), using satellite observations that use entirely different data than that used by CRU. These also confirm the findings of the CRU work. We therefore conclude that there is independent verification, through the use of other methodologies and other sources of data, of the results and conclusions of the Climate Research Unit at the University of East Anglia. [emphasis in the original]
Thus, EPA finds that the Coalition for Responsible Regulation’s claim that the HadCRUT temperature record serves as the basis for all anthropogenic warming studies is unsubstantiated and unsupportable. Numerous studies cited in the assessment literature have been published that rely on data and methods independent of the HadCRUT temperature record, many pertaining to the indicators above. Though it is true that many studies do, in fact, cite CRU data, there is no reason to question the legitimacy of those studies. Petitioners have not provided a basis for changing the weight placed on the HadCRUT temperature record, as described above. In addition, the CRU results are consistent with other surface temperature records and atmospheric temperature records, as well as with all of the other physical evidence of warming. Even if there were a basis to place less weight on this one surface temperature record, it would be grossly overbroad to conclude that all studies that even reference this dataset would be suspect.
The claim by Southeastern Legal Foundation and others that the HadCRUT temperature record is the foundation for computer modeling of the climate and that all climate models are thus suspect is also without merit. The models used to generate projections of future warming described in the IPCC AR4 do not use the CRU or other surface temperature data as an input. These projections of future temperatures arise from the use of Atmosphere-Ocean General Circulation Models (AOGCMs). These models are driven by physical equations describing the radiative properties and dynamics of the atmosphere and oceans and parameterizations of small-scale processes. They do not use observed CRU or other sources of temperature data as inputs.
The projections of future warming come from the largest coordinated global, coupled climate model experiment ever attempted, providing the most comprehensive multi-model perspective of any IPCC climate assessment. A set of coordinated, standard experiments was performed by 14 AOGCM modeling groups from 10 countries using 23 models. The HadCRUT and other temperature records may be compared with AOGCM output to assess how well the models replicate the observed climate, but temperature data are not input to these models. There is no ‘model’ used by CRU that was used as a basis for the AOGCM models, and the AOGCM models do not rely on or use CRU analysis of historical temperature data.
The observed warming of the climate is an important factor considered in the Endangerment Finding. The Finding notes that this warming is observed in surface and ocean temperatures, atmospheric temperatures, melting of snow and ice, rising sea levels, and other physical indicators of warming. The Finding states, ‘The global surface temperature record relies on three major global temperature datasets, developed by NOAA, NASA, and the United Kingdom’s Hadley Center. All three show an unambiguous warming trend over the last 100 years, with the greatest warming occurring over the past 30 years.’ The evidence and arguments relied on by petitioners do not show otherwise and do not support their broad assertions and conclusions.
Petitioners have not presented any additional or different analysis of the underlying surface temperature data. They have pointed to computer work that involves a non-public document with no relationship to the Endangerment Finding, or CRU datasets other than the HadCRUT temperature record. The CRU datasets at issue had, at best, a tangential relationship to the TSD supporting the Endangerment Finding. The computer quality control on this unrelated dataset appears to be just that—comprehensive and successful quality control—and does not provide support for allegations that the changes made were intended to or did bias the resulting dataset. Petitioners’ objections to the analysis of temperature data from Russian or Chinese stations appear unfounded and speculative, and are not supported by the body of evidence.
Certain reviews of CRU and its scientific work have been completed, and there continues to be additional independent investigation. EPA supports and appreciates that these reviews and investigations are being conducted and takes the results into consideration as they are released. The reviews that have been completed to date are consistent with EPA’s conclusions described above.
EPA and the assessment reports have considered the degree of certainty in the various lines of evidence on the existence of warming, including the HadCRUT (Brohan et al., 2006); NOAA (2009); and NASA (Hansen et al., 2010, submitted) temperature records; the satellite temperature records; and the observed changes in the global oceans, glaciers, and sea levels. Petitioners’ evidence fails to show that these sources of information are not credible and reliable. EPA continues to believe that this body of evidence indicates an unambiguous warming trend, when viewed as a whole, and that petitioners have not presented evidence that would indicate otherwise. EPA remains confident in the appropriateness of its overall scientific conclusion of a warming trend over the last 100 years, with the greatest warming occurring over the past 30 years. This is based on the multiple sources of consistent information indicating such warming, including the three separate surface temperature datasets, as well as a wide variety of other data and information independent of the three surface temperature datasets. This broad body of evidence is stronger than any one single source of data. Neither the evidence presented by petitioners and reviewed by EPA, nor the existence of an ongoing review of the CRU and its work, warrants a change in the scientific conclusion drawn by EPA concerning the existence of global warming.
A number of petitioners (e.g., Competitive Enterprise Institute, the Southeastern Legal Foundation, Peabody Energy, the State of Texas, Pacific Legal Foundation, Commonwealth of Virginia, Coalition for Responsible Regulation) question the validity of NOAA and NASA surface temperature records. As discussed in detail below, they raise a number of claims, including station ‘drop-out,’ flawed or manipulative adjustments to data, improper use of smoothing in data presentations, and a lack of independence between the three major surface temperature records. To support their arguments, the petitioners reference a number of studies—two peer-reviewed and several non-peer-reviewed—and in one case provide a quote from the CRU e-mails.
Many of the issues raised by the petitioners were also raised in comments and responded to in the Endangerment Finding. Several of the petitioners’ arguments have already been convincingly rebutted in the peer-reviewed literature or by the organizations in question. In these cases, petitioners have failed to acknowledge these rebuttals and have not provided any additional explanation for why their concerns remain valid. In addition, many of the sources on which petitioners rely are not peer-reviewed and use highly flawed methodologies. In some cases, petitioners have provided additional arguments or new information, which we have reviewed and evaluated. Upon review, petitioners’ evidence and arguments do not support the conclusions they draw, and do not materially change the credibility or reliability of the NOAA and NASA global temperature records or the conclusions drawn from them.
The remainder of this section includes background on the scientific issues being discussed, a description of the petitioners’ arguments, and EPA’s responses.
There are two steps to creating a global surface temperature record. The first is to collect the raw weather data, and the second is to process the data into a usable form. As described in Subsection 1.3.2 of this document, the CRU team, over several decades, acquired and merged data from a number of different sources and then applied corrections and geographical smoothing algorithms as appropriate to generate the HadCRUT temperature record. Petitioners’ claims regarding the HadCRUT temperature record are responded to in Section 1.3 of this document. Here, we discuss petitioners’ claims related to the NASA and NOAA surface temperature records, including claims that alleged problems with the HadCRUT temperature record also undermine the reliability and credibility of the work by NASA and NOAA. A list of temperature records and underlying datasets is also provided in Subsection 1.3.2 of this document.
One of the sources of data for the HadCRUT temperature record is the GHCN, which was developed and is maintained by NOAA’s National Climatic Data Center (NCDC). The GHCN dataset is also used by both NOAA and NASA in their surface temperature records. NOAA, NASA, and CRU each have their own algorithms to calculate global surface temperature trends from a combination of GHCN data and other data sources, with each group applying its own set of adjustments and corrections. Subsection 1.3.2 of this volume describes the process used to develop the HadCRUT temperature record and algorithms; this section describes the GHCN and the approaches used by NOAA and NASA to develop their temperature records.
The original version of GHCN is described in Vose et al. (1992). Version 2 is described in Peterson and Vose (1997). In this paper, Peterson and Vose describe how they collected data from 31 sources by ‘1) contacting data centers, 2) exploiting personal contacts, 3) tapping related projects, 4) conducting literature searches, and 5) distributing miscellaneous requests’ because of the lack of a central repository of temperature and climate data. The raw GHCN dataset underwent quality control, which included merging data that was provided by different sources for the same weather station. In many cases, however, multiple duplicates of the same station are included in the record where the original data was not identical, so as to let users determine the best approach to merging these sources. The next quality control step involved rejecting datasets that did not include original observations, were derived from unreliable sources, or showed significant processing errors. Another step corrected mis-located stations, data that was repeated month to month, or unexplained discontinuities. A final stage examined outlying data—in cases where the data failed additional tests, they were kept in a separate file in case later researchers wanted to reincorporate them. Only three of the 31 sources used provide regular updates of additional monthly data.
NOAA (NCDC) (NOAA, 2009) and NASA (Goddard Institute for Space Studies or GISS, or sometimes GISTEMP) (Hansen et al., 2010, submitted) both use GHCN and other data (including sea surface temperature data) to calculate surface temperature trends around the globe. Each group performs different adjustments to the data in building their temperature records, and in some cases uses different subsets of the GHCN data or includes other outside datasets. As stated by Hansen et al. (2010, submitted):
These analyses are not independent, as they must use much of the same input observations. However, the multiple analyses provide useful checks because they employ different ways of handling data problems such as incomplete spatial and temporal coverage and non-climatic influences on measurement station environment.
For example, the NASA algorithms include an adjustment to urban temperature stations based on satellite maps of nighttime lights and use the fact that temperature anomalies measured at a specific spot are correlated with anomalies up to 1200 kilometers away to extrapolate measurements over large areas where the observations are sparse (giving greater coverage in certain areas such as the Arctic). The adjustments that NOAA and NASA make to the unadjusted GHCN dataset are almost all automated, in contrast to some of the adjustment procedures used in the HadCRUT temperature record. In the case of NASA, there were two stations (St. Helena Island and Lihue, Hawaii) where manual adjustments were made because of detected errors and the lack of neighboring stations that could be used by the automated algorithms. In both cases, the raw data shows more warming than the adjusted data.
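The distance-correlation extrapolation described above can be sketched in a few lines of illustrative Python. This is a simplified sketch, not NASA’s actual GISTEMP code; the linear taper of the weight to zero at 1200 kilometers, the function names, and the sample station values are all assumptions made for the illustration.

```python
import math

RADIUS_KM = 6371.0   # mean Earth radius
LIMIT_KM = 1200.0    # assumed correlation length for this illustration

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * RADIUS_KM * math.asin(math.sqrt(a))

def estimate_anomaly(lat, lon, stations):
    """Distance-weighted anomaly estimate at (lat, lon).

    `stations` is a list of (lat, lon, anomaly) tuples.  Weights taper
    linearly from 1 at zero distance to 0 at LIMIT_KM, so only stations
    within the correlation length contribute.  Returns None where no
    station is close enough.
    """
    num = den = 0.0
    for slat, slon, anom in stations:
        d = distance_km(lat, lon, slat, slon)
        if d < LIMIT_KM:
            w = 1.0 - d / LIMIT_KM
            num += w * anom
            den += w
    return num / den if den > 0 else None

# A grid point with two stations inside the correlation length and one far away:
stations = [(60.0, 10.0, 0.8), (62.0, 15.0, 1.0), (10.0, 100.0, -0.5)]
print(estimate_anomaly(61.0, 12.0, stations))  # roughly 0.9: the distant station is excluded
```

Note that what is interpolated is the anomaly, not the absolute temperature, which is why a sparse network can still constrain regional trends.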
The U.S. Historical Climatology Network (USHCN) temperature record developed by NOAA and used by both NOAA and NASA uses a slightly different methodology than the adjustments used by NOAA for the global GHCN dataset. The most important difference is that NOAA corrects for several systematic changes in the U.S. monitoring system that could lead to artificial biases, the most important of which are shifts from afternoon to morning temperature data collection at many stations, and shifts from ‘liquid in glass’ (LiG) measurement instruments to maximum/minimum temperature systems (MMTS) (described in Menne et al. 2009). USHCN2 uses a sophisticated automated methodology to detect and correct for discontinuities in the temperature data at specific stations such as these shifts in measurement time or instrumentation, and this methodology also corrects for station moves (whether documented or not).
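The kind of neighbor-based discontinuity detection described above can be illustrated with a toy sketch. This is illustrative Python only, not the actual USHCN version 2 pairwise homogenization algorithm; all names and values are invented for the example. A target station that tracks its neighbors until an undocumented instrument change shows up as a step in the difference series:

```python
# Toy discontinuity detection: compare a target station's series to the
# mean of its neighbours and look for a step change in the difference.
def difference_series(target, neighbours):
    n = len(target)
    return [target[i] - sum(s[i] for s in neighbours) / len(neighbours)
            for i in range(n)]

def best_breakpoint(diff):
    """Return (index, step) of the split that maximises the mean shift."""
    best = (None, 0.0)
    for k in range(1, len(diff)):
        left = sum(diff[:k]) / k
        right = sum(diff[k:]) / (len(diff) - k)
        if abs(right - left) > abs(best[1]):
            best = (k, right - left)
    return best

# Three neighbours warming smoothly; the target matches them until an
# undocumented instrument change at index 10 introduces a -0.5 degree step.
neighbours = [[0.02 * i for i in range(20)] for _ in range(3)]
target = [0.02 * i + (-0.5 if i >= 10 else 0.0) for i in range(20)]

k, step = best_breakpoint(difference_series(target, neighbours))
print(k, round(step, 2))  # breakpoint at index 10, step of about -0.5
```

Once a breakpoint and step size are identified, the pre-break segment can be adjusted so the series is homogeneous; the operational algorithm does this pairwise across many neighbors with statistical significance testing.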
GHCN data are available online, both in the unadjusted (raw) format and in an adjusted format that includes the automated corrections by NOAA (NOAA, 2010c). Similarly, the NASA-adjusted data is all available online along with the GHCN data used to derive the adjusted dataset (NASA, 2010a).
1.4.3 The Petitioners' Arguments and EPA Responses
1.4.3.1 Assessment of Issues Related to Alleged Station Dropout and Inappropriate Extrapolation
Petitioners raise a number of issues regarding the alleged ‘drop-out’ of stations after 1990 from surface temperature records, and the extrapolation of data from ‘warmer’ areas to ‘colder’ areas due to this drop-out or for other reasons, which the petitioners claim leads to bias in the global surface temperature record. The Competitive Enterprise Institute cites a non-peer-reviewed report by D’Aleo and Watts (2010), which states:
‘Around 1990, NOAA began weeding out more than three-quarters of the climate measuring stations around the world. They may have been working under the auspices of the World Meteorological Organization (WMO). It can be shown that they systematically and purposefully, country by country, removed higher-latitude, higher-altitude and rural locations, all of which had a tendency to be cooler.’
Petitioners argue that station dropout (from 6,000 in the 1970s to about 1,500 currently) ‘likely contributed to false warming trends over the entire globe – in part because the dropped stations ‘are generally in colder climates’ or because remaining stations were biased towards lower latitudes, lower elevations, and urban locations.’
Petitioners also cite the specific example of Bolivia, where they claim that GHCN has no data collection but shows interpolated warming there that ‘is purely an artifact of interpolation from distant warmer and lower altitude stations.’ The petitioners claim that station dropout is also a problem in the United States, stating, ‘The USHCN has dropped 90% of its climate stations. Most of the remaining stations are at airports, and in the west most of the higher elevation stations are gone. In California, the only remaining stations are in San Francisco, Santa Maria, Los Angeles and San Diego.’ Finally, the Southeastern Legal Foundation states, ‘Infilling from warmer temperature stations to colder grid cells for which no actual data is collected also imparts a spurious warming signal.’
Many of the petitioners’ arguments rest on a non-peer-reviewed document by D’Aleo and Watts (2010). D’Aleo and Watts’ (2010) study contains a number of inaccurate statements, and relies on a scientifically flawed analysis.
Peterson and Vose (1997) describe the procedures for updating the GHCN database, and explain—in a clear and transparent manner—the reasons why there are fewer measuring stations covering the post-1992 period than there were for the 1980s. Namely, Peterson and Vose explain that only three out of 31 sources submit regular monthly updates, and that the remainder of the data would only be updated on ‘a highly irregular basis’:
Thirty-one different sources contributed temperature data to GHCN. Many of these were acquired through second-hand contacts and some were digitized by special projects that have now ended. Therefore, not all GHCN stations will be able to be updated on a regular basis. Of the 31 sources, we are able to perform regular monthly updates with only three of them (Fig. 5). These are 1) the U.S. HCN, 1221 high quality, long-term, mostly rural stations in the United States; 2) a 371-station subset of the U.S. First Order station network (mostly airport stations in the United States and U.S. territories such as the Marshall and Caroline Islands in the western Pacific); and 3) 1502 Monthly Climatic Data for the World stations (subset of those stations around the world that report CLIMAT monthly code over the Global Telecommunications System and/or mail reports to NCDC). Other stations will be updated or added to GHCN when additional data become available, but this will be on a highly irregular basis.
Therefore, the reduction in stations with data after the 1990s is due to the fact that NOAA has not completed a major data collection effort since Peterson and Vose (1997). We note, however, that such an effort is indeed currently ongoing and will result in a new dataset—GHCN Version 3—likely in late 2010. In contrast to this public explanation of the GHCN process, D’Aleo and Watts provide no evidence that there was a systematic and purposeful ‘weeding out’ process, and do not address the reasoning and process described in the public, peer-reviewed documentation of the GHCN dataset.
The petitioners, by relying on the flawed methodology used in the D’Aleo and Watts study, have assumed that dropping stations that are at higher latitudes and in colder climates would result in a biased, warmer temperature trend. This is an unfounded assumption that is based on a misunderstanding of the basic methodology used in analyzing surface temperature data. The surface temperature datasets evaluate the change in temperature over time at the various stations (the temperature ‘anomaly’), and do not base their evaluation on the absolute temperature level, as discussed in Subsection 1.3.2. This use of temperature changes or anomalies minimizes localized differences in absolute temperature and makes the resulting temperature record relatively insensitive to addition or removal of stations. This is because of the correlation of temperature changes over distances.
In fact, theory predicts that high latitudes and colder climates will warm more rapidly than low latitudes and warmer climates. Thus, the change in temperature at higher latitude or colder climates should tend to indicate greater warming than elsewhere, and dropping those stations would thus tend to lead to underestimating the resulting trend, not overestimating it (although as long as there are at least a couple of stations in any region, the correlation of temperature measurements over hundreds of kilometers should also reduce this potential bias). We previously responded to comments on this issue in Volume 2 of the RTC document (2-36). The petitioners simply assume, without explanation, that dropping these stations would bias the trend in the change in temperature toward greater warmth.
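The point can be illustrated with a small synthetic example (illustrative Python only; the station values and trend are invented). Two stations share the same warming trend but sit at very different absolute temperatures; when the cold station stops reporting, averaging absolute temperatures produces a spurious jump, while averaging anomalies does not:

```python
# Synthetic demo: two stations share the same warming trend (+0.02 deg/yr)
# but have very different absolute temperatures.  The cold station stops
# reporting halfway through the record.
years = list(range(1970, 2010))
cold = {y: -5.0 + 0.02 * (y - 1970) for y in years if y < 1990}  # drops out in 1990
warm = {y: 25.0 + 0.02 * (y - 1970) for y in years}

baseline = range(1970, 1990)  # common baseline period for anomalies

def anomalies(series):
    """Convert absolute temperatures to departures from the baseline mean."""
    base = sum(series[y] for y in baseline) / len(baseline)
    return {y: t - base for y, t in series.items()}

def yearly_mean(*series):
    """Average whatever stations report in each year."""
    out = {}
    for y in years:
        vals = [s[y] for s in series if y in s]
        out[y] = sum(vals) / len(vals)
    return out

abs_mean = yearly_mean(cold, warm)
anom_mean = yearly_mean(anomalies(cold), anomalies(warm))

# Averaging absolute temperatures: a spurious +15 degree jump at the dropout.
print(abs_mean[1989], abs_mean[1990])    # ~10.4 then ~25.4
# Averaging anomalies: the record continues smoothly along the true trend.
print(anom_mean[1989], anom_mean[1990])  # ~0.19 then ~0.21
```

The anomaly-based record is insensitive to the dropout precisely because both stations carry the same trend, which is the property the petitioners’ analysis overlooks.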
An analysis (Clear Climate Code, 2010) used an emulation of the NASA GISTEMP code to only output the data from those stations that kept reporting data after a given year, or only output data from stations that stopped reporting data after that year. A graph from this analysis is shown here, where the blue line is a graph of global land temperatures (no ocean data) from stations that ‘dropped out’ before 1992, and the red line is the same analysis but only from stations that were still reporting past that date. As can be seen in the graph, the difference between the two groups of stations is very small in modern times. Differences are larger pre-1945 because there are fewer stations, especially in the Southern Hemisphere, so taking a subset of a small number of stations increases the potential for errors due to insufficient coverage of the land surface (the increased uncertainty in early 20th century temperatures can be seen in the Brohan et al. (2006) figure reproduced in Subsection 1.3.2 of this document).
This graph shows that the station drop-out issue which many petitioners raise has not imparted any kind of ‘spurious warming signal.’ This analysis and graph were produced as part of the Clear Climate Code project, an independent project which involved replicating the NASA GISTEMP code in Python. We have independently verified this analysis. Much like the Independent Climate Change Email Review’s creation of a surface temperature record from public data in two days, the ability of the group working on this project to exactly reproduce the NASA temperature record and then apply the same code to new analyses demonstrates the overall availability, transparency, and utility of public code and data, which enable any scientist to examine the issues involved in developing temperature records.
It also appears that D’Aleo and Watts failed to account for the methodologies used by NOAA, NASA, and HadCRUT to create a record of the trend in changes in global temperature, called anomalies. These methods determine the change in temperature at various stations and then average the temperature changes; they do not average the absolute temperatures of the stations. Thus, dropping colder stations does not make the resulting record of the change in temperature warmer unless the change in temperature at colder stations is lower than the change in temperature at warmer stations. As discussed previously, this does not appear to be the case, and scientific literature in general indicates that the opposite is true (the change in temperature in colder, higher latitude regions is generally larger than at warmer, lower latitude regions).
In contrast, the D’Aleo and Watts analyses use a scientifically improper methodology to evaluate the change in temperature over time. They determine simple averages of the absolute temperatures at the stations—without, apparently, taking into account their geographic distribution, much less calculating the change in temperature at the stations (the anomalies). As discussed in the science background in Subsection 1.3.2 on HadCRUT, two nearby stations can have very different absolute temperatures but very similar anomalies or changes in temperature, which is one reason why using changes in temperature is a more correct method than using absolute temperatures. In addition, not taking into account the geographic distribution of stations leads to overweighting those regions with many temperature records, such as the United States, rather than weighting each area of the globe equally in order to determine the global average temperature. These are both significant errors that undermine the petitioners’ critique of the temperature records.
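The difference that geographic weighting makes can be sketched as follows (illustrative Python; the 5-degree grid, cosine-latitude weights, and sample values are assumptions for the example, not the algorithm of any particular temperature group):

```python
import math
from collections import defaultdict

def gridded_mean(stations, cell_deg=5.0):
    """Area-weighted global mean of station anomalies.

    `stations` is a list of (lat, lon, anomaly).  Stations are first
    averaged within latitude/longitude grid cells, then cells are
    combined with cos(latitude) weights so that a region with many
    stations counts no more than its area.
    """
    cells = defaultdict(list)
    for lat, lon, anom in stations:
        key = (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        cells[key].append(anom)
    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        cell_lat = (ilat + 0.5) * cell_deg        # cell-centre latitude
        w = math.cos(math.radians(cell_lat))      # cell area shrinks toward poles
        num += w * (sum(anoms) / len(anoms))
        den += w
    return num / den

# Ten co-located stations reporting +1.0 and one lone station at -1.0 in a
# different cell: a naive station mean is about +0.82, but the gridded mean
# gives each cell one vote and returns 0.0.
dense = [(40.0 + 0.1 * i, -100.0, 1.0) for i in range(10)]
lone = [(40.0, 100.0, -1.0)]
print(gridded_mean(dense + lone))
```

The contrast between the naive station average and the gridded average is exactly the overweighting error described in the preceding paragraph.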
With respect to Bolivia, the NASA temperature record does not appear to have any data for Bolivia after 1990. The reason is likely because stations in Bolivia have not been reporting through the CLIMAT network. However, there is no merit to the argument that extrapolating temperature changes or anomalies from bordering countries will create a warming bias in Bolivia because those countries are warmer than Bolivia. As already explained, the issue is not the absolute temperature, but rather temperature trends, which have been shown to be correlated over large distances. The petitioners provide no data to support the claim that neighboring countries are warming faster than Bolivia, rather than merely being warmer in an absolute sense.
There is no factual basis for the petitioners’ claim that the USHCN has dropped 90% of its stations. As demonstrated in the following Figure (Number of U.S. HCN stations with temperature records) from Menne et al. (2009), the station coverage in the past few years is only slightly lower than the peak coverage:
Similarly, the claim that only four stations are reporting from California conflicts with the NOAA temperature record for California (NOAA, 2010), which shows more than 40 stations in California with data in 2009.
Finally, we also note that the station dropout comes during the period when satellite data are available. As noted in the TSD, ‘The satellite tropospheric temperature record is broadly consistent with surface temperature trends,’ confirming that the reduction in the raw data reported from weather stations has not created a warming bias in the surface temperature dataset.
‘Station drop-out’ refers here to the precipitous decline in the number of temperature records included in the GHCN dataset. In the 1970s, more than 6,000 stations were active; today the figure is 1,500 or less. D’Aleo & Watts, p. 10, n. 94. The following graph, prepared by Ross McKitrick, shows the relationship between station drop-out and average temperature, where ‘Average T’ is a mean of raw, unprocessed temperature data, plotted against ‘No. of Stations.’
D’Aleo & Watts, p. 11. See http://www.uoguelph.ca/~rmckitri/research/nvst.html (last visited Feb. 10, 2010) for a full explanation of this chart and access to the data it represents. This chart shows that the station drop-out coincides with a sharp and significant increase in average raw temperature, and thus suggests that the change in temperature is the result of sampling bias and not climate change.
The stations that were dropped were disproportionately rural. D’Aleo & Watts, pp. 11-12. Further, the remaining stations were biased towards lower latitudes, lower elevations, and urban locations. Id., citing E.M. Smith, http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/ (last visited Feb. 10, 2010). All of these tendencies away from random sampling impart a warm bias to the record. D’Aleo & Watts show that station drop-out has occurred all over the world, but the greatest station drop-out has occurred in Siberia and Canada, where these global temperature datasets purport to show the greatest warming has occurred.
In the previous response, EPA addressed petitioners’ allegations that weather station dropouts have led to a warming bias in the temperature record. As described above, 1) petitioners rely on a non-peer-reviewed source that contains a number of inaccurate statements and relies on a scientifically flawed analysis; 2) petitioners demonstrate a fundamental misunderstanding of how to determine a warming or cooling trend from a temperature record and what issues actually would lead to either a warming or cooling bias in that record; and 3) petitioners fail to acknowledge that climatic records other than land surface temperature records also show clear warming trends consistent with the trend shown by the surface temperature data.
The graph submitted by the petitioner shows the absolute temperature averaged across all the stations existing at any given point in time, which has no real physical meaning—proper analysis averages the changes in station temperatures (the anomalies) across a set of stations. Additionally, the analysis should take station location into account when computing a global average, rather than weighting every station equally. Equal weighting tends to introduce a bias by overweighting regions with many stations.
The Competitive Enterprise Institute states that D’Aleo and Watts found that ‘NASA modified 20% of the historical record 16 times in two and one-half years ending in 2007’ and that every modification ‘resulted in temperature trends that appeared to increase faster than they did in reality.’
EPA’s review of the information provided indicates that the Competitive Enterprise Institute statement that ‘every instance of manipulation resulted in temperature trends that appeared to increase faster than they did in reality’ is not supported by D’Aleo and Watts (2010) or by Goetz (2010), the source on which D’Aleo and Watts rely. On the contrary, D’Aleo and Watts (2010) state: ‘John Goetz showed that 20% of the historical record was modified 16 times in the 2 years ending in 2007. 1998 and 1934 ping pong regularly between first and second warmest year as the fiddling with old data continues.’ The Web page with the Goetz analysis states, ‘I will note that the overall trend in changes between now and Sep. 24, 2005 is very close to zero’ (Goetz, 2010). This contrasts with the assertion by the Competitive Enterprise Institute that these changes to the temperature record always resulted in a warmer trend. The petitioners’ evidence does not support their claim of manipulation with the intent to bias the temperature record inappropriately.
In late January 2010, the findings of D’Aleo, Watts, and Smith were confirmed by investigative journalists. They found, for example, that in the 1980s 600 Canadian monitoring stations were used in the NOAA dataset. Now only 35 are used, with only one above the Arctic Circle. Yet, Environment Canada reports that the government maintains 1400 stations with over 100 above the Arctic Circle.
According to Environment Canada, there are 1480 stations with climate normals in Canada. Of these stations, only a subset (about 140) are marked as reporting data through the CLIMAT system (WMO, 2010). As noted in Section 1.4.2 of this document, Peterson and Vose (1997) finished their data collection effort in the mid-1990s. As part of this collection effort, they integrated historical data from hundreds of Canadian stations. The main source of updated data since this collection effort has been the automated CLIMAT reports. Therefore, the weather stations that do not produce CLIMAT reports would not be expected to be included in GHCN records for the past decade—this includes the majority of the Canadian stations. The NOAA GHCN dataset does contain data from 49 stations in Canada. Because many of the Canadian stations that report CLIMAT data only started doing so recently, they do not have the minimum quantity of data during the baseline period (1961 to 1990) to be included in the GHCN dataset. Given that Jones (1994) found that only 100 well-placed stations around the world would be sufficient to determine hemispheric average trends, 49 stations within Canada should be more than sufficient for determining large-scale trends.
Petitioners assert that the UHI adjustments performed by NASA are insufficient or improperly applied, both globally and in the USHCN. The State of Texas claims, ‘But according to the TSD, the CRU’s temperature data ‘applies an urbanization adjustment,’ and therefore ‘the CRU temperatures cited by both the IPCC and EPA do not reflect temperatures that were actually captured by weather station thermometers, but rather temperatures that have been recalculated by the CRU for one purported scientific reason or another.’ The Southeastern Legal Foundation objects that Version 2 of the USHCN record does not correct for UHI, that ‘The notion that there is zero urban heat island effect in Manhattan is not valid,’ and that NASA UHI adjustments have the wrong sign based on an analysis by D’Aleo and Watts (2010) of the Puerto Maldon station in Peru. The Southeastern Legal Foundation points to an analysis of UHI adjustments by the Science & Public Policy Institute (Long, 2010). The petitioner quotes from the study, ‘Thus, the adjustments to the data have increased the rural rate of increase by a factor of 5 and slightly decreased the urban rate, from that of the raw data,’ and states that therefore, ‘The consequence, intended or not, is to report a false rate of temperature increase for the Contiguous U.S.’ Additionally, the Southeastern Legal Foundation claims that a project—surfacestations.org—has shown that 90% of stations in the United States ‘were sited in ways that result in errors exceeding 1°C.’
The Long (2010) study cited by petitioners averaged together 48 rural stations (one per state) in the continental United States and compared them with 48 urban stations (one per state). Based on this method, Long found that the net effect of NOAA adjustments to the raw data had little effect on temperature trends for urban stations but led to more warming in rural stations. However, neither the petitioners nor Long analyze the reasons for the adjustments to rural stations or present any reason or argument indicating that the adjustments were inappropriate. Their sole argument is that the result of the adjustment must mean the adjustment was wrong. However, it is the reason behind the adjustment that determines its validity, not the result of the adjustment. In the United States, the time of day at which temperature observations were made was changed at some stations. This change of observation time is known to introduce a nonclimatic trend into the raw data that would create apparent cooling if uncorrected (Vose et al. 2003). These time of observation changes were mostly limited to rural stations in the United States. Therefore, the warming adjustments found in rural stations by Long are likely the result of the correction of this ‘time of observation bias.’
With regard to the objection that some temperature records do not correct for the UHI effect, we responded to a very similar comment when finalizing the Endangerment Findings. Response 2-28, notes, ‘The different surface temperature datasets shown or cited in the TSD all account for urbanization, either directly and/or indirectly.’ The claim that NASA makes corrections of the wrong sign, using Puerto Maldon as an example, does not take into account the homogenization routine that NASA employs that uses neighboring stations to correct station data. Under that routine, stations identified as urban are automatically shifted to come into agreement with the neighboring stations. These automated homogenization routines use the fact that temperature changes over time (i.e., anomalies) are correlated over distances to correct for things like weather station moves and other changes of the immediate environment of a weather station. These changes, as explained in NASA’s documentation, can be either positive or negative.
The argument by the State of Texas that these temperature records ‘do not reflect temperatures that were actually captured by weather station thermometers’ is without scientific merit as the adjustments are appropriate and necessary either because of station moves, changes in time of observation, urbanization, or other reasons. If changes such as switching from afternoon measurements to morning measurements were not corrected, then the temperature record would not accurately reflect the surface temperature reality.
We also note that overall, these adjustments have had little effect on the global temperature trends. This can be seen in the two GHCN figures included below, one based on adjusted data and one based on the unadjusted, raw data. Adjustments are more important on a local basis. Within the United States in particular, adjustments matter more because of the need to correct for systematic changes in the time of observation and in measurement instruments, changes that were more widespread in the United States than elsewhere.
With regard to the surfacestations.org analysis, this was addressed in depth in response 2-27 of the RTC document for the Endangerment Findings. EPA found that ‘NOAA has provided extensive information to the public in response to the concerns raised by the commenters, available at http://www.ncdc.noaa.gov/oa/about/response-v2.pdf (NOAA Climate Services, 2009).’ NOAA stated in this document, ‘A peer-reviewed study specifically quantified the potential bias in trends caused by poor station exposure (Peterson, 2006). The analysis examined only a small subset of stations — all that had their exposure checked at that time — and found no bias in long-term trends.’ Since the publication of the RTC document, another analysis by Menne et al. (2010) has also concluded that well-sited stations give a trend similar to the full set of stations, therefore providing further evidence that poorly sited stations have not undermined the final temperature record. The surfacestations.org analysis estimated station errors by visual inspection, not by measuring actual temperatures. In addition, the station error visual rating method results in estimated errors for absolute temperatures, not for changes in temperature. Therefore, methodologies using changes in temperature that average across a large number of stations could be expected to have much smaller errors than the error in absolute temperature at any individual station. This projection of small errors for large-scale trends was confirmed by the analyses by Peterson (2006) and Menne et al. (2010).
The Southeastern Legal Foundation, in their Third Amendment to Petition, objects to the changes in the USHCN temperature record between Version 1 and Version 2. To support this argument, they present several graphs. The first is a temperature graph from D’Aleo and Watts (2010) attributed to James Hansen in 2010.
The petitioner states that the 20th-century record was ‘misrepresented’ in order to ‘exaggerate a 20th century warming trend and claim it was caused by AGW [anthropogenic global warming].’ They object to the replacement of the UHI adjustment used in Version 1 of the USHCN with the ‘change point algorithm’ used in Version 2 because the petitioners claim that while this algorithm can detect abrupt changes it ‘cannot account for long term changes to the temperature record, such as UHI, making such signals indistinguishable from the climate change signal that is sought.’
The petitioner provides another chart from D’Aleo and Watts (2010) showing the difference between USHCN Version 1 and Version 2 (Figure USHCN V2-V1).
The petitioner objects: ‘The warming of the 1930’s has been minimized by negative adjustments, and the recent warming has been exaggerated by positive adjustments (or by failure to properly adjust for urban heat islands), thereby imparting to the 20th century a warming trend that the raw data and rural stations do not show.’
A third chart that the petitioner provides related to this issue is one showing the adjustments to the temperature record in Central Park in New York City, stating: ‘The notion that there is zero heat island effect in Manhattan is not valid.’
The petitioners note that GISS uses a different procedure:
The GISS temperature dataset maintained by NASA retains an urban heat island adjustment for the US, and for this reason diverges from USHCN. Between 1950-2008, the trend difference between GISS and USHCN is approximately 0.7 F/century. See id. at 42-43. The asserted warming trend over the 20th century is 0.74 ± 0.18°C (TSD p. 27). However, for the rest of the world, the GISS UHI adjustments have the wrong sign. Id. at 47-50. As summarized by D’Aleo & Watts, there is compelling evidence that the 20th century surface temperature record has been improperly adjusted downward during the warming of the 1930’s, and improperly adjusted upward (or not adjusted for UHI) in the late 20th century warming. These improper adjustments and failures to adjust impart a spurious warming signal to the 20th century temperature record and render it scientifically invalid and unreliable for use in the EPA’s Endangerment Finding.
The changes between USHCN Version 1 and Version 2 are well documented. This documentation shows that the changes between the two versions in the past 50 years are not related to any changes in how the USHCN versions address the UHI effect (although the UHI adjustment change is important pre-1960). In their analysis, the petitioners do not account for the valid corrections to the data for dates after 1995 due to a lack of correction for known instrumentation changes. Furthermore, the petitioners make assertions about NASA adjustments that are not supported by the underlying document they cite, and that show a misunderstanding of the purpose of the NASA algorithms.
Figure 11 in Menne et al. (2009) shows an analysis of the differences between the various USHCN versions.
The changes between the two versions after 1995 are a result of the fact that USHCN Version 1 had not updated its data to correct for changes in measurement instrumentation at some weather stations after that date.
An analysis of gradual trends (like those expected from the UHI effect) by Menne et al. (2009) found that in some cases the fully adjusted HCN Version 2 did remove trends at the station when compared with its neighboring stations (including a specific example of Reno, Nevada). Additionally, when comparing the HCN network with the larger Cooperative Observer Program (COOP) Network, the analysis found that trends in the HCN network were more likely to be cooler, rather than warmer, than the trends in the COOP network. Finally, Menne et al. calculated a trend from the 30% of stations that were identified as being most urbanized and found that the trend from these urbanized stations was actually slightly smaller than the trend from the remaining stations. These analyses suggest that the USHCN Version 2 methodology is appropriately accounting for UHI trends.
With regard to the assertion that outside of the United States the NASA UHI adjustments ‘have the wrong sign,’ the petitioners ignore the clearly documented methodology regarding the NASA adjustments, and they make unsupported extrapolations of the non-peer-reviewed literature that they are quoting.
A discussion of the data sources and routines used by NASA to produce the GISS temperature record can be accessed on the NASA website (NASA 2010b). As stated on this site, the urban homogenization adjustments are designed to address a changing urban environment—whether ‘warming or cooling’:
The goal of the homogenization effort is to avoid any impact (warming or cooling) of the changing environment that some stations experienced by changing the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations. If no such neighbors exist or the overlap of the rural combination and the non-rural record is less than 20 years, the station is completely dropped; if the rural records are shorter, part of the non-rural record is dropped.
What this documentation states is that NASA’s methodology replaces the long-term trend of any ‘urban’ station (as identified by nightlights) with the trend of the neighboring rural stations, regardless of whether these rural stations are warming faster or slower than the urban station. If there are no neighboring rural stations, the urban station’s trend is not used at all. Therefore, the existence of some stations that are adjusted to have a larger warming trend is consistent with the documentation.
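The trend-replacement idea described in that documentation can be sketched in a few lines. This is a toy illustration with invented numbers, not NASA’s actual GISTEMP code, which operates on real station networks and nightlight classifications.

```python
def linear_fit(ys):
    """Least-squares slope and intercept of y against x = 0, 1, ..., n-1."""
    n = len(ys)
    xm = (n - 1) / 2.0
    ym = sum(ys) / n
    sxx = sum((i - xm) ** 2 for i in range(n))
    sxy = sum((i - xm) * (y - ym) for i, y in enumerate(ys))
    slope = sxy / sxx
    return slope, ym - slope * xm

def adjust_urban(urban, rural_mean):
    """Replace the urban station's long-term linear trend with the rural
    trend, keeping the urban station's short-term residual variations."""
    su, iu = linear_fit(urban)
    sr, _ = linear_fit(rural_mean)
    residuals = [y - (su * i + iu) for i, y in enumerate(urban)]
    return [r + sr * i + iu for i, r in enumerate(residuals)]

# Invented data: urban site warming 0.3 per step, rural mean 0.1 per step.
urban = [10.0 + 0.3 * i for i in range(10)]
rural = [9.0 + 0.1 * i for i in range(10)]

slope, _ = linear_fit(adjust_urban(urban, rural))
print(round(slope, 3))  # 0.1 -- the adjusted station now carries the rural trend
```

Note that the same procedure works in both directions: if the rural neighbors happened to warm faster than the urban station, the adjustment would increase the urban trend, which is exactly the ‘warming or cooling’ symmetry the documentation describes.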
The implication by the petitioners that all of the NASA adjustments outside the United States lead to greater warming than the unadjusted data would show is without support: the literature on which petitioners rely shows only one example of a station that was adjusted to show greater warming in order to agree with neighboring rural stations, along with a hypothetical example where the methodology could potentially result in a warming adjustment to an urban station that would be problematic if the rural stations all had artificial warming.
Several analyses have been replicated for different subsets of the data: stations ranked highly by surfacestations.org compared with stations ranked poorly (Menne et al., 2010), stations that drop out early compared with stations that are still reporting (Clear Climate Code, 2010), rural stations compared with urban stations (Menne et al., 2010), and so forth. Every one of these analyses that we are aware of that uses appropriate methodologies (e.g., anomaly and spatial interpolation techniques and, in the United States, corrections for time of observation bias changes) has resulted in similar trends over the century for the United States and for the globe with only small differences from analysis to analysis.
With regard to the differences between the United States temperature record in the NOAA and NASA analyses, the petitioners’ comments are not new: see Response 2-28 in the RTC. Note that the discrepancy between NASA and NOAA discussed in Response 2-28 is much smaller now that NASA has updated to using USHCN Version 2. NASA previously used Version 1 without the UHI adjustment, and now uses Version 2 as an input. As noted above, Version 2 includes important corrections for stations that changed measurement instruments after 1995. The updated trends for U.S. data in the two datasets are 1.28°F/century for NOAA and 1.12°F/century for NASA from 1901 to 2008. For 1950 to 2008, the trends are 1.50 and 1.39°F/century for the two temperature records respectively. The smaller trend in the NASA temperature record is due to the nightlight-based urban correction routines.
We find that while NASA uses a nightlights map and an adjustment methodology for the United States that produces a smaller trend, it is not clear that this is to be preferred over the NOAA approach. It might produce more realistic temperature estimates for some stations, but might over-correct for urban effects for other stations. Also, the NOAA algorithm has the advantage of using the larger COOP database for its closest neighbor calculations. In the case of Central Park specifically, NASA’s nightlight adjustment routine does result in a smaller trend than NOAA uses in its USHCN temperature record. Recall, however, that it is not the heat island effect in Manhattan that is in question, but rather the change in the heat island effect over time that is important. NASA uses more distant HCN stations to determine such trends and the appropriate correction, whereas NOAA uses a larger number of nearby COOP stations to determine appropriate ‘changepoint’ corrections. (The HadCRUT temperature record does not include the Central Park station, as it does not use the USHCN dataset.)
In sum, the petitioners have not shown inappropriately manipulated data in terms of either the U.S. or global temperature records. The findings of a warming temperature trend in the United States and the globe over both the past 50 years and the past century remains robust, as shown by different methodologies used by different groups, as well as analyses that use only subsets of the data (such as only those stations that are not near brightly lit areas as defined by nighttime satellite measurements).
The Southeastern Legal Foundation, in their Third Amendment to Petition, discusses the surfacestations.org project and the related critiques of the USHCN network and the U.S. temperature record:
Anthony Watts’ project, surfacestations.org, has surveyed 1067 of 1221 (87.4%) surface stations in the USHCN network. Stations are evaluated for the quality of their location according to criteria developed by the Climate Reference Network (‘CRN’) Site Information Handbook, which specifies the requirements for establishing and maintaining a weather instrument site. Id. at 32. Deviations from the siting standards introduce a range of error in the measurements according to a scale set forth in the handbook. The error scale runs from less than 1°C for stations classified as CRN class 1 or 2, up to greater than 5°C for stations rated CRN class 5. Id. at 28-33. For the stations surveyed the SurfaceStations.org volunteers determined that 90% were sited in ways that result in errors exceeding 1°C according to the handbook’s error scale. In the following chart, these stations are categorized by where they fall in the CRN handbook’s error scale:
Since the asserted warming trend over the 20th century is 0.74 ± 0.18°C (TSD p. 27), the petitioner claims that the error swamps the signal. On this basis, the petitioner asserts that the data are invalid and unreliable and cannot be relied on for the Endangerment Finding.
The petitioner cites a study that has determined that a large number of stations do not match the best criteria as developed for the Climate Reference Network. However, the petitioner does not show that the identified siting issues contribute to a systematic bias. The error scale used covers both positive and negative error. Errors that vary randomly from one station to another would not be expected to bias large-scale temperature records. In addition, these errors refer to errors in absolute temperatures, not in changes in temperature. In contrast, researchers at NOAA have performed analyses that show the difference between trends calculated from stations that have been identified as ‘well-sited’ compared to trends calculated from stations that have been identified as ‘poorly sited.’ A key figure from Menne et al. (2010) (Figure 7, provided below) compares various U.S. temperature records derived by using all the stations, only stations with ‘good’ siting, only stations with ‘poor’ siting, and only stations from the new US Climate Reference Network (USCRN). USCRN was developed over the past few years with the goal of having stations designed for the purpose of long-term climate monitoring, in contrast to the current network of stations primarily designed for weather monitoring. Note that all four sets of stations give almost identical trends (for USCRN, only over the recent years that it has been operational).
The assumption by the petitioner that an error of 1 degree or more at individual stations ‘swamps the signal’ is also incorrect based on basic statistics. Because there are a large number of measurements at a large number of stations, the statistical error in the average is much smaller than the error for a single measurement at a single station.
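This statistical point can be illustrated with a back-of-the-envelope sketch. The anomaly value, per-station error, and station count below are invented for illustration, not drawn from any actual network.

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is reproducible

true_anomaly = 0.5      # hypothetical true regional anomaly (degrees C)
station_error_sd = 1.0  # assumed random error of one station reading
n_stations = 1000

readings = [true_anomaly + random.gauss(0, station_error_sd)
            for _ in range(n_stations)]
network_mean = sum(readings) / n_stations

# The standard error of the mean shrinks like sigma / sqrt(N):
sem = station_error_sd / math.sqrt(n_stations)
print(round(sem, 3))  # 0.032 -- far smaller than the 1-degree station error
print(round(network_mean, 2))  # close to the true value of 0.5
```

With 1,000 stations, a random 1-degree error per station leaves an expected error in the network mean of only about 0.03 degrees, well below the 0.74°C century-scale signal cited by the petitioner.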
See also Response 1-67 in this document regarding the surfacestations.org project.
Therefore, the conclusion of the Watts surfacestations.org project that, using one method of rating stations, only a small subset of the U.S. temperature stations are well sited does not demonstrate that the overall temperature record is biased or unreliable. In fact, analysis using only the subset of stations determined as ‘good’ by Watts, as well as analysis using the new Climate Reference Network stations, found very similar trends to the analysis using all the stations, demonstrating that the conclusions about U.S. temperature trends are robust.
On February 25, 2010, Edward R. Long, Ph.D. published a paper through the Science and Public Policy Institute analyzing the effect of adjustments to the temperature record for the continental United States that are made in the NCDC temperature record for rural and urban stations. For the rural stations in the study, the raw data showed a linear trend of 0.13°C per century, while for urban stations the raw data showed a trend of 0.79°C per century. Id. at p. 8-9. The long term trends were very similar until about 1965, when the trend in the urban raw data increases faster than in the rural data. Id. at 9-10.
NCDC’s adjusted data for rural stations show a trend of 0.64°C per century, compared to 0.13°C per century for the raw data. In other words, the NCDC adjustment increased the rural trend by nearly five times. Id. at 11. The adjusted data for urban stations show a trend of 0.77°C per century, compared to a raw urban trend of 0.79°C per century. Id. ‘Thus, the adjustments to the data have increased the rural rate of increase by a factor of 5 and slightly decreased the urban rate, from that of the raw data.’ Id. This has the effect of hiding urban heating, and permitting the warming present in the adjusted data to be attributed not to urban warming, but to climatic warming. As Long concludes, ‘The consequence, intended or not, is to report a false rate of temperature increase for the Contiguous U.S.’ Id. at 13. The EPA should not base economically devastating regulations, or any regulations, on false reports, and should reconsider the Endangerment Finding to make sure that it has not done so here.
The Long study does not demonstrate inappropriate adjustment for rural temperature stations, because this study did not take into account the appropriate ‘time of observation bias’ adjustments or adjustments for changes in measurement instruments. Because of these flaws in the Long analysis, the conclusions by Long are without scientific merit. As demonstrated in previous responses, U.S. trends calculated using only ‘good’ stations or only Climate Reference Network stations show very similar trends to the trends calculated using the full set of data. See also Response 1-66 on the Long study and other UHI issues in this section.
The Coalition for Responsible Regulation cited two papers that they state show that replication of temperature records is difficult, and that therefore, EPA’s Endangerment Finding does not meet the requirements of the Data Quality Act. Regarding these two papers, the coalition states, ‘One study, noting that the raw temperature records in the USHCN are adjusted substantially to account for a variety of potential contaminants, concluded the effects of such adjustments ‘produce a significantly more positive, and likely spurious trend in the USHCN data’ (Balling and Idso, 2002). The other study concluded ‘that the inability to replicate the [Northern Hemisphere surface thermometer temperature] trend was likely a result of ‘data padding’ used to smooth and filter data’ (Soon et al., 2004).
EPA has reviewed the petitioners’ submission of Balling and Idso (2002) and Soon et al. (2004) and finds that it was not impracticable to raise the objection during the public comment period and that the reasons for the objection did not arise between June 24, 2009, and February 16, 2010. Petitioners could have submitted these studies during the comment period on the proposed Endangerment Finding. Although, in most cases, the petitioners provide excerpts from the CRU e-mails in support of their assertions, EPA’s review has determined that this evidence does not support their allegations, and that the information submitted by petitioners on these topics was available before the comment period for the Endangerment Finding. Petitioners have not shown why it would have been impractical for them to have submitted these studies then. Indeed, similar points were already raised, and responded to, in the RTC. Despite the fact that these objections fail to meet the statutory timeframe for evidence supporting a petition for reconsideration, we briefly explain why, contrary to petitioners’ allegation, they fail to call into question the Finding.
Balling and Idso (2002) draws conclusions based on an analysis that is no longer valid, as it relies on the University of Alabama—Huntsville (UAH) satellite temperature dataset before corrections identified by Karl et al. (2006) in CCSP Product 1.1 were applied to the satellite record. These corrections have been accepted by all the researchers involved, including those at UAH, and increased the temperature trend in the satellite dataset, eliminating many of the discrepancies with the surface temperature dataset. Additionally, even before these corrections were identified, Vose et al. (2003) found with respect to Balling and Idso (2002) that ‘the time of observation bias adjustments in HCN appear to be robust,’ contrary to the assertions of Balling and Idso that these adjustments were biased.
As background, Soon et al. critiqued the application of the smoothing algorithm used by Mann and Jones (2003) at the very end of the time period that was analyzed. The algorithm is a 20-year average, and a decision must be made about what temperature to use for the last 10 years. For example, one could choose to reflect the end of the temperature record (making the years after the end of the record a mirror image of the years before the end of the record), or assume that all years after the last year of the record are equal in temperature to the last year, or assume that the subsequent years continue the trend of the previous years in the record. Soon et al. felt that application of this ‘data padding’ (though they were not able to exactly duplicate Mann and Jones) led to unjustifiably high temperatures at the end of the smoothed temperature record.
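The three padding choices just described can be compared in a short sketch. The linear series and the simple centered average below are invented for illustration; this is not Mann and Jones’s actual smoothing code.

```python
def pad(series, half_window, mode):
    """Extend a series past its last point using one of three rules."""
    if mode == "mirror":
        tail = series[-2:-2 - half_window:-1]      # reflect about the end
    elif mode == "constant":
        tail = [series[-1]] * half_window          # repeat the last value
    elif mode == "trend":
        step = series[-1] - series[-2]             # continue the last slope
        tail = [series[-1] + step * (i + 1) for i in range(half_window)]
    else:
        raise ValueError(mode)
    return series + tail

def smoothed_last_value(series, window, mode):
    """Centered moving average evaluated at the final real data point."""
    half = window // 2
    padded = pad(series, half, mode)
    end = len(series) - 1
    return sum(padded[end - half:end + half + 1]) / (2 * half + 1)

rising = [float(i) for i in range(30)]  # a steadily rising toy series
for mode in ("mirror", "constant", "trend"):
    print(mode, round(smoothed_last_value(rising, 20, mode), 2))
# For this linear series, 'trend' padding returns the true last value
# (29.0), while 'mirror' and 'constant' pull the smoothed endpoint down.
```

The sketch shows why the choice matters: for a series that is rising at its end, the three padding rules give noticeably different smoothed endpoints, which is the crux of the disagreement between Soon et al. and Mann.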
A subsequent peer-reviewed rebuttal of Soon et al.’s critique was published by Mann (2004). Mann (2004) states that ‘comparisons that are uninformed (e.g., Soon et al., 2004) by objective evaluation criteria (e.g., MSE [Mean Square Error]), are unlikely to provide useful insights into the relative merits of alternative boundary constraints.’ Mann’s contention is that there needs to be an objective way to evaluate which smoothing routine to use. While he does not claim that MSE is necessarily the best function, he notes that Soon et al. do not use any objective criteria at all. His analysis also suggests that his approach will choose methods that reflect the underlying trends in the data, whereas smoothing that does not use the MSE criteria can generate spurious trends.
Petitioners identify several instances of allegedly flawed data adjustments at specific weather stations, citing the records of some individual stations that they claim show inappropriate manipulation. The Coalition for Responsible Regulation references a non-peer-reviewed analysis by Eschenbach (2009) as a basis for claiming: ‘For example, adjustments made by the Global Historical Climatology Network (GHCN) in Darwin, Australia, transformed a temperature trend falling at 0.7°C per century to one that was warming 1.2°C per century.’
The Southeastern Legal Foundation, the State of Texas, and the Coalition for Responsible Regulation make similar claims regarding stations in New Zealand, stating, ‘Similar ‘adjustments’ are also evident in one striking comparison of raw temperature data with homogenized temperature data ‘adjusted’ by the New Zealand national weather service, the National Institute of Water & Atmospheric Research (NIWA),’ claiming that the ‘New Zealand Climate Science Coalition concluded that all of the ‘adjustments’ made by Salinger and the NIWA served to show inaccurate increases in warming’ (referencing a non-peer-reviewed analysis by Treadgold, 2009).
The specific assertions made about the Darwin and New Zealand stations rely upon two non-peer-reviewed analyses that found that temperature adjustments at various stations caused temperatures to be higher than they otherwise would have been. In the case of New Zealand, the petitioners do not show that the adjustments performed by these organizations were inappropriate, just that the adjustments resulted in a larger warming trend. As we have explained previously, there are several legitimate reasons to adjust temperature records, from time of observation changes to moving the location of the thermometer.
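As a simplified sketch of how such an adjustment works, consider a step change introduced by a station move, estimated against a stable neighboring station. All numbers are invented, and the actual GHCN and NIWA procedures are considerably more elaborate.

```python
def mean(xs):
    return sum(xs) / len(xs)

move = 10  # index of the hypothetical station relocation

# Invented series: the neighbor has no move; the target reads 0.6 degrees
# higher before the move (old, warmer site). Both share a 0.02/step trend.
neighbor = [10.0 + 0.02 * i for i in range(20)]
target = [12.0 + 0.02 * i + (0.6 if i < move else 0.0) for i in range(20)]

# The target-minus-neighbor difference series isolates the step change,
# because the shared climate signal cancels out.
diff = [t - n for t, n in zip(target, neighbor)]
step = mean(diff[:move]) - mean(diff[move:])  # estimated discontinuity

# Remove the step from the pre-move segment so the record is continuous.
adjusted = [t - step if i < move else t for i, t in enumerate(target)]
print(round(step, 3))  # 0.6
```

In this toy case the correction happens to remove spurious early warmth, producing a larger warming trend than the raw data; as the responses above note, whether an adjustment raises or lowers a trend says nothing about whether it was appropriate.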
NIWA has published details of its temperature analysis online (New Zealand NIWA, 2010). Their site includes links both to a set of 11 stations with no required adjustments since 1930 and the set of seven stations analyzed by Treadgold. Both sets of stations show warming over the period between 1930 and present. In the latter set, NIWA links to a complete description of the station history of Hokitika, the station with the largest change in trend due to adjustments. NIWA shows that the composite, adjusted Hokitika record is similar to that of two nearby stations from the unadjusted set during those periods when the stations were all recording data. We also note that the raw data (for all 18 New Zealand stations linked to by NIWA) provided to NOAA for the GHCN dataset does not include the homogeneity adjustments from the NIWA researchers. The adjusted NOAA temperature record does show a warming trend for these stations, however. While the NOAA GHCN adjustments differed in some ways from the NIWA adjustments, the largest adjustment made to the Hokitika station by both NOAA and NIWA occurred before 1912, consistent with this statement from NIWA: ‘It is noted in the Hokitika station history file (see Appendix 2) that the maximum temperatures were believed to be about 3°F too high through the period 1894 to August 1912.’ Given that NOAA uses automated procedures to correct for station moves based on data from neighboring stations, this is strong independent confirmation that the manual adjustments made by NIWA were appropriate (and vice versa).
Of the other two major temperature records, HadCRUT uses most of the same manual adjustments as NIWA. The automated algorithm used by NASA, on the other hand, is designed to correct for changes in trends at urban stations rather than sharp discontinuities, and therefore does not capture the errors that are corrected by both the NOAA algorithm and the HadCRUT/NIWA manual adjustments.
While NIWA has only addressed in depth one of the seven stations analyzed by Treadgold, New Zealand researchers have shown, based on comparisons to nearby stations, that the adjustments to the other stations were appropriate (New Zealand NIWA, 2010). Additionally, Folland and Salinger (1995) found that the nearby sea surface temperatures also showed similar warming since the beginning of the century, adding further independent confirmation that the adjusted trend used by NIWA, NOAA, and HadCRUT is a good reflection of reality. Therefore the evidence indicates that NOAA and NIWA both made similar, independent, and appropriate adjustments. Note that this example is also further confirmation of the independence of the adjustment methodologies of the NOAA, NASA, and HadCRUT temperature records addressed in Subsection 1.4.3.
Regarding the Darwin station, these adjustments appear in only one of the three temperature records—the adjusted GHCN record from NOAA (HadCRUT uses raw GHCN data, and NASA uses raw data starting in the 1960s). There is a clear discontinuity at Darwin in 1941 at the time of a major station move, when the temperature record apparently dropped by more than half a degree, and it would be expected that adjustments would be required under such circumstances to ensure that the station trend best represents the real temperature trend at that location. However, according to a letter by Tom Peterson of NOAA (reproduced in full in Response 1-73 below), the Darwin station might be an example of a ‘random walk’ where a rare combination of circumstances means the automated algorithm can introduce an erroneous trend. However, a possible error at one station (which occurs in only one of the three datasets) will not materially affect large-scale trends. Statistical analysis has shown that:
[t]he difference in trends between homogeneity-adjusted and unadjusted data can be enormous at an individual station and very significant in regional analyses. However, Easterling and Peterson (1995a,b) found that on very large spatial scales (half a continent to global), positive and negative homogeneity adjustments in an individual station’s maximum and minimum temperature time series largely balance out, so when averaged into a single time series, the adjusted and unadjusted trends were similar (Peterson et al. 1998).
Thus, it is not surprising that data from one station or region would show a large difference between adjusted and unadjusted data. The important point is that when the stations and regions are combined for a global analysis, these kinds of effects are balanced out and do not introduce bias in the overall result.
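The balancing effect described by Peterson et al. (1998) can be illustrated with a simple simulation. This is a toy model, not an analysis of real station data; the network size, trend, and step magnitudes are invented.

```python
# Toy simulation: each station carries a shared warming trend plus one
# artificial step discontinuity (an inhomogeneity) of random sign.
# Adjustments (removing the step) are large at individual stations but
# largely cancel in the many-station average because the steps are
# symmetric around zero.
import random

random.seed(42)
n_stations, n_years = 1000, 50
true_trend = 0.01   # shared climate signal, degrees per year (invented)

def make_station():
    """Return (raw, adjusted): raw has a random step; adjusted removes it."""
    step_year = random.randrange(5, 45)
    step = random.choice([-1, 1]) * random.uniform(0.2, 1.0)
    raw = [true_trend * y + (step if y >= step_year else 0.0)
           for y in range(n_years)]
    adjusted = [true_trend * y for y in range(n_years)]
    return raw, adjusted

stations = [make_station() for _ in range(n_stations)]

def network_change(series_index):
    """Change from first to last year, averaged over all stations."""
    return sum(s[series_index][-1] - s[series_index][0]
               for s in stations) / n_stations

raw_change = network_change(0)
adjusted_change = network_change(1)
# Per-station raw-vs-adjusted trend differences are 0.2 to 1.0 degrees,
# yet the network averages agree closely.
station_diffs = [abs((raw[-1] - raw[0]) - (adj[-1] - adj[0]))
                 for raw, adj in stations]
```

Individual station differences here reach nearly a full degree, while the adjusted and unadjusted network averages differ by only a few hundredths of a degree, mirroring the Easterling and Peterson finding quoted above.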
Therefore, we find that the petitioners have not shown evidence that the adjustments in New Zealand were inappropriate, or shown that the adjustments at Darwin biased larger scale temperature reconstructions. We also note that the three temperature records used different methods to determine the appropriate adjustments: in the case of New Zealand, NOAA and HadCRUT used independent methods (one manual and one automatic) and found similar adjustments needed to be made, whereas in the case of Darwin NASA and HadCRUT both used the raw data with no adjustments (though different time periods of useable data).
The Coalition for Responsible Regulation provides a number of references (Exhibits K through V) to support their statement that ‘There are numerous and recently-released studies documenting significant warming biases in the temperature databases that, in light of the Disclosures [i.e. the CRU emails], reveal the fruit of this manipulated process.’
Exhibits K through V are Balling and Idso, 2002; Christy et al., 2006; Christy and Norris, 2009; D’Aleo, 2009; D’Aleo, 2010; Davey and Pielke, 2005; Davey et al., 2006; Hale et al., 2006; Pielke et al., 2007a; Pielke et al., 2007b; Soon et al., 2004; and Watts, 2009.
EPA has reviewed the petitioners’ submission of these 12 studies and finds that for many of these submissions it was not impracticable to raise the objection during the public comment period, and that the reasons for the objection did not arise between June 24, 2009, and February 16, 2010. Petitioners could have submitted most of these studies during the comment period on the proposed Endangerment Finding. Although, in most cases, the petitioners provide excerpts from the CRU e-mails in support of their assertions, EPA’s review has determined that this evidence does not support their allegations, and that the information submitted by petitioners on these topics was available before the comment period for the Endangerment Finding. Petitioners have not shown why it would have been impractical for them to have submitted these studies then. Indeed, similar points were already raised, and responded to, in the RTC. Despite the fact that these objections fail to meet the statutory timeframe for evidence supporting a petition for reconsideration, we briefly explain why, contrary to petitioners’ allegation, they fail to call into question the Finding.
Most of the 12 references provided by the petitioners to support their claim of ‘significant warming biases’ are not new. Three references (Christy et al., 2006, 2009; Pielke Sr. et al., 2007a) were explicitly addressed in Volume 2 of the RTC document (Responses 2-28 and 2-29), which found that these effects would be negligible on hemispheric and continental scale averages. Below is EPA’s response from RTC Volume 2:
EPA has reviewed additional literature submitted by commenters on this issue (e.g., Christy et al., 2006, 2009; Lin et al., 2007; Pielke Sr. et al., 2007, Ren et al., 2007; Walters et al., 2007), which documents effects that may result in biases at individual stations. CCSP (2006) also addresses this issue, and finds: ‘To the extent that these effects could be large enough to have a measurable influence on global temperature, these changes will be detected by the land-based surface network [which corrects for these effects].’ The large number of stations in land-based surface networks greatly facilitates temporal homogenization, since a given station may have several ‘near-neighbors’ for ‘buddy checks’ where adjustments and/or change point algorithms can be employed per the previous response 2-28 (CCSP, 2006). In a comprehensive reassessment of errors in the HadCRUT temperature record, Brohan et al. (2006) conclude: ‘Since the mid-20th century the uncertainties in global and hemispheric mean temperatures are small and the temperature increase greatly exceeds its uncertainty.’
Through our review of the data sets and the literature, EPA concurs with IPCC (Trenberth et al., 2007) and CCSP’s overall assessment that the effects of urbanization and land use changes on the land-based temperature record are negligible as far as hemispheric- and continental-scale averages are concerned because the very real but local effects are avoided or accounted for in the datasets used.
Three more references (Davey and Pielke, 2005; Hale et al., 2006; and Pielke et al., 2007b) were not raised during public comment, but we find that they are variations on the same issue of effects at individual stations that may bias those stations’ records, which was addressed with respect to the Christy and Pielke studies discussed in the RTC document (as quoted above). Thus, these references do not lead to any changes in EPA’s conclusions on the robustness of global and continental-scale averages.
Davey et al. (2006) is a study of the use of a different metric (equivalent temperature) that takes into account how the heat capacity of air depends on humidity and explores the implications of weighting temperature measurements by this heat capacity. However, while this is an interesting academic exercise, the current standard metric is the average surface temperature, and this study does not demonstrate any biases in the measurement of this standard metric. The equivalent temperature trends found in Davey et al. (2006) are actually larger (averaged over the year) than the comparable, standard surface temperature trends for the region examined.
Two more references (Soon et al., 2004; Balling and Idso, 2002) have been superseded in the literature or depend on obsolete data, as discussed in Response 1-70.
The only new materials (D’Aleo, 2009, 2010; Watts, 2009) address the U.S. surface temperature record; they are very similar to D’Aleo and Watts (2010) and share the same fundamental flaws that were discussed in many responses in Section 1.4 (such as Response 1-62).
Therefore none of the 12 exhibits submitted by the petitioner support the contention that there are ‘significant warming biases’ in the U.S. or global surface temperature records, and most of them do not meet the statutory timeframe for evidence supporting a petition for reconsideration.
A similar instance of chicanery arose with the manipulation of the Australian climate data. The HARRY_READ_ME.txt file makes numerous references to manipulating the Australian data, but one must examine the data itself to fully understand what the Climategate crew was doing. While it concerns a single location, the adjustments to data from Darwin, Australia, admit no legitimate explanation.
The petitioner quotes Willis Eschenbach:
Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style — they are indisputable evidence that the ‘homogenized’ data has been changed to fit someone’s preconceptions about whether the earth is warming. One thing is clear from this. People who say that ‘Climategate was only about scientists behaving badly, but the data [are] OK’ are wrong. At least one part of the data is bad, too. The Smoking Gun for that statement is at Darwin Zero.20 [Footnote 20: Id.]
The Coalition for Responsible Regulation also cites the same Web page (Eschenbach, 2009) as further support for their statement that ‘There are numerous and recently-released studies documenting significant warming biases in the temperature databases that, in light of the Disclosures, reveal the fruit of this manipulated process.’
EPA responds to the allegations regarding data adjustments in Response 1-71. There is a clear rationale for at least one adjustment to the Darwin station because of a major station move in 1941. Because the adjustments by NOAA use an automated procedure based on nearest neighbor and other calculations, there is no opportunity for human ‘chicanery’ or ‘messing with the data.’ Note that we are unclear as to what is meant by messing with the data ‘Egyptian style.’
Additionally, as shown in Response 1-66, the sum total effect of all land-based adjustments by NOAA on trends in the global GHCN temperature record is very small (as discussed in Subsection 1.3.2, ocean data adjustments result in a smaller warming trend). For the continental United States, the time of observation bias correction (to correct for the shift in temperature measurement from the afternoon to the morning) and the corrections due to changes in measurement technology do result in a warmer trend than the raw data would suggest, but these corrections are essential for proper analysis, not ‘chicanery.’
Tom Peterson of the NCDC at NOAA responded to a request for information from the author of the ‘smoking-gun-at-darwin-zero’ post; his response, posted in the comments at Watts (2010), is reproduced here:
Dear Willis Eschenbach,
I received your questions today. They are quite detailed and would take some digging through files from the mid to late 1990s for me to answer all of them. This would take time I don’t have right now (I actually should be on annual leave right now, but had a few things I wanted to get done before I take off for the rest of the year in a few hours). So let me respond in general terms first and provide you with some articles to make sure we’re both starting from the same page.
One of the problems we were trying to address in some of the procedures we developed back in the mid-1990s was how to take advantage of the best climate information we had at each location at each point in time. We had spent a great deal of time and energy digitizing European colonial era data (article sent) which went a great deal towards making global data prior to 1950 more global (see http://www.ncdc.noaa.gov/img/col.gif for a movie loop of the stations we digitized or acquired for GHCN by this project). This means that in some parts of the world, we might have more stations available to build a reference series from prior to the country’s independence than afterwards. To utilize data that did not span the whole period of record, we used what we called the first difference method (article sent). Using this approach we built a reference series (article sent) one year at a time.
There were two concerns about this approach. The first was how to make sure we didn’t incorporate a change in station location (etc.) artifact into the reference series. That aspect was done by using the 5 highest correlated stations for the reference series and removing the value from the highest and lowest of the 5 highest correlated first difference values for that year based on the assumption that the mean of the three center most values provided a robust measure of the climate signal and if a station moved up or down a hill, its value would likely be the highest or lowest due to the impact of the station move that year. (This last part was a later addition and is explained in the homogeneity review paper (paper sent).)
The homogeneity review paper explains the reasons behind adopting this complex reference series creation process. It did indeed maximize the utilization of neighboring station information. The downside was that there was a potential for a random walk to creep into the reference series. For example, if the nearest neighbor, the one with the highest correlation, had a fairly warm year in 1930, its first difference value for 1930 would likely be fairly high. The first difference value for 1931 would therefore likely be low as it probably was colder than that experienced in that very warm year preceding it. So the reference series would go up and then down again. The random walk comes in if the data for 1931 were missing. Then one gets the warming effect but not the cooling of the following year. The likelihood of a warm random walk and a cold random walk are equally possible. Based on the hundreds of reference series plots I looked at during my mid-1990s evaluation of this process, random walks seemed to be either non-existent or very minor. However, they remained a possibility and a concern.
Partly in response to this concern, over the course of many years, a team here at NCDC developed a new approach to make homogeneity adjustments that had several advantages over the old approaches. Rather than building reference series it does a complex series of pairwise comparisons. Rather than using an adjustment technique (paper sent) that saw every change as a step function (which as the homogeneity review paper indicates was pretty standard back in the mid-1990s) the new approach can also look at slight trend differences (e.g., those that might be expected to be caused by the growth of a tree to the west of a station increasingly shading the station site in the late afternoon and thereby cooling maximum temperature data). That work was done by Matt Menne, Claude Williams and Russ Vose with papers published this year in the Journal of Climate (homogeneity adjustments) and the Bulletin of the AMS (USHCN version 2 which uses this technique).
Everyone here at NCDC is very pleased with their work and the rigor they applied to developing and evaluating it. They are currently in the process of applying their adjustment procedure to GHCN. Preliminary evaluation appears very, very promising (though of course some very remote stations like St Helena Island (which has a large discontinuity in the middle of its long record due to moving downhill) will not be able to be adjusted using this approach). GHCN is also undergoing a major update with the addition of newly available data. We currently expect to release the new version of GHCN in February or March along with all the processing software and intermediate files which will dramatically increase the transparency of our process and make the job of people like you who evaluate and try to duplicate surface temperature data processing much easier.
I hope this email and the series of articles I am sending will answer some of your questions at least (e.g., in the homogeneity review paper it clearly states that the first difference correlation threshold of 0.8 is between the candidate station and the final reference series, not the individual stations that make up the reference series). They are likely to also stimulate some additional questions. So if it is all right with you, I won’t follow up on your questions when I return in January but rather will wait until you send in a new set of questions or just send these old ones back to me.
We’re doing a lot of evaluation of our new approach to adjusting global temperature data to remove artificial biases but additional eyes are always welcome. So I would encourage you to consider doing additional GHCN evaluations when we release what we are now calling GHCN version 2.5 in, hopefully, February or March of 2010.
This letter demonstrates the care and rigor used by the researchers at NOAA when considering possible problems involved in the methodologies being used to construct global surface temperature records. Even though the researchers had determined that problems such as ‘random walks’ were ‘non-existent or very minor,’ they were working to develop a new methodology that would be superior and would not include the possibility of these errors. Importantly, such ‘random walks’ would be, as the name suggests, random: as likely to introduce spurious warming as spurious cooling, and therefore very unlikely to be a source of large-scale bias, even if a spurious trend on a local scale might be introduced. Darwin might actually be an example of such a ‘random walk,’ where missing data, a lack of nearby neighbors, and the fact that Darwin included a lot of colonial-era data (and therefore had more data in the past than in more modern times) could have led to a rare case of the automated procedure introducing a spurious warming. However, on the global scale Darwin is a very small contributor, and there is no evidence that the algorithm performed similarly elsewhere. Neither NASA nor CRU uses the same algorithm as NOAA, and therefore this potential spurious warming is not included in either of those records. Yet the large-scale trends of the NASA and HadCRUT temperature records are consistent with those of the NOAA temperature record, demonstrating the robustness of large-scale trends to errors in individual stations.
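The ‘random walk’ failure mode described in Peterson’s letter can be illustrated with a simplified first-difference sketch. This is a toy with invented values, and it uses a single neighbor series rather than the trimmed mean of the five best-correlated stations used in the actual procedure.

```python
# Sketch of how a missing year can leave a permanent offset in a
# reference series built from first differences: a warm spike's upward
# step is recorded, but the compensating downward step is lost when the
# following year's data are missing. A cold spike would leave an equally
# likely cold offset, so the error is random in sign.

def reference_from_first_differences(temps):
    """Rebuild a series from year-to-year first differences, carrying the
    level forward across differences that involve a missing (None) value."""
    ref = [0.0]
    for prev, cur in zip(temps, temps[1:]):
        if prev is None or cur is None:
            ref.append(ref[-1])       # no information: hold the level
        else:
            ref.append(ref[-1] + (cur - prev))
    return ref

complete = reference_from_first_differences(
    [10.0, 10.0, 12.0, 10.0, 10.0, 10.0])   # warm spike, full data
# The +2.0 step is cancelled by the -2.0 step: the series ends at 0.0.

walked = reference_from_first_differences(
    [10.0, 10.0, 12.0, None, 10.0, 10.0])   # the cancelling year is missing
# The +2.0 step survives, the -2.0 step is lost: a spurious +2.0 offset.
```

The final value of the complete series returns to zero, while the gappy series retains a permanent 2.0-degree offset, which is the ‘random walk’ artifact.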
Additionally, note that the ‘Darwin’ adjustments highlighted by the petitioner are from NOAA, not CRU, and have nothing to do with the HARRY_READ_ME file which was about the CRU TS (time series) dataset and not the HadCRUT temperature record.
The Southeastern Legal Foundation cites a document by Richard Treadgold of the New Zealand Climate Science Coalition (Treadgold, 2009) to support their contention that there is data manipulation to show warming:
Other instances of such data manipulation are also beginning to surface. For example, new information published November 25, 2009 also shows an eerily similar pattern of ‘adjustments’ was made to raw temperature data in New Zealand—downward adjustments in the early 20th century and upward adjustments from the middle of the 20th century onward.23 While the information on the New Zealand temperature adjustments was not part of Climategate, it is new, and it adds to the pattern of similar manipulations - downward adjustments in the first part of the 20th century and upward adjustments in the latter half - yielding a warming trend over the last century that is almost entirely fictitious. EPA should evaluate this controversy to determine whether these recently revealed ‘adjustments,’ like those of Climategate, indicate more of the same collusion, manipulations, and chicanery.24
The Southeastern Legal Foundation also states (in a footnote to the above paragraph): ‘Without belaboring the point, yet another curious manipulation occurs in the context of the USHCN dataset. See ‘Difference Between Raw and Final USHCN Data Sets,’ found at: http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif (last visited December 23, 2009). Absent Climategate, one might assume the adjustments were legitimate. In light of the extensive (and ever expanding) evidence of improper manipulations, however, one must wonder.’
The Southeastern Legal Foundation raises similar questions about New Zealand in their Third Amendment to Petition, citing D’Aleo and Watts (2010). The State of Texas claims that ‘the CRU created an exaggerated appearance of 20th century warming in New Zealand’ and cites the USHCN graphic discussed by the Southeastern Legal Foundation to support the statement ‘The CRU manipulated the raw data to show lower temperatures for New Zealand in the early century.’ The Coalition for Responsible Regulation also cites Treadgold about the New Zealand Temperature record.
As discussed previously (see Response 1-71), EPA has evaluated claims concerning adjustments to data in New Zealand, and concluded that the adjustments by NIWA are in line with the station records and nearby station trends. Note that in almost all the stations the majority of the adjustments come before 1950. Therefore, the adjustments do not influence any warming trends after 1950. In addition, the State of Texas and the Southeastern Legal Foundation cite the graph ‘Difference Between Raw and Final USHCN Data Sets’ in support of their contentions about the New Zealand data. This graph documents the NOAA adjustments that correct for variations in the time of observation and changes in temperature measurement instruments for the U.S. temperature record. Petitioners do not explain or present argument to show why the adjustments are inappropriate. Also, this graph is not pertinent to the claims of inappropriate manipulations by CRU to the New Zealand data.
Their fifth case study is by Willis Eschenbach10 recounting the attempts by Professor Wibjorn Karlen to replicate the IPCC’s temperature analysis for the Nordic region. See Id. at 79; available at http://wattsupwiththat.com/2009/11/29/when-results-go-bad/ (last visited Feb. 10, 2010). Eschenbach was drawn to the subject by the inclusion in the Climategate e-mails of correspondence between Professor Karlen and Phil Jones, former Director of the CRU, which follows the now familiar pattern of Jones willfully obstructing legitimate scientific inquiry. The IPCC shows substantial increases in temperature in the late 20th century, to 0.5C above the levels of the 1930s in the Nordic region. Karlen could not replicate the assertion by the IPCC that the recent warming exceeded that of the 1930s, either in the Nordic regions or in many other regions of the world, and sought clarification from Jones and Trenberth. They did little more than refer him back to the IPCC reports from which the question arose in the first place. Karlen’s analysis, replicated by Eschenbach, shows the same pattern of suppressing the warming of the 1930’s and inflating the warming of the late 20th century that is seen in surface temperature record discussed above. Eschenbach’s analysis of the Climategate e-mails concludes that Karlen made legitimate inquiries and ‘got incomplete, incorrect and very misleading answers.’ Id.
The analysis by the IPCC was of the ‘NEU’ region (Northern Europe), which included 48N to 75N, and 10W to 40E. Therefore, in addition to the Nordic region analyzed by Karlen, the IPCC region included Ireland, the United Kingdom, half of France, Germany, Belarus, and several other nations. Thus the Karlen and Eschenbach analysis covers a different geographic region, which they failed to account for. In addition, they appear to have used a flawed methodology, as there is no evidence that either Karlen or Eschenbach performed any kind of spatial interpolation. Instead it appears that they used the flawed approach of taking a straight average of station anomalies (which is not the appropriate method to use unless the stations are perfectly evenly distributed geographically, which they are not). Additionally, Eschenbach used a 17-year Gaussian average to compare against an IPCC figure that is clearly labeled as using decadal averages. Therefore, the Eschenbach analysis was of a different region, used a different (and inferior) methodology for combining weather station data, and used different temporal averaging. These reasons explain why the results of Eschenbach differ from those of the IPCC, and do not support the petitioners’ claim that the trend was manipulated, thus ‘inflating’ the late 20th-century warming.
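The difference between a straight average of station anomalies and a grid-based (area-weighted) average can be seen in a minimal example; the two grid boxes, station counts, and anomaly values are invented for illustration.

```python
# Two equal-area grid boxes: box A is densely sampled, box B sparsely.
# A straight average over stations over-weights box A; averaging within
# each box first, then across boxes, weights equal areas equally.

box_a = [0.2, 0.3, 0.25, 0.25]   # four stations, modest warming anomaly
box_b = [1.0]                     # one station, strong warming anomaly

stations = box_a + box_b
straight_average = sum(stations) / len(stations)

box_means = [sum(box_a) / len(box_a), sum(box_b) / len(box_b)]
gridded_average = sum(box_means) / len(box_means)
```

Here the straight average is 0.40 while the area-weighted average is 0.625: with unevenly distributed stations, the two methods give materially different regional anomalies, which is why spatial interpolation or gridding is required.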
The assertion by Eschenbach that Karlen received ‘incomplete, incorrect and very misleading answers’ is not supported by the record, which showed two researchers (Trenberth and Jones) taking the time to answer repeated questions from an emeritus professor from Sweden. The final e-mail cited by Eschenbach was from Phil Jones to Karlen, demonstrating the completeness and helpfulness of these researchers in contrast to the assertions by the petitioners:
[Jones] Fennoscandia is just a small part of the NH. When I’m back next week, I’ll be able to calculate the boxes that encompass Fennoscandia, so you can compare with this region. As you’re aware Anders did lots of the update work in 2001-2002 and he included all the NORDKLIM data. I can send you a list of the Fennoscandian data if you want—either the sites used or their data as well.49
Therefore, it is clear that the analysis by Eschenbach was a flawed and inappropriate comparison, and that Trenberth and Jones, in contrast to the assertions of the petitioners, were attempting to be helpful in their answers.
Some petitioners claim that the NOAA and NASA temperature records are not independent from the HadCRUT temperature record because they share the same raw data. The Commonwealth of Virginia states, ‘If NASA and NOAA used the same raw data as CRU, and if they have reached conclusions similar to those of CRU, then a finding of data unreliability with regard to the CRU Data may indicate systemic problems with all three of the data sets upon which EPA relied in promulgating its Endangerment Finding.’ The Competitive Enterprise Institute cites D’Aleo and Watts (2010) as stating, ‘Since all three use the same data, all three have experienced the same degradation in data quality in recent years’ (referring to the station dropout issue). Pacific Legal Foundation refers to the EPA RTC document, stating, ‘Curiously, EPA admits that the NOAA and NASA datasets show similar trends and were developed at least in part through similar methodologies to those employed by CRU. By making this admission, EPA unintentionally leads us to the inexorable conclusion that, if the CRU Data are methodologically problematic, the NOAA and NASA data may share such methodological problems. See EPA’s Response to Public Comments, Vol. 2, 2-28’ and that therefore ‘any finding that the CRU Data and its conclusions are unreliable should lead to a review of the NOAA and NASA data sets as well.’
We agree that these three major temperature records rely on a large amount of raw data obtained from GHCN, although the HadCRUT temperature record, in particular, integrates additional data obtained from other, independent sources. However, the petitioners have not demonstrated any major flaws in the raw data. The allegations that station dropout was inappropriate or would bias the data are unfounded as described in Response 1-62 of this document. A number of analyses using different subsets of data have produced similar global temperature trends to the whole set of data. See the description of the Jones paper (Jones, 1994) in the previous section, which found that 100 well-placed stations were sufficient to determine a global temperature trend, the Menne et al. (2010) paper that showed that subsets of the U.S. data that had been rated more highly by surfacestations.org also led to the same trend as the whole set of data, and the Clear Climate Code project (Clear Climate Code, 2010) which demonstrated that the dropped stations and remaining stations have similar trends in overlapping periods.
The processing of the GHCN and other data by the three groups is independent. HadCRUT used significant manual processing of the data; NOAA uses an automated adjustment procedure that corrects for discontinuities; and NASA uses an automated adjustment procedure that changes trends in urban stations to match nearby rural stations. The three groups use different procedures to extrapolate temperatures to data-sparse regions, and different sources of sea surface temperature data. Therefore, the similarity of the final temperature trends produced by the three groups’ independent processing of the raw data provides additional confidence in those processing routines. Additionally, the GHCN data are available online, so it is possible for independent parties to test their own algorithms for producing global temperature records from the raw data. Petitioners and others have not done so, and instead rely on arguments and claims, discussed above, that do not support the broad conclusions they draw. Finally, the temperature trends derived from these datasets are consistent with modeling, satellite data, and various observational indicators, further supporting their reliability.
Peabody Energy, the Southeastern Legal Foundation (in their Third Amendment to Petition), and the Competitive Enterprise Institute all argue that the ‘similar trends’ between the three major surface temperature records are not sufficient if the three datasets are not independent and are, in the words of the Competitive Enterprise Institute, ‘extensively compromised.’ Peabody Energy states:
EPA says that the NOAA and NASA surface temperature records show ‘similar trends’ as the HadCRUT3 data set and hence the IPCC’s and EPA’s reliance on the HadCRUT3 records is not unreasonable.246 This contention does not hold up. The HadCRUT3 data was relied on extensively by the IPCC and in numerous studies cited by both the EPA and all of the ‘assessment literature’ that EPA cites. It is not enough for EPA to say that the HadCRUT3 data reveals ‘similar trends’ as other data. The specific amount of warming that occurred during the last several decades of the 20th century, and how that warming compares to other periods in the temperature record, was obviously critically important in all of these studies. Many of these studies involved complex statistical analyses of the underlying data. Those studies may have yielded different results had they used the NASA or NOAA data instead of the HadCRUT3 data.
Moreover, there are now significant questions about whether the NASA and NOAA temperature records are truly independent of the HadCRUT3 data set. In this regard, the investigation of the Science and Technology Committee of the United Kingdom Parliament that was recently initiated to investigate CRU’s conduct includes investigation into the question of ‘How independent are the other two international data sets’247 This is a critically important question given EPA’s view, based on the IPCC, that the rate and extent of warming in the last several decades of the 20th century is the key ‘fingerprint’ of an anthropogenic GHG cause. If that warming has been overstated in all three data sets, then the ‘fingerprint’ disappears, or grows more faint even if all three data sets show ‘similar’ warming.
As EPA shows throughout these responses, the petitioners have not demonstrated that there are flaws in the global temperature datasets that would call into question the clear warming trends. In addition, these issues were raised during notice and comment on the Endangerment Findings, and EPA responded in Volume 2 of the RTC document. The full statement from RTC 2-29 was:
EPA has concluded that the three primary global surface temperature records (NOAA, NASA, and HadCRUT) are reliable and credible. We note that these datasets have been widely reviewed and assessed within the climate change research community, and that while they are distinct and use different approaches, there is good agreement in the overall trend (as described in response 2-28).
EPA did not state that the similar trend was the only basis for concluding that the HadCRUT temperature record was reliable. EPA also recognizes that each temperature record includes possible errors as discussed both in the RTC and in this RTP Document. However, potential errors are reflected in the uncertainty assessments developed for these datasets, such as the Brohan et al. (2006) figure reproduced in Subsection 1.3.2, and are small compared with the overall trend. The petitioner provides no evidence that any study would have yielded substantially different conclusions had it used the NOAA or NASA temperature records instead of HadCRUT. Indeed, many studies do use these other datasets.
Additionally, the British House of Commons Science and Technology Committee report (2010), referred to by the petitioner, has been released. With respect to the independence of these datasets, it states:
48. In its memorandum UEA explained the differences between the methodologies used by three basic datasets for land areas of the world, NOAA, NASA and CRU/UEA:
All these datasets rely on primary observations recorded by NMSs [National Meteorological Services] across the globe.
GISS and NCDC each use at least 7,200 stations. CRUTEM3 uses fewer. In CRUTEM3, each monthly temperature value is expressed as a departure from the average for the base period 1961—90. This ‘anomaly method’ of expressing temperature records demands an adequate amount of data for the base period; this limitation reduces the number of stations used by CRUTEM3 to 4,348 (from the dataset total of 5,121). The latest NCDC analysis [...] has now moved to the ‘anomaly method’ though with different refinements from those of CRU.
NCDC and GISS use different approaches to the problem of ‘absolute temperature’ from those of CRUTEM3. The homogeneity procedures undertaken by GISS and NCDC are completely different from those adopted for CRUTEM3. NCDC has an automated adjustment procedure [...], whilst GISS additionally makes allowances for urbanization effects at some stations.
49. In our call for evidence we asked for submissions on the question of how independent the other international data sets are. We have established to the extent that a limited inquiry of this nature can, that the NCDC/NOAA and GISS/NASA data sets measuring temperature changes on land and at sea have arrived at similar conclusions using similar data to that used by CRU, but using independently devised methodologies. We have further identified that there are two other data sets (University of Alabama and Remote Sensing Systems), using satellite observations that use entirely different data than that used by CRU. These also confirm the findings of the CRU work. We therefore conclude that there is independent verification, through the use of other methodologies and other sources of data, of the results and conclusions of the Climate Research Unit at the University of East Anglia.
50. The fact that all the datasets show broadly the same sort of course of instrumental temperature change since the nineteenth century compared to today was why Professor John Beddington, the Government Chief Scientific Adviser, had the confidence to say that human induced global warming was, in terms of the evidence to support that hypothesis, ‘unchallengeable’:
I think in terms of datasets, of the way in which data is analysed, there will always be some degree of uncertainty but when you get a series of fundamentally different analyses on the basic data and they come up with similar conclusions, you get a [...] great deal of certainty coming out of it.
51. Even if the data that CRU used were not publicly available—which they mostly are—or the methods not published—which they have been—its published results would still be credible: the results from CRU agree with those drawn from other international data sets; in other words, the analyses have been repeated and the conclusions have been verified.
Therefore, EPA finds that the three major global surface temperature records use independent methodologies, though they do share a large quantity of the same raw data. Because the results are robust to different choices of and adjustments to the raw data, and because there are additional data (from satellites, oceans, ice sheets, ecosystem shifts, and so forth) that are consistent with these surface temperature records, there is great confidence in the major conclusions based on these records of historical temperature change (within the acknowledged uncertainty limits). Petitioners inappropriately look at the overlap of underlying raw data in isolation, and draw conclusions about the resulting temperature records based on this. EPA’s confidence in the results shown by the various global surface temperature records is based on considering all of the evidence, including the differences and similarities in the analyses performed by NOAA, NASA, and CRU, and the consistency with other records of warming, as discussed above.
The State of Texas quotes EPA as stating that ‘The NOAA global surface temperature dataset (Smith et al., 2008) employs the same methodology for addressing urbanization as is used in the HadCRUT’ and draws the conclusion that ‘two of the three temperature sets that EPA relied on to reach its Endangerment Finding were homogenized based on CRU mathematical models.’
EPA has already addressed these issues in Responses 1-76 and 1-77 above, as well as other UHI issues in Subsection 1.3.4. While NOAA does use urbanization uncertainty estimates based on the methodology developed by CRU, that urbanization estimate is only a small part of the full analysis, and not used for the U.S. portion of the temperature record. Therefore, it is not correct to state that two of the three temperature sets that EPA relied on were homogenized based on CRU mathematical models.
The Coalition for Responsible Regulation claims that there was another type of ‘manipulation of data to produce the outcome IPCC authors wanted’ related to the temperature data. In support of their assertion that ‘Emails contained in the Disclosures confirm certain scientists’ efforts to ‘artificially adjust’ data through active collaboration,’ the Coalition for Responsible Regulation provides the following evidence:
Sept. 27, 2009 (Tom Wigley, of the University Corporation for Atmospheric Research, (UCAR) strategizing ‘to partly explain the 1940s warming blip,’ and noting that ‘if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean - but we’d still have to explain the land blip . . . It would be good to remove at least part of the 1940s blip, but we are still left with `why the blip’.’)
We first note that this quote is clearly not related to the IPCC AR4. In the e-mail, Wigley explicitly states that he was working on a report that he was ‘writing for EPRI’ (the Electric Power Research Institute).50 There is no evidence that the analysis was performed or that it was used in IPCC reports, or how it was a ‘manipulation of data to produce the outcome IPCC authors wanted.’
In addition, there is a known issue with the sea surface data in the 1940s that was recently highlighted by Thompson et al. (2008). This is based on the shift from recording sea surface temperature mainly by U.S. vessels (already using modern engine intake measurements) to a higher percentage of UK vessels (still using the older bucket measurements) after the end of World War II. The e-mail from Wigley does not make clear whether this is or is not the attempt of a modeler to assess potential implications of correcting for known biases in the instrumental ocean record in the 1940s. The language from Wigley suggests that he is interested in an explanation for the existence of a ‘land blip’ in addition to the ocean blip. This makes sense in the context of the Thompson et al. paper. This paper explains the ocean blip as being the result of an artificial bias of shifting to U.S. ships during World War II, but this explanation of bias would not explain a blip in land temperatures. Therefore, a full explanation of the historical temperature record would require some other rationale for the ‘land blip,’ whether an actual temperature event due to natural variability, or another artificial bias due to some other issue involved with temperature records during World War II. It is normal science to work to understand the minor variations or wiggles in the temperature record. In addition, Wigley’s language does not appear to be the language of someone trying to ‘artificially adjust data’ for a deceitful purpose. Note that Wigley specifically is not suggesting any adjustments to the land record of that time period. Therefore, the petitioners’ evidence is too vague and uninformative to support a claim of improper data manipulation particularly in the context of IPCC assessments.
The pressure to conform to a perceived ‘global warming’ consensus also led to manipulation of data to produce the outcome IPCC authors wanted. Emails and computer code contained in the Disclosures reveal that scientists at CRU manipulated their temperature databases and their findings with undisclosed, unverified and arbitrary adjustments. See supra, Section III(A) and (B). Emails contained in the Disclosures confirm certain scientists’ efforts to ‘artificially adjust’ data through active collaboration to ‘reduce the positive slope,’ ‘reduce the ocean blip,’ and ‘contain the medieval warm period.’ See, e.g., Exhibit A, 843161829.txt, Sept. 19, 1996 (Gary Funkhouser, of the University of Arizona, writing to Briffa, "I really wish I could be more positive about the Kyrgyzstan material, but I swear I pulled every trick out of my sleeve trying to milk something out of that");51 Exhibit A, 1163715685.txt, Nov. 16, 2006 (Briffa noting that ‘the PC1 time series in the Mann et al. analysis was adjusted to reduce the positive slope in the last 150 years,’ and that ‘this adjustment was arbitrary and the link between Bristlecone pine growth and CO2 is, at the very least, arguable.’);52 Exhibit A, 1254108338.txt, Sept. 27, 2009 (Tom Wigley, of the University Corporation for Atmospheric Research, (UCAR) strategizing "to partly explain the 1940s warming blip,’ and noting that ‘if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean - but we’d still have to explain the land blip . . . It would be good to remove at least part of the 1940s blip, but we are still left with ‘why the blip’.’);53 Exhibit A, 1059664704.txt, July, 31, 2003 (Mann sending calibration residuals to Tim Osborn, at CRU, and acknowledging that some are ‘pretty red,’ and asking Osborn not to ‘pass this along to others without checking w/ me first. 
This is the sort of ‘dirty laundry’ one doesn’t want to fall into the hands of those who might potentially try to distort things...’);54 Exhibit A, 0939154709.txt, Oct. 15, 1999 (Osborn discussing how data are truncated to stop an apparent cooling trend that appears in the results);55 Exhibit A, 1054736277.txt, June 4, 2003 (Mann noting ‘it would be nice to try to ‘contain’ the putative ‘MWP’ [Medieval Warm Period]’).56
EPA responds to these allegations of data manipulation in Subsection 1.1.4 of this Response to Petitions document. These e-mails are in general quoted out of context and in many cases are examples of scientists following appropriate procedures of not using data that doesn’t meet proper standards (e.g., Funkhouser) or testing the impacts of hypotheses (e.g., Wigley). Only two quotes were not addressed in those previous sections, and these two quotes are addressed here.
First, with regard to the calibration residuals mentioned in the e-mail from Mann to Osborn, Mann also states in the same e-mail, ‘In any case, the incremental changes are modest after 1600--its pretty clear that key predictors drop out before AD 1600, hence the redness of the residuals, and the notably larger uncertainties farther back...’. This quote indicates that the redness of the residuals (a term related to the type of noise in the signal: white noise is perfectly random, while red noise indicates a ‘random walk’ in which the signal can drift over time) is already reflected in the fact that the uncertainties presented in the paper are ‘notably larger’ before 1600. This is also consistent with the general conclusions of the NRC (2006) and the IPCC (2007c) assessments. In fact, the paper discussed in the e-mail (MBH1999) (Mann et al., 1999) explicitly discusses red noise in the context of reconstructions, though it does not present the specific data that Mann is sharing with Osborn. In other words, the e-mail quote is consistent with the published literature.
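The white-noise/red-noise distinction invoked here can be illustrated with a short simulation (a hypothetical sketch, not an analysis of the actual calibration residuals): red noise is the running sum of white noise, so it is strongly autocorrelated and can drift over time.

```python
import random
import statistics

random.seed(42)

n = 1000
# White noise: independent, identically distributed draws (no memory).
white = [random.gauss(0, 1) for _ in range(n)]

# Red noise: a random walk -- each value is the previous value plus a
# fresh white-noise shock, so the series can drift away over time.
red = []
level = 0.0
for w in white:
    level += w
    red.append(level)

def lag1_autocorr(x):
    # Sample lag-1 autocorrelation: covariance of the series with a
    # one-step-shifted copy of itself, normalized by the variance.
    mean = statistics.fmean(x)
    num = sum((a - mean) * (b - mean) for a, b in zip(x, x[1:]))
    den = sum((a - mean) ** 2 for a in x)
    return num / den

# White noise has near-zero lag-1 autocorrelation; red noise is strongly
# autocorrelated, which is why 'red' residuals signal that a
# reconstruction can wander away from the target series.
print(round(lag1_autocorr(white), 2))
print(round(lag1_autocorr(red), 2))
```

The drift of the red series is what larger pre-1600 uncertainty bounds are meant to capture.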
The Osborn e-mail discussing data truncation ‘to stop an apparent cooling trend’ is a reference to the divergence issue discussed in Section 1.1 on Paleoclimate Reconstructions and the ‘divergence’ Issue, and as discussed in that section the divergence issue was well known in the literature, and therefore this e-mail is not evidence of inappropriate data manipulation.
The observed warming of the climate is an important factor in the Endangerment Finding. The Finding states, ‘The global surface temperature record relies on three major global temperature datasets developed by NOAA, NASA, and the United Kingdom’s Hadley Center. All three show an unambiguous warming trend over the last 100 years, with the greatest warming occurring over the past 30 years.’ The Finding notes that this warming is observed in surface and ocean temperatures, melting of snow and ice, rising sea levels, and other evidence as well. Petitioners do not attempt to review the body of scientific evidence and show that the conclusion on warming is inaccurate or more uncertain than indicated. Instead they contest certain individual aspects or details of the surface temperature evidence, and in general raise objections that fail to take into account the context and reason behind various aspects of the surface temperature record.
Many of the issues raised by the petitioners are not new, and have been addressed within the TSD and RTC documents. Some objections fail to distinguish between assessments of absolute temperature and assessments of changes in temperature. The petitioners, and the researchers on whom the petitioners rely, repeatedly use incorrect methodology to make their claims. They average temperatures across stations without using anomaly-based methods, which properly track the change in temperature rather than the absolute temperature, and they combine stations without methods that account for their geographic distribution. These basic and fundamental flaws underlie many of the petitioners’ arguments.
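The anomaly-method point can be made concrete with a toy two-station example (the station values below are hypothetical numbers chosen for illustration): averaging absolute temperatures produces a spurious jump when a cold station drops out of the network, while averaging each station's departures from its own base-period mean does not.

```python
# Hypothetical two-station network: a cold mountain site and a warm
# valley site, both warming at 0.05 degC/yr. The mountain station
# stops reporting after year 4.
years = range(10)
mountain = {y: 2.0 + 0.05 * y for y in years if y < 5}
valley = {y: 12.0 + 0.05 * y for y in years}

def absolute_mean(year):
    # Naive averaging of absolute temperatures: the network mean jumps
    # by several degrees when the cold station drops out, even though
    # neither site warmed abruptly.
    vals = [valley[year]] + ([mountain[year]] if year in mountain else [])
    return sum(vals) / len(vals)

# Anomaly method: express each station as a departure from its own
# base-period mean (here years 0-4) before averaging across stations.
base_mountain = sum(mountain.values()) / len(mountain)
base_valley = sum(valley[y] for y in range(5)) / 5

def anomaly_mean(year):
    vals = [valley[year] - base_valley]
    if year in mountain:
        vals.append(mountain[year] - base_mountain)
    return sum(vals) / len(vals)

print(absolute_mean(4), absolute_mean(5))  # spurious jump of ~5 degC
print(anomaly_mean(4), anomaly_mean(5))    # smooth ~0.05 degC/yr trend
```

This is the distinction between tracking change in temperature and averaging absolute temperature; real analyses such as CRUTEM3 also weight stations by geographic coverage, which this sketch omits.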
Other objections misconstrue the underlying studies. In several cases petitioners object that various adjustments to the raw data have an effect of changing the data, but they fail to consider that adjustments to the data are appropriately performed for the purpose of having an effect: to account for circumstances that otherwise would interfere with accurately isolating and determining a trend in surface temperature. For example, adjustments are made to account for the effect of a change in the measurement devices, so that such changes do not interfere with accurately assessing any change in the actual surface temperature. Petitioners fail to address the reasons behind the adjustments and fail to explain or show that the types of adjustments are not appropriate. Petitioners also fail to account for the various analyses which generally show that the same temperature trends result on a global scale whether the adjusted or unadjusted temperature data are used.
Likewise petitioners fail to account for the valid data-driven reasons that have led to a reduction over time in the number of weather stations used for the surface temperature analysis, fail to explain or show that the reductions have biased the temperature record, and overstate the magnitude of the reduction in some cases. Petitioners fail to account for the analyses that show even a limited number of stations can provide a robust global temperature trend, and the same trends are indicated using only those stations that were not reporting data after 1992 or the stations that were reporting data the entire time.
Petitioners fail to explain why consistency among all three separate surface temperature records, as well as consistency between the three surface temperature records and other evidence of warming, such as satellite data, ocean temperature data, and physical evidence of the effects of warming, should not be seen as confirmation of the evidence of warming. Petitioners instead appear to assume that all of this evidence must be wrong because they allege some of it is, and implicitly suggest EPA do the same.
Overall, petitioners’ arguments about the portions of the evidence they focus on fail to withstand scrutiny. They also fail to evaluate and take into account the entire body of evidence before EPA. The arguments made by petitioners do not change EPA’s views on the validity and reliability of the three major surface temperature records, or the overall body of evidence supporting the basic conclusion that there is an unambiguous warming trend over the last 100 years, with the greatest warming occurring over the past 30 years.
Several petitioners identify new scientific studies and data published since the Endangerment Finding was finalized, which they claim require EPA to reconsider our Finding. Some petitioners also argue that EPA ignored or misinterpreted scientific data that were significant and available when the Finding was made.
EPA has reviewed the new literature provided by petitioners. In many cases, we note that the issues raised by the petitioners are not new, but were in fact raised and addressed during notice and comment on the Endangerment Findings. In other cases, petitioners have misinterpreted or misrepresented the meaning and significance of the scientific literature, findings, and data they cite. Finally, there are instances where the petitioners have failed to take into account relevant other new studies in making their arguments.
The following issues are addressed in this section:
- Implications of a new study on stratospheric water vapor.
- Implications of material concerning whether CO2 is well mixed in the atmosphere and whether the airborne fraction of CO2 has changed.
- Implications of new tropical cyclone studies.
- Implications of new data on observational snow cover trends.
- A claim that EPA ignored a satellite dataset.
After reviewing all of these issues, EPA concludes that the studies, data, and arguments presented by petitioners do not change the solid scientific basis for the Administrator’s Finding.
Several petitioners (the Competitive Enterprise Institute, Coalition for Responsible Regulation, Peabody Energy) argue that a new study published by Solomon et al. (2010), ‘Contributions of Stratospheric Water Vapor to Decadal Changes in the Rate of Global Warming,’ demonstrates that scientific understanding of the causes of climate change is incomplete. This study finds that stratospheric water vapor concentrations decreased by about 10% after 2000, and that this decrease slowed the rate of increase in global surface temperature over the period 2000–2009 by about 25% compared with that which would have occurred due only to CO2 and other GHGs. According to the Coalition for Responsible Regulation, the study’s findings ‘indicate that human emissions may have a considerably smaller role in climate change than previously thought.’ Peabody Energy concludes that the study ‘provides another example of how unsettled global warming science actually is.’
This study improves our understanding of how changes in the composition of the stratosphere can act to enhance or dampen GHG-induced climate change. Contrary to the petitioners’ claims, it does not demonstrate that anthropogenic GHG emissions play a smaller role in climate change. Just as reductions in stratospheric water vapor might have acted to decrease warming from 2000 to 2009, the study finds that increases in stratospheric water might have acted to steepen the observed warming trend in the 1990s.
The study’s authors make no claim that global warming science is ‘unsettled.’ In fact, they introduce the study with the following statement: ‘Over the past century, global average surface temperatures have warmed by about 0.75°C. Much of the warming occurred in the past half-century, over which the average decadal rate of change was about 0.13°C, largely due to anthropogenic increases in well-mixed greenhouse gases.’
The study improves scientific understanding of effects of changes in atmospheric composition on climatic observations, and it should lead to further improvements in the representation of stratospheric processes in models, allowing more refined projections of short-term climate behavior. As the study concludes (Solomon et al., 2010): ‘This work highlights the importance of using observations to evaluate the effect of stratospheric water vapor on decadal rates of warming, and it also illuminates the need for further observations and a closer examination of the representation of stratospheric water vapor changes in climate models aimed at interpreting decadal changes and for future projections.’
This study does not have meaningful implications for the Endangerment Finding, as it does not cast doubt on the fundamental role that GHGs have in climate change. As Solomon told the New York Times (Bhanoo, 2010):
This [study] doesn’t alter the fundamental conclusion that the world has warmed and that most of that warming has to do with greenhouse gas emissions caused by man.
As in almost all areas of scientific endeavor, further research often leads to greater understanding, but contrary to the petitioners’ claim, an advance in knowledge does not per se mean the prior knowledge base was any more or less certain or settled. The advance in knowledge needs to be put in context to understand its meaning, and petitioners fail to do that. In this case, the study increases our knowledge of other factors that can amplify or reduce the warming effect of GHGs, but it does not change in any way the warming trend that is expected from the GHGs themselves.
One petitioner (the Southeastern Legal Foundation) refers to literature that it claims alters our understanding of the distribution of CO2 in the atmosphere. The Southeastern Legal Foundation challenges the long-held conclusion that CO2 is well mixed in the atmosphere (i.e., that its concentration is about the same everywhere across the globe). It cites a NASA press release (NASA, 2009c, dated December 15, 2009), which describes findings drawn from data taken from its Atmospheric Infrared Sounder (AIRS) instrument on the Aqua spacecraft. In this press release, NASA reports:
data have shown that, contrary to prior assumptions, carbon dioxide is not well mixed in the troposphere, but is rather ‘lumpy.’ Until now, models of carbon dioxide transport have assumed its distribution was uniform.
The Southeastern Legal Foundation states that the ‘AIRS satellite has collected data refuting the well-mixed assumption’ and that the ‘well-mixed assumption is indispensible to AGW model of the CO2 cycle.’
We agree with NASA that on shorter time scales, there is some variability in the distribution of CO2 and that it is not uniformly mixed. This has long been known. For example, Seinfeld and Pandis (1998) discuss hemispheric and seasonal differences in CO2 distribution. In fact, we acknowledge this behavior in the RTC (2-8), noting that there is a period of time or a lag before long-lived gases become well mixed in the atmosphere: ‘all long-lived gases become well-mixed at large distances from their sources or sinks over a period of one to two years.’ It is, therefore, not surprising that there may be some variability in CO2 concentrations averaged over a month as shown in this NASA image (referenced by the Southeastern Legal Foundation, NASA, 2009b):
But even this variability is small in magnitude, only on the order of 1.5 to 2% at most across this highly sensitive concentration gradient.
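The magnitude of this point follows from simple arithmetic (the background and spread values below are illustrative assumptions, not the AIRS measurements themselves):

```python
# A spread of a few ppm on a background of a few hundred ppm is a
# variation of only 1-2%. The 385 ppm background and 6 ppm spread
# here are illustrative assumptions for the sake of the arithmetic.
background_ppm = 385.0
spread_ppm = 6.0
variability_pct = 100 * spread_ppm / background_ppm
print(round(variability_pct, 1))  # roughly 1.6 percent
```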
CO2’s variability is especially small compared with GHGs that are traditionally described as non-well-mixed, such as tropospheric ozone, the concentration of which typically varies by a factor of 10 as shown in the July 2009 graphic from NASA below (NASA, 2009d), and water vapor, as also shown below from May 2009 in a graphic from NASA (NASA, 2009a):
The relatively low spatial variability in CO2 shown in the AIRS imagery would be even smaller averaged over a longer time scale. While NASA’s press release does not distinguish between short and long time scales, there is no ambiguity that CO2 is well mixed on time scales relevant to the study of climate change—decades—as documented in the assessment literature and discussed by EPA. In referring to the group of well-mixed GHGs, including CO2, methane, and nitrous oxide, the IPCC (Solomon et al., 2007) states:
Because these gases are long lived, they become well mixed throughout the atmosphere much faster than they are removed and their global concentrations can be accurately estimated from data at a few locations.
NASA’s statement that researchers had not previously assumed CO2 to be ‘lumpy,’ as the AIRS data visualizations show it to be, serves to highlight the improvements in resolution of small-scale CO2 variability made possible by the AIRS data. The data allow scientists to see the very limited variability in CO2 in much finer detail than before. The fact that this finer detail has not yet been incorporated into CO2 transport models is not relevant for the projections of climate change on which EPA’s Endangerment Finding relied. For further discussion about the relevance of this NASA finding for computer models, see Response 1-84.
Background: The Southeastern Legal Foundation raises an issue concerning our understanding of how CO2 in the atmosphere cycles through the Earth system. As background, some of the CO2 that is emitted to the atmosphere remains in the atmosphere, and some of it is absorbed by other parts of the Earth system, for example, the oceans and terrestrial biosphere (vegetation and trees) sometimes referred to as CO2 ‘sinks.’ The portion of CO2 that remains in the air is known as the ‘airborne fraction.’ If the airborne fraction of CO2 increases, the rate of global warming would likely increase. EPA states in Section 6(a) of the TSD (U.S. EPA, 2009):
for future projections, Meehl et al. (2007) found ‘unanimous agreement among the coupled climate carbon cycle models driven by emission scenarios run so far that future climate change would reduce the efficiency of the Earth system (land and ocean) to absorb anthropogenic CO2. As a result, an increasingly large fraction of anthropogenic CO2 would stay airborne in the atmosphere under a warmer climate.’
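The airborne fraction described above is simply the ratio of the annual atmospheric CO2 increase to annual emissions. A minimal sketch, using the standard conversion of roughly 2.13 GtC per ppm of atmospheric CO2 and illustrative (hypothetical) emission figures:

```python
# Airborne fraction = (annual atmospheric CO2 increase) / (annual CO2
# emissions), both in the same mass units. Roughly 2.13 GtC of carbon
# corresponds to a 1 ppm rise in atmospheric CO2 concentration.
GTC_PER_PPM = 2.13

def airborne_fraction(emissions_gtc, atm_increase_ppm):
    return (atm_increase_ppm * GTC_PER_PPM) / emissions_gtc

# Illustrative year (hypothetical numbers): 10 GtC emitted while the
# atmosphere rises 2 ppm, so a bit over 40% of the emissions stayed
# airborne; the remainder was taken up by ocean and land sinks.
print(round(airborne_fraction(10.0, 2.0), 2))
```

If the sinks weaken as the climate warms, the same emissions produce a larger atmospheric increase, and this ratio rises, which is the mechanism behind the Meehl et al. (2007) projection quoted above.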
Comment: The Southeastern Legal Foundation disputes the IPCC’s projection cited by EPA that the airborne fraction of CO2 will increase and disputes whether CO2 is well-mixed. Southeastern Legal Foundation claims the airborne fraction will not increase because the airborne fraction metric has been relatively constant since 1850 according to a study the Southeastern Legal Foundation cites by Knorr (2009). Specifically, Knorr (2009) finds:
It is shown that with those uncertainties, the trend in the airborne fraction since 1850 has been close to and not significantly different from zero.
The Southeastern Legal Foundation concludes that the Knorr study:
gravely undermines the assumption that CO2 is long-lived and well-mixed and by the same token dovetails with NASA’s observations that CO2 is in fact not well-mixed, and with many experimental proofs that CO2 has a short residence time.
The Southeastern Legal Foundation presents no new evidence to challenge the IPCC’s projections, as summarized by EPA. The Knorr (2009) finding is not new. Section 6(a) of the TSD, referring to the assessment literature (IPCC), describes exactly the same conclusion as found in Knorr:
Historically, the airborne fraction of CO2 has shown no long term trend (Denman et al., 2007).
However, there are several scientific problems with the Southeastern Legal Foundation’s conclusion that the Knorr study ‘gravely undermines the assumption that CO2 is long-lived and well-mixed.’ First, by itself a trend in the airborne fraction of a gas provides no information about its atmospheric lifetime or whether it is well mixed, and the Southeastern Legal Foundation offers no explanation to the contrary. Second, as discussed in Response 1-82 above, NASA’s observations do not change the finding that CO2 is well mixed on time scales relevant to the study of climate change. Third, the Southeastern Legal Foundation’s reference to ‘proofs’ that CO2 has a short residence time, citing work by Segalstad (e.g., Segalstad, 1997), was responded to in Volume 2 of EPA’s RTCs in the endangerment record. After reviewing this public comment, EPA determined (in RTC 2-3) that Segalstad’s work ‘does not address the lifetime of a change in atmospheric concentration of CO2, but rather the lifetime in the atmosphere of an individual molecule of CO2’ and that these ‘are two different concepts.’ This confusion of residence time and the adjustment time in the atmosphere is discussed at length in O’Neill (1997).
In addition, the latest scientific research suggests that the airborne CO2 fraction is likely increasing—consistent with future projections. A recent (December 2009) study (Le Quéré et al., 2009) published in Nature Geoscience updates trends in the sources and sinks of CO2 and finds: ‘In the past 50 years, the fraction of CO2 emissions that remains in the atmosphere each year has likely increased, from about 40% to 45%, and models suggest that this trend was caused by a decrease in the uptake of CO2 by the carbon sinks in response to climate change and variability.’
The Southeastern Legal Foundation does not acknowledge or take into account this new study.
The Southeastern Legal Foundation suggests that the current computer models on which EPA relies are based on assumptions that are challenged by the NASA finding regarding CO2 not being well mixed and the Knorr study discussed above (in Responses 1-82 and 1-83). It concludes that ‘[t]hese results contradict the IPCC’s version of the carbon cycle, on which the climate models and claims of catastrophic AGW are based,’ and that EPA should, therefore, reconsider its Endangerment Finding.
As described above (in Response 1-82), the NASA finding does not refute the well-established conclusion that CO2 is well mixed on time scales relevant to the study of climate change. Moreover, the primary finding of the Knorr (2009) study cited by the Southeastern Legal Foundation pertaining to trends in the CO2 airborne fraction is not new and was already reflected in EPA’s TSD. The information presented by the Southeastern Legal Foundation does not change or undermine either the view that GHGs are long lived and well mixed on the time scales relevant to climate change projections, or the basis for scientific projections suggesting the airborne fraction of CO2 will increase as the climate warms. And, in fact, another peer-reviewed study (Le Quéré et al., 2009) presents contrasting results, finding an increase in the airborne fraction in the past 50 years.
Significantly, the same NASA press release that the Southeastern Legal Foundation cites concerning the short-term ‘lumpy’ distribution of CO2 provides validation for the climate model results cited by EPA in the TSD, which project substantial CO2-induced warming (NASA, 2009c):
‘AIRS temperature and water vapor observations have corroborated climate model predictions that the warming of our climate produced as carbon dioxide levels rise will be greatly exacerbated -- in fact, more than doubled -- by water vapor,’ said Andrew Dessler, a climate scientist at Texas A&M University, College Station, Texas.
Dessler explained that most of the warming caused by carbon dioxide does not come directly from carbon dioxide, but from effects known as feedbacks. Water vapor is a particularly important feedback. As the climate warms, the atmosphere becomes more humid. Since water is a greenhouse gas, it serves as a powerful positive feedback to the climate system, amplifying the initial warming [caused by CO2]. AIRS measurements of water vapor reveal that water greatly amplifies warming caused by increased levels of carbon dioxide. Comparisons of AIRS data with models and re-analyses are in excellent agreement.
In summary, the Southeastern Legal Foundation’s evidence and arguments do not change the validity of EPA’s characterization of CO2 and/or the carbon cycle, or their representation in climate models whose results were cited in support of the Endangerment Finding.
Two petitioners (the Competitive Enterprise Institute and the Southeastern Legal Foundation) present new studies that provide new and/or updated information on (1) historical trends in tropical cyclone activity, including frequency and intensity, and (2) whether these trends can be attributed to anthropogenic GHGs. The petitioners claim that the conclusions of these studies cast doubt on EPA’s characterization of these issues in the Endangerment Finding and therefore require a reconsideration of the Finding.
The Competitive Enterprise Institute provides a study by Hatton (2010) titled, ‘Has the Intensity and Frequency of Hurricanes Increased?’ This non-peer-reviewed study analyzes trends in hurricane data from 1999 to 2009 tested against the period from 1946 to 2009. It finds ‘that hurricane intensity and frequency is significantly higher in this period [1999—2009] in the North Atlantic. However in the Eastern Pacific, Western Pacific, and Northern and Southern Indian oceans, there is no evidence of significant change.’ The Competitive Enterprise Institute states that this finding ‘undermines EPA’s own claims regarding the supposedly increased risk of storms and hurricanes.’
Southeastern Legal Foundation presents a peer-reviewed study from Knutson et al. (2010). This study, developed under the auspices of the WMO, evaluates the current state of knowledge of tropical cyclones and climate change. The Southeastern Legal Foundation highlights two key findings from the study:
1) ‘In terms of global tropical cyclone frequency, it was concluded that there was no significant change in global tropical storm or hurricane numbers [i.e. frequency] from 1970 to 2004, nor any significant change in hurricane numbers [i.e. frequency] for any individual basin over that period, except for the Atlantic...’
2) ‘...we cannot at this time conclusively identify anthropogenic signals in past tropical cyclone data.’
On the basis of these findings, the Southeastern Legal Foundation argues, ‘EPA should reconsider its conclusions that GHGs endanger human health and welfare.’
We have reviewed both of the studies and considered their implications for EPA’s characterization of trends in tropical cyclone activity, including frequency and intensity, and our ability to attribute the trends to anthropogenic GHGs.
With respect to observed trends in frequency and intensity of tropical cyclones, the findings in these two studies are, in fact, consistent with EPA’s TSD and the Endangerment Finding. The TSD states in Section 4(k), ‘Kunkel et al. (2008) refer to a study that was not able to corroborate the presence of upward intensity trends over the last two decades in ocean basins other than the North Atlantic’ and ‘there is no clear trend in the annual numbers [i.e. frequency] of tropical cyclones [globally]...’. Specific to trends in the North Atlantic, the TSD states in Section 4(l): ‘IPCC (2007b) and Karl et al. (2009) report observational evidence of an increase in intense tropical cyclone activity in the North Atlantic (where cyclones develop that affect the U.S. East and Gulf Coasts) since about 1970, correlated with increases of tropical sea surface temperatures of nearly 2°F (1°C) in the main Atlantic hurricane development region (Karl et al., 2009). The strongest hurricanes (Category 4 and 5) have, in particular, increased in intensity (Karl et al., 2009).’
EPA’s characterization of whether any trends in tropical cyclone activity (either frequency or intensity) could be attributed to anthropogenic GHGs, drawn from the IPCC, does differ slightly from the Knutson et al. study (the Hatton study does not address attribution). Knutson found no conclusive evidence of an anthropogenic signal in observed hurricane activity to date, and like Knutson, the TSD did not claim any such conclusive evidence. Instead, the TSD states in Section 5(a) that it is ‘more likely than not that anthropogenic influence has contributed to increase in the frequency in the most intense storms.’ After that statement, EPA provided the following caveat, reflecting the uncertainty in the science regarding attribution, stating:
the IPCC (Hegerl et al., 2007) cautions that detection and attribution of observed changes in hurricane intensity or frequency due to external influences remains difficult because of deficiencies in theoretical understanding of tropical cyclones, their modeling, and their long-term monitoring.
Importantly, given this uncertainty, EPA did not refer to any anthropogenic signal in historical tropical cyclone trends in the Findings themselves (nor in the Executive Summary of the TSD) and instead focused only on the limited evidence of a directional trend in the Atlantic, and more importantly on increasing future risks (Section I.A. of the Findings):
The conclusion in the assessment literature that there is the potential for hurricanes to become more intense (and even some evidence that Atlantic hurricanes have already become more intense) reinforces the judgment that coastal communities are now endangered by human-induced climate change, and may face substantially greater risk in the future.
The risk of stronger hurricanes in the future is echoed by the Knutson et al. (2010) study, which finds:
‘future projections based on theory and high-resolution dynamical models consistently indicate that greenhouse warming will cause the globally averaged intensity of tropical cyclones to shift towards stronger storms’
In summary, the new studies presented by petitioners are consistent with EPA’s characterization of observed tropical cyclone trends. The Knutson et al. (2010) study reaches a slightly revised conclusion on an anthropogenic GHG signal to date, but the revision is not meaningful considering EPA’s qualified discussion of the issue and the similarities in EPA’s and Knutson et al.’s characterization of future projections.
The Southeastern Legal Foundation discusses two studies (Comiso and Nishio, 2008; Turner et al., 2009), which it alleges find that the observed increase in Antarctic sea ice is statistically significant. The petitioner argues that this finding is contrary to information in the record for EPA’s Endangerment Finding, as well as information contained in the IPCC AR4. It states: ‘If the increase [in Antarctic sea ice] is statistically significant, then it is yet another empirical refutation of the IPCC models, which predict a loss of sea ice in the Antarctic.’ It ultimately concludes:
EPA has contradicted the literature and mischaracterized the statistical significance of the trend in Antarctic sea ice extent increase. It appears that this mischaracterization permits EPA to claim that the model projections have not been refuted and remain ‘robust’ even though any fair analysis shows they have been at best undermined and at worst falsified.
EPA has reviewed the petitioner’s submission of Comiso and Nishio (2008) and Turner et al. (2009) and finds that it was not impracticable to raise the objection during the public comment period and that the reasons for the objection did not arise between June 24, 2009, and February 16, 2010. Nonetheless, we have reviewed these arguments and responded.
The first study by Comiso and Nishio (2008) computes a positive trend in Antarctic sea ice extent of 0.9 ± 0.2% per decade since the late 1970s, but contrary to the assertion of the Southeastern Legal Foundation, the study does not assess statistical significance. The second study, Turner et al. (2009), states, ‘the annual mean extent of Antarctic sea ice has increased at a statistically significant rate of 0.97% per decade since the late 1970s.’ In other words, both studies compute a similar trend, and one assesses statistical significance while the other does not.
In Section 4(i) of the TSD, EPA cites U.S. National Snow and Ice Data Center (NSIDC) data indicating that for the period 1979—2008, Antarctic sea ice had undergone a not statistically significant increase of 0.9% per decade (U.S. EPA, 2009). Although the conclusion about statistical significance differs from Turner et al. (2009), the magnitude of the trend reported by EPA is very similar, and the NSIDC trend reported by EPA is identical to the trend found in Comiso and Nishio (2008).
Therefore, EPA’s TSD and the studies referenced by the Southeastern Legal Foundation all portray Antarctic sea ice as increasing at about the same rate. The only difference between the TSD and one of the studies referenced by the Southeastern Legal Foundation is in the characterization of statistical significance in Turner et al. (2009). This narrow and limited basis does not support the petitioner’s sweeping and unsupportable conclusion.
The fact that studies reach different conclusions regarding statistical significance in observed trends does not imply that model projections have been ‘refuted,’ ‘undermined,’ or ‘at worst falsified,’ especially when the trends do not materially differ in either direction or magnitude. Furthermore, the statistical significance of observed trends is not a criterion required for validation of models. Of particular importance is that EPA’s reporting of Antarctic sea ice trends is substantively consistent with the literature.
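The distinction between a trend's magnitude and its statistical significance can be illustrated with a short sketch. The series below are synthetic, not the NSIDC or Turner et al. records; the sketch assumes independent residuals and uses |t| of roughly 2 as a rough two-sided 5% threshold, whereas published sea ice analyses use more careful methods (e.g., accounting for autocorrelation).

```python
import math

def trend_and_tstat(y):
    """OLS slope per time step and the t-statistic of that slope."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y)) / sxx
    intercept = ybar - slope * xbar
    rss = sum((yi - intercept - slope * i) ** 2 for i, yi in enumerate(y))
    se = math.sqrt(rss / (n - 2) / sxx)  # standard error of the slope
    return slope, slope / se

# Two synthetic 30-year series with the same underlying upward trend but
# different year-to-year variability (deterministic wiggles stand in for noise).
quiet = [0.09 * i + 0.3 * math.sin(2.1 * i + 0.4) for i in range(30)]
noisy = [0.09 * i + 4.0 * math.sin(2.1 * i + 0.4) for i in range(30)]

s1, t1 = trend_and_tstat(quiet)  # positive slope, |t| well above ~2: significant
s2, t2 = trend_and_tstat(noisy)  # positive slope too, but |t| below ~2: not
```

Two analyses can thus report upward trends of comparable size while only one clears a significance threshold, depending on the variability and record length of the underlying data; this is why the disagreement on significance alone says little about model validity.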
Although EPA did not rely on the IPCC (2007a) for this discussion in the TSD, the Southeastern Legal Foundation makes much of the fact that the newer literature also differs from the IPCC’s coverage of this issue. It claims that the IPCC’s finding that Antarctic sea ice trends are not statistically significant is ‘an outlier that is inconsistent with the peer reviewed literature,’ specifically referring to literature published after publication of the IPCC AR4. But given that the one study (Turner et al., 2009) referenced by the Southeastern Legal Foundation (finding statistical significance) was not available to the IPCC when it did its assessment, the claim that the IPCC’s treatment of the issue is inconsistent with more recent literature has little relevance. EPA relied on the latest available data and analysis from NSIDC rather than the IPCC.
Separate from its discussion of the new literature, the Southeastern Legal Foundation—in a footnote—refers to a blog post (Knappenberger, 2010) that alleges that the IPCC’s treatment of the issue was incomplete based on literature that was available at the time of its assessment. But again, this is not relevant to EPA’s treatment of the issue because EPA relied on the more current analysis by NSIDC, which is consistent with the latest literature on the issue, aside from statistical significance as discussed above.
Finally, it is important to put this issue into context. Antarctic sea ice is one climate indicator referenced in the TSD but not explicitly referred to in the Findings themselves. Given the multiple lines of evidence presented in the TSD, a small, and in this context trivial, change in the statistical characterization of one of numerous physical indicators of warming does not materially impact the Endangerment Finding. The TSD appropriately states that Antarctic sea ice is increasing. That newer studies may now ascribe statistical significance to this increase does not materially affect the Endangerment Finding.
One petitioner (the Southeastern Legal Foundation) argues that recent trends in winter snow cover over the Northern Hemisphere and North America are at odds with future model projections. It notes that since 1989, there has been virtually no trend in snow cover in the Northern Hemisphere and that winter snow extent in North America has been increasing over this period.
To support its arguments, the Southeastern Legal Foundation relies primarily on two snow cover data time series (Goddard 2010a, 2010b) published on the blog WattsUpWithThat.com. These time series exhibit data from the Rutgers University Global Snow Laboratory (GSL). The first time series the petitioner displays shows an increasing trend in North American winter snow cover extent (e.g., for the months of December to February) from 1989 to 2010 (source of image: Goddard, 2010a) (SLF Graph #1).
SLF Graph #1
The second time series the petitioner displays shows the annual Northern Hemisphere snow cover since 1989, which shows ‘virtually no trend’ according to the Southeastern Legal Foundation (source of image: Goddard, 2010b) (SLF Graph #2).
SLF Graph #2
On the basis of these graphs, the petitioner concludes that ‘the IPCC and the EPA predicted reduced snow fall and reduced snow cover due to AGW [anthropogenic global warming]’ and that this is ‘yet another instance of the failure of model predictions.’
The Rutgers GSL is a credible and legitimate source of data on snow cover, and numerous peer-reviewed studies have used Rutgers GSL data, including some that were cited by the IPCC. However, the Southeastern Legal Foundation reaches inappropriate and flawed conclusions based on these graphs, as the conclusions are biased by the selective time periods analyzed.
Southeastern Legal Foundation’s North American Winter Snow Cover Extent Graph (SLF Graph #1)
The North American winter snow cover graphic presented by the Southeastern Legal Foundation and shown above only samples the period 1989—2010, yet Rutgers GSL data date back to 1967. Using GSL data for the entire period of record (1967—2010, source: Rutgers, 2010a), EPA produced the following graph (EPA Graph #1):
EPA Graph #1
An examination of this graph of the entire dataset (EPA Graph #1) makes it clear that the Southeastern Legal Foundation relies upon a graph (SLF Graph #1) that presents an exaggerated upward snow cover extent trend during the winter months by using unrepresentative start and end dates. The SLF Graph #1 start and end dates coincide with relatively small (1989) and large (2010) snow cover extents. The full time series we produce above shows a much smaller upward trend. The small long-term increase we show is broadly consistent with our statement in the TSD on this issue. In Section 4(j) of the TSD, citing the IPCC, we state that over North America ‘from 1915 to 2004, snow-covered area increased in November, December, and January due to increases in precipitation’ (U.S. EPA, 2009).
It is also important to recognize that snow cover trends in one season over one continent are not representative of snow cover trends in other seasons, on an annual basis, or in other parts of the world. In fact, one impact of climate change is expected to be an increase in snow cover in many areas due to increases in precipitation, which have been observed as noted above. As the TSD notes in Section 5(a): ‘heavy precipitation events averaged over North America have increased over the past 50 years - consistent with the observed increases in atmospheric water vapor, which have been associated with human-induced increases in GHGs.’ In contrast, in the warmer seasons of spring and summer, when cold air is more limited, warming has tended to deplete snow cover; during fall there has been little trend (slightly positive). The graph below, which EPA produced (EPA Graph #2, data source: Rutgers, 2010a), shows long-term North America snow cover extent by season and on an average annual basis.
EPA Graph #2
*Whereas winter snow cover data are available from GSL beginning in 1967, annual snow cover data begin in 1972.
This graph (EPA Graph #2) clearly shows that average annual snow cover (blue diamonds) has declined over the long term over North America, with pronounced declines in spring (yellow triangles) and summer (light blue Xs) and small increases in fall (purple asterisks) and winter (pink squares). By focusing on just one season and arbitrarily using only part of the available dataset, the Southeastern Legal Foundation (SLF Graph #1) provides an incomplete analysis, which does not change or undermine projections for future declines in snow cover over North America.
Southeastern Legal Foundation’s Annual Northern Hemisphere Snow Cover Extent Graph (SLF Graph #2)
As with its first graph (SLF Graph #1), the petitioner’s second graph (SLF Graph #2) for the Northern Hemisphere also presents only a portion of the available data. Importantly, its choice of start date (1989) is biased because it conceals the long-term decline in snow cover extent evident when analyzing the entire data series. We graphed the entire Northern Hemisphere snow cover extent time series, plotting annual averages to produce the time series below (EPA Graph #3, data source: Rutgers, 2010b).
EPA Graph #3
This time series (EPA Graph #3) clearly shows a long-term decline in annual average Northern Hemisphere snow cover extent, which is consistent with model projections for the future.
The clear conclusion is that the petitioner bases its claim that the models are wrong on time series that are biased both with respect to the time of year analyzed and the choice of start and end dates. Its assertion that future projections are flawed rests on this inappropriately limited analysis. The petitioner’s evidence and arguments do not show that models projecting future decreases in snow cover are flawed. In fact, observed long-term snow cover trends over both North America and the Northern Hemisphere are directionally consistent with future projections. Therefore, the information presented by the petitioner is inaccurate and does not support its claims.
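The sensitivity of a computed trend to the choice of start date, the core flaw in the petitioner's graphs, can be illustrated with a toy series. The values below are entirely invented for illustration and are not the Rutgers GSL data; the shape (long decline, then a modest partial recovery from 1989 onward) is constructed so that the two windows give opposite-signed trends.

```python
def ols_slope(y):
    """Ordinary least-squares slope of a series against its index."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    return sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y)) / sxx

# Toy "snow cover" series for 1967-2010: a steady decline through 1988,
# then a smaller rise from 1989 onward. Invented values for illustration.
years = list(range(1967, 2011))
cover, val = [], 17.0
for yr in years:
    cover.append(val)
    val += -0.15 if yr < 1989 else 0.05

full_trend = ols_slope(cover)                      # negative: long-term decline
since_1989 = ols_slope(cover[years.index(1989):])  # positive: post-1989 window only
```

Both numbers describe the same series; only the analysis window differs, which is why the full-record graphs (EPA Graphs #1 through #3) are the appropriate basis for comparison with model projections.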
One petitioner, the Competitive Enterprise Institute, contends that EPA’s Endangerment Finding ignores satellite datasets, which, unlike surface datasets, provide comprehensive coverage of the globe. The Competitive Enterprise Institute specifically refers to the satellite dataset produced by UAH. It states:
This dataset meets the critical principle of science of repeated, independent verification. These data show modest thirty year warming trends in the middle to upper latitudes of the Northern Hemisphere, little warming of the tropics and the Southern Hemisphere, and distinct cooling of the Antarctic region - hardly the endangering global warming proclaimed by EPA.
EPA clearly did not ignore satellite datasets, including the UAH dataset. The satellite datasets are discussed in detail in the record for the Endangerment Finding. EPA summarized trends from satellite datasets in both the TSD and RTC document. The TSD contains an entire section (4b) in which upper air temperatures are summarized, incorporating the trends from both the Remote Sensing Systems (RSS) and UAH satellite temperature records. In Section 4(b), the TSD summarizes the IPCC’s discussion of tropospheric satellite temperature records, which incorporate the UAH dataset:
The satellite tropospheric temperature record is broadly consistent with surface temperature trends. The range (due to different data sets) of global surface warming since 1979 is 0.29°F (0.16°C) to 0.32°F (0.18°C) per decade compared to 0.22°F (0.12°C) to 0.34°F (0.19°C) per decade for estimates of tropospheric temperatures measured by satellite.
Furthermore, also in Section 4(b), we update the IPCC’s summary of the satellite temperature record with a summary of the latest satellite temperature data trends in NOAA publications (NOAA, 2009 and Peterson and Baringer, 2009) through 2009, which also incorporate the UAH data:
For example, in NOAA (2009b) the satellite mid-tropospheric temperature trend computed for 1979—2008 ranges from +0.20 to +0.27°F (+0.11°C to +0.15°C) per decade compared to the estimate of +0.22 to +0.34°F (+0.12°C to +0.19°C) per decade given in IPCC (2007a). Combining the radiosonde and satellite records of the troposphere, the [NOAA] State of the Climate in 2008 report estimates the trend is +0.261 ± 0.04°F (+0.145 ± 0.02°C) per decade for the period 1958—2008 with the range of the trends calculated from the various datasets (Peterson and Baringer, 2009).
In addition, Volume 2 of the RTC document specifically refers to UAH data in four separate comment responses (2-47, 2-48, 2-49, and 2-51).
The petitioner’s assertions about specific regional trends in temperatures derived from satellite records (including UAH’s) were also already addressed in the RTC:
- Temperature trends in the tropics were discussed in RTC 3-7.
- Temperature trends in the Southern Hemisphere relative to the Northern Hemisphere were discussed in RTC 3-5.
- Antarctic temperature trends were discussed in RTC 2-56.
To conclude, the warming observed in the satellite record, including the UAH data, was considered by EPA and supports EPA’s conclusion that the warming of the climate system is ‘unequivocal’ (see the Findings, Section IV.2.a.).
None of the issues raised by petitioners pertaining to new science or data that EPA allegedly did not address changes or undermines the scientific basis for EPA’s Findings. In many cases, the issues raised by the petitioners are not new, but were in fact raised and addressed during the rulemaking. Petitioners have misinterpreted or misrepresented the meaning and significance of the scientific literature, findings, and data they cite, made claims that are not supported by the evidence they rely on, provided incomplete and biased analyses to support their claims, failed to acknowledge or account for important results, and, at times, ignored EPA’s endangerment record.
Climate science continues to advance, and new data and studies will continue to refine our understanding of important climate science issues. The new science cited by petitioners, however, does not undermine the key findings and conclusions reached in the assessment literature or the very solid scientific foundation for EPA’s Findings.
Allen, R.J., and S.C. Sherwood (2008). Warming maximum in the tropical upper troposphere deduced from thermal wind observations. Nature Geoscience 1:399—403.
Balling Jr., R.C., and C.D. Idso (2002). Analysis of adjustments to the United States Historical Climatology Network (USHCN) temperature database. Geophysical Research Letters. doi:10.1029/2002GL014825.
Barker, T., I. Bashmakov, A. Alharthi, M. Amann, L. Cifuentes, J. Drexhage, M. Duan, O. Edenhofer, B. Flannery, M. Grubb, M. Hoogwijk, F. I. Ibitoye, C. J. Jepma, W.A. Pizer, and K. Yamaji (2007). Mitigation from a cross-sectoral perspective. Climate Change 2007 Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [B. Metz, O.R. Davidson, P.R. Bosch, R. Dave, L.A. Meyer (eds.)], Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Bhanoo, S.N. (2010). Less Water Vapor May Slow Warming Trends. The New York Times A16. 28 Jan. 2010. http://www.nytimes.com/2010/01/29/science/earth/29vapor.html?_r=1. Accessed July 23, 2010.
Briffa, K.R., et al. (1998). Reduced sensitivity of recent tree-growth to temperature at high northern latitudes. Nature 391:678—682.
Brohan, P., J.J. Kennedy, I. Harris, S.F.B. Tett, and P.D. Jones (2006). Uncertainty estimates in regional and global observed temperature changes: A new dataset from 1850. Journal of Geophysical Research. 111: D12106. doi:10.1029/2005JD006548
CCSP (2008). Climate Models: An Assessment of Strengths and Limitations. A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research [Bader D.C., C. Covey, W.J. Gutowski Jr., I.M. Held, K.E. Kunkel, R.L. Miller, R.T. Tokmakian, and M.H. Zhang (Authors)]. Department of Energy, Office of Biological and Environmental Research, Washington, DC, USA, 124 pp.
Christy, J.R., W.B. Norris, and R.T. McNider (2009). Surface temperature variations in East Africa and possible causes. Journal of Climate. 22. doi:10.1175/2008JCLI2726.1.
Christy, J.R., W.B. Norris, K. Redmond, and K. Gallo (2006). Methodology and results of calculating central California surface temperature trends: Evidence of human-induced climate change? Journal of Climate.19:548-563.
Clear Climate Code (2010). The 1990s station dropout does not have a warming effect. Available at: http://clearclimatecode.org/the-1990s-station-dropout-does-not-have-a-warming-effect/. Accessed July 28, 2010.
CRU. (2010) Climatic Research Unit Data Availability. Available at: http://www.cru.uea.ac.uk/cru/data/availability/. Accessed on July 23, 2010.
Comiso, J.C., and F. Nishio (2008). Trends in the sea ice cover using enhanced and compatible AMSR-E, SSM/I, and SMMR data. Journal of Geophysical Research 113, C02S07, doi:10.1029/2007JC004257.
D’Aleo, J., and A. Watts (2010). Surface Temperature Records: Policy Driven Deception? Science and Public Policy Institute. Available at: http://scienceandpublicpolicy.org/originals/policy_driven_deception.html. Accessed July 23, 2010.
D’Aleo, J. (2010). Central Park—Temperatures Still a Mystery. Available at: http://icecap.us/images/uploads/CENTRAL_PARK.pdf (5 pp, 121K) and http://icecap.us/index.php/go/new-and-cool/central_park_temperatures_still_a_mystery/. Accessed July 23, 2010.
D’Aleo, J. (2009). United States & Global Data Integrity Issues. Science and Public Policy Institute, January 27, 2009. Available at: http://scienceandpublicpolicy.org/images/stories/papers/originals/DAleo-DC_Brief.pdf (28 pp, 1.8MB). Accessed July 23, 2010.
D’Arrigo, R., et al. (2008). On the ‘divergence problem’ in northern forests: A review of the tree-ring evidence and possible causes. Global and Planetary Change. 60:289.
Davey, C.A., R.A. Pielke Sr., and K.P. Gallo (2006). Differences between near-surface equivalent temperature and temperature trends for the Eastern United States, Equivalent temperature as an alternative measure of heat content. Global and Planetary Change 54: 19—32.
Davey, C.A., and R.A. Pielke, Sr. (2005). Microclimate Exposures of Surface-Based Weather Stations, Implications For The Assessment of Long-Term Temperature Trends. Bulletin of the American Meteorological Society: 497—504.
Douglass, D.H., J.R. Christy, B.D. Pearson, and S.F. Singer (2007). A comparison of tropical temperature trends with model predictions. International Journal of Climatology. 28:1693—1701.
Easterbrook, Don (2009). Geological Society of America Annual Meeting, Portland. 18-21 October 2009.
Easterling, D.R., and M.F. Wehner (2009). Is the climate warming or cooling? Geophysical Research Letters. 36: L08706. doi:10.1029/2009GL037810.
Eschenbach, W. (2009). The Smoking Gun at Darwin Zero. (December 8, 2009). Available at: http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero. Accessed July 23, 2010.
Esper et al. (2010). Trends and uncertainties in Siberian indicators of 20th century warming. Global Change Biology. 16:386—398, doi:10.1111/j.1365-2486.2009.01913.x.
Esper, J., and D. Frank (2009). Divergence pitfalls in tree-ring research. Climatic Change. 94: 261, 262.
Folland and Parker (1995). Correction of instrumental biases in historical sea surface temperature data. Quarterly Journal of the Royal Meteorological Society. 121: 319—367.
Goddard, S. (2010a). North American snow models miss the mark—observed trend opposite of the predictions. February 19, 2010. Available at: http://wattsupwiththat.com/2010/02/19/north-america-snow-models-miss-the-mark/. Accessed July 23, 2010.
Goddard, S. (2010b). Why is Winter Snow Extent Interesting? February 18, 2010. Available at: http://wattsupwiththat.com/2010/02/18/why-is-winter-snow-extent-interesting/. Accessed July 23, 2010.
Goetz, J. (2010). Rewriting History, Time and Time Again. Available at: http://wattsupwiththat.com/2008/04/08/rewriting-history-time-and-time-again/. Accessed July 23, 2010.
Haimberger, L., C. Tavolato, and S. Sperka (2008). Towards elimination of the warm bias in historic radiosonde temperature records—some new results from a comprehensive intercomparison of upper-air data. Journal of Climate. 21(18):4587—4606. doi:10.1175/2008JCLI1929.1.
Hale, R.C., K.P. Gallo, T.W. Owen, and T.R. Loveland (2006). Land use/land cover change effects on temperature trends at U.S. Climate Normals stations. Geophysical Research Letters. 33.
Hansen et al. (2010). Current GISS Global Surface Temperature Analysis. Available at: http://data.giss.nasa.gov/gistemp/. Accessed July 23, 2010.
Harrabin, R. (2010). Q&A: Professor Phil Jones. Interview by Roger Harrabin. BBC News. British Broadcasting Corporation. Available at: http://news.bbc.co.uk/2/hi/science/nature/8511670.stm. Accessed July 23, 2010.
Hatton, Les (2010). 1999-2009: Has the intensity and frequency of hurricanes increased? Available at: http://www.leshatton.org/Documents/Hurricanes-are-not-getting-stronger.pdf (19 pp, 106K). Accessed July 23, 2010.
Hegerl, G.C., et al. (2007). Understanding and Attributing Climate Change. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor, and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Huybers, P. (2005). Comment on ‘Hockey sticks, principal components, and spurious significance’ by McIntyre and McKitrick. Geophysical Research Letters. 32: L20705. doi:10.1029/2005GL023395.
Intergovernmental Panel on Climate Change (IPCC) (2007a). Fourth Assessment Report: Climate Change 2007. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
IPCC (2007b). Summary for Policymakers. In: Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, M.L. Parry, O.F. Canziani, J.P. Palutikof, P.J. van der Linden, and C.E. Hanson (eds.), Cambridge University Press, Cambridge, UK, 7—22.
IPCC (2007c). Summary for Policymakers. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M.Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
IPCC (2001). Third Assessment Report: Climate Change 2001. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
The Independent Climate Change Email Inquiry (2010). Available at: http://www.cce-review.org/About.php. Last accessed July 13, 2010.
Jansen, E., J. Overpeck, K.R. Briffa, J.C. Duplessy, F. Joos, V. Masson-Delmotte, D. Olago, B. Otto-Bliesner, W.R. Peltier, S. Rahmstorf, R. Ramesh, D. Raynaud, D. Rind, O. Solomina, R. Villalba, and D. Zhang (2007). Palaeoclimate. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Jones, P.D. (1994). Hemispheric surface air temperature variations: A reanalysis and an update to 1993. Journal of Climate. 7:1794—1802.
Jones, P.D. (1988). Hemispheric Surface Air Temperature Variations: Recent Trends and an Update to 1987. Journal of Climate. 1:654—660.
Jones, P.D., and A. Moberg (2003). Hemispheric and Large-Scale Surface Air Temperature Variations: An Extensive Revision and an Update to 2001. Journal of Climate. 16:206—223.
Jones, P.D., D.H. Lister, and Q. Li (2008). Urbanization effects in large-scale temperature records, with an emphasis on China. Journal of Geophysical Research. 113: D16122, doi:10.1029/2008JD009916.
Jones, P.D., P.Ya. Groisman, M. Coughlan, N. Plummer, W-C Wang, and T.R. Karl (1990). Assessment of urbanization effects in time series of surface air temperature over land. Nature 347:169—172.
Juckes, M.N., M.R. Allen, K.R. Briffa, J. Esper, G.C. Hegerl, A. Moberg, T.J. Osborn, and S.L. Weber (2007). Millennial Temperature Reconstruction Intercomparison and Evaluation. Climate of the Past. 3:591—609. Available at: http://www.clim-past.net/3/591/2007/cp-3-591-2007.pdf (19 pp, 839K). Accessed July 23, 2010.
Karl, T., J. Melillo, and T. Peterson (eds.) (2009). Global Climate Change Impacts in the United States. Cambridge University Press, Cambridge, United Kingdom.
Karl, T.R., S.J. Hassol, C.D. Miller, and W.L. Murray (eds.) (2006). Temperature Trends in the Lower Atmosphere: Steps for Understanding and Reconciling Differences. U.S. Climate Change Science Program and Subcommittee on Global Change Research. Available at: http://www.climatescience.gov/Library/sap/sap1-1/finalreport/sap1-1-final-all.pdf (180 pp, 9MB). Accessed July 23, 2010.
Karl, T.R., P.D. Jones, R.W. Knight, G. Kukla, N. Plummer, V. Razuvayev, K.P. Gallo, J. Lindseay, R.J. Charlson, and T.C. Peterson (1993). A New Perspective on Recent Global Warming: Asymmetric Trends of Daily Maximum and Minimum Temperature. Bulletin of the American Meteorological Society. 74:1007—1023.
Kaufman, D.S. (2010). Corrections and Clarifications. Science. 327(5966): 644. doi:10.1126/science.327.5966.644-d. February 2010.
Kaufman, D.S., D.P. Schneider, N.P. McKay, C.M. Ammann, R.S. Bradley, K.R. Briffa, G.H. Miller, B.L. Otto-Bliesner, J.T. Overpeck, B.M. Vinther, and Arctic Lakes 2k Project Members (2009). Recent Warming Reverses Long-Term Arctic Cooling. Science. 325(5945): 1236. doi:10.1126/science.1173983.
Kennedy, J.J., P.W. Thorne, T.C. Peterson, R.A. Ruedy, P.A. Stott, D.E. Parker, S.A. Good, H.A. Titchner, and K.M. Willett (2010). How do we know the world has warmed? In: State of the Climate in 2009. Bulletin of the American Meteorological Society. 91(6): S79—S82.
Knappenberger, C. (2010). Yet Another Incorrect IPCC Assessment: Antarctic Sea Ice Increase. Available at: http://www.masterresource.org/2010/03/yet-another-incorrect-ipcc-assessment-antarctic-sea-ice-increase. Accessed July 23, 2010.
Knight, J., J.J. Kennedy, C. Folland, G. Harris, G.S. Jones, M. Palmer, D. Parker, A. Scaife, and P. Stott (2009). Do global temperature trends over the last decade falsify climate predictions? In: State of the Climate in 2008. Bulletin of the American Meteorological Society. 90(8): S1—S196.
Knorr, W. (2009). Is the airborne fraction of anthropogenic CO2 emissions increasing? Geophysical Research Letters. 36: L21710, doi:10.1029/2009GL040613; University of Bristol press release: Controversial new climate change results. November 9, 2009. Available at: http://bristol.ac.uk/news/2009/6649.html. Accessed July 23, 2010.
Knutson, T.R., et al. (2010). Tropical Cyclones and Climate Change. Nature Geoscience. 3:157—163.
Le Quéré, C., M.R. Raupach, J.G. Canadell, G. Marland, et al. (2009). Trends in the sources and sinks of carbon dioxide. Nature Geoscience. doi:10.1038/ngeo689.
Leake, J. (2009). Climate change data dumped. TimesOnline. November 29. Available at: http://www.timesonline.co.uk/tol/news/environment/article6936328.ece. Accessed July 23, 2010.
Leduc, G., R. Schneider, J.-H. Kim, and G. Lohmann (2010). Holocene and Eemian sea surface temperature trends as revealed by alkenone and Mg/Ca paleothermometry. Quaternary Science Reviews. 29:989—1004.
Loehle, C. (2009). A mathematical analysis of the divergence problem in dendroclimatology. Climatic Change. 94: 233.
Loehle, C. (2008). A Mathematical Analysis of the Divergence Problem in Dendroclimatology. Climatic Change. doi: 10.1007/s10584-008-9488-8.
Loehle, C., and J.H. McCulloch (2008). Correction to: A 2000-year global temperature reconstruction based on non-tree ring proxies. Energy and Environment 19: 93—100.
Long, E.R. (2010). Contiguous U.S. Temperature Trends Using NCDC Raw and Adjusted Data for One-Per-State Rural and Urban Station Sets. Available at: http://scienceandpublicpolicy.org/images/stories/papers/originals/Rate_of_Temp_Change_Raw_and_Adjusted_NCDC_Data.pdf (14 pp, 571K). Accessed July 23, 2010.
Mann, M. E., R. S. Bradley, and M. K. Hughes (1999). Northern hemisphere temperatures during the past millennium: Inferences, uncertainties, and limitations, Geophysical Research Letters. 26(6):759—762, doi:10.1029/1999GL900070.
Mann, M.E., Z. Zhang, M.K. Hughes, R.S. Bradley, S.K. Miller, S. Rutherford, and F. Ni (2008). Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia. Proceedings of the National Academy of Sciences (PNAS): 105(36).
Mann, M. E. (2004). On smoothing potentially non-stationary climate time series. Geophysical Research Letters. 31: L07214. doi:10.1029/2004GL019569.
Mann, M.E., and P.D. Jones (2003). Global surface temperatures over the past two millennia. Geophysical Research Letters. 30(15):1820. doi:10.1029/2003GL017814.
McIntyre, S. and R. McKitrick (2003). Corrections to the Mann et al. (1998) proxy data base and Northern hemispheric average temperature series. Energy and Environment. 14: 751-771.
Meehl, G.A. et al. (2007). Global Climate Projections. Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor, and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Menne, M.J., C.N. Williams, Jr., and M.A. Palecki (2010). On the reliability of the U.S. surface temperature record. Journal of Geophysical Research. doi:10.1029/2009JD013094.
Menne, M.J., and C.N. Williams (2009). Homogenization of temperature series via pairwise comparisons. Journal of Climate 22(7):1700. Available at: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/menne-etal2009.pdf (15 pp, 2.2MB). Accessed July 23, 2010.
Mitchell, T.D., and P. Jones (2005). An improved method of constructing a database of monthly climate observations and associated high-resolution grids. International Journal of Climatology. 25:693—712. doi:10.1002/joc.1181.
National Aeronautics and Space Administration (NASA) (2010a). GISS Surface Temperature Analysis: Station Data. Available at: http://data.giss.nasa.gov/gistemp/station_data/. Accessed July 23, 2010.
NASA (2010b). GISS Surface Temperature Analysis: Sources. Updated July 2, 2010. Available at: http://data.giss.nasa.gov/gistemp/sources/gistemp.html. Accessed July 23, 2010.
NASA (2009a). AIRS Total Precipitable Water Vapor (mm). Digital image. Available at: http://photojournal.jpl.nasa.gov/jpegMod/PIA12097_modest.jpg. Accessed July 23, 2010.
NASA (2009b). Carbon Dioxide in the Mid-Troposphere. July. Available at: http://www.nasa.gov/images/content/411791main_slide5-AIRS-full.jpg. Accessed July 23, 2010.
NASA (2009c). NASA Outlines Recent Breakthroughs in Greenhouse Gas Research. December 15. Available at: http://www.jpl.nasa.gov/news/news.cfm?release=2009-196. Accessed July 23, 2010.
NASA (2009d). OMI/MLS Tropospheric Column Ozone (Dobson Units). Digital image. July. Available at: http://acdb-ext.gsfc.nasa.gov/Data_services/cloud_slice/gif/Jul09.gif. Accessed July 23, 2010.
National Academies (2009). G8+5 Academies’ joint statement: Climate change and the transformation of energy technologies for a low carbon future. Washington, DC. Available at: http://www.nationalacademies.org/includes/G8+5energy-climate09.pdf. Accessed June 11, 2010.
National Research Council (NRC) (2010). Advancing the Science of Climate Change. National Academy Press, Washington, DC.
NRC (2006). Surface Temperature Reconstructions For the Last 2,000 Years. National Academy Press, Washington, DC.
New Zealand National Institute of Water & Atmospheric Research (2010). NZ temperature record. March 4. Available at: http://www.niwa.co.nz/news-and-publications/news/all/2009/nz-temp-record. Accessed July 23, 2010.
New, M., M. Hulme, and P. Jones (1999). Representing Twentieth-Century Space—Time Climate Variability. Part II: Development of 1901—96 Monthly Grids of Terrestrial Surface Climate. Journal of Climate. 13:2217—2238. Available at: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%282000%29013%3C2217%3ARTCSTC%3E2.0.CO%3B2. Accessed July 23, 2010.
National Oceanic and Atmospheric Administration (NOAA) (2010a). State of the Climate Global Analysis Annual 2009. Available at: http://www.ncdc.noaa.gov/sotc/?report=global&year=2009&month=13&submitted=Get+Report#gtemp. Accessed July 23, 2010.
NOAA (2010b). Annual Global (Land & Ocean) Temperature Anomaly Relative to 1901-2000 Base Period. Digital image. National Oceanic and Atmospheric Administration (NOAA). Available at: http://www.ncdc.noaa.gov/img/climate/research/2009/decadal-global-temps-1880s-2000s.gif. Accessed July 23, 2010.
NOAA (2010c). GHCN Monthly Version 2. Available at: http://www.ncdc.noaa.gov/oa/climate/ghcn-monthly/index.php. Accessed July 23, 2010.
NOAA (2009). State of the Climate Global Analysis Annual 2008. Available at: http://www.ncdc.noaa.gov/sotc/?report=global&year=2008&month=13&submitted=Get+Report. Accessed July 23, 2010.
O’Neill, B. (1997). Measuring Time in the Greenhouse. Climatic Change 37: 491—503.
Pearce, F. (2010). Victory for openness as IPCC climate scientist opens up lab doors. The Guardian. February 9. Available at: http://www.guardian.co.uk/environment/2010/feb/09/ipcc-report-author-data-openness. Accessed July 23, 2010.
Peterson, T.C., and M.O. Baringer (Eds.) (2009). State of the Climate in 2008. Bulletin of the American Meteorological Society. 90: S1—S196.
Peterson, T.C., and R.S. Vose (1997). An overview of the global historical climatology network temperature data base. Bulletin of the American Meteorological Society. 78: 2837—2849.
Peterson, Thomas, Harald Daan, and Philip Jones (1997). Initial selection of a GCOS surface network. Bulletin of the American Meteorological Society. 78: 2145—2152.
Pielke Sr., R.A., C. Davey, D. Niyogi, S. Fall, J. Steinweg-Woods, K. Hubbard, X. Lin, M. Cai, Y.-K. Lim, H. Li, J. Nielsen-Gammon, K. Gallo, R. Hale, R. Mahmood, S. Foster, R.T. McNider, and P. Blanken (2007a). Unresolved issues with the assessment of multi-decadal global land surface temperature trends. Journal of Geophysical Research. 112: D24S08. doi:10.1029/2006JD008229.
Pielke, R., Sr., J. Nielsen-Gammon, C. Davey, J. Angel, O. Bliss, N. Doesken, M. Cai, S. Fall, D. Niyogi, K. Gallo, R. Hale, K.G. Hubbard, X. Lin, H. Li and S. Raman (2007b). Documentation of Uncertainties and Biases Associated with Surface Temperature Measurement Sites for Climate Change Assessment. Bulletin of the American Meteorological Society: 913—928.
Pivovarova, N. (2009). How warming is made: The case of Russia. Institute for Economic Analysis (IEA). December 15. Available at: http://www.iea.ru/article/kioto_order/15.12.2009.pdf (21 pp, 461K); translation at: http://climateaudit.files.wordpress.com/2009/12/iea1.pdf (21 pp, 3.1MB). Accessed July 23, 2010.
Porter, S.C. (2000). Onset of Neoglaciation in the Southern Hemisphere. Journal of Quaternary Science 15: 395—408.
Randall, D.A., R.A. Wood, S. Bony, R. Colman, T. Fichefet, J. Fyfe, V. Kattsov, A. Pitman, J. Shukla, J. Srinivasan, R.J. Stouffer, A. Sumi and K.E. Taylor (2007). Climate Models and Their Evaluation. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Rayner, N.A., et al. (2005). Improved Analyses of Changes and Uncertainties in Sea Surface Temperature Measured In Situ since the Mid-Nineteenth Century: The HadSST2 Dataset. Journal of Climate. 19:446—469.
Reichler, T., and J. Kim (2008). How Well Do Coupled Models Simulate Today’s Climate? Bulletin of the American Meteorological Society. 89:303—311.
Reuters (2008). Earth still warming. News24.com. Available at: http://www.news24.com/SciTech/News/Earth-still-warming-20080111. Accessed July 23, 2010.
Rutgers University, Global Snow Lab (2010a). Snow Cover Data for North America. Available at: http://climate.rutgers.edu/snowcover/files/moncov.namgnld.txt. Accessed July 23, 2010.
Rutgers University, Global Snow Lab (2010b). Snow Cover Data for Northern Hemisphere. Available at: http://climate.rutgers.edu/snowcover/files/moncov.nhland.txt. Accessed July 23, 2010.
Rutherford, S., M.E. Mann, T.J. Osborn, K.R. Briffa, P.D. Jones, R.S. Bradley, and M.K. Hughes (2005). Proxy-based Northern Hemisphere surface temperature reconstructions: Sensitivity to methodology, predictor network, target season and target domain. Journal of Climate. 18:2308—2329.
Santer, B.D., et al. (2008). Consistency of modeled and observed temperature trends in the tropical troposphere. International Journal of Climatology. doi:10.1002/joc.1756.
Segalstad, T.V. (1997). Carbon Cycle Modeling and the Residence Time of Natural and Anthropogenic Atmospheric CO2: On the Construction of the ‘Greenhouse Effect Global Warming’ Dogma. In: Global Warming: The Continuing Debate. European Science and Environment Forum (ESEF), Cambridge, England, 1998. Available at: http://folk.uio.no/tomvs/esef/ESEF3VO2.htm. Accessed July 23, 2010.
Seinfeld, J.H., and S.N. Pandis (1998). Atmospheric Chemistry and Physics From Air Pollution to Climate Change. Wiley Interscience, New York, 1326 pp.
Smith, E.M. (2009). NOAA/NCDC: Global Historical Climatology Network (GHCN)—The Global Analysis. November 3. Available at: http://chiefio.wordpress.com/2009/11/03/ghcn-the-global-analysis/. Accessed July 23, 2010.
Solomon, S., et al. (2010). Contribution of Stratospheric Water Vapor to Decadal Changes in the Rate of Global Warming. Science. 327:1219—1223. doi:10.1126/science.1182488.
Solomon, S., G.K. Plattner, R. Knutti, and P. Friedlingstein (2009). Irreversible climate change due to carbon dioxide emissions. Proceedings of the National Academy of Sciences 106:1704—1709.
Solomon, S., D. Qin, M. Manning, R.B. Alley, T. Berntsen, N.L. Bindoff, Z. Chen, A. Chidthaisong, J.M. Gregory, G.C. Hegerl, M. Heimann, B. Hewitson, B.J. Hoskins, F. Joos, J. Jouzel, V. Kattsov, U. Lohmann, T. Matsuno, M. Molina, N. Nicholls, J. Overpeck, G. Raga, V. Ramaswamy, J. Ren, M. Rusticucci, R. Somerville, T.F. Stocker, P. Whetton, R.A. Wood, and D. Wratt (2007). Technical Summary. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor, and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Soon, W., D.R. Legates, and S.L. Baliunas (2004). Estimation and representation of long-term (>40 year) trends of Northern-Hemisphere gridded surface temperature: A note of caution. Geophysical Research Letters. 31: L03209, doi:10.1029/2003GL019141.
Thompson, D.W.J., et al. (2008). A large discontinuity in the mid-twentieth century in observed global-mean surface temperature. Nature 453:646—649. doi:10.1038/nature06982.
Thompson, L.G., et al. (2006). Abrupt tropical climate change: past and present. Proceedings of the National Academy of Sciences 103:10536—10543.
Treadgold, R. (ed.) (2009). Are we feeling warmer yet? November 25. The New Zealand Climate Science Coalition. Available at: http://www.climateconversation.wordshine.co.nz/2009/11/are-we-feeling-warmer-yet/. Accessed July 23, 2010.
Trenberth, K. (2010). Brouhaha over Hacked Climate Emails. National Center for Atmospheric Research. Available at: http://www.cgd.ucar.edu/cas/Trenberth/statement.html. Accessed April 12, 2010.
Trenberth, K.E. (2009). Geoengineering: What, How and for Whom? Physics Today: 10—12. Climate Analysis Section, Climate and Global Dynamics Division, National Center for Atmospheric Research. Available at: http://www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/GeoengineeringPhsToday.pdf (1 pg, 25K). Accessed July 23, 2010.
Trenberth, K. (2008). Testimony before the U.S. Senate Committee on Environment and Public Works. Available at: http://www.cgd.ucar.edu/cas/Trenberth/TrenberthTestimony0708_v2.pdf (13 pp, 423K). Accessed June 23, 2010.
Trenberth, K.E., et al. (2007). Observations: Surface and Atmospheric Climate Change. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Turner, J., J.C. Comiso, G.J. Marshall, T.A. Lachlan-Cope, T. Bracegirdle, T. Maksym, M.P. Meredith, Z. Wang, and A. Orr (2009). Non-annular atmospheric circulation change induced by stratospheric ozone depletion and its role in the recent increase of Antarctic sea ice extent. Geophysical Research Letters. 36: L08502. doi:10.1029/2009GL037524.
UK House of Commons Science and Technology Committee (2010). The disclosure of climate data from the Climatic Research Unit at the University of East Anglia. Eighth Report of Session 2009—10. HC 387-I. Available at: http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/38702.htm.
UK Met Office (2010a). A number of corrections have been made to the station data that are used to produce CRUTEM3 and HadCRUT3. Available at: http://hadobs.metoffice.com/crutem3/index.html. Accessed July 23, 2010.
UK Met Office (2010b). Proposal for a New International Analysis of Land Surface Air Temperature Data, submitted to the World Meteorological Organization, Commission for Climatology, Fifteenth Session, Antalya, Turkey, February 19-24, 2010.
UK Met Office (2010c). Global average land-surface, sea-surface and combined land and sea-surface temperature (1850—2009). Digital image. Met Office on Climate Change. Available at: http://www.metoffice.gov.uk/climatechange/science/monitoring/. Accessed July 23, 2010.
UK Met Office (2009). New evidence confirms land warming record. December 18. Available at: http://www.metoffice.gov.uk/corporate/pressoffice/2009/pr20091218b.html. Accessed July 23, 2010.
UK Parliament (2009). Select Committee Report. Available at: http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/387/387i.pdf (61 pp, 313K). Accessed July 23, 2010.
U.S. EPA (2009). Technical Support Document for Endangerment and Cause or Contribute Findings for Greenhouse Gases under Section 202(a) of the Clean Air Act. Washington, DC: U.S. Environmental Protection Agency.
University of East Anglia (2010a). Report of the International Panel set up by the University of East Anglia to examine the research of the Climatic Research Unit. Available at: http://www.uea.ac.uk/mac/comm/media/press/CRUstatements/SAP. Accessed July 23, 2010.
University of East Anglia (2010b). Statement from the University of East Anglia in response to ‘UK scientist hid climate data flaws’ (Guardian, February 2, 2010). Available at: http://www.uea.ac.uk/mac/comm/media/press/CRUstatements/guardianstatement. Accessed July 22, 2010.
Vaganov, E.A., M.K. Hughes, A.V. Kirdyanov, F.H. Schweingruber, and P.P. Silkin (1999). Influence of snowfall and melt timing on tree growth in subarctic Eurasia. Nature 400:149—151.
von Storch, H., and E. Zorita (2005) Comment on ‘Hockey sticks, principal components, and spurious significance’ by S. McIntyre and R. McKitrick. Geophysical Research Letters. 32, L20 701. doi:10.1029/2005GL022753.
Vose, R. S., C.N. Williams, Jr., T.C. Peterson, T.R. Karl, and D.R. Easterling (2003). An evaluation of the time of observation bias adjustment in the U.S. Historical Climatology Network. Geophysical Research Letters. 30(20): 2046. doi:10.1029/2003GL018111.
Vose, R.S., R.L. Schmoyer, P.M. Steurer, T.C. Peterson, R. Heim, T.R. Karl, and J. Eischeid (1992). The Global Historical Climatology Network: Long-term monthly temperature, precipitation, sea level pressure, and station pressure data. ORNL/CDIAC-53, NDP-041, 325 pp. [Available from Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831.]
Wahl, E.R. and C.M. Ammann (2007). Robustness of the Mann, Bradley, Hughes Reconstruction of Surface Temperatures: Examination of Criticisms Based on the Nature and Processing of Proxy Climate Evidence. Climatic Change. 85:33—69. doi: 10.1007/s10584-006-9105-7.
Watts, A. (2010). Darwin Zero Before and After. Watts Up With That? Available at: http://wattsupwiththat.com/2009/12/20/darwin-zero-before-and-after/. Accessed July 23, 2010.
Watts, A. (2009). Is the U.S. Surface Temperature Record Reliable? Chicago: The Heartland Institute. 29 pp.
Wegman, E.J., D.W. Scott, and Y.H. Said (2006). Ad Hoc Committee Report on the ‘Hockey Stick’ Global Climate Reconstruction. Report presented to the U.S. House of Representatives Committee on Energy and Commerce, July 14, 2006. Available at: http://www.uoguelph.ca/~rmckitri/research/WegmanReport.pdf (91 pp, 1.4MB).
World Meteorological Organization (WMO) (2010). Volume A Report: Canada—Region IV, North and Central America. Available at: http://climate.weatheroffice.gc.ca/prods_servs/wmo_volumea_e.cfm. Accessed July 23, 2010.
WMO (2000). ‘WMO Statement on the Status of the Global Climate in 1999.’ WMO-No. 913. ISBN 92-63-10913-3. Available at: http://www.wmo.int/pages/prog/wcp/wcdmp/statemnt/wmo913.pdf.