This is a long overdue post. First, a little background.
There has been a long-standing controversy in the climate debate about temperature data from the near-surface observing stations, and that inferred from satellite data: According to most climate models, when the surface temperature rises, the temperatures in the atmosphere should on average rise slightly more; this is apparently a consequence of the lapse rate following the moist adiabat. This is a reasonable prediction, in that it appears to be well grounded in physical theory. But satellite data have long shown less warming than one would expect from any significant degree of amplification. This led to quite a bit of wrangling and adjusting of data, up to the present, where the existence of the discrepancy between data and models depends on the dataset chosen, since the datasets vary in their trends based on arguably subjective methodological differences. My personal preference is for the UAH data which, if accurate, suggest the discrepancy is present, assuming the surface data are accurate. I have numerous reasons for preferring the UAH product (and, I will note, I have continued to prefer it even as RSS has been cooling globally relative to it in recent years, unlike, sadly, some other skeptics): several publications by John Christy identify specific sources of bias in other datasets, and independent work has confirmed some of them. I would note that some have the attitude that it would be nice to show UAH is wrong. At any rate, I’m using it for this analysis, and as we shall see, I think an interesting argument in favor of UAH can be made on the basis of the analysis I will do.
Okay, so, to begin with: The United States (specifically, the lower 48 states) probably has the densest temperature observing network anywhere on Earth; as such, analysis of US data has a lot to work with and a good chance of catching and correcting biases. I think it is reasonable to expect, a priori, that the US has minimal bias in estimates of surface temperature trends compared to other places in the world. You can get the USHCN data for CONUS averages in absolute monthly temperatures here. Note that one must convert from Fahrenheit to Celsius for comparison with UAH satellite data (available for various regions in monthly anomalies here, including over the CONUS). But I decided I wanted to test whether satellite data can show any sign of possible non-climatic bias in the US data. This is tricky because temperatures at the surface and in the atmosphere are expected to vary differently, and by an unknown factor. Rather than modeling this factor, I attempted to estimate it empirically. I have done something like this before for global means. I get the following results:
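The preprocessing step described above (Fahrenheit to Celsius, then absolute temperatures to anomalies so the USHCN series is comparable to the UAH anomaly series) can be sketched roughly as follows. This is illustrative only: the array shapes and the base period are hypothetical, not the actual USHCN file layout.

```python
import numpy as np

def f_to_c(temp_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (np.asarray(temp_f, dtype=float) - 32.0) * 5.0 / 9.0

def monthly_anomalies(temps_c, years, base_start, base_end):
    """Subtract each calendar month's base-period mean to form anomalies.

    temps_c: array of shape (n_years, 12) of monthly means in Celsius
             (hypothetical layout, one row per year).
    years:   array of length n_years giving the year of each row.
    """
    temps_c = np.asarray(temps_c, dtype=float)
    years = np.asarray(years)
    in_base = (years >= base_start) & (years <= base_end)
    climatology = temps_c[in_base].mean(axis=0)  # one mean per calendar month
    return temps_c - climatology                 # broadcasts over the 12 months

# Tiny example: 53.6 F is exactly 12 C, so a constant series gives zero anomalies
years = np.array([2000, 2001])
temps_f = np.full((2, 12), 53.6)
anoms = monthly_anomalies(f_to_c(temps_f), years, 2000, 2001)
```

The only substantive choice here is the base period; it shifts the anomalies by a constant and so does not affect any of the regression coefficients or trends discussed below.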
Based on linear regression of anomalies against anomalies, a fluctuation of 1 degree in lower troposphere temperatures over the US is generally associated with a fluctuation of ~1.27 degrees in surface temperatures. Regression of detrended anomalies leads to a coefficient of ~1.28. Using 12-month running averages of the anomalies, I get ~1.22, and doing the same with detrended anomalies, I get about 1.26. Using the highest and lowest coefficients found, I “predict” the surface anomalies on the basis of the UAH data; then, assuming the UAH data are correct and that the processes which determine lapse rate variations are independent of timescale, the difference between prediction and observation estimates the bias in the surface temperature data over the US. As it turns out, these estimates indicate a (very slight) cooling bias in the trend. At most, USHCN appears to run about 0.02 degrees per decade too cool relative to variance-adjusted UAH; at least, about 0.01. These results are very favorable for USHCN and indicate that, given the assumptions we are making, serious homogeneity problems are unlikely to be present in the data. Here is a plot of the two estimates of the bias:
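The machinery above (OLS coefficient of surface on satellite anomalies, the detrended and 12-month-smoothed variants, and the trend of the residual as a bias estimate) can be sketched like this. The data here are synthetic, made up purely to exercise the method: a "satellite" series, and a "surface" series built as 1.25 times the satellite series plus a deliberate 0.02 degree-per-decade spurious trend, which the residual-trend step should recover.

```python
import numpy as np

def ols_slope(x, y):
    """OLS slope of y regressed on x (with intercept)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def detrend(y):
    """Remove the least-squares linear trend from y."""
    y = np.asarray(y, float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

def running_mean(y, window=12):
    """12-month running average (valid part only)."""
    return np.convolve(y, np.ones(window) / window, mode="valid")

def trend_per_decade(y):
    """Linear trend of a monthly series, in units per decade."""
    t = np.arange(len(y)) / 120.0  # months -> decades
    return np.polyfit(t, y, 1)[0]

# --- Synthetic 35-year monthly example (not real data) ---
rng = np.random.default_rng(0)
n = 420
t = np.arange(n)
decades = t / 120.0
sat = 0.15 * np.sin(2 * np.pi * t / 60) + 0.10 * decades        # "satellite" anomalies
sfc = 1.25 * sat + 0.02 * decades + rng.normal(0, 0.05, n)      # amplified + spurious trend

coef = ols_slope(sat, sfc)                                      # raw-anomaly coefficient
coef_dt = ols_slope(detrend(sat), detrend(sfc))                 # detrended coefficient
coef_rm = ols_slope(running_mean(sat), running_mean(sfc))       # smoothed coefficient

predicted = coef_dt * sat                                       # "predicted" surface series
bias = trend_per_decade(sfc - predicted)                        # recovered spurious trend
```

Note that the raw-anomaly coefficient absorbs part of the spurious trend, while the detrended coefficient does not; using the spread of coefficients, as in the post, brackets the bias estimate rather than pinning it to one number.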
Now, a lot of skeptics might not like this result. If you don’t like this result, I ask you to please hold off on complaining that you don’t like the result just yet. This is just the US, where we have a lot of high quality data: the absence of any warming bias there in the last 30 years doesn’t mean that there is no bias anywhere else. In fact, looking at the global data will be interesting.
To examine this issue globally, it was necessary to choose the surface temperature dataset most comparable to the UAH product. Because its large spatial smooth allows “coverage” of areas with sparse data, GISS is the closest match to UAH, which has full coverage over 85N-85S, although GISS extrapolates beyond that range. Using GISS also sidesteps the recent questions about HADCRUT4 surrounding a paper attempting to correct its coverage bias, which found an underestimate of warming. (I am looking into doing a similar analysis of the Cowtan and Way data, which might be interesting.) I therefore downloaded the GISS 1200 km smoothed data from KNMI, masked to the satellite spatial range. Doing the same kind of analysis on this data as I did with the USHCN data, I found coefficients of ~0.79, ~0.57, ~0.89, and ~0.61, respectively. Given the same assumptions as in the USHCN bias calculation, this leads to a range of estimates for the bias in the GISS data of between ~0.04 and ~0.08 degrees per decade too much warming. So the same analysis that indicates USHCN has a very small cooling trend bias indicates GISS has a large warming bias. And a plot of the differences suggests, to me, that the larger estimate of the bias may be more accurate: for the smaller estimate, there is a large dip in the bias close to the 1998 El Niño, suggesting climatic effects remain in that “bias” estimate which do not appear to be present in the higher estimate of the bias:
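The masking step above, restricting a gridded product to the satellite's 85S-85N range before averaging, can be sketched as follows. This is a simplified illustration, not the KNMI workflow: the grid layout is hypothetical, and cells are weighted by the cosine of latitude as a stand-in for cell area on a regular lat-lon grid.

```python
import numpy as np

def masked_global_mean(field, lats, lat_limit=85.0):
    """Area-weighted mean of a (lat x lon) anomaly field, restricted to
    the satellite range |latitude| <= lat_limit.

    field: array of shape (n_lat, n_lon) of gridded anomalies.
    lats:  latitude of each row, in degrees (hypothetical regular grid).
    """
    field = np.asarray(field, float)
    lats = np.asarray(lats, float)
    keep = np.abs(lats) <= lat_limit
    weights = np.cos(np.deg2rad(lats[keep]))   # cell area ~ cos(latitude)
    zonal = field[keep].mean(axis=1)           # average over longitude first
    return np.sum(weights * zonal) / np.sum(weights)

# Example: a uniform 1-degree anomaly field on a 2-degree grid averages to 1,
# and values poleward of 85 degrees are excluded entirely
lats = np.arange(-89.0, 90.0, 2.0)            # grid-cell center latitudes
field = np.ones((lats.size, 180))
```

Applying this mask month by month to the gridded GISS anomalies yields a series on the same spatial footprint as UAH, which is what makes the regression comparison meaningful.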
Note the presence of ENSO artifacts in the (blue) low estimate of warming bias in GISS. Note also that the differences, relative to the magnitude of the trends, are not negligible, as they were over the US. These results can be seen in these plots of the estimated surface anomalies and the official surface anomalies, with the estimates being those that lead to the red differences above, both smoothed with a 12-month moving average filter (also shown are their linear trends before this is done):
The red curve is GISS; the blue is what I believe is a best-estimate, UAH-based “estimated GISS” without non-climatic biases. The warming trend is cut in half. I repeat: this analysis suggests that half of the surface temperature warming since 1979 does not reflect an addition of actual heat to the climate system; the real trend is lower.
Now, for interpreting these two results. Let’s suppose you agree with the following proposition: that long term lapse rate variations are governed by the same processes as short term ones, and should be proportionally the same. My understanding is that theory and models both suggest this should be so.
Given that, here are some sets of internally consistent beliefs you can hold about what this analysis shows:
If you like USHCN, you should like UAH; if you like UAH, you should like USHCN. If you believe UAH validates the surface temperature adjustments in the US, you have to admit that it invalidates them globally. Any correction to the UAH data to bring it into better agreement with models and GISS would destroy its agreement over the US.
Or you can believe neither dataset is accurate.
Now, many skeptics would like to think USHCN has a large warming bias. Well okay, you can believe that if you reject its agreement with UAH as a complete coincidence. That would be a consistent (if a little unreasonable) set of beliefs.
Many of the alarmed would like to think that the USHCN is accurate and that UAH is wrong. This belief is inconsistent. Many of them would also like to think that the USHCN data are accurate and the global near-surface temperature data are equally accurate. This belief is consistent as long as it entails a belief that the UAH data coincidentally agree with the USHCN data but are otherwise completely randomly wrong. But that position also appears to require that satellite analyses with more warming are more accurate, and this is inconsistent: do the same analysis as above for yourself with RSS, and I believe you will find the agreement with USHCN is terrible. This actually provides an interesting argument why UAH is probably better: it agrees well with the best surface temperature data.
Your only alternative to these positions is to reject the idea that short term lapse rate variations are governed by the same processes as long term ones. But, over the US, where long term surface changes would presumably lead to significant changes in long term boundary layer/lower troposphere coupling, there is no evidence for this: the lapse rate variations are basically exactly proportionate on all observed timescales… as long as UAH and USHCN are both correct. Which seems reasonable, since, again, their agreeing so well by chance seems unlikely if they aren’t both correct. This may be true elsewhere and just not in the US, for unknown reasons: the temperature trends at the surface could still be real. But that would entail mechanisms not currently included in present climate models, involving boundary layer dynamics, and those trends would not accurately reflect a gain of heat from greenhouse warming.
So while everyone else is focused on whether or not the “pause” can be eliminated by extrapolating data over areas where we have no surface observations, using satellite data, I am interested instead in a different question: how much of the temperature trend over the area of satellite observations is actually a reflection of accumulating heat? The answer is a lot less than the amount that has been measured. I would have to check, but I am pretty sure this would, even with polar extrapolation, significantly lengthen the “pause” and increase disagreement with models. That the trend in surface temperatures related to heat accumulation is so small suggests drastically reduced climate sensitivity relative to all studies that use the surface data. This includes several recent papers giving “lukewarm” sensitivity estimates.
Well, anyway, food for thought.