Archive for January, 2014

Vaporization

January 28, 2014

Okay. So the surface of the Earth undergoes evaporative cooling at a current rate of 86.4 W/m^2. According to Wentz et al., precipitation globally increased at a rate of ~1.4% per decade from 1987-2006. By the water balance condition, evaporation equals precipitation, which implies a trend of 1.2096 W/m^2 of additional evaporative cooling per decade. The simultaneous trend in the average of GISS (1200 km), HADCRUT4, and NCDC v3.2.0 was about .2 K per decade. Simple algebra gives the evaporative cooling per degree of warming: 6.048 W/m^2 K. The temperature change needed for evaporative cooling to cancel a 3.7 W/m^2 decrease in radiative cooling: ~.61 K.

Sanity check! Models increase evaporation at a rate of 1-3% per K. Assuming an Earth-like baseline latent heat flux, this translates to between 0.864 and 2.592 W/m^2 K, which would compensate a 3.7 W/m^2 decrease in radiative cooling at between ~4.3 and ~1.43 K of warming. Models typically range in sensitivity between 1.5 and 4.5 K for a doubling of carbon dioxide. Okay, the numbers check out. Maybe a slight underestimate? Ice albedo feedback?
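The arithmetic above is easy to check mechanically. A minimal sketch in Python, using only the post's stated inputs (the variable names are mine; note 3.7/0.864 comes out closer to 4.3 than 4.4):

```python
# All inputs are the post's stated values.
baseline_lhf = 86.4   # W/m^2, current global mean evaporative (latent) cooling
precip_trend = 0.014  # fractional increase per decade (Wentz et al., 1987-2006)
temp_trend = 0.2      # K per decade (GISS/HADCRUT4/NCDC average, same period)
forcing_2xco2 = 3.7   # W/m^2 decrease in radiative cooling per CO2 doubling

evap_cooling_trend = baseline_lhf * precip_trend  # -> 1.2096 W/m^2 per decade
feedback = evap_cooling_trend / temp_trend        # -> 6.048 W/m^2 per K
warming_to_cancel = forcing_2xco2 / feedback      # -> ~0.61 K

# Sanity check against models: 1-3% more evaporation per K of warming
low, high = 0.01 * baseline_lhf, 0.03 * baseline_lhf  # 0.864 to 2.592 W/m^2 K
sens_high, sens_low = forcing_2xco2 / low, forcing_2xco2 / high  # ~4.28 to ~1.43 K
```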

Folding in other findings for maximum climate extremism:

Detrend the average surface temperature index and UAH LT (over the same period) annual average anomalies. A quick regression suggests an amplification of short term fluctuations of 1.44 in the LT relative to the surface. Divide the LT anomalies by this factor; the trend over 1987-2006 is then ~.12 K per decade. Simple algebra again gives the increase in evaporative cooling per degree of warming: 9.793 W/m^2 K. Implied sensitivity: ~.38 K per doubling.
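The same arithmetic, taking the stated 9.793 W/m^2 K feedback as given (the ~.12 K/decade figure quoted above is rounded, since 1.2096/9.793 works out to about 0.1235):

```python
evap_cooling_trend = 1.2096  # W/m^2 per decade, from the water balance argument
feedback = 9.793             # W/m^2 per K, as stated above

implied_trend = evap_cooling_trend / feedback  # -> ~0.1235 K per decade
sensitivity = 3.7 / feedback                   # -> ~0.38 K per doubling
```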

Wow, okay, that’s pretty small. I can push it a little closer if I assume a smaller LT amplification factor (which is probably biased by GISS’s reduced interannual variability?).

Note that this is a calculation of the feedback. If you want to get those numbers up to the sensitivity you like, you can’t wave your arms around blathering like an idiot about “transient climate response.” Instead you need to wave your arms around blathering like marginally less of an idiot about “nonlinear feedback” or “time dependent feedback.” The current result indicates that the tangent slope of the curve of outgoing radiation as a function of temperature is very high. Higher sensitivity requires this slope to drop off pretty rapidly, whereas simple physics would suggest the baseline slope actually increases with temperature, from 4σT0^3 to 4σT^3. You need some positive feedback that is relatively weak now but very strong at just a slightly higher temperature. Or, I don’t know, maybe you can appeal to ice sheet melting and carbon cycle feedbacks, and we can agree that climate change could be a problem, you know, in a few hundred years. Certainly not this century.

Well, good luck with that.


Using Phase matching to identify the ENSO signal

January 21, 2014

Using a technique I have previously established, and have used to isolate various signals in temperature data, I thought it would be interesting to identify the ENSO signal in global temperature data, using the “Invariant ENSO Index” described here. While I don’t think it generally wise to consider ENSO something to be “removed” from the temperature data (since ENSO is itself a part of the climate system and thus part of the climate response), it is nevertheless interesting to examine the issue, because ENSO is clearly a major aspect of weather and climate variations, and it provides an additional opportunity to show how the technique I am using can identify signals in the temperature data that are not easily separated out otherwise. I identified events as any continuous 12 month or longer excursion above or below zero of the average of the 13 and 11 month centered averages of the IEI (multiplied by -1 and divided by 10). That is, if the annually smoothed index changed sign for even a single month, the month it switched back was treated as the start of a new event for compositing. In compositing the time evolution of ENSO events, I used the unsmoothed, inverted, and standardized index. This is what those look like:

IEIcompositeeventprofile

Red is the composite evolution of El Niño events, green the composite evolution of La Niña events. Note that the La Niña event of 2010 is not included as an event in the composite (except as a follow on of the previous event) because too few months have passed since then; otherwise the La Niña composite would be much shorter than the El Niño composite, instead of being of comparable length. I then aligned the HadCRUT4 data similarly (with the low frequency signal removed, as previously established in my post on volcanic signals in the data). The averages there look like this:

ENSOResponseProfiles

As one can clearly see, a typical El Niño event is indeed followed by an increase (red) in global average near surface air temperatures, and a typical La Niña by a decrease (blue). By smoothing both the temperature response profiles and the ENSO event profiles, removing some trends in the first 28 and 24 months (when the smoothed profiles switch which is greater than the other), and rescaling the smoothed profiles, I identify the peak values of events and responses. Peak event values occur 12 and 11 months in (for El Niño and La Niña respectively) and peak responses occur 14 months into an event. I can then take those smoothed, early trend corrected, rescaled profiles’ values for their peak event magnitudes and responses, and use those to estimate the linear effect:

ENSOregression

Encouragingly, the responses to La Niña and El Niño seem to scale the same (that is, a straight line, as opposed to one with an obvious bend indicating an asymmetric response). Using the slope of the regression, and lagging one, two, or three months, I can then “remove” the ENSO signal thus detected from the global data. Here is what that looks like, annually smoothed:

HADCRUT4ENSOsignalremoved

It is evident that this did not remove all the effects of every individual ENSO event (some may have a larger impact than others), but it did, I think, remove the “average” ENSO response. The above graph has a number of interesting features; for example, the effect of the large El Niño in the mid 1940s was in effect to turn two isolated temperature spikes into a persistent “hump” in the temperature data.
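The event identification rule described above (annual smoothing as the average of 13 and 11 month centered means, events as same-sign runs of at least 12 months, with any single-month sign flip terminating a run) can be sketched like this; the function names are mine, not from the original analysis:

```python
import numpy as np

def centered_mean(x, w):
    """Centered moving average of odd width w, NaN at the ends."""
    out = np.full(len(x), np.nan)
    h = w // 2
    for i in range(h, len(x) - h):
        out[i] = np.mean(x[i - h:i + h + 1])
    return out

def smooth_annual(x):
    """Average of the 13 and 11 month centered averages, as in the post."""
    return 0.5 * (centered_mean(x, 13) + centered_mean(x, 11))

def find_events(smoothed, min_len=12):
    """Contiguous same-sign runs of at least min_len months.

    A sign flip of even one month terminates a run, matching the rule
    described in the post. Returns (start, end, label) index tuples."""
    events, start, cur = [], None, 0
    for i, v in enumerate(smoothed):
        sign = 0 if np.isnan(v) else (1 if v > 0 else -1)
        if sign != cur:
            if cur != 0 and start is not None and i - start >= min_len:
                events.append((start, i, 'El Nino' if cur > 0 else 'La Nina'))
            start, cur = (i if sign != 0 else None), sign
    if cur != 0 and start is not None and len(smoothed) - start >= min_len:
        events.append((start, len(smoothed), 'El Nino' if cur > 0 else 'La Nina'))
    return events
```

The compositing step is then just averaging the raw (unsmoothed, inverted, standardized) index, and the aligned temperature anomalies, over the identified event windows.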

A New Normalized Short Term Index for ENSO

January 17, 2014

I previously tried to create an index for ENSO which would have a stable long term mean and variance. Now, using the Southern Oscillation Index, I have modified the approach somewhat:

First of all, one of my concerns was shifting seasonality in the data, so when I did my smoothing process (described here) I repeated it ten times on each month as a separate timeseries. This did indeed suggest there were changes in the seasonal structure of the SOI. These were then rescaled by a factor of approximately 1.4, as suggested by a simultaneous linear regression. I then renormalized each month to a mean (1876-2013) of zero and a standard deviation of 10 (that is, I divided by the standard deviations and multiplied by 10). I then took that data, took the absolute value of each data point, and repeated my smoothing procedure 10 times on that, which gave me a sort of index of the variations in the variance over the long term. I took that, divided it by its average value so it would scale to a mean of 1, and then divided my normalized timeseries by that variance factor. For comparison purposes I also renormalized the original SOI data to a long term mean of zero and standard deviation of 10. Here is what they look like in comparison to one another:

IEIvsSOI

Red is the original SOI, black the IEI. The main difference appears to be that the variance of ENSO in the middle of the record is increased, and near the beginning and end it is reduced. Specifically, there seems to have been reduced ENSO variance from the 1920s to the 1970s, a period of relative ENSO quiescence. The greatest variance, however, was originally at the beginning of the record, indicating that ENSO variance has tended to decrease. But the purpose of isolating and removing trends of these kinds is to judge ENSO events themselves, as to how “abnormal” they are relative to the typical background climate. This is the “background” we are removing:

SOIminusIEI

There isn’t really much of a trend in this data (or in the SOI data to begin with) and it is not at all obvious how these changes in the SOI “background” might relate to global warming or anything else. They appear, instead, to simply be slow variations in the ENSO phenomenon that have heretofore gone unrecognized. For easier visualization and connection with ENSO events, I also divided the indices by 10, multiplied by negative one, and took the average of the 11 and 13 month centered averages:

IEIvsSOIinvertedstandardizedsmoothed

The El Niño circa 1940 is much more prominent now, being in fact larger than the El Niño of 1997, though not that of 1982.
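A simplified sketch of the renormalization: per-calendar-month standardization, then division by a slowly varying variance index built from smoothed absolute values. The post's repeated smoothing procedure is replaced here by a plain running mean, so this is a stand-in under stated assumptions, not the original algorithm:

```python
import numpy as np

def normalize_index(soi, n_years):
    """Variance-stabilized index, loosely following the post's recipe.

    Each calendar month is renormalized to mean 0 and standard deviation 10,
    then the series is divided by a mean-1 index of slow variance changes."""
    x = soi.reshape(n_years, 12).astype(float)
    # Renormalize each calendar month to mean 0, standard deviation 10
    x = (x - x.mean(axis=0)) / x.std(axis=0) * 10.0
    flat = x.ravel()
    # Index of slow variance changes: heavily smoothed absolute values
    k = 121  # ~decadal running mean, a stand-in for the repeated smoother
    pad = np.pad(np.abs(flat), (k // 2, k // 2), mode='edge')
    var_index = np.convolve(pad, np.ones(k) / k, mode='valid')
    var_index /= var_index.mean()  # scale to mean 1
    return flat / var_index
```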

Similarly, I can take that index (i.e. divided by 10 and multiplied by -1), but instead of annually smoothing, take calendar year averages, and then rank the years from most negative to most positive. The 20 strongest La Niña years, in order from strongest to weakest:

1917
1950
2011
1975
1956
1955
1971
1910
2008
1879
1938
2010
1974
1988
1999
2000
1973
1964
1886
1989

The same for El Niño years:

1905
1940
1941
1896
1982
1987
1888
1994
1997
1965
1919
1977
1953
1992
1946
1877
1993
1991
1912
1983

It should be interesting to examine various data for evidence of weather differences in such years. Because they are distributed the way they are, they should be essentially orthogonal to any long term trends.
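The ranking itself is straightforward: calendar year means of the inverted, rescaled index, sorted ascending so the strongest La Niña years come first. A sketch (function name mine):

```python
import numpy as np

def rank_years(monthly_index, start_year):
    """Calendar-year means of the (inverted, rescaled) index, ranked from
    most negative (strongest La Nina) to most positive (strongest El Nino)."""
    n_years = len(monthly_index) // 12
    annual = monthly_index[:n_years * 12].reshape(n_years, 12).mean(axis=1)
    order = np.argsort(annual)  # ascending: most negative first
    return list(zip(start_year + order, annual[order]))
```

Reading the last twenty entries in reverse gives the El Niño ranking.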

Phase matching the solar cycle to global temperature data: unclear results

January 14, 2014

So I decided to use the method I earlier used to investigate a possible solar cycle impact on US temps, to see if I could find a *global* solar cycle signal. Answer? Ehhhhhh (waves flat hand in the universal gesture for “not much there, there.”)
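The phase matching method amounts to compositing the temperature data by months elapsed since each sunspot minimum, and rescaling the sunspot cycle to the same mean and spread for comparison. A minimal sketch (function names mine; the real analysis handles unequal cycle lengths more carefully):

```python
import numpy as np

def cycle_composite(series, minima, window):
    """Average a monthly series over fixed windows following each cycle minimum.

    `minima` are indices of sunspot-cycle minima in the series; windows that
    would run past the end of the series are dropped."""
    segments = [series[m:m + window] for m in minima if m + window <= len(series)]
    return np.mean(segments, axis=0)

def standardize_to(x, ref):
    """Rescale x to zero mean and the standard deviation of ref, for overplotting."""
    return (x - np.mean(x)) / np.std(x) * np.std(ref)
```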

HADCRUT4SolarCycle

Blue is the average temperature profile (with the low frequency component and volcanic signal removed) of temperature anomalies over a solar cycle, in months from minimum; in red is the average sunspot cycle, standardized to zero mean and the same standard deviation. If there is a signal of the solar cycle here, it’s highly out of phase and still difficult to find in the noise. To be sure, there is a minimum in temperatures around 32 months after the sunspot minimum and a maximum about fourteen months before it, but there are all these random wiggles obscuring any clear relationship, even with some lag. Nevertheless, if we essentially “fish” for a signal, we can at least get something that isn’t zero: since solar brightness does vary, even if climate were highly insensitive to perturbation (more so than even I think it is) there should be some change from the small variation in solar brightness, and if there are amplifying mechanisms, the sensitivity would have to be very small indeed to accommodate almost no actual temperature change over the solar cycle. So if we make the midpoints between solar cycle extrema line up with the midpoints between temperature extrema, we get a lag of about 45 months, consistent with what we found for the US. And we get a bit of a relationship:

HADCRUT4SolarRegression

We can then take the regression and lag, and extract the short term effect of the solar cycle on temperatures:

HADCRUT4ShortTermSunspotSignal

Interestingly, this seems to imply that typical solar cycles have a temperature variation of about .05 K. The IPCC report, and frankly the work of a lot of scientists I respect, cite a number twice this large. The heck gives?

I have a theory. The main citation for the estimate of the solar cycle signal is Douglass and Calder. But Douglass and Calder estimate the magnitude of the solar cycle signal in lower tropospheric temperature. Since variations (though not trends) tend to be larger in the troposphere, it isn’t terribly surprising that there should be a larger signal there. I also increasingly view the removal of ENSO from climate data as an erroneous and philosophically wrong approach to signal detection in climate: ENSO is a part of the climate system; it is not in some manner magically immune to radiative forcing. Lastly, my approach to accounting for the confounding impact of volcanic eruptions is, I personally believe, superior.

It is worth noting that just because the impact of the sunspot cycle itself, over the short term, is small does not preclude the possibility of secular trends in solar activity, damped by the ocean’s thermal inertia, causing long term climate trends. To answer that question we would need a full model that accounts for those effects, contains the right sensitivity and response time, and an accurate history of the forcing from all solar effects. These amount essentially to a long list of unknowns.

It’s also worth noting that if the forcing over the solar cycle is actually rather large, due to a cosmic ray effect on clouds, such a small temperature change would require either a low sensitivity or an inordinately high degree of thermal inertia; the latter, however, is probably inconsistent with the relatively short time lags observed for solar and volcanic effects. Of course, even if there weren’t an additional forcing apart from solar brightness alone, such a small signal is compatible with a low sensitivity.

The Curious Case of NOAA-12

January 13, 2014

Much has been made of the differences between the UAH and RSS satellite data products for lower troposphere layer average temperature anomalies. But the vast majority of the commentary on this issue is ignorant of the underlying data issues. Below is a plot of the differences between the two. (Note: I downloaded RSS from KNMI as the non-anomaly data, then anomalized to the 1981-2010 annual cycle to match UAH. I did this because UAH does not cut out some of the high latitude southern hemisphere data in its reported global averages but RSS does, whereas the KNMI non-anomalized data for RSS has the same spatial coverage. The differences are minimal as far as I can tell, however. I also rounded to 2 decimal places to match.)

UAHRSSDifference

Also present are the averages of the 11 point and 13 point centered averages. I have highlighted two periods of interest. How did I select the dates? Not by looking at the discontinuities themselves. Rather, from my reading of various papers by John Christy, I saw that the transition from NOAA-11 to NOAA-12 was of particular interest in the trend differences between the two satellite products. So I identified the date NOAA-12 became operational, September 1991, and the date the next satellite after NOAA-12, NOAA-14, came online, April 1995. This is the period during which the effect of the NOAA-11 to NOAA-12 transition should be most apparent, and indeed we see a continuous warming of RSS relative to UAH during that period. The later period, on the other hand, was identified as the period during which UAH was using AQUA as a data backbone, from August 2002 to the end of 2009. At the time, the non-AQUA satellites were diurnally drifting warm, so they needed cooling corrections applied to them: the fact that RSS cooled relative to UAH during this period strongly indicates that RSS’s diurnal drift adjustment (which was not necessary for AQUA) is excessive.

Clarifying note: RSS also makes use of AQUA; however, it does not treat it the same way UAH does. UAH treated AQUA as superior to the other satellites for assessing the trend over the period (hence “backbone”), whereas RSS treated it as equal to the other satellites after applying its diurnal adjustment.

So we can be pretty confident that, at least during that period, RSS cools excessively. This suggests that RSS is probably also wrong about the earlier, warm shift, since it would arise in the same manner, from excessive corrections by RSS.

But just to be sure, can we check the data against something else? For example, over a short period, surface temps and LTs roughly move together, albeit with different magnitudes. The answer might be yes. Here I will use GISS surface temperature data (downloaded from KNMI, 2500 km smoothing, anomalized to the 1981-2010 mean, values from December 1978 to November 2013). First, let’s detrend all the data: we are only interested in the spurious shift over a short period, not in making all the data agree in their long term trends, since agreement of the long term trends is a hypothesis we wish to be able to test. Next, let’s remove seasonal noise by taking the averages of the 11 point and 13 point centered averages. That looks like this:

GISSvariationversusSatelliteVariation

Blue is the average of the two satellite datasets (that way, we aren’t assuming either is superior to the other); red is GISS. Next, we estimate the tropospheric amplification factor by linear regression: the best fit slope is about 1.337. We use that factor to multiply the GISS detrended anomalies, both smoothed and unsmoothed. Now we compare those, by taking differences, to the UAH and RSS detrended anomalies (smoothed and unsmoothed) over the period of interest containing the spurious NOAA-12 shift:

GISSSatellitestep

Red and dark red are RSS-GISS, blue and black UAH-GISS, green and purple RSS-UAH. Both satellite datasets warm relative to GISS over the period of interest, but RSS definitely warms more. The differences of the smoothed endpoints for RSS-GISS, UAH-GISS, and RSS-UAH are ~0.0787 K, ~0.0121 K, and ~0.0622 K, respectively; the linear trends are ~0.0389 K/yr, ~0.0216 K/yr, and ~0.0160 K/yr, respectively. GISS appears to confirm that RSS warms spuriously during this period, and even suggests the possibility that UAH warms spuriously over this period, too.
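The detrending and amplification regression used here can be sketched as follows; applied to the smoothed GISS and satellite-average series, `amplification` is the kind of calculation that yields the 1.337 slope quoted above (function names mine):

```python
import numpy as np

def detrend(y):
    """Remove the OLS linear trend from a series."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return y - (intercept + slope * t)

def amplification(surface, troposphere):
    """Best-fit slope of detrended tropospheric anomalies on detrended
    surface anomalies: the tropospheric amplification factor."""
    return np.polyfit(detrend(surface), detrend(troposphere), 1)[0]
```

The step comparison is then just differencing the satellite anomalies against the amplification-scaled GISS anomalies over the NOAA-12 window.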

If I correct for RSS’s spurious shift, the differences now look like this:

UAHRSSDifferenceNOAA12Corrected

Notice that now, before the AQUA period, there is essentially no difference between UAH and RSS in trend terms. And remember: RSS would be expected to be wrong over this period and UAH right, due to the different ways they handle the AQUA data, which doesn’t need diurnal drift correction. So if I correct RSS for the drift over the AQUA period (and generously assume that there was no additional drift before or after), the differences now look like this:

UAHRSSDifferenceNOAA12andAquaCorrected

What we see is that two simple corrections, based on a combination of independent data and an understanding of the underlying satellites, remove a lot of the distinctive features of the differences between the two datasets. However, the trend difference between the two is largely unchanged, because these two errors mostly balance each other out. RSS still cools relative to UAH over the entire dataset, and it is hard to determine the origin of the remaining discrepancies. If we return to the NOAA-12 discrepancy, UAH looked like it might have a slight warm bias, too. If I correct for that, it brings the UAH trend down closer to RSS’s corrected trend. However, this doesn’t account for the whole remaining discrepancy, which remains about .01 K/decade of RSS cooling relative to UAH. Another possibility is that I need to extend the AQUA correction forwards and backwards in time a bit, since the satellites that drifted during that period in RSS probably drifted before and after, too. If I extend it backwards to the start of NOAA-15 in December 1998, and forward to the present, the difference in long term trends reverses, and RSS now warms slightly relative to UAH. It looks like we have a plausible explanation for the UAH-RSS divergences, and slight variations in these adjustments can switch which dataset warms more relative to the other. Here is our final estimate for both datasets, adjusted for spurious shifts and trends:

CorrectedUAHandRSS

Red is RSS and blue is UAH, with corrections applied to both for shifts relative to GISS during the NOAA-12 step, and RSS corrected for spurious cooling since NOAA-15.

Lessing Colding

January 9, 2014

Something tells me that I’m not having an entirely healthy reaction to the fact that people are actually taking seriously the idea that cold weather is caused by warming the Earth. Because I’m mostly just amused by it.

But the reality is that we have a deadly serious situation here. The State Science Institute is now in on the idea, so it seems that this is actually going to get some traction. Something like half of the population can readily be convinced of virtually any sort of absurdity. All that appears necessary is that believing it is the “Liberal” position. And cognition grinds to a screeching halt, all basic logic and reason out the window. The few who might worry about such uncritical acceptance of an idea that is absurd on its face need not: out is trotted one of the Kathedersozialisten, to give the veneer of “expertise” and “science” to the sort of thing you’d expect out of the Annals of Improbable Research.

I take that back; some of those articles make a lot more sense.

Well, okay, let’s take the idea that global warming is more colding seriously. Let’s regard it as a hypothesis for us to test. So, when the mean temperature rises, do we get colder cold weather?

In a word: No.

The reader is encouraged to look it up for themselves, since NCDC’s Climate at a Glance page isn’t working right now, but:

Generally, January is the coldest month of the year, and since the 1970s, i.e. the period of the “late twentieth century warming” that is allegedly exclusively anthropogenic in origin, this month has seen more warming than the other months of the year in the Continental United States. But we can go further than that.

Repeating an earlier analysis, now with data through 2013, I can ask two questions. First: ranking days within the year by temperature, which days warm most and least? As before, the answer is that the warming of the coldest days is the greatest. This time I also include 2 sigma bounds and a 6th order polynomial fit for no particular reason:

USTempDistributionTrends

It actually turns out that the warming of the coldest days is statistically significant (that is, more than 2 sigma greater than zero), but the warming of the warmest days is not.

I can ask a second question, too: since it is evident that cold days are more variable (larger uncertainty in the regressions), I can instead regress against the mean annual temperature rather than time: when the mean annual temperature changes by one K, how much, on average, does the temperature of each day by rank change? It turns out the answer looks like this:

USTempDailyTempsRegressedonAnnualTemps

What we see is that a one K change in the mean annual temperature is, for most days by rank, accompanied by a change not statistically significantly different from one to one (a uniform warming throughout the year). The exceptions: the 61 coldest days all warm statistically significantly more than one K when the mean annual temperature warms one K, and a few warmer than median days warm statistically significantly less than one K. And although most days are not significantly different from uniform warming, the general rule is that the warmer days warm less and the colder days warm more when the mean annual temperature warms.
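The second question corresponds to, for each within-year temperature rank, regressing that rank day's temperature across years on the annual mean temperature. A sketch, assuming the daily data is arranged as a years-by-days array (function name mine):

```python
import numpy as np

def rank_day_sensitivity(daily):
    """Slope of each rank-day's temperature against the annual mean temperature.

    `daily` is an (n_years, n_days) array of daily mean temperatures. Days are
    sorted coldest-to-warmest within each year, and each rank's temperature is
    regressed across years on that year's mean. A slope above 1 means that
    rank-day warms more than the annual mean does."""
    ranked = np.sort(daily, axis=1)
    annual = daily.mean(axis=1)
    return np.array([np.polyfit(annual, ranked[:, d], 1)[0]
                     for d in range(daily.shape[1])])
```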

Perhaps instead of warming, we ought to call it less colding.

So the claim that extreme cold occurs because of warming doesn’t stand up to scrutiny.

A New Global Surface Temperature Index and “Unprecedented” Non-Warming

January 6, 2014

So it occurred to me that it should be easy to create a global surface temperature index using the Berkeley Earth land surface data and the recent HADSST3 reconstruction, the only sea surface temperature index which has been corrected at all for the post World War II discontinuity. Just take the land surface temperature anomalies, multiply them by the fractional land area (.29) (more precisely, fraction non ocean), and add HADSST3 times the fractional ocean area (.71). This results in this index:

BESTHADSST3

Don’t be too worried about that last datapoint shooting off like that: it’s September, because that was where BEST ended when I downloaded it, I think a week ago. Anyway, it looks to me like something in the BEST algorithm produces spuriously high or low final anomaly values, some kind of endpoint problem. It doesn’t appear in other datasets at all, which pretty much verifies that it is spurious, and I have seen similar spurious final anomaly problems with BEST in the cool direction. As they update the data, these disappear, replaced by spikes in the new latest anomaly.
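The index itself is just a two-term area-weighted sum of the land and sea surface anomaly series, using the fractions stated above:

```python
import numpy as np

LAND_FRACTION = 0.29   # fraction of surface that is not ocean
OCEAN_FRACTION = 0.71  # fraction of surface that is ocean

def blend(land_anom, sst_anom):
    """Area-weighted blend of land (Berkeley Earth) and ocean (HadSST3)
    anomaly series into a global surface temperature index."""
    return (LAND_FRACTION * np.asarray(land_anom)
            + OCEAN_FRACTION * np.asarray(sst_anom))
```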

Regardless, I’ve done some interesting analyses. For one thing, I’ve made some improvements to my volcanic eruption profile detection: the lag in this index is closer to seven months, and the effect is ever so slightly larger. I can explain some of my newer methodology if anyone likes. At any rate, the removal of volcanic eruption effects has some interesting results. First, in the original data, it’s worthwhile to compare the trend from December 1978 (when the UAH satellite anomalies begin) to the end of the data, currently September 2013, with the highest earlier trend of the same length in the data (which happens to run from November 1907 to August 1942):

BESTHADSST3RawTrendComp

The two trends are almost parallel, and it is doubtful they are statistically significantly different. But keep in mind that during the latter period there were two volcanic eruptions, in the early 80s and early 90s, while no volcanoes erupted in the earlier period. So I remove my most recent estimate of the volcanic eruption effect, with the following result for this comparison:

BESTHADSST3VolcanoTrendComp

Now, taking the volcanic eruptions into account, it turns out that the earlier warming was faster than the later one, albeit by an obviously negligible amount. Nevertheless, this means that a trend which most of the alarmed scientists concede was probably natural was actually larger than the trend that is supposedly exclusively driven by anthropogenic forcing. And keep in mind, again, that we don’t actually know what caused the earlier warming: even models which include volcanoes and larger changes in solar irradiance than probably actually occurred fail to reproduce that warming at the appropriate magnitude. So the question for people who buy the “attribution argument” is this: how do you know that whatever caused the earlier warming didn’t in large part cause the later warming? Because if you don’t know, then it could have, and therefore the attribution argument (that nothing else could have caused the recent warming) falls apart.

Finally, let me say something about the recent “pause.” It is sometimes argued that a negative trend in the last ten years (and no significant trend for something like 15) does not mean that anything is really amiss: proponents point to two periods of about ten years in length, during the last 30 years, in which warming paused and later resumed. Whether those making these arguments are prominent, respected scientists at NOAA or NCAR (Easterling and Wehrner, Trenberth and Fasullo) or internet trolls (increasingly, in terms of intelligence, I find no difference), the argument highlights either the ignorance, incompetence, or dishonesty of these individuals (some of whom are federally funded to provide intelligent, honest analysis to the American taxpayer!). Why such a harsh judgement? Because while it is technically true that there were indeed pauses during those periods, those pauses coincide with periods impacted by major volcanic eruptions, and it was the eruptions that caused the warming to halt: there is no indication that a similarly large eruption happened recently, in the time frame necessary to be responsible for the current halt in warming.

Indeed, if we look at ten year trends by end date, and identify the period when they first become almost continuously positive (the 120 month trend ending in October 1979, that is, beginning in November 1969, so we may identify this as the start of the warming period), and look at trends in the data before and after correction for the effects of volcanic eruptions, we find that both earlier, very brief halts in warming disappear, and the warming trend becomes unambiguously continuous until recently, when it stops and goes negative:

BESTHADSST3Pause

The above shows 120 month trends, in K per annum: red without volcanic effects removed, blue with volcanic effects removed. You can see that in fact there was only one (very brief) earlier period when the ten year trends dipped negative; the later period associated with Pinatubo did not, in fact, go negative, merely very close to zero. Second, we see that picking periods beginning around 1992 for trends is an excellent way to exaggerate warming and give the impression of a rapid rate, but this is dishonest, because the elevation of the trend is due to Pinatubo occurring at the beginning of such trends (the 1991 eruption produces a maximum dip around 1992). I recall some other “scientists” of the SS “skeptics are conspiracy theorists” crowd using exactly that to argue the trends are on par with models, but such dishonest scumbags are really not worth dignifying by naming them. What we see is that the current halt in warming is without precedent in the recent warming period. As such, something *is* amiss with predictions of not only continued but *accelerated* warming. The something that is amiss appears to be that:

  1. Sensitivity has been very significantly overestimated, and
  2. Natural climate variability, whatever the cause, has been underestimated.

The former undermines the claim of drastic future warming; the latter undermines the claim that recent warming was uniquely attributable to anthropogenic forcing.
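The ten year trend diagnostic used above (120 month OLS trends indexed by end date) can be sketched as follows (function name mine):

```python
import numpy as np

def rolling_trends(monthly, window=120):
    """OLS trend (K per annum) of each trailing `window`-month span,
    indexed by end date; NaN until a full window is available."""
    t = np.arange(window) / 12.0  # time in years
    out = np.full(len(monthly), np.nan)
    for end in range(window, len(monthly) + 1):
        out[end - 1] = np.polyfit(t, monthly[end - window:end], 1)[0]
    return out
```

Run on the index before and after volcanic correction, this produces the red and blue curves respectively.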

Let’s be absolutely clear: that represents a complete vindication of the skeptical position and a refutation of the alarmed position.