The auto-correlation of nonlinearly “detrended” HADCRUT4 anomalies (from an 1850-2012 baseline), for lags up to 30 years. What do you make of it?
Archive for December, 2013
I just saw this post on US temperatures by Pat Michaels and Chip Knappenberger, and I would recommend readers here give it a look. This line in particular is interesting:
But even if the rest of the month is not quite cold enough to push the entire year into negative territory, the 2013 annual temperature will still be markedly colder than last year’s record high, and will be the largest year-over-year decrease in the annual temperature on record, underscoring the “outlier” nature of the 2012 temperatures.
I have to agree that this is true, although I think it is worth noting that, as usual with “unprecedented” events, just how unprecedented they truly are depends in significant part on how you define them. If December averages what it has averaged so far, the January-to-December average temperature in the US will drop about 3.12 degrees Fahrenheit; the previous record holder was the 1934-to-1935 drop of 2.21 degrees Fahrenheit. If December cools down to average 27.6°F, the drop will be about 3.2 degrees: either way it shatters the record for a drop in the Jan-Dec average temperature. Buuuuut….This does not represent the largest drop of a twelve month average from the previous twelve month average. The average temperature from April 1935-March 1936 was ~3.39°F below the April-to-March average of 1934-35, which represents the fastest drop of a twelve month average from the previous one in the entire USHCN record going back to 1895. However, Jan-Dec 2013 from Jan-Dec 2012 will represent the largest drop of any twelve month average from the previous one since then, and the third largest drop in the entire record (the second being March 1935-February 1936 from March 1934-February 1935). That’s still pretty far back to have to go (over 70 years) to find any drop larger.
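The record-drop bookkeeping above can be sketched in a few lines. The function names and the toy temperature series here are mine; real USHCN monthly means would be substituted for `temps`.

```python
# Sketch of the "drop of a twelve month average from the previous twelve
# month average" calculation, on made-up data.

def twelve_month_means(temps):
    """Trailing 12-month averages, indexed by end month."""
    return [sum(temps[i - 11:i + 1]) / 12.0 for i in range(11, len(temps))]

def year_over_year_drops(temps):
    """Difference of each 12-month mean from the mean of the 12 months
    immediately preceding it (12 steps earlier in the means series)."""
    means = twelve_month_means(temps)
    return [means[i] - means[i - 12] for i in range(12, len(means))]

# Toy series: flat at 50 F, with one 3-degree warm spike year in the middle.
temps = [50.0] * 24 + [53.0] * 12 + [50.0] * 24
drops = year_over_year_drops(temps)
print(min(drops))  # the record drop is the most negative value
```

Scanning `min(drops)` over all end months, rather than only over calendar years, is exactly what distinguishes the “largest Jan-Dec drop” from the “largest drop of any twelve month average.”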
The above represents what I am talking about: in red is the difference of each 12 month average from the previous 12 month average, by end date. In black are the values for January to December periods, and the green and purple dots represent projected values for 2013-2012. All this goes to show that one should never make too big a deal out of one of those warm spikes like 2012 (or a cold spike for that matter). The almost inevitable result is that the next year will cool down (or warm up) in opposition to the spike because it is a transient weather event. And the larger the event, the more dramatic the pendulum swing can be.
Previously, I began an investigation into why we have recently had warm weather in December here in Florida, when most of the rest of the country has not. I identified the likely culprit as being a teleconnection between US weather and warm anomalies in the middle of the Northern Pacific. To investigate this further, I decided to use HADSST3 to examine the anomalies (relative to the mean value for all Decembers available) in the region in question, which I estimate to be about 30-50N and 180-208E. Downloading the data from KNMI, I then looked at the years I had identified as probably matching our recent pattern historically (below average US, above average Florida). As it turns out, the average anomaly relative to the long term mean for those Decembers was about 0.4 K above average, which confirms my suspicion, I think, that our recent pattern has a bit to do with warm water in that region: there is at least some association between warm anomalies there and a pattern of below average temperatures for the US as a whole and above average temperatures for Florida in the month of December. Of course, this is probably not the only phenomenon that can be associated with a warm Florida and a cold US: at least some of the years I selected had below average temperatures for that region (those that didn’t had an average anomaly of ~0.87 K). At any rate, I figured it was worth looking into a couple of additional details. First, the PDO. For the official PDO data, the average December value in the years I selected as analogs (excluding 1897, not available) was in fact about 0.68 below the long term (1900-2012) mean, and the value for the years where there was, in fact, a warm anomaly in the region I selected was about 1.2 below the long term mean, which confirms that the analog Decembers generally occurred during negative PDO conditions.
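The composite step above can be sketched as follows: average December anomalies over a lat-lon box, weighting by the cosine of latitude, then average over the analog years. The function names and the toy anomaly fields are mine; real HADSST3 grids from KNMI would replace the dictionaries.

```python
import math

def box_mean(grid, lat_min, lat_max, lon_min, lon_max):
    """Area-weighted mean over a box; grid maps (lat, lon) -> anomaly (K).
    Each cell is weighted by cos(latitude) to account for cell area."""
    num = den = 0.0
    for (lat, lon), anom in grid.items():
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            w = math.cos(math.radians(lat))
            num += w * anom
            den += w
    return num / den

# Toy December anomaly fields for two "analog" years; the cell outside
# the 30-50N, 180-208E box is correctly ignored.
decembers = {
    1951: {(35, 190): 0.5, (45, 200): 0.3, (10, 190): -1.0},
    1992: {(35, 190): 0.6, (45, 200): 0.2, (10, 190): 0.4},
}
composite = sum(box_mean(g, 30, 50, 180, 208) for g in decembers.values()) / len(decembers)
print(round(composite, 3))
```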
In fact, in the subsample of years with actual positive temperature anomalies relative to the long term mean in the selected region, only 1992 had a PDO value for that December above the long term mean; it seems probable that the actual reason for the cold weather in the US that year was the eruption of Pinatubo in 1991, which probably also disrupted the PDO pattern’s connection to weather phenomena somewhat. I think this more or less confirms my diagnosis: a negative PDO (warm central North Pacific Ocean) is teleconnected to warm December weather in the Southeast US, a link especially strong during La Nina years, but present to a lesser extent, and in a reduced area, in years without a La Nina. Similarly, a negative PDO is associated with cold December weather in much of the US, and this is even more true absent a La Nina.
Well, there was one other question my mother had, which was whether there was some connection to solar activity in the reduced temperatures in much of the US. Well, it took a lot of work to get not much of an answer, honestly. The following shows the average monthly anomaly (degrees Fahrenheit, blue) and average annual smooth (average of 11 month and 13 month centered averages, black) of USHCN data, in months from the date of sunspot minima (data from here; minima determined by the lowest value, in a cycle, of the average of an 11 month centered average and a 13 month centered average), as well as the average of sunspots over the same periods, minus 64.81525 and divided by 184.91169136522 (red):
It is not obvious to me that there is any instantaneous effect of solar activity on temperatures in the US. Looking closely, it looks as though there might be a delayed effect, although there is a lot of noise. This shows the above, minus the blue curve:
Well, okay. So it’s hard to say if there is a relationship: there might be, but the data is very noisy, and it’s hard to detect. But if there is about a 3 year lag, then since the last minimum was around 2008, we might still be feeling its lingering effects in our cold weather here. Maybe. It’s hard to say.
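The “annual smooth” used here, as I read it, is the average of an 11-month and a 13-month centered running mean. A minimal sketch on a toy series, with function names of my own choosing:

```python
def centered_mean(x, i, window):
    """Centered running mean of odd `window` at index i (None near edges)."""
    half = window // 2
    if i - half < 0 or i + half >= len(x):
        return None
    return sum(x[i - half:i + half + 1]) / window

def annual_smooth(x):
    """Average of the 11- and 13-month centered means at each index."""
    out = []
    for i in range(len(x)):
        m11 = centered_mean(x, i, 11)
        m13 = centered_mean(x, i, 13)
        out.append((m11 + m13) / 2.0 if m11 is not None and m13 is not None else None)
    return out

x = [float(i % 12) for i in range(48)]  # toy seasonal cycle
sm = annual_smooth(x)
print(sm[24])  # smoothed value mid-series
```

Averaging the 11- and 13-month windows suppresses the annual cycle more evenly than either window alone, which is presumably the point of the construction.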
EDIT: Thinking on it, I remembered how, by removing the effects of long term variations, I was better able to discern the effects of volcanic eruptions on the temperature record. So I used the smoothing technique (10 times!) to take the long term variations out of the temperature data. So this is the new phase plot:
The new normalization factors for the sunspots are subtracting 63.1120103092784 and dividing by 202.857354779726, and I then lag them by 75 months to get the green curve (I also used the averaged cycle lagged 125 months to get the values from before 84 months before minima). That would put us, presently, a little before the full effects of the 2008 minimum would be felt, but only a year and four months away. The best fit linear regression coefficient for anomalies on this cycle is 0.516260800957563. This, then, shows the lagged short term impact of sunspots (thermal inertia only crudely dealt with by the lag):
So, I think my answer is, yes, low solar activity might be contributing a little bit to recent cold weather in the US. Of course, the multidecadal impacts of Solar Activity can’t be well resolved by this sort of analysis. Nevertheless the impact does appear to be there, and it is not negligible.
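The normalize-lag-regress procedure above can be sketched like this. The helper names and the toy series are mine (the actual means, scales, and the 75-month lag are given in the text), so treat it as a sketch of the method, not the actual computation:

```python
def ols_slope(x, y):
    """Slope of y regressed on x (ordinary least squares)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

def lagged_fit(anoms, sunspots, lag, mean, scale):
    """Regress anomalies on the normalized sunspot series lagged by `lag` months."""
    norm = [(s - mean) / scale for s in sunspots]
    shifted = norm[:-lag] if lag else norm  # sunspot values `lag` months earlier
    aligned = anoms[lag:]
    return ols_slope(shifted, aligned)

# Toy numbers: a short sunspot "cycle" and anomalies responding ~2 months late.
sunspots = [10.0, 40.0, 90.0, 120.0, 90.0, 40.0, 10.0, 5.0]
anoms = [0.0, 0.0, -0.1, 0.1, 0.3, 0.2, 0.0, -0.1]
print(round(lagged_fit(anoms, sunspots, 2, 50.0, 100.0), 3))
```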
In a recent conversation, I expressed some skepticism about the accuracy of the conversion of CO2 emissions scenarios into estimates of concentrations. This is important because it adds an additional layer of inaccuracy to estimates of future global warming due to future increases in CO2. It is currently my opinion that the emissions scenarios themselves are not to be taken seriously as accurate, but even if they were, the carbon cycle models appear to overestimate how much an increase in emissions leads to an increase in concentrations. This will tend to lead to overestimating future climate forcing and, therefore, future warming.
In investigating the issue further, I came across another uh, curiosity. From here, one can download estimates of global fossil fuel burning CO2 emissions back to 1751. From here you can download historic CO2 concentrations (Mauna Loa back to 1959, global averages back to 1980), and from here, you can get ice core estimates of concentrations. According to this, our handy dandy conversion factor for ppm to metric tons of CO2 should be 7769028871.39107 tons/ppm. So from there, we can estimate the net mass flux into the atmosphere each year. It is from doing something like this that people estimate the so called “airborne fraction.” We’ve discussed that before, (see also discussion at Jeff’s blog). At any rate, compare emissions from fossil fuel burning to actual mass flux in and out of the atmosphere, and we get a graph like this:
Green is metric tons of emissions from fossil fuel burning, black is the ice core based mass flux, red the Mauna Loa based mass flux, and blue the global average concentration based mass flux. We can also take the differences:
Which shows that non-fossil-fuel sources and sinks of CO2 represent a large and growing net sink of CO2. But that’s not really what I deemed especially curious. What I deemed curious is that before about 1910, these other sources and sinks usually represented a net source of CO2. Hm. That’s kinda interesting to me.
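The bookkeeping behind these curves can be sketched as follows: convert the annual change in concentration (ppm) to a mass flux using the conversion factor quoted above, then subtract fossil fuel emissions to get the net flux from all other sources and sinks. The function name and sample numbers are mine, purely illustrative.

```python
TONS_PER_PPM = 7769028871.39107  # metric tons of CO2 per ppm, as in the text

def natural_flux(conc_ppm, emissions_tons):
    """For each year: (atmospheric mass change) - (fossil fuel emissions).
    Negative values mean the non-fossil sources/sinks were a net sink."""
    out = []
    for i in range(1, len(conc_ppm)):
        atm_change = (conc_ppm[i] - conc_ppm[i - 1]) * TONS_PER_PPM
        out.append(atm_change - emissions_tons[i])
    return out

conc = [315.0, 316.0, 317.5]   # ppm, illustrative
emis = [0.0, 9.0e9, 1.0e10]    # tons CO2/yr, illustrative
print([round(f / 1e9, 2) for f in natural_flux(conc, emis)])
```

The same residual, divided by emissions, is essentially how the “airborne fraction” discussion arises.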
Now, as I understand it, there is also a contribution to total anthropogenic emissions from land use, which could mean that one interpretation of the above result should be that before about 1910, land use emissions contributed more to rising CO2 than fossil fuel burning did. Hm. Does that make sense to anyone else? Well, there are estimates of emissions from land use from 1850-2005. Converting from teragrams to metric tons of carbon, and from carbon to CO2, I can get comparable measures of land use emissions and fossil fuel emissions. This is the ratio of land use emissions to total emissions:
Roughly, this aligns with what I just said: around the beginning of the twentieth century, fossil fuel burning started to represent about half of all emissions, and it represented an increasing fraction of emissions thereafter. I can also (estimating emissions from land use after 2005 by extrapolating the rate at which the ratio between land use and fossil burning emissions declined from 1996-2005) find the net natural CO2 mass flux: the difference between the change in the amount of CO2 in the atmosphere and the anthropogenic emissions (land use + fossil burning):
Okay so finally we get to the really curious thing. What was the natural net source of CO2 in the early 1880’s? I don’t think it’s Krakatoa: in the first place, it appears in other cases volcanic eruptions strengthened the natural sinks, besides which the net natural inflow starts before then. Or perhaps it is simply that human emissions were underestimated around that period? It’s hard for me to say.
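Backing up a step, the unit conversion used to make the land use and fossil fuel series comparable can be sketched as: teragrams to metric tons (×10⁶) and carbon to CO2 (×44/12, the molecular weight ratio). The function names and the sample figures are mine, illustrative only.

```python
CO2_PER_C = 44.0 / 12.0  # molecular weight ratio of CO2 to C

def land_use_tons_co2(tg_carbon):
    """Teragrams of carbon -> metric tons of CO2."""
    return tg_carbon * 1.0e6 * CO2_PER_C

def land_use_fraction(tg_carbon, fossil_tons_co2):
    """Land use emissions as a fraction of total (land use + fossil) emissions."""
    lu = land_use_tons_co2(tg_carbon)
    return lu / (lu + fossil_tons_co2)

# Illustrative: 600 Tg C/yr from land use vs 2.2e9 t CO2/yr from fossil fuel.
frac = land_use_fraction(600.0, 2.2e9)
print(round(frac, 3))
```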
Either way, however, we can see something very important from the above: an increasingly large amount of CO2 is never making it into the atmosphere, but is instead being absorbed by nature. An increasingly strong net natural sink would imply some interesting things. First, that future increases in the CO2 that is in the atmosphere (and thus exerting a warming influence) will probably be significantly smaller than future emissions. Second, that nature is adaptive, and tends to stabilize itself in spite of our activities. Third, that if emissions underwent a sudden decrease by a large fraction, the atmospheric concentration could, if sinks have significant inertia in their size, actually decrease almost immediately. This last point is quite contrary to the idea that the CO2 concentration would remain elevated for thousands of years even if human emissions went to zero immediately. Such claims appear, to me, to be quite erroneous. Which I take as all the more reason to doubt that the carbon cycle models used to estimate future CO2 concentrations can be trusted as accurate.
But please note: while I am skeptical of carbon cycle modeling, that skepticism does not extend to the attribution of increasing CO2 to human emissions. For me, it’s a simple matter of algebra: the amount of CO2 we have emitted is more than enough to account for the increase in atmospheric CO2. As such, it is illogical to suggest natural CO2 could account for the increase, since the net natural CO2 mass flow to the atmosphere must have been large and negative. That is, nature has taken CO2 out of the atmosphere, not put it in.
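The algebra in question, made explicit: the atmospheric increase equals human emissions plus the net natural flux, so if the increase is smaller than human emissions, the natural flux must be negative. The numbers below are round illustrative units, not the actual inventories.

```python
def natural_net_flux(atm_increase, human_emissions):
    """Mass balance: atm_increase = human_emissions + natural_net_flux."""
    return atm_increase - human_emissions

# Illustrative: humans emit 10 units; the atmosphere only gains 5.
residual = natural_net_flux(5.0, 10.0)
print(residual)        # negative: nature removed CO2 on net
assert residual < 0    # so nature cannot be the source of the increase
```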
The 20th Century Reanalysis is an effort to use surface pressure observations to create a record of atmospheric conditions throughout the 20th century; in fact, it extends back into the 19th century to some extent. You can in fact download some data from KNMI, but I was messing around with it on the ESRL composites page to see if I could investigate what it might hint at for some of those earlier years in my last post (although I was somewhat frustrated by the fact that I can’t change their base period). Anyway, I decided it would be kind of interesting to ask what the near surface layer (1000 mb) temperature trends are over the whole dataset from 1871-2011, or rather, what the difference between the period 2001-2011 and 1871-1881 is in various places. I produced an interesting plot, which I changed the colors on to more clearly show the sign of the temperature change:
Interestingly, it looks like much of the US (and a few other places, including Turkey and much of the region called the Levant) shows long term *cooling* in this data! Keep in mind, these are *estimates* of the temperature, by a weather model, based on surface pressure measurements and apparently some sea surface temperatures. At any rate, I regarded this as sufficiently surprising as to warrant further investigation: while a reanalysis is likely to contain biases and errors, especially where not tightly constrained by observations or where new data sources are added in or taken out, it nevertheless raises some eyebrows: barometric pressure over the US is presumably very well characterized by an extensive observational network, and presumably has been for a long time. If a realistic simulation of weather processes can take those measures of surface pressure and accurately estimate the temperature (a vaguely similar idea is not without precedent), then it raises the question of whether this could be an indication of a problem with the estimate of the long term temperature trend over the US. At least for the satellite period, I have generally found the US data from NCDC (USHCN) to be pretty good in quality, agreeing well with well supported satellite analyses, but I am open to the possibility that the estimates of long term trends could be off. Now, it seemed to me partly a possibility that this was merely because USHCN, going back as it does only to 1895, may simply not capture some unusually warm years in the late 19th century in the US (a period of history especially dear to my heart; let me talk about it endlessly to you at some point). Well, NCDC does extend their full temperature dataset for the globe back to 1880, so it is possible to compare the 20th Century Reanalysis with the NCDC data to find whence the difference, if any, arises.
First, I focus on the region 24-49N, 235-293E, which roughly delineates the US and surrounding coastal areas (and some of Canada and Mexico), and get the NCDC data from KNMI, and the same from KNMI for the 20th Century Reanalysis surface temperatures, rebaselining both to 1880-2009 to cover all the full years they share (er, except the last; I just realized the 20th Century Reanalysis does have all of 2010. Oh well, it doesn’t matter much). I then subtracted the NCDC data from the 20th Century Reanalysis data. The result was this residual:
Green is the monthly differences, red is the 12 month running mean, black is the linear OLS trend over the period 1880-2010. Clearly, the 20th Century Reanalysis shows a significantly warmer early US than does NCDC. Specifically, before about 1916, it almost always runs warmer, and it also runs warmer during much of the Great Depression and WWII; from about the 50’s onwards, it runs consistently cooler, but also pretty flat: in fact, examining the period from 1979-2010, the Reanalysis-NCDC residual warms only slightly, by ~0.067 K/century, which is pretty consistent with the conclusion we have had here at Hypothesis Testing: the data in the US is pretty good over the satellite period. However, because of the differences above, it seems likely to me that the all important question of “warmest year” in the US would probably have a different answer in this data, at least prior to us having that big temperature spike in 2012 (the sort of thing that happens from time to time in most places on Earth, by the way). The question is whether that is an artifact of the reanalysis, or whether the reanalysis may be capturing a real feature of the climate of the US over the 20th century. It would be interesting to examine the surface pressure data for evidence of biases and inhomogeneity, and the surface temperature data for possible uncorrected inhomogeneities.
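The rebaseline-and-difference step can be sketched as below: subtract each series’ mean over the common baseline period, then difference the two. The names and toy values are mine; the real inputs are the monthly NCDC and 20th Century Reanalysis regional means from KNMI.

```python
def rebaseline(series, start, end):
    """Subtract the mean over indices [start, end) from the whole series."""
    base = sum(series[start:end]) / (end - start)
    return [v - base for v in series]

def residual(series_a, series_b, start, end):
    """series_a minus series_b, after both are put on the same baseline."""
    a = rebaseline(series_a, start, end)
    b = rebaseline(series_b, start, end)
    return [x - y for x, y in zip(a, b)]

# Toy monthly values standing in for the reanalysis and NCDC regional means.
reanalysis = [10.2, 10.6, 10.4, 10.8]
ncdc = [10.0, 10.1, 10.3, 10.6]
print([round(r, 3) for r in residual(reanalysis, ncdc, 0, 4)])
```

Note that because both series are rebaselined over the same period, the residual necessarily averages to zero over that period; only its shape and trend carry information.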
Incidentally, the difference doesn’t seem to be merely a function of the sea surface temperature data or data outside the US. The differences look very similar over the “core” US (31-40 N, 240-280 E):
So my mother asked me the question the other day: why is it that the rest of the US (or at least, the lower 48 states) has been so cold lately, but here in Florida, it’s been so hot? Well, my first thought was to ask what ENSO was currently doing to see if that would offer a clue: no dice, ENSO is doing nothing right about now. So I told her “I don’t know mom, weather is just weird sometimes” and let the matter rest there. But the question “Why?” kept eating at me. Finally I decided to do a bit of a “forensic” analysis. First, I decided to ask the question: when, in the past, has the US as a whole seen Decembers colder than average, while at the same time Florida, as a whole, has seen Decembers warmer than average? It turns out that since 1895 (relative to the 1895-2012 mean), the answer is the years 1897, 1902, 1911, 1916, 1919, 1924, 1926, 1932, 1948, 1951, 1961, 1964, 1967, 1972, 1978, 1990, 1992, 2008, and 2009. Now, looking at the years since 1948, I can create composites for those from ESRL’s composite page. The result looks like this:
A couple of things stood out to me: that these years tend to feature a warm Antarctic and a cold Arctic, at least in the Reanalysis. But on the other hand, the reanalysis is least reliable in those areas due to sparse data. Moreover, this December has, so far, been warmer than average in Alaska, where the above map shows it cold. One interesting place where the maps *do* match, however, is the North Pacific, where much of it is above average in temperature in both. This got me to thinking, “what about the PDO? ENSO is neutral, but the PDO is probably negative right now.” And sure enough, yup, it is! Which got me wondering, again, how the PDO correlates with December Weather in the US. This is the plot:
Indeed, Florida is in the Negatively correlated region, so a negative PDO would tend to be associated with a warm December in Florida-but oddly enough, also much of the rest of the US. However, negative PDO values are also often associated with La Nina conditions, which presently are not prevailing. This lessens the impact of the PDO in being associated with a cold US *generally*. So the explanation for our warm weather and the rest of the US’s cold weather? I think, but need to investigate further, that we can attribute it to the conditions in the Pacific: the presence of warm water in the middle of the North Pacific, in the absence of cold water in the Equatorial Pacific: Cold PDO-No La Nina pattern.
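The analog-year screen at the start of this analysis can be sketched as: keep the years whose US December anomaly is below zero while the Florida December anomaly is above zero, both relative to the full-period mean. The dictionaries here are toy values, not the actual USHCN numbers.

```python
def anomalies(series):
    """Anomalies of a {year: value} series relative to its full-period mean."""
    mean = sum(series.values()) / len(series)
    return {yr: v - mean for yr, v in series.items()}

def analog_years(us_dec, fl_dec):
    """Years with a below-average US December but an above-average Florida one."""
    us_a, fl_a = anomalies(us_dec), anomalies(fl_dec)
    return sorted(yr for yr in us_a if us_a[yr] < 0 and fl_a[yr] > 0)

# Toy December means (F); only 1951 is cold-US *and* warm-Florida here.
us = {1950: 33.0, 1951: 31.0, 1952: 35.0, 1953: 33.0}
fl = {1950: 60.0, 1951: 63.0, 1952: 61.0, 1953: 60.0}
print(analog_years(us, fl))
```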
Recently, an attempt has been made at attributing the not too highly publicized reduction in US CO2 emissions to various changes in the US “energy mix.” I’m not going to question their conclusions per se, insofar as they go, but I would note that focusing on the technological changes ignores the fact that much of this reduction has been accompanied by reduced economic activity. It is certainly true that, since the end of the First World War, the US has produced less CO2 per unit of economic activity, in a fairly consistent downward trend in “carbon intensity.” This represents, generally speaking, a trend toward more efficient use of energy, and could be expected to occur with or without economic downturns like the Great Recession. However, economic disruption can force changes in overall energy use and efficiency beyond technological changes and improvements. So the trends Berkeley Earth find are partly related to improved technology in terms of carbon intensity (most especially, a changeover from coal to natural gas, and to a much lesser extent, actual technological developments in renewables). Otherwise, the changes are to sources of energy that produce less energy, not to more efficient use of energy. So I set out to ask the question, “If the US economy hadn’t taken a major nosedive after 2007, what would emissions look like?”
To begin with, my measure for economic activity is the sum of all non-government expenditures: that is, GDP minus government spending. I do this to reflect, I think, a more accurate picture of actual economic production. I use this data for the portion of the economy which is made up of State, Local, and Federal spending (transfer payments between Federal, State, and Local are subtracted so that no spending is double counted). I use this data for the US GDP, and I rebase their deflator to 2012 so I get constant 2012 dollars. US carbon emissions are taken from here, and I use the preliminary estimates for 2011 and 2012. This is what the “Private Economy Carbon Intensity” looks like over time:
As one might expect, it appears that the most recent “Great Recession” in fact set back the trend of reduction in “carbon intensity.” This makes some amount of sense, since technological progress would presumably be more difficult to finance and implement with less wealth to go around. So, for the counterfactual “No Great Recession,” the assumption will be that after 2007 the trend would have been about -4.4*10^-6 metric tons/dollar/year, the slope of the linear regression of the data from 1980-2007. Next, I asked what the average percentage growth rate of the private economy was from 1980-2007. As it turns out, the answer is ~2.4%. So, assuming that rate after 2007, I can get where the private economy “would have been” if it hadn’t had a recession. Next, I take my “Counterfactual Private Economy Carbon Intensity” and “Counterfactual Private Economy” series and multiply them to get the amount of emissions that “would have” occurred, or an estimate, at any rate. The result is that, absent a recession, US emissions would have declined from 2007 to 2012 by about 15 million metric tons, versus an actual decline of almost 193 million metric tons: 92% of the emissions reduction since 2007 can be attributed to the poor performance of the economy, not technological progress. The plot below shows the actual (red) and counterfactual (blue) emissions since 1800:
Clearly, one should not be too pleased with these emissions numbers, as they merely indicate that the US economy has been performing poorly. Unless, of course, that is your goal.
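The counterfactual construction above can be sketched as: extend carbon intensity along its pre-2007 linear trend, grow the private economy at its pre-2007 average rate, and multiply the two. The trend and growth rate are the ones quoted in the text; the 2007 starting values below are purely illustrative, so the output is only a demonstration of the mechanics.

```python
def counterfactual_emissions(intensity_2007, economy_2007, years,
                             intensity_trend=-4.4e-6, growth=0.024):
    """Project emissions = intensity * economy forward from 2007,
    with a linear intensity trend and constant-rate economic growth."""
    out = {}
    intensity, economy = intensity_2007, economy_2007
    for yr in years:
        intensity += intensity_trend  # metric tons CO2 per dollar, per year
        economy *= (1.0 + growth)     # 2.4 %/yr average pre-2007 growth
        out[yr] = intensity * economy
    return out

# Illustrative 2007 starting point: 6e12 dollars, 1e-3 tons/dollar.
cf = counterfactual_emissions(1.0e-3, 6.0e12, range(2008, 2013))
print(round(cf[2012] / 1e9, 2))  # billions of metric tons in 2012
```

Subtracting actual emissions from this counterfactual series is what yields the “92% of the reduction is the economy” attribution in the text.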