Archive for February, 2014

It’s Beginning to Look a Lot Like El Niño

February 17, 2014

I’ve noticed it’s been awfully wet for the dry season lately here in my part of Florida. Really wet. I found this kind of interesting: some people are saying they expect an El Niño this year or next. Now, there is some association between El Niño and wet conditions in this part of Florida, but even so, if El Niño is the “cause” of the increase in precipitation, why is precipitation increasing as if in anticipation of El Niño? This looks like a job for phase matching!

This time, to save time, I just used the raw SOI, took the average of the 11 and 13 month centered averages, and identified the start of all periods where that index was negative (El Niño) or positive (La Niña) for at least 12 continuous months. The first such incident since 1895-when the US climate division data, and NCDC’s US data in general, began-was 33 months after January 1895, which gives us a long run-up to the events in our composites. I average the events, aligned by start month. I did the same with the monthly precipitation values for Florida Climate Division 6, but…that basically gave me weirdly aliased seasonal cycles. So I did it with percentage departure from the long-term (1895-2013) mean for each month, and I also did it with the average of 11 month and 13 month centered totals. Now, SOI is inversely correlated with ENSO as defined by temperatures, and ENSO warm events are supposed to be associated with more precipitation and cold events with less (at least here, anyway), so SOI should be inversely related to precipitation. Additionally, the data are on very different scales. So, just for purposes of easier visual comparison and a better sense of lead/lag, I use linear regression of the time evolution of the average ENSO event to predict the percent departure from the long-term mean, and the average of the 11 and 13 month running totals. Again, this is just to put the ENSO events on the right scale for comparison-as a linear transformation of the time evolution of the event, it does not alter the basic shape. For La Niña events, things ended up looking like this:


The green is the actual precipitation, the blue is the evolution predicted by the SOI event at zero lag. On the left is the average of 11 and 13 month centered totals, on the right the percent departure from the mean. The SOI does appear to very slightly precede the precipitation, more so at the end of the event than at its beginning. La Niña does indeed appear either to cause, or at least to be correlated with some cause of, reduced precipitation in Florida Climate Division 6. But when I did the same thing with El Niño, the result was a little different in an interesting way:


The green is the actual precipitation, the red is the evolution predicted by the SOI event at zero lag. On the left is the average of 11 and 13 month centered totals, on the right percent departure from the mean. Now this is different! Precipitation does, indeed, start to increase before the direction of SOI changes toward an El Niño! But it doesn’t start to decrease again until after the El Niño peaks. This means that the direction of causation here is actually ambiguous, if one even exists, but it also means that increasing precipitation in my part of Florida can be an indicator ahead of time that an El Niño is coming! And for that reason, I am predicting we will see an El Niño, and with it there will be more rain (here, anyway).
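For anyone who wants to try this with their own data, here is a minimal sketch of the compositing-and-rescaling procedure in Python. All the series below are synthetic stand-ins (the numbers are made up purely for illustration); only the window lengths and the 12-month persistence rule follow the description above:

```python
import numpy as np

def smooth_index(x):
    """Average of 11- and 13-month centered running means."""
    def centered_mean(x, w):
        return np.convolve(x, np.ones(w) / w, mode="same")
    return 0.5 * (centered_mean(x, 11) + centered_mean(x, 13))

def event_starts(idx, sign, min_len=12):
    """Start indices of runs where the index holds one sign >= min_len months."""
    starts, run = [], 0
    for i, same in enumerate(np.sign(idx) == sign):
        run = run + 1 if same else 0
        if run == min_len:
            starts.append(i - min_len + 1)
    return starts

def composite(series, starts, window=36):
    """Average the series over windows aligned at each event start."""
    segs = [series[s:s + window] for s in starts if s + window <= len(series)]
    return np.mean(segs, axis=0)

# Synthetic stand-ins for monthly SOI and precipitation departures
rng = np.random.default_rng(0)
n = 1400
soi = np.sin(2 * np.pi * np.arange(n) / 50) + 0.5 * rng.standard_normal(n)
precip = -np.roll(soi, 2) + 0.3 * rng.standard_normal(n)  # inverse of SOI, lagged

sm = smooth_index(soi)
nina_starts = event_starts(sm, +1.0)       # positive SOI ~ La Nina
soi_comp = composite(sm, nina_starts)
pr_comp = composite(precip, nina_starts)

# Linear regression rescales the SOI composite into precipitation units,
# purely so the two curves can be compared visually at zero lag
A = np.column_stack([soi_comp, np.ones_like(soi_comp)])
coef, *_ = np.linalg.lstsq(A, pr_comp, rcond=None)
predicted = A @ coef
```

Plotting `pr_comp` against `predicted` (and shifting one of them) gives the zero-lag comparison and the lead/lag eyeball test described above.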

Grumpy 2.0’s Last Hurrah

February 14, 2014

I’m still working on my improved EBM, but I figure, since Grumpy 2.0 is so easy to implement as a simple multiple regression model, that I can do a fun little exercise that is bound to get me in trouble.

First of all, the nature of the problem: attempting to assess sensitivity and attribution from unknown forcing acting on the temperature record. Recall that in our model, the temperature (or rather the anomaly) should be equal to the forcing times the sensitivity, minus the derivative of temperature times the response time. Problem: the forcing is unknown. Solution: represent it in the simplest terms possible. F = K + U; that is, the forcing is equal to the “known” forcing plus the “unknown” forcing. We can represent the unknown forcing as simply as possible by making it a straight line. Since many people assume the unknown forcing must be negative (hiding away the warming), we’ll pick a line with negative slope. We’ll call that u, and have our regression model give it a coefficient b-which will be the sensitivity times the magnitude of the forcing-so that b*u = a*U, where a is the sensitivity. The model will get to pick what value of b gives the best fit to the data. On the other hand, the coefficient on K will just be the sensitivity, a, and K itself we will take to be the sum of all greenhouse gas forcings and the volcanic forcings. We stress that this strives to explain the data in the simplest terms possible. The final predictor variable is the derivative of T, which represents the response time. We take that to be the average of the first differences and the first differences shifted back a step, with the first month’s and last month’s values being averaged with zero. We fit to monthly HadCRUT4.
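Since Grumpy 2.0 is just a multiple regression, a minimal sketch is easy to write down. The Python below uses synthetic, made-up stand-ins for the temperature anomaly and the known forcing (the real exercise uses greenhouse-plus-volcanic forcings and monthly HadCRUT4), and the derivative scheme is my reading of the centered-difference description above:

```python
import numpy as np

def centered_diff(t):
    """Average of the first differences and the same differences shifted
    one step, with the endpoints averaged against zero."""
    d = np.diff(t)
    return 0.5 * (np.concatenate([d, [0.0]]) + np.concatenate([[0.0], d]))

# Synthetic stand-ins (hypothetical numbers throughout): a "known" forcing K
# with a trend plus two crude volcanic-style dips, and a temperature anomaly
# T generated with a true sensitivity of 0.4 per unit of forcing
rng = np.random.default_rng(1)
n = 1600
K = np.linspace(0.0, 2.5, n)
K[400:430] -= 2.0
K[1000:1030] -= 2.0
T = 0.4 * K + 0.05 * rng.standard_normal(n)

u = np.linspace(0.0, -1.0, n)      # "unknown" forcing: a negative-slope line
dT = centered_diff(T)

# Least-squares fit: T ~ a*K + b*u + c*dT, where c plays the role of -tau
X = np.column_stack([K, u, dT])
(a, b, c), *_ = np.linalg.lstsq(X, T, rcond=None)

# If K were in W/m^2, sensitivity per CO2 doubling would be roughly a * 3.7
sensitivity_per_doubling = a * 3.7
```

With the synthetic data the fit recovers the built-in sensitivity; on real data the interesting questions are the signs and magnitudes of b and c, as discussed below.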

So what does the model say? Well, it picks a sensitivity equivalent to about .5 K per doubling of CO2; it picks a negative coefficient for the unknown term, indicating it prefers a solution where something else contributes to the warming trend rather than hiding it; and it picks a time constant of much less than a month, indicating the model prefers to fit the data using negligible thermal inertia. All but the last of these I personally find plausible. The low response time is probably a consequence of the fact that there is almost no relationship between T and dT/dt at such a short timescale (it is overwhelmed by noise in the data). On the other hand, there is little basis to assume either strong aerosol forcing or a negligible contribution from natural variability to the observed trend, given that the (admittedly simplistic) model works best if the opposite is true in both cases.

Anyway, if I’ve managed to get myself into a sufficient amount of trouble with all that, I guess you understand why I am trying to create a more sophisticated and defensible model.

Current Project Preview-“Grumpy 3.0”

February 13, 2014

Many years ago I developed a simple EBM-well, actually, that would be a great exaggeration of what I actually did, which was really just to use a very simple functional form more or less equivalent to what I’ve been using. At any rate, I called it “Grumpy,” a sort of self-mocking reference to my own generally curmudgeonly persona, and also a reference to Lucia’s similar model exercise, “Lumpy,” so named because it is a “lumped parameter model.” Anyway, unfortunately the work, which was kind of amateurish but meant to be a sort of sensitivity test for conclusions about model fits to the observed data, has been lost to the sands of internet time-by which I mean, Climate Audit’s forum is defunct.

At any rate, much of the work I’ve been doing since then has been with what I suppose one might call “Grumpy 2.0,” which really doesn’t feature any improvements over the old version, but has been intended for use in curve-fitting exercises.

But for a bit now, I’ve been working on something. It’s not ready for prime time yet, but it’s a lot more sophisticated than my previous modeling exercises, and offers the potential for improving on the previous results significantly. Unfortunately it has many more unknowns, and I could spend an eternity searching the parameter space. At any rate, for those of you who want to see something more sophisticated than “one box,” I give you Grumpy 3.0, a three-box energy balance model:


Like I said, I’m not ready for prime time with this just yet. I’ve got a lot more work to do. But it’s kind of a cool project.
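For readers who would rather see equations than a diagram, here is a hedged sketch of what a three-box EBM can look like-one hypothetical arrangement (land box, ocean mixed layer, deep ocean), not necessarily the exact structure of Grumpy 3.0. The parameter names follow the ones discussed below (lambda, kappa, v, and the forcing ΔQ, here `dQ`), and the values are purely illustrative, not tuned to anything:

```python
def grumpy3_step(TL, TM, TD, dQ, p, dt):
    """One Euler step for three coupled boxes: land (TL), ocean mixed
    layer (TM), and deep ocean (TD). lam stands in for the sensitivity
    parameter lambda (it enters as a feedback, so the equilibrium
    response is dQ/lam), kappa is the mixed-layer/deep-ocean eddy
    diffusion, v the land-sea coupling, and the C's are heat capacities."""
    dTL = (dQ - p["lam"] * TL + p["v"] * (TM - TL)) / p["CL"]
    dTM = (dQ - p["lam"] * TM + p["v"] * (TL - TM)
           - p["kappa"] * (TM - TD)) / p["CM"]
    dTD = p["kappa"] * (TM - TD) / p["CD"]
    return TL + dt * dTL, TM + dt * dTM, TD + dt * dTD

# Purely illustrative, untuned parameter values
params = {"lam": 1.2, "kappa": 0.7, "v": 1.0,
          "CL": 2.0, "CM": 30.0, "CD": 300.0}

# Step response to an abrupt, constant forcing: monthly steps for 200 years
dt = 1.0 / 12.0
TL = TM = TD = 0.0
for _ in range(200 * 12):
    TL, TM, TD = grumpy3_step(TL, TM, TD, 3.7, params, dt)

# All boxes relax toward dQ/lam; the deep ocean drags the approach out,
# so after 200 years the land leads, the mixed layer follows, and the
# deep ocean lags well behind
equilibrium = 3.7 / params["lam"]
```

Even this toy version shows why the parameter space is a problem: trading kappa against lam (or CD against CM) can produce very similar transient behavior from quite different sensitivities.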

Now, it would be totally pointless for me to just tell you “what I’m working on” with nothing more than that. So I guess I’m also wondering if there is anyone out there interested in helping with some heavy lifting, math-wise and statistics-wise, that I’m…just a tiny bit out of my depth on? One of my current goals is to use this in conjunction with my work on volcanic eruptions to determine what combinations of parameters can be interpreted as consistent with the data on volcanic response. But like I said, the parameter space is huge, if only because there are so many parameters. Which might help people understand why I often say it’s a fairly trivial matter to claim anything is consistent with your preconceived notions about sensitivity. All one needs to do is toy around with various parameter values. And people need to understand how many of these parameters-which are mathematical simplifications of the real processes one needs to represent reality accurately-are almost completely unconstrained, or constrained very poorly. For example, the eddy diffusion coefficient, kappa, is not known to within better than an order of magnitude. As far as I can tell, the land-sea coupling constant v is even more poorly constrained. And in most cases, the function ΔQ, the radiative forcing, is largely unknown. And of course, lambda-the sensitivity-is not even claimed to be known to within better than ±50%, and the reality of the situation is probably worse than that. But at least there are ways one might constrain its value independently of those other uncertainties.

I’ve looked into a number of such approaches, virtually all of which have given answers very close to one another, and all lower than even the lowest “accepted” edge of the mainstream values. It gets somewhat frustrating at times. I don’t really want to be a climate extremist; being a political extremist is hard enough work. Being a lukewarmer would be a lot easier. Or at least I like to think so.
I mean, I could fit in with all the cool people and not have to justify myself to literally everyone-only most people. Because I seem to occupy, if I do say so myself, the unpleasant position in the debate of being that guy who has no friends because he’s a critic of everybody. Well, okay, I’m not the only guy in that position.

But here’s my guess. Of the people who do analytical work on climate blogs whom I respect, I’d guess their best estimate of the sensitivity is at least 3 to 4 times where I’d currently put it. So there’s a bit of a tension there that largely goes unnoticed. And the part I dislike the most about this is that I think the gap is getting wider. When I first got really engaged in this debate-what has it been, like, 5 years now or something?-I would have only just barely failed to qualify as a lukewarmer. If I’m critical of the fact that mainstream sensitivity estimates are literally the same now, without even an improvement in uncertainty, as they were in the late 1970s-and I am-then I have to also be critical of myself and others-those I consider to be good analysts and largely unbiased-for failing to converge, and even diverging, in our opinions. And since I’m the one whose opinion has changed, it’s concerning to consider the possibility that I am the problem.

Wow, I really kinda drifted on that one. Anyway, if you’re still reading after all that, and would like to contribute to “team Grumpy” I’ll be pleased to hear from you.