Grumpy 2.0’s Last Hurrah

I’m still working on my improved EBM, but I figure that, since Grumpy 2.0 is so easy to implement as a simple multiple regression model, I can do a fun little exercise that is bound to get me in trouble.

First of all, the nature of the problem: attempting to assess sensitivity and attribution when unknown forcing is acting on the temperature record. Recall that in our model, the temperature (or rather the anomaly) should be equal to the forcing times the sensitivity, minus the derivative of temperature times the response time. Problem: the forcing is unknown. Solution: represent it in the simplest terms possible. F = K + U, or the forcing is equal to the “known” forcing plus the “unknown” forcing. We can represent the unknown forcing as simply as possible by making it a straight line. Since many people assume the unknown forcing must be negative (hiding away the warming), we’ll pick a line with negative slope. We’ll call that u, and have our regression model give it a coefficient b (the sensitivity times the magnitude of the unknown forcing) so that b*u = a*U, where a is the sensitivity. The model will get to pick what value of b gives the best fit to the data. On the other hand, the coefficient on K will just be the sensitivity, a, and K itself we will take to be the sum of all greenhouse gas forcings and the volcanic forcings. We stress that this strives to explain the data in the simplest terms possible.

The final predictor variable is the derivative of T, whose coefficient represents the response time. We take that to be the average of the first differences and the first differences shifted back a year, with the first month’s and last month’s values being averaged with zero. We fit to monthly HadCRUT4.
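For concreteness, here is a minimal sketch of how that regression might look in Python with NumPy. All the names are mine, the forcing and temperature series are assumed to be loaded elsewhere, and I’ve read the derivative construction as a one-step centered difference (averaging the forward and backward first differences), since the zero treatment at the first and last month only works for a one-step shift. Take it as an illustration of the setup, not the exact code behind the fit.

```python
import numpy as np

def fit_grumpy(T, K):
    """Sketch of the Grumpy 2.0 regression.

    T : 1-D array of monthly HadCRUT4 anomalies
    K : 1-D array of "known" forcing (GHG + volcanic) on the same grid
    """
    n = len(T)

    # "Unknown" forcing proxy u: a straight line with negative slope.
    # The overall scale is arbitrary; the fitted coefficient b absorbs it.
    u = -np.linspace(0.0, 1.0, n)

    # dT/dt: average of the forward and backward first differences, with the
    # missing difference at the first and last month treated as zero
    # (my reading of "averaged with zero").
    fwd = np.zeros(n)
    bwd = np.zeros(n)
    fwd[:-1] = np.diff(T)   # T[t+1] - T[t], undefined for the last month
    bwd[1:]  = np.diff(T)   # T[t] - T[t-1], undefined for the first month
    dTdt = 0.5 * (fwd + bwd)

    # Regression: T ~ a*K + b*u - tau*dT/dt. An intercept is included here for
    # the arbitrary anomaly baseline, which the post does not mention.
    X = np.column_stack([np.ones(n), K, u, dTdt])
    coefs, *_ = np.linalg.lstsq(X, T, rcond=None)
    intercept, a, b, neg_tau = coefs
    tau = -neg_tau          # the model carries a minus sign on the dT/dt term
    return a, b, tau
```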

So what does the model say? Well, it picks a sensitivity equivalent to about 0.5 K per doubling of CO2; a negative coefficient for the unknown term, indicating it prefers a solution where something else contributes to the warming trend rather than hiding it; and a time constant of much less than a month, indicating the model prefers to fit the data with negligible thermal inertia. All but the last of these I personally find plausible. The low response time is probably a consequence of the fact that there is almost no relationship between T and dT/dt at such a short timescale (it is overwhelmed by noise in the data). On the other hand, there is little basis to assume either strong aerosol forcing or that natural variability made a negligible contribution to the observed trend, given that the (admittedly simplistic) model works best if the opposite is true in both cases.
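As a sanity check on that headline number: the fitted coefficient a comes out in K per W/m², and converting it to a per-doubling figure just means multiplying by the forcing from a doubling of CO2, commonly taken as roughly 3.7 W/m². The post doesn’t state the exact value it used, so the numbers below are only illustrative.

```python
# Illustrative conversion from the fitted coefficient a (K per W/m^2) to an
# equivalent sensitivity per doubling of CO2. The 3.7 W/m^2 figure is the
# commonly quoted forcing for a CO2 doubling, not a value taken from the post.
F_2X = 3.7                         # W/m^2 per doubling of CO2 (approximate)

def sensitivity_per_doubling(a):
    return a * F_2X                # a ~ 0.135 K/(W/m^2) gives ~0.5 K per doubling
```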

Anyway, if I’ve managed to get myself into a sufficient amount of trouble with all that, I guess you understand why I am trying to create a more sophisticated and defensible model.
