Numerical modelers construct little parallel realities, simulating natural or engineered systems in a binary alternate universe of patterned electrons. It is more fun than anyone should have at work.
It’s also hard.
Most models include uncertain parameters: numbers that represent some process we don’t understand, or data that are too difficult, too expensive, or too variable to collect reliably. Before we use a model built on uncertain parameters to predict the future, we need to demonstrate that the model can reliably explain the past. We call this calibration. During model calibration, we adjust uncertain parameters within a reasonable range until the model reproduces some measured historical change.
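For the programmers in the room, calibration can be sketched as a search over a reasonable parameter range for the value that best reproduces an observed record. Here is a minimal, hypothetical sketch in Python; the toy model, the forcing record, and the observed series are all invented for illustration:

```python
# A toy calibration: search a reasonable range of one uncertain
# parameter k (hypothetical) for the value that best reproduces an
# observed record. All numbers are invented for illustration.

def model(k, forcing):
    """Toy model: predicted change is the forcing record scaled by k."""
    return [k * f for f in forcing]

def rmse(predicted, observed):
    """Root-mean-square error between two equal-length series."""
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed))
            / len(observed)) ** 0.5

forcing = [1.0, 2.0, 3.0, 2.0, 1.0]        # e.g., a flow record
observed = [0.52, 1.01, 1.49, 0.98, 0.51]  # measured historical change

# Grid-search k over a "reasonable range" (0.01 to 1.00) and keep the
# value whose simulation best matches the observations.
candidates = [round(0.01 * i, 2) for i in range(1, 101)]
best_k = min(candidates, key=lambda k: rmse(model(k, forcing), observed))
print(best_k)  # the best fit lands near 0.5
```

Real calibrations tune many interacting parameters with far fancier optimizers, but the logic is the same: minimize the mismatch between simulated and measured history.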
However, most natural system modelers are wary of calibrating to a single time series. “What if that time series isn’t representative?” we ask. If we tune the model to eccentric circumstances, the model will predict poorly. A temporally constrained calibration will compromise our model’s generality.
We often build river models right after a flood. So it is tempting to calibrate our models to the flood, since this is the dramatic period of change that dominates everyone’s imagination. However, we have found, again and again, that if you calibrate to a short period of rapid change, the model underperforms when predicting the future.
So “multiple time series calibration” emerged as standard practice in most fields that apply numerical models to natural systems. Multiple time series calibration is simply the idea that if you test uncertain model parameters against both recent events and historic observations, you are more likely to construct a robust, predictive model, one that can handle the full range of possible future conditions.
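The same toy setup can illustrate the idea: score each candidate parameter against two records, a dramatic recent flood and a longer, quieter historic series, and keep the parameter that performs acceptably on both, rather than the one tuned to the flood alone. Again, the model and every number here are hypothetical:

```python
# Multiple time series calibration, as a toy: score each candidate
# parameter against two records (a dramatic flood and a longer, quieter
# historic series) and keep the value with the best worst-case fit.
# The model, data, and numbers are all hypothetical.

def model(k, forcing):
    return [k * f for f in forcing]

def rmse(predicted, observed):
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed))
            / len(observed)) ** 0.5

flood = ([5.0, 6.0, 5.5], [3.3, 3.9, 3.6])               # (forcing, observed)
historic = ([1.0, 2.0, 1.5, 2.5], [0.5, 1.0, 0.8, 1.3])  # longer record

candidates = [round(0.01 * i, 2) for i in range(1, 101)]

# Calibrating to the flood alone:
k_flood = min(candidates, key=lambda k: rmse(model(k, flood[0]), flood[1]))

# Calibrating to both series at once (minimize the worst error):
k_both = min(candidates,
             key=lambda k: max(rmse(model(k, f), o)
                               for f, o in (flood, historic)))

print(k_flood, k_both)  # the joint fit shifts away from the flood-only fit
```

The flood-only parameter fits the flood beautifully and the long record poorly; the joint calibration gives up a little flood accuracy to stay honest across both records. That trade is the whole point.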
This process came to mind recently as I thought about how we construct our worldviews. We are constantly “calibrating” our worldviews, constantly tweaking the uncertain parameters in our conceptual model of reality to match observations. However, our reality calibrations have temporal bias. They tend to focus on recent, usually dramatic events.
This is why I’ve found cultivating friendships with the insightful dead so valuable.
By investing some of our worldview-formation resources in ancient thinkers, those who pre-date the turbulence of the information age, or even the non-stationarity of the post-Enlightenment milieu, we can submit our worldview to multiple time scale calibration.
Certainly our conceptual model of reality has to encompass Descartes, Darwin, and Derrida, as well as more contemporary (and more diverse) voices. Our model will not have predictive power if it can’t encompass these observations. But tuning our model exclusively to these recent events (and especially tuning it to the hyper-recent foci of, say, Twitter) won’t generate a robust framework.
Limiting our worldview calibration to contemporary voices leads to an unhealthy myopia, a bias Lewis called “presentism” and that Chesterton colorfully described as “the oligarchy of those who happen to be breathing.”
The future is uncertain, and our models of reality are built on uncertain variables that make the parameters that span five orders of magnitude in my field seem adorably concrete. But tuning those parameters to a brief, recent time series is unlikely to produce a robust model. Real diversity includes temporal diversity: seeking out ancient voices about how to be human, about what matters and why.
A model robust enough to explain modern and ancient observations is flexible enough to handle the uncertain time series ahead of us.
This post was written while listening to the Dawes Pandora station.
 Disclaimer: I have to build a technical bench with this argument. So I spend over 300 words talking about numerical modeling before I get to the point of general interest. Hopefully it will be worth it.
 For those who don’t know, that’s what I do. I build numerical models of rivers and river processes to support ecological and engineering decisions.
 Multi-parameter models can yield the same answer in different ways; their solutions are non-unique. So just because a model reproduces the past doesn’t mean the parameters are correct, or that it will predict the future reliably. This is the principle of equifinality. We do not talk enough about how equifinality is embedded in the worldview formation process…and by not enough, of course, I mean, at all.
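A toy illustration of equifinality, with an invented linear model and invented numbers: two different parameter sets can fit the same historic observation perfectly and still diverge wildly when asked to predict.

```python
# Equifinality, as a toy: two different parameter sets reproduce the
# same historic observation equally well, then diverge when asked to
# predict. The linear model and all numbers are invented.

def model(a, b, x):
    return a * x + b

observed_x, observed_y = 1.0, 1.0   # a single "historic" observation

set_1 = (0.2, 0.8)   # a + b = 1, so it fits the past perfectly
set_2 = (0.9, 0.1)   # also a + b = 1, an equally perfect fit

fit_1 = model(*set_1, observed_x)
fit_2 = model(*set_2, observed_x)

future_x = 5.0
pred_1 = model(*set_1, future_x)   # roughly 1.8
pred_2 = model(*set_2, future_x)   # roughly 4.6

print(fit_1, fit_2, pred_1, pred_2)
```

Both parameter sets explain the past identically; only more observations, at other times and places, can tell them apart before the future does.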
 And, to be fair, that funded the study. Floods almost always lead to work for me. Natural disasters bring economic attention to natural processes, so those of us who model these systems often end up feeling like “ambulance chasers.”
 Evaluating a calibration against a second, independent time series is often called “validation” or “verification,” but this language is extremely fraught, and I’ve argued that the whole debate is academic and totally out of touch with practitioner experience. So I have adopted this more general terminology, which has arisen independently in multiple corners of the water world, and which describes the process more precisely and the experience more frankly.
 More and more, recently, they also have spatial bias, as content providers deliver information targeted to interest, leaving us totally ignorant of huge swaths of calibration data unless we INTENTIONALLY inhabit diverse information ecologies.
 Wow, that was something of an abrupt transition. But some of my most cherished mentors have been dead for centuries.
 You are going to have to just trust me that I didn’t try to alliterate there. But I’m also not going to pretend I didn’t find that enormously satisfying.
 The best argument for excluding historic data from worldview formation is that they are too badly biased, selecting for privilege. This is not a trivial argument. But there is an analogy for this in science too. Historic data are usually biased. Preservation is a stochastic and fundamentally biasing process.* But we find historic data so valuable in model formation, so useful in building model robustness, that we’ve developed careful methods to incorporate them. My first reaction to finding historic data isn’t to toss them because they’re biased, but to do a little dance of joy, because I know they will make my model more robust if I handle them well.
*I currently have a student pulling 1700 newspapers and another student examining 40 tree cores to provide historic context for the modern measurements I’m dealing with, and also to provide a BS check for my model. But in both cases we talk about “preservation bias thresholds.” There are significant events that we do not capture because they escape the notice of reporters and trees. But the historic record is too important to ignore just because it’s biased. We come up with creative ways to correct for the bias, and learn from the data.
So yes, history and historical philosophy and historical theology were written by white men who had enough resources to study and write. Historic worldview data have “preservation bias thresholds,” and we need to make some correction for that. But ignoring historic data that could add robustness to our model because of preservation bias is a fallacy that the scientific community rejects in our model building.
 Because right or wrong, he never did it any other way. Incidentally, even though Lewis and Chesterton are moderns, they kind of count as surrogate ancients, and not just because Lewis self-identified as a dinosaur, but because they themselves submitted their worldviews to multi-time scale calibration.
 Andy Crouch recently tweeted “The absurdity of our time might be summed up in this: almost every high school student reads Kafka, but almost none read Chesterton.”