April 7th, 2012

eyes black and white

Unevaluated Models

There is no experimental science whatsoever where there is no experiment. Now history, whether political, economic or climatic, doesn't obviously lend itself to experiments: you can't rewind the flow of time and repeatedly measure the effects of introducing just one change of parameter at a time. Hence macroeconomics and climatology are but pseudo-sciences when they drape themselves in the mantle of experimental science just because they use mathematical models that superficially resemble those of physics and biology.

These models are most often but a vehicle for smuggling in arbitrary conclusions, encoded in the very structure of the model and/or the choice of fudge factors, in a way that confuses the incautious reader into believing these conclusions are the output of a scientific process rather than the input of a pseudo-scientific fraud. Attempts to question the process are met with intimidation based on the pseudo-authority of the pseudo-scientists. The pseudo-scientific pseudo-peers are thus elevated to the priesthood of a pseudo-atheist church, the hierarchy of which chooses which models to bless as official, which better spin doctors to co-opt as part of the hierarchy, which models to dismiss as having "undesirable" conclusions, and which dissident voices never to promote. These churches can then leverage their usurped authority to negotiate their spot in the sun among the Establishment and extract their share of the pie of politically stolen riches.

Now, the inability to start from "identical" conditions and redo an experiment using either the same or different parameter values doesn't mean that models in historical contexts are altogether impossible to evaluate, and therefore necessarily worthless. Not only can you (and should you) evaluate these models in terms of internal consistency, and of consistency with other models that are indeed experimentally validated; you can also compare them to each other in terms of predictive power: compare the actual facts in the recorded past to the predictions the model could have made in an even further past based on information then available, and see how these predictions themselves compare in accuracy to those of other models, including whatever popular benchmarks or previously acclaimed models you claim to improve upon. These benchmarks ought to include the best competition, as well as various "null" or "random" models, in which, for various understandings of which things are independent parameters and which other things are derived variables, the parameters are assumed either not to change, or to change randomly. And even there, some caution must be taken to distinguish predictions made from data actually available at the time from predictions made from data retrospectively available about that time. Yet evaluation based on retrospective data can also be used to evaluate models: hindsight may be 20/20 for some directly measured input parameters, yet much less than that for other derived output variables, which can help filter out bad models of said variables.
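To make the idea concrete, here is a minimal sketch in Python of the kind of backtest described above: a candidate model is scored against a "null" persistence benchmark by replaying the past, at each step predicting the next value using only the data that would have been available at that time. The time series and the "candidate" model here are made-up stand-ins, not real data or a real published model.

```python
# Hypothetical sketch: scoring a model's hindcasts against a "null" benchmark.
# The series and the candidate "model" below are illustrative stand-ins.

def mae(predictions, actuals):
    """Mean absolute error between paired predictions and outcomes."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

def persistence_forecast(history):
    """Null model: assume the parameter does not change."""
    return history[-1]

def candidate_forecast(history):
    """Stand-in for the model under evaluation: naive linear extrapolation."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[-2])

def backtest(series, forecaster, window=3):
    """Replay the past: at each step, predict the next value using only
    the data that would have been available at that time, then score the
    prediction against what actually happened."""
    preds, actuals = [], []
    for t in range(window, len(series)):
        preds.append(forecaster(series[:t]))
        actuals.append(series[t])
    return mae(preds, actuals)

# Made-up historical series of some measured parameter.
series = [10.0, 10.5, 11.2, 11.0, 11.8, 12.1, 12.9, 12.7]
print("candidate MAE:  ", backtest(series, candidate_forecast))
print("persistence MAE:", backtest(series, persistence_forecast))
```

Note that on this toy series the "do nothing" persistence model actually beats the extrapolating candidate, which is exactly the point: a model that cannot outscore the null benchmarks on such a replay has demonstrated nothing.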

Shouldn't such an evaluation be a standard requirement, without which any paper proposing a model as the basis for predictions and policy recommendations (as opposed to merely a mathematical toy that might be used in future scientific research) would be rejected sight unseen? After all, the burden of proof always lies with the proponents of a theory, and the statistical regression test of a historical theory should be part of any scientist's due diligence before claiming that said theory models reality better than other theories. Yet I never see any such measure accompanying the models that are published, and most of the time I still see papers ending in policy recommendations with a pretense of scientific backing. From which I conclude that even though there could conceivably be some science in mathematical models of the economy or the climate, precious little such science is to be found in most of what currently passes for economics or climatology, which is only propaganda used to push a political agenda. My guess is that most of the people proficient enough to develop good mathematical models of the world and apply them with integrity to the scientific prediction of future events are already busy making millions in the finance industry, whereas those who stay in the pseudo-scientific academia are mostly power-hungry fakes and superstitiously believing sheeple, in it to climb the ladder of the pseudo-scientific hierarchy and defend it from the unbelievers who would reveal its fraud.

PS: If there are people with integrity in the economic or climate modeling industries, they will hopefully soon start a "Journal of Verifiable Economic Modeling" and a "Journal of Verifiable Climate Modeling", where no paper is accepted unless it includes all the code, all the raw data with a certified chain of sources, all the manual "corrections" and "fudges" with adequate justificatory annotations for each and every one of them, and, most importantly, a statistical analysis of the past predictive power of the model (as a function of both the past date on which a prediction could have been made and the past future date it was made for), as compared to other benchmark models and to any other model that is claimed to be improved upon.
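The two-dimensional analysis suggested here can be sketched as follows: for every past date, predict every later date using only the data available up to that point, and tabulate the error by (prediction date, target date) pair, so that skill can be read off as a function of both when the prediction was made and how far ahead it reached. As before, the series and the extrapolating "model" are hypothetical stand-ins.

```python
# Hypothetical sketch: tabulating hindcast error by (prediction date, target date).
# Both the data and the "model" are illustrative stand-ins.

def extrapolate(history, steps):
    """Stand-in model: repeat the last observed change `steps` times."""
    if len(history) < 2:
        return history[-1]
    return history[-1] + steps * (history[-1] - history[-2])

def error_matrix(series, max_horizon=3, start=2):
    """For each past date t, predict each later date t+h using only
    series[:t+1], and record the absolute error against what happened."""
    table = {}
    for t in range(start, len(series) - 1):
        for h in range(1, max_horizon + 1):
            if t + h < len(series):
                pred = extrapolate(series[:t + 1], h)
                table[(t, t + h)] = abs(pred - series[t + h])
    return table

# Made-up historical series of some measured parameter.
series = [10.0, 10.5, 11.2, 11.0, 11.8, 12.1, 12.9, 12.7]
for (made_at, target), err in sorted(error_matrix(series).items()):
    print(f"predicted at t={made_at} for t={target}: error {err:.2f}")
```

Publishing such a table alongside a model would let readers see at a glance whether its claimed skill survives at the horizons actually relevant to the policy being recommended.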