Three Essays on Statistical Methods for Complexity Economics

John S. Schuler

Advisor: Richard E. Wagner, PhD, Department of Economics

Committee Members: David Levy, Carlos Ramirez, Robert Axtell

Location: Online
August 10, 2020, 02:00 PM to 04:00 PM

Abstract:

The research program begun in this dissertation rests on three pillars: economics, complexity science, and statistics. Notwithstanding the frequency with which the three interact, they have very different foundations and are thus often in methodological tension. The aim of this research program is to study concrete problems at the intersection of these fields, both to make progress toward solving them and to better explicate their foundations.

Economics is, above all, the science of interdependence. As Adam Smith famously observed, the "division of labor is limited by the extent of the market." Even in a simplified general equilibrium setting, a change in the extent of the market alters the situation immensely. Statistics, on the other hand, is the science of independence. Even exchangeability, a concept basic to practically all of statistics, is, at its core, conditional independence; which is to say, structured independence. Complexity science provides a framework that can bridge this gap. All scientific theories ultimately involve the arrangement of facts within a set of counterfactuals: distinguishing realizations, or data, from other possible realizations and from those regarded as impossible. Complex systems display a history. Different historical trajectories of a complex system can then be considered independent. To build an economic science that is aware of complexity, then, the scope of existing economic models must be broadened.
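The sense in which exchangeability is structured independence is made precise by de Finetti's representation theorem: an infinite exchangeable sequence is distributed as a mixture of i.i.d. sequences,

\[
P(X_1 \in A_1, \dots, X_n \in A_n) = \int \prod_{i=1}^{n} Q(A_i) \, \mu(dQ),
\]

so that the observations are independent conditional on the latent measure $Q$ drawn from the mixing distribution $\mu$.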

This is where the basic tension comes to the fore. Statistics does have a tool set, via exchangeability, for dealing with data that has a history. What is needed is a richer class of models such that agent-based concepts map easily onto the statistical constructs, with explicit modeling of the counterfactuals. For this sort of work, more flexible statistical methods are necessary. Leo Breiman famously distinguished "two cultures" in data analysis: traditional statistical data modeling and predictive modeling. Traditional statistical modeling allows for ease of interpretation, while statistical learning is optimized for prediction. It seems, though, that a good data model ought to predict well too. Thus, a logical line of inquiry is whether the two cultures may be bridged. I believe the answer is yes.

The key is to use statistical learning techniques to estimate not the data itself but rather a model that could have generated it. Thus, the outcome of such a modeling exercise is \emph{not} a series of point estimates but rather a joint probability distribution that allows for simulation. The strength of such an approach is that all the traditional econometric tools defined in terms of conditional moments can be adapted easily to this more flexible modeling framework. An additional advantage is that the concepts of Judea Pearl's causal inference framework map very easily onto such a model, as they are expressed in terms of Bayesian networks and conditional independence.
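As a minimal sketch of the idea (an illustration only, not the method developed in the second chapter), one can fit a nonparametric joint density to hypothetical data and then recover a conditional moment by simulating from the fitted distribution:

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Hypothetical data: (x, y) pairs with a nonlinear relationship.
    n = 2000
    x = rng.normal(size=n)
    y = np.sin(x) + 0.3 * rng.normal(size=n)
    data = np.vstack([x, y])  # shape (2, n), as gaussian_kde expects

    # Estimate the joint density: a model that could have generated the
    # data, rather than a point prediction of y given x.
    kde = gaussian_kde(data)

    # The fitted model supports simulation: draw fresh synthetic realizations.
    sims = kde.resample(50000)  # shape (2, 50000)

    # A traditional econometric object, the conditional moment E[y | x near 0.5],
    # is then recovered directly from the simulated draws.
    near = np.abs(sims[0] - 0.5) < 0.05
    print("simulated E[y | x near 0.5]:", sims[1, near].mean())
    print("true conditional mean sin(0.5):", np.sin(0.5))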

These advantages together make such methods powerful tools for the statistical emulation of agent-based models. If we have shown that such a model works well as a reduction of a more complex agent-based model, we can describe the properties of the agent-based model in terms of the joint probability distribution or the Bayesian network. This also makes calibration of the agent-based model to data a more straightforward task. To this end, this dissertation offers three chapters. First, I attempt to adapt traditional time series methods to the study of the Cantillon Effect and articulate several reasons why this does not work well. In the second chapter, I offer a possible solution to this problem: a novel statistical method of the sort described above. Finally, I argue that a bottom-up, agent-based economics enhanced with Austrian ideas is actually more faithful to Ragnar Frisch's original vision for econometrics than the developmental path the discipline actually took.
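To give a toy illustration of emulation and calibration (a hypothetical model, not the dissertation's procedure): simulate a simple agent-based return process, fit a parametric surrogate distribution to its output, and recover an unknown behavioral parameter by matching a summary statistic:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def abm_returns(herding, n_agents=500, n_steps=2000):
        """Toy agent-based model (hypothetical): each period, agents blend a
        common 'herd' signal with idiosyncratic shocks; the mean of their
        demands is the aggregate market return."""
        common = rng.normal(size=n_steps)
        idio = rng.normal(size=(n_steps, n_agents))
        agent_demand = herding * common[:, None] + (1 - herding) * idio
        return agent_demand.mean(axis=1)

    # "Observed" returns: the ABM run at a herding level treated as unknown.
    observed = abm_returns(herding=0.30)

    # Emulation: fit a parametric surrogate distribution to the ABM output;
    # its parameters summarize the model's distributional behavior.
    loc, scale = stats.norm.fit(observed)
    print(f"surrogate: Normal(loc={loc:.4f}, scale={scale:.4f})")

    # Calibration: pick the herding level whose simulated returns best match
    # a summary statistic of the observed data (a crude grid search).
    grid = np.linspace(0.05, 0.60, 12)
    losses = [abs(abm_returns(h).std() - observed.std()) for h in grid]
    print("calibrated herding:", round(grid[int(np.argmin(losses))], 2))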

Of course, a great deal remains to be done. This dissertation raises many more questions than it answers. Of particular interest is the ontic and epistemic status of so-called "heavy-tailed" distributions in economics. The first chapter demonstrates that, as was well known to Benoit Mandelbrot, price changes exhibit heavy-tailed distributions, rendering traditional regression techniques unreliable for the study of price dynamics. An open question is to what extent these heavy-tailed distributions bear on the Hayekian notion of spontaneous order. Clearly prices convey information, and yet this information does not readily yield to traditional statistical treatment. An interesting question for further research is whether complexity science can give an account of this.
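A minimal sketch of the difficulty (assuming only NumPy): under a heavy-tailed law such as the Cauchy distribution, the running sample mean never settles down as the sample grows, which undermines the moment-based machinery on which regression rests:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100000

    # Draws from a thin-tailed and a heavy-tailed distribution.
    gaussian = rng.normal(size=n)
    cauchy = rng.standard_cauchy(size=n)  # no finite mean or variance

    steps = np.arange(1, n + 1)
    for name, draws in [("gaussian", gaussian), ("cauchy", cauchy)]:
        running_mean = np.cumsum(draws) / steps
        # Inspect the running mean at increasing sample sizes: the Gaussian
        # mean converges toward 0; the Cauchy mean keeps jumping around.
        checkpoints = [100, 1000, 10000, 100000]
        print(name, [round(float(running_mean[c - 1]), 3) for c in checkpoints])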

A final theme of this dissertation is the universality of recursion as a basic intellectual construct. Much of traditional science was instead based on classical set-theoretic mathematics. If we wish economics to be a process-oriented science, then perhaps the modeling framework ought to proceed according to procedural rather than class-based notions of abstraction. Thus, constructive mathematics and theoretical computer science are the mathematical sciences to which we must appeal.