Nick Rowe is concerned that agent-based modelling (ABM) is a black box that provides no intuition and doesn’t really add to our knowledge:
Agent-based models, or any computer simulations, strike me as being a bit like [a] black box. A paper written by a very reliable economist where all the middle pages are missing and we’ve only got the assumptions and conclusions. I can see why computer simulations could be useful. If that’s the only way to figure out if a bridge will fall down, then please go ahead and run them. But if we put agents in one end of the computer, and recessions get printed out the other end, and that’s all we know, does that mean we understand recessions?
My question is: how can a model where you set the rules ever be a black box? Shouldn't the results always be understandable by reference to the initial conditions and the 'rules of the game'?
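To make the question concrete, here is a deliberately tiny sketch of an agent-based model in Python. It's my own toy example, not anything from Rowe or from a real ABM: the 90% spending rate, the 0.8 income threshold, and the random splitting of demand are all assumptions invented purely for the illustration. The point is that the entire rule set fits in a dozen lines, in plain sight, yet the aggregate output series it prints still takes some effort to trace back to individual decisions.

```python
import random

random.seed(42)

N = 100           # number of agents
T = 60            # periods to simulate
AUTONOMOUS = 0.2  # income each agent receives regardless of demand

incomes = [1.0] * N

for t in range(T):
    # The only behavioural rule in the model: spend 90% of income, but
    # cut back to 50% after a bad period (income below 0.8). Both
    # numbers are arbitrary assumptions chosen for this illustration.
    spending = [0.9 * y if y >= 0.8 else 0.5 * y for y in incomes]
    demand = sum(spending)

    # Demand is split among agents in random proportions, so individual
    # agents can have bad periods even when the aggregate is healthy.
    shares = [random.random() for _ in range(N)]
    scale = demand / sum(shares)
    incomes = [AUTONOMOUS + s * scale for s in shares]

    if t % 10 == 0:
        cutting = sum(y < 0.8 for y in incomes)
        print(f"t={t:2d}  output={demand + AUTONOMOUS * N:7.2f}  "
              f"agents cutting back={cutting}")
```

Nothing in this box is hidden. If the output path looks mysterious, the full causal chain from assumptions to printout runs through those few lines, and following it is a matter of effort rather than access.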
I work a lot with computable general equilibrium (CGE) models, which are often referred to as black boxes. It's true that they produce voluminous results that aren't always easy to understand. But the difficulty usually stems from the sheer volume of inputs and outputs, not from any overwhelming complexity in the model itself. Consequently, you can usually supply intuition for the results by going back and examining the initial conditions and 'rules' of the simulation. Maybe ABMs are different because of the complexity of their rules, but that's not the sense I get from talking to modellers.
My hypothesis is that people who point at models and call them black boxes usually just can’t be bothered expending the effort to understand them. As Rowe says, any mechanism that you don’t understand can appear to be a black box. Is there really anything special and different about ABMs?