An argument of which Hilary Putnam was fond in his post-computational-functionalism years concerns “computational plasticity”: computationalism suffers from the same problem for which its early proponents criticized the type-physicalist theories over which computational functionalism was meant to be an advance, namely that it has no way of establishing any physical, logical, or metaphysical identity between psychological properties and computational properties. This is, in effect, the multiple realizability argument extended to computational functionalism. At the beginning of Representation and Reality, Putnam summarizes the argument: “mental states are not only compositionally plastic (the same ‘mental state’ can, in principle, be a property of systems which are not of the same physical constitution) but computationally plastic as well—the same mental state (e.g., the same belief or desire) can in principle be a property of systems which are not of the same computational structure. Mental states cannot literally be ‘programs,’ because physically possible systems may be in the same mental state while having unlike ‘programs’” (Putnam 1988, xiv).
A simple example substantiates this computational plasticity argument. Suppose we have two chess engines, say AlphaZero and Stockfish, both of which play chess exceedingly well. AlphaZero and Stockfish are mechanistically distinct systems (Silver et al. 2017, Silver et al. 2018). To note some differences: they employ very different data structures to “envision” the board. Stockfish represents the game state with bitboards, encoding the board as a bit array, while AlphaZero determines and represents the board state with a convolutional neural network. So for Stockfish, the space in which the game takes place is the linear bit array which serves to represent the board; for AlphaZero, it is the output vector of the convolutional neural network, which takes as input an image of the board state and delivers as output a high-dimensional vector. They thus “see,” and in a sense appear to be “thinking about,” different things altogether. To evaluate moves, Stockfish uses alpha-beta pruning while AlphaZero uses Monte Carlo Tree Search. Importantly, while Stockfish uses handcrafted evaluation functions to assess the relative merit of the branches of the search tree (although recent versions of Stockfish rely solely on an NNUE [efficiently updatable neural network] for evaluation), AlphaZero learns its evaluation function from many rounds of self-play using a reinforcement learning algorithm. The computational states that underlie even the same choices in the same game situations are thus quite distinct in the two systems. These two systems, computationally distinct but performing functionally similar tasks, present us with the same sort of challenge as two distinct neurobiological states realizing the same intentional state: in the latter case, the intentional state is physically multiply realized; in the former, it is computationally multiply realized. If so, then computationalism does no better than the identity theories at telling us what the nature of mental states is.
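To make the contrast vivid, here is a minimal sketch in Python of one and the same toy position encoded both ways; the encodings are simplified stand-ins of my own devising, not Stockfish’s or AlphaZero’s actual internals:

```python
# Toy illustration: one chess position, two very different computational states.
# Neither encoding is Stockfish's or AlphaZero's actual internal format; both
# are simplified stand-ins chosen only to make the contrast concrete.

# A tiny position: white king on e1 (square 4), white rook on a1 (square 0),
# black king on e8 (square 60); squares are numbered 0..63 from a1 to h8.
position = {"K": [4], "R": [0], "k": [60]}

def to_bitboards(pos):
    """Stockfish-style: one 64-bit integer per piece type, with bit i set
    iff that piece stands on square i."""
    boards = {}
    for piece, squares in pos.items():
        bb = 0
        for sq in squares:
            bb |= 1 << sq
        boards[piece] = bb
    return boards

def to_feature_planes(pos, piece_order=("K", "R", "k")):
    """AlphaZero-style: a stack of 8x8 binary planes, one per piece type,
    of the kind fed to a convolutional network."""
    planes = [[[0] * 8 for _ in range(8)] for _ in piece_order]
    for idx, piece in enumerate(piece_order):
        for sq in pos.get(piece, []):
            planes[idx][sq // 8][sq % 8] = 1
    return planes

print(to_bitboards(position))          # {'K': 16, 'R': 1, 'k': 1152921504606846976}
print(to_feature_planes(position)[0])  # the king plane: an 8x8 grid with a single 1
```

Both objects pick out the same position, yet a process manipulating packed integers and a process convolving over feature planes share essentially none of their intermediate states.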
But perhaps there is a way to type the distinct computational states so that we secure the equivalence of the computational states of, in our example, AlphaZero and Stockfish when they are in the state we might identify as the intentional state “In this chess position, the bishop should be moved to such-and-such square”? Putnam argues that there is not; moreover, he contends, even “if such an equivalence relation existed, it would be undiscoverable—not just undiscoverable by human beings, but undiscoverable by physically possible intelligent beings.” This is because generating such an equivalence class would require enumerating all the possible computational states that realize the relevant intentional state, which is an infinitary task; or else it would require putting a priori constraints on the functional (in this case computational) properties of any mental state (for instance, by limiting how the board may be ‘seen’ by a chess-playing system if it is to count as playing chess), which is an arbitrary way to parse mental states and is in any case also an infinitary task. This is the so-called Equivalence Argument: there is no way to secure an equivalence class of all the computational states that may realize any arbitrary mental state (Putnam 1988). Unfortunately, not much has been said about this fascinating argument (for a significant exception, see Buechner 2008). [1]
The reason, I suspect, is that many philosophers who would otherwise care about the fate of computational functionalism take for granted that such an equivalence class can be found, and perhaps this presumption rests on an iffy, if implicit, grasp of the Church-Turing thesis. After all, a natural response to the Equivalence Argument is to ask, “Doesn’t the Church-Turing thesis guarantee that there is some such equivalence class?” The essence of this retort is something to the following effect: if we fixed a universal model of computation, say a Turing machine, into which we could translate the programs implemented by any other computational system, could we not simply take the relevant equivalence class to be defined by the Turing machine program which realizes the relevant state? There are two issues with this strategy:
First, it assumes that there is one and only one program for computing a function once a model is specified. This is of course a false assumption, for several reasons. One reason is that many parameters can differ between models of the same kind, for instance the set of symbols over which the machine operates. But even if we mandated what these parameters must be, there would remain the simple mathematical fact that many functions are amenable to solution by differing algorithms: there is more than one way to compute a factorial even on a Turing machine with a fixed symbol set, as the sketch below illustrates. And the problem is precisely to state what unites all the different algorithms which realize the same function. To say that they are unified by the fact that they realize the same function makes no progress at all over the identity theories.
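A minimal sketch of the point, in Python rather than on a Turing machine (the illustration carries over): two algorithms that compute the very same factorial function while passing through entirely different intermediate states.

```python
# Two algorithms, one function: both compute n!, but the intermediate
# computational states they pass through are entirely different.

def factorial_iterative(n):
    # Multiply 1 * 2 * ... * n from left to right.
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_split(lo, hi):
    # Multiply the integers in [lo, hi] by recursively halving the range.
    if lo > hi:
        return 1
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    return factorial_split(lo, mid) * factorial_split(mid + 1, hi)

assert factorial_iterative(10) == factorial_split(1, 10) == 3628800
```

Nothing unites the two procedures short of the fact that they compute the same function, which is precisely the circularity at issue.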
Second, this approach assumes that we can translate, without loss, from the level of abstraction of the different programs which realize the mental state of interest into the level of abstraction of whatever model we fix to define the equivalence. But it might turn out that the relevant psychological properties we are interested in arise out of features of what computer scientists term the “natural level of abstraction” (Gurevich 1999) [2] of the original programs, that is, the very architectural primitives of the system, which have no equivalent in the model into which we translate them (this recalls Searle’s biological naturalism). We know this is the case because, for instance, certain properties which are hard to get out of sequential machines, such as “free generalization,” arise “cheaply” out of connectionist machines. Or consider a toy example. Imagine a finite-state deterministic program designed by a Laplacean agent, consisting of a lookup table encoding every possible decision problem S in some finite universe, together with its corresponding response. Call this device a Laplacean program. From a purely formal perspective, the Laplacean program is equivalent to some actually intelligent system (perhaps the very Laplacean agent who designed it) which does not rely on a lookup table, so we could say that it is an intelligent system. From another, more thoroughgoing perspective, on which intelligence requires solving problems without being pre-given their solutions, the Laplacean program is not intelligent. If intelligence is a property of interest to us, then we get it from the Laplacean agent but not from the system they build.
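Here is a toy rendering of the point in code, with the ‘finite universe’ shrunk to single-digit addition; the names and the problem domain are mine, purely for illustration:

```python
# Toy "Laplacean program": a lookup table covering every problem in a tiny
# finite universe (single-digit addition), versus a system that computes.

# The Laplacean agent enumerates every problem and its solution in advance;
# the resulting table is all the "program" ever consults.
LAPLACEAN_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def laplacean_program(a, b):
    # Retrieves a pre-given answer; it never works anything out.
    return LAPLACEAN_TABLE[(a, b)]

def computing_system(a, b):
    # Arrives at the same answers without being handed them in advance.
    return a + b

# Input-output equivalent across the whole finite universe...
assert all(laplacean_program(a, b) == computing_system(a, b)
           for a in range(10) for b in range(10))
# ...yet the solving was done by the table's builder, not by the table.
```

Extensionally the two are indistinguishable over the finite universe, but only the builder of the table ever solved anything.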
So Putnam’s challenge to computational functionalism seems to endure.
Notes
1. Buechner’s response to Putnam is spirited, but I think it does no more than show that for any finite set of computational states there might be a way of drawing up (gerrymandering might be the appropriate term) an equivalence class; as long as the set is left open, which is Putnam’s point, computationalism is no better than the identity theories. In short, the best computationalism can be is an ad hoc empirical theory, not a principled theory of intentional states.
2. Incidentally, Gurevich’s Abstract State Machine might provide a way out of Putnam’s Equivalence Argument, but the argument is likely to be very tortuous. I have been trying for quite a while to formulate just such an argument, but it has proved very challenging.
References
Jeffrey Buechner. Gödel, Putnam, and Functionalism: A New Reading of Representation and Reality. 2008.
Yuri Gurevich. ‘The Sequential ASM Thesis.’ Bulletin of the European Association for Theoretical Computer Science. 1999.
Hilary Putnam. Representation and Reality. 1988.
David Silver et al. ‘Mastering the game of Go without human knowledge.’ Nature. 2017.
David Silver et al. ‘A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play.’ Science. 2018.