(More fundamentals of Critical Code Studies)
What is the edge between math and code? Where does Critical Code Studies end and Critical Math Studies begin? Well, I'm not willing to make such exclusionary moves just yet. But while I know there has been good work by the likes of Brian Rotman and others, looking at mathematical entities as small as the 0, for me, code gets most interesting when it has a little more to it than an equation might have. Just what is that "more"?
To answer this question, in a roundabout way, I've been pursuing the line between math and code, or between algebra and code. Earlier on Twitter, I asked whether the invention of the lambda calculus made all code studies math studies by default. I'm not willing to go that far just yet.
Item for today: =
In Donald Knuth and Luis Trabb Pardo's article on the early history of programming languages, they note the moment at which = moves from equivalence to assignment. Here is a moment where mathematical notation and code separate on the basis of assignment, where the sign moves from a realm that represents abstractions to a realm that controls memory locations.
For all intents and purposes, the algebraic statement x = 0 and the line of computer code x = 0; seem to mean the same thing.
However, at the most fundamental level, they are not. The one establishes an equivalence of signs. The other tells the computer to store the value 0 in the location represented by x.
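To make the difference concrete, here is a minimal sketch in C (my own choice of language for illustration; the variable name is just the x from above). The assignment does not assert anything about x; it commands the machine to write a value into a particular memory location, one whose address we can even print.

    #include <stdio.h>

    int main(void)
    {
        int x = 0;                                   /* assignment: store the value 0 at the location named x */
        printf("value of x: %d\n", x);
        printf("location of x: %p\n", (void *)&x);   /* the memory address the name x stands for */

        x = 2;                                       /* not a claim of equivalence, but a command to overwrite that location */
        printf("value of x: %d\n", x);
        return 0;
    }

Running this prints the contents of that location and the address itself, a small reminder that the sign x here names a place in memory rather than an unknown quantity.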
In CCS, we have not just a mathematical system, though surely much of what algorithms do is mathematical. When critics talk about the materiality of these performative declarations in programming languages, however, they are talking about this latter notion of x = 2.
Again, I don't want to rule out the possibility of critically analyzing mathematics. I just want to talk about this moment of separation, where computational instructions gain additional semantic meaning because their signs are not just representations but commands with material ramifications.
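One further sketch, again in C and again only illustrative, makes the point about commands rather than representations: a statement like x = x + 1 has no solution as an algebraic equation, yet it is an entirely ordinary instruction in code, a command to read a location, add one, and write the result back.

    #include <stdio.h>

    int main(void)
    {
        int x = 0;
        x = x + 1;                    /* unsolvable as an equation, routine as a command: read x, add 1, store the result back */
        printf("x is now %d\n", x);   /* prints: x is now 1 */
        return 0;
    }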