Digital Meaning


On the Possibilities for, and Limits of, Digital Meaning

0.0 BILL: What is the meaning of digital meaning? What are the potentials for so-called “artificial intelligence?” What are its limits?

0.17 BILL: We’re going to start with Alan Turing, one of the great inventors of modern computing. In a celebrated 1950 article he asked, “Can machines think?”

He went back to Ada Lovelace’s until then largely forgotten article of 1843 where she had predicted that computing machines would be able to do remarkable calculations, but only according to the instructions they had been given.

Turing agreed, but also wanted to push the case further: machines could come up with surprising answers to mathematical problems, and, step by step, they could act on their own calculations by automatically applying further calculations.

0.56 BILL: Here is the Manchester Mark 1 computer that Turing and his colleagues built with leftover parts salvaged from the code-breaking machines they had developed during the Second World War. By 1949, they were able to announce in an article in The Times newspaper that this “mechanical brain,” as they called it, had done something that was practically impossible to achieve on paper. It had found some previously undiscovered, extremely large prime numbers.

  • Reference: Cope, Bill and Mary Kalantzis, 2020, Making Sense: Reference, Agency and Structure in a Grammar of Multimodal Meaning, Cambridge UK, Cambridge University Press, pp. 160-63.

1.24 BILL: And here is Turing getting a surprise. “How did this happen?” is his note on the teletype printout.

How did the computer arrive at this calculation? Perhaps it was a bug, or perhaps it was a new insight based on more laborious calculations than any human could feasibly perform.

So, machine intelligence is much less than human intelligence—computers can only calculate. And it is much more—computers can calculate more numbers and faster than humans. We have cause to be in awe at the super-human brilliance of their feats of calculation. But we also need to be cautious about what computers cannot do.

2.06 BILL: To reframe the question in terms of our grammar, what are the limits of the transposability of various kinds of meaning into quantity? Here are just two examples of many possible transpositions. I have three dogs. Each is an irreducibly different dog, but when I choose some criterial features in the concept of dog, and transpose my dogs into number, I have the property, three.
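A minimal sketch of this first transposition, with invented dog instances: picking a criterial feature turns irreducibly different instances into a countable quantity.

```python
# Transposing instances into quantity: each dog is an irreducibly
# different instance, but once we pick a criterial feature for the
# concept "dog", the instances can be counted.

instances = [
    {"name": "Rex", "species": "dog", "color": "black"},
    {"name": "Bella", "species": "dog", "color": "golden"},
    {"name": "Spot", "species": "dog", "color": "white"},
    {"name": "Whiskers", "species": "cat", "color": "grey"},
]

# The criterial feature chosen for the concept "dog".
dogs = [i for i in instances if i["species"] == "dog"]

print(len(dogs))  # 3 -- the property "three", abstracted from the instances
```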

2.34 BILL: And another transposition: let’s say one of our dog instances is a Golden Retriever; then I have distinguished that instance by its color. But when I take a digital photograph, the color of the dog is represented by thousands of pixels, each a particular mix of red, green and blue in one of sixteen million possible combinations and intensities. Capturing the image on a camera, then rendering the image to a screen, involves a humanly impossible amount of absolutely tedious counting and calculation.
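To make the arithmetic behind “sixteen million combinations” concrete, assuming the common encoding of eight bits per color channel:

```python
# One pixel in the common 24-bit encoding: 8 bits (0-255) for each of
# red, green and blue.

levels_per_channel = 256                 # 2**8 intensities per channel
print(levels_per_channel ** 3)           # 16777216 possible combinations

# A "golden" color, as a camera might record one pixel of the dog:
golden_pixel = (218, 165, 32)            # (red, green, blue) intensities

# A 12-megapixel photograph is roughly this many tedious little numbers:
print(12_000_000 * 3)                    # 36000000
```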

So, we want to argue that…

3.09 BILL: The limits of artificial intelligence are the limits of the transposability of meanings into quantity.

Now, we want to propose that computers can mechanize four primary kinds of transposition. This is all they can do: remarkable, but no more. This is a way to work out which parts of machine intelligence are smarter than humans, and which parts can never be as smart. Then artificial intelligence becomes an oxymoron, because it is completely artificial and barely intelligent in the way we apply that word to humans.

  • Reference: Cope, Bill and Mary Kalantzis, 2020, Making Sense: Reference, Agency and Structure in a Grammar of Multimodal Meaning, Cambridge UK, Cambridge University Press, pp. 169-72.

3.42 MARY: The first of these four transpositions of meaning into quantity, I want to call “namability.” In the world of digital text, we have relatively stable and mostly universal schemas for naming things.

Unicode is the universal character set for the digital representation of text in every human language: over 100,000 characters, now including thousands of emojis. The visual design of text is stabilized in HTML. The workings of the body are described in tens of thousands of points of detail in the International Classification of Diseases, used by medical practitioners and insurance companies. Place is definitively identified in numerical GPS coordinates and named in GeoNames. iCal connects people to places and times in a shared language of events. URLs are billions of unique names, one for every page of the web. The schemas that organize this naming are called ontologies – and there are many more. The names are written in Unicode for the purposes of representation and communication.

The scale, stability and universality of the naming is something unique to the digital era. But the calculations involved in making the names are trivial compared to the extent and complexity of the naming itself.
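A small illustration of namability (the character examples, coordinates and URL below are our own stand-ins, not entries taken from any particular schema): each of these schemas assigns things names that machines can store and compare.

```python
# Namability: stable, universal schemas assign every character, place
# and resource a name a machine can store and compare.

# Unicode: every character has a code point.
for ch in "Aあ🙂":
    print(ch, hex(ord(ch)))          # A 0x41, あ 0x3042, 🙂 0x1f642

# GPS: a place named as a pair of numbers (roughly Urbana, Illinois).
urbana = (40.11, -88.21)

# URL: a unique name for a page of the web (example domain).
page = "https://example.com/page-1"

print(urbana, page)
```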

5.17 MARY: Now I want to make a grammatical point, on the difference between an instance and a concept. This bag is an instance, with a barely speakable name – in fact a name that can at best only be read, which is a multimodal practice requiring transposition from text. But mostly it is not even read by humans – machines read the barcode, which allows you to see where your bag is on your airline app. This is the internet of things, where objects can speak their names.

5.53 MARY: Now here is a concept in our grammatical terms, because the name on the barcode is not for a single item but for a product. A huge number of these sardine cans has probably been manufactured. I might have purchased one instance at a particular time and place. You might have purchased another one at another time and place. There are 110,000 products in a Walmart supermarket and hundreds of millions on Amazon; all are unique, and all are classified into categories.
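A sketch of this distinction in data terms (the product code, description and purchase records below are invented for illustration): one name serves the concept, while each instance carries its own time and place.

```python
# Concept vs. instance: one barcode number names the product as a concept;
# each can bought is a separate instance with its own time and place.

product = {                          # the concept, named once
    "code": "0000000000000",         # hypothetical product code
    "description": "sardines in oil, 120 g",
}

purchases = [                        # instances of the concept
    {"code": product["code"], "when": "2020-03-01T10:12", "where": "Urbana, IL"},
    {"code": product["code"], "when": "2020-03-02T16:40", "where": "Chicago, IL"},
]

print(len(purchases), "instances of one concept:", product["description"])
```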

Naming, and the ontologies that put these names together into a theory of all the meanings in the world, are more significant and powerful than the always relatively trivial counting and calculation. This is our critique of algorithmic reason.

6.43 MARY: Let me track back for a moment into the history of computable meaning. Here is the most published and famous linguist of all time, Noam Chomsky.

6.53 MARY: Chomsky proposed that language had a simple logical kernel, where the names of things could relatively easily be swapped out. The sentence above could in the same structure be “the cat ate the sardines.” He didn’t care about the differences between balls and sardines.

In the 1950s, people thought this kind of approach would offer a path to computing human meaning. But it didn’t, because, among the billions of things in the world, the difference between sardines and balls is significant. We don’t normally eat balls or throw sardines.

7.31 MARY: Now here is a linguist you probably don’t know, Robert Mercer, who earned his Ph.D. in 1972 from the University of Illinois. He was a leader in a different approach to meaning in text called “statistical machine translation,” which looked for patterns of words – the word “eat” was often near the word “sardine,” and the word “throw” often near the word “ball.” The names of things became important to machine reading of text.
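A minimal sketch of that co-occurrence idea, assuming nothing more than a toy corpus of invented sentences (real systems count over billions of words):

```python
# Statistical approach to meaning: count which words tend to occur near
# which other words, instead of relying on sentence structure alone.

from collections import Counter

corpus = [
    "we eat sardines", "cats eat sardines", "children throw the ball",
    "she will throw a ball", "we eat bread",
]

pairs = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for neighbour in words[max(0, i - 2): i]:   # words just before w
            pairs[(neighbour, w)] += 1

print(pairs[("eat", "sardines")])    # 2 -- "eat" is often near "sardines"
print(pairs[("throw", "sardines")])  # 0 -- but "throw" is not
```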

By the way, Robert Mercer was so good at numbers and at making sense of the meanings of investments that he became hugely rich as a hedge fund operator. He then spent some of his fortune to fund Donald Trump’s election campaign, applying his insights about statistical approaches to language to the campaign’s Facebook advertising, finely differentiating target groups with tens of thousands of different adverts.

8.33 MARY: The second major transposition is counting – how many bags are going onto the plane, and their total weight, or how many sardine cans have been sold and whether more need to be ordered to keep the shelf stocked.

8.48 MARY: Each bag or sardine can is an instance of a concept (or an absence, in the case of zero), and these can be counted. There’s nothing so very intelligent about this, except the amount of counting and its mechanical accuracy.
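A sketch of this kind of counting, with invented stock figures: the intelligence is slight, but the tally is fast and mechanically exact.

```python
# Counting instances of a concept: how many cans have been sold today,
# and whether the shelf needs restocking.

cans_on_shelf = 48
cans_sold_today = 35
reorder_threshold = 20

remaining = cans_on_shelf - cans_sold_today
print(remaining)                        # 13
print(remaining < reorder_threshold)    # True -- time to order more
```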

9.05 MARY: And one more aspect of counting. Here is Claude Shannon, author of a book published by the University of Illinois Press in 1949, “The Mathematical Theory of Communication.” In 1938 Shannon had come up with the idea that relay circuits, or on/off switches, could represent mathematical symbols as zeros and ones. Then the work of calculation could be done electrically.

9.33 MARY: Applying the elementary logic of the nineteenth-century mathematical philosopher George Boole, he suggested that when the circuit is closed a proposition could be considered false, and when it is open it could be considered true. If there were just two numbers in the world, zero and one, represented in relays that are on or off, calculation could be reduced to the simplest of logical steps.

We should call this the binary age, not the digital age. The number of fingers we have as a species is accidental and no longer of any particular relevance to the quantification of meanings.
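Here is a small sketch of Shannon’s insight, with Python’s Boolean bit operations standing in for relays: a “half adder” that adds two one-bit numbers using nothing but AND and XOR.

```python
# Shannon's insight, sketched in software: if relays represent 0 and 1,
# then logical operations on those relays can do arithmetic.
# A "half adder" adds two one-bit numbers using only XOR and AND.

def half_adder(a, b):
    total = a ^ b      # XOR gives the sum bit
    carry = a & b      # AND gives the carry bit
    return carry, total

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))   # (carry, sum)
# 1 + 1 = (1, 0), i.e. binary 10 -- calculation out of pure logic
```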

10.11 MARY: On to the third transposition now: the qualities of things can be measured and these properties recorded in number – distances, weights, temperatures, colors and a myriad of other things.

10.26 MARY: We have instruments that “read” these things automatically and communicate their results – this is a kind of ambient intelligence which guides us on our way with GPS, reminds us of a meeting time, or warns us that it is going to rain soon.
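As one example of measurement turned into calculation (assuming a spherical Earth of radius 6371 kilometres and using approximate coordinates), the distance between two GPS readings is itself just more arithmetic.

```python
# Measurability: qualities like distance become numbers that can be
# calculated over. Here, the great-circle distance between two GPS fixes
# (haversine formula, spherical-Earth approximation).

from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0  # mean Earth radius in kilometres
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Roughly Urbana, Illinois to Chicago, Illinois.
print(round(haversine_km(40.11, -88.21, 41.88, -87.63), 1))  # about 200 km
```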

10.46 MARY: Finally, a fourth transposition: calculations can be used to make meanings material – to make them visible or hearable. The word you see here, “renderability,” was converted to number from my keyboard, then recreated pixel by pixel here on the screen – using an amazing amount of incredibly boring number.

11.11 MARY: The more pixels, the more dumb calculation, the sharper the letter. Rendering involves a painful amount of absolutely tedious transposition through number and binary calculation.
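A toy sketch of rendering, with a hand-made bitmap standing in for a real font glyph: the letter is nothing but a grid of on/off values turned back into something visible.

```python
# Renderability: a letter is transposed into a grid of numbers, then the
# grid is turned back into something visible. More pixels, sharper letter.

GLYPH_R = [          # a hand-made 5x7 bitmap of the letter "R"
    "1111.",
    "1...1",
    "1...1",
    "1111.",
    "1.1..",
    "1..1.",
    "1...1",
]

for row in GLYPH_R:
    # Render each 1 as a block and every other cell as blank space.
    print("".join("█" if cell == "1" else " " for cell in row))

# At screen resolution this would be hundreds of pixels per letter, each
# an RGB triple -- the same idea, just vastly more counting.
```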

11.24 MARY: Here now is two-dimensional counting of the three colors which, in different combinations, can represent millions of colors.

11.33 MARY: Now, counting the tiny points in three dimensions in 3D printing.

11.38 MARY: All of these meanings can be manufactured with the support of quantification. Some parts of human experience, however, cannot be manufactured by calculation—tastes, smells, or emotions for instance.

11.50 MARY: So, words, images and the meaningful world can be comprehensively named, counted, measured and rendered using technologies of calculation. Artificial intelligence looks for patterns in the numbers – words that are commonly found together, the pattern of pixels that is your face alone, measurable quantities, and so on. Machine learning is where the machine sees repeated patterns in the numbers.
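A minimal sketch of “seeing patterns in the numbers,” with tiny invented vectors standing in for pixel patterns: a new measurement is simply assigned to whichever stored pattern it lies nearest.

```python
# Machine "learning" as pattern-matching over numbers: a new vector of
# measurements is matched to whichever remembered pattern it is nearest.

from math import dist   # Euclidean distance, Python 3.8+

known_patterns = {       # invented stand-ins for, say, pixel patterns of faces
    "face_A": [0.9, 0.1, 0.4],
    "face_B": [0.2, 0.8, 0.7],
}

new_measurement = [0.85, 0.15, 0.5]

closest = min(known_patterns,
              key=lambda k: dist(known_patterns[k], new_measurement))
print(closest)           # face_A -- the nearest pattern in the numbers
```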

But the intelligence is not in the calculation – it is in the namability, countability, measurability and renderability of the world. In these four ways, the processes of transposing meaning through binary number using artificial intelligence are nothing like human intelligence, and can never be. Artificial intelligence is both a lot more and a lot less than human intelligence.

  • Reference: Cope, Bill, Mary Kalantzis and Duane Searsmith, 2020, “Artificial Intelligence for Education: Knowledge and Its Assessment in AI-Enabled Learning Ecologies,” Educational Philosophy and Theory, 52:5, 1-17. doi: http://doi.org/10.1080/00131857.2020.1728732