There’s a tendency among the worst dregs of postmodern discourse to make overly strong claims about how inherently, and to what degree, our thoughts / cognitive structures / internal life are a product of social context. “There is nothing outside the text” is a particularly obnoxious refrain, and it has been used repeatedly in the science wars to try to marginalize the formative role of cognitive interaction with the objective physical world. One can interpret “nothing outside the text” in different ways, but “nothing is outside context” is trivial and “there isn’t any objective material reality” is silly; the fairest and most substantive translation, I feel, is a stupendously greedy social constructionism. That is to say, the hypothesis that while biology sets the foundational structure of our mind, practically everything from there on out is socially inherited; the mechanisms we use to perceive the world are learned patterns from social interaction. Or that such social factors constitute so vast a majority of the influence on them as to make nothing else ever relevant.

Well, it occurs to me that this is actually a testable assertion. And that test is called the Turing test.

We’re all familiar these days with cheap chatbots that use lookup tables of previously recorded human interactions to try to approximate some semblance of conversation. The immediately apparent limitations, handling unforeseen prompts and maintaining coherence between responses over an extended conversation, are actually quite fundamental, especially in the case of the latter. Right now we can crudely mimic the operations of neural networks with programming constructs, feed them examples of language, and let them build associations out of it. But it’s not enough merely to fully map out associations; those associations must be appropriately weighted. And therein lies the trouble, because the weightings of associations viewed at face value in aggregate are not directly reflective of the strength of the underlying conceptual weightings we undertake in our own heads. To take an overly simple example, the word “Constantinople” may be matched with the word “crusades” more often than it is with its actual synonym “Istanbul”, so it’s unlikely for a program running through all recorded texts to build the meta-structure out of associations necessary to start using “Istanbul” and “Constantinople” interchangeably. Remember, the sum of all human exchanges throughout history, much less the portion written down in English, is still very much finite. Certain realities of calculational complexity are still going to apply.
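
To make the toy example a little more concrete, here’s a rough sketch of the kind of raw co-occurrence counting I have in mind (the corpus is invented purely for illustration, and real systems are of course far more sophisticated):

```python
from collections import Counter
from itertools import combinations

# Toy corpus, invented for illustration: most sentences that mention
# Constantinople talk about the Crusades; only one pairs it with Istanbul.
corpus = [
    "the crusades besieged constantinople",
    "constantinople fell during the crusades",
    "the fourth crusades sacked constantinople",
    "constantinople was renamed istanbul",
    "istanbul sits on the bosphorus",
]

# Count how often each pair of words appears in the same sentence.
cooccurrence = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccurrence[(a, b)] += 1

print(cooccurrence[("constantinople", "crusades")])   # 3
print(cooccurrence[("constantinople", "istanbul")])   # 1
# Raw association strength tracks co-mention frequency, not synonymy:
# nothing here tells the program the two city names are interchangeable.
```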

In many respects what we end up saying in outright language is a cipher that results from asymmetric calculations involving an underlying conceptual space. And the most meaningful construction of a Turing test† is one that tests the replication of that conceptual space. Merely plugging meta-structures into an association-forming program from the get-go, structures that recognize conceptual relations like verb/subject/object or synonyms, or much higher-level but still utterly simple concepts like temporality, localization, and discreteness-versus-continuity, is a valid medium-term approach but not particularly relevant to testing the way we learn. Some basic Chomskian grammar and physics (causality, it’s a thing?) might be allowable, but the point is we don’t come into this world bundled with the sort of advanced philosophical concepts at play in any intelligent conversation; we have to form them out of associations from experience, much like a neural network starting mostly from scratch. And here’s where we can actually play around with the weighting of social versus physical experience and the resulting complexities involved in building a conceptual space capable of generating conversation.
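
As a crude illustration of the two starting conditions (the relation names and seed entries below are invented for the sake of the sketch, not drawn from any real system):

```python
# Condition A: meta-structure pre-plugged from the get-go.
seeded_relations = {
    "synonym": {("istanbul", "constantinople")},
    "subject_verb_object": {("crusaders", "besieged", "constantinople")},
    "temporal_order": {("siege", "fall")},
}

# Condition B: the same slots, but empty; the system has to grow the
# structure itself out of weighted associations from raw experience.
grown_relations = {
    "synonym": set(),
    "subject_verb_object": set(),
    "temporal_order": set(),
}

def knows_synonym(relations, a, b):
    """Check whether the system treats two words as interchangeable."""
    return (a, b) in relations["synonym"] or (b, a) in relations["synonym"]

print(knows_synonym(seeded_relations, "constantinople", "istanbul"))  # True, but only by fiat
print(knows_synonym(grown_relations, "constantinople", "istanbul"))   # False until it's learned
```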

In short I want to argue that children are able to make the language-to-underlying-concept jump because they have a very significant pre-built conceptual space, formed from physical experience, to match language to. And so subsequent physical experience can have a dramatic effect upon conceptual structures more fundamental than the socially constructed ones. Ultimately, at root, we view physical experience with the inert world less through the lens of social construction than we view social constructions through the lens of physical experience (even if in our present society most people past a certain age largely remove themselves from new experiences of the physical world). Consequently, there’s at least some core reality to what’s broadly labeled “Science!” that’s utterly immune to the critiques of Postmodernism.

Anyway, the point is we don’t have to appeal to fragmentary historical case studies of, say, people born blind and paraplegic (although every example I’ve found was characterized as ridiculously mentally limited); the development of AI provides about as perfect a test as one could hope for. As we start to grow†† stronger AIs we’ll reach a point where any crudely pre-plugged-in biases or frameworks that we might start them off with are going to be less dynamic and nuanced than those they’re (mostly) able to grow themselves. Watch how fast they’re able to crack/reproduce the conceptual meta-associative structures behind language. I’m inclined to believe that AIs fed solely text, without at least some form of hands-on contact with the objective physical world, will never be able to crack language and hold conceptually coherent, much less substantively deep, conversations. Or at least that it will be many orders of magnitude easier for those AIs with hands-on contact. Further, the same pattern validating the critical role physically derived concepts play in human perception will continue into higher levels. We may learn basic things like causality at an early age (informing distinctions like subject-verb-object), but humans don’t stop learning new things about how the material world works at some point and only then move on to language; many more complex realities of nature that our own neural nets come to recognize relatively late in life (turbulence, entropy, etc.) continue to significantly influence our conceptual space.
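
For what it’s worth, the shape of the test I’m imagining looks something like the following sketch; the agents, prompts, and coherence score are all hypothetical placeholders rather than any existing benchmark:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """One contestant in the test: a name plus a function mapping prompt -> reply."""
    name: str
    respond: Callable[[str], str]

def conceptual_coherence(agent: Agent, prompts: list) -> float:
    """Placeholder judge: in a real test, human interrogators would score each
    reply for conceptual weirdness/nonsense/inefficiency; here a reply simply
    scores 1 if non-empty, 0 otherwise, as a stand-in metric."""
    replies = [agent.respond(p) for p in prompts]
    scores = [1.0 if reply.strip() else 0.0 for reply in replies]
    return sum(scores) / len(scores)

# Stand-in agents; in the actual experiment both would be grown, not scripted:
# one fed text alone, one also given hands-on contact with the physical world.
text_only = Agent("text-only", lambda p: "")
embodied = Agent("text plus hands-on", lambda p: "a grounded reply")

prompts = ["What happens if you drop a full cup?", "Is Istanbul Constantinople?"]
print(conceptual_coherence(text_only, prompts))  # 0.0 under this toy scoring
print(conceptual_coherence(embodied, prompts))   # 1.0 under this toy scoring
```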

† I’m ignoring, among simpler versions, Turing tests that search for common examples of human stupidity or limitation, as I feel these are functionally irrelevant to the topic, but also because I don’t see any reason to believe there are issues of calculational complexity in human stupidity of comparable orders of magnitude to what I’m discussing. So the Turing test relevant here is one that merely searches for conceptual weirdness/nonsense/inefficiency in the conversation and is perfectly okay with the interlocutor being able to spit out the millionth digit of Pi in milliseconds. Transhuman cyborg geniuses (or, say, a kid with a calculator) should be able to pass the Turing test, as a Turing test that excludes all augmented minds that were originally human misses the whole freaking point of a Turing test.

†† Can I just use this moment to harshly push a bit of radical tech language, like AFK versus IRL, and emphasize that we really should start making the conscious choice to use the term “grow” instead of “build” when talking about AI development?