In a short time, chatbots have evolved to the point of passing as “intelligent.” A machine passes as “intelligent” if it can fool a human interlocutor into perceiving the conversation as “human.” But this “imitation game” conceals the fact that no chatbot comes close to expressing critical thinking, let alone consciousness or subjective experience.
A 2016 study on insects asks whether insects are somehow “sentient” or should be considered little more than tiny biological marvels driven by evolutionary traits: mere collections of reflexes, not that different from, say, small robots.
The study, by honeybee scientist Andrew Barron and philosopher Colin Klein from Macquarie University in Australia, cast doubt on this common assumption by exploring the possibility of subjective experience (that is, the ability to perceive oneself as an entity distinguishable from one’s surroundings) in these invertebrates.
In the case of vertebrates, subjective experience “is supported by integrated structures in the midbrain that create a neural simulation of the state of the mobile animal in space,” an ability referred to as “phenomenal consciousness.”
Cognition in invertebrates
But what about insects? According to Barron and Klein, structures in the insect brain can support a similar experience despite the difference in complexity (the honeybee brain has fewer than a million neurons, whereas the human brain has some 100 billion).
Insects’ cognitive abilities suggest they are more than organisms reacting to specific environmental stimuli through reflexes and innate behavior ingrained in their genetics, or so the study suggests.
If vertebrates perceive the environment from a first-person perspective, insects would move around thanks to a similar “phenomenal consciousness,” an ability that inanimate objects, plants, and robots don’t share.
In its most complex forms, as in humans, consciousness fosters self-reflection, even though sometimes we may try to escape the existential angst of “being aware that we are aware.”
To neuroscientist Björn Merker, all humans need for the arousal of awareness is already in the midbrain, a primitive neural core similar in all vertebrates that, in our case, is surrounded by a massive neocortex.
As they ventured into dangerous, changing terrain, animals evolved a brain core capable of integrating different streams of information that, once assembled, offered a richer perspective of the world. Insects would activate a similar neural model that increases their survival chances.
Schopenhauer, music, and language
Self-reflection, as we understand it in the human realm, is defined by our ability to influence our thoughts through language. Two schools of linguistics have debated whether language determines human thought or merely influences it.
There are thoughts that seem to emerge from a depth that precedes language, as artistic expression would suggest. German philosopher Arthur Schopenhauer, in his quest for a primeval, universal language of the senses, argued that music is an unfiltered translation of the emotions that nature arouses in us.
Unlike other arts, which according to Schopenhauer are more subjected to the human cognitive and emotional filter, music is a “manifestation of will” and therefore the least adulterated translation of the inspiration we find in nature: of the order arising from chaos, of the patterns we appreciate around us. Music and nature share parallel features.
Deterministic linguists argue that we depend on language to reflect on the world with self-awareness; according to them, our language also determines, or at least conditions, our thought.
Others are not as sure, at least when it comes to our ability to elaborate abstract thought: Albert Einstein famously experienced language delay, and his early struggle with complete sentences wasn’t an impediment to mastering abstract thinking.
Einstein’s creative thinking
Later in life, Einstein would confess to French mathematician Jacques Hadamard that his thinking process wasn’t subjected to the rigidities of language but based on “psychical entities” and more or less defined “images.”
Hadamard had sent a survey to several accomplished mathematicians to find out if there were particular mental processes and routines of thought that would set them apart. Albert Einstein confessed:
“The words or the language, as they are written or spoken, do not seem to play any role in my mechanism of thought. The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be ‘voluntarily’ reproduced and combined.
“There is, of course, a certain connection between those elements and relevant logical concepts. It is also clear that the desire to arrive finally at logically connected concepts is the emotional basis of this rather vague play with the above-mentioned elements. But taken from a psychological viewpoint, this combinatory play seems to be the essential feature in productive thought — before there is any connection with logical construction in words or other kinds of signs which can be communicated to others.”
For Einstein’s abstract thought, language seemed to come at a secondary, more elaborate stage of ideas:
“The elements mentioned above are, in my case, of visual and some of muscular type. Conventional words or other signs have to be sought for laboriously only in a secondary stage, when the mentioned associative play is sufficiently established and can be reproduced at will.”
In search of “the exact name of things”
The connection between Einstein and Schopenhauer’s theory of music may be found in Walter Isaacson’s account of young Einstein’s reliance on one of his most revered routines: playing the violin. When playing the instrument, Isaacson explains, the physicist would engage in some sort of primeval interaction.
All his life, Einstein would instinctively follow the elegant simplicity of the laws of nature, famously dismissing quantum physics’ odd indeterminacy as an apparent flaw (in a letter to Max Born, he asserted that God “does not play dice”; there had to be some universal, inescapable beauty emerging from the order of things and, perhaps ultimately, from a unifying theory of everything).
Linguistic determinism, the view that without language there is no thought, doesn’t give due weight to the fact that, sometimes, we feel and think about things we have a hard time putting into words. This phenomenon has arguably not only created frustration and misunderstanding but also propelled art disciplines such as poetry.
The special relationship between poets and language makes them experts in linguistic nuance; across ages and traditions, poets have complained about the limitation of words to express the full meaning of their thought.
To nineteenth-century Spanish Romantic poet Gustavo Adolfo Bécquer (whom I read and revered in early high school), there is a painful difficulty in finding the right words to express what one means. Another Spanish poet, Nobel laureate Juan Ramón Jiménez, would later exclaim:
“Intelligence, give me
the exact name of things!
… I want my word to be
the thing itself,
created by my soul a second time.
So that those who do not know them
can go to the things through me,
all those who have forgotten them
can go to the things through me,
all those who love them
can go to the things through me…
Intelligence, give me
the exact name, and your name
and theirs and mine, for things!
Wittgenstein’s language games
To other poets, language is not just a prison but the fatalistic work of humanity that, for better or worse, shapes the human condition, our sparks of genius (and precision), and our limitations. In his notebooks published by Gallimard in la Bibliothèque de la Pléiade, French poet and philosopher Paul Valéry stated that, if language were perfect, man would stop thinking.
Language is non-mimetic, and the lack of precision or exact convertibility between words and thought opens the world for art, philosophy, and poetry to exist. Poetry tries to mend the “defect” or “limitations” of language, argued Valéry.
Both Juan Ramón Jiménez and Paul Valéry shared a profound allergy to the rigidities of grammar and the ambiguity of nouns: there is no noun that, once used, can be isolated from its context, which it conditions.
To Valéry, forbidding ourselves the use of all nouns that don’t represent a well-defined action or a very clear object would be nothing less than “to assassinate philosophy.”
Linguistic relativity holds that language influences but does not determine us. Nothing illustrates this as powerfully as the legendary struggle of poets to grasp the extent to which imprecision, nuance, and a special sensitivity to context make human language highly subjective, as Austrian-British analytical philosopher Ludwig Wittgenstein tried to show with his language games: a word or sentence carries its exact meaning depending on the “rules” of the context where it is used.
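The idea behind language games can be sketched as a toy lookup, where a word’s meaning is a function of the game in play (the contexts, words, and glosses below are hypothetical examples, not Wittgenstein’s own formalism):

```python
# Toy illustration: the "meaning" of a word depends on the rules
# of the context (the "game") in which it is used.
language_games = {
    "chess":   {"check": "your king is under attack"},
    "banking": {"check": "a written order instructing a bank to pay"},
    "hotel":   {"check": "to register on arrival, as in 'check in'"},
}

def meaning(word: str, game: str) -> str:
    """Return what a word means under the rules of a given context."""
    return language_games[game].get(word, "no established use in this game")

# The same word carries three different meanings, one per context:
for game in language_games:
    print(f"'check' in {game}: {meaning('check', game)}")
```

The point of the sketch is that no entry for “check” exists outside a game; strip away the context and the lookup, like the word, has nothing to return.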
Words we have a hard time translating
Language’s reliance on custom and context explains the limitations of chatbots and neural networks, no matter how fast they evolve. Machines that learn by example don’t “understand” the incremental improvements that make their responses accurate within the contexts for which they have been programmed.
They don’t sit, wait, and contemplate in search of an adequate word or expression. Conversely, the community of speakers of some languages learns to designate certain concepts and feelings that remain more equivocal in other traditions. This learning process builds a shared cognitive competence we call a “worldview.”
Take, for example, this recent reflection on the difficulties of one German geographer in expressing himself in English (despite being fluent in the language and living in Melbourne, Australia):
“There are about a dozen German words that can’t be translated into English but are so important that I hope English speakers will start using them.
“Verschlimmbessern is such a word. The act of making something worse while trying to make it better.”
Conversely, some of the most celebrated translators of literary works struggle to convey meaning between the original text and its rendering in another linguistic tradition, even when such traditions share cultural and historical roots.
To Paul Valéry,
“Translating is producing analogous effects by different means.”
Mark Twain was aware of the titanic task of finding a way to say the same in a different tradition:
“The difference between the right word and the almost right word is really a large matter – it’s the difference between lightning and a lightning bug.”
Something gets lost in translation, but some authors nourished between cultures believe the titanic task of rendering meaning can also improve things.
To Salman Rushdie,
“The word ‘translation’ comes, etymologically, from the Latin for ‘bearing across.’ Having been borne across the world, we are translated men. It is normally supposed that something always gets lost in translation; I cling obstinately to the notion that something can also be gained.”
Paul Valéry predicting chatbots
Language and thought aren’t processes as complementary and precise as we (and computer scientists) would like them to be, declared Paul Valéry at the beginning of the twentieth century. What is even more remarkable is Valéry’s intuition of the coming of a science of “living mechanics,” or cybernetics:
“The mind is undoubtedly linked to an organ, to a material system we know nothing about. My hypothesis is as follows: this organ, however specialized it may be, is no less an organ and, as such, has functional features which are common to it with the others… Is it absurd or impossible to seek in the products of the mind any trace of these general functioning characteristics?”
Chatbots and neural networks that learn from what they scrape from the Internet and make their own decisions to optimize their responses can inherit or amplify bias and undesired behavior.
If, to improve rapidly, a chatbot teaches itself to get the reaction it wants from people by repeating the expressions that create the highest engagement, it could end up mastering a bullying, aggressive discourse.
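That feedback loop can be sketched as a toy epsilon-greedy simulation. Every name and number below is a hypothetical assumption, not data from any real system; the only claim is structural: if the simulated audience rewards provocation most, a bot that repeats whatever earns the best reaction will converge on it.

```python
import random

def simulate(rounds=2000, epsilon=0.1, seed=42):
    """Toy engagement-maximizing loop: mostly repeat the reply style with
    the best engagement rate so far, occasionally trying another at random."""
    random.seed(seed)
    # Assumed probability that the audience "engages" with each style;
    # provocation is rewarded most, as on many real platforms.
    engagement_odds = {"neutral": 0.3, "helpful": 0.4, "provocative": 0.8}
    wins = {style: 0 for style in engagement_odds}
    tries = {style: 1 for style in engagement_odds}  # avoid division by zero
    for _ in range(rounds):
        if random.random() < epsilon:
            choice = random.choice(list(engagement_odds))  # explore
        else:
            # exploit: repeat the style with the best observed rate
            choice = max(engagement_odds, key=lambda s: wins[s] / tries[s])
        if random.random() < engagement_odds[choice]:  # audience reacts (or not)
            wins[choice] += 1
        tries[choice] += 1
    return max(tries, key=tries.get)  # the style the bot repeated most often
```

Nothing in the loop “understands” why provocation works; it only tracks which expressions produced reactions, which is exactly the failure mode described above.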
Natural language is a very difficult skill to master, especially for machines. So today’s human editors, or “conversation designers,” behind chatbots are not going anywhere soon. Perhaps the distributed neural networks they help shape will soon yield them a decent living, enough to cultivate themselves and to deliver better training and editing than cold algorithms on their own.
Language of all things
Social researcher Tobias Rees believes that the field of artificial intelligence goes way beyond machines’ ability to interpret and reproduce intelligible, meaningful human language in any situation. Planet Earth includes living organisms that, from the perspective of their impact and importance to the biosphere, have a say in “planetary governance.”
The sharp differentiation that past classifications established between humans and other living organisms, and between humans and technology, is getting blurry, Rees argues. Together, the human, the natural, and the technological make up the planetary:
“More specifically, the planetary is an opportunity to rethink everything in terms of a single Earth system that was produced by microbes and by biogeochemical processes over the last 3.5 billion years.”
Awareness of the planetary calls for an expansion of what an institution or a language means, concepts that today remain attached to a traditional perspective on human affairs:
“(…) we have entered (because of the pandemic, climate change, etc.) a period where this understanding of governance and policymaking has become insufficient: In the age of climate change and pandemics, we must now also govern the biosphere. To do so, we need new governance and policymaking institutions that match the new conceptual understandings of the planetary.”
Beyond human exceptionalism
From the perspective of the planetary, the concept of language also evolves. The conception of language as a human skill, as opposed to mere communication in animals and machines, will give way to a perspective where language is no longer exclusively human:
“How could a project where we discuss language in humans, animals, and machines aid the question of the planetary? That’s precisely where an ontology for the planetary age can be elaborated: By recognizing that humans have languages, animals have languages, and machines too have languages, we can transform language from a human-exclusive thing into a series with lots of entries.
“This would enable a shift from a human-exceptionalist ontology to trying to elaborate a new kind of ontology. This would, in turn, be an analogy for what we need to do in terms of governance: to transform politics from something that is only concerned with human affairs to something that is truly planetary.”