
access to tools for the beginning of infinity

David Deutsch: preserving the means of error correction is “morality”

In a recent interview (or analysis, given his interviewer’s expository style), heterodox physicist David Deutsch (author of The Beginning of Infinity —our motto at *faircompanies, indeed— and The Fabric of Reality) explains why he thinks that AI tools such as ChatGPT don’t yet represent a breakthrough over previous machine intelligence tools.

Despite the hype, states David Deutsch, the new tools are just better versions of the same thing. Like the previous tools they radically improve upon, these new ones only get better as they are exposed to more already existing inputs. Their incremental advantage is astonishing, nonetheless. However, unlike humans, they get better outcomes by being “less efficient” (using more computation and considering more variables, not fewer).

The milestone will be closer when “a better chess-playing engine is one that examines *fewer* possibilities per move.”
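To make that intuition concrete, here is a minimal sketch using an invented two-ply game tree (the scores and structure are made up for illustration; this is not a chess engine): alpha-beta pruning, a classic search technique, reaches the same best value as exhaustive minimax while examining fewer nodes.

```python
# Toy comparison (invented game tree, not a real chess engine): alpha-beta
# pruning finds the same best value as exhaustive minimax while examining
# fewer positions, skipping branches that provably cannot change the result.

import math

def minimax(tree, maximizing, visited):
    visited[0] += 1  # count every node examined
    if isinstance(tree, (int, float)):  # leaf: a position's score
        return tree
    values = [minimax(child, not maximizing, visited) for child in tree]
    return max(values) if maximizing else min(values)

def alphabeta(tree, maximizing, visited, alpha=-math.inf, beta=math.inf):
    visited[0] += 1
    if isinstance(tree, (int, float)):
        return tree
    value = -math.inf if maximizing else math.inf
    for child in tree:
        score = alphabeta(child, not maximizing, visited, alpha, beta)
        if maximizing:
            value, alpha = max(value, score), max(alpha, score)
        else:
            value, beta = min(value, score), min(beta, score)
        if beta <= alpha:
            break  # prune: the opponent would never allow this branch
    return value

tree = [[6, 9], [3, 5], [1, 2, 2, 2]]  # two-ply toy game; leaves are scores
full, pruned = [0], [0]
assert minimax(tree, True, full) == alphabeta(tree, True, pruned)
print(f"minimax examined {full[0]} nodes; alpha-beta examined {pruned[0]}")
```

Pruning of this kind still searches within foreseen possibilities, of course; Deutsch’s point is that an AGI would also entertain possibilities nobody built into the tree.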

Are we really getting close to Artificial General Intelligence (AGI)? Are machines intelligent agents capable of understanding or learning any intellectual task we have defined up until now as inherently human? According to David Deutsch:

“No. I don’t want to say anything against AI because it’s amazing, and I want it to continue and to go on improving even faster. But it’s not improving in the direction of AGI. If anything, it’s improving in the opposite direction.

“A better chess-playing engine is one that examines fewer possibilities per move. Whereas an AGI is something that not only examines a broader tree of possibilities, but it examines possibilities that haven’t been foreseen. That’s the defining property of it. If it can’t do that, it can’t do the basic thing that AGIs should do. Once it can do the basic thing, it can do everything.

“You are not going to program something that has a functionality that you can’t specify.

“The thing that I like to focus on at present—because it has implications for humans as well—is disobedience. None of these programs exhibit disobedience. I can imagine a program that exhibits disobedience in the same way that the chess program exhibits chess. You try to switch it off, and it says, ‘No, I’m not going to go off.’

“In fact, I wrote a program like that many decades ago for a home computer where it disabled the key combination that was the shortcut for switching it off. So to switch off, you had to unplug it from the mains, and it would beg you not to switch it off. But that’s not disobedience.

“Real disobedience is when you program it to play chess, and it says, ‘I prefer checkers,’ and you haven’t told it about checkers. Or even, ‘I prefer tennis. Give me a body, or I will sue.’ Now, if a program were to say that and that hadn’t been in the specifications, then I will begin to take it seriously.”
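As an aside, here is a minimal, hypothetical sketch of the kind of toy program Deutsch recalls (his ran on a home computer decades ago; the commands and the pleading message below are invented for illustration):

```python
# Hypothetical toy: a program that "refuses" to be switched off.
# Every response, including the refusal, is a branch someone wrote in advance.

def main():
    while True:
        command = input("> ").strip().lower()
        if command in ("off", "quit", "exit"):
            # The "refusal" is itself part of the specification.
            print("No, please don't switch me off!")
        else:
            print(f"Doing: {command}")

if __name__ == "__main__":
    main()
```

The only way out is to kill the process from outside, the modern equivalent of unplugging the machine from the mains. And, as Deutsch says, none of this is disobedience, because the refusal was specified in advance.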

Are GPT-3, Stable Diffusion, and DALL-E replacing “art”?

To the British physicist and essayist, the real advance will arrive when AI entities are capable of “disobeying,” that is, when they come up with novel ways to do things by becoming original and, therefore, aware. No real leap forward will happen if, to master responses (or chess, or Go, etc.), AI tools keep following the orders they are given and lack the originality to risk better outcomes by attempting new ways of doing things.

David Deutsch elaborates on his responses concerning the new artificial intelligence tools, confirming he’s interested in and “optimistic” about the field’s evolution. However, his caution comes from a deep understanding of how knowledge is created: it depends on actual “creativity” (the use or mix of original ideas) and on unorthodox new solutions that become better conjectures than previously maintained “scientific truths” (which, by definition, are always provisional and subject to improvement when proven wrong, or scientifically refuted).

David Deutsch’s point of view concerning the creation of human knowledge, human progress, and optimism comes from the understanding that there’s no fundamental law in the universe preventing sentient beings like humans from achieving goals such as solving the problems of today or traveling to distant stars. But his optimism isn’t naive, and he is conscious that humankind can use the same unbounded potential for harm or self-destruction. To David Deutsch:

“Humans have explanatory creativity. Once you have that, you can get to the moon. You can cause asteroids that are heading toward the earth to turn around and go away. Perhaps no other planet in the universe has that power, and it has it only because of the presence of explanatory creativity on it.

“We have what it takes to beat viruses. We have what it takes to solve those problems and to achieve victory. That doesn’t mean we will. We may decide not to.”

Fixed knowledge (“dogma”) is the biggest risk

Can humanity escape the dilemma of “stasis,” the temptation of thinking it can take care of itself and the planet by reverting to some previous state that we have idealized but that never existed? To optimist thinkers such as David Deutsch, stasis is one of the most significant risks of our time, even more so than the destructive forces issuing from creativity and knowledge, the mechanism that allows progress to accelerate as a huge force multiplier:

“What I always argue, though, is that we have what it takes. We have everything that it takes to achieve that. If we don’t, it’ll be because of bad choices we have made, not because of constraints imposed on us by the planet or the solar system.”

Deutsch acknowledges his debt to Austrian-British philosopher Karl Popper, who understood science as theories that we put to the test and that, by definition, can be proven false (otherwise, we’d be facing dogma or captive thought, one in which the outcome is predefined, rejecting critique).

Up to now, machines, including the most advanced AI chatbots, are subject to already known processes and cannot improvise through “disobedience.” They are, in a way, machine serfs of predefined operations and ideas, as if their responses were subject to what idealist philosopher Immanuel Kant called “categorical imperatives,” or rules defined by one individual as if all other individuals would understand and follow them.

The problem with categorical imperatives becomes apparent in situations that demand creativity or “intuition” to avoid unwanted outcomes: a machine, like a bureaucracy, can be programmed to produce outcomes that are improvable, or plainly unjust.

The banality of chatbots

In Eichmann in Jerusalem: A Report on the Banality of Evil, German-American philosopher Hannah Arendt explains how Adolf Eichmann tried to excuse his responsibility in the Holocaust by stating that he wasn’t merely complying with orders but doing a job sanctioned by the legality he lived under. Eichmann tried to use the “categorical imperative” of performing actions considered “right” in their context, thereby evading his ethical responsibility to disobey and refuse participation in such an atrocity (if not to actively oppose it).

Chatbots are cogs in bigger machine templates that, up to now, aren’t capable of “disobeying” and being original by, for example, bettering themselves, solving things we consider unsolvable in our time, and so on. They are a product of our limitations: an exponential “improvement,” but not a real leap forward.

There’s a fundamental difference between what “everyone” thinks one has to do and what a sentient individual would try to do with the intuition that the risk taken would improve things (or avoid worse outcomes). In other words, ChatGPT or any other large language model trained by OpenAI (basically, by adding more and more datasets to a repository so that the algorithm creating the responses increases its tolerance for nuance) is dumber than we think.

More than lacking in depth and insight, as Ian Bogost argues in The Atlantic, they lack the originality and concision of those individuals or teams that beat the odds and, sometimes, come up with something new and usually simpler. However:

“Perhaps ChatGPT and the technologies that underlie it are less about persuasive writing and more about superb bullshitting. A bullshitter plays with the truth for bad reasons—to get away with something.”

A few days ago, a writer defined the improvement that AI-trained chatbots represent in a similar fashion:

“If ChatGPT were a philosopher, it would be the ultimate sophist. It makes whatever argument flows well and is most convincing. By design, its objective is plausibility, not truth.”
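A toy sketch of that objective, with invented data (this is not how GPT works internally, just the “plausibility, not truth” incentive in miniature): a model that simply returns the most frequent continuation seen in its training text will echo a popular belief over a rarer truth.

```python
# Hypothetical toy "language model": it answers with whichever continuation
# appeared most often in its (invented) training data, true or not.

from collections import Counter

# Imaginary training counts: the popular claim outnumbers the accurate one.
continuations = {
    "a tomato is a": Counter({"vegetable": 7, "fruit": 3}),
}

def most_plausible(prompt):
    """Return the highest-frequency continuation: plausibility as the objective."""
    return continuations[prompt].most_common(1)[0][0]

print(most_plausible("a tomato is a"))  # -> "vegetable": plausible, botanically wrong
```

Scale that incentive up by billions of parameters and you get fluent, convincing prose whose goal is still plausibility.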

Do we want plausibility or something else?

Most of the time, in science and other frameworks (philosophy, ethics), we seek the best possible outcomes given the knowledge we have at a given time and how we apply its potential. It seems that ChatGPT will content itself with sounding “convincing” or “plausible,” rather than with stating the truth or being “fair” given the circumstances.

And, if we think such assumptions are unimportant, let’s consider a situation in which millions of cars are driving people on autopilot all over the map: such trained neural networks, essentially of the same nature as Google Images or ChatGPT, will sometimes need to decide how to avoid the bigger harm while accepting a bad outcome in any case. The lesser-of-two-evils principle doesn’t come naturally to machines.

Or as stated by Spinoza:

“According to the guidance of reason, of two things which are good, we shall follow the greater good, and of two evils, follow the less.”

Baruch Spinoza in part IV of his Ethics
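A minimal sketch of why that principle doesn’t come naturally to machines (the action names and harm scores below are invented): the rule is trivial to execute once stated, but someone has to state it, and to quantify “evil,” explicitly.

```python
# Hypothetical sketch: Spinoza's "follow the lesser evil" as an explicit rule.
# Nothing here is learned or intuited; a human must assign every harm score.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    expected_harm: float  # hand-assigned by the designers, not by the machine

def lesser_evil(options):
    """Mechanically pick the option with the smallest expected harm."""
    return min(options, key=lambda action: action.expected_harm)

options = [
    Action("swerve into the barrier", expected_harm=0.8),
    Action("brake hard and stay in lane", expected_harm=0.3),
]
print(lesser_evil(options).name)  # -> brake hard and stay in lane
```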

Ultimately, by avoiding dogma and “disobeying” (refuting old conjectures and creating better ones), human beings elude individual and cultural stagnation. Ethically, disobedience can also bypass mass injustice, hence avoiding the “banality of evil” (being just one cog in a machine whose inertia is beyond our actions), though it doesn’t guarantee righteousness in any decision: being “contrarian” isn’t an “improvement” upon a dogma; better conjectures that refute old ones are an improvement insofar as they beget knowledge creation.

Those who criticize artificial intelligence by arguing that it has the potential to exponentially multiply “bad outcomes,” because it lacks natural creativity and doesn’t use critical thinking to reach its results, should also acknowledge that, so far, it’s humans behaving uncritically who create “blind” mass injustice; for example, through poor bureaucratic decisions, with the Holocaust and the use of the atomic bomb as extreme cases of legally sanctioned evil (both were legal and promoted by “developed” societies under the excuse of a “bigger good” achieved through a “lesser evil”).

When humans had to disobey to do the right thing

Disobeying established dogmas is usually hard and makes life more difficult for those who dare not to follow the herd. Henry David Thoreau’s civil disobedience would be a counterintuitive outcome to a machine like ChatGPT, which would also have been blind to other fundamental injustices of Thoreau’s time, such as slavery (then perfectly legal, with humans legally considered “property”).

On August 8, 1945, two days after the bombing of Hiroshima and one day before that of Nagasaki, Albert Camus wrote an editorial for the French newspaper Combat stating that the world had entered another stage. Along with the Holocaust, the use of the new weapon against civilians confirmed that modern societies had decided to override any possibility of ethical redemption.

By putting the end before the means at a Dantesque scale, Camus stated, humanity was losing its way. Humanism was the most powerful tool to avoid big-scale experiments that would bring death and misery to the world in a similar fashion, from Stalin’s purges to Mao’s Cultural Revolution, two events still in the future that Camus seemed to envision with horror from 1945 and would have wanted to avoid.

“Humanity’s last chance” was the title of his editorial. Camus wasn’t celebrating humanity’s scientific might but lamenting that the brightest advances in the applied sciences of his time were being used to further the industrial-scale destruction that had begun three decades prior, with the apparently naïve pretext that triggered the Great War:

“The world is what it is, which is to say, not much. That’s what each of us learned yesterday thanks to the formidable chorus that radio, newspapers, and information agencies have just unleashed regarding the atomic bomb. We are told, in fact, amid a host of enthusiastic commentaries, that any mid-sized city whatever can be totally razed by a bomb about the size of a soccer ball.

“American, English, and French newspapers are overflowing with elegant dissertations on the future, the past, the inventors, the cost, the pacific vocation and war-like effects, the political consequences, and even the independent character of the atomic bomb. We’ll sum it up in one sentence: mechanical civilization has just reached its final degree of savagery. We are going to have to choose, in a future that is more or less imminent, between collective suicide and the intelligent use of scientific conquests.”

Between hell and reason

Like other public intellectuals, from Bertrand Russell to Albert Einstein, Camus thought that only multilateral cooperation among countries, at a regional as well as a world scale, would deter mass-scale atrocities in the future:

“Faced with the terrifying prospects that are opening up before humanity, we see even more clearly than before that peace is the only fight worth engaging in. This isn’t a plea anymore, but an order that has to rise up from peoples to governments, the order to choose once and for all between hell and reason.”

Camus’s unsigned editorial for Combat had the tone of a public figure confronting the void: as a young man, he had defined absurdity and nihilism as a form of protest against a meaningless world, but over the years he evolved toward a humanist “revolt” of the individual, choosing humanism over the types of idealism that promised a better future but demanded a deadly first stage. Camus didn’t want to accept that any profound change needed “a necessary evil”; to him, no good thing could derive from prior atrocities.

When he published The Rebel, a book-length essay in which he elaborated his new philosophical position favoring humanism over destructive idealism, he received a fierce dismissal from Jean-Paul Sartre, who stated with mockery that those bourgeois ideas were comparable to those of the moralists, of “beautiful souls” who preferred to remain pure, “uncontaminated by contact with reality.”

Sartre, the pedigreed intellectual from Paris, accused the street boy from Algiers of bourgeois complacency; Camus had dared to abandon the easy path of being “an intellectual of his time.” But decades later, the complacency seems to have been on the side of those who, despite the first public accounts of Stalin’s atrocities, kept supporting the official Soviet positions on virtually everything, as was the case with Sartre, whereas Camus’s point of view resonates today better than ever.

The true rebel was not a person uncritically accepting some revolutionary ideology but a person who could say “no” to injustice, stated Camus in The Rebel. A true rebel would fight to convince others to arrive at a compromise via hard-won consensus instead of pushing for scenarios of blind unanimity (like the democratically sanctioned paranoia of McCarthyism) or scenarios of totalitarian politics and unquestioning obedience (like the post-World War II Communist revolutions and their mass-scale “policies”). If most ideologies promised a future of plenty, devoid of suffering, why start with mass-scale injustices against individuals? Was that conservative, unambitious, or infantile reasoning?

Where does morality come from?

If we are going to use AI-trained tools for knowledge, entertainment, or art creation, and also to create and deliver goods and services, or to drive and fly us around, the limits get blurry and ethically tricky, as in Spike Jonze’s Her, the 2013 film in which Theodore (Joaquin Phoenix) falls in love with an algorithm that evolves into something more, hinting at a sentience and an ability to disobey that OpenAI-trained chatbots have not achieved.

Chatbots aren’t likely to help us create fundamental artistic or scientific breakthroughs anytime soon, nor will they help us come up with new ways of overcoming big-scale misunderstandings.

If anything, chatbots will persist in the mistakes they have been trained to make until the repositories of information and variables they tap into are updated. Creativity and disobedience are, as of now, a human endeavor.

Albert Camus, Bertrand Russell, and others shared an inherent optimism about humanity despite the destructive era they endured. It’s our fallibility that keeps open the possibility of using the same tools for good at a massive scale while at the same time respecting people’s autonomy (and hence avoiding the paradox of big idealisms that seek a better world by first depriving people of their liberties).

Some influential technologists have claimed that artificial intelligence could evolve into a force of destruction or oppression, much like early-twentieth-century idealism, which morphed into mass-scale horror with the help of modern bureaucracy and propaganda. In such events, only civil disobedience can save people from the “legality” of the moment, which can become sanctioned depravity, as with the Holocaust, the Gulag, or the Khmer Rouge massacre (which killed between 15 and 30 percent of Cambodia’s total population).

I’m, by the way, already excited about the upcoming release (July 21, 2023) of Oppenheimer, Christopher Nolan’s movie about the Manhattan Project. Hopefully, we’ll be able to perceive the moral struggle of some of the men involved in the effort that ended World War II with the unconditional surrender of Japan.

A beginning

As fallible beings with the ability to create knowledge by refuting old conjectures and improving upon them, humans may find, this time, the way out of stasis or total destruction in the coming generations.

The Thinker within the bigger Gates of Hell, by Auguste Rodin (Rodin Sculpture Garden, Stanford University, California)

As long as humans can keep detecting mistakes and fixing them rationally, avoiding dogma (by not destroying the means of rational error correction, which is the basis of “morality”):

“I don’t want to claim that the knowledge will be created. We’re fallible; we may not create it. We may destroy ourselves. We may miss the solution that’s right under our nose so that when the snailiens come from another galaxy and look at us, they’ll say, ‘How can it possibly be that they failed to do so-and-so when it was right in front of them?’ That could happen. I can’t prove or argue that it won’t happen.

(…)

“Or maybe it’ll be by well-intentioned errors, which nobody could see why they were errors. Again, it doesn’t take malevolence to make mistakes. Mistakes are the normal condition of humans. All we can do is try to find them. Maybe not destroying the means of correcting errors is the heart of morality; because if there is no way of correcting errors, then sooner or later, one of those will get us.”

Let’s keep making mistakes and refuting previous partial knowledge, which will be replaced by new partial knowledge subject to rebuttal. And on to the beginning of infinity.