Deus Ex Machina

“There’s no reason for you to know what your data means, how it might be used, you can’t contribute, we don’t know who you are, we don’t want to know you, you’re worthless, you’re not going to get paid, it’s only valuable once we aggregate it but you know nothing, you will know nothing, you’re in the dark, you’re useless, you’re hopeless, you’re nothing. And then a robot made of your data will replace you. The robot is something. The robot is a successor master species. The robot is God. And you’re garbage.”

– Jaron Lanier on the prevailing attitude in Silicon Valley

Increasingly, the users of centralised platforms on the internet – us – come to realise that free services come at a hidden cost: we are the product. Our innocent scrolling through meme posts and birthday pictures is what we perceive at the frontend – funny colourful pixels jumping up and down – while we are kept in the dark about what we reveal about ourselves at the other end. Not only do we not know the details of the deal we are agreeing to, we are oblivious to what the deal is about in the first place. Even if every little clause were perfectly understandable and transparent, we would still be oblivious to the nature of the project we let ourselves be roped into.

Whatever that endeavour looks like, it surely is not about exploiting humans in order to create a successor robot master species – is it, Mr. Lanier? On what grounds might he arrive at such a bewildering, seemingly outlandish conclusion?

At this point, it helps to set Lanier’s background against the perils of the data krakens that surround us nowadays. He made a name for himself in the 1980s as a pioneer of virtual reality technology and has since become an outspoken critic of the ethos that accompanied the advent of such immersive technologies. Lanier is a true oddball in the Valley: not primarily because he owns over 1,000 obscure musical instruments, but rather because he is a man with a conscience, not shying away from calling out the more reckless souls and trends he sees in his professional environment. Lanier has not made many friends amongst his fellow tech entrepreneurs and futurists, as he is one of the few who takes seriously the estranging ideological projects that his fellow futurists sign up for – wilfully or not.

In one way or another, the trajectory on which the vast majority of the Valley’s tech entrepreneurs find themselves is aggregating heaps of data in order to train the smartest oracle they can usher into existence. The predictive powers of supercomputers already lie far beyond human grasp in many domains – and according to Ray Kurzweil, director of engineering at Google, they are destined to exceed our capabilities not just in many domains but in all. The age of Artificial General Intelligence (AGI) will then have dawned. After nodding away the perils of creating such an immensely powerful intelligence, many lead engineers go on to praise its potential for everything from curing not just cancer but all diseases, to world peace. When and where this towering giant of a computer will emerge from code lies supposedly in the distant but reachable future – and many leading AI engineers, like Kurzweil, cannot wait for it. More than that, they are working hard to bring that day closer.

Once embarked on this quest, the breakthroughs happening in front of one’s eyes make it hard not to project the upside of these achievements into a glorious future. When one starts seeing the creation of AGI as inevitable and its promise for achieving good as exceeding everything else, the quest becomes a divine one. It becomes humanity’s and, even more so, one’s own destiny to let such a messiah of an algorithm descend upon us. It is the religious fervour of the true believer that can be perceived in the eyes of the leading AI figures – and sometimes it also taints their words with a curious religious undertone. For instance, Alan Perlis once said that “one year spent in AI is enough to make one believe in God”. There are “monks in the basement of every tech company”, working ardently to spell out the word of God, according to Joscha Bach. Dazzled by the light that one’s projections of the promised land emanate, one can easily forget that the devil is in the details. AI researcher Eliezer Yudkowsky warned that “by far, the greatest danger of artificial intelligence is that people conclude too early that they understand it.” Given our epistemic limitations in truly understanding technologies that by design learn to exceed our understanding, the even greater danger appears to be that we conclude too early that we understand ourselves. If we do not keep our biases in check, we run the danger of letting them run wild when neural networks perpetuate them.

Neural networks are biased by design: they are built to optimise a loss function, which is the technical way of saying that they recalibrate themselves towards a particular goal. And this goal is one that we humans set the standards for, even if the consequences of our intentions can be far from aligned with the intentions themselves. If we do not know what we aim for – what the relevant metric is that we let the machine optimise – we exacerbate the threat of summoning the wrong djinn out of the bottle. The perils of powerful AI not aligned with our interests as a human species are already great enough to warrant an exceptionally high degree of care in its design, as eloquently laid out in Nick Bostrom’s book “Superintelligence”. It does not even require invoking a dismal future in which a “robot master species” supersedes humans to make this point: it suffices to look at the power that algorithms by Facebook & Co. have in shaping the fate of our societies. This is not a warning against the development of technologies per se; it is about the way in which we think about and design these technologies. We amplify our biases, good as well as bad, in shaping intelligence in our likeness, and let this echo reverberate far into the future.
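The recalibration described above can be made concrete with a toy sketch – not any particular company’s system, just a minimal, hypothetical illustration of gradient descent: a one-parameter “model” is repeatedly nudged to minimise a squared-error loss, and it converges on exactly the target that we, the designers, baked into that loss. All names and values here are illustrative.

```python
# Toy sketch (hypothetical values): a single-parameter model is
# recalibrated step by step to minimise a loss function. The "goal"
# the model converges on is whatever target the designer chose.
target = 5.0           # the human-chosen objective embedded in the loss
w = 0.0                # the model's single adjustable weight
learning_rate = 0.1

for step in range(100):
    prediction = w                       # a trivially simple "model"
    loss = (prediction - target) ** 2    # squared-error loss function
    gradient = 2 * (prediction - target) # direction of steepest increase
    w -= learning_rate * gradient        # recalibrate toward lower loss

print(round(w, 3))  # the weight has drifted to the target we set: 5.0
```

The machine never questions the target; it only closes the gap to it. Whatever bias or misjudgement sits inside the chosen objective is faithfully optimised for – which is the whole point of the paragraph above.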

The sectarian undertone that underpins the more private reflections of those working towards the holy grail of intelligence should at the very least raise an eyebrow. More than that, the reverence with which many of them talk about the breakthrough entity they seek to create should cause deep uneasiness: when those very few best positioned to realise the potentially world-transforming impact of true AGI fall for projecting god-like qualities onto such a machine, who is there to hold these developments in check? The race for the universal oracle – the most powerful learning entity – brings about unprecedented challenges, which can turn into dangers of the gravest concern if not overcome. The empowerment of the human race as a whole seems to be of only marginal concern to those behemoth companies single-mindedly racing towards the ultimate augmentation of their business model: prophetic counsel by deus ex machina.

AGI may indeed be the greatest, and potentially the last, invention of humankind. But in order to improve the odds of not derailing this unprecedented experiment – the creation of a power that escapes our understanding – devotional adoration needs to be replaced with self-reflective conscientiousness. The trend towards monopolisation of detailed information about the world and every human on it consolidates power in the hands of those able to weave supreme intelligences from the unremitting stream of data we continuously feed them. Let us remember that it is the machines that are supposed to be in the service of humanity, not vice versa, before the human race and its foremost thinkers come to fear a new master species they carelessly heaved onto the throne.
