The last decade of the 20th century began, so we thought, as a step towards a bright future. The USSR collapsed and the free world had won. The idea of the global village began to take root thanks to computer networks. Amazing technological advances were plentiful, and startup company shares enjoyed a meteoric rise. It was, we believed, the culmination of history and the end of all wars. We were promised that the free market could fix every ill and smooth over every bump, that it would help us overcome our tendency toward aggression and isolationism, that it would create a wondrous world of cooperation and closeness.
It was the last decade of the bloodiest and most violent century of our history, and we faced a new revolution, no less significant than the industrial revolution – but infinitely faster. We thought we were going to heal the world, transform it into a better place – for me, and you, and the entire human race.
We connected to computers in order to connect with people. We wallowed in online forums. We found fellow enthusiasts to discuss our most esoteric interests and our most intimate experiences. Fantastic technological initiatives, past and present, assured us that smart investments could solve the world’s problems, and if necessary – find us a new world. Impartial algorithms – it was said – would miraculously crack the problems of poverty, education, transportation, and resource allocation.
Some years passed. The vision of an end to all wars seems as hopeless as ever, as does the dream of human bonds no longer restricted by biography and geography. Instead, we are in endless dialogue with bots and digital assistants, spending much of our time in a flat and demanding digital environment. Digital accessibility aids and language- and facial-recognition algorithms are changing the way we perceive and discuss our world. AI systems scrutinize our patterns, provide our services, and invisibly filter our experiences, content, and products. Our lives are supervised, undetectably, by code.
With the passing of the millennium, technology’s glossy bubble of optimism burst thunderously, replaced by a calamitous economic crisis, the first of two that struck the world during the opening decade of the new era. Global corporations paid for none of the losses: they had already traversed national borders and grown exponentially more powerful than long-established government institutions.
Then came the 9/11 attacks, removing the last restraints on individual surveillance. Along with the dot-com crisis, this was the watershed moment for technology companies and for capitalism in general – the birth of what technology researcher Shoshana Zuboff terms “surveillance capitalism”. Capitalism is insatiable; that is the capitalist way. It seeks out ever more territories that operate outside the market, pulls them in, and converts them into commodities. Surveillance capitalism is the offspring of familiar capitalist logic and the technological ability to amass ever more information about us and our behaviors. It dares to appropriate the private human experience itself, treating it as raw material to be processed and utilized. Behavioral data accumulates and is stored in systems powerful enough to make unprecedented computations, all designed to predict human conduct.
One could claim surveillance capitalism began in 2001, at Google. At the time, as the dot-com crisis raged on, the company came to understand how to use information stored on its servers that had previously been considered “extra data”. This information constituted digital testimony of the human experience, and Google realized how to commodify it – to sell it to advertisers and generate money. Lots of money.
It was the last nail in the coffin of the late-20th-century cyber-hippy credo. Technology was no longer regarded as the cure for all the human ills produced by the industrial revolution. The dream of world redemption became a footnote, replaced by a profit margin.
And yet, the rhetoric of global change is still shamelessly rampant. Silicon Valley clichés are still abundant. We are no longer as naïve as we once were, but neither are the technology companies. They know we like to hear that the world can be changed. They know we want to ensure a better future for ourselves. And even when we are no longer perceived as people, just users, that promise is forever dangled before us.
But this façade is a one-way mirror, with the frenzied hustle behind it only escalating.
The development departments of the technology giants, for example, are well known for investing considerable resources in designing capabilities to assist the disabled, bringing them too into the fold of participants, users, consumers. Digital accessibility is now hugely popular among tech institutions, which currently enjoy a threefold benefit: fantastic press coverage, a pool of new potential users, and – perhaps most importantly – the quiet development of tracking and monitoring methods that would have met public resistance had they not been shielded by the headline “accessibility”.
Image May Contain
The artist and technology researcher Lior Zalmanson is searching for a leaner, more efficient language, one that permits easy and effective dialogue with algorithmic entities. In his first solo exhibition, “NewSpeak”, Zalmanson explores how we can conduct meaningful and productive communication with machines – and how computer programs (specifically those of Google and Facebook) are changing our perception of reality. His materials are the ways in which various algorithms read, detect, and decipher information: Zalmanson employs the very same accessibility services offered to disabled individuals, examining the clumsy ways they translate information for end users. In other words, he explores the awareness generated by algorithms, in all its capacities and limitations.
In 2016, Facebook launched a unique accessibility tool designed for the blind and sight-impaired, aiming to enable these populations to experience the images shared by friends on social media. This Facebook image-recognition algorithm is the starting point of the series of photographs “Image May Contain”. The software engineers who designed AAT (Automatic Alternative Text) decided to use an AI algorithm to identify the various objects in an image and provide verbal descriptions for the sight-impaired. The technology was supposed to open the door for this group to the wealth of visual information found on Facebook, Instagram, and WhatsApp. The only problem is that the system cannot discern that a photograph is far more than the mere sum of its discrete parts.
In this series, Zalmanson takes historical photographs and allows AAT to translate them. The system does what it was designed to do, producing sparse textual descriptions of their content.
Thus, the image of President John F. Kennedy driving through the streets of Dallas in a Lincoln Continental just moments before his assassination is translated verbally to: “Ten people, car.” Photographic evidence burned into global collective memory becomes a flat, itemized list – and Zalmanson uses that description to search for Facebook images labeled the same way. The eclectic database gathered this way joins the original photograph in a lenticular print, enabling the artist to present an array of images, each seen from a different angle. The algorithm flattens them, dislodging them from their canonical status, interspersing them with ordinary, everyday marketing materials. The lenticular prints encourage viewers to keep moving, ever more flooded with an amalgamation of images indifferent to the hierarchy usually applied to them, refusing to be seen in their entirety.
Zalmanson avoids exhibiting the series on the gallery walls, instead positioning the works on metal bars at its center, thereby stressing the gap between the work’s façade – where photographs meld into a sequence that demands viewers keep pacing back and forth – and its rear, composed of metal scaffolds supporting the black boxes with their random texts, such as: “Ten people, car.” Automatic text is indeed alternative.
In “Excess Ability”, Zalmanson uses footage of a 2009 launch event at Google headquarters for a new product by the company’s Innovation Lab: an automatic subtitle service, based on voice-recognition and language-analysis algorithms, designed to translate YouTube clips for the hearing impaired. The press conference included lectures delivered with oratorical pathos on accessibility and on technology’s ability to fulfil and enhance human abilities.
Zalmanson overlays the subtitle software on the video footage, producing a dismal and ridiculous result that emphasizes the gap between the utopian rhetoric on display – filled with the hubris and arrogance of the technological world – and the complex reality of human limitations. The title of the work, “Excess Ability”, is taken from the YouTube automatic subtitle software’s rendering of the word “accessibility”.
Basic Basic English
The video installation “Basic Basic English” does not dwell on the limits of various algorithms’ language and translation; rather, it proposes a possibility of dialogue between man and machine.
At the center of the work stands the projected image of a trainer, a cross between spiritual advisor and digital prophetess. This holographic entity invites viewers to learn a new language, one based on a redacted version of English. The software and devices wear a woman’s form, serving as liaison and coach, a futuristic stewardess explaining the rules to be followed when communicating with machines: keep to short sentences and simple words, use the imperative, and speak slowly. Zalmanson’s lexicon contains only 648 words, enabling productive communication with AI systems without creating confusion.
Zalmanson’s lexicon is based on “Basic English: A General Introduction with Rules and Grammar” by the linguist Charles Ogden, containing 850 essential English words. With the support of the British government, Ogden spent the 1930s toiling over what he believed to be a utopian endeavor to create a rudimentary new language, based on English, to be used in the British colonies throughout Asia.
To compile Zalmanson’s even smaller lexicon (just 648 words), the artist recorded people from around the world reading aloud a long list of words. These recordings were then fed into a Google algorithm, and any words it failed to recognize were erased from the list. The remaining words thus permit fruitful and comprehensible exchange with AI interfaces, but the drastic redaction of language also inevitably diminishes the ability to imagine – a familiar subject of outrage among those grieving for forgotten words and the “rise of the emoji”.
The work includes several lessons, presentations, and demonstrations of the new language. The first lesson displays a vision of mutual understanding between biological and digital entities. In one presentation, the instructor lists words excluded from the lexicon, and in another she reads aloud a Basic Basic English rendition of the lyrics of “Every Breath You Take” by The Police. In this version, the song loses its grim undertone, but the repetition of words gives the impression that the original speaker is still present, still watching your every move. The obsession abides, even when the ability to express it has been lost.
Zalmanson employs the same reasoning that lay behind Ogden’s English, but this new language is powerfully reminiscent of “Newspeak”, the language employed to control the masses in George Orwell’s novel “Nineteen Eighty-Four”. Language formulates reality, and whoever controls speech also controls thought. People with limited imagination and an inadequate vocabulary are incapable of imagining change, powerless to initiate it, and helpless to resist existing conditions.
And as for us? Can we imagine change at this time? Are we capable of envisioning life in a different economic system? With a different prime minister?
Zalmanson shares neither the utopian optimism of tech devotees nor the dystopian depression of their critics. He is enthralled by the potential inherent in this budding language between the digital and the biological, and sees machines not just as functional entities but also as a metaphor for humanity itself – a fact that affects the way we may come to view them.
The similarity between man and machine is growing daily. AI capabilities are constantly being expanded, and by now are typified by rapid learning and even decision making based on partial or deficient information. Our own capabilities have also transformed: systems so radically indifferent to our existence have stripped us of some of our humanity, pigeonholing us in the category of mere “users”. In this new world swiftly becoming entrenched around us, Zalmanson suggests an attempt to communicate with the algorithm beyond simple needs of expediency, to conduct a dialogue with the potential to evolve into a new language – to establish a material form that creates a new reality.
We have been viewing the world through algorithms for some time now. Without educating them to understand this world, we will soon lose our ability to comprehend it.