Guest blog post by Oskar Holm, organiser of Brighton Data Forum
Artist and curator Eric Drass’ background spans a wider range than most: from fronting a punk band (with a brief appearance on television), to the dizzy heights of London’s startup scene during the web 1.0 bubble, through subversive art installations, and into AI deep fakery for the BBC. Eric’s path takes unexpected turns, and his identity is periodically reinvented.
Eric’s artworks, like the artist himself, are a rare combination of engagement with cultural forces, academic rigor, seemingly boundless creativity, and technical prowess.
This July, he gave a talk at the Brighton Data Forum (one of the many tech meetups supported by Silicon Brighton) about the rise of “algo-culture”. In the presentation, he examined how culture, which used to live exclusively in human minds, is now shared with vast machines in remote data centres, and how these machines mediate our perception of reality by storing, distorting, and feeding it back to us. It’s heady stuff and at times disturbing.
Firstly, how do you define the term “algo-culture”?
We live in a world increasingly mediated by machine-manipulated content, be it the order in which your newsfeed presents itself, or the CGI effects in Ted Lasso. More broadly, culture itself has shifted from being a purely human endeavour to a hybrid of man and machine. A lot of my work looks at this new area, and I coined the term ‘Algo-Culture’ as shorthand for ‘algorithmically manipulated culture’.
Where did this journey begin? How far back can you trace it?
I got my first computer, a ZX81, for Christmas when I was 10 years old. It was a flat box with a rubbery keyboard and 1 KB of memory that you just plugged into a TV. I played with it, I programmed on it, and I loved it!
Did your exposure to this home computer make you want to study or work with computers?
No, I initially wanted to go to the local art school, but a teacher noticed me and suggested I apply to Oxford. The prevailing thought at the school was that if you showed any aptitude for science, you should go and get a science degree. Whilst looking through the big book of degree courses, I found Philosophy listed next to Physics. It looked much more interesting, so I applied for that instead.
Then, following the undergraduate degree, a professor talked me into doing a PhD in cognitive psycholinguistics. Like most people at the end of their degree, I had no idea what to do next, so the offer of staying on for a bit longer was very appealing.
What did your PhD studies entail?
I studied language acquisition in children, specifically using neural network models to simulate and explore the changes in abilities and performance over developmental time.
At the time, in the academic world of Psychology, ‘connectionism’ was very much in vogue; the idea that ‘artificial neural networks’ could emulate some of the behaviours we find in human learning.
People were teaching computers about the world by feeding them a catalogue of declarative statements like ‘this is a table, this is a chair, this is a computer’ and so on, but humans do not learn about the world that way. They need to learn language before such statements make sense. An infant is immersed in a world where all concepts appear in their natural context, awash with spoken language. Somehow their brain has to discover the right structures in that input and map its components onto a model of the world. How that happens is a mystery that we wanted to understand.
Rather than requiring specific neural hardware to be able to learn a language, the idea was emerging that no such ‘deep structure’ was required, that more generalised brain hardware could learn complex symbol systems, like language, from the environment alone.
What neural networks were available to you back then?
Unlike many psychology students at the time, I was comfortable with programming, so I coded up my own neural networks from scratch, usually three- or four-layer models with a few dozen units.
The underlying methodology is the same as for the deep learning models of today; the difference is scale. I built models with tens to hundreds of neurons and with very limited access to data.
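The kind of small connectionist model described here can be sketched in a few lines. The following is a toy illustration, not Eric’s actual code: a three-layer network with a handful of hidden units, trained by backpropagation on XOR (a classic test task for such networks), assuming only NumPy.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy task (illustrative): learn XOR with a three-layer network --
# an input layer, one small hidden layer, and a single output unit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Eight hidden units -- on the scale of the "few dozen" described above.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: cross-entropy gradients, computed by hand.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

# After training, the four outputs approximate the XOR truth table.
```

Scale this up by a few orders of magnitude in units, layers, and data, and you have the deep learning models of today.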
Three things have changed since then:
Computing hardware has become far faster, with vastly more memory.
Information that used to exist only in analogue form has been digitised and made available. Everything has been datafied, and far more data now exists, carrying with it the complexity and nuance of the world.
Structures and learning algorithms have become more sophisticated, most notably the recent advances in Generative Adversarial Networks [or GANs], which effectively offer a whole new way of modelling data in an unsupervised manner.
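The adversarial idea behind GANs can be shown with a deliberately tiny toy, not a realistic GAN: a one-parameter-pair generator tries to imitate a one-dimensional Gaussian, while a logistic-regression discriminator tries to tell its output from real samples. Everything here (the target distribution, the learning rates, the model shapes) is an illustrative assumption, assuming only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real data the generator must learn to imitate: samples from N(4.0, 0.5).
def real_samples(n):
    return rng.normal(4.0, 0.5, size=n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Generator: an affine map of noise, G(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for _ in range(3000):
    z = rng.normal(size=batch)
    x_fake = a * z + b
    x_real = real_samples(batch)

    # Discriminator step: push D(x_real) towards 1 and D(x_fake) towards 0.
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    grad_w = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(G(z)) towards 1 (non-saturating loss).
    d_fake = sigmoid(w * x_fake + c)
    g_x = (d_fake - 1) * w  # gradient of the loss w.r.t. x_fake
    a -= lr * np.mean(g_x * z)
    b -= lr * np.mean(g_x)

# The generator's offset b should drift towards the real mean (4.0):
# neither network is ever shown labelled data, hence "unsupervised".
```

The two models never see a labelled example; each learns only from the other’s behaviour, which is what makes the adversarial setup a new way of modelling data.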
Okay, so far I am seeing a linear path: a tech-curious youth learns programming on his first home computer, discovers artistic talent, gets training in academia, and, boom, we have the ingredients for your current works and algo-culture. But I happen to know there is more to the story.
Everything started to become more interesting to me than finishing the PhD. The web was young then. I remember seeing my first web page in one of the earliest browsers, Mosaic, and thinking how wild it was that I could download a file from Australia.
During that time, I was in a hardcore band called Unilever. My on-stage pseudonym for the band was ‘Señor Hardcore’. That later became the online alter ego, ‘Shardcore’.
Eventually, with some friends, I started a web technology company. It was successful for a time, expanding to 14 countries and raising $50 million in investment, before crashing when the dotcom bubble burst in 2000/2001. While the company and the product itself were doing OK, one by one its major customers dropped out or went bankrupt. As the main geek behind the service, I was kept on to keep the lights on till the very end, and was the last to go.
I am proud to say I helped piss away $10 million of [the then CEO of Oracle] Larry Ellison’s personal money.
What does one do after something like that, in the smouldering ruins of the dotcom crash? Write up a new job application?
After the dotcom collapse, I got married and took a long road trip through the Western and Southwestern United States. Since then I have been involved in many different ventures, including developing technology for tracking investment bankers to ensure compliance and guard against insider trading and corruption.
So, you worked a number of years in the tech sector, but I don’t know where to start asking questions about your art.
The artistic side was always there, and I was always starting art projects but not always finishing them. When I turned 33 I decided to create, and importantly complete, 33 artworks. Initially, I would paint something each week, list it on eBay, and make sure it left the house before I started the next. They sold for next to nothing, but the process of starting and finishing a work quickly became embedded.
Part of the motivation was that I wanted to leave a more lasting physical footprint. Something that, unlike code, which is ephemeral, would stand on its own and last a good while.
I know you have made an impression on the Brighton art scene.
In 2008 I did an open house exhibition during the Brighton Festival. That’s how I met many other local artists, including my neighbour Sam Hewitt. We wound up collaborating a lot on artworks; for instance, we filled St Peter’s church with a congregation of life-size painted portraits.
Then, in 2009 during the White Night Art Festival, Sam and I (as ‘The Fortunecats’) were commissioned for an exhibition on Jubilee Square. We made a couple of Japanese Fortune Cats with golden telephones for people to come and ask questions, and we trained improv actors to listen in and respond.
Part of the challenge was generating starting prompts for the actors quickly, so I found myself using my tech skills inside my artistic life, and the two started to blend. One of these cats was later re-engineered as a god of capitalism for the Art of Bots show at Somerset House.
We also turned the basement of The Old Market into a generative installation called The Consciousness Engine, created an algo-therapist, specifically for men, called Broken X, and installed the ghost of George into a phone box at the bottom of Trafalgar Street, amongst other things.
Art that interacts with the audience. Just like your Twitter bots.
The Fortunecats planted the seed. Very quickly I went from there to building Twitter bots that interacted with people’s accounts, doing all sorts of things from obsessively befriending them, to impersonating the PM threatening to deport them, to echoing the jargon of futurists.
I also recall a painting you made which looks back at the viewers.
That work was a statement on mass surveillance, made for the first incarnation of The News Sublime show we curated for the Brighton Digital Festival. Behind the canvas were a hidden camera and a Raspberry Pi running facial detection, recording people’s faces and transmitting the recordings to a box, where the faces could be seen hovering over a bird’s nest of cables via a Pepper’s Ghost illusion.
It is very clever. I can see how that starts to lead towards your AI work.
GANs [or Generative Adversarial Networks] turn up in, what, 2016? I was immediately interested in returning to neural networks. I had access to some computing resources at University College London, where I learned to play with them; I bought myself an Nvidia GeForce GTX 970M GPU and started hacking deep learning models. I run all the code on my own hardware, working over SSH [a network protocol which encrypts remote connections between two computers].
The BBC came to me in 2017 when they were doing a programme with Ian Hislop on fake news and deep fakes. They had tried to get their R&D department to build them a deep fake video and got nowhere. Eventually somebody got my phone number and called me up. I said ‘sure’ and started working with them. We filmed a dancer and Ian’s face separately, then I used the footage to train a model to transpose the faces. It took several iterations and weeks of churning on a computer, but it came out well and they were able to use it.
The way you marry art and computing is quite distinctive.
I use technology, but the point is not to simply show what the technology can do. Often my work is a way of pointing at the underlying systems beneath contemporary culture, be they technological, political or otherwise.
Due to my background and familiarity with technology, I was able to capitalise on opportunities that not many artists had. Nerd skills plus art worked well for me.
Your past trail is full of reinventions. What’s next for shardcore?
The current wave of AI art seems to be catching up with me, and I am not yet bored. Normally I get bored with things before long. This time, the issues I am talking about affect far more people: everyone has Siri on their phone, and what I have been banging on about since 2016 is still relevant.
I struck a rich vein.
You have explored our modern culture more than most. I liked your point about children growing up today being natives of a new world. Do you have advice for the next generations?
My advice to my own children is to be creative and find the most exciting and interesting things you can do. Creativity – particularly the uniquely human kinds of creativity – is the most valuable skill to have. Many, if not most, of the current activities of the Western worker are going to be massively disrupted, if not replaced, by technology in the near future. Thinking outside the box is the best survival skill.
I suspect some new and completely unexpected reinvention of yours is not far off, and I shall be paying close attention to your work to see how that goes. Thank you, again, very much for this enlightening chat!
Check out upcoming Brighton Data Forum events on the Silicon Brighton Hub.