Our Responsibilities as The First Generation

I’d like to set the record straight by saying that I am not “against AI”. However, as Žižek once said, “I don’t buy into cheap optimism”. Artificial Intelligence will no doubt become the greatest advancement in computing since the invention of the transistor. But it will be on us to decide whether it’s worthy of optimism. I am very much in favor of AI; I am simply concerned with the direction it has been taking. My intention is for you to walk away from here today with an awareness of what lies ahead, and with some ideas about what must be done.

I’d like to reflect for a moment on the remarkable arc of our journey. For nearly 90 years, scientists, sci-fi writers, and innovators alike have worked in pursuit of AI. This journey has been a rollercoaster of breakthroughs and setbacks, a narrative of human endeavor that’s as erratic as it is inspiring.

Consider that just 35 years ago, the birth of the Internet marked the dawn of our digital age. Back then, our primary focus was accessing information. Now it’s about leveraging information, using algorithms and AI to enhance its utility and “value”. This new paradigm has made the use and regulation of AI the central themes in the modern struggle over our information. How did we end up in such a situation? That story actually starts almost 50 years ago.

By the end of the 20th century, interest in AI had waned. False promises and failures to deliver led to a loss of interest that choked progress in the field starting in the mid-1980s. The issue back then was information scarcity: there were no sources of high-quality data available to train on and experiment with. Although our computational capability wasn’t nearly as advanced, the theories were already there, simply waiting for the data that would let us test them.

Entering the 21st century, the advent of the Internet suddenly made it possible to gather the vast amounts of data needed to train AI. It started with basic algorithms; nobody thought of them as “AI” the way we do today. But over time the techniques were refined, and the algorithms grew in complexity - just as the Internet grew into our lives.

Back then the online world was a playground of possibilities. We were promised a future where information was at our fingertips, where knowledge was democratized. But somewhere along the way, that vision got co-opted. You see, the algorithms only work if there is data, and bigger and better algorithms require ever more data. Information became the new oil, and you and I became the unwitting miners.

This realization marked a turning point. The initial optimism of a utopian Internet evolved into an understanding that data was not just a resource but a currency in its own right. The seeds of a new revolution were sown.

June 12th, 2017 was the day it finally happened. The invention of the transformer model catalyzed the start of the AI revolution. Efforts to capitalize on it were slow at first, but as soon as it was proven not to be a pipe dream, interest in AI exploded. In a relatively short span, we’ve progressed from basic systems like Siri to the bewildering abilities of Sora. The capabilities have grown far beyond our expectations.

And now we stand at the cusp of creating artificial intelligences so advanced that they will rival the abilities of humanity itself. What’s more, we are a mere decade away from a reality in which every AI system will be interlinked, creating an immense network of artificial intelligence. Just as the Internet once connected people, this new network will connect every fragment of knowledge. Data will flow seamlessly, taking on new formats and shapes as it passes from one model to the next. “Multi-modality”. “Augmented retrieval”. These are the buzzwords of the new millennium. It will be the dawn of a new collective consciousness, born of all humanity. But here comes the critical question: Who decides what gets fed into the colossal ‘giga-brain’? Who decides what it’s allowed to say, what it’s allowed to do?

I leave that as a thought exercise for you, dear reader.

The very essence of ethics, the core of what makes us human, hinges on the moral fabric of our society holding fast against those conspiring against it - those who would unravel it for their personal gain. The struggle for the ethical use of technology, a fight almost as old as technology itself, will be won or lost by our ability to do what is hard - because it is what is necessary. The choices we make today, the paths we choose to tread, are not just for us but for everyone after us. It is our burden to bear - because ours will be the last generation to remember a world where AI was not omnipresent. From this point on, every advancement in AI, every new algorithm and model, will be inextricably linked to the all-encompassing network.

So it falls upon us now, our generation, unique in the entire history of humankind, to lay down the laws for AI.

Unfortunately, we’ve been doing a bad job. It is clear to me now why that has been the case.

We were raised to see technology as a tool, a means to streamline and enrich our lives. We were the masters of machines, and they did only as we programmed them. That was the way things were. But now the tables have turned. Our creations are starting to master us. The AI systems we interact with are observing us, learning from us, and are now being used to manipulate us. We were taught to never trust what we saw on TV, but no one warned our parents about the Internet.

As the 2010s raged on, it became clear that the narrative of technology as a benevolent tool for good was unraveling. The things we created, once predictable and manageable, now harbored the power to influence how we treat each other. Along that line of thought, I’d like to tell you a story.

Just a couple of months ago, I was debating with a colleague of mine about the ethics of AI. We were in Burger U on Plaza Drive. You know, that place where all us intellectuals collide. As we were arguing about what would become of OpenAI, he leaned back in his chair and declared, matter-of-factly: “There will be no copyright crackdown on AI”. He paused to survey my face, then added, “And it will all be because no one cares about the copyright problem, so long as it’s easy to use”.

It became clear to me in that moment that we’d be spending the next decade fighting to stop AI from cannibalizing society. The battle between advancing AI and safeguarding the autonomy of the human race had begun. I don’t mean to sound dramatic, but in the short time since we’ve had advanced AI models, we’ve already started eating away at ourselves. Artists are being put out of work as I speak, manual laborers are under threat from AI-powered robots, and the apparent safety of knowledge-based jobs shrinks by the day.

It is true that you and I won’t lose our jobs overnight, but AI will become a part of the way we work. Those who cannot augment themselves with AI will be slated for replacement by it. We are deprecating human beings, and we have no plans as a society to support them when this happens. Like tears in the rain.

This is to be expected. We didn’t legislate AI with ethical requirements from the start. That, I feel, was a fundamental oversight in our approach. As a species we’re reactionary, not precautionary. And now we’re grappling with the consequences of what has already begun. We need a proactive strategy, with policies that not only address the current state of AI but anticipate its future trajectory. We cannot stop the advancement of AI, nor should we try - it is truly inevitable.

Artists always portrayed AI as taking over the meaningless jobs, the dangerous and the boring, so we could focus on creative pursuits. What a cruel twist of fate that they would be the first to go. If you think your job is immune, and thus that AI is not a problem for you, I invite you to take a step back and consider its other effects on society.

We willingly feed the machines every aspect of ourselves. Every picture taken, every song played, every word texted. No moment is intimate when it’s captured by your phone and uploaded into their databases, ready to train the next generation of AI. And what’s the first thing they do with this feast of data? They use it to peddle stuff we never knew we needed. I once Googled the size of toilet seat bolts, and now Amazon seems convinced that my life’s mission is to refurbish bathrooms.

It’s a funny example, but that really is the essence of commerce in the 21st century - manipulation. To get you thinking about Product A or Topic B. Sometimes they don’t even care if you’re buying something. You might have heard of the “attention economy”. By far the easiest way to get someone’s attention is through their emotions. By understanding our deepest desires, fears, and hopes, the tech giants can grab you by the eyes and keep you engaged in “the content”. Imagine now how generative AI can turbocharge that process. Instead of influencers, we could have personal AIs, ingesting your interests and in return providing a feed of information created on the fly - especially for you. “If you liked that, you’ll surely love this”, and then they have you hooked. Even if your job is safe today, your social circle, your sense of self, may not be for much longer.

Unless we act swiftly, we’re heading towards a future where AI doesn’t just assist us but controls us, shaping our thoughts, desires, and actions. A future where our very essence is harvested and fed into the insatiable digital maw. A world that was once inconceivable, and if allowed to take shape, irreversible. In this world, the notion of “organic content” will be nothing more than a quaint artifact of the past.

In response to this threat, we’ve seen policies emerge around the world attempting to regulate the usage of our data. GDPR, KOSA, COPPA. An alphabet soup of laws, all far too little, too late. They already took the data they needed to start training the AIs, and now they’ll simply make us consent to continue providing it. This isn’t a distant dystopia. This is happening as we speak.

It didn’t start in an instant; it wasn’t a crash in the night. It was a slow snake of insidious intent, sliding its way up the tree to offer us the Apple. Under the guise of “services”, the tech giants said, “For the small price of some terms and conditions, you can have all this stuff for free”. “It’s just a small agreement,” they said, “you don’t even have to read it”. All those artists and authors complained about their pictures being stolen, their books being copied and fed to the machine. But even if we stopped them now, they’ve got a taste. Pretty soon, if you want to use any service on the Internet, you’ll have to agree to let your data be trained on. The copyright problem won’t matter anymore; you’ll simply be forced to relinquish control.

Time is not our ally in the fight to fix this problem. Our generation, straddling the fence between the world that was and the world that will be, might be our last hope to steer the ship away from the iceberg. We must do so without resorting to brute force or authoritarian measures. We have to convince ourselves to reject convenience, a Herculean task if there ever was one.

So what must we do? We start by drawing lines in the sand. We refuse to engage with entities that exploit AI for abusive purposes. We reject those who use AI to manipulate human behavior, or to discriminate against our neighbors.

Individual action alone will not be enough. We need to enact effective legislation that preserves our ownership of our data and prevents us from being forced to sign away our rights in the name of terms and conditions. It means mandating transparency for the algorithms that make critical decisions about our lives. A computer cannot be held accountable, and therefore it should never be placed in a position where its decisions alone can do an injustice.

The European Union’s proposed Artificial Intelligence Act is a promising start, but we need to go further. Global coordination and cooperation to develop shared standards for responsible AI will be required, lest we leave our neighbors to shoulder the blame. We must invest heavily in AI literacy and education, and empower people to understand and navigate AI. This includes not just technical skills, but the critical thinking and media literacy needed to identify what is real and what is “fake”. We need a society that doesn’t just consume AI, but actively participates in shaping its development and use.

None of this will be easy. It will require us to make hard choices, to sacrifice some convenience for the sake of our humanity. But the alternative - a world where our lives are shaped by machines we cannot control - is far grimmer. We’ve seen the world with and without the AI juggernaut. We understand what’s at stake, and we have the foresight to shape the trajectory of the world. It is our opportunity, our responsibility, and our moral obligation to steer AI towards a future that amplifies and enhances our humanity.

The choices we make in the coming years will echo across generations. Let us choose wisely, and let us choose together. Thank you.