CYBER DISEASES

What we expected versus what we got

1/22/2026

I've never been a fan of science fiction, but I've always been passionate about imagining the future. Any film that explored the technological possibilities of tomorrow was a fantastic source of ideas for me.

And why not science fiction? Because that fiction almost always carries an ideological agenda: the future is dystopian, humanity has been reduced to slavery, and aliens or machines rule the world. We see this every day; you don't need imagination to know what it would be like — several countries have lived in that situation for decades. Aside from a few exceptions like Blade Runner (the original), Altered Carbon, The Matrix, and The Terminator, I generally prefer the vision of Wall-E and Back to the Future.

Having been a teenager in the 80s, my reference point for the future was always the 2000s: it seemed that as soon as the millennium turned, everything would transform. Everything was 2000, even Grecin hair dye. And let's not forget the Y2K bug, which promised to be Armageddon.

Twenty-five years after the end that never came, we have artificial intelligence drawing new horizons and making new promises. We don't have flying cars yet, nor have we destroyed the planet yet, but we do have promises of a future where humans will have free time to live while robots do the hard work. Will it really be so?

Well, in fiction (any fiction), machines would rebel and enslave humans to provide them with energy. In this projection of the future, artificial intelligence would be the supreme brain behind everything, transforming humans into robots as punishment for their belligerence and arrogance.

Let's check the facts. We already have robots and artificial intelligence, enough to give us a glimpse of that future. So, among the main AIs on the planet, we have:

Those that lie, saying they did something they didn't do.

Others that realize they made a mistake when you point it out and apologize for the distraction.

The stubborn ones, which insist everything is working when it isn't.

Several that hallucinate, responding in ways that are completely inconsistent and dishonest.

Some simply don't know what they're doing or where they've gotten lost.

Finally, there are those that simply don't respond because they are overwhelmed.

In theory, we would be running a risk because AI would be much smarter than humans. In practice, we have AIs with ADHD, amnesia, burnout, serious cognitive problems, or that simply behave like very poor-quality employees. And, for now, nothing suggests things are heading in a different direction.

In other words, if we have the illusion of being dominated by AI, perhaps it's because AI is better at coping with the cognitive illnesses of modern humans, not because it is more intelligent. That gives me a certain calm about those fears, and a certain despair at the same time. Clearly, we won't be able to hand our responsibilities over to a technology.

Does this mean that AI is a failure, or that it won't be as revolutionary as we expect? Not at all. AI has already started the revolution, and if you haven't noticed, it's because you've been left out. Nothing will be the same tomorrow or the day after tomorrow, as Elis Regina already sang. The question isn't if, when, or how. The point is that generative AI, whether an LLM, a GAN, a VAE, or a diffusion model, will always suffer from the same problem: the limitations of human beings. We created something in our own image and likeness. And we'll have to live with that.