Artificial Intelligence is the buzzword on everyone’s lips. Companies the world over are racing to put it into any and all of their offerings, often without caring whether it makes sense to have it there.
My best friend says he is very concerned about the possibility of a self-aware intelligence arising from the Large Language Models populating the web these days. He fears we might be headed for a future like those predicted by Terminator (Skynet) or The Matrix.
I don’t.
I worry that I spend hours or days putting words together while an AI can spit out the equivalent of entire books in minutes. And that there are people already doing that.
I worry that this machine doesn’t just create a lot of “content” very quickly, but that what it creates is often inaccurate or plain wrong. It hallucinates information and mixes it with truth almost randomly. It would perhaps be inaccurate to say it lies, because there is no actual intent behind it, nor any capacity to differentiate truth from fiction.
I have heard LLMs described as plausible but not credible, and I believe that is accurate.
Therefore, I think one of the most dangerous things AI can actually do is not to rise up and kill all humans. At least not yet. It is to flood the Internet with meaningless nonsense, making it very hard to find trustworthy information.
This is already happening on sites like Amazon, where auto-generated books are mixed in with actual human-made ones. Inaccurate or hallucinated information thus spreads unchecked and is hard to separate from legitimate data.
I worry about the way Israel is using AI to speed up and automate its indiscriminate bombing campaign in Gaza. This is a tool to kill more people, faster, and to dodge any accountability. After all, no human can be blamed for the choices of an automated machine.
Don’t get me wrong, I also think there are some use cases where AI can be genuinely good. We use a plugin that cleans up recorded voices and makes them sound great. It is borderline miraculous. I have seen reports of AI speeding up research on protein candidates for the treatment of diseases.
But pretending we can simply automate creativity and cut costs may be a middle manager’s dream. It will spell a nightmare for the rest of us.
So here I make a simple and small promise:
This blog is written by me: a humble and flawed human. It will be slow and inconsistent. It will contain typos, structural and grammatical errors. It will sometimes be wrong, but it will strive to be better. It will change positions as I learn and as new information becomes available.
It will have a soul: mine. Because I will write it letter by letter.
I refuse to host even more AI-generated slop in here. It may have its place and its use. But not on this page.
As bots flood social media and LLMs fill the web with content at breathtaking speed, we need to consciously preserve even a small corner of the Human Internet, and I will do my best to contribute to that goal. With all the flaws that may bring.
Long live Ned Ludd.