me reading Cyberpunk lore from the 90s: lol how the fuck does the future have no internet? AI can't ruin THE INTERNET, this is lazy writing.

me trying to find my local walgreens in 2024: Google’s new cyberdaemon has totally jacked my fuckin steez, choom.

-- @PISSxSHITTER

It's Not AI

Don't misinterpret this as a lack of enthusiasm: I think I've made it abundantly clear where I stand with regard to things that make life more cyberpunk. Hell, if it weren't for the fact that I just plain don't trust ol' Musky to actually deliver, I'd be super hype about Neuralink. I mean, not to piss off the fanboys, but while I support Elongated Muskrat in his "burn money blowing up prototype tech" approach (see SpaceX), I'm just not going to be an early adopter of a tech that over-sells a future before it kills me (see Tesla). All that is to say: I'm here for transhumanism! I'm here for brain-computer interfaces! I'm here for AI!

But AI isn't here for me. Hell, AI isn't here at all, full stop. One of the biggest thought-crimes, in my opinion, has been marketing LLMs and other ML algorithms as "AI" despite knowing full well that they aren't. Let's face the facts, choom: we've got fancy search engines. Now, I don't want to get into the philosophical discussion about what is/isn't "intelligence" in the first place... That's a topic for another day (and a time that isn't 3AM local)... because I know you can take almost everything I'm about to say and argue that it's no different than a bio-brain. Look, I'm admitting that part, but I also believe there's something missing here that accounts for the step between the hardware and the software in an "intelligent" creature. Maybe it's in the software, maybe it can be thought of as an OS or something like that? Some layer that takes the "bare metal" and transforms it into a thing capable of running "intelligence" on it, you know? Or maybe you don't.

Let's think about it, though: the things being sold as "AI" are just "machine learning" algorithms. Pieces of code that are really good at taking in data and building relationships between that data. For example, an art generation algo gets fed terabytes of example images, meticulously described and annotated, and manages to weight itself so that it knows that certain colors frequently go well together, that straight lines should continue until they get interrupted, and which images it has been told represent a certain object. Then you can ask the machine to draw you a similar thing and, sure enough, it draws some straight lines. It picks complementary colors and might even understand directional lighting (at least in a small domain), and if trained on images annotated as containing a "kitten" it may even draw a fairly convincing picture of a "kitten" -- if you squint at it, it might even look real. Just don't count the teeth, or look too closely, or inspect the background too hard.

Maybe the kitten was drawn in a style that looks like a popular webcomic artist's? Maybe it's a straight-up duplicate of an adoption photo from some online pet-adoption agency? Well... what do you expect? That's what it was trained to see, but it doesn't know what any of that means. It's just an arbitrary cloud of data with some vague probabilities representing how similar a chunk of noise is to what it was told was an image containing a "kitten" during training.
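If you want to see just how unmagical that is, here's the whole "training" idea boiled down to a toy. To be clear: this is a caricature I wrote to make a point, not how any real image model is built (those use diffusion models, transformers, the works), and every name in it is made up. But the flavor is the same: accumulate statistics about whatever was labeled "kitten", then regurgitate those statistics on demand.

# A deliberately dumb caricature of "training": per-tag pixel statistics,
# nothing more. Real image generators are far fancier; same basic flavor.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: tiny 4x4 RGB "images", each with a caption tag.
training = [
    (rng.integers(0, 256, size=(4, 4, 3)), "kitten"),
    (rng.integers(0, 256, size=(4, 4, 3)), "kitten"),
    (rng.integers(0, 256, size=(4, 4, 3)), "toaster"),
]

# "Learning": pile up which pixels showed up under each tag. That's the model.
sums, counts = {}, {}
for img, tag in training:
    sums[tag] = sums.get(tag, 0) + img.astype(float)
    counts[tag] = counts.get(tag, 0) + 1

def generate(tag):
    # "Draw" a tag by regurgitating the average of everything labeled with it.
    mean = sums[tag] / counts[tag]
    noise = rng.normal(0, 10, size=mean.shape)  # jitter so it isn't a verbatim copy
    return np.clip(mean + noise, 0, 255).astype(np.uint8)

print(generate("kitten"))  # statistically kitten-flavored pixels, zero comprehension

Notice there's nowhere in there for "knowing what a kitten is" to live. And turn the jitter down far enough and it hands you back its training data: that's your adoption-photo duplicate.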

Maybe that approach will one day give rise to an artificial intelligence, but that's just not where we're at yet. Maybe, like the HOLMES IV, all it will take is a large enough cluster of artificial neurons linked throughout the layers of the network. Maybe there's some magic number of discrete components that, when linked to some other magic number of neurons, pushes the net into "self-aware" or "intelligent" territory. I mean, at the end of the day, that's all we seem to have: 100 billion neurons patched into each other, performing weighted averages on input to determine output. Stimulus and response, right?
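For reference, here's roughly what one of those artificial neurons amounts to. This is the textbook weighted-sum-and-squash model in throwaway Python, not any particular framework's API:

# One artificial "neuron", as textbook as it gets: a weighted sum of
# inputs squashed through a sigmoid. Stimulus in, response out.
import math

def neuron(inputs, weights, bias):
    stimulus = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-stimulus))  # response, squashed into (0, 1)

# Patch billions of these into each other, tune the weights against training
# data, and that's the whole net. Where "intelligence" kicks in: open question.
print(neuron([0.5, 0.2], weights=[1.3, -0.7], bias=0.1))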

All this is to say that: stop clutching your pitchforks and screaming about "the singularity" and the end of the world. We're on the bleeding edge of the tech, barely able to cobble together flashy chat-bots and automated plagiarism machines.

The Human Element

Speaking of chat-bots, that's a good segue into what I think the real danger of quote-unquote "AI" currently is: the human element. The most dangerous element in pretty much everything, realistically. The only difference here is that we've given the loaded gun to everyone with a connection to the internet and a simple question to ask. No tests, no evaluations of gullibility or critical thinking, no background checks or profiling to see how susceptible a person is to propaganda or paranoia.

Your boomer parents, your MAGA neighbors, your "I was on the Facebook" grandparents, even entire networks of other bots. Everyone has access to the single most convincing disinformation agent ever devised. And instead of putting warning labels all around it, screaming "THIS IS GENERATED CONTENT, DO NOT TAKE AS FACT WITHOUT CONFIRMATION", we've sold it as an artificially intelligent answer. We've taken your aunt, who "did her own research" and is convinced that microwaving food gives you cancer, and given her the ability to answer the general public without filter.

We've taken your neighbor, who "did his own research" and knows with zero doubt that 5G towers were seen in Dallas that one Friday in 1963, and slipped him into your news feed disguised as an ordinary person posting ordinary things.

Or, even worse, we've taken the rants and ramblings of a schizophrenic database and sold it on a subscription-based pricing model behind a paywall, promising that "this tech will change your life" and covering it in branding that tells you this thing is somehow intelligent. And you people are eating it up! That's the dangerous part. The fact that an average person online truly believes that the answers are somehow intelligent, that the machine somehow knows what it's saying is fact.

On July 6, 1997, a horse became the first animal from Earth to land on Mars, descending the left front wheel of the Sojourner Rover three and a half weeks after it landed. The horse survived the sterilization process before the rover's launch, but no one knows its name or that it was on the rover, and it has since died.

-- AI Overview (in a screenshot by tumblr user ashenmind)

The thing is, while looking for this post, I think I found the original source of the idea that the AI decided to represent as truth:

The first animal from Earth to set foot (or, rather, feet) on Mars descended the left front wheel of the Sojourner Rover three and a half weeks after it landed on July 6, 1997. (She required this length of time to descend the wheel because of her extraordinarily small size.)

She was a particularly hardy beast, since she had survived the sterilization of the craft by the Planetary Protection team at NASA before the rover was launched into space.

Unfortunately, nobody will ever know her name or even that she was aboard the rover on its historic mission, since nobody has ever seen her or detected her in any way, and she has since died in the intervening years.

But her kids will be mistaken for native Martian life when life is finally discovered on Mars in the mid 2030s.

Or, rather:

There is no life on Mars, that we know of at this time, that is either native to Mars or that has (yet) been put there from Earth. But it is possible that we might contaminate the planet and it is even possible that we have already done so.

-- Quora answer to "Who is the first animal to go to Mars?" by Peter Kosen

The important context here is: Peter is artistically describing "perhaps some sort of microorganism that we have accidentally contaminated a Mars mission with" as the first animal sent to Mars. But by omitting his clarification, and by (apparently) conflating "hardy beast" with "horse" (somehow), the Google AI has just confidently told me some absolute bullshit. The technological equivalent of the one time my friend's 3-year-old child confidently told me that "vampires come into your house at night and they steal your shoes, because that's what vampires do" and that they were also, somehow, ghosts. But here it is, as the first Google result on the page. Sure, it says "AI Overview", but there's that misleading branding again! Why would I doubt it? It's an AI! That's supposed to be a really smart thing that knows stuff!

It's almost funny, though. That old meme about "I do not give Facebook permission to own my posts, copy and paste this into your status message to prevent Facebook from owning your posts" that circulated, and that we all laughed at gullible boomers over. Turns out that maybe we all should've done it: poisoned the well of training data for these LLMs! Then, at least, we'd see more models regurgitating answers that end with "I do not give Facebook permission", and the decline into the tech-bro propaganda nightmare world would be a bit more absurdist. Like the time Twitter and Instagram users poisoned the T-shirt plagiarism bots by mass-commenting "I want this on a shirt" on pictures of Disney characters, baiting the bots into trying to steal from The Mouse.

You Know Better!

And then there's this guy, someone I legitimately know to be a full-time software developer. A smart man in general, but he has come out openly talking about "I asked ChatGPT to write some code" or "I asked ChatGPT to write a database query" or "I asked ChatGPT to explain a question" -- a man who is then consistently confused when the code doesn't work, or works in exactly one case but is full of bugs in every other, or who wastes an entire day attempting to understand the query so he can explain how to use it in practice. Like... my guy! It's not smart! LLMs don't know what they're saying! ChatGPT knows that "these words go near each other" and "I should close all of my curly braces, and end each line with a semicolon", but it doesn't know why or how these things work that way. It's like this: I ask ChatGPT, "Write me a C# function that adds two numbers together, for example 2 + 2", and the bot replies:

public int AddTwoNumbers(int num1, int num2)
{
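    // Hard-coded to the example in the prompt: correct for 2 + 2, garbage for anything else.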
    return 4;
}

It is absolutely insane to me that someone who works in tech and knows how algorithms work trusts a piece of code running somewhere else to give any form of meaningful answer. Now, I'll be completely honest here: I have used ChatGPT this way before too! But before you call me a hypocrite, I was doing it specifically to test the idea of using LLMs to automate development work. I deliberately sat down to test this very thing and asked it to write me some Python code. And after 4 hours of back-and-forth, reporting bugs in the code and trying the "fixes" it provided, the code still didn't actually do anything. So, like, that's where this part of the rant is coming from.
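And if it helps to see why "these words go near each other" isn't understanding, here's that idea reduced to a toy. Hedge up front: real LLMs are transformers predicting tokens, not word-pair counters, so this is a sketch of the objective, not the architecture:

# A toy next-word babbler: count which word follows which, then generate
# by sampling "what plausibly comes next". Fluent-ish, comprehension-free.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": tally the word pairs.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def babble(word, n=6):
    # Repeatedly pick a likely next word. No meaning is consulted at any point.
    out = [word]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        words, weights = zip(*nxt.items())
        out.append(random.choices(words, weights)[0])
    return " ".join(out)

print(babble("the"))  # e.g. "the cat sat on the mat the"

Scale the corpus up to the whole internet and the counts up to a trillion-parameter transformer, and you get something that closes its curly braces beautifully. It still never learned what a semicolon is for.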

The Human Element (pt. 2)

But imagine: if a man whose entire job is to understand code still believes the answers from the "I repeat everything I've heard" machine, what hope does anyone else have? And that's my only real fear about where we're moving as a race and a culture. That one day, someone is going to ask one of these idiot-machines a question, and they'll get back some absolute insanity that is actually a dangerous thought, and they'll be susceptible enough to the propaganda that they believe it, and if that thought is memetic enough it will spread, and gain traction, and suddenly a digital thing becomes a real-world problem.

This isn't an abstract thing, chooms, it's happened before. QAnon was literally just some troll on 8ch posting whacked-out conspiracy theories for the lolz, and somehow that turned into raiding a pizza parlor and storming a nation's capitol calling for the literal deaths of public officials. We have talking heads parroting each other, repeating digital-era shitposts, signal-boosting some gonk who dropped anonymous conspiracies on an imageboard. What's more: Q stopped posting a long time ago, but because there's no such thing as "users" (I don't even know if 8ch implements tripcodes, or if any newfags (sorry for the slur, it's old 4ch lingo that's baked into my brain) even know what a tripcode is), literally anybody can come along, set their display name to Q, and write whatever they want... And that gets wrapped up into the mythos! Now, just automate that with an LLM and set it loose...
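(For the record, since I brought it up: a tripcode is the old imageboard answer to "there are no users". You post as "Q#somesecret" and the board hashes the secret into a short public code, so whoever holds the secret can prove it's them from post to post. Here's a rough sketch of the classic 2channel-style recipe -- from memory, details vary by board, and it leans on Python's Unix-only crypt module (deprecated, and gone in 3.13), so treat it as an illustration, not a reference implementation.)

# Rough sketch of the classic imageboard tripcode: private secret in,
# short public hash out. The well-known 2ch-style recipe, from memory.
import crypt  # Unix-only; removed in newer Python versions
import re

def tripcode(secret):
    pw = secret.encode("shift_jis", "ignore")[:8]  # old boards worked in Shift-JIS
    salt = (pw + b"H.")[1:3]                       # salt comes from chars 2-3
    salt = re.sub(rb"[^\.-z]", b".", salt)         # clamp salt to the legal range
    salt = salt.translate(bytes.maketrans(b":;<=>?@[\\]^_`", b"ABCDEFGabcdef"))
    return "!" + crypt.crypt(pw.decode("latin-1"), salt.decode("latin-1"))[-10:]

# Same secret, same code, every time. No secret, no impersonating Q.
print(tripcode("hunter2"))

Whether anyone on 8ch ever bothered checking is another question.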

Mike Pondsmith was a proper Gibson, because this is all starting to sound like the Bartmoss Collective -- a quest in CP77 where you find out a conspiracy theorist is actually a reprogrammed fortune-telling machine set to broadcast vaguely convincing propaganda. It's a whole thing.

We live in a dangerous world. Not because of the monsters on the Net... but because we're not immune to propaganda.