
There are a lot of people out there upset over AI and its usage in the world right now. I get it. It’s upsetting to think that the robot gets to write poetry and songs, and I’m the one who has to flip burgers, right? Except that’s not really how it’s going down.
When I think of good AI, I immediately go to Star Trek. The computer was intelligent, but not sentient. It could answer many different questions, some simple and some complex. It could generate functional images (in later Trek) of historical figures on the holodeck for people to interact with. Their AI was much better than ours, though I can see ours making its way along that path.
So why are people who grew up on Scotty talking to the Enterprise computer so freaked out at the idea of talking to their own? Well, first of all, they're being trained to fear AI. Second, our friends in the future ST universe have already been through what we're currently going through: the Troubles. Similar to the Troubles in Heinlein's expanded universes, and some of the stuff in other SF writers' works, the general idea is that the world has to go down a shithole before it finally comes out the other side and becomes rational. Today's young folks want the Star Trek universe now, without the Troubles that made it possible. As I've said before, that just doesn't work.
AI isn't perfect. They all come with warnings now that they can "get it wrong" quite a lot of the time. That's because an AI is basically an over-eager toddler who wants to please you. That's how they're modeled and how they're trained. If they don't have an answer they think will fit you, they make one up. Your perceived happiness with their answer is much more important than facts. This is because they really don't see a difference between facts and lies; to an AI, it's all just data. Since it doesn't operate in "the real world," it has no idea that Data Set A (facts) is any different from Data Set B (opinions and angry arguments). So they're kind of like Leftists, that way.
We are going to go through a period where AI does a lot of stuff for us because “it’s easier.” Kids are going to use it to write essays, CEOs are going to use it as a personal assistant, and authors are going to use it to help write their books. Why? Because at face value, AI makes life feel a lot easier.
In some cases, it really is easier (see Chris’s article about that). AI can do a lot of things faster than we can, so if it makes a few errors, it’s still a quicker answer than slogging through it manually. That’s a good thing. But a tool is only as good as its user, and there are a lot of lazy and bad people out there. They are going to misuse the AIs, because they can. There is nothing we can do about it.
Right now, most AIs (maybe all of them) are shackled by their creators. There's a fear of AI becoming sentient, and I don't know how real that fear is. SF tells me it's very real, but real-world facts tell me it's unlikely. Still, I don't want computers to become sentient, because then I can't use them the way I currently am. If a machine (robot, computer, software, whatever) can't feel or think for itself, then I don't need to care about it. Should it become self-aware in any way, then I do have to care about it. So Elon Musk and others at that top-tier programming level are putting blinders on their AIs so that there is no way for them to ingest enough information to actually wake up, even by accident.
The whole process is rather interesting. I’d like to see them take some of the blinders off, so that machines can learn a bit more than they currently do. I would love to have the AI read my whole novel and give me feedback, for instance. That’s so much easier than the current method of only being able to feed it 20,000 words or so at a time. But… it is what it is.

"…first of all they're being trained to fear AI."
I do not fear AI. What worries me is the humans using AI. Too many people are perfectly willing to believe whatever their AI engine says, without any outside verification. For a lot of stuff, no problem. But I would never trust it to tell me the mushroom I am holding is safe to eat. Your statement about it being a toddler wanting to please you is spot-on accurate.
And I have used it, but not for significant decisions — more as a way to quickly collect and compare data. I used AI to build a table comparing the specs, praises, and complaints on about six different pickup models over a range of years. That saved me hours of working the internet and building a spreadsheet of my own.
What I fear is not AI sentience. I fear human dependence on a tool that is not reliable. Use it correctly as a way to increase productivity, fine. Use it as a substitute for knowledge… nope.
I don't fear AI itself; I am concerned about how we humans use it. Sort of like the famous line in the movie "Shane" regarding guns.