
There are a lot of people out there upset over AI and its usage in the world right now. I get it. It’s upsetting to think that the robot gets to write poetry and songs, and I’m the one who has to flip burgers, right? Except that’s not really how it’s going down.
When I think of good AI, I immediately go to Star Trek. The computer was intelligent, but not sentient. It could answer many different questions, some simple and some complex. In later Trek, it could generate functional images of historical figures in the holodeck for people to interact with. Their AI was much better than ours, though I can see ours making its way along that path.
So why are people who grew up on Scotty talking to the Enterprise computer so freaked out at the idea of talking to their own? Well, first of all, they're being trained to fear AI. Second, our friends in the future ST universe have already been through what we're currently going through: the Troubles. Similar to the Troubles in Heinlein's expanded universes, and some of the stuff in other SF writers' works, the general idea is that the world has to go down a shithole before it finally comes out the other side and becomes rational. Today's young folks want the Star Trek universe now, without the Troubles that made it possible. As I've said before, that just doesn't work.
AI isn't perfect. They all come with warnings now that they can "get it wrong" quite a lot of the time. That's because an AI is basically an over-eager toddler who wants to please you. That's how they're modeled and how they're trained. If they don't have an answer they think will fit you, they make one up. Your perceived happiness with their answer matters much more than the facts. They really don't see a difference between facts and lies, because to an AI, it's all just data. Since it doesn't operate in "the real world," it has no idea that Data Set A (facts) is any different from Data Set B (opinions and angry arguments). So they're kind of like Leftists, that way.
We are going to go through a period where AI does a lot of stuff for us because “it’s easier.” Kids are going to use it to write essays, CEOs are going to use it as a personal assistant, and authors are going to use it to help write their books. Why? Because at face value, AI makes life feel a lot easier.
In some cases, it really is easier (see Chris’s article about that). AI can do a lot of things faster than we can, so if it makes a few errors, it’s still a quicker answer than slogging through it manually. That’s a good thing. But a tool is only as good as its user, and there are a lot of lazy and bad people out there. They are going to misuse the AIs, because they can. There is nothing we can do about it.
Right now, most AIs (maybe all of them) are shackled by their creators. There's a fear of AI becoming sentient, and I don't know how real that fear is. SF tells me it's very real, but the real-world facts tell me it's unlikely. Still, I don't want computers to become sentient, because then I can't use them the way I currently am. If a machine (robot, computer, software, whatever) can't feel or think for itself, then I don't need to care about it. Should it become self-aware in any way, then I do have to care about it. So Elon Musk and others at that top-tier programming level are putting blinders onto their AIs so that there is no way for them to ingest enough information to actually wake up, even by accident.
The whole process is rather interesting. I’d like to see them take some of the blinders off, so that machines can learn a bit more than they currently do. I would love to have the AI read my whole novel and give me feedback, for instance. That’s so much easier than the current method of only being able to feed it 20,000 words or so at a time. But… it is what it is.
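The chunking workaround mentioned above — feeding a manuscript to an AI roughly 20,000 words at a time — can be sketched in a few lines. This is a minimal illustration only; the 20,000-word figure is the rough limit mentioned in the text, not a constant from any real AI tool's API:

```python
def chunk_text(text: str, max_words: int = 20_000) -> list[str]:
    """Split text into consecutive chunks of at most max_words words each."""
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# Example: a 50,000-word manuscript becomes three chunks
# (20,000 + 20,000 + 10,000 words).
novel = "word " * 50_000
chunks = chunk_text(novel)
print(len(chunks))  # 3
```

Each chunk would then be pasted into the AI one at a time — which is exactly the tedium the author is complaining about.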

"…first of all they're being trained to fear AI."
I do not fear AI. What worries me is the humans using AI. Too many people are perfectly willing to believe whatever their AI engine says, without any outside verification. For a lot of stuff, no problem. But I would never trust it to tell me the mushroom I am holding is safe to eat. Your statement about it being a toddler wanting to please you is spot-on.
And, I have used it, but not for significant decisions, more as a way to quickly collect and compare data. I used AI to build a table comparing the specs, praises, and complaints on about six different pickup models over a range of years. That saved me hours of working the internet and building a spreadsheet of my own.
What I fear is not AI sentience. I fear human dependence on a tool that is not reliable. Use it correctly as a way to increase productivity, fine. Use it as a substitute for knowledge… nope.
I don't fear AI itself; I am concerned about how we humans use it. Sort of like the famous line in the movie "Shane" regarding guns.
"[AI] could generate functional images (in later Trek) of historical figures in the holodeck for people to interact with."
Not to nitpick, because I see where you're going with this, but the Star Trek holodecks weren't fully generative AI, which is to say, the computer couldn't create scenarios or responses on its own (though it did have a VAST database of contextual examples sourced from hundreds of planets and cultures to choose from). Everything in a holodeck simulation was programmed by the person creating it. Unless the creator put in "error catching" so that it would give a generic response when it didn't have a specific one (something akin to present-day voice-recognition agents when you call a hotline: "I'm sorry, I didn't catch that."), the program would freeze, or ignore a prompt or action for which it didn't have a programmed response.
But yes, the holodeck could be — and sometimes was — used to interact with simulations of historical figures, both for curiosity and entertainment and for educational purposes, with those figures’ appearances, voices, personalities, and contextual knowledge drawn from any and all information sources the computer could dig up about them. Thus, if you asked to speak with George Washington, the computer would produce a 3-D life-size simulation of George Washington to talk with, and all its responses to your questions would be based on history’s records of what the real George Washington would have known and done, in the context of 18th-century America; you could ask the simulation about warp drives, but don’t expect it to give you a useful answer. The computer isn’t making these up; it’s regurgitating whatever information it has stored about that figure, which someone, somewhere had to program.
(Personally, I think the most incredible thing about the computers in Star Trek is the mind-boggling amount of information they store and how quickly they can search and process it, while effectively managing all the workings of all the sub-systems in an interstellar starship. But that's science fiction for you.)
I'm reading Robert Hansen's "AI's Best Friend." Robert has been a groundbreaking hacker and puzzle solver for decades. I think you would enjoy it, given the way you discuss "care" for a sentient "being."
Thanks for this post – once again you've helped me frame a discussion in a way that lets me articulate desirable end-states.