
With Gary Smith, Professor of Economics at Pomona College, author or co-author of more than 80 academic papers and 12 books, including his latest, The AI Delusion
Podcast: Play in new window | Download
The AI Delusion
We live in an incredible period in history. The computer revolution may be even more life-changing than the industrial revolution. We can do things with computers and computers can do things for us that could never be done before.
But should our love of computers and artificial intelligence (AI) cloud our thinking about their limitations?
To get to the heart of this question, Steve speaks with Gary Smith, Professor of Economics at Pomona College. Gary received his Ph.D. in Economics from Yale University. He has written or co-authored more than 80 academic papers and 12 books, including his latest, The AI Delusion.
Computers Aren’t Smarter Than Humans
It’s often said that computers are smarter than humans. Hollywood takes it a step further, with movies that imagine a future where super-intelligent machines protect themselves by enslaving or eliminating humans.
Gary Smith says the delusion in his book’s title refers to misconceptions about smart computers. Computers can answer a host of questions, beat us at chess, and so on. But the intelligence they have is of a very limited kind. They can interpret digital data but don’t have any of the real-world context that humans have.
The Confounding “Axe Question”
Consider the following: “You can’t cut down the tree with that axe because it is too small.” Humans immediately know that “it” refers to the axe. But a computer doesn’t know what a tree or an axe is and gets the answer wrong about half the time.
To a computer, a tree is just a bunch of pixels on a screen. The computer tries to find a match of those pixels in its database to know what the object is. Sometimes it gets it right, sometimes it doesn’t.
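The pixel-matching idea can be made concrete with a toy sketch. The nearest-neighbor “classifier” below is purely illustrative (the tiny images, labels, and function names are all made up for this example, and real systems use far more elaborate neural networks), but it captures the core point: a label is assigned by comparing numbers, with no notion of what a tree or a wagon is.

```python
# Toy illustration of "recognition" as nothing more than matching pixel
# values against a database of labeled examples. The 3x3 "images" below
# are invented for the sketch; real systems work on millions of pixels.

def classify_by_pixels(image, database):
    """Return the label of the stored image whose pixels differ least."""
    best_label, best_dist = None, float("inf")
    for label, stored in database:
        # Sum of squared pixel-by-pixel differences -- no concept of
        # trunks, branches, or wheels, just arithmetic on numbers.
        dist = sum((a - b) ** 2
                   for row_img, row_db in zip(image, stored)
                   for a, b in zip(row_img, row_db))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

database = [
    ("tree",  [[0, 1, 0], [1, 1, 1], [0, 1, 0]]),
    ("wagon", [[1, 1, 1], [1, 0, 1], [0, 0, 0]]),
]
new_image = [[0, 1, 0], [1, 1, 0], [0, 1, 0]]
print(classify_by_pixels(new_image, database))  # prints "tree"
```

Change a few pixels and the label can flip to something absurd, which is exactly the failure mode Smith describes: the match is numerical, not conceptual.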
In his book, The AI Delusion, Gary Smith has a simple picture of a wagon: a rectangle with two circles, which are its wheels, and a handle. Humans immediately know it’s a wagon and what it’s used for. But a state-of-the-art deep neural network identified the wagon as a business. Another sophisticated program saw it as a badminton racquet. That’s because computers are currently limited to matching pixels and don’t really understand our world.
Hillary Clinton’s Ada Probably Cost Her The Presidency
In The AI Delusion, Gary Smith also discusses Ada, an algorithm that Hillary Clinton relied on to figure out where to spend money, what issues to focus on, and so on. She got the idea from Barack Obama, who relied on computer algorithms on his way to the White House. Obama’s computers identified voters and their issues across the country so Obama could tailor his messages accordingly.
Ada was Hillary’s secret weapon, but it failed miserably at exciting voters. Bernie Sanders and Donald Trump, on the other hand, held rallies where 10,000 people cheered and screamed their support.
Clearly, Ada had its limitations relative to humans, and Hillary’s over-reliance on its predictions cost her the election. Bill Clinton, the ultimate campaigner, was absolutely distraught that nobody was heeding his advice.
Common Sense And Wisdom
Humans have common sense and wisdom gathered from life’s everyday experiences. We use critical thinking to make sense of simple and complex scenarios.
Artificial intelligence programmers are keen to give computers these unique abilities but have had limited success. No one really knows how our brain looks at a drawing and knows it’s a wagon and not a tree. So how do you program a computer to do it?
Instead, AI has been applied to practical applications like spell checkers and search engines.
The Times They Are A-Changin’
To understand The AI Delusion, Steve quotes lines from a Bob Dylan song, “The Times They Are A-Changin’ ”:
Come gather around people wherever you roam
And admit that the waters around you have grown
And accept that it’s soon you’ll be drenched to the bone.
Humans intuitively know what Dylan’s singing about, but computers have no idea. This is what leads Gary Smith to criticize IBM’s Watson: it does no critical thinking.
But technologists, such as Ray Kurzweil, disagree. Kurzweil believes machine intelligence will surpass human intelligence by 2045. Though machines will get faster, Gary Smith believes the roadblock is figuring out how our brains work and writing computer code that somehow replicates that.
AI Drives Trading On Wall Street
Steve notes that AI has now infiltrated Wall Street, with algorithms trading massive amounts of stock daily. Algorithms are largely based on finding patterns, but they cannot predict the impact of future events.
Algorithms gone awry have led to flash crashes with sound stocks trading at ridiculous prices. Steve attributes this to computers not understanding stocks and the external events that drive stock prices. All that algorithms do is react, sometimes disproportionately, to digital trends that resemble past occurrences.
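Smith’s underlying point, that mining enough random data always turns up a “pattern,” is easy to demonstrate. The sketch below is purely illustrative (not any real trading system): every series is coin-flip noise, yet searching a thousand of them against a target reliably “discovers” a strong but meaningless correlation.

```python
import random

# Illustrative sketch of data mining gone wrong: every series here is
# pure coin-flip noise, yet comparing 1,000 of them against one target
# "discovers" a series that correlates strongly by chance alone.
random.seed(0)  # arbitrary seed, chosen only to make the run repeatable

def coin_flips(n):
    """A random series of n up/down moves -- noise, nothing more."""
    return [random.choice([-1, 1]) for _ in range(n)]

def correlation(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

target = coin_flips(20)
candidates = [coin_flips(20) for _ in range(1000)]
best = max(candidates, key=lambda s: abs(correlation(s, target)))

# A strong-looking "correlation" emerges from pure noise.
print(round(abs(correlation(best, target)), 2))
```

The “pattern” found here is exactly as meaningful as Nevada lawyers versus tripping deaths: inevitable, coincidental, and useless for prediction.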
If Hollywood has you worried about becoming enslaved to computers, rest easy. We aren’t anywhere close to that happening just yet. As Gary Smith writes in The AI Delusion, computers just aren’t equipped to see and understand the real world like we do and are eons away from replicating our fabulous brains, even for something as simple as the axe-and-tree question.
Disclosure: The opinions expressed are those of the interviewee and not necessarily United Capital. Interviewee is not a representative of United Capital. Investing involves risk and investors should carefully consider their own investment objectives and never rely on any single chart, graph or marketing piece to make decisions. Content provided is intended for informational purposes only, is not a recommendation to buy or sell any securities, and should not be considered tax, legal, investment advice. Please contact your tax, legal, financial professional with questions about your specific needs and circumstances. The information contained herein was obtained from sources believed to be reliable, however their accuracy and completeness cannot be guaranteed. All data are driven from publicly available information and has not been independently verified by United Capital.
Steve Pomeranz: We live in an incredible period in history. The computer revolution may be even more life-changing than the industrial revolution. We can do things with computers that could never be done before and computers can do things for us that could never be done before. But should our love of computers cloud our thinking about their limitations?
This is a most important question for today’s fast-changing world. My guest is Gary Smith, professor of economics at Pomona College. Gary received his Ph.D. in economics from Yale University, has won two teaching awards, and written or co-authored more than 80 academic papers and 12 books. He’s with me to discuss his new book, The AI Delusion, artificial intelligence, that is, The AI Delusion. Gary, welcome to the show.
Gary Smith: Thanks for having me, Steve.
Steve Pomeranz: So I said the book is The AI Delusion, but yet there’s no subtitle to the book. What would the subtitle be?
Gary Smith: Maybe something like—no, computers are not smarter than us.
Steve Pomeranz: Okay, The AI Delusion, no, computers are not smarter than us.
So we’re told computers are smarter than human beings and that they do this thing called data mining and that data mining can identify previously unknown truths or make discoveries that will revolutionize our lives. And our lives may well be changed by these super-intelligent machines who will decide to protect themselves by enslaving or eliminating humans.
Gary Smith: [LAUGH]
Steve Pomeranz: You can see I’ve seen a couple of movies. So, let’s talk about that. I want to start with kind of your opening idea about why there are limitations to these computers, are they really thinking and should that be the thing we’re most afraid of?
Gary Smith: Yeah, so the delusion is that computers can tell us the square root of any number; they can tell us the capital of any country; they can tell us directions to the nearest gas station; they can answer almost any question we want to ask them.
And they can beat us at Jeopardy; they can beat us at chess, backgammon, go, and so it’s kind of natural to think that they’re really, really smart. But the intelligence they have is a very limited kind. It’s very narrow and because what they see is, as like you said before, they see numbers, they see sound waves, they see pixels, they see letters, but they have absolutely no idea what it is that they’re seeing.
And so, for example, if I ask you this, what does it refer to in this sentence? You can’t cut down that tree with that axe because it is too small. We know immediately, it’s the axe, right? And computers can’t answer that question. In challenges, they get it right half the time because they don’t know-
Steve Pomeranz: Well, what’s the other answer?
Gary Smith: I’m sorry, go ahead.
Steve Pomeranz: That the tree is too small, what do they get-
Gary Smith: So you could do it two ways. You could say…you can’t cut down that tree with that axe because it is too small.
Steve Pomeranz: Yeah.
Gary Smith: Or you could say because it is too large.
Steve Pomeranz: Yeah.
Gary Smith: And one way it’s the axe is too small and the other way is the tree is too large. And so you ask the computer what does it refer to in that sentence, and they get it right about half the time because they don’t know what a tree is or what an axe is or what cut down means, what small means.
Steve Pomeranz: Well, I mean, it seems when a computer, quote, unquote, sees a tree, it’s seeing an object that is a tree, but you’re saying that really it isn’t. What is it seeing?
Gary Smith: What it’s seeing is pixels, and so what it does, when we see a tree, we look at what’s called the skeletal essence.
We see the trunk; we see branches; we see leaves, and we put them together in our mind and call that a tree. And all we have to do is see a half dozen trees in our lifetime, and we know what a tree is. And what a computer does is it takes the tree and it breaks it down into pixels.
And then it maps those pixels according to mathematical rules, and after being trained on billions of pictures of trees and wagons and cars and horses and dogs and jaguars and people, when it sees a new picture, it looks at the pixels and tries to match the new pixel representation with something in its database.
And sometimes it gets it exactly right and sometimes it gets it terribly wrong.
Steve Pomeranz: Yeah.
Gary Smith: One of the examples of my book is a little picture of a wagon. It’s very simple, it’s got a rectangle, it’s got two circles, which are the wheels, it’s got a handle.
And we immediately, we know that’s a wagon. And we know what a wagon does; we know if you get in there, we can be pushed or pulled. We know if we get it on the top of a hill it’s kind of dangerous. We know all those things. And I showed this to a state-of-the-art deep neural network algorithm and it said it was a business.
Steve Pomeranz: [LAUGH] Why…
Gary Smith: And I showed it to another one, Wolfram’s deep neural network, and it said it was a badminton racquet because [LAUGH] somehow, because it doesn’t really know in any meaningful sense what a badminton racquet is or a wagon or a business. It just tries to match up pixels.
Steve Pomeranz: Right, right, right. Well, I guess-
Gary Smith: That’s what I mean to say, it doesn’t really understand the world.
Steve Pomeranz: It doesn’t understand, well, when we look at a tree, we may have climbed a tree, we may have seen a fallen branch, we may have seen squirrels putting nuts in it, whatever it may be, we have context and we have a life that experiences with our own sort of intelligence. With a computer, as you say, it’s just looking at pixels.
Let’s look at an example in the book which discussed Ada, which was the computer that the Hillary Clinton campaign used to figure out where to spend money and what, I guess, issues to stress and so on. Give us an idea of what happened there.
Gary Smith: Well, the origin of it was when Hillary first ran, she was the huge favorite to win and then this guy appeared, Barack Obama, who was largely unknown, had an unhelpful name, [LAUGH] and he won.
And he had in the background going on, this huge computer database where they tried to identify every single voter in the country and figure out what kind of issues would appeal to them, and then they had targeted appeals to bring out voters, and get donations, and he won.
And so when Hillary ran again, she said, well, I’m going to do the same thing. I’m not going to make that same mistake again. And she actually hired a lot of the Obama people to work for her on this top-secret computer program. And nobody knew about it outside a handful of people in her campaign because she didn’t want to come off as being mechanical and scripted or anything like that.
And so it was like this hidden secret weapon, but the problem with the secret weapon is there are a lot of things that you can’t put into a computer, like enthusiasm. And so when Hillary Clinton had a rally, there’d be a couple of hundred people who sat there quietly and listened, [LAUGH] and when Bernie Sanders had a rally, there’d be 10,000 people who showed up and yelled and screamed, and when Donald Trump had a rally, 10,000 people would show up and yell and scream.
And the people on the ground, the people who actually knew something about politics, they were working for Hillary, they would say, we got to do something. We’re going to lose this darn thing. And Hillary would say, no, no, the computer says I’m going to win because we always win Wisconsin, we always win Michigan, nothing to worry about.
Steve Pomeranz: Well, Bill Clinton was the ultimate campaigner, and there is a part in your book where you say he was absolutely distraught that nobody was listening to him.
Gary Smith: Yeah. And because he knew, listening to Trump and to Sanders, that what people care about is the same issue that Clinton used against Bush, that Bill Clinton used against Bush.
Steve Pomeranz: It’s the economy, stupid.
Gary Smith: It’s the economy, stupid.
Steve Pomeranz: Yeah, right.
Gary Smith: And she should be talking about jobs instead of just saying, well, I may not be perfect, but I’m not as bad as these guys. [LAUGH]
Steve Pomeranz: Yeah.
Gary Smith: [LAUGH]
Steve Pomeranz: Right, right.
Gary Smith: [LAUGH] And he got so mad after talking to her one time, he threw his phone out the window of his penthouse because [LAUGH] nobody would listen to him.
Steve Pomeranz: Wow. So, it brings the question in my mind is what is thinking? What is this differential where a computer can have activities that look like it’s thinking and making decisions, but it’s not really thinking. So, in a humanistic way, what is thinking?
Gary Smith: Yes, so one of the examples I used, is this question here:
Is it okay to walk downstairs backwards if I close my eyes? And you and I, we have common sense and wisdom. [LAUGH] We know what walking downstairs means. Walking downstairs backwards is kind of dangerous.
Steve Pomeranz: Right.
Gary Smith: And closing your eyes doesn’t help. And so, and computers, they can’t answer that question.
They have absolutely no way of understanding that question. So two of the words I use a lot are common sense and wisdom. From having lived life, we know certain things are nice or bad or dangerous or sweet or helpful or unhelpful.
Steve Pomeranz: Yeah.
Gary Smith: And we know those things.
Another thing is critical thinking. And so when we hear a claim, we think about the person making the claim, we think about the evidence for the claim, we think about ways the claim might be tested, we actually think about [LAUGH] what is being said.
Steve Pomeranz: Yeah, yeah.
Gary Smith: As opposed to just matching words or pixels.
Steve Pomeranz: We’re going to pick this up in a minute when we come back. My guest is Gary Smith and the book is The AI Delusion. We will be back in a moment.
Steve Pomeranz: I’m back with Gary Smith, professor at Pomona College, and his book is The AI Delusion. We’ve been talking about how what passes for intelligence in a machine is not in fact intelligence. And we’re trying to kind of capture the difference between the two. Gary, welcome back.
Gary Smith: Thanks.
Steve Pomeranz: We ended the last segment with what is thinking, and you wrote in the book we think in analogies. And I know I do a lot and I may be walking somewhere or in the bath or something and these things pop into my head. And I don’t know where they really come from, but you write it’s the fuel and the fire, the ceaseless flood of imagery and analogy.
So take us a little further on that.
Gary Smith: Well, it’s [LAUGH] it’s kind of odd because the idea for this book was I was in the shower, [LAUGH]
Steve Pomeranz: Okay.
Gary Smith: And I was thinking about data mining, I was thinking about problems with it, and it just popped in my mind that I had to write a book about it and like the chapter just popped in my mind and there it was.
Steve Pomeranz: Yeah.
Gary Smith: The fire and fuel of thinking I actually got that from Douglas Hofstadter who’s one of the pioneers in artificial intelligence. And he started out, he wrote a book when he was in his 30s that got him a National Book Award and Pulitzer Prize and set him up for life.
And he has spent his lifetime trying to figure out how to make computers think the way humans think. And it’s extremely difficult, and he hasn’t been able to make much progress. The field sort of went off in a different direction, which is let’s do something useful we can make money on.
Steve Pomeranz: Yeah.
Gary Smith: Like spell checkers and search engines and stuff like that.
Steve Pomeranz: Right.
Gary Smith: And his latest book is the argument that analogy is the fire and fuel of thinking. And it’s like that wagon we talked about before. When we see a wagon, we don’t see pixels, we see a rectangle.
And we see those two circles that are wheels and they could be pies, they could be bowling balls, they could be frisbees. But we know from all the wagons we’ve seen, we know they’re probably wheels. We know there’s probably wheels on the other side too, and we know there’s probably a hole in the wagon, a cavity and there might be kittens or toys in there.
We know all those things because that thing we see is like other things we’ve seen. It has wheels, it has a rectangle, it has a handle. And so that’s probably what it is and that’s probably what it can do, what it can be used for. And nobody knows exactly how our mind does that and to write a computer program that can think like that, that can really think by analogy is difficult.
Steve Pomeranz: Okay, but there’s a lot of new computer programs out there that make it seem like machines are intelligent.
Gary Smith: Yeah.
Steve Pomeranz: So we talked about IBM’s Watson.
Gary Smith: Yeah.
Steve Pomeranz: And winning on Jeopardy and the like.
Gary Smith: Yeah.
Steve Pomeranz: And I don’t really want to go into the structure of why what it was doing was not really intelligence.
Gary Smith: Right.
Steve Pomeranz: But I do like the part in your book where you talk about the fact that Watson can read 800 million pages per second.
Gary Smith: [LAUGH] Wow, [LAUGH]
Steve Pomeranz: You know where I’m going with this, which is ridiculous.
Gary Smith: Yep.
Steve Pomeranz: And it had actually identified key themes in Bob Dylan’s work, themes like time passes, so it said, and love fades, which proves that, and I’m quoting here, “unlike traditionally programmed computers, cognitive systems such as Watson understand, reason and learn,” end quote. Now, you said, leave it to a word counter. You said you don’t remember Dylan ever using the words civil rights or Vietnam. But people, humans, listening to his songs knew that he was writing about the 60s.
It wasn’t time passes and love fades. And I’m going to read just six lines from “The Times They Are A-Changin’ ” and have a quick discussion about that. So here it is: “Come gather around people wherever you roam. And admit that the waters around you have grown. And accept that it’s soon you’ll be drenched to the bone.”
What is that talking about? We all know what that’s talking about.
Gary Smith: If we’re not a computer, [LAUGH]
Steve Pomeranz: But a computer does not know because a computer can’t get context.
Gary Smith: Yep, and so there’s another quotation there. I don’t have the book in front of me, but it’s from Roger Schank, who’s another pioneer AI guy, and he also tried to write computer programs that would mimic the human mind.
And the essence of what he says is, when I saw that claim about Watson, it made me want to laugh, except it made me so angry. [LAUGH] I’ll say it, Watson is a fraud. Watson does not do critical thinking; Watson does not know what words mean. And it’s actually revealing to think that Bob Dylan’s work is about time passes and love fades because-
Steve Pomeranz: That’s ridiculous.
Gary Smith: That’s not at all what Dylan was writing about.
Steve Pomeranz: Right, so we are all getting more and more fearful about this idea that AI is going to be taking over the world as an intelligence and we somehow will become subjugated to it. And there’s a very, very smart guy by the name of Ray Kurzweil, who says that there’s a term he created called singularity, which is the date when machine intelligence zooms past human intelligence. And he, I think it’s 2025, I forget the actual date but it’s not that far off. But based upon what you’re saying here, that’s just not going to happen.
Gary Smith: And so everybody I’ve talked to in the field who actually is serious about this stuff thinks that’s like a joke.
For example, that statement I did before, you can’t cut down that tree with that axe because it is too small. Well, there is a guy, Oren Etzioni, who’s a professor of computer science at the University of Washington and head of the Allen Institute for Artificial Intelligence, where they’re trying to figure out how to give computers common sense and wisdom.
And so he quipped, how could computers take over the world when they don’t even know what “it” refers to in a sentence? And it’s true, computers don’t know what the world is; they don’t know what survival is; they don’t know what humans are. And they couldn’t make plans to survive even if they knew what it was.
Steve Pomeranz: But, Gary, the rate of learning, I guess you could say, is doubling every couple of years. I mean, at some point in time, when do you cross that line to intelligence? Can there just be so much speed and neural networks that somehow the light switch is turned on?
Gary Smith: Well, the roadblock is not the processing speed or memory. The roadblock is what computers are doing, and as long as they’re just doing pattern matching, things are useful, like looking up words in a dictionary and seeing if they’re spelled properly. But as long as that’s all they’re doing, they’re never going to be able to think.
And so what the current real pioneers in the field are doing is trying to go back to the initial vision of Hofstadter and Shank and write computer algorithms that actually mimic human thinking, and it’s exceedingly difficult. But the roadblock is not processing speed or memory, like I say, the roadblock is figuring out how those neurons in our brains work and try to write computer code that somehow replicates that.
And part of the problem, of course, is that we don’t really understand how our brains work, [LAUGH] and so that’s part of the stumbling block.
Steve Pomeranz: My guest is Gary Smith, the book is The AI Delusion. Gary is Professor of Economics at Pomona College. You mentioned pattern matching before.
I’m in the investment business, and I know a lot about pattern matching because I live a life full of charts and graphs, right? And patterns and correlations are so easy to see when you’re looking at them visually. I guess, in a way, I’m kind of pixelating that stuff too, but [LAUGH] let’s get into that.
The result now is that you have a lot of algorithms, black boxes as you would call them, where nobody really knows what’s going on inside, as a matter of fact. They are self-learning, these algorithms, and so the creators don’t even really know what they’re thinking and how they’re thinking. And now a lot of those algorithms or black boxes are trading massive amounts of stock and changing our investment world. Tell us about that.
Gary Smith: Well, the danger with data mining is that in any set of data, even random coin flips or dice rolls or spins, you can go back and look at the data from the past and you will always find patterns. It’s absolutely inevitable that you will find some patterns in that data.
Steve Pomeranz: Yeah.
Gary Smith: The question is, does the pattern make any sense or is it just coincidental? And sometimes we come up with patterns just for the point of showing how ridiculous it is, like the number of lawyers in Nevada and the number of people who die by tripping over their own two feet.
So we laugh at that because we know it’s stupid.
Steve Pomeranz: Yeah, it’s ridiculous.
Gary Smith: Or there was the Super Bowl indicator, which was created to be stupid, but people ended up taking it seriously. And the problem with these black box trading algorithms is, they’re looking for patterns, and they’re probably going to find patterns.
And then there is nobody to step in and say wait, that’s like Nevada lawyers and killing yourself by tripping over your own two feet, that’s preposterous.
Steve Pomeranz: Yeah.
Gary Smith: Because once it’s inside the black box, nobody knows what’s going on in there.
Steve Pomeranz: Yeah, so these black boxes have been responsible for some ridiculous trading days, called Flash Crashes, which you describe I think pretty well in the book.
And as much as they’re looking for patterns, when they see a pattern, they’ll trade large amounts very, very quickly. We’re talking about kind of nanoseconds type of speeds, but they don’t really know why or what. They’re just executing quickly and trying to execute ahead of everybody else.
And at some point, there’s nothing there to stop them. It’s kind of like a Dr. Strangelove, or you remember the War Games movie with Matthew Broderick?
Gary Smith: Yeah.
Steve Pomeranz: Where the big computer had to be stymied by the simple game of tic-tac-toe? But there was nothing stopping that computer from continuing to do what it was set to do, and it’s the same with these black box algorithms.
Gary Smith: Yep, that’s true and the Flash Crash actually, the big Flash Crash that happened actually got aborted when the New York Stock Exchange, I think they actually took a five-second timeout or something.
Steve Pomeranz: Five seconds.
Gary Smith: [LAUGH]
Steve Pomeranz: And that was enough.
Gary Smith: …somehow and nobody knows why they were trading, nobody knows why they decided to buy and sell.
Good, sound stocks were trading for a penny a share or a hundred thousand dollars a share. Because computers have no sense of what stocks are or what money is or what a fair price is. They just, when this happens, they need to buy; when this happens, they need to sell.
Steve Pomeranz: Well, when you’re dealing with nanoseconds, five seconds is a lifetime?
Gary Smith: It is.
Steve Pomeranz: Right?
Gary Smith: It is.
Steve Pomeranz: It’s like a fruit fly or something, I mean, it’s a whole lifetime in 24 hours or something crazy like that. Well, Gary, unfortunately, we are out of time, I think you’ve made your point and it’s a great point.
This is a great book, by the way; I totally, totally enjoyed reading it, and I recommend it to all of you. And remember to visit our website as well, stevepomeranz.com, to join the conversation, listen to, or read all our segments. Sign up for our weekly update, and the important topics that we’ve covered will go straight into your inbox every single week. That’s stevepomeranz.com.
Gary, thank you so much for sharing with us today.
Gary Smith: Thanks for inviting me. I really enjoyed talking with you.