

About this Episode

Episode 84 of Voices in AI features host Byron Reese and David Cox discussing classifications of AI and how AI research has been evolving and growing.

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm and I’m Byron Reese. I’m so excited about today’s show. Today we have David Cox. He is the Director of the MIT IBM Watson AI Lab, which is part of IBM Research. Before that he spent 11 years teaching at Harvard, interestingly in the Life Sciences. He holds an AB degree from Harvard in Biology and Psychology, and he holds a PhD in Neuroscience from MIT. Welcome to the show David!

David Cox: Thanks. It’s a great pleasure to be here.

I always like to start with my Rorschach question which is, “What is intelligence and why is Artificial Intelligence artificial?” And you’re a neuroscientist and a psychologist and a biologist, so how do you think of intelligence?

That’s a great question. I think we don’t necessarily need to have just one definition. I think people get hung up on the words, but at the end of the day, what makes us intelligent, what makes other organisms on this planet intelligent is the ability to absorb information about the environment, to build models of what’s going to happen next, to predict and then to make actions that help achieve whatever goal you’re trying to achieve. And when you look at it that way that’s a pretty broad definition.

Some people are purists and they want to say this is AI, but this other thing is just statistics or regression or if-then-else loops. At the end of the day, what we’re about is we’re trying to make machines that can make decisions the way we do and sometimes our decisions are very complicated. Sometimes our decisions are less complicated, but it really is about how do we model the world, how do we take actions that really drive us forward?

It’s funny, the ‘AI’ word too. I’m a recovering academic, as you said. I was at Harvard for many years, and I think as a field we were really uncomfortable with the term ‘AI,’ so we desperately wanted to call it anything else. In 2017 and before, we wanted to call it ‘machine learning,’ or we wanted to call it ‘deep learning’ [to] be more specific. But in 2018, for whatever reason, we all just gave up and embraced this term ‘AI.’ In some ways I think it’s healthy. But when I joined IBM, I was actually really pleasantly surprised by some framing that the company had done.

IBM does this thing called the Global Technology Outlook or GTO which happens every year and the company tries to collectively figure out—research plays a very big part of this—we try to figure out ‘What does the future look like?’ And they came up with this framing that I really like for AI. They did something extremely simple. They just put some adjectives in front of AI and I think it clarifies the debate a lot.

So basically, what we have today, like deep learning and machine learning, are tremendously powerful technologies that are going to disrupt a lot of things. We call those Narrow AI, and I think that narrow framing really calls attention to the ways in which, even if it’s powerful, it’s fundamentally limited. And then on the other end of the spectrum we have General AI. This is a term that’s been around for a long time, this idea of systems that can decide what they want to do for themselves, that are broadly autonomous, and that’s fine. Those are really interesting discussions to have, but we’re not there as a field yet.

In the middle, and I think this is really where the interesting work is, there’s this notion of Broad AI, and I think that’s really where the stakes are today. How do we have systems that are able to go beyond what we have that’s narrow, without necessarily getting hung up on all these notions of what ‘General Intelligence’ might be? So things like having systems that are interpretable, having systems that can work with different kinds of data and can integrate knowledge from other sources, that’s sort of the domain of Broad AI. Broad Intelligence is really what the lab I lead is all about.

There’s a lot in there and I agree with you. I’m not really that interested in that low end and what the lowest bar in AI is. What makes the question interesting to me is really the mechanism by which we are intelligent, whatever that is, and whether that intelligence requires a mechanistic, reductionist view of the world. In other words, is that something that you believe we’re going to be able to duplicate, at least in terms of its function? Are we going to be able to build machines that are as versatile as a human in intelligence, that are creative and would have emotions and all of the rest, or is that an open question?

I have no doubt that we’re going to eventually, as a human race, be able to figure out how to build systems that are just as intelligent as we are. I think in some of these things, we tend to think about how we’re different from other kinds of intelligences on Earth. We do things like… there was a period of time when we wanted to distinguish ourselves from the animals, and we thought that reason, the ability to reason and do things like mathematics and abstract logic, was what was uniquely human about us.

And then computers came along, and all of a sudden computers could actually do some of those things better than we can, even arithmetic and solving complex logic problems or math problems. Then we moved towards thinking that maybe it’s emotion, that maybe emotion rather than reason is what makes us uniquely human. It was a kind of narcissism, I think, in our own view, which is understandable and justifiable. How are we special in this world?

But I think in many ways we’re going to end up having systems that do have something like emotion. Even if you look at reinforcement learning, those systems have a notion of reward. I don’t think it’s such a far reach to think that maybe we’ll even, in a sci-fi world, have machines that have senses of pleasure and hopes and ambitions and things like that.

At the end of the day, our brains are computers. I think that’s sometimes a controversial statement, but it’s one that I think is well-grounded. The brain is a very sophisticated computer. It happens to be made out of biological materials. But at the end of the day, it’s a tremendously efficient, tremendously powerful, tremendously parallel nanoscale biological computer. These are like biological nanotechnology. And to the extent that it is a computer, and to the extent that we can agree on that, computer science gives us equivalencies. We can build a computer with different hardware. We don’t have to emulate the hardware. We don’t have to slavishly copy the brain, but it is sort of a given that we will eventually be able to do everything the brain does in a computer. Now of course, all of that is farther off, I think. Those are not the stakes—those aren’t the battlefronts that we’re working on today. But I think the sky’s the limit in terms of where AI can go.

You mentioned Narrow and General AI, and this classification you’re putting in between them is Broad, and I have an opinion and I’m curious what you think. At least with regard to Narrow and General, they are not on a continuum. They’re actually unrelated technologies. Would you agree with that or not?

Would you say that a Narrow AI gets a little better, then a little better, a little better, a little better, a little better, then, ta-da! One day it can compose a Hamilton? Or do you think that they may be completely unrelated? That this model of ‘Hey, let’s take a lot of data about the past and study it very carefully to learn to do one thing’ is very different from whatever General Intelligence is going to be?

There’s this idea that if you want to go to the moon, one way to go to the moon—to get closer to the moon—is to climb the mountain.

Right. Exactly.

And you’ll get closer, but you’re not on the right path. And maybe you’d be better off building a little rocket that at first doesn’t go as high as the tree or as high as the mountain, but it’ll get you where you need to go. I do think there is a strong flavor of that with today’s AI.

And today’s AI, if we’re being plain about things, is deep learning. This model… what’s really been successful in deep learning is supervised learning. We train a model to do every part of seeing based on classifying objects: you classify many, many images, you have lots of training data, and you build a statistical model. And that’s everything the model has ever seen. It has to learn from those images and from that task.
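To make the supervised recipe Cox describes concrete, here is a minimal sketch of that setup: lots of labeled images in, a statistical classifier out. It assumes a PyTorch environment, and the dataset path, model choice, and hyperparameters are illustrative placeholders rather than details from the episode.

```python
# Minimal sketch of the supervised-learning recipe described above:
# many labeled images in, a statistical model out.
# The dataset path, model, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Every image comes with a human-provided label; the model only ever
# learns from these (image, label) pairs.
train_data = datasets.ImageFolder("path/to/labeled_images", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=len(train_data.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)  # penalize wrong class predictions
        loss.backward()                        # fit the statistical model to the labels
        optimizer.step()
```

Everything such a model knows about the visual world has to be squeezed out of those labeled examples and that single task, which is exactly the limitation discussed next.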

And we’re starting to see that actually the solutions you get—again, they are tremendously useful, but they do have a little bit of that quality of climbing a tree or climbing a mountain. There’s a bunch of recent work suggesting that these models are basically looking at texture, so a lot of the solution they learn for vision is just looking at rough texture.

There are also some wonderful examples where you take a captioning system—a system that can take an image and produce a caption. It can produce wonderful captions in cases where the images look like the ones it was trained on, but show it anything just a little bit weird, like an airplane that’s about to crash or a family fleeing their home on a flooding beach, and it’ll produce things like ‘an airplane is on the tarmac at an airport’ or ‘a family is standing on a beach.’ It’s like it kind of missed the point: it was able to do something because it learned correlations between the inputs it was given and the outputs we asked it for, but it didn’t have a deep understanding. And I think that’s the crux of what you’re getting at, and I agree, at least in part.

So with Broad, the way you’re thinking of it, it sounds to me just from the few words you said, it’s an incremental improvement over Narrow. It’s not a junior version of General AI. Would you agree with that? You’re basically taking techniques we have and just doing them bigger and more expansively and smarter and better, or is that not the case?

No. When we think about Broad AI, we really are thinking a little bit of ‘press the reset button, but don’t throw away things that work.’ Deep learning is a set of tools that is tremendously powerful, and we’d be kind of foolish to throw them away. But when we think about Broad AI, what we’re really getting at is how do we start to make contact with that deep structure in the world… like common sense.

We have all kinds of common sense. When I look at a scene, like the desk in front of me, I didn’t learn to do tasks that have to do with that desk by lots and lots of labeled examples, or even many, many trials in a reinforcement learning kind of setup. I know things about the world – simple things. And things we take for granted, like I know that my desk is probably made of wood, and I know that wood is a solid, and solids can’t pass through other solids. And I know that it’s probably flat, and if I put my hand out I would be able to orient it in a position that would be appropriate to hover above it…

There are all these affordances and all this super simple commonsense stuff that you don’t get when you just do brute-force statistical learning. When we think about Broad AI, what we’re really thinking about is: ‘How do we infuse that knowledge, that understanding and that common sense?’ And one area that we’re excited about and that we’re working on here at the MIT IBM Lab is this idea of neuro-symbolic hybrids.

So again, this is in the spirit of ‘don’t throw away neural networks.’ They’re wonderful at extracting certain kinds of statistical structure from the world: a convolutional neural network does a wonderful job of extracting information from an image, and LSTMs and recurrent neural networks do a wonderful job of extracting structure from natural language. But the idea is to build in symbolic systems as first-class citizens in a hybrid system that combines those all together.

Some of the work we’re doing now is building systems where we use neural networks to extract structure from these noisy, messy inputs of vision and other modalities, but then also actually having symbolic AI systems. Symbolic AI systems have been around basically contemporaneously with neural networks. They’ve been ‘in the wings’ all this time. Deep learning, in a way… everyone knows it is a rebrand of the neural networks from the 1980s that are suddenly powerful again. They’re powerful for the first time because we have enough data and we have enough compute.

I think in many ways a lot of the symbolic ideas, sort of logical operations, planning, things like that, are also very powerful techniques, but they haven’t really been able to shine yet, partly because they’ve been waiting for something—just the way that neural networks were waiting for compute and data to come along. I think in many ways some of these symbolic techniques have been waiting for neural networks to come along—because neural networks can kind of bridge that [gap] from the messiness of the signals coming in to this sort of symbolic regime where we can start to actually work. One of the things we’re really excited about is building these systems that can bridge across that gap.
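To illustrate the neuro-symbolic pattern Cox sketches, below is a toy example in which a small neural network maps messy pixels to discrete symbols, and an ordinary symbolic routine then reasons over them. The network, symbol vocabulary, and query are hypothetical stand-ins, not the lab’s actual system.

```python
# Toy sketch of a neuro-symbolic hybrid: a neural network turns a messy
# image into a discrete symbol, and a symbolic layer reasons over the result.
# The network, symbol set, and query below are illustrative stand-ins.
import torch
import torch.nn as nn

SYMBOLS = ["cube", "sphere", "cylinder"]

class Perception(nn.Module):
    """Neural front end: maps an image tensor to scores over a symbol vocabulary."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, len(SYMBOLS)),
        )

    def forward(self, image):
        return self.net(image)

def detect_symbol(model, image):
    # Bridge from noisy pixels to a discrete symbol.
    idx = model(image).argmax(dim=1).item()
    return SYMBOLS[idx]

def count_cubes(scene):
    # Symbolic back end: ordinary logic over the extracted symbols,
    # with no further learning required.
    return sum(1 for s in scene if s == "cube")

model = Perception()  # untrained here; in practice this front end would be learned
images = [torch.randn(1, 3, 64, 64) for _ in range(3)]  # stand-ins for real photos
scene = [detect_symbol(model, img) for img in images]
print(scene, "->", count_cubes(scene))
```

The point of the split is that the learned component only has to solve the perception problem, while counting, logic, and planning live in symbolic code that needs no training data at all.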

Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
