Quote of the Day from Geoffrey Hinton:
Most people at CMU thought it was perfectly reasonable for the U.S. to invade Nicaragua. They somehow thought they owned it.
Irony is going to be hard to get. You have to be master of the literal first. But then, Americans don't get irony either. Computers are going to reach the level of Americans before Brits.
We now think of internal representation as great big vectors, and we do not think of logic as the paradigm for how to get things to work. We just think you can have these great big neural nets that learn, and so, instead of programming, you are just going to get them to learn everything.
In a sensibly organised society, if you improve productivity, there is room for everybody to benefit.
I had a stormy graduate career, where every week we would have a shouting match. I kept doing deals where I would say, 'Okay, let me do neural nets for another six months, and I will prove to you they work.' At the end of the six months, I would say, 'Yeah, but I am almost there. Give me another six months.'
In the long run, curiosity-driven research just works better... Real breakthroughs come from people focusing on what they're excited about.
I think the way we're doing computer vision is just wrong.
Take any old classification problem where you have a lot of data, and it's going to be solved by deep learning. There's going to be thousands of applications of deep learning.
In science, you can say things that seem crazy, but in the long run, they can turn out to be right. We can get really good evidence, and in the end, the community will come around.
My main interest is in trying to find radically different kinds of neural nets.
The role of radiologists will evolve from doing perceptual things that could probably be done by a highly trained pigeon to doing far more cognitive things.
Machines can do things cheaper and better. We're very used to that in banking, for example. ATM machines are better than tellers if you want a simple transaction. They're faster, they're less trouble, they're more reliable, so they put tellers out of work.
I think it's very clear now that we will have self-driving cars.
The brain has about ten thousand parameters for every second of experience. We do not really have much experience about how systems like that work or how to make them be so good at finding structure in data.
Any new technology, if it's used by evil people, bad things can happen. But that's more a question of the politics of the technology.
I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.
Early AI was mainly based on logic. You're trying to make computers that reason like people. The second route is from biology: You're trying to make computers that can perceive and act and adapt like animals.
I am betting on Google's team to be the epicenter of future breakthroughs.
A deep-learning system doesn't have any explanatory power.
We want to take AI and CIFAR to wonderful new places, where no person, no student, no program has gone before.
Everybody right now, they look at the current technology, and they think, 'OK, that's what artificial neural nets are.' And they don't realize how arbitrary it is. We just made it up! And there's no reason why we shouldn't make up something else.
Humans are still much better than computers at recognizing speech.
I got fed up with academia and decided I would rather be a carpenter.
I get very excited when we discover a way of making neural networks better - and when that's closely related to how the brain works.