The Thinking Game
#ai #technology #science
Trying to build AGI is the most exciting journey, in my opinion, that humans have ever embarked on. If you're really going to take that seriously, there isn't a lot of time. Life's very short.
My whole life goal is to solve artificial general intelligence. And on the way, use AI as the ultimate tool to solve all the world's most complex scientific problems.
I think that's bigger than the Internet. I think that's bigger than mobile. I think it's more like the advent of electricity or fire.
The human brain is the only existent proof we have, perhaps in the entire universe, that general intelligence is possible at all.
We're going to try to do artificial general intelligence. It may not even be possible. We're not quite sure how we're going to do it, but we have some ideas. Huge amounts of money, huge amounts of risk, lots and lots of compute. And if we pull this off, it'll be the biggest thing ever. That is a very hard thing for a typical investor to put their money on. It's almost like buying a lottery ticket.
We needed investors who aren't necessarily going to invest because they think it's the best investment decision. They're probably going to invest because they just think it's really cool.
In Silicon Valley, everybody's founding a company every year, and then if it doesn't work, you chuck it and you start something new. That is not conducive to a long-term research challenge.
The first people that came and joined DeepMind really believed in the dream. But this was, I think, one of the first times they found a place full of other dreamers.
I've always been thinking about thinking.
I was at this international chess tournament in the mountains. We were in this huge church hall with hundreds of international chess players. And I thought, are we wasting our minds? Is this the best use of all the brain power collectively in that building? If you could somehow plug in those 300 brains into a system, you might be able to solve cancer with that level of brain power.
We didn't build it to play any particular game. We could just give it a bunch of games and it would figure it out for itself. And there was something quite magical in that.
It was in many respects the first example of any kind of thing you could call a general intelligence.
The game of Go has been studied for thousands of years. And AlphaGo discovered something completely new.
It was at that moment we were telling the world that something new had arrived on Earth.
For China, AlphaGo was the wakeup call, the Sputnik moment.
It's always easier to land on the moon if someone's already landed there. It is going to matter who builds AI, and how it gets built.
AlphaZero could start in the morning playing completely randomly and then by tea be at superhuman level. And by dinner it would be the strongest chess entity there's ever been.
It's inspired me to get back into chess again, because it's cool to see that there's even more depth than we thought in chess.
You can't look at gunpowder and only make a firecracker. All technologies inherently point into certain directions.
I think that Oppenheimer and some of the other leaders of that project got caught up in the excitement of building the technology and seeing if it was possible. They did not think carefully enough about the morals of what they were doing early enough.
My view is that the approach to building technology which is embodied by move fast and break things, is exactly what we should not be doing, because you can't afford to break things and then fix them afterwards.
How many billions would you trade for another five years of life, you know, to do what you set out to do?
Proteins are the machines of life. They build everything, they control everything, they're why biology works.
I've run a laboratory for nearly 50 years, and half my time, I'm just an amateur psychiatrist to keep my colleagues cheerful when nothing works. If you are at the forefront of science, I can tell you, you will fail a great deal.
You throw all the obvious ideas to it and the problem laughs at you.
We were the best in the world at a problem the world's not good at. We knew we sucked.
It doesn't help if you have the tallest ladder when you're going to the moon.
The lesson I learned is that ambition is a good thing, but you need to get the timing right. There's no point being 50 years ahead of your time. You will never survive 50 years of that kind of endeavor before it yields something. You'll literally die trying.
You can't force the creative phase. You have to give it space for those flowers to bloom.
The moment AlphaFold is live to the world, we will no longer be the most important people in AlphaFold's story.
It's like drawing back the curtain and seeing the whole world of protein structures.
These are gifts to humanity.
We're now starting to wonder whether we're gonna build systems that we're not convinced are fully intelligent, and we're trying to convince the world that they're not.
People often ask me, "What happens if you're wrong, and AGI is quite far away?" And I'm like, I never worry about that. I actually worry about the reverse. I actually worry that it's coming faster than we can really prepare for.
The advent of AGI will divide human history into two parts. The part up to that point and the part after that point.
If you received an email saying this superior alien civilization is going to arrive on Earth, there would be emergency meetings of all the governments. We would go into overdrive trying to figure out how to prepare.
Very clearly the next generation is going to live in a future world where things will be radically different because of AI. And if you want to steward that responsibly, every moment is vital.
It's just a good thinking game.