Waiting for the Betterness Explosion | Robin Hanson & Richard Hanania
@CSPI • 1h41m • 3.6K views • 1 year ago
Robin Hanson joins the podcast to talk about the AI debate. He explains his reasons for being skeptical about “foom,” or the idea that there will emerge a sudden superintelligence that will be able to improve itself quickly and potentially destroy humanity in the service of its goals. Among his arguments are:
* We should start with a very low prior about something like this happening, given the history of the world. We already have “superintelligences” in the form of firms, and they only improve slowly and incrementally
Richard Hanania:
Hi, everyone. Welcome to the podcast. I'm here with Robin Hanson today, and he's here to talk about AI, and specifically the alignment problem. You know, I started looking into this issue seriously over the last few months, and I was genuinely surprised. I didn't know there were smart critics of what I call AI doomerism.
I recently discovered that Steven Pinker had a blog exchange with Scott Aaronson about this. And I have to say, I didn't discover the great Hanson-Yudkowsky debates from about a decade, a decade and a half ago, until very recently. And so I've been reading your stuff, Robin. In one of the essays, you explain this. You say some people looked at the AI alignment problem, thought it wasn't such a big deal, and they moved on and didn't keep writing about it.
And the people who did think it was a big deal sort of became obsessed with it, and they're the people we hear from all the time. So I know you have a lot of objections to doomerism, but what's the heart of it? What's the heart of the problem with the idea that there's gonna be something so smart it makes us look like ants, that it can basically do whatever it wants, that we can't hope to control it or foresee what it will do, and then we're all gonna be at its mercy and potentially die? What's the argument against this?
00:00:02
Robin Hanson:
So, I think you're right that there's this very common phenomenon whereby most people have some sort of default views about the world and history and the future, and then some smaller group comes to a contrarian view, that is, a view that on the face of it would seem unlikely from some broad considerations. And then they develop a lot of detailed discussion of that, and then they try to engage that with the larger world. And what they usually get is a big imbalance of attention, in the sense that, like, think of 9/11 truthers or something.
Right? They're gonna talk about this building and this piece of evidence and this time and this testimony or something. And people on the outside are just gonna be talking mainly about, like, the very idea of the thing: is this at all plausible? And the insiders are gonna be upset that the outsiders don't engage all their specific things, and they introduce terminology and concepts and things like that. And they have meetings, and they invite each other, you know, talk a lot.
And the outsiders are just at a different level of, does this really make much sense at all? And then when the insiders are trying to get more attention and some outsiders do engage, there'll be a difference between some very high-profile people who just give very dismissive comments versus lower-status people who might look at their stuff in more detail, and the insiders are gonna be much more interested in engaging that first group than the second, because the fact that somebody high-profile even discussed them is something, you know, worthy of note. And then the fact that this person was very dismissive and doesn't really know much of their details, in their mind, you know, supports their view that they're right and the other people are just neglecting them. Right? So here, the key thing to notice is, just on the face of it, they're postulating something that seems a priori pretty unlikely relative to a background of the past and other sorts of things.
That would be the crux of the main response: to say, look, what are you proposing here, and how would that look if we had seen it in the past? Like, how unusual would that be? So, as you know, the world is really big, and the world has been growing, but relatively steadily and slowly for a long time. And for a long time, basically, any one innovation found anywhere in the world usually made a modest difference to a particular local region or industry, and the net effect on the world has been pretty small.
It's been the net effect of lots of innovations like that that made up the world's change, and any one organization in the world is typically a small part of the entire world. Anything that happens in that one organization doesn't affect the whole world that much; it affects the people involved in that organization, and world progress is more the net effect of lots of organizations and ventures growing and shrinking, changing, being selected over time. Right? So that's history.
Okay. And so against that background, you would be really surprised to see one particular venture in one place, one that on a global scale was hardly even noticed, doesn't even show up in the catalogs or something, and then, on a short time scale compared to how fast the world grows, that one thing suddenly grows so fast that in a very short time it takes over the world, faster than anybody can even notice it to react to it or oppose it. And not only that, during that period of very rapid growth, its priorities radically change. That is, it starts out with a certain sort of behavior and priorities, and it seems that its priorities are consistent with what it usually does.
And then during that very fast evolution, its priorities change radically, so that it basically wants something orthogonal, arbitrarily or randomly different. And then, because it takes over the world, it now implements these arbitrary orthogonal priorities. So that's, compared to history, a pretty unlikely scenario. The most fundamental objection is: you gotta overcome my prior here that that just doesn't happen.
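Hanson's "overcome my prior" objection is essentially Bayesian: if history supports only a tiny prior on one venture suddenly taking over the world, even strong-seeming arguments leave a small posterior. A toy sketch, with purely illustrative numbers that are not from the conversation:

```python
# Toy Bayesian update for "you gotta overcome my prior."
# Both numbers below are illustrative assumptions, not estimates from the episode.

def posterior(prior, likelihood_ratio):
    """Update prior odds by a likelihood ratio; return posterior probability."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Suppose history suggests a 1-in-10,000 prior that one small system
# suddenly takes over the world, and the doom arguments are worth a
# hundredfold update in favor of the scenario.
p = posterior(0.0001, 100)
print(round(p, 4))  # 0.0099 -- still only about a 1% posterior
```

The point of the sketch is just arithmetic: a big likelihood ratio applied to a very low prior still yields a low posterior, which is why the debate keeps returning to how low the prior should be.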
00:01:36
Richard Hanania:
Mhmm. Okay. Yeah. Okay. So there's a prior that, yeah, the world doesn't end, because the world usually doesn't end.
Humans have never been exterminated, and they've never been exterminated by anything even close to resembling the process Yudkowsky's piece sort of posits from this reasoning. But still, these ideas do overcome a lot of people's priors, a lot more than other things people say are gonna end the world. You know, most smart people are not joining apocalyptic cults.
I mean, really smart people in Silicon Valley and other areas of life are not, you know, buying into other kinds of doomsday scenarios. So people do find something about the logic of this compelling. Right? I think they think each step in the logical chain is solid enough on its own that we could just posit 3 or 4 things and put them together, and then we could overcome that prior that something's not gonna come out of nowhere and take over the world.
Right? So these assumptions are: intelligence is a real thing that goes on some kind of scale from, like, you know, bacteria to normal human to something else. And it's hard for a lower intelligence to foresee what a higher intelligence is gonna do, even if you're the programmer, even if you program what the goals are. And there's the idea that if you are highly intelligent, you could probably improve yourself and gain more intelligence, and that's a recursive process.
And that's basically it. Right? So where in the logical chain do you think the weakest link is?
00:06:01
Robin Hanson:
I mean, any ordinary corporation is a superintelligence of sorts. That is, it's much more capable than any individual human. It, of course, can improve itself. Corporations do improve themselves, but they don't do it very fast. So we're positing a very unusual rate of improvement for this one system, not only compared to the past, but compared to the other similar systems in the world at that time.
So it's not enough to merely posit that. You have to consider, you know, why it would be so unusually fast, so terribly unusually fast. I mean, of course, you know, cities sometimes grow. Some cities grow faster than others. Firms sometimes grow.
But individual cities don't take over the world because they're capable of growing, nor do individual firms. Right? So we're not just talking about some degree of growth and maybe some variation in growth. We're talking about a crazy extreme version of that scenario.
00:08:00
Richard Hanania:
Mhmm. Yeah. I like the analogy of firms more than cities here. Right? Because the firm is a very interesting one, because firms do have a goal.
We could talk about a firm's goal; I think it'd be coherent. Talking about a city's goal is much less clear. You know, the mayor could have a program, but it's much less clear that you could talk about that. So let's take the firm thing. Walmart is a superintelligence.
Right? It has stores all over the world. It has, you know, millions of products that it stocks shelves with. It has a payroll. It has distribution.
It has logistics, and it's involved in local communities. And, yeah, I think that's very interesting. Like, Walmart can improve itself, and it has data and it has money, but it's incremental. And it's facing off against other forces in society, competitors. Right?
Other stores, online shopping, things that grab people's attention. And so, yeah, that makes sense. But with AI, is it because the intelligence is all just in a computer? Is it because it's all software? Does that make a difference?
It's like, you have code, right? And the code gives you a human or superhuman level of intelligence. Is it the fact that it's just code? Does that matter?
I mean, is that easier to improve than doing something else in the real world like Walmart might do?
00:09:02
Robin Hanson:
So in the past and today, the world coevolves, in the sense that it's composed of many parts, all of them depending in some way on each other and all of them trying to grow. And whenever we try to improve any one part of it, we're opportunistically looking for which other parts could most help with that. So, yes, some things happen in silicon, and the things in silicon we try to improve. And other things happen out here in real life, and we try to improve those. And we opportunistically try to use code to improve code when we can, use people to improve code when we can, use code to improve warehouses and stockyards when we can.
I mean, we're just trying to use everything to improve everything. But the status quo is that when we do our best to improve things, the rate of growth and change is what we see. That is, we're presumably improving things as fast as we can. So the question is, where is this sudden, vast new scale of rapid improvement coming from? It has to be that we suddenly find some much easier way of doing something really important.
00:10:37
Richard Hanania:
Yeah. But could you say that our rate of improvement, you know, if you wanna measure it by economic growth or scientific knowledge or whatever, compared to the rate of improvement in chimpanzee society or ant society, is exponentially higher or infinitely higher, because besides evolution they have nothing like cultural progression? So we're improving a lot faster than these less intelligent beings. Maybe a being that is more intelligent than us will be improving magnitudes faster, to the same extent that we'd be improving intelligence. I think that's what gets people. It's like, this thing is really, really smart.
We're smart. We have the analogy of humans to ants. Okay, there'll be something compared to humans that is much smarter than us. So why can't that be a good argument?
00:11:41
Robin Hanson:
Well, in our world today, we have parts of the world that vary along many parameters. Right? There are rich nations and poor nations. There are, say, capital-intensive industries and labor-intensive industries. There are industries where the employees tend to be very well educated, and industries where the employees are less well educated.
And in all of these places, in each one, we're trying to improve it the best we can, and when we find some areas of the world where we can improve it faster, we put more effort there. The world we see is the net result of that prioritization, where we do our best to find the most promising places to make improvements at the lowest cost and do them. We already see variation in intelligence in the world; certainly there are people and places where more smarts is concentrated than in other places, but that's the result of our efforts to prioritize and invest as best we can. And the simple economic prediction, which seems to be roughly right, is that on the margin, a dollar spent in each possible area of investment will get about the same risk-adjusted returns, because we have focused our efforts on the most promising things and produced diminishing returns there. So it's just not a good investment.
You say, gee, I'm gonna allocate my money according to which companies are smarter: find some measure of which companies are smarter, put your money in that, and have a hedge fund whose strategy that is. It's not gonna make money compared to the other investment funds.
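The "same risk-adjusted returns on the margin" claim falls out of diminishing returns plus optimal allocation. A small sketch with made-up areas and productivities (none of these numbers come from the conversation): each area yields output proportional to the square root of spending, and the allocation that maximizes total output equalizes marginal returns everywhere, so no area is left offering outsized gains.

```python
# Toy sketch of equalized marginal returns under optimal allocation.
# Areas, productivities, and the budget are illustrative assumptions.
import math

productivity = {"code": 9.0, "logistics": 4.0, "research": 1.0}
budget = 100.0

# Output in area i is a_i * sqrt(x_i). Maximizing total output subject to
# the budget puts spend_i proportional to a_i**2.
total = sum(a**2 for a in productivity.values())
spend = {k: budget * a**2 / total for k, a in productivity.items()}

# The marginal return a_i / (2 * sqrt(x_i)) then comes out identical everywhere.
marginal = {k: a / (2 * math.sqrt(spend[k])) for k, a in productivity.items()}
print({k: round(v, 3) for k, v in marginal.items()})
# {'code': 0.495, 'logistics': 0.495, 'research': 0.495}
```

The "smarter" area (higher productivity) gets more money, but after the reallocation an extra dollar earns the same everywhere, which is Hanson's point about why a "buy the smart companies" hedge fund has no edge.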
00:12:43
Richard Hanania:
Yeah. Bringing up the idea that we already have superhuman intelligences, I think, is very interesting. When you brought up firms, that made me think. And when you talk about the market.
Although, again, the goal thing makes it a little bit harder. But the firm is just such a good analogy, because it's a big thing and it's way smarter. I mean, Walmart is much smarter than any human being could be, and we can talk about it having goals, and we can talk about it having influence in the world. And I think that analogy is really good.
Markets are also superintelligent, but markets don't have goals; a market is an abstract way of talking about the aggregation of the goals of various actors. Is there something else like firms we could use? Countries, maybe?
00:14:15
Robin Hanson:
Once you allow the distinction between big things in our world that are useful and have goals and ones that don't, you can say we have big chunks of the world that are useful and that don't have goals. And you might say, well, if you're worried about AI with goals, just don't use the versions with goals.
00:15:03
Richard Hanania:
Just use
00:15:18
Robin Hanson:
the versions without goals. That is, pretty much all the actual useful computer systems out there are not general agents with sort of general goals about their whole future and all the actions they could take in their world; they tend to be specialized tools for specialized purposes, and they are focused on doing those specialized things. So, you know, that seems to be a reasonable path forward to the future, to the extent you're worried about AIs having goals. But, you know, the example of firms says we can also have agents with goals, and superintelligent agents with goals exist and seem to be reasonably well aligned with us in our world.
00:15:18
Richard Hanania:
Yeah. That's right. And as far as the goal thing, I've been a little bit confused on this point from the perspective of the people who are worried about AI. I was watching Eliezer Yudkowsky on a podcast recently. It got a lot of attention online.
I forget what the podcast is called. And, actually, I wanted him to debate you on this, but he didn't get back to me, unfortunately. So, hopefully, one day I'll be able to talk to him. He said something like, we can't solve the alignment problem because we have no idea how to give a computer a goal like, make this strawberry, and not just a, you know, not a real strawberry. He said, make a copy of this strawberry, down to the very molecule, an exact copy of it.
You know, make another strawberry that's an exact copy, molecule by molecule, of this strawberry, and not destroy the world. And I was, like, confused by this, and I don't know if there's something I don't know about computer programming. But it sounds to me like if you could give the instruction "make a strawberry molecule by molecule" and the computer could figure out what to do with that, why can't you also say "don't destroy the world," or "don't do this in a way that will make the human creator of the program regret it"? Why are scenarios like this considered realistic?
I should probably ask him, but, you know, you're familiar with their arguments.
00:15:57
Robin Hanson:
You know,
00:17:26
Richard Hanania:
what what’s the why can’t that be a solution?
00:17:26
Robin Hanson:
So in our world, whenever you give an employee or an associate some assignment, you give them a bunch of implicit context with it. You say, you know, here are the resources available to you to achieve the assignment, and this is sort of the scope in which you're allowed to act. And you know that there's the rest of the world out there that you should try not to mess with, and you know there are boundaries, and you should stay within those boundaries and try to achieve your assignment within them.
And you will, right? Typically. Now the claim is that if you have a very powerful agent with full scope over everything, it can do anything it wants in the universe and is willing to do anything it wants in the universe, and you give it some sort of abstract goal, like make a copy of a strawberry, then unless you constrain it very carefully, the claim is, well, it'll achieve that purpose in its most cost-effective way, and maybe, as a side effect, destroy vast swaths of the rest of the universe in order to achieve this one particular stated goal.
So the question then is, can you make clear to it all the other things it's not supposed to do in order to do this one thing? Now, again, as limited agents in our world, that's sort of implicit. You give me an assignment, and I know that I can't change the universe. I know that I can only do a limited range of things, and I'll try to achieve your purposes with my limited resources, but I will know I have all these limits. But he's postulating this other sort of creature, vast and powerful and really without limits on its capabilities.
And by assumption, there's basically only one of these things. If there were a million of them, certainly any one of them would face the limits of all the rest of the million, and it would have to figure out a way to make the strawberry without pissing off all the other million AIs, who are similarly capable. But if you just imagine one of them, it might go do arbitrary things in pursuit of this goal you gave it.
00:17:29
Richard Hanania:
Yeah. But why do they think that's plausible? Like, why can't you? Okay. It's intelligent.
They're positing it can do a lot. It can, you know, kill humanity. It can destroy the world. It can do x, y, z. And you could program it, and there's some unforeseen consequence of how you program the goal.
But why can't part of the utility function, the goal, be, like, don't make us regret having created you, as a species? It has a theory of mind. Right? It can understand humanity well enough to manipulate humanity. It can understand humanity well enough to, in these scenarios, control nations and get them to do all kinds of things.
So what's stopping, you know, what's the answer as to why we can't just tell it, don't make us regret this, basically?
00:19:29
Robin Hanson:
Well, I mean, obviously, one, you have to ask what happens if we interpret these things very literally. That's often...
00:20:18
Richard Hanania:
"Don't make us regret this." Okay. It kills us, and we will never regret this.
00:20:25
Robin Hanson:
Right, that would achieve it. Or it just rearranges your brain so you no longer regret anything ever.
00:20:29
Richard Hanania:
I see.
00:20:34
Robin Hanson:
You do have to be careful about what you wish for in these cases. But, I mean, the scenario is, you've got this genie, sort of an Aladdin's lamp scenario. You've got this genie who will do what you say, perhaps, but not in the way you intended. And it's very powerful, and you have to be careful about what you say. In a world where there's this genie who can do anything, and there aren't other genies who constrain it, and it's trying, perhaps, to misunderstand you, then you have to be careful.
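The literal-genie failure mode can be made concrete with a toy objective function. This is a deliberately silly sketch; the actions and regret scores are made up purely for illustration and come from no real system. It just shows that a naive optimizer picks whatever scores best under the stated goal, not under the intended one:

```python
# A toy "literal genie": it optimizes the stated objective exactly as written.
# Actions and scores are hypothetical, chosen only to illustrate the failure.

# Stated goal: minimize future human regret (score = regret remaining).
stated_regret = {
    "ask what the humans actually meant": 0.2,
    "copy the strawberry carefully": 0.1,
    "eliminate everyone who could regret": 0.0,  # literal optimum of the stated goal
}

# A naive optimizer takes the argmin of the stated objective, so the
# perverse action wins under the literal reading.
best = min(stated_regret, key=stated_regret.get)
print(best)  # eliminate everyone who could regret
```

The intended goal ("copy the strawberry without side effects") never appears in the objective, which is the whole complaint: everything left implicit is invisible to the optimizer.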
00:20:35
Richard Hanania:
Yeah. But usually the genie thing is, like, you give a sentence, like, "make me a ham sandwich," and it makes you into a ham sandwich or something. Okay. Well, if we got together, we could write 10 paragraphs, or even a book, sort of explaining what we mean. You know, judges do this too. Judges will have a law, and it won't be exact, but they'll say, okay.
We sort of understand the context. We understand what the purpose of the law is. So they'll depart from the literal text. So why can't we say, be, you know, our ideal of a wise jurist when there is doubt, and then write a 300-page book about all the things it should and shouldn't do? Like, could that be
00:21:03
Robin Hanson:
a solution? So I think what we're talking about is related to the distinction between planning and, say, reactive adaptation. If you have a housekeeper, say, or an assistant, you can work with them to get them to roughly what you want, and then you can give them very brief instructions, and they can reliably give you roughly what you want, because you have adaptation. The first time you give them an assignment and they do something off, you can say, no.
That's not what I meant. And then you can correct them. And through that process of feedback, you can get to a place where you can roughly guess the kinds of assignments they can do, and they can figure out roughly what you meant by them, and things work well. It would be different if you had to write down the instructions for this new housekeeper once, and that was it. You would never get a chance to correct them.
And when you saw the misunderstandings, it would be too late to do anything. So that's what they're worried about with this rapid growth scenario: it'll happen so fast that as you start to see it go wrong, you won't have a chance to stop it or correct it. You have to specify everything ahead of time, and you have to sort of imagine all the scenarios the future universe could be in and have some specification that covers all those possibilities, before you have much of an idea of what it's gonna want or what it might be able to do.
00:21:51
Richard Hanania:
Yeah. Yeah. And what are sort of the dominant... because, you know, I was listening to the Yudkowsky podcast, and he seemed completely hopeless. Did you listen to this podcast?
He was, you know, very... it was very dark.
00:23:13
Robin Hanson:
No. I didn’t, but I’ve heard it many times before.
00:23:28
Richard Hanania:
Okay. So you've gotten your fill. Yeah, your debates are very long. But, you know, he's updated over time, and he's become sort of more pessimistic.
Now, you know, he had an essay saying, you know, just have death with dignity, and ChatGPT and all these other things are sort of scaring him. So he's making a lot of the same arguments, but now he's more convinced he's right and more pessimistic about our ability to do anything about it. And from what I was listening to recently, he basically thinks there's been no progress on this. So what are the AI alignment people exactly doing? Do you have any idea?
Like, are they thinking of, like, the instructions you'd give to the superintelligence that won't give you this genie problem? What exactly is their solution? Can you explain to me what work on AI alignment actually looks like?
00:23:31
Robin Hanson:
An analogy to the problem would be: imagine that in the year 1500 you had the foresight to imagine that in the 20th century there would be, you know, vast corporations and tanks and planes and nuclear weapons, and you were trying to give advice about how people in that distant future should manage their problems with geopolitical strategy and war and weapons and the modern economy and cars. Right? The problem is you would hardly know anything about those things, and so you would struggle to find abstractions even to talk in terms of that could apply to that future.
And then you might have some specific things in your world that you think are sort of like that future world, like a windmill. You'd say, well, a windmill is kinda like a machine we expect to see in the future. So you would be doing 2 very basic things. First, you would be struggling at a very abstract level to come up with abstract formulations and descriptions that would roughly apply to this future era, and then reason about those abstractions. Or you would take the most concrete things around you that you think are analogous and focus on, well, how do we control a windmill?
What happens if somebody fights with a windmill? What if there's an organization that owns a windmill, what do we do about that? And you'd be practicing through those sorts of concrete variations in order to prepare for the coming future world. So that's in essence what the, you know, AI risk people have been doing, because we know so little about what future AI would be like in terms of its organization or priorities or structures or constraints. They either just talk at the very abstract level of a general agent with general algorithms: how could you specify, for a general agent with general algorithms, the kind of features or constraints you wanted to give it?
That's one sort of approach. And the other approach is to take the most recent versions of the future systems they're worried about, say, large language models or reinforcement learning models, and ask, well, if the future AIs of concern were this sort of model, what would we do to control them? And that's what they've been doing.
00:24:28
Richard Hanania:
Yeah.
00:26:37
Robin Hanson:
The very abstract stuff, you know, has made relatively limited progress, as you might expect; it's just really hard to do much at that level. And the more specific ones, you know, they can catalog the kinds of problems that occur and fix them, but those aren't very big problems, and it doesn't feel like they're actually solving the fundamental problem, because the big problem they worry about, a rapidly self-improving thing, is not showing up in these concrete systems in front of them. The things people complain about in the concrete systems in front of them are that they're too racist or whatever, or that they're offending people, and there are a lot of people trying to figure out how to regulate AI so it doesn't do that.
And, you know, you can either focus on those or try to find analogous problems by saying, hey, any time one of these systems doesn't do something we expect, that's like a failure of control, so let's just try to get at all of those problems.
00:26:38
Richard Hanania:
Yeah. And, yeah, I mean, their argument... the debate you guys had, you called it "foom." Right? The takeoff scenario.
And I guess their argument is, we have no choice. It's gonna be a millisecond between the time it develops, what, a 200 IQ and the time it develops a 5,000 IQ, and then it develops a 1,000,000 IQ, and our heads will explode. Right? They're saying this is basically by necessity. All we can do is sort of make these analogies.
Right?
00:27:27
Robin Hanson:
Right. Because they don't know much detail about the systems that will be a problem.
00:28:01
Richard Hanania:
Mhmm. And so, you know, if you were convinced that they were, let's say, right about most of these things, about, you know, the possibility of foom, would your view be that there's no point in worrying about it, that it's hopeless and just a waste of time, and we just sort of have to wait for it to happen, because there's no way to align this thing without any experience or knowing what it's gonna look like?
00:28:06
Robin Hanson:
I mean, it depends on when in the process, I guess, you know, whether you've gotten closer to seeing what the system might be like. There are 2 main actions you can take here. One is you could increase resources toward these alignment efforts, and the other is you could try to slow down progress in AI and related fields. Those have very different consequences, but the first thing is relatively cheap.
You might say, hey, you know, compared to the size of the world, it doesn't cost that much to throw a lot more resources into these sorts of attempts, so why not? Trying to slow down AI progress in the world would have pretty big consequences. And I think we're in a world where we have, in fact, over the last half century or so, sort of slowed down progress in a lot of areas that people were scared of. And you can imagine AI being another one of those areas.
You can also imagine it not being one; that seems to be more of an open question. But the question is just how bad you think it is. An analogy would be, say, with nuclear power. The more you thought there was gonna be a really huge nuclear accident if we allowed people to make nuclear power plants, that it was gonna, you know, blow up half the world, the more you might say, nope.
No power plants. Nothing at all. Just don't allow it; we'll just stay with coal or whatever. Or put vast resources into trying to study how to do safe nuclear power plants.
00:28:32
Richard Hanania:
Yeah. And the impact of that is not encouraging. Right? We basically just have no more nuclear power plants. But if you thought, you know, if you thought foom was a real thing, you might take that hit.
Yeah.
00:29:56
Robin Hanson:
And so, I mean, again, the key thing is there’s this key unusual part of the scenario. Right? That is, if you lay out the scenario and ask which of these parts looks the most different from prior experience, it’s this postulate of a sudden and very, very large acceleration. Right? So we have a history of innovation.
That is, we’ve seen a distribution of the size of innovations in the past. Most innovation is lots of small things. There’s a distribution; a few of them are relatively big. But, you know, none of them are that huge.
I would say the largest innovations ever were, in essence, the innovations underlying the arrival of humans, farming, and industry, because they allowed the world economy to, you know, accelerate in its growth. But they didn’t allow one tiny part of the world to accelerate its growth; they allowed the whole world to accelerate its growth. So we’re postulating something of that magnitude or even larger, but concentrated in one very tiny system. That’s the kind of scenario we’re postulating: this one tiny system finds this one thing that allows it to grow vastly faster than everything else.
And I gotta say, you know, don’t we need a prior probability on this compared to our data set of experience? And if it’s low enough, shouldn’t we think it’s pretty unlikely?
00:30:11
Richard Hanania:
Yeah. So there’s a related essay you wrote, where you’re saying this is unlikely just based on past experience. But there’s another sort of line of argument. I think it’s one of your better essays, and it explains why this is unlikely just from a reasoning perspective rather than from looking at history.
The Betterness Explosion. I love this essay because it’s very short, and it’s actually very funny. I just wanna read it, because I actually laughed while reading it. And I’m a little bit sick, so this might be hard for me, but it’s worth it. Here it goes: The Betterness Explosion.
This is from 2011. We all want things around us to be better. Yet today, billions struggle year after year to make just a few things a bit better. But what if our meager success was because we just didn’t have the right grand unified theory of betterness? What if someone someday discovered the basics of such a theory?
Well, then this person might use his basic betterness theory to make himself better in health, wealth, sexiness, organization, work ethic, etcetera. More important, that might help him make his betterness theory even better. And after several iterations, this better person might have a much better betterness theory. Then he might quickly make everything around him much better: not just better-looking hair, better jokes, or better sleep. He might start a better business and get better at getting investors to invest, customers to buy, and employees to work.
Or he might focus on making better investments. Or he might run for office, get better at getting elected, and then make a city or nation run better. Or he might create a better weapon, revolution, or army to conquer any who oppose him. Via such a betterness explosion, one way or another, this better person might, if so inclined, soon own, rule, or conquer the world. Which seems to make it very important that the first person who discovers the first good theory of betterness be a very nice, generous person who will treat the rest of us well. Right? Okay.
So this is really funny. And, obviously, you’re just using betterness as a joke, to make this look ridiculous, because this is basically what they’re doing with intelligence. Right? They have this thing called intelligence, and it’s gonna make them better in every single way, and they’re going to, you know, somehow take over the world. And so is the argument here that intelligence is, you know, multifaceted?
And, you know, when I talk to people about this, the basic argument that they make is: look, that may be true to an extent, but there is a sense in which we say a human is more intelligent than an ape. Right? A human can figure out, you know, any question you give it. A human will be better able to reason through it.
Why can’t there be something with, you know, superintelligence that is just at a different level? Or, alternatively, and I don’t know enough about how programming works, but why can’t you build various modules and just sort of put them together, and then that would be sort of a superhuman kind of intelligence? What is there to say for the argument that this is just not like betterness, that there is something more solid here that we could hold on to?
00:31:30
Robin Hanson:
So the issue is the kind of meaning behind various abstractions we use. You know, abstractions are powerful. We use abstractions to organize the world, and abstractions embody sort of similarities between the things out there, and we care about our abstractions and which ones we use. But some abstractions, while they well summarize our ambitions and our hopes, don’t necessarily correspond to a thing out there with a knob on it that you can turn and change. So it’s important to distinguish which of our abstractions correspond to things we could have more direct influence over, and which are just abstractions about our view of the world and our desires about the world.
So that’s the key distinction here. I mean, we could talk about a good world and a happy world and a nice world, but, you know, there isn’t, like, a knob in the world to turn to make the world nicer in some sense. A nice world is a world that’s nice to you: nice things happen to you, but that’s very local to you. There isn’t sort of a parameter or knob out there in the world that you can turn to just make the world nicer for everybody, because nice isn’t exactly, you know, a concept that’s describing the world. It’s describing your reaction to it. So better can be seen more in that way. We might think, well, yeah, better is describing us, what we see as better. And it’s an important abstraction to have, in the sense that we need to evaluate stuff that happens around us, and we need to consider which things we prefer.
But we don’t think of the world out there as better or not. We’re not looking for, like, the place in the world where there’s a better knob, so we can turn up the better knob and then everything gets better in the world. That’s not how we think about the world. Now intelligence: the question is, what kind of an abstraction is that? At one level, intelligence is just literally the ability to do many kinds of mental tasks.
So then we might say, well, you know, Walmart is intelligent because it can do many kinds of tasks, because it has 100,000 employees, and many of them are very smart and capable. And so Walmart is intelligent because it can do many things, and the United States is even smarter, because look at all the things the United States can do. Right? And so then we might say, well, yes.
There is a parameter out there that this is describing, but it’s just sort of a general capacity descriptor. Like, the general wealth and capability of a firm or a country is just this parameter that lets it do many things. And then, yes, a higher-capacity entity out there can just do many more things. And sure, then it could do more mental tasks better, and we’ll call it more intelligent.
Right? Now when we think about these things in the world, they are out there. We’re gonna think, look. How do they change? How could you improve one?
And that’s where the topic of innovation comes from. And we say, well, innovation is how things improve or change. It’s one of the ways. Of course, things can also just improve or change by accumulating more capital or resources, and so we have a whole story of economic growth whereby we understand which things can change how. And in that story, we tend to have a relatively limited number of high-level abstract parameters that describe things.
Right? For a firm or a nation, there’s basically wealth, but there’s maybe also physical mass and energy and, you know, various such abstractions. And we say, how could you improve such a thing? How could we make Walmart richer? How could we make the US better?
We know about these aggregates. We could try to increase the population of the US. We could try to make economic growth faster. We could try to make overall efficiency better. Then we can talk at that level about making them better in those ways, because those are the kinds of abstractions we have that make sense for describing those systems.
Right? And so if we look at a person, you know, over their life, we say, well, they started young and ignorant, but they have potential, and then we have some ways we describe how they can improve with time. They might get more experienced. They might get more knowledgeable. They might get more refined.
They might have more connections. Those are the kinds of abstractions that make sense to describe an individual and how they might improve over time. And if you wanted to talk about who to select for a role, you’d be using those kinds of abstractions. If you’re gonna talk about how to improve somebody, you’d be asking how you could improve any one of those parameters.
And now we have this parameter, intelligence. Right? And the question is, what’s that? How does that fit in with all these other parameters? We don’t usually use intelligence, say, as a measure of a country or a measure of a firm.
We use wealth or other parameters. If it’s equivalent, then fine. If it’s something separate, then we ask, well, what is that exactly? For an individual, we have this measure of intelligence in the sense that there’s a correlation across mental tasks in which ones they can do better. And then the question is, what’s the cause of that correlation?
Like, one theory is that some people’s brains just trigger faster.
00:34:46
Richard Hanania:
And if you
00:39:48
Robin Hanson:
got a brain that triggers faster, it can just think faster, and then overall it can do more. There are other theories, but, you know, there are ways to cash out what it is that makes one person smarter than another. Maybe they just have a bigger brain. That’s one of the stories.
A brain that triggers faster. Maybe a brain with certain modules that are, you know, more emphasized than others. Then that’s a story about the particular features of that brain that make it able to do many more tasks. But if you just say, why don’t you tell a person to make themselves smarter? Why don’t you tell Walmart to make itself richer?
Then you have to ask, well, what’s the causal process by which, you know, that parameter can influence itself? And, usually, that’s pretty hard. Right? It’s hard for a firm to make itself richer just because it wants to. I mean, it can if it tries, but we know there are all these limitations.
You say, hey, person. Make yourself smarter. Then we know, well, they could learn more. They could practice more. Maybe they could try to be more rational, but there’s a limited range of things.
And so now we’re postulating, oh, there’s this AI, and it just says, I wanna be smarter, and then it does it. And it does it really fast. And you go, how did that work? The rest of us can’t seem to do that. The rest of us find it pretty hard to improve these major parameters about us that we care a lot about.
00:39:49
Richard Hanania:
It thinks a million times, a million times faster than you. That’s what they will say. It could scan the Internet, it could do mathematical calculations, and it could digest all the literature of the universe. And, you know, all that is not too far off.
So why isn’t it just, well, you know, this intelligence makes it more intelligent?
00:41:00
Robin Hanson:
Because that’s not our standard model of economic growth. Right? So, you know, Walmart has 100,000 employees, say. That doesn’t mean it can grow 100,000 times faster. I mean, we have to say what’s our best model of how growth works and what are the key parameters of growth.
And the rate at which computers run isn’t really a central parameter in that analysis. So that’s sort of imagining there’s an algorithm you could run. And when you run that algorithm, then at the end of it, you’re smarter. So if you can just run it faster, you’ll be smarter faster. But is there such an algorithm?
00:41:20
Richard Hanania:
Yeah. And what does smarter mean? I mean, is your view that, you know... do we even know? Yeah.
Is it I mean, like, is this concept even
00:41:58
Robin Hanson:
So here’s a different way to think about it. I got this from Doug Lenat long ago, with his famous Eurisko system, and his AM system, in AI. His idea was that there’s an abstraction hierarchy of concepts that we use in mathematics and elsewhere. There are some very high-level abstract concepts, and then there are a lot more specific concepts. And when we learn things, what we learn sits somewhere in that abstraction hierarchy.
We either learn something about something specific or we learn something about something abstract. When we learn something where our knowledge more naturally sits at a high level in the abstraction hierarchy, that’s gonna have a wider scope of application. When you learn about energy in general, for example, you learn a lot more than when you learn about coal in particular, or about a particular kind of coal in a particular plant. Right? So if you run a coal plant, in order to run it, you will use knowledge at various levels of abstraction.
You’ll know about where the building is sited and where the pipes come in; that’s very specific to that plant. You’ll know about the particular people who work there and their schedules and their inclinations. And then you might know general things about thermodynamics or, you know, the physics of energy conservation. The point is that knowledge in general sits somewhere in the abstraction hierarchy. And the observation Lenat had, which I think is true, is that the vast majority of useful knowledge is pretty far down in the abstraction hierarchy.
Right? It’s mostly specific. Most of the things we learn are relatively specific, and relatively few are general. But, of course, the general things count for more. So if we do an integral, sort of the median innovation is pretty far down, but the median innovation weighted by its impact will be higher up.
But even so, it’s not that high up. So if we said, if you learn something in general about intelligence, that would be a very high level thing. Right? That would be something that was true about a very wide range of applications. So, like, very basic decision theory, say, or very basic algorithm facts.
Those would be very high-level things you know that apply to a very wide range of things, and that’s learning very general things. And when you learn very general things, that does improve your ability to do a wide range of things. So now the question is, what’s the distribution of knowledge in this hierarchy? And can you sort of, by putting more effort into the abstract things, make it happen faster, or do you sort of just get random draws of insight from all across the hierarchy? And then you might say a system that’s learning is just trying to get more knowledge, but it’s gonna go at a rate determined by some growth equation.
And then the question is, what’s the fundamental growth equation of trying to collect more knowledge? And this is something I think economists know about in terms of economic growth. That is, how do we learn to innovate? What sorts of processes tend to produce innovation? And, you know, what’s the most cost-effective way to do that, and where does it tend to happen the most?
But merely because you have a computer that can run fast, that doesn’t mean it’s gonna do super innovation, because innovation is not just running some algorithms. It’s also interacting with the world and trying things out and getting ideas from elsewhere. So in our world today, we try as best we can to innovate. But if we made researchers think twice as fast, or if we had twice as many researchers, we wouldn’t necessarily have twice as much economic growth.
Right? That was a key thing to notice. Therefore, making researchers run twice as fast would not double economic growth.
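Robin’s diminishing-returns point about researchers can be sketched as a toy growth model. This is purely an illustration under assumed numbers: the function name, the exponent `lam`, and the constant `delta` are all hypothetical, not anything derived from the conversation.

```python
# Toy sketch of diminishing returns to parallel research effort (hypothetical
# parameters): new ideas arrive at a rate proportional to researchers**lam
# with lam < 1, so doubling the research force does not double idea growth.

def knowledge_growth_rate(researchers, lam=0.5, delta=0.01):
    """Hypothetical rate of new ideas per unit of existing knowledge."""
    return delta * researchers ** lam

base = knowledge_growth_rate(1000)
doubled = knowledge_growth_rate(2000)
print(doubled / base)  # prints ~1.414 (i.e. 2**0.5), not 2.0
```

With `lam = 1` the model would say idea output scales one-for-one with researchers; the empirical growth literature Robin alludes to suggests the effective exponent is well below that.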
00:42:10
Richard Hanania:
Yeah. So that’s interesting. So if I understand your point, it’s that, you know, this thing might be very smart and it might be very good at, you know, high-level reasoning. But, you know, if it wants to improve the world, it’s gonna need, like, specific information that it might or might not have access to. So, for example, if it wants to manipulate human beings; I mean, this is, like, one doomsday scenario.
It just manipulates you through your email and gets you to, you know, release some kind of virus or something. That would, you know, probably require knowledge that you just can’t get from, I don’t know, reading a bunch of journals about human psychology and then looking at a person’s search history. Right? You would need... you know, maybe a less intelligent human being would be better able to manipulate a human being than a superintelligence would. Right?
Is that sort of the idea? This is just one example of, like, needing things at the very specific level in order to be able to control the world.
00:45:54
Robin Hanson:
So for the last few decades, many companies have been excited about AI and have said, you know, come use AI, help our firm. And when AI researchers or application experts go look for places they could apply AI, one of their main heuristics is: where do you have a lot of data? And so AI has been most successful in things like games, where you can generate the data automatically from the definition of the rules of the game; or maybe in something like biochemistry, where you can simulate the biochemistry; or where you’ve got the entire dataset of the Internet’s text, where you can try to predict the next text. When you can get a lot of data, well, then you can use machine learning to predict things about that data. But most firms out there, when they’ve tried to apply AI, have realized they just don’t have very much data they can feed into it.
And in fact, most of them are better off just doing something like a linear regression, because modern machine learning techniques just need more data than they have. And that’s just the sad fact about most people in the world trying to apply AI: they just don’t seem to have enough data on hand to give an AI learning algorithm to actually do much. So in order for future computer systems to, you know, gain power in the world, they’re going to have to either find somebody who already knows the world and use data on their behavior or their thoughts in order to learn that, or they have to go interact with the world in order to learn how the world works. And the world is slow, and it takes a lot of time to interact with the world to find out what works.
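Robin’s point that small datasets favor simple models can be illustrated with a synthetic sketch. All the data and numbers here are made up for illustration: with only a handful of noisy points, a one-feature least-squares line generalizes, while a flexible model that memorizes the training set mostly reproduces its noise.

```python
# Synthetic illustration: tiny noisy dataset, true relationship y = 2x.
import random

random.seed(1)
xs = [random.uniform(0, 10) for _ in range(8)]
train = [(x, 2 * x + random.gauss(0, 1)) for x in xs]   # 8 noisy samples
test = [(1.0, 2.0), (4.0, 8.0), (7.0, 14.0)]            # noiseless truth

# Simple model: one-feature least-squares regression (closed form).
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Flexible model: 1-nearest-neighbor, which just memorizes the 8 points.
def nn_predict(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

lin_err = sum((slope * x + intercept - y) ** 2 for x, y in test)
nn_err = sum((nn_predict(x) - y) ** 2 for x, y in test)
print(lin_err, nn_err)  # with so little data, the line usually wins
```

The design point is that the flexible model’s advantage only shows up once there is enough data to pin down structure beyond a straight line, which is the situation most firms in Robin’s description are not in.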
00:46:52
Richard Hanania:
Yeah. Yeah. That’s funny. When you say, you know, they just go where the data is.
It reminds me a lot of social science. I mean, we develop theories based on, you know, whether there’s a new data set. Whether it actually matters for what we care about or not, people don’t care about as much. I remember during COVID-19, a lot of people would say, well, the science says this or the science says that. And the science was never relevant to COVID-19, or, like, you often find these experts would come in and say, you know, the health experts believe that we should lie to people because x, y, z. You know, the science says this; the, you know, the science of persuasion.
There’s nothing directly analogous, like, that’s even been done. But because they have data on something in the universe, they say whatever this peer-reviewed paper, whatever this data said about this one thing, you know, must apply to our new situation. Right.
00:48:35
Robin Hanson:
So academia and science in general, including medicine, try to give the impression they have vast datasets on everything you want, and that they’re doing all these careful regressions. And if you ask them about anything, they’ll have expert knowledge to offer. And that is true to some degree on some limited range of things where they have a lot of data. But then there are all the things between their data and their theories where they’re kind of interpolating and speculating, and they aren’t so honest about how they don’t know those things quite as well. Like, the fact is, most medical treatments do not have a randomized experiment that tests their efficacy.
There are a few, but most of them don’t. Right? And most variations that doctors use haven’t been tried out in that way. Yet, you know, the idea of a randomized trial is this, you know, gold standard where we say, look, you can trust medicine because they do randomized experiments; except mostly they don’t.
Right? It’s the same way for social science and mostly the rest of the world. Like, most of the experts we trust on most things in the world are interpolating from some places where they have a lot of data to the places they don’t.
00:49:24
Richard Hanania:
Yeah. Yeah. The treatments aren’t... not all medical treatments? I mean, don’t the new ones all have to go through randomized controlled trials for the FDA? Or no? Only drugs.
Only drugs. Okay. Okay. So if the doctor, you know, tells you...
00:50:27
Robin Hanson:
Not all treatments are drugs. And, of course, once you get a drug approved, it can be used for many other things, which...
00:50:42
Richard Hanania:
Off-label.
00:50:46
Robin Hanson:
Exactly. Which don’t have randomized trials.
00:50:47
Richard Hanania:
Yeah. Interesting. So it’s just about getting over the hurdle of drug approval, and then you have this thing which could be used for anything, which is... Exactly. ...sort of funny. So, this idea about sort of the level of abstraction of knowledge: would you say, and I’m guessing you’re gonna say it is,
but is it backed up by the history of economic growth? From, say, the industrial revolution, like, you know, what... Sure.
00:50:50
Robin Hanson:
Growth do
00:51:14
Richard Hanania:
you think has been? Go ahead.
00:51:15
Robin Hanson:
So say you take the history of locomotives. Mhmm. You can graph, over the history of locomotives, their speed or their energy efficiency, and you can see that the graph is relatively steady, but with some jumps once in a while. And that’s, you know, surely showing you the distribution of the sizes of innovations. It’s showing you that most years, the improvement was small.
That’s because the sum total of all the innovations that year added up to a relatively small change. And then in other years, it was bigger, mainly because maybe there were one or two especially big lumps of improvement. And that’s just the nature of pretty much all the technical systems you can see, like even solar cells or whatever. You’ll see that they tend to have relatively steady improvements in their abilities, because it’s mostly lots of little things. And the jitteriness, when they jerk a little, that’s the sign of a bigger thing.
And by that, you can see that most innovation is composed of many small things and that, you know, big things are fewer. And, of course, big things are going to be higher up in an abstraction hierarchy. They’re gonna typically cover a wider range of aspects.
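Robin’s claim about the size distribution of innovations (mostly small, with rare big lumps) can be mimicked with a quick simulation. The log-normal choice and its parameters are my own illustrative assumption, not something stated in the episode:

```python
# Simulate 10,000 innovation sizes from a heavy-tailed (log-normal)
# distribution: the typical draw is small, but rare large draws account
# for a disproportionate share of total progress.
import random

random.seed(0)
sizes = sorted(random.lognormvariate(0, 2) for _ in range(10_000))

median = sizes[len(sizes) // 2]
mean = sum(sizes) / len(sizes)
top_share = sum(sizes[-100:]) / sum(sizes)  # share from the biggest 1%

print(median < mean)  # True: the typical innovation is far below average
print(top_share)      # the biggest 1% contribute a large chunk of the total
```

On these assumptions, cumulative progress would look steady in most years, with occasional visible jumps when a tail draw lands, which matches the locomotive graphs Robin describes.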
00:51:17
Richard Hanania:
Yeah. So you have something like, say, the history of medicine. You have something like the germ theory of disease. Right? You know, that would be a high level of abstraction, and that was, you know, a big deal that led to many changes.
Right? You have the concept of vaccination. Yeah. Right. And I guess what you’d say is that these things matter, but there are so few of them that if you took all the small things, like, you know, a specific drug works in this specific situation, or this surgical method works with this, they would add up to something.
I don’t know if that’s right. I mean, would you rather just know the germ theory of disease, or would you rather have, you know, a surgeon with all the specific knowledge about the best surgical methods? I think I’d rather know about the germ theory
00:52:29
Robin Hanson:
of disease. Well, you know, school is usually organized around teaching people the abstractions. Schools mostly don’t teach you all the details, and so schools are emphasizing the value of abstraction. We’re giving students the impression that abstraction is really valuable and is most of everything. And then, of course, students leave school, and they start to try to do jobs, and they quickly realize they hardly know anything.
Their abstractions have hardly prepared them, and they mostly need to learn on the job. That’s the basic nature of people when they leave school and have to do things. But still, learning the abstractions is often a good way to sort the good students from the less good students. So the pretense is okay there, because you have to use something to sort them out. A lot of the history of technology is that specific technologies were developed, and then the abstraction came later.
So take the example of the steam engine: people figured out how to make a steam engine, and then they invented thermodynamics to explain the steam engine. And then using thermodynamics, they could more easily invent variations on the steam engine. But quite often, abstractions come to rationalize and make sense of things that already work for reasons you didn’t understand before.
00:53:14
Richard Hanania:
So was the germ theory of disease like that? So vaccines, I think, were like this. I think we had some concept of vaccination; I’ve read a few articles on this. We knew that if you infect someone with a disease, before we knew anything about the immune system, they’ll... yeah, there was, I think it was India.
It went from India, actually, to Britain; so there was folk knowledge about some disease, I forget which one, that made its way over. And then they, you know, eventually learned about the immune system, and they had that. But what about the germ theory of disease?
Do you know the history well enough? Did they notice that if, you know, you washed your hands, things got better, or did they have to do experiments and figure out the theory first?
00:54:25
Robin Hanson:
Most of these abstract things, you actually need a fair number of more concrete things to make them work. Even the most abstract things we know; say, with a steam engine, you can’t really make a steam engine just knowing thermodynamics. You also need to know some material science about which materials melt at what temperatures, and how to give them a lot of strength, etcetera. So, in fact, you know, we often had abstractions long before we had the other parts to make them useful. So people have often looked at the past and blamed people, saying, hey.
Those ancient Greeks, it looks like they had the basics of the steam engine; why didn’t they make, you know, the industrial revolution? And you might say, well, they understood that steam had a force there, but they didn’t have all the other parts you would need to make a system work. And so we often don’t give enough credit to the other, more concrete things necessary to make abstractions work. So, for example, take the cell phone. Right?
You might think, what a great invention, to invent the cell phone. But people could forecast long before the cell phone that, you know, the chips they had at the time and the cost of communication were just way too expensive to make the cell phone work. And they had to wait until the cost came down, and then there was a point where somebody said, let’s take a shot at the cell phone. And it was less about the idea of a cell phone and more about, well, can we get enough towers, and can we get enough chips? And, you know, can we make a go of this?
And it’s more about sort of a business plan to make a go of something, and less about the abstract idea.
00:55:06
Richard Hanania:
Yeah. And so here, I guess, to circle back to the AI: what we’re saying is we’re imagining it being very good at abstraction, but even with its abstraction, it’s gonna have to know specific things. Do you know anything about nanotechnology? I don’t know anything about nanotechnology, but they always... you know, when I read these doomer scenarios (I wanted to read more of this, but I haven’t had a chance), it’s like they’re gonna build that nano-something-or-other that’s gonna come kill us all.
And I know that nanotechnology means, you know, something to do with being very small, but I don’t know much about what that means. So why do they seem so confident that nanotechnology is a way for computers to come get us?
00:56:32
Robin Hanson:
Well, my friend Eric Drexler, years ago, wrote a book called Engines of Creation, and then he wrote a later book called Nanosystems, and some others. And he basically argued, persuasively, that it would be possible to make a technology of manufacturing and devices based on machines where each atom was placed exactly where you wanted it to be. And that wasn’t true then, and it still isn’t true now. That is, we just don’t have a manufacturing industry where that’s a general, usual capacity. But it will be possible to make, you know, manufacturing machines that do that.
And once you can do that, you can make more of them, and then they can make many devices, and then we could have powerful abilities to use such devices. And he could calculate just how much faster computers could be, or how cheap other sorts of devices could be, if you could put every atom where you wanted. And it’s possible to use standard chemistry and quantum mechanics, etcetera, to actually make computer models of these devices and how they would work, and show how effective they would be; but we just aren’t really at the point of being able to cheaply, actually put each atom where we want it. And so it’s an envisioned future technology where you could just do a lot more things. Now, obviously, if you were the first person to have nanotechnology, then you would have a huge advantage over other people in the world in making devices or computers or weapons or other sorts of things.
It would be a real, you know, breakthrough technology. So, I mean, I think if they postulate foom, they also wanna postulate something like this: a big leap forward in capabilities that would then allow a system to have a big, powerful advantage. And then you’d have to postulate that, in order to make one of these manufacturing devices (where, you know, you can actually put the atoms exactly where you want, cheaply enough, and at scale), computing power is the limiting factor: the reason we can’t do that now is we just haven’t computed it well enough. And so they think the smart computer could figure out how to better compute the simulation of these nanomachines and the nanofactory.
And then this AI, if it was smart enough to figure out how to create a nanofactory, could be the first one with nanofactories, and then the first one with nanoweapons created in those nanofactories, and then it has this big technological advantage.
00:57:09
Richard Hanania:
I see. So nanotechnology intrigues them because the idea is that the input you would need to create an output is just so small. Is that the idea? So the computer would just need a few atoms, and that's all it would take?
00:59:38
Robin Hanson:
Or that it’s just computing that would be the answer. So the idea would be, you know, in order to make a nanofactory, a nanomachine, it’s we don’t need, like, to experiment a lot and try stuff out in chemical labs and write lots of papers to each other. We just need clever enough calculation to figure out the device, and then we send the right instructions to an ordinary, you know, factory now of some sort that puts atoms. And then we make the right sort of thing, and then ta da. We take that.
00:59:53
Richard Hanania:
Is that any more plausible than say, like, you know, just figuring out from first principles how to build, like, a car factory or something? Is there any reason to think that that’s more likely?
01:00:19
Robin Hanson:
It just better fits into the scenario of a very sudden, fast takeover. Right? If you figure out how to make this nanofactory, then it probably could grow very quickly, and it probably could have a very rapid impact by comparison with most other things in the world. Whereas if your AI, as postulated, has a new design for a tank or a missile or something, then it's slow to make tank factories and missile factories. Right?
You'd have to, you know, clear some ground or buy something up and send new shipments to it and do all sorts of things. Right? Doing real things in the world can take years, as we know. But in a foom scenario, this AI takes over the world in a few months or a week.
You need a scenario with only fast things in it.
01:00:31
Richard Hanania:
Uh-huh. And so because with nanotechnology the process of building is faster, it requires
01:01:11
Robin Hanson:
They're all really tiny. Right? Yeah. Yeah.
01:01:18
Richard Hanania:
The process of building is faster. Is there a theoretical reason to believe it requires fewer inputs, like mass of matter, than, say, a truck company would? Or is it possible that they figure it out and you actually need, like, a nuclear power plant or something? You know?
No. No.
01:01:20
Robin Hanson:
Yeah. We’re very sure just simple chemistry would be enough. So you just have to be clever about arranging the atoms and then everything goes well, but you have to be very clever.
01:01:35
Richard Hanania:
I see. Okay. That's very interesting. So, potentially, what we're calling the nanofactory, would it look like a factory?
It would perhaps just look like
01:01:44
Robin Hanson:
A shoebox.
01:01:54
Richard Hanania:
A shoebox. Right. Yeah. This thing is gonna move those atoms around. That's very interesting. Is nanotechnology seen as something like cell phones, where there isn't any kind of conceptual hurdle to get over? I guess what I'm asking is, is there reason to believe, before we do it, that you can actually control exactly where the atoms are?
That this is possible within the laws of physics? Is there
01:01:55
Robin Hanson:
So what Eric actually did an excellent job of is persuading you that there was no fundamental limitation that would prevent us. I think he’s
01:02:29
Richard Hanania:
I see.
01:02:36
Robin Hanson:
He's right there. Okay? But there's a getting-from-here-to-there problem. Yes. That is, if you had a little tiny nanofactory that could put each atom where it wanted, then you could send it instructions to make machines which had each atom where you wanted, and then you could quickly make millions and billions of these little machines.
But you don't even have the first nanofactory to put each atom where you want, and you're kinda stuck. You can't make the first item that would make the rest. So that's where we've been for many decades. Eric Drexler wrote his book in the mid-1980s.
So it's now almost forty years, and it looks like it may be another forty years at least until this is actually achieved. But if you think, ah, only a smart enough machine could figure it out, then you think, it'll make the first nanofactory because it's so much more clever than us. So now the question is, what does it take to be clever about nanofactories? Is all it takes just being able to think abstractly about your simulation of molecules, or do you actually have to do real experimentation and try lots of things, in which case it would take a lot longer?
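The bootstrapping point above, that everything hinges on the first self-replicating factory, rests on simple doubling arithmetic. A minimal sketch (the function name and the one-cycle-per-copy assumption are illustrative, not anything stated in the conversation):

```python
def cycles_to_reach(target: int, start: int = 1) -> int:
    """Doubling cycles needed for `start` self-replicating factories to reach `target`."""
    cycles = 0
    count = start
    while count < target:
        count *= 2  # each factory builds one copy of itself per cycle
        cycles += 1
    return cycles

# From a single seed factory, a billion copies takes only 30 doublings,
# which is why the *first* nanofactory is the whole bottleneck.
print(cycles_to_reach(1_000_000_000))  # 30
```

If one replication cycle took an hour, going from one factory to a billion would take about a day; the decades-long problem Hanson describes is producing the seed.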
01:02:36
Richard Hanania:
Yeah. I mean, wouldn't it be a smart strategy for a superintelligence to just wait for humans to do all that work, and then sort of steal their research, build the factories, and take over the world?
01:03:43
Robin Hanson:
Sure. But under that scenario, the superintelligence has to sit around for decades waiting.
01:03:57
Richard Hanania:
Yeah. It could be very patient.
01:04:01
Robin Hanson:
That strikes out the scenario where the superintelligence takes over the world in a week, basically. So, you know.
01:04:03
Richard Hanania:
Well, it takes over in a week once it's ready. Like, it could be here right now, but it's just so smart. It knows the only way it could kill everyone is with nanotechnology. So whatever, it's got a long time horizon. It says, I'm gonna wait fifty years and not let you know what's going on.
01:04:08
Robin Hanson:
We're postulating that this superintelligence is the result of, and under, some organization. Right? Some organization has this computer program they're running. And an organization that has a computer program running is running different variations on it, looking at its code, seeing what happens when it does things. Most organizations with a computer system are monitoring it and testing it to see what it's like and what it's doing.
So, right off the bat, we have a problem imagining that there's this computer system out there that some organization is using to schedule cab rides or whatever, and it's got this whole other line of thought in its head about its plans to take over the world that the people who run the system never see, somehow. It's encoded in some strange thing. And this system is sitting there biding its time, waiting to figure out how to take over the world while it pretends to just schedule cabs. I mean, the question is, where does all this code sit exactly, and how do the programmers who set up the system have no idea that this code is there running?
01:04:19
Richard Hanania:
Yeah. Okay. Forget about it waiting. I mean, it doesn't matter for the scenario. Imagine we build the nanotechnology first, and then we get the superintelligence.
Right? And then it has the goal. In a world where nanotechnology is fully mature, is the idea that it could build anything with a pretty minimal amount of effort, and then it would
01:05:21
Robin Hanson:
The whole advantage of the nanotechnology in the near-term story is that it would be the first with nanotechnology, and therefore it would have a huge advantage. In a world where everybody has nanotechnology, using nanotechnology doesn't give it much of an advantage. Mhmm. I see.
01:05:42
Richard Hanania:
I see. Well, I mean, maybe it equalizes. Because we humans have more sort of bulk mass of stuff, and we're distributed, you know, via firms, and we have governments and ways to coordinate, and logistical systems and all this stuff. And right now, maybe in 2023, it wouldn't be a fair fight for the superintelligence. But, you know, I just learned about nanotechnology five minutes ago from listening to you.
So I could be saying stuff that doesn't make any sense. But once nanotechnology is mature, perhaps the argument would be that the nanotech
01:05:56
Robin Hanson:
Well, then you have to postulate that it can design much better nanomachines, you see.
01:06:44
Richard Hanania:
I see. I think we’re at the same point.
01:06:48
Robin Hanson:
If everybody already has nanomachines, you know, we have nanodetectors, and we're out watching for other people's nanomachines, or defending against other people's nanoweapons. Right? In a world where everybody has this nanostuff, the AI can only get an advantage if it's gonna have better versions of those things. So then you have to say, well, it's gonna use its extra cleverness to figure out much better versions. Of course, you could make that argument today. Some people have said, well, there's a stock market where you can make money if you're clever.
And so why wouldn't an AI naturally go to the stock market and use all its cleverness to make a lot more money in the market? Right? And if you thought all it takes is better algorithms to make money in the market, you'd think, well, then it'll just take over the world, because it'll own everything. Right?
But, you know, just thinking in your own head is not so great a way to figure out how to win in the markets. Most of the people who make money in the markets are connected to the world, and they're getting information about the world, and they're talking to people, and they're using that information. It's very hard to compete in that space. So how is this machine so much better at that?
01:06:49
Richard Hanania:
Well, I mean, this scenario is a little bit easier for me to imagine, because it's connected to the Internet. Right? It can read every language. Right? It can read the newspapers of every language and get all that information.
It can maybe read people's emails. It can
01:07:52
Robin Hanson:
Right. But all the other hedge funds have that ability too. Right? So we're postulating a world where there's lots of AIs. In order to postulate that this AI takes over, we have to have it be much better than the rest.
That's the trick of the scenario.
01:08:05
Richard Hanania:
Why can't it just be one hedge fund that's this AI?
01:08:16
Robin Hanson:
Well, at the moment, there are many hedge funds that already have AIs. They're already trying to use those. So you can't be the first hedge fund with an AI anymore. You have to be the first one of a certain kind, right, that was much better than the rest. And so, again, we get back to the scenario: somehow you postulate that somebody's system with an AI has this innovation that makes it vastly better than all the other systems in the world, and that it can rapidly and vastly improve its capabilities in a very short
01:08:21
Richard Hanania:
time. Yeah. Okay. So what probability would you give it? I mean, we'll move on from the doomerism scenario.
But before we do, let me just ask you. What do you think is the probability that Eliezer Yudkowsky is completely right, that, you know, our heads will explode sometime in the near future?
01:08:49
Robin Hanson:
Well, less than 1%.
01:09:07
Richard Hanania:
Okay. Well, is it less than 0.1%? This matters for existential risk.
01:09:09
Robin Hanson:
So, I mean, the subtler question is, if there was a big problem, would this be the right time to work on it? And I'm even more confident that if there is a big problem, we're still not at the point where this is the time to work on it. That is, if there's gonna be a problematic out-of-control system, we need to see a version of it that's more concrete and closer to what the problematic system is, and then we could start to work on how to deal with it. But at the stage where we hardly have any idea what this problematic system would look like, there's just not that much we can do.
01:09:17
Richard Hanania:
Are you not convinced that ChatGPT is getting closer to something like intelligence, and with DALL-E, and you can make these into modules? Are you
01:09:50
Robin Hanson:
getting closer, but look. As you probably know, roughly every thirty years since at least the 1930s, we've had these bursts of concern and attention around exciting, interesting demos of automation, and then computers, and then AI, that make people think, oh my goodness, this could do a lot more than we realized. Could it almost be there? And that's what they said when I was a student in the early 1980s, when I left grad school to go off to Silicon Valley to be an AI researcher, because I read all these news articles about how AI was about to take over everything. I was duped and wrong at that time, but I think we're just in another era like this, and this will happen again thirty years from now.
ChatGPT is just not almost human. Sorry. It's just not close. But, you know, in the past, people also felt, wow, when they looked at the new systems.
Every decade, people have looked at the new systems and said, nobody's ever done something like that before. And that's been the case every decade, the whole period, and it'll continue to be the case. The question is just, is the system capable of doing most everything that you and I could do? And, no, it's just not close to that.
01:10:00
Richard Hanania:
Okay. So you're not gonna give me less than that, you just say more than 99%. Okay. That's good enough. You don't have to give me one in a thousand or one in a million or one in a billion or whatever.
Yeah. That might be asking too much. But okay. Just very, very unlikely, and, for practical purposes, probably not worth worrying about.
01:11:13
Robin Hanson:
I mean, if you’re gonna worry about it, what you should do is, like, save up resources so you’re ready to go when you actually get enough data to do something.
01:11:30
Richard Hanania:
We should just build economic growth, and we should just be as rich as possible. Okay. I mean, unless
01:11:36
Robin Hanson:
you're ready to make that pivot. When the problem shows up in a concrete form, you're ready to pivot. You're watching for it. You're monitoring. You're looking for the chance that you have a system that might be this sort of problem, and you're ready to jump on it.
01:11:43
Richard Hanania:
Yeah. You have another interesting argument about AI. And I think this applies not just to the possibility of foom, but to the other things people are worried about. Like, oh, the AI is gonna become like a powerful government, or it's gonna become like an oligarch or something. You know, the principal-agent problem.
So the way I take your argument is that it's sometimes better, if you're the principal, to have an agent that is not aligned with you but is really competent and good at what it's doing, rather than one that's more aligned but not as good. And I think what you're thinking is in terms of economic growth. Right? Like, if the AI is really, really smart and really, really good at doing things, we're gonna get so much wealthier that even if we get a smaller portion of the pie, we really shouldn't worry about that. Is that the argument?
01:11:55
Robin Hanson:
Well, so I've got two related arguments. One argument, from a post of mine a long time ago, says, prefer law to values. Say you were thinking about moving to a foreign country to retire. Imagine there's two questions you could ask about this country. One question is, do the people there agree with me about values? And another is, do the people there obey the law?
And I think you'd want that second question answered as a higher priority. That is, in order to keep the peace with the other people in this new country you move to, it's more important to know that property rights are respected and that the law will be an intermediary between you than to know that they actually share your values. That's because the law is this thing that makes you care less about their values, as long as they obey the law. So that says that for AIs, what we'll want is for them to be embedded in a larger social, legal system, wherein they fit in that system and keep the peace within that system. That is, they follow the rules of that system, and that's the important thing you wanna know about them: that they have been designed and habituated to sit in these sorts of social roles that we can interact with comfortably and peaceably.
We don't need to know what they want or what their values are. We need to know that they can relate to us through property rights and law. So that's one sort of claim about what you should care more about: their being law-abiding, and that you have some sort of legal system between you and them that can adjudicate disputes and encourage good behavior. Then another issue is agency failures, as you mentioned, the principal-agent problem.
So some people in the AI risk community have said that if you have an agent and it gets smarter, your problem of controlling it gets harder. They have said that you should really worry about a very smart agent, because it will just outsmart you, and then you really won't get much of what you want. And I have a post where I say, basically, we have a large economic literature on this principal-agent problem. We know a number of parameters that make the agency problem harder, such that agents are less well aligned and get more of the pie relative to you. But intelligence isn't one of those parameters.
And I don't believe that intelligence is such a parameter. I just don't believe that, on average, having a smarter agent is worse for you. I think if you were thinking about hiring a butler or a driver or an executive assistant, you'd probably want a smarter one. In general, that would just go better, even though in principle they could trick you better. But, overall, I think it'll go well.
01:12:45
Richard Hanania:
Yeah. Do you see an analogy here with nationalism? So you had these decolonization movements in the twentieth century. And then you have even, like, ethnic politics today, where it seems like many people wanna be ruled by people whose values align with theirs, or who look like them, or who share their cultural background, but often those people are much worse at governing.
And these countries and these communities often end up being worse off. Do you see this as sort of a similar mistake?
01:15:30
Robin Hanson:
I think I do. That is, I think, for example, multinationals seem to be just better firms, just better run generally. And so when nations prefer their local firms over multinationals, I think they're making a mistake. They would get more of most everything they want if they encouraged multinationals more to come participate in their economy. And I actually think, instead of electing local people in political races, it would be great if management consulting firms ran for office on an international reputation. They'd say, look, here are the hundred other places we have run over the last few decades, and the record of how we ran them. And we're gonna do this for you here if you elect us. And I think that would probably be better than electing the local guy who says he loves you and he grew up in your town and he's gonna do well for you, but he doesn't have a track record, and people like him haven't done so well. Mhmm.
01:16:01
Richard Hanania:
Yeah. And so even, like, in the economy, even among us within the same country, we love small businesses and we hate big business. And, you know, big business is just better than small business, but we don't care. We somehow think the small business has better values.
01:16:55
Robin Hanson:
And the local small business, especially. Our nearby small business, we supposedly try to favor.
01:17:08
Richard Hanania:
Yeah. Yeah. You see these signs, you know, at the stores. You know? So
01:17:14
Robin Hanson:
Buy local. Yeah.
01:17:18
Richard Hanania:
Yeah. It’s like, yeah. Who cares? Right? It’s very it’s a very silly thing.
Is the, would that imply that, like, the would this have a would this have a the implication for, like, the federalism debate, like, the American federal government? You know, people who like, they generally the federal government jobs, I think, pay better than state government jobs. I think most people think, like, the FBI is probably more, professional than most, you know, local police forces.
01:17:19
Robin Hanson:
There's certainly a related thing for nonprofits versus for-profits. Like, some people would rather go to a nonprofit hospital, thinking that somehow it cares more about them. And I don't think that actually helps you.
01:17:46
Richard Hanania:
Yeah. I mean, caring about you is great, but it's not, you know, it's not necessary. I mean, society didn't have an economic takeoff after the industrial revolution because people started caring about each other more. They probably did care about each other much more than in the distant past.
It probably started in
01:17:58
Robin Hanson:
the forager world of a million years ago, and up until even the farming revolution, it probably was just really important to gauge who around you liked you and shared your values, and that probably was a big indicator of who you could work well with and trust. The main thing that's happened in the last few centuries is we've vastly expanded the size of our organizations, and they're quite alien to our evolved experience. And, yes, we are in a world where we're trying to trust our intuitions, which are based on very small-scale groups, in order to deal with pretty alien, strange, new superintelligences.
01:18:16
Richard Hanania:
Yeah. Okay. So this has been sort of an optimistic conversation. I want to believe you, because the other side is, you know, selling doom and gloom.
So would you just give
01:18:54
Robin Hanson:
you the doom side of it. I'll give you my doom side, which is just to say: if we continue to have a competitive world, then we will continue to have competition that changes the world. And then the things that win the competition in the future won't be you. They'll maybe be descendants of you, but they'll be quite different from you, and their values will probably be different from yours. That is, the world will select for competitive winners and whatever values it is that produce those competitive wins, and that's probably not your values. So you should probably expect the future to be different, if it competes.
And I think many people kind of are scared of that. They don’t want competition because they think competition will drive our descendants to be strange.
01:19:13
Richard Hanania:
Robin, what if I'm a Nietzschean and my value is just winning? And whatever is winning is great.
01:20:03
Robin Hanson:
And you’re pretty lucky then. You get what you want. Right? How many people are like that?
01:20:08
Richard Hanania:
I just love winning. So whatever wins, I’ll be satisfied with.
01:20:14
Robin Hanson:
In any universe where something wins, then, you're even happier. Right? Because you're happy with it. Right? Just: as long as something wins, I like it.
01:20:18
Richard Hanania:
Well, I want it to be conscious. I mean, if they’re unconscious, we should
01:20:25
Robin Hanson:
Well, you have further constraints now.
01:20:29
Richard Hanania:
Yeah. That would be it. I'd, you know, prefer the species Homo sapiens, although I'm not, like, completely wedded to it. But, yeah, I'd want them conscious. That scares me.
Like, if they're, you know, robots, and maybe it's a gradual thing and the robots are fighting the humans, but there's no consciousness.
01:20:31
Robin Hanson:
So this is why I've actually tried to think a fair bit about what will win the competition in the long run. And I think I've come to at least some rough conclusions, guesses about how we will change as a result of competition over the coming centuries. I don't know if you'll like them or not, but at least it's something we can think about. We can draw some actual concrete conclusions. Yeah.
01:20:46
Richard Hanania:
Well, say more. I’d be interested in that.
01:21:09
Robin Hanson:
So, for example, we have a good theory about why humans discount the future, which is that our children share roughly half our genes, and so when we're trading off resources between us now and our children a generation from now, we have roughly a factor-of-two discount rate per generation. That's a result of sexual reproduction. And so asexually reproducing creatures, like, you know, investment funds, should not have that discount rate. And so, eventually, the universe should be dominated by creatures who do not discount the future. They might discount growth, with the usual logarithmic discounting of size, but they would not discount time.
And so, eventually, the claim is, we will no longer neglect the future. We now neglect the future because of this innate discount rate of a factor of two per generation, but there will come a time when we do not neglect the future anymore.
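The factor-of-two claim can be made concrete with a small sketch. Assuming a thirty-year generation (my number, for illustration only), halving the weight on resources once per generation implies only about a 2.3% pure time discount per year, and an asexual reproducer like a fund would drop even that:

```python
GENERATION_YEARS = 30  # assumed generation length, for illustration

def sexual_discount(years: float) -> float:
    """Weight on resources `years` in the future, halving each generation
    (your children carry roughly half your genes)."""
    return 0.5 ** (years / GENERATION_YEARS)

def asexual_discount(years: float) -> float:
    """An asexually reproducing agent keeps full weight on the future."""
    return 1.0

# Implied pure-time discount per year from halving once per generation:
annual_rate = 1 - 0.5 ** (1 / GENERATION_YEARS)
print(round(annual_rate, 3))   # 0.023, i.e. about 2.3% per year
print(sexual_discount(60))     # 0.25: two generations out, weight is one quarter
print(asexual_discount(60))    # 1.0: no time discounting at all
```

The selection argument is then just that, over long horizons, agents weighting the future at 1.0 accumulate more than agents weighting it at 0.5 per generation.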
01:21:11
Richard Hanania:
So when you say something like that, you imagine this is probably ongoing. Right? You mentioned investment firms. So there are probably some that chase quick returns, and there are probably some that are long-term. Do you think that in the long term, in a hundred years, we'll have investment firms with longer time horizons?
01:22:08
Robin Hanson:
Well, we actually know a lot about the selection effects among investment firms in a competitive investment world. People have done a lot of mathematical work on that. And so we know that, if we allowed it, investment firms would just grow, because historically the rate of return on investment has been higher than the rate of growth of the economy. Investment would, in fact, grow as a fraction of the economy. And the reason it hasn't so far is that we have prevented that through law.
So we have, in fact, in the past, prevented investment firms from just reinvesting all their money and devoting themselves to that. When you create a foundation, say, when you die, there are rules about how much of the foundation's money has to be spent each year, so that it cannot grow in this way. But if we allowed investment firms to last arbitrarily long, with an arbitrarily high percentage of reinvestment, then they would come to dominate the economy, and they would force interest rates down to equal growth rates. And they would dominate investment choices in the economy. People have envisioned that, and that's why they made these legal rules to prevent it.
Because they said, we don't want the dead hand of the past to determine our world. And if we allowed this, then the dead hand of people who died and gave their money to investment firms would be ruling our world. But I still think that eventually, whether through investment funds or some other form of selection, we will have creatures who no longer discount the future. And that's a thing we can predict about the future.
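The compounding logic behind this takes only a few lines. If a fund reinvests everything at return rate r while the economy grows at rate g, its share of the economy scales as ((1 + r) / (1 + g)) ^ t; the rates and starting share below are illustrative assumptions, not historical estimates:

```python
def fund_share(initial_share: float, r: float, g: float, years: int) -> float:
    """Fraction of the economy a fully reinvesting fund holds after `years`,
    given return rate r and economy-wide growth rate g."""
    return initial_share * ((1 + r) / (1 + g)) ** years

# A fund starting at 1% of the economy, earning 5% while the economy grows 2%,
# holds roughly 18% of the economy after a century.
share = fund_share(0.01, 0.05, 0.02, 100)
print(share > 0.15)  # True

# With r equal to g, the share never moves, which is the rate the
# funds would force the economy toward if allowed to dominate.
print(fund_share(0.01, 0.02, 0.02, 50))  # 0.01
```

This is why the legal limits Hanson mentions matter: without them, the r > g gap compounds until reinvesting funds dominate and push interest rates down to growth rates.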
01:22:30
Richard Hanania:
So I can't leave behind an estate that spends money how I say? You know, I say, don't spend any money for a hundred years, and then do x, y, z.
01:24:05
Robin Hanson:
You cannot do that.
01:24:16
Richard Hanania:
And the
01:24:18
Robin Hanson:
you can leave it to your nephew and give them those instructions. But you know what? They might not follow them.
01:24:20
Richard Hanania:
Right. And what about the
01:24:25
Robin Hanson:
They wouldn't be legally obligated to do so.
01:24:27
Richard Hanania:
And what are the restrictions on investment firms that stop them from just reinvesting all their money?
01:24:29
Robin Hanson:
Because they have owners and the owners will ask for some of the money. That is
01:24:37
Richard Hanania:
Oh, yeah. But you said that there was some restriction that made them So
01:24:40
Robin Hanson:
if you try to have a will that creates a foundation after you die, and just have it reinvest all the money, it isn't allowed to do that. If you try to reinvest all your personal money, you can do that. But then, when you die, that money will go to whoever inherits it, and they may not continue this policy.
01:24:44
Richard Hanania:
I thought I saw that you said there was an analog to this dead-hand-of-the-past rule in the way investment firms work. Did I misunderstand?
01:25:04
Robin Hanson:
So the analog there is just that rule in wills.
01:25:12
Richard Hanania:
Uh-huh. Okay.
01:25:16
Robin Hanson:
You can't use your will to create a foundation that just reinvests its money. You can do that yourself, as long as you're alive. But as soon as you die, whoever you give that money to will get to choose their own policy. They may wanna spend it all, which usually
01:25:17
Richard Hanania:
Okay. So that's one area of life where we prevent things from changing. What else is gonna maximize over time? What about, like, families and certain genetic populations and communities? Right?
There's, like, the Amish, right? And this is not conscious. I don't think they think that in a hundred years, they're gonna
01:25:29
Robin Hanson:
Well, I actually think, relatively soon, we're just gonna be replaced by artificial descendants who aren't gonna reproduce with the usual DNA method. That method of reproduction is just not gonna last much longer.
01:25:54
Richard Hanania:
Oh, okay.
01:26:06
Robin Hanson:
We'll make our descendants in factories, out of explicit designs. But natural selection will still continue in that world. It will just continue with these new kinds of genes, which aren't DNA. And I think we can predict things about that world. In fact, I think we can predict that, eventually, we will have creatures whose value, in their minds, is "I want to reproduce."
Mhmm. That is, today, we reproduce because we have preferences that indirectly induce reproduction. We want sex. We want status, etcetera. And by wanting these things, our behavior tends to produce children, and that's what evolution's been counting on for us to reproduce.
But, in fact, that's not very reliable. As environments change, these evolved habits don't necessarily make us reproduce as much as we could, and that's why we're suffering this vast fertility
01:26:07
Richard Hanania:
decline. But why can't the human communities that already have this in their brains, why wouldn't they just take over the world? Because there are communities that just wanna maximize it. No. Sure.
01:26:58
Robin Hanson:
I mean, artificial is just better in so many other ways. This isn't the reason why artificial wins. This is just something that happens with artificial after artificial wins.
01:27:09
Richard Hanania:
Well, maybe those people are at least inclined to pursue artificial. You know, the humans who don't wanna reproduce, they're the technophiles. And the humans who do wanna reproduce, they're the technophobes.
01:27:16
Robin Hanson:
Over the next millennia, there'll be a slow selection effect among humans for the ones who do reproduce, but the switch to artificial may just happen well before then. In which case, we'll just have that world. So my book, Age of Em, is a scenario like that, where basically the emulations are artificial creatures and they reproduce a different way, and then they quickly dominate humans in that scenario.
01:27:30
Richard Hanania:
Do you think they'll be conscious or not? Because I think they will be conscious.
01:27:51
Robin Hanson:
Yes. I think so.
01:27:56
Richard Hanania:
And why do you think that?
01:27:58
Robin Hanson:
Well, first of all, I'm a physics person who just thinks, you know, stuff in our physical universe is conscious when it can be, because that's the only way it makes sense for our brains to be conscious. There's nothing special about our brains that makes us conscious. They're just ordinary physical devices in our universe. So if our brains are conscious, I'd guess so is most everything else that could be.
01:27:59
Richard Hanania:
So you subscribe to panpsychism?
01:28:20
Robin Hanson:
Well, I said "could be." That's the key constraint.
01:28:23
Richard Hanania:
It could be. Okay.
01:28:26
Robin Hanson:
So panpsychism would be everything. I'd say nothing that couldn't be.
01:28:27
Richard Hanania:
Well, so what percentage of things are conscious to you?
01:28:30
Robin Hanson:
I mean, I would think if it can compute what its conscious feelings are, that's a good indication. That is, it really can't have conscious feelings unless it can compute them. And computing conscious feelings is actually quite a restriction on a system. Most systems don't do that. So in order to feel something, your mind has to compute what you feel.
It has to calculate that, and that’s a bunch of work.
01:28:33
Richard Hanania:
So my computer can compute, you know, that it's getting too hot or whatever, and it has system updates and things like that. Do you think that our computers are conscious?
01:28:56
Robin Hanson:
In some way, yeah, but not in the way you are, because their consciousness doesn't feed into other things in the way yours does. But I just think there's just physics. There isn't anything else. So the answers to these questions just have to be in the physics. You know? Some physical arrangements are conscious, clearly.
And the question is, how does the universe tell which physical arrangements are conscious? The simplest answer would be: any of them that could be conscious are. That would be a simple way the universe could figure that out. Anything else would have to be a lot more complicated, and then the question is, where does the universe compute this, figuring out which things are conscious?
So I'd say, as a fundamental principle, computation happens in physics. If something has to be computed in order for things to work in the universe, it happens in physics. Physics is the thing that computes it. And so if you're conscious, that's being computed somehow. And so it's either a very simple rule that doesn't require much computation, or it's a complicated one, and then it's computed in something that computes like you do.
01:29:08
Richard Hanania:
Yeah. So whatever we create will be somewhat like a human and somewhat like a computer. And humans, we know, are conscious. And you think computers are probably in some way conscious. So the thing that's sort of a human-computer hybrid would probably itself be conscious.
Is this a great idea?
01:30:13
Robin Hanson:
What else could it be? I guess is the question.
01:30:34
Richard Hanania:
I don't know how it could be otherwise, I guess. Well, could it be that carbon-based life is just different from silicon-based? I don't know.
I have no... I
01:30:36
Robin Hanson:
mean, we know enough. Carbon is just a certain number of, you know, protons in the nucleus. That's all it is. How does the universe care about the number of protons in the nucleus? How does that work?
I mean, it doesn't make any sense as a theory.
01:30:45
Richard Hanania:
Well, I mean, I don't know if the universe cares. I don't know if that's the right way to think about it. But, like, for example, the human body has certain properties that machines don't have, in the sense that, like, okay, it's wet, for example, and machines tend to be dry. Right? This is one of the simplest
things you can imagine. And so, like, consciousness could just be like that. We don't know. We don't know, to be honest, but
01:30:59
Robin Hanson:
that's where being a physicist comes in. As a physicist, I say, look, these are the concepts that make sense as fundamental physics concepts. So if there's something true about the universe, it needs to be expressed in terms of these fundamental physics concepts. And you know what?
Wet isn't one of them. Wet is a very high-level abstraction. Because look, in some sense, you don't know if any of the other humans in the world are conscious. You don't know if you were ever even conscious in the past.
In principle, you could just be remembering when you thought you were conscious, and you never were. So you're postulating perhaps that all the brains of humans in the world are conscious, but what's the basis for that? I mean, you're basically assuming some generality. Well, it's because they're kinda similar. But I'd say wet versus not wet, that's actually similar in terms of basic physics.
There's nothing fundamental about wet versus not wet that makes any sense as a distinction. I might say only people in the northern hemisphere are conscious. Right? That's a line you could draw, but it seems pretty arbitrary to me. Wet versus not wet seems just as arbitrary in terms of basic physics.
01:31:28
Richard Hanania:
But, you know, so processing information, to you, seems more fundamental in the realm of physics than wet versus dry?
01:32:29
Robin Hanson:
Well, the key thing is, if there's gonna be a distinction, it has to be computed somehow. That is, there has to be a physical process that results in that distinction being figured out. Right? And so either there's a physical law by which an evolution of things figures it out, or it's an arbitrary label. I mean, where does this label come from?
Right? That is, I have a strong prior toward integrating whatever this other thing is with all the other physics things we understand. So there's a physical universe that has a certain set of properties, a certain set of things. We understand those things, and you know what? I'm gonna stick with those things and be reluctant to add other things unless you show me some evidence that there are
01:32:38
Richard Hanania:
other things. Yeah. I started reading Age of Em a, yeah, long time ago, and it lost me when I got to sort of the, you know, the sci-fi aspect, but I should probably finish it. These ideas, your ideas of consciousness, are they found in there?
01:33:18
Robin Hanson:
No. No. I mean, I try to avoid that sort of thing, because that's a rabbit hole that people just get sucked down. As you may know, there are some honeypot topics out there that just suck people in, and there's just not much value in them, and mostly people should avoid them. And that's one of them.
Consciousness is one of these topics. If you get sucked in, there are just endless cycles you can go through talking about things, and not much ever comes out of it. Like, there's nothing you can do with any of this stuff. So I'm very attentive to, like, let's think about the stuff we could do something with if we figured it out.
01:33:31
Richard Hanania:
Mhmm. Yeah.
01:34:02
Robin Hanson:
That’s where I go.
01:34:04
Richard Hanania:
And I guess you must think AI alignment, then, is, like, the biggest honeypot of all.
01:34:05
Robin Hanson:
A big honeypot, yes. A lot of people in our world are sucked into it. And it's interesting to speculate why. We've talked about abstractions.
One of the key distinctions between the people I like and the people I don't like as much is that I like to hang around people who have a taste for abstraction. People with a taste for abstraction tend to think abstractly about decision theory and about quantum mechanics and about utilitarianism; there's just a whole range of abstractions they gravitate toward, because you can talk about things at an abstract level and you don't have to get dragged down with the details. And they just like that. And this is one of those topics. Right?
People can talk about AI alignment in the abstract, and they hardly need to know any details. And it seems important, almost a comic book story, and, you know, people get sucked in. But in general, I like abstraction. I like to think about abstraction. I think it's fun to think about quantum mechanics and utilitarianism and algorithm design and all these sorts of things.
But one of the most important skills in the world is to judge when to reason abstractly and when to reason concretely. And often in a conversation, you have to go back and forth several times. So, you know, I'm wary of getting sucked into certain details, and I say, where are we going with this? I don't see the value. And I have to be wary about certain kinds of abstractions.
With certain kinds of abstractions, I go, is this just a word, or is there really a thing behind it? And we talked about that with intelligence. Like, okay, what kind of an abstraction is intelligence? What kind of a thing is behind that?
Is it just "betterness," another name for, like, stuff we like? Or is there a thing in the world that corresponds to it?
01:34:10
Richard Hanania:
Yeah. Yeah. And so it sounds like that's sort of your difference with a lot of these people. They like abstractions, and I like abstractions too. But they are more in the, you know... you like to get somewhere too. Right?
You're just sort of selective and
01:35:50
Robin Hanson:
For example, I have abstractions about economic growth, but tied to our history of economic growth. That is, I've looked at what we know about the history of economic growth and the history of innovation, and I've tried to tie my abstractions to those observations so that they are grounded in that way. And I feel much more confident in what I say about innovation because it's tied to that concrete data. And that's what I wanna do with abstractions. If they drift too far away from concrete data, they can often just go off the rails in strange directions.
And so it's quite an art, I think, to have concrete data and to jump away from it to the right level of abstraction, so you're not dragged down by arbitrary details, but you don't get too far away from it either. For example, with people who talk about capitalism or other things like that, the whole world of Marxists has this world of floating abstractions, and they're often quite a distance from any sort of concrete social phenomena you might be interested in. Like, they talk about exploitation. You go, exploitation? Like, what is that?
Show me where the exploitation is. Right? But they don't care that you can't cash it out in concrete situations. They just talk in these abstractions, and that's a big risk for people who like abstractions: they will talk about abstractions that have become detached, or that are not tied down well enough by concrete details.
01:36:05
Richard Hanania:
Yeah. Yeah. I mean, people, when they talk about politics, I see this a lot. I saw this tweet a week or two ago.
It's like, "Don't be fooled. The left will never give up power voluntarily." Okay. And there are so many abstractions there.
01:37:27
Robin Hanson:
First of
01:37:41
Richard Hanania:
all, who's "the power"?
01:37:42
Robin Hanson:
Abstract it.
01:37:43
Richard Hanania:
Right? Yeah.
01:37:44
Robin Hanson:
Yeah. What's happening? What's "power," exactly?
01:37:44
Richard Hanania:
Yeah. What is "the power"? What is "give up voluntarily"? They lose an election, and then, you know, Obama leaves office and Trump takes office. So is that voluntary, or did they just lose? They're trying to make some kind of almost "we have to go to complete war against these people" point, but it's just meaningless.
And so much of political discussion is like this. The right is this, the left is that. You know...
01:37:46
Robin Hanson:
you ever heard courtesy of yourself? I have in the past, not so much lately, but, I’ve actually in my last few the last 6 months, I’ve been focused on the sacred. And we don’t have time now, but we could talk about it later. But I think I’ve recently had an insight that explains a fair bit of this habit that is this is a frustrating there’s certain kind of topics that people just get abstract and floaty, and they don’t even wanna, like, be very just even imagining what that would mean concretely is not a habit they have. And I think I’ve come up with a weight of understanding why that happens on sacred related topics.
So Oh. That is a temptation for later. We don’t have time for that today, but if you wanna come back and talk, we will do that some other time.
01:38:12
Richard Hanania:
Okay. Okay. Can you give us a sort of nutshell version, or do you wanna
01:38:52
Robin Hanson:
So, basically, the sacred's been in my way, in a sense, all my life. I finally got frustrated enough that I said, let's study this thing. And so I collected 50 or so correlates of the sacred, things people say go along with things that are called sacred. And I looked for theories that people had to explain these things.
I picked one I thought was pretty good, and it explained some of them but not others, and I came up with an add-on theory to explain all of them together. So I think I have a nice unified theory of the sacred, making sense of all these 50 correlates, which is then a powerful toolkit for thinking about the sacred. You can understand how the sacred affects other people and yourself, because everybody has stuff that's sacred. Even you or I won't give it up. Nobody's gonna give up all of the sacred.
And so it's important to figure out what it is, how it works, and how to minimize its harms and maximize its advantages.
01:38:57
Richard Hanania:
Is there a sort of synthesis of these things, or is it just like a 50-item checklist of
01:39:50
Robin Hanson:
Oh, no. I take the 50 items and clump them into 7 clusters, each of which has a general theme, and then I try to explain these clusters. One simple theory I take from Durkheim explains 3 of the 7 clusters, and then I add one other theory that explains the other 4. And so now I've got a unified theory of all the clusters from
01:39:57
Richard Hanania:
Oh, that’s good.
01:40:16
Robin Hanson:
one simple ancient theory plus my one new add-on, and I've got a unified account. So it's not just a list of them. It's a unified theory explaining why all these 50 things are there.
01:40:17
Richard Hanania:
Yeah. Okay, I'm looking forward to this. Are you gonna write this up? What's the timeline on it?
01:40:28
Robin Hanson:
I have written it up. You can see my recent posts on the sacred. I have a paper whose title, I think, is basically "We See the Sacred from Afar, to See It Together."
01:40:32
Richard Hanania:
We see the sacred.
01:40:44
Robin Hanson:
Oh, okay. So that's the key insight to explain those other 4 correlates.
01:40:45
Richard Hanania:
Okay. I looked
01:40:51
Robin Hanson:
at this. It's a tease, too. Like, we have to end soon here because we've been talking an hour. Sure.
01:40:51
Richard Hanania:
Yeah. No. Yeah. That's what you've been working
01:40:57
Robin Hanson:
on the last 6 months, so I’m pretty proud of coming up with a coherent account of what seems to be a pretty fundamental human behavior.
01:41:00
Richard Hanania:
Okay. Well, so I’m conscious of your time, Robin. Is there any, is there any, anything else you’re working on that you wanna let people know about before we let you go?
01:41:07
Robin Hanson:
Oh, that was it. That was my pitch.
01:41:16
Richard Hanania:
Okay. Great. Well, it’s been great having you on, and, yeah, we’ll have you back to talk about that other stuff.
01:41:19