In this special episode of The Cognitive Revolution, Nathan shares his thoughts on the upcoming election and its potential impact on AI development. He explores the AI-forward case for Trump, featuring an interview with Samuel Hammond. Nathan outlines his reasons for not supporting Trump, focusing on US-China relations, leadership approach, and the need for a positive-sum mindset in the AI era. He discusses the importance of stable leadership during pivotal moments and explains why he'll be voting for Kamala Harris, despite some reservations. This thought-provoking episode offers a nuanced perspective on the intersection of politics and AI development.
Be notified early when Turpentine drops a new publication: https://www.turpentine.co/excl...
SPONSORS:
Weights & Biases RAG++: Advanced training for building production-ready RAG applications. Learn from experts to overcome LLM challenges, evaluate systematically, and integrate advanced features. Includes free Cohere credits. Visit https://wandb.me/cr to start the RAG++ course today.
Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at https://shopify.com/cognitive
Notion: Notion offers powerful workflow and automation templates, perfect for streamlining processes and laying the groundwork for AI-driven automation. With Notion AI, you can search across thousands of documents from various platforms, generating highly relevant analysis and content tailored just for you - try it for free at https://notion.com/cognitivere...
LMNT: LMNT is a zero-sugar electrolyte drink mix that's redefining hydration and performance. Ideal for those who fast or anyone looking to optimize their electrolyte intake. Support the show and get a free sample pack with any purchase at https://drinklmnt.com/tcr
CHAPTERS:
(00:00:00) About the Show
(00:00:22) Sponsors: Weights & Biases RAG++
(00:01:28) About the Episode
(00:13:13) Introductions
(00:14:22) The Case for Trump
(00:16:32) Trump: A Wildcard
(00:21:02) Governing Philosophies (Part 1)
(00:26:10) Sponsors: Shopify | Notion
(00:29:06) Ideological AI Policy
(00:33:47) Republican Ideologies
(00:40:31) Trump and Silicon Valley (Part 1)
(00:40:31) Sponsors: LMNT
(00:42:11) Trump and Silicon Valley (Part 2)
(00:47:49) Republican Nuance
(00:53:36) Elon Musk and AI
(00:55:43) Utilitarian Analysis
(00:58:01) Internal Consistency
(01:00:31) Trump's Cabinet
(01:04:22) Counter-Establishment
(01:05:53) Immigration Reform
(01:10:37) Teddy Roosevelt Analogy
(01:15:30) Creative Destruction
(01:22:29) Racing China
(01:32:51) The Chip Ban
(01:44:20) Standard Setting
(01:48:36) Values and Diplomacy
(01:52:50) American Strength
(01:55:56) Red Queen Dynamic
(01:59:23) Interest Groups & AI
(02:05:06) Peering into the Future
(02:08:32) Concluding Thoughts
(02:17:45) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://www.linkedin.com/in/na...
Youtube: https://www.youtube.com/@Cogni...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...
TRANSCRIPT:
Nathan: Hello, and welcome back to the Cognitive Revolution.
This weekend, we're running two episodes, which originally appeared on the Moment of Zen feed, focusing on the election, in which I attempt to give an honest hearing and questioning to two different AI-forward cases for Trump.
I like this exercise because, in my humble opinion, this election is ultimately a referendum on Trump.
My interlocutors are, in the first episode, Samuel Hammond, Senior Economist at the Foundation for American Innovation, and a thinker I generally very much respect – we cross-posted his appearance on the Future of Life podcast last year, and I also really appreciated his 95 Theses on AI, most of which I agreed with.
In the second episode, I speak with Joshua Steinman, Trump's National Security Council "Senior Director for Cyber Policy and Deputy Assistant to the President" from 2017-2021 – the entire Trump term!
Before launching into it, I’m going to briefly share where I’ve landed on Trump as I expect a possible Trump presidency might relate to AI development.
If you see AGI as a real possibility in the ~2027 timeframe, it seems totally reasonable to consider the election's impact on AI as a major decision-making factor.
Of course I understand people have other priorities, but this isn't a politics channel, so I'm not going to share my opinion on every issue - just AI and AI-adjacent issues.
Interestingly, as you'll hear, I find that on a number of AI-adjacent issues, I agree with the Trump supporters I talk to.
To name a few:
- Nuclear energy is good – we should build more nuclear plants!
- Population decline does merit real concern
- Freedom of speech is valuable and should be protected
- On today's margin, we should have fewer rules and a stronger right for people to build on their own property
- We should cultivate a culture of achievement
- We should aim for an age of abundance - degrowth is nonsense
- It makes sense to prioritize high-skilled immigration, at least to some degree
- And… American companies like Amazon, Google, and Tesla should not be allowed to abuse their market power at the expense of consumers, but neither should they be subject to government harassment just because they are outcompeting many legacy businesses.
Fortunately, it does seem the Democratic establishment broadly is coming around on at least a number of these, but in any case, there are still three main reasons that I cannot ultimately support Trump, despite these points of agreement.
Those are:
- He's far too inclined to escalate tension with China, accelerate decoupling, and prioritize his own narrow domestic political interests over the national & global interest.
- The lack of rigor in his thinking & discipline in his communications seems like a recipe for unnecessary risk-taking in an increasingly volatile environment.
- I believe we are far better off approaching the future with a positive-sum and inclusive mindset – not just within the US but globally – if we're to have a healthy conversation about a new social contract that befits the AI era.
On the question of US-China relations, I think we have a general failure of leadership and vision, on both sides, unfolding slowly but gathering more momentum all the time. People now see adversarial relations with China as a foregone conclusion – Joshua Steinman calls it "the physics of the environment."
To put it plainly, I don't accept this.
Conflict with China would be a disaster, and an arms race would take us ever closer to that disaster, but I do not see this as an inevitability, because I don't see China as a meaningful threat to America, Americans, or the American way of life. That's not to say the Chinese government hasn't wronged us at times – their cover-up of early COVID, whatever its origins, was shameful, and obviously Chinese agencies and companies have stolen a lot of intellectual property from American companies. I don't think we should ignore that – and of course we should take steps to make ourselves less vulnerable to cybercrime – but I think we should stay level-headed about it. The possibility that our grandkids could be speaking Chinese one day seems far more remote to me than the possibility that AI destroys the world.
I would say the Biden admin has done OK on AI policy domestically - the 10^26 threshold for reporting has aged pretty well for a 2023 rule, and I do believe in some strategic industrial policy – subsidizing the building of new chip fabs in the US so that we are not so easily disrupted by, e.g., a Chinese attack on Taiwan seems a prudent step for a great power to take.
That said, the chip ban still feels wrong to me, and I have to admit that Kamala's rhetoric on China also depresses me. There's no way for China to understand recent US statements and actions other than as an attempt to keep them down, and I'd say this escalation was premature at best – given the shape of an exponential, we could have retained the option value of cutting them off later, since the bulk of the total hypothetical chip sales would still have been in the future.
Relatedly, I have been really interested to see Miles Brundage, OpenAI's recently departed Head of Policy Research, saying that we need to make a much more proactive effort to demonstrate that Western AI development is benign – which of course is much easier to do if it actually is benign, and to some degree open or otherwise shared.
If we have to frame our relationship with China as a competition, I would love to see us race them on metrics like life expectancy improvements, number of diseases eradicated, or perhaps tonnage to Mars. Of course, I do understand that it's a complicated situation, that naive solutions aren't viable, and that real solutions will be both nuanced and hard to find, and I do intend to invite more China experts onto the show going forward in an effort to more substantively contribute to a positive vision for the future.
For now, I do wish Kamala and Democratic leadership in general were less hawkish and more visionary, but considering how much trust has already broken down and the extreme difficulty of making credible commitments between the two countries, I'd rather have a steady hand, who is more predictably going to follow a sane consensus path, and might actually make some arms-control-type deals, even if politically costly, to help us better navigate a tricky period.
Trump, it's well established, will do anything to avoid looking weak, and has credibility with rival countries when it comes to threats, perhaps, but not positive follow-through. His withdrawal from the Iran nuclear deal, general taste for inflammatory rhetoric, and stated plans for blanket tariffs – which would hurt American consumers in order to generally stick it to the Chinese – all suggest to me that he is overwhelmingly likely to make things worse.
Zooming out from US-China relations, while I agree with Sam Hammond when he says that the more fucked you think we are, the more willing you should be to roll the dice with Trump … I don’t think we are actually that likely to be fucked.
I've been a bit ambiguous about this over time, often saying that my "p(doom)" is 5-95%, and I've meant that to reflect the fact that while nobody has convinced me that we don't need to worry about AI X-risk, neither has anyone convinced me that it's inevitable.
After all, while we don’t know how they work and see all sorts of surprising and sometimes alarming capabilities, today's best AIs do understand human ethics quite well, and seem to be getting at least a bit "more aligned" with each generation. This may not continue and we should absolutely be vigilant about it, but this is a much better position for 2024 than most AI safety people expected 5 or 10 years ago.
Today, facing a decision like this referendum on Trump, I recall the words of a wise friend who told me that we should think less about what the probabilities are, and more about what we can shift them to.
And here… I have to say that, with competent, stable leadership, I believe we can steer toward scenarios on the lower end of that range, where the nature of the risk is more intrinsic to the technology itself and less the result of letting domestic political incentives lead us toward imprudent escalations, AI arms races, or catastrophic impulsive decisions.
I often think of the role that Kennedy played in the Cuban missile crisis, where my understanding is that he overrode the recommendations of his military advisors to insist that the US would not escalate to nuclear war first.
That was heroic, but scenarios in which executive authority matters most can cut both ways. When I imagine Trump vs Kamala in moments of intense crisis, where single decisions could alter the course of history, I have to say that I find it much more likely that Trump would impact things substantially for the worse than substantially for the better. After all, we saw how he handled COVID.
To be clear, Kamala hasn’t impressed me on the topic of AI, and in general her track record generally doesn’t show the foresight of great leadership so much as a tendency to follow local trends and incentives. We could certainly hope for better. But still, if I have to choose a leader for a potentially highly volatile period of time, I'll take the stable, sane person who will listen to expert consensus, even acknowledging that the experts could be wrong, rather than betting that Trump will somehow manage to override experts in a positive way.
You'll hear my conversation partners make the case, which I won't attempt to summarize here for fear of doing it poorly, that Trump represents our best case to break out of a broken consensus and revitalize the American state for the AI era, but in the end, I just don't see it. It sounds like chaos, when we need capabilities.
Finally… when it comes to the future of American society, and the world at large, I think we have a never-before-seen opportunity to adopt a positive-sum mindset, create a world of abundance, and ultimately update our social contract.
I think OpenAI and the other leading AI companies do have roughly the right vision when they talk about benefitting all humanity. And I think Sam Altman, for all the other criticisms I've made of him, should be praised for his experiments in Universal Basic Income.
While neither candidate has shown this kind of vision, Kamala at least aims to speak to and extend opportunity to all Americans. I thought her best moment of the recent debate was when she said that when Americans look at one another, "we see a friend" – this is at least something of a foundation on which to start building a shared positive vision for the future.
Trump, of course, is far more zero-sum in his thinking and negative in his outlook, and that has real consequences.
I grew up in Macomb County, Michigan - one of those bellwether counties that swung hard from Obama to Trump. And I also have family in Ohio - my beloved Mama and Papa belong to the same cohort as JD Vance’s grandparents - they moved from rural Kentucky to southern Ohio for jobs, the whole bit.
And to be totally honest, one thing I have seen personally, is that Trump has brought out the worst in a lot of people.
While JD Vance, Elon Musk, and others in Trump's orbit are no doubt more sophisticated thinkers about technology than Trump himself, I can't imagine that his brand of cynical populist politics could possibly lead to a healthy national conversation about adapting to AI that is – let's face it – going to disrupt a lot of people's jobs, let alone re-imagining what it means to contribute as a citizen or to live a good life.
It would be shameful if we ended up hoarding the benefits of AI or restricting access for non-Americans due to a widespread sense of scarcity that isn't even justified by the fundamentals, but unfortunately that's the direction I'd expect Trump to take us.
Ultimately, the idea of Trump as President when AGI is first developed strikes me as an imprudent bet, with downside that is both larger and more likely than the upside.
By all means, listen to these conversations with an open mind and form your own judgment, but for my part, I can’t support putting a loose cannon in power as we head into such a potentially pivotal period, and so I will be voting for Kamala, mostly as a rejection of Trump.
Eric: Yo, there we are. Hey guys. So sorry about the slow start and the miscounting on my end. I'm really stoked to have this conversation, guys. Thanks for joining.
Nathan: We just had a chance to chat a little bit and get familiar. Actually, it's the first time we've ever spoken, although we've exchanged DMs occasionally, appreciating one another's various outputs. So it was good to have a little warm-up.
Eric: You two are two of the thinkers I respect the most. You can't fit either of you in a box. You both get into arguments – heated, good-natured, but deep arguments – with people who are kind of in your peer group. So you don't fit neatly into any category. So I wanted to invite you both on. And Sam, let's start with the piece you wrote, but also the broader thinking you've been doing as it relates to the case for Trump. That's what we've been debating the last few episodes. And you brought the effective altruist perspective to it, as well as your other perspectives. So let's start there.
Sam: Sure. And for the record, I'm not an effective altruist. I just can put on that hat and try to think about what an EA would say. I am a kind of rationalist, and one of the things rationalists are known for is trying to see through convention and, you know, social desirability bias, right? Like, is supporting Trump going to hurt you in your social circles? Well, then maybe you should have a degree of meta-rationality and try to interrogate that and overcome it potentially. That doesn't mean support Trump, but it means factor that into how you understand yourself. So, from my lens, take a more utilitarian look at how the two candidates would play out. What matters most, at least in the long run, is innovation and innovative capacity, especially if you care about future people, right? Innovation compounds on itself; productivity improvements compound on themselves. So the question becomes who is better for innovation, at least at first order. And in my piece, I break down some of the most obvious areas; drug and medical innovation is a huge one. Kamala Harris, piggybacking on some Biden initiatives, wants to cap pharma prices based on a basket of international prices. I think this would be a huge mistake. I mean, there's been tons of research into how the U.S. is a kind of provider of global public goods, because the fact that we pay exorbitantly for all kinds of drugs enables big pharma companies, and small pharma companies for that matter, to actually do the R&D that goes into designing and manufacturing new drugs and then jumping through the FDA approval hoops. And there have been estimates that even just the pharma caps that were put in under the Inflation Reduction Act have delayed, or maybe killed, up to 170 new drugs – untold costs associated with that policy, even if they're relatively unseen. And I don't think it really stops there. I mean, later in the piece, I get into AI, which is maybe where me and Nathan can get into it. But I think the two administrations, Trump or Harris, would take very different tacks on AI. And that might be the biggest issue of all, depending on how you see it playing out and where you think the technology is headed. Getting AI policy right could easily swamp the costs or benefits of every other domestic policy we think about.
Nathan: Well, I agree with the last statement most – that AI policy seems to matter most. I guess the way I think about Trump is as a total wildcard. And I do agree – you know, whether I call myself an EA is an interesting question. I'm definitely highly influenced by EA thought. I would say if I'm not an EA, it's probably because I'm not virtuous enough to count myself among them. I generally have a lot of respect for the EA community, and especially those that really hold themselves accountable to living up to EA values. And I don't always feel like I do that. So I sometimes want to say, like, I'm not worthy of the EA label. But I think the EAs are right to think long-term, right to care about future people, right not to discount the future too much – whether you should go all the way to a zero discount is an interesting question. But then when I think about Trump in that way, I kind of can't even get down to the level of resolution on these super specific policies. I would agree that we want to get more drugs. I don't necessarily always like paying the higher prices for drugs, but I do buy the argument that there's something good about subsidizing drug development for the rest of the world. And yet I'm like, that seems so small compared to the really big questions. I think EAs are also right to worry a lot about the tail risks. And when I talk to EA thought leaders, I really don't hear anything along the lines of, you know, this policy, that policy, tax incentives, what's the sort of amortization going to be? Instead, what I hear is: how do we make sure we don't have nuclear war? How do we make sure we are as prepared as we can be if a really bad pandemic comes our way? And how do we make sure we have the best possible thinkers in place to make whatever judgment calls might need to be made as AI really starts to potentially heat up, maybe as soon as within the next presidential term? So we can unpack all those things, but I think all of them are kind of tail risks. If you were to tell me that another pandemic would happen, but it would only be as bad as COVID, then I could sleep reasonably well at night. If you were to tell me, like, we might have a small nuclear war, but it'll only be like one nuclear weapon, then again, I could be like, okay, well, the world can probably deal with that. The real risk is the pandemic that is super lethal, or the nuclear war that goes all out. And it's in those sorts of scenarios where I'm just like, I have no idea what Trump would do. I don't trust that he is a very stable genius, you know, at all – quite the opposite. He seems like a very unstable and not genius person. In his first term, he did seem to kind of get away with crazy in most cases. But when I look at the Better Angels of Our Nature graph, you see fewer conflicts and lower casualties in war over time. And then I look at the spikes, and I'm like, you know, I really don't want to see that next spike. And Trump to me feels like the kind of guy who might tamp down the small spikes, but inadvertently create the conditions for a really big spike. And so I don't even know what I should expect from an AI policy from Trump. I don't think Harris is a genius on AI.
You know, her famous clip of just two little letters is not super confidence-inspiring. But Trump feels like he's totally all over the map. I have no idea what to expect from him one day to the next. And I don't feel like we can really even talk about this policy or that policy or whatever, because he's just so erratic. And there is a lot of power vested in the presidency, right? I mean, at the end of the day, if he holds the office, people are supposed to listen to him, and that just seems like a very unstable situation to put ourselves in, given all the other uncertainties that we're likely to face. So that's the hump. I mean, I would be interested to know: what do you think a Trump AI policy would be, and how confident are you in that assessment? Because I don't have a lot of confidence in my ability to predict what Trump will do from one day to the next.
Sam: Yeah. I mean, before maybe getting into the concrete policy details of how the different administrations could pan out, I think we could zoom out to the different governing philosophies and the kinds of coalitions behind each party. First of all, from an EA perspective, especially the more doomer you are, it's probably because you think things just go bad by default, right? And how do we get off a default path without embracing some higher-variance option? So that's point number one: the high variance of a Trump administration actually should be in his favor, because there could be very high highs and very low lows. It definitely leads to some discomfort or some uncertainty, but if they were going to do something that's sort of out of left field, it's, I think, much more likely to come out of a Trump administration. Second, Matt Grossmann and David Hopkins, these two political scientists, have an excellent book that I always recommend called Asymmetric Politics. And it talks about how to understand the Republican and Democratic Parties through a more analytical, structure-based lens. And their argument is essentially that Democrats should be thought of as a coalition of social groups. You have the teachers unions, the trial lawyers, the, you know, Joe Biden and the credit card companies. There's all these different stakeholders, and they kind of have to balance each other's interests and often pursue policies that are very concrete and directed at a particular interest group. But ultimately it's sort of through the stakeholder negotiation path. The Republican Party, in contrast, is much more of an ideological vehicle, right? It's based around the conservative movement. There's obviously different factions within the Republican Party, but because it's more ideologically driven, you can also do things, if you are able to couch a policy within that vision and show why it's consistent, that are in some ways more first-best, right? And in some ways even cut against their narrow interests, right? So the fact that they appointed justices that overturned Roe v. Wade has cost them in the election, but it was driven out of an ideological view of what the Supreme Court is for. The fact that Paul Ryan, during the Tax Cuts and Jobs Act, severely capped the home mortgage interest deduction and the SALT deductions, which are very popular credits – you know, it instigated a war with the Realtors Associations. That was because he had this ideological view that the tax code should be simpler, right? It wasn't driven from an interest group. It was driven from this sort of supply-side ideology. Similarly, if you look at the personnel – and to a large extent personnel is policy – within a Republican administration, at least in the last Trump administration, they drew heavily from more libertarian circles, right? So we talked about the tail risk from pandemics. Operation Warp Speed occurred under the Trump administration. And I think it only could have occurred under a Trump administration, in part because they had a couple of Chicago school economists at FDA who were basically given the mandate to apply Chicago school theory. Chicago school economists have been thinking a lot about the hidden costs of FDA drug regulations, like the drugs that don't go to market. How do we streamline these things?
And so they were already in the process of deregulating FDA for the explicit purpose of accelerating vaccine approval. And part of that was an architecture for public-private partnerships, which then was deployed with Operation Warp Speed. And that pulled forward the vaccine by like five or six months and saved up to 200,000 lives. It probably has all kinds of second-order effects, because now there's been a boom in activity around mRNA, and mRNA candidates for malaria, for cancer – there's a pan-influenza vaccine that's in development. And so on the surface, yes, now politics have polarized. And during the pandemic, Trump suddenly went from being pro-vaccine to anti-vaccine. And you saw the exact opposite on the Democratic side, where Kamala Harris, before the first vaccines were approved, was saying, I don't know if I'd put that in my body. And then they came out, and all of a sudden things repolarized. That doesn't mean, I think, in the event that we had another pandemic, that they wouldn't also just pull out that same playbook and accelerate a vaccine. Because ultimately, the way Trump appears on TV is much different from how he ends up governing. You could probably make a similar point about World War Three and nuclear risks and stuff like that. You know, in interviews Trump will say, you know, it's the n-word – the other n-word, nuclear – and he talks about the chance of World War Three being at its highest in our lifetimes. He has said that AI – he called it "super duper intelligence" – is, quote, alarming and scary. Meanwhile, you hear Biden saying that our biggest existential risk is climate change, and Harris saying that the existential risk from AI is that it denies you healthcare, because healthcare is existential. And I think there's just a different level of attention to these sorts of potential black swans through the different lenses. And that's also partly downstream, again, of these different party structures, where you have something that's more ideas-driven. And I think you can only really tackle AGI risks through a more ideological lens, precisely because they're still theoretical. Whereas something that comes through a stakeholder lens is going to be connecting AI risks and these other risks to particular constituencies.
Nathan: Yeah, I mean, I don't want to be the person who's going to defend the stakeholder-based approach to AI. It is pretty easy to imagine the scenario where lots of good things that I want to see come online are blocked by interest groups. And you can look at all the cost disease segments of the society and imagine what AI could do for those segments. And then you could imagine how the people who might stand to lose out in relative income and status and whatever would want to block those things. As an aside, I've been pretty pleased with how the medical establishment has responded to AI so far. I would have expected a much harsher anti-AI stance from doctors in general at this point than we've seen. Perhaps that's just because they're also overworked and exhausted and just fed up with all the paperwork that they have to do – anything that could reduce the paperwork burden is maybe welcome. But one can still imagine that that could change. And I certainly don't have a high level of confidence in what teachers unions are going to do when it comes to implementing AI in classrooms. And I definitely would like to see my kid have the benefit of the best AI that we can bring to bear on his education – only one of mine is in school so far. So like, sure, I'm with you that that could suck. At the same time, I also really don't like the sound of an ideological approach to AI. One of the biggest things I feel like I've learned in the last couple of years of obsessive study is that these things have to be understood on their own terms – that we can't really bring preloaded frameworks, and we can't really reason by analogy all that much. We really have to get down to ground truth on: what is this thing? How does it work? Ideally, as much as possible, mechanistically. That's why I'm so excited about the actual study of what is going on in these systems, why they're doing what they're doing, and whether we can engineer ways to control them. If we're really going to get to any sort of sustainable safety, it's presumably going to be with engineering solutions. But in the meantime, I really don't like the idea that we would have a sort of ideological approach. And most of the people that I think have the most insightful things to say about AI are, first of all, very quick to admit their high level of uncertainty about where we're going in the biggest picture. Even Anthropic has their Core Views on AI Safety blog post, which is kind of their canonical statement on the matter. And even in that, they're just like: we really have no idea. We don't know how hard this problem is going to be. It could be relatively easy. It could be near impossible. We're going to just try to do our best and update this view as we go. I think that's basically the right view to take right now. It's also interesting to note that a lot of these people are techno-optimists and libertarians in every way other than AI. They're all eager to see more drugs come out. I mean, I think among people that are AI safety hawks, you would find many FDA reformers, for example. And so I think these people have broken from their normal frame – and I count myself among them; I'm probably a libertarian-minded person.
But I think these people who have been willing to recognize that, like, yikes, this is not something that we have precedent for, it's not something we have good analogies for, it is something we need to understand on its own terms – those are the people whose analysis I find most compelling. And, you know, the idea that we would have – I mean, what ideology are we even talking about in the Republican Party? It seems like there are several, right? There is this sort of Chicago school, market-maximalist strand. There's also like a Christian thing. There's, I don't know, neo-reactionary elements. I'm not sure whose ideology it would be, but none of those seem like they're giving us an understanding actually grounded in the substance of the technology, which feels to me like critical to making good decisions as the situation evolves. So yeah, I don't know – which ideology are we talking about, and why should we trust that?
Sam: Yeah. I mean, the first point of clarification is that I don't mean ideology in a pejorative sense. I just mean driven by ideas – in this case, broadly, the conservative movement, and it does contain many factions. Ideologically it's like: let's follow the Constitution. Let's be concerned about surveillance and censorship. Let's preserve basic civil liberties. Let's try to simplify government and roll back programs that we think were part of a progressive social engineering project, or whatever. And so there's no interest group behind that per se, right? Sometimes they will be tied to an interest group. But I guess a good way to illustrate this is during the first Insight Forum that the Senate held on AI, Republicans and Democrats both got to invite people. The Republicans invited Elon Musk and Mark Zuckerberg. The Democrats invited Randi Weingarten, who's the head of the American Federation of Teachers. And they've also had hearings with Emily Bender and Timnit Gebru. And so the people who are informing the left on AI policy are also ideological in a certain sense – maybe a more pejorative sense in this case. Or look at the National AI Advisory Committee, which advises the White House. The Democratic appointees on that include a woman named Janet Haven, who runs this program called the Digital Society Initiative that includes a land acknowledgement for their servers – a digital land acknowledgement. I was at a conference with her where she said that AI is just web3 all over again, and we really need to be concerned about the race and equity and discrimination implications of AI. I think that's the more ideological lens, the prism that AI policy will be passed through in a Harris administration. You already kind of see that as well with some of the policies that the Biden administration has already adopted, right? Because Biden made Harris his AI czar, right? She was the representative at the AI summit in the UK. Part of what came out of that was this AI Bill of Rights. And then that was somewhat formalized with the White House executive order, which included a directive to OMB, the Office of Management and Budget, which is sort of the brains of the executive branch. And now OMB has this directive that says any application of AI in government has to pass through these minimum practice standards. And they include external consultations, an audit of risks, ongoing quality management. And this applied retroactively to all kinds of existing uses of AI. So there are worries now that the post office, because the US post office uses a simple AI tool for routing mail, has to retroactively go through this process and verify that it's not going to be rights-impacting, right? And so in one sense, that is... a kind of safety. I do worry that it's a form of safety that comes at the cost of dramatically reducing US state capacity and ability to adapt to new technology. Because on the ideological side for the right, there's been a long project to try to, quote unquote, deconstruct the administrative state and challenge administrative law. And part of that is trying to figure out how do we actually make government more efficient and do the things that it needs to do without having – a Trump person would call them agents of the deep state – interfere.
And so that kind of energy can actually be applied towards more rapid diffusion of AI within government, to the extent that we could either augment existing civil servants or substitute civil servants with AI. And in many cases, a lot of what government does is sort of, you know, a kind of fleshy API – passing PDFs back and forth and checking boxes. Now, that's a longer-term project. I think the way you motivate and actually get that project done is by building some kind of counter-establishment, right? And this is where someone like JD Vance may come in. You know, I think back to the first Trump term. I was sitting in a Starbucks – this was during the transition, and Trump, of course, didn't expect to win, so they barely had a transition team in place.
Nathan: And then the news came out that- That's the kind of preparedness we need for AI.
Sam: Well, this time it was different. This time it was different. But news came down the wire that none other than Balaji Srinivasan was on the shortlist for FDA commissioner, right? And I remember at the time I tweeted a link to his startup exit talk, "Silicon Valley's Ultimate Exit." And I said, this is a surprising pick – like, watch this infamous talk. And then my phone rang, and it's Balaji. And he said, infamous? I think you mean famous. And I'd never spoken to Balaji before that point. I don't know how he got my number. But I was intrigued and started talking to him about his background. You know, he's now known as a Bitcoin maxi, but his background is computational genomics. He taught the first comp genomics course at Stanford and developed a DNA counseling company with his brother. But I talked to him, and he was like: right now FDA has a pre-market approval regime where new drugs have to go through clinical trials – phase one, two, three – and then finally enter the market. This is anathema for personalized medicine. And if you just extrapolate falling costs of genomics and the potential for AI to revolutionize medicine in this way, what we need is a post-market surveillance regime. And here's the way I'm going to do this: we're going to use the post-market adverse reaction system. So anytime there's a new drug on the market that has this one-in-a-million interaction, doctors are supposed to report it to the FDA. We'll use that and a variety of other big data techniques and sort of Bayesian machine learning to have a real-time update on the efficacy of a drug and how it correlates with idiosyncratic factors, including your genetic profile. And I don't know whether or not he would have been able to enact that – we will never know. But the fact that he was even under consideration was only because Blake Masters and Peter Thiel were running the transition. Now, I use that as an example of the kind of thinking that we're going to need if we're going to actually adapt and co-evolve with AI on an institutional level. You know, we need the kind of community notes version of everything, right? Where we move from a system where you have a bunch of flesh-and-blood people with their own ideological proclivities doing policy – deleting tweets, whatever the case may be – and transition that to something that is actually scalable. That won't come through a stakeholder process, I'm afraid to say. I think it's only going to come through something that is a little more ideologically driven, provided that that faction has the resources to call upon. Because right now, the problem with the Republican Party more generally is that education polarization means they lack the kind of technocratic elites you need to actually run agencies. But the tech world can potentially fill that void.
Nathan: Yeah, I mean, boy, you can paint a picture that sounds pretty attractive. But I kind of still come back to: that doesn't seem to bear any resemblance to the actual Republicans that we have in today's world, right? I think people are really investing a lot of hope in this sort of romance between Trump and Silicon Valley and thinking that we're going to get a very high-competence government out of it, which notably didn't really happen the first time around – with maybe a couple of exceptions. And I'm not somebody who can't recognize that there were some good things to come out of the Trump administration, and I'm happy to give credit for Operation Warp Speed. We needed it that badly, and it's certainly very good that it happened. I don't know what the counterfactual would have been if that was a Hillary project. I think she was also pretty competent and very likely could have figured out a way to do an Operation Warp Speed, but who knows? We'll never know. But first of all, still on just ideology for a second: what is the opposite of ideology? One way to describe it would be being responsive to new evidence and being willing to update your beliefs on new evidence. I know you're not using the pejorative form of ideology too much – sure, in the pejorative sense. But this is where the rubber hits the road, right? The Republicans that we have are not those people. I mean, you just look at Congress. I think there's a version of the Republican Party, one that doesn't currently dominate the actual Republican Party, that we could have this discussion about. But this discussion feels like it is sort of forgetting the fact that we have, by all accounts, a valueless, purely self-interested maniac at the top of the party, and a Congress that is full of people who are some mix of religious zealots on the one hand and sort of craven opportunists on the other, with a small section that is still sort of the old guard – kind of institutionalist, or interested in a future of state capacity or whatever. But the actual Republicans in power – I don't see any signs that they're interested in state capacity. I see that they want to drain the swamp, they want to burn it down. I have no idea how to anticipate what they'll do with respect to AI, but I can't reconcile the people that are actually there, actually casting the votes – who would actually have the sort of top-level control of the administrative apparatus of the country – with the characterization of, you know, Chicago school or Balaji. I mean, these are very different people. Balaji wants to burn it all down too. If given power, I think he would do a much better job than a lot of other people, but this project seems quite fanciful, honestly. It would be great, maybe, but as Americans go to the polls in a couple of months, I don't think the vision that you're giving them right now is actually on the ballot.
I think what is on the ballot in almost every congressional district is the person that Trump endorsed – the one that was the most religious, right, the most focused on abortion. I mean, that's what wins Republican primaries. These people don't care about AI. They don't care about anything like that, right? They are outright ideological in the pejorative sense. They are not ideas-motivated. If you visited every member of Congress, I would be very confident that you would have much more ideas-driven conversations with the Democrats than you would with the Republicans. The Republicans that you're actually going to meet in Congress are ideologues. A few of them, I think, are a little bit more sophisticated, but most of them are really not. And that's not to say all the Democrats are great – the sort of excess of wokeness over the last few years is well documented. But I still think, at the end of the day, the real EA outlook is anti-Trump. And the real case for that is just: we want people who are going to think carefully. We don't want people who are motivated by scripture – and that's a lot of what the Republicans are – because scripture is just not going to be a good guide to a lot of these modern challenges, AI or otherwise. So I feel this dissonance where, as I listen to you talk, I'm like, no, that sounds pretty good. We could get FDA reform, we could get all these things going, and yeah, Chicago school this and that – great. You're speaking to the me of 10 years ago, who was like, you know, man, this Romney guy makes a lot of good points, right? But now that dude is one of the last few standing, and he has basically no power. Right.
Sam: Yes. I think this is a bit of a caricature, maybe through the media lens, the media filter, because, you know, I work with congressional offices on a daily basis, on both sides of the aisle. And I think you'd be shocked by the amount of ideas-driven, hard discussion that takes place in Republican offices, and how disappointed you'd actually be on the other side of the aisle. You know, I've been in reading groups with Republican members where we read Joan Robinson's heterodox work on monopsony power. And then, you know, someone like Marco Rubio – I worked with Rubio and his staff on this Small Business Administration reauthorization where they wanted to turn it into a development bank, to do hard manufacturing projects and stuff like that, whereas what the SBA typically does is give loans to gas stations and nail salons. So that's another example of what I mean: they have a vision in mind and want to reform any given agency. And certainly I would agree that on the surface level, Republicans seem much more chaotic. They seem much more religious. I would make the counterargument that Democrats are also, in their own way, deeply theologically motivated. Maybe it's more of a progressive version of theology, but it's also one that is much more institutionalized. You know, I think actually with the Republican National Convention, Trump made a gesture of pushing the social cons out of the coalition to some extent. So, you know, I think really what we're voting for – I'm not voting, I'm Canadian.
Nathan: You're telling me he's not still playing to his beautiful Christians? I don't know if I quite believe that.
Sam: Yeah, he said the Bible is his favorite book after The Art of the Deal. And, you know, he couldn't quote a verse. I think in some ways Trump may just be an apatheist, right? He's not really that religious, and I think he's actually kind of weirded out by them, right? Because what Trump actually is, is a kind of populist New York City Democrat who has sort of paleocon views on immigration and trade, who's somehow found himself leading the Republican coalition, supported by his cult of personality. And, you know, in some ways, that's better than the alternative. If not for that cult of personality, then you might actually have to pander even more to those various interest groups. But even on this point, there are good eschatological reasons for the evangelical movement to actually be terrified of AI. You've seen Steve Bannon has done episodes on what he's called the cyborg theocracy and the singularity. Tucker Carlson has actually called for bombing the data centers.
Nathan: Strangling it in the crib. Right.
Sam: But more fundamentally, it comes down to what happens if we enter a crisis. And I do think, even though this is not on the ballot – I fully agree – we're sort of talking at a more esoteric level of what's actually going on, versus what is projected to the masses, who may just be voting on inflation or kitchen-table issues. I think of this recent Dwarkesh podcast with Patrick McKenzie, where Patrick was explaining that he kind of ran vaccine logistics out of a Discord server during the pandemic, because there was no one that was in charge. There was no adult in the room to say, you're the person running logistics. And so it ended up falling to this sort of polymath. And in that conversation, he said, what should have happened was they should have found the smartest person and said: you are now a colonel of the US Army, you have all the authorities of a colonel, go figure out vaccine logistics, and don't worry about racial quotas, don't worry about all this other junk. Let's just get shots in arms, and let's not throw them into the waste, because that's super stupid. And if there's a reason to believe that the next major technological inflection point in human history could plausibly happen within the next five years, that means the next president could be presiding over an intelligence explosion or an AI takeoff or transformative AI, whatever you want to define it as. And that could be a moment of crisis where you need to call somebody the smartest person in the room and make them a colonel to do whatever. And the question is: is that going to be Elon Musk, or is it going to be designed by committee? Particularly when you start talking about what jobs, at least in the near term, are most threatened by AI – you listed some of the cost disease sectors. I don't think it's going to be that public schools adopt AI into the classroom. I think it's much more likely that they drag their feet, AI tutors end-run the system, and we end up having a totally new kind of education system where people can just insource it, the same way that the shift to remote schooling during the pandemic led to this 2 million student increase in homeschooling. You know, there's going to be a huge resistance from what you could call the professional managerial class to any kind of AI diffusion. And that's going to translate into a kind of decelerationism targeting use rather than actual safety. Whereas if we actually are in a crisis and emergency situation, I would much rather have actual experts in the room, including someone like Elon Musk – who I know is incredibly polarizing, probably because he's a little bit on the spectrum and says things that you probably shouldn't and is not politic in that sense, but who I think has demonstrated an operational and executive ability to handle very complex engineering problems.
Nathan: Yeah, I mean, I guess, again, there's a hopefulness to the description here that I have a real hard time signing on to. I'm a big Elon Musk fan – not of everything he's ever done. Certainly, it feels like you have to almost ritualistically say he shouldn't spend so much time on Twitter, and I would definitely advise him to get offline a little more often if I were an advisor. But yeah, he's clearly able to make things happen in the world. It's not entirely clear that's what we'll need in a moment of crisis – maybe, but I'm not sure a technical solution is ultimately what we will need. It very much depends on the crisis, right? And when I think back to the pandemic, I recall these news conferences where we were getting extemporaneous, you know, "oh, maybe we could inject bleach" and whatever. And okay, sure, that was a lowlight. But it is a reflection of the fact that the dude at the top was not disciplined, was not ideas-driven, was not in command of the facts, and just kind of went around making a mess for his people to clean up and further polarizing the issue. And I don't think it's hard at all to imagine a different president doing a better job on the pandemic than Trump did. I would think that it would have become less polarized. Some of the fault for that definitely resides with Democrats – I do think we did not need a statement from Kamala Harris that she maybe didn't trust the rushed Trump vaccine. That wasn't healthy. But unfortunately, nothing is healthy around Trump; that seems to be the problem, right? I do think that when somebody makes everybody around them worse – both the people on their team, who they're constantly throwing off of what they're trying to do, creating problems for them to clean up, and distracting, and the opposition, who they're just driving insane – that's not a good person, right? That's not the person that we want to be in charge.
Sam: Oh, I totally – you know, I'm not trying to defend his character by any means. Again, going back to utilitarian analysis: someone like Norman Borlaug, who kicked off the Green Revolution with his research into wheat varieties, plausibly saved hundreds of millions, maybe even a billion lives incrementally through that research. He could have spent the rest of his life being a raving anti-Semite and playing the knockout game and punching babies, and a utilitarian should still be happy that we lived in a timeline where he existed, right? And likewise, like I –
Nathan: But Trump hasn't given us anything like that. I mean, where's my Green Revolution from Trump? Sure, I guess, maybe. But if he were all those things and he had done the Green Revolution, I still wouldn't want him to be president. And Trump doesn't have a Green Revolution to his credit. It feels like we're really being asked here to imagine a scenario that we don't really have evidence for. It didn't happen last time, but this time is different – I think you said that earlier. This time is different; this time we're going to get competent executives to run all these things. I just don't see that that's likely to happen at all. If I did believe that was going to happen, I would have a very different outlook on this.
Sam: I think the competence of the first administration is underrated as it is. It just was a rocky transition, because they didn't expect to win and had to staff up, especially in the White House, with a bunch of people who had very conflicted views of what to do, and the leaks were constant. I think that's one big thing that will change. It'll be a much tighter ship.
Nathan: And all the people... I mean, I'm not sure who you're referring to when you refer to the competence of the first administration, but...
Sam: There's a long list: the head of HHS, or, you know, Trump's NSC. There were people in very important decision-making roles who were very competent, and partly they were able to express that competence because they came from a more right-of-center orientation. And again, one of the things I think we need to do is try to make our beliefs consistent with each other, right? This is a big topic all the time when we talk about AGI timelines. Are you long the market or short the market? Are you joining a nine-year fellowship program? Do you think AGI is three years away? There are all kinds of ways our beliefs and actions may be internally inconsistent. But if you do believe that there is a technological inflection point within potentially the next administration, then all these things become open questions. It's not just a matter of, oh, we could prospectively make the FDA more efficient. It's that, no, we're going to have a Google GNoME project but for drug discovery, one that is going to take us from 50 new drugs a year to 500,000 new drugs a year, and we do not have the state capacity for that. And state capacity in this context does not mean civil service protections. It might even mean the opposite. It may mean actually firing 70% of the people so you can bring in the 20% more who are the hyperscalers, right? I think of this in terms of history, because if you look at other major technological transitions, they have had major second-order effects on our institutions. The last major one we went through was the Industrial Revolution, and in some ways we're still going through it. But the Industrial Revolution took us from the world of 1900 to 1950, and the level of governance is night and day, right? How did that transition take place? First, in the early progressive era, we had the rise of the first large corporations: railway networks, the Carnegies and Rockefellers, those types of industrialists. And they developed a new science of management. This was the first time you could have big multinational companies, and you had to understand supervisors and direct reports and all these basic things we take for granted today. That learning was then brought into government, directly and indirectly through osmosis, into how to build an administrative state. And we're going to go through another transition, because AI directly implicates the machinery of government. What we need, and I use this more as a metaphor, is: how do we get the Patrick Collison Secretary of Commerce, right? How do we get the Palmer Luckey Secretary of Defense? And that is much harder to see coming through the Democratic side of the aisle, partly because their horizons of possibility are much more constrained, because in a weird way they've become the Burkean party.
Nathan: Yeah, a lot of that holds water, I think. But then look at what all of the previous cabinet members are currently saying, even the vice president. The number of people who served at the most senior levels in the White House under Trump in the first administration and are now saying in very clear terms, this dude should never be back in a position of power, he did this, he did that, all these things... There's a laundry list of cabinet members, to a degree that I would say is unprecedented in American history. I don't know that there's ever been a president like that. Even Nixon, I think, mostly had people who were loyal to him and still thought highly of him despite his mistakes, or crimes, if you want to be more blunt about it. But there are not a lot of returning starters from the first administration. Most of those people have fallen out of favor. They mostly hit their limit of what they were willing to put up with from Trump and finally spoke out about it, and now they won't be back. So now we're going to have this whole other regime, and we can hope... I mean, if he does get elected, I certainly hope he picks wisely, and I certainly hope we get executive competence that can create great stuff for us.
Sam: We will at least get much less internal conflict, right? Because one of the ways that lack of transition manifested was they had to pull in a bunch of neoconservatives and, you know, old Bush operatives, people who naturally reviled Trump throughout and had to basically falsify their preferences until they could achieve what they wanted to achieve. And then you have, like, Miles Taylor.
Nathan: And like J.D. Vance is doing now? I mean, isn't J.D. Vance clearly doing that?
Sam: He's very sincere.
Nathan: You think he likes Trump now? I don't think he... I think he's genuinely... I don't know about this. Radicalized a little bit? Maybe somewhat, on some dimensions. But you think he thinks Trump is a good leader for the free world, after he said very clearly that he was not to be trusted?
Sam: Well, I think that was a reasonable... I think you can actually tell a story where, coming through the Trump administration and seeing how things played out, there is a case that Trump is the only person who could do certain things, right? He is the only person who can go before German members of the European Parliament and tell them that they need to increase their defense budget or he'll pull out of NATO. I don't think he was ever actually going to pull out of NATO, but I think you need to have that credible threat, and certainly Trump is uniquely capable of that, right? In another administration, he's not going to have this. It's sort of like if Elizabeth Warren became president and had to hire a bunch of Obama Democrats because she didn't have a transition prepared; you'd have a lot of people exiting saying that she's going overboard on things. It won't be the same this time. Go ahead.
Eric: One quick analogy might be Elon and Twitter. A lot of Elon's former execs aren't going to say great things about Elon. He kind of had to clean house. I know Trump had more selection over those executives than Elon did, but he was fundamentally misaligned with the party, right? With Tillerson, with Mattis, with McMaster, with Kelly, he was a new regime. The fact that they're not fans of him in some circles is actually a plus: Trump didn't bow down to people who were fundamentally misaligned with him. And you see very serious people like Bill Barr, who are aligned with him, supporting Trump. So he is bringing a new regime into the party, and that regime wasn't stocked up in 2016, wasn't ready.
Sam: Michael Lind, in his book The New Class War, talks about this: the first Trump administration was a counterculture. To be a Trump supporter was to be in a counterculture. And what you really need is a counter-establishment, because a counterculture can't actually effect change, whereas a counter-establishment can. And a counter-establishment needs to be reasonably institutionally complete to actually move the ship of state, because it's a 2.2-million-person organization. There are tons of different agencies, tons of moving parts; you need expertise in administrative law and all kinds of stuff. So you can't do that on cultural personality alone. And we have the green shoots of a new counter-establishment forming. If Trump wins again, they will be able to solidify their position. It won't be sunshine and roses overnight; you can't rebuild the FDA or whatever overnight. But it does establish a new hierarchy within the right. And I think that would be incredibly salutary, in part because when you look at who the biggest detractors from the first Trump administration are, it's the Mike Pompeos and so forth who were like, I didn't get a long enough leash on my kill list, I wanted to invade Iran, and all we got was a single bombing. They're mad at Trump, I think, for some of the right reasons. Especially when you start talking about avoiding World War III: who else could broker a diplomatic channel with Xi Jinping? Someone who is aggressively hawkish on China rhetorically, but at the same time has this more transactional stance and wants to strike a deal.
Nathan: But where's the deal, right? This is where we keep coming back to these stories. I noticed in the piece that you wrote, there was a link to a February 2016 article from The Federalist, and the link text is "Only Trump can fix America's broken immigration system." But in reality, he was president for four years and he didn't fix it. Then he was not president, and there was, allegedly, I mean, I guess we can't know for sure that it was going to pass, but by all accounts that I understand, a bipartisan deal to make some meaningful reform just in the last however many months. And what did he do? He comes in from left field and tries to destroy the deal, ultimately successfully, because he wants the issue to remain live for his own electoral grievance campaign. So maybe he's the only one that can fix it...
Sam: That article was by me.
Nathan: So you would not say he scuttled the bipartisan immigration deal? I mean, that's seemingly been reported in a lot of places.
Sam: So the argument is that if we want to liberalize immigration in the long run and get to a more sensible system, not one based on dozens of different visa categories that each have their own bespoke interest group and that are easily abused... Even H-1Bs: as much as you may favor high-skilled immigration, speaking as a Canadian working in America, the visa system is incredibly exploitative and kludgy, just kludgeocracy. How do you transcend that? The Trump people wanted to do a points-based immigration system, and there was a proposal from Tom Cotton called the RAISE Act. But in the first term, Paul Ryan and, what's his name, I'm having a brain fart, the Senate Majority Leader, yeah, McConnell, Mitch McConnell, both met with Trump in the Oval Office and said, look, I know you want to do immigration; we promise we'll do that next Congress; this time around, we're going to do a big tax cut. He got rolled, right? He had the majority he could have used to do immigration reform in his first term, but he got rolled, and he's not going to get rolled again. I think moving to a points-based immigration system would be beneficial in the long run, even if it came at the cost of short-run reductions in total numbers, simply because it would massively clean up the system as it currently exists and lead to some semblance of order and control.
Nathan: And points-based, just to make sure I understand the policy, means you get points for having advanced degrees and for various other positive traits that we want?
Sam: Right, like a more Canadian-style system. And again, as a Canadian, I kind of like that system. I think it actually allows you to have higher rates of immigration in the long run, because it gives a sense of democratic ownership over the policy. Look at what drives xenophobia and anti-immigrant sentiment: it's not necessarily rates of immigration, it's whether those immigration rates come through democratic consent or are a product of chaos and lack of control. And on the point about this border deal, you don't want to put too much stock into it. It wasn't going to solve the problem. The main thing it was going to do was increase the number of judges for the asylum courts so they could process more of these claims. And the Republican argument was that this is another kludge, another patch to the system, one that is only going to further delay the more structural reform we need, because it's a release valve on the pressure for deeper reform. This is actually a recurring pattern in how a lot of Republican votes get interpreted by the mainstream media, where they say, oh, if you're for the child tax credit, then why did you vote against it? And the thing is, there's actually more of a game theory behind this, and I'm not just wishcasting or projecting; this is what they will tell you: we want to get deeper structural reform done, and that will only be possible if we have the motivation and the will to do it, and if we do little patchworks here and there that give the semblance of fixing the problem, that kills the political capital for deeper change. And time and time again, from my personal experience working on the Hill, Republicans are the ones proposing actual deep structural reforms to things. Even something like healthcare, where Obamacare was the biggest healthcare expansion you could do without upsetting anybody: how do we do this big policy without actually doing anything structural? That, again, is part of the problem: all these policies get filtered through existing coalitions of interest groups that want a particular outcome without rocking the boat. And we're going to need to rock the boat. I think that time is coming.
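To make the mechanics concrete, here is a minimal sketch of how a points-based score might be computed. Every category and weight below is invented for illustration, loosely in the spirit of Canada's system, not actual policy:

```python
# Illustrative sketch of a points-based immigration score.
# All categories and weights here are hypothetical, for illustration only.
def immigration_points(age: int, education: str, language_level: int,
                       years_experience: int, has_job_offer: bool) -> int:
    score = 0
    score += {"phd": 25, "masters": 23, "bachelors": 21, "secondary": 5}.get(education, 0)
    score += max(0, 12 - abs(age - 30))      # peak points near age 30
    score += min(language_level, 10) * 2     # tested language proficiency
    score += min(years_experience, 6) * 3    # skilled work experience
    score += 10 if has_job_offer else 0
    return score

# e.g. a 29-year-old master's graduate with strong language skills and a job offer
print(immigration_points(29, "masters", 9, 4, True))  # -> 74
```

The point of such a system, as Sam argues, is that the scoring rule itself is legible and democratically adjustable, rather than being an accretion of bespoke visa categories.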
Nathan: I don't disagree with that. My second son is named Teddy, in part after Teddy Roosevelt. And I think a lot of this comes down, for me, to: do we have the right person? Teddy Roosevelt did some amazing things that people thought somebody from his class or his party could never do, because he was willing to cross those lines. I wouldn't say he was a traitor to his class, but to be dramatic about it, you could describe him that way. Certainly he was willing to speak truth to his own constituencies when needed. I just don't see Trump as that guy, and I don't see the Republican Congress as largely full of those guys. That's not to deny that you're having some substantive discussions behind closed doors. But at some point, if they're so substantive, why are they keeping it such a secret? I have a very hard time believing that all that many of these Republicans who project as abortion-focused religious zealots are in fact very thoughtful state-capacity structural reformers. There are a few, but I don't buy that there are all that many. And I just don't think this is the group that can reform America. When they were given the levers of real power, as we've recently seen, they didn't do it: they did a big tax cut, they didn't do the immigration reform. And give me some relief, too. It's also just cynical, right? The idea that, okay, we're going to hold this issue hostage, we're going to keep the pain alive, because that's the only way we're going to get a real win in the future. I don't know. How about incremental progress? How about taking wins when they present themselves? How about relieving pressure from a system that, by the Republicans' own account, is about to blow? If we're being invaded, as they say... It's recently been reported, and I don't know if this is really true, that Trump is somewhat confused between asylum, the immigration concept, and asylums, the places that hold crazy people. He certainly has said many times that they're sending criminals and crazy people into the country, so who knows what he believes. It's all very hard to read. But to bring it back to the AI question: I don't think it's a very healthy outlook on AI to say, oh, well, we could get a win here or do something sensible here, but it's only on the margin, therefore let's hold off, or kill that deal, because we want to let the pressure build until we can really do the right thing. That just does not seem like a healthy approach. Trump was not in command of the exponential of COVID. I have the receipts for that. He was tweeting in February of 2020 that, oh, this is how many people have had it and only this many people have died, compare that to the flu, and whatever. He just clearly did not get the underlying nature of the process he was going to be forced to deal with.
And if he brings that same sort of outlook to AI, where it's kind of like, oh, this thing is not that big of a deal, or I can kick the can down the road, or I can use this as a wedge issue in the next election... I just don't think these are the guys. And they almost all are guys, too. I do think we would do very well to have some meaningful reform in this country, but are these the guys that are really going to deliver us that reform? I just don't see it at all, and I would rather kick the can down the road one more administration. Frankly, I don't expect that the Democrats are going to... But Trump seems to be a burn-it-down guy. I don't see the build-it-up part. I kind of think of him almost as a tornado: this extremely destructive force that sits at the middle of two giant opposed masses and is kind of this vortex of energy. It's largely destructive, right? Some towns that got wiped out by tornadoes may, years later, look back and say, well, actually, we have a nicer main street now than we used to, because we had to rebuild. But I don't want to go through that process on a national level, and that seems to be the base case. I'm not... I don't know.
Sam: I'm not sold on that. Creative destruction is the engine of prosperity in the private sector, right? Blockbuster had an opportunity to buy Netflix, but they passed on it, and ultimately Netflix was a new company that displaced Blockbuster. Uber and Lyft were new companies that displaced the taxi commissions. This is how creative destruction tends to work, and it can be very dramatic, and it does have costs. In the Uber example, there were taxi drivers in Paris throwing rocks off bridges, and people committing suicide in New York because their medallion became worthless, and so on and so forth. Creative destruction is dramatic even in the private sector. The problem is that in the public sector, we don't have a system of profit and loss that allows the NIH to fail and a new, better public health science funder to rise in its place. There's no competitive process. As a result, we get this buildup of entropy over time, where we're kind of living on the capital that was developed in the New Deal and Great Society eras, and that long twentieth century is rapidly coming to an end. There's going to be some necessary destruction to even begin the rebuilding process. Part of that looks like massive deregulation; it looks like gutting existing agencies, reallocating the resources you want to keep, and setting a new framework. Again, it's much easier to do that if you have some philosophical motivation or some vision you're trying to enact that sits above pecuniary interests. And I agree, Trump on the surface certainly doesn't seem like he's totally in command of the facts on any given issue. On the other hand, he does have this remarkable ability to absorb things. Often he'll echo what he's absorbed in a kind of mangled way, but you kind of get what he's getting at. One of the things that's leaked thus far on their potential AI policy is an executive order drafted by the America First Policy Institute that calls for Manhattan Projects on AI, that calls for broad-based deregulation, and that situates this under a framework of, we need to make America first in AI. And speaking as somebody who took part in one of the Project 2025 policy committees, a totally voluntary role, sort of an outside participant submitting some memos on what AI policy should be: I was actually quite surprised, and I can't really reveal too much, but surprised by how many AI-safety-field people there were on these committees. We had actual memos on timelines to AGI, what the risks are, how we respond. There's been some worry that the Trump administration would repeal the Biden executive order and therefore repeal the compute thresholds for monitoring large models. My sense is that they will retain those provisions and do more of a repeal-and-replace day-one executive order, keeping some of the monitoring capacity but getting rid of all the things that are hindering adoption within government, and doing these Manhattan Projects. Certainly within the EA world there's a lot of debate about whether this is a good thing or a bad thing, because you don't want AI to become a true arms race.
On the other hand, my intuition is that if you're going to take AGI seriously and respond in an appropriate way, you at least need to believe it's a thing that can happen and have some realistic expectations of the timelines. And I can assure you that people within the Trump world, within the Heritage world, people who would be on his NSC and so on down the line, are situationally aware, in the Aschenbrenner sense. They see what's coming and they take it very seriously. Partly this is because a lot of conservatives, if they're not libertarian, at least were forced to sit through some libertarian seminars at some point in their career, and there's some adjacency between the libertarian world and the rationalist world and the EA world. I came up through all that stuff. I was at Mercatus. We all read Deirdre McCloskey and studied the Industrial Revolution and Tyler Cowen's great stagnation, and we're always trying to think, where's our flying car, right? That, to me, seems at least a prerequisite to getting the policy right, because at least it gives you the sense of gravity. Now, to the question of whether it's appropriate that we accelerate a kind of militarized approach to AI: I don't think that's what this is. I think overwhelmingly the talent and capacity to even build a superintelligence, let's say, is still in the private sector, and that doesn't seem to be going anywhere. And certainly if the Department of Defense or the Department of Energy had some secret project, we'd probably be able to see it from space, just because of all the energy it would need. But what we do want to be wary of is that, to the extent some of these dynamics are already baked in, it's probably inevitable that the intelligence community and security apparatus is going to become situationally aware, if they aren't already. And you'd much rather that be done out in the open, with some explicit, conscious intent. And you wouldn't want to base your decision on who to support on something that was probably inevitable anyway. So then the question becomes who gets there first. This is the Aschenbrenner thesis as well: to the extent a lot of this is already baked in and the slow takeoff is already taking off, do we want China to get there first, or do we want the US to get there first? If getting there first means having a decisive military, technological, and economic advantage in perpetuity, I think you'd much rather the US get there first. Moreover, if you're worried about Skynet scenarios, you probably also want the people governing that policy to be militantly anti-censorship and anti-surveillance, which just so happens to describe a lot of the people in the Trump world, who are terrified of big tech censorship and so on. And I think, even before we get to superintelligence, this is going to be a really important issue, just because of all the ambient information we're constantly shedding and the ways existing financial reporting systems and big data could be applied. We could have LLMs and multimodal models spying on everybody right now. The NSA could have a dragnet surveillance program that filters data using an AI to detect whether it would pass a FISA warrant or not.
And I would much rather have people in there who could plausibly pardon Edward Snowden than people who don't think it's even an issue.
Nathan: Let me take it from the top. The top is: if we are racing China to some notion of strategic dominance indefinitely, I think we're all kind of screwed. We are still living under the shadow of the last Cold War and the arms race. The US government still has something like 5,000 nuclear weapons in some form of operational readiness. That, first of all, raises the cost of creative destruction in the government sector a lot higher than in the taxi sector, not to diminish the personal tragedies involved there. I don't really think we have the luxury of a very chaotic creative destruction of the US government without it being truly catastrophic, just in virtue of the sheer weapons cache that we have, which the president is, according to policy, independently authorized to make a decision on, right? Now, hopefully there would be somebody in the chain of command or something...
Sam: Reforming an agency is much different than state collapse, right? I'm not talking about state collapse. That would be catastrophic: if suddenly the U.S. government had total system failure, and the world was flooded with deepfakes and drone swarms, and we didn't know who was handling the nukes, that would be really bad, I agree. But I'm talking about something a little more mild. To me, again, as a Canadian, and just looking at other parliamentary systems, it's totally normal in a parliamentary system for a new government to form (they literally call it forming a new government) and then for ministries to be shut down and new ministries to be created. My political idol growing up was Jean Chrétien. In the late 90s, he fired a third of the federal civil service in his unilateral budget, right? The U.S. is uniquely ossified in this respect, partly because our system of checks and balances makes it very hard to do more dramatic kinds of change. And my basic thesis is just that this change will be forced upon us.
Nathan: Yeah. I mean, we could eliminate the filibuster; there are all kinds of incremental steps we could take to get ourselves unstuck. But going back to the top: if we're racing China for indefinite strategic military dominance, I don't think that's a race either of us can really win. Even if we last a while... There are various estimates, of course, of the annual risk of catastrophic nuclear war in today's world, but I think most people would have a very hard time saying it's less than a tenth of a percent. And we're not dealing with it; we're just sort of living with that. And maybe it's even more than that, maybe it's half a percent or more. These things really start to compound over time. With AI, even if we can reach some sort of quasi-stable equilibrium, it's very hard to imagine an actually safe outcome that we arrive at through a race to strategic dominance. So I feel like the only way we get to a good place is to challenge that frame. And that's, again, where I don't have a lot of confidence in Kamala Harris either, but I have basically negative confidence in Trump. It seems like he will play the us-versus-China card to the end of the road, and he's not thinking far enough ahead, because it's not going to be his problem. Presumably, unless it happens really fast, he'll age out, and the rest of us will be here for however many decades, and our kids will be here, and we'll be living under, at best, a sort of mutually assured AI destruction regime. I really don't want us to create that. If there were anyone who would say anything contrary to this notion that we have to race China for AI dominance, they would be my candidate. But Trump seems like he's absolutely going to ratchet that up, and he's going to play to those fears. And what is a Manhattan Project? The history of the Manhattan Project is not exactly a glorious one, right? It's a scientific triumph in some sense, but notably, a lot of the people involved immediately regretted what they had done when they actually unleashed it on the world. They were like, shit, can we get a mulligan on this? And you can't. So I don't know what Manhattan Projects we really expect, but it's kind of chilling to me that that's the historical precedent we want to look back to, to say, you know, back when America was really great, unleashing the mutually assured destruction paradigm on the world. And, I mean, the counterfactuals are always hard...
Sam: And it created a century of peace, right?
Nathan: Again, when I look at the spikes, I don't know that it created a century of peace. I think it pushed a lot of mid-level conflict risk into the tail, and the tail is potentially what ultimately kills us. I mean, I think all war is bad.
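Nathan's compounding point is simple arithmetic, and a minimal sketch makes it concrete, assuming (purely for illustration) a constant, independent annual risk at the levels floated above:

```python
# Cumulative probability of at least one catastrophic event,
# assuming a constant, independent annual risk p.
def cumulative_risk(p: float, years: int) -> float:
    return 1 - (1 - p) ** years

for p in (0.001, 0.005):  # the 0.1% and 0.5% annual figures floated above
    for years in (10, 50, 100):
        print(f"annual risk {p:.1%}, {years} years: {cumulative_risk(p, years):.1%}")
# At 0.1% per year, roughly 9.5% over a century; at 0.5% per year, roughly 39%.
```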
Sam: But do you think the A-bomb was going to be built at some point?
Nathan: I don't know. I think that's very hard to say, and there certainly are plenty of people more expert in this than I am. But I think there's at least a credible case that if the U.S. had not built the A-bomb, maybe nobody would have. Germany wasn't close. Nobody else was close at the time. The Soviet Union stole the original IP from the U.S. As far as I'm aware, there was no other credible project on the path to the A-bomb. So maybe somebody else would have started one; maybe they wouldn't. Especially if we had gotten to a general period of relative peace, who knows? I think we can at least hope for a scenario where we don't end up with mutual AI destruction. We shouldn't take it as a foregone conclusion that we're going to have mutual AI destruction. So, MAID. I'm coining a new term here: we had MAD, now we're going to have MAID.
Sam: You know, there are all kinds of ways you can deploy even existing AI in an offensive way. Stuxnet was initially developed in the mid-2000s, right? So there are lots of ways you can cause damage with existing capabilities. The way I think AI potentially gives us supremacy is not necessarily through wielding a superintelligence singleton that imposes US hegemony on the world. It's more that if we are the first to have a productivity boom like none other, with major automation of R&D and factories and services, that gives us a huge dividend in terms of our economic prowess and lets us also do some of the defensive work. If there's any ethic among the AI people in the Trump world, it's a kind of defensive accelerationist point of view: a we-need-Iron-Domes-everywhere type of thing. And if you look at what China is already doing, the main bottleneck for who gets there first is going to be energy, at least in the medium term. China is building new coal-fired plants and has tons of battery and electric capacity. Trump seems to be situationally aware enough that he thinks we need to double our electric energy production by the end of the decade, and that's going to require embracing natural gas and deregulating nuclear even more than has already been pushed for. Energy abundance is much more likely under a Trump administration. And I don't see this in terms of mutually assured destruction so much as: US infrastructure is already deeply compromised, right? There's tons of stuff probably burrowed into our critical infrastructure that China could turn on at any point. The US government itself is incredibly fragmented. It's not like we have some single WeChat-style platform that everyone uses; different agencies have different cloud contracts, Microsoft is constantly having leaks, and so on. I think it actually is quite existential for the US disposition in the world. I don't think it necessarily has to be a race to oblivion. It's a race to getting to a point where we can credibly defend ourselves against oblivion and have a degree of deterrence, both technologically and through our economic strength. I look at the political actors in the world right now, and there's only one group actually talking about this in a way that seems at least proportional to what's coming down the line. And again, I would beg to differ on your characterization of who's been more hawkish on China. In some ways, the Biden administration has been much more escalatory vis-a-vis China, from Nancy Pelosi visiting Taiwan to the chip export controls, which I support. We're at a nadir in terms of our diplomatic relations with China, and that took place under Biden, not Trump. Trump cared about tariffs. He wanted a fair trade deal. He didn't care a whole lot about human rights, and he said he had a lot of respect for Xi. And he brokered a peace with North Korea, right? He's best buddies with Kim Jong-un.
And so there are lots of ways, I think, a Trump administration could actually surprise us on the upside on foreign policy, while still maintaining an America First posture.
Nathan: I'd be interested to hear more of your take on the chip ban. I've never been sold on that policy. I'm broadly a China dove, and I feel like the case has not really been made that this is something we need to do. I would totally agree that the chip ban is very escalatory, and it doesn't feel like the case has been made that it's actually going to help us. It feels to me like, this is the sad reality in my view, the one thing the two parties can agree on is that we need to stick it to China at every turn. And to what end, really? There seems to be some assumption in there that we're going to achieve some sort of weaponized dominance. Because if it's just about a productivity boom, then they can have their productivity boom too, right? Why do we need to cut them off from the productivity boom? When you think about it from China's perspective, it seems they can only interpret this as: these guys will do anything to stop our rise, and they're probably trying to create a strategic military dominance that we can never come back from. Or, at best, they're just trying to stop our mundane development and application of AI, and that's also a shitty thing to do if it's all kind of benign. Is there a story about this that makes sense that isn't predicated on the Leopold sort of narrative? There are a lot of chapters in that story, right, and they're not supposed to play out over that long a time: first we're going to use our little lead to solve alignment, and then, once we have that, we can finally talk to China and have a nice conversation with them. And I'm kind of like, maybe we could start having a nice conversation now. Maybe we could start building some trust. This is another issue where it's like, do I want to play the card of, yeah, we could deescalate with China now, but really it'll be better if we keep the tensions high, and then later, when the pressure's at its highest, that's when we'll have this great moment of reconciliation with China? That doesn't sound that great. I'd much rather pick up some wins along the way. But what would be your sort of non-MAID case for the chip ban?
Sam: Well, I view this as securing American standard-setting capacity over AI, right? The fact that we're something like 90% of cloud infrastructure, and NVIDIA is a US company, gives us enormous leverage over how AI governance actually gets implemented at the compute layer. If not for short-timelines AGI, I would probably be against the export controls, because they have been so escalatory and also easily evaded. But to the extent that they could add a year or 18 months or two years to our lead, I think they become potentially decisive, especially when you think about inference compute. Maybe we build the AGI in a lab and they steal the IP, but if they don't have the scaled infrastructure to actually commercialize it and make it useful, that gives us a huge upper hand. And this comes back to a line of thinking I've written a lot about in an essay series called AI and Leviathan, where I see a kind of knife-edge scenario: AI can either go in a very totalitarian, authoritarian, panopticon kind of direction, or into something potentially better, something that tries to preserve our liberty. And that gets much more difficult when it is possible in principle to have a Big Brother watching you. What does that look like in practice? Well, the country that comes closest to what I think we need to become is Estonia. Because they had a blank slate after the fall of the Soviet Union, and a bunch of young policy-hacker-type people in government, Estonia adopted all these e-government reforms in the late 90s and was first to have e-banking. This was all built on a system called the X-Road, which was like a precursor, pre-Satoshi version of the blockchain. So the Estonian government sits on this big distributed data-exchange layer that has an open API you can develop tools for: the same ID card that you use to ride the bus, pay your taxes, vote, and start a business, in a way that has the economies of scale of a big scaled-up system but with civil liberties and privacy protections built in. The whole thing is PGP-encrypted and so on. So it's no longer getting to Denmark; it's getting to Estonia, right? I don't think we're going to literally put the US government on a blockchain, but who's actually thinking in those terms, on that level? They tend to be on the more libertarian right. They tend to be the Balaji-type people. And that comes with a lot of baggage, right? Because Balaji may be great on the FDA, but he has totally zany ideas about the Federal Reserve. And you can say that about a lot of different actors on that side, and this is why it becomes a high-variance play. I'm not saying this is dispositive. I'm saying it's the gamble that's on the table. In the kind of political system we have, we get the turd or the douchebag, right? And we need to assess expected values over those two options. One has a much higher variance, but a low-variance incrementalist approach is the default, and the default path, if you look out at the stars, according to Fermi's paradox, is not a viable path. We need something that takes us off that default.
Nathan: Yeah, I would love to see something. I think I'm still inclined to fold this hand on this particular gamble and wait for somebody I think I can actually trust. On the chip ban for a second more: I don't know that I find the standard-setting argument compelling, so I'll try a different argument out on you. I don't know if I necessarily believe this argument; I'm broadly very uncertain about a lot of these things. I guess the highest-level summary of my worldview here, and the reason I don't want to put Trump in a position of power, is that I don't think we know what we need to do. I think we need to be keenly in tune with what is happening on the ground, very responsive to new information, very willing to update our worldviews, and I just don't trust him to do that. Even if he has good people around him, I don't trust him to listen to those people. So that's my Trump view. But now, going to the chip thing, and maintaining that same level of humility about whether what I'm about to say carries the day or not: I think there's an argument to be made that China is not going to be denied AI. They will make their own chips. They've seemingly made pretty good progress on that already. I don't think they're caught up, but I've seen them build a hospital in seven days; I wouldn't underestimate their state capacity to really mobilize and create this industry if they need to. They also have tons of great AI researchers. So my guess is they'll figure out a way to have enough chips. And if we say to them, hey, you can't use the same hardware we use, then the next thing might be the classification of research. By all accounts, even the frontier labs are getting more and more secretive all the time. Even OpenAI internally is now said to be quite secretive, to the point where most of the people who work there don't have all that much more information than we do about the status of GPT-5 or Q-star or Strawberry or what have you. The more secretive we become about all of that, perhaps the more it pushes China toward a significantly different branch of the tech tree. And I'll actually quote you on this: from your 95 theses, I thought one of the most compelling was that RL-based threat models have been prematurely discounted. I'm not sure that was exactly the right quote, but that's the worry I have. If we put China in a position where they can only understand the United States as an adversary, where we're trying to keep them down, trying to gain decisive advantage over them, cutting them off from hardware, cutting them off from research, so they've got to go their own way, who's to say they don't spin up enough compute to pursue some RL-based killer-bot scenario? Reward is all you need. And then we're back to screwed again, right? And did we really set standards in that case? I would say that, on the contrary, we balkanized. Right now, miraculously, we're still in a zone where I see US-China papers every week, often from Microsoft, but it can be others too, where we have leading minds across the US-China divide working on AI technology together. And it seems like we're going to end that; the trajectory is, we're going to cut them off.
They're going to have to go their own way, and they will go their own way, and who knows what comes out of that. The more we aren't talking to each other, and we're racing, and we don't even have a great sense of what one another's technology progress looks like, the more it feeds that dynamic. How does one pull back from that? The only way I can see to pull back is to begin to ease up on them a bit and say, hey, why don't we share some benefits sooner rather than later? Let's maybe try to stay on the same tech tree. If we were really inclined to take AI risk seriously: one shared global tech tree that we can all kind of be on and understand. And the LLMs that we have, for all their flaws, have a decent understanding of human values. I can have a very sophisticated conversation with Claude about any number of ethical quandaries, and I would say it's more ethical than most people that I know. And there's a very different type of AI that they could create to try to take an asymmetric strategy against us. In our effort to quote-unquote standard-set, it seems like we run the risk of pushing them to a totally dark portion of the map that could have some very unpleasant surprises for us. I'm, again, not super confident in all of that.
Sam: I like export controls partly because they're kind of robust to many different worlds, right? We don't know what the final architecture for AGI will be, but my expectation is that it will still benefit from accelerated computing and scale. And if you want many AGIs revolutionizing the economy, you need lots of inference compute on top of that. The export controls that were introduced in 2022 and strengthened in 2023 set a complex threshold, but it's roughly around the H100 level. NVIDIA produced on the order of half a million H100s last year. This year, they're projected to produce 4 million, right? And then there's the new series of chips coming behind that. The thresholds where they're set today exert a cost on China, but it's a relatively manageable cost. That cost will escalate over time, as the delta between where the threshold is set and the volume of chips being produced above that threshold increases dramatically. So China is ramping up its efforts around smuggling and so forth. A big part of state capacity here is investing in the Bureau of Industry and Security, which enforces export controls, so they actually have the capacity to monitor supply chains, clamp down on smuggling, and identify new Chinese entities and add them to the list. But I do think the export controls are really binding, and they're binding regardless of whether you take a more RL-based approach or a transformer-based approach or whatever the case might be. It's all going to need hardware. I think there's a longer-term risk that this forces them down the tech tree of doing some energy-based chip, an Extropic kind of thing, a totally new paradigm that induces a platform shift, but I think that's a ways away. The Semiconductor Manufacturing International Corporation, SMIC, China's state chip national champion, broke ground in 2000, I think. They've been working on this for 24 years, and not only have they not caught up, I think they're still at least a decade behind. That's my impression. They did make progress with the seven-nanometer chip with Huawei, but that was basically repurposing old semiconductor manufacturing equipment. It was impressive, but they were really squeezing the juice out of existing equipment, and they're now completely cut off from the newest stuff. Those controls could go even further, because ASML is still servicing their equipment, and that equipment breaks all the time. It's much harder to indigenize your own chip production, especially at the frontier; it's probably one of the hardest things any country could ever do. They've been trying to do it with chips, they've been trying to do it with wide-body airplanes, and they've failed on both counts. So I'm not so worried about inducing a search down some dark part of the phase space, at least on that dimension, especially when we're talking about potential transitions in the next five, six, seven years. Anything that pushes that lead out, I think, redounds to the U.S., just to give us at least some breathing room, partly because our political process moves so much slower.
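The escalating-cost argument is also arithmetic, and a toy model makes it visible. The 2023 and 2024 production figures are the rough ones Sam cites; the continued doubling after 2024 is purely an assumption for illustration:

```python
# Toy model: the stock of above-threshold compute denied to China grows
# with the gap between a fixed export threshold and rising chip production,
# so the cost of the controls compounds over time.
production = {2023: 0.5e6, 2024: 4.0e6}  # H100-class units, rough figures from the conversation

denied_cumulative = 0.0
for year in range(2023, 2028):
    # assume production keeps doubling after 2024 -- purely an assumption
    units = production.get(year, production[2024] * 2 ** (year - 2024))
    denied_cumulative += units  # every above-threshold unit is export-restricted
    print(f"{year}: ~{units / 1e6:.1f}M units produced, ~{denied_cumulative / 1e6:.1f}M cumulative denied")
```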
Going back to the question about diplomacy and olive branches and collaborations: I'm all for that, especially if it were done in a way that had some reciprocity and didn't carry this asymmetric cost, the fact that China has no qualms about taking our IP and using these joint partnerships to basically transfer U.S. technology into China, while we get very little in return other than citations. Our h-index goes up, but, you know, what could matter more than that, right? But then there's the question of who could actually broker that. I think what pisses off China more than a tariff war is being scolded on their human rights record and questioned about their internal politics. And while Trump is hawkish on China in one sense, he comes at it from a more purely economic standpoint; he was on the same bandwagon about Japan back in the eighties, right? For that reason, and again, this is maybe wishcasting, I think there's more potential to have stronger diplomatic ties while still turning the screws on the economic front.
Nathan: Yeah, something feels a little off there. I'm not one who goes down this path of, whose values are going to prevail? That's the new framing that I'm extremely allergic to. Now we've got Sam Altman on this vibe as of recently, with, I think it was a Washington Post op-ed, a stark departure from what he was saying even just one year ago. I went back and looked: just one year ago, he gave an interview in which he said, so often in the West we basically act like there's no hope of cooperating with China, and I don't think we should give up on that too soon. And now he's in the Washington Post saying there's no third way, it's our values or their values. First of all, I'm like: what are Chinese values? Can anyone who's saying this articulate Chinese values? What are they? Do we have a summary? It feels like we're basically just talking about Xi in many cases. Sometimes I say 5,000 years of civilization can't be all wrong; I think there are probably some Chinese values that we would also find valuable. But then there does seem to be something weird, too, and I feel like this is the theme I'm just not able to get over the hump on: somehow it's this big contest of values, our values versus their values, we're the good guys, they're the bad guys, we're for freedom, they're for not-freedom. And yet we've talked ourselves into this position where Trump, because he's not going to push them on values, can actually do a better job of somehow getting to something good. I don't know, it all seems very bank-shotty to me. If we actually believe that our values are better than their values, then why don't we just...
Sam: That's all we get, though.
Nathan: Well, I don't know. We could play it straight. If I were president, my proposal would be: let's deescalate with China, let's try to stay on a similar branch of the technology tree, let's have a shared sense of what we're building and where we're trying to go, and then let's tell them to let the people out of the camps at the same time. And if they're annoyed by that, fine. I would think they'd be a lot less annoyed by some chiding at the UN or whatever than they are by being cut off from chips. I don't know if you mean to say you think otherwise, but if I believe anything about the Chinese leadership, it's that they're quite practical. I would think they could handle some rhetorical abuse if we had actual technology sharing and a vision for a positive future of shared prosperity. How we got to the point where we say Trump is going to be better for our values because he doesn't even mention them to the Chinese, and that's somehow going to lead to better relations... I don't know.
Sam: Well, I'm not saying Trump would be better for the hegemonic imposition of our values on them. If you want that, then you should vote for Harris, right? I'm saying it in terms of having some separation, some firewall, whereby we're able to preserve our values and build an open society without being overrun by an AI takeoff occurring in a country with very different values. However hard those values are to define, they have it in law that no AI system may be developed that does not reflect socialist values. I think that's something we should at least take very seriously. And when it comes to multilateralism, if you look at the history of the last century, most acts of multilateralism are really just wrappers on American hegemony. There have been very few strong, binding multilateral peace treaties, arms control agreements, and so on that didn't ultimately depend on US buy-in. And we only bought into those things from a position of strength.
Nathan: Well, I guess two reactions to that. One is, we don't have that level of strength anymore, right? We are not in a position to dictate global terms for all that much longer. It seems to me China is going to have their seat at the table, barring some dramatic AI development that changes everything. And on the idea that a Chinese AI would run off and impose socialist values on us: we are the ones doing this. They are not leading; we are leading. It's our companies that are talking about AGI in the next couple of years, not their companies. I think we just need to look at ourselves. If anybody's going to have a runaway AI in the short term, the overwhelming odds are that it comes out of the United States. So I just don't like the idea that we hold up this Chinese socialist AI bogeyman as the reason we need to do all these things, when it's almost for sure going to be us. If anybody destroys the world with AI, it will be us; I would say it's easily an order of magnitude more likely that it would be us than them. We've just got to keep that in mind, right? We're doing all these things out of this sort of paranoia, and maybe that's too pejorative, but this fear... China has a policy that socialist values must be encoded in the AI, sure. But if an AI comes out of anywhere that's powerful enough that, had it come out of China, it would impose socialist values on America, then we're probably all screwed. I think the general AI safety view on this is: it's not your values or my values, it's, do we survive this thing at all? If it gets that powerful, do we survive it at all? And, this is a bit of a non sequitur, but I always think about the oxygenation of the planet, not of the universe. The oxygenation of the planet was the first mass extinction event. We have no way of predicting just how weird an AI future might be. If it gets that powerful that fast, such that in the next four-year presidential term something on the scale of a socialist AI taking over America is at all in the realm of possibility, then my sense is that the Overton window is way more open than that, and it's going to be way weirder than socialist values. Whether the bot can talk honestly about Tiananmen Square is not going to be important anymore. Instead, we need to get prepared for super weird and super disruptive. And again, look at ourselves: we're the ones that are going to do it, almost for sure.
Sam: Yeah, you're kind of making the case for me, though, because things are going to get super weird and super disruptive, and it's the Red Queen dynamic from evolutionary biology, right? You have to run just to stay in place. You have to co-evolve with the technology if we're not going to be completely subsumed by it. And that gets back to the discussion around institutions. My picture of the world is not that some hegemonic Chinese superintelligence takes over the world; I think that's maybe a little more fanciful than what I have in mind. I think more in terms of raw economic power, and using that power to establish some kind of new world order consistent with their views of how to run a society. That's more or less what the history of the 20th century was, and an AI could be the fulcrum for that kind of prowess. And I totally agree: if it is possible to train a world-destroying model on a single GPU, then we're damned either way. But if there's a possible world where we could have location verification or other kinds of kill switches on GPU hardware that we have a de facto monopoly over, and where the infrastructure that has to run these systems, due to their scale, has our governance standards built in, then we could potentially even export that technology to the global south and provide services that respect liberties and freedoms in a way that, by default, wouldn't happen, because most of their telecom infrastructure is Huawei, right? So in my own piece on the Manhattan Project for AI, I proposed, well, less a proposal and more what I think is going to be a kind of inevitability: a joint venture between the major labs and the US government. The US government is not going to do this on its own. But if there is a risk that superintelligence gets developed in a lab, something super, super powerful, then I think it's imperative that it be done within some kind of democratic structure. We don't want OpenAI to become a sovereign citizen, a wholly new source of sovereign power vis-a-vis the state. And you start to see this a little bit already. OpenAI has announced that they're working with Los Alamos, of all places, on red-teaming their models. Anthropic already has deep relationships with the national security establishment. And I think this is actually good. Partly it's good because it shows that democratic institutions are now getting involved. Partly it's good because it shows there could be a roadmap toward some kind of joint venture where the companies work together on the most sensitive parts of the research and have the scaling capital that only government has, to get a glimpse into the future. But it's also good because, whether or not we want this to be militarized, for better or worse, the defense vertical within the U.S. government is the one vertical that has lots of discretion and the ability to pivot and work flexibly. That's why the Manhattan Project happened at all: it was run like a startup. And if I look around the world for who our Leslie Groves is today, he's not in the U.S. government. He's leading one of our major tech companies.
Nathan: So what do you think is actually going to happen? This is a hard question, but when you go through that analysis, part of me thinks maybe we should just bring those interest groups back to the table. It seems like you take everything up to an intelligence explosion quite seriously as a live possibility. If I were confident that was going to happen soon, I would be moving more toward advocating for a pause. It seems crazy to me that we would push to that level of power given our current level of understanding. I've been very encouraged over the last two years by how much progress has been made in mechanistic interpretability, but we haven't seen the next model yet. Part of what's weird is that some progress is happening in the open and some progress is happening in places where we can't see it, both with respect to open source versus closed source models, and with respect to the capability of systems versus our ability to interpret and control them. We have these major step changes: it has certainly happened in the past, and I'm guessing it will happen again in the not-too-distant future. There will be another model that takes a big step up, and everybody will have to recalibrate. That will open up a gap where closed source is way ahead of open again, and then open will chip away at that lead until it gets close; right now they're very similar. Same thing with interpreting what's going on. There's been a lot of progress there, for sure, but you can't interpret something you don't have access to. Almost by definition, if there are AI systems under development that can reason at a human level, only a precious few people have had any chance to study how that works under the hood. We're at this moment right now where it seems like interpretability has kind of caught up, but another shoe may be about to drop. At some point, I expect myself to move toward advocating that we stop pushing the frontiers of this capability. It would be a mistake to race China. It would be a mistake to push it any farther, not beyond the current point, but beyond some point, until we really have a robust understanding of what we're dealing with. And that's, again, why I'm so allergic to all these arguments about China this or whoever that. We're the ones that are going to do it. We're going to make the decision to either push this thing to the point where it becomes genuinely dangerous to us, or to decide that we need to understand what we're doing better before we continue. Is that your worldview, or how do you see it?
Sam: Yeah. In my Manhattan Project piece, I talk about it more in terms of how to scale interpretability research, alignment research, and defensive applications. I think there's a subset of research you could describe as sort of gain-of-function, and you would much rather that be done in some kind of joint public-private framework that included very secure facilities, more top-secret protocols, and more compartmentalization. Another leg of the Aschenbrenner thesis is that we need much better operational security at these labs. We can talk about the fact that OpenAI has started to compartmentalize more, and so on, but I still imagine they're pitching the new training framework from a conference room where the security protocol is to close the blinds, right? There's still a lot more to be done there. And to do that, and to coordinate a pause in the first place, you have to start imagining worlds where the big three labs come together, agree to work jointly, are given some safe harbor from antitrust considerations, and get access to the scaling capital to actually see into the future. This is where there's a Yudkowsky view that scale is all you need, to the point where simply scaling up to the M-plus-one model could spontaneously learn to wiggle psionic bonds and escape from the computer somehow. I don't buy into those harder takeoff scenarios, but I do think there are worlds where we could trip into some kind of self-play mechanism or some other super-scalable approach that immediately reaps the benefits of the scale we already have and would be scary powerful, right? And if that is the warning shot that leads us to pause, a pause will be much easier to coordinate if the companies are already working together in some kind of formal structure, with some kind of blessing from the Defense Department. The US Defense Department spends a billion dollars a year on directed energy weapons, and I don't think we've ever actually used them. These are big microwave guns whose main output has been to generate conspiracy theories about Havana syndrome. So we pour billions of dollars into boondoggles that go nowhere. We could surely spend equivalent sums on a major alignment and peering-into-the-future effort to try to understand where this technology is going. Because if you can scale the hundred-billion-dollar model, you get some glimpse of what will be possible in a decade for much less. And that is something only nation-states can do.
Nathan: So the argument there is peering into the future. Yeah. I would feel a lot better about that plan with a genuine very stable genius at the top of the chain of command. So much depends on who does this work, how competent they are, what's influencing their decisions, and whether they'd have the courage to call a halt or to blow the whistle. I could imagine somebody I'd be inclined to gamble the future on and trust enough to do that, but Trump would be at the very bottom of my global power rankings. I would probably bet on Xi first, honestly. And I don't see eye to eye with Xi about a lot of things; I don't want to live in Xi's China, and I've said that many times. But who do you think is better able to oversee a public-private partnership, to peer into the future and channel billions of dollars of resources: Xi or Trump? Honestly, it's sad. I mean, Operation Warp Speed was that kind of thing; advance market commitments are all the rage now, but that was a Milton Friedman idea.
Sam: Yeah. So, I mean, we'll see. Civil-military fusion is happening on both sides of the ocean, and that's somewhat priced in. The question is, are we going to do this in a way that is proactive or reactive? My sense is that in a Trump administration you have Elon Musk and Vitalik Buterin and some of these people who could plausibly have the ear of the National Security Council, whereas a Harris administration, while superficially much more competent, more credentialed, and more expert-driven, will be turning to the ethicists and major hospital networks and other stakeholders for how to think about the technology. There's no genius without a little bit of madness, right? It's a package deal. And I say that as somebody who temperamentally loves someone like Walz, the Harris VP pick. If we were at the end of history and I didn't think there was going to be another major technological transition point in the near future, then sure, let's have competent managerial people increase SNAP benefits a little bit; I'm all for that. But that's not the world we're running into. There's a segment of the Democratic Party that basically sees itself as managing a relatively stable sociotechnical equilibrium, and that's just not the world as it really is.
Nathan: Yeah, Lord knows they may have a real wake-up call in store for them if they're actually in power as AI becomes what it might very well become. So I guess the choice you're putting to me is: would I bet on the Democrats to update their worldview in response to new evidence and get it together on a rolling basis, or should I put it all on black with Trump and hope that the right people whisper to him in the right way and he makes the right decisions? We don't have any great answers. Maybe we could conclude by drafting somebody for president. If you were going to put another name on the ballot, who do you think would be a good leader with the right mix of grounded temperament, responsiveness to evidence, idea-driven but not ideological?
Sam: Yeah, I think we put too much stock in the person, right? And that's sort of been the bigger message of this: we're not just voting for individuals, and we're not just voting for their epistemics. We're voting for blocs of power, establishments, and the political economy that comes with them. So it doesn't even matter if Harris reads Superintelligence or The Singularity Is Near and has a wake-up, if the political economy of the Democratic Party constrains what she's able to do. I don't know who I would draft for my presidential pick. We had our chance with Andrew Yang; he was maybe ahead of his time. But it has to be someone relatively outsider-ish, the kind of Dominic Cummings profile, because you need to be an exogenous force. If you're working from within an existing political establishment, you're going to be tied and constrained in what you're able to do. That's the classic innovator's dilemma: you're making incremental progress while the world changes around you.
Nathan: What do you think happens if Trump wakes up and says he wants to nuke somebody? Do you think that he can do it?
Sam: I don't think that's him. I think he's actually a bit of a pacifist.
Nathan: Well, okay. I mean, we're projecting a lot of hopes and dreams onto Trump during this exercise. I hope that's right. I'm not so sure. I think when he bombed Soleimani on the runway, he had no idea where that was going to go. And, you know, we're...
Sam: They wanted to do a lot more than that, right? And you had Tucker Carlson on the phone, pleading with him not to invade Iran.
Nathan: But I mean, I don't know. We could even cut this off.
Sam: He wasn't going to in the first place, right?
Nathan: We could even cut this off the podcast. I don't know to what degree I believe the individual matters or not. The line the public is broadly given is that the commander in chief makes these decisions, that we're ready to respond, that we carry a nuclear football around, and that it is supposed to be one person's decision. You could say, well, that won't happen, but Trump's done a lot of crazy things. That's not really in dispute; he's said a lot of crazy things. If he were to say something like that, do you think the chain of command acts on his instruction, or refuses him in that moment?
Sam: I mean, this is a total hypothetical. I don't think this would happen in the first place, right? His number one priority on getting into office is to broker a peace between Ukraine and Russia, and I think he's uniquely capable of doing something like that. He's terrified of World War Three and nukes.
Nathan: Like he's not, he's gone further than any other president has gone in invoking. He understands madman theory. He is a madman. So he has gone further than, I think it's undisputable. Right. Yes, he's definitely convinced me that he is a madman. He has gone further than any president in 60 years at least in terms of threatening nuclear war. I would say that's clear.
Sam: He walks across the DMZ and now they're bosom buddies. I think Trump has psyoped the world into thinking he's one thing when he's really not. Look at the actions again: take the NATO example. How are we going to actually get Europe to pay their fair share if not by making some kind of credible threat? And how do you make that threat credible without some actual capacity to pull out, even if it's feigned? Same with the Taiwan situation: we've benefited from a policy of strategic ambiguity about whether it's its own independent country or not. Those are the kinds of things, the sort of game theory, where having someone like Trump is actually an asset. But again, I don't think this is where things are going to hinge. The president obviously has a lot of unilateral power, and individual decisions made by a president can reshape the world order. On AI policy in particular, though, I think it's going to come down more to how things are delegated. If you had to appoint a colonel in a moment of crisis, who would that be? What sort of broad social base are you drawing on? Are you drawing from a bunch of ethicists and sociologists and union representatives, or from people who actually know the technology and know how to scale large organizations and large responses? Trump's CTO, Kratsios, is now head of policy at Scale, right? He ran his OSTP. These are very smart, capable people who are below the fray in terms of what you see on CNN. You don't hear this level of granularity about how the Trump administration actually operated last time around. It was much more rational than met the eye, and it was partly Trump's own fault that that didn't get communicated, because he was, again, telling people to drink bleach and mired in lawsuits. But look at what he did in spite of that.
Eric: I do want to be mindful of time. This has been a great discussion, but maybe we'll give Nathan the last word and wrap on that.
Nathan: Boy, how to land this plane after all this. There are two pills before you, and I'm inclined to gamble at some point. I think there is a strong case for reform, and I've been one to make it in years past; I've only stopped making it because of the current crop of leaders of the Republican Party. I buy a lot of the analysis that we're not going to handle a technological revolution by calling in union leaders from every sector to ask what they think should be done about it. I definitely buy that. I think it's undeniable that we need to reform many parts of government, and it seems like you do need somebody with some sort of independent power base to do that; it seems unlikely that a total creature of the machine could effect those sorts of big changes. So I do buy a lot of the overall story you're telling. For me, it ultimately comes down to: do I want to bet on this now? We don't have full control of the timing. I've said to people in the past, I'm sorry that the timing of the AI revolution is inconvenient for you. I can't move that timeline. But man, do I want to gamble on this group to do this? Do I want to put Trump in the ultimate say-so position, or would I rather stick with the incrementalists for another term, hope that they'll rise to the occasion if things really get crazy, and hope that we might get a better reformer at a future date? I can't get myself to want to bet on Trump.
Sam: I'm Canadian, so I can't vote, but you're in Michigan where your vote actually matters. So think hard.
Eric: Awesome. Well, that's a good place to wrap. Sam, Nathan, this has been a fantastic conversation. Thank you as always. And until next time.