In this special episode of The Cognitive Revolution, Nathan shares his thoughts on the upcoming election and its potential impact on AI development. He explores the AI-forward cases for Trump, featuring an interview with Joshua Steinman. Nathan outlines his reasons for not supporting Trump, focusing on US-China relations, leadership approach, and the need for a positive-sum mindset in the AI era. He discusses the importance of stable leadership during pivotal moments and explains why he'll be voting for Kamala Harris, despite some reservations. This thought-provoking episode offers a nuanced perspective on the intersection of politics and AI development.
Be notified early when Turpentine drops a new publication: https://www.turpentine.co/excl...
SPONSORS:
Weights & Biases RAG++: Advanced training for building production-ready RAG applications. Learn from experts to overcome LLM challenges, evaluate systematically, and integrate advanced features. Includes free Cohere credits. Visit https://wandb.me/cr to start the RAG++ course today.
Shopify: Shopify is the world's leading e-commerce platform, offering a market-leading checkout system and exclusive AI apps like Quikly. Nobody does selling better than Shopify. Get a $1 per month trial at https://shopify.com/cognitive
Notion: Notion offers powerful workflow and automation templates, perfect for streamlining processes and laying the groundwork for AI-driven automation. With Notion AI, you can search across thousands of documents from various platforms, generating highly relevant analysis and content tailored just for you - try it for free at https://notion.com/cognitivere...
LMNT: LMNT is a zero-sugar electrolyte drink mix that's redefining hydration and performance. Ideal for those who fast or anyone looking to optimize their electrolyte intake. Support the show and get a free sample pack with any purchase at https://drinklmnt.com/tcr
CHAPTERS:
(00:00:00) About the Show
(00:00:22) Sponsors: Weights & Biases RAG++
(00:01:28) About the Episode
(00:13:13) Reflecting on Trump
(00:15:32) Introducing Josh
(00:16:35) AI Arms Race Concerns
(00:20:20) Arms Race History
(00:22:35) Building Trust
(00:25:19) Aschenbrenner Model
(00:27:17) Global Good vs. Self-Interest
(00:28:20) Sponsors: Shopify | Notion
(00:31:16) Working with Trump
(00:33:54) Media Misrepresentation
(00:40:09) Cabinet Member Leverage
(00:44:41) Sponsors: LMNT
(00:46:23) China's Communist Party
(00:48:36) AI and National Policy
(00:50:14) The Reality of AGI
(00:52:39) Framing the Disagreement
(01:01:41) Slaughterbots and AI Future
(01:04:24) Risks of Engagement
(01:09:29) Sustainability of Military Tech
(01:13:01) Closing Statements
(01:14:55) Outro
SOCIAL LINKS:
Website: https://www.cognitiverevolutio...
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://www.linkedin.com/in/na...
Youtube: https://www.youtube.com/@Cogni...
Apple: https://podcasts.apple.com/de/...
Spotify: https://open.spotify.com/show/...
TRANSCRIPT:
Nathan: Hello, and welcome back to the Cognitive Revolution.
This weekend, we're running two episodes, which originally appeared on the Moment of Zen feed, focusing on the election, in which I attempt to give an honest hearing and questioning to two different AI-forward cases for Trump.
I like this exercise because, in my view, this election is ultimately a referendum on Trump.
My interlocutors are, in the first episode, Samuel Hammond, Senior Economist at the Foundation for American Innovation, and a thinker I generally very much respect – we cross-posted his appearance on the Future of Life podcast last year, and I also really appreciated his 95 Theses on AI, most of which I agreed with.
In the second episode, I speak with Joshua Steinman, Trump's National Security Council "Senior Director for Cyber Policy and Deputy Assistant to the President" from 2017 to 2021 – the entire Trump term!
Before launching into it, I'm going to briefly share where I've landed on Trump, specifically on how a possible Trump presidency might relate to AI development.
If you see AGI as a real possibility in the ~2027 timeframe, it seems totally reasonable to consider the election's impact on AI as a major decision-making factor.
Of course I understand people have other priorities, but this isn't a politics channel, so I’m not going to share my opinion on every issue - just AI and AI adjacent issues.
Interestingly, as you'll hear, I find that on a number of AI-adjacent issues, I agree with the Trump supporters I talk to.
To name a few:
- Nuclear energy is good – we should build more nuclear plants!
- Population decline does merit real concern
- Freedom of speech is valuable and should be protected
- On today's margin, we should have fewer rules and more rights for people to build on their own property
- We should cultivate a culture of achievement
- We should aim for an age of abundance – degrowth is nonsense
- It makes sense to prioritize high-skilled immigration, at least to some degree
- And… American companies like Amazon, Google, and Tesla should not be allowed to abuse their market power at the expense of consumers, but neither should they be subject to government harassment just because they are outcompeting many legacy businesses.
Fortunately, the Democratic establishment does broadly seem to be coming around on at least a number of these, but in any case, there are still three main reasons that I cannot ultimately support Trump, despite these points of agreement.
Those are:
- He's far too inclined to escalate tension with China, accelerate decoupling, and prioritize his own narrow domestic political interests over the national & global interest.
- The lack of rigor in his thinking & discipline in his communications seems like a recipe for unnecessary risk-taking in an increasingly volatile environment.
- I believe we are far better off approaching the future with a positive sum and inclusive mindset – not just within the US but globally, if we're to have a healthy conversation about a new social contract that befits the AI era.
On the question of US-China relations, I think we have a general failure of leadership and vision, on both sides, unfolding slowly but gathering more momentum all the time. People now see adversarial relations with China as a foregone conclusion – Joshua Steinman calls it "the physics of the environment."
To put it plainly, I don't accept this.
Conflict with China would be a disaster, and an arms race would take us ever closer to that disaster, but I do not see this as an inevitability, because I don't see China as a meaningful threat to America, Americans, or the American way of life. That's not to say the Chinese government hasn't wronged us at times – its coverup of early COVID, whatever the virus's origins, was shameful, and obviously Chinese agencies and companies have stolen a lot of intellectual property from American companies. I don't think we should ignore that – and of course we should take steps to make ourselves less vulnerable to cybercrime – but I think we should stay level-headed about it. The possibility that our grandkids could be speaking Chinese one day seems far more remote to me than the possibility that AI destroys the world.
I would say the Biden admin has done OK on AI policy domestically - the 10^26 threshold for reporting has aged pretty well for a 2023 rule, and I do believe in some strategic industrial policy – subsidizing the building of new chip fabs in the US so that we are not so easily disrupted by eg a Chinese attack on Taiwan seems a prudent step for a great power to take.
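For reference, the 10^26 figure is a threshold on total training compute, measured in floating-point operations. A rough sketch of what that means in practice, using the commonly cited 6 × parameters × tokens approximation for dense transformer training FLOPs (the model sizes below are hypothetical illustrations, not any specific system):

```python
# Rough check of whether a training run crosses the 10^26-FLOP reporting
# threshold, using the common approximation: FLOPs ≈ 6 * params * tokens.
# The example runs below are hypothetical, for illustration only.

THRESHOLD = 1e26  # reporting threshold from the 2023 executive order

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens

runs = {
    "70B params, 15T tokens": training_flops(70e9, 15e12),  # ~6.3e24
    "1T params, 20T tokens": training_flops(1e12, 20e12),   # ~1.2e26
}

for name, flops in runs.items():
    status = "over" if flops > THRESHOLD else "under"
    print(f"{name}: {flops:.1e} FLOPs ({status} threshold)")
```

The point of the sketch is that the threshold sat comfortably above frontier-scale runs when it was written, which is part of why it has aged reasonably well.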
That said, the chip ban still feels wrong to me, and I have to admit that Kamala's rhetoric on China also depresses me. There's no way for China to understand recent US statements and actions other than as an attempt to keep them down, and I'd say this escalation was premature at best – given the shape of an exponential, we could have retained the option value of cutting them off later, since the bulk of total hypothetical chip sales would still have been in the future.
Relatedly, I have been really interested to see Miles Brundage, OpenAI’s recently departed Head of Policy Research, saying that we need to make a much more proactive effort to demonstrate that western AI development is benign - which of course is much easier to do if it actually is benign, and to some degree open or otherwise shared.
If we have to frame our relationship with China as a competition, I would love to see us race them on metrics like life expectancy improvements, number of diseases eradicated, or perhaps tonnage to Mars. Of course, I do understand that it's a complicated situation, that naive solutions aren't viable, and that real solutions will be both nuanced and hard to find, and I do intend to invite more China experts onto the show going forward in an effort to more substantively contribute to a positive vision for the future.
For now, I do wish Kamala and Democratic leadership in general were less hawkish and more visionary, but considering how much trust has already broken down and the extreme difficulty of making credible commitments between the two countries, I'd rather have a steady hand, who is more predictably going to follow a sane consensus path, and might actually make some arms-control type deals, even if politically costly for them, to help us better navigate a tricky period.
Trump, it's well established, will do anything to avoid looking weak, has credibility with rival countries perhaps when it comes to threats but not positive follow through, and for me his withdrawal from the Iran nuclear deal, general taste for inflammatory rhetoric, and stated plans for blanket tariffs – which will hurt American consumers in order to generally stick it to the Chinese – all suggest to me that he is overwhelmingly likely to make things worse.
Zooming out from US-China relations, while I agree with Sam Hammond when he says that the more fucked you think we are, the more willing you should be to roll the dice with Trump … I don’t think we are actually that likely to be fucked.
I've been a bit ambiguous about this over time, often saying that my "p(doom)" is 5-95%, and I've meant that to reflect the fact that while nobody has convinced me that we don't need to worry about AI X-risk, neither has anyone convinced me that it's inevitable.
After all, while we don’t know how they work and see all sorts of surprising and sometimes alarming capabilities, today's best AIs do understand human ethics quite well, and seem to be getting at least a bit "more aligned" with each generation. This may not continue and we should absolutely be vigilant about it, but this is a much better position for 2024 than most AI safety people expected 5 or 10 years ago.
Today, facing a decision like this referendum on Trump, I recall the words of a wise friend who told me that we should think less about what the probabilities are, and more about what we can shift them to.
And here… I have to say that, with competent, stable leadership, I believe we can steer toward scenarios on the lower end of that range, where the nature of the risk is more intrinsic to the technology itself and less the result of letting domestic political incentives lead us toward imprudent escalations, AI arms races, or catastrophic impulsive decisions.
I often think of the role that Kennedy played in the Cuban missile crisis, where my understanding is that he overrode the recommendations of his military advisors to insist that the US would not escalate to nuclear war first.
That was heroic, but scenarios in which executive authority matters most can cut both ways. When I imagine Trump vs Kamala in moments of intense crisis, where single decisions could alter the course of history, I have to say that I find it much more likely that Trump would impact things substantially for the worse than substantially for the better. After all, we saw how he handled COVID.
To be clear, Kamala hasn't impressed me on the topic of AI, and in general her track record doesn't show the foresight of great leadership so much as a tendency to follow local trends and incentives. We could certainly hope for better. But still, if I have to choose a leader for a potentially highly volatile period of time, I'll take the stable, sane person who will listen to expert consensus, even acknowledging that the experts could be wrong, rather than betting that Trump will somehow manage to override experts in a positive way.
You'll hear my conversation partners make the case, which I won't attempt to summarize here for fear of doing it poorly, that Trump represents our best case to break out of a broken consensus and revitalize the American state for the AI era, but in the end, I just don't see it. It sounds like chaos, when we need capabilities.
Finally… when it comes to the future of American society, and the world at large, I think we have a never-before-seen opportunity to adopt a positive-sum mindset, create a world of abundance, and ultimately update our social contract.
I think OpenAI and the other leading AI companies do have roughly the right vision when they talk about benefitting all humanity. And I think Sam Altman, for all the other criticisms I've made of him, should be praised for his experiments in Universal Basic Income.
While neither candidate has shown this kind of vision, Kamala at least aims to speak to, and extend opportunity to, all Americans. I thought her best moment of the recent debate was when she said that when Americans look at one another, "we see a friend" – this is at least something of a foundation on which to start building a shared positive vision for the future.
Trump, of course, is far more zero-sum in his thinking and negative in his outlook, and that has real consequences.
I grew up in Macomb County, Michigan - one of those bellwether counties that swung hard from Obama to Trump. And I also have family in Ohio - my beloved Mama and Papa belong to the same cohort as JD Vance’s grandparents - they moved from rural Kentucky to southern Ohio for jobs, the whole bit.
And to be totally honest, one thing I have seen personally, is that Trump has brought out the worst in a lot of people.
While JD Vance, Elon Musk, and others in Trump's orbit are no doubt more sophisticated thinkers about technology than Trump himself, I can't imagine that his brand of cynical populist politics could possibly lead to a healthy national conversation about adapting to AI that is – let's face it – going to disrupt a lot of people's jobs, let alone re-imagining what it means to contribute as a citizen or to live a good life.
It would be shameful if we ended up hoarding the benefits of AI or restricting access for non-Americans due to a widespread sense of scarcity that isn't even justified by the fundamentals, but that's unfortunately the direction I'd expect Trump to take us.
Ultimately, the idea that Trump could be President as AGI is first developed strikes me as imprudent, with far more, and more likely, downside than upside.
By all means, listen to these conversations with an open mind and form your own judgment, but for my part, I can’t support putting a loose cannon in power as we head into such a potentially pivotal period, and so I will be voting for Kamala, mostly as a rejection of Trump.
Eric: Hello, sir.
Nathan: Yo, what's up?
Eric: All good, man. Good morning. Good to see you. Thanks for doing this. Still waiting a minute for Josh, but I thought we'd get started. Any quick reactions to the last episode that we did on this same topic? Josh worked for Trump, so it brings more of a personal insight or connection than an abstract think tank view. But before getting into it with him, I was just curious – any reflections or reactions from talking to Sam, or how you've been thinking about the topic since?
Nathan: I did go back and listen to the whole thing. And it was a little weird. I don't know. I felt like I kind of kept getting lulled into these scenarios of like all the great things that the, you know, highly competent Trump administration of our dreams might do. And then I look at the actual... you know, election as it's unfolding. And it's like, I just don't see the evidence in the actual candidate or like the way that they're executing a campaign to believe it, you know? And I also feel like there's this weird, I mean, politics is of course full of like contradictory messages, but I feel like there's a weird one happening where the criticism, obviously, and I don't even care about this too much, but the talking point on the Republican side from the sort of popular surrogates is like, who's the president? We have no president. The president's incompetent, whatever. Meanwhile, we've got Trump. He's a strong, singular figure, and his whole appeal is about what a strong, irreplaceable figure he is, and only he can fix it and so on. And that seems to be what the large majority of his voters believe about him. But then when we get on with Sam, it's like, oh, but the president doesn't really do that much. You know, it's like, it's actually all the people that he's going to appoint that are really going to matter. And so I'm like, well, which is it? You know, is this sort of a, if that's the real story, are we just kind of lying to the voters? Which I guess, you know, again, maybe all the candidates are sort of lying to the voters in some ways. But I actually tend to think that the person probably matters. That seems to be my default position. That's certainly like what the Constitution says. So I don't know.
Eric: Let me segue and introduce Josh. Josh, thank you for joining. I'm lucky to be a collaborator with Josh in that I'm on the Galvanic cap table, but Josh is also a friend and someone who helps me make sense of what's happening in politics. Josh previously served in the Trump administration, and so I thought it'd be great to bring him on and have this conversation as well. I briefed him that we previously had a conversation, and I think this is a good one because I think, Nathan, you represent a lot of people in this country who are first principled, not tribal, and really just trying to sort of call balls and strikes as you see it. And while you don't love everything that's happening on the Democratic side or left side, of course, there's something about Trump that just makes you uncomfortable. And I don't mean to dismiss that, I'm just saying that it is deeply unsettling in terms of the risk that he presents.
Josh: Sorry, what risk and what is it that unsettles you?
Eric: Let's get into it.
Nathan: Well, I focus all my time and attention pretty much on AI. And I think we may well be headed for a short-term situation in which AI systems become extremely powerful and pose all sorts of unprecedented challenges. On what time horizon? Potentially as soon as the next two to three years. So, you know, very much in... You don't think the grid... I mean, I think the window of possibility is very, very wide open.
Josh: I'm just saying like a bunch of folks that I really like have said that essentially the U.S. energy grid can sustain current rates of growth of AI power consumption until about 2026 and then essentially run out of power. So, I mean, are you talking about in that window?
Nathan: Possibly. I mean, that would be the near end of the window. You know, if you listen to somebody like John Schulman, who was the head of post-training and one of the co-founders at OpenAI, he was recently on the Dwarkesh podcast and said, you know, yeah, this could happen as soon as next year. This being like AGI, probably an early, not, you know, superintelligent AGI, but nevertheless, something that I think could be profoundly important, you know, altering of all sorts of dynamics and power structures, you know, within and across countries. And Dwarkesh was like, you mean next year? And he's like, well, that would be kind of a surprise, more like two to three, probably. And Dwarkesh was like, that's still really soon. You know, three is only 2027. So yeah, I mean, I don't know. The energy question is really interesting. I see huge efficiency gains happening all the time. And I tend to think a lot of these analyses don't take that into proper full account, but it's hard to say, you know. I mean, you can only see so many like 10x efficiency improvements before you're like, geez. Unless these are like fake or they somehow don't work, you know, when it really matters, then it seems like we probably will have enough energy. I've done a lot of energy analysis just in terms of like offsetting as well. You know, how many chats do you have to have with a model before it takes as much energy as like one crosstown car trip?
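Nathan's offsetting question is straightforward back-of-envelope arithmetic. A minimal sketch – every figure below is an assumed placeholder (per-chat energy and vehicle efficiency both vary widely), not a measurement:

```python
# Back-of-envelope: how many chatbot exchanges equal the energy of one
# crosstown car trip? All numbers are rough assumptions for illustration.

CHAT_WH = 3.0          # assumed energy per chat exchange, in watt-hours
TRIP_KM = 5.0          # assumed length of a crosstown trip
CAR_WH_PER_KM = 890.0  # assumed gasoline car: ~1 L per 10 km at ~8.9 kWh/L

trip_wh = TRIP_KM * CAR_WH_PER_KM   # total trip energy in watt-hours
chats_per_trip = trip_wh / CHAT_WH  # chats with equivalent energy use

print(f"One {TRIP_KM:.0f} km trip ≈ {chats_per_trip:.0f} chats")
```

Under these assumed figures the trip comes out to on the order of a thousand chats, which is the shape of comparison Nathan is gesturing at; swap in your own estimates and the conclusion shifts accordingly.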
Josh: I feel like we're sort of quiet. So just to be super clear. So the thing that concerns you is what about Trump with regards to AI? Tail risk.
Nathan: Tail risk. I think being- Like what tail risk? Creating an arms race with China. Creating an AI arms race with China.
Josh: Aren't we already in it?
Nathan: I mean, China's gonna three- I think we're gonna figure that out over the near term. I mean, not necessarily. I think that is probably, or has a very good chance at least of being, the key question that political leadership on both sides is gonna decide. If you can believe the reporting, which is hard to say, of course, we have recent comments from Xi suggesting that he might not be inclined toward an arms race, and he does seem at least open to taking things like existential risk from AI seriously. You've got Chinese Turing Award winners also coming out recently, joining American Turing Award winners, with statements about, geez, we might really need to slow this technology down – like maybe we can have an international treaty to not create slaughterbots. I don't think any of those things are inevitable. I think if we say, oh, we're definitely in an AI arms race with China, then we're probably fucked. And then who cares who's president, arguably. But I think I'm like a one-issue voter. If any candidate will say, I'm going to do everything we can to not have an international AI arms race and to try to make AI a peaceful technology.
Josh: Are you familiar with previous arms races with other competitive, aspiring global hegemons?
Nathan: I mean, somewhat. I don't know how many arms races you have in mind, but I... Do you think that countries tell the truth to each other when thinking about national security? I think it's very hard, but I don't... I mean, again, if you're going to just bake in an AI arms race, then I think, you know, from my perspective, that's kind of the end of the story.
Josh: I don't think, you know, to take one earlier... Have you seen the Chinese Communist Party's plans to 3x its total power output to 7 terawatts a year in the next 15 years?
Nathan: Yeah, that's great. I mean, they have a lot of people still living in rural poverty. So, you know, they've got plenty of uses for that. And I wish them well on their power expansion. I also would support, you know, at least some amount of power expansion here. I would love to see us build nuclear reactors. I'm not somebody who is, you know, anti-growth or, you know, anti-progress. I call myself an adoption accelerationist when it comes to AI.
Josh: Would you privilege Xi's words or actions when judging whether or not they're already engaged in radically expanding their capacity to compute?
Nathan: Well, we have cut them off, and this is a Biden policy, so I don't blame Trump for this, but we have set the tone in this dynamic most recently with a dramatic escalation in the AI domain specifically by saying, we are not going to sell you leading chips. And so, of course, they're responding to that by saying, well, shit, if you're going to cut us off, and now we're hearing all these comments from every which angle about arms race and decisive strategic advantage that's going to be achieved by AI. Of course, they're going to be trying to figure out what they can do to avoid that. But again, I challenge that dynamic. I don't want to see us in this AI arms race. I think we can begin. We should try to build trust.
Josh: So you want to see a candidate who's going to allow... people to buy whatever chips they want. I mean, do I get this correct? Like that's what you want. You wanna take off the sanctions. You wanna allow them to buy advanced compute. Why?
Nathan: I want to build trust. I think that if we end up in an AI arms race and we end up, you know, seeking strategic advantage over each other, we are going to all lose.
Josh: I'm asking you a very specific, very specific question. What task do you want someone to accomplish?
Nathan: Avoid AI arms race with China. Begin by building trust. Yes, share benefits now.
Josh: No, but you just said what you want is to take off the sanctions and let the Chinese buy advanced chips, which are necessary.
Nathan: I don't even think that's necessarily true. I think that, in fact, what we're seeing in the research, even from this week with a recent, potentially game-changing breakthrough – for better or worse, and probably both – is that distributed training is now starting to work. So the whole paradigm... this is why I also don't fully believe the energy story. I've got a buddy who thinks he can train at one-tenth the cost using FPGAs.
Josh: I'm under no illusions that we need advanced chips to train crazy models. Okay, so you want rhetoric. You're looking for rhetorical change from a political candidate. Is that your request? Yeah.
Eric: Josh, do you think that basically that we're in an arms race no matter what and sort of this idea of trust?
Josh: Yeah, that's the Physics of the Environment.
Eric: Yeah, so sort of a trust building.
Nathan: No, that is not. The physics of the environment does not dictate. I mean, unless you're a total, unless this is some sort of total universal determinism argument where like we don't have free will in this situation, then again, what are we even talking about? But if we have some sense of agency.
Josh: Look at the actions of the Chinese Communist Party. Like the Chinese Communist Party. Look at our actions.
Nathan: We are both currently escalating with each other at every turn. That is a choice that both political leadership regimes are making. And I think it's a terrible one. I mean, the last arms race, you know, you kind of raise.
Josh: I reject your premise, but I appreciate that you're trying to inject it. What premise are you rejecting there? That everything is completely escalatory. Like this is just great power politics. This is welcome to the history of the world.
Nathan: The history of the world is not on a good trajectory. I mean, how are we going to get to a good trajectory where we have peace between great powers and AI that serves us, as opposed to AI that hangs over us all like a sword of Damocles, as the nuclear arms race still does?
Josh: So you're interested in a candidate that will appease commercial powers inside China. And I'm just trying to understand what you want.
Nathan: Yeah, I would go for benefit sharing sooner rather than later, I think. I mean, I don't know what the... Let's take the Aschenbrenner model as sort of the contrasting point of view, right? Stylized story. You have to talk to me like I'm five.
Josh: I don't know what that means. I don't know what that means. Sorry. I'm a... I'm a simple man.
Nathan: Explain it to me like I'm five. In his Situational Awareness manifesto, in more or less his words, he said: here's what I think we should do. We should take the lead that we have on China, jam as hard as we can, stay ahead, use all kinds of available mechanisms to stay ahead, use the window of time that we have in the lead to solve alignment, make safe AI, and achieve decisive strategic advantage. Then we can go to China and have a conversation about benefit sharing. I would say... I don't like that plan at all. I would much rather see a plan that involves earlier benefit sharing and a collaborative approach to trying to solve the fundamental challenges. So you want to give more technologies to the Chinese.
Josh: Is that right? You want to give things?
Nathan: I mean, I would engage in trade with China. Yes. I don't think the case has... They're our largest trading partner. What are you talking about? Yeah, well, we've just cut them off from perhaps the most fundamental resource in the world at the moment. So we are in a period of decoupling. I would like to see us stay more coupled rather than continue to decouple from China.
Josh: Okay, so you're interested in closer alignment with the Chinese Communist Party. You're interested in giving them the tools to build the things that you fear the most. I'm trying to understand this here.
Nathan: I'm interested in working together as a global community to develop AI in a positive way, not racing each other to achieve strategic advantage with AI over one another, because I don't think that ends well for anyone. And it might not end well in any case. Do you think a president of the United States should represent a global community or the citizens of the country that they're leading? I think it's definitely a mix of both. I mean, you know, when you have global issues that affect everyone and that- Should there be a priority? Should one take priority? I think it depends on the issue. I mean, there's, when it comes to a pandemic, we're all in it together.
Josh: Give me an issue where there should be parity in between president's evaluation of options and judging the benefit to humanity, vice the citizens.
Nathan: Yeah, right now there's a monkeypox outbreak happening in Africa. If you're the president and you're sitting on a bunch of vaccines, you could say, well, we could send a bunch of vaccines to Africa and try to get that outbreak under control. That would be good for everyone in the world. Or you could say, let's just hoard those for ourselves. [Expletive] everyone else. We'll wait till it gets here. We'll all be vaccinated. Everybody else can deal with it on their merits.
Josh: I would vote for the latter, because I know of no one who's statistically likely to get monkeypox.
Nathan: You don't know anyone in the Democratic Republic of the Congo right now, perhaps. But those people are out there. And I believe that we should prioritize the global good over a narrow self-interest in cases where the global good is at risk.
Josh: I think you've got a candidate that you're going to want to support.
Eric: Let me zoom out really quick. This is a good debate because we don't hear this debate too often. But I want to get out from the weeds of this specific issue, which is obviously very important. And Josh, I want to hear from you a little bit about your experience working with Trump because there is a representation of who Trump is, what it's like to work with Trump. And from our private conversations, you said that that is different from your experience. So I would like you to articulate what is your perception of how other people have perceived sort of the previous Trump administration and Trump as a person. And then I'd like to hear from you where there's overlap and where there's difference.
Josh: Yeah, he's a really sharp guy. So, you know, I worked for four years. I was the senior official on the National Security Council coordinating all of our cyber, telecom, supply chain, and cryptocurrency policy. That meant that essentially when the president said, this is what I want our policies to look like, it was up to me and my team to structure national strategies and then ensure that all of the departments and agencies, DOD, CIA, Department of Energy, etc., conformed and executed those strategies. So my office was at the White House. I had a small team that worked for me, and it was our job to coordinate how the U.S. government functioned and what priorities it pursued. Yeah, I just found the president to always be thinking more steps ahead than I was. And it was a very humbling experience. Not that I'm, you know, some genius or anything like that. But, you know, often in meetings with foreign leaders, you look at the talking points that have been assembled by the sort of bureaucratic entities, such as they submit them, and President Trump would talk about things very differently. And it was only after a day or two of like poring through a bunch of research that you realized he was talking about political and economic priorities of the counterparty at the table. So I just think he's a really sharp guy, probably one of the smartest people I've ever met. I think that the challenge that he faces is that a lot of people aren't that smart. And so you have to find a way to communicate and find common ground with folks. And I think he's a great communicator. He's shown that over 20 years of being one of the leading TV stars of an entire generation, of having a huge real estate company and a bunch of other successful and some unsuccessful companies, just like every entrepreneur has hits and misses. I was always really impressed and enjoyed working for him.
Nathan: You buying his latest digital trading cards? Going long on the Trump token? Oh, he's currently hawking, unless this is like an AI fake, he's currently hawking digital trading cards for 99 bucks a piece. Buy 15 and they'll send you one in the mail, a physical one. I mean, I don't really care. It's just absurd. I predict that that will be a miss on the entrepreneurial ledger.
Josh: AI Crypto friction. I just love seeing it. That's cool. I got it.
Eric: Josh, why do you think other people don't see that? Like, what is it about Trump that some people think he's very sharp and other people think, you know, he's not a stable genius, you know, to quote the quip. Like, what is it about him that, you know, some people see the intelligence and some people don't?
Josh: Yeah, I mean, it was really eye-opening where... you realize that most of the world gets their information through a medium, right? A media, one might say. And, you know, those mechanisms are under significant control. Not all of them are under control. And so what I usually find is you run this loop with people who think that they know what he's like, or even what the policies are, which is that they read articles that don't represent reality. They make assumptions. And so when you confront them with facts, they go back to this set of media narratives, articles, press operatives, et cetera. And they say, well, no, that's not true, because I read the following words on a website. And, you know, when you work in one of these places for a long time, or even for a short time, what you essentially see on a day-to-day basis is people actively, either through ignorance or malice, misrepresenting reality. You just learn that it's a feature, unfortunately, of the system. So, you know, on a weekly basis, I would see articles in the mainstream media. People would send me breathlessly, like, oh, my God, what's going on with X, Y, or Z? Read the article and, you know, it'd be a total fabrication or a misunderstanding of what was actually happening. Furthermore, and this is the most interesting part, I'll give you an example. This is amazing. So I was a military officer for many years. Then I left and I went to Silicon Valley. When I was in the military, I started a luxury American-made CPG company. Not worth talking about. Anyway, I had a whole bunch of things that I did in the military. I got out. I went to a startup. I was running ops at this startup. And then, just through a strange turn of events, ended up at the White House. So two and a half, three years in, one of the senior national security correspondents, a guy whose name you know, whose articles you've read, had been begging White House comms to sit down with me for over a year.
I wasn't one of these guys who leaked to the media. I didn't really care. I've got a long list of things that I got done because I just stayed focused on doing the thing that he asked me to do. There were like five or six major things that he asked me to do, and I just went about and did them. But finally, in like year three, three and a half, something like that, we're like, okay, we'll sit down with this guy. Literally, like, best-selling author, writes for one of the top three newspapers in the world, on TV all the time, the whole thing. He comes in and pulls out his latest book, signs the thing, "To Josh," hands it over. "I'm hearing all these amazing things about you. Like you've done this, you've done that." And, like, it's clear that he's talked to people and he knows what I've actually done. And we went on to have a very in-depth, very direct conversation for about an hour, because he's writing this news story. Asking what I would consider to be relatively straightforward, strategically deep questions. Like, why are you doing X? Why are you doing Y? And me giving him very specific answers. He has rejoinders to those, and I'm like, but X. And he's like, huh, okay. Hadn't thought about that. So, you know, I found him to be a competent interlocutor. The story comes out, none of that is in there. The only line of description: Steinman, a former SOC entrepreneur, in over his head. So you have these engagements with these people and you realize that essentially, a significant portion of the time, they're acting in not good faith. I wouldn't want to say bad faith. And you just have to extrapolate that out to the news cycle. So when people are like, oh, geez, Trump's this orange-man-bad, at this point, you know, I can't help but laugh, because it's like you're talking to someone in the cave. Like, I just can't, there's nothing I can do, man. Like, you're in the cave, that's cool. Like, listen to the speeches, like, go direct.
That is how I've always tried to, you know, pull myself out of those types of, you know, situations of knowing. But I mean, friends, family, at this point, it's sort of like, I can't help someone that wants to stay inside that cocoon.
Eric: What do you say to someone who says, hey, you had a great experience. There's some people who had a great experience. But listen, a lot of people, or at least a few dozen people, who worked for Trump in the last admin don't endorse him anymore.
Josh: Kamala obviously- I'm going to go name by name and I'll tell you all the dirty laundry.
Eric: Less dirty laundry.
Josh: You want to find out who's paying them? Should we talk about the donors? Should we talk about the private equity firms? I'm happy to do it. Like every single one of those people has skin in the game and there's a very specific reason why they've done what they've done.
Eric: So enlighten us a little bit, not on a name-by-name basis, but more on a macro level. And obviously Kamala doesn't have people saying great things about her either who worked for her. So, you know, this is bipartisan, but- Maybe the bartenders. So you mentioned some examples of maybe some corruption. Give some of the macro reasons why people in the last administration, or who worked for Trump, don't endorse or don't have good things to say. What was the situation there? And why would it be different in the future?
Josh: Why would it be different in the future? I mean, there's a bunch of questions in there so I can sort of answer the one that I want. Look, it's really powerful. It's one of the reasons why I think it's hard to be a member of the cabinet because when you walk into the situation room and you've got genuine disagreements and like Nathan has... you know, some interesting kernels of disagreement that I think, you know, if we had a different type of conversation, we could sort of pull on. I don't think we're going to have that type of conversation, but nothing against you, Nathan. I'm just saying like, that's not where this is going. But like, imagine that you're sitting in a room and I'll sort of pull out what I would think, what I would call like the best version of your arguments that actually carry weight in that room. So you say something along the lines of, this is how much money US companies make selling these types of products to these types of customers in China per year. And you say, okay, we're going to take step X, we're going to cut off this, and they're going to build the capacity to build Y. And the long-term negative consequences for US GDP are going to be Z. That's sort of a standard formulation of a debate that happens a lot. And it doesn't have to be China. It can be other countries as well. You could imagine that this debate likely happened around the de-dollarization of the Russian Federation around the war in Ukraine, something which, you know, One could imagine had been discussed for many years, cutting them off from Swift, et cetera, but only happened for the first time. So essentially weaponizing the dollar, which the U.S. did to the Russians about whatever it was, like two, two and a half years ago. So imagine that debate, right? Nathan comes into the room and is like, hey, like, name your... you know, name your analog chip manufacturer, you know, those guys down in San Diego, I forgot the name, you know, Qualcomm, Broadcom, whatever. 
Like they're making these chips and we're afraid that there's going to be a significant hit, the Chinese are going to spin up a competitor, et cetera. Okay. So if you believe your position firmly, you need to have the ability to walk, right? You need to tell the president, like, if you're the Secretary of Commerce, like, I think we need to make this deal, or I think we need to not make this deal. If you don't have the ability to walk, you're essentially at the whims of the people that do have the ability to walk inside that room. What do I mean by that? I mean that people will be calling you if you're a secretary or a deputy secretary. People called me. People in the news that you read right now about many China-related things, some of them got my desk number, called me up, like big, big, powerful people. And they're like, do you know who I am? Because, just to be clear, Nathan, I architected our policies against Huawei. I architected Executive Order 13873, which is what you would have heard referred to as the ICT supply chain executive order. It's now a counterpart to CFIUS. It's the ability of the U.S. government to shut off a company from doing business in the United States if it has deep ties to the military or intelligence complex of a foreign adversary, including Chinese companies, Russian companies too, many other countries as well. So if you're the Secretary of Commerce and you hold that power, you walk into that room, and you've had powerful people calling, and they're essentially threatening you. They're saying, do you know who I am? Do you know what I can do? I know you need to work after this. You're going to do this thing for me. I've got leverage over you. And I think that what you're seeing right now with many of these people is, you know, people have leverage over them. They're not independently wealthy. They've got modest pensions. They want to sell books. They want to get on TV. They want to get board seats.
You know, 75K a year, or sorry, 75K a month from a, you know, top-20 technology company, nothing to sneeze at. That's sort of the going rate for these folks when they play ball. Like, you know, a million dollars a year here, a million dollars a year there, pretty soon you're talking about real money. And so, you know, it's the basics of what motivates human behavior: money, ego, compromise, et cetera.
Eric: I appreciate that articulation. Let's actually focus on China for a bit. Could you give a broad overview? We got in the weeds of AI in the beginning, and we'll get back there in a bit. But maybe you could start with just a broad overview of how you think we should be responding to China or engaging with China. You mentioned previous great power conflict.
Josh: The Chinese Communist Party is a technology-enabled totalitarian fascist dictatorship. That's what the Communist Party is. They're a Communist Party. They kill their own people. They harvest their organs. They create strange rape environments for members of minority groups that they don't like, Muslim minority groups that they don't like. They're threatening to take over a country with a long history of democracy, Taiwan. They bully their neighbors and they steal things that Americans build, that smart Americans build. There's over a trillion dollars of stolen intellectual property over the past 20 years. Go look up Advanced Persistent Threat 10, APT10. It's one of the leading government-sponsored hacker groups that the Chinese operate. And folks like that have been given huge shopping lists of, like, go and steal us this, go steal us that. And then, because the Chinese Communist Party controls China, you hand that material over to companies, to individuals that the government supports and likes. So I don't think that this is like dealing with your neighbor, right? It's not like the nice guy next door who, you know, maybe goes to a different church. This is an entity that is a revolutionary Marxist entity that wants to make the world safe for Maoist communism. And I just think that you have to sort of come with that approach when thinking about what you want to do with the Chinese Communist Party. You can be scared.
That's fine. Being scared of what a world dominated by the Chinese Communist Party would look like, totally reasonable. You can be scared that maybe things are going to spiral out of control, but you've got to remember, this is who you're dealing with.
Eric: And so what are your thoughts on how we should be handling AI, then? Or, sort of, how do you respond to the concerns that Nathan shared earlier around the arms race potentially heating up?
Josh: I think they're a synthetic straw man for a bunch of other things that maybe a lot of people advocating for these positions don't even understand, like changes in trade policy. And so I don't really feel the need to engage with the sort of, like, technical details. I can tell you that I do have friends that run some of these big AI companies. They're much more concerned about energy than they are about these strange policy angles with regards to international trade. And so I just don't think it's really that serious of an issue. And I don't mean serious as in important. I mean serious as in worth prioritization at the current moment. If energy is 35 cents a kilowatt hour in the United States, you're not getting AGI. If it's 20 cents a kilowatt hour, you're probably not getting AGI. Like, I'm not even sure that AGI exists, nor are you, in your heart of hearts. Like, you can believe that there's a synthetic, you know, entity out there that can represent itself as something that, to our minds, one could ascribe intelligence to, but I'm just not sure that that's the case. Yeah. You know, it's almost a theological question. So we can have a theological debate if you want. But to me, this is about processing power. This is about corporate power. It's about trade. It's about money. Like, those are real things in the world. All this other stuff is fantasy land.
Nathan: Yeah, I mean, I think you should study AI more. AGI definitely exists. We are a form of AGI; I would call us a weak AGI. There is no reason to doubt that something more capable than humans can be created. We are not the end of history. The timeline on that is very unclear. The energy requirements for that are also not entirely clear. But so many of these debates ultimately come down to: do you actually take the tail risk from AI seriously or not? It sounds like you don't. And if you don't, then sure, the whole debate is kind of moot, or at least the perspective I'm bringing to the debate is kind of moot, because if you don't really think it's a problem, you don't really think that tail risk is out there, then you can say, sure, who cares? Why would we prioritize that? But I don't think that is going to age very well. And it may age poorly even on the timescale of the next president. I would ask anyone watching to go watch the short Slaughterbots film, then go watch some videos coming out of Ukraine, and then watch the human-versus-AI drone races. And, you know, just extrapolate a little bit and ask: are we not on the Slaughterbots trajectory?
Josh: You and I are talking about different things. Okay? You're talking about technology futures. I'm talking about political power. So if you want to have a conversation about what's in the art of the possible: I literally cut my teeth in the military, when I wasn't deployed to Iraq, looking at technology futures. I was on the team that put, you know, some of the first unmanned systems in the hands of the people that were using them, 3D printers, augmented reality, literally a decade ago, in some cases more than a decade ago, depending on the date we're talking about. So I have no doubt that you're going to have, in fact, we already have, like, autonomous, and by we, I don't mean the US government, but I have friends that have started companies that do autonomous targeting, all these things. Like, I get all that. I'm talking about political power. I'm talking about what moves the needle politically. And I'm talking about how these things actually play out inside the rooms where decisions get made. If you want to have a conversation about what might happen with technology in the next 10 years, that's a different conversation. I'm talking about politics, because we're here ostensibly talking about President Trump.
Eric: And I want to just frame what I think is the difference of opinion here. Nathan believes that we are headed, under either administration at this point, because Biden has advanced policies that Nathan's not excited about, into this arms race with China. And Nathan believes that this arms race with China is not inevitable, that we are accelerating it in trying to compete, and that by identifying the arms race, we're accelerating it. And we need an alternative path that would remove some of the sanctions, would hopefully build trust, and maybe stop the arms race. Nathan, I'm sure, would concede that there are some risks with that approach, of course. And Josh has a much more realpolitik perspective, which is, hey, we are in an arms race. To, you know, remove sanctions would be to aid our enemy in the arms race. And thus we would be, you know, losing that arms race to a dictatorial, you know, communist regime.
Josh: In the fullness of time, the people who in great power competition have advocated for, you know, this type of thing, de-escalation, et cetera, are often one, two, three steps intellectually or financially removed from, I'm not accusing you of this. I'm just telling you, like, read Venona, like, you know, read Cold War history. You know, people who get wrapped up in these memetic things, you know, manias, usually end up having sponsorship in the counterparty. So, you know, I understand that we're afraid of this potential future. I don't disagree with you. I've made investments in this space personally with, you know, startups building military technology that's going to be able to do all this stuff, because I'm terrified of it.
Nathan: Talk about your financial conflict of interest. You're throwing out that everybody else is sponsored, and you've got direct investment in military technology. Yeah, 100 percent. But that is somehow not a conflict? Why is that supposed to undermine me in the abstract, when I don't have any?
Josh: No, because I'm saying I'm afraid of the same reality that you're afraid of. But I'm investing in these companies because I want us to have it as opposed to the other side to have it. And what you're saying is that you're afraid of it and you want to get a bunch of words on a sheet of paper as a mechanism to try and prevent that eventuality from coming to pass. And I'm telling you, words mean nothing.
Nathan: Well, they're a start. I mean, do you think Reagan was making a huge mistake when he engaged in arms reduction treaties with the Soviet Union? Was that a terrible idea because words mean nothing? As far as I know, the actual number of deployed nukes came down dramatically. And while not nearly enough, I would say that's a very good thing. Why is it not possible to execute something similar in the AI era?
Josh: Because so many other things were happening at the time that caused those things to happen, like Soviet economic collapse, overmatch, the extension of their military-industrial complex into expeditionary, you know, conflicts in Central Asia, et cetera. Like, I don't think that the story that you've told yourself about why things happened is a reflection of reality.
Nathan: I'm not telling myself any story. I'm just saying there is precedent for arms reduction treaties. There is precedent for arms control. Still very few nations in the world have nuclear weapons. Many could develop them if they chose- I think we're doing a great job of arms control.
Josh: Limiting the chipsets that go to the Chinese Communist Party. I think that's a great start. Let's keep doing it.
Nathan: What's your plan? How are we not all going to be living under the threat and possible actual reality of a militarized AI arms race? How do we not end up there? Because if we end up there, it's bad for everyone. I mean, we could all die in a nuclear war anytime, right?
Josh: Oh my God, the Sweet Meteor of Death is coming.
Nathan: Do you deny that? I mean, if you're going to say, like, oh, the nuclear sword of Damocles that we have is no big deal, then I think that's just ridiculous. Like, there's some finite probability every year that we could have a nuclear Armageddon. That is going to, in the fullness of time, end our species if we don't deal with it in some other way. The probabilities are going to accumulate and there's just no escaping that. The only thing we can do is decommission the weapons. So you want to decommission all nuclear weapons? I think we should decommission a very... I think there is maybe a small amount that could be useful for deterrence. We're way beyond that. We do not need the capacity of nuclear weapons to destroy the world fully, and we do. And it's a huge strategic blunder that the public... Everybody who doesn't overthink it knows that the world is not in a great spot for having 20,000 deployed nuclear weapons. So what is the right number? It could be zero. It could be a small number that's just enough to make sure nobody fucks with you. Fine. But we're not in a healthy place, right? We've had many close calls. We've had many sort of false alarms. You know, we've got Petrov Day that we celebrate because one random dude had the backbone to override what his signals were telling him at a critical moment. Who knows how close we came in the Cuban Missile Crisis, but we just can't keep living with this persistent threat of annihilation forever. It will one day catch up with us. So if we're going to add another one with AI, that seems to me very bad. And, you know, my question to you is, what's your plan?
Josh: Yeah, I think you have no way, no frame of reference for how reality actually works. No, I mean, like, I genuinely mean that. Like, I think you're well outside it.
Nathan: What's your plan? You can insult me, but what is your plan? How do we get to an AI prosperous future that is not a mutually assured destruction? We're going from MAD to MADE.
Josh: Yeah, the Mayan plan was very interesting for how to stave off these types of cataclysms. When you defeated adversary tribes, you brought their warriors back, put them on the throne, split open their chest, removed their hearts, and allowed their blood to, you know, fall out on the temple.
Nathan: Great analogy. What is your plan? What is the plan? What is the Trump plan? What is your plan? Give me a plan. I'm at least giving you a plan. You're just insulting me by comparing it to the Mayans. I have not heard any plan.
Josh: I have no connection to the President. I'm not a part of the campaign. I run a private company right now, so I can't speak for the President.
Nathan: Make it your plan. What is your plan? If you were president, if you were advising, whatever hypothetical, just tell me a plan. What is the plan for a good outcome that doesn't, I'm not asking for another round of insults against me. I'm asking for a plan. Give me an outline of one.
Josh: You think that the way in which policy gets made is that people screaming loudly get to elicit some type of formulated structure that responds to their queries. That's not how things work in the world. I'm telling you, like- Okay, but what's your plan? You're just doing it again. Continue the pressure on the Chinese Communist Party and shift semiconductor production to the United States. That is a plan on one issue that is an actual issue that people talk about. Not this thing that you're talking about. Not, like, this "I'm afraid, please comfort me."
Nathan: That's just a step on the path to, I mean, I think we should also have some domestic chip manufacturing capability. So I don't think it's a good situation that it's all, for many reasons, not just the AI arms race, we should be able to make our own chips. I think we can agree on that. However, as presented, that is not a plan to reach some sort of stable AI future. That is currently one move on the path to the AI arms race. It's been framed that way.
Josh: Your structure of approaching this problem of a stable AI future, again, comes from a place that I don't accept. It comes from a set of experiences that I have no exposure to. Like, that's not how people think.
Eric: Just to sort of add to that, Nathan, earlier you said if someone isn't afraid of AGI, then this conversation is a little bit moot, because they're not super worried about the AI arms race in terms of things getting out of control with the technology and its threat to humanity. They're mostly just concerned about beating China. And so Josh's proposal is consistent with that, which is sort of to shore up our domestic capabilities. Is that how you perceive the situation too, Nathan? Or, when you flesh it out exactly, what is the concern on the AI arms race that you have?
Nathan: Well, honestly, I just watched this Slaughterbots thing again recently. I think it's a very good short visualization of what the future might be like. It is. But, you know, many good pieces of fiction, I think, can inspire good thinking. Notably, the movie Her is inspiring the AI developers right now in their product development. So basically, this depiction of a future of out-of-control, highly weaponized AI is just one where everything is destabilized because everybody's under constant threat of assassination. You've got tiny little autonomous things that can take out any target. They overwhelm, you know, they sort of swarm defenses. And this is the trajectory that we're on, right? We are headed for- 100%. Right, so this is bad. So, you know, we can sort of dismiss people as naive who want to avoid that future. Or do you want it controlled by the United States? I don't think either party can control that sort of technology or dynamic. And so I think the only way that we're going to have a good future... So you're in favor of continued sanctions? No, I'm in favor of working together to avoid that branch of the technology tree. What does that mean practically? Is it mutual sanctions? Well, I think it's a many-step process. I mean, you've got to start by building some trust, right? The two powers right now don't trust each other. We are locked into a period of mutual escalation, mutual decoupling. And the first thing is to extend some olive branches to try to reverse that process so that there can be some form of trust-building, so that the world's two great powers- What olive branches do you propose we send to our largest trading partner? Of course, in any complex thing like this, I don't think there's a single one-sentence answer. I would first change our attitude, and I would start- You want to send them the advanced chips?
So that instead of having one party that can do this, you want two? I think that we should try to work together much more as one party than as two. Is that going to be easy to achieve? I don't think so. A communist party? The communist party has been many things over time, right? We currently have a leader there who we don't like. We also previously had Deng Xiaoping. Before that, we had Mao. I mean, their leadership can change just like our leadership can change. I don't think we should cast ourselves as their permanent enemy, or vice versa, because who knows what openings there may be in the future. Nixon went to China, right? I mean, there is the possibility for much better relations between the countries to come. And we foreclose that possibility to our own detriment, I think.
Eric: Nathan, do you concede the risk of doing that, given, you know, that they might take advantage of us like they have for many years? There have been lots of efforts at trying to liberalize or create good relations, and they haven't always been responded to well or been met, you know, reciprocally, to put it mildly. And that hasn't been great for us, right? A big critique of our policy over the last 20 years is that we didn't take the threat seriously enough or soon enough, and thus we enabled them to build and gain a lot of power. Now they're a great power alongside us. It's a great 5,000-year-old civilization that had a down century, but 30 years ago, you know, the economy was very different. And so some people might be listening, and there's a question as to, like, is this just repeating the same mistakes that we've made, which is not treating China like the sort of threat that it is, and letting them-
Josh: Go listen to the private speeches of the Chinese Communist Party that they give inside their private party conferences. Listen to how they talk about the United States and ask yourself if that's a country that you think we ought to be doing favors for. I don't mind trading. I don't mind even swaps. I don't mind exchanging money for goods and services. But ask if you think we should be doing them favors after reading what they say about the United States in private.
Eric: What would we learn, Josh? I haven't heard the speeches. What would we learn from them?
Josh: They do not think that we are their friends. They think that we are their enemies. Here, how about this? Don't believe me on this. The Rand Corporation put out an amazing report in 2015, 2016 called Systems Confrontation, Systems Destruction. And it is an accounting of the current thinking of the leading theoreticians inside the Chinese Communist Party's military apparatus, the People's Liberation Army. So basically how they think about competition with the United States. It's the most terrifying thing you'll ever read. It's written by the students of the two colonels, who now run the think tank of the People's Liberation Army, who wrote the Unrestricted Warfare article in, I believe, 1999 or 2000. So current thinking inside the Chinese Communist Party is to disassemble the United States along hundreds of vectors, from our ability to use language, to our ability to govern ourselves with laws, to explicit military capabilities and tasks. I mean, just read the Rand Corporation report, read Unrestricted Warfare. I don't know how else to try and tell the audience that these are adversaries. We can collaborate, we can de-escalate, but this is how they think about America: as something to be destroyed, something that's in the way of global Marxist revolution with Maoist tendencies. They buy off politicians. They will change national laws. It's not good. And it's not even "concede." I'm convinced that many of the technologies that you're concerned about, Nathan, are coming. And by me telling you about companies that I've invested in, what I'm explaining to you is that on that line, we think very similarly, that these things are happening. I don't know that they're inevitable; startups can always fail. I mean, military technologies have a long history of just not working or not getting adopted.
But the point is, like, yes, these are things to be concerned about, but this formulation of, like, let's get some type of alignment with the Communist Party is, to me, bizarro land. And it's not how national policy, at least in my experience, gets made. I don't think it's how good national policy gets made. I think that you look at things very minimally, right? You look at a trade issue, you look at a diplomatic issue. You try and do something big and you run the risk of taking things off the rails in a very strange way, right? There's billions of dollars floating around the economy for AI influencers to be talking about these types of things. Some of that money comes from places that we know. Some of that money comes from places that we don't. If you could point me to any pockets of it, that would be much appreciated. Like, $600 million into this thing. I mean, seriously, I'm sure you could get some, like, non-resident fellowship at one of these institutes or something like that. The trick is to start writing articles and then see if you can get on the conference circuit. Like, that's where the money starts.
Eric: Let me rephrase one thing, and I want to be mindful of time, so we'll get you both out in a few minutes. But putting aside the AGI stuff, you're in agreement that these sorts of military technologies are getting stronger and stronger, and more countries are going to have more capabilities to do damage. And I'm curious: do you think that world is sustainable? Do you think there needs to be some global sort of decommissioning, or...?
Josh: Ten years ago in graduate school, I wrote a paper about the declining barriers to entry and exit in the marketplace of violence. I firmly believe this, right? I don't like it, but it's happening. Right? I literally wrote a short story about an assassination by drone in, like, 2015. Published. Completely agree: these things are very dangerous. The world is getting more dangerous. There are very few things that you can do as a nation state to affect the security of your own country and of the world. Okay? I understand that there is this desire to try and bring people together, this "can't we all just figure this out?" But orthodox Marxist doctrine does not work like that. They will lie up until the moment that they slit your throat. This is what they've done in every country they've taken over. This is what they do on the international stage. It is just how these systems are structured. Okay. The United States is this gem of a political system in the world that accords some amount of freedom to individuals, more than any other country in the past 1,500 years, 2,000 years, however you want to measure these things. You could say ever; people would debate that, fine. And so from my perspective and my experience, what I'm telling you is: those types of grandiose desires, there's no attachment point for national policy there. You can ask for it, you can want it, but I've never seen it. You can get national proclamations or whatever, but when the rubber hits the road, that's just not how things happen. I'm telling you that my recommendation for how we run the country is to take very narrow approaches to technical competition, economic competition. And the best way to do that is to act in the best interest of the American people. Okay.
And so when you have a counterparty that has stolen trillions of dollars of intellectual property, that actively dumps raw materials and finished goods and subsidizes them... look at what Huawei has done around the world. It's essentially a SIGINT system for the Chinese Communist Party, by the way; they've all but admitted this. You just have to deal with things in very specific cases. Even though we understand that there is some possibility out in the future of some terrible world, I get it. I'm mindful of it as well. But to say, hey, you know, kumbaya, we're going to give them all the chips, we're going to work together... I'm just telling you, you can't trust Marxists. Yeah.
Eric: I want to be mindful of people's time. So I'll give Nathan a closing statement. And Josh, if you want another one, you're welcome to.
Josh: I've said enough. I've said enough. Nathan, please.
Eric: Yeah.
Nathan: I mean, you may say that I'm a dreamer. I'm reminded of the perhaps apocryphal Einstein quote, and I don't know if he actually said this or not: we don't know what weapons World War III will be fought with, but we know that after that, we'll be back to sticks and stones. It feels like the argument that that's just not possible doesn't cut it against the magnitude of the threat, which it sounds like you are also recognizing. I would love to hear a plan from either candidate for how we are going to not slide into an AI arms race, another technological sword of Damocles hanging over the entire global population, a World War III perhaps at some point that we can't recover from.
Eric: So some people would call what's happened with nuclear weapons a success, in that we haven't yet had a major nuclear conflict since World War II. But you just think conflict is inevitable? You think it's a failure?
Nathan: I think it's a terrible failure. Yeah. I mean, we have way too many. It hasn't been that long, and we've had a number of close calls. In the same way that we had this pandemic and we don't seem to have learned much, we're not taking the appropriate next steps to be ready for the next one. We've had a bunch of close calls with nuclear weapons, and we've basically just said, I guess there's nothing we can do about it. That's just life. I just don't accept that frame on any of these big questions. I think we can and should strive to do better. And if we want to be here in thousands of years, let alone millions of years, we'd better.
Eric: Thank you for both engaging in this very important conversation. Nathan, Josh, thanks so much.