The following is an edited transcript of a conversation that I had with Matt Henshon on April 14, 2026. Matthew Henshon is a founding partner at Henshon Klein, LLP and a trustee of the World Peace Foundation. He is also the author of the recently published A Lawyer’s Guide to AI: Ten Essential Concepts (ABA, 2026). We invited him to talk with us about his book, focusing on how AI will impact both our concepts of peace and those of war. The transcript was originally created by AI, then edited for accuracy and legibility. I added the hyperlinks.
Bridget Conley: Matt, in your book, you cover an incredibly wide range of issues. Are these issues that you have come into contact with because of your research — or, are these issues that you’ve experienced through your work as a lawyer, because they’re real, present challenges for the people and companies you work with?
Matt Henshon: Both. The idea was to write a book for general counsel, who need to know a little bit about everything that’s going on in AI. We’re going to talk about China at some point. China is not a legal issue, but if you are in this space at all, you have to be thinking about what they’re doing and about DeepSeek, which is the leading competitor in the large language model (LLM) space. You need to know what Taiwan means to the AI industry and what it means to China. It’s not all about what the Ninth Circuit has said about fair use in copyright, which is also detailed here, but I didn’t want to get caught up in making this a law book. There are other people who are writing those. This is about: what do I need to know about AI? That was the premise of the book, as you see in the subtitle: ten essential concepts.
BC: I found it incredibly helpful. I’ve interviewed people for the podcast who worked with Stop Killer Robots, an advocacy campaign, and people who are thinking about how to use AI to benefit mediation efforts, but your book provides a really big, broad picture. We need this because the concept of peace isn’t just war/not war; you also need to think about how changes in the world are affecting the possibilities for peace and war.
MH: That raises the question of whether AI is different from past technology changes. It is different, but it’s also a continuation of what we’ve done. This didn’t start in 2022 with OpenAI; it goes back to the 1930s, and even before that, if you want to go back to Charles Babbage and others. The irony is that a lot of AI progress is made through war. World War Two pushed AI, or what has now become AI, with Alan Turing, the Enigma code, and the first computers, mostly built in the US initially to calculate firing tables for artillery and so forth. But that’s the genesis of it, because you had time and you had money invested in pushing these things forward.
Today, drones, which are definitely the cutting edge of AI in the international arena right now, are being pushed by the Ukraine war, but I’m jumping ahead.
BC: Maybe we can think about two really big areas of questioning: how is AI changing peace, and how is it changing war? Let’s start with peace.
MH: So, how does AI change peace? This is the good side of AI, which is abundance. You hear this in politics right now: the abundance theory, the abundance future, etc. Now, there is a problem here with people losing jobs, but in theory, we can get rid of a lot of menial tasks and move people on to doing seemingly more interesting work while keeping the same level of productivity across the economy. That is the promise right now; whether or not it happens is determined by how we enable people to make that move.
We had 100 years to move from an agrarian economy to an industrial economy. People weren’t required to learn how to go from plowing a field to fixing an automobile, but their kids did. I still plowed the field; my son went into the factory and learned how to make automobiles. Now, because of the rapidity of the change, we’re going to be asking people, to keep the analogy going, to put aside the plow, pick up a welding tool, and figure out how to do this. And by the way, the welding tool is going to be continually changing as AI changes it each year.
It is different in that respect, but it’s consistent in the sense that we’ve had this problem before. But it’s scary because it’s faster and more difficult to understand. You can look at an assembly line in the Ford plants of the early 1900s and kind of understand what’s happening. People were doing jobs that, if they weren’t exactly what they had done before, were at least a portion of what they had done before, when automobiles were made entirely by hand. This is different, because it is going to change how we think about things and how people are trained.
BC: Yes, and the process of change will be uneven globally. In the book you quote Rajiv Malhotra’s Artificial Intelligence and the Future of Power: 5 Battlegrounds, which states that in India, where labor is cheap, you may not see unskilled labor disappear as quickly.
MH: We think about this as a universal: everything’s going to change. But it’s country by country, it’s market by market. It’s not going to be consistent, and it’s not going to be uniform.
BC: A question on the peace side of this is the double-edged sword of biases. On the one hand, we should be concerned about biases as they currently exist shaping knowledge production during this process of exponential learning. Is it supercharging biases? But on the other hand, we also hear claims that AI is a way to reduce human biases, because it takes some of the human preferences out of the equation. In your view, what makes the difference between AI fueling biases in our social relations and AI mitigating them by being a more neutral engager?
MH: There’s a couple ways to think about that. One is the validity of the data. AI is not this magic black box. It’s a prediction engine that looks at data that it’s being told is relevant, and tries to find prediction. What is the next word in the sequence, or better yet, what’s the next character in the word and the sequence. It doesn’t think the way we think – although we don’t really know how we think and how we learn at base level.
The question is, what is the data that we’re relying on, and is it representative of what we are trying to do? One example in the book is the Framingham Heart Study, which is, as far as I know, the longest-term study of heart health. It is set in Framingham, Massachusetts, which is not very far from where either of us is sitting, and it’s a 70- or 80-year study of how people’s behavior and lifestyle are reflected in their heart health. That’s great data, right? It is long-term: more than 70 years of data over time, with a universe of 15,000 or 20,000 people. Unfortunately, it is 98% white. It’s the best data we have, but it’s not representative of America, and it’s not representative of the world. There are different diseases and other maladies that are affected by race. So the question is, is the data adequately representative of your population?
The next problem is who’s writing the program, who is thinking about this? Are they sensitive to the results? And there’s a great line from Fei-Fei Li, who was one of the few female voices in the AI space. She says, it’s a problem. She calls it the sea of dudes. 90 or 95% of AI engineers/programmers are white guys – 30-something year old white guys. That’s a problem, because they don’t necessarily think about or are even sensitive to potential problems, either in the data or in the results.
Let me take another example: judges. We’ve had judges in various forms for 2,000 years, starting from Roman times, then in the Middle Ages in England, and so on. Up until today, 98% of those judges were white males. But now about a third of American judges are female. Obviously, the bench is more diverse and so forth. So the question is, what is your data set? Is it 2,000 years of history, or is it the last 20 years? And what is the right data set? And on top of that, we have a Trump administration that says “no woke AI,” and I don’t know what that means.
BC: Make sure inequality is beaten into the system.
MH: It’s not really clear what they mean, other than the politics of it. Should we be tweaking the results? That’s an interesting question. Let’s say a third of all judges in America right now are female — which I think is more or less correct. Roughly 52% of the population is female. Should we say to an AI, give me 100 judges randomly pulled? Should the result be 30% or 33% or should it be 52% female? What’s the right answer? I’m not sure what the right answer is, and I’m not sure which one is woke. I can guess, but I’m not sure. You have a bunch of stuff that’s all based on data and history, that is in the bias already, or reflects itself in bias.
BC: Let me ask a different question about what knowledge is getting fed into the systems. A couple years ago, a publisher that I’ve published with started sending emails: ‘Please sign this agreement that will allow us to make our archive available to AI learning systems.’ Their argument was, if we don’t do it, then the knowledge we’ve produced will get left behind; it won’t be included in this learning system. If you sign, then AI will incorporate and draw on your knowledge without necessarily citing it. One of the things I thought was really interesting in your book is the way you talked about how this issue changes what’s considered a national security issue. If we want US-based AI systems to be really robust, then there’s an argument to be made that they should have access to the greatest amount of knowledge, because, as the argument goes, Chinese systems will not be as respectful of copyright. But that’s really complicated, because all these AI companies are for-profit. So is there a new national security interest in reducing copyright protections so that private, US-based companies can dominate the AI competition?
MH: You’re touching a ton of subjects here. Let me start with where I think you’re going, which is the national security issue around defense for fair use. We’re now sorting it out after the fact, because what they call ‘frontier models’ — all the big LLM companies you think of OpenAI, Anthropic, Google, Meta, Grok –have basically sucked up all the data that’s been out there, all the books, all the written materials, everything that’s been out there that has been digitized in the system already. They did so without permission, generally. Now some of that is allowed. Governmental reports and government data is all public information that is all has no copyright on it, and so you can do that. But there is a bunch of material that does reflect individual thought and does reflect copyright.
The traditional justification for using it is something called fair use. The most traditional application of fair use was scholarship. If you were writing about a Robert Frost poem, for example, you were allowed to take two lines and print them verbatim. Copyright prevents stealing, which means my words, in their unique form, can’t be used without my permission or payment. However, you could take two lines out of a Frost poem in this example, put them down, and write a paragraph about them. That’s the traditional academic use, and it was always sort of guarded or gated by questions like: What was the purpose? Was it academic? Was it for profit? And ultimately, what was the economic impact on me, Robert Frost, or my estate in this case? Does it prevent me from selling my poem going forward, or selling a book of my poems going forward? And the general answer in the academic case was no, because the essay made you more likely to buy the poem: you read this essay about it, and now maybe you’re interested in something else he wrote.
That has all been thrown out. The whole rubric of how to think about this has been thrown out by the way we deal with copyright in this age. When the LLM “reads” a Robert Frost poem, it reads it not unlike the way we do when we take a book out of the library. It ingests it. It analyzes it. It does whatever calculation it does on the words and their relationship to each other and so forth, and then it discards it. But unlike you and me (I can’t recite the whole Robert Frost poem; I can do bits and pieces of it, but I’d have to go back and read it again before I could recite the whole thing), the AI can probably remember almost all of it verbatim. That has changed how we think about fair use.
The frontier model companies have used a number of defenses: ‘we don’t use it very often, we don’t use it very long, we throw it out when we’re done,’ etc. All of which is true, or, for the sake of argument, let’s say all of it is true. One of the other defenses they have is national security: if we don’t do it, somebody else will, and we will be at a disadvantage. That’s the national security element here.
The fair use argument hasn’t really gone very far, because most of these cases are dragging along. There are big class actions, like John Grisham and some other famous authors, have a huge class action against OpenAI, which is percolating along through the courts, and nothing’s really been resolved there. There was what looked like a settlement last fall when Anthropic settled a case for $1.5 billion which seems like a lot of money, but it’s $3000 bucks a book. I don’t think everyone’s taken the deal. Is 3000 bucks worth the IP in your book? There’s not a lot of intellectual property left in the world that hasn’t been consumed by these frontier models. What’s being produced now is AI on top of AI.
One last thing on for-profits and nonprofits: that is a distinction that mattered 10 years ago, when OpenAI was founded. A bunch of people who are pretty famous (Sam Altman, Elon Musk, Peter Thiel, Microsoft, etc.) all got together because they were worried about where unfettered AI would go. They initially created this nonprofit, OpenAI, which was designed to be sort of the equivalent of Linux in the operating system space, and which was going to create a common good we could all use. That worked for like three or four years. Then they were like, ‘you know what? We’re just going to make money.’ And so OpenAI basically became a private, for-profit company. Musk split off and went with his own AI company, xAI. Anthropic was also formed by two refugees from OpenAI. This all comes from the original idea, in 2015, that AI was potentially dangerous and we needed to be careful with it.
BC: So, there’s for profit, there’s nonprofit, and then there’s also government. What is the role for government in trying to create guidelines for the ethical development of AI?
MH: That’s a good question. We’ve had two and a half administrations, if you will, two Trumps and one Biden in the AI space, and the approach has been radically different across them. Trump one was sort of, AI wasn’t a thing yet. It was a breaking thing, but it wasn’t a big thing.
The Biden administration definitely tried to address some of the concerns. They had a bunch of white papers and guidelines for what ethical AI is and so forth, most of which were executive orders that were rescinded on day one of the Trump 2.0 version. And that’s where we are now.
The Trump administration vision is the AI Action Plan, released last July, which is: build it faster. They want to emphasize infrastructure, acceleration, and innovation, with the goal of America leading AI technology around the world, so that everything is built on American systems, or the American “stack,” to use the word people use. The stack covers everything from the data centers, the software, and the electricity that goes in, to the computer that sits on top. The whole thing, from the ground up to the user interface on your computer, is called the stack. Their vision is America first and America everywhere. They’re worried more about America leading than they are about what the ethics are.
That being said, there are some limits. You’re constrained to some extent by the data centers, the computers, and the computing chips: how fast, basically, Nvidia can produce them. Nvidia produces about 80 or 90% of all the chips that are used for everything we’re talking about here. They’re produced in Taiwan, which is a separate issue. What is also interesting in Trump 2.0 is that they’ve been back and forth on state regulation of AI.
Back in July, when they wrote their action plan, they were concerned that you have a worldwide market, certainly a nationwide market, and you have 50 states that could regulate in different ways. How do you deal with that? Initially, the Trump AI plan in July decided: no ‘bad’ state regulation, whatever that meant. Since then, they basically moved to: ‘no state regulation, period.’ Trump issued an executive order in December saying: ‘no state regulation, we will litigate against state regulation of AI.’ That’s where it is now.
There are other actors here. You have the EU, which is definitely the leader in privacy regulation, and they have passed an AI law which was going to go into effect later this year. I just heard it got pushed back six to 12 months, into next year. So there is regulation, although it’s not at the US federal level.
BC: You must have followed Anthropic’s break with the newly christened “Department of War,” which was over, as I understand it, the company saying: we have ethical guidelines and want the government to explicitly state that it will not cross these guidelines. Then the government was like, we’re not taking direction from contractors.
MH: Anthropic definitely positioned themselves as the ethical company (the cynical people say it’s a marketing plan, while other people say it’s an ethical plan), the voice of reason in the AI space. They were going to limit the Department of Defense/War, as far as requiring humans in the loop and otherwise limiting what AI could be used for. The problem, from the administration’s perspective, is that a lot of this stuff is baked into systems that they’re using right now. I don’t know how you unwind that quickly when you’re actually running a system. Anthropic also recently [April 2026] did something along the same lines in the private sector. It said, in effect: the new version of our code is too dangerous to release into the wild as is, so we’re going to experiment with it with certain high-end customers, like Amazon and JP Morgan and so forth, who have pretty robust sandbox rules, and we can figure out where we are in 135 days.
BC: In that case, they said their new code could expose vulnerabilities in existing financial systems, right?
MH: Yes, Anthropic said it would fix these problems. The Coke and Pepsi of AI right now are OpenAI and Anthropic. The other ones will probably catch up, because unlike OpenAI and Anthropic, Google, Meta, and the others all have income that’s separate from AI, and so they have the ability to subsidize it. OpenAI and Anthropic don’t do anything other than sell AI tech, so they’re completely dependent on fundraising. OpenAI started out earlier, and with the launch of ChatGPT in November of 2022, they hit the ground running and became the poster child. But it was focused on consumers, and it was huge: 700 million downloads in the first six months, or whatever the final number was. It became very prominent.
Anthropic focused on business and coding. We’re all amazed at the outputs of Claude and ChatGPT and so forth, but you can tell it’s English written by an AI agent. In terms of coding, though, when you’re inside the software business, you can’t tell. Although it doesn’t always work, and sometimes it goes out of control. Amazon, with its AWS systems, has basically put a moratorium on code written by AI that’s not inspected by humans before it’s deployed, because they’ve had some incidents where otherwise-working code has been wiped out in the mix.
Anthropic’s coding engine is finding various “Zero Day holes” or “zero day defects” as they’re called in preexisting code. These are security around banks or so forth, and it’s going to take 135 days to try to fix as many as they can using their own engine. That’s really what it’s doing. It’s going out in the wild, or it’s certainly among these major financial institutions and securities systems, and trying to find holes and patching them before other people, bad actors, get the same engine.
BC: Let’s switch to discuss AI and war. There are a couple different ways to look at it. I mean, one is the way that various resource and supply chain demands of AI – so Taiwan maybe fit in here– changes the causes for why countries go to war. And, we can discus also the way that AI changes the tactics and the targeting within wars. So maybe we start with the latter. Drone warfare is obviously the area where we’ve probably seen the most rapid change. Where is drone warfare today and how does this connect to AI?
MH: Ukraine wouldn’t exist as a separate nation right now without drones, almost surely starting from literally day one of the war. On February 24, 2022, Putin sent his army in there, and literally, a handful of Ukrainian soldiers at one of the major military airfields just outside Kyiv with a couple drones prevented the Russian paratroopers from landing. From that point on, drones have been a vital part of defending Ukraine. They’re mostly what we call “first person drones” at this point, which are usually small — the size of a microwave oven or something — drones that often have an explosive with them and are up there with a camera. They’re very cheap, and it’s like hunting individual Russian soldiers. If you go on Twitter or X, and you’ll see these videos. If it wasn’t real life, it would be like a Monty Python, or Benny Hill skit, where Russian soldier running around trying to avoid and the drone just keeps coming. It’s totally changing the way you defend. Tanks are vulnerable to it. Individual soldiers are vulnerable to it, and it is really hard to move on the battlefield. It’s basically been a static line since almost February, March of 2022 and no progress has been made from a Russian standpoint.
BC: But, actually, from a Ukrainian standpoint, too, because Russia’s greater conventional capacity meets its match in Ukraine’s greater innovation and drone capacity.
MH: Although, interestingly enough, Zelensky recently announced that robots (presumably with human oversight, although it’s not clear), combined with drones, captured a Russian position for the first time ever. Russia is now making a huge investment. They’re running out of soldiers, so they’re now recruiting regular college students to come join the army: ‘You’re not going to go to the front lines. You’re going to drive drones from Moscow, flying them via video cameras and joysticks.’ And so now the question is, can they get enough drones to overwhelm the Ukrainian defenses? That’s where that war is.
What is the AI component? There are, I’m sure, autonomous drones that our defense department and China’s have made that will make decisions and fire on their own. But that’s generally not what’s happening. What is happening is that the drones are being driven by people. The AI component is basically the stability of the drone. Imagine a drone the size of a microwave, with four propellers, going up and so on: it is interpreting the joystick movement against the air pressure, the wind speed, gravity, etc. It’s making all these calculations in real time, without input from the “pilot.” That’s really what the AI component is: maintaining air speed, altitude, and targeting.
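[Editor’s note: a very simplified, one-axis Python sketch of the kind of onboard stabilization Matt is describing, where the pilot gives a coarse command and the flight controller keeps correcting for disturbances like wind. The numbers and the simple proportional controller are illustrative assumptions, not any real drone’s firmware.]

```python
def stabilize_altitude(target, current, wind_gust, kp=0.8):
    """Proportional controller: push back against the error the gust creates."""
    disturbed = current + wind_gust   # what the gust does to the drone
    error = target - disturbed        # distance from the commanded altitude
    correction = kp * error           # thrust adjustment computed onboard, not by the pilot
    return disturbed + correction

altitude = 10.0  # metres, the coarse command from the joystick
for gust in [0.5, -1.2, 0.3, -0.4]:
    altitude = stabilize_altitude(10.0, altitude, gust)
    print(f"altitude after onboard correction: {altitude:.2f} m")
```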
BC: It’s not locking in on a target, and then AI is guiding it to stay with the target.
MH: Not as far as I know, as far as any public information states, but we’re heading that way. There’s no question that we’re heading that way.
BC: What about using AI systems to identify targets based on behaviors and patterns? That’s what the US in the war on terror called “signature strikes,” and then the Israelis used it throughout the war in Gaza, right?
MH: Let’s go back to where we started the conversation. War has expanded use of AI, absolutely, drones were developed in Israel. In some ways, they go back to World War Two, and even further, arguably. But they really developed in the Arab Israeli wars in the 1970s and 1980s. The US picked them up for the war on drugs, for monitoring drug traffickers, etc. Then post-911, it became a way to deliver missiles to targets, usually in Afghanistan, that were very difficult to reach otherwise.
Initially the targeting was, we track this specific guy. We’re watching and we’re going to blow up his truck. Then we started doing signature strikes, which you alluded to, which is like: we know this is a bad area. We know there’s a bunch of bad guys here. They’re all hanging around with Kalashnikovs, and we’re going to blow them up. We don’t know who they are, but we know this pattern looks like al Qaeda.
Go back to what AI is: AI is a probability system. In that instance we weren’t using AI, but you can see how it is not that dissimilar. We don’t know for a fact that any of those guys carrying an AK-47 in a Taliban-controlled area were bad, but the percentages said they were, so we bombed them. The same type of thing is going on now, from what I understand from the public record, with Israel targeting sites in Gaza and other places, and the US is doing the same thing with Iran.
We apparently blew up a girls’ school, 175 kids, because the targeting system said it was a military installation; now, it was next to a missile base, or near a missile base, but the targeting system called it a military installation. The story has disappeared because of all the other news since then, but everyone who has looked into it has said: no, it was a girls’ school. It always was a girls’ school, and we killed 175 girls.
Go back to Anthropic: one of the things they’re saying is that we can’t trust these systems. I’m going to go back a little bit to hallucinations, where you have information being created out of whole cloth. This was a big problem for lawyers, especially in the first few months of LLM use, in the 2023 period; you had a bunch of cases just being made up out of whole cloth. The important thing to understand is that when it’s producing text, it is just trying to replicate what seems to be the next logical word, right? As said in Smith v. Jones, blah, blah, blah, the driver of a car who crosses the yellow line is always guilty. Well, okay, Smith v. Jones may or may not exist. It certainly could exist, and it certainly sounds reasonable that it exists, but it may not. We’re playing probability tables, and probabilities are never 100 percent; otherwise there’d be certainty, not probability. So that’s how you get to schools getting blown up.
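[Editor’s note: a tiny Python sketch of the hallucination problem Matt describes, in which a statistically plausible citation is not necessarily a real one. The case-name fragments, the “known_cases” set, and the random assembly are all invented for illustration.]

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

# Fragments that "sound like" case names; all invented for illustration.
plaintiffs = ["Smith", "Brown", "Garcia"]
defendants = ["Jones", "Acme Corp.", "State"]

# A stand-in for the universe of cases that actually exist.
known_cases = {"Brown v. Jones", "Garcia v. State"}

# "Generate" a citation the way a prediction engine might: assemble likely-looking parts.
citation = f"{random.choice(plaintiffs)} v. {random.choice(defendants)}"

print("generated citation:", citation)
print("actually exists?   ", citation in known_cases)
# Sounding plausible is not the same as being real, which is why both legal
# citations and wartime targets still need human verification.
```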
BC: Another concern, and you could see it through the global war on terror, is that there’s a way in which targeting individuals could be deemed a positive, right? We’re not wholesale bombing entire towns, and it’s not like a land war where you’ve got to capture and hold areas. You’re saying these are the people who we think are the cause of the problem. Although, if it wasn’t a war zone, we’d call it extrajudicial execution. But separately, there’s also a threat of losing strategy. We’re really getting caught up in tactics. So we could keep killing people, we could keep killing individuals, as long as there is daylight and there are people on Earth who are possibly threats. But isn’t there a risk that we lose the capacity to articulate coherent strategies in war, given this hyper-focus on the tactics of identifying X, Y, Z people?
MH: We’ve seen it already, right? Obama in 2014 went back and said, ‘this is making it too easy to kill people. We need a higher threshold than this guy is walking around with a gun, yes, in a Taliban area. Everybody in the truck was a former terrorist when it gets blown up, regardless of what they actually were. But put that aside, we are alienating their family and the guy across the street, and we’re making more enemies than we’re killing.’ That is one of Obama’s arguments in 2014 which led to a restructuring of that decision making process to bomb. The Trump administration one or two, I don’t think either one is as worried about that.
Going back to strategy versus tactics: it’s hard to comment on a war that’s ongoing, but we apparently wiped out everybody, all of the top 50 leaders in Iran, on the first day of the war, and the war is still going on. Now, I’m not saying that was right or wrong or whatever. We saw this through the whole month of March with Hegseth. It was actually very reminiscent of Vietnam. He would come up and say: we hit 5,000 targets last night, we dropped X-thousand munitions. It was very reminiscent of Westmoreland coming up and saying: we killed 2,000 Viet Cong last night, and we’re making progress, and the body count is this. One thing is not related to the other, other than that you get to give a press conference and talk about all the ‘progress’ we’re making. It’s almost eerie. Fifty years later, we’re doing the same thing. We’re reporting the same type of information as if it were the metric of ‘victory.’
BC: There is so much more to talk about in this book, so I would encourage people to read it. But I wanted to wrap up with some bigger-picture reflection questions. First, what surprised you the most as you were doing the research for the book?
MH: The first thing is that this is just a big prediction engine, which is what Alan Turing first thought of in the 1930s. It’s predicting probabilities. We impute it with knowledge, information, reasoning, and so forth because of the language piece: it looks like it’s talking to us, but some people say that it’s glorified autocomplete. Now, it does so faster, and it does see insights in certain instances, especially medical ones, that we don’t see, and recognizes those insights faster. But it is a prediction engine only, and not a thinker in the way we consider thinking.
Second, a lot of the big progress happens around war. Right or wrong, war drives innovation and it drives investment. We had this big AI winter in the 1960s, 70s, and 80s, where we thought it was going to be the future and it didn’t really happen, because it didn’t have the wherewithal of investment. I found that interesting.
And then the final point of the book is the connection: all these things are connected. Start from the Turing test, which was: could a computer fool you into thinking it’s a real human? We’ve now almost certainly achieved that. The big technological innovation on the LLM side was essentially video cards and what’s called parallel processing, which is the ability to do more than one calculation at the same time. And ironically, the way that happened was in the 1980s and 1990s, when we were all playing video games. Money was getting poured into making the graphics better, sharper, and faster, and that basically allowed the development of these graphics chips, which are now the backbone of all these data centers. Intel was the original chip maker, right? Intel was basically on its back; last year it got bought out and bailed out by the government, because they missed the whole GPU shift. It’s now just ‘GPU,’ but it stood for graphics processing unit. They missed that innovation in the 1980s and 1990s. Nvidia started as this cute little gaming company making graphics for high-end gamers who were online, and it turns out that we stumbled into the innovation that enabled all the AI developments.
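[Editor’s note: a small Python illustration of the “more than one calculation at the same time” idea. A GPU applies the same arithmetic to thousands of numbers at once; here a single whole-array NumPy operation stands in for that spirit, compared with a one-at-a-time Python loop. The array sizes are arbitrary and the timings will vary by machine.]

```python
import time
import numpy as np

a = np.random.rand(5_000_000)
b = np.random.rand(5_000_000)

# One element at a time, the way a simple sequential program works.
start = time.time()
slow = [x * y for x, y in zip(a, b)]
print(f"element-by-element loop: {time.time() - start:.2f} s")

# The whole multiplication expressed as a single array operation, handed off
# to optimized routines that process many elements together.
start = time.time()
fast = a * b
print(f"whole-array operation:   {time.time() - start:.2f} s")
```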
BC: What worries you the most about the future of AI?
MH: What we’ve always been worried about, which is a AI run amok. If we give an instruction — this is an Elon Musk story — if we give an instruction, get rid of all unwanted emails, sooner or later, it figures out humans created all the emails, and it tries to get rid of us. Now you see Anthropic — I don’t want to give them too much credit – but it gave the AI system a company history of emails, and said, ‘we’re going to shut you down two weeks from now’. So, it circulated through the all the emails, and it basically came back and said to the instructor, ‘Jim, you can’t do that, or I’m going to tell your wife about the affair you had five years ago.’ It’s not like it was told to blackmail. No one told it, the system just figured it out. Yes, we can turn off that machine. We actually can unplug that one, but someday we won’t be able to, or we won’t realize we’re not able to, or worse. That is the ultimate problem or fear, which is an AI out of control, AI run amok.
BC: Okay, last question: what is one thing that you think should be done today to help support the development of AI, so that it is designed to contribute to, rather than reduce, the chances of peace, equality, and democracy, or to avoid that doomsday scenario?
MH: I think it’s an educational thing. We have got to understand its strengths and limitations. We haven’t talked much about its strengths here, but in medicine it is moving the needle — developing proteins and solutions to cancer problems and so forth. We’re going to be the beneficiaries for years to come. I don’t want to be negative on AI overall.
Number two, it’s going to happen. We can turn off the data centers — Vermont is talking about a moratorium on data centers. We can try to do that. It’s not going to change the general trend, which is these things are going to keep going and keep getting faster and better. But education puts it in perspective, so that you see the limits of AI. Students are using AI all over the place, but you have to be able to look at something and ask, is this real? Is there a hallucination in here? There’re all these stories about somebody asking a chat bot, what do you think about this mathematical formula? And then the chatbot answers: ‘oh, you’re a genius. Einstein never thought of this.’ It’s just this obsequious tone and you have to take this into perspective. This is a limited tool. We shouldn’t be turning over targeting in wartime. It should be, yes, there is the probability that there’s a missile site here, but it turns out, it’s not 100%. It was a girl’s school next door, and so we hit that too. There are limits to what it will do. It does help. It will make our jobs better. It will reduce some of the day to day, mundane tasks we have to do, but it’s not ready for prime time yet, at least not without human oversight.
The book is available from ABA Publishing. It’s expensive, which I apologize for, but until June 30, 2026, the code ‘henshon 20’ gets you 20% off, which makes it a little more palatable.
BC: Congratulations, the book is fascinating and I am really happy that you shared some of the ideas and history from it.