SDS 935: Global Issues Accelerated by AI (with Solutions), feat. Stephanie Hare

Podcast Guest: Stephanie Hare

October 28, 2025

Subscribe on Apple Podcasts, Spotify, Stitcher Radio or TuneIn

Jon Krohn speaks to researcher, broadcaster and author Stephanie Hare about how the Hippocratic Oath might apply to artificial intelligence, and a guiding ethos for pushing innovation while protecting users from harm.

Thanks to our Sponsors:


Interested in sponsoring a Super Data Science Podcast episode? Email natalie@superdatascience.com for sponsorship information.


About Stephanie

Dr Stephanie Hare is a researcher, broadcaster and author focused on technology, politics and history. She co-presents “Artificial Intelligence: Decoded” on BBC television and contributes to the BBC World Service.

Her first book, Technology Is Not Neutral: A Short Guide to Technology Ethics, was named a Financial Times Best Technology Book of summer 2022, and her writing has featured in the Financial Times, The Washington Post, the Guardian/Observer, the Harvard Business Review, WIRED and Computer Weekly.

She has worked at Accenture, Palantir, and Oxford Analytica; held the Alistair Horne Visiting Fellowship at St Antony’s College, Oxford; and earned a PhD and MSc from the London School of Economics and Political Science (LSE) and a BA from the University of Illinois at Urbana-Champaign, including a year at the Université de la Sorbonne (Paris IV).


Overview

In her 2022 book, Technology Is Not Neutral: A Short Guide to Technology Ethics, Stephanie Hare aims to help employees of tech corporations understand the basics of ethics in relation to their field. She starts with the central question: “How can we maximize the benefits and minimize the harms of anything that we are investing in, building, or using, that could be described as a ‘technology’ or a ‘tool’?” By keeping her definition of ‘technology’ broad, Stephanie was able to explore the human-technology relationship as a constant throughout our lives. To make her approach as inclusive as possible, Stephanie tells Jon Krohn that she drew actionable insights from real-life case studies.

Stephanie also details how she has explored the idea of applying the precepts of the Hippocratic Oath to the tech industry. A code of conduct, she says, could be one approach to ensuring that people are using technology more mindfully and ethically, as well as an opportunity for users to feel that they belong to a wider, global community. Although she sympathizes with people concerned by overregulation undermining innovation, Stephanie also notes that we expect certain standards to be met elsewhere, such as vehicle and drug safety, as well as fair journalistic practices. As Stephanie explains, we need to find a realistic middle ground between innovation and regulation.

Jon also asked Stephanie for her thoughts on why the UK’s energy grid would not be able to support the government’s AI action plan. Stephanie says that, with the highest electricity prices in Europe, the UK may not be financially ready for high-powered data centers, especially when it isn’t yet clear how they would be powered. Nevertheless, with zero or next-to-zero growth in several European countries, some investment in AI infrastructure is inevitable.

Listen to the episode to hear Stephanie Hare’s thoughts on keeping our digital conversations authentic, even in the age of GenAI, Jon and Stephanie’s ideas for improving your experience on the internet, and how green policies that promote and pursue science research could benefit the world.


In this episode you will learn:

  • (01:23) What ‘technology ethics’ is               
  • (14:46) Developing a Hippocratic Oath for tech 
  • (42:32) How to protect against sensationalism          
  • (53:38) How to maintain a balance of growth and infrastructure



Episode Transcript:


Jon Krohn: 00:00:00 All around the world, doctors take the Hippocratic oath to promise that they will do no harm to humans. Should those of us building AI products take a similar type of oath? Welcome to the SuperDataScience Podcast. I’m your host, Jon Krohn. I’m most fortunate to be joined today by Dr. Stephanie Hare, a well-known broadcaster, television host, researcher, and author of the award-winning book Technology Is Not Neutral. In today’s high-level episode, Dr. Hare addresses critical global issues, including AI ethics and the most important problems we should be solving with AI. This is one not to miss. Enjoy. This episode of SuperDataScience is made possible by Anthropic, Dell, Intel, Fabi and Gurobi. Stephanie, welcome to the SuperDataScience Podcast. It’s a treat to have you on the show. How are you doing?

Stephanie Hare: 00:00:48 Thank you for inviting me on the show. I’m happy to be here.

Jon Krohn: 00:00:51 Now, I’m sure people can already guess by your accent that you are based in London.

Stephanie Hare: 00:00:57 So it’s totally obvious. Yes. I am from the Midwest of the United States originally, just outside Chicago, but I now live in beautiful sunny London.

Jon Krohn: 00:01:07 Now, I wasn’t there much this summer, but I understand that it was actually a pretty nice summer.

Stephanie Hare: 00:01:12 We had four heat waves.

Jon Krohn: 00:01:14 Nice.

Stephanie Hare: 00:01:15 Climate change is swings and roundabouts, right?

Jon Krohn: 00:01:18 We will actually get to climate change later in the episode, but to kick things off: you are a researcher, broadcaster, and author with experience as an IT strategist at Accenture, Palantir and Oxford Analytica. You co-present a wonderful BBC television program called AI Decoded, and you’ve published in tons of the biggest publications in the world: the Washington Post, HBR, WIRED, the Guardian. And you also have a book. It came out in 2022, and the Financial Times named it one of their best technology books of 2022. It’s called Technology Is Not Neutral: A Short Guide to Technology Ethics. And so I thought this could be a nice place to start: in a nutshell, Stephanie, what is technology ethics?

Stephanie Hare: 00:02:03 Technology Ethics is a book that I started to write before the pandemic and then wrote mainly during the pandemic. And I wanted to write it for a number of reasons. One was that I had just finished my career, I hope, working for big companies, working for other people, and I had started to go independent. So I was newly independent, but I had a lot of thoughts from the time when I was somebody’s employee and was very, very lucky to work with some of the best clients in the world and fabulous technical people: software engineers, product developers, strategists and the like. And I thought I would like to capture the learnings that I’ve been really lucky to have in my career in one place so that I can pass this on, because I wished a book like this had existed when I started out. I felt like I made a lot of mistakes in my career. Partly it’s a learning journey, but some of…

Jon Krohn: 00:03:07 A lot of big ethical faux pas.

Stephanie Hare: 00:03:09 Yeah, I mean there was just no training. There was no ethical training back in the Jurassic Age when I graduated from university and started working in technology. So my first tech job was in 2000. Many of the listeners to this wonderful cutting-edge show will of course not have even been born then. But that’s when I started, just at the end of the dot-com boom. And we were sort of given two weeks of training at Accenture, which by the way was great, it’s better than nothing, but then it was: get in and start messing around with data and building things. And there was zero discussion, none, of ethics at all. And there was no discussion obviously of AI. That was not a thing back then. But responsible technology, data protection, even cybersecurity. Are we building a system that’s secure? What happens if one of the partners in a supply chain goes down? What happens to the data? Nothing. Nothing. And so yes, there was a lot of on-the-job learning, and I just thought if I could capture that and put it out there: (a) get it out of my brain, because it was taking up a lot of space, and (b) maybe it would be useful. But I also thought no one would read it.

00:04:23 It was just an exercise that I wanted to try to do. I hoped someone would read it, but I was kind of convinced no one would. And I think what’s weird is that I was very lucky to publish it in February of 2022, because the world was still largely in lockdown. I think people were very desperate for something to read. So it got read, and it’s been used to teach people, which was obviously the nerd dream, in the sense that if this was useful and other people could learn from it and teach it and use it as a starting point, that’s wonderful. It’s also now like a historical artifact, because it came out before generative AI became widely popular. So there’s all sorts of stuff that’s missing. And my publisher and I have discussed a lot: is it time to write, like, an introductory chapter that talks about what’s changed?

00:05:14 I think it’s too soon. I want to wait a little bit longer, and we can talk about that if you like. In terms of what I’m already flagging, if there were to be a volume two or even just another chapter, there’s a lot that’s missing. Generative AI is not treated at all, AI is treated cursorily, and there’s nothing about environment, sustainability, climate, which I know we’ll discuss. Here in the UK, we’re now talking about bringing in digital ID for everybody, and that’s a big topic of chapter three in the book. So to see that and be like, oh no, we might have to update that, that’s a thing. And the book starts of course with the cancellation, if you will, of the then President Donald J. Trump administration, 1.0. He gets punted off of Twitter because he was inciting the insurrection and storming of the Capitol and attempts to decertify a democratic election, and people were murdered or killed in the process.

00:06:13 So the then CEO of Twitter, Jack Dorsey, the then Twitter, and the then president: there was an ethical decision. And that’s how I start the book. And of course now we’re seeing so-called cancel culture have a new twist under Trump 2.0. It’s other people, it’s liberals getting canceled, not MAGA people. The boot is literally on the other foot. And this question that Jack Dorsey raised, is this the right decision, to kick somebody off of his platform, particularly when in that case it was someone who is an elected official, is still salient, but is being posed now in a really different way. So some of the questions still hold, which is a good sign. And then there’s this stuff where I’m like, I can’t believe I didn’t look at anything to do with climate, but in my defense, I wrote a book locked in my flat for two years while we had much more pressing concerns. And so I was writing about pandemic health tech, which is obviously not particularly interesting to people now. In 2025, we’ve all moved on and no one wants to open the pandemic box. Fair enough. So I hope it will be useful, but time will tell. Maybe it won’t. Who knows?

Jon Krohn: 00:07:21 Well, I mean writing another edition is a good solution to that. I’m sure there are a lot of topics in technology ethics, and I mean that with lowercase letters, the field of technology ethics, not your book title. I think there must be a lot of principles that will stand the test of time and that don’t depend on some specific technology arising or not, although for some technologies, like GenAI, like climate technologies, the kind of social media trends that you outlined, no doubt being able to discuss general technology ethics in that kind of relevant new context would be something that would be valuable to readers. And so, the very first question that I asked you at the outset of this episode was: in a nutshell, what is technology ethics? And I said that right after saying the title of your book, so it makes perfect sense that you explained what your book is. What you couldn’t see is that I had the question written out in lowercase: lowercase t, lowercase e. What is technology ethics? Can you define that?

Stephanie Hare: 00:08:30 Well, the way that I defined it was on a note card, which I had stuck on my desk in front of me for several years, which was: how do we maximize the benefits and minimize the harms of anything that we are investing in, building or using that could be described as a technology or a tool? And I went quite broad with my definition of technology. I don’t want people just thinking, like, GPT. Technology is also a process. It’s like, how do I automate, how do I manufacture? There’s an incredible literature and history around what we mean by technology, and I wanted it to be broad like that. A lot of people, again, this is a reflection of when I sort of came up in my training and career, but for a long time the IT department was this sort of separate section in businesses and even in the economy and kind of in life.

00:09:27 And it was often a sort of nerd. It was usually a guy. And if you were really going for business or coming up with an idea, you weren’t necessarily in dialogue with those people or thinking about it, which I’m not saying is right, by the way. I actually think it was a disaster. But what I mean by this is that taking the broadest definition of technology actually shows the human-technology relationship is part of what defines us as being human. We make tools. We’ve always been making tools, and if you ever get access to a baby, you’ll see really quickly from a very early age how even little babies are weirdly hardwired. It’s like we come out fashioning things, using things to do things. We’re primed for that. And even other animals have tools and processes. I took it, and again, this is a reflection of being locked in your house for two years under government orders while everybody’s sick and dying around you, I do think that affected my thinking, but I was like, since I’ve got the time and we don’t know when this is going to end, I’m going to take this right back. So my historical purview and the thinking of what I mean by technology ethics is big. Divide it up. What is technology? We’ve just broken that down. Then we have to go into what is ethics, which is really fun if you’ve never studied philosophy, which many people around the world don’t get as part of their educational curriculum. I did not either.

Jon Krohn: 00:10:56 We actually had that prepared as the very next question to ask you about…

Stephanie Hare: 00:11:01 Which part? But actually, who’s your favorite philosopher, Jon Krohn?

Jon Krohn: 00:11:07 Well, you’ve previously observed that the Anglo-Saxon world offers little training in philosophy compared to countries like France, leaving many without the intellectual tools for ethical debate.

Stephanie Hare: 00:11:17 So how can all these US tech organizations have a technology ethics or an AI ethics initiative when their entire workforce, I bet you, has never been trained, in terms of formal education, taking a class, et cetera? It’s bizarre to me. And I was very lucky in that my own educational path is weird. So I grew up in the US and I did all my education in the US until the age of 22, and then I moved over to Europe. But because I did my first degree in French, I had to go to France. It was very difficult, living in Paris and eating amazing food for a year. But part of that was I was exposed to the French educational curriculum, and I was very quickly informed by my French colleagues at the time that they all had to take a ton of philosophy in their high school. And they actually all have to do it: no matter what your university degree is going to be, you have to pass a philosophy module just to graduate high school, and it’s then part of getting into university.

00:12:24 So I loved that. It was like, this is not something that we’re just asking some arts and humanities graduates to do. Everyone does it, and they consider it really important, and it’s part of being French. That’s true in some other cultures as well, of course, but just because that was the one that I accessed at a young age, I think it made an impression on me, and it made me think how, if we’re going to talk about technology ethics, we have to situate ethics within the philosophical tradition. And so I was like, how do I explain that easily? And I had a Swiss Army knife on my desk that I used to fiddle with when I was procrastinating, and I was like, the Swiss Army knife itself is philosophy. And when I open it up into its six component parts, I get the tweezers, the corkscrew, the whatever.

00:13:13 These could be the six main branches of philosophy, and ethics is one of them. We’ll call it the corkscrew. How do these all interact? And so if ethics isn’t working for me to get through a problem, can I bring to bear the other five? And I don’t want to just think about Greek and Roman traditions of philosophy, or even French philosophy. I’ve got to go global, because technology is for all humans. How would someone from China, with a philosophical tradition in China, approach this problem? How would a Russian approach it? How would someone from anywhere, Africa, Peru? So you can imagine, and again, you could do this until the end of time. I mean, you could go so deep with this. I wanted to also pull it back and be like, just keep it real. This book needs to be a short guide to technology ethics. I want people to read it. And it can’t be something that a CEO or a software developer or a product manager is going to go, Jesus Christ, she’s going down some boring academic path. I need this to do my job. So I had to keep it super real, really actionable insights, but to be like, FYI, if you get stuck, here’s another way to approach it. Here’s how these people have done it in time, with these case studies, with these examples. It was this constant toggling. It was like playing with a Rubik’s cube or something.

Jon Krohn: 00:14:35 So on the note of developing your book and coming up with these ideas of how technology ethics is treated, not just in the West but all around the world, something that you’ve brought up a number of times is the idea of whether we should have something like the Hippocratic Oath that they have in medicine for technology. And it doesn’t seem like it’s probably a practical thing that we’re going to have an international technology Hippocratic Oath come about. It’s a nice idea. But maybe instead of a symbolic oath, are there practical, non-negotiable checkpoints that should be embedded into tech product development life cycles? Or, yeah, is there some kind of tool set, like a Swiss Army knife, that technologists could work with, that maybe is enforced in some way and isn’t considered to be a luxury?

Stephanie Hare: 00:15:37 I think that you’ve hit on the rub of it, which is the enforcement question. The reason I liked the Hippocratic Oath, by the way, is not because it’s a mandatory thing. Not even all medical schools around the world require it now, and it hasn’t always been required for doctors. And it was actually recreated, or rebooted if you will, after the Second World War, because of course, as we all know, at the Nuremberg trials after the Second World War there was a special doctors’ trial, because doctors were actually very instrumental in the Nazi regime’s murder of many citizens of several European countries. And they had a special trial for that. And so that led to a sort of reckoning and a crisis within the medical community after the war, which was like, how is it that a bunch of people who are supposedly trained to help keep people alive and indeed healthy and thriving, how on earth were they among the first instruments of murder in a tyrannical regime?

00:16:36 And I was really fascinated by that. My second area of study was history, and specifically World War II history. I was like, Jesus. And they revisited the training of doctors because of what happened in World War II. That reboot came as a response to an acknowledged, universally discussed problem of horror. And I was fascinated by that, by the way that we think about trust. Doctors tend to be quite trusted. Put a stethoscope and a white coat on them and you’re like, oh, you’ll do what they say. It’s very difficult for a lot of people to push back against a doctor; they have more training than us, et cetera. And often when you approach a doctor, you’re unwell, you’re injured, you’re sick, or your family member is. So you need to know you can trust them. So I was thinking about those sorts of concepts: the historical reality of trusted, intelligent people betraying that trust in the worst possible way that they possibly could. How do you then come back from that? How do you restore trust to a profession? Why do some medical schools do something like a Hippocratic Oath and some don’t? The fact is, by the way, that the original Hippocratic Oath, versus what’s said today, has been largely rewritten.

Jon Krohn: 00:17:56 They don’t do it in Greek?

Stephanie Hare: 00:17:58 No, a lot of them have rewritten it, and I kind of like that. Basically, the first one is: first, do no harm, which I think is totally appropriate for technologists to embrace as well. And then second, which is the mission statement in my book, is: how do I maximize the benefits and minimize the harms? Which I personally think is a bit more realistic, a utilitarian way of thinking about it, which is: there’s going to be some harm. You cannot make the omelet without breaking some eggs. So fine, choose it, choose it mindfully, build it in, have a discussion. It could be democratic. We should all be thinking about this. That implies that people have to be around the table. There’s knowledge, there’s consent, blah, blah, blah, all that stuff. That was the only reason I was thinking about it. And the reason I liked it for the medical establishment and thought it might be useful for technologists is precisely because it isn’t enforceable.

00:18:50 It’s not about getting a driver’s license. You’re not allowed to drive your car unless you have a driver’s license and insurance, and if you don’t have those things, you could get arrested, sued, et cetera. This is more like: this is part of joining this community. It’s an ethos, and it’s a sign, I would hope, in the best engineering schools, the best business schools, et cetera, that we teach ethics. And indeed that is actually true in lots of professions. So lawyers have this, accountants have this, civil servants have it here in the UK. The civil service ethics code is really serious. I have several friends who are civil servants here and I really admire them. Their sense of commitment to something larger than themselves is part of their professional training. So I think it would be lovely, this is just my own take on it, for technologists to have that in their formation and for them to think about it a lot, to treat our careers as a vocation. Why do you get out of bed in the morning? What are you building?

00:19:57 That would be something that I think could help not just with all we design and live and create, but also for our relationship with everybody else, the users of our products, our customers, but who are also our family, our friends, et cetera. So it’s just an articulation of the value statement, but I don’t think we need to add more regulation to it in the sense of you can’t code unless you’ve done this thing or you can’t create something unless you’ve got, the world does not need that. You don’t have to be regulated to do the right thing. You could just decide to not be an asshole.

Jon Krohn: 00:20:34 Yeah, it’s this idea, even when you said the first line, I guess, of a typical Hippocratic oath, of first, do no harm. It’s interesting how with technology, often the primary aim is: first, make a profit. It’s like: first, generate ARR.

Stephanie Hare: 00:20:53 Well, is it though? I would say that’s for companies, that’s for a lot of people, sure. But a lot of people are not; they’re just tinkering, or necessity is the mother of all invention: the person who invented the washing machine or, I’m just looking around now at everything in my house, suddenly, the toilet,

00:21:14 any tool. Yeah, you usually do it to solve a problem where you’re like, God damn, I cannot take this anymore. I want scissors for left-handed people instead. I know the world is mainly right-handed, but there’s a whole crew of people who are not being served and they can’t scissor things without hurting their hands, so I should invent it. I think it’s often, hopefully, coming from that. Yes, there are people who always start with the profit motive first, good for them. But I think a lot of innovators are more, they’re problem solvers, and then they’re like, oh man, if I did this, I can make bank. Why not? There’s nothing wrong with that. But I think the best stuff comes from solving problems.

Jon Krohn: 00:21:53 It makes a lot of sense. A related topic that you’ve talked about before is this idea of tools, like forks, versus use cases, like meals, and that seems like a direction we can go in from the conversation that we’ve just been having. What kinds of situations in technology mean that we should be regulating the fork, the tool that we’re using, as opposed to the use case, the meal?

Stephanie Hare: 00:22:24 Yes. This was, again, I workshopped the book so much while I was writing it. There’s a whole group of long-suffering family and friends and colleagues who were, I think, dreading calls by the end. It’s like, would you like to be regulated in this way or that? They were like, could you just not call? I really thought about this a lot, though, because of this whole thing that regulation can stifle innovation. We hear this a lot, particularly in the United States, where regulation is often a dirty word. And yet, and again, sorry to keep going back to healthcare, but I just think about it mainly in terms of trust. Do you want to get into an airplane that does not meet certain standards for safety, for instance? Do you want to put your baby in a baby carrier into a car that does not meet health and safety standards, right?

00:23:14 Absolutely not. Of course you do not. Do you want to have a doctor operate on you who’s not using drugs that have been tested, tools that have been tested? The doctor has to be board certified, right? All this stuff we regulate all the time, and nobody’s saying that’s a hindrance to innovation. On the contrary, the regulation is like a standard guarantee, and it’s an accountability mechanism for failing to meet that standard. Fabulous. So I thought about that a lot, where I was like, forks, which everyone around the world at least knows what they are, even if they don’t use them. And we can make a similar argument, I’m sure, for chopsticks too. I just wanted something that you’re using every day, multiple times a day. That’s a tool. Fine. Do I need to regulate that, or do I want to regulate ways that I could use this fork?

00:24:04 So I started with that classic thing when you get hired by somebody: give me 32 ways that you could use a brick. I was like, give me 32 ways that I could use this fork. You could use a fork for eating, but I could also literally stand right next to you and stab your hand or your eye or something, or go for the jugular and murder you. So two totally valid use cases, one of which we definitely want regulated. You should not murder anyone or indeed cause bodily harm, a fork being one of the ways you could do that. So that’s what I want to regulate. No killing, no stabbing, none of the harming. We don’t have to regulate forks. Forks are free of regulation in this particular weird use case that I’m coming up with here, because I want you to come up with all the ways you could use a fork.

Jon Krohn: 00:24:52 In the UK, at least, knives are regulated, though. That’s interesting.

Stephanie Hare: 00:24:55 I mean, we’ve had a knife problem here, I know. I think about that a lot too.

Jon Krohn: 00:24:59 And I think that’s probably why in the US guns are so lightly regulated because so many people use them to eat

Stephanie Hare: 00:25:07 Well, I can’t talk about the Second Amendment. I plead the Fifth. An inside American joke. Yes, the First Amendment versus the Fifth, always so tricky when talking about the Second. Yeah, look, we all want to innovate, but we also want to be safe, and we just kind of want a hopefully nonviolent, non-killing life if we can. So what I was trying to come up with was ways to get people thinking about it. Because again, if you go down a technical route, people can get psyched out when talking about regulation. The lawyers get involved. It’s very messy for everyone. But take it back to how you would discuss this with kids; we’re all kids on the inside. For me it was forks. I’m like, I could do this with it, I could do that with it. One of ’em we want to regulate, one of ’em we don’t. And I want you to do all the other things you want to do with the fork, as long as you are not doing these things. And those things should probably actually be quite minimal, like the Ten Commandments: easy to remember and follow for a reason. So we probably want to regulate as lightly as possible, but when we regulate, we want it to be very clear. Everybody understands it, and it’s easy to enforce. It’s very clear if I’m stabbing you or not. This should not be ambiguous, hopefully.

Jon Krohn: 00:26:22 Hopefully. Yeah. So moving on from these kinds of general technology ethics ideas to things that are newer than what’s covered in your book. So talking about GenAI a bit: for example, long before the rise of GenAI, as a society we’ve grappled with the commercial transformation of our physical places, those that we visit as tourists or as civilians in a city, into uniform, inauthentic, and even low-quality but efficient experiences like fast food. Sociologists have called this things like Disneyfication or McDonaldization, and we go from having these diverse town centers to kind of strip malls. And with the advent of the internet, our digital spaces went from the serendipitous chaos, but unapologetically honest spaces, of the early internet, like GeoCities and MySpace, to today’s ad-infested, low-attention-span fakeness of social media, even news media. And so there’s a Canadian writer named Cory Doctorow, pronounced “Doctor-oh” or “Doc-tor-ow” depending on how long that person’s family has been in Canada, I guess. And the word that he uses I actually can’t say on this show, because, well, I can, I guess, but I can’t swear, we’d have to bleep it out. It’s a clean show, so I’ll call it enpoopification, but instead of poop, he used a word that rhymes with hit.

00:28:08 And I think you are familiar with that term. Your head was nodding as I started to mention Cory Doctorow. And so now in the age of GenAI, and that was a very long intro to my question, now in the age of GenAI, it’s estimated that in a few short years the lion’s share of internet content will be AI slop: low-quality, AI-generated content. Last week, at the time of us recording, the Harvard Business Review, I think, did a big, really popular story on AI slop even in enterprises, where so many of the emails and presentations that people are now being forced to go through are machine generated, and nobody even proofread it, or it’s not even necessarily aligned with the views of a human in the organization, but it’s wasting tons of time internally. So with that now in our midst, how can we, I dunno if you have any ideas, you probably have lots of thoughts on everything that I’ve said, so I should give you the opportunity to say that, but then, how can we break this pattern and preserve diverse, authentic, thoughtful conversations and experiences in the future?

Stephanie Hare: 00:29:06 Where to even start? I sometimes think we have to go back to a very basic question of what is the internet for? I loved your hearkening back to the halcyon days. I wonder if it was, though. My understanding is that the internet is largely a vehicle for pornography. So is the internet as useful as all of that? And no judgment on that. I’m just here to relay the message. So there’s that. Then there’s what everybody’s doing. I don’t know. As you were saying this, I was thinking about how I used to be a power user of Twitter before it was acquired by Mr. Elon Musk, and I sometimes felt bad about this. I needed it for my job. It was actually very useful for a very long time. And then there was a period where I think it was less useful and quite addictive for me. And then Elon Musk bought it and it really became less useful. But I stayed on it, to use a sort of smoker analogy. I was just using it.

Jon Krohn: 00:30:06 Yes, this came up in our research, not only the smoking analogy, but in the past you’ve likened Twitter to an old-fashioned smoking lounge in a Frankfurt airport: expensive, dangerous, and stinky. That’s a quote.

Stephanie Hare: 00:30:21 Jesus, where did I say that publicly? I’m definitely… I stand by that statement, by the way. I do. Yeah, because everyone’s just shouting and screaming at each other and there is so much crap, and it’s even worse now. So I was actually very grateful to Mr. Musk in the end, because I’d been wanting to kick my Twitter habit for a while, or my X habit for a while, and he made it so useless for me that it was very easy to delete my account. I was like, do you know what? I actually don’t need this anymore. Thank you. This used to be really important to me. And I am not a smoker, but I have friends in my life who are, and they’ve said that smoking was very difficult for them to quit. Sometimes it has a use for them when they’re stressed, or they’re out in a bar or a club and they really want to have a cigarette, and it’s hard for them.

00:31:05 And I sympathize, and I felt that way a little bit with social media, which we know is engineered to get you addicted. And I’m just saying, I sometimes think, we know that social media is bad for you. We know that being online is probably really bad for you and for democracy, et cetera. And I sometimes wonder, maybe the only way to get everybody offline is for the internet to just burn itself down. It becomes really bad, and then we’ll all just be like, do you know what’s going to be more useful in that world where it’s all burning and it’s Götterdämmerung time? Going back to books and the library and in-person meetings and seminars and education and the things that we used to do before this whole thing started. Some of us are old enough to remember what this world looked like. It was not so bad in some ways.

00:31:53 So I don’t know. I mean, you’re catching me on a sort of touchy night, clearly, but I’m just saying I’m not convinced that the internet has always been so amazing. It has been for obviously wonderful things. We’re talking digitally across the internet; it’s fabulous. But let’s not fool ourselves. There’s always been a bunch of stuff that’s really awful. The dark web is a complete cesspool, and it’s also never been equal for everybody. Some people have been having great experiences online and some people have been having terrible experiences online the entire time. It’s just that now all of us are, and AI has at least democratized what Mr. Doctorow has termed en-[beep]-ification. Spot on. So the question is, to move on to solutions: what do we want to do? What worked for us and what didn’t? So there’s Sir Tim Berners-Lee.

00:32:44 Tim Berners-Lee has a new book out where he’s talking about why he built the World Wide Web as he did, why he made it a public resource. He was not somebody who was building for profit first. He had a different motive. And thank goodness for him and his crew. He’s got a whole other plan. I’m sure other people will only get involved if they find there’s a way to make money from it. So there’s that. It’s an opportunity for us to completely rethink what we want from the internet if we make this version really bad. But it has been really bad for a long time.

Jon Krohn: 00:33:21 It looks like that book is called This Is for Everyone. Is that the book that you’re talking about? Yeah, The Unfinished Story of the World Wide Web.

Stephanie Hare: 00:33:28 And I think he’s an absolute visionary, and he’s thinking about technology in a really different way than what we’ve been discussing so far, driven by the profit motive. And maybe there’s somebody out there who’s got more cash, who wants to help back something like that. I am worried, though, based on human history, that we’ll probably have to completely destroy the internet for people to then be willing to do this. Because right now it still serves quite a lot of people. First of all, we all can shop online and do the stuff we want to do, but second, it’s very useful for businesses and governments and people who want to inflame emotions and all that stuff.

Jon Krohn: 00:34:06 I wonder if there are things that people could be doing as individuals, as individual listeners, to make their internet experience better. So for example, something that I’ve been doing for a long time that has vastly improved my experience of the internet is using an ad blocker, which is free and just there in your browser, and it stops a lot of ads. And there’s things like, I only go on Instagram in a desktop web browser, because they only serve ads in the phone version of Instagram. They have so few users of Instagram on desktop that they just don’t cater to that for advertisers at all.

Stephanie Hare: 00:34:50 That’s a useful tip.

Jon Krohn: 00:34:51 Even the Instagram shorts, what are they called?

Stephanie Hare: 00:34:55 I dunno, I’m not on Instagram. Isn’t it stories?

Jon Krohn: 00:34:58 Stories, yes,

Stephanie Hare: 00:34:59 Stories. Or is it reels?

Jon Krohn: 00:35:00 Exactly. Stories. That’s it. Yeah. The reels are just all the videos, I guess, that are in the timeline anyway. But yeah, so those are kind of helpful. Something else that I have personally really enjoyed, as something that’s been useful to me in terms of not getting stuck in this news cycle, this kind of fear-based news cycle that it seems like a lot of news reporting is based on. I think it might be a bit different in the UK, especially with outlets like the BBC and all of those ethical codes, like the ones for the civil servants you talked about earlier in the episode. It does seem like you’re getting a lot more information relative to junk, or just stuff that’s designed to inflame your emotions. But in the US, where I live today, a lot of the news stations are actually like this podcast.

00:35:59 They’re ad supported. And so the objective is to keep people engaged as much as possible. We try on this podcast to do it with great, informative conversations, but a lot of news shows have learned that the way to keep people engaged is through fear and emotions, and that seems to be good for their bottom line. So, working my way to a solution here: one thing for me is that I have a physical subscription to The Economist, and there are still ads in this physical Economist magazine that I get. But I think because you are a paying customer, as opposed to the outlet just relying on ads, I think you’re getting deeper coverage, more thought on things. And, well, actually, we were talking about how to make the internet better, and I’m saying not by getting a physical copy. So I don’t know. But anyway, maybe me giving those couple of examples gives some ideas for ideas that you might have for listeners on what they can do to make their experience on the internet better.

Stephanie Hare: 00:37:11 Oh man. I’m super nervous about recommending anybody doing anything. I don’t particularly feel I have an example of a life to follow. What I would just say for myself, for what it’s worth, is that I try to really be intentional when I’m online. It’s researching, usually, and my research tastes and interests are super eclectic, so I’ll be looking at all sorts of stuff. And I often used to use social media as a bulletin board. Very rarely would you get my views on anything. I was more just posting articles for myself that I would then go back and look at, or wanting to bookmark them. And that started from my time as a political risk analyst, when I had to cover the entire European Union and the ECB and the European Commission, and then EU relations with the wider world. I needed a place to file all of that. And at that time, back in 2010, Twitter was super useful, and I carried that on when I started doing technology work full time.

00:38:13 Now I want my brain back. And that’s been a very active and intentional process. So all of the apps are not on my phone anymore. If I want to go online to surf around, I have to do that in the browser. If I’m using Google, I go to the website of a newspaper and I will read the newspaper. I don’t want the social media-curated experience of that at all anymore. And I’m trying to stay away from anything that makes me angry, which is difficult. I’m still a US citizen, and I’m living in the UK, where we’re having some very interesting political conversations at the moment. But seeing things that are inflammatory makes you angry and upset. Sometimes that’s an appropriate response to stuff, but as you’ve said, outrage every day is just making certain oligarchs really rich and making the rest of us really stressed and anxious and not working with our communities to solve problems.

00:39:12 So I’m trying to just be away from that. I spend a lot of time reading books and in libraries or in archives or out with people. I spend more time now with people, I would say, than I ever used to before, and that is a very deliberate correction. I travel a lot for my job, so when I say I’m hanging out with people, it’s not just, hey, with my friends, checking in, because that would be a bubble. So I spend a lot of time particularly going around France and Germany; that’s been my focus for the past couple of years, along with here in the UK. I go back to the US a lot. I’ve been flying to the Middle East a lot to have conversations there. Have good research opportunity, will travel. I want to be on the ground, because I would often see, and my mom would call up and be like, oh my God, where you are,

00:39:55 it’s so violent. I’m watching it on CNN. I’d be like, I’m literally here. I don’t know what you are watching. CNN has found the one place where somebody has set a garbage can on fire, but I am here and everybody’s chilling out. And so I felt a bit like President Trump, I think, is feeling at the moment, where he’s like, is Portland in a civil war, outrageous scenario, or am I watching something from 2020? Am I seeing reality? And it must actually be very hard for a president, to give him the benefit of the doubt: he can’t just go and fly to Portland and check it out. He’s relying on his people. I don’t have that problem as a researcher. I can go anywhere. And so I’m trying to do that much more, and it’s time consuming. It’s expensive. It means my research is slower, et cetera. But it means I am really grounded now in what I’m seeing and hearing, and I have other people on the ground as part of my network who are telling me, they’re like, no, that’s not what’s happening here, or, yeah, you need to get over here and check this out.

Jon Krohn: 00:40:53 I like that answer a lot. It had lots of useful tips in it, very analog. I like how you started it with, I don’t know if I’m going to have any tips, but then you ended up having tons in there.

Stephanie Hare: 00:41:01 But are they tips? Is that actionable? Go travel around and just go see stuff, but go take a look around for yourself,

Jon Krohn: 00:41:09 Just…

Stephanie Hare: 00:41:10 Coach. Go see it. I’m going to be in Chicago soon. I’m super interested to go and see what’s happening there, because as a former Chicagoan, albeit of a suburban nature, but still, Chicago is always in my life and in my heart, and I want to see: are there federal agents running around with masks, pulling people off the streets? That’s what I’m seeing on social media, but I will be able to go see it for myself, and perhaps film it for myself, very soon. And as a citizen, I need to decide how upset I’m getting or not, and I need to know that what I’m seeing is what I am seeing. That’s what I think is tricky now because of AI: we don’t know if what we’re seeing is real anymore. Welcome to the metaphysics part of the show.

Jon Krohn: 00:41:55 The travel thing could be tricky for some people and for some listeners, but a lot of the other tips that you had, spending more time around people, avoiding social media apps or news apps on your phone, going to the library, reading books, these are all things that everyone can do. That’s great. And so on a related topic, you previously said, and this is a quote from you, that the majority,

Stephanie Hare: 00:42:24 It’s like having a fight with your partner where they’re like, you said, you did what?

Jon Krohn: 00:42:30 You signed off on this statement,

Stephanie Hare: 00:42:32 Your Honor,

Jon Krohn: 00:42:33 You said that the majority of people on this planet are not involved in cryptocurrency and they’re not on Twitter, and yet we’re hearing a hugely disproportionate amount about cryptocurrency and about whoever’s on Twitter providing their views. So as a broadcaster who communicates complex AI issues and real-world priorities related to them, like climate change and the future of work, how do you counter sensationalist narratives, maybe in those personal conversations that you have, but also maybe on air?

Stephanie Hare: 00:43:04 I mean, I’ve got a lot of very constructively critical friends who are not involved in tech at all. I’ll be like, oh my God, this is happening with Nvidia. And they’re like, so what? Literally, I don’t care. I’m like, okay, I’m going to stop talking at this dinner party now. They don’t care. And that’s genuinely really helpful, because I have to remember, and I think anyone working day in, day out in whatever field they’re in, we happen to be in technology, but I’m sure that cardiologists have their own obsessive favorite topics too that no one else is aware of, but that for cardiologists are a really big scandal or whatever. I dunno why I’m picking cardiologists, but…

Jon Krohn: 00:43:41 It’s also, it’s kind of funny that right now for a lot of cardiologists, it is probably Nvidia and machine vision algorithms and things like that.

Stephanie Hare: 00:43:49 Cliché. No, I was just thinking, what is inside baseball? So it’s like, journalists love to talk about it. Also, what’s the easy story to get? So the companies are pumping up PR, or it’s just lots of money or whatever. But again, step out into the streets of your neighborhood and be like, hey, Nvidia is putting 5 billion into Intel, a hundred billion into OpenAI, what do you think about GPUs? And they’re just like, I’m sorry, could you literally just stop talking? And you’re kind of like, ah, okay, if I walk down the street here in Hackney, is this relevant? And if you ask people, because then you can stop talking and be like, you’re right, please tell me, what would you like to know about technology? Which is a thing that I do a lot with people. What would you like us to put on the program?

00:44:34 What would you like me to research for you? What is an important problem that you’d like to see solved? And is technology part of the solution or not? Maybe it isn’t. People are like, look, what subjects should my kids be studying in school, and potentially, if they were to go into university or some sort of post-age-18 educational environment or work environment, what do they need to do to get a job? Or, hold on a second, I’m already working, I’m the parent, or I’m 28 or whatever, and now people are telling me I have to either use AI or “be exited,” to quote Accenture, which is what these people have done to the English language. Use AI or lose your job. They’re like, how do I do that? How do I stay relevant? What should I be doing? What should I not be doing? If you’re talking to companies, it’s like, there’s all this money going around with AI, but I need to know: where do I invest my money?

00:45:24 I don’t have an endless amount of money to invest, so where is the best return on investment for my organization, my sector? What’s everybody doing? What are the risks? Those things. That is where it gets helpful. Again, talking to doctors about how they’re using AI versus talking to a designer about how they’re using AI. AI is such a broad term. It’s that whole thing: where you stand depends upon where you sit. So depending on what you’re doing in your life, you’re going to have really different questions and concerns. So as a broadcaster, my job, which I’m sure I fail at daily, but I do try, is to hold all of that and remember it when I’m on the radio or on television or interviewing. I’m trying to ask questions that I feel like the audience would want me to ask.

00:46:11 If I’m lucky enough to have a guest or to have been invited to report, what would everybody be like: yes, yes, she’s asked that question, or, oh, that’s a really good question I hadn’t thought about, but that’s quite useful? And then they might be able to take actionable insights and tell them to someone in their lives, and it’s helpful. Getting into some of the inside baseball stuff can be really fun if you’re having a wonky conversation, but it’s less useful for most people, and everybody has to decide where they are. There’s a role for inside baseball, absolutely, and I say this as someone who loves baseball. But the kind of work I’m trying to do is talking to the widest number of people around the world, sharing findings, and I ultimately would like this to be informative, actionable insights, possibly entertaining, but I’d rather it be more informative than entertaining.

Jon Krohn: 00:47:06 I mean, I think you’re accomplishing that in everything that I’ve seen that you’ve done.

Stephanie Hare: 00:47:11 I’ll find out. We’ll find out if you get zero viewers or listeners for this. It’s hard. Getting out of the way is also quite a good tip. I will often ask people if I’m interviewing them, what is a question that you would love, or do you have any questions that you’d like me to ask you? You can go in as an interviewer thinking, I’ve swotted up on this person, I’ve studied the area, I’ve got my questions, and you have no idea where that person is on that day, or what they’ve been working on that they haven’t gone public with yet, what’s on their mind, and also how few people will ever actually just ask them. So if you ask them, then they’re like, actually, I want to tell you this, or I’m worried about this, or I just came from this meeting and I’m just thinking it through. Some of your best broadcasting moments can come if you invite somebody to just go, what’s on your mind, and give them the space. And that means you have to zip the lip, which I shall now do.

Jon Krohn: 00:48:05 That’s a really great idea. I should probably just ask that kind of question more, to start the podcast with: what’s on your mind?

Stephanie Hare: 00:48:11 Yeah,

Jon Krohn: 00:48:11 I like that. “What’s on your mind?” Listeners, look out for that.

Stephanie Hare: 00:48:13 What’s something that you would like people to ask you that people never ask you? Some people have fascinating answers to that question, and you’re like, oh my God, they just don’t feel heard.

Jon Krohn: 00:48:24 Do you have something, a question that you wish I had asked?

Stephanie Hare: 00:48:26 No, mine is just, I need more sleep. No, because I’m not in the mode at the moment of, I don’t have a product I’m selling. First of all, there’s nothing I’m flogging where I’m like, actually, yes, I’d like to tell you about my podcast and my Substack. I don’t have these things, and I’m also right now working on a bunch of stuff that is not about technology, so I don’t think it’d be really relevant to this audience. But yeah, maybe that actually would be a thing: how does a researcher stay fresh and creative? Because you are constantly having to digest everybody else’s stuff. There’s a risk that you just become an aggregator and an amplifier of other people’s thinking. So if you want to truly contribute in an original way, and when you do something like a PhD, you have to say it: this is where the body of knowledge is, and this is the gap that I have identified that is missing, that I’m going to now work on, and I shall attempt to fill the gap.

00:49:34 And if that’s blessed by the powers that be, they give you some funding and off you go, and you come back four and a half years later, exhausted, with forehead wrinkles, and they’re like, okay, have you filled the gap or not? And that is your thesis, and you defend it, you get your stamp at the end or not: job done, good luck, or go back and revise. That should, I hope, be where a researcher is approaching this from, which is: how am I advancing knowledge? I don’t know how you do that if you’re just busy aggregating everybody else’s knowledge all the time. So on the one hand, you’re doing a permanent literature review, if you will. You’re constantly having to be aware of everybody else’s thoughts. But then for you to be like, right, I have finished a bunch of research, I now have a clean runway.

00:50:20 What is not being looked at? I’m looking over here at a blank wall right now, and I’m like, what is not being looked at? And over there, I’m like, that’s all being looked at. There’s this whole space here. What are the areas that I think are underserved and that would be useful? Am I the person to do it? Do I have the skill set to do it? Who would I have to work with to do it? Then you get into the hows and stuff, and I try to play in that space. It’s very hard, because as a researcher you’re often very lonely. You’re like, I’m going to go work on the thing that I don’t think other people are working on. It’s like you and two other people that you find on the planet who are interested in it. And then eventually, if you’ve done your job and you get there before everybody else does, you’re like, well, look at this diamond. Then everybody piles in. But by the time everybody wants to pile in, I’m gone. I’m long gone, onto the next underserved thing, I hope. That’s the goal. That doesn’t mean I achieve it, but that’s at least the spirit with which I attempt to wake up every day.

Jon Krohn: 00:51:20 Nice. I’m glad you proposed this meta question of what question should I ask. You had a great one and a great answer. Certainly something that I struggle with a lot myself is, we do 104 episodes a year of this show, and so a lot of it is just aggregation and letting other people have the floor. And so this idea of, what is my contribution?

Stephanie Hare: 00:51:46 I would actually push back on that and say you’re not just aggregating. Absolutely not. You’re part of a conversation. You just pulled something out of me that I hadn’t thought of before. I was like, what would I like to be asked? You’ve created a space and you prompt questions, and you’re doing that 104 episodes a year. That’s huge. You will have created things, I guarantee, that are original and new for both you and your guests, and then just your own output will have its own sum that will be unique. So there’s room for that, and we see that in research all the time. It’s like, are you interpreting and offering a new interpretation, or are you also going out and being like, look, I found a new element for the periodic table, right? There’s room for both, and we are not all able, or even wanting, to go find that new periodic table element. For some of us it’s like, I’m actually going to do a new interpretation of the works of Shakespeare, and everything in between. So I think what you’re doing is way more than aggregating, for what it’s worth.

Jon Krohn: 00:52:47 All right. Well, thank you, Dr. Hare.

Jon Krohn: 00:52:49 So I promised, in one of the first minutes of this episode when we got talking about the weather in the UK, that later in the show we would get to some climate change-related content, and here it comes. Earlier this year you were featured in an article with a funny title that I don't really understand; it has something to do with imaging informatics. The title of this article is "Radiology responds to launch of UK AI action plan."

Stephanie Hare: 00:53:28 Am I in this?

Jon Krohn: 00:53:29 Yeah, in it you warn that the UK's energy grid is not fit for purpose to support the government's ambitious AI action plan. So how can policymakers strike the right balance between the excitement of AI growth and the very real infrastructural limits of energy and sustainability?

Stephanie Hare: 00:53:49 I mean, first of all, I really want to see this article, because I'm very worried about this. It is true, though: the UK electricity grid is not fit for purpose. And that's the whole thing. We have a prime minister who announced at the beginning of this year that we were going to mainline AI into the veins of the nation, which isn't a very British way of phrasing it, by the way. I feel whoever phrased that came from North America, or had recently visited and was feeling really pepped up, and it sort of left everybody going, what? Now, we've just had the US-UK tech deal announcement, when President Trump came over accompanied by an entourage of US tech leaders, and they announced a lot of money being invested in the UK, largely, it must be said, for AI infrastructure. Which we do need; everybody needs it. There's not a country on earth that has enough AI infrastructure, and let's just keep it simple to data centers for now, to do what they want to do, which is why President Trump and entourage also visited the Middle East earlier this year.

00:54:53 I happened to be there, weirdly, at the same time, so it was quite interesting to see what was going on and the energy effects in Qatar, the United Arab Emirates and Saudi Arabia, because they need more; everything's just more in this field. So that's fine, we want that, but then we have to get down to the infrastructure supporting the infrastructure. AI infrastructure, data centers in this case, sits on top of our existing infrastructure. So let me speak of my lovely adopted home here in the North Atlantic, the UK, which has the highest electricity prices in Europe, as we all know, living here and watching our bills go up every year. What is it going to mean to stress that grid with some big energy-guzzling data centers? It's great to say we're going to build them, but how? Now we have to start getting dirty, literally dirty.

00:55:51 We’re going to dig it up. Where’s it going to happen? How are we going to plug it in? Are we going to have rolling blackouts? What’s the plan? Is it oil and gas powered? Is it nuclear powered? Is it renewables powered? Right? So what’s happening there? Then there’s the fact, if you live in the UK, you will know that we’ve been having ever increasing heat waves. And I have mentioned we had I think a record four this summer and it was actually quite scary. I mean you really see the effects in nature here, but what that also means is temperatures are rising and we’re not getting enough rain. Something I believe that’s happening in the US as well, where I think 48 out of 50 states were in drought. So you’ve got a technology that is heavily water intensive, heavily energy intensive here in this country.

00:56:36 you have a grid that is not fit for purpose. It's very old; massive, massive investment will be required to get all of this fit for purpose, in a country where the government, currently having its party conference, is saying it's going to have to break one of its promises not to raise taxes, because we're broke and the bond markets are punishing us. Not a phenomenon that is unique to the UK. I was just in France recently; they're having their own problems with this. I was in Germany just yesterday: zero growth for years now. This is the science and technology powerhouse of the continent, the biggest economy in Europe. Zero growth. Where's that going? No plans, by the way, for AI infrastructure investment in Germany, so one to watch. France wants to, with nuclear, but can it? So you're kind of looking around going, it's all very well and good for the CEO of Nvidia,

00:57:31 Jensen Huang, to show up here and make his statements, and it's great, it's super exciting and it grabs all the headlines, but then you're like, how? Sorry, where are you putting all that? In the southeast of England, where the majority of the population of this country lives? Have you seen what it's like to try and get anything built here? Good luck. We can't even build houses; we've got a housing shortage. This isn't a negativity thing. It's that a dream is going to meet reality, and there's going to be a moment. I'm here for it, as a researcher and as a citizen. I'm like, okay, curious as to how we're going to square that circle. And again, the promises have been made; diarize for five to ten years to see how many of these things actually get built. We have a terrible record for huge infrastructure projects here.

00:58:21 I'm very curious to see how that's going to work out in the Middle East, for different reasons. Putting the world's biggest data center within reach of missiles from Iran is a very interesting move politically, right? So we can talk about that; we haven't even mentioned the national security dimensions of this. I'm just talking here in the UK about whether we can actually do it, and then how this works for electricity and water. You want to put it in the Middle East? You are going to have an entirely different problem. So that makes you then ask: who's in a special position to do this? Who's been really good at infrastructure projects? Who's got the vision? Who's got the political will? Who's already looking at leadership in emerging technologies, in green technologies?

Jon Krohn: 00:59:08 I think I know this one.

Stephanie Hare: 00:59:10 I don't know. I mean, I feel like I don't want to answer it. I'll throw it back to you.

Jon Krohn: 00:59:15 Well, I...

Stephanie Hare: 00:59:16 I know what I would put down if we were in a pub quiz.

Jon Krohn: 00:59:18 A novel I'm currently reading is The Three-Body Problem. Am I going in the right direction?

Stephanie Hare: 00:59:25 Yeah. Look, "may you live in interesting times" is a curse for a reason, but we will get to see it play out. We will get to see it play out, and it's fascinating. I was intrigued by Emmanuel Macron back in February in Paris at the AI Summit, when he was like, the United States says "drill, baby, drill," but France has got its nuclear fleet, of which it is very proud, so: "plug, baby, plug." We've got all this electricity in France. And Macron in particular, although he will not be in power for much longer because his term is running out, not for any dramatic reason. France has its nuclear electricity solution. Can it decide to think bigger and make that part of a European solution? If Europe could work together, is that an option? Does the UK get some action on that despite Brexit? Again, when there's a will, there's a way. The engineering problem is, in some ways, the easy part of this. The human, political, social problem is the far bigger one, and thus we return full circle to my book and to wicked problems. That wonderful concept that so many of us, when we learn it, go, oh, finally, a term for this thing I've been seeing my entire life and just didn't know the word for: a problem with multiple causes, where every solution you pose creates yet more problems. This is where we're at right now.

Jon Krohn: 01:00:52 This has been a fascinating part of the conversation, so it pains me that we're actually getting to a point where we're starting to wrap up. But my final technical question for you follows on nicely from the last topic we just discussed: your expertise, the stuff that you've been talking about and broadcasting on. Oh, by the way, I looked up in more detail that article we had your last quote from, the one that started with "radiology." What was it? It was "Radiology responds to launch of UK AI action plan." You were not interviewed for that article, which was on a relatively minor blog; they pulled something that you had said on BBC News.

Stephanie Hare: 01:01:32 That's totally fair, by the way, if they wanted to use it. We love radiologists. I was just like, I don't remember talking to this publication. Are we getting invented quotes? Because that's happening now, right?

Jon Krohn: 01:01:43 For sure it is. Gen AI definitely does invent quotes, and I started to get a little nervous as I was reading it. But you did seem to recognize the quote as I got to it. Still, it would've been better for us to say that you said it on BBC News, not in this random blog. So, as you've discussed on BBC News, your expertise spans not just AI but also other frontier technologies like the metaverse and cybersecurity. If you had to prioritize one of these as the most urgent frontier shaping human futures over the next decade, which do you think it would be?

Stephanie Hare: 01:02:18 If I were only allowed to work on one, and it could be the one that I most want to work on and would feel most proud to contribute anything to, it would be the climate crisis and biodiversity loss. By which I mean fixing them, not contributing to them. Yeah, I am very worried about that. I'm very worried about that. I wish we were talking more about that, not on this podcast, I mean in our society, and because we have so many other problems at the moment, I think it's not getting enough attention. But if you are even remotely interested in nature and plants and animals and the world around you, you cannot help but notice these changes, and science is being so politicized and defunded, which I really hate. I want to see this be the number one priority. It affects everything. AI could be part of this, by the way. Right now I don't think it is, but it could be, so I'm sort of adjacent to that. But yeah, I would like to see us be far more respectful members of this planet as a species than we've been.

Jon Krohn: 01:03:37 It is probably marginally more important than having even larger gen AI capabilities manufacturing porn to flow around the internet. Maybe marginally more important than that.

Stephanie Hare: 01:03:52 Yeah,

Jon Krohn: 01:03:53 No, absolutely, I think you're spot on there. I wasn't sure what you were going to say, but I think it makes a lot of sense that that would be a top priority for you. And it's not the most fun note to end this on, so...

Stephanie Hare: 01:04:11 It could be, though, if we were like, what would that look like?

Stephanie Hare: 01:04:14 What would it look like if we actually backed research and science and cared about the environment? If we parsed every single problem through that lens, wouldn't that just be awesome? I want to feel happy too. Let's end this on a high. What would that look like?

Jon Krohn: 01:04:32 I think it's kind of fun to think about what the solutions would literally look like, too. If you're driving on the highway and there are lots of crossings over it for animals, that's visually pleasing and just enjoyable to think about. When you fly into Newark Airport, as a lot of people do when they fly internationally into New York, the journey from Newark, New Jersey to New York City is just this awful industrial wasteland and marshland. Then, traveling in some other countries, like Switzerland or Germany, you see so much more nature and biodiversity. So maybe that's kind of a fun way to visualize how things could be.

Stephanie Hare: 01:05:27 I know. I was just in Switzerland in August for my summer holiday, and you're just like, my God. Once you see this and then you come back to where you live, you're just like, there's another way, people, there's another way. I was taking videos and sending them to family and friends all over the world, and, with apologies to the Swiss, they've been like, we know. But it is incredible. And there are countries that are working on this, and this is what kills me: so many people care about this. This is one of these areas where I'm like, this is not being served. This is an underserved area, both commercially but also just as a human being. I do not know a single person who wants to breathe dirty air, or who would love to look at an industrial wasteland instead of a garden or a park or whatever, who's like, oh yeah, we're killing plants and animals at a horrific rate,

01:06:21 I'm fine with that. No. And children, again, children are born understanding that they're part of something bigger than themselves, and they're curious about it and they love it, and it's just like, what the hell happens to people that we just don't care by the end? So luckily there's a bunch of good people who do care, and I think supporting them is going to be a big part of it, and keeping it front and center. I do try to do that in my work. It's one of the areas that I hope to be more focused on in the future; it wasn't always, in the past, and this has been something I've also had to come to understand. My book barely mentions it at all, which makes you go, what the hell? But again, I was writing it in a pandemic. It still blows my mind when I go back and look at it, and I'm like, this is a massive unspoken thing.

01:07:10 It just wasn't on our minds. So fair enough, there's no judgment here; I don't want to nail anybody if they're not thinking about it yet. I'm just saying, for myself personally, I see it. I'm old enough now to see the changes, to have lived the changes, and I've traveled enough now to look and see how other people are doing things. And it's not even that we have to come up with the better solutions. They exist; people pioneered them. We just have to do it. So maybe that's part of it as well: sharing what other people have done that works and makes your life better. Why would you not want to live a better life?

Jon Krohn: 01:07:43 Love that. Great soundbite. So quickly before I let you go, do you have a book recommendation for us, Dr. Hare?

Stephanie Hare: 01:07:50 Sometimes, as a technologist, you get too much tech and you need to read something else. What I read last Christmas was Richard J. Evans's Third Reich trilogy, which I shall demonstrate here. The one that I would like my fellow Americans to read right now is The Coming of the Third Reich, because it's actually quite useful and relevant for today. I hope that lands as a political statement without my saying why. The trilogy then goes into what happened when the Third Reich got into power, and then what happened when it went to war. First of all, I just think it's useful because people constantly reference Nazis online without actually knowing their Second World War history other than from films. That's fine, we all like the films, but Professor Richard J. Evans has done God's work in actually slogging through the archives, reading all the literature, and writing it in a way that, if you read nothing else on World War II, read this book. While you read it, you will probably mark it up as I did with, my God, this is happening now. My God, I did not realize that the first people they were rounding up in Germany were in fact Germans. Interesting. Quite useful. So yeah, I would say sometimes put down the book about AI for a moment and pick up, frankly, any book, but I really can recommend these. They're superbly written by one of the United Kingdom's top living historians.

Jon Krohn: 01:09:13 Great recommendation, thank you. And I think actually probably most of the book recommendations we get on this show are unrelated to our field, so thank you for that.

Stephanie Hare: 01:09:22 What would you recommend to me before I let you go, Jon? What book should I read?

Jon Krohn: 01:09:26 Oh my goodness, that puts me on the spot. I mean, this is kind of an AI book in a way, but it's fiction. It's Kurt Vonnegut, from the 1950s. His first novel, called Player Piano.

Stephanie Hare: 01:09:41 Player Piano?

Jon Krohn: 01:09:42 Yeah. And Kurt Vonnegut is dark but funny, and it is pretty stunning how he nails the moment that we're in today with gen AI, in a book from the 1950s.

Stephanie Hare: 01:09:56 See, I would never have even known about this. I'm so grateful to you for telling me that. I'll read it.

Jon Krohn: 01:10:01 Read it. There you go.

Stephanie Hare: 01:10:03 Rendezvous in a couple months and we'll have book club.

Jon Krohn: 01:10:05 Sounds good. And then, yeah, final thing for you, Stephanie: after this episode, other than catching you on BBC News, where should people be following you online? Or in a library?

Stephanie Hare: 01:10:19 You should be reading Professor Evans's books rather than following me online; he has far more to tell you than I ever could. I am on LinkedIn if you feel the need to be aware of things. I might be randomly posting on LinkedIn, mainly job adverts that other people are posting, because I feel like all of us should always be aware of what's going on in the job markets; if I see something that's useful for other people, I'll post it. I'm experimenting with Bluesky, but I might not be on it for much longer. I just wanted a sort of Twitter alternative, but I don't know. So yeah, I'm not a good person to follow on, sorry, on social media. I would just say follow me by reading books that you would enjoy. Let's create that climate together, and hit me up if you think there are any good books I should be reading.

Jon Krohn: 01:11:08 I love that. Thank you so much. This has been a great episode. So much fun. So interesting.

Stephanie Hare: 01:11:13 Thanks for having me.

Jon Krohn: 01:11:14 Thank you, Dr. Hare. Yeah, nice one. In today's episode, Dr. Stephanie Hare covered how technology ethics can be defined as the practice of maximizing the benefits and minimizing the harms of any tool we build or use. She explained why a Hippocratic oath for technologists could serve as a guiding ethos, focusing on regulating harmful use cases rather than the tools themselves to avoid stifling innovation. She talked about how the rise of low-quality, AI-generated slop may be the final stage in the internet's degradation, potentially forcing a return to more intentional, real-world interactions. And she talked about why ambitious national AI strategies are on a collision course with the real-world infrastructural limits of aging energy grids and the immense energy and water demands of data centers. As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, and the URLs for Stephanie's social media profiles, as well as my own, at superdatascience.com/935.

01:12:17 Thanks to everyone on the SuperDataScience podcast team: our podcast manager, Sonja Brajovic; media editor, Mario Pombo; partnerships manager, Natalie Ziajski; researcher, Serg Masís; writer, Dr. Zara Karschay; and our founder, Kirill Eremenko. Thanks to all of them for producing another stellar episode for us today. For enabling that super team to create this free podcast for you, we are deeply grateful to our sponsors. You, listener, can support this show by checking out our sponsors' links, which you can find in the show notes. And if you'd ever like to sponsor the show yourself, you can make your way to jonkrohn.com/podcast to find out how. Otherwise, help us out by sharing this episode with people who would like to hear it or view it, review this episode on your favorite podcasting app or on YouTube, subscribe, obviously, but most importantly, just keep on tuning in. I'm so grateful to have you listening, and I hope I can continue to make episodes you love for years and years to come. Till next time, keep on rocking it out there, and I'm looking forward to enjoying another round of the SuperDataScience Podcast with you very soon.
