Jon: 00:00:00
This is episode number 497 with Benjamin Todd, founder and CEO of 80,000 Hours.
Jon: 00:00:12
Welcome to the SuperDataScience Podcast. My name is Jon Krohn, chief data scientist and bestselling author on Deep Learning. Each week we bring you inspiring people and ideas to help you build a successful career in data science. Thanks for being here today, and now let’s make the complex simple.
Jon: 00:00:42
Welcome back to the SuperDataScience Podcast. We are exceptionally lucky to have the brilliant and inspiring Ben Todd as our guest on the show today. Ben has invested the past decade researching how people can have the most meaningful and impactful careers. His research, however, is not purely academic. It is applied to massive effect via his charity 80,000 Hours, which is named after the typical number of hours worked in a human lifetime. The Y Combinator-backed charity has reached over eight million people via its richly detailed, exceptionally thoughtful, and 100% free content and coaching.
Jon: 00:01:22
Thousands of people are known to have dramatically changed their career paths in a more personally meaningful or globally impactful direction thanks to guidance from Ben and his team. Yours truly is one of them, and we'll discuss that on the show. Today's episode should be of great interest to anyone, since I imagine basically every person aspires to have a more meaningful and impactful life. To that end, Ben will share with us an effective process for evaluating next steps in our careers, a data-driven guide to the most valuable skills to obtain regardless of our profession, as well as specific impact-maximizing career options that are available to data scientists and related professionals, such as machine learning engineers and software developers. All right, you ready for an especially inspiring episode? Let's go.
Jon: 00:02:18
Ben, welcome to the SuperDataScience show. I am so excited to have you on. We haven’t caught up personally in years, so we’re going to be doing a bit of that on-air, but I promise the audience is going to love it because you’re fascinating. So first off, how are you doing? Where in the world are you?
Benjamin: 00:02:35
Great, yeah. I am coming to you from the 80,000 Hours Podcast basement, it’s a kind of windowless room where we record all of our podcasts with lots of nice sound padding.
Jon: 00:02:47
Yeah. People should check out the YouTube version of this, just to see, it’s one of the nicest spaces that we’ve had, that any guest has had for recording yet. So nicely done.
Benjamin: 00:02:55
Well, yeah. Actually, it used to be boxes behind here, but I had to get them cleared away for… We've put like a fake plant back there, like a temporary plant.
Jon: 00:03:05
Oh, it looks good. Oh yeah, I guess in the windowless room, fake plants are the only thing that would do.
Benjamin: 00:03:11
Well, sorry, no, it’s actually a real plant, but it’s not normally there because yeah, like you say, it would die pretty fast I think.
Jon: 00:03:19
The plant has been planted there. Great. So we've known each other for a long time, more than 10 years. So I finished up at Oxford in 2012, and it was in my final years there, so maybe around 2010, that you and I met, because we were both in this investment challenge. So a big investment manager in London called Orbis ran this competitive process where they invited applicants. And I don't remember if we had interviews, but they selected a group of 10 people. And then it was a really cool program. It was actually one of the most interesting parts of my entire PhD. And it was totally unrelated to the PhD, but working with these 10 bright people from Oxford, from all over the campus, for 12 months we managed a simulated portfolio. But based on how your simulated portfolio performed, you made real money with no downside.
Benjamin: 00:04:27
Yeah. It’s [crosstalk 00:04:28] to put it all into like a very high-risk thing, isn’t it? And just…
Jon: 00:04:32
Yeah, there was an upper limit on how much you could earn, but it was a really cool experience. And for me, it was a useful one for my career, because my immediate job after my PhD was working as a trader at a hedge fund, a different kind of thing. So Orbis was doing long-only holding of stocks, whereas at the hedge fund, by its very nature, we were doing sub-second-type trading. And it was impactful for you too, right? I mean, I think you got a job offer from Orbis, right?
Benjamin: 00:05:11
Yeah. I did some internships with them and that might well have been what my career would have been if I hadn’t gone to work at 80,000 Hours instead.
Jon: 00:05:20
Right, and yeah. So then, following almost perfectly along with this narrative, after doing trading for two years, I was like, "I don't think this is for me. I don't think that earning money for its own sake is something that I can keep doing and that I'm motivated about." And at that time I was looking to make some kind of impact in my life. And so I would go on Saturdays to the trading floor and I would watch videos on YouTube of Doctors Without Borders, because for a long time in my life I'd had this idea of working as a doctor. And after my PhD, a medical sciences PhD, I'd kind of decided, no, I'm not going to do that, I've been at school long enough. Let's do the hedge fund thing.
Jon: 00:06:11
And so I quit working at this hedge fund after two years. My intention was to move back to Canada from New York and finally study medicine. But in those few months, I had a call with you. It was, I think, early days for 80,000 Hours, you guys were starting to figure out your process. And so you very kindly offered to take an hour and help me reflect on my career, and what I'd like to do, and what my options were. And I remember it because I was sitting in an apartment in the East Village that I only lived in for a few weeks, because I, last minute, decided maybe I'll stay in New York a little bit longer and see what else is here instead of doing medicine.
Jon: 00:06:56
And I remember having the conversation in that apartment and it was hugely beneficial for me. Out of it came the decision to become a data scientist, to try that out, which I've now been doing for seven years and absolutely love. So thank you, Ben. That all came out of that… I mean, it didn't all come out of that conversation, but it was a huge part of my process in that phase. We're going to talk, in this episode, about your 80,000 Hours process and give people tons of career guidance, both generally, as general structures, as well as specific advice for data scientists on how they can make the biggest possible impact in their careers. But anyway, thank you very much. It's actually because of you that I became a data scientist, became host of the SuperDataScience Show, and that anybody's listening to this today.
Benjamin: 00:07:51
Yeah, that’s awesome. And I didn’t know all that. I just, yeah, for my own curiosity, what specifically was it? Because yeah, we noticed that around that time, a lot of people who’d done science PhDs and wanted to do something else, they were starting to switch into data science and that was quite a popular option that was emerging. And so I guess we were telling a lot of people about that at that time. Was that it, or was this something else?
Jon: 00:08:13
Well, it’s interesting. I remember you took notes in a Google Doc in front of my eyes while we did that session and I still have that, I’m sure, so I could look it up. And also, you’re a very fastidious person, so I’m sure you have that filed away somewhere neatly-
Benjamin: 00:08:33
Sure, yeah. It would be in Drive somewhere.
Jon: 00:08:34
Yeah, so we could look up the exact things, but going from memory, one of the great draws of going into data science, as opposed to, say, doing a medical degree, was that I could start making an impact right away, instead of studying for the MCAT for a year, then doing a minimum four-year medical degree. And then you've got all the residency stuff before you really start being able to be on your own. And so data science was this opportunity to get started right away, having an impact, using a skillset that I already had.
Jon: 00:09:08
Having a PhD in the sciences, I had exactly the background that a lot of employers are looking for when they're looking for a data scientist. And so the summary point, I guess, that came out of that call with you was: why not try this? It seems like a good opportunity, you're already qualified for it. And it was spot on. I can't imagine being a doctor now. I'm so glad that I'm a data scientist, and that's really worked out. So thanks, man. So tell us about 80,000 Hours. Around that time, that was around 2015 that we would have had that conversation, 2014 actually. So after you finished your degree at Oxford, how did you end up going down this path instead of the investment manager path? Because I remember, you were a talented investment manager. You were extremely thoughtful and well-prepared for any discussion that we had around investments in that Orbis simulation group. And so you could've gone and presumably been a star investment manager. You certainly seemed like that. But instead, you decided to go down this path of helping people with their careers. So what happened?
Benjamin: 00:10:26
Yeah, I think, like you, I wanted to find a job that was enjoyable and paid the bills, but also made a kind of bigger contribution to the world or society. And I guess we would have graduated in the same year; I also left Oxford in 2012. And back in 2011 I met Will MacAskill at Oxford, and we were both wondering, "What do we do? What do we do next?" He was a philosophy PhD student and I was doing a physics and philosophy undergrad. And we were just like, there are many things we might be able to do, so which one would actually have the most impact? And we couldn't really find any advice that even really tried to compare the different paths we might go down. And we thought, "You could work in investing, and through the donations you could make in that kind of career, you might be able to fund several people to work in a charity." So that seemed like it could be a high-impact path. But we also thought, if you look at history, research has often had a lot of impact on people, and so has governmental policy, and working in a charity was kind of an obvious option as well. So how could we actually compare these different paths?
Benjamin: 00:11:41
So yeah, we basically just started thinking about it ourselves. And in 2011 we gave our first-ever talk in Oxford, and that is actually still our most successful talk ever. I think in the end maybe five or six people totally changed their career paths because of that talk, and two of those people now work at 80,000 Hours, like 10 years later. And some of them came up to us afterwards; I think a guy called Richard Matthew was like, "I think you should start an organization about this." And then we did, yeah. That was a big decision for me. Like, should I do this nonprofit path or do the investing path? Or I was also wondering, should I do a PhD? I was kind of interested in climate economics at that time. So those were the three things I was choosing between. But yeah, I settled on this path. And the idea was, if I can just help one person switch into a higher-impact career path, then that would be like doubling my impact already.
Jon: 00:12:44
This episode is brought to you by SuperDataScience. Yes, our online membership platform for transitioning into data science and the namesake of the podcast itself. In the SuperDataScience platform, we recently launched our new 99 day data scientist study plan, a cheat sheet with week-by-week instructions to get you started as a data scientist in as few as 15 weeks. Each week, you complete tasks in four categories. The first is SuperDataScience courses to become familiar with the technical foundations of data science. The second is hands-on projects to fill up your portfolio and showcase your knowledge in your job applications. The third is a career toolkit with actions to help you stand out in your job hunting.
Jon: 00:13:29
And the fourth is additional curated resources, such as articles, books, and podcasts to expand your learning and stay up to date. To devise this curriculum, we sat down with some of the best data scientists, as well as many of our most successful students, and came up with the ideal 99 day data scientist study plan to teach you everything you need to succeed. So you can skip the planning and simply focus on learning. We believe the program can be completed in 99 days and we challenge you to do it. Are you ready? Go to www.superdatascience.com/challenge, download the 99 day study plan and use it with your SuperDataScience subscription to get started as a data scientist in under 100 days. And now, let’s get back to this amazing episode.
Jon: 00:14:17
I've seen some of these stats come out. I don't know where, maybe I saw you post about it on LinkedIn. But you have some assessments; you must have to do that kind of thing as a charity. And I also want to get into the fact that you went through Y Combinator, which seems like… That's going to take us off on a big tangent. So before we start talking about Y Combinator and how charities can go through an accelerator like that, first tell me: how many people has 80,000 Hours impacted, roughly, since you started it?
Benjamin: 00:14:55
Yeah, in terms of readership, I think we've had over eight million people visit the site at some point. But in terms of the thing we ultimately really try to do, which is help people switch career paths, over 1,000 people have told us that they've made a big career path change due to us. But yeah, we haven't caught everyone, because I don't think you're in that survey.
Jon: 00:15:19
No, I think I am. I think I do remember filling that survey in, yeah. I think I got an email at some point, years ago, and I would have waxed lyrical about what a wonderful experience it was and what an impact it had on me. But you're absolutely right, you couldn't possibly capture all of the people that you've impacted. But even just to know that 1,000 people have actively said… 1,000 is an absolutely enormous number. If you try to imagine 1,000 people packed into a room and all the different things that they were doing and are doing now, that's-
Benjamin: 00:15:54
Yeah, the way I actually think about our impact is that it's almost a little bit like startup investing, where a couple of the people we helped turn out to go on and have these really outsized impacts, and that's where a lot of our impact comes from in the long term. So one example that's been on my mind recently is Sam Bankman-Fried, who was a mathematician at MIT. He realized he was a really good fit for quantitative trading at a hedge fund, and he found that kind of job through people he met in our community. And he turned out to be a much better fit for that path than me. So I'm really glad I did the nonprofit thing and he did that. But he later ended up founding what is now one of the biggest cryptocurrency exchanges, FTX. And according to Forbes, he is currently worth about $9 billion, which he's planning to give basically all away. He was already one of Biden's largest donors in the last election. He's already given millions of dollars, he's still-
Jon: 00:16:53
Earning to give.
Benjamin: 00:16:55
Yeah. So that's the earning-to-give path. And yeah, he obviously donated a lot more than I ever could have.
Jon: 00:17:03
And hopefully some donations to 80,000 Hours.
Benjamin: 00:17:07
Yeah, he did donate to us in the early days. Yeah, now actually, we’re funded already. So we haven’t actually had any donations from him recently.
Jon: 00:17:19
All right, yeah. So let's talk about that a little bit, because I think this is really interesting for a number of reasons. So first of all, when you make that decision, "Okay, I'm not going to be an investment manager. I'm not going to pursue a PhD in climate economics. I'm going to start a charity," how do you think about funding in those early stages? And then tell us about the journey that eventually led to Y Combinator, and I guess you should introduce what Y Combinator is, and also that they have this charitable aspect. And then, if I haven't asked too many questions already, the last one would be, how is it that you can say, "Well, we're fully funded, we don't even need donations now"? So I don't know. I'm interested in all of those things.
Benjamin: 00:18:08
Yeah, so in the early days, we were quite lucky because there was this community in Oxford called Giving What We Can, which is people who give 10% of their income to the charities they think are most effective. And so we managed to get several donors from that community early on, and they gave us our seed funding. And that definitely gave us the confidence to carry on, seeing that early traction and getting that early funding, so I was able to go full-time. Though yeah, I think in the first year our salaries were only 15,000 pounds. So we did put it together on a shoestring, but-
Jon: 00:18:45
Yeah, that is not a lot to live on. That converts to something like probably 25,000 US dollars. No, no, not even, maybe like 20,000 US dollars, yeah, wow.
Benjamin: 00:18:53
Yeah, though I was an undergrad at the time, so I would have been living on about that much as a student. But yeah, we've raised those over time. That was how we got started. And then I guess the other way we were in a very fortunate position is that this broader effective altruism community grew up that we were also a part of. And that's a community of people who try to use reason and evidence to find the best ways of improving the world in general. And we found donors in that community. Most donors are very passion-motivated: they do something they find interesting and they'll support it for a while, and then they might get interested in something else, so it's quite a fickle funding source. But our donors are much more like, if we show them we're changing careers and having impact, they will keep giving us money. And they're kind of willing to scale us up in line with our impact. So it is actually a scalable funding model, which is a very fortunate position to be in.
Jon: 00:20:00
So then what about Y Combinator? I guess I can give a little bit of context. So for listeners who aren't aware, Y Combinator is by far the most well-known startup accelerator around. So anyone who's listening right now, if you have a startup idea, you can find the Y Combinator website and you can submit a… I don't know exactly how it works, you probably know, Ben. But you submit like a pitch deck or something and answer some questions.
Benjamin: 00:20:27
Yeah, it’s an application form, yeah.
Jon: 00:20:29
And based on that, it's extremely competitive, but if you get into the program, you get some funding, in exchange for which they typically take some equity. But the big thing about it is the network and the mentorship. Some of the best-known venture capitalists in the world are heavily involved with Y Combinator. And so, in terms of getting that kind of mentorship, that kind of exposure, that opportunity to scale, I think it's one of the most potentially impactful things that you could get involved with as a startup founder. And I only know through 80,000 Hours that they also fund some charities, like yours.
Benjamin: 00:21:15
Yeah, I think they started that in 2014; the first charity to ever go through was called Watsi. And as far as I know, every year they still have between one and four charities or nonprofits that go into the program. And they just do that because they think it's a way for them to have some extra impact.
Jon: 00:21:38
Cool. And then I guess my last question around funding, and maybe you don't need to go into much detail, but it's just kind of interesting to me that you've reached this point where 80,000 Hours is kind of self-sufficient. So you had this donor model for a long time, but now, maybe through the experience of learning how to scale through Y Combinator and maybe startup-related strategies, you're at this fully funded stage.
Benjamin: 00:22:16
Yeah. No, I guess there are a few things we could talk about with that. But we definitely learned a lot from Y Combinator. It felt like it was almost this whole kind of toolkit or worldview for doing things in the world, and it was very inspiring. I was just seeing the other day, I can't quite remember the figure, but I think they've now created over 41 billion-dollar companies. And so just being in this room full of people who are all just like, "We're going to set up a billion-dollar company," and they might actually succeed.
Benjamin: 00:22:51
We were almost a bit jealous of how fast the for-profits can move. And we learned a lot. One example was just, we kind of figured, "Well, if you want to build an organization, then you should hire people, right? Because that's what organizations do. You hire a lot of people." But they're actually very against hiring early on, because the idea is, you really want to have your model down before you hire. So before you have product-market fit, you should avoid hiring and just have the founders focus on that, because they're the best-placed people to figure out your model. And then later you try to hire as quickly as possible. And that really changed our strategy.
Jon: 00:23:32
Well, so one really big thing that we probably should have mentioned right off the bat, that I haven’t yet, is where the name, 80,000 Hours comes from.
Benjamin: 00:23:41
Yeah, so that's roughly how many hours you have in your working life. So 40 hours a week, 50 weeks a year, over 40 years: about 80,000. And yeah, yeah. Go on.
Jon: 00:23:54
Go ahead. Basically, it ties in directly to your mission, right?
Benjamin: 00:23:59
Yeah, like one point is, if you would spend six minutes debating where to go for a two-hour dinner, then the equivalent for your career would be spending 4,000 hours thinking about it, and researching, and exploring options. So it's just this really big decision that's really worth thinking about. But even more than that, we actually think if you want to live a good, ethical life, then the thing you should focus the most on is what you should do with your career. Because your career is the biggest resource you have for effecting change in the world, because it's so much of your time. It's more than all the time you'll spend with your friends, and eating dinner, and watching Netflix, all that put together, especially while you're an adult. You'll probably actually spend more time at work. And that means even if you can just increase the impact of your career a little bit, that could have really big effects on improving things in the world.
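For readers who want to see the back-of-the-envelope arithmetic Ben is describing here, below is a minimal sketch in Python. The figures are the round numbers from the conversation, not precise estimates.

```python
# Arithmetic behind the name "80,000 Hours" and the dinner analogy above
# (round numbers quoted in the conversation, not precise estimates).
hours_per_week = 40
weeks_per_year = 50
working_years = 40

career_hours = hours_per_week * weeks_per_year * working_years
print(career_hours)  # 80000 hours in a typical working life

# Six minutes of deliberation for a two-hour dinner is a 5% ratio;
# the same ratio applied to a career gives roughly 4,000 hours.
deliberation_ratio = 6 / 120
print(career_hours * deliberation_ratio)  # 4000.0
```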
Jon: 00:24:57
Cool, yeah. I love it. And I’ve always loved the name from the beginning too. So we’re going to get into some specific data science advice. You and I were chatting before the show and then you prepared so much for the audience. I can’t wait for you to share these data science specific tips for them on how they can, specifically, make an enormous impact with their 80,000 Hours. But generally speaking, what is the kind of advice that you give to people? How does 80,000 Hours shape people’s careers?
Benjamin: 00:25:31
Yeah. So one way of approaching it is at a very high level: we think what drives your impact is the problem you focus on and how pressing it is, how effective the solutions are that you use to address that problem, how much leverage you get on those solutions, and then your personal fit. So, how successful will you be in the career? Will you stick with it? Will you enjoy it? All those kinds of things. And so basically, what we're trying to do is help people find better answers to all of those four questions. So find something that fits them better, has more leverage, a more effective solution, or a more pressing problem.
Jon: 00:26:05
Cool, yeah. It’s a sensible model, makes a lot of sense. All right. That was a kind of a redundant sentence there. It’s a sensible model, it makes a lot of sense, all right-
Benjamin: 00:26:16
No, that’s good.
Jon: 00:26:21
So let’s talk about data scientists specifically. Let’s cut to the chase. How can data scientists best have an impact in their careers?
Benjamin: 00:26:31
Yeah. So maybe I'd just start by saying data science is a really valuable skill. And a big part of having a good career is trying to build up what we call career capital. So that's skills, connections, credentials, reputation, that you can then use to have an impact or to do something that you find satisfying as well.
Jon: 00:26:53
That idea from our hour-long conversation, whatever it was, seven years ago, that idea of building career capital, I use that constantly in thinking about my own career. Yeah, I mean, we haven't really talked since then, so you don't know that. But in my own mind, what I choose to do each week, each year with my career is driven by that idea of building career capital. Being the host of the SuperDataScience Show doesn't pay very much. But there are a lot of listeners, and it's well-produced, and I really enjoy doing it.
Jon: 00:27:30
And so it's kind of the experience that I get out of this, and maybe, indirectly, some network effects that come out of it. It was from that kind of discussion, this idea of career capital, that when Kirill was going to stop being host of the SuperDataScience Show and asked me if I'd like to take over, I was able to say yes instantly. I didn't have to go away and think about it. It was like, "Yeah, of course. I don't know if anyone's ever given me a career capital opportunity like this."
Benjamin: 00:27:58
Yeah, and one way of seeing that is if you look at the studies about when people hit their peak productivity: it's actually around 45, or even in the 60s in some careers. And that just shows that you can actually keep increasing your skills for decades. And so this means that, especially early in your career, one of the biggest priorities should be to get career capital.
Jon: 00:28:22
Wow. I didn't know that, because I've seen studies, you probably know about these studies, on when Nobel Prize-winning researchers make their big finding, or when Fields Medal winners in mathematics make their big mathematical discovery. And I thought it was almost always in your mid-30s.
Benjamin: 00:28:44
Well, so it depends on the subject. So basically, once we hit our early 20s, unfortunately, our fluid intelligence starts going down pretty sharply. But your connections, and your knowledge, and what's called crystallized intelligence keep going up. And so your sweet spot is where those two things intersect. And it depends on the balance of how important those two things are in the career that you're focused on. So mathematics and theoretical physics, and also lyrical poetry, interestingly, peak fairly early, often in the 20s. Whereas, on the other end, in something like politics, you can't even become president unless you're 70 these days. So that's the peak, right? And that's because you need this reputation, and you need connections, and they take a long time to build. And there is some evidence that these peak ages are actually getting older over time, which could be because you just need more knowledge to get to the cutting edge of a field these days.
Jon: 00:29:43
Right, there's something that I think about as a societal thing. We're getting a little bit off track here, we will get back to data science and making an impact in a second. But something that I think about a lot is how, as a society, longevity has increased enormously over the last 200 years. So 200 years ago, if you were born in France, your expected lifespan was 30, 32 years, something like that, and now it's the mid-80s. And so not only are we living longer, as a result of knowing more about nutrition, having access to food, medical treatments. But it occurs to me, isn't it also conceivable, I don't know if there are studies on this, that by being more aware of our nutrition, by exercising better, by getting more sleep, by not having the stress of just trying to get food on the table for the next meal, presumably not only are we living longer, but maybe that thing about intelligence declining at 20, maybe that's being extended out by all of these society-level factors.
Benjamin: 00:30:58
Yeah. I haven’t seen research about that, but I would hope that people’s healthy lifespan is increasing as well. So I’d hope that our peak functioning is also going later.
Jon: 00:31:10
Just always trying to come up with reasons why I’m not getting dumber and crossing my fingers. But maybe I’m just so dumb that it’s easy to think that it’s going to get easier and easier all the time. All right, so okay. So data science, it’s a valuable skillset. You were talking about career capital and then I completely took over the conversation. I’ll let you keep going-
Benjamin: 00:31:35
No, no. But I think that's an important concept to cover. And then, yeah, we can think about the four factors. You could think of data science as your route to getting leverage, so that factor is already somewhat covered. But then, the thing I'd really encourage people to think about is: which problems in the world are biggest and most neglected? And we actually think that your choice of problem is the thing that most determines your impact. And that's not how people normally approach this. The normal advice, if you're like, "Well, should I work on climate change, or education, or health, or politics, or whatever?", is that you can't really compare these things, just choose something that you're interested in, that motivates you, and do that, that's the best we can do. But we actually think if you step back, some issues are way bigger than others and some are way more neglected than others. And if you can find that sweet spot, then that's one of the biggest things you can do to increase your impact.
Jon: 00:32:29
Wow, so in data science, what are these kinds of opportunities? What are the problems where data science can make an outsized impact, but that maybe are also being neglected today?
Benjamin: 00:32:44
Yeah, I might almost approach it from an even bigger picture perspective, which is just like in the world, what are the most pressing problems? And then how can I use data science to best contribute to them? And so there’s two philosophies of careers advice in a way. One is, start from what the world needs and work backwards. And the other is, start from what I have to offer and then kind of work forwards. And most careers advice is just focused on that very person focused approach, where it’s like, “Well, what are your interests? What are your skills, strengths, those kinds of things? And how can you use them?” And that is a really important part of the equation. But we actually think it’s almost more important to think about what the world most needs and what contributions would be most valuable. And then work back from those.
Jon: 00:33:30
I guess with data scientists especially, so if you're considering a career in data science, and especially if you already are a data scientist, you have a broadly useful skillset. Since we had that conversation years ago, I first worked in the ad tech area, and even as a trader, I was basically using data science skills. I didn't call it that at the time, but I was a quantitative trader, I was using data science skills. So working in finance, working in advertising, and then now, for the last seven years or whatever, it's been in human resources and operations. And so a data scientist has a skillset that can be applied to tackle any kind of problem that generates data. And the way that the world is going, every 18 months we have twice as much data. We have more and more sensors collecting different kinds of data on more, and more, and more problems. So for a data scientist who's listening today, you already have this skillset that you can apply to almost any industry you like. And so I think, especially in this kind of circumstance, it makes sense to think about things in that backwards way, from what society most needs, if you want to have the biggest impact. So it becomes: where can you make the most impact?
Benjamin: 00:34:50
That's a really good point. Yeah, I think there are some more specific things for data scientists that we could get onto. But just to slightly expand on your point, we have a job board on 80,000 Hours with a bunch of different problem areas and jobs within each. And you can filter that by software engineering jobs, and some of them are actually data science jobs. So one example is this really interesting organization called IDinsight, that basically does data consulting for people working in development. So it helps them evaluate and measure their programs, and they're hiring data scientists right now.
Jon: 00:35:28
So by development, that’s like global development. It’s like-
Benjamin: 00:35:32
Yeah. Yeah, so like fighting malaria and stuff like that. And actually we have a whole episode about using data science in global development on our podcast. So that would be a thing to check out.
Jon: 00:35:47
Cool, the 80,000 Hours Podcast.
Benjamin: 00:35:51
Yeah. So if you search 80,000 Hours Podcast, data science or fair rate, then that's all about data science and development. Yeah, and there are lots of examples. I was wondering, another person you might want to get on your podcast is the founder of Bayes Impact. I don't know if you've come across them, but they went through Y Combinator as well. And they were a data science consultancy for social services. So they do things like help fire services optimize their algorithms, so they can respond to fires faster. But the biggest thing they're doing now is they created an automated careers advice service for the French government. And I think they've had like a million users or something. So yeah, they do government consulting, basically, for data science.
Jon: 00:36:41
Very cool, I love their name.
Benjamin: 00:36:44
Yeah, yeah. No, no, Bayes, that’s like one of our big principles, is thinking in a Bayesian way.
Jon: 00:36:50
No kidding? As a company, at 80,000 Hours, you think about the world in a Bayesian way?
Benjamin: 00:36:56
Yeah. I actually think it's kind of one of the biggest problems of our time. You could see this with the COVID pandemic, where, when they were saying, "Should we space out the vaccines?", loads of people were like, "Well, but we have no evidence that one vaccine shot will work." But no, we do have evidence, it's just that we don't have a randomized control trial specifically about that. The way I see Bayesianism is that any evidence is valuable, you just have to weight it by its strength. You should start from your prior, you update, and then that's your best guess. And when you're approaching these really uncertain problems, like what's the biggest issue in the world, that's the approach you have to take. You can't have hard data that definitively shows that's the answer.
Jon: 00:37:38
Yeah. I think especially in circumstances like a pandemic, where time is of the essence, waiting for a randomized control trial on whether one shot has an impact isn't an option; we can't wait around, we need the injections now. So, that makes a lot of sense. And just for viewers who aren't familiar with Bayesian statistics, there are two main branches of statistics: frequentist statistics and Bayesian statistics. Throughout the 20th century, frequentist statistics was basically all that was taught in universities. And so if you studied stats in university, this is probably the one that you learned about, where you're supposed to have this completely objective read on reality based on data: you have one group that gets the sugar pill and another group that gets the drug, and you can compare how they respond.
Jon: 00:38:28
So you have these two distributions of data, and then you can statistically compare them. You can say, "Well, I have this much confidence that these two distributions are different, that the treatment does have an impact relative to the sugar pill, to the placebo." Bayesian statistics is older than frequentist statistics, but it fell out of favor in the early 20th century because people didn't like how, in Bayesian statistics, you can optionally add prior information. So you can have your initial distributions be specified by you, based on some information that you have from another experiment, or just by reasoning, or even a guess.
Jon: 00:39:19
And then you can update that prior distribution based on the real data that you collect, and that leaves you with the posterior distribution, which you can then use to make a decision. And anyway, Karl Pearson in the early 20th century, who's a huge figure in statistics, didn't like that idea of being able to bring in prior information; it wasn't 'objective' enough for him. But in reality, like you were pointing out, Ben, with the vaccine situation, there are all kinds of problems where we can't gather perfectly objective information anyway. Or even if we can, our outcomes can be improved by using this outside prior information.
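To make the prior-to-posterior updating described here concrete, below is a minimal beta-binomial sketch in Python. The treatment numbers are invented purely for illustration, not taken from any real trial.

```python
from scipy import stats

# Prior belief about a treatment's response rate, e.g. from an earlier
# experiment or expert judgment: Beta(2, 2) is a weak prior centred on 50%.
prior_alpha, prior_beta = 2, 2

# Hypothetical new trial data: 18 responders out of 30 treated patients.
successes, trials = 18, 30

# Conjugate update: the posterior is Beta(alpha + successes, beta + failures).
posterior = stats.beta(prior_alpha + successes,
                       prior_beta + (trials - successes))

print(f"Posterior mean response rate: {posterior.mean():.2f}")
print(f"95% credible interval: {posterior.interval(0.95)}")
```

Each new batch of data can be folded in the same way, with the current posterior becoming the prior for the next update, which is the iterative "lots of little updates" style of forecasting Ben mentions next.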
Benjamin: 00:40:02
Yeah, totally. And that's a really good description of the kind of formal debate. But I also think of Bayesianism as almost a philosophy for making decisions. And one thing I'd really recommend on that is Nate Silver's book, which has a whole chapter on how, if you look at people who are actually really good at making forecasts in the real world, people who bet on sports for instance, and who are actually being tested with real money on how good they are at predicting things, they basically take this very Bayesian-style approach: they start with a guess, they get a little bit of evidence, they slightly update their guess, and they repeat and make lots of little updates over time. And that seems to be the best way of making actual decisions under uncertainty.
Jon: 00:40:43
It's a fabulous book, The Signal and the Noise. If you haven't read it, listener, definitely do, highly recommended. Anyway, I keep taking you off track. So Bayes Impact, so that sounds, yeah-
Benjamin: 00:40:56
And that's a good example of also working with government, which I think is maybe sometimes less satisfying personally, but can be a really big route to impact. And then the third kind of broad path, which we've already mentioned, is the earning-to-give path. So you could join a startup or work in a company, and it's kind of amazing in a way: even if you have a very, very narrow skillset and you're not sure how you could use it to work on a really pressing global issue, through donating you can kind of convert your labor into labor working on the most pressing problems and still make a really big contribution. And the reason why that's so high-impact is that with money, you can target it: whichever organization in the world is best filling the key bottleneck in a really pressing issue, you can get resources to that thing. So, that's another route to having leverage and having a big impact. But yeah, we can get into some more specific things, so-
Jon: 00:41:56
Yeah, I'd love to. So we've ended up kind of indirectly touching on data science ideas throughout the podcast. But I know you've prepared some really well-researched, data-driven points on how data scientists can make a big impact. So come on, come at us.
Benjamin: 00:42:17
Yeah, so if I was speaking to someone who's really willing to change to anything to have a big impact, someone who's just super open, and it's very, very relevant to data science, the thing I would think about the most is AI and what can be done there. And one way to see that is if you survey AI scientists about when AI will be able to do many human jobs, most human jobs, on average they say there's about a 50% chance within 45 years. So, that's a survey of a few hundred AI researchers.
Jon: 00:42:50
That’s a Nick Bostrom study. I’m probably mispronouncing his last name and you know how to do it correctly, but there’s a-
Benjamin: 00:42:56
So yeah, he has a really cool book. The survey of AI scientists is by Katja Grace and a couple of others. And we actually have a whole podcast episode just about that survey. So I'm sure there are lots of people listening who are like, "That sounds off in all these ways." But yeah, we have like two hours more just about that, if you want to-
Jon: 00:43:16
What’s that episode called or how can we find that one?
Benjamin: 00:43:19
So yeah, Katja Grace. I think if you search for AI timelines, or something like AI survey, you'll probably be able to find it.
Jon: 00:43:26
It's probably more recent. I quote this study from, it's almost a decade ago now, that Nick Bostrom did at an AI conference, interviewing people on when they thought we would have an artificial general intelligence that could learn the same kinds of tasks as a person, and it was the same kind of number. So I thought it was that one, but it sounds like it was Katja Grace-
Benjamin: 00:43:46
Yeah, yeah. She tried to do a more rigorous version of that initial poll, but it has come out with a similar figure. So yeah, 45 years' time, that's the median estimate. But there's still like a 10 or 20% chance that it's in like 10 or 20 years. People are thinking maybe GPT-3, maybe that kind of model just is how human reasoning works, and if we just scale that like a hundredfold with a bunch more computing power, which we'll have soon, then maybe you just basically unlock a ton of stuff. We're not sure. So one way of approaching that is that it's almost like we're being told that aliens are going to arrive on the earth, potentially in our lifetimes, with like a 20% chance.
Benjamin: 00:44:31
It would definitely be one of the biggest things to happen in all of history if there were now systems that were more intelligent than us. So I think trying to figure out how we can make that transition go well could well be one of the most important things that people can work on at this point. And in that same survey of Katja Grace's, they also asked the researchers, when this transition happens, how likely is it to be good versus bad? And most of them said it was going to be good, but they said there was a 5% chance that it could be extremely bad, such as human extinction.
Benjamin: 00:45:09
I can't really think of another field where the actual people working in the field would say that there's a 5% chance that their field ends the world. But that's where we are. So yeah, at the same time, it's also still a very neglected problem. It's really grown a lot, so there's now this whole field of research that's sometimes called AI alignment research, or the control problem. And another really good book about that is The Alignment Problem by Brian Christian, who's written a lot of really cool books about computer science. I was a big fan of Algorithms to Live By, which is all about how to apply computer science to everyday decisions. So that, yeah that-
Jon: 00:45:57
That last book there has come up as a book recommendation before. So we always ask at the end of the show for a book recommendation, and Algorithms to Live By has come up a couple of times, and I've only been hosting the show since January, including in the most recent episode, episode number 495. So people love that book.
Benjamin: 00:46:15
Yeah. It's an awesome book. It's so well crafted. But yeah, he wrote a new book called The Alignment Problem, which is all about this topic of how we can make sure that AI systems do as we intend and are in line with human values as they get more and more powerful.
Jon: 00:46:33
I was going to ask what alignment means. I see, alignment with our values, got it. And so I guess AI ethics is something different entirely. I was initially thinking that this was kind of AI ethics. But AI ethics is more about things like, we need to make sure that our algorithms are treating different demographic groups fairly; it isn't about machines eating our brains because they're the best energy source around. Oh, no.
Benjamin: 00:47:01
Well, unfortunately, we won't be. But yeah, you could think of AI ethics as a very broad category that includes many important issues, and then I would say alignment is a subset of AI ethics. And the approach we're trying to ultimately take is: what are the biggest issues, maybe in all of history, that we might be able to help with and that are being neglected? I think it's these longer-term issues that are even more important when we step back and take that really big-picture perspective.
Jon: 00:47:33
Nice, cool. So AI alignment, that sounds huge.
Benjamin: 00:47:38
Yeah, but there are a lot of just practical jobs that you can go and take in this. So there are now teams doing research in AI alignment at DeepMind, OpenAI, Anthropic, which is a team that just recently left OpenAI, Google Brain, and academic centers like Stuart Russell's group at Berkeley. There are nonprofits too, like Ought and MIRI; MIRI's based in Berkeley and Ought's based in San Francisco. And all of these groups are hiring, they're hiring ML engineers to help them run the models. So they all have these different research programs to try and improve AI alignment, and they're hiring lots of software engineers and data scientists. And so we actually, again, have a whole podcast episode all about how, if you're a software engineer or a data scientist, you can transition into these roles specifically. And we have a whole guide and a reading list and yeah.
Jon: 00:48:36
Amazing.
Benjamin: 00:48:36
Yeah, it's possible to do it pretty quickly. So one of the people interviewed, Daniel Ziegler: every day he read a paper, and then every week he tried to implement a model from one of the papers. I think he did that for a few months and then he got a job at OpenAI.
Jon: 00:48:53
Wow, which podcast is that?
Benjamin: 00:48:57
So yeah, if you search for ML engineering, it's the episode with Catherine Olsson and Daniel Ziegler. Catherine Olsson is at Google Brain, and yeah, it's a super practical guide.
Jon: 00:49:11
All right. So yeah, that's quite practical guidance. So you also have a note here, and I can see Ben's notes, he shared them with me in Google Drive just like he did years ago when he was helping me with my career, and you have this in square brackets, so I guess you weren't going to talk about it: chimps versus humans. But I love this because it shows the importance of AI alignment. Because you might think to yourself, "Why is AI a danger? We want it to be giving us self-driving cars and solving medical issues. Why would it be a problem?" And this chimps versus humans thing, I think I first read about it in Wait But Why, Tim Urban's blog. So I think I know what it is, but I don't want to steal your show. So tell us about the chimps versus humans thing.
Benjamin: 00:50:00
Yeah, it's just the idea that chimps are actually way stronger than us; if you get in a fight with a chimp, you've got no chance. But basically, there are not that many of them in the world and their fate is just in our hands. And that's because we have much more intelligence. And if there were other systems that were more intelligent than us, then we basically become the chimps and they're the humans, in this analogy.
Jon: 00:50:30
Yeah, so relative to all of the other intelligent beings on our planet, chimps are really close to us. Their intellectual capacity is super close. The way Tim Urban put it in his Wait But Why blog post about it, I think the blog post is called something like The Artificial Intelligence Revolution, and I don't know if he got this from somebody else, but he draws a cartoon of a staircase. And it's this idea that there's an evolutionary staircase where, I don't know, you've got insects really down at the bottom. But if you go to the human step, chimps are like one step behind: very, very similar intelligence levels, highly complex creatures, potentially-
Benjamin: 00:51:13
Yeah, and there’s even some evidence that it’s just like a few mutations that caused the neocortex to fold a bunch more. And that gave you human brains. So in evolutionary terms, it’s not a big step away.
Jon: 00:51:25
Not a big step at all. Whereas something like an artificial general intelligence, as soon as we engineer that, it could, maybe it'll take a month, or maybe it'll take an hour, or a minute, engineer itself to be more intelligent than people. And then all of a sudden you have this artificial superintelligence, which we probably can't even describe, in the same way that a chimp can't describe a symphony, or poetry, or a computer program. There's just no chance. You're never going to get a chimp to write a computer program, and you can't explain it to them at all. And so an artificial superintelligence could, in this same way, be having ideas, or I don't know if that's the right word for a machine, but it could have cognitive capacities that we just can't even get anywhere close to imagining. But unlike your scenario of just a couple of genetic mutations and being next to each other on this intelligence staircase, the artificial superintelligence could be 10, 20, 30 steps above us and we're just nothing to it. We're an insect.
Benjamin: 00:52:31
Yeah. Yeah, we don't know how far capacities could go. But one thing I would add is that there are probably some listeners thinking, "This is sounding a bit wacky." And I would say, the scenario you're painting there, that's quite associated with Nick Bostrom, and I think that is one of the key risks to bear in mind. But there are actually many other scenarios where AI could be this really huge deal. At almost the opposite end of the scale is this framing where it's more like: what we're doing today is handing more and more control over to algorithms, more and more things in society are run by algorithms. That process could well continue. Well, it's almost definitely going to continue. And that then relies on those algorithms doing exactly what we want. It seems like they often do things we don't want, like getting us addicted to Twitter. And as these algorithms get more and more powerful and better at doing things like getting us addicted to Twitter, even if there's just a small difference between what we would ideally want and what these algorithms are doing, that could, over time, gradually lead us further and further away from the world that we would want. That's a very gradualist view of the thing, rather than that kind of fast-takeoff-style thing.
Jon: 00:53:54
And, Ben, you don't need to look to a dystopian future to imagine a scenario where the algorithms aren't doing what we want. We have that today in all kinds of very dramatic ways, in a lot of countries around the world, including where I happen to live right now, the United States, where I can't vote, but I live. So I experience the incredibly divisive politics here. If we still had newspapers that provided two sides to a lot of the political issues, you'd have a lot of people who were educated on both sides of the issues and thought both have their pros and cons. But because of algorithms like the Facebook News Feed and Google's news prioritization, you have completely different camps of people who see the world in completely different ways and can't even imagine being in the other person's shoes. And then all of a sudden you have people rioting in your government buildings.
Benjamin: 00:54:48
Yeah. So you might've heard about some of the Nick Bostrom stuff before, and early on in AI safety, people were talking about those scenarios more. But a lot of the current AI alignment research is just focused on problems that we face right now. Adversarial examples, for instance: how do we make sure ML systems don't give us entirely the wrong answer if you just make a tiny tweak to the input? But these kinds of short-term issues also relate to the potentially longer-term alignment issues as well. And so there's a kind of spectrum of work to be done, and we need some of all of it.
Jon: 00:55:26
Super interesting. Well, I don't want to take too much more time on this point. We could have an entire episode about it, but you already have one. So I'll refer listeners to that episode instead; maybe you can mention the guests that were on, and they can easily look it up, and we'll have it in the show notes.
Benjamin: 00:55:44
Cool, yeah. Catherine Olsson and Daniel Ziegler.
Jon: 00:55:47
Right, nice. And so I have all kinds of questions, like what are the other skill sets that you need if you want to get into this field? But maybe just give one or two of those and then we'll move on to the next topic. So if somebody is already a data scientist, or a machine learning engineer, or a software engineer, and they want to get into AI alignment, what are the key things that they need to learn to get into that field?
Benjamin: 00:56:14
Well, so many of the roles are just basically ML engineering roles. So you just need basically that exact skillset. And then you need to learn about the current research within AI alignment. And so, yeah, I think you should be able to find a reading list through our resources of all the most important papers.
Jon: 00:56:36
Cool, all right. So what else? How else can a data scientist make a big impact?
Benjamin: 00:56:46
Well, yeah. So in terms of the problems that we personally think are biggest and most neglected, a big cluster we focus on is what we call global catastrophic risks. So these are things that can be global in scope, and they often tend to be very neglected. And we put AI accidents, AI alignment, and how to make AI go well as one of the top things in that category. But then the second-highest-ranked one is preventing a pandemic even worse than COVID-19. And so, yeah, we are encouraging people to go and work on pandemic prevention. We had our first guide to that in 2016. And I think now it's actually maybe even more pressing, because people know that this is an issue, and so there are a lot more resources available for work in this area. But there's a real risk right now that a lot of the effort goes into preventing things that are similar to COVID. That's normally what happens: people fight the last battle. And previously all of the US's biosecurity efforts were focused on anthrax, because there happened to be that terrorist scare.
Benjamin: 00:58:02
So then all the budget is on anthrax, even though that's only one of many, many diseases that could be a threat to national security. And so now I'm really keen to make sure that when we do all this work on pandemic prevention, we make sure it's guarding against all the different varieties of pandemics we could face. And in particular, that it's guarding against one that could be much worse than COVID, because COVID nowhere near represents the worst-case scenario for a pandemic. It would be perfectly possible for one that's…
Jon: 00:58:34
Yeah, it's been horrible, but it's not hospitals being overrun everywhere. That ended up, in most regions, actually being avoided through interventions. But there could very well be, yeah-
Benjamin: 00:58:48
And we just know that there have been diseases that weren't very infectious, but had fatality rates of 10 or even 50%, whereas COVID's is only about 1%. And it's not really clear that people would still be willing to stock shelves in a supermarket if there was like a 30% chance of dying if you caught it, rather than a 1% chance.
Jon: 00:59:07
Right, right. Oh my goodness, I didn’t even think of that. Like all of the shortages that we have today in the service industry that has been precipitated by COVID, even though now, at least where I live in New York, most people are vaccinated. We still have these issues around staffing shortages in the hospitality industry. And wow, yeah. If the fatality rate was 10% or 30%, why would you go outside? Why would you stock a shelf at all? And we don’t have robots that can do it yet.
Benjamin: 00:59:35
Yeah. So yeah, so we call those global catastrophic biorisks.
Jon: 00:59:43
You’re laughing. So I don’t know, what else can you do? That’s often the [crosstalk 00:59:49] to arguments. Somebody will be upset with me and I laugh. I’m just like, “I don’t know what to do. I don’t know what else to do.”
Benjamin: 00:59:55
Yeah. Okay yeah, so we have a whole guide to what you can do about it, so-
Jon: 00:59:59
Right. Yeah, good point.
Benjamin: 01:00:04
And I think there are a lot of jobs involving statistics and data science within the whole biomedical sciences area. And pandemic prevention in particular is one subfield that I’d especially encourage people to think about.
Jon: 01:00:21
Nice, and I can see from your notes, that I still have open in front of me, that there’s at least one more that you might like to talk about.
Benjamin: 01:00:29
Yeah, there are a lot of other interesting ones. And we kind of mentioned this one already, but just on our job board, there are many open data science jobs.
Jon: 01:00:44
Oh, the information security one. That’s a cool one.
Benjamin: 01:00:46
Oh, yeah. Yeah, no. So this one’s a little bit harder to explain, but we do have a short write-up on the website if you’re interested. And there might well be people listening who have a security background. It’s really worth considering working on, because it seems to be a skill set that’s in very high demand right now.
Jon: 01:01:09
Yeah, huge.
Benjamin: 01:01:10
It seems like, with just one or two years of retraining in that, there are pretty high salaries available for these positions. And then I also think, if you’re interested in issues like artificial intelligence and synthetic biology and so on, it’s going to be important in the future that there’s good information security around these kinds of technologies. There are groups working on bioengineering, and that knowledge could be dangerous if it gets out. But right now, information security is very weak; basically any determined attacker can steal most things. And so, in a world with very powerful technologies and algorithms out there, that doesn’t seem very stable. But there’s also a more prosaic end of this, which is: could there be big computer viruses that pose systemic risks to society as well? And yeah, sorry to keep pitching our podcast, but we do have a whole podcast episode about that-
Jon: 01:02:18
No, it’s great. Obviously our audience is into podcasts. And so if you can give them this information in a nice podcast format, I’m sure there are tons who are interested. You can mention as many episodes as you’d like.
Benjamin: 01:02:29
Okay, yeah. So it’s with Bruce Schneier, who I think is one of the big people in information security, and it’s a whole episode all about how that skill set can be valuable for society and how to get started in that career.
Jon: 01:02:44
Super cool. Well, all right. So we’ve covered specific areas where data scientists can have a particularly large impact. But data scientists, as well as any other listeners, could probably benefit from any general guidance you have on how someone can strategize about their career. I had this incredible opportunity, whatever, seven years ago, to get on a call with you for an hour. But you can’t do that with everyone on the planet. So what can they do?
Benjamin: 01:03:18
Well, so we do actually still have one-on-one advising on the website.
Jon: 01:03:22
No way, wow.
Benjamin: 01:03:23
So, if you’re a listener, you can apply for that. We have about 600 slots this year, so there’s an application process, because it’s free. It’s particularly aimed at people who want to work on the problem areas we’re most excited about, just because we have this limited capacity. But if that’s you, then I definitely encourage you to apply. And then with career strategy… Well, we’ve already talked about one thing, which is the importance of building career capital early. And we’ve just been talking about all these potentially high-impact options and the world’s biggest problems, and that can often feel a bit overwhelming. So I would see those as things you can aim for, and it’s really useful to have some long-term aims. But then when it comes to actual career strategy, it’s really important, as well, to think a lot about what you’re going to do in the next one or two years, and try to make it really, really concrete.
Benjamin: 01:04:20
And when you’re doing that, you can be thinking: how can I test out a particular long-term option? How can I get some valuable skills? And then after you’ve done that for a while, you can reassess. Good careers are much more this iterative thing, where you do something for a few years, you get some career capital, you learn some information about what’s going to be best for you, and then you try again. And you gradually improve over time with some kind of longer-term vision in mind, but that can easily change as you learn more along the way.
Jon: 01:04:53
Cool, that is a really great strategy. And it’s really reassuring to know that we don’t peak in our 30s unless we’re looking to win the Fields Medal for mathematics. I guess my time is slowly creeping away on me each day. But by building career capital early, and then following this strategy of having a long-term goal while iterating in these one to three-year chunks, it sounds like our peak productivity can come much later in life, almost no matter what our age is. And that’s really nice to know. So there’s one thing that we didn’t plan to discuss on this program, but it’s something 80,000 Hours related that I read years ago and that was hugely beneficial to me. If you remember it, I’d be interested in you sharing it with listeners, which is: what are the top skills that people should learn, period? I remember seeing this really well-researched 80,000 Hours article, and number one was like learning how to learn or something.
Benjamin: 01:06:08
Yeah, I think I remember correctly. So we actually did that research with a data science boot camp; someone who was doing an internship at one of these boot camps did the analysis for us. What he did is, the US has this big data set, I think it’s the O*NET data set. It has something like 700 jobs, what skills they need, the salaries, and a bunch of other data about them. We basically looked at the jobs that were highest paid and at what skills were most needed in them. I think we also looked at the jobs that people found most satisfying and what skills were most needed in those. And we also looked at which skills were most transferable, so which ones show up in the largest number of jobs. And then we used this to make an index of which skills seemed most valuable. And if I remember correctly, the top one was actually judgment, which is maybe not that helpful, but it is interesting.
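To make that approach concrete, here is a minimal sketch of how a skill-value index like the one Ben describes could be assembled, assuming an O*NET-style table with one row per (job, skill) pair. The column names, the function name skill_value_index, the equal weighting of the three components, and the min-max normalisation are all illustrative assumptions, not the actual 80,000 Hours analysis.

```python
import pandas as pd

def skill_value_index(df: pd.DataFrame) -> pd.Series:
    """Score each skill by how strongly it features in well-paid,
    satisfying, and numerous jobs (higher = roughly 'more valuable').

    Expects one row per (job_title, skill) pair with columns:
    skill_importance, median_salary, job_satisfaction.
    """
    df = df.copy()
    # Weight each skill's importance by the pay of the job listing it
    df["pay_weighted"] = df["skill_importance"] * df["median_salary"]
    # ...and by how satisfying people report that job to be
    df["satisfaction_weighted"] = df["skill_importance"] * df["job_satisfaction"]

    per_skill = df.groupby("skill").agg(
        pay_score=("pay_weighted", "mean"),
        satisfaction_score=("satisfaction_weighted", "mean"),
        jobs_listing_skill=("job_title", "nunique"),
    )
    # Transferability: share of all jobs in the data set that list the skill
    per_skill["transferability"] = (
        per_skill["jobs_listing_skill"] / df["job_title"].nunique()
    )

    components = per_skill[["pay_score", "satisfaction_score", "transferability"]]
    # Min-max normalise each component to [0, 1], then weight them equally
    normalised = (components - components.min()) / (components.max() - components.min())
    return normalised.mean(axis=1).sort_values(ascending=False)
```

Calling skill_value_index(onet_df).head(10) on such a table would surface the ten highest-scoring skills; different weightings of pay, satisfaction, and transferability would of course reorder the list.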
Benjamin: 01:07:16
One thing that did stick out in that analysis is that the skills that seem most valuable are actually often soft skills: problem-solving, learning how to learn, judgment, productivity, these kinds of general skills. Some of the more technical STEM skill sets came out a bit lower, which might not be the best thing to say on a data science podcast. But I think one thing is that STEM skills are probably easier to improve, so they could still be very cost-effective to learn, even if they’re not the most valuable in themselves. And another thing the analysis showed is that the sweet spot these days, where there’s the most growth, seems to be roles where you have both technical and soft skills.
Benjamin: 01:08:05
And so it’s that overlap, which I think makes a lot of sense if you think about the future of automation, because a lot of basic data analysis is being automated; stuff that would have taken a lot longer in the past is much easier now, and that’s going to continue. So the role of people doing data analysis becomes much more about figuring out what the problem actually is in the first place, what analysis would even be helpful, and then explaining that analysis to people and figuring out how to apply it to real business problems. So if you can combine both hard skills and soft skills, that seems like a good spot at this point.
Jon: 01:08:49
Cool, great takeaways, Ben. So we literally could have this podcast go on for hours, and I would definitely like to have you on again sometime; maybe we could do an AI alignment-specific episode or something like that. But for now, we’re going to start wrapping up. So what is the gamut of ways that people can get involved with 80,000 Hours?
Benjamin: 01:09:20
Yeah, so we’ve mentioned a bunch of them. We mentioned the job board already, and the one-on-one advice. And then another one I would add is that we have a bunch of online guides on the site. The first one is called the Key Ideas guide, which is more about the big concepts: how do you actually compare careers in terms of impact? Then we have a list of problems that we think are unusually big and neglected, covering some of the ones we’ve talked about, but there’s a whole other list of like 20 problems that we didn’t touch on today. And then career paths that seem high impact. And then, maybe more importantly, we also now have this career planning course. It’s an eight-week course; each week there’s an article to read, it gives you some questions to answer about your career, and there’s an attached worksheet. At the end, you have a whole complete career plan. So that’s designed to help you take all these giant considerations and turn them into an individual plan.
Jon: 01:10:16
Awesome, and obviously all the podcast episodes too.
Benjamin: 01:10:21
Yeah, the podcast, do check it out. And everything we provide is free as well, because we’re entirely funded by donations.
Jon: 01:10:31
Yeah. I have personally benefited a fair bit from the resources that you provide. So thanks so much to you and everyone at 80,000 Hours for the work that you do, for the impact that you’ve made on me personally, and surely on many listeners today and the thousands of other people out there. It’s really awesome, Ben, and I’m glad that you made that decision back in 2012 to go down this path. So we’ve talked about quite a few books already in this episode: Nate Silver’s The Signal and the Noise, the AI alignment book, and others that aren’t immediately coming to mind. But do you have a particular book recommendation on top of all those?
Benjamin: 01:11:19
Yeah. One I’d really recommend, which came out last year, is called The Precipice, by Toby Ord, who is one of our trustees. His idea is that we’ve entered this new age in humanity, lasting the next century or so, which he calls the precipice. One way of seeing it is that technology has developed to the point where we can actually end civilization: we have nuclear weapons, we have runaway climate change, we have pandemics. But at the same time, we haven’t really developed the wisdom to make sure we don’t do that. So it seems like we’re in this unusually risky time in history. And this means that, for people listening, maybe you can help with challenges that are literally of historical importance, because if we navigate this time successfully, we don’t know how long things could last, but way, way longer; there could be far more people in the future than are alive today. But we could also mess it all up. There have been hundreds of thousands of generations before us, and we could be the ones who drop the baton. So that really is a big thing motivating our work. And one thing I also really like about that book is that the final chapter is called Our Potential.
Benjamin: 01:12:43
And that’s just one thing I wanted to make sure I also mentioned: we’ve talked a bunch about these risks and things that could go wrong, but another thing that really motivates me is just how much better the future could be. I think it’s kind of like what you were saying with the chimps versus humans stuff earlier; it’s not even clear that we can really imagine how good life could get. So I see it as these two extremes: all these technologies could make things way better than anything we know today, but they also pose these big risks. And it’s the job of our generation to try and tilt things more in the good direction rather than the bad one.
Jon: 01:13:23
And there are countless data-driven examples of how the world has improved over the last couple of centuries, but also of how things are still awful and could get way better. You actually made a LinkedIn post recently, at the time of recording, about that. I clicked through, read a whole bunch, and made a whole podcast episode about it: episode 492, it’s called The World Is Awful, And It’s Never Been Better. And I cite you and I cite Max Roser, or Max Rosin?
Benjamin: 01:13:54
Max Roser, yeah, who’s awesome. Yeah.
Jon: 01:13:57
So founder of ourworldindata.org and I’d actually, I had been-
Benjamin: 01:14:02
Yeah, they’re probably hiring data scientists too. So I probably should have mentioned them earlier.
Jon: 01:14:06
There you go, really cool organization. I’ve been using charts from ourworldindata.org in my deep learning course for years. So, in the same way that we’re rounding out this podcast episode, and in the same way you’re talking about that chapter in The Precipice, when I do my 30-hour deep learning course, in the final lecture, the final half hour or so, I bring out a whole bunch of charts from ourworldindata.org. I didn’t know the connection to you at the time; I’ve been doing it for years. And I show that, across the board, in terms of literacy, child mortality, which is the big one I focus on in episode 492, longevity, democracy, deaths in conflict, domestic violence, across so many measures, the world was unimaginably awful only a few generations ago. And while there are still awful things going on today, it’s not evenly distributed. You and I, Ben, happen to be very fortunate, in terms of the planet today, in where we live.
Jon: 01:15:21
But it’s not evenly distributed, even today. So there’s that huge opportunity of just leveling the playing field more. And then beyond that, like you’re alluding to, given the progress we’ve made over the last two centuries, as long as we don’t drop the baton, like you say, things could be unimaginably good for us in our lifetimes and certainly for the generations after us. So very cool. Well, I just have one last question-
Benjamin: 01:15:48
A very good note to end on.
Jon: 01:15:50
Yeah, I should probably be letting you make a big final point, so if any others come to mind in the next few seconds, go ahead. But I have one last question for you, which is simply: how should people follow you? Obviously you have a huge amount of insight; I personally gain a lot from following you on social media and the posts that you make. So how can others do the same?
Benjamin: 01:16:12
Yeah, so I’ve been posting kind of in-progress research ideas on Twitter recently. That’s probably the best thing. But yeah, I also have LinkedIn, I’ve been posting on LinkedIn as well, and benjamintodd.org has kind of everything else, some of my past writing.
Jon: 01:16:30
Nice. Well, we will, as always, include those social media links and your website link in the show notes. Ben, I think it goes without saying that this has been an awesome episode. I’ve really enjoyed having you on and yeah, hope to have you on again sometime soon.
Benjamin: 01:16:47
Cool, yeah. Thanks so much.
Jon: 01:16:54
Well, I’m sure I don’t need to say it explicitly, but I have a huge amount of respect for Ben and the work that he does at 80,000 Hours. And my goodness is he ever able to effectively communicate how we can adjust our career paths in an ever more personally meaningful and globally impactful direction. In today’s episode, Ben covered the relative value of building career capital as opposed to financial capital, especially early in our careers. We talked about how identifying career avenues that are impactful as well as neglected is a solid general strategy for success. He covered particularly neglected but hugely impactful application areas for a data science skill set, such as AI alignment, pandemic prevention, and information security. He covered the effective career strategy of iterating in one to three-year chunks toward a long-term goal. And we talked about how job description data suggest that the most valuable career skills are soft ones, like judgment, productivity, and learning how to learn, but that the most career growth potential lies where technical and soft skills intersect.
Jon: 01:18:01
As always, you can get all the show notes, including the transcript for this episode, the video recording, any materials mentioned on the show, and the URLs for Ben’s social media profiles, as well as my own social media profiles at www.superdatascience.com/497, that’s www.superdatascience.com/497. If you enjoyed this episode, I’d of course greatly appreciate it if you left a review on your favorite podcasting app or on the SuperDataScience YouTube channel, where we have a video version of this episode.
Jon: 01:18:29
To let me know your thoughts on the episode directly, please do feel welcome to add me on LinkedIn or Twitter, and then tag me in a post to let me know your thoughts on this episode. Your feedback is invaluable for figuring out what topics we should cover next. All right, thank you to Ivana, Jaime, Mario, and JP on the SuperDataScience team for managing and producing an extra special episode for us today. Keep on rocking it out there folks, and I’m looking forward to enjoying another round of the SuperDataScience Podcast with you very soon.