SDS 988: In Case You Missed It in April 2026

Jon Krohn

Podcast Guest: Jon Krohn

May 1, 2026

Subscribe on Apple Podcasts, Spotify, Stitcher Radio, or TuneIn


Interested in sponsoring a Super Data Science Podcast episode? Email natalie@superdatascience.com for sponsorship information.

In this month’s episode of In Case You Missed It, Jon Krohn talks to guests about memory and education, and how artificial intelligence is continuing to help lower the barriers to access. Hear from Matt Glickman, co-founder and CEO of Genesis Computing (Episode 981), Traci Walker-Griffith, Executive Director of the Eliot Innovation School in Boston (Episode 983), Richmond Alake, Oracle’s Director of AI Developer Experience (Episode 985), and Linda Haviv, A.I. Engineer (Episode 987).

Find out all the latest in AI with these teaser clips from our long-running show, and hear from some of the biggest names in the field discussing the foundations of AI agent memory, how engineers can develop at scale, and why they believe AI could be your child’s perfect tutor in the classroom.



Podcast Transcript

Jon Krohn: 00:00 This is episode number 988, our In Case You Missed It in April episode. Welcome back to the Super Data Science Podcast. I’m your host, Jon Krohn. This is an ICYMI episode that highlights the best parts of conversations we had on the show over the past month, in this case, April 2026. My first clip is from episode number 985. Regular listeners will know that I have a background in neuroscience, and I was particularly excited to speak to Oracle’s Director of AI Developer Experience, Richmond Alake, about his thoughts on agent memory and how its foundations intersect with animal memory systems. I have a neuroscience background, and so these are the kinds of things, doing a neuroscience undergrad in particular, where you’re just surveying across different psychological disciplines. I spent a lot of time learning about different kinds of memory. And so it’s interesting now to see a lot of those same kinds of animal memory systems being replicated in agents.

01:03 And so according to you, agent memory can be broken down into four categories: episodic, semantic, procedural, and working memory. Can you break down each of those for us here?

Richmond Alake: 01:14 Yeah, yeah. So the first thing I wanted to point out is this is the first time I have someone that can fact-check me on all of this, right? So this is good.

Jon Krohn: 01:25 Live on air.

Richmond Alake: 01:26 Exactly. Because one of the things that we’ve done as, I guess, a society or humankind is look to nature to inspire our technological advances. So I always use the example of planes, right? The Wright Brothers had inspiration from the early inventors of vehicles of flight, who actually took inspiration from birds. So you always look to nature to try to inspire technology. Same with convolutional neural networks. There was an experiment by Hubel and Wiesel, back in the late 1950s and 1960s, where they experimented with the visual cortex of cats to work out how neurons work. But anyway, the long story is we always draw from biology to inspire our technological advancement. Yeah.

Jon Krohn: 02:14 If you don’t mind me interjecting

Richmond Alake: 02:15 Just for

Jon Krohn: 02:16 A moment. Yeah. That Hubel and Wiesel experiment was absolutely a critical moment in the development

Richmond Alake: 02:21 Of- Jon, I’m so excited. This is good because I always talk about the experiment. Not everyone knows it, but of course you know it because you did neuroscience and you did.

Jon Krohn: 02:30 I did, but it was also … these were the Hubel and Wiesel experiments. They were a key … And for years, about a decade ago, when I started teaching deep learning to the public, when I was giving talks to technical or non-technical audiences, I would often use the Hubel and Wiesel experiments to illustrate how neural networks work when you layer them deeply, because they had … It’s a really interesting story. If you don’t mind me taking a couple minutes to

Richmond Alake: 02:57 Explain this.

Jon Krohn: 02:58 It’s all you.

Richmond Alake: 02:59 We’ll switch roles.

Jon Krohn: 03:02 Hubel and Wiesel, they were trying to figure out how the vision system worked. And they could see in the cat’s brain that there were all of these nerve fibers going from the eyes to the back of the brain, the visual cortex. And so they tried for ages and ages. They would have a cat sitting in a harness, they’d have recording electrodes in, what they could see was the right part of the brain, and they would show different kinds of images to the cats. And day after day after day, there’d be no readings. Until, like many of the great discoveries in history, such as X-rays, they discovered it by accident. They were frustrated at the end of the day, and instead of taking the cat out of the harness before starting to take apart the rest of the recording equipment, they removed a slide that they’d been showing on the projector, and the slide had a straight-line edge.

04:01 And as that straight-line edge passed through the cat’s field of vision, boom, the neurons lit up. And that’s how they made the discovery that the first brain cells that receive information from the eyes are responsible only for straight lines at a specific orientation. And then they later discovered that that information, from what they called simple cells detecting those straight lines, gets combined into edges and corners at a second layer. And then that second layer gets combined into a third layer to get even more complexity, more abstraction, which is exactly how deep learning systems work. So yeah, convolutional neural networks. Exactly.

Richmond Alake: 04:34 The early layers there, they capture the edges, the features of the edges. Then as you get through the deeper layers, you see the abstract shapes start to form. I’m so glad that you explained that experiment better than I have ever done.
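The simple-cell idea Jon and Richmond describe can be sketched in a few lines of Python. This is my own toy illustration, not code from the episode: a one-dimensional convolution filter that, like a simple cell, stays silent on a uniform field and fires only where there is an edge.

```python
# Toy "simple cell": a 1-D convolution filter that responds only to edges.
def convolve(signal, kernel):
    """Slide the kernel over the signal and return the responses."""
    half = len(kernel) // 2
    out = []
    for i in range(half, len(signal) - half):
        out.append(sum(signal[i - half + j] * kernel[j] for j in range(len(kernel))))
    return out

edge_filter = [-1, 0, 1]   # fires where brightness changes
flat = [5, 5, 5, 5, 5]     # uniform field: no response
edge = [0, 0, 0, 5, 5]     # a straight edge passing through the "field of vision"

print(convolve(flat, edge_filter))  # [0, 0, 0] -- nothing to see
print(convolve(edge, edge_filter))  # [0, 5, 5] -- the filter lights up at the edge
```

Stacking layers of such filters, so that edge responses combine into corners and corners into shapes, is the abstraction hierarchy the conversation describes.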

Jon Krohn: 04:47 Well, it’s kind of wild how far we’ve come with agents and this kind of abstraction, where a decade ago it seemed so important for us to understand how neural networks and deep learning work. And now you don’t even hear people talking about deep learning that much, even though that’s how LLMs work, which is how agents work. It’s kind of interesting how you just keep tacking on more and more to get more complexity out of these systems, more abstraction.

Richmond Alake: 05:08 Yeah. And going back to the four agent memory types that we mentioned, right? Semantic, procedural, working memory, and what was the other one?

Jon Krohn: 05:21 Episodic, semantic, procedural, and working.

Richmond Alake: 05:23 Yes. Okay. Your list.

Jon Krohn: 05:24 I’m just reading

Richmond Alake: 05:25 It. Exactly. My list, right? My memory’s not that great, then. I’m going to go through them and just talk about examples of how they could manifest or how they could be implemented. So let’s go, and keep me on track with the list. Let’s go. It was a long flight from London to New York, so I’m a bit slow. And you’re

Jon Krohn: 05:43 Working without notes. For people who aren’t watching the video version, Richmond is just freewheeling everything

Richmond Alake: 05:48 Here. Well, the hundred days of agent memory helped. All right. So let’s go with episodic. Episodic is one of the easiest ones to actually understand. Episodic is essentially memory that has some association with time. And the easiest example within an agent is back-and-forth conversation. So in the actual conversations you’re having, for the developers or engineers listening, the role would be the user or the assistant, the content would be the message, and then you have a timestamp. The timestamp is always important in episodic memory because that’s how you recall that particular memory type. So that’s episodic. Another example would be procedural. One thing that’s very popular today is skills.md files.

Jon Krohn: 06:33 Let me just quickly, before you move on from episodic, maybe I’ll try to give an example. So you gave the travel agent example. So imagine a human travel agent, and you’re that travel agent. Somebody comes into your office on Monday, and they describe the kinds of things that they’re looking for: cultural, lots of great food, that kind of thing. “Set up a trip for me,” they say, “I’ll be back on Wednesday and hope you have something ready for me.” It’s an episode. It’s a specific … If you were the travel agent, you could mark that down in a notebook: on Monday at 10:00 AM, Richmond came into my office. It’s a discrete episode where, as a human, you can think back on Wednesday and think, “Yeah, at 10:00 AM on Monday, Richmond was here in my office. I remember what he’s looking for.” So you recall that episode.

Richmond Alake: 07:27 Anyway. Exactly. Nice. So let’s go over to procedural memory. An example of that within the agentic context would be workflows. And I have another example of procedural memory called using the database as a toolbox, but let’s talk about workflows. You could think of things like skills.md files. They’re very popular now at the time of this recording. And skills.md files are basically Markdown files that cover instructions on how to do certain tasks that are given to an agent. I call them the SOPs for agents, essentially.

Jon Krohn: 08:03 Yeah, standard operating procedures.

Richmond Alake: 08:04 Exactly. But these are basically procedural memory, because humans have the same concept of routines and skills. We store them in a particular part of our brain, and you’re the neuroscientist here, so you can fact-check me on this. We have this part of our brain called the cerebrum, cerebellum.

Jon Krohn: 08:22 Cerebellum.

Richmond Alake: 08:22 Cerebellum. Yeah. And that stores all our routines and skills. So you can actually think about that in the same context for agents as well. We are writing a bunch of skills as .md files and we’re storing them; people are using file systems, and we could talk about file systems versus databases for agent memory. But over at Oracle, we’re experimenting with putting these skills into the database, actually putting them within tables that have different representations of this data, one of them being a vector representation. And then we can progressively expose the skills at the right time. So rather than giving the agent a hundred skills.md files within a context, we can just retrieve the actual skills that we need at the time, which allows you to start to scale, essentially. But that’s an example of procedural.
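The retrieve-only-the-relevant-skills idea can be sketched in Python. This is an illustration of the general technique, not Oracle’s implementation: the toy `embed` function stands in for a real embedding model, and the `skills` list stands in for a database table with a vector column.

```python
import math

def embed(text):
    """Hypothetical embedding: a character-frequency vector over a-z."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# A stand-in for a table of skills.md-style procedures, each with a vector.
skills = [
    {"name": "book_flight.md", "text": "steps to search and book a flight"},
    {"name": "summarize_report.md", "text": "steps to summarize a quarterly report"},
]
for s in skills:
    s["vector"] = embed(s["text"])

def retrieve_skills(task, k=1):
    """Return the k skills most similar to the task description."""
    qv = embed(task)
    ranked = sorted(skills, key=lambda s: cosine(qv, s["vector"]), reverse=True)
    return [s["name"] for s in ranked[:k]]

print(retrieve_skills("find and book a flight to New York"))  # ['book_flight.md']
```

Only the matching skill enters the context window, instead of all of them, which is what lets the approach scale to a large skill library.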

09:12 Do you want to do

Jon Krohn: 09:13 A- Yeah, I’ll try to come up with an example here. So if you think of some kind of thing that you do regularly, like making scrambled eggs,

09:20 It’s not that complicated. And you probably don’t remember the first time somebody showed you how to scramble eggs. It’s not like an episode where you’re like, “Oh yeah.” I mean, maybe there might be some people who are like, “I love the time my grandma showed me how to scramble eggs.” You might have that kind of episode that you can recall, but for most people, you can make scrambled eggs now and you don’t really remember how that happened. You’ve just kind of figured it out through experimentation, maybe some things you read online or some things people taught you, a cookbook. There could be all these different inputs that over time created this procedure that you’re aware of. There aren’t any specific episodes tied to that procedure. You just know how to scramble eggs.

Richmond Alake: 10:00 Exactly. Exactly. That’s a very good example. So let’s go to semantic. Semantic can be a bit easier. The easiest way to think of it is world knowledge. You can have knowledge about a certain topic; that’s semantic memory. And for agents, that could be, let’s say, the institutional knowledge within your enterprise data: how to do certain things, some abbreviations or some terms. And you could just bring that into your agents to get them to actually start to have the same knowledge your employees have for that particular use case. So that’s semantic memory. There are different types, but that’s the one we’re going to use today. So over to you.

Jon Krohn: 10:39 Yeah, exactly. So this is kind of like any kind of knowledge that’s in an encyclopedia, like Wikipedia, these kinds of key pieces of information. Again, so it’s different from procedural in that it’s not necessarily a sequence, it’s a fact, it’s an association.

Richmond Alake: 10:57 Exactly.

Jon Krohn: 10:59 Richmond lives in London. I know that’s a semantic fact. It’s not associated with a specific episode. It’s not associated with a procedure. It’s just a piece of information that I know.
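A minimal sketch of semantic memory as Jon describes it: facts stored as plain associations, recalled by lookup rather than by episode or procedure. The triple store below is my own illustration, not from the episode.

```python
# Semantic memory as subject-predicate-object facts.
semantic_memory = set()

def learn(subject, predicate, obj):
    """Store one fact as an association."""
    semantic_memory.add((subject, predicate, obj))

def recall(subject, predicate):
    """Look up every object linked to this subject and predicate."""
    return [o for s, p, o in semantic_memory if s == subject and p == predicate]

learn("Richmond", "lives_in", "London")
learn("SDS Podcast", "hosted_by", "Jon Krohn")

print(recall("Richmond", "lives_in"))  # ['London']
```

Note there is no timestamp and no sequence of steps, which is exactly what distinguishes semantic memory from the episodic and procedural types.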

Richmond Alake: 11:09 Exactly. And then we can talk about working memory, which is another memory type. Working memory is short-term, specifically a subset of short-term memory. Working memory is what you’re using in real time, in context. There’s a bunch of information that is my working memory that I’m using to speak to you right now, to actually interact with you. I’m not having to think for an extensive period of time. So that’s my working memory. The best way I would describe this within the agentic context is the context window of the LLM. That’s working memory.
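Treating working memory as the LLM context window can be sketched as a token-budgeted rolling buffer. This is a simplified illustration of the idea, not from the episode; the one-token-per-word tokenizer and the 50-token budget are stand-ins for a real tokenizer and a real context limit.

```python
MAX_TOKENS = 50  # hypothetical context-window budget

def count_tokens(text):
    """Crude stand-in for a real tokenizer: one token per word."""
    return len(text.split())

def build_context(messages, budget=MAX_TOKENS):
    """Keep the most recent messages that fit inside the token budget."""
    context, used = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = count_tokens(msg)
        if used + cost > budget:
            break  # older messages fall out of working memory
        context.append(msg)
        used += cost
    return list(reversed(context))  # restore chronological order

history = [f"message {i} " + "word " * 8 for i in range(10)]  # 10 tokens each
window = build_context(history)
print(len(window))  # 5 -- only the newest turns fit the 50-token budget
```

Everything outside the window has to come back in through one of the longer-term memory types (episodic, semantic, or procedural) if the agent is to use it again.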

Jon Krohn: 11:40 I recommend listening to the whole interview to hear Richmond walk me through every one of the memory types he outlines. We move now from theory to practice with my next clip, which is from episode number 981. I invited Matt Glickman, co-founder and CEO of Genesis Computing, onto the show to talk about how Genesis helps practitioners use data engineering agents to create at scale. It sounds like a no-brainer to be taking on these kinds of data engineering agents. To help myself and the audience better understand how this works in practice, are you able to walk us through one or two use cases, maybe anonymized use cases with clients of yours, in terms of how you implemented your solution and what the impact has been on that business?

Matt Glickman: 12:23 Sure. I guess the third pillar that I’ll mention, and then I’ll take you through the example, is that we’re agnostic to the actual data engineering tools and platforms that people are using. These platforms will come and go. The frameworks will come and go. What we’re replacing, or really augmenting, is the people. So to be clear for everyone out there, anyone who’s panicking: no one’s losing their job.

Jon Krohn: 12:55 I have that question coming up.

Matt Glickman: 12:56 No one’s losing their job. What’s going to happen, though, is that people are not going to be hired as much. So the next wave of people who would have been the junior data engineer or the junior analyst or the junior operator, or honestly just any junior role in enterprises, if you want to get a little more dystopian about it, that junior-level hiring is what AI is basically going to wipe out. And it’s already starting to happen. Hiring projections are just not going to happen. People with the skills are already there. There’s no reason to replace them, particularly in the data space, because teams were limited by the fact that they just didn’t have enough of them. Now you can get 10X more power out of them, and they can move on to the things they wanted to do and not the tasks that they were bogged down with.

Jon Krohn: 13:43 Yeah. Maybe I’m being too optimistic here, but don’t you think there are some scenarios in some organizations where they actually will want to do more hiring, even of junior people, because they’re being so much more productive with the individuals that they have? They’re getting so much out of them, like you said, kind of 10Xing the capability there. You were talking about how previously data engineering was this extremely stressful role; they never felt like they had enough time for anything. And I suspect that even if you’re 10Xing your capability, there’s still a lot to be keeping an eye on and making sure of. And so as that ecosystem grows, you could imagine some organizations also being like, “It’d be good to have a few more eyes on this.”

Matt Glickman: 14:25 No, I think that “a few more” is the key part of this. My co-founder Justin published a piece, if you haven’t seen it, it’s on Genesis’s LinkedIn, that there are going to be certain people who can be conductors of these AIs at scale and use them better than others. He calls it spinning plates. He now has six different agents working in parallel on different things that we’re building. I’ve only gotten to two and a half. So there are going to be certain people who can come in and be these force multipliers and say, maybe where you’re working with one agent on something, I can actually command 10 of them, and I have the context-switching in my head to figure out how to put them all to work. So there’ll be some of that hiring, but in my opinion, those are going to be a few special hires, because the number of people who can do that is just a limited audience.

15:39 I think it’s more about the impact it’s going to have. Again, everyone who’s doing these kinds of operational roles is going to be the ones who now just get a force multiplier applied to them. And I’d be hard-pressed to want to bring on more humans when you can just spin up 10 agents, 20 agents to do that task instead.

Jon Krohn: 16:06 It wouldn’t be surprising to me, and I’m open to your critical feedback on this, but it seems to me like some junior hires might be more likely to be that kind of 10X agent orchestrator, as opposed to a two-and-a-half-X one, because they could be growing up in this ecosystem where it’s vibe coding first.

Matt Glickman: 16:27 I hope so. The thing I worry about, and this is maybe a little off-topic, but I think it’s relevant, is that the education system has not yet figured out how to teach AI.

Jon Krohn: 16:45 Yeah, a lot of places haven’t. Some places-

Matt Glickman: 16:47 Some places are. And I think if that happens, maybe this can change dramatically. But right now, most schools, in high school and college, discourage people from using AI as part of their process. So it’s like a double whammy. Basically, you’re going to have companies that, if anything, like I said, are looking for people who are masters of how to use this tech, and people come out of school having learned it despite their schools not encouraging it. So yeah, it’s a bit of a double whammy, and it’s going to happen. I’m actually trying to give back and work with the schools that my kids have gone to, just to try to express the urgency. And I think there’s appetite, but it’s tough. I mean, how do you incorporate this? How do you assign papers?

17:41 What does that mean? The idea that you’re going to be able to tell? You’re not. That’s a farce. No human is going to be able to tell, and it’s just all going to get worse and worse.

Jon Krohn: 17:49 One person who might have cracked AI in education is Linda Haviv. I speak with Linda, a prolific content creator, in episode number 987 about her work in lowering the barriers to access learning about AI. Yeah. Let’s talk about this fast-moving, not-needing-to-code-anymore scary business, because I think it is … I don’t think anyone would dispute that it’s still an advantage to be literate in code, even just for keeping track of what your agents are doing. But, as you said earlier in this episode, things have always been moving fast in this space, and this is a really big change. It’s a good point. Where now, theoretically, you don’t need to be able to write code at all to be working in AI: to be training models, downloading open-source model weights, fine-tuning them to some specific task, labeling the data, getting that into production infrastructure, building the website that’s running that AI application in the backend. Theoretically, it now could all be done with natural language prompts.

18:57 And that is a big change. And that’s really just a couple months now that you

Linda Haviv: 19:00 Could do that. And I do think, though, the more and more we move off toy examples, as people move to production and scale, we’re going to hit a lot more systems thinking. And you know what stopped me in my tracks this week? Somebody who’s non-technical talking about memory. And I was thinking, a few years ago, who would be talking about memory? It was always an AI infra thing; you don’t see this part, it’s invisible. I used to build maps for elections. Nobody gave thanks to the DevOps engineers. And I do think we’re in this phase where AI infra is actually becoming the problem that many people are just wording differently, that they need to figure out. And maybe they don’t need to know how prefill-decode disaggregation or the KV cache works or stuff like that, but they are saying the same thing in different ways.

19:50 They are saying, “Oh, this context is not working for me. Why does it feel like it’s lagging?” It’s things that even now, as they’re building and vibe coding, whether it be the founder, the AI engineer, or the AI infra engineer, they’re kind of all saying the same thing.

Jon Krohn: 20:07 Yeah. So in 2016, when you were getting … Well, actually, I don’t know if it was … Was it 2016 that you got started in your tech career or that’s just content creation? When did you start?

Linda Haviv: 20:15 Oh yeah, no. So it was 2015, 2016. I was teaching myself for a year, and 2015, 2016 is when I got my first JavaScript role. I was a JavaScript developer in my first role.

Jon Krohn: 20:26 And you were in media still for a while

Linda Haviv: 20:29 Before- So the funny part, talk about transferable skills. I think people are all dealing with non-linear career paths right now. That corporate ladder, it’s not exactly … I don’t know how it’s going to play out for my kids. I don’t know how it’s going to play out,

20:41 But I can say that I’ve navigated non-linear career paths, maybe not at the speed of now, but this is where I think transferable skills, and people who have an engineering background, it’s really a leg up, and you have to lean into it. And what I mean by that is, when I was working in media, I was working first on the other end of media, and I thought, well, where would I have a leg up? If I went to a random tech company … I didn’t have a traditional computer science background, but I do understand the user of media, because I come from media. And if I’m building the CMS for the journalist writing the article, I understand what they need, because I wrote the articles on the other end. Yeah,

Jon Krohn: 21:20 It’s the content management system for …

Linda Haviv: 21:24 Exactly. Sorry. It was like my life for a while. I’m speaking with my hands and I always hit it with my chin. You know, when I used to sing on stage, I used to always hit my teeth with the mic. Sorry, tangent.

Jon Krohn: 21:36 There may have been a sound in the audio only version just now, and that sound was Linda hitting her face on the microphone.

Linda Haviv: 21:42 This is why I’m chaotic. I shouldn’t drink so much coffee. But that part about transferable skills, it is more about now you understand the user, you understand systems thinking. Systems thinking is part of everything we’re doing. If you’re building agentic AI workflows, there is a distributed systems layer here that you have to understand. And the people who are non-technical might be coming with a very good transferable skill too, right? They’re coming with an understanding of their niche. They understand real estate. They understand health. They’re coming from their niche. And what I think engineers bring is that they understand where things are failing. It’s not a complete mystery to them. Whether they come from software development, whether they come from the infra end, even someone coming from DevOps, they might not be an AI/ML expert, but if you think about it, even the training of a model is an orchestration and distributed systems problem, like mixture-of-experts models and how they work.

22:36 If you think about it, the large language models and how they’re trained, that’s also, in a way, a DevOps skill set. So I think a lot of it is getting abstracted, but what I think is bubbling up to the top is these AI infra challenges. And to bring it back to the transferable skills point, I think it’s about leaning in and not feeling devalued just because, yes, people can vibe code. I think everyone brings strengths to the table. And just like the culture of open source, when we build together and we’re able to actually bring our strengths together, we will all win that way, right?

Jon Krohn: 23:06 For sure. So you’re saying your specific background, for example, in media, prior to becoming a software developer in media, that was helpful. And that’s a useful tip for any of our listeners who are thinking, “Hey, I want to get into data science or AI or software development.” Just like you said, Linda, it might not make sense to try applying for jobs at Meta as your first tech job. But you could be thinking, “Okay, well, I actually have already been working in news media for several years. Maybe I can just stay at this company and start to have some of my time, or all of my time, be doing some kind of development work.”

Linda Haviv: 23:41 And I think people can bring their full selves now to a lot of things, because I think it actually matters. In the past, your career was always like, “Oh, here’s my separation.” As we look at personal branding, as we look at how the things being built are going to be more intertwined with the human element, there’s a part where I think we’re also being empowered, which is even why I’m going into entrepreneurship. There’s a lot of your full self you could bring today that you couldn’t in the past, or creative things you could do that you weren’t able to because you didn’t have the $50,000 studio. Even in the music industry, you had to be with a label. Today it’s very democratized, whether it be content or building something. You don’t need to hire as many people, and the software developer can take what’s in their mind, what they wanted to solve, and do it for themselves.

24:28 And the cost is lower in some respects, right?

Jon Krohn: 24:31 Yep. In fact, it’s almost just as in music. So in music, and probably in performance in general, it seems to me now like there’s almost an expectation that somebody has already developed a social media following before they can get a record deal. And so that kind of reverses things. Whereas previously, you’d get the record deal, you’d be on the radio, and then you’d have a following. And

Linda Haviv: 24:56 Then you were at the mercy of … Same with news, right? I used to work in TV news, and one thing I realized was: oh, I love media, but you know who the happiest people, in my opinion, were? The people who came in as specialists: the lawyers, the doctors. Why? When they needed to not be on air that day because they weren’t feeling well, they could not be on air. But when you live a news cycle, you don’t have … I mean, there’s no balance in many things, but I think in TV news it’s the 1%, and in a way you have so much of your own authority over what you want to talk about and what you don’t want to talk about in your own media. Now, at the time, in 2016, it wasn’t as common to have your own media channels. But for me, I was like, I’d prefer to come in as an expert versus just … I think what journalists do is incredible.

25:39 I also realized what I enjoy versus don’t enjoy. And I think you have to ask yourself that, because you can curate your life today. You can be the artist of your life and curate it based on the things you want, because you have the power today to take it. And everyone can build something, but what you build is really important. And going deep and figuring out where you bring that skill set, where you find joy, I think we’re in that era. I think there’ll be a creative renaissance, in a way, too. So live your wildest dreams.

Jon Krohn: 26:05 Yeah. So all of these things that we were discussing earlier in the episode as being scary from one perspective, where as a software developer, as a data scientist, as an AI expert, we used to have this inbuilt moat, where only this small percentage of the population could write the Python code or whatever, the Rust code. Or design.

Linda Haviv: 26:27 I could be an engineer, but I’m not good at designing. I’m not good at project management, but AI helps me a lot. Right,

Jon Krohn: 26:34 Right, right, right. So yeah, there are all of these different aspects where bringing your particular background, your particular expertise, gives you probably more confidence to pursue any given direction. If you’re thinking, “I want to build a media business with no experience in media,” you can go to Claude and have the conversation, but are you going to have the same level of confidence in the recommendations you’re getting from that Claude chat as if you’ve lived that path in media, you’ve seen what works, you’ve seen what doesn’t work, and you’ve developed connections in that space as well? So you can say, “Hey, so-and-so from media company X, I’ve built this web app that I think solves these problems that you have. Would you like to see it?” Them already knowing you, they’re much more likely to take the call to look at your app than if you send them a cold email.

Linda Haviv: 27:27 Right. I think for people, especially, who are many years into the industry and they’re toying with that, there is a strength to it. And of course, obviously, you have the more junior developers who are struggling right now coming out of college. I think it’s harder to get junior jobs, so you’re seeing them also going very entrepreneurial. There is a democratization, there is an abstraction there. We’ve seen this happen before, though, in the age of bootcamps, around 2016. I think I went to a coding bootcamp at the height of coding bootcamps. I went to the Flatiron School.

Jon Krohn: 27:55 And we’re rounding out a great month with episode number 983, in which Traci Walker-Griffith shares her novel perspective on what critical thinking is. Traci is principal of the Eliot Innovation School in Boston. That’s a kindergarten-to-eighth-grade school, and she’s responsible there for rolling out new AI initiatives in the classroom. All right. So now let’s talk about the older kids, who interact with the AI systems directly. So I think it’s grades five through eight. And it’s funny, I come from Canada, so we say grade five to eight, but you say fifth grade to eighth grade. It

Traci Walker-Griffith: 28:26 Should be pretty- I can say it always, Jon,

28:31 Whatever we can do. So the fifth through eighth grade this year is amazing. I mean, I will tell you, I was in a classroom yesterday. We had a district visit, so the chief academic officer, the director of digital learning for Boston Public Schools, and the deputy chief of academic learning visited, and the director of digital learning hadn’t been here for a year. So it was amazing, because last year, as we talked about, we were just launching the AI co-lab. We had some ideas we were fleshing out, we were really excited, and this year they got to see the student-facing AI. And I’ll just give you a real example, because I love a good story. So I walk with these amazing people, who are so excited to see the best school in Boston, and we walk into a classroom, and there’s a question that the kids are exploring, and the question is, how can AI make us better test takers?

29:43 So to give you context, it’s March, we’re getting ready for our Super Bowl, which is the state test, and we’re in a repertoire unit. We’re not in a test-taking unit, we’re in a repertoire unit, because in our repertoire, we have all of these skills that we can use to show what we know on this test. And so because we’ve been using AI all year, we thought, what better way for students to have an opportunity to think about AI? And so we have student versus AI, and we’ve used … So Magic School is another opportunity for us to continue to use other systems, and Magic School because it’s part of our suite in the Boston Public Schools. And so teachers in the ELA humanities team worked together to create this. I call it complicated because I was not understanding it until my fifth grade friend said, “Principal Traci, here’s how it works.”

30:41 He’s got like four tabs open on his computer. They’ve uploaded these practice, like short pieces of text. There’s the extended response, which is the writing responses that we’ve really … Year over year, from like 2007, it has been a problem of practice for our students to be able to explain their thinking clearly in writing. So this critical thinking and analytical writing. And so there’s open response, there’s multiple choice, and then there’s the notebook next to the computer, and the kids are doing what they call chunking. So they’re writing down what they’re reading and chunking it. Then they’re answering a question in Magic School. And I said to the student, I said, “How is AI even helping you?” And he said, “Well, let me tell you, I’m writing my response, then I’m letting AI write a response, and then I’m comparing the two to see who’s better.

31:42 “And then I have to explain if I think mine is better or if I think the AI’s is better.” So the idea of reflection right here, I mean, that’s mission critical to thinking about thinking and accelerating outcomes. And then in addition to that, I said, “Well, what about the multiple choice?” So he’s like, “Oh, I got this one wrong.” And I said, “Well, why do you think you got it wrong?” He goes, “Well, this was the almost right answer, but this was the right answer.” And I said, “Okay, so what do you do with that information?” He said, “Well, actually it goes right into this tracker so Ms. Duggan, my teacher, can see if there’s a pattern that might emerge.” And I’m like, “A pattern that might emerge? What does that mean?”

32:28 I mean, just the conversation. So I’m with the CAO, she’s looking at me, I’m looking at her, and I’m saying, this is fifth grade. So in four years, this child will be in high school, and there’s this opportunity for this child to take this learning … There are a hundred kids in fifth grade all having the same experience with this interface with Magic School, which was teacher-created and collaborated on. And this is not the first opportunity the student has had to work with AI. This is like a culmination, and that’s why we’re talking about a repertoire. This student is driving his learning. Another student is getting feedback based on what they need, and it’s personalized, but the point is we’re not lowering the bar, we’re raising the bar. AI raised the floor. Teachers are raising the ceiling. What’s better than that?

Jon Krohn: 33:30 That is sensational. I love how the older kids are able to figure out their own way based on the way that they learn, based on the places that they feel like they need work or the AI system says they need work. They figure out their own way to be working between these different applications, Gemini, Magic School, a notebook. And so what is Magic School exactly? Actually, it came up in our research as well, but I didn’t dig into it very much.

Walker G.: 33:57 So it’s another platform, and it’s built for education to be school safe. And as I said, we try to stay platform informed, not platform loyal. We want to make sure that, like PlayLab, which is another amazing platform … And it’s powered by Claude on the backend. We’re using PlayLab personally. I’ve done it with some principals; I created a budget tool in PlayLab. And Magic School allows teachers to prompt and control that interface. So we didn’t just start with Magic School. I want to be clear, because I think right out of the box doesn’t help the learning that teachers need to do and the AI literacy for both the adult learners and the student learners. That’s at the foundation. We need to know what we don’t know and give students opportunities to think about AI and AI ethicality and create … Our students are critical consumers of everything in their world.

35:08 AI is like number one. We need to make sure from the youngest age we’re thinking about AI literacy.

Jon Krohn: 35:14 All right, that’s it for today’s In Case You Missed It episode. To be sure not to miss any of our exciting upcoming episodes, subscribe to this podcast if you haven’t already, but most importantly, I hope you’ll just keep on listening. Until next time, keep on rocking it out there, and I’m looking forward to enjoying another round of the Super Data Science Podcast with you very soon.
