00:00:00I would hate to see a future where teachers outsource to AI the parts that I think really
00:00:05make good education, which are the connection pieces.
00:00:08When you really understand your students and can spend time with them.
00:00:11And AI can be used in so many ways that allow teachers to have more time to do that kind
00:00:16of work.
00:00:17And I'm excited for us to talk with institutions and discuss with them ways where we can amplify
00:00:22that knowledge they already have.
00:00:23Hi, everyone.
00:00:25We're here to talk about my favorite topic, which is AI and education.
00:00:30My name is Drew Bent.
00:00:31I lead our work here in education on beneficial deployments.
00:00:35Formerly was a high school math teacher.
00:00:37Parents are educators.
00:00:38I worked in education nonprofits and would definitely consider myself a lifelong learner.
00:00:42I'm joined here by my wonderful colleagues who work in education across the organization.
00:00:47Do you want to start Zoe?
00:00:48Happy to.
00:00:49Yeah.
00:00:50Hi, I'm Zoe.
00:00:51I'm on the education team here at Anthropic, and I support all of our non-technical audiences,
00:00:56including educating teachers and students about both our products and AI in general.
00:01:00Hi, I'm Maggie, and I founded and currently manage and support said education team, which
00:01:06we fondly internally call the Ministry of Education.
00:01:08Hi, I'm Efrem.
00:01:10I'm a product engineering manager, and I've also helped build some of our education-facing
00:01:13products.
00:01:15So I think it's helpful to start here with: why are we discussing education in the first
00:01:20place?
00:01:21Why did we start working on education at this general purpose AI lab?
00:01:25We all know, of course, that at Anthropic we care a lot about studying both the potential
00:01:29of the technology that we're building, but also the risks.
00:01:32I think education is the perfect example and sort of embodiment of that because, of course,
00:01:37there's massive benefits, as we'll talk about in this conversation, but there's also a lot
00:01:41of concerns we have about the impacts of AI in education.
00:01:44And so when we think about the benefits, we think about, you know, I've had many conversations
00:01:47with you all about how AI can prevent teacher burnout, how it can transform and really democratize
00:01:53access to high quality learning and tutoring, how it can change, you know, how and what teachers
00:01:58teach.
00:01:59But then we also, of course, see the other side of it, which are all the risks and the
00:02:02concerns, you know, that teachers have about how AI could lead to more cheating and
00:02:07is leading to more cheating, of course, but also sort of the more existential risks of how
00:02:11do we make sure that these tools are actually enhancing and augmenting human thought as opposed
00:02:15to replacing it.
00:02:16And so my hope for this conversation is that we can sort of dig into all of these nuances,
00:02:20but also talk about the sort of practical type of work that we're doing at Anthropic to address
00:02:24these issues.
00:02:26So maybe to get started, I would love to hear, you know, I know all of you, I've known you
00:02:30for a while, but I don't necessarily know all of your stories on what got you interested
00:02:35working on education in the first place.
00:02:37So Maggie, we'd love to start with you and what brought you into this work?
00:02:40Well, my interest in education is twofold.
00:02:42I think professionally, education and communication have always been part of every job I've ever
00:02:46held, up to and including Anthropic.
00:02:48And I think personally, I, you know, have two lovely kids in my life and I am grappling with
00:02:54just as every other parent is in this era.
00:02:57What can I do to help nurture them into being kind of intelligent, thoughtful thinkers and
00:03:05critically engaged individuals as they grow up in the AI age?
00:03:09The former is a lot more professional interest and where I feel like I can make a difference.
00:03:14And the latter is like pressingly concerning to my immediate core as someone who is a guardian
00:03:20to young minds.
00:03:21How about you Efrem?
00:03:23Well, so I started my career in academia.
00:03:25I studied physics, maths, and I assumed I was going to be doing research for most of my life
00:03:31before switching to tech.
00:03:33And I've taught classes when I was at MIT, I've been on adjunct faculty, so education
00:03:37has been something I've always been interested in.
00:03:40When it comes to AI and education, I have two children of my own in college, so I worry every
00:03:45day about what they're learning, how they're learning, what will they do once they graduate.
00:03:49I'm also very passionate about, I'm sure we'll talk about later on, how do institutions deal
00:03:54with AI in their education?
00:03:57So just a lot of stuff here that is both personal, but also like looking forward to society.
00:04:01What is it, you know, what does AI mean for education? That's what I'm really interested in.
00:04:04I mean, I think like you and I both having children in our lives is like one of the most
00:04:09interestingly concerning things, where it makes it so real for you right now, right?
00:04:15And I think that like your kids are college age and so they're trying to figure out what
00:04:18they're doing in their lives.
00:04:20And then my kids are younger, but I see it on the horizon that very soon there are like
00:04:25strong decision points where you can decide how to start nurturing this kind of thinking.
00:04:30And I feel like if you don't catch it early, which is why we care about like education
00:04:33at all the different ages, that it can go pretty badly and compound, right?
00:04:38Yeah.
00:04:39I think that's hitting on a lot of like why I got into education in the first place, which
00:04:42is I have this very deep seated belief that education is one of the most important things
00:04:47we can do to make change in our society.
00:04:48I think most people can agree on that.
00:04:50And when I pivoted from the classroom into tech, it was because I wanted to work for organizations
00:04:55that could do that change at scale.
00:04:56I think there's some things with our education system that I would fix if I had a magic wand.
00:05:00And I'm hoping that AI can help accelerate some of that change for the better, but also
00:05:05like very much recognizing that we have a big responsibility to make sure that that change
00:05:08goes well today and then, you know, 10 years into the future.
00:05:11There was a great quote from a professor that I talked to at some point where they were saying
00:05:16that all the problems in academia have existed for a while as an institution.
00:05:22It's just that AI is the forcing function that makes everyone deal with it now instead of
00:05:26keep putting it off, I guess, kicking the can down the road, right?
00:05:29Yeah.
00:05:30Yeah.
00:05:31So I'm excited for us to deal with it.
00:05:32I'm just realizing now we have the parents on this side, and together we bring a good
00:05:36set of perspectives here. And yeah, I think on my end also my parents are educators and
00:05:42so it was always looking up to them and wanting to do what they did.
00:05:45But I think it's helpful to also ground this conversation in some of the research that we've
00:05:49all been working on here.
00:05:51And so it was I think late last year that our societal impacts team at Anthropic had done
00:05:57research into all the ways that, you know, users are using Claude and found that some
00:06:02of the top uses were in education.
00:06:05I think we had seen this sort of in all chatbots to some extent, but I think it was also a wake-up
00:06:09call for us because, you know, what's interesting about it is that these LLMs, as we all know,
00:06:14were built not with education in mind, very much on answering questions.
00:06:18They're fine-tuned that way.
00:06:19They're meant for, you know, these productivity tasks.
00:06:22And then it's kind of this interesting emergent, you know, phenomenon that they're very helpful
00:06:26for education, potentially sometimes destructive as well for people's learning.
00:06:31And so we started to, you know, dig deeper into that.
00:06:33But I think what stood out in the research is one stat that always comes up is 47% of
00:06:40the student interactions on Claude were very direct transactional types of interactions
00:06:45with little engagement.
00:06:46I think I know when Maggie and I were looking at data at first, it was sort of a wake-up
00:06:50call because, again, we have all these incredible ways that you can use it as a Socratic tutor.
00:06:56But then to see that in some cases, you know, people are using it to just do their homework.
00:07:00And I know as a teacher, like for me, I sort of think about the different cognitive skills
00:07:05that I want, you know, my students to learn and, you know, at the base level, it may be
00:07:10things like remembering a fact and understanding some knowledge, but then you want to eventually
00:07:14get them to the level of synthesis and, you know, creation.
00:07:18Of course, you know, we call this Bloom's taxonomy.
00:07:20But what we saw in the data that I think was fascinating was we started to study Claude's
00:07:25interactions in these conversations and saw how well Claude is performing on these cognitive
00:07:31tasks and found that Claude was performing at these top levels of creating and analyzing
00:07:36when, again, as a teacher, that's what you want your students to do.
00:07:40Yeah, I think the students are kind of flipping the script on this in a way that's concerning
00:07:44to us as educators.
00:07:46And I don't know if that's necessarily, I think the first blush reaction is that that's a bad
00:07:50thing, but I think part of what I want to challenge us to think about and the world to think about
00:07:56is, is there like a novel taxonomy where that's the baseline and you can build on top of that
00:08:02to something new that hasn't been possible before AI.
00:08:06We've also explored how the, you know, educators are using it and they're experimenting with
00:08:11it for, you know, create lesson plans, grading.
00:08:14I think it was one Northeastern professor who told us that they're never going to assign a
00:08:19traditional essay again because they just had too many students submitting, you know, these
00:08:25AI assignments.
00:08:26Whether they use Claude or not, we don't know, but I think that sort of raised a lot of questions
00:08:31for us.
00:08:32I feel like you really hit on two of the things that we talk about a lot with educators, which
00:08:36is like AI is both changing how students learn and also what they need to learn, right?
00:08:42Like I don't actually know if it's important that students have the same memorization that
00:08:47they would have needed to have like 10 years ago because they have AI tools readily available
00:08:51or in theory they should have AI tools readily available.
00:08:54And then, you know, when you get into like higher levels of academia, there are potentially
00:08:57skills that we're teaching today that won't be as important in the future.
00:09:01And so it's, it's a ton for teachers to grapple with.
00:09:03I would love to hear from all of you.
00:09:05What is one thing you're really excited about with how AI can transform teaching and learning?
00:09:10I think one thing that really stands out to me is interactive learning experiences.
00:09:15I have this very vivid memory of when I was in the classroom, my students did a virus simulator
00:09:20game that was fully programmed.
00:09:22They were the virus and they worked their way into the cell and they replicated, and the engagement
00:09:27that I saw from my classroom that day was unlike anything else.
00:09:29And I think most teachers have seen something like this, but AI really lets you do this at
00:09:33scale with any subject, right?
00:09:34Like imagine you're talking to a historical figure and teachers can put a lot of guardrails
00:09:39around this with the right tools, but I just am very excited to see that space developing
00:09:44over time.
00:09:45I think the interactivity is also really interesting to me.
00:09:47There's so much assistance you can get from AI, interactivity that is really hard, resourcing-wise,
00:09:55to get otherwise, especially in like low-resource regions where, you know, many students don't
00:10:01have access to like a personal career coach that can walk you through how to interview
00:10:05properly, right, at an organization.
00:10:08And with the power of an AI like Claude, you can upload the job listing, your
00:10:13resume and so on and just ask Claude to help you role-play through these things.
00:10:17I think there's a lot of really engaging, interesting role-play experiences, whether with
00:10:21a historical figure or with some sort of like coaching situation, that can really
00:10:27help you through a lot of experiences where an external perspective would be an immense help.
00:10:32It's just really hard to get that other human being to find time to sit down with you, especially
00:10:36in regions with low resourcing.
00:10:39Related to that, I've been very excited about how teachers are transforming their assessments
00:10:42with it.
00:10:43I was talking to a teacher a few weeks ago who at one point, I think during the pandemic
00:10:47had taken time over, you know, Zoom to basically do oral interviews with all the students and
00:10:53really get to assess them in a more holistic way.
00:10:57But of course, that didn't scale very well.
00:10:59And so they stopped doing it.
00:11:01But then with these AI tools coming out, they were able to now sort of use the same rubric and
00:11:05start to have, you know, all the students doing on a regular basis these sort of assessments
00:11:11back and forth with a chat bot.
00:11:13And then the professor, the teacher is able to review them and assess them based on that
00:11:18process of going back and forth with an AI.
00:11:21I think assessment is a very interesting use of AI.
00:11:24I could envision a future where it isn't a specific moment in time where you assess,
00:11:29but there is a continuum of interaction, with AI developing a much deeper understanding
00:11:34of whether you really understand the concept behind this algebra problem or not.
00:11:38One thing that I'm really excited about is that AI could provide this personalized learning.
00:11:42There was research done on one-on-one tutoring.
00:11:46And what they found is that, on average, the average student that had one-on-one tutoring
00:11:50performed better than 98% of students that didn't have one-on-one tutoring and just had
00:11:55a classroom setup.
00:11:57And that was with human tutoring.
00:11:58That's with humans.
00:11:59It's hard to scale.
00:12:00Exactly.
00:12:01Hard to scale.
00:12:02Assume maybe like an hour of one-on-one tutoring a day.
00:12:04With AI, you get continuous one-on-one tutoring, and that is available to everybody around the
00:12:10world.
00:12:11So I think that has a great potential to transform the world and how people learn.
00:12:15Yeah, I agree.
00:12:16I think there's of course lots of challenges as people have looked into the studies and
00:12:20how you replicate them, and whether all of it holds up.
00:12:22But I think it's a very useful sort of north star of what could be possible if you can have
00:12:27a very personalized but also personal type of tutoring experience.
00:12:31Absolutely.
00:12:32I think today we have classes maybe segregated by students that are in the top or, you know,
00:12:37you take like an AP class if you're, you know, in that student group or not.
00:12:41But with AI, every student could have their own journey.
00:12:44Those that are able to advance could advance very quickly, and those that need help can
00:12:46get that personalized help.
00:12:48You know, this reminds me of a really interesting use case that I talked to a teacher about where,
00:12:52you know, you kind of always want to meet students where they're most interested, right?
00:12:56Like that's like a very macro-story type of approach where it's like, what is your favorite
00:12:59topic, and then we'll kind of match all the subjects to that.
00:13:02That's really hard to scale, right?
00:13:04But there was a teacher I talked to that was just like, I asked my students what their favorite
00:13:07things are.
00:13:08They tell me a little story.
00:13:09And then now every single handout has like the same math concepts, maybe even the
00:13:13same problems, but it's like each handout is made for each student.
00:13:17And it's like exactly according to their interests.
00:13:20It's got a story that's engaging to them, problems they actually care about.
00:13:23And like she's noticed an uptick in the engagement for sure, because suddenly these students have
00:13:28a through line through every subject in the classroom.
00:13:31It's building on itself in a way that's super personalized to their interests, which like
00:13:35if that was in every classroom, imagine how much students would just lean into.
00:13:38It's the main thing.
00:13:39Exactly.
00:13:40Yeah.
00:13:41So how are you thinking about that question of what's worth, you know, learning in the
00:13:44age of AI?
00:13:45As somebody who is in product, in product development, what I see is this absence of product layer
00:13:51that would help both students and teachers use AI very effectively.
00:13:55For example, my daughter's class, she's taking Python.
00:13:58So both of my children are studying computer science, so this is very, very relevant. For their exams,
00:14:03for writing Python, they want them to write it on a piece of paper because they're afraid
00:14:07of cheating.
00:14:08Well, the reason it is so challenging now is there aren't products for the students could
00:14:12use to learn.
00:14:14There's also not products for the teachers to use, it was a sign in grade homeworks.
00:14:18All of this are very light lifts, like from what we can do as a product offering, but in
00:14:23the absence of an intentional product that is built on top of LLMs exposes a lot of uncertainty,
00:14:30fear, and abuse of the technology that we're building.
00:14:33So that's, I guess that's my take on this, is that, you know, with a little bit of support
00:14:39in product thinking, so much of the uncertainty and cheating and so on could be mitigated.
00:14:47And one of the things I struggle with is, you know, we can sort of backtrack from how jobs
00:14:52are changing and start to think about how university education should be changing.
00:14:56But then when we think about, you know, kids the age of your kids, you know, in K-12,
00:15:01it's an even harder question about what will be the skills, the durable skills that they
00:15:05need years from now.
00:15:08So I don't have the answer, but I always look to you, I look to you, Maggie, too.
00:15:12Oh, man, it's a challenging one.
00:15:14I think that what resonates a lot with the teachers that I talk to is that a lot of the
00:15:18skills you teach young minds about how to critically think about the world around them in the world
00:15:25of humans can be directly applied to AI, especially in the world of just critical thinking about
00:15:30the facts that you're presented with.
00:15:33There's a stage of development where you go from trusting every single thing everyone says
00:15:36to you to starting to think about, well, what are the other things that you need to know
00:15:40in order to believe that this is true?
00:15:42With my kids, it's like a two-part framework, where part one is just the importance of education,
00:15:47which I think is more important than ever: you can't tell if an AI is bad at math if you're
00:15:51bad at math, or if you don't actually know what the right answer is.
00:15:54We're not at the stage where AI is always reliable, like a calculator or something.
00:15:57And so just understanding that and emphasizing that learning, reading, writing, science, math,
00:16:04and so on is still very important.
00:16:06And the latter part is developing them into critical consumers of information where it's
00:16:12not just about, this is what a fact that's given to me is, but why is that the case?
00:16:18How do I trust that that's true?
00:16:20What other areas do I need to check in order to ensure that I can kind of corroborate what
00:16:24I'm learning here?
00:16:25And that kind of critical thinking skill you can develop from a pretty young age, regardless
00:16:29of if it's an AI giving you the information or another human being giving you the information.
00:16:33That kind of critical thinking, I think, is one of the most important things to get at
00:16:37an early age.
00:16:38That's skepticism and curiosity in combination.
00:16:39Right.
00:16:40And I want to add on to that because I feel like a lot of the teachers and parents that
00:16:43I talk to feel this really intense pressure to have the answers and to know what to teach
00:16:48their kids and to know how to conduct the lessons in their classroom.
00:16:51And I think kids are so much smarter than we give them credit for.
00:16:54And so there's something really profound about just sitting with whether it's your students
00:16:58or your actual children and just learning with them, asking AI something, and then evaluating
00:17:03what comes out of that together and having kids reflect and building their own frameworks
00:17:07for interacting with AI that I think is really, really powerful.
00:17:10And we all here don't have the answers, and arguably no one does.
00:17:13Obviously, we're working really hard to figure out what they are, but I think just encouraging
00:17:17that reflection at any age, wherever it's developmentally appropriate, is one of the best things people
00:17:22can be doing right now.
00:17:24You know, I recommend sitting down with your kids and going through AI together, right?
00:17:28Like ask a question and then say, well, this was really confidently said, but is that enough?
00:17:33When someone says something confidently, is that enough for you to believe that?
00:17:37And hopefully the answer is no, right?
00:17:39What else can you check?
00:17:40Can you look somewhere else?
00:17:42What information do you need to think about this and genuinely internalize if that's correct
00:17:47or not?
00:17:48That exercise is, I think, super fruitful.
00:17:49And I think the converse side is to kind of demonstrate what it's like to not know something.
00:17:55I think that a lot of times, like showing uncertainty and modeling for your kids, when you don't
00:18:02know something, what is your own process of finding that out, right?
00:18:05What is your process for learning?
00:18:07I think that like what I want to impart upon my kids is like finding the answer is just
00:18:12the start of your learning journey.
00:18:15And I think for many institutions, schools, and so on, getting to the answer is really
00:18:19what we're testing right now.
00:18:21But if we make that the beginning of someone's journey, especially learning with AI, then
00:18:25that I think opens up a whole series of doorways.
00:18:27And so yeah, at home, I try to show my process for discovering something and that adults don't
00:18:34always have the answers, kids are super smart and they're able to find answers on their own
00:18:38in ways that if we just talk and ask the right questions, then they'll be able to discern
00:18:43for themselves what is true and what isn't and not just trust things at face value.
00:18:46Yeah.
00:18:47And I love the way you framed that, of like modeling how to work through that problem.
00:18:51Modeling uncertainty is just such an important thing. We're all uncertain,
00:18:55so let's use that to our advantage.
00:18:56And we don't do that nearly enough.
00:18:58I think like we want to project this kind of confidence.
00:19:00And I think sometimes it can be detrimental to development of a child to kind of just
00:19:05tell them to trust everything everyone says, or the adults around them always know what
00:19:09they're talking about.
00:19:10Because I think that that gives them, it's a crutch, right?
00:19:14To not actually have to think through like truth for yourself and define your own truth.
00:19:19I think one thing that sort of remains unchanged is how humans learn, right?
00:19:23We learn basic things first, we learn addition, subtraction, and it kind of keeps building up.
00:19:27So that will remain true whether we have today's generation of AI or next-generation AI.
00:19:32So, from where I see it, I don't know what the right field to study is.
00:19:37But either way, we still have to go through this process of learning.
00:19:40And I think the great promise now is that you can actually use AI to advance your learning
00:19:44to gain greater understanding.
00:19:46Sort of understanding maybe in two different ways.
00:19:48In one way, sort of personal is I studied physics because I was very curious as a child.
00:19:53But if you're a curious person and want to learn about the world, my God, what an opportunity
00:19:58now.
00:19:59Because AI could just teach you about anything you want to know.
00:20:00But if you're thinking about sort of career, like what happens next, how do I make a living?
00:20:05No matter what, you'd have to be able to use the AI technology to make yourself, like,
00:20:09you plus AI, a more capable employee.
00:20:13I agree.
00:20:14Although I do think some of the fundamentals are changing in terms of the order in which
00:20:18we learn things.
00:20:19One example I go back to as you talked about programming and computer science.
00:20:24When I learned how to program, probably similar with you, I spent 90% of my CS education learning
00:20:29how to write code and write algorithms.
00:20:32And then maybe 10% of my time learning how to read other people's code and review it.
00:20:36And then now, of course, at Anthropic, when I program with all the coding agents and Claude
00:20:40Code and all of that, I spend maybe 10% of my time writing the code, but 90% of my time
00:20:47reading the code.
00:20:48And so it has made me wonder, you know, we usually learn how to read before we write just
00:20:54as kids.
00:20:55Right.
00:20:56But then with coding, we often spend a lot more of the time writing and then reading.
00:20:59And so it is starting to make me wonder, like, do we have to revisit some of those fundamentals
00:21:02and like maybe a core part of an intro CS student's education should be to think about
00:21:08reading and being able to discern good code from bad code and all of these things.
00:21:12So I do want to bring us back to: what is Anthropic doing here?
00:21:16I think it's important and we all know, of course, that we have a responsibility here.
00:21:21We are building this technology that's having this impact, you know, on the education system,
00:21:25even though we didn't intend it that way initially.
00:21:28And so we have a responsibility as a company, as a public benefit corporation, but particularly
00:21:32as individuals working in this company, former educators.
00:21:36And so I think it'd be helpful to sort of talk through what are the things we're doing?
00:21:40What are we wrestling with?
00:21:42I don't know if, you know, Zoe or Maggie, you want to talk about some of the work we've been
00:21:45doing with AI fluency?
00:21:46Yeah.
00:21:47Yeah.
00:21:48Happy to start.
00:21:49So I work on education content.
00:21:50So that's one of the main ways that I can make a difference in this space.
00:21:53So one of the things I'm really excited about is our AI fluency courses.
00:21:56We partnered with two professors, Joseph Feller and Rick Dakan, who built this really great
00:22:01framework about how to think about using AI.
00:22:03And what's cool about this is we're taking a step back from the products that are available
00:22:06today and the prompting and kind of like all these hacks that you see out there.
00:22:11There's a lot of them.
00:22:12And there is a lot of them.
00:22:13Right.
00:22:14It's like it's pretty overwhelming.
00:22:15Right.
00:22:16And they stick it out dated so fast.
00:22:17And so the idea here is we want to give people a tool that they can use to understand the
00:22:21interactions that they're having with AI and work towards interactions that are efficient,
00:22:26effective, ethical and safe.
00:22:27That's the AI fluency definition.
00:22:29So we have this core course that I think is pretty great.
00:22:32And then we've also created spinoff courses for educators and for students, as well as
00:22:36a longer course for educators who are interested in teaching AI fluency.
00:22:40And so the idea is that anyone who goes through one of these courses is kind of better equipped
00:22:45to assess their own AI interactions.
00:22:48I talked about like learning with your students earlier and the power of reflecting on your
00:22:51AI interactions.
00:22:53And at its core, that's really what this course is about.
00:22:55It's just reminding everyone, teachers, students, parents, that they have autonomy in their AI
00:23:00interactions.
00:23:01So that's one thing I'm excited about.
00:23:03Yeah.
00:23:04I think the interesting thing about our AI fluency work is the fact that we're taking
00:23:07a step back to the fundamentals.
00:23:08When we started this AI fluency work way back when, I don't know if you remember, Drew,
00:23:12the question that we were trying to answer was, in all of these prompt engineering tips
00:23:16and so on, they are developed by other humans, right?
00:23:19And it's not as if like we at Anthropic have greater superpowers than everyone else externally.
00:23:23We just have a different way of thinking about approaching models.
00:23:26And it's like, how do you teach that mindset to somebody?
00:23:29Because to Zoe's point, it's like in all of us.
00:23:31That sounds so cheesy, but like we have that capability to be critical thinkers that engage
00:23:36with this.
00:23:37And I think sometimes the fear of needing to get it right kind of supersedes our ability
00:23:41to just experiment.
00:23:43And what I love about AI fluency is we're opening the door to experimentation and saying,
00:23:46you can try these things.
00:23:47They may not work for you.
00:23:49And it's just as important to learn when they don't work for you and when you shouldn't
00:23:53use AI as it is to learn when you can use it.
00:23:56And there's something that we say in the education team every now and then, which I think resonates
00:24:00a lot, which is like we would much rather teach a million people to not use AI than like watch
00:24:07a billion people become dependent on the technology, right?
00:24:10And in practice, that can be quite hard, but AI fluency, I think is a very solid start.
00:24:15I still remember when I first heard you say that I was so happy and I knew I'd come to
00:24:19the right company because here I was in an AI lab and Maggie was saying, yeah, I don't
00:24:23think we should use AI in this case, or let's teach people how not to use AI.
00:24:28It's like giving them the tools to make the decision on their own.
00:24:31Back to critical thinking every time.
00:24:33But I think so, of course, part of it is an education and a training and awareness part,
00:24:38but we are also building products and models that are used out there in the wild.
00:24:41And so I think the work that Efrem, you and your team have been doing on learning mode
00:24:45is a really important part of it.
00:24:46So we'd love for you to share more about how did they even come about?
00:24:49So learning mode is a set of features that positions cloud as a tutor to students.
00:24:55Students can come in and, for example, upload their assignments, and rather than answer
00:24:59their questions explicitly, it will help the students through the material that is covered
00:25:04in their classroom.
00:25:05It will guide them through how to answer the question.
00:25:07It will tutor them.
00:25:08It will also help them prepare for exams, for example, by showing them flashcards based
00:25:13on the content that they've uploaded.
00:25:14It's actually very much a grassroots effort.
00:25:16So a lot of people at the company are really passionate about education and wanted
00:25:19to add education tools within the main product line.
00:25:22So with learning mode, what we did is add small features here and there
00:25:27that tailor the Claude app to be really good at helping students with learning.
00:25:34Along the way, we've also added a few more features like expanding how much content you
00:25:38can add into projects so that more and more content can go in there, connecting to classroom
00:25:43management systems so that content could flow in and out very easily.
00:25:46So that's just the starting point of what I think this could be in the future.
00:25:52I think what's interesting is some of that early research that led into learning mode
00:25:57is that we were interviewing university students, and we sort of knew that the educators wanted
00:26:02some form of learning mode; they kept saying, where's your learning mode?
00:26:05And so I was like, okay, now we have to build it.
00:26:08But it was really the students who I think really drove the point home for us because
00:26:12they, of course, use a different word, which is brain rot, but we heard them talking about
00:26:15brain rot and they realized that in the short term, they can use AI chatbots to help them just
00:26:23finish an assignment.
00:26:24But when it comes to actually studying for their midterm and understanding and internalizing
00:26:28the concepts, they wanted a version of Claude where they didn't have to prompt it in all
00:26:33these different ways.
00:26:34Exactly.
00:26:35They didn't want to just give it an assignment
00:26:36and have it just pop out the answer.
00:26:37Right.
00:26:38And instead of just giving the answer, it guides you through it.
00:26:40If they're studying for a final, it can show them flashcards that help them memorize
00:26:44and learn the content.
00:26:46So that is what learning mode is: just completely changing the interface in Claude
00:26:49so that it focuses on learning.
00:26:51And how long did this first version take to build?
00:26:54So the initial version actually took a very short period of time.
00:26:56There's a number of people that are extremely passionate about adding this capability.
00:27:00It took us about two weeks from start to finish.
00:27:04And it was amazing.
00:27:05Incredible.
00:27:06Yeah.
00:27:07The other aspect of this, of course, is, you know, we can work on these training programs,
00:27:10we can improve our products and our model, but then of course it's how do we partner with
00:27:14the outside world?
00:27:15We're just, we're a tech company.
00:27:16We're a small part of this much broader ecosystem.
00:27:19And so you've been leading a lot of the work we've done partnering with institutions like
00:27:23the teachers' union, the AFT.
00:27:24We'd love to hear more about what goes into those partnerships and why are we so focused
00:27:28on them?
00:27:29Yeah.
00:27:30I mean, you and me both.
00:27:31But yeah, I think, again, something we're excited about is just like we have our classroom experience.
00:27:35Mine is relatively outdated.
00:27:36I was in the classroom before COVID.
00:27:38Like I know it's a very different world out there.
00:27:40Pre-COVID, Pre-AI.
00:27:41Pre-COVID, Pre-AI.
00:27:42It basically doesn't even matter anymore.
00:27:46But we get to partner with these organizations to learn from teachers who are actually in
00:27:50the classroom and professors who are in universities to understand the real problems that they're
00:27:53having in their schools and the real benefits, like things that are going really well.
00:27:57And lean into both of those, whether it's with education materials to train teachers
00:28:01up or, you know, product solutions that give them more autonomy and tools.
00:28:05And so, yeah.
00:28:06And the heart of this is like, this is a collective issue, right?
00:28:12Like a humanity-wide collective issue.
00:28:14And we are far from knowing every single thing that we need to help resolve this.
00:28:19So I think like the through line for all of our work is to bring more people into the conversation.
00:28:24Like if enough people take AI fluency courses, our hope is that then they bring that knowledge
00:28:28to their institutions and start these conversations.
00:28:30And to what Zoe was saying before, students are really smart and they're also really engaged
00:28:35in the desire to not have brain rot.
00:28:37Like some of the best feedback that we get comes from our student users, who I think sometimes
00:28:41we don't give enough credit because we assume, well, they're definitely going to want to cheat
00:28:44with this.
00:28:45It's an institutional problem, not necessarily a human motivation problem, I think.
00:28:51I think that our best feedback, like from all of our product users, indicates that they don't
00:28:57want this reliance on these models.
00:28:59They want to feel like their own human capabilities are augmented and improved by this collaboration
00:29:05with AI.
00:29:06So I'm very proud of us in general in our products for not optimizing for kind of the standard
00:29:11engagement metrics where we're not trying to optimize for retention or how much time you
00:29:16spend on the product or dependency on the product.
00:29:19And we make active product decisions now and into the future that sometimes actually encourage
00:29:24that greater augmented thinking or encourage, again, times when you don't use AI or as the
00:29:29kids call it, touch grass.
00:29:33I'm excited for us to keep going down that pathway.
00:29:34Yeah.
00:29:35And I actually, that was one of the most surprising things to me joining Anthropic was that it's
00:29:39not a growth-optimized company, right?
00:29:42Like most SaaS companies want to optimize for users, retention, all these things.
00:29:47Anthropic has a much broader perspective on what success looks like in our products, which
00:29:50I think is really interesting.
00:29:52In our product development, that's not just true for the education initiatives that we
00:29:56had, but for everything else we build.
00:29:58It isn't about keeping users engaged in the product.
00:30:00This really is about having AI be beneficially employed and impact society.
00:30:06The thing we touched on is that every decision to use AI somewhere or not is a deliberate
00:30:11choice.
00:30:12I don't think that we are on an inevitable path towards AI being in every single place.
00:30:17And hopefully the work that we do as a company and the things we make, the choices we make
00:30:22to build our product in certain ways can lead by example and also invite people in to start
00:30:28realizing that everything is a deliberate choice and it is just as good and sometimes better
00:30:33to just choose not to sometimes, right?
00:30:36I would hate to see a future where teachers outsource to AI the parts that I think really
00:30:42make good education, which is the connection pieces.
00:30:44When you really understand your students and can spend time with them and AI can be used
00:30:49in so many ways that allow teachers to have more time to do that kind of work.
00:30:53And I'm excited for us to kind of over time talk with institutions and discuss with them
00:30:58ways where we can amplify that knowledge they already have, right?
00:31:02And the experts we partner with generally have a decently clear opinion on when AI is actively
00:31:08harming the educational outcomes.
00:31:09And it's just our job to listen and to try to implement that either in our products or
00:31:14in our educational program.
00:31:15So we've talked a lot about what our personal views are on AI education, what we're doing
00:31:22as a company, but we definitely haven't solved it.
00:31:26So what are the things we're still uncertain about?
00:31:28I mean, I'd be curious to get your takes on this, you know, what are the things we're still
00:31:31trying to figure out?
00:31:32I have a couple of things, both pretty different, but one thing we touched on earlier is AI is
00:31:37changing like what it is that you need to teach.
00:31:39You brought up coding.
00:31:40We are pretty sure that coding curriculums will look very different in five years.
00:31:44I'm interested to see how things start to shift and if there's any like frameworks or really
00:31:48anything that we can develop to help academics along in these areas to understand what kinds
00:31:54of skills may be more augmented in the future and which kinds of skills are going to need
00:31:58additional human support like reviews or management.
00:32:02I think we're starting to understand that in areas like computer science, but it's very,
00:32:05very early and we know this is going to affect a lot more fields.
00:32:08So especially in higher education, that's something I'm interested in.
00:32:11I hear a lot of concerns in K-12 about the different tools and trying to understand what
00:32:16happens to the data when you put it in those tools.
00:32:19And I think right now there's this like massive proliferation of AI tools in classrooms and
00:32:23teachers and administrators are really overwhelmed for very good reasons.
00:32:27Like there's a lot of concepts that are really new to everyone.
00:32:30There are elements of data privacy that are new and hard to understand.
00:32:33And so I'm really interested to see how that landscape evolves, whether we need to really
00:32:38ramp up our education around data privacy so people can better assess the landscape or whether
00:32:43we start to like, you know, see clear winners in this space, I'll just be really interested
00:32:46to see what happens there.
00:32:47I think in addition to that, the technology is really changing rapidly.
00:32:51So one of the concerns, and I'm not sure how it will play out over time, is how will institutions
00:32:56adapt?
00:32:57They're generally slow moving and intentionally built that way.
00:33:00And the technology's pace of change has been very rapid.
00:33:03It's harder to predict like what would happen six months from now or a year from now.
00:33:07So I think that pace of change, relative to how institutions generally adapt to new technology, is one area
00:33:11that I'm uncertain about.
00:33:13I always think that like just everywhere, every institution feels an immense amount of pressure
00:33:17to just do something with AI instead of doing nothing.
00:33:21And you know, I have no idea how to kind of balance or help organizations balance the fact
00:33:26that pressure is really real, but also when it comes to education especially, moving fast
00:33:31and breaking things is not an option, right?
00:33:33And that's just so challenging for both the individual teacher, but also the entire institution
00:33:37at large.
00:33:38Meg, I kind of call this the unbundling of education.
00:33:42One is just the knowledge itself, which AI is really good at providing as personalized
00:33:48education.
00:33:49But institutions provide more than just imparting knowledge into students.
00:33:52The other is, really, like I have two children in college, it's not just learning they're
00:33:56getting there, but also that's where they're growing, that's where they're maturing, learning
00:34:00responsibility and so forth.
00:34:02So what AI solves really well is imparting knowledge and learning.
00:34:08I think what we have to do as a society moving forward is figure out how we leverage AI in
00:34:13the knowledge transfer part, but also retain these institutions for all of the other
00:34:17great roles that they play in society.
00:34:18Right, to separate out some of the pieces, such that the success metric of a good
00:34:23educator is not to do every single part of this thing.
00:34:27But I think what you're saying is, do more of one thing and then let AI handle things
00:34:32that are maybe knowledge-acquisition oriented, but not, I don't know, a relationship
00:34:39with a student, right?
00:34:40Correct, correct.
00:34:41I mean, one piece of feedback, for example, that we received when we were visiting universities
00:34:45was that while AI assignments are very compelling, that's something they would like to do.
00:34:50But an AI assignment means students working with AI, so large assignments that might take,
00:34:54say, six months could end up taking like two weeks.
00:34:57How do you grade them?
00:34:58Right.
00:34:59Right.
00:35:00So a lot of AI involvement means there's a lot of learning that could happen much, much more
00:35:03quickly as you leverage AI to do the learning aspect of it.
00:35:08What about everything else that those teachers and institutions are providing the students?
00:35:12I think that is where that's unbundling and just leveraging all the right pieces from
00:35:17both the technology, but what institutions do.
00:35:20Yeah.
00:35:21And I mean, I think the best case scenario of this is reducing burnout at scale, which
00:35:25is like the number one issue facing most of the teachers.
00:35:28If you unbundle, you can maybe do it.
00:35:30Right.
00:35:31Where every educator is really, really talented at some things and those things bring them
00:35:34a lot of energy and they're exceptional at it.
00:35:36And what if AI could support them in the things that don't bring them energy?
00:35:39And I think that just creates a much more well-rounded system for their personal lives and then also
00:35:43for the students that they're supporting.
00:35:45This conversation is also reminding me of like a part of the AI fluency curriculum we have
00:35:49for educators that I found super compelling, which is to have AI be so integrated like into
00:35:56the assignments, the experience and so on, and to instead start grading AI use and not
00:36:00grade the outcomes as much, right?
00:36:03Going back to like, how do you think about these long-term projects or how things differ
00:36:07in this AI world, having that kind of engagement is very different and AI forward in a way that
00:36:12I'm excited for more institutions to adopt.
00:36:14I think you're right on, like, you know, in our conversations, one of the things
00:36:17that's come up is when you grade, you're not necessarily just grading the final result,
00:36:21but you're grading how the students arrived at that result.
00:36:23How did they use the technology?
00:36:24What is the back and forth?
00:36:25That also becomes part of what learning is.
00:36:30One of our chief marketing folks here one time sat down with me and just said, you know, I
00:36:35think the true power of AI is the process.
00:36:38I think that that's really the core of what we're getting at, right?
00:36:41Yeah.
00:36:42I'm also curious, from like a model training perspective, what we can do to reconcile this.
00:36:48I had a really great conversation with someone who studied
00:36:51the philosophy of epistemics, right?
00:36:56And they made a great point about how AI more easily than any other kind of intelligence
00:37:01that humans have encountered can be really confident, right, or can say things that sound
00:37:07so realistic.
00:37:08And usually, in a human being, it takes a lot of charisma and practice to become someone
00:37:12who can speak that confidently without saying true things.
00:37:15But with AI, you're kind of encountering it all the time and our human ability to discern
00:37:21what is true and what isn't is based on how we discern in other humans, which may not be
00:37:24successful when you're applying it to AI, right?
00:37:27That's a whole different thing to unravel and figure out: do you match the AI
00:37:32personality more to that area, or do you start teaching people a new way of discerning
00:37:38truth?
00:37:39And I mean, my hot take is I think that like the latter one is a little more powerful because
00:37:43it's also a good inoculation against all forms of like persuasive writing, persuasive thinking,
00:37:50regardless of whether or not it's an AI or a human being.
00:37:53But that critical thinking, going back to the fact that AI is just forcing us to reconcile
00:37:57things that already exist, I think that teaching that critical thinking is really hard.
00:38:01Yeah.
00:38:02And this goes back to like child psychology, right?
00:38:03Like we have this whole generation of digital natives now who can very clearly identify a
00:38:06spam text when that's like really difficult for some folks.
00:38:09And like, what is the AI native generation?
00:38:12What does that look like?
00:38:13And what impacts does it have on kids development?
00:38:15Like there's just things we don't know yet.
00:38:16Well to close this out, because I think we could go on forever.
00:38:22I would love to hear from all of you as we think about five years out.
00:38:27What's a success look like for teaching and learning?
00:40:29Five years is a crazy timeframe for predicting in the AI world.
00:38:33I guess I'll give you my hope, right?
00:38:35I don't know if I know what success looks like, but I do know what I hope for, which is that
00:38:39in educational institutions, teachers have so much more time to engage individually
00:38:45in the relationships and the fostering portion of it.
00:38:47You know, going back to Efrem's point, like maybe it's not the knowledge acquisition part
00:38:51of things that teachers participate in as much versus like synthesizing that knowledge into
00:38:57the greater ecosystem of your life and the world and understanding how any given individual
00:39:05can best be helped to learn.
00:39:06Because we're still all unique individuals in this future and celebrating and emphasizing
00:39:11that uniqueness is something that I hope we can get to with education.
00:39:14Right.
00:39:15I think five years is a very long time, but success to me by then looks
00:39:21like every person on the planet having a personalized tutor ready for them at any time.
00:39:27And then also, if we've made that transition successfully, our institutions will survive
00:39:31and keep playing the vital role they already play in our society.
00:39:36For me, I think it's going back to that critical thinking piece.
00:39:39I want every person, really every student, every teacher to have a shared vocabulary and
00:39:45cultural understanding around what it means to use AI and learning.
00:39:48And I think just a lot more discernment and reflection on and just being intentional about
00:39:53using AI.
00:39:54I know, I feel like I go back to this like a broken record, but wouldn't it be great if
00:39:58like every single student could articulate when they want to use AI, when they don't want
00:40:02to and why like that kind of knowledge about your own habits and how you think and how you
00:40:07learn best, that personalization of knowledge is so exciting.
00:40:11And I'm like, that's a shorthand heuristic, I guess, for what success could look like.
00:40:15Or maybe, on the other hand, they don't have to make that decision because the technology,
00:40:19the product itself,
00:40:20is well adapted.
00:40:21We need both of those.
00:40:22Yeah.
00:40:23Together.
00:40:24Definitely.
00:40:25The product and the experience and the education all have to go hand in hand.
00:40:27I think the thing I keep going back to, which to me brings the optimism because some days
00:40:31in this work there's a lot of pessimism.
00:40:34Jobs are changing and we have no idea what might happen to, you know, our jobs.
00:40:37And there's a lot of personal responsibility that we feel over, you know, the things that happen.
00:40:42But at the same time, I do see a world where, you know, intelligence is becoming abundant
00:40:48and it gets commoditized.
00:40:50I think that will, you know, no longer be our defining trait as humans.
00:40:55And that can be scary at some point.
00:40:57But I also think that's liberating because for the last few hundred years, I almost feel
00:41:01like we've lost some of our humanity as we, you know, you have the Industrial Revolution
00:41:05and we're able to do all these things, but we're also like going into offices and doing
00:41:09tasks and defining ourselves by the work that we do.
00:41:13And that may not be the thing that we can do best, you know, five years from now.
00:41:17But there are so many things that a teacher does, that a doctor does, that are truly human
00:41:22and that are not about intelligence, you know.
00:41:24And so I almost am excited for the things to sort of strip away.
00:41:29And so for us and I think our education systems at the core of it to really focus on what makes
00:41:33us human.
00:41:34There's an Oxford professor that said this great quote I think about all the time, which
00:41:40is he says, "I think the age of AI will be the age of asking good questions."
00:41:44And like that's something that doesn't necessarily come from knowing a lot, right?
00:41:48It's just about being curious and then also being a little discerning and skeptical about
00:41:51things you get in return, which then yield better questions.
00:41:54And I think with AI in our pockets of personal tutors, like the world of the question space
00:41:59has opened up immensely and way beyond anything else we've ever had in human history.
00:42:04And we just kind of have to steward people towards the mindset that allows them to ask
00:42:08the good questions.
00:42:09There's never been a better time to have a problem.
00:42:10There's never been a better time to have a problem, yep.
00:42:12I think that's a perfect way to end.
00:42:14Well, thank you all.
00:42:15Thanks for taking the time.