Wednesday Dec 17, 2025
EP 38 - Canvas, Credentials, and the Agentic AI Classroom: Ryan Lufkin, VP of Global Academic Strategy, Instructure
In EP 38, John and Jason talk with Ryan Lufkin of Instructure about the evolution of online learning, the impact of Agentic AI on education, and how Canvas is shaping the future of digital classrooms.
See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast* (Also feel free to connect with John and Jason on LinkedIn too)
Guest Bio:
Ryan Lufkin is the Vice President of Global Academic Strategy at Instructure, where he works to enhance the academic experience for educators and learners, worldwide. With over two decades in the edtech world, Ryan has experience with every major technology platform that institutions use to deliver education, from the LMS to the SIS, and all the systems in between. A well-known thought leader in the edtech industry, Ryan is a podcast co-host, frequent media spokesperson, and speaker at industry conferences and webinars. Ryan earned a Bachelor of Science degree in Public Relations/Communications from the University of Utah and certificates in Data-Driven Marketing and Brand Management from eCornell.
Resources:
- Canvas LMS https://www.instructure.com/canvas
- EduCast3000 Podcast https://www.instructure.com/resources/podcast
- CHLOE 10 Report https://qualitymatters.org/qa-resources/resource-center/articles-resources/CHLOE-10-report-2025
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check against the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
[00:00:00] John Nash: You, you ready? Jason? Anything else?
[00:00:02] Jason: Nope. Just taking a drink, that's all.
[00:00:04] John Nash: Alright, I'll let you do another one.
[00:00:06] Jason: Yeah, that's
[00:00:07] Ryan Lufkin - Instructure: Do the vocal exercises, you know, the...
[00:00:08] John Nash: Yeah, me, me, mi, mi. Red leather, yellow leather. Yeah.
[00:00:12] Ryan Lufkin - Instructure: Yeah.
[00:00:13] John Nash: I'm John Nash here with Jason Johnston.
[00:00:15] Jason: Hey, John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning Podcast.
[00:00:20] John: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last three years about online education. Look, online learning has had its chance to be great, and some of it is, but some of it still isn't. And so how are we going to get to the next stage, Jason?
[00:00:35] **Jason:** John, that's a great question. How about we do a podcast and talk about it?
[00:00:39] John Nash: Perfect. What do you want to talk about today?
[00:00:42] Jason: Honestly, and I'm not just saying this 'cause Ryan's in the room, but one of our favorite ed tech tools, Canvas. And we're here today with Ryan Lufkin from Instructure to talk to us. Welcome, Ryan.
[00:00:56] Ryan Lufkin - Instructure: Thanks for having me. I love these conversations. Looking forward to it.
[00:00:59] Jason: Good. Why don't you just kind of describe the role that you play at Canvas?
[00:01:04] Ryan Lufkin - Instructure: Yeah, so I'm the Vice President of Global Academic Strategy, which means I spend a lot of time talking about the trends that are impacting education across the globe. In that role, I travel all over the globe. Honestly, I was in Singapore and Colombia and Mexico City and all over the United States this year.
Talking about exactly the topics that you all focus on as well: how does technology impact the learning experience, good and bad, and what does that look like? And I've been with Instructure... it's funny, 'cause I always say "Instructure, the makers of Canvas," because Canvas is a household name. Fewer people know the company name.
[00:01:35] Jason: Right.
[00:01:36] Ryan Lufkin - Instructure: But I've been there for seven years now. I've been in ed tech for over 25 years, and I just love the company, love our mission, love the focus. So it truly is a pleasure to be able to come out and have these conversations.
[00:01:47] Jason: Do you, do you work with anything other than Canvas at Instructure? Are you kind of over multiple things there, or?
[00:01:55] Ryan Lufkin - Instructure: It's honestly our entire suite. I think a lot of people know that we bought Mastery Connect, which is an assessment tool. We bought Parchment, which is a credentials tool that I've watched my kids use, and that I've used myself to send transcripts when applying for college and university, things like that. We bought Badgr, the credentials program. We bought Portfolium, which is a portfolio program. So over the last 14 years we've really grown from a single-product company to a real ecosystem of solutions. And unlike other companies, we don't buy our competitors; we buy our closest partners and extend that ecosystem.
[00:02:29] John Nash: Yeah, so that's why you didn't buy Blackboard, you just decided to just destroy them.
[00:02:34] Ryan Lufkin - Instructure: Well, in doing so, we evolved the market, right? And I always say Canvas came along when we were still having arguments about whether or not education should put data in the cloud or ever move toward SaaS solutions, right? And whether it should be open source.
We're technically commercial open source. We publish our core code on GitHub as well, to give people that peace of mind that you own the code, that kind of thing.
[00:02:56] John Nash: Mm-hmm.
[00:02:57] Ryan Lufkin - Instructure: It truly transformed the market, from LMS systems that would go down every month when you got your little update disc in the mail, to 99.99% uptime. And when we have an outage, it's a big deal now across the industry.
[00:03:10] John Nash: Right.
[00:03:10] Ryan Lufkin - Instructure: It used to be super common. So it's been a bit of a fun ride.
[00:03:13] John Nash: Yeah. Yeah. Just that little bit you just described there, it really encapsulates how much has changed in just a decade or even 15 years.
[00:03:22] Ryan Lufkin - Instructure: Yeah, if you've been in that change, you take it for granted.
[00:03:24] John Nash: Mm-hmm.
[00:03:25] Ryan Lufkin - Instructure: how much it's fundamentally changed over the last
[00:03:27] John Nash: Yeah.
[00:03:28] Ryan Lufkin - Instructure: And education has a reputation for moving slowly. But in a lot of ways, we're moving faster than people think.
[00:03:32] John Nash: Yeah, you're right. When you're in it, you forget. And again, we didn't have you on so I could talk about all the other tools out there that we didn't like, but we used Adobe Connect for a long time, and that was an abysmal failure for us; we just couldn't get it to go. So, I mean, we're just thankful to have tools that are stable and up, and even now thoughtful about instructional design.
Well, that killed the conversation
[00:03:55] Jason: We say all that to say we're kind of fanning on Canvas today, because, as you said, we have been around. I think the first LMS I used was Blackboard in that early stage 20 years ago, and at that point it was so bad that I just made my own websites, basically, for my students, right? We were kind of left to that, and a lot of institutions were doing the same thing. It's like, these are so bad, we're going to make our own LMS. Since that time, Canvas and others have stepped into that gap and created products that truly work well. They're responsive, both in the web sense and responsive to the users. They continue to get better, run without a lot of outages, and are secure. There's a lot that goes into these LMS systems that, as John said, is easy to take for granted in 2025.
[00:04:55] Ryan Lufkin - Instructure: Yeah. There were four key pillars when they founded the company: simple, engaging, open, and reliable. And the reliability piece was really the software-as-a-service, cloud-hosted piece, right? The partnership with AWS. But that openness piece is one that still remains a massive differentiator for us. I think publicly we say we've got over 600 open APIs; we actually have 847 or something crazy like that. And then we comply with the common standards like the LTI standard, right? We've got over a thousand partners that have developed LTI apps that plug directly into Canvas.
And we were very clear: we don't own the data, we are caretakers of the data. We don't own the experience; we facilitate the experience for schools as much as possible. We've done everything possible not to create that walled garden that I think other vendors, the ones you may have mentioned earlier, really used to protect themselves and box out competitors. We work with "coopetition" on a daily basis, right? We embrace terms like that. And so for us, it's all about how we facilitate our universities and K-12 institutions to build the learning experience that they want. And whether that's working with our friends or more tenuous friends, we support that.
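To make the openness Ryan describes a bit more concrete, here is a minimal sketch of calling one of Canvas's documented REST endpoints to list a user's courses. The hostname and access token are placeholders, and this illustrates the public API pattern only; it is not anything specific to Instructure's internal tooling.

```python
import requests

# Placeholders: substitute your institution's Canvas host and a personal
# access token generated in Canvas under Account > Settings.
CANVAS_HOST = "https://canvas.example.edu"
ACCESS_TOKEN = "YOUR_CANVAS_API_TOKEN"


def list_courses():
    """Return the caller's courses via the documented /api/v1/courses endpoint."""
    url = f"{CANVAS_HOST}/api/v1/courses?per_page=50"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    courses = []
    # Canvas paginates results; the next page URL arrives in the Link header,
    # which the requests library exposes as resp.links.
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        courses.extend(resp.json())
        url = resp.links.get("next", {}).get("url")
    return courses


if __name__ == "__main__":
    for course in list_courses():
        print(course["id"], course.get("name"))
```

The same token-plus-pagination pattern applies to the other documented endpoints, which is the surface that LTI partners and integrations build on rather than bypassing the platform.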
[00:06:04] John Nash: Yeah, so Ryan, we're excited to talk about instructional design, thinking about online assessments, thinking about where AI is going to play a role to lighten the workload but not turn courses into some automated thing. But I think we would be remiss if I didn't start off with something that's kind of the elephant in the room, which is that instructors are asking us all the time about AI agents, and we've even done an episode on it: agents logging in, doing coursework.
And in a recent episode of your podcast, Educast 3000, I'll put a little plug in there for you, I listened to your conversation. You and Zach Pendleton talked about what you learned at EDUCAUSE 2025. And I heard you talk about something I hadn't really thought about, something called Model Context Protocol, MCP, as a way to give AI safer, more controlled access to systems. Is that something you think could play a role in the LMS world, at least in addressing these concerns that teachers have about agents logging in?
[00:06:59] Ryan Lufkin - Instructure: Yeah, so if you're not familiar with the term MCP, it essentially rolls those 847 open APIs that I talked about into a more cohesive, and again, I'm not the super technical one, Zach is, into a more cohesive approach. And with AI, it's garbage in, garbage out, right?
The more organized your data is, the better access those AI tools have to that data, and the better your results are. And from a security standpoint, from an accuracy-of-output standpoint, from an agent standpoint, making sure that they can accomplish the tasks they need to, having an MCP is really important.
It just organizes the entire experience. It also makes integration with third parties easier. And again, one of the things I always like to clarify, and it's something that Zach and I came up with three years ago, right on the cusp of the AI revolution, November 30th, 2022, when OpenAI launched ChatGPT, is this concept of eating our vegetables, right?
We have all this regulation in place, and there was this immediate "we've got to pass new regulation, we've got to do this." Nope. We've got student data privacy covered, we've got accessibility covered, we've got security covered. We just need to make sure that we eat our vegetables and make sure these tools align with those existing regulations.
We don't need another layer of regulation specific to AI, 'cause it already exists within the technology we're already using. And so that's something that we've really focused on. And so when we talk about MCP and we talk about integrations, all of that is done with the permission and the full disclosure of our universities, even down to the educator level.
So you can turn on AI features within your course, things like that. We don't ever turn them on for everyone and plug data in. I think that's everybody's fear, that we're going to plug ChatGPT into Canvas and it'll suck all the data out.
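As a rough illustration of the idea behind MCP, wrapping a large API surface behind a small set of named, described, permission-checked tools rather than handing an AI client raw access, here is a minimal sketch in plain Python. This is not the real MCP SDK and not Instructure's implementation; the tool name, roles, and handler are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class Tool:
    """One narrowly scoped operation an AI client is allowed to call."""
    name: str
    description: str
    handler: Callable[..., Any]
    required_role: str  # e.g. "instructor"; the permission gate


@dataclass
class ToolServer:
    """Tiny stand-in for an MCP-style server: a curated tool list, not raw APIs."""
    tools: dict = field(default_factory=dict)

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def call(self, name: str, caller_role: str, **kwargs) -> Any:
        tool = self.tools.get(name)
        if tool is None:
            raise KeyError(f"No such tool exposed: {name}")
        if caller_role != tool.required_role:
            raise PermissionError(f"{caller_role} may not call {name}")
        return tool.handler(**kwargs)


# Hypothetical handler: summarize ungraded submissions for one assignment.
def ungraded_summary(course_id: int, assignment_id: int) -> dict:
    # A real integration would read from the LMS; here it's a stub.
    return {"course": course_id, "assignment": assignment_id, "ungraded": 12}


server = ToolServer()
server.register(Tool(
    name="ungraded_summary",
    description="Count ungraded submissions for an assignment",
    handler=ungraded_summary,
    required_role="instructor",
))

# The AI client only sees the curated, described tools, and the permission
# check runs on every call, so an agent acting as a student never reaches
# operations that weren't deliberately exposed to that role.
print(server.call("ungraded_summary", caller_role="instructor",
                  course_id=101, assignment_id=7))
```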
[00:08:33] John Nash: Right. So when we think about this concern that instructors have about agents logging in and impersonating students, is MCP something that an instructor would invoke, or is it something that the instructor or the LMS has to put in place to say, we won't let agents log in on behalf of students, or say, like, Comet browser is going to roll in there and take the course?
[00:08:57] Ryan Lufkin - Instructure: Yeah. It's funny, because before we started this conversation around AI, the thing was, we were having Chinese foreign exchange students log in and take our courses for us, right? This has been a problem as long as education has existed, and it was a problem when we had a physical classroom.
We had people show up; you would have to show your student ID to make sure that you were who you said you were when you were taking your final, things like that. This is not a new problem. And I think a lot of the time there's a level of anxiety, or a belief that this is happening more within education than it actually is. The vast majority of students actually want to learn. And I think one of the things that AI is driving is this explanation of the why. Why do we do these things, right? And if we explain the why, if we understand that knowledge builds on itself, students are less likely to turn to cheating.
But back to your question: yes, our goal is to catch it at the system level, right? If we're catching these bots coming in at the system level and performing these tasks... We've had some institutions that have had problems with bots coming in and applying for financial aid, right?
And the AI tools are good enough to actually enroll in the course and go in and do enough activity in the course to pass those standards. And so it's a constant cat-and-mouse game. The technology advances, and we figure out ways to try to block or identify those users. But at some level, it's an ongoing battle that we hope to keep the upper hand on all the time.
[00:10:16] John Nash: We understand that this is not all at the feet of a company like Instructure, that if a bot is going to log into a course and succeed at it... I played with a little demonstration project to show how that could happen.
It's really more a matter of: what's the instructional design of that course, such that a bot can get in there, succeed in it, and get A's? Also, I think, Jason, you've pointed out that in a smaller course this would be sort of catchable, but in a larger course it could be a bit of a challenge if there were 60, 70, 80, 90 folks in there and it was sneaking in. But as you point out, there could also be a proxy human being doing it too, depending on where they sit.
[00:10:56] Ryan Lufkin - Instructure: Yeah. Ultimately, if a student wants to cheat and do that, they're just robbing themselves of that experience, unfortunately. And so it's that kind of ongoing battle. But it's existed since education started, somebody sending in a proxy and having them learn for them.
And it's an ongoing effort, something we spend a lot of time on. One of the funny things about AI is we continue to uncover new use cases that we wouldn't have imagined before. And so somebody will come with an example and we're like, "Wow, I hadn't thought about that one. That's a good one. We'll go track that one down." And so we have a team that constantly works on that.
[00:11:28] Jason: Yeah. And we've been talking about AI since we've had this podcast; it's been a constant topic. But this fall particularly, we've been talking a fair bit about agentic AI. So, as John mentioned, we've done some testing. For those listening in the future, this is December 5th, 2025, and we know things change very rapidly. We've currently done some testing with both Perplexity's Comet as well as Atlas, in Canvas specifically, where it can go in, and at this point you don't even need to do a workaround with these browsers.
You used to have to do some sort of workaround, kind of like, "Oh, I'm just testing this course and you're going to do it," so it wouldn't refuse to cheat. They'll just do it for you now: respond to a quiz, get nine out of 10 or 10 out of 10, do a discussion post, email the professor through Canvas.
So this is current right now. So I guess my question is: what do you expect to see right now with it, kind of what your stance is, what's in place, and then where do you see this going? Are there changes happening in terms of Canvas's approach, practically, as I think about it from a teacher standpoint and a student standpoint?
[00:12:49] Ryan Lufkin - Instructure: Like I mentioned, we're constantly working on this, and we have really great relationships with Google and Microsoft, and we work with them as well. Because one of the first things we saw was Google Chrome plugins that were, "Hey, we'll take your quiz for you." And so then we've got to reach out to Google and say, hey, maybe you could disable this plugin, because it's not even hiding the fact that it's cheating. And so it's not even within our product itself, because if they're plugging those things into the browsers, that's giving access into the system.
I'm also a student; I'm doing a master's program at Arizona State University, and so I'm actually looking at how they're tackling some of this from the student side as well, which is just fascinating. But ultimately, within a lockdown-browser experience for finals, you've still got to turn on the lockdown browser and write your paper directly within that tool. There are other tools coming out that plug directly into Canvas, where you have to write your paper directly in there and it tracks every aspect of it. And that's really hard to game with an AI tool, because it's very clear if it's being written by AI. So there's a number of tools, and it's a constant cat-and-mouse game of us trying to battle this.
But what's interesting is that the exercise you just went through, building an agent to go in and do this test, is exactly what people are doing in the workplace to accomplish their work more quickly. Building that tool is actually a really interesting skill that we probably should also be teaching students, which seems self-defeating, except that these are tools they're going to need in the workplace. And so there's nothing I hate more than when I hear educators say, "we're going to go medieval on 'em, we're going back to blue books and pencils," because that's just not preparing students for the future.
And we'll continue to play our cat-and-mouse game and try to block the unfair use of these tools. But we also need to evolve, and John, I think you mentioned this a little bit, we also need to evolve how we design our courses. If they can be gamed by AI, in a lot of ways we've got to figure out how to prevent that and turn that on its head.
And I think that's what we've been a little slow with: how do we give educators the resources to understand what these tools can do, basically AI literacy, so they understand that this can be a problem? How do we actually help them redesign their courses, and give them instructional design resources to modify how their course is designed, to help prevent this type of thing?
And then how do we, the tool provider, continue to fight this battle? So it's a multilayered evolution that we're all going through.
[00:15:08] John Nash: Mm-hmm.
[00:15:09] Jason: Yeah. And we just released a podcast with a guest who was talking about kind of a multi-pronged approach, which we are completely behind, like redesign. I work with a group of instructional designers at the University of Tennessee, and we're talking about this every week, right?
And we're talking with faculty every week. And the great thing about that aspect of this is that it's actually an incredible opportunity. We find that over and over again, we are coming back to strong pedagogical principles, right?
[00:15:41] Ryan Lufkin - Instructure: 100%. Yep.
[00:15:43] Jason: Things we just want to build in anyways. How do we make it more engaging?
How do we make it more...
[00:15:46] Ryan Lufkin - Instructure: Yeah.
[00:15:47] Jason: How do we scaffold more? How do we focus on process? How do
[00:15:51] Ryan Lufkin - Instructure: How do we make it more personal? Yep.
[00:15:53] John Nash: Yes, yes.
[00:15:54] Jason: Exactly. How do we connect better? So those are... I believe in those, but I also wonder about this kind of multi-prong, focusing on those other aspects as well, because Canvas's business, kind of like my business, is online learning, and right now I don't see a really strong blocking approach, a technology-level blocking approach. And I don't know if it's because it's just not possible to do in this cat-and-mouse kind of thing, or if it's more philosophical, or... I get concerned sometimes, 'cause I know Canvas also has agreements with OpenAI and so on, and I wonder about some of those kinds of things.
Can you speak to any of that?
[00:16:36] Ryan Lufkin - Instructure: I'd love to dispel that last one aggressively. We work with every AI provider, right, from Anthropic to Microsoft to Google to OpenAI. We don't sell data; they don't have access to our student data. That's one of those hard-and-fast things, and it limits the ways we can use AI within the classroom quite a bit, because we will not let you train an AI model on student data through our technology. Period. Hard stop. And that's also where I worry about some of the startups. We go to ASU+GSV every year in San Diego, and there are startups making promises where you're like, "you can't deliver that functionality unless you are training on student data, and that's a FERPA violation. Do you even know what FERPA is?" Like, the basic things there. So that's why we're always very careful. We are the gatekeeper, in a lot of ways, for all of those vendors that want to work with us, because we have over half the students in North America using Canvas on a daily basis. So how do we make sure those tools aren't being used nefariously? We are absolutely the gatekeeper there, right? And we take that role very seriously.
But to your point: it's not philosophical. If you look at a lockdown browser, it makes it so you can't access anything but that test, and it is a great tool for eliminating the ability to cheat. It is not a great tool for providing any sort of engagement or interactivity or a good experience. So for us, we're caught in this conundrum of how do we block the nefarious use of these tools, but also allow the freedom to create really engaging courses and personalized courses?
And it's not binary. That's where that cat and mouse comes in, of constantly going back. One of the reasons we built the MCP is because we thought people were using some of our APIs in ways that they shouldn't.
Maybe we are being too open and we need to shut some of that down and control it a little bit, not to prevent our schools from using it, but to prevent any type of nefarious use in the future, things like that. And so we are constantly looking ahead and saying, okay, what is possible? What is the next step? What are the things we haven't thought of? And how do we make sure that we're protecting against that, but then also make sure we're encouraging the use of these tools in productive ways within the courses?
I was at the ASU Agentic AI and the Student Experience conference a month or two ago; I travel so much it's a big blur. But it was so interesting to see what some of these schools were doing. Florida State University was embedding NotebookLM in every course. If you're familiar with NotebookLM, it organizes all of the content within that course and then allows students to create a podcast, to create study cards, to listen to elements as a stream, as a narrative, right? It provides a level of personalization of content consumption that really is the future of learning. You're meeting every learner where they are, because you're letting them choose how they consume some of that content. Schools like Arizona State have actually created, I think it's called Creator Up, the ability for every educator to create an agent of their own and apply it in different ways. So we're starting to see all these different usages of the tools in creative ways, within this kind of secure context that Canvas can provide and be the gatekeeper for.
But we can't get there, and we can't start exploring those tools, if we're like the letter that came out earlier this year, signed by 7,000 educators, that said AI is corrosive to learning, so we're going to bury our heads in the sand and pretend that we can make this thing go away. That's just not realistic. And it actually prevents us from finding these positive uses, because we're so fixated on the negative aspects of these tools.
[00:20:14] John Nash: Mm-hmm. Jason and I think a lot about how to improve instructional design broadly, and so many instructors know their content so well, but they don't always know what constitutes strong instructional design.
And as we're thinking about what the future of Canvas looks like, and I'll speak just personally for myself, I would love to see ways in which Canvas might even support my ability to do instructional design. So what does the landscape look like there, the horizon for that kind of thing?
[00:20:47] Ryan Lufkin - Instructure: Yeah. So we've rolled out some features like translation and rubric development and different features that save educators time; that's where we've really focused. But Zach Pendleton, who you mentioned earlier, is our chief architect and one of the smartest people I've ever met. I meet with him once a week and I walk away from every one of those meetings smarter for it. One of the things he realized early on is: look, we have an ecosystem of solutions that we are supporting all the time, but we are not the AI creators. We can only progress so fast, because we're supporting all these different tools, while the AI companies are really just focused on their AI tool, so they're progressing so much faster than we could. So if we actually go out and partner with all of them, with Anthropic, with OpenAI, if we become agnostic in that approach and allow our schools to work seamlessly through them, that's amazing. And that's one of the reasons we partner with AWS, because AWS has their Bedrock models, right? They've got, I think, 27 different large language models: Claude from Anthropic, Gemini, all the ones that we've talked about, all of those you can plug in through AWS, which provides that additional layer of security and control, right?
And so Mary Strain from AWS and I are besties, and we just spend a lot of time trading information and uncovering these different pedagogical use cases. She just sent me one from a school in Maryland. We're constantly looking to uncover those and help show schools what good looks like.
I think, John, what you were talking about: it can seem very "anything's possible, go out and do these things," but if you show someone what good looks like, or what some examples are, that really is a good starting point for how do I start applying these tools in interesting ways. And so, at some point in the future... there are partners that are almost virtual instructional designers and things like that.
[00:22:32] John Nash: Yeah.
[00:22:33] Ryan Lufkin - Instructure: And right now what we've done is let them run, 'cause they're going to evolve faster than we probably could with the resources we have. But then that fits into our model: we might buy them down the road and integrate those pieces in.
[00:22:43] John Nash: Yeah.
[00:22:43] Ryan Lufkin - Instructure: And you see great partners like Praxis; I'll give a plug to Dave from Praxis. They've done a great job with that, using it in different ways. And they're halfway inside the tent; they're one of our partners already. They work really closely with us.
They're at all of our events. And that's probably the model we'll follow: find those best-of-breed tools that are already halfway inside the tent and then pull them all the way in.
[00:23:04] John Nash: Yeah. 'Cause I guess, and Jason probably knows more about it than I do, but now I'm just speaking with my user hat on: it would be nice, I think, to have a way for Canvas to help a novice, face-to-face instructor coming to the online space avoid producing shovelware, with an adept instructional designer riding alongside as they bring that course online.
And I think, for me, this is John Nash talking, that would be kind of cool if it could work well. But Jason, you run a whole shop of instructional designers. What would you like?
[00:23:42] Jason: Yeah, I would welcome that, because we always say there's always more work to be done, right?
[00:23:48] Ryan Lufkin - Instructure: Yeah.
[00:23:50] Jason: I have no concern about our longevity in these roles, and also for my instructional designers, partly because I have really good ones, but also because there's so much work to be done.
One, there's a lot of faculty that would just prefer to work with a person. Instructional designers are really good; they see around things in different ways. Then there's a group of faculty that would prefer just to do it on their own, and sometimes do it between 11:00 PM and 2:00 AM, right?
[00:24:21] Ryan Lufkin - Instructure: When you'd rather be asleep? Yes, yes.
[00:24:23] Jason: And we're not going to be there; our IDs are not being scheduled for one-on-ones during those times. And I think that, especially if they are tuned in a good way and, as you said, not just doing shovelware, they'd be able to give feedback. We joke sometimes about Clippy, about having a little Clippy kind of pop up and say, did you mean to do that?
[00:24:45] John Nash: Right.
[00:24:46] Ryan Lufkin - Instructure: I like what you just said about how they need to be tuned, they need to be monitored. I think there's this kind of misconception that AI can just be set off on its own, running, and you don't have to look at it anymore. You still need somebody, an expert, an instructional designer, to be monitoring what the outputs are. Is it staying on task? Are we seeing drift? Do we need to update the parameters around this? Oh, we didn't think about this aspect of it; we've got to add that, right? These are tools that need monitors, that need bosses, right? And so I think it's funny.
We're like, "oh, fire all instructional designers and. And replace 'em with AI." That's not realistic. That's just not something that, that we see happening anywhere in the near future. They're about the scale, just as you said. They're scaling and providing that, that support when there's not a human available.
[00:25:30] John Nash: Yeah. Yeah, yeah. I think a big question for instructors is how AI can lighten the workload without turning courses into automated experiences. How do you think Canvas is thinking about that balance between being efficient and preserving the human aspect of teaching and learning?
[00:25:51] Ryan Lufkin - Instructure: Yep. What's interesting is, first-pass grading is one that we've talked about a lot, right? We demoed the feature at InstructureCon this year, and it is: you set up your rubric, and then the AI tool runs the assignment against that rubric, right? And it gives a first pass of whether they met the criteria, with some basic feedback.
And the goal there really is to provide a first pass, especially for a large course. If you have 90 people in a course, think about the time saving of having it go through and do a basic first pass for you, and then you as an instructor being able to go through and say, oh, actually I'm going to change this, I'm going to change that, I think it gave them too much credit for this, I'm going to reduce that. It provides that framework as a starting point. So it reduces grading fatigue. It reduces bias; we see a lot of that, where if your name starts with Z, your instructor's really tired by the time they get to your paper, and they're not necessarily going to give you as thorough a review. But what we don't want it to be is just robots grading robot-submitted work. And you even have data, so at a program level you can actually look at: are your instructors just clicking submit, or are they actually going through and changing things? Because it's designed to save time, not to just replace that process.
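Here is a minimal sketch of that "AI proposes, instructor disposes" first-pass flow. It is hypothetical, not the Canvas feature itself: the model call is a stub, and the override flag stands in for the program-level data Ryan mentions, showing whether instructors actually change the draft scores or just click submit.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Criterion:
    name: str
    max_points: int


@dataclass
class CriterionResult:
    criterion: str
    ai_points: int
    ai_comment: str
    final_points: Optional[int] = None  # set when the instructor reviews
    overridden: bool = False


def score_criterion(criterion: Criterion, submission: str) -> tuple:
    """Stand-in for a model call that scores one rubric row.
    A real implementation would send the criterion and submission to an LLM."""
    hit = criterion.name.lower() in submission.lower()
    points = criterion.max_points if hit else criterion.max_points // 2
    return points, f"Draft feedback on '{criterion.name}' (automated first pass)."


def first_pass(rubric: list, submission: str) -> list:
    """AI drafts a score and comment for every rubric criterion."""
    return [CriterionResult(c.name, *score_criterion(c, submission)) for c in rubric]


def instructor_review(results: list, overrides: dict) -> list:
    """The human stays in the loop: every row gets a final score, and overrides
    are flagged so a program can audit how often grades actually change."""
    for r in results:
        r.final_points = overrides.get(r.criterion, r.ai_points)
        r.overridden = r.criterion in overrides
    return results


rubric = [Criterion("thesis", 10), Criterion("evidence", 10), Criterion("citations", 5)]
draft = first_pass(rubric, "The thesis is supported by strong evidence ...")
final = instructor_review(draft, overrides={"citations": 3})
for row in final:
    marker = "(overridden)" if row.overridden else ""
    print(row.criterion, row.ai_points, "->", row.final_points, marker)
```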
[00:27:03] John Nash: A comment and then a follow-up question on that. The comment is, and this says a lot about me and maybe how dangerous I am, but I'm in the subreddit for professors on Reddit, and in that subreddit, professors complain a lot about AI submissions; not that students are necessarily lazy, but there's a tone of that there.
But something a professor said that struck me: they said, "I think I'm going to ask my students, when they turn something in, how many minutes they would like me to spend grading it." And it sort of made me think, well, that's interesting, because you're right.
If they're just going click, click, click, because I'm getting a bunch of AI slop so I'm just going to... or, I'm going to spend time thinking about this with you. And so it'd be interesting if the tools could really help professors get into that. We talk to a lot of professors who love that aspect of marking papers, because they get to learn about their students more.
They learn about their interests more. That's just my comment. I think my question kind of gets into the weeds: it's neat that it'll make a rubric, but doesn't that presume that you also have good learning outcomes? And so is there developmental work that can happen for professors before they get to that point? 'Cause it's easy to think, "oh, I'll just make a rubric," but if I don't have good learning outcomes, then the rubric's no good, right?
[00:28:22] Ryan Lufkin - Instructure: And there's a number of things that we do to support outcomes, mapping outcomes. And yeah, what's interesting is, if you have your course content, you can actually use some of these tools to help you define outcomes,
[00:28:34] John Nash: Yeah.
[00:28:34] Ryan Lufkin - Instructure: some of those things,
[00:28:35] John Nash: Yes.
[00:28:36] Ryan Lufkin - Instructure: And so this is where AI is both the cause of, and the solution to, all of our problems. Not to pull a Homer...
[00:28:42] John Nash: That's a Homer Simpson. Yeah.
[00:28:44] Ryan Lufkin - Instructure: Yeah, exactly. But it is, in so many ways, right? And Jason, I think you mentioned the Socratic method, and this getting back to a one-to-one approach, that real human connection of learning.
And in a lot of ways, AI is able to help us get closer to that if we do it right. That personalization piece, I think, is important. And yeah, it can actually help you create course content. But again, it's one of those things: is that a tool we should build within Canvas, which we may not be able to update as fast?
Or is there a really good third party? Anthropic's Claude learning mode actually does some really interesting things around the Socratic method, where it won't give you the direct answer. And now OpenAI has their, I think, study mode that does something similar, right? These tools are innovating faster, and that's why that open architecture realistically allows us to move more quickly than if we were building all of these features ourselves, especially as we move into a more agentic phase. These tools are more powerful; they can plug into Canvas a little more deeply. And then, a little bit like some of the features that we built in Flash, right, we'll retire those and it'll just be the agent throughout the tool that is supporting that.
And I really do think we'll get there. We've got to build the trust. We've got to build the understanding of what it's capable of. But I also think AI is great at detecting other AI, and that's something Canvas has never really done; we've always relied on our partners for academic integrity.
That was a decision early on, that we were not going to be aggressive with academic integrity. We've got great partners like Turnitin that really focus on that, Copyleaks and some of those others, and we have not focused on it as much. But in the future, with these different models, you may see that evolve a little bit.
[00:30:26] John Nash: Yeah. What are you thinking, Jason?
[00:30:29] Jason: Yeah, and I love that idea of that first pass, especially as I'm thinking about larger classes and the load that takes, both for teachers and TAs. I was in a seminar earlier this summer with some people from ASU, talking about some of these larger classes and asking the question: what if we could focus on the students that really want our feedback, right? Those are some of the things I really like about AI from a grading standpoint. Thinking about it from a teaching standpoint, my students don't need me to correct another one of their commas or split infinitives, right? I don't need to be doing that; they don't need me to be doing that. It's not personal, it's not anything. It's something they should be learning, and especially if they're going to go on in higher education, they should be correcting themselves and learning along the way. But I don't need to be spending my time on that. However, I do want to be interacting with them over ideas.
And this is where the line comes in for me: I played a little bit with AI creating comments for students, and it just felt kind of icky. That's my technical word for it.
[00:31:39] Ryan Lufkin - Instructure: Honestly, soulless.
[00:31:41] Jason: Yeah. Yeah.
[00:31:42] Ryan Lufkin - Instructure: I've heard "soulless" as a version of that. Yeah.
[00:31:43] Jason: It just felt wrong. Like I was being an imposter teacher, that I wasn't really...
[00:31:52] Ryan Lufkin - Instructure: Yeah.
[00:31:53] Jason: For me, it kind of crossed over. The "oh yeah, here are some technical things, I've now given you feedback from AI, and this works, points on this and that," using a rubric, that aspect is fine. But when I get it talking to them, "Hey, good job, Billy," or whatever, it all of a sudden starts to feel a little weird.
[00:32:10] Ryan Lufkin - Instructure: It feels insincere. Yeah. The other piece too, and I see this: I mentioned just as we were coming on the show that my daughter's a junior at university, and my son's a freshman in high school, and they both use Canvas.
What's interesting is some of the frustration they have around the timeliness of feedback, right? "I submitted my paper two days ago and my professor hasn't given me my grade yet. Why not?" They're a generation where everything is so automated, so on demand, immediate satisfaction, and it's really frustrating for them when it takes that long. The other aspect, and I've seen it, not to call out my professor in my master's program, is where I didn't get feedback on my first paper and I'm already submitting my second. Am I going to get dinged for the same things he graded me on, when I haven't gotten that feedback before I write the next one, right? So there are some simple, basic realities that we need to make sure we're addressing, instead of villainizing AI or saying it's robots providing robot responses. That timeliness, providing feedback in a useful way so I'm building on that feedback, is really important, and it's not always easy for educators.
I don't necessarily blame my professor, but it's frustrating on the student end.
[00:33:21] John Nash: This is not the solution, but what you just said, and the experience you and your children are having, reminds me of what happens if either of you have used the Domino's app to order a pizza. You order the pizza, and then it says Jane has received your order, and then it says Bill has put it in the oven.
And then it says Sam has stuck it in the car. And so I can imagine, like, John has received your paper, and then John has opened your paper, and then...
[00:33:50] Ryan Lufkin - Instructure: John... expletives about being tracked by this device. Yes.
[00:33:54] John Nash: yeah.
Ryan, you mentioned badging, and Badgr was the tool that you bought. Our institution's been talking about badging. I've heard you, I think in other spaces, talk about micro-credentialing, about credentialing being detached from courses.
[00:34:10] John Nash: I think about it in my course. I teach a relatively complex project-based course that teaches students how to be design thinkers, and so I've seen opportunity for micro-badging inside that. Even if you didn't finish the course, you could have left being an empathetic interviewer or a good brainstormer, or what have you.
So talk to us a little bit about what direction that's going in and where we might leverage that as instructors.
[00:34:34] Ryan Lufkin - Instructure: I love that. And we talk a lot about skills-based learning,
[00:34:36] John Nash: Mm-hmm.
[00:34:37] Ryan Lufkin - Instructure: which is... we hear pushback on skills-based learning, 'cause people are like, "we're not a vocational school; our job is not to prepare people for jobs," right? There's still some of that mentality floating around out there, even in the face of some of the federal requirements now around whether people are getting jobs with these degrees.
Things like that. And I don't think that's the be-all and end-all; I don't want to come down on either side of that debate. But 40% of college students don't graduate within six years in a traditional four-year program, and they leave with essentially nothing. But like you said, John, they could actually leave with: "I actually am a good presenter. I've got a badge in evidence-based decision making," things like this.
[00:35:13] John Nash: Yes, yes.
[00:35:15] Ryan Lufkin - Instructure: In no other industry would somebody spend potentially two or three years of their lives, and what they pay, and leave with nothing. That's just insane. So how do we break that down? How do we actually start providing that incremental credit, things like that? But even then, my daughter, she's a strategic communications major, she's getting a minor in psychology, and she wants a certificate in data analytics so she can show, look, it's not just soft skills, I've got some hard skills. What's interesting is that certificate in data analytics is a non-credit program, right? Because we are caught up in this traditional model where your program is accredited because it is a full program. Most certificate programs are not accredited if they stand separately, right? They're for adult learners, they're not for credit, or they're treated separately. And so we've got to get to a model where we're looking at it all as a skills-based framework and giving students credit for that, so they can properly show potential employers or other educational institutions that they have those skills.
They've achieved those skills in measurable ways. And there are a lot of changes within the infrastructure of education that need to happen. But what we've seen is this massive growth since COVID in demand for credentials: hey, if I'm going to grow in my job, if I'm going to switch careers, if I'm going to change something, I need something.
And it doesn't need to be a four-year degree necessarily. It could be a six-course certificate that would help me get a better job or better pay. And we're seeing that across the globe. We've seen initiatives for that in the Philippines around tech jobs. We've seen in Mexico a very similar program where they're going to try to drive more non-four-year degree programs to upskill their labor force. So this is a global initiative, and we have the ability to be leading it, just as we've led the model for education across the globe for the last 150 years. We have the opportunity to be leading this, and we need to lean into that.
[00:37:09] John Nash: Could I now, 'cause I'm too lazy to go look it up and I have you here, could I now have students in a full 16-week course complete a module and be badged for that module?
[00:37:20] Ryan Lufkin - Instructure: Totally. You can set that up. We've rebranded it; it was Canvas Credentials, and it's just been rebranded under Parchment, so it's Parchment Digital Badges.
But yeah, you can actually set that up at whatever level you want, as granularly as you want. It could be six courses and you get a badge, or it could be a badge for every module within a Canvas course. Things like that.
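As a sketch of what a per-module badge could look like under the hood, here is a small example that builds an assertion loosely modeled on the Open Badges format. The issuer URL, salt, and badge names are placeholders, and this is illustrative only; it is not the Parchment or Canvas Credentials API.

```python
from datetime import datetime, timezone
import hashlib
import json


def award_module_badge(student_email: str, module_name: str,
                       issuer_url: str = "https://badges.example.edu") -> dict:
    """Build an assertion, loosely modeled on the Open Badges format,
    when a learner completes one module. All URLs here are placeholders."""
    # Hash the recipient identity rather than embedding the raw email address.
    salt = "deadbeef"
    identity = "sha256$" + hashlib.sha256((student_email + salt).encode()).hexdigest()
    badge_slug = module_name.replace(" ", "-").lower()
    return {
        "type": "Assertion",
        "recipient": {"type": "email", "hashed": True, "salt": salt, "identity": identity},
        "badge": f"{issuer_url}/badgeclass/{badge_slug}",
        "issuedOn": datetime.now(timezone.utc).isoformat(),
        "verification": {"type": "HostedBadge"},
        "narrative": f"Completed the '{module_name}' module with all requirements met.",
    }


# A course could award one of these per module, or one per six-course pathway.
assertion = award_module_badge("learner@example.edu", "Empathetic Interviewing")
print(json.dumps(assertion, indent=2))
```

The value question Ryan raises next is exactly what the "badge" URL points at: whoever hosts that badge class is the issuer whose reputation backs the credential.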
[00:37:38] John Nash: And I'm the authority on that badge? It doesn't have to be my institution? I mean, I'm just like, I've looked at all this with a squinty eye, you look good on this, you get a badge. Yeah.
[00:37:47] Ryan Lufkin - Instructure: Yes. And the goal really is to provide a level of flexibility around that, because every institution's kind of trying something different. The challenge is that the value of that badge then comes from your reputation and your institution's
[00:38:00] John Nash: Right.
[00:38:01] Ryan Lufkin - Instructure: and as opposed to a unified skills taxonomy
[00:38:03] John Nash: Yes.
[00:38:04] Ryan Lufkin - Instructure: or something like that, where there's an agreed upon
[00:38:06] John Nash: Yeah.
[00:38:07] Ryan Lufkin - Instructure: we talk a lot about that currency,
[00:38:09] John Nash: Right.
[00:38:09] Ryan Lufkin - Instructure: that there was a reason I went and took a Google AI Essentials certificate, because it has Google on it.
[00:38:15] John Nash: Right?
[00:38:15] Ryan Lufkin - Instructure: I could have done that at Salt Lake Community College or somewhere else. But it's the value
[00:38:20] John Nash: Yes.
[00:38:21] Ryan Lufkin - Instructure: that's what I think is so in flux right now.
[00:38:22] John Nash: Mm-hmm. Yeah. It's like, oh, they got that from Nash, that "hack," Nash. So that badge is not worth it.
[00:38:28] Ryan Lufkin - Instructure: Or, oh my gosh, you got that from Nash? Okay,
[00:38:30] John Nash: right? Yeah. Okay. Yeah. Right.
[00:38:32] Ryan Lufkin - Instructure: that's the,
[00:38:33] John Nash: Yeah. Yeah.
[00:38:35] Jason: Like, only a few have gotten, have achieved, that from Nash.
[00:38:38] John Nash: That's right.
[00:38:39] Jason: Yeah, I think that's an exciting direction. I was at a session on, you're probably familiar with, CHLOE 10, the most recent CHLOE report. And among university leadership there's just an enormous focus on pouring resources into non-degree credits. They didn't say badging, but I'm assuming that badging is going to be a big part of that. How do we represent those?
[00:39:05] Ryan Lufkin - Instructure: Yeah, badging is the external representation of those non-degree programs. And you hear a lot about stackable credentialing, and that's why that analogy of my daughter: they really are looking at it in a stackable way, as opposed to a degree that is just two-dimensional in a lot of ways.
How do I stack up these smaller pieces that show, look, I'm really good at data analytics, and I lean more on the creative side, 'cause look, I've done some visualization stuff, right? It's just a more granular way to define your skills or show your skills. And it's exciting. It's nice to see; we've been talking about this for a while. Badgr, I think we bought Badgr six years ago or something like that.
We've been working towards this, and it's just really gained momentum. I think post-COVID, everything's moving a little faster.
[00:39:49] Jason: Well, it seems like an appropriate time to make the UHF joke. Have you guys watched UHF? "Badgers? We don't need no stinking badgers." Anyways, great film, side comment, but... Maybe as we're kind of wrapping up here a little bit, I would love to know, from your viewpoint, where are we going? This is online learning in the second half. A lot of what we talked about here is, you know, we've been at this for 20 years. We're looking at where online learning has been. We can get information in front of students, we can have them connect with it securely, and now there are ways to interact. Where do you think we're going in the next year, five years, 10 years, when it comes to online learning in general, or specifically with Canvas? What are some of the aspirational goals you have?
[00:40:37] Ryan Lufkin - Instructure: Yeah. I'll go back and say we've all been in education long enough to remember, and many institutions still have, that same "we are not a business, we are academia; they are not customers; we owe them nothing." And it's been really interesting to see this evolution toward a more student-centered approach.
And I think what we're going to see is a more personalized experience for students, more student agency in what they're able to take. We see this in things like the California community college system, right? You can actually take a course from any California community college, and they've got a great pathway builder that shows how you take these different courses from any physical institution, right? More of that flexibility and interoperability. And our goal... Canvas today has about half of all college students in North America, about a third of all K-12 students in North America, and we're supporting more businesses with their development as well.
And so, it sounds a little self-serving, but when you actually look at the benefits of that technology just disappearing into the background, the cognitive load of learning a new technology being removed, and providing a very seamless experience, that is really powerful. And then when you add tools like, okay, we'll support dual enrollment, so you're having a seamless experience between your high school and college courses, and oh, we make it super easy to apply for college directly within Canvas, and maybe actually recommend what that looks like and where your aptitudes lean.
The more we can use technology to keep students engaged in education, the better off we will be. The dropout rates that we've seen... and again, I'm old; in my first weed-out course they said, look to your left, look to your right, one of these people isn't going to be here at the end of the semester.
And they took pride in that. I love the shift to, how do we provide services to really keep students on track and moving toward success? I think education has the biggest positive impact on the globe, right? The positive growth of our society, everywhere in the world. So I think it's incredibly important, and you'll see more of that: how do we pull together an ecosystem that supports universities? How do we help universities work together? How do we help create corporate-university partnerships to make sure that the programs universities are offering are the ones that map to paying jobs and in-demand jobs, things like that?
And so it's hard to say what it looks like, 'cause I wouldn't have predicted AI five years ago. We see so much of that quick evolution, but it's amazing to be part of this, the most tumultuous but also the most transformative time in the history of education.
We all get to affect that, and there's kind of an excitement and a power in that that I love.
[00:43:03] Jason: Love that, and obviously the excitement in your voice. I can see it in your face.
[00:43:08] John Nash: Mm-hmm.
[00:43:09] Jason: Our podcast listeners can't see that, but I can tell you're passionate about this, and it's been great to talk to you about it, 'cause we line up. And this is why we like our edtech partners, right? We don't pretend to be able to manage all this alone.
Honestly, if it was left up to the institutions to do the LMS, I don't know, John, we'd probably still be having students dropping things into Microsoft Office files or something...
[00:43:35] John Nash: yes,
[00:43:36] Jason: like that on the
[00:43:37] John Nash: we'd be, we'd be using Evernote and Dropbox.
[00:43:40] Jason: yep.
[00:43:41] Ryan Lufkin - Instructure: Dropbox. Oh, yes, I,
And I love these conversations, and thank you for having me on the show. I just think the more we have these conversations and explore different ideas, the better off we are as a community. So I appreciate what you're doing.
[00:43:52] Jason: Yeah. Well, thank you.
[00:43:53] John Nash: Oh yeah. Thank you.
[00:43:55] Jason: For those listening, we're at onlinelearningpodcast.com. That's onlinelearningpodcast.com. You'll see the full transcript and notes there. We'll also put some links in to Educast 3000, and I love the reference there. And if you don't know the reference, folks, well...
[00:44:10] Ryan Lufkin - Instructure: Special kind of nerd reference that we like.
[00:44:12] Jason: We're a bunch of nerds here.
Basically, we've pretty much admitted that,
[00:44:15] John Nash: Totally, totally.
[00:44:16] Jason: John is in a sub-subreddit on...
But.
[00:44:20] Ryan Lufkin - Instructure: John also got my Homer Simpson quote, so I appreciate that as well.
[00:44:26] Jason: We'll put a link to that, and I guess we'll put a link to Canvas. I think probably most people listening know about Canvas, but
[00:44:34] John Nash: Yeah.
[00:44:35] Jason: we might as well put a link in
[00:44:36] John Nash: No. Yep.
[00:44:37] Jason: check you out, in case they've never heard of this thing called the LMS. But yeah, thank you so much for joining us. Really appreciate it.
[00:44:46] John Nash: Yeah. Thanks Ryan. Yep.
[00:44:47] Jason: Yeah, thanks.