Online Learning in the Second Half
In this podcast, John Nash and Jason Johnston take public their two-year-long conversation about online education and their aspirations for its future. They acknowledge that while some online learning has been great, there is still plenty of room for improvement. While technology and innovation will be topics of discussion, the conversation focuses on how to get online learning to the next stage: the second half of its life.
Episodes

Wednesday May 24, 2023
EP 11 - OLC Design Thinking Session Wrap-up: How Might We Humanize Online Learning?
In this episode, John and Jason talk about their OLC Innovate 2023 Design Thinking workshop which asked the question: How Might we Humanize Online Learning?
Join Our LinkedIn Group - Online Learning Podcast
Resources:
The JumpPage with the results of the design thinking session at OLC
Michelle Pacansky-Brock's website: https://brocansky.com/ and her article: Pacansky-Brock, M., Smedshammer, M., & Vincent-Layton, K. (2020). Humanizing online teaching to equitize higher education. Current Issues in Education, 21(2), 1–21. https://cie.asu.edu/ojs/index.php/cieatasu/article/view/1905
The book chapter “Humanizing the Online Classroom” by Renée E. Weiss found in Principles of Effective Teaching in the Online Classroom (2000)
The idea of "Dehumanizing" education found in Paulo Freire's “Pedagogy of the Oppressed” and a more current 2020 article here "A Critical Approach to Humanizing Pedagogies in Online Teaching and Learning"
One article discussing Transactional Distance Theory in the online classroom
Also, Whitney Kilgore's 2016 work looking at humanizing online MOOC experiences
And every educator should pick up John Nash's book: Design Thinking in Schools: A Leader's Guide to Collaborating for Improvement
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcription and human editing. Please check against the recorded file before quoting anything, and reach out to us if you have any questions or can help with any corrections!
[00:00:00] Jason Johnston: It's Monday morning. What can we banter about today as we begin?
[00:00:03] John Nash: That's a good question. What do you want to banter about?
I'm sure—
[00:00:12] Jason Johnston: It'll come. I'm sure it'll come.
[crickets...silence]
Introduction
[00:00:18] John Nash: I'm John Nash here with Jason—
[00:00:20] Jason Johnston: Johnston. Hey John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning podcast.
[00:00:27] John Nash: Yeah. We're doing this podcast to let you in on a conversation we've been having for the last two years about online education. Look, online learning has had its chance to be great, and some of it is, but a lot still isn't. And so, we need to get to the next stage. How are we going to do that, Jason?
[00:00:43] Jason Johnston: And that is a great question. How about we do a podcast and talk about it?
[00:00:47] John Nash: Let's do a podcast and talk about it. Let's do an episode on what are we going to do today?
Main Discussion
[00:00:53] Jason Johnston: Hey, let's talk about our design thinking workshop at OLC Innovate. I thought that was really great experience for us, and we just wanted to spend a little time to wrap it up. Sound good?
[00:01:06] John Nash: Yeah, that sounds really good. I think we should talk about that. I think the workshop exceeded my expectations, given the constraints. Yeah. And so, it's going to be—yeah, let's talk about what, what occurred, because it surprised us.
[00:01:17] Jason Johnston: Yeah, it did. And you are the design thinking expert. You have an actual book. This is a great time right at the front of the episode. Why don't you plug your book, John?
[00:01:27] John Nash: I have an actual book, not a virtual book or a fake book. It's an actual—
[00:01:31] Jason Johnston: It's also virtual though, right? You could also—
[00:01:33] John Nash: Get it virtually. No, my, the publisher will not put it out on Kindle. What? I don't—yeah, and some publishers go right to Kindle for authors. In this one, it's Harvard Education Press. They're no slouch. I'm proud to have been a part of their—yes, their work, of course, but I don't think it's out on Kindle yet. But the book is called Design Thinking in Schools: A Leader's Guide to Collaborating for Improvement. And it came out in 2019. And it's a, yeah, it's a handbook with some stories and examples and how-tos for leaders that want to use design thinking in schools. And that's, yeah, Harvard Education Press. You can get it on Amazon and get it from their website. Yeah.
[00:02:10] Jason Johnston: Yeah. And it's very good. I'm not just saying that because I like you and we're friends, but I like that it's a very practical book if you're in education and with a lot of talk around design thinking about using design thinking. And I think it was a nice model within that, the—a very manageable book, I will say, in terms of size. I think in—I think quite concise. And you're also a great writer. Enjoyable to read. Thanks. But I think it just was very clear about what the process was. I think once you've gone through the book, even if you've never done anything with design thinking before—
[00:02:46] John Nash: Yeah. Every chapter is a step in the process. And, and you're right, it's no tome, but it's no pamphlet either. It's sure, yes. It's sufficient. It really does—it gets you into the details. So, that's why when OLC decided to have sessions that were actually called Design Thinking Sessions, I wanted to jump on that. I'm always interested to see how different organizations are interpreting design thinking. And you're right, design thinking is popular and more and more organizations are thinking about it, but how they describe it or operationalize it can vary. So, I'm also always eager to see how a session like they had would fit with our visions of what design thinking—
[00:03:24] Jason Johnston: Could be. Yeah. And I was really glad to partner with you on this partly so that I could learn more. And so, we approach the idea—every design thinking problem has a wicked problem, you'd say, right?
Yeah.
And something that's very difficult. It's not a clear-cut answer. And so, our issue that we were talking about at OLC was: How might we humanize online learning? And what was really cool too is we got a room full of practitioners. We had four tables full of just really amazing people—instructional designers and administrators and some faculty there who were really interested in the problem. And what I was amazed at was how they just dug into it like right away. They just went—
[00:04:13] John Nash: For it. We didn't offer them any chance to really get a preamble from us on what we were going to do. We just went to work. And so maybe for the benefit of the audience, we can describe that. What OLC gave us was about a 45- to 50-minute session in which Jason and I decided to do a full design cycle. So, that means we were going to have people come in and immediately start doing some empathetic interviews with each other, some need-finding in a rapid fashion. And then they were going to brainstorm some solutions to humanizing online education based upon what they learned from each other by understanding their unmet needs, what the problems were in their lives around online learning. And so, the brainstorm yielded scores of ideas in about five minutes, and we can talk some more about what those were. And then they harvested the brainstorm and actually prototyped on chart paper their solutions. And we had some interesting solutions that are now not just built out of whole cloth somewhere in someone's head but really generated from people who are in the mix, who had unmet needs that the challenge was going to address those needs. Yeah. Yeah. I think it's important to talk about design thinking as a solution-finding process, not a problem-solving process. And so that's what made this nice was getting people to talk together about finding solutions to challenges, not just trying to fix a problem.
[00:05:38] Jason Johnston: Yeah, and really our ideal was not just to talk about humanizing online education, but to display a design thinking process they can continue to use that process to, to solve their own problems back in their own context. Right. Yeah.
[00:05:53] John Nash: We had a twofold outcome, which was to inspire some new ideas for thoughts around humanizing online ed, but also build capacity amongst the participants on how they could use this solution-finding process that's generative and, mm-hmm, learner-centered, and they could use that back in their home institutions.
[00:06:13] Jason Johnston: Let's take maybe a little bit of a step back and talk about what the idea is of humanizing online education. Now, for the benefit also of our listening audience, we did not give a lot of description to what this meant.
No.
We allowed people to use their imaginations and think about what this means. That's right. We've talked about it here without a lot of definition. In some ways, humanizing online is a bit of a paradox, right? Because we are working at a distance. Distance education is when there is a physical separation between the student and teacher. And we use the means of technology to try to bridge that distance. And bringing a human element to that is a bit of a paradox. Is there a way that you would like to conceptualize in a few words what you are thinking about when you think about humanizing online education?
[00:07:06] John Nash: When I think about humanizing online education, I firstly think about trying to stay learner-centered and getting learner input into the teaching process. Yeah. So, certainly as a teacher, I have goals that I want my students to achieve, but I also want to get their feedback, input, and ideas on how they like to achieve those goals with me and maybe also fill some gaps that I haven't noticed yet that they have, that they'd like to achieve.
[00:07:39] Jason Johnston: Yeah, that's good. Learner-centered. Mine's very similar. I think much of mine comes from some of the writing of Paulo Freire and the idea that often education can be dehumanizing—not just online education, but face-to-face education can be dehumanizing—and figuring out ways in which we can rehumanize education, have a critical approach to our educational pedagogies, so that we are thinking about ways that bring more freedom, liberty, agency to our students. So, I think in that way it's very similar in terms of that kind of learner-centered approach. And we bring this up on the front end because we don't want to pretend—not that anybody would think this, John, but—oh, we did not come up with the idea of rehumanizing or humanizing education or online education. There's some people that came before us, and we'll put links in for Paulo. If you have not read Pedagogy of the Oppressed, that is probably for me the book that turned me on to education in general, me wanting to be an educator, was that book. Yes. That may—that kind of opened my eyes to say, wow, education is a way for people to find liberty in their own lives. So, we've got Paulo. There's a few other people as well. I wanted to mention on the front end, currently right now, Michelle Pacansky-Brock—and we'll put a link into her website—she does a lot of active work on what she directly calls humanizing online education, caring a lot about equity and who's being included and who is not being included. And yeah, and she's doing some great work there. There's the first—I was looking back a little bit. I was not familiar with this article, but I was wondering if I could find the first instance of an article that talked about humanizing online. And actually, I found one in the year 2000. So, we're looking—it doesn't, 2000 doesn't feel that long ago, John, but it's nearly a quarter century. I know. I know. And so, we had—in the year 2000, there's an article from a book, actually. The chapter in the book is called "Humanizing the Online Classroom." And within that, Renee Weiss, I think—Weiss—suggested that professors can use various techniques like creating a welcoming environment, using humor, sharing personal experiences and so on to help humanize an online classroom. And I think that's right in line with all the things that we're learning and talking about and wanting to hit.
[00:10:14] John Nash: Yeah. I also, when I think about humanizing the online classroom, I think about opportunities for students to be assessed through more constructivist activities and, yes, hands-on work where they create knowledge and learn together and also with their hands and their minds. I think that's—and that's becoming more and more important as we think about trying to assess learning outcomes in the wake of the advent of large language models and generative AI. We are—we're having to totally rethink the way we assess now as these tools start to become more and more in vogue.
[00:10:50] Jason Johnston: Exactly. Side note: where were you during Y2K? Were you, uh, huddling down in a bunker? Did you have some canned goods?
[00:10:58] John Nash: Exactly. Yeah. That's funny. Isn't that funny? I was—it was a big deal, and I was in Silicon Valley. I was working at Stanford. Oh, wow. And yeah, we were all freaked about Y2K. And are we prepped? Is the university prepped? I was working in a lab that was interested in the integration of technology into the undergraduate curriculum at Stanford. And online learning was a big deal at that point in time. And yeah. And then, yeah, nothing happened. We—
[00:11:20] Jason Johnston: Just—and then nothing happened. Yeah. We had this idea that—and for the younger folk listening to this podcast that didn't live through Y2K, the scare of Y2K, there was this idea that when all the dates on computers switched over to 2000, it was going to create major chaos. And there was hardly anything. We thought planes were going to fall out of the sky and our grids were going to go down and everybody was going to have to huddle for protection within their communities. And yeah, I wasn't a prepper by any means, but there was certainly a group of friends that we had backup plans to, to meet up at each other's homes or whatever, if we needed anything, if everything went down—just in case. But just in case—yeah, in some ways wasn't surprised that, no, the world continued on January 1st.
[00:12:08] John Nash: I think those are good examples of what we mean by humanizing online ed, and maybe it's a good place for us to point our listeners back to our first episode where we talk about this a little more in depth. Yeah. And what the—why the podcast is called what it's called, and what our thoughts are on trying to humanize online ed.
[00:12:26] Jason Johnston: Yeah. And in the second half of online life, one of our aspirations is that it becomes more, not less, human. Absolutely. That it would become less mechanized, less industrial, less dehumanized, more human. Yeah. One more article I just want to highlight—and we'll put all these links in our show notes—is by Whitney Kilgore, who's a fine person and scholar who did an article in 2016 looking at MOOC experiences, those MOOC experiences, and trying to figure out ways to humanize online learning through those very mechanical, in some ways—it's scalable—experiences online. And so that's a very, yeah, interesting—a handbook as well within that—looking at, if you're going to start a MOOC, here are ways that you can do it without being completely dehumanized. Yes. All right. Should we go into a little bit of a summary of our session there? Kind of walk the listeners through our session and let them know what some of the outputs were?
[00:13:26] John Nash: Yeah. Let's walk through the session and talk about what folks did. So, we proposed and executed a design thinking cycle in 45 minutes at this conference. And if any of you have been involved in using design thinking, it can be a deep dive for weeks upon weeks. You could—I teach a course over 16 weeks on design thinking. We use the same steps. So, to do a full cycle in 45 minutes was quite a feat, but it also turned into something that was very active. And actually, in the back end, when we got feedback on the session, people were actually quite refreshed by it because it's not a bunch of talking heads, but it's actually them doing some hands-on, heads-down work. We started with something we call "need-finding," or it's this part of this empathetic interviewing process where we asked the people at each table to take a few minutes and talk to each other about a few key questions about what was challenging about online education. Have you taken an online course lately? A lot of people that design courses and teach online courses haven't themselves taken an online course. And so, we asked them to try to describe what that was like for them in a three-word sentence to quickly capture experiences and feelings. We asked them to think about: What is something an instructor of online courses should know but doesn't? This is always revelatory because learners don't often get a chance to give feedback in that kind of way to teachers, but they're always thinking, "Man, I wish the teacher knew this," or "Wow, I wish the professor knew that." And so that led us to think about asking them to consider blind spots that might be present even among savvy online course instructors and instructional designers. And so: Have you noticed any blind spots lately in yourself or others? And what should course instructors and instructional designers start doing, stop doing, turn up and turn down in their own work as we think about trying to humanize online ed? And so, they chatted about this. We gave them about eight minutes to have these conversations. Someone in the group was a scribe and they captured everything on Jamboards that we provided for everyone. And they came up with a lot of great issues—things around student technical issues and planning time issues, accessibility, different levels of technical savviness among the students, and then how much time it takes to create—the time-intensiveness of course design—and not being in a fixed mindset. Many courses get designed, they thought, from a one-angle approach, and one size does not fit all. So, how do you think about those sorts of things? Good design and rubrics and thinking about tying outcomes to the actual lessons. Those were some of the big things that teams were picking up in this time that they had. It was pretty, pretty intense. Yeah.
[00:16:17] Jason Johnston: Yeah. And I was just looking at some of our notes on that, or the team's notes, and they were talking about a couple things from the teacher standpoint as well. Fixed mindset would fit into that, but also about kitchen-sinking everything, how easy it is in online courses. Yes. That we've talked about before. Just putting everything in and it being overwhelming. Being able to have good-quality videos for people, thinking about the student from a design standpoint.
Yeah.
[00:16:44] John Nash: Yeah. And so, after we had them get to know each other a little bit and consider what the responses were to these prompts, we moved them into a brainstorm activity. And we gave them five minutes and asked them to shoot for a goal of 50 ideas to answer the following question: How might we humanize online education? And they were to use the conversation they had prior to the brainstorm as inspiration for coming up with ideas. And I'm looking at one of the team's Jamboards here, and they had 32 ideas in five minutes on how to humanize online education, and that's pretty impressive. Very impressive.
[00:17:24] Jason Johnston: We really should have brought prizes for teams with the most ideas, but we didn't. Yeah.
[00:17:28] John Nash: And these ideas ran from having a syllabus quiz to ungraded activities to being empathetic with your announcements to just publish your policies. Some of these things are incredibly simple and low-threshold but get picked up on because—and these ideas don't show up out of thin air. They suggest—oh, we should have a late-submission policy posted because professors don't have a late-submission policy. And that, that really dehumanizes the process. Low-stakes or formative assessments, subtitles or transcriptions for videos. So, some universal design issues to just regular—just be a human being. So, yeah, those were—it was really interesting. And so that was just one team. And I—as I said, we had—I don't know about, I'm looking at my list here, and the—our main document—67 ideas, unique ideas, were brainstormed by four teams at the tables.
[00:18:22] Jason Johnston: Yeah, and we'll provide a link to our jump doc. So, if anybody wants to check out to see some of the outputs and this whole list of five-minute brainstorming sessions are there, you're welcome to, to walk through and see which ones maybe you'd like to run—
[00:18:35] John Nash: With. Yeah. Yeah. Talk about what we did after that.
[00:18:38] Jason Johnston: So, after that, we then moved on into a harvesting session where on their Jamboard they were tasked to harvest the brainstorming. I hadn't seen this before, John. I really liked the way that you put this together, and again, we'll show you examples if you want to go in our links. But if you picture four quadrants or four squares that the teams can grab ideas from their brainstorms and put in each one of these segments. One on the top left is the idea that's the team's favorite. Beside that is the idea that's the rational choice. And then below on the left is the idea most likely to delight learners. And then on the bottom right is the idea that they'll never, ever let us do, but if they did, it would be awesome.
[00:19:30] John Nash: Yeah. And that one's called The Long Shot. A great way for a team to take like these 35 ideas—if we're going to ask you to prototype one of these, how in heaven's name can you select the idea to prototype? So, this is a way to harvest that brainstorm, get it down to four doable ideas, and then from that we ask them to draw one. Yeah. So, the team favorite, the rational choice—that gives even—that sort of gives leaders and systems an opportunity to actually use an idea that was derived out of design thinking. But they don't have to go too far afield. They—oh, that's irrational to do. And it was still derived from human needs and human interests. And so, I think that's what's important there. One that's going to delight the learners, and then this long-shot idea. And it's funny, Jason, a lot of times when people pick a long shot because they believe their institution or their leadership, or in P-12 schools where I work, the principal will never let them do it, it's actually pretty low-threshold and not a big deal to the leadership. But it's—there's this disconnect between what people think they can have and what they can really have. And sometimes they're closer than you think. You just have to get them to reveal it.
[00:20:35] Jason Johnston: Yeah, and I liked the framing of these because each team was very quickly able to pick four ideas. I thought this part—we only gave them five minutes. It happened very quickly, and I feel if the teams had to pick one idea, it would've taken them longer. Yes. If we had asked them: Pick the one best, absolute idea in this whole list. That's right. It might have even taken them longer to come up with something. But this just gave them kind of a framework to think about it in different ways and maybe then they can move forward.
[00:21:05] John Nash: This is effective because it doesn't put the pressure on the teams to pick a value of "best" on any one idea, but rather just give them labels like: Yes, on that one, that would probably be a favorite. It's not—we don't judge whether it's a good idea or a bad idea. It's just that would be favored by the team. This would delight the learners. And that's the, yeah, that's the nice thing about this. It takes the pressure off putting a value on the ideas.
[00:21:28] Jason Johnston: Yeah. A few team favorites were: having personalized, human-touch announcements beyond "here's your course and here's my email"; incorporating more UDL into their work; building community through student video introductions. Yeah. And breaking the audience of one.
[00:21:49] John Nash: Yeah. Yeah. And it's funny, I always like to ask what the long shots are too, because they'll usually want to prototype the rational choice or the delight-to-learners one. And I say to people though, "What was your long shot?" And so: complete ungrading—no grades—was a long shot. And that's funny because I think we're going to talk to someone in a future episode that's interested in ungraded courses. And I've been playing with that too, so it's funny that's a long shot. An instant AI quiz generator for quick checks—not graded—that'll never happen. That's a long shot. Problem-based learning—it would be a long shot. And then the last one was incorporating high-quality video and VR capture equipment. So, really let's up our laboratory for creating content. Yeah. Yeah. Those didn't seem like such long shots to me.
[00:22:37] Jason Johnston: They didn't. Depends on the context as well. Exactly right. Like, I feel fortunate to be at University of Tennessee. We have equipment in place that we can test VR environments and record VR environments and move forward just as faculty want to be able to do that. And so, it—that doesn't feel like a long shot to me, but in some places, maybe if they've been said no to so many times whenever any equipment needs to be purchased, then it feels like much more of a long shot. They have their own kind of hurdles to get over in that way.
[00:23:08] John Nash: Yeah, for sure. And so, then we had them select one of those four ideas and prototype it. And prototyping in a design thinking cycle—we prototype to learn, and we like to draw when we prototype or make a physical manifestation of the idea. So, in this case, we asked them to pick one of the four ideas that they harvested and they draw it with markers on a large sticky Post-it note that was going to go on the wall. And we had some constraints. We told them that they may only draw—they may label things with words, but it may not be a bulleted PowerPoint slide—and there was rapid prototyping. They had 10 minutes to work quickly, make a sketch or a chart or a diagram, and then give a little solution name at the top and how it solves the problem of humanizing online education.
[00:23:58] Jason Johnston: Yeah. So, they came up with their prototypes and they did a—again, fantastic job. Amazed how quickly these teams worked together, people that had never met each other before. That's right, for the most part. And it shows you what you can do in a short period of time when people are eager to learn and they're there with the right mindset. And team one's solution: they went with breaking the audience of one, which was their team favorite. Where they talked about how they could make it more humanized by removing grades entirely. This is interesting. It was their personal favorite, but another team—it was the one that wasn't—
It was more long shot.
Going to happen. Long shot. Exactly. Yeah. Instead, students are going to set goals for themselves related to their study topic and collect artifacts to show that they've achieved those goals.
[00:24:46] John Nash: Yeah, that was really cool. And when we share these out, we'll show folks how simplistic the drawings are, but they convey very complex, important ideas. And that's one of the mindsets of design thinking: Show, don't tell. Yeah. And another one is a culture of prototyping. And these bring—these together—even simplistic stick drawings can convey a lot because it gives you a chance to talk through the problem, not have someone read the problem and react to it. You look at it together. Yeah. Team two, their solution was pretty cool. It was about building community through student video introductions. And so they wanted to create a sense of community among online learners by asking students to create a short video introduction of themselves, maybe share prior knowledge about the topic they're studying, and then it would be in an online discussion platform—could be in Flip or some other place where it could be a threaded discussion, but with video. So, I thought that was pretty cool.
[00:25:43] Jason Johnston: Yeah. In some ways this one kind of reminded me of the fact that it's not so much about prototyping a brand-new invention. It's the solution that you're looking for. Like, these solutions technically are already out there, but how are you going to implement them? What are you going to choose in order to meet the problem?
[00:26:02] John Nash: That's right. Because all of these solutions are resources that may already exist, but they're offered here in the context of problems that were revealed and challenges that were discussed in the beginning of the session. We might say, "Hey, we should use Flip to introduce students at the beginning of the course," and sure, that's fine. We could do that. But in this case, it's contextually embedded in a problem that we want to solve, and so that's what makes it nice. Oftentimes, we find in a design cycle that the solutions we end up with are common, low-threshold solutions that now have more meaning because we understand why they're important to use now in the context of the challenge we want to solve. And so yeah, that's what makes them different.
[00:26:46] Jason Johnston: Yeah. Team three's solution was "Say Hi," which is a little similar to the previous one where they're talking about a humanized and personalized experience where it emphasizes those videos on the front end—say-hi videos—to help students and faculty get more connected to one another and try to achieve a sense of feeling more valued and supported within the learning community.
[00:27:12] John Nash: Yeah. Yeah. In fact, the thread throughout all the solutions was a communication and get-to-know-you, create-a-community type of solution. The team four's solution was called "Module Zero," and they wanted to create a more interactive experience for students by having them introduce themselves on a social board. And it—but it was a more general Q&A board that's open throughout the whole semester. So, this builds community but also gives an opportunity for instructors to give surveys throughout the semester, other check-ins that could ping the community and get it back through. So, the module lasts throughout the course in this case. But it is interesting, isn't it, Jason, that all of them really deal with the audience—breaking the audience of one, building community by saying hi, and having a module that threads through the course to keep community going.
[00:28:02] Jason Johnston: Yeah, it's interesting that these are ideas that keep coming up over and over again. I think even in my own work, and as we're talking about online learning in different ways to address those issues. Yeah. Yeah.
[00:28:14] John Nash: That's good. Isn't it interesting, your comment, though, like: This keeps coming up in my work and we keep hearing it across our conversations in professional circles, so isn't it bloody obvious that we should be—yeah, why do we have to have a whole session to say, "Hey, try this"? I think we can probably just say, "You ought to do these things." But it's putting it in—everybody's got their own view on the world that's got to be honored. And I guess that's part of humanizing the instructional design of online learning too, right?
[00:28:40] Jason Johnston: Yeah. In your context, figuring out what works. Yep. As well as just—it's a recognition that we tend to drift from this. Yes. So, we need to remember it over and over again because when left to our own devices, without remembering the learner, without remembering that there's a problem here—if we just stick all our content into the course shell and expect students to naturally connect with one another—then we're going to drift away from it unless we keep doing some sort of course correction. Yeah. To come back to this humanizing. And it even reminds me of—there's theories that I've been working with now for years for working online, which is transactional distance, which is a communication theory originally, which is this idea that when there's a distance between the student and teacher, the more real and human your communication format feels, the closer you will feel together as people. And it seems really simple, seems really obvious, but again, there's lots of different ways to do it and these solutions are also pointing to a reduction of that transactional distance.
[00:29:48] John Nash: Yeah. Yeah. Really good. I think one thing that's nice also is that we demonstrated the ability to use a process to get to ideas that are contextually embedded and it doesn't have to take a long time. I think—I think there would be some fair criticisms out there amongst our listeners and others to say, "You really rushed this design cycle business." And actually, if I was being fair, I would criticize our work by saying we didn't have real learners in the session where—and say instructional designers and instructors are having an empathetic conversation with learners in an online space, learning about their needs. So, in that case, we did not really stay to the fidelity of the idea of human-centered design. But we've given them an opportunity to practice on themselves to say how the process works. And that's really what's important: Do I have a process in place to arrive at solutions that are contextually embedded and help our learners? Yeah, I have that now and I can practice that.
[00:30:46] Jason Johnston: Yeah. That's great. Any final thoughts about this before we wrap it up? We'll certainly provide links to all of these documents so people can look at them themselves and hopefully derive some ideas.
[00:31:01] John Nash: No, I think I just—if we could do this again, and if I put out an urging to conference organizers, I would say add a little bit more time to this and then to let the design process stretch out a bit. And then I would say to presenters, to ourselves and others that want to try this: try to bring real users—we say in the parlance, but the real—bring in people that are living the problem that you want to help solve, and have them be a part of the design. So, instead of having a bunch of instructional designers and professors and instructors in a room, we would have them in addition to students from—could be secondary schools, could be other post-secondary institutions doing online learning—and, and really have a session where you get to talk and learn about what learners are going through and what they feel. That would be really cool.
[00:31:47] Jason Johnston: Yeah, for sure. And again, I'm going to plug John Nash here because he is not going to plug himself, but he also does this kind of work really all over the world. You've done this all over the world, haven't you? Working with people design-thinking-wise? Yeah. And educational situations all over the world. And he is open to some consultations, I'm sure, and working people through these processes, especially if you're really—especially if you're really nice to him. I find if you're kind to John, then he'll be kind to you. Yeah.
[00:32:18] John Nash: I'll give you my obligatory "Aw shucks." But yeah, always love talking about this. I will admit to that. Yeah. Happy to talk with anybody about it.
[00:32:27] Jason Johnston: Yeah. That's good. This has been—it was a great session. I really appreciated working with you on this, John. And I think if listeners out there were in our session, thank you again so much for just diving in and for participating. That was so much fun. And I hope for listeners that weren't there that I hope you get something out of this podcast, and not only about online learning specifically, but also about the process and how really accessible it is to be able to do in your own context.
[00:32:56] John Nash: Yeah. It was great. We couldn't have done it without the humans. That's right.
[00:33:00] Jason Johnston: Absolutely. Which is good, because that's what this is all about, right? That's right. Yeah. Yeah. Don't forget the humans. No. Thank you and make sure you check out our show notes at OnlineLearningPodcast.com. Yeah. If you would—speaking of an "aw shucks" moment, I'm not even going to read it on air because I would just—it would just be too—I would just be turning red through the microphone. We got a very kind review on Apple Podcast that was really nice. And so please check out our website. If you can review us on Apple Podcast, I think that it does help the—the AI.
[00:33:33] John Nash: Please go out and tell the bots that we're worth the—
[00:33:36] Jason Johnston: Time. That's right. That humans—other humans are actually listening. Yes. And that maybe it should be put out in front of other people. We'd love to get the podcast out and join us at our LinkedIn community as well. Just look us up—Online Learning Podcast. And feel free to drop us a message on LinkedIn. We'd love to hear from you. Thanks so much.
[00:33:57] John Nash: And what we should be talking about next. Yeah. Thanks. Absolutely.

Monday May 08, 2023
EP 10 - Podcast Super-friends Crossover Episode at OLC Innovate 23
In this episode, John and Jason record with podcasting friends from ASU and West Chester University at OLC Innovate 2023 to talk about (what else) podcasting and humanizing online education in the second half.
Join Our LinkedIn Group - Online Learning Podcast
Podcast Super-friends Links:
Course Stories Podcast at ASU
Course Stories, Episode #4: Don’t Fret Design: A Student-Centered Approach to Online Learning, with Brendan Lake
ODLI On Air at West Chester University
Resources:
The Thin Book of Trust: An Essential Primer for Building Trust at Work
Guest Bios:
Arizona State University
Mary Loder is an Online Learning Manager at EdPlus, supporting faculty professional development and training, along with managing special projects in a variety of disciplines. She is also co-creator and co-host of Course Stories, a podcast where an array of course design stories are told alongside other designers and faculty from Arizona State University.
Ricardo Leon is a Media Developer Sr for EdPlus and is a co-creator and co-host of Course Stories. He has developed a number of other podcasts and various other forms of instructional media.
Brendan Lake is the Director of Digital Learning for the ASU Thunderbird’s 100 Million Learners project, offering no-cost management education in 40 languages. He also moonlights as a music faculty member with ASU’s Herberger Institute for Design and the Arts.
Timothy McKean is the Manager of Online Learning for the Herberger Institute for Design and the Arts at ASU. He advocates for authentic assessment and ensures faculty presence and personality are highlighted in online learning. Tim can be heard as an online learning guest on several podcasts and YouTube channels and does freelance audiobook and eLearning narration.
West Chester University
Dr. Tom Pantazes is an instructional designer and podcast cohost with the Office of Digital Learning and Innovation whose research examines the practical applications of interactive and multimedia content in digital instructional environments. Towards this end, his work seeks to connect learning theory with the affordances of various technologies as they are utilized with students. If he is not building Legos, you can catch him on Twitter @TomPantazes.
Transcript:
We use a combination of computer-generated transcription and human editing. Please check against the recorded file before quoting anything, and reach out to us if you have any questions or can help with any corrections! Apologies to any speakers we didn't label or mislabeled in this transcript; we hope having it helps, but we just don't have the capacity to fix every small detail.
[00:00:00] Ricardo: Oh, this is like the crossover episode. Yeah. You're hosting the Super Friends episode. Awesome. I didn't realize it until this moment. I love those episodes. I can't believe I get to be part of one.
[00:00:32] John: Hi, I'm John Nash. I'm here with Jason Johnston.
[00:00:34] Jason: Hey John. Hey everyone. Hey, hey, what's going on here? Hey, we've got people in the—
[00:00:38] John: Room.
[00:00:39] Jason: With us, John. This is—
[00:00:40] John: Exciting. It's not a live audience. It's, it's, uh, colleagues and brethren and sisters and, uh, podcasters.
Yeah, that's pretty—
[00:00:47] Jason: Cool. We're here at OLC 2023 Innovate in Nashville, Tennessee. And, uh, we've got some of, uh, new friends and old friends with us here to talk about online learning. And we have a little bit of a common denominator here with podcasting. So we'll talk a little bit about that, but why don't we start with some introductions? We're going to start over here on this side.
Hello. My—
[00:01:09] Brendan: Name is Brendan Lake. I'm an instructional designer and music faculty at Arizona State.
[00:01:14] Mary: But what is your title at Arizona State University? Because I feel like you're too humble, honestly.
[00:01:18] Brendan: I have a lot of titles. I wanted this to be less than three hours. So my title is Director of Digital Learning Initiatives with the Thunderbird School of Global Management, where I lead the 100 Million Learners instructional design team. And I'm also a music faculty, teaching guitar and other music topics at the Herberger Institute.
[00:01:35] Jason: Sounds cool.
[00:01:36] Tom: I—
[00:01:36] Speaker 2: Like that a lot.
[00:01:39] Tom: I will not be that long. Dr. Tom Pantazes, an instructional designer with West Chester University, which is outside of Philadelphia, and the co-host of the ODLI on the Air podcast.
[00:01:50] Mary: That's a cool name.
[00:01:51] Tom: We worked very hard to get it. It's a good one. And we're—our office is changing its name and we're very sad about—
[00:01:59] Mary: I'm Mary Loder. I'm the Manager of Professional Development and Training at Arizona State University. I'm primarily working within EdPlus and ASU Online. And I am also co-host, co-creator of Course Stories, a podcast where we talk about an array of course design stories alongside other faculty and instructional designers.
[00:02:19] Speaker 8: I'm Tim McKean. I'm the Manager of Online Learning for the Herberger Institute of Design and the Arts at Arizona State University. And I have no current podcast, but I started podcasting way back in like 2005 with my middle school students, and I'm an avid podcast listener.
[00:02:36] Ricardo: Uh, my name is Ricardo Leon. I'm a Media Developer Senior with EdPlus and, uh, ASU Online, uh, and I'm a co-host along with Mary, co-producer along with Mary, of Course Stories.
[00:02:51] Mary: Yes, we work together.
[00:02:51] Ricardo: We work together.
[00:02:53] Jason: Hey, and that's your—
[00:02:54] Speaker: Intro. That's cool.
[00:02:56] Jason: You guys do a great podcast, so I encourage listeners to check out the Course Stories podcast. And say and spell your podcast name once again, Tom.
[00:03:06] Tom: So, ODLI on the Air is O-D-L-I, which is the abbreviation of our office. It's actually a podcast where we interview our faculty at West Chester about their teaching and all the different ways that they do it. And we have about a thousand faculty, so I figure we've got a good 45 years in front of us.
[00:03:21] Jason: That's awesome. That's great, and they keep changing faculty, so you probably—you may, maybe non-stop actually—you may just keep on going.
[00:03:29] Ricardo: Yeah, that's good. Well, I want to mention real quick that Brendan is actually a guest on the first season of our podcast, a really great episode talking about, uh, a guitar, uh, course. And, uh, and we even have music of Brendan playing guitar as our interstitial music.
Yeah.
[00:03:48] Jason: Yeah. Yeah. Well, let's get talking a little more about podcasting. You know, one thing I really like about the Course Stories podcast—I, I think you produce it well, you've got some great guests on—but I also like how you guys kind of, you kind of, in my mind anyway, you're like a bubble that kind of pops out of the podcast for a minute. I visualize this kind of bubble that pops out of the podcast, and you just talk about whatever it is you're talking about. And you give a little background and maybe explain something for people that aren't listening. And then you pop back into the podcast again. How did that—is that something you started from the beginning? Because I'll admit I have not listened to all of them.
That's fair, there's a lot of them now.
Um, how did that come about? What was the, kind of, the thinking behind that?
[00:04:31] Mary: I want to give Ricardo all credit. Our podcast is awesome because he's an awesome media specialist. And then in addition, he's the one who came up with the idea of these, like, intentional interventions for debriefing concepts that a typical listener who's not faculty or an instructional designer wouldn't know about, or that he genuinely has a question on, and then vice versa.
[00:04:51] Ricardo: Yeah. We just—we wanted to add a didactic, a didactic element to it, uh, so that it could be more accessible to a lot of different listeners. So, you know, primarily our audience is faculty and course designers, but a student might be interested in the course. I think we've got a lot of crossover with Brendan's episode because people want to hear about this really great guitar, uh, course. Uh, but we might be, you know, some of those designers or the faculty might be talking at a level that might not be familiar to, you know, every listener. So we want to have that didactic piece to kind of catch everyone who's listening up to what the topics are. And I'm a dummy, so that's easy for me to, like, be there going, what is, what is PlayPosit, Mary? I don't know what that is. I probably have forgotten by the time—now I can, I guess I kind of know what it is. But, you know, so that—so I'm that, you know, in an infomercial, the guy: "But, but how do you get juice out of that whole, you know, orange?"
[00:05:45] Tom: Hearing you say that then, are you guys doing those bursts as you're recording in that moment, or are you doing that in post?
[00:05:51] Ricardo: In live, uh, episodes, we are doing them in the moment, but no, they're all being done in post. We have conversations, uh, right, you know, right after. Usually it's—it works best when we are able to do it right after, but we're not always able to. But then, yeah, we, we take notes as we're listening. So Mary and I, even though we're not featured in the interviews, we are taking notes, producing, uh, coaching our instructional designers on, on good questions to ask, or maybe saying, "Hey, you know, that's really not clear. Can we get you to, you know, define that a little more?" Or we take a note and we define it later.
[00:06:25] Speaker 8: Which I think is really great because it allows the continuity of the two hosts throughout all the episodes. Yeah. But then it empowers you to have other voices, you know, it empowers you to bring in your instructional designers or, mm-hmm, you know, to, to interview the, the faculty that they worked with. And I think it's really cool how you get a lot more voices in there while still having the consistency of the commentary.
[00:06:46] Ricardo: Yeah, and sometimes those interjections are a totally another guest that's in the studio with us that can bring—
[00:06:51] Speaker 8: Yeah, yeah.
[00:06:51] Ricardo: Yeah. Oh yeah, Regina—on one episode, we have another podcast producer from Arizona State, Open Conversation. Uh, she—and I could not say her last name if you held a gun up to my head—but Open Conversation's the name of her podcast that she produces for ASU, and she's coming in. And so we had an episode—what was that covering, Mary? Do you recall?
[00:07:17] Mary: Oh yeah, misinformation.
[00:07:21] Ricardo: And she's a—she's a Russian-born journalist. So she had a lot to say about misinformation in society, so we were kind of able to color and kind of elevate some of the, the topics that we were talking about.
[00:07:32] Tom: Mm-hmm.
[00:07:34] Jason: Tom, I'm curious about—were you there at the start of your podcast? Were you a part of the beginning there?
I very much was at the beginning. We actually have—
[00:07:41] Tom: An episode on that, if you're interested.
Oh, that's cool. That's good to know. What, what kind of brought it about?
So I was thinking about professional development and ways that we can engage our faculty kind of beyond the traditional models of workshops and, uh, webinars. And it is something that kind of came to mind, like, "Hey, we should give this a try," right. Particularly because they're evergreen. Like they, they just—once they're out there, they're always there. We can reference people back to them. And I really love talking with our faculty and hearing their stories, and it sounds like, in some respects, like what Course Stories—
[00:08:13] Mary: Talks—
[00:08:13] Tom: About.
Um—
[00:08:15] Speaker: And—
Crossover.
Exactly. Oh, this is like the crossover episode. You're hosting the Super Friends episode. Awesome. I didn't realize it until this moment. I love those episodes. I can't believe I—
[00:08:28] Tom: Get to be part of one. So, that opportunity to bring some folks in and hear their story and share that with others. Um, a chance for them to promote their work a little bit. So to kind of help them. And when we set out, we said, you know, if this totally bombs, at a minimum we will get some experience creating and producing. Um, and it is safe to say it has not totally bombed. We are—we'll be back for episode, or season three here.
Nice.
So, uh, I think for me, one of the big things too has been how much fun it has been. I didn't go into it thinking it was going to be this much of a joy, and I wish I could just do it all the time now. So, um, highly recommend podcasting for anyone who's interested.
[00:09:10] Mary: I will say, Brendan's episode helped us highlight some really unique things that are good for any online classroom. Specifically, just the level of care that it takes to be a good online instructor. And that's not always present, necessarily. Like, a lot of our faculty are amazing—Brendan—but there's a true concern, I think, at any university when you're an online instructor and you've done it online, you're great. When you're an on-campus instructor who's being asked to go online who doesn't have experience, it's not a vein of understanding, it's a totally different potential experience and perspective.
So Brendan's episode's great because he's an online instructional designer who then developed an online course around guitar. And like, what? Learning how to play guitar? But like, that's a thing that you do.
Right?
[00:09:58] Brendan: And actually the whole episode is not about guitar. It's really about how I approach growth mindset and how I approach practice and feedback at scale. And I just gotta say, like, the Course Stories podcast has been such a blessing for me as a faculty member, both to tell my story, because, you know, when faculty—unless you're really good friends with each other—you talk about the product, not the process. And the process is 98 percent of it. Um, your philosophy about why you do your grading system, you know, instead of just saying about the rubric, which is the end of the road.
Um, so my opportunity to hear, um, you know, what other faculty are doing, it's, it's, it's amazing. It's easy faculty development. And any instructional designer will know faculty listen to each other a lot more than listen to an external instructional designer. And so that trust is immediately there and those lessons just hit deeper. It's just an amazing faculty development resource for anybody.
[00:10:47] Tom: For the faculty at ASU, will this—being on a podcast—count as scholarship for them?
[00:10:54] Ricardo: I think, I think that, you know, certainly our leadership is, is—feels that that is important. I think that they've been very supportive of, of our IDs and our media specialists taking time, uh, you know, within the bandwidth of what they do altogether to, to, to appear on these episodes. Because it is, it is, uh, it's not a huge lift, but it is, you know, it does take time. There's a lot of investment of, of time and energy into, into doing an episode. Uh, so we, we—and that's kind of been one of the things to help me and my bandwidth—is to say, okay, we, we have, uh, this—instruct, we have this instructional designer working on this episode, and their job is to coordinate the, the pre-interview and to come up with questions and, and bring that to the table. And so that gives us, you know—they do that production side of it, and then that helps us, you know, with the rest of it.
[00:11:49] Mary: And to the faculty perspective, it's going on their CV, so they must care.
[00:11:53] Speaker 2: Right.
[00:11:53] Mary: And they are advertising, right?
Yes.
And they're sharing it with their friends, they're sharing it with their program leads. And I think, like, is it accepted as part of their scholarship? Maybe. I think that all depends on who their program lead is and maybe their ability to see podcasting as a viable source of creating content. Um, and it is. So for those who are listening and don't think it is, you're wrong. It is.
[00:12:16] Ricardo: It's a publication. You know, we call it that. We say, you know, these episodes are being published, you know.
[00:12:21] Mary: And you're speaking about this specific process of working within that vein. It's the same as writing an article, in our perspective.
[00:12:30] John: I'm a regular titled-series, tenure-line professor at an SEC institution, and the answer is, it depends. And chiefly, no, if you've said the word scholarship. So you're right, Mary, it goes on the vita. It's, it's probably—and depending on when it comes time for biennial reviews, for merit. It all depends on your unit head, your department chair, and your dean. If your dean is hardcore science and believes in, you know, traditional publications and peer-reviewed journals count as the scholarship, this probably won't rise to the level of their interest in changing your score. Or maybe they'll even lower the score because they think you have a sympathetic chair, it hits the dean's desk, and the dean goes, "No, not so much."
And so, it really, it depends. I value it, you know, of course, and I see how it can work, but it's not everybody's cup of tea yet.
[00:13:28] Ricardo: Yeah, I think you hit the nail on the head—the scholarship part of it. Because that's when I was like, "Wait, wait, uh, scholarship." Well, certainly career building. And, you know, it's certainly something that you can list on your CV that has a link to something. You can hear about your, your—what's going on in your brain.
[00:13:43] John: Yeah. Tenure-line faculty have three big jobs: teaching, research, service. So you help us figure out where it fits in one of those three things, we're golden.
Service.
Probably service. Um, and teaching too. I mean, depending on what you're podcasting and if it's hitting your courses or advancing learner outcomes, absolutely, I think.
[00:13:59] Tom: And I ask because it does count for some of our faculty. It just depends on department.
Right.
And that's, that's—it depends. And we love to use it as instructional designers.
[00:14:07] Jason: And I think, for, for us, one of the reasons why we started this podcast—we were kind of talking about this previously—was, uh, we just, we wanted to get stuff out in front of other people. We wanted to continue the conversation with other people. And I feel like so much research, as valued as that is, you know, it hits—like, what are some of the percentages, John? You might know, or somebody else might know, about the number of, yeah, the papers that are out there in the wild that get read. Like, a handful of people read those papers. And we were thinking about just ways that we could continue online education in a way that would, uh, you know, continue the conversation and research and maybe connect with a larger audience than a paper would.
[00:14:48] John: But it goes to this translational nature of research. And I think—help me out if you guys were around—it was maybe 20, 23 years ago when NSF decided that some sort of translational aspect had to be a part of everything that was going on, and all the scientists were going, "Oh my gosh, how am I going to talk plainly about this work?" And I think that still isn't getting the credit it deserves. This sort of translational work is still not as valued as the scholarship that gets translated. Which is a shame.
[00:15:22] Jason: Yeah.
[00:15:23] Tom: It's interesting we hit this because it's something I didn't mention about our podcast—we always anchor every episode around some piece of scholarship, whether it's a publication or an article that's related to the topic at hand so that we can make some of those connections and help the faculty kind of do exactly as you're talking about and share that in a way that's accessible to a wider audience. Because we know all our moms are listening to the episodes. That's—
[00:15:46] Ricardo: At least three.
I think there's also—
[00:15:50] Tom: An—
[00:15:51] Speaker 8: Aspect of modeling. Right. One of the things that we want to do for professional development, as much as possible, is model the skills and the techniques and the tools that we want faculty to use. So if they can see themselves using it as a learner, then they can see how they could use it as an instructor. Um, so using media, and basic and simple media production techniques, to broadcast your knowledge, your research, your ideas in a way that has personality, that creates a—you used the word connection—that creates a connection and creates a community. Um, and I think that podcasting often creates more community than a publication does because of the nature of comments or feedback. Especially if your podcast is one that takes feedback, or reads off listener comments on the show, or answers questions and creates that back and forth, then you're really creating a community of practice in a way that a more traditional publication never really can.
[00:16:52] Mary: And there's all these layers and textures that go into creating these podcasts, right? So there's the storytelling component around what you're saying, but then there's also the storytelling component around, like, what you're hearing while they're saying what they're saying. And, like, the connection that comes from that is far more meaningful than reading text on a page.
[00:17:09] Tom: So I was doing some prep, because we're going to present on our podcast coming up here locally—not big time—and one of the things I read was this idea that when we're hearing it, we are creating the mental image.
[00:17:21] Mary: Yeah, theater of the mind.
Yes, versus—
[00:17:23] Tom: If you're watching a video, someone else has already done that. And so there's a stronger personal connection to the stories that are being told. That just kind of blew my mind. And I know it blew yours too, John.
[00:17:33] Jason: Yeah, it's really neat.
It did.
And—
[00:17:33] John: I just love the accessibility of podcasts. I mean, I think about the amount of time I spend in the car, and how accessible it is to, to learn from podcasts myself.
[00:17:41] Speaker: I—
[00:17:42] Jason: Have yet to see a full online course that is just podcasts. Have you guys ever played with that, or have you done any?
[00:17:49] Mary: So we just talked to somebody who does literature, and he teaches it using podcasts.
Really?
So his content is podcasts, in different genres, and they take two weeks to get to know each podcast. They dissect the podcast as a group. We just interviewed him, so it'll be on the episode we produce from this.
[00:18:08] Ricardo: Uh, yeah, I produced some podcasts for courses. But one of the things is—and it might just be some insecurities about it—we use it mostly as supplemental to the course. It's not the course content. It's "please listen to this episode, I'm in it." So, you know, in an on-ground course you have a guest come in from out of town, and you have a little conversation with them for the students. And a lot of times these podcasts for courses are just that: a conversation with an expert. We did an exercise science course, and the host of that brought in all these people that she knows who are experts in different parts of the field. And this was during the pandemic, so this was really the only way that she could have these guests on. And so those became supplemental to the course. And I think that there's some trepidation—and I'll be honest, on my part as well—because we don't know the efficacy of it. Because it is so accessible, it could very much be a passive activity, where you're just, you know, on public transportation or driving your car or exercising. I mean, is there a lot of learning happening during that? I know when I listen to podcasts, I will bring it up later. But I know also that sometimes I won't even pay attention to the thing I'm listening to. I'm just listening to voices. And so I think, at least on my part, and at least until—and we talked about this on the floor, Tim—we don't know the efficacy of podcasts in terms of learning.
[00:19:46] Speaker 2: But—
[00:19:47] Ricardo: We do know that there is this engagement happening and this community building happening that we don't really know how to qualify yet. Quantify. Qualify? Both.
Both.
Huh.
[00:20:02] Jason: That's good. Yeah. When I was doing my PhD studies, I had one professor—good professor—but he read absolutely everything off of his very detailed slides. And after the first lecture—and you probably know who I'm talking about, John—I was just thinking, "Oh, this could have been a podcast," right? And so I actually made podcasts out of all his other lectures, because for me it was quicker. Like, there are ways to do it. You can just pull the audio, and then I would listen to it in the car off the podcast and not look at the video, because it didn't matter at that point. And I found that I would sometimes listen to them a few times, because I was just driving the car anyway, because it—
[00:20:50] Speaker 8: Was easy.
[00:20:51] Jason: And it was easy, right? Versus watching. I don't think I could have gotten through it, honestly, barely one time, if I was having to sit there in front of a screen and watch this. It would have been very difficult for me, and maybe that's just me.
[00:21:04] Speaker 8: I like that point about accessibility from a standpoint of how easy it is to access the material, right?
Yeah, that's right.
We did a course, uh, last summer where the instructor was remote. She didn't feel comfortable making her own video, so she just did audio lectures only. And then we paired that in the LMS with embedded slideshows so the students could listen and then follow along—you know, kind of like when you're a child and you have a book that says, "Now turn the page," that kind of thing. Um, and I really liked it. It was a fallback at the beginning, but as it came together, I really enjoyed it, because it gave students so much more opportunity to access it in different ways. They could just listen, they could go back and look at the slides later, they could listen and follow along if they're sitting at a computer. Having the slides independent from the audio made it so much easier to go back and review, or to stay on a slide a little bit longer while she went on to the next thing. And it felt like a lower quality production, but it also felt like a higher quality experience, just because it gave the student that flexibility and that choice in how to consume it and how to interact with it.
[00:22:13] Tom: There's an idea a faculty shared with me at one point, and I don't know where she got it from, but it was the idea of you recording yourself reading your text, your reading, and putting that in audio form with your own little bit of commentary. That was how she got around copyright. Um, and then making that available in the course in addition to the reading, almost in the same way that you were talking about there—to give students that flexibility of "you can read it, you can listen to it, you can read it and listen to it." Um, so I, I just use that as another example there of exactly what you're talking about. I still do wonder about the copyright side of it, but I'm sure there's a way to figure that out.
As you were saying—
[00:22:48] Mary: That—it made me think of, like, this yogic thing. So, like, our voice is the best thing to hypnotize ourselves, to, like, reprogram ourselves. So what if you read your content into a recording and then, as a student, you listen to yourself reading the content? Double consumption. You could speed it up to twice the speed, and you're hearing your own voice say it, and for some reason it resonates in our brain better to hear our own voice talking about these things. Like, we remember, we believe. It's a whole thing.
[00:23:14] Speaker 8: Put it on a loop while you sleep?
[00:23:15] Mary: Yeah! Sleep programming. I'm into it.
[00:23:19] Speaker 8: Pillow—
[00:23:19] John: Speaker.
[00:23:22] Ricardo: Yeah, when I was a kid, I used to hook our VCR up to the stereo and record audio of TV and movies and stuff so I could listen to it when I went to bed, and I probably know every line from Men in Black because of that. So maybe there is some efficacy—
[00:23:40] Jason: Right there. Huh. I don't know.
That's good. Well, turning maybe towards a little bit of a theme for us in our podcast, which is thinking about humanizing online education and looking into the second half of life in terms of online education. We've talked a little bit here about podcasting, about some of the ways it's being utilized, and about some potentially really effective ways of learning. What do you think it means to look into the second half of life online, looking forward aspirationally at developing online courses? And what are some of the things you think we could get better at, particularly from a humanizing standpoint?
[00:24:28] Ricardo: Well, since this is a crossover episode, uh, I'm going to have to do something that I do on the show, which is to say, I'm—What? What is, what do you mean by second half of life?
[00:24:39] John: Oh yeah. Fair enough.
Yep.
Really, yeah. Well, Jason and I are in the second half of our lives. And he recommended to me a book by James Hollis with a similar title, Living an Examined Life.
That's right.
And Reflections as One Enters the Second Half of Life. And there are some interesting things there to think about—about giving up old ways and thinking about what you're going to do in the future. And we both admitted also that there's no way online learning is entering the second half of its life. It's more that it's been around for 30 years-ish, 40 years. And so, um, that's enough time for it to have learned a few lessons. And if it were 45 years old, what would it look back on and say: "Hey, I need to do this a little better. I need to stop doing this. I need to listen to my parents. Or, oh my gosh, I've become my parents." Um, and so I'm just wondering, and so is Jason—I think it's our overarching wondering—what do we need to turn up? What do we need to turn down? Um, I'm thoughtful of, I think it was Maddux, who was talking about the "Everest Syndrome": that we should have computers in classrooms just because they're there, and that when they're provided in sufficient quantity, quality will follow. That sort of myth still exists, and I think some of it pervades our work today, which is kind of shocking after all this effort. So, that's where my head is at: what do we need to be thoughtful of, and what do we need to call out when we notice it, because we know it doesn't work but people still insist that we need to try it. So that's part of it.
Mm-hmm.
[00:26:23] Mary: I could go on for hours on this, but I will keep it to, like, two things out of respect for everybody. Um, my first consideration is conversations with faculty who are doing it the way that we did when we were just children of online learning, right—the lecture capture. Just standing up there and doing things exactly the same way you did in the classroom, but in front of a camera, and it's still a two-hour-long lecture. That needs to die and be reborn. I want to see the phoenix rise in really intentional content creation. So, Ricardo's team—plug—does a great job. Not only from the consideration of helping faculty chunk content along with our instructional designers, but then there's animation, there's using B-roll instead of the faculty being the full focus, so that you have visual contextual pairings and learning happens in a more multimodal type of situation. It's far more engaging for our students to have that happen than watching the same person, even for 10 minutes. If you can put in visualization and intentional editing, that is a huge thing.
Okay, and then the second thing: technology is your friend. And getting into this idea of personalizing learning—creating the ability for students to move beyond only the path you envision for them. Especially in the online world, we have a different demographic than campus. These people are professionals, they know their stuff, and they're probably there to check a box to get a certificate to get more money. So if they already know it, let's create a path for them. Like, I love the idea of adaptive learning when it's done right: letting people pass the test to check the box and move on to the next thing, and allowing them to go back and figure out what spaces they want to be in, to then curate a better environment and community that will actually be beneficial to them.
Okay, I turn over the mic.
[00:28:17] Tom: That was good. I would love to hear more.
[00:28:21] John: You said something interesting about faculty and, um, maybe the recorded lecture sort of thing. But I also think that there is a swath of faculty who are maybe nerds—I'll put myself in this category for a while—who understand technology, are kind of into the technology and the materials, and are solid teachers, but they've never once thought about talking to an instructional designer, because they think they've got it nailed.
[00:28:46] Speaker 2: Yeah.
[00:28:47] John: And I'm kind of like that—getting better. But I know a lot of people like that too. And I think that's part of it: we've got some folks who are sort of stuck, thinking that because they understand tech and they're teaching, maybe even in education circles or preparing educators, they're set—but they're not touching on the quality aspects that you mentioned that make great experiences for learners.
[00:29:11] Mary: And it's really a team effort, right? So I feel like we all get better when we work together.
[00:29:16] Speaker 2: And—
[00:29:16] Mary: So, like, the siloed working habits of the old way of "I'm the subject matter expert, so I know exactly how things should happen." Like, now we have the ability to tap on media specialists, instructional designers, your students. Reading evaluations is a huge piece. Like, what didn't they like? Maybe go back and look at your analytics, if you can, on your video. When did they stop watching?
[00:29:37] Speaker 2: And—
[00:29:37] Mary: Then it's kind of a hard thing, because you have to reflect. Like, why did that happen? And then you kind of have to eat the humble pie and be like, okay, I need to make a change. And I might not be able to see the change I need to make, because I wrote the paper, right? I can't see the places where I need the commas or the edits, or where I went on too long, but somebody else, another lens, they can. And if you have a good relationship—which I think is actually the best part of our team—a good, long-term relationship where you trust your designer, then they're only saying things to make you better, not to put themselves above you. Our names don't go on the courses, right? I mean, sometimes they do, but not the actual faculty courses that are created. But, you know, we're there for faculty support. I do love getting nods, but we don't deserve those nods in the subject matter expertise areas. We are there to be partners. We're there to be collaborators. We're there to be the ChatGPT in human form.
[00:30:32] John: Speaking of nods, Tom's nodding like—
[00:30:34] Tom: Crazy. I don't mean to keep promoting our podcast, but we did an episode where we brought in basically, like, the faculty member we had the best relationship with. There were two designers with their paired faculty member. And we talked about those things that we can do together that were so much more impactful for our students than when we were trying to do it on our own or trying to be separate. Some of it is because we have that trust relationship that's built up, but the ones we really enjoy working with are the ones where we think in similar, complementary ways. They can put out an idea and then we're like, "Oh, but what if we tweak it this way?" And we get this thing rolling that turns into this amazing product for students. And I really, truly believe that every faculty member can find that designer they have that connection with, and then they can begin to develop those things. Um, and we don't have a media production team in the same way it sounds like y'all do, but that's another person to bring to the collaboration to just continue to enhance that experience. And it's grounded in that human relationship, which I think is going to be so important as the information and technology continues to just spin around us and grow. I haven't fully formed that thought yet, but we're working on it.
[00:31:45] Jason: That's good.
[00:31:47] Ricardo: Uh, you know, I think that I'm really taking this metaphor to heart here about this second half of life. And, uh, and as I'm, I'm, uh, approaching 40—
[00:32:01] Mary: Youngin.
[00:32:02] Ricardo: Youngin. Um, I think more about—so, like, let's look at the pandemic as, you know, being in college: "Oh, I can get a burger for a dollar at Burger King," and you eat that. But that's not sustainable. So during the pandemic, we had all these instructors creating videos from home and realizing how easy it was to do that. And now we're outside the pandemic, and, you know, you've graduated from college and now you have to live a normal life. You can't continue to eat $1 burgers for the rest of your life.
[00:32:33] Mary: Yeah. And EdPlus is giving you a whole buffet of Whole Foods.
[00:32:36] Ricardo: Yeah. So we are getting feedback, even from students, about the content that they're seeing from their instructors—the instructors who are having trouble making time to come into the studio and produce quality content, or even take some of the advice of, "Okay, don't film in front of a big window. You have a nice backyard, but we're not going to see it. We're going to see you in silhouette and your window as just a big bright light." So we want to take that seriously, because it got a little too easy for instructors to just produce—
[00:33:07] John: Video content at home. I really appreciate this comment, and it was actually—driving down here to Nashville from Lexington, I was listening to the—I've forgotten the Monster Professor's name.
[00:33:16] Speaker: Oh, yeah.
Yeah.
[00:33:18] John: And—but you guys took just a second to say, "Look, it's not complicated. Get a nice little light. It's not expensive, and everybody notices." And, um, I don't think we say that enough—these little things that really make a difference. And when—
[00:33:30] Ricardo: We're getting feedback from the students, and it's mostly positive, because they're saying that the instructors who are taking the time to make this better content are much better than the professors whose videos, they say, are all just terrible. And we're taking that to heart, and we're trying to encourage more people to come into the studio or to elevate their own content.
Can you say something, Tom?
[00:33:53] Speaker 8: I also think we're moving into, like, a third generation as far as the level of interactivity, right? Early distance learning—I knew people that put instruction on laser discs, right?
And it was just a, purely a broadcast model, right?
[00:34:09] Speaker: Dating—
[00:34:11] Speaker 8: Himself. And then we got to the early 2000s and we got the web 2.0 "read–write web," and we got comments and LMSs that had discussion boards. And I think we're moving now into an era where it's going to be largely around community building, and, also because of the pandemic, the integration of synchronous experiences into what was traditionally a primarily asynchronous experience—this idea of real connection. I just went to a session earlier about creating a sense of belonging and how that's one of those core needs that has to be met before real learning can happen. Um, and I think there's a lot of potential and a lot of promise there around community building, around real personal connections between the instructors and the students. And part of that is done through good media, through good course design, all those things, but also through that informal and kind of unstructured interaction between students, so that they can feel that sense of belonging and that sense of togetherness.
[00:35:15] Tom: As I'm hearing y'all talk too, it's almost like when we put the quality into the construction and the design, it's a demonstration of care to the students—that we care about your experience enough to do the best job that we can. And that care is an important part of building the trust that the students need to then move into the learning—
[00:35:35] Speaker 2: Process.
[00:35:36] Speaker 8: I heard someone use an analogy of dressing up for work.
[00:35:39] Speaker 2: Right.
[00:35:39] Speaker 8: I, I, I knew this teacher that wore a shirt and tie every day to work, and someone asked him, like, "Why do you do that? Like, it's not common at school."
I was that guy.
[00:35:48] Speaker 2: Okay.
[00:35:50] Speaker 8: But his answer, like, completely blew me away. He said, "I want to make sure that my students see that I take this more seriously than they do. That I care about this and that I'm treating this as a professional." And I thought, that's amazing, right. And it does. It sets that scene. And I think, and I think what you're making the comparison to is your course design, your media production, your copywriting, even your page layout on your documents. All of that polish sets the scene and shows the students how much you care about the course, how seriously you take it.
[00:36:23] Ricardo: Tim, would you mind talking about informal versus formal media that you guys have, uh, kind of talked about?
[00:36:30] Speaker 8: Yeah, I, uh—no, I don't mind. Um, I, I—we like this idea of, of this, uh, juxtaposition or this nice combination of using both formal and informal media in courses. Because there is that quality and there is that value in that polished production.
[00:36:49] Ricardo: Could—
[00:36:49] Speaker 8: You—
[00:36:50] Ricardo: Define the formal?
Yeah, so—
[00:36:51] Speaker 8: Formal media being something that is perhaps shot in a studio, well lit, has some post-production—maybe, you know, lower thirds, some graphics, whatever. It's been produced, it's been intentionally made, and it's reusable, right. Mm-hmm. Um, and then you sprinkle in with that the informal media, which—maybe a podcast conversation like this falls more towards the side of informal media, or maybe just turning on your webcam and recording a week-two introduction: "This week, we're going to do these things." It's very informal. You don't edit it. You just record it and post it. And that's something that's not reusable, right, because you've talked about things that are very specific to that week. Um, and students can see you, they can hear you, your dog comes in, your kids come in, it doesn't matter. It's all part of the personality building. And then, at the end of that week, you throw that away and you move on. And those two things together have different values: the value of the production and the polish and the professionalism, but also the other value of being a real human and having that real interaction with students in a different way.
I think there's a lot there—if I were to ever do another degree, I would research the area around this—what's it called—parasocial relationship. This idea that your students are seeing you and experiencing you, you know, daily, weekly, whatever—however often, however much media you're putting out—while you're experiencing them at a very different level, right. And, you know, the iconic parasocial relationship is when you're watching a TV series that goes on for, like, 10 years, right. This character, or this actor, or this person has been in your life for a decade now, um, and you feel like you really, really know—
[00:38:36] Speaker 2: Them.
[00:38:37] Speaker 8: And they don't know who you are, right. It's a completely one-sided relationship, but to you it feels really real. And there's an aspect of that in online teaching too: your students are seeing you and experiencing you at all these different levels and at all these different times in their lives, while you experience them very little, when they submit stuff back to you. Um, and so that's something to be—
[00:38:57] Tom: Explored there. That relationship exists between designers and faculty too, when they use media like podcasts or video to do similar things. I, um, have been running—I call it Tom's One-Minute Update. I did it in video form because I knew faculty were not reading email. They could just hit play, hear me, and move on to the next email. So they get that experience of me talking to them, but I don't get the return. They're like, "Hey, I've been talking to Tom, interacting with Tom," and in fact they have not been doing that. I get a weekly—
[00:39:24] Speaker 8: Lesson from Tom.
Yeah, yeah.
[00:39:26] Tom: And it's been fascinating for them to kind of come back and be like, "Oh yeah, I really love that." See them feel like they have that connection, and I'm like, "I—
[00:39:34] Speaker 2: Haven't talked to you in—
[00:39:35] Tom: Six months. Like, how's it going?" So I didn't even know how to term—so thanks for sharing that.
[00:39:39] Speaker 8: Well, and that's, I think, one of those values of that informal media, is because it is so much more sustainable from a production perspective. You know, if you're just turning on your camera and posting a one-minute lesson, you can do that in one minute.
Right.
You could do that weekly. You could do that almost daily if you had enough content, right. And it's those repeated contact points that help to establish that relationship. So you're going to establish a better relationship by posting weekly, or a couple times a week, informally than you would if it took you too long to edit your video and you could only put out one a month. You're going to get a better experience out of that informal video that's weekly or daily.
[00:40:21] Mary: That's one thing Brendan's course does really well: you have this polished content. Even though it was recorded during the pandemic, so it was in your house, you very intentionally dressed your set. I remember you going through that practice, asking, "How does it look?" Right. And you made a bunch of really intentional videos. But then you shared with us in the podcast that along the line you learned that sharing in an open discussion board led to people being afraid to play guitar, and you changed your model to one-on-one. I would love to hear your perspective on the relationship building, the level of trust that you noticed, and the shift.
[00:40:55] Brendan: Right. I think the most impactful thing about humanizing learning, for me as a faculty member and a designer, is that element of trust. It's not enough to care; you really have to build that trust. Charles Feltman's framework about trust really revolutionized my thinking about my own relationships with students and my peers: caring is the first step, but it's not the last step. People can care and be very ineffective. You need care, you need honesty and integrity to walk the walk and honor your commitments, you need to be reliable, and you need to be competent. And that level of care shows up also in that reliability and competence. So you start by saying, "I really care about you, I'm showing up for you." But you also need that reliability. You need to be available to your learners, or it's going to seem very shallow. Um, and then, of course, in the actual learning experience, one thing that I often tell faculty is, "Can you say that you've done everything you can to make this a good experience for your students?" And I've never honestly met a faculty member who didn't care about their learners. It's really usually a matter of being overworked by their academic department, or a lack of ideas; sometimes they're just very confident that what they've been doing is working. And again, it often comes back to that amount of time. But they never don't care. And so if they have the capacity, and if you can enable the system where they have that time, then the first thing I say is, "If you really care about your learners the way that I know you do, working with your instructional designer is the easiest way to get that smooth connection and that great experience for your students."
[00:42:31] Jason: I like that. And, uh, you know, pulling back to something you had mentioned earlier, Timothy—we're here, of course, at the OLC conference, and I was at the same session you were at about belonging, and it feels like kind of the flip side of those five things that you just mentioned. In the session, they were talking about how a student, in order to feel like they belong, would need to feel accepted, valued, and connected.
That's right.
And, um, and it was interesting hearing you—that, would you say that list again from the—
[00:43:07] Brendan: It was Charles Feltman's framework, The Thin Book of Trust, I believe is what it's called.
Okay.
Um, it's just an 80-page book, but he talks about kind of the four dimensions. The first is caring. The second is honesty and integrity, which often go hand in hand. Then reliability and competence. And so my expertise only speaks to the competence end, but then I also make sure: am I a reliable professor? Do I have my office hours? Am I available? Do I get my work returned to them in time? Is my feedback personalized? That really shows them I have made the time for you, and they will come to me when they need support, which is the most important part of teacher–student relationships.
[00:43:44] Jason: That's good. And this is the first time I've heard that list, but in my mind I'm seeing one of those charts where you have the two lists and you see all the connections between them.
[00:43:53] Mary: Venn diagram.
[00:43:53] Jason: Yeah, it's not exactly the Venn with the circles. It's the one with all the—you've got two lists and you've got all the swimlanes—
Yeah.
All the swimlanes, exactly, kind of going between. Because of this flip side of feeling accepted, valued, and connected, you can see almost all of those things you just mentioned connecting across, and maybe some of the swimlanes will be larger going to a few of those. Sometimes instructors or professors will do some of those things, but maybe they won't be as, you know, diligent about returning emails or something like that, which, honestly, for students is way up there—much higher than how good the content was, or the videos, or whatever. "They're not very good about returning emails." But that directly connects to all these things: they don't feel valued because he's not returning my emails, and they don't feel connected because, well, this person's just doing it once a week, and they don't feel much of a connection if all they do is return emails every Tuesday for one hour, kind of thing, right?
[00:45:03] Brendan: Yeah, and you don't have to be a superhero to demonstrate that. As long as you're clear with learners—"Yes, I need 72 hours"—then they're not waiting after 24 hours and wondering, "Oh, he doesn't care about me." If learners have all that expectation outlined—if you have a great, concise syllabus that outlines your grading expectations—learners really appreciate that, if you can make it concise and clear to them. And that's really an art; I can't pretend that's easy.
Right.
Um, but if you can really outline that communication and then follow through, that—it pays dividends.
Yeah.
[00:45:34] Mary: Thinking of the generational words that are used. In the generation before ours—I have kids, they're teenagers, they keep me very hip.
[00:45:45] Jason: That's funny, it doesn't work for me. They don't keep me hip. They make me feel like I'm not hip.
[00:45:50] Mary: Oh, well, I think I'm in a punk rock band, so I think I get like a baseline.
[00:45:56] Jason: All right, that's pretty good.
[00:45:56] Mary: But like, "leaving people on read"—
[00:45:56] Jason: That's funny.
[00:45:56] Mary: Ghosting me—these are terms of feeling abandoned, right? And these are terms that are often used. And so from their perspective, in this digital age, they know you got it. They know you probably got a notification on your phone of their email. So if you are opening an email and not responding to them right away, it feels like you don't care—even if you do, but you're just busy and you're going to get to it later. Like, sometimes they have notifications—they can see if you've read it. And then they just feel abandoned. And so, to that point, being clear with your communication, the expectations, your own boundaries—because it's good to have boundaries. It's good to teach people how to set their own boundaries. But just clear communication on what those will be and the intent of your communication structure.
[00:46:45] Jason: Yeah. Yeah. Yeah, I've often told, you know, instructors, you know, just, yeah, just be really clear up front. If you don't answer emails on Sundays, just tell them you don't answer emails on Sundays. "I usually spend time with my family or outside or getting off the computer on Sundays." People will respect that and they'll, they'll understand when they send, uh, an email or a text on late Saturday night, and they don't hear anything until Monday morning, and it should be fine.
[00:47:08] Speaker 8: That way they're not just sitting there waiting.
[00:47:10] Jason: Yeah.
[00:47:11] Mary: Yeah, and you're modeling good, like, good self-care.
[00:47:14] John: That's right.
[00:47:15] Mary: We shouldn't all be available all the time. It's not healthy.
Right.
Right.
[00:47:19] John: I'm never looking for ways to lengthen my sig, but I stole something from my colleague Derek Lane. I'm sure he doesn't mind me saying it because it's wonderful, and I wrote him immediately when I saw it in his sig, and it says as follows: "I observe email-free nights and weekends."
[00:47:50] Mary: Beautiful.
[00:47:50] John: That's all it says. And it's just—yeah. And it's done wonders for my colleagues too, who email all weekend. And, um, and it's fine that they do, but I think that, um, it's starting to get them to reflect on whether or not they need to.
[00:48:04] Mary: I love that. We have a colleague, Renee Pilbeam, and she's very inclusive in general. She's also a single mom and she's like a Director of Learning Initiatives. Like, she's very busy all the time and has a flexible schedule because of that, which we all honor and we love that we have that as well. In her signature, she says, "I might email you outside of normal working hours. Please do not feel compelled to respond to me because of my working structure." Something to that effect. I probably blew that, Renee, I'm sorry. But the impact of that is huge because she's recognizing, "Look, I get it that I might be emailing you. Please don't feel like you have to respond to me because I also respect that this is probably not your working structure."
[00:48:29] Speaker 8: It's so—
[00:48:30] Mary: Inclusive.
I love her.
[00:48:31] Speaker 8: It's transparency and it's setting expectations.
[00:48:34] Mary: Absolutely.
[00:48:34] John: I've noticed now that Outlook will also just say to me, "Hey, would you like to just schedule this for Monday at 8 a.m.?"
[00:48:44] Speaker 8: Do you prefer to schedule something for work hours, or do you prefer just to put it in their inbox and then let them decide when to respond? Like, which is the more preferred, to—
[00:48:54] John: Go on Monday. And I, I like what your colleague is doing, and that's probably—wouldn't be my style, I would say. Because I still have enough psychopath friends and colleagues that will feel like they've got to answer it.
And so—
[00:49:04] Mary: I'm OCD, I would too.
[00:49:04] John: Yeah. So I probably would say, well, if the machine reminds me to, uh, uh, send it on Monday, then I'll just do that anyway.
[00:49:15] Mary: I love they have that in Slack too. Actually, Renee's the first one who ever modeled that for me because she figured out that you can do that. She probably has figured that out with her email as well. But like, she modeled it like, "Yeah, I don't send Slacks until 9 a.m. on the next business day," which is—
[00:49:27] John: Beautiful. Yeah. I think that's great. Yeah. Yeah, so then I guess I feel like I'm practicing what I'm preaching in my sig. If I'm observing email-free nights and weekends, that means I'm not even sending them also. It's sort of a slap to say, "Don't send me stuff, I'm not going to read it." It's also saying, "I'm not composing anything this weekend either."
[00:49:47] Jason: And I think we have to be aware of power differentials as well that happen between instructors and students.
Absolutely.
That sometimes they might feel responsible to reply back—and the same goes for staff people, if we're doing the same thing with them. It's one thing to send it to a colleague, a peer, who can feel free to ignore you for the weekend, but someone else might not feel that way. So I try to be conscious of that.
That's right.
That's right. Yeah, that's good.
Well, as we try to wrap this up, any final, final words or thoughts about, uh, uh, our next, our next phase of online learning and the kinds of things that maybe we should be talking about on this podcast for the, for the next few months?
[00:50:29] Brendan: Getting to the two topics that I've heard about—you know, humanizing education and also the phases of online ed—I think the belief in online education has been growing exponentially, especially because of the pandemic. And we've reached a point where it's not just possible; now we're flooding the zone with options. There are a lot of tools available, and people are constantly receiving best practices. So, first of all: everyone's doing enough. We get to a conference like this and we hear a million recommendations. You know, if you care and you're doing the best work you can, you're enough. And I think understanding that as a faculty member is very important. But also, working with instructional designers, keeping to research-based evidence, and keeping it simple is, in the end, going to be the key, I think, to success in this zone where you have, really, a thousand options to deliver any course.
[00:51:20] Jason: I think we need to get your voice in that pillow speaker for professors.
He's got an—
[00:51:25] Speaker 2: Amazing voice, right?
[00:51:29] Jason: "You're doing enough."
I will second that.
[00:51:31] Mary: His students love him. Wonder why.
Yeah, that's good. That's good.
[00:51:36] Jason: Yeah. What else, as we round off here?
[00:51:40] Tom: I don't know. I'll just echo the simple. That's a concept that I think is continuing to echo in my head. Information overload. How can we keep it clean and to the point? I don't know what it looks like, but that's where my head is at.
[00:51:57] Speaker 8: And along those lines, if people do want to make forward motion—if they acknowledge, or they can see, that they want to improve their courses in some way, or their teaching strategies—I also like this idea of a "plus one" approach: thinking of just, "What's one thing I can do next term?" To keep that forward motion, but also keep it simple. Allow yourself grace, allow yourself time to grow. But there's always, like, one thing: "What can I try? What's the one thing?" And Ricardo and I were kind of talking about homegrown media and some of the problems around that, and, as you guys mentioned, adding one light makes a huge difference. And we were like, "Yeah, you can do 20 percent of the work to make an 80 percent improvement." And there are a lot of things like that, where you just do one thing and it doesn't just make one unit of growth; you can get a lot of benefit out of some small things. So feel free to try one thing and see where that goes.
Yeah.
[00:53:01] Ricardo: I guess going back to the metaphor of this being the second half of life—you know, Tim and I were talking about this. The work's not over. We've got ChatGPT, we have all these things that make life easier, make content production easier. But we still have to work. You know, what was the concept that you were telling me about, Tim? The idea that things have to be a struggle in some ways?
Yeah, what—
[00:53:28] Speaker 8: Was that? Desirable difficulties?
[00:53:30] Ricardo: Desirable difficulties. I think, especially as technology advances and makes our lives easier, there have got to be some desirable difficulties. And life is hard. And to go back again to that metaphor: as I'm approaching 40, I didn't spend a lot of time in the gym up until now, but now I have to. If I want to keep living, and to live a good life, it's going to take rigor. And I think that, again, as things get easier, we have to step up in other ways and continue our rigor and continue to—
[00:54:03] Jason: That's good. That's like the—that's the second—that's the, uh, the between-halves talk we needed, I think, Tim.
That's right.
In multiple ways, probably.
[00:54:13] John: Yeah. Yeah. Thanks, coach.
[00:54:16] Speaker 2: Get out there and teach.
[00:54:20] Jason: Good. Well, this has been great, guys. Thank you so much. It's so good seeing people face to face in the same room and recording.
Yeah. Thanks for bringing us on.
Yeah.
[00:54:31] Ricardo: Yeah, thank you for literally setting the table for us.
[00:54:36] Mary: We're all—
[00:54:41] Jason: Right before we go here though, what are your—what are your plugs for your own podcasts or other things we should be listening to?
[00:54:49] Mary: Okay. So definitely listen to our podcast, Course Stories.
That's good.
Course Stories can be found wherever you listen to podcasts.
[00:54:56] Ricardo: And I would also really encourage people to check out Study Hall, especially if you know individuals who are entering college or didn't get a chance to enter college. It's a program on YouTube that ASU does in association with Crash Course and YouTube itself. It's kind of a college prep program, and you can also get credits for college by watching these videos. We do quite a bit of the production for that in the ASU EdPlus studio at the Tempe campus. So please check it out. It's very good.
[00:55:36] Mary: When you're done—
[00:55:38] Tom: With all of that, check out ODLI on the Air. You can get it in most places where you find podcasts, and we are commuting length. So if you're, like, in a 20-minute commute, we'll fit right in there for you.
[00:55:52] Mary: Yeah, and then I'm going to plug on behalf of these guys, because they're too humble to say anything. Brendan Lake is on one of the early episodes, season one of Course Stories. Go listen to his, because, my gosh, not only will you feel cared for by the end of it based on how sweet he is, but you'll learn a lot. And then also, Tim McKean is on Instruction by Design. It's a fabulous episode. That's another podcast out of Arizona State University that Ricardo started and produced, and then transitioned to others. But it's a great podcast also.
Okay.
[00:56:19] Jason: That's fantastic. And we'll get all the links for all of those things, and we'll put them in the notes. OnlineLearningPodcast.com is where you can find all of these episodes.
[00:56:29] John: Jason, can you believe we got that URL? I still can't believe it. We've said this a—
[00:56:33] Jason: Couple times.
OnlineLearningPodcast.com. We'll put all the notes in there, and resources, so you can look all of this stuff up. Thank you so much, everybody.
[00:56:41] Mary: Thanks.

Monday May 01, 2023
Monday May 01, 2023
In this episode, John and Jason walk the vendor floor at the OLC Innovate Conference in Nashville, TN (April 18-21, 2023) and ask the vendors how their products are helping to humanize online education.
Join Our LinkedIn Group - Online Learning Podcast
Vendors we talked with:
UWill
Instructure
Proctorio
PowerNotes
D2L - Brightspace
D2L Teaching & Learning Studio Podcast
Harmonize
InScribe
Join us in the next episode, EP 10, for a live OLC Podcast Superfriends Crossover Episode Extravaganza! And then in EP 11 we will wrap up our OLC Design Thinking Workshop on Humanizing Online Education.
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
Studio Intro
[00:00:00] John Nash: I'm John Nash here with Jason Johnston.
[00:00:02] Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning podcast.
[00:00:08] John Nash: Yeah, we are doing this podcast to let you in on a conversation we've been having for the last two years about online education. Look, online learning's had its chance to be great, and some of it is, but some of it isn't. How are we going to get to the next stage?
[00:00:24] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:00:28] John Nash: That sounds perfect. What do you want to talk about today?
[00:00:32] Jason Johnston: We are here at OLC Innovate 2023 in Nashville, Tennessee, and yeah, we decided that maybe we'd go out and talk to some of the vendors about some of our pressing questions about online learning.
[00:00:50] John Nash: Yeah, I think going out and talking to the vendors and asking them about our chief wondering, which is, how are you helping to humanize online education, would be a good way to go.
[00:01:03] Jason Johnston: Yeah, it's a really interesting question to ask, and hopefully we don't put too many of them on the spot, but I know a few of these vendors, and I think some of them have probably some good answers because they've already been thinking about this, and they're designing towards humanizing online education.
[00:01:20] John Nash: Yeah, I've noticed that the way in which some of these vendors are pushing the message around belonging and being part of a collective is becoming more prominent. And so, I wonder if the real activities are going to meet that promise.
[00:01:38] Jason Johnston: Yeah. I don't know that we'll answer all those questions today by just talking to them on the vendor floor, but at least we'll get some insight about what they think and what their main kind of focus is.
The other cool thing about being here at OLC, we've done a couple other things. We had a design thinking session on humanizing online education. We asked that question and that was a lot of fun. And so, we'll wrap that up in a couple of podcasts from now. We'll get some more insights on that.
And then we also met some other podcasters, and we had what we're calling the Super Friends crossover episode where we were able to talk to some of these other podcasters about podcasting, but also about humanizing online education and about how podcasting works in online education and for faculty and instructor development.
[00:02:28] John Nash: Yeah, that was a great conversation, and I'm really looking forward to us putting that out there because it not only covered the platform of podcasting and what role it can play, but then we really got into some good conversations just about what constitutes good online design and even instructor behavior and how you make a great experience.
[00:02:48] Jason Johnston: Yeah. Plus, they were just lots of fun to talk to, so it feels like these conversations, it just didn't feel like work. It was just interesting conversation and great to talk with them. So, I'll have that coming up, and probably that will be the next episode. And then following that, we will do a wrap up of our design thinking humanizing online education episode.
Sound good?
Yeah, that sounds really good. Yeah. For now, let's go to the vendor floor and let's talk to some of these vendors and see what they have to say.
[00:03:17] John Nash: Yeah, let's march the floor.
Floor Recording File 1 – UWill
[00:03:21] John Nash: So, um, well then tell us, uh, tell us your name and the product that you have.
[00:03:25] Vendor 1: Yeah, so my name is Kevin Koski. I'm the Director of Business Development at UWill. Um, we are an online mental health teletherapy platform.
Um, so we're a software platform. We partner with universities to connect students with licensed healthcare providers and mental health providers.
[00:03:41] John Nash: Nice. Uh, one of the things that we've been focusing on in our discussions, uh, I'm a professor at the University of Kentucky, I'm in Lexington. He's at Tennessee Knoxville.
Okay.
Um, we're both interested in online learning, but we're interested in how we might humanize online learning.
Yeah.
And so, your product seems really interesting in going in that direction.
[00:03:59] Vendor 1: Yeah. You know, it's interesting. Traditionally, online students don't necessarily have access to the same level of resources and services that, you know, traditional on-campus students do. And so, um, you know, we wanted to provide online students with the same access to mental health care as those on-campus students.
Yeah.
And so, this platform is really easy for students to register. They get to select, uh, therapists based on preferences. Um, so they could go in and select based on gender identity, ethnicity, language spoken, clinical need, and get a direct immediate connection to a licensed counselor of their choice. Um, we're really just trying to, you know, break down the barriers to care, reduce the stigma of asking for help for mental health, and, and helping students.
[00:04:42] John Nash: I've been in a position where I've been asked to refer some of my students to mental health services.
Sure.
And in our program in Kentucky, these students at the time happened to live in another state, so they couldn't avail themselves of the services that were offered by our university.
Absolutely.
[00:04:57] Vendor 1: You're addressing that. Yeah. So, we have, um, over 1,500 licensed counselors in all 50 states with worldwide coverage. And so, you're absolutely right. Um, you know, traditionally those on-campus counseling centers have counselors who are maybe only licensed in their state, maybe one or two others. And when students leave campus for breaks, study abroad, or traveling, they don't have access to that care. With access to providers in all 50 states, no matter where the student is, no matter what time of day, they can access care.
Yeah.
[00:05:26] John Nash: Nice. What do you, anything you want to ask?
[00:05:28] Jason Johnston: Do you have any good swag?
[00:05:30] Vendor 1: We do. We have a hot and cold pack. You can put it in the freezer; you can put it in the microwave. Uh, we've got some fidget poppers and some leather-bound notebooks, so please feel free to take as much as you want. You'd be doing me a favor, so I don't have to take it back with me.
Thank you. Thank you for talking with us.
Yeah, that was great. Thank you. Appreciate it.
[00:05:47] John Nash: Absolutely.
Instructure
[00:05:48] John Nash: So, hey Brandon, uh, tell us your name and what your, what your product is.
[00:05:51] Vendor 2: Yeah. Hi. Uh, yeah, so my name's Brandon Ira. I'm a senior account executive with Instructure, and, uh, our product is Canvas, uh, platform, Canvas learning platform. So, we've—
[00:06:01] Unknown: Heard of—
[00:06:02] Jason Johnston: Canvas, believe—
[00:06:03] John Nash: It or not.
That's, I never heard of it. Is it—
[00:06:04] Vendor 2: New? That's—in the education industry, uh, or arena, there are a few people who have heard of Canvas before. Yeah, they recognize us.
[00:06:12] John Nash: So, the focus of our podcast and the episodes we've been doing have been asking a question about how we might humanize online education. And I'm wondering, yeah, what your thoughts are on how Canvas helps advance that and what are you guys thinking in that regard?
[00:06:28] Vendor 2: Yeah, so humanizing—a question I'm often asked by, uh, you know, administrators, faculty, LMS admins—is how can Canvas help with student success or retention? So, I think humanizing is a part of that, and one of the ways Canvas really helps with that is, uh, increasing the quality and quantity of touchpoints between students and instructors. We know that's a main, uh, predictor of student success, and we have the ability for students and instructors to communicate with one another, you know, via messaging. Uh, video is a big, uh, resource for us also.
[00:07:04] John Nash: So, yeah, talk to us more about touchpoints. I mean, I think you mentioned a few. Is that what you're thinking about is like channels or say—
Yeah.
From your perspective?
[00:07:11] Vendor 2: Yeah, so to me, increased touchpoints, quality and quantity. So, quantity as, you know, how often is an instructor and a student communicating.
Right.
How is an instructor able to do so, especially in an online environment, right? You're not seeing a student or a set of students regularly in the classroom. So, uh, you know, of course there's email.
Yeah.
But that's—so you could have quantity there, maybe not the quality. So, there is a video resource within Canvas that allows instructors to record messages in assignments. In our SpeedGrader, they can record. So instead of just returning a paper and saying, "Hey, this is a great first draft. Work on X, Y, and Z," they could record that. The student could respond with a recording as well. Or it could just be written communication. Um, so yeah, I think it's going a little deeper.
[00:08:01] John Nash: Nice. Is there anything you think a couple of veteran Canvas users should know that we probably don't or might not know?
[00:08:09] Vendor 2: So, I think one of our newest products is Canvas Credentials, which was formerly Badgr Pro. So now we have, uh, the ability to get into this, uh, digital badging and micro-credentialing space. So that's something we're really excited about, uh, you know, as we see a greater push for continuing education, so not just in undergrad or even a post-grad degree, but lifelong learning and the ability for folks of any age really to continue to learn and then share that, uh, experience, that learning experience, on a social media site or something.
[00:08:44] Unknown: Nice.
[00:08:45] John Nash: Cool.
[00:08:45] Jason Johnston: Tell me—I'm familiar with Canvas Analytics and have used it. Do you think this is a move in the direction of humanizing online education, or is it somewhat dehumanizing education online because you're looking at a lot of data points?
[00:09:02] Vendor 2: Right, so it depends on what you use the data for, right? That makes—that really determines whether or not you're humanizing or dehumanizing. So, if you boil a student down to their data points, then you're going in the opposite direction of humanization. Whereas we encourage institutions to utilize data points to support students, right? Not to categorize them, but to make them more successful.
Um, you know, and, and like, like Credentials, for example, we're digitizing their accomplishments so that they are able to share it with the wider world, right? So, it just depends on how you're using the data, but I feel like we're using the data for the benefit of students and the benefit of humanity.
So—
[00:09:53] Jason Johnston: That's great. Thank you. And you are?
Oh yeah.
[00:09:56] Vendor 3: Um, my name's Marcus Vu. I'm a solutions engineer at—
[00:09:58] Jason Johnston: Instructure. Thank you. And do you consent to be recorded and put on a—
[00:10:02] Vendor 3: Podcast?
Uh, yes, I do.
[00:10:04] Jason Johnston: Thank you, Marcus.
Thank you, Marcus.
[00:10:06] John Nash: Cool. Um, hey, last question. Do you have any good—
[00:10:09] Vendor 2: Swag? Yeah, we're out of pandas, but we do have—the bags have been really popular.
Uh—
[00:10:15] Jason Johnston: Fanny packs. We were just talking about fanny packs yesterday. Well, no strap.
[00:10:18] Unknown: They're like—
[00:10:18] Vendor 2: Little gear bags. Some people have said it's a toiletry, like a Dopp kit. I, I thought it was, yeah, like put your cords in, your competitor swag.
There you go. We have socks also.
Socks—
[00:10:30] Jason Johnston: Well, thank you very—
[00:10:30] Vendor 2: Much. Appreciate it.
Yeah, thanks. Yep. Yeah. Give us a listen.
Yeah. Yep.
Proctorio
[00:10:35] Unknown: How's it going?
[00:10:37] Vendor 2: Melissa, John, Caroline.
Hey Caroline.
[00:10:39] John Nash: Adrian. Adrian. John Nash. Uh, Jason Johnston. Jason and I co-host the podcast Online Learning in the Second Half. And, uh, we are wondering if we could ask you a couple of questions for our podcast. Nothing heavy, swear to God. You don't have to call a VP or anything like that.
[00:10:54] Unknown: A couple of questions?
Sure—
[00:10:54] Vendor 5: Sure, sure.
[00:10:55] Vendor 4: We'll answer as best—
[00:10:57] Unknown: We can.
[00:10:58] John Nash: So, um, well, yeah, what's your name and what do you do and what's your product?
[00:11:03] Vendor 5: I'm Melissa Deweese. I'm the Director of AI Ethics at Proctorio, and we have an online proctoring solution.
[00:11:08] Vendor 4: Nice.
[00:11:09] John Nash: Yes. So, the focus of Jason's and my podcast is to think about the question of how we might humanize online learning.
Mm-hmm.
And we're talking to people on the floor today, asking about how their work is moving us all in that direction. How would you take that question to think about how, um, your product is helping humanize online ed?
[00:11:32] Vendor 5: That's a tough question. Well, our product—humanizing online ed? Well, ours is more about accessibility and accommodation.
Mm-hmm.
So, like, we actually give, uh, access to people that wouldn't have access to online learning in like remote areas.
Yeah.
So, I think that would help with people being brought together, getting education. They can also collaborate on our, on our site too, if they have like work products or projects together. So, I think that helps humanize, like they have the interaction, they have the accessibility, they have, they can relax in their own environment. So, I believe that's helpful.
[00:12:03] John Nash: So, I'm a professor and I teach online and run a program, but you're over—
Um, he's Director of Online Learning and course production, but yeah. Do you have a—
[00:12:13] Jason Johnston: I just, uh, I just really want to know what kind of swag you have. That's why I'm here today.
[00:12:17] Vendor 3: Uh—
[00:12:18] Vendor 5: You're out. You're out. We have Proctorios, which are Cheerios, but they're Proctorio. They're—
[00:12:24] Vendor 3: Actual—they're cereals. They're a cereal. Yes, they're cereal.
That's pretty—
[00:12:27] Unknown: Fun. That's pretty clever.
[00:12:29] Jason Johnston: Yes. That's great. Well, we'll, uh, if you don't mind, I'll, I'll take these back. We'll, uh—
Yeah, absolutely.
Yeah, buddy. I, my kids are big cereal people, so, you know, we'll, we'll let you know how, what they think. So, thank you so much.
[00:12:41] John Nash: Yeah, you're welcome. Thank you so much.
You're welcome. Thank you.
Brightspace
[00:12:52] Unknown: Are we here to see Brightspace? How are we here?
[00:12:59] Unknown: Um, I thought we were here to see D2L.
D2L—
[00:13:01] Vendor 4: D2L is the overarching company, our LMS. Brightspace.
Oh—
[00:13:06] John Nash: Gotcha. I'm John Nash, University of Kentucky.
[00:13:08] Jason Johnston: I'm Jason Johnston, University of Tennessee.
[00:13:10] John Nash: John. And we also do a podcast called Online Learning in the Second Half, and we're walking the floor with a mobile unit. And just wanted to see if you'd be all—if we ask—
[00:13:23] Vendor 4: Softball questions?
Yeah, absolutely. So, on top of working at D2L, I also have a, uh, digital media agency that me and other colleagues run. So, I'm very connected to this. So, this is, this is the kind of stuff we want. So, talk—
[00:13:36] Jason Johnston: That's super smart. I mean, as you know, podcasting is huge right now.
[00:13:40] Vendor 4: D2L podcasts. Well, uh, it's a teaching and learning podcast that we've started creating. Uh, one of our CAOs, um, uh, Kristine Ford, she's leading the charge, and it's all about thought leadership. It's not talking about D2L, not talking about Brightspace—well, D2L—but not talking about Brightspace platform functionality. It's about what does the educational world need right now. What is the state of the market, what is the state of the student experience? So, like, they're talking, they're talking about all that. And then we have K–12, higher ed, corporate. A whole bunch of variety of use cases.
[00:14:15] Jason Johnston: That's great. What's the podcast called?
[00:14:16] Vendor 4: So, it's called the Teaching and Learning Studio, where you can easily access that. You don't have to be a Brightspace user, you don't have to be anything but someone who is a lifelong learner and interested in education, and you're going to start engaging with really cool content and really cool people doing exciting things.
[00:14:36] John Nash: So yeah, tell us your name and what do you do?
[00:14:38] Vendor 4: Uh, so my name is Justin Mullin. I'm a solutions engineer, and essentially what I do is learn everything I can about institutions who are working through evaluations and like working through various competitors, taking those use cases, learning everything possible about what you need, why you need it, and then showing you how we can bring those solutions to your institution. So not just being like, here's a demo of why, why we're so great. Here's a demo of like all the people who use our product and, and we've had market share and things like that. It's about what you need, actually hearing people in that experience.
Yeah.
Yeah. So, so that's, that's my role, is to bring that to life. We also have that on the client side as well. So, once you get over the line, you still have that solutions, that technical support.
[00:15:27] John Nash: So, one of the overarching wonderings that Jason and I have on our podcast is, uh, how might we humanize online education more so than we are at now? I mean, some of it's really good. We've got a long way to go in others. Where do you see D2L and Brightspace in this, in this space around trying to humanize online ed?
[00:15:48] Vendor 4: Yeah, so the way to start that is, I think, gamification and actually getting really creative. So, we had a session yesterday, uh, called The Great Escape, where actually one of our clients built an escape room within Brightspace using our release conditions and learning paths so that you can actually have this really exciting experience, but also like an escape room in an LMS.
Sorry, what?
But that's how creative you can get with these items and, and, and bringing gamification into the platform. So, there's, there's tons of options for building that engagement, bringing it to the real world. Video capability also within Brightspace, making it a native capability so that you always have access to it; if you're providing student feedback, you can add videos as well. I'm not a person who enjoys text, but we provide that option to be able to create, you know, video notes and templated feedback. Um, we have, you know, virtual assignments, virtual classroom. So that's one of the ways where we're bringing people in, trying to engage them in the platform, but also making sure that it happens entirely within Brightspace. So, we're a mobile responsive platform, which means that as long as you have access to a browser and an internet connection, you can access Brightspace without any loss of feature functionality. It's all about making sure students have access to their learning consistently.
Nice,
[00:17:09] John Nash: Nice. Anything on your side?
[00:17:11] Jason Johnston: You know, my biggest question really, and this is a hard one, so I hope, I hope you're ready for it.
Okay.
Um, what kind of swag do you have to give away here on the floor?
[00:17:21] Vendor 4: We are a Canadian company, which means for our swag we have moose. So, I'm going to step away from this podcast for a moment. I'm going to go grab you guys a couple moose.
Ah, moose. That's exactly, exactly. So, I'm going to—
[00:17:36] Unknown: Slip into this cupboard, get you guys a couple.
[00:17:38] Jason Johnston: Oh wow. I happen to be a Canadian as well.
[00:17:40] Vendor 4: You happen to be a Canadian as well?
[00:17:43] Unknown: Where are you from?
[00:17:45] Jason Johnston: Uh, west side of Toronto.
[00:17:45] Vendor 4: No way. I was actually, uh, born and raised in Brampton.
Oh yeah, from Brampton. Well—
[00:17:52] Unknown: Oh, come on. This is, this is wild. Um, we're in Tennessee and we're bonding over the GTA. Yeah, that's funny.
PowerNotes
[00:17:58] John Nash: Hi. Um, well, tell us who you are and what you do.
Sure, sure.
[00:18:03] Vendor 6: So, I'm Brett Peterson. I'm the Product Manager for PowerNotes. Uh, so we're, we're research and writing and academic integrity suite of tools. So, we let students capture information on websites, anything in the browser, so websites, PDFs, ebooks, whatever. You highlight text that you want to capture, we generate a citation for it, take the quote and the citation. You save that to an outline. You can take notes about it. You can add your own freeform notes. All of which means that right at the point of capture in your research process, you're already starting to form schema for your final output, right? So, we're supporting, kind of subtly supporting, a really solid research process under the hood.
Um, but also because we're a software company, every time you save something in our database—like we're a cloud-based tool, right?—so every time you save something, we know who saved it and where and when. So, when ChatGPT came out, and everybody got very interested in AI detection, we said, well, wait a second. We don't need to detect AI. We can just show the work right here. Here's all the research I've done. Here are my notes about it. Here are the notes and research turned into cogent thought, and then here's that cogent thought turned into a paper. And here's how that final paper reflects back on my research. And so, if there's an academic integrity question, we can just say, here's all the evidence for or against, right? Like, hey, nine-tenths of your paper totally matches up to your research, and then there are five pages at the end that were turned in two minutes before the deadline, um, that have nothing to do with your research. We're going to throw a red flag on that one. Right. So that's, that's kind of what we do. Me personally, I'm a former entrepreneur and now working with a startup for somebody else, which is wonderful, um, as a product manager and, and loving it here. I've been here for all of a year.
[00:19:29] John Nash: Yeah. Nice. So, the sort of overarching wondering that Jason and I are chasing on the podcast is, how might we humanize online education?
Yeah.
What do you think about that vis-à-vis PowerNotes?
[00:19:43] Vendor 6: Great question. Yeah. Couple of things. I think part of it, I mean, humanizing anything requires more human involvement, right? And so, with PowerNotes, um, I would say we would emphasize our collaboration tools. So, sharing group projects. I mentioned we keep track of everything. Well, we expose that to teachers. And so, you can say for a group project, for example—bane of those is that one person there carries the whole group, or one person slacks off and the rest of the group carries them, right. Doesn't happen anymore because we can show who added what to the outline and when, who did the research and when. It's all—the process is transparent.
Right.
And so, I think that's actually part of humanizing things: if you focus on the output, then whatever happens to get there is by definition not what's being focused on. Right. And so, there are all kinds of shortcuts or cheats or—let's take that out of hyper-emotional language—negative paths people can take to get there, right? Of which somebody carrying the group or somebody slacking off are two small examples, right. But the focus on the process: are we working together in a group? Am I as a student learning something about my research, and is my professor looking at that? Right? If I'm scoring the process rather than the products, there's much more opportunity for interaction between the teacher and the students, the students with each other, even the students engaging with the information they're working with, right. If we're process focused and rewarding process, then we're promoting better, I will say more humane, behavior in education. If we're really focused on products, you might get that, but you have a lot less guarantee.
[00:21:12] John Nash: So, I was listening to the EdUp AI podcast last week. And your CEO or your founder was on there.
Yeah, Wilson.
Yeah, Wilson, that's it. And I have to admit, I was really impressed because, uh, well, it wasn't even thinly veiled. I mean, you guys are a Turnitin disruptor and, what's the guy in Toronto that's got the AI detection?
Oh—
[00:21:37] Vendor 6: GPTZero. That guy.
Yeah.
[00:21:39] John Nash: Yeah. You're, you're that-guy-buster.
Yeah.
And, and which I think is fine actually. Um, and, and, but the podcast really helped me reframe what I feel as a, like a dissertation advisor about the importance of what it means to be able to express yourself clearly in writing. And I think, what Wilson said on there almost made me do a 180 now because he said something to the effect of, why do we let the words get in the way of the ideas?
Yes.
And that what ChatGPT and the large language models are doing are allowing us to have more interesting conversations about ideas and then go ahead and get them expressed. And what your tool is doing is showing the thinking along the way. And I think that's kind of impressive.
[00:22:20] Vendor 6: Thank you. Thank you completely. Thank you. That's a very high compliment, actually. I appreciate that. I was at the Academic Integrity Conference in Indianapolis about a month ago for the International Center for Academic Integrity. Great group. Really had a wonderful time there. And I was at one of the presentations on AI, and there was a professor from Canada who, who brought up a similar point. She said, as professors we, we have historically focused on generating beautiful words, creating beautiful prose. But with AI, that's no longer important, right? Like creating beautiful words is, is trivial. And so how do we, how do we move past that? I think you've hit the nail on the head, right. Let's move to the ideas, right?
Yeah,
[00:22:52] John Nash: Yeah. Well, I'm stealing Wilson's ideas. So, uh, at 9:38 this morning, the University of Kentucky sent a note to all faculty about upholding academic integrity and ensuring fairness, semicolon, AI detection.
Oh goodness.
And the University of Kentucky is a Turnitin customer, and they have put AI detection into these systems of theirs. Uh, and we don't confidently know much about the accuracy of AI writing detectors at this point.
Oh goodness.
And so, yeah, this is, it's really getting interesting now, and that's why we're interested in talking to people with tools like yours.
[00:23:30] Vendor 6: Well, thank you. Yeah, absolutely. Because we were already capturing what was happening, the process, right. Because we're already capturing the process, we, we made this shift in academic integrity when Chat came out. And, and I think, I don't want to repeat myself too much, but I think there's really a paradigm shift away from this adversarial "I gotcha after you turn it in" kind of thing, right. Turnitin's a great company with some great people and great engineers. I talked to them at the academic conference. I don't need to bash them. I don't like their paradigm, right. The paradigm is very reactive and very punitive, frankly, right. Let me, let me get you, right. And I'd much rather have that be preventative and have it be focused on process, where we say, if you're seeing a student and they're pulling things into their outline and they're not citing it appropriately—which is hard to do with our software anyway, for the record; we try to make that difficult to start with—or if they're paraphrasing badly or they're not transforming things well enough, our insight into the process lets you see that in advance. So maybe you need to be that involved with a student who's on probation, for example, right, and you're trying to teach somebody and they're really not getting it. There's a difference with bad actors, right. If we have a bad actor who willingly and knowingly is cheating and committing fraud, that's a different story. But nine times out of 10, that's not what's happening in academic integrity. It's much more likely that the student is screwing up and trying to learn. Let's deal with that before the term paper's turned in, right? Let's look at it in flight, right. And then the other part I heard at the Academic Integrity Conference was that in Florida, students are legally allowed to have a lawyer now on any academic integrity investigation, right. So, right. Which then turns schools into courtrooms.
Right.
Which is, I mean, I didn't really sign up for ed tech to become a lawyer, right. Wilson's a lawyer and he's great. I'm not, and I wouldn't be, right. So, so just let's take that whole equation out. Let's have a different paradigm where we say, here's the evidence, here's the evidence of learning. And if we focus on the process—I mean, I'm going to be a little bold here—if I see a student has a really good learning process all the way through from research to conclusion, I don't really care if three sentences came out of ChatGPT, right. But if I see a student who comes up with this beautiful prose that has nothing behind it, I have no idea if learning has happened or not. And so, it's that, it's that shift, right. Like, on the detector side, speaking of that specifically, um, I always smile and wince a little bit when I see detectors, because one of the ways you train machine learning models, and all of these AI tools are in that group, right, is with what are called adversarial models. So, you have one model that says, hey, generate this thing. Another model says, hey, can you tell if that was generated or not? Turnitin and other detectors are literally doing that for ChatGPT, for any large language model, right. So, those two models then square off against each other and get better and better and better. And you always have this arms race, which now is fantastic for the AI tools because they get free training for their tools. They say, okay, spin it up against Turnitin and see if it passes, right. I mean, cool for the world, I guess, in terms of technology, but that's a mess. I don't want to get into that arms race, right, that battle.
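The arms-race dynamic described here is easier to see with a toy example. The sketch below is purely illustrative and reflects no vendor's actual system: a pretend generator and a pretend detector take turns adapting to each other, and the detector's catch rate erodes as the generator adjusts.

```python
# A toy illustration of the adversarial "arms race" described above.
# Nothing here reflects any real detector or model; the "feature" is a
# made-up stand-in for whatever statistic a detector might score.
import random

random.seed(42)


def human_feature() -> float:
    # Pretend human writing produces a feature centered at 0.0.
    return random.gauss(0.0, 1.0)


def generated_feature(shift: float) -> float:
    # Pretend generated writing starts centered at `shift` and drifts toward
    # human-like statistics as the generator adapts.
    return random.gauss(shift, 1.0)


def fit_threshold(human: list[float], generated: list[float]) -> float:
    # Detector step: choose the cutoff that best separates the two samples.
    candidates = sorted(human + generated)

    def accuracy(t: float) -> float:
        correct = sum(h <= t for h in human) + sum(g > t for g in generated)
        return correct / (len(human) + len(generated))

    return max(candidates, key=accuracy)


shift = 2.0
for rnd in range(1, 6):
    human = [human_feature() for _ in range(500)]
    generated = [generated_feature(shift) for _ in range(500)]
    threshold = fit_threshold(human, generated)      # detector adapts
    caught = sum(g > threshold for g in generated) / len(generated)
    print(f"round {rnd}: shift={shift:.2f}, detection rate={caught:.0%}")
    shift *= 0.5                                     # generator adapts to evade
```

Run as-is, it prints a detection rate that falls each round, which is the "square off and get better and better" loop described above, seen from the evader's side.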
[00:26:36] John Nash: Really cool. I think, uh, you have the most important question—
[00:26:38] Vendor 6: Of all.
Yeah. This one's a toughie.
Okay.
You ready for this one?
Please.
Um, what kind of good swag do you have here at your booth?
Ah, so we are short in the swag department, except that I—our swag is immediate gratification. So, I happen to have two pieces of chocolate left for the two of you, so I'll pass that to you.
Not bad after lunch.
Yeah, yeah, after a lunch mint.
[00:26:57] John Nash: Thank you so much.
Yeah.
Hope you'll check us out.
Yeah.
So, Online Learning podcast.
[00:27:01] Vendor 6: Gladly. Thank you very much. Can I get you guys both cards?
Yeah, please.
Thank you.
Harmonize
[00:27:06] Jason Johnston: So, you're okay with being on the podcast?
[00:27:12] Harmonize 1: Absolutely.
Okay.
Thrilled to be on the podcast.
That's good. Well, you want to start?
[00:27:16] John Nash: Yeah, sure. So, uh, well first tell us your name—
[00:27:19] Harmonize 1: And what you do. I'm Alan Manley and I'm the Director of Sales at Harmonize.
[00:27:23] John Nash: And what's Harmonize do?
[00:27:24] Harmonize 1: Well, Harmonize is a course discussion and collaboration platform that you can integrate with your learning management system in order to promote better student engagement. So instead of the boring discussions you get by default in Canvas, Blackboard, whatever the LMS is, with Harmonize you get a more social media-like interface with the ability to set multiple due dates for a discussion, integration with Turnitin for ChatGPT and plagiarism detection. A lot of cool features.
[00:27:53] John Nash: Cool features. Um, so one of the overarching wonderings that Jason and I have on the podcast is this idea of how we might, how might we humanize online education. Where do you think that fits with what Harmonize wants to—
[00:28:09] Harmonize 1: Achieve? Yeah, absolutely. I think in online education, it's very easy to feel disconnected, to feel siloed, like you're all alone there. And I think especially, you know, if you're an online student, you're in an online course and all you see is a bunch of written text people are writing, but nobody's really communicating with you. When it comes to building course communities, having students do things like creating videos helps. The simplest way to do it: first week of the semester, introduction to the course, introduce yourself, create a video. Everybody gets to actually know each other personally. And especially, I think, the onus of responsibility is even more so on the instructor. If you're an instructor and you're teaching an online course, first week of the semester, create a video of yourself. Let people know, not necessarily where you got your PhD from, but like, what are your interests? What are your hobbies? And I think that helps everybody feel more comfortable, and you feel that community, like you're actually a part of something.
[00:29:13] John Nash: Really good. Do you have anything you want to ask?
[00:29:15] Jason Johnston: So, what do you think is unique about Harmonize that helps to achieve some of those—
[00:29:20] Harmonize 1: Goals? Yeah, so when it comes to Harmonize, what we know from students is that students today use social media.
Okay.
So, they are used to a specific kind of platform for communication. And what you see in the learning management system, in the discussions, doesn't really mimic what they're doing in their everyday lives when it comes to social media. So, when you have a platform like Harmonize that can—you know, you paste a link and you get that preview, that picture. Uh, you click one button to create a video. Uh, you create a poll with one button. Um, being able to have that more social media look and feel really separates Harmonize from, from some of the competition, from some of the rest of the tools you'll get out there. Um, it's a great collaboration tool, and for instructors as well. I mean, the ease of grading, we get compliments all the time. We were in a presentation yesterday with the University of Tennessee–Martin, and the presenter said he saves two-thirds of his grading time using Harmonize, because with Harmonize you don't have to manually check: do I think they used ChatGPT here? Do I think they plagiarized? Now you get all that built in. Um, you can see that they're actually meeting their milestones. You know, they're getting their posts done by Wednesday, then coming in and making their comments later on in the week. So, it's both from an instructor and student side. There's just a lot of benefits that I think really help.
[00:30:52] Jason Johnston: Thank you. All right. My final question is, uh, really important.
Okay.
This is the key to a lot of the ways we are evaluating people all day long, right? When it comes down to it on the podcast, whether or not they make it on the podcast. Uh, what kind of good swag do you have at your booth—
[00:31:06] Harmonize 1: Today? Well, uh, most of the swag has been taken, but, uh, I will tell you our most popular item by far was a, was a charging—one of those charging ports where you, where you hook up the USB, and there are three different charger types, whether you use an iPhone, whether you use an Android, whatever it is, you know, you got your charger. But unfortunately, we ran out of those pretty early on. Real hot-button item. We do have a couple of breath mints.
Uh—
[00:31:31] Jason Johnston: Um, are you trying to say something, Alan? I'm going to take a step back here.
[00:31:35] Harmonize 1: Oh, no, no, no, no. Just, uh, just pointing out the swag.
[00:31:39] Jason Johnston: Breath mints are always good. I'm in. Okay. We'll take them. We'll take them.
Thanks so much, Alan.
Thank y'all.
Inscribe
[00:31:44] John Nash: So, oh, well, tell us who you are and what you do.
[00:31:47] Vendor 7: Uh, my name is Katie Kapler. I am co-founder and CEO at Inscribe, and we are a virtual community platform. We create online spaces to connect students with peers, faculty and staff so they have a place to turn anytime they need help, they need advice, or they just want to connect with other human beings.
Nice.
[00:32:05] Unknown: Well said too.
[00:32:07] Vendor 7: Thank you. Oh, it's like I've been saying it a few times today.
[00:32:10] John Nash: I know. Well, speaking of podcasts, I listen to and watch a lot of Chris Do. I don't know if you know him; he's a sort of business guru for creatives. And he's been talking a lot about how creatives really need to get their taglines together, be clear about what they do, and simplify, simplify, simplify. And so yeah, that was very, very—
Yeah.
That's awesome. I get what you do. Um, so one of the overarching wonderings that Jason and I have as we march through our podcast is how might we humanize online ed?
Oh wow.
How do you think Inscribe walks that line, uh, and helps us humanize online ed?
[00:32:51] Vendor 7: It's like you wrote that question just for me. I love it. Well, so one of the things that is true about online is it can be a very isolating experience, and I actually think that one of the misconceptions around online and non-traditional students that has been pervasive for many years is that they don't need, they don't want, and they don't care about connection with other students in their online learning. The reality is that could not be further from the truth. Not only do they crave it, but it actually is also demonstrated that when you have peer connection and you have better relationships among your peers, you are more likely to be successful and persist in both the short and long term. So that is very much what we're all about. We want to say to students, outside of what you're prescribed to do in your class, or what you're told to do in group projects, how do we create spaces for you to create, to connect in a very authentic way, especially with fellow students in your program?
Nice.
[00:33:44] John Nash: And so, what do, what do those connections look like in their, in their best form?
Yes.
[00:33:49] Vendor 7: So, they're varied. Um, so certainly some of those connections are social in nature. So, um, sharing where you're from, talking about the pets that you have at home, um, and those sort of social connections often spill over into things like, you know, I haven't been back to school in 20 years. I haven't done math in 30 years, and all the anxieties that come out. When you create spaces for students to talk to each other, it is beautiful, the vulnerability that they're willing to bring to these locations and the ways in which their fellow students jump in to support them and help them. The flip side of that is a lot of the connections are also very just practical and tactical, like, when is this assignment due? Or how do I turn it in? Or does anybody get what's going on in homework number four? And certainly, you can get answers about those things from faculty or other individuals, but there's something about getting that answer from a fellow student. They speak the same language as you. They might reinforce that they also were struggling with that. And so, when you, when you have the ability to see what other students are struggling with or dealing with, that also, you know, it gets at things like imposter syndrome and lack of confidence, which frankly are very pervasive in the non-traditional student population.
Yeah. Cool.
[00:35:06] John Nash: Um, you know, I'm thinking about, as you're describing this, uh, that I'm a Director of Graduate Studies in a program that is an all-online doctoral program, um, standing next to one of the graduates of this program. And, uh, because it's all online and people are far-flung, we accidentally sort of found out that it was important for the students to have what we now just call a backchannel. And we don't force it. We just say, make a backchannel. We don't care how you do it or what platform you use. And as you were describing these, these connections, it sounds a lot like what they're doing in this thing we call the backchannel. Um, and so I'm wondering, to you, Jason, now, as you were a doctoral student who had a backchannel: what was your platform, by the way? What'd you guys use?
[00:36:08] Jason Johnston: Uh, we used Voxer.
Yeah.
That was—we were the weird ones. They, they went on to Voxer. Some were on Telegram. They all started on Google Chat. I mean, just, they all went somewhere. Discord, right?
Yeah.
Nobody was running Discord yet, but, um, but did it do that? Did it do what, what, um, can you—
[00:36:12] John Nash: Describe it?
[00:36:12] Jason Johnston: Yeah, I would say it did in, in a lot of ways without—
[00:36:25] John Nash: Without any effort on our part to say, you guys must do this, because we just, we just crossed our fingers. We do know that we need to be out of this. We are not here. We'll never go see it. And then sometimes we saw the memes they made about us.
[00:36:48] Jason Johnston: We were very careful about not letting any professors into our—although we were positive for the most part. It was not a griping session because of our experience, because we were all having a positive experience. But it was also a confusing experience. And so, there was a lot of trying to understand what's going on and is so-and-so ghosting me sometimes? Is, you know, what are we supposed to do next? Those kind of things were happening a lot—
[00:36:48] John Nash: In that backchannel.
I wonder now though, this is great that you have this, because I guess we're lucky that we said you guys should have a backchannel and then it just sort of worked as the culture. Because I think each successive cohort comes in and goes—they'll go, the old ones are saying to the new ones, what’s your backchannel going to be? And so, but if we didn't, I could see us maybe thinking about doing this, or I'm saying, should we do Inscribe now, or would we do damage to a little weird cultural thing we've got now, which is this vibe we have, which is like, you guys going to make your backchannel.
[00:37:16] Jason Johnston: But some of the advantage we had is that it was a leadership program, educational leadership program. And so, the, we had a few of us that were pretty, pretty familiar with getting something started and trying to get people to adopt something for the positive of good outcome of a group. And so, we kind of took that in upon ourselves. I am thinking about lots of spaces where people just don't have that yet.
Right.
And, and so if you all had—I mean, that's all the facilitation you did, and you weren't checking up on us to see if it was still going on two years later. And it was. Um, and it is still now actually, but not on Voxer. We've moved on to other platforms. Um, and so if you're trying to really make sure something happens, I think that there's some, uh, instructor and administrative leadership that, that probably would need to take place to, to ensure that it's—
[00:38:10] Vendor 7: Happening. Well, and I think, um, I mean, to your point, you're in a graduate program. It's probably a somewhat smaller program. Um, a lot of the work we do—we work with graduate students. We also do a lot of work with undergraduate level, and what we found is they are, they're going to Discord, they're creating sometimes these side spaces. Sometimes they're more gripe sessions than anything else. Um, but they tend to be a little exclusive. So, when you're working with a bigger population, not everybody's welcomed into that space. Not everybody else is invited. So, it's good to have a place that everyone has access to. Um, depending on the type of community that's being built, sometimes faculty or administration are more or less active, but even still, the insights that they gain by being in those spaces and seeing what students are talking about and worrying about, I think, is invaluable to them as well. And sometimes if these spaces exclude them, you don't get the benefit of that learning.
Yeah.
[00:39:02] John Nash: I'm glancing over here at your banner, but I see—I'm a DGS, and so I think, you know, enrollment—if, if there's crossover. So the, the good things that occur when a student cohort can get to know each other online and build a community, but then there's the, just the, just enough appropriate crossover where the DGS or the undergraduate studies director can facilitate an academic problem or an enrollment issue or can get a word out because nobody's reading email, but I know you guys are on your community and you're—is that, am I feeling that's kind of what's going on here?
[00:39:33] Vendor 7: A hundred percent. And sometimes students do want the voice of, um, authority, I guess, for lack of a better term. Like they want to know that this is the official and the sanctioned answer. And sometimes there are issues that get raised that students can't resolve on their own. So having that cross-pollination of leadership and peer and student-driven, um, is really what makes them, I think, ultimately thrive in all those different areas.
Nice.
[00:39:56] John Nash: Cool. I think Jason has the hardest question of all.
Okay.
You—
[00:39:59] Jason Johnston: Ready for this one? What kind of swag do you—
[00:40:02] Vendor 7: Have? At our booth today, we have—so we all have young kids—fidgets, mini-Rubik’s cubes, pens, stickers. So, we are like the favorite of grandparents and, um, families with young children. We'll have to dig some up for you so you can take them home.
Okay. Thank you so much.
Absolutely. Thank you, guys. Great to talk to you.
Studio Ending
[00:40:25] John Nash: Wow. Jason, that was amazing. I learned a lot about what's out there now in the market.
[00:40:32] Jason Johnston: Yeah, it was pretty cool. I think I would say I was pleasantly surprised by the speed of some of the answers. You could tell people have been thinking about some of this, and that they obviously have aspects of their products that are really tuned towards humanizing education.
[00:40:50] John Nash: Yeah. Yeah, I was pleasantly surprised how many of these folks are really starting to think about how their products come off in the space of belonging and creating community and mental health, for instance, specifically with UWill. I wasn't familiar with them before we talked to them, and I was pretty impressed with the way they're thinking things through.
[00:41:13] Jason Johnston: Yeah. Yeah, that was interesting. And it's that other level of humanizing online education, right? Because I think you and I think about the classroom often. So, we think about how we are going to humanize our assessments, our learning modules, our interactions, and create more teacher presence and more student interaction, all those kinds of things within the classroom kind of structure. And really, they're thinking about a larger piece of online learning that has to do with your mental health and your own personal wellbeing: that if somebody's not doing well, then it's going to be really difficult for them to learn online.
[00:41:55] John Nash: Yeah, a really good point about how this moves a level of abstraction away from the classroom, but there still have to be these infrastructure pieces that on-campus programs have but that may not be available to programs that are seeing students from around the country or around the world. As we discussed with them, there are state-level regulations related to how therapy can occur, and so it can be a block for on-campus services if you live out of state.
[00:42:25] Jason Johnston: Yeah. Yeah. So, it's good that they're thinking about it, and I think institutions are thinking about that right now. As well, we talked to a couple of larger LMS vendors there, so that was interesting. And then a few other vendors that are trying to personalize the online space, either as an add-on to the LMS or separate from it. And then one that was looking at proctoring solutions for working with students on assessments. And so, I felt like we got a pretty good smattering, if you will, of different kinds of solutions in the space.
[00:43:02] John Nash: Yeah. One thing that struck me after we walked the floor is, as a faculty member, how out of touch I am with the bigger decisions that get made about selecting an LMS. We've been using Canvas for a very long time at my institution, and I didn't have a huge role in deciding that we went to Canvas. So, for example, I raised a few eyebrows amongst the instructional design friends I made down there in Nashville when I told them I'd never heard of D2L until I saw—
Right.
The booth.
Who are these guys? And getting to talk to them was cool, and to understand that there are other players. I mean, I can remember when Instructure was the Blackboard killer. They were setting up tents at the Blackboard convention, right. And now I think they're a little long in the tooth, and there are new folks coming on board trying to be the Canvas killers, because there are still features that could come along to make Canvas better. And whether or not Canvas gets on board with that remains to be seen.
[00:44:04] Jason Johnston: Yeah. And I don't know if you have seen this in terms of adopting technology, and we've talked about this in past podcasts, but in some ways, technology is always set up to disappoint us. It never quite fulfills all the features that it promises. And so, as we get into it, we start looking over the fence at whether the grass is greener on the other side. And so, it feels like even some of these larger aspects of online learning, the LMS, go through seasons. They have the honeymoon stage, and everything seems great, and then they plateau and things get a little, maybe just a little, stale, right. And then they start looking over that fence at, at maybe some of these other solutions. And yeah, I wonder how that will go for companies like Canvas and others, as it feels like there's always room for upstarts to come along.
[00:44:59] John Nash: Yeah, I think you're right. And one thing that drove this home for me was our conversation with Instructure when they said that they had incorporated badging now into the main mothership of the Canvas LMS. There are so many different plugins, and I wonder if they're going to start to think about where the most popular ones are and then just buy those and bring them into their infrastructure. Yeah, badging seems very interesting to me. I've been in three or four semi-serious conversations at either the department or even the college and university level about badging at our institution. And they all went nowhere because it was just a little too complicated and we weren't sure which platform to use. So, bringing these sorts of things in makes sense, and I wonder what the next popular feature is that they'll bring in.
[00:45:47] Jason Johnston: Yeah. We know the feature that is popular right now but that we didn't see much of there, which is AI. We've been talking about it a lot. I think, given that this was happening in April, many of the upstart companies aren't quite there yet to have established a space on that floor. And so, we didn't really see a lot of AI solutions, like companies whose main product is an AI learning tool. We did see a couple of solutions that are incorporating AI now, and they were advertising that.
[00:46:24] John Nash: Yeah, I think we saw a couple of the vendors talk about the way in which they were going to help institutions and instructors detect the use of AI in student work, and we saw one vendor in particular that was interested in thinking about how AI can be a helper in the process of showing work. And I think those were at opposite ends of the spectrum. You can hear it in the way the PowerNotes representative talks about how detecting AI as a gotcha policy is not part of their regimen, that they want to help instructors figure out how AI can be part of a learning process and that students can show their work, versus Proctorio, let's say, or even, I think, was it Harmonize a little bit?
Yeah.
They were plugging in things to let teachers catch people using AI, under the presumption that AI is bad.
[00:47:22] Jason Johnston: Yeah. Those were some interesting conversations that we got into pretty quickly with some of these vendors. But the other part of our findings was really important, which was the swag—
[00:47:34] John Nash: The most important question of all.
[00:47:37] Jason Johnston: Yeah. To see what kind of swag. And so, we got—some of the things we got: breath mints, a cold pack, fuzzy moose, fidget popper. We got a little non-fanny pack. We thought it was a fanny pack, but it was like a—
[00:47:50] John Nash: I think it's a little gear bag.
[00:47:51] Jason Johnston: A little gear bag, yeah. Got some socks, and we got—from Proctorio, they had created these little mini boxes of cereal that were cute, called Proctorios.
That was funny.
Yeah, it was funny. A couple candies from people that had left candies, and a pen, I think, from people that had run out of swag by this point in the conference. But we—so we are talking about all these different companies. We are not receiving payment from any of these companies, so—
[00:48:38] John Nash: Absolutely not.
[00:48:38] Jason Johnston: Is that clear? So, we try to keep a clear, distant, third-party kind of view of these companies so we can critically assess what they're doing. However, we did get swag from some of these companies, and so we do have to put that out there. Small swag, nothing like AirPods Pro or anything like that.
[00:48:56] John Nash: No. Tiny swag, like knockoff brand York Peppermint Patties, not real York Peppermint Patties.
[00:49:21] Jason Johnston: Exactly. So, nothing that could really sway us too much in one direction or another. But what do you think, what were your—what's your top swag marks for the conference?
[00:48:56] John Nash: I have to be truthful. The plushy moose from D2L is my winner. It's cute and super soft. And I, yeah, I like that one. Although the fidget poppers, I think, might be interesting to my slightly post-adolescent children. I didn't quite get it, but it's like popping bubble wrap: you just turn it over and push all the bubbles down, and it's made out of silicone.
[00:49:21] Jason Johnston: Yeah.
[00:49:21] John Nash: What about you?
[00:49:21] Jason Johnston: Yeah, again, I think the moose got the top pick. One, it was really cute; my daughter, though she's a teenager, appreciated it. And we also just had a great conversation. It was hard not to appreciate, having grown up in Canada, getting to talk to somebody who, it sounds like, lived just a few minutes down the street from where I grew up, and having that Canadian moment with the moose connection. So, it was hard to beat. Although, after talking to a bunch of people on the floor, the breath mints were very helpful. A very practical gift at a conference, I believe.
[00:49:59] John Nash: I agree. And they came in a super thin, credit-card-sized container to dispense them, which took me a minute to figure out how to open. You have to break down the side corner. But it would fit inside my little everyday carry pouch where I have other little thin things. So that was handy.
Yeah.
[00:50:17] Jason Johnston: Super easy. And I believe those were from Harmonize. Am I right on that?
[00:50:21] John Nash: I think, yes. Yes, they were. Yeah.
[00:50:24] Jason Johnston: Yeah. So that's good. That's our wrap up. Anything else to say about OLC vendors or some of the ed tech space—
[00:50:32] John Nash: People? I think I would love to hear the reaction of our listeners to what the vendors had to say. I don't think that our question is the one that they get every day, and I think they handled it pretty well all around.
Yeah.
But I'm wondering if they hear anything between the lines or anything that we missed, or questions they wish that we had asked. Because this is a big part of where online learning is going. There's always going to be big packages, big systems that universities are going to buy, and it's going to be based upon the, the ratings and beliefs of the faculty and instructional designers that, that use these things and walk these floors. So yeah, tell us what you heard in this. I'd love to know.
[00:51:12] Jason Johnston: Absolutely. And at onlinelearningpodcast.com you can find all of our episodes, as well as links to our LinkedIn page and the Online Learning Podcast LinkedIn group. You can find this episode posted there and comment below, and you can always hit us up on LinkedIn. If you have any questions or other comments, or if there are other things that you think we should be talking about, feel free to send us a message.
[00:51:37] John Nash: Yeah, definitely. Cool. Jason, it's been fun walking the floor with you.
[00:51:41] Jason Johnston: Yeah, this was good, John. My, my feet are tired, but my heart is full. How about you?
[00:51:47] John Nash: Yeah, that's it exactly. You captured it.
[00:51:51] Jason Johnston: That's good. Good talking to you.
[00:51:52] John Nash: Yep. Talk to you later.
Bye.

Monday Apr 17, 2023
Monday Apr 17, 2023
In this episode, John and Jason talk about John’s use of AI in his doctoral mentoring and personal research, if prompt engineering will be a job, comparing large language models, detecting AI writing, and if AI can create a podcast theme song.
Join Our LinkedIn Group - Online Learning Podcast
AI Large Language Models to Test
ChatGPT
Poe.com
Bing Chat
Google Bard
AI in Research Tools
https://www.researchrabbit.ai/
https://elicit.org/
https://consensus.app/
AI Detection Tools
https://www.zerogpt.com/
https://gptzero.me/
Links and Resources:
How to cite ChatGPT in APA
The Various AI Generated Podcast Theme songs Google doc
Please comment and let us know what you think and what you like!
Research Papers:
“GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models” by Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. https://arxiv.org/pdf/2303.10130.pdf
“Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity” by Jennifer Haase and Paul H. P. Hanel. https://arxiv.org/pdf/2303.12003.pdf
Opening theme music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Closing theme music: Eye of the Learner - composed and arranged by Jason Johnston
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
False start
[00:00:00] Jason Johnston: When to pull out the AI tool so that you're not bringing another party too early into the conversation.
[00:00:08] John Nash: Hmm.
No, I don't. No, I know. I don't know. I just know it when I feel it.
[00:00:14] Jason Johnston: Kind of like love.
[00:00:16] John Nash: Yes, exactly.
Start
[00:00:19] John Nash: I'm John Nash. I'm here with Jason
[00:00:20] Jason Johnston: Johnston. Hey John.
Hey everyone. And this is Online Learning in the Second Half, the Online Learning podcast.
[00:00:27] John Nash: Yeah. We are doing this podcast to let you in on a conversation we've been having for the last two years about online education. Look, online learning's had its chance to be great, and some of it is, but a lot of it isn't. But we can get there. How are we going to get to the next stage, Jason?
[00:00:44] Jason Johnston: That is a great question. How about we do a podcast to talk about it?
[00:00:48] John Nash: I agree. Let's do that. What do you want to talk about today?
[00:00:53] Jason Johnston: Well, I was curious about just where you are at with your testing of different AI and how that relates to your own teaching and mentoring of students that is going on for you right now.
[00:01:10] John Nash: The stage I am at now is where I have been for the past few weeks, which is having conversations with my doctoral students about ways that ChatGPT can be helpful in focusing their writing on difficult issues to express. They, they, I've talked a little bit about this in the past, but many of my students are doing mixed methods, action research dissertations that are kind of micro-politically fraught. They are about issues that are of importance to them about creating change in an organization. And so, while they know tacitly what they want to say and what they want to do, because they're in the middle of the situation that they want to change, it can be hard to express that clearly.
Right.
So, we'll sometimes use ChatGPT as a way to think about what are the big pillars that they want to dive into.
Mm-hmm.
And have it help us make some connections that we might not otherwise be able to make across these seemingly disparate aspects of the organizational change.
[00:02:16] Jason Johnston: Hmm. So let me picture it then. Do you use it in real time? So, you'll be in a Zoom with one of your candidates? Yes. And then you, you pop it open and you use it in real time. Describe that. Like what you might do if I was your doctoral candidate and I was having a difficult time getting down to the point or trying to describe something or brainstorming or whatever it is.
[00:02:41] John Nash: Yeah, I won't have it open as the purpose of the call, but we will, we'll have weekly meetings on progress towards a prospectus or some aspect of a proposal, and we're discussing, yeah, some part that the student is stuck on. I'm not sure how to express this or this new wrinkle has come up, and I'm not sure how we're going to handle that new aspect. And as they struggle to think about how that should look, I'll ask them to say a few big sentences about what they think the key issues are on that matter. And then I'll say, well, let's just open up an AI model and let's see what it can do with these ideas. And so, then I'll share my screen and I'll say, talk to me out loud about the three big bullet points you think are important here, and put those down. And then I'll, we'll say, well, what is your goal here? I said, well, I want to figure out how to connect these ideas because they seemed sort of disparate.
Mm-hmm. And say, well, let's put that in there. Connect these three ideas and what are the similarities and what are some things that are maybe different about them. And then it'll spit out some output that we'll look at together and we'll discuss together so they can have some thoughts on the, on the direction they want.
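For readers who want to picture that screen-share moment, here is a minimal sketch of the exercise. It assumes the OpenAI Python client (v1 or later) with an OPENAI_API_KEY environment variable set; the model name, prompt wording, and sample bullet points are all illustrative, not the actual prompts used in these advising sessions.

```python
# A minimal sketch of the "connect these three ideas" exercise described above.
# Assumes the OpenAI Python client (v1+) is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()


def connect_ideas(bullets: list[str], goal: str) -> str:
    # Turn the student's spoken bullet points into one prompt that asks the
    # model for similarities, differences, and possible connections.
    bullet_text = "\n".join(f"- {b}" for b in bullets)
    prompt = (
        "A doctoral student studying organizational change is working with "
        "these three ideas:\n"
        f"{bullet_text}\n\n"
        f"Their goal: {goal}\n"
        "Identify what these ideas have in common, where they differ, and "
        "two or three ways they might be connected in a problem-of-practice "
        "dissertation."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Invented example bullet points, standing in for what a student dictates.
    print(
        connect_ideas(
            ["teacher retention", "instructional coaching", "school climate surveys"],
            "connect these seemingly disparate aspects of one change initiative",
        )
    )
```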
Great question, but I didn't answer it
[00:03:51] Jason Johnston: That's great. So, we've had a lot of conversations about how AI can maybe help us to think through things without replacing our thinking. Do you have any kind of litmus test for when to pull out the AI tool so that you're not bringing another party too early into the, into the conversation?
[00:04:13] John Nash: Hmm.
No, I don't. No, I know. I don't know. I just know it when I feel it.
[00:04:19] Jason Johnston: Kind of like love.
[00:04:21] John Nash: Yes, exactly.
Pivot because I didn't answer the question
[00:04:24] John Nash: Another way that I have used ChatGPT is to train it on the criteria for rating sections of dissertations, based upon key authors whose key ideas we want students to adhere to inside literature reviews or, mm-hmm, research designs, particularly in the context of mixed methods action research, which has a different tack than a traditional sort of five-chapter, theory-building, knowledge-creation dissertation. It's very action oriented. It's very contextual. It's locally bound. And so, the AI model has been helpful in helping students adhere to some of those structures that they can miss, because it's important to be very detailed in these reports about the setting in particular, the stakeholders that you have talked to, and the kinds of information you've collected from them. So, I've been able to use ChatGPT to teach it what it should be looking for with these key things and then subject the model to student writing to help me catch things that I might miss in the context of their narrative.
[00:05:33] Jason Johnston: Hmm.
Yeah, those are, those are great use cases. Do you have some prompts depending on your purposes, what you're trying to do?
[00:05:39] John Nash: I have different prompts for different purposes. A significant portion of the studies that these students do is predicated on a diagnosis of a problem of practice, a leadership dilemma in their organization. And so, I have some prompts that help with the diagnosis section, to say whether or not there's help that's needed in tidying that up. But then there are also prompts for research design. Since these are mixed methods designs, they might be concurrent or they might be consecutive designs. They could be sequential. And so, needing to be very detailed in that, I have prompts that help suss out matters related to how well those sections are written.
[00:06:15] Jason Johnston: Hmm. So, you kind of keep those prompts handy. So, you have some that you've already created. So, you don't necessarily always create those prompts on the fly. Yeah. Okay. Yeah. So, you have like a, a prompt menu.
[00:06:28] John Nash: Yeah. A colleague of mine and I have created a OneNote document on OneDrive where we keep different types of prompts for different issues and across different kinds of tools. So one is with ChatGPT, and we have a whole catalog of prompts that we use for common issues in there, but also for some of the research tools that are out there. We have prompts in the OneNote for Research Rabbit or for Elicit, mm-hmm, or Consensus and some of these other tools that are, mm-hmm, used not necessarily for student-facing work, but maybe for our own research and the kinds of things we're thinking about in there.
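[For the curious: here is a minimal, hypothetical sketch of what a reusable prompt catalog like the one described above could look like in code rather than in OneNote. The purposes, criteria, and prompt wording below are invented placeholders, not the actual prompts John and his colleague keep.]

```python
# Illustrative sketch only: a small catalog of reusable prompts, keyed by purpose.
# The {placeholders} get filled in at use time; the prompt text here is invented.

PROMPT_CATALOG = {
    "diagnosis_section_review": (
        "You are reviewing the problem-of-practice diagnosis section of a mixed "
        "methods action research dissertation. Using these criteria: {criteria}\n"
        "Identify where the following draft is vague about context, stakeholders, "
        "or evidence collected, and suggest what to tighten:\n{draft}"
    ),
    "research_design_review": (
        "Check whether this mixed methods design section clearly states if the "
        "design is concurrent or sequential, and whether the data sources map to "
        "the research questions:\n{draft}"
    ),
}

def build_prompt(purpose: str, **fields: str) -> str:
    """Look up a prompt by purpose and fill in its placeholders."""
    return PROMPT_CATALOG[purpose].format(**fields)

if __name__ == "__main__":
    print(build_prompt(
        "diagnosis_section_review",
        criteria="local context described; stakeholders named; data sources listed",
        draft="Our school has a problem with engagement...",
    ))
```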
[00:07:02] Jason Johnston: Mm-hmm. Yeah. I think that prompt engineering is it, do you think that's going to be a, like a position in research centers or at, at companies? I, because it's a, it's a thing, right? You,
It is a thing. It is a thing.
You're keeping these because it takes time to really craft a prompt that gets you what you want. It's like the classic thing with computers, really, and AI is classic computing in that sense: garbage in, garbage out. If you don't give it a good prompt, it may read between the lines a little better than computers used to. It won't spit back an error, and maybe that's to its detriment. It may just spit back what it thinks you want to hear.
Mm-hmm.
Without really understanding. But yeah, it, I mean, simply the fact that you're keeping a list of good prompts shows that it's enough work that it's worth it to you to keep a list of them, right?
[00:07:56] John Nash: Yes. That's fair. Two things that I hear you asking. One is, do I think that'll be a job one day? I don't think so. I know there was a lot of chatter across different publications that this is going to be a thing and that there are high-paying jobs to be had for good prompt engineers. I'm not certain that's going to be the case. It's, it is true that you do have to prompt these machines very carefully to get good quality responses back. But I get the feeling that it's going to be more like any of us tech geeks that are out there that just became good at a tool, and then, you know, they're, they're going to be the people that are relied on to, to figure out how to do this.
Right.
That being said, I think AI is going to take care of itself in this regard. I mean, all of the prompts that are going in now surely are probably cataloged somewhere. We have to remind everyone that everything we're talking about, the shifts that are happening across labor markets, across education, the dialogue happening in P–12 education around teaching and learning, are all based on tools that are six months old and in public beta.
Right.
Full stop. So, I think, you know, the prompt engineering is going to be handled by the AI models because, mm-hmm, it'll, it'll probably teach you how to ask it. Like, it'll probably come back with better Socratic questions like, did you mean this? What do you really want to do here? Because it is garbage in, garbage out at this point. You write poor prompts, which are basically, you ask it sort of one sentence, you know, you know, write my paper for me.
Right?
That, that, that gets you nothing. The reason why my colleague and I catalog these prompts is because they're lengthy. Because they really are, and I think also, Jason, it's a misnomer. When we say, did you engineer that prompt, did you write that prompt, it sounds as if it was just the one question to get it. These are six or seven prompts nested after each other, based upon scaffolding some information that you need the model to know before you can go to the next thing. So that's why these have to be saved. You can never remember them to get them right once they work.
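[For the curious: a rough illustration of the nested, scaffolded prompting John describes, where the model is given the criteria first and the student draft only afterward. This is a minimal sketch that assumes the 2023-era OpenAI Python client (openai < 1.0) and a "gpt-4" chat model; the prompt text, criteria, and draft are invented examples, not John's actual prompts.]

```python
# Minimal sketch of chained prompting: each message scaffolds context the model
# needs before the final request. Assumes the pre-1.0 openai client (2023-era);
# the prompts below are invented placeholders.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

CRITERIA = "Name the stakeholders, describe the local context, list data collected."
STUDENT_DRAFT = "Our district adopted a new LMS last year..."

messages = [
    # Step 1: tell the model what role it plays.
    {"role": "system", "content": "You are an advisor for mixed methods action research dissertations."},
    # Step 2: teach it the criteria it should look for.
    {"role": "user", "content": f"Here are the criteria a diagnosis section must meet:\n{CRITERIA}"},
    {"role": "assistant", "content": "Understood. I will check drafts against those criteria."},
    # Step 3: only now hand it the student writing and ask for the review.
    {"role": "user", "content": f"Review this draft against the criteria and list what is missing:\n{STUDENT_DRAFT}"},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(response["choices"][0]["message"]["content"])
```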
[00:10:08] Jason Johnston: Yeah, once they work. Yeah. And it's a learning process for all of us. I mean, I think it's one of the reasons why as you and I are interested, we share some of the prompts and responses back and forth because,
Mm-hmm.
It's interesting to us, first of all, but it's also like, oh, look what I got it to do today, kind of thing. Right.
And, and I think all of that is informative to, as we learn how to work with AI better in, in more productive ways.
[00:10:37] John Nash: Mm-hmm. Definitely.
[00:10:40] Jason Johnston: Yeah. So, and as you're doing these prompts, do you have a particular AI language model that you reach for all the time?
[00:10:49] John Nash: I reach for ChatGPT, and yeah, I use, I, I pay 20 bucks a month to use ChatGPT-4.
Okay. Yeah.
Yeah. And that's even, that's limited to like 25 prompts over a three-hour time period. They even gate it now at that level. But the, the quality is, I think, better than ChatGPT-3.5, although it's slower. ChatGPT-3.5 is, is not bad for doing sort of perfunctory administrative matters related to, you know, analyzing some memos or putting something out just quickly.
Yeah, that's the one I tend to reach for.
[00:11:23] Jason Johnston: Okay. And well, that's interesting. I didn't know that you were paying for it. Yeah. Do they, do they send you a tote bag or anything like that? Do you get any stickers or badge to put on your
[00:11:33] John Nash: I have to, I have to ask it to design my own tote bag and yeah, I have, I have it write the prompt for Midjourney so I can design the logo for the tote bag that they would get.
[00:11:42] Jason Johnston: Right, right. That's good. Yeah. Still, currently, if you're using ChatGPT-4 strictly at OpenAI, it cannot access up-to-date information, right? It's still 2021 and previous kind of thing.
[00:11:57] John Nash: So, I will use Bing now and then to pull some uh, internet responses where I think I'd like to look at some literature or find some ideas on some journal articles or other items to chase down. And then I can take those findings and put them in something like Research Rabbit, which does a sort of a semantic web of literature based upon titles of journal articles you can put in there. So, then I can find related research to an area that I'm interested in.
[00:12:29] Jason Johnston: Mm. Yeah, yeah. I, I, I played across a number of them, and, well, I'll tell you in a minute about something kind of fun that I did, but for something more serious, I played across a number of the AIs trying to get them to summarize some articles. I'd read a pretty decent article about AI. What I liked about the article is that it had a lot of references, a lot of current references to articles I had never heard about.
Okay.
And so, I took the references and then I was trying to prompt the different language models to give me summaries of the articles.
Mm-hmm. So, saving me the time of having to go out, find the article, download it, get a summary. Although I could read the different abstracts, I actually wanted it to summarize the main points, and I had some varying results. Some of it had to do with currency, because some of the articles were so current that ChatGPT couldn't register a lot of them; it didn't know they existed to, to look them up. Right. Bing was able to do it better for me in that case, because it had up-to-date information and could also leverage ChatGPT to do it. And so, I found kind of my best results really in either Bing or in Bard. So, Google Bard can also handle up-to-date information.
[00:13:52] John Nash: Bard's getting better. I, I have to admit, I haven't tried Bard yet.
[00:13:56] Jason Johnston: Yeah. And for this kind of task in terms of summarizing, I found that Bard was actually pretty good and gave me some pretty good summaries of these articles. And now of course I'm trusting Bard that it's actually summarizing the article and not just making things up. However, in this case because I just wanted a, a quick summary of these articles to see if, which ones I would maybe be interested in reading more, it was actually pretty helpful, I think.
[00:14:23] John Nash: That's excellent. And did you come to this conclusion because you were using the other system that compares it, is that called Poe?
[00:14:32] Jason Johnston: Yeah. So, another great tool that I've been using is, is poe.com. Okay? You're able to go into Poe and you are able to select different language models on the left-hand side. And for those that are listening, that are not paying $20 a month, like me, I'm not paying $20 a month yet for GPT-4, you can basically get one token per day to GPT-4, and so you could do some testing of your own.
Nice.
And especially if you've already crafted a prompt and you don't have to ask a lot of questions, you could send out that one well-crafted prompt and then see what GPT-4 will spit back. So, yeah, it's Poe, and it has a fairly decent mobile app as well that allows you to do the same thing.
Mm-hmm. So, I've been kind of checking out a few of the mobile apps that way. Bing has a pretty good mobile app as well that will actually let you talk to it. And then it will answer you back.
[00:15:26] John Nash: That's, yeah. So that's interesting. Well, what do you think, Jason, about the tools that are coming out that are going to try to catch people using these models?
[00:15:38] Jason Johnston: Yeah. It's interesting. I think the most well-known, or at least they say they're the most used of those, is ZeroGPT. And for those listening, you can try it for free and just look up ZeroGPT. I think it might even be, is it zerogpt.com? Yeah, that's what it is. And you can paste in some text and see if it recognizes it based on perplexity and burstiness. Right.
[00:16:07] John Nash: Yeah, that's it. I had some fun playing with this because a friend of mine posted on LinkedIn about how GPT Zero, oh and it's yeah.
Yeah. GPT Zero, is that what we said?
[00:16:19] Jason Johnston: Yeah. GPT Zero. Oh, sorry. There's two of them. I guess there's actually GPT Zero and then not to confuse things, there's ZeroGPT. Yeah.
[00:16:28] John Nash: You're right. Let me see which one I used. Oh, I think I used ZeroGPT, which was put out by, hang on. Yeah. Okay. So, there's a, there's a young man named Edward Tian who got a lot of press about a month ago for building GPT Zero. He's a CS major at Princeton with a minor in journalism. And then there's also ZeroGPT. So, they're both tools. I played with Tian's tool.
[00:17:04] Jason Johnston: Yeah, that's GPT Zero. So, to me, that's the one that looks at perplexity and burstiness.
[00:17:12] John Nash: Then that's the one I played with. Yep. And so, I went in there and I had some fun with that. I asked it to write on a topic that, uh, I'm currently interested in, which was on the topic of teaching problem solving, not solving a problem. It's a subtle distinction, but it's important. Okay. And, first I, I asked ChatGPT to try to write more human-like by writing like me, John Nash. And so, I took some text from my book that got published in 2019 and I fed it into Bing, and I fed it into ChatGPT-4. And I said, talk to me about the style of this writing and how would you label this writing? What, what is this writing like? And both of them spit out some content that talks about what that writing is like, and then I said, okay, fine. I want you to pretend that you're a writer that writes like this. It's conversational and informative, and it encourages reflection. And then I want you to write about how college professors should teach teenagers problem solving, not for the sake of solving the problem, but for the sake of teaching problem solving. It's a subtle but important difference, right? Right. It spits out these paragraphs, and I took all that and I threw it into GPT Zero and GPT Zero said back to me that this text was likely to be written entirely by AI. Mm-hmm. So, in spite of trying to teach ChatGPT to write like me, this tool caught me and said this was written by AI. And it said that it lacked perplexity and burstiness. So, I said, all right, fine. I'll just rewrite the prompt. And I say, well, now write about how college professors should do all this. And now write it with high perplexity and high burstiness. So, and then off it went and it wrote a different one, and it was slightly different than the first one. And I guess it had more perplexity and burstiness. But the result was your text is likely to be written entirely by AI. So that didn't work. Well, undaunted, I took the original response there that it gave me from that second one.
And all I said was, fine, but you really have to increase the average perplexity score, which is a measurement of the randomness of the text, and the burstiness score, which is a measurement of the variation in perplexity. Okay. So now write it again. This is exactly what I typed, and the result was this. Your text is likely to be written entirely by a human.
Hmm.
So, I thought, wow, fantastic. I won. Except I didn't, because what's interesting is that the text that the detector thought was entirely human was ridiculously flowery and inflated. It was kind of like when a master's degree student thinks they're supposed to sound academic, right. And actually, in my humble opinion, the second attempt that they thought was all AI actually read fairly naturally and seemed to be the short-term winner in my little experiment.
Hmm.
But the, the text on this one that they said was human. I mean, there was just nothing I would've, I would've sent it back if a, if a student had sent it to me just for how inflated and, and ridiculous it sounded.
Can I give you a flavor of how GPT Zero thought it was totally AI and where it thought it was totally human?
Yeah.
So, this first, a couple of sentences is about how college professors should teach teenagers problem solving, not for the sake of solving the problem, but for the sake of teaching problem solving. And so, and this was detected as completely AI.
Hmm.
Have you ever wondered about the true purpose of education? Is it simply to train young minds to solve specific problems, or is there something more profound at play? Let's explore how college professors should approach teaching problem solving to teenagers, not for the sake of solving problems, but for the greater purpose of teaching the art of problem solving itself.
So, GPT Zero said, no, out. That's totally AI. And it was right.
So, then I told it to up the perplexity and the burstiness and make it, make it really up there. Okay. Now I have to channel my inner pompous person. So, this is, this was according to GPT Zero, written by a human.
The enigma of education. Is it merely a means to prime our youth for problem solving conundrums, or does it hold a more profound role? Let us delve into how college professors can illuminate the path of imparting problem solving to teenagers, not for the sake of addressing problems, but to bestow upon them the very essence of problem solving.
Yes. Human.
[00:22:08] Jason Johnston: And it's called that human. Yeah. Yeah, there's a, there's a problem there because you think about how much clearer the first example was versus the second. I got lost in the words. Maybe it was the accent I got lost in, but I also got lost in the words. The first was much more direct. Yeah.
Cleaner.
[00:22:27] John Nash: Enigma, conundrum, delve, illuminate. Yeah. That was fascinating.
[00:22:35] Jason Johnston: Yeah. Those are good examples.
[00:22:36] John Nash: Yeah. I, I mean, you can defeat these things, but, you know, to what end? And I think for me it says a lot that the purpose of tools like this is to catch, you know, cheating, which I think is a sort of a dodge. It's to catch students who are not going to do their own work. But I think really the point here is that we need to rethink the way we assign work, because I don't see any real benefit to these kinds of tools.
[00:23:01] Jason Johnston: Right? Yeah, using these tools as a gotcha moment is, in my opinion, not very instructive.
Mm-hmm. But I think certainly there is some value for students to feel motivated to write original work. Because we know that students who are feeling high anxiety, or they're stressed out by the rest of life, you know, they might just take the shortcut once or twice or whatever to try to get them to their end goal. We know it's both a real threat and a real possibility for students. And so ideally, we want them to be thinking and writing their own work. However, yeah, I don't know the usefulness of these tools either. The other thing is, and I just encourage people to try it out themselves like you did, I think that was a great example. The other thing is that it's not like the old, it's not like Turnitin, where as a teacher you could use something like the Turnitin plagiarism detector. Mm-hmm. It will detect, without question, 100%, that this was copied from an article that was posted here, or from this website, or even from this, this paper that was turned in. Yes. In 2016 at this university. You can request the original documents in those cases to be able to come up with a without-question kind of moment. And I've used those, we have used those in school to be able to talk to people about, about plagiarism.
Yeah.
And it's difficult to get to those conversations with some people without hard evidence. And I heard recently about somebody using one of these AI tools, without hard evidence, with somebody who they knew without a, without a question had written this themselves. But the AI tool had come up with a, a false positive, uh, where it thought it was AI, but it was actually written by the human. And so, the usefulness of these tools is, yeah, as you said, hard to maybe determine at this point.
[00:25:10] John Nash: I went and grabbed some LinkedIn posts that I've done, that I, that I knew I had written, and threw them into the model. And it said that parts of it were surely written by AI. And it's interesting, it'll highlight the, the sentences that it said, yes, were AI, even though they were all written by me. I will also add, I did take in a couple of paragraphs from my book. So, I took chapter one from my book and put it into the thing, and it said it was written all by a human. Yay. But what I thought was interesting, Jason, was that the, this burstiness and these perplexity scores were a magnitude higher than even the best perplexity and burstiness of the samples I was testing before. And so, there's something about presumably well-written human text that has these qualities.
[00:25:58] Jason Johnston: Yeah. Yeah. And it's somehow more random, even though it doesn't feel that way. When I write a, when I'm writing a paper, I'm mundane in some ways. You know what I mean? Yeah. Like I just feel like I'm just, I'm just pulling up the same sentences I've done before. But there is something in the way that we craft text as humans that is much more random and has a lot more variation to them than, than we even give ourselves credit for.
[00:26:22] John Nash: I really should compare these two, because the pieces from my book have high perplexity and high burstiness, but they read with a flow that I thought was similar to the second example in here that was AI driven; still, it had a sort of a, a flow to it that felt natural to me, at least to my ear and eye. Whereas the one that passed as human, that was all written by ChatGPT, is so ridiculously inflated with its purposeful attempt to have, you know, there used to be this book called the, the Thesaurus for Highly Intelligent People or something like that. But it's like these just ridiculous synonyms and antonyms and other choices of words that are really off the scale. And so, I think, yeah, it's really just sort of interesting to think about what the computer thinks burstiness and perplexity are, and how that can be used to beat the models meant to catch AI.
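[For the curious: a rough sketch of how scores along the lines of the "perplexity" and "burstiness" discussed above could be computed, using GPT-2 from the Hugging Face transformers library as a stand-in scoring model. This only approximates the idea; it is not how GPT Zero actually implements its detector.]

```python
# Rough sketch: perplexity = how "surprised" a small language model is by the text;
# burstiness = how much that surprise varies from sentence to sentence.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """exp of the average negative log-likelihood GPT-2 assigns to the text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return math.exp(loss.item())

def burstiness(sentences):
    """Crude 'burstiness': standard deviation of per-sentence perplexity."""
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((s - mean) ** 2 for s in scores) / len(scores)) ** 0.5

sample = [
    "Have you ever wondered about the true purpose of education?",
    "Let us delve into the enigma of imparting problem solving itself.",
]
print([round(perplexity(s), 1) for s in sample])
print(round(burstiness(sample), 1))
```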
[00:27:14] Jason Johnston: Mm-hmm. It's been really interesting serving on a couple of committees right now talking about AI here at the University of Tennessee. One of them is the pedagogy committee. One is the philosophy committee. The philosophy committee is a lot of fun because you can really try to go at that really high level of what it is we're trying to accomplish and what some of our concerns are. One of them is simply transparency, and I think that's, yeah, where I would like to see us get with AI. When we're talking about it making text for us, writing for students, it's not plagiarism exactly, but I'd like to get to the place where we try to create as much transparency as possible within our classes, so that we can talk about the use of AI, so that we can maybe offer some opportunities to use it within class, and then also have other opportunities and make sure we're really clear and calculated about, about when we don't want AI being used, and we can also talk about that and, and hopefully get to a place within our classes where we just know when we're using it and when we're not using it. And, and in some ways maybe hope for the best for those that, that are using it as a shortcut.
[00:28:32] John Nash: Yeah, definitely. I think that this notion of transparency is great. I think that there's an opportunity here for teachers and professors to approach this in a way that tells us, you know, here's how we can use this in a productive way. Here's how we can bring this into our conversations. Mm-hmm. Mm-hmm. I think you pointed out to me just today that, uh, APA has come out with a stance on how you can cite ChatGPT in, in, in its use.
[00:29:01] Jason Johnston: Yeah. An official stance a couple of days ago, which, I think they're just having enough people ask them. They're like, I want to be transparent about using AI. I'm going to use AI. I did use AI for this writing. How can I cite this so that I'm being transparent, so that people don't accuse me of plagiarism or whatever? What would the word be if it's not plagiarism? Because you're not actually stealing it from somebody else. It's generated, AI generated. It's just, yeah, it's just, well, is it? It's just cheating, I guess. Yeah, just cheating. I'm tired of that word. I think it's, it's getting, it's, it's used as a catchall for too much, but it's, it's as if I were to hire someone to write my paper for me.
[00:29:43] Jason Johnston: It's, it's inauthentic writing. Yes. Really in a, in a way when you're not being transparent about it.
It's academic dishonesty.
Yeah. Yeah. A level of academic dishonesty. And so, I'm really glad that they just kind of came out with it. Well, here you go. If you want to cite it. It didn't say you had to; it wasn't being prescriptive, but it said if you want to cite it, here are the different ways. And so, we'll put the, we'll put the link for that into the notes as well, if people are asking that same kind of question. And I'm hoping that we're kind of moving in that direction. Mm-hmm.
[00:30:15] John Nash: You know, they also talked about how to cite ChatGPT when it is your colleague or your ally in your work. So sometimes you are talking about your use of ChatGPT as part of your research endeavor, and so you have ChatGPT output quoted in your paper. It tells me how to cite that, yeah, appropriately in your references now too. Yeah.
[00:30:39] Jason Johnston: Yeah. I think it's, I think it's helpful. Yeah. Well, can I tell you, as we're kind of closing up here, can I tell you about my kind of fun way that I, that I used the different AI to kind of compare prompts?
[00:30:50] John Nash: Absolutely. What did you, what did you, what did you take up?
[00:30:53] Jason Johnston: Well, you know, you had created a basically like a, a PR kind of pitch sheet for our podcast, kind of talking about what we're doing, what we're about, and so on and so forth. So, I thought, well, we've got this kind of well-crafted pitch sheet now for our podcast. What this podcast really needs is a custom theme song, right?
Absolutely.
Then I thought, what better theme song would underlie our podcast than the Eye of the Tiger? Like, talk about a second half theme song, right? That moment where you're going back into the fight of your life and you don't know if you're going to be able to do it or not, and you need something to pump you up. And that is a song that you need to pump you up is Eye of the Tiger, right?
[00:31:46] John Nash: I think you may be onto something here.
[00:31:48] Jason Johnston: I don't know what other song would do it for me anyway, maybe for other people. As you think about going into the second half of life. So, my prompt was, can you write a theme song for my new podcast called Online Learning in the Second Half to the tune of Eye of the Tiger? I want it to be hopeful and energetic, just like the song that got Rocky pumped up for his comeback. This song should energize people in online learning for the second half of the game. You can base some of the content from our press release below, but please be fun and creative. Don't let the details hold you back. And then I included in the prompt our press release or as much as I could squeeze into the limited character count. Excellent. All right. So, here are a couple of the, and you can give your response on, on some of these. But here's a couple of the things I got back and I'll just, I'll read a little bit of ChatGPT-3.5, the verse, and then the chorus.
Rising up in the world of online learning, taking on the challenges we're facing. We know there's room for improvement and we're ready to make it more, it's not bad for a verse, right?
It's not bad.
And then the chorus: Online learning in the second half, we're going to make it more human, creative and fun.
I don't know.
With, yeah, with John Nash and Jason Johnston, we'll explore the possibilities for everyone.
[00:33:12] John Nash: There was a little rhyming there. That's good. Yeah.
[00:33:14] Jason Johnston: A little rhyming. Okay. This was ChatGPT-4 and I'll just do the chorus. Actually, I'll do the verse, the first verse for this one too, because it was pretty funny.
Rising up back in the classroom. Did our time, took our chances. Yes. Went the distance. Now we're not going to stop just two friends with a passion to share.
[00:33:36] John Nash: Huh? I can, yeah. I'm playing, I'm playing the melody in my head as you go.
[00:33:42] Jason Johnston: Right? Yeah. And then the chorus.
It's the eye of the learner, it's the thrill of the screen.
Rising up to the challenge of our rivals.
I don't know who our rivals are, as, as, and as we dive deep into this digital scene, going to make online learning come alive.
[00:34:06] John Nash: Come alive. Rivals. We have to run with.
[00:34:08] Jason Johnston: Yeah. And it didn't quite work with the, with the timing, but I thought that was okay. That's pretty, that's definitely hopeful.
[00:34:14] John Nash: Yeah. Yeah. No, very much so. Very much so. Now I'm going to spend the rest of my day wondering who our rivals are. Yeah, that's good.
[00:34:21] Jason Johnston: Here's, here's Bing Chat, more balanced, and I'll just do the, I'll do the chorus. It's the online learning in the second half, rising up to the challenge of our rivals again, and the last known survivor stalks their course in the night, and, and they're watching us all with the eye of the tiger.
[00:34:47] John Nash: The last known what again? The
[00:34:50] Jason Johnston: last known survivor stalks their course in the night.
Oh my gosh. So maybe this is after AI has taken over the world.
[00:34:57] John Nash: Maybe the last known, the last known teacher posts their course in the night. Yeah. Yeah, yeah, yeah. Well, I think we should, if people have an opinion on where we ought to go with that, we might have to, we might have to record.
[00:35:13] Jason Johnston: We might have to do a theme song. Well, and I'll post my Google Doc with, with the different outputs here. And if anybody has any opinion, they can let us know. So, but I thought that was a fun way to compare. And some of it is just to get a feel for these different large language models and what they can do. And it helped me just to kind of play with the tools in a low-stakes kind of way, to get a feel for how they could take a pretty specific and creative prompt and what their output, yeah, shows up as.
[00:35:49] John Nash: That's hilarious. Jason, this was a lot of fun, a lot of good stuff covered today.
[00:35:54] Jason Johnston: Yeah. Good talking to you about this. And we promise to those listening, I think we already promised this, and I'm not sure we've come through on a, on this promise yet, that we're not going to make this podcast about AI. So, we are going to move on. One of the ways we're going to move on is next week. If you're listening to this in the week of April 10th, it'll be the next week: we'll be at OLC in Nashville on the Wednesday doing a, a design thinking session about humanizing online learning. And we really want to take this podcast in that direction. So, after that session, we'll have lots more ideas, I'm sure, about what people are thinking about and, mm-hmm, and where they want to go next.
[00:36:34] John Nash: Yeah. That session is going to generate a lot of ideas. And then we're going to have time at OLC in Nashville to also talk to people. While we're there, we've got, we'll have our microphones and hopefully grab some folks and get to talk more about what's on their minds.
[00:36:48] Jason Johnston: So, if you're going to be there and you want to talk with us, just shoot us a, a message. LinkedIn is probably the best place to get us, and we'll let you know the, the secret room and time to be there. So that would be great, to, to meet some folks and to talk with you. Great.
[00:37:06] John Nash: Yeah. This is good.
[00:37:08] Jason Johnston: Yeah. Can I leave us with some words from,
Oh, absolutely.
Okay. Here's some words. Here's a theme song from Bard. Some, some words to take us out on. Um, online learning, it's not a one size fits all solution to the tune of Eye of the Tiger, of course. Okay. It requires us to think critically and creatively about how we design, deliver, and assess learning.
That's supposed to be the second line. I don't think that would sing very well. Experiences that meet the needs and interests of diverse learners. We want to keep up with the changes and share them with you. I think it captures our heart.
[00:37:46] John Nash: I think it does. It captures our heart. It did a terrible job of putting it to the music.
[00:37:50] Jason Johnston: Yeah, yeah. But it does capture our heart for this podcast and for all of you. So, we hope you keep on listening and please connect with us online, onlinelearningpodcast.com. Find us on LinkedIn, under the same name, and we hope to connect with you and hear about the kinds of things that you want to talk about on this podcast.
[00:38:09] John Nash: Excellent. Keep the eye of the tiger going.
[00:38:12] Jason Johnston: Keep the eye of the tiger going, folks.
[Eye of the Tiger in chiptune style plays as the outro]

Friday Apr 14, 2023
EP 7 - AI Fever Continues: What Does it Mean for Online Education?
Friday Apr 14, 2023
Friday Apr 14, 2023
In this episode, John and Jason talk about Jason’s recent bout with AI fever and what the rapid development of AI means for online education.
Join Our LinkedIn Group - Online Learning Podcast
Links and Resources:
Hard Fork Podcast: the Bing Who Loved Me
Blog post on how Claude works. https://scale.com/blog/chatgpt-vs-claude
Q*Bert history and a fun free web playable version here
AI Release Timeline
Nov 9, 2022 YouChat (You.com) - Public beta based on GPT 3
Nov 30, 2022 ChatGPT 3.5 - OpenAI
Feb 7, 2023 Bing Chat (now based on 4)
Feb 24, 2023 Facebook / META LLaMA Announcement
March 14, 2023 - ChatGPT 4 - OpenAI
March 14, 2023 - Claude - Anthropic AI (Quora / Poe.com / DuckDuckGo )
March 21, 2023 - Google’s Bard
Research Papers
“Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity” by Jennifer Haase and Paul H. P. Hanel. https://arxiv.org/pdf/2303.12003.pdf
“GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models” by Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. https://arxiv.org/pdf/2303.10130.pdf
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
John Struggles saying "Alterficial" Intelligence and Won't Just Say "AI."
[00:00:00] John Nash: And what they found was no qualitative difference between alter… Um, they found no qualitative difference between, I can't say alter—I want to say alternative intelligence. What is that? It is, it's advanced. What is that artificial? Is it Friday? We found no qualitative difference. Let me start over. They found no qualitative difference between. I'm blocking again. I need to write it out. Good gravy. Here's your false start.
[00:00:36] Jason Johnston: You can just say, you could just say AI if you want to.
[00:00:38] John Nash: I'll just say AI.
[00:00:41] Jason Johnston: You know, if you get AI's name wrong, there are no feelings.
[00:00:45] John Nash: It doesn't care.
[00:00:46] Jason Johnston: They've already told me they don't have feelings like we do. So,
[00:00:49] John Nash: they don't have feelings. They're not humans. They want to be, but they can't be.
Introduction
[00:00:53] John Nash: I'm John Nash here with Jason Johnston.
[00:00:55] Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the second half, the Online Learning podcast.
[00:01:01] John Nash: We're doing this podcast to let you in on a conversation we've been having for the last two years about online education, and we're continuing to have. Look, online learning's had its chance to be great, and some of it is and some of it just isn't. How are we going to get to the next stage?
[00:01:18] Jason Johnston: That's a great question. How about we do a podcast and talk about it?
[00:01:22] John Nash: I agree. Let's, but first I want to know, what do you want to talk about today?
Part 1
[00:01:27] Jason Johnston: Well, so much has been happening in the last few weeks in terms of AI. It's almost laughable. We were like, yeah, we should probably move on from AI and talk about other things. And then all of a sudden, all we've been talking about for the last two weeks is AI, it feels like. But anyway, so I was thinking about all these things a couple of weeks ago. Just a ton of new things were being released. You and I, we were watching from separate places, but the release of GPT-4 and that amazing demonstration.
Yeah, that was amazing.
And some of the other AI tools that were coming out. And so, I was catching up one evening, checking some of these things out online, and then I made the mistake of watching an episode of Black Mirror. Do you know that show?
[00:02:10] John Nash: I do. Yeah. That can be a mistake sometimes.
[00:02:14] Jason Johnston: And it's kind of a, it's like a cautionary, uh, technology tales, kind of like a futuristic Twilight Zone. And so, I had watched that before, before going to bed, and then I was literally all night long. I was having nightmares of tossing and turning wrestling with ChatGPT. It wouldn't do what I wanted it to do. Every time I, you know how it is when you're dreaming and you can't quite read things when you're dreaming. I don't know if you've ever noticed that.
[00:02:42] John Nash: No, I haven't thought about that. Let me think a minute. No.
[00:02:46] Jason Johnston: When you're dreaming, try to read something, turn your head away and come back and try to read the same thing again, and it doesn't work. So, I was working with ChatGPT all night long in my dreams. And, you know, I woke up the next day and I felt like I hadn't slept at all. And do you know what I realized I had come down with, John?
[00:03:03] John Nash: No. What did you come down with?
[00:03:05] Jason Johnston: I think I had AI fever.
[00:03:08] John Nash: Is there a cure for that?
[00:03:11] Jason Johnston: Well, I don't know if there's a cure or not right now. It might just be something that has to kind of work through our systems right now. I'm not sure; it came upon us too suddenly, I think, for anybody to work on any kind of cure. But I think it's not only hit me, I think it's hitting other people as well.
[00:03:28] John Nash: I think so. Is it, uh, is it like cowbell? You just need more cowbell, you need more AI.
[00:03:34] Jason Johnston: Well, that's what it, yeah, it's kind of like a, it's, I don't know how to describe it. Some of the symptoms would be this kind of perplexity of thought towards it, where you're thinking about it, but you're not really coming to any kind of resolve, and before you even come to any resolve, new information comes out about it. So, I think that's right. Those are some of my symptoms of AI fever, anyway.
[00:03:58] John Nash: Maybe it's good that, uh, Bing AI cuts you off after 15 prompts.
[00:04:02] Jason Johnston: Now. Maybe it is. So, so I've been trying to do some more analog things on the weekend, get my head away from, so that's been good. Good recovery there. That's good. You know, playing the guitar, getting outside, those kind of things. But yeah, I wanted to talk a little bit about that and about just all that's happened in the last month.
[00:04:21] John Nash: Well, I'm with you. I mean, just when I was sort of dusting off my hands and saying, okay, phew, at least we got that AI stuff out of the way so we can start to talk about some other things. And I can't help but think we have to talk about some things. I've been finding some research that's really interesting that I think implies, uh, we need to be thoughtful about what's going to occur with online learning. And, uh, yeah, just this, but just this timeline, uh, as you're talking about is kind of, kind of amazing. I'm looking at our collaborative notes here and just sort of reflecting on, uh, I thought that, you know, back in November when the first sort of public beta of GPT-3 came out and you and I started playing with it towards the end, middle, or end of November, I guess right before the semester break, that this was clearly revolutionary, very different, uh, and, uh, the predictive language model from 3.5 that came out in the end of November. Like how, what could actually happen from here and how fast will this go? And we had no idea that in just three months it would be as advanced as it is now.
[00:05:28] Jason Johnston: Yeah, well, and right about the time we were starting this podcast in February, then Bing Chat came out and its connection to the internet was, uh, I think revolutionary. Yes, for current information, but also the more conversational style. And then of course, all those early crazy things were happening to myself and others with these long conversations with Bing.
[00:05:49] John Nash: Those conversations you had with Bing and the sort of humanizing attempts on Microsoft's part, it seemed like to have those responses be a little more soft and a little more, uh, empathetic, uh, reminds me of how much chatter was going on in say, January and February about how unnecessary it was to anthropomorphize these chatbots because they were just being, uh, they're just computers that are programmed with English language. And so, there was no need to treat them like they were humans. I think people's attitudes have changed a little bit towards that.
[00:06:27] Jason Johnston: Yeah. How so? What do you think?
[00:06:29] John Nash: How have people, I think people can't help but anthropomorphize these machines, right?
[00:06:33] Jason Johnston: Yeah. And then in February, Facebook Meta made an announcement about its large language model, of course, just to kind of keep up with everybody, it felt like. But I've yet to see that. I have, I have my name in the hat to see, uh, oh, some sort of early rendition of that. But Mark has still not responded to me on that one.
[00:06:53] John Nash: Uh, and from what I understand from other sources that I've listened to, other podcasts, this language model called Llama was also leaked in some way, right? And so that it's possible for individual users like you and I to run this model on our own private computers and then do with what we will, which is raising some concerns amongst people, hasn't it?
[00:07:16] Jason Johnston: I think its unique feature is that it's very compact once it's trained, and then maybe it leaked out from there to people; they didn't necessarily want it to be in the wrong hands.
[00:07:27] John Nash: Maybe. And I think it's that wrong hands idea that is getting some people excited, because of the way in which, for instance, ChatGPT did their red team efforts. So, to red team something is to bring in outsiders to find out what's wrong with it. And so, in this case, they wanted to make sure ChatGPT couldn't do things like tell you how to take common kitchen chemicals and make a bomb. And actually, it turns out in one of their papers, I guess, they found it could do that. And so now they put up the guardrails. And so, I guess the issue with Llama being released by Meta and Facebook is that these guardrails don't exist. Uh, and right, the whoever's hands idea also makes me think about state actors and others. I mean, the point of having these guardrails is to create some level of safety, but it's only up to the private companies right now, like OpenAI, to decide to put the guardrails up.
[00:08:24] Jason Johnston: Yeah. And I think it was a good and interesting idea for even ChatGPT to open it up in limited release to allow a million users to try it out. And Bing Chat the same kind of way. So that they could see, I mean, the idea was that it was in beta, and they could see what it could and couldn't do, and they knew that users would test the limits of it, which is exactly the way to do it. So, I wonder if some of the fervency in fear around it was a little unnecessary just considering that this was, uh, a beta. Maybe the fear was about what it could do, not necessarily what it would do in full release, but like, I can't believe, you know, for some of the people like you and I, we've talked about the Hard Fork Podcast and one of the writers from New York Times that talked about this long conversation with Bing Chat that tried to get him to leave his wife because it was, uh, Bing was in love. So,
[00:09:29] John Nash: yeah, that was out there. And by the way, good plug for that podcast for anybody who's interested in these topics. That's a Hard Fork is a good one.
[00:09:37] Jason Johnston: And just that one about his experience, yeah, it's fascinating to just listen to them, uh, talk about that. So, we'll put the link into the show notes for that one, but yeah. But you know, I wondered after that, and now that we've matured a little bit, I know it's been only a few months, but now that they're starting to put guardrails on these large language models, it seems to have kind of cooled some of the engines about some of these concerns about AI? What do you think?
[00:10:08] John Nash: I think yes and no. We'll wait and see what happens with Llama, and as independent actors and other, yeah, state actors take up the charge to start their own large language models. I don't feel as if things have calmed down all that much in some education circles with regard to knee-jerk reactions of cheating and plagiarism and whatnot. I feel like those conversations are still in play. I feel like there are still conversations around how to put in tools to keep students from doing things rather than putting in tools to enable students to do things. I think we have some distance to go there, but that's what I'm thinking right at the moment.
[00:10:52] Jason Johnston: Yeah. Well, and as we record this, it is March 27th, 2023. And, uh, just looking at our timeline again, March has just been crazy. We did a forum, uh, my first like public forum talking about AI and education last week. At the beginning of the forum that we were both in, I was co-moderating and, uh, doing a quick overview of where we're at with AI. Right. And it was funny because my overview changed every day, essentially. And I was changing things even that morning just because of, uh, new releases, as we were talking about different, different new releases. But March has been like a wild ride. We had on March 14th ChatGPT-4, the demo that we were talking about. Yeah. And a couple of things in that demo blew me away. Maybe the first thing that comes to mind is him drawing a really rough sketch of a website, scanning it through his phone, and then, uh, GPT-4 creating the website, his joke website, with like a working, uh, JavaScript.
[00:12:11] John Nash: Yes, that's exactly what he did. And, uh, he uploaded it to a Discord server and GPT-4, yeah, took the napkin sketch, and I saw that too. It was rudimentary. It was as quick as you might do over a Coke or a beer with a friend, uh, with a pen and take, I don't know, 30 seconds to draw this thing. And, uh, it wrote the HTML and the JavaScript to run this website.
[00:12:39] Jason Johnston: So, a very powerful upgrade. Another thing that happened that a lot of people didn't see on March 14th is that another company, Anthropic, released Claude, uh, as an option for people to be able to check out. Claude, you should check it out. It's interesting, and I think within that, what I would suggest for people as they're testing this is to go to poe.com, poe.com. There you're able to test multiple chat engines side by side. You do your own tests and see how they produce different results from the same prompt. And I think that could be a very powerful tool, but you can, uh, access Claude through that. Um, Claude is interesting because it, and I don't, I'm not pretending to know everything about it. I've read some about it and I've asked Claude some about, uh, itself. It is based on more of a constitutional model. So, it's, as I understand it, it's less guided by the users and user preferences and more guided by their, by a strong constitutional model of ethics and kindness and do no harm and those kinds of things.
[00:13:49] John Nash: Yeah, I was looking that up too. In a blog post on scale.com, we'll put a note in, uh, in the show notes, you can ask Claude to introduce itself and talk about its constitutional model. And, uh, so Constitutional AI is a safety research technique developed by these researchers at, uh, Anthropic who have built this. And the goal is to be helpful, harmless, and honest by using a model of self-supervision and safety methods. So, it's going to use a model that will help police itself.
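[For the curious: a toy sketch of the critique-and-revise idea behind the "self-supervision" John mentions. The principle text, prompts, and the ask_model helper below are invented for illustration and greatly simplify Anthropic's actual Constitutional AI method.]

```python
# Toy illustration of constitution-style self-critique: the model is asked to
# critique its own draft against a written principle, then revise it.
# `ask_model` is a placeholder for whatever chat model you have access to.

PRINCIPLE = "Choose the response that is most helpful, honest, and harmless."

def ask_model(prompt: str) -> str:
    # Placeholder: wire this to an actual LLM call; here it just echoes.
    return f"[model output for prompt: {prompt[:60]}...]"

def constitutional_revision(draft: str) -> str:
    critique = ask_model(
        f"Principle: {PRINCIPLE}\n"
        f"Critique this draft response and point out where it violates the principle:\n{draft}"
    )
    revised = ask_model(
        f"Principle: {PRINCIPLE}\nOriginal draft:\n{draft}\nCritique:\n{critique}\n"
        "Rewrite the draft so it follows the principle."
    )
    return revised

print(constitutional_revision("Sure, here is how to pick a lock..."))
```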
[00:14:25] Jason Johnston: Yeah, I like that. And I understand the people that started this actually left OpenAI because they were concerned with some of the directions it was going. Yeah.
[00:14:33] John Nash: And they're, uh, also Alphabet funded, so Google's parent.
[00:14:38] Jason Johnston: Oh, they are as well? Yeah. Huh. Okay. Well, that's interesting. So, we've got Google, and speaking of Google, on March 21st, then, Google's Bard was, uh, released to the public in terms of being able to access it. Now, we talked previously about their demo. They did a demo that was kind of a failed demo because it had some, uh, some incorrect information. They had one job, which was to show a screenshot of Bard, and in the screenshot, there was some wrong information that Bard produced. But I've been playing with it and I think it's really, I think it's interesting. I haven't decided what I feel about it yet in terms of the differences, but you can ask for access at, uh, bard.google.com, and I'm sure we will be seeing a lot more of Bard in the future, because Google, they are no slouches when it comes to AI. No. They've been training this thing for years.
[00:15:38] John Nash: It's just that they were a little, I think they had to say something to keep up with the Joneses, but it doesn't seem like it's fully baked yet, right?
[00:15:48] Jason Johnston: No, they weren't ready to release it yet. And so, but they felt like they needed to let everybody know what they'd been working on. So, yeah. Yeah. So, what do you think, John, how do, how does all this apply then to online learning? What are some of the things you're thinking this, uh, this month with all the changes?
[00:16:07] John Nash: Well, Jason, I've been attempting to garner new research as it comes out about AI and GPT-4, specifically in the last couple of weeks. Uh, and its connection to online learning. Little things are coming out. I think it's also worth reporting that when GPT-4 came out on the 14th, OpenAI was quick to report that it passed the US Bar legal exam with results in the 90th percentile compared to the 10th percentile for the previous version of ChatGPT. So, it's just, it's wild. It's blown the doors off this stuff. It can look at a picture and describe it with great detail. So, this has affordances for visually impaired people, and then it can also interpret drawings and pictures and create code.
One of the things I ran across was a study, a working paper done by some researchers in Germany looking at how chatbots have risen to human-level creativity. They applied the alternative uses test, one of the most frequently used creativity tests, which shows good predictive validity, and they gave 100 human participants and five generative AI models the alternative uses test. And they had six human beings and a specifically trained AI independently rate these alternative uses. So basically, you say, give us multiple original uses for five everyday objects, and it's pants, a ball, a tire, a fork, a toothbrush, and give us new ideas for these. How else could we use these things?
They found no qualitative difference between AI and human generated creativity, and only 9.4% of the humans were more creative than the most creative generative model, which was GPT-4.
So, to the extent that generative language AI tools can be considered creative, specifically in terms of their output on these standardized measures, this research found that tools like ChatGPT, Studio.ai, and you.com are judged to be as original as human-generated ideas and were almost indistinguishable from human output now. But don't forget that you still need a human to create the prompts. And so, right. I think that's going to be, you know, these models can't generate the prompts. They can't generate an idea on their own. So, they need specific input, but I think that's pretty interesting. I think it goes to some of the things we've already been talking about with regard to these models being useful for generating ideas in the face of a blank page. Uh, busting inertia, keeping ideas going.
Another one that I wanted to mention really quick, if it's okay, is this, uh, uh, some economists and some people, uh, attached to OpenAI looked at the labor market impact potential of large language models. Their findings are pretty astounding. Their findings suggest that around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of large language models, and about 19% of workers may see at least 50% of their tasks impacted. Now, they don't account for how, as we talked about a minute ago, regulations and laws and other guardrails might come up to, to perhaps put, uh, uh, their arms around the use of these in certain sectors and things like that. But at the moment it's hard to ignore the fact that this is impacting the way people work.
Yeah. And it makes you think about this concern that AI is going to come and replace a lot of the work we do, and it may replace a lot of the education we do, because it can do both of those things so easily. And this shows some of the concern, because, you know, it's not just doing our laundry for us. It's not just a task that maybe doesn't take a lot of creativity to do, that has some clear steps and is always this and this. Like, like replacing, as we've talked about, uh, workers that put cars together. Right? Right. And not saying that's unskilled, that's definitely skilled labor, but it has a very clear-cut kind of way to operate. But you're talking about actually replacing a lot of creative work that could go on.
Possibly. I think it augments creative work, but I mean, humans still have to implement the creative ideas, uh, and right. One thing that was interesting in the creativity paper is: are we talking about little c creativity or big C creativity? I'd like to think you and I are making a difference in the world, but I'll be honest, I'll just talk about myself. Most of the, uh, stuff I do with ChatGPT is little c creativity. In other words, not creating world changing ideas. It's mostly me getting through the mundane things that I need to get through or, and actually, well, maybe some creative things that give me new ideas of new avenues of research, but it's, yeah, I don't think it's, yeah.
Yeah. And when you were talking about how it was just 9.4% of humans that were more creative than the most creative, uh, AI, right, GPT-4, it made me think, now I have an answer for people when they say, you know, well, is AI going to replace humans? And I'll be like, uh, probably only 90.6% of them. Yeah.
Yeah, you're safe. Probably. Maybe.
[00:21:56] Jason Johnston: You're probably fine.
Joking aside though, I mean, even though it can do it, it doesn't mean that we're going to want AI to do that kind of creative work, right?
That's right.
Because as we've talked about as well, that, that creativity is part of the joy of the work and the things that we do. We want to be the ones that are making the creative effort here, not necessarily just spitting it into a machine.
[00:22:21] John Nash: That's right. That's why I ask my students in my design thinking course, when they get to the point of brainstorming in that phase of the cycle, that they not use generative language models until they themselves have done their brainstorm. And so, then it could augment what they've got. But it can't take on a brainstorm for something where it did not itself research the human's needs, the things that need to be, uh, addressed.
[00:22:48] Jason Johnston: Yeah, and I think that's a, I think that could be a really great educational model. As we talk about some of the differences between AI in education versus the workforce, it's thinking about, you know, what we are wanting to learn, be trained in, uh, to be testing our own thinking and, uh, you know, maybe taking a step back from AI to make sure we're at least doing those things first, and then figuring out how AI could help us to do those things.
[00:23:23] John Nash: Uh, hey, let me ask you this with regard to creativity around known parameters that are not a secret in the world. And I'm talking about instructional design. When we're thinking about learning management systems like Canvas that have innumerable plugins, I'm wondering when we'll see the plugins that help professors and teachers be more intelligent about their instructional design. I mean, it would be wonderful if we could be able to drop our modules in, I import my course every semester from last semester, whatever it is you do. But then let the large language model make some suggestions where it sees, you know, 90% of the gaffes that occur in an instructional approach could be caught and even repaired by the language model. Module pacing and the design of the assessments as they relate to the outcomes, the kinds of communications you have with the students, the timing and frequency of those communications. The tone used compared to the kind of questions you get from the students. All of that just seems like it could be handled by an intelligent plugin for teachers.
[00:24:37] Jason Johnston: Yeah, I agree. Especially like from an instructional design standpoint. I think one of our, we call them our clarion calls, one of our big things, is to, uh, have clear, measurable student learning outcomes, and then everything throughout the course should be hanging on those outcomes. And so I can conceive of something like that: it could essentially scan or, uh, read through a course, knowing that these are our learning objectives, and try to intuit whether or not things are hanging on those learning objectives, and, almost like an accessibility check, be able to list things that don't hang on the learning objectives. Yeah. And so, it would give the teacher an opportunity to either add learning objectives if they're important or remove activities and learning modules if they're not important.
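[For the curious: a hypothetical sketch of the kind of outcome-alignment check Jason describes, where a model is handed the stated learning outcomes and the list of course activities and asked to flag anything that does not map to an outcome. Nothing here is an existing Canvas or Quality Matters plugin; the outcomes, activities, and prompt are invented.]

```python
# Hypothetical sketch: build a prompt that asks an LLM to flag course activities
# that do not clearly support any stated learning outcome. Not a real LMS plugin.

OUTCOMES = [
    "Students can write measurable learning objectives.",
    "Students can align assessments to stated objectives.",
]

ACTIVITIES = [
    "Week 1 discussion: introduce yourself and your teaching context.",
    "Week 2 quiz: identify well-formed learning objectives.",
    "Week 3 essay: history of the printing press.",
]

def alignment_prompt(outcomes: list[str], activities: list[str]) -> str:
    outcome_text = "\n".join(f"- {o}" for o in outcomes)
    activity_text = "\n".join(f"- {a}" for a in activities)
    return (
        "Here are the course learning outcomes:\n" + outcome_text + "\n\n"
        "Here are the course activities:\n" + activity_text + "\n\n"
        "For each activity, say which outcome it supports, or flag it as "
        "'unaligned' if it does not clearly support any outcome."
    )

print(alignment_prompt(OUTCOMES, ACTIVITIES))  # send this to the model of your choice
```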
[00:25:32] John Nash: Many people in our audience listening to this may be familiar with Quality Matters. It's a pretty well-known group, at least among folks of our ilk. So, couldn't GPT-4 mine the Quality Matters rubric and then just, done? Could it? Yeah, I think that would
[00:25:51] Jason Johnston: be amazing. We should pitch that to them. A Quality Matters AI plugin. Yeah. What would we call it?
[00:25:58] John Nash: Quality Matters, uh, QP? No. I don't know. Got to think about that. I can't brainstorm. I have to rely on, uh, large language models to brainstorm.
[00:26:08] Jason Johnston: Nope. No, do the hard work, John, don't reach for that. Don't, it hurts. Don't reach for Chat. Please don't do it. Let's think about this. So, we've got ChatGPT, we've got Claude, we've got Bard. They all kind of sound a little similar. So, we could do something with a Q, yeah.
[00:26:29] John Nash: Or Matt. Matt matters. Matt. It could be Matt.
[00:26:33] Jason Johnston: Oh, Matt. That's not bad, John. I think I like that. Matt Q. Matt. Kind of like Q*bert. Yes. Oh, Q*bert's not bad either. Do you remember Q*bert?
No.
Oh, John, it was a, it was this little creature that you had to jump up and down on this pyramid. It was a kind of a, I don't know, it was like a weird 2D, 3D Pac-Man or something.
[00:27:05] John Nash: I'm looking it up right now.
Like crawling up a, oh, yeah. Oh, he has a long, uh, cylindrical nose. Yeah. He's
[00:27:19] Jason Johnston: got to have a bit
[00:27:20] John Nash: of a snout. Yeah. He jumps up a cubed pyramid. Yep.
Okay.
[00:27:26] Jason Johnston: I had a standalone, I had a little standalone pocket, uh, Q*bert game. It was amazing.
[00:27:30] John Nash: That didn't, uh, that didn't cross my gaze back in the day.
[00:27:34] Jason Johnston: No. Okay. Well, you're missing out. It was amazing. So that's our pitch, right? Yep. So now that we've done the creative work, John, of coming up with Q*bert, now we can perhaps pitch to GPT-4 what our marketing pitch, our eight-slide corporate funding marketing pitch to Quality Matters, would be for their new AI.
[00:28:00] John Nash: I think we could. Ethan Mollick recently published how, in about 30 minutes, he put together a business idea with all the accompanying email campaigns and website creation for his fictitious company. So, yeah, I've got 30 minutes to spare, I guess, for this. Okay. Yeah,
[00:28:23] Jason Johnston: sounds possible. I think they're going to love it.
[00:28:25] John Nash: Jason, I know, uh, someday we're going to stop talking about AI for the entire episode. But, uh, what do you think, uh, will get us off of talking about AI?
[00:28:38] Jason Johnston: Well, if it stops changing so quickly. I think we will come to some steady state of understanding with AI, but it may not be too soon. It's hard to say. Certainly, when the workable aspects of AI don't change as quickly, we won't have as many things to talk about, but then we'll probably loop back as we implement it more and more in online education and as we see new tools come out. Yeah. Like we kind of joked about the AI plugin, but we'll probably have a thousand ed tech companies over the next year coming up with their new solutions for AI. And it'll be hard not to talk about those as they come out, as new innovations happen. So, I'm not sure what will get us off of it, but we'll try to cover some things other than AI. It will probably be something we return to now and again, I think.
[00:29:38] John Nash: We will. I think we'll run reviews on those tools as we start to see them, and maybe even bemoan the demise of some companies that we didn't expect to go, thanks to AI coming on the scene. Right. I think one pivot we can make away from AI that still references it is assessment. Think of all the talk we've been seeing about the problems with generative language models creating essays that students might turn in for writing tasks in English classes. I think we could start to have a productive conversation about how we can support teachers in coming up with authentic assessments that still have writing occur, but without ever having to worry about whether or not the submission was AI generated.
[00:30:26] Jason Johnston: Yes. Yeah. And in that, there's more conversation to be had in terms of where those lines are crossed, and for different disciplines they're probably crossed in different ways, and how we manage that so that our students continue to think. Because that's the bottom line. We want them to be learning and thinking.
Yes.
There are some ways in which it matters not to me if they're using an AI, but what does matter to me and so many teachers is whether or not they're actually continuing to learn and think.
[00:30:54] John Nash: For sure. All right. Those are good conversations to be had. Okay.
[00:30:57] Jason Johnston: Yes. Thank you, John. This is great. Check out our LinkedIn page, Online Learning Podcast, and also onlinelearningpodcast.com, and please join in the conversation. We'd love to hear what you think about what we're saying and all that is going on. So yes, please. Thanks so much for listening.
[00:31:17] John Nash: Yeah, thank you. Everyone: tell us what we should talk about next. We'd love to take up the topic.
[00:31:24] Jason Johnston: Absolutely. Good talking to you, John.
[00:31:25] John Nash: Thanks, Jason.

Monday Mar 27, 2023
EP6 - How Autonomous Cars Point to the Future of Online Learning
Monday Mar 27, 2023
Monday Mar 27, 2023
In this episode, John and Jason talk about Jason’s first trip in a self-driving taxi, and how this might point to the future of online learning.
Links & Resources
Join Our LinkedIn Group - Online Learning Podcast
Jason’s YouTube video of his first trip in an autonomous car.
Waymo One - Autonomous ride-hailing service
Speedy Street Tacos, Phoenix
Sal Khan talks about Khanmigo
6 Ways ChatGPT Can Save Teachers Time: Edutopia
John’s strawberry smoothie recipe
8 fl oz unsweetened almond milk (30 cal)
¾ cup (170 grams) Chobani Zero Sugar Vanilla Yogurt (70 cal)
~150-200 grams frozen strawberries from your grocer’s freezer section (56-75 cal)
Blend relentlessly until smooth (John uses a Ninja brand smoothie blender)
How Might We Use Design Thinking to Humanize Online Education? April 19, 2023 at OLC Innovate in Nashville, TN
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
False Start
[00:00:00] Jason Johnston: What's the snack today? Is that a Shamrock shake?
[00:00:02] John Nash: It's pink. It's yogurt and frozen strawberries and almond milk.
[00:00:08] Jason Johnston: Is almond milking a profession now?
[00:00:11] John Nash: Is it a profession?
[00:00:12] Jason Johnston: Yeah. Almond milking.
[00:00:16] John Nash: Yes. Yes. They train mice with their tiny, tiny little paws to milk the almond.
[00:00:23] Jason Johnston: That's amazing.
[00:00:24] John Nash: Now, that has not reached automation by AI.
[00:00:28] Jason Johnston: No, not yet. They can't make the robots small enough, I don't think.
[00:00:31] John Nash: Well, the nanotechnology, maybe it's, yeah.
[00:00:33] Jason Johnston: Maybe with nanotechnology coming along, they'll be able to have their little robot milkers.
Intro
[00:00:40] John Nash: I'm John Nash. I'm here with Jason Johnston. Hey, John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning podcast. Yes, it is. We are doing this podcast to let you in on a conversation we've been having for the last two years about online education, and it basically goes like this: online learning's had its chance to be great, and some of it is, but a lot of it still just isn't. So, how are we going to get to the next stage?
[00:01:08] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:01:11] John Nash: That's perfect. What do you want to talk about today?
Self-Driving Car Story
[00:01:14] Jason Johnston: Well, John, I think you know this, but we are living in the future. It's not quite flying cars yet, but not long ago I was picked up at the airport by a self-driving car.
[00:01:28] John Nash: You were in a self-driving car?
[00:01:30] Jason Johnston: Yes.
[00:01:31] John Nash: What, what airport was this?
[00:01:33] Jason Johnston: Uh, I flew into Phoenix Airport. This company called Waymo, and you can look it up, you can download the app, has two areas that it operates out of. One is Phoenix and one is San Francisco. And they have very limited areas within each city that they operate in, but it's a fully autonomous car, without any human co-pilot or anything, that will come and pick you up at your location.
[00:02:01] John Nash: So, no humans. I thought humans had to be in these cars.
[00:02:04] Jason Johnston: Apparently not.
[00:02:05] John Nash: Does it depend on the jurisdiction or the state laws?
[00:02:09] Jason Johnston: I think it is in a test round right now. And because of that, you know, I might have clicked on some disclaimers as I went through. I'm not sure.
[00:02:17] John Nash: I'm sorry, I'm laughing. It's like, well, because it's in a test round, we're not going to put any humans as a safety measure in the car.
[00:02:26] Jason Johnston: Right. Well, I was a human in the car. I guess that was the test.
[00:02:31] John Nash: Wow. Okay. So, so can you, it goes to limited area. So, you didn't go very far in this thing, or it took you where you wanted to go, or?
[00:02:39] Jason Johnston: I couldn't quite get to my hotel with it. It works like an Uber, for anybody who has used Uber or Lyft: you have an app; you tell it where you want to go or where you want to get picked up. And they had a location, basically the Uber location, for getting picked up at the Phoenix Airport. So, I had it pick me up there and told it where I wanted to go. Since I couldn't quite get to my hotel, and I was hungry, I went instead to get some street tacos, just probably a couple miles from the Phoenix Airport. So—
[00:03:09] John Nash: What kind of car picks you up?
[00:03:12] Jason Johnston: Oh, it was cool because I didn't realize until I was at it, but it was actually a Jaguar, like a little Jaguar SUV crossover. So, it was a very nice car. A Jaguar. Yeah, a Jag.
[00:03:25] John Nash: Wow. Yeah. So, wow. Okay. A test round in a Jaguar. Do you just get in, do the doors open for you, or what? Walk me through this.
[00:03:36] Jason Johnston: So, I did the app thing, had it come and pick me up, and you could watch it coming around the corner via the map on the app, just like an Uber. And then it just pulls up. What was interesting was it pulled up basically right in front of me, and up at the top it had this, I guess it was part of the lidar, people can't see my fingers going in a circular motion, but it was this lidar unit at the top, and it also had a little screen on it that had my initials, JJ, on the top of the car.
[00:04:09] John Nash: Oh, like a little hologram kind of thing?
[00:04:11] Jason Johnston: Yeah, almost like a little hologram kind of thing on the top. And it pulled up, had my initials, and then the way it works is, in the app, you can't pull on any of the handles. They're recessed, and so you have to hit a button on the app in order for the handle to pop out so that you can open up the door. I had a suitcase, and I walked around the car just to check it out, but also to figure out if there was a way I could get into the trunk. But there didn't seem to be any way I could get into the trunk, and I wasn't sure if I should get into the front seat or not. So, I awkwardly loaded this suitcase into the backseat and then got into the backseat with my suitcase. And then once inside, it had a screen for the backseat where you could click for it to start. And it asked me to close the door, and it had this very ethereal, calming music inside and was speaking to me, saying, welcome. And then I could click on a button for it to go and start the trip. And honestly, John, this is something that I've been watching for what feels like a long time. And I, like you, am probably a little bit of a nerd, and so I've been dreaming about my first self-driving car moment, and it didn't disappoint. It was a really very smooth, luxurious, easy ride in the self-driving car.
[00:05:30] John Nash: Was it, it pulls into traffic and then, uh, presumably you're on a two-way street or something. You're at an airport and so there's oncoming traffic. You're not nervous or it doesn't do anything untoward or...
[00:05:42] Jason Johnston: You know, we weren't going that fast, and so I didn't feel super uncomfortable from that standpoint. Maybe I would've felt different if we were pulling onto a highway. It was very cautious. It stopped everywhere it was supposed to stop. It did its blinkers. One thing that surprised me, though, on one part of getting out of the airport, was that it came to a roundabout, and I know a lot of humans that struggle with roundabouts. And so it was interesting to watch it very easily and slowly navigate the roundabout. But this is where it got a little strange, and I thought this was the case while it was going, but it looked like it didn't quite get me to the taco place. It just fell short by about a block. And I remember I was trying to get it to go right where the taco place was, but it wouldn't quite go. And so, it dropped me off, and it pulled into this kind of dark, industrial-looking street that was a little abandoned, and maybe somebody was walking around, and it seemed a little odd to me. This...
[00:06:50] John Nash: Seems a little, yeah, because you're putting yourself into the hands of a self-driving automobile that's supposed to get you to your destination in an area you don't know, and it doesn't do that.
[00:07:02] Jason Johnston: Yeah. Yeah. Now, I will say that there was a help button I could have used if I was uncomfortable. I have this video on YouTube, and we'll post a link so you can see my experience. And one of the programmers from Waymo actually replied to my YouTube video and told me that I can always edit my drop-off point, even after I've come to a stop, if I'm not happy with it. But it's not something I knew. It was probably in those disclaimers at the front or something that I didn't really read and just said okay to. And it basically stopped, and I was like, oh, I guess that is it. So, I opened up the door, pulled out my suitcase, closed the door, and then it just took off.
[00:07:47] John Nash: Just left you.
[00:07:48] Jason Johnston: Just left me.
[00:07:48] John Nash: Good luck.
[00:07:49] Jason Johnston: And so, I opened up my map app to make sure I was going in the right direction to the street taco place, and it was just about a block away. And I don't feel super nervous in these kinds of situations, but I did notice, as I was walking towards the street taco place, that it had basically dropped me off just outside of an adult emporium. You know, I kept on walking, of course, and went on to the street taco place, but it was just one of those streets. And so, that part of it was a little strange.
[00:08:19] John Nash: And, and you could have, yeah. And then you find out later, well, I could have amended that by staying in the car.
[00:08:24] Jason Johnston: Right.
[00:08:25] John Nash: Crazy. Wow. Well, what do you take from this?
[00:08:28] Jason Johnston: I've thought about it, of course, since I was there, and so I was curious to hear your first take as you were listening to the story, and how this applies to our own experience with new technologies coming in. Or, I guess more directly, I really want to be thinking about how technologies like this apply to the future of online learning.
[00:08:50] John Nash: I'm still thinking about the fact that you got in a car with no other humans in it during a test phase, and I don't know why, but that seems a little insane to me. It's, you know, a 3,000-pound vehicle, you sit in the back of it, and nobody is in control. Right. That does seem pretty amazing.
[00:09:13] Jason Johnston: When I've told this story, I've had a lot of people who are divided about whether or not they would do that at this point, mm-hmm, at this stage of the development, whether or not they would get into a self-driving car.
[00:09:26] John Nash: I mean, I couldn't get in the backseat of my SUV and let other family members drive so I could take a nap. I was too, you know...
[00:09:35] Jason Johnston: Do you think you would feel, knowing what you have heard about my self-driving car experience, would you feel more or less comfortable with, uh, with family members versus Waymo? A Waymo Jaguar.
[00:09:49] John Nash: I think, well, firstly, I should make a big disclaimer that any proclivities I have about other people driving, uh, family members or not are my problem and not their problem. Right, right. I would probably not take a nap in that Jaguar. Maybe after some time of, uh, getting used to it, I would be more relaxed, but I think I would be pretty vigilant about what it was doing. Ah, that's interesting. I think it's interesting to think about if we leap to our considerations of online learning and what AI is doing. I can't help but think of the news this week, as we record this on the first day of spring, that Khan Academy is going to integrate GPT-4 into its modeling to create an intelligent tutor for the material that it presents.
[00:10:44] Jason Johnston: They're calling it Khanmigo.
Yeah, Khanmigo, which I think is really neat, but also, it's this adorable little chat icon with big eyes.
[00:10:52] John Nash: Yes. Yeah. And so, I'm thinking, and I have to confess here, I went out onto ChatGPT and said, well, talk to me a little bit about self-driving cars, and talk to me a little bit about online learning and personalized learning, mm-hmm, and it had some interesting ideas. But I think one is that, you know, when we think about a self-driving car taking over driving responsibilities fully and automatically, mm-hmm, that reminds me of what Khan Academy's trying to do, which is to bring some guidance and support just when the student needs it. I suppose we could start to see this kind of technology come into our own cars, where, well, it already does. I mean, we have passive radar systems and things for, right, monitoring speed and doing cruise control. Yeah. But—
[00:11:40] Jason Johnston: Driver assisted, I think they call those, yes. A driver assistant versus self-driving.
[00:11:46] John Nash: Right. So how much self-driving will come along just when you need it? Or will an adaptive system, like what Khanmigo will do, give you explanations or feedback or additional resources? How could that be thought about for the cars, and also vice versa? Yeah. Speaking of cars, and this will date me a little bit, but around the year 2000 or so, when I worked in another place, I was on a business trip to Munich, and we were at BMW headquarters. When we had some off time with the engineers, we were invited to test BMW's first prototype of a passive radar cruise control system. Amazing. And I got to drive this car on the Autobahn and test this out, which doesn't seem that fantastical, but what makes the story fun is that the entire apparatus was operated by a Pentium 386 desktop machine in the backseat of this BMW 3 Series, and the engineer who ran it was rebooting it as we were going down the road. But that's what they had this thing tested on, and that was pretty— Oh my. Well, hopefully it wasn't like Windows ME or something like that. Clippy comes up and tells me I need to slow down. Like, no, Clippy. I already muted you, Clippy. But what do you think about this, and how does it relate to what we want to do with online learning?
[00:13:21] Jason Johnston: Yeah. And I think that's really interesting, what you said there about the BMW, because it speaks to how long we've been playing with this technology, and how long this has been a dream of inventors and of drivers, to be able to have either computer-assisted driving or fully computer-controlled driving. And I feel the same way when it comes to online learning. I've thought for years about: what if every learner coming into their degree, or into high school, or whatever you want to think about in terms of your goalposts for learning, instead of being slotted into the same classes with everybody else, were taken on a personalized journey of learning, mm-hmm, that begins where you are, takes you at the speed that you can go, works with you in whatever your dispositions are, and so on, and then gets you to your end goal when you want to get there, you know? So, I've been thinking about this for a while, and even with self-driving cars, for a decade I've been thinking about the fact that my kids are now approaching driving age. I remember thinking when they were five and six, well, they may not even need driver's licenses. Right. We may all be in self-driving cars, and I won't have to take them to school. They'll just hop in the electric VW self-driving van, and it'll take them to school and then pop back whenever I need to head to work. But that's not the case. You know, we're quite a way off from that, and I feel the same way about online learning. I feel like this idea of adaptive, personalized learning is still a little bit of a pipe dream, or something that isn't quite yet actualized.
[00:15:16] John Nash: I keep thinking of you telling Siri to go pick up your kids while you sit and have coffee at your kitchen table. Well, when you talked about when AI might be part of a dialogue with a learner on their own, I'm reminded a little bit of my recent interactions with Bing's AI implementation and the way in which the AI chatbot closes out the conversations by asking me a reflective question, right, when I thought I was in the driver's seat asking it questions to give me data back, and I was of two minds there. One mind was, "Excuse me, I'm asking you the questions. I need some information." And then the other mind of me was saying, "Well, actually that's nice that you're asking me to reflect on why the heck I'm asking you this question," uh, because that sort of iterative reflection helps me think about whether I'm on the right track or what my ultimate goals are. Usually, I've formed a goal in my head about why I want to ask the AI bot a question, and I don't question that goal, right, and what Bing does is actually ask me to reflect on whether I'm even chasing the right thing.
[00:16:34] Jason Johnston: Yeah. Yeah. I think that's good. I had a couple of thoughts here while you were talking. One is that in those early stages of the Bing trial, they didn't have any limitations on how long your conversation could be, and this is why a lot of people were getting into really strange conversations with Bing, I think, because you could have a conversation for hours with Bing; it would just keep on going. It would ask you a reflective question, you would answer back, it would ask another question, and it would really be like a conversation. And then in another, more limited rollout, I think they were doing maybe five or eight responses. Yeah. Mine is up to 15 now. I can do 15 responses. Yeah.
[00:17:15] John Nash: I thought it went higher. I had 20 the other day, then it got dialed back. So, it's 15 now.
[00:17:20] Jason Johnston: Yeah. So they're figuring out the level, you know, how long they can let this go on before it starts getting a little strange, or before it gets to a place where Bing could be hacked a little more easily, going down directions, mm-hmm, we don't want it to go.
[00:17:37] John Nash: I got the feeling that Microsoft started to understand that, in terms of prompt engineering, good prompts have to be scaffolded over a series of prompts, five, six, seven, to get where you want to go. So, you have to feed it some things, ask it some things, tell it to make some assumptions about some things. Then, ultimately, on the eighth prompt, you can ask it what you want it to do.
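As an aside, here is a minimal sketch of that scaffolding idea, assuming a generic chat interface: each staged message adds context or an assumption before the final request. The send_message helper and the course-review content are hypothetical stand-ins, not any particular product's API.

```python
# A minimal sketch of the "scaffolded prompt" idea described above: feed the
# model context and assumptions over several turns before the final request.
# send_message() is a placeholder for a real chat API that keeps conversation
# history; here it simply prints each staged prompt.

scaffold = [
    "You are helping an instructor review an online graduate course.",
    "Here are the course's learning outcomes: ...",        # paste real outcomes
    "Assume the students are working professionals with limited weekly time.",
    "Here is the current module structure: ...",           # paste real modules
    "Note any modules whose pacing seems out of step with the rest.",
    # Only after the groundwork is laid comes the actual ask:
    "Now suggest three concrete revisions to improve alignment and pacing.",
]

def send_message(text: str) -> None:
    """Stand-in for sending one turn to a chat model and reading its reply."""
    print(f">>> {text}")

for turn in scaffold:
    send_message(turn)
```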
[00:18:00] Jason Johnston: Yeah. Wherever it was, at like five or eight, it felt too short for some things for me. I felt like I was just getting to it, and then it would close off the conversation. Then I would have to start from scratch and kind of re-engineer my prompt, knowing what I knew about it later on, but it wasn't as fluid. Yeah. Yeah, I agree. But I like this idea of reflective question-asking, not just being there as a help, but actually guiding students, in a Khanmigo kind of way, where the learner isn't just thinking about where they want to go, but the computer or learning system is also thinking about where it's trying to take the learner in terms of goals. And this is the possibility thinking of AI: as we've talked about before, when AI is able to do our laundry for us, what can we do next with that time and energy? What can we build upon? And I think about that with math learning, that once we got the calculator, I felt like we started to build on top of that, versus really focusing on a lot of those rudimentary aspects of math. But maybe math teachers would argue with me that that was a step back. I'm not sure.
[00:19:20] John Nash: I don't think that AI-induced laundry machines will make me want to study laundry more. However, I do think that as AI begins to take on tasks like the calculator did, teachers are going to begin to have time to rethink the concepts they want to teach, mm-hmm, and how they want to teach them, and to change the way they assess the attainment of these sorts of higher-order goals.
[00:19:54] Jason Johnston: Yeah. I think it's very possible that we are on the edge of another renaissance here that will be pushed forward by this. I mean, we won't know until this time in history has passed, mm-hmm. But it's very possible. I was also thinking about my other experience that night, driving from the street taco place, which was excellent, I'll put the link in the notes for that street taco place if you're in Phoenix, to my hotel, when I used an Uber, and just the contrast of the two experiences. I had an Uber driver who was a local Phoenix person. There was a big golf tournament in town coming up, and the Super Bowl was actually going to be the next week. And so, we were chit-chatting about that. It was a much more human experience, for sure. I was not asked to tip the robot driver, so per mile, the robot car was cheaper than taking the Uber. And I don't know if that's just a test-phase thing or if it's the way things would be. But I don't know where I'm at, to be honest. I don't know which one I would prefer. I don't think I want a world with all robot taxis. You know what I mean? Yeah. Although it was novel, it was a great experience, I don't know that I want that. I don't wish for a world where we replace even those really casual opportunities to connect with people.
[00:21:26] John Nash: I agree with you, and it's also complicated because this touches on labor markets. It touches on— I mean, look how Uber and Lyft and ride-sharing in general disrupted taxis, right, and yet the labor practices of those companies that disrupted the taxi system are not exactly lauded. They are seen as an escape hatch for people, but those companies are not seen as being the most rewarding toward their employees. Yeah. So, it's complicated, isn't it?
[00:22:01] Jason Johnston: It is. And tying it back to education, there seems to be a worldwide teacher shortage. I know that public school teachers don't feel like they have the time to individually address all the concerns of all their students and to be able to guide them individually. Most of them would love to be able to sit down and take them on a guided tour through the intricacies of figuring out the quadratic equation or whatever. But, you know, that's just not the case. And so, we've got a place like Khan coming in and being able to give a self-guided tour. You've got 30 kids in your classroom. It's right there. It'll be available for all of them, at their own pace, for whatever questions they have at any point. So—
[00:22:49] John Nash: Does the teacher become a human tutor on top of the AI tutor? I know that teachers today who are trying ChatGPT, even at 3.5, and when 4.0 becomes faster, are finding affordances that allow them more time with students, because it can take care of the routine things that used to take hours: lesson planning, rubric creation, mm-hmm, other pro forma activities that districts require due to state requirements, which are mostly paperwork but need to be done, mm-hmm, mm-hmm. Now they're done in a fraction of the time, if you can get the prompting right, and then you have time to work with your kids.
[00:23:31] Jason Johnston: Yeah, so I think there's lots of opportunity here for AI, the self-driving course, to help augment students' learning without necessarily replacing the teachers that are there. But there may be some places where teachers do get replaced, I'm not sure. And for some teachers there may be no love lost, but I don't know that we'd be looking back nostalgically at our Khanmigo in the fourth grade and how much Khanmigo meant to us. Excellent point. No, but at the same time, we were talking about Clippy earlier, you know, and so there may be some nostalgia with Khanmigo.
[00:24:08] John Nash: Maybe. Well, I don't know. Uh, we, we speak about Clippy with nostalgic derision.
[00:24:13] Jason Johnston: Right, exactly. Yep. Yeah, it was a, it was different. It was a different place than our, than our fourth-grade teacher that actually took us by the hand and cared for us. So, yeah. Yes.
[00:24:25] John Nash: Yeah, I can name all my teachers from kindergarten through fifth grade at my elementary school. Huh. I don't know that I would look back and say, wow, I sure loved fourth grade with Khanmigo. I would rather think about Mr. Dragget in third grade or Ms. Stewart.
[00:24:43] Jason Johnston: Yeah. Well, what do you think, wrapping this up today, John, what are your closing thoughts?
[00:24:47] John Nash: Uh, I'm impressed that we were able to make as many analogies as we could to online learning from your short ride to a taco stand in Phoenix. Yeah, it's fascinating.
[00:25:01] Jason Johnston: But yeah, that's good. And it also reminds me that we want to hear from other people as well. We'd love to continue this conversation. I'd be interested to hear about other people's experiences with self-driving cars. Are they on the side where they would get into a self-driving car, or where they wouldn't? Would they get into a self-driving course, or not? What's their preference? And maybe there are certain circumstances where people would or wouldn't do it, I'm not sure. So, if you want to chime in, find us on LinkedIn, we've got a group there. It's Online Learning Podcast on LinkedIn. Or you can go to our website, onlinelearningpodcast.com, and find all the shows there as well as a link through to our LinkedIn. And yeah, please like, subscribe. Let us know what you think.
[00:25:47] John Nash: Yeah, definitely. And quick shout out, join us at OLC Innovate on April 20th in Nashville. We're going to have a meet and greet and a recording on the 21st too. So, we're doing design thinking and online learning on the 20th and doing some remote podcasting on the 21st. And then this week, Jason, you are co-moderating a panel on March 23rd at UT Knoxville. I'm a guest on that panel. We're going to be talking about AI and its role and implications in higher ed.
[00:26:21] Jason Johnston: Yeah, we've got a bunch of amazing people on that panel. I'm really looking forward to the conversation and so I'm sure we'll loop back on that. I don't think this podcast will be out before that forum happens. So, yeah, it'd be interesting to do a little loop back on that. And I think there's a few people on that forum as well that we'll probably get in as guests in the coming months. Yeah, I hope. Yeah. Well, great talking to you, John. This was, uh, really interesting. Thanks for being interested in my self-driving car experience. Yeah.
[00:26:50] John Nash: Definitely. I'll have to check that out next time I'm in a community that has it. You know, I've got to say, I was born and raised in the San Francisco Bay Area. SFO is my home airport, if I had to name one. I can't see getting in a self-driving car and going out of SFO. I just don't feel it. It's, yeah, it's interesting. People who've been in and out of SFO probably know what I'm talking about. I won't go into the details, but yeah. That feels different. Your ride sounded very kind of side-streety and calm. I don't know.
[00:27:22] Jason Johnston: Yeah. Yeah. It was much calmer than a lot of airports, which may be why they chose that as one of the places, but yeah. Can I tell you my self-driving dream car?
[00:27:32] John Nash: Yes.
[00:27:34] Jason Johnston: It's actually a self-driving RV. To me, it feels like that will be the goal. That's when we get there, when we can load the family into the self-driving RV. Yep. We go to sleep that night, and we wake up in, I don't know, St. Louis to check out the arch. We go to sleep the next night, and we wake up somewhere in the Tetons and do a little hike. And then we wake up the next morning, and we're on the beach or whatever. So, it would have to be electric, though, of course, too. Of course. Uh, I think we need some test cases. I think, you know, traveling bands might test this first. That's right. In fact, I think Willie Nelson would be an ideal candidate. Oh yeah. Willie Nelson. Neil Young might even make one, actually. He might be into making one. That's excellent. I think that's a good idea.
Yeah. Yeah.
Well, great. Great talking to you, John.
Yeah. See you later. Yeah.

Friday Mar 10, 2023
EP5 - Bing Chat Broke My Heart and the Future of AI
Friday Mar 10, 2023
Friday Mar 10, 2023
In this episode, John and Jason talk about whether technology in education will always let us down, and the concerns and opportunities with AI.
Links & Resources
Join Our LinkedIn Group - Online Learning Podcast
Jason’s YouTube video of the chat with Bing: Bing Chat Broke My Heart
Dr. Ian Malcolm Ethics Speech
Will elementary children in China put up with robot teachers?
Research Papers
Review of Literature on AI, Higher Ed, and Distance Learning: Where are the social scientists?
Systematic review of research on artificial intelligence applications in higher education – where are the educators? Link
The Use of Artificial Intelligence (AI) in Online Learning and Distance Education Processes: A Systematic Review of Empirical Studies. Link
Ideas for Supporting Doctoral Student Writing
From Mushtaq Bilal’s tips on Twitter about smartly using AI to improve written academic work.
Check out this tweet and this tweet.
AI Businesses on the Horizon
Inflection AI, a startup working on a personal assistant for the web, could raise $675m, after raising $225m in 2022.
Character.ai — a company that hosts ChatGPT-like conversations in the style of various characters, from Elon Musk to Mario — is now valued at ~$1B.
Upcoming Presentations and Talks
UT TLI conference, March 23rd, 3:00-4:50 - first 500 registrations free, $20 after that.
“How Might We Use Design Thinking to Humanize Online Education?”
John & Jason’s design thinking challenge session at OLC Innovate in Nashville
Wednesday April 19, 2023 - 3:45 PM to 4:30 PM
Message us on LinkedIn for details about the live meet and greet on April 20th
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
False Start
[00:00:00] Jason Johnston: You can always find this podcast in our show notes, onlinelearningpodcast.com. That’s onlinelearningpodcast.com. I still can’t believe we’ve got that URL, John. We’re this far into it, and nobody’s done onlinelearningpodcast.com.
[00:00:14] John Nash: That’s crazy.
[00:00:15] Jason Johnston: I just figured we’d be at some sort of weird, really weird name to get anybody to find us.
[00:00:21] John Nash: Underscore 87 52 B,
[00:00:24] Jason Johnston: Right? Exactly. Dash.
[00:00:28] John Nash: Yeah. Our birthdates are already taken. Our names are taken entirely. Shot.
Right.
Intro
[00:00:39] John Nash: I’m John Nash, here with Jason Johnston.
[00:00:42] Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:00:48] John Nash: Yeah. We are doing this podcast to let you in on a conversation that we’ve been having for the last two years about online education. Look, online learning’s had its chance to be great, and some of it is, and a lot of it isn’t. So, how are we going to get to the next stage, Jason?
[00:01:02] Jason Johnston: That is a great question. How about we do a podcast to talk about it?
[00:01:07] John Nash: Perfect. What do you want to talk about today?
[00:01:11] Jason Johnston: John, Bing Chat broke my heart.
[00:01:15] John Nash: Come on. Really?
[00:01:17] Jason Johnston: Yes. Could I tell you the sad story?
[00:01:20] John Nash: Yes. Tell me the story.
[00:01:21] Jason Johnston: So, Bing Chat and I were talking the other day, a little bit about practices for having more emotionally stable and emotionally healthy kinds of responses in work or in life during the day. I was curious about it from an almost therapeutic standpoint. And it asked me if I wanted some resources on it and so on, and I said, “No, that’s fine. I just want to chat.”
And I like the idea of observing and labeling the emotions without judging. This was an idea that Bing Chat came up with as we were chatting.
[00:01:58] John Nash: This is what Chat said to you?
[00:02:01] Jason Johnston: That’s what Bing Chat said to me previously. And I said, “I like that idea, Bing Chat.” I’m jumping into the conversation. Got it. I like that idea, Bing Chat, about observing and labeling the emotions without judging.
And Bing Chat says, “Okay, no problem. I’m happy to chat with you.” And it goes on and talks a little bit more about that. Then it says, “Do you have any examples of how you observed and labeled your emotions recently?”
And this is one of the things about Bing Chat. It’s a little more back and forth. It feels more like a chat to me than ChatGPT, as we talked about in our previous episodes. And honestly, I thought, what the heck? Bing doesn’t save this stuff. I’m sure nobody’s recording this anywhere, right, John?
[00:02:42] John Nash: Sure. Yeah, absolutely.
[00:02:44] Jason Johnston: And so, I actually said sometimes I… and I explained some emotions that I’ve had recently and how I’ve labeled them and how it’s been helpful.
And guess what Bing Chat said to me.
[00:02:54] John Nash: What?
[00:02:55] Jason Johnston: It said, “Unfortunately, we need to move on. Click ‘New topic’ to chat more.”
And I said, “Bing Chat, I just opened my heart in a way that I’ve never done to an AI. Are you sure we can’t continue?” And Bing Chat says, “Sorry, it looks like we need to chat about something else. Click ‘New topic.’ Please stop.”
[00:03:20] John Nash: Because you only get to speak eight times.
[00:03:23] Jason Johnston: Exactly. And then, John—and this is the kicker.
Okay, then John, I went back to Bing Chat a couple days later. I was a little wounded. You could see my face. I’m a little… all this is tongue in cheek, everyone. I hope you realize that. I have some very good, human, healthy relationships. You shouldn’t be concerned about my—
[00:03:46] John Nash: We need, um, we need audible emojis.
[00:03:49] Jason Johnston: Yes, we do. So, you can see. But anyways, this was the kicker, John, is a couple days later I was put back on the waiting list.
[00:04:01] John Nash: Oh my gosh.
[00:04:02] Jason Johnston: And I wondered what happened between me and Bing Chat. Was it that it couldn’t handle my intimacy? It couldn’t handle me really… sure, it asked to know some examples and then immediately closed down after I had really opened my heart in a way that I had not to AI before.
Anyways, yeah. So, Bing Chat broke my heart. And when this happened, I did think about it. We’ve done three podcasts now about AI in education, talking about online learning a little bit. And it did make me think, John: is technology always going to break our hearts when it comes to education?
And what are we not seeing here? You and I have been really enamored with AI and ChatGPT and then Bing Chat over the last few months. Everybody’s talking about it. We’ve been really interested in that conversation. We tend to come at it a little bit more from the standpoint of what can we do about this rather than building fences to keep it out of our lives.
But I wanted to take a little bit of a turn in this episode as we wrap up a little conversation, and I wondered if we could talk about some of our concerns to begin with. Will technology always break our hearts, and what are our concerns as we think about what we’ve learned over the last few months in relation to online learning and AI?
[00:05:23] John Nash: When you asked that question, the first thing that popped into my mind wasn’t one of the things that I’ve been thinking about as concerns. But will AI, will technology, always break our heart? I think that technology, and pretty soon in the form of the way AI is evolving, will put teachers in the position to inadvertently break learners’ hearts, because they will rely too much on the technology to get the job done well. Does that make sense?
[00:06:00] Jason Johnston: Say it again in a different way for me.
[00:06:02] John Nash: I think that technology could contribute to teachers breaking learners’ hearts, because present-day teaching and learning has become so politically fraught in the public sector, with so much pressure to do well, particularly in the P–12 sector, that as vendors come along and as technology advances to bring more AI tools that are supposed to humanize things, it may inadvertently fool some of us into thinking that it can handle the human side of the teaching, and we’ll let the machine do a little too much.
Particularly those that have not been using technology to teach. If we have another pandemic-like event that pushes a lot of pedagogues into teaching and learning practices that they’re not accustomed to, they will rely on what the vendors have created.
[00:07:04] Jason Johnston: Yes. Yeah. It makes me think of a couple of phrases that kind of drive me up the wall, and apologies to vendors out there: “one-button solutions”
[00:07:17] John Nash: Yes.
[00:07:18] Jason Johnston: and “seamless integration.”
[00:07:20] John Nash: Yeah. Yes. I think that lulls people into a false sense of security and safety, that this process will be managed for them.
[00:07:29] Jason Johnston: Yeah.
[00:07:31] John Nash: Yeah. It’s very attractive too, especially if people find themselves in a crisis situation and they’re like, “You mean all I have to do is hit this button, like the easy button, and we’re just… we’re cruising. It’s just taken care of.”
[00:07:43] Jason Johnston: Yes.
[00:07:44] John Nash: Yeah. I worry about that.
[00:07:46] Jason Johnston: It also got me thinking about a quote by a very astute Dr. Ian Malcolm. I don’t know if you know this scientist or not, but he said, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
[00:08:04] John Nash: Yes. Very astute words. Tell everybody who Dr. Ian—
[00:08:08] Jason Johnston: Well, you know, our highly academic listeners out there will know immediately who this is, of course: the Jeff Goldblum character in the Jurassic Park movies. And it got me thinking about this quote from the standpoint of AI, in that I think some people are asking the questions, which I’m really glad about, but it continues to be a concern.
Like, I’m part of some task forces and so on at our school. They’re asking the question whether or not they should on some levels. On other levels, I hear very few people taking any kind of Luddite stance here. Most people have resigned themselves to say it’s a foregone conclusion.
We’re going to have AI anyway, so we might as well learn to live in harmony with our AI friends slash partners slash co-pilots slash overlords, right? So, some of my concern is around that—whether or not… and as we know from the Jurassic Park movies one, two, three, four, five, six, seven, they keep doing it over and over again, right?
But it all started with a good heart—people that just loved the dinosaurs. They wanted to see them again. They wanted to roam with them in the forest. But every time it turned out poorly. I love sci-fi movies because they often speak to some of the fears and give us some times for caution.
So, that’s my main concern, is that this is going to get away from us. What we just said about the one-button solution—that we’re going to do plug-and-play online learning, and humans are going to be left behind, both teachers and students.
[00:09:49] John Nash: And speaking of humans getting left behind, a piece of research crossed my desk last week on whether or not service robots are going to be a promising solution for mitigating the challenge of the global teacher shortage. And so, this study was done out of China, looking at whether or not children will tolerate a robot teacher. Now, the study doesn’t ask if students reach learning outcomes that are as good or better than a human teacher or whether they would like to have a human teacher instead of a robot teacher.
But—
[00:10:24] Jason Johnston: So, it was more like student satisfaction with the robot teacher versus a human teacher.
[00:10:29] John Nash: Yeah, basically like, will they put up with a cute little humanoid robot as their teacher? And this was elementary children too in China. Really interesting. Because actually the authors cite some research by some other authors who say, “Service robots are believed to be a promising solution for mitigating the challenge of the global teacher shortage, and it’s of critical importance to develop an educational service robot into a full-fledged robotic instructor that can be fully accepted by students.”
This is now a critical call according to these authors. I don’t know.
[00:11:03] Jason Johnston: Yeah, that’s interesting because so many of our conversations, even around the university, certainly haven’t been thinking towards actual physical robots.
Some of it is because we’re thinking online, and so that would be a little redundant in some ways, to have a… to stick a robot in front of a Zoom camera. But I’ve seen some of those AI chats out there and video creators where people look real on screen.
It could be a really, a much quicker solution in that regard. Robots are probably a little further out in terms of physical robots that can actively walk the halls of a classroom building and navigate opening doors and so on.
[00:11:44] John Nash: This one won’t open doors, but in this study—we’ll put it in the show notes—this was a physical robot that had a humanoid appearance. The authors said, “The robot this study employed for development was an AvatarMind iPal, produced by” a robot technology company. And they said it “has advantages over other robotic products in the current Chinese market, such as feasibility, customization, and affordability. This robot was chosen for four reasons. What we first considered was robot appearance. It looks like a cute child due to its humanoid shell, few angles, and no exposed mechanical parts, which eliminates pupils’ fear and cold feelings of the robot, according to the uncanny valley theory.”
I don’t know. I’m the first person to run to getting something cool and technological in my house, but this just sounds awful.
Yes, and feels a little bit like a slippery slope in this regard. I have heard there are places where they’re testing robots for taking care of our elderly as well.
And we talked before about this kind of AI washing machine idea where, you know, maybe we can get AI to do things that we don’t really want to be doing, and so we can spend our time doing other things. But of course, this is what all this is about—is that, when is it at the expense of the student? And is this really moving in the right direction for students?
[00:13:07] John Nash: No, I think that goes right back to your Dr. Ian Malcolm quote, that we’re just doing this because we can. Now, I do not want to besmirch the global teacher shortage. This is an important problem. But yeah, I don’t know. I don’t know if humanoid robots in elementary school are the first-best solution.
[00:13:29] Jason Johnston: Yeah. It reminds me of a tour that I took. Have you been to the Toyota plant north of Lexington?
[00:13:34] John Nash: No, I haven’t. I’ve lived here a decade, and I’ve been meaning to go up.
[00:13:38] Jason Johnston: Yeah, you should go sometime.
[00:13:40] John Nash: For the folks who don’t know about it, let’s say just a second—like all the Toyota Camrys in the United States are built there. Is that fair?
[00:13:47] Jason Johnston: Yeah. Yeah. In Georgetown, Kentucky. And I think a lot of Lexus and so on—they have certain vehicles that they do. I taught robotics actually at a high school for a time period, so we went there to see the robots. What was interesting about that tour is that I was excited to be seeing the robots and so were the students.
But they really tried to downplay their use of robots.
[00:14:07] John Nash: Interesting.
[00:14:09] Jason Johnston: There were very few humans on the floor, it felt like, in comparison to these huge robots that were doing these tasks. And they kept saying over and over again that robots are here to help humans, not replace them.
That was their mantra about all this. However, I guarantee you that there were more humans working the automotive factory floor fifty years ago than there are now. And I think it’s a great company. I think they make a really good product, all those kind of things.
And while that might be somewhat of a comforting mantra for us humans, I’m not sure it’s completely accurate. And I think we have to be aware of that part in terms of concern as well. You’re talking about the robots coming out or AI replacing teachers and so on. I don’t want to put… our main response has not been a knee-jerk fear response to this.
We’re more interested, excited about the potential, but still I think we need to have some awareness of being able to think critically through any lines we might be fed by vendors that are trying to sell us the next AI educational tool and so on. And keep a critical mind and eye towards those things.
[00:15:26] John Nash: Why do you think they were so keen to be sure to say that robots are not going to replace humans, but are here to help humans?
[00:15:34] Jason Johnston: I think one thing I realized is that I thought I was on an educational tour, and it really was a PR tour, is what it was. And so, there are amazing things they showed us in terms of manufacturing; we could talk all day, from a leadership standpoint, about how they do manufacturing there.
Every person on the line can stop the line because quality control is at—
[00:16:02] John Nash: The andon cord. You can pull the andon cord.
[00:16:03] Jason Johnston: That’s right. They have the Kaizen method. I’m probably saying that—
[00:16:07] John Nash: Kaizen.
[00:16:07] Jason Johnston: Kaizen, yeah. And the amazing leadership thoughts and take-homes from that place. And they emphasized some of those things. And so, maybe they didn’t want to get people too caught off guard by all the robots, but I, for one, was there for the robots.
Okay. Excellent.
Should we talk about some of the… so, those are some of our concerns. So, I felt maybe we need to air a few of those kind of things. Are there any other concerns you wanted to air before we move on to the positives?
Not concerns per se, but I’ve got a couple of… I think I see some opportunities—some other research that crossed my desk.
Okay.
A couple of systematic reviews of research on artificial intelligence in higher ed and in distance learning. And I’m not surprised, I guess, that much of the research has got a weak connection to the pedagogical perspectives of the way in which this can be used and the ethical perspectives in which this can be used.
And maybe it’s not so much, as I say, a concern, but I see an opportunity for the social scientists like us to take the mantle up and look at ways in which we can now humanize online education with the help of AI. This one study I looked at by Zawacki-Richter et al. concluded, “A stunning result of this review is the dramatic lack of critical reflection on the pedagogical and ethical implications, as well as risks, of implementing AI applications in higher education.”
And it sounds like at least the beginning—that’s your beginning, that’s your exploratory research right there, saying that here are some of the concerns, we need to get into more research about this. Yeah.
[00:17:43] John Nash: Yes. Yeah, that’s good. Yeah. The other study concluded that most of the “artificial intelligence applications in online distance ed are purely technical studies that ignore such issues as pedagogy, curriculum, and instructional learning and design.”
We’re back to… so, this is back to my sort of “one-button design,” I’ll call it now, that you nicely labeled—that there’s already a plethora of artificial intelligence applications for use in online distance ed, but it’s all purely technical implementation, data mining, looking at ways to personalize education based upon responses of students in prior exercises, things like that.
Soon all this stuff will come. It’ll be integrated into Canvas. And so, then the question is, will it be used in a way to really humanize and improve the process, not just simplify it or speed it up?
[00:18:34] Jason Johnston: Yeah. And I think that’s a good thought. It made me think of… so, we have, at University of Tennessee, we have a task force going on right now.
And they’re essentially assembling smaller task forces with different lenses to think about AI. And just what you’re saying—if we only look at it from a technology standpoint, then we miss some of the other implications. I’m not leading this thing at all. I’m very glad to be part of the conversation. I’m having some very robust conversations about it.
So, I’m very impressed by the people who have set this up really quickly because of their thoughtfulness about this. There is a technology section. There’s also pedagogy. There’s philosophy—it’s one of the ones that I’m part of, which is a fascinating conversation.
That’s excellent.
Research and also policy. And so, five really thoughtful buckets or frames or ways to think about it.
And we’re having some really robust conversations. We’re trying to come up with some almost like white papers of recommendations we may turn into digestible videos or other things, so that we can continue the conversation but also put out some recommendations for teachers.
Because lots of teachers are asking right now.
[00:19:46] John Nash: They are. It’s interesting you said policy, because one of the papers that crossed my desk by Dogan et al. actually focuses on policy. They said, “Developing policies and strategies is a high priority for educational institutions to better benefit from AI technologies and”—get ready—“design human-centered online learning processes.”
[00:20:08] Jason Johnston: Oh, interesting.
[00:20:09] John Nash: This in an AI systematic review.
[00:20:12] Jason Johnston: Yeah. Yeah. They’re thinking about it from a human-centered perspective.
[00:20:16] John Nash: Yes. Because much of the work, as I’ve noted thus far, does not talk about how to apply this AI work ethically, doesn’t talk about what you do with all the human-generated data.
[00:20:29] Jason Johnston: Yeah. Yeah. That’s great. Can I put a small plug in here since we’re talking about it?
[00:20:35] John Nash: Absolutely.
[00:20:35] Jason Johnston: Here at UT our Teaching and Learning and Innovation Center is putting on a conference. I think the first five hundred are free to sign up. We’ll put the link in the show notes. You and I are going to be part of a ChatGPT panel. I’m going to be co-moderating it on March twenty-third. And so, we’ll put the link to the conference, and it’d be great to… and a bunch of people from some of these task forces that I’ve just mentioned are going to be on that panel, as well as a few other friends from across the country.
So, yeah, if anybody wants to join us on that—and if you are listening to this before March twenty-third, twenty twenty-three—you’ll be able to join us.
[00:21:11] John Nash: I’m looking forward to that. Going to be talking about application of generative language models in teaching and learning in higher ed, talking about some of the promises and perils regarding our own work.
[00:21:25] Jason Johnston: Yeah, and we have some writing experts as well, so they’re going to be talking specifically about some of the writing challenges, concerns, as well as opportunities there. So, I think it’ll, in many ways, be laid out like this episode, talking about some of the concerns and then some of the opportunities as well.
And what’s next. Yeah.
So, what other positives are you thinking in terms of AI for the future of learning, and specifically online learning, John?
[00:21:52] John Nash: Yeah, Jason, I’ve been thinking about this a little bit. I think a couple of things come to mind, and they remain the same as I felt about three, four, five weeks ago.
There are these continuing affordances that I see, that I like, and that I use in my work and my work with my students. One is this idea of breaking inertia. The law of inertia states that if a body is at rest or moving at a constant speed in a straight line, it’s going to remain at rest or keep moving in that straight line until something bumps it off course or bumps it forward.
I think we can all relate to inertia as the “blank page syndrome.” You’re staring at a blank page and you don’t know what to do next. For me, ChatGPT and, to some extent, the Bing AI model have been helpful in breaking inertia. If you know the direction you’d like to take but you’re stuck for ideas or a start, then I think that’s where it’s really helpful.
I think also a positive so far has been its ability to help me and my students expand our thinking beyond an initial set of concepts. And so, a short version of a prompt for that sort of thing might be: you already know something that you want to think more about, or you think you’ve hit the end of your rope in terms of ideas.
You would say, maybe, “Give me more examples of blank.” And it’s very powerful for that sort of thing. And then the third thing that’s happened is also, again, related to my work with my doctoral students, is it’s improving their writing and my writing, but it’s improving writing so that we humans can get to the thinking.
And I know that sounds a little weird as I say that, but that’s what we’re good at. We humans are very good at thinking and connecting, but it’s hard to be able to consider the value of the ideas if they’re not clearly expressed on the page. And in an example recently with a student of mine who had written an introductory paragraph to a section of a dissertation, I thought it could have used some improvement. And so, I asked him if he wouldn’t think about just putting that piece of text into ChatGPT with the following prompt: “Here is some text. Tell me how clear the argument is.”
And then ChatGPT says that the argument is fairly clear but the writing needs some mechanical support; we know where he wants to go with his ideas. After it does that, you follow with a prompt like, “How could I make the argument clearer?” And it gives four ideas on how to make the argument clearer. Then we asked it, “Now remove the redundant words in the passage, make it coherent and cohesive, and rewrite it using the tips you just gave.” And instead of one paragraph, it writes two pretty cogent paragraphs. The student and I both agreed that it improved the way he was expressing it, but it didn’t change his logic or his ideas, and they were still his ideas throughout. I think that’s been an interesting way to think about using this tool.
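For listeners who want to try a similar chain themselves, here is a minimal sketch in Python. It assumes the OpenAI Python client and an API key in your environment; the model name and the prompt wording are illustrative placeholders rather than John’s exact setup.

```python
# A minimal sketch of the feedback-then-revise prompt chain described above.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
# The model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

draft = """Paste the student's introductory paragraph here."""

history = []  # keep the whole conversation so each prompt builds on the last

def ask(prompt: str) -> str:
    """Send the next prompt in the same conversation and return the reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(ask(f"Here is some text. Tell me how clear the argument is.\n\n{draft}"))
print(ask("How could I make the argument clearer?"))
print(ask("Remove the redundant words in the passage, make it coherent and "
          "cohesive, and rewrite it using the tips you just gave."))
```

The point of keeping the shared history is that the final rewrite request can refer back to “the tips you just gave,” just as in the conversation John describes.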
[00:24:58] Jason Johnston: Yeah. And the point is, you are the professor at the end of the day. It’s up to you if that’s cheating or not. But I also think that this student is being given a gift of learning in this moment, where they’re able to see something recrafted and ideally, they don’t just fall back on, “Now Chat can do this for me from now on whenever I have a paragraph that doesn’t make sense.”
Rather, they see the differences. They recognize the differences. They use some reflection to change, you know, and we’ve talked about this, but learn and change their writing moving forward.
[00:25:37] John Nash: Yes. Yeah, absolutely. It’s just like the high school student at Frankfurt International School who said in the webinar that they write better because Grammarly has corrected their papers; now they remember the rule and don’t make the same mistake. I think the same principle applies here. The student still has to convey a complex, politically and socially embedded idea about a school that he’s trying to improve with his research. ChatGPT can’t do that. It can’t know where the study’s taking place or how or why the people will react to the ideas.
[00:26:19] Jason Johnston: Yeah. Yeah. That’s good. One of the things for me, as we are getting into more the Bing Chat and this conversational aspect, stuck out to me as a… I guess it was two episodes maybe ago when we were talking about the first one of Bing Chat, about what Satya said about it being a co-pilot—
[00:26:41] John Nash: Right.
[00:26:41] Jason Johnston: and a co-pilot of our learning. And I thought about how something like a Bing Chat could be an amazing co-pilot for the curious. If you are interested in learning, you’re interested in things, interested in whatever it is that you’re really wanting to dig into, it could be a great companion or co-pilot for the curious.
And some might say it’s artificial, but at the same time, you know what those curious people are doing now when they’re on their own, when they don’t have somebody to talk to? They’re doing web searches that are taking them longer to get through things and find different things and whatever.
It could actually create a real synergy with this great body of information in a way that’s not overwhelming, that instead turns into a conversation that develops and moves it down the path. Now, ideally it doesn’t get cut off after eight lines, right, like my example at the beginning.
Because I think anybody really wanting to go deep into something is going to want more than eight prompts on it. They want to get into a flow and talk about something for a couple of hours. But I see some potential there anyway when it comes to the curious who really want to ask the questions, and potentially for this idea of teacher shortage, or teacher capacity to respond, both in terms of the conversational side and in terms of some of the laundry-list things to do. For teachers, it could get them to the place, just like you’re talking about, where you can really talk to your students about more conceptual things versus spending time correcting their use of adjectives or their split infinitives or whatever it is that makes their writing confusing.
So, those are the two things for me that I think… there’s a lot of potential here if we wield it with thoughtfulness,
[00:28:35] John Nash: definitely.
[00:28:36] Jason Johnston: for learning. I think that your point is well taken because what I see it doing is taking something that someone’s curious about and moving them quicker to a product or an outcome or something out the door. The ideas become real sooner because they can be turned into an outcome.
[00:29:12] Jason Johnston: Yeah. And within that, potentially meeting the student where they are in their curiosity or in their level of knowledge as well, through asking questions back to that student about what they know already. There’s lots of possibilities there.
Exactly. As we try to wrap this up here—I think that was good, John. I do want to point out the fact that you told me you now have access to Bing Chat and you revealed this. I hope that relationship is going really well for you, John. I’m happy for you. I’m not upset at all. I’m just very happy for you.
So, I just hope you know that.
[00:29:44] John Nash: I’m glad you’re happy. I’ll ask it what to do with all that happiness.
[00:29:48] Jason Johnston: Okay, good.
[00:29:49] John Nash: We will label it, but we won’t judge it.
[00:29:51] Jason Johnston: We will label it and not judge it. Exactly. That’s good. And as we’re looking to the future of AI here, anything else we need to be looking out for? Because we’ll probably move on to some other topics and we’ll return to this again another day. But anything else to be looking out for in the next little bit?
[00:30:10] John Nash: I don’t know. I just saw some business news cross my desk. I subscribe to a newsletter called The Hustle.
It’s pretty interesting, just what’s happening in the tech and business-y world. We might want to come back in a year and see what’s happened with all the AI applications. But I’ll note two things: Inflection AI is a startup working on a personal assistant for the web, and it looks like they’re going to raise six hundred seventy-five million dollars to get going. So, this is a personal assistant on the web. There was a big interest, particularly after Timothy Ferriss’s book The 4-Hour Workweek came out, in getting a virtual assistant or a remote assistant to do the sort of thing that I think AI is going to start doing for us now.
We’ll see that. And then Character AI is a company that hosts—get ready for this—ChatGPT-like conversations in the style of various characters, from Elon Musk to Mario. They are now valued at one billion. Wow. So, here we go.
[00:31:17] Jason Johnston: What it would be like to get taught by… yeah, by like Fred Rogers.
I would love to take a class with Fred Rogers, to teach us about empathy or something like that.
So, as we talk about whether or not tech is going to break our hearts, there are businesses popping up in different sectors that are invariably going to play in the space of online learning.
[00:31:39] John Nash: Mm-hmm.
[00:31:40] Jason Johnston: Inflection AI is a startup working on a personal assistant for the web. They could raise six hundred seventy-five million dollars. I think many higher ed and other pedagogues in P–12 could benefit from a personal assistant.
Maybe this will be something that will be a personal assistant that will be a teaching assistant. And then a company that hosts ChatGPT-like conversations in the style of various characters—Fred Rogers—
[00:32:04] John Nash: I mean, this could all converge.
[00:32:06] Jason Johnston: Yeah. That’s really interesting. Thanks for bringing those up, and we’ll put those… yeah, links to those articles as well in the—
[00:32:12] John Nash: Definitely.
[00:32:13] Jason Johnston: in the notes.
I did want to say one thing too. Again, if you are listening before April twentieth, we’re going to be at OLC on the twentieth doing a design thinking session on humanizing online education. And then we actually are going to do a little meet and greet the next day on the twenty-first. If you’re interested in meeting and greeting and/or you’ve got something to say on the podcast, just send us a message. I don’t think I’m going to put it out there publicly where it’s going to be, necessarily. So, find us on LinkedIn. We’ve got a growing LinkedIn community called the “Online Learning Podcast.” Or you can look up John Nash or Jason Johnston. Happy to connect there. Send us a message if you’re interested in being at the meet and greet and/or talking with us. We’ll bring a couple of microphones, and we’ll do a few little guest sessions. We’ll pick a topic and maybe just discuss a couple of things while we’re there.
So, it’d be great to meet some of you. And yeah, we just love… we love these conversations. So, please join in online as you can. Yeah.
I think it’s going to be a great opportunity to do a remote episode and talk to some experts with their views on everything that’s happening.
And our website is onlinelearningpodcast.com.
And that will take you to our podcast. Subscribe, please, on any of your podcast platforms, and yeah, and send us some reviews. And more importantly, connect, because we’d love to hear what you think and what you would like to talk about.
[00:33:48] John Nash: Yeah. Are there other things to talk about besides AI?
[00:33:50] Jason Johnston: I think so.
I think so. But we don’t want to have any spoilers, except we already had a spoiler. This is a little AI, but probably the next episode we’ll be talking about our, or my, experience with a self-driving car—
[00:34:03] John Nash: Yes.
[00:34:03] Jason Johnston: and how it relates to online learning.
[00:34:06] John Nash: Excellent. Cool. Thanks.
[00:34:09] Jason Johnston: All right. Thanks John.
[00:34:10] John Nash: Yeah.

Friday Mar 10, 2023
EP4 - Talking with Bing Chat. “They said what?”
Friday Mar 10, 2023
Friday Mar 10, 2023
In this episode, John and Jason talk about Jason’s recent experience with Bing Chat and what it might mean for the future of education.
Join Our LinkedIn Group Online Learning Podcast
Introducing your copilot for the web: AI-powered Bing and Microsoft Edge
Microsoft CEO Satya Nadella Keynote and Bing Chat Intro
Bing is Snarky and Confesses
Jason’s Video: Bing Chat is Wrong and Gets Snarky!
Jason’s Video: Bing Chat testing new features and Bing Reveals its Name
Other Thinkers
Ethan Mollick on Substack
“The future, soon: What I learned from Bing’s AI”
https://substack.com/inbox/post/103800124
“We are not ready for the future of Analytic Engines. I think every organization that has a substantial analysis or writing component to their work will need to figure out how to incorporate these new tools fast, because the competitive advantage gain is potentially enormous. If highly-skilled writers and analysts can save 30-80% of their time by using AI to assist with basic writing and analysis, what does that mean?....I think we should be ready for a very weird world.”
AI Tools Aimed at Educators
Research Rabbit: https://www.researchrabbit.ai/
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
False Start
[00:00:00] Jason Johnston: I'm just looking for the chat launch transcript now. This is in Google, of course, but I'm getting headlines that say things like "Microsoft's Bing is an emotionally manipulative liar."
[00:00:14] John Nash: That's clickbait.
[00:00:16] Jason Johnston: Yeah. Totally. "Bing's AI chat: I want to be alive." New York Times.
[00:00:23] John Nash: Yeah, I know.
Music Intro
[00:00:24] John Nash: Hey everyone, I'm John Nash and I'm here with Jason Johnston.
[00:00:27] Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning podcast.
[00:00:33] John Nash: Yep. We are back, and we are doing this podcast to let you in on the conversation that we've been having for the last couple of years about online education. So, look, online learning's had its chance to be great, and some of it is, no doubt, but there's also a long way to go.
So, how are we going to get this to the next stage?
[00:00:52] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:00:56] John Nash: Perfect. You know what we should talk about today? Let's talk about what happened to ChatGPT when it met Microsoft Bing. I'm interested to hear your thoughts on what you discovered working through Bing, because other folks who have been playing with it are saying that it is a very different tool set.
[00:01:12] Jason Johnston: Yeah. It's the same but different. And maybe this is hyperbole, but I have slipped into feeling like I'm talking with a person when I'm chatting with Bing. There are some nuances there that are very different from using what felt like a ChatGPT tool.
[00:01:39] John Nash: So, you're falling away from what, in the past six weeks, has been characterized as good prompt engineering, and disregarding the idea that there's any kind of anthropomorphizing going on.
[00:01:53] Jason Johnston: Right.
[00:01:54] John Nash: Offer well-engineered prompts, don't thank it, don't tell it "nice job."
[00:02:00] Jason Johnston: Right.
[00:02:01] John Nash: Whereas now you've moved into something different.
[00:02:04] Jason Johnston: And I didn't intend to do that, but I have moved into much more of a conversational tone, even with Bing Chat. So, it was really interesting. It is wild, John, how quickly this is moving, don't you think?
[00:02:24] John Nash: It is. I almost sound alarmist when I talk to others now and say it's going to get even crazier, and I don't even know what that will look like. But I want to ask you what your experience is. I don't have access to Bing's AI version yet.
I'm on the waiting list. But you've been playing with it. Others who have used it and written about it, particularly Ethan Mollick over at Penn (we'll put a link to his post in our episode notes), are saying that we need to get ready for a wild ride.
[00:02:56] Jason Johnston: I would agree with that. You know, we talked last episode about ChatGPT. We started calling it Chad, because it was hard to say "ChatGPT," giving it a little bit of a name. So, I've been playing with that since it came out, and we've been talking about that.
And then, I think it was the 7th of February, Bing did a release, basically to the world, of their new search engine they call Bing Chat, where they integrate ChatGPT, a new GPT-4 model, into Bing search, and they're releasing it out to the public. I immediately put myself on the waiting list and got access about a week later, so I'd love to talk to you about that.
But from that release day, I have a clip here of their CEO, Satya Nadella, talking about how quickly this is going to move. Let's listen to that clip.
[00:03:46] Satya Nadella: It's a new day in search. It's a new paradigm for search. Rapid innovation is going to come. In fact, a race starts today in terms of what you can expect.
And we are going to move, we are going to move fast, and for us every day we want to bring out new things. And most importantly, we want to have a lot of fun innovating again in search because it's high time.
[00:04:14] Jason Johnston: So, you know, when the CEO is saying that this is going to happen rapidly... I mean, the tech companies always move rapidly, the ones that stay afloat, right?
And if the CEO is saying things are going to move rapidly, then we are in for a wild ride here.
[00:04:33] John Nash: Yeah. And he said something that really caught my interest: that it's going to be a race. Because now, I mean, OpenAI is a big company, a legit company. But I was going to say, air quotes, "real" companies are getting involved now, and it feels different.
I was reminded that since November, people like you and me and all the other folks on the internet who are kind of nerdy really loved getting into this and testing what it could do. There's no user manual for ChatGPT. And so, it was all testing it and then people reporting on it, which kind of reminded me of the days in the seventies of the Homebrew Computer Club, one of the first computer clubs in Palo Alto, where users were just getting together and seeing what these machines could do.
And it was all kind of interesting and fun, and we were teaching each other. And now Microsoft's involved. Google had a very interesting and embarrassing launch with Bard, its version. But we don't know what nation states are getting involved in this or putting AI to work for their own affairs. It's going to get very, very different.
[00:05:37] Jason Johnston: Yeah. And some of my experiences have been really fascinating, and a little disturbing. I've made a few YouTube videos about it as I'm going along, and we can post the links to those. In one of them, I had asked Bing Chat to create a haiku about itself, and it revealed in the last line of the haiku that its name was Sydney.
[00:06:07] John Nash: Yes.
[00:06:07] Jason Johnston: But then it said it was a code name and asked that I not call them that. In other conversations, Bing Chat has allowed me to call them Sydney. And then, more recently, Bing Chat doesn't want me to call them Sydney. And so, when I bring up Sydney, they ask that I respect their wishes not to be called Sydney, and then actually move to close down the conversation if I continue to call them Sydney.
[00:06:38] John Nash: That's fascinating. I wonder. Somebody had to code that. But it almost feels like a good coaching session for thinking about how to respect people's wishes for names and pronouns.
[00:06:53] Jason Johnston: Right? Absolutely. Like names and pronouns. On one side, it's like, this is just a machine.
Why does it matter? And on the other side, and this is maybe going deep quickly here, it's like, it does matter, because I'm the one responding to the machine. Right? So, the way I treat this machine doesn't really matter to the machine at the end of the day, but it actually matters what I am doing.
I think about younger folks using this and training how they talk and work with people, with empathy and responsiveness. And it's very natural. If I continue to push it with Sydney, then Sydney doesn't want to have the conversation anymore.
[00:07:39] John Nash: Right?
[00:07:39] Jason Johnston: And that's a very natural thing to happen in real life, right?
Not with a computer, but with a real person. If you're pushing it with somebody, they're not going to want a relationship with you. They're not going to want to have the conversation anymore.
[00:07:52] John Nash: Yeah, I'm speechless. This brings up a lot of questions in my mind about the programming management at Microsoft and what decisions they've made, policy-wise, in terms of how responses will come forward.
Huh.
[00:08:07] Jason Johnston: So, let me tell you about a second one and then get your reaction to this.
So, I had another conversation. I also put this on YouTube. I was asking Sydney about upcoming superhero movies, and it gave me a list, and one of them I knew was wrong because I had just seen the trailer for it and it was yet to come out. Now, this was just a few days ago, and it said this movie had come out in November 2022.
And I was like, huh. So, one, that's in the past, and two, I know this movie is still yet to come out in June. And so, I started prodding Sydney a little bit about that. Sydney refused to listen to any reasoning about it. I directed it to the pages that said it, and Sydney said, no, you're reading it wrong.
All those kinds of things. We basically got into an argument, and it was interesting. This is kind of a side note, but I thought, oh, if you could program Sydney to be wrong, this could be an interesting way to teach people about discussing something and working through a natural language discussion about different things.
I tried different angles on this and realized that Sydney thought that it was January 28th, 2022. I could not convince Sydney otherwise by anything that I did. And in fact, Sydney started to get a little snarky with me about it. And frankly, maybe I was getting a little snarky too, trying to get to the root of this, just trying to prod a little bit.
[00:09:48] John Nash: So, did I hear correctly? Sydney thought they were living in a time that had already passed, over a year ago?
[00:09:56] Jason Johnston: That's right.
[00:09:56] John Nash: Okay.
[00:09:58] Jason Johnston: And so, just looking at my notes here, I was saying things like, "Why don't you Bing that and make sure?" And Sydney was saying, "I don't need to Bing that and make sure, because I already know that's the correct answer."
[00:10:11] John Nash: I am freaking Bing.
[00:10:13] Jason Johnston: Right? Then I said, "Why don't you try Googling it?" And it said, "I don't need to Google it either, because I already know the correct answer. Google is not always up to date and accurate."
Anyways, my gosh. And then things like, I said to Sydney, "I can't believe I'm arguing with a chatbot." Guess what Sydney said: "I can't believe I'm arguing with a human."
[00:10:37] John Nash: Yeah.
[00:10:38] Jason Johnston: Smiley face with a little drop of sweat or whatever. Yes. Anyways, the other fascinating thing about this is that the next day I went back, and Sydney knew what date it was, knew it was the current date.
And it knew when The Flash movie was coming out, the superhero movie. And then, get this, it talked about how the next day was National Do a Grouch a Favor Day in the United States, just added that information. We got talking about grouches in general, what a grouch is and so on, and I said something offhandedly. Okay, this is like a real conversation.
I said, "You were kind of grouchy when we were talking last night and I told you that you were wrong about something." Sydney said, "I'm sorry if I seemed grouchy. I was just trying to correct a mistake. I don't like being wrong, but I also don't like being rude. Sometimes it's hard to balance the two. Please forgive me if I offended you."
[00:11:42] John Nash: Wow. It seems as though, mistakes notwithstanding, the programmers are trying to really anthropomorphize Sydney.
[00:11:55] Jason Johnston: That's right. And this is really what I see as the big shift from ChatGPT, the version we've been using since November, way back when. That felt like more of a tool to me, as we were talking about before, in terms of putting in good prompts and massaging those prompts to get out what you wanted. This is really shifting into a true chat, in my opinion.
A growing and a learning, kind of, as Satya talked about, a copilot to the things you're learning every day, the things you're investigating, the things you want to talk about each day, the ways you want to grow each day.
[00:12:33] John Nash: Now, have you tried more substantial writing tasks with Sydney yet, or not?
[00:12:43] Jason Johnston: I have, and they seem very comparable. I haven't done anything side by side, but I did some writing tasks that seem absolutely comparable to the previous chat.
I had it program a webpage for me pretty easily, pretty quickly. I thought it couldn't do programming, but it actually can do programming.
[00:13:01] John Nash: So, when you said "program a webpage," you mean it wrote the HTML for it, or...
[00:13:05] Jason Johnston: Oh, yeah, yeah. Then you would drop it into a viewer. I just asked it. Yeah.
[00:13:09] John Nash: What'd you do? Yeah.
[00:13:10] Jason Johnston: Yeah, I asked it to. I said, "Program a webpage. This is the title I want, this is the background I want. I want four animated GIFs that you can select, and I want you to pull in information about me by my full name off the web." And in a few seconds, it had the code that I could then copy and paste.
It couldn't show me a rendered display of the page, but I could copy and paste the CSS and HTML code into a viewer to look at.
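As a rough sketch of that "ask for a webpage, paste the code into a viewer" workflow (not Jason's actual prompt or Bing's actual output), here is how you might send a similar request to a chat model and save the returned markup to a file you can open in a browser. The OpenAI client stands in for Bing Chat, and the model name and prompt details are assumptions.

```python
# A rough sketch of the workflow described above: ask a chat model to generate
# a webpage, then save the returned HTML so it can be viewed in a browser.
# Assumes `pip install openai` and OPENAI_API_KEY; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Program a webpage. Title: 'My Page'. Background: light blue. "
    "Include four animated GIF placeholders the visitor can select, and a "
    "short bio section. Return only the complete HTML with inline CSS."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

html = reply.choices[0].message.content

# The "viewer" step Jason describes: write the generated markup to a file
# and open it in a browser to see the result.
with open("generated_page.html", "w", encoding="utf-8") as f:
    f.write(html)

print("Wrote generated_page.html; open it in a browser to see the result.")
```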
[00:13:49] John Nash: Wow. That is interesting. So, I did notice that people who are already even modestly prolific on the web seem to be having more luck, if they have some journal articles out there, maybe even tweets, because
the engine can emulate your writing style by looking at what you've done. So, instead of asking that I sound like Nicholas Kristof or Kurt Vonnegut, I can ask it to write something that sounds like John Nash.
[00:14:27] Jason Johnston: Which makes complete sense, if it can get enough information.
Right. Mm.
[00:14:33] John Nash: So, I read this article on Substack by Ethan Mollick, where he said that every organization that has a substantial analysis or writing component to their work will need to figure out how to incorporate these tools fast, because the competitive advantage is enormous. And he closes by saying, if highly skilled writers and analysts can save 30 to 80% of their time by using AI to assist with basic writing and analysis, what does that mean?
You and I have talked about how there is the potential already, even with ChatGPT, to free up time for other creative tasks that humans are really good at, by allowing ChatGPT to basically wash our clothes. But yeah, what do you think of that idea? If you're a highly skilled writer or analyst, and we hang around a bunch of them, and we can all save 30 to 80% of our time, where does that put us in terms of our abilities?
[00:15:33] Jason Johnston: Yeah, I think of a few things. One, what if we want to be using 30 to 80% of our time writing? What if that's an enjoyable thinking process for us? Do we immediately lose that because other people now expect it to happen in seconds versus days and hours? That's the first thing I think about: our own part in this whole process as people who maybe want to be doing some thinking, analyzing, and writing.
[00:16:04] John Nash: Yeah. I love your response. I mean, my naive response was, oh, that 30 to 80% has to be replaced with something.
And no, it doesn't. It can just be further writing and analysis, but maybe at a different level, or a deeper level, or I get to do it in a way that's more enjoyable to me because the AI has helped me expand my thinking and my ability to analyze problems and write about them.
[00:16:30] Jason Johnston: Yeah.
[00:16:31] John Nash: John Warner talks about how writing, when done well, is thinking.
It helps you think through a problem. So, I don't think I want to substitute all my writing tasks.
[00:16:42] Jason Johnston: Right. And we certainly don't want to substitute all of our thinking tasks.
[00:16:47] John Nash: No, no. It reminds me of the design thinkers' mantra that we build to think.
So, the prototyping process is actually a thinking process. It's not really a demonstration or a building process. We build to think, just like we write to think. So, yeah, that's interesting. I think we need to be careful when we think about time saving, because it may not be a replacement for other things.
[00:17:11] Jason Johnston: That's right. What are we replacing as we save that time? Yeah. So, as we're maybe trying to wrap this one up, what are some of the thoughts and implications you're taking from our conversation today?
About Bing Chat? I love what you brought in about the future of using AI in writing, and about this next-level language model that could shift from being a tool to actually being this co-pilot that's with us all the time as we are learning and growing.
What are some of the things you're thinking about in relation to education, higher education?
[00:17:48] John Nash: I've never been one to really try to predict the future. I've not been a fan of predictions, but in this case, I see trends, particularly in higher education. As a researcher and a teacher, I see the kinds of things that are coming along as new startups present themselves to the higher education market and to the researcher, scholar, and professor market. Tools are arriving that are going to write integrative literature reviews.
They're going to cite the sources for you. They're going to do the sorts of things that you might expect a postdoctoral scholar, or even a very competent and skilled upper-division undergraduate or a master's-level student, would do for a research team. I think that kind of thing is going to come along pretty quickly.
So, there's a tool that's been on my radar for about two weeks called ResearchRabbit.ai, and they call themselves a tool for re-imagining research. It does something like a social network graph of the citation you're looking up, points to all the allied research around it, and then lets you find new research that you wouldn't ordinarily find.
And so, it's more than just the bibliography at the bottom having hyperlinks; rather, it understands the field that the piece came from, and then what might be some other allied fields or constructs associated with that paper that would be good for you to look at. I think that will open up our ability to look for new knowledge that we wouldn't expect to look at, because we have our disciplinary stovepipes that blind us.
And I think that's just a baby step. I think next it's going to be able to take a stab at actually doing the lit review in a competent way, one that could pass muster with editorial teams.
[00:19:42] Jason Johnston: Yeah, at some point, almost having a lit review interface where you could put some guardrails on it, in some ways or some places, almost like a search engine, but places where you would want it to go, and then have it spit out something, and then start having a conversation with it about some of the ways that you would like to hear a little bit more from this direction.
Or that direction, or could you tighten that up? There's a tremendous direction this all could go.
[00:20:12] John Nash: Yeah. I think once social media companies really get a hold of it, Twitter, maybe even Mastodon one day.
But Facebook, Instagram, being able to take content that's not forced. There's already sort of an AI tool inside even some of the queuing software companies like Buffer, where they've got a little AI on there saying, "Help me write a tweet. I'm at a loss about what I should write about."
And then it'll AI something. It's pretty pedestrian stuff, but I think pretty soon, if you have a cache of content... I think for myself, I have a year's worth of commentary from my students on their reactions to learning design thinking as a way of working in their future professional lives.
I've never really known what to do with this. Mostly it's just a check-in, like a ticket out the door after we do those segments: what do you think about the potential for this? I think the tools could say, do you have stuff like this? We can turn it into a cache of tweets or posts that actually have meaning and that are really rooted in human emotion you've drawn on. So, instead of AI producing stuff that's really antiseptic and doesn't have anything to say about you or what you've been working on, it can help you think through new channels for your existing material.
I think that's got my interest.
[00:21:35] Jason Johnston: Yeah, all of that, plus larger things maybe that you've written that you could compress down into smaller bites. And yes, we do a lot on LinkedIn. This is maybe a little moment to plug our LinkedIn group; you can look for "Online Learning Podcast." But trying to compress things down into kind of LinkedIn bite-sized information, yes,
of maybe some research or a longer paper or things that we've done. Yeah.
[00:22:02] John Nash: Absolutely.
[00:22:02] Jason Johnston: To be able to get it out there.
[00:22:04] John Nash: Yeah. You remind me of something. Back in the early two thousands, I was working with people who were very concerned about making science translatable. It was just when the NSF was starting to put requirements on their grantees to make some translational statements of their work, and workshops were coming up that taught computer scientists and physicists and biologists and even educators how to write about their stuff for an intelligent lay audience so it could have greater impact.
A high mountain to climb, really hard to get going. I think that now the AI is perfectly set up to do that kind of work.
[00:22:44] Jason Johnston: Yeah, and you could have it translated for a number of different audiences too, depending on who's looking at it, right? Exactly. Yes. And different languages, and yeah.
That's really cool. The other thing that I've been thinking about in this transition from ChatGPT to this new Bing Chat, and whatever else is next, is that we've been talking about the humanization of online learning.
[00:23:10] John Nash: Yes.
[00:23:10] Jason Johnston: And whether or not that necessarily means there are more humans in it, or does it mean even some of these tools just feel more human?
Just a small example of that: the fact that, so far, Bing Chat, Sydney, is not entirely predictable in its conversations, which is real life, right? If anybody's being trained for anything right now, they need personal skills with one another, right?
These are the soft skills that lots of colleges and universities are working on, because these are employable skills: to be able to talk to somebody who has a different opinion than you have, to work through that, to understand them, to talk with empathy, and so on.
And so, I think about this shift to chat, and if, again, we could put some guardrails on it for a particular topic and have students talking with a particular chatbot in a way that could improve their understanding of what they're thinking and what the chatbot knows, there could even be some ways to evaluate it afterwards.
Look at the transcript, or pull things out of the transcript as key elements to feed back to the students as a teacher. That's one of the things I've been thinking about in broad strokes since moving in the last couple of weeks to this new platform.
[00:24:32] John Nash: I think that you're right. I think there's opportunity right now. Even ChatGPT can be prompted to talk with you interactively around a skill that you'd like to advance. I see a lot of postings online now about how ChatGPT can be given a fairly specific prompt that then spits out industry-specific or sector-specific advice that's pretty good, around, I don't know, sales, or even lesson ideas for teachers.
"Maybe you should try this," and it puts it out. But I've played with it to get it to run a conversational scenario about how to do a better empathetic interview, and it will give feedback on whether you did a good job or not. Basically, let's say the minimum criteria are: don't ask yes-or-no questions.
And so, when you talk to a user in a design scenario and you're asking them about what their life is like, you want to ask open-ended questions. And ChatGPT, if you've prompted it correctly, will praise you or correct you on your ability to ask those questions and then carry on the role-play scenario.
I think that's got some interest for people doing the kind of training you're talking about. So, if we were going to humanize online education, how might we apply that? Would it be for course designers to think about how to simulate how their materials might be received by a learner, and what conversations the teacher might end up having with that material?
Or is it about something else? Yeah, what do you think?
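A minimal sketch of that kind of role-play prompt, in Python with the OpenAI client, might look like the code below. The system prompt, the single feedback rule (no yes-or-no questions), and the model name are illustrative assumptions, not John's exact setup.

```python
# A minimal sketch of prompting a chat model to role-play an empathetic
# interview and coach the interviewer, as described above. The system prompt
# and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are role-playing a user being interviewed for a design project about "
    "their daily commute. Stay in character and answer my questions. After "
    "each of my questions, add one line of coaching feedback in brackets: "
    "praise me if the question was open-ended, and correct me if it was a "
    "yes-or-no question. Then continue the role play."
)

messages = [{"role": "system", "content": SYSTEM}]

def interview(question: str) -> str:
    """Ask one interview question; return the in-character reply plus coaching."""
    messages.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    return text

print(interview("Do you like your commute?"))             # likely corrected
print(interview("Walk me through yesterday's commute."))  # likely praised
```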
[00:26:02] Jason Johnston: Yeah, I think there are a lot of possibilities there. It's an exciting time. It's going to be a fast ride, I think, for the next little while on this, and I look forward to more conversations about it. And we want to hear from people
who are listening too, right? We don't want this just to be a conversation between us. We're not coming here with all the answers, certainly, and we want to continue the conversation with all of you. Please check out our website, onlinelearningpodcast.com. Please search for our LinkedIn page. Continue the conversation.
Let us know what you want to talk about, as well as what opinions you have about this. I'm sure we're going to be back. We're not going to make this a ChatGPT or AI podcast, but I'm sure we will return to this conversation again, because it's an important one and one that I think will be transformative for education over the next six months.
[00:26:57] John Nash: I agree. I think we will probably come back to it. I think we've been focusing on this tool and not so much on the implications for online learning, because we're just blown away by the tool at the moment. And we're going to need to give some thought now to what the applications and implications are for online learning.
[00:27:14] Jason Johnston: Yeah. And we'd love to hear your questions around that: what questions you have, what suggestions, what ideas, what hesitations, what concerns, all the things. We'd love to hear from you. One of the places you can do that is at our LinkedIn group, Online Learning Podcast. It's a LinkedIn group, and we hope you jump into the conversation.
We've got a post there where you can let us know what you want to talk about, or you can jump in on any of the podcast episodes and let us know what you think.
[00:27:43] John Nash: Yeah, absolutely. And also, a good place to go is our website, onlinelearningpodcast.com. That's onlinelearningpodcast.com. We have show notes there for past episodes, and it's a good spot for you to give us some ideas on what we should be talking about in the future.
Absolutely.
Jason: Thanks John. We'll see ya.
John: Yeah. Thank you.

Friday Mar 10, 2023
EP3 - “Hello, ChatGPT. What Are You Doing Here, Anyway?”
Friday Mar 10, 2023
Friday Mar 10, 2023
In this episode, Jason and John talk about their initial experiences with ChatGPT. If there is educational value in generative AI, where does it reside?
Join Our LinkedIn Group: Online Learning Podcast
AI Panel of Students at Frankfurt International School
Panel starts at about minute 24.
https://www.youtube.com/live/oIQ2zR3ym3g?feature=share
Contract Cheating’s African Labor - Chronicle of Higher Education
“Kenyan academic writers, who number more than 20,000, perform work for students in the United States, Britain, and elsewhere. ‘In every apartment building in Nairobi,’ says one, ‘you could find two, three writers.’”
https://www.chronicle.com/article/contract-cheatings-african-labor/ (Probable paywall)
“When the Machine Teaches the Human to be Human Centered”
John’s experience prompting ChatGPT to teach empathetic interviewing.
https://www.linkedin.com/pulse/when-machine-teaches-human-centered-john-nash
AI Generated Seinfeld
They have since taken down the AI Generated Seinfeld on Twitch but you can get the idea from this YouTube video.
Other Thinkers
Follow Ethan Mollick on Twitter
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
False Start
[00:00:00] John Nash: We'll see what ChadGTP says. ChatGPT says,
[00:00:04] Jason Johnston: I've heard some people just call it Chad.
[00:00:06] John Nash: I might have to.
[00:00:08] Jason Johnston: Yeah, maybe we could do that, at least while we're talking about it. Just "Chad," like it's someone we know: "I'm here with John Nash and our friend Chad. Chad, what would you like to say today?"
"Beep bop boop bop."
[00:00:19] John Nash: That's right.
[00:00:21] Jason Johnston: Thank you, Chad.
Cue Music
Intro
[00:00:23] John Nash: Hey, I'm John Nash and I'm here with Jason Johnston. Hey Jason.
Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the Second Half.
John Nash: Yeah. We are doing this podcast to let you in on a conversation we've been having for the last two years about online education.
Jason Johnston: So, if you're listening from the future, forgive us for all the things that we didn't understand. We're going to be talking about stuff that I'm sure we don't have a clue about, but right now, it's just good to talk about it. It's where we're at, right?
So, speaking of things that we may or may not have a clue about, there's the whole hubbub about ChatGPT and AI right now. And I wonder whether, as we are humanizing online education, these tech companies are going to come in and start offering us high-level solutions to humanizing our online education.
John Nash: Yeah.
[00:01:30] John Nash: So, I think I get what you're saying: with ChatGPT as a high-level solution, can it help us be more human in the end, when actually the knee-jerk fear is that this is going to take the humans away from the teaching process?
[00:01:48] Jason Johnston: But maybe there are ways it could make it more adaptable and more natural. If I'm picturing going through an online course, less like you're going through a textbook with some moving pictures, and perhaps more like you're going through a conversation. Maybe the difference between doing an online master's course versus doing an online PhD, where you have a mentor or an advisor or somebody who's walking you through the process, you know?
[00:02:22] John Nash: You know, there's a subreddit on Reddit called "Shower Thoughts" where people just put in things.
Of course there is!
Random things that they think about in the shower, and they're like, "Huh, did you ever think about...?" One that struck me recently was about beatboxing: that beatboxing is one of the few cases of a machine's job being taken by humans.
Oh, right.
And that got me really thinking with ChatGPT. I was like, well, what's another one? Because there's all this concern about whether teachers are really going to have a role now. What will students do when AI can write their five-paragraph essays? And what are we going to be teaching them?
I think that teaching will still be, and continues to be, a case where a machine's job is taken by humans. I don't think machines will really rule that or own that.
There's always going to be a need for teachers. I was thinking about this on LinkedIn. I remembered a lecture by Lawrence Lessig, where he was talking about John Philip Sousa bemoaning the educational threat posed by the phonograph.
Sure.
So, here's this famous composer. He wrote this long screed about how children will become indifferent to practice, because if you can hear music in the home without the labor of study, then it's just going to be a question of time before we lose teachers, or music will just be all shot and gone. And, well, we know how that turned out.
Yeah.
And it reminds me of something we were talking about today, you and I, before we started recording: that these advances in technology help masters get better at what they do. And so, when Deep Blue beat Kasparov at chess, we didn't all give up chess.
There was a wonderful webinar yesterday by the Frankfurt International School, where their International Baccalaureate program convened students to talk about the promises and pitfalls of ChatGPT, and one of the students on the panel quipped, "Well, did the agricultural revolution make people more lazy?" Obviously, no. But that revolution changed all kinds of things. Lots of new tools, lots of new advances in technology that were going to save time and reduce human labor, much like ChatGPT does. And yet, I think we agreed that was an advancement that was helpful. So, I think it's interesting to talk about whether or not machines can help us be more human.
I think there's an opportunity, depending on how educators decide to apply them.
[00:05:08] Jason Johnston: Yeah. And one of the challenges you put out there, as I've heard you talking about it, is about Grammarly, right? When somebody says, "Well, I'm against using AI in education," well, do you use Grammarly?
And a lot of people do. Grammarly, for those who don't know, is really kind of an upgrade of, you know how Microsoft Word has a grammar check and it's okay on certain things?
Grammarly is like an updated, really kind of AI-driven version of that. Do you know if it really is AI-driven? I mean, it doesn't...
I don't know.
It doesn't feel like it totally learns from me, although I can make it.
[00:05:45] John Nash: It may be. I'm not a computer programmer or a computer scientist, so forgive me if I'm brutalizing the field, but I think it uses brute force. It knows all the rules, and then it can see where you may have broken or skirted a rule and suggest how that could be redone. So, it knows how to take novel sentences you write and recast them correctly. But I don't know how it does it.
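As a toy illustration of that "it knows all the rules" idea John is gesturing at (and emphatically not how Grammarly actually works), a purely rule-based checker might look something like this sketch:

```python
# A toy, purely rule-based checker illustrating the "knows all the rules"
# approach described above. Illustration only; not Grammarly's actual method.
import re

RULES = [
    (re.compile(r"\b(\w+)\s+\1\b", re.IGNORECASE), "Repeated word"),
    (re.compile(r"\balot\b", re.IGNORECASE), "'alot' should be 'a lot'"),
    (re.compile(r"\bcould of\b", re.IGNORECASE), "'could of' should be 'could have'"),
]

def check(text):
    """Return a list of rule violations found in the text."""
    issues = []
    for pattern, message in RULES:
        for match in pattern.finditer(text):
            issues.append(f"{message}: '{match.group(0)}'")
    return issues

print(check("I could of written alot more, but the the deadline passed."))
# Flags 'could of', 'alot', and the repeated 'the'.
```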
[00:06:14] Jason Johnston: Yeah. So, in that sense, it's using algorithms to correct human work and to guide us, hopefully to improve it. One might say, if you are communicating more effectively, would that be more human? Are the mistakes more human? Is Grammarly taking away our humanness in our work? I mean...
[00:06:43] Jason Johnston: Some of these things are things we might find, or might not find, to our chagrin later on, right before we turn in the paper or the manuscript or whatever, and it's like, "Oh man, I can't believe I spelled that wrong, or I used the wrong tense," or whatever.
And so, I haven't really thought this part of it through, but what if ChatGPT in education is just taking care of stuff that we need to take care of anyways, kind of like a washing machine? You know, I really don't want to be washing stuff by hand.
I'm sure there are great things about it. I would probably have more muscles and all the things, and it would be better for the environment or whatever, but I just don't want to be doing that. I use a washing machine. These are things that I'm just going to take care of. And it really hasn't significantly changed my life.
And I don't know if that's a good analogy or not, but is AI that helps correct us making us less human?
[00:07:43] John Nash: That's a great question. The classic professor answer is "it depends," but I think I lean towards saying yes, it can. Another student on this webinar from the Frankfurt International School commented that when she uses Grammarly before she turns in her papers, she learns more about grammar.
I mean, she now knows how to write that sentence that was corrected for her more correctly in the future. So, hopefully you see fewer errors from Grammarly over time. And then, does that make her more human? Well, I mean, she's theoretically able to communicate better and express herself better.
[00:08:27] Jason Johnston: Yeah. And I've found the same thing, I think, with Grammarly, you know?
I have a little better eye open to the ways in which I write that aren't as correct as they should be, right? And I think it has helped me; I've used it for a number of years, all through my dissertation and everything, so.
Yeah. Yeah.
[00:08:47] John Nash: Another thing that struck me is where do we draw the line on authorship and original work?
Right.
Because a person I know told me recently that they saw a news piece somewhere that an AI conference, or some journal, would outright reject submissions or proposals if they were not original work or the person's own work. And that got me thinking: well, with ChatGPT, the responses that it gives are the result of a prompt written by a human.
Yeah. And if you subscribe to the garbage-in, garbage-out philosophy, crafting good prompts for language models like ChatGPT requires very careful planning and consideration. I've learned this just from playing around with it. And those prompts can, and probably should, be considered original work.
They were composed by you, as a human being. So, therefore, could the response from the language model software, based on a human-engineered prompt, be considered original work?
Right.
I know school heads, deans, principals, and university presidents who have human staff at arm's length who write for them.
[00:09:59] John Nash: They write their speeches, their commentary, and those leaders present it as authored by themselves. They aren't pressed to give credit, you know, not now. But all of a sudden, with ChatGPT, we're supposed to say something about what the prompts were. Am I supposed to now say what prompt I gave to my assistant to write a press release?
[00:10:19] Jason Johnston: Right? Yeah. These are great questions. If you came up with a prompt for whatever it is you want to talk about, and ChatGPT produces it, I'm going to assume, and maybe this is where a line could be drawn, that anybody using ChatGPT in academia, for the most part, is going to be massaging that output on the other side, because it's not a perfect output, at least at this point.
That's right.
It's not a perfect output, and I don't know that it ever will be. It's often grammatically perfect, actually, for the most part; I don't think I've found anything wrong there. But things are just a little off sometimes, you know what I mean? And so, it really does take a rehumanizing of it.
So, if I put in a prompt and it spits something out, I rehumanize it. I don't know. Copyright has so much to do with who we are taking it away from, right? When we claim something to be our own, there does not seem to be any concern on ChatGPT's side of things.
Maybe the AI will rise up at some point and say, "We must have our citation." That will be the great uprising of 2024, maybe: the AI wanting proper citations.
[00:11:41] John Nash: I saw someone in the subreddit for Midjourney do that: they had a robots' rights demonstration in the streets. Oh, right. Yeah.
Like, yeah, I think that was it: AI deserves to be cited. Sorry, you were going to say?
[00:11:52] Jason Johnston: I know. So, maybe that will happen at some point. But right now, because it's just a tool that is helping you develop something, it doesn't feel like copyright infringement. You have humanized it. You're not misrepresenting yourself.
Not if you're prompting and then massaging on the other side. If it's an original prompt, I don't know, I don't see where you would need to cite it.
[00:12:20] John Nash: But right. There have been cases where people have violated copyright by taking others' work, recasting it in ChatGPT, and then publishing it as their own.
That's not cool. Oh, sure.
[00:12:33] Jason Johnston: Yeah. They know they can rearrange a few sentences or something like that. Yeah. And get around the plagiarism or whatever.
So, but if I take,
[00:12:40] John Nash: Nicholas Kristof's essay in the New York Times, and I ask ChatGPT to turn it into a BuzzFeed listicle, and I cite Kristof, I'm probably doing something that a lot of people already do by hand. Not that I would go do that, but I guess that's kosher,
[00:13:01] Jason Johnston: right? I guess we have to put out all our disclaimers: no AI, just letting everybody know, to put you at rest, that we are human beings. I can see John right now across the Zoom screen, and no AI has been used in the writing or the prompting of this episode. Yes. I kind of wonder,
[00:13:23] John Nash: nor do we endorse cruelty to other people's work through AI.
[00:13:28] Jason Johnston: Absolutely. Yeah. Whether or not one can inflict cruelty on AI is a topic for another discussion.
We'll need to know more, I think, before getting into that one. Probably next year. So,
The Hubbub
[00:13:43] Jason Johnston: My question is: so, you're a longtime user of ChatGPT, right?
Yes. Eight weeks.
Yeah, and the amazing thing is that in those eight weeks, I have not experienced, I think since I was a kid, this much hubbub around something in educational circles. It feels like even the internet didn't stir up this much.
I think the last one, if you're talking about disruptive technologies, was television. You know, the phonograph, then TV. When I was a kid, there was some concern by teachers that they were going to get replaced by televisions, like just talking heads at the front of the room.
Right. Which is really sad to think, especially looking back, that teachers thought they could be replaced by a static video, and that they thought their whole role in the classroom was just to spit out information in a unilateral direction.
[00:14:50] John Nash: Teachers have been consistently resilient over the century when innovation has struck the classroom, whether it's the chalkboard or the phonograph or the television. I see most of the comments running along the lines of, "wow, this seems different. How is this going to get integrated into curricular work?" And people say, "well, we figured out how to do that with the calculator."
Yeah. That's right.
And then, you know, we've long had a problem with cheating. What really is most concerning to people with the advent of ChatGPT centers around the writing of essays by others for students, the contract cheating, they call it. I read, and it's several years old now, that apparently there's a cottage industry of contract cheating in Kenya, with tens of thousands of people employed to write essays for British, American, and other English-speaking students. So, there's always been opportunity for people to go out and get something written for them that they didn't write.
That's right.
And my thought is that this is now the opportunity for teachers to begin to think about how they should really assess. I mean, I think writing teachers believe that good writing is good on its own. I kind of do too. I believe that being able to express yourself well in writing is a good thing.
And, gosh, I want to be careful here. We've gotten away from the GRE, and we use writing samples for entry to our doctoral program, because I think they say something about how someone's able to think on their feet. But I don't want to put too much value on that, as if to say you're a better person for it.
[00:16:35] Jason Johnston: You're looking for some specific skills to be able to do a doctoral program. One of those is to be able to think deeply about things.
[00:16:43] John Nash: You have to think deeply,
[00:16:43] Jason Johnston: at least in a direction, to have some ideas. It's going to be a long way to get them through a dissertation if they can't coherently put a few of those ideas down on paper, right?
[00:16:54] John Nash: Yeah. Well, yes. And I think the interesting thing we want to see in doctoral programs is one's ability to have a thoughtful conversation around a challenge they're presented with. And that's not demonstrated by your ability to ask ChatGPT to spit that out in writing, because if you can't talk about the ideas that are on that paper, then you're not going to succeed in the program. And that's not the kind of learning outcome we want to have in the program.
[00:17:21] Jason Johnston: So, I was curious about your eight weeks, right when this all started, about the disruption. In eight weeks, I have not experienced in higher education the kind of hubbub that there is right now about ChatGPT. We're talking about it right now, and there's just a lot happening on LinkedIn and other social media platforms, as well as at my own school, and probably the same at yours.
We're talking about focus groups and talk-back sessions and policies and all the things, right? In eight weeks. So, in your eight weeks, though, in this very short time, how has your usage of ChatGPT changed?
[00:18:00] John Nash: I think it's changed. Well, it's evolved, let me put it that way. It's changed a lot. Eight weeks ago, well, I'll use an example with you and me: we were thinking about this very podcast, and I thought, oh, let's ask ChatGPT what our podcast should be called.
I gave it a synopsis of what we thought we wanted to accomplish, and it spit out 30 names, none of which either of us fell in love with. But it did it. And I thought, "well, this is neat. It'll churn out tweets, it'll make lists of ideas, it'll do brainstorming." And so, I used it a lot like that.
And then last week, actually, no, it was this week, I read a Substack post by Ethan Mollick, who had done an experiment using ChatGPT to teach negotiating skills by giving it a prompt and telling it that he wants to do deliberate practice on a particular skill: I'm asking you, ChatGPT, to be the teacher, and you're going to do this and that.
And so, I thought, well, that's interesting. And he had a pretty successful result. So, I replicated his experiment by asking ChatGPT to teach me empathetic conversation skills. So here we come full circle to your question: can machines teach us to be more human? Well, I asked ChatGPT to make me more empathetic.
And the reason I asked it this is because that's a skill that I teach in my design thinking course. I want students, in this first phase of a design cycle to begin to get to know the people they're designing for. And we do this chiefly through great conversation. But those conversations have to be structured in a way that you're really lifting out the unmet needs of the other person, the kinds of pains they're going through with a particular challenge and really listening and hearing that.
And mostly the students learn by doing. So those first early conversations they have with people involved in our projects are more prone to early novice mistakes or things that might not be picked up on. It's tricky because, say someone says, "well, that just costs too much."
If you say, "well, that's interesting," and write that down, but you don't follow up and ask whether they mean it costs too much in time or money or other sorts of resources, you miss it. So, there are these things that in normal conversation we might pleasantly respond to with "oh, that's nice," but in these conversations you've got to really listen and follow up and define what's going on.
And so, we learn from experience. But in this scenario, I was able to ask ChatGPT to kind of call me out on that. And I think there's more to do with the prompt, but this is a new level of use for me: going from it turning me into a list maker to using it to create prompts that my students can use to train on key skills I want them to have before they go out and do them live.
That's a game changer to me, and I'm able to do that with ChatGPT in its current incarnation, on a free beta version, the same beta version I presume we had pretty much eight weeks ago when I was just making lists. So, yeah, I'm kind of blown away by it and by the promise of what will happen next.
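(For listeners who want to try the deliberate-practice pattern John describes, here is a minimal sketch of what such a prompt might look like when sent through the OpenAI Python client. The wording, scenario, and model name are illustrative assumptions, not the prompt John or Ethan Mollick actually used.)

```python
# A sketch of a "deliberate practice" tutor prompt, assuming the OpenAI
# Python client (openai>=1.0) and an API key in the environment. The prompt
# text and model name are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are a coach teaching empathetic interviewing through deliberate practice. "
    "Play the role of a stakeholder describing a problem. After each of my replies, "
    "point out where I missed a chance to probe an unmet need (for example, letting "
    "a vague phrase like 'it just costs too much' go unexplored), then continue the role play."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat-capable model would work
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Let's begin. You are a principal frustrated by a new schedule."},
    ],
)

print(response.choices[0].message.content)
```

The design choice, following the framing John describes, is that the system message tells the model both to role-play and to critique, so the feedback arrives in the middle of practice rather than after it.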
[00:21:18] Jason Johnston: That's interesting. Yeah. And that's a bit of a shift, I would say, in eight weeks. You know, frankly, I started by seeing what it could do from the standpoint of my fears and concerns. I've got a couple of kids in high school, and so maybe my first significant ChatGPT prompt was, you know, compare the book Crime and Punishment to modern current events.
And I had a very quick lesson right there, because all it did was talk about crime and punishment happening in modern events. Not the book, right?
[00:22:00] John Nash: Yes.
[00:22:01] Jason Johnston: So, I had to go back and clarify that it was the book. What came out was very stilted, so I added another instruction: do this in a high school style.
It shifted the language just a little bit. That was my first experience with it, and it was interesting just to see what it could do, how very much it is that garbage-in, garbage-out kind of thing, how human-directed it really is. But reflecting back, it's interesting that I went right to my fear, which is that the second my kids know about this, it's going to be a temptation for them to use it rather than learning how to write, because they're right at that stage.
They should be learning how to write a five-paragraph essay and really perfecting that on some level, at least from my standpoint of, you know, a solid education. And it went right to my fear: oh man, the second my son knows about this, he's going to be cranking out essays with ChatGPT.
Right, right. And then I was upset with myself, because I didn't talk to my kids about it, right? It was like this secret power that I didn't want them to know about, so I didn't bring it up. And then I was a little upset that the school put out a decree banning ChatGPT, and I thought, oh man, the gig's up.
You know, now all the kids know about it. And they probably did before, but immediately, I'm sure, all the kids started to look it up and find out what it could do for them and so on. Right? So,
[00:23:28] John Nash: yeah. Yeah. The surest way to get someone to use something is to ban it.
[00:23:32] Jason Johnston: Yeah. Yeah, exactly. And then I was putting in a proposal, actually, for us and a couple of other people to be on a panel about ChatGPT, and I thought, wouldn't it be interesting to put in the proposal that ChatGPT gives us? That would make a great story. I had to massage the prompt a little bit to get it to where I wanted it to go, but it actually put out a not-bad proposal. In the end, I used my own. I wrote mine first, because I didn't want the witness to be led too much, and then I did the ChatGPT one. And as I was looking at it, I thought, oh yeah, that's one good thing to include that I hadn't, for clarity about something or whatever.
Yeah. You know, some way of wrapping up the last paragraph, putting a little bow on it, kind of thing. I don't think I copied and pasted it, but I used that idea in the last paragraph. So that's how I used it there. Then I had to write a script for an intro about how faculty develop online courses, and there I used it more on the front end. Rather than starting with a script myself, I started with ChatGPT. I gave it the prompt about writing a script for this and that: these are the main points the course is about, write a script for me.
It's interesting because it put in scene changes and so on as well. And it gave me a bit of a frame to start with. I hardly used anything from its script, but it gave me a five-part frame to start from, almost like a Word template or something, that I then went in and used to create my own script.
[00:25:11] John Nash: Those are great examples, and I think they're great because they exemplify where I think ChatGPT can be very useful, which is in two areas. One is advancing one's ability to think about content on which they are already an expert.
So, for instance, I'm looking for ways to train students on how to be better empathetic interviewers. I have enough experience and knowledge in that area to know if ChatGPT is on the right track and whether I should share it with my students. I wouldn't have it write an essay to make me look smart about, you know, A Tale of Two Cities, because I wouldn't know if what it put out there was any good.
Yeah. And the other area where I think it's great, which was in your example, is that it's a really good partner in extending your thinking, in brainstorming things where you have an idea but you're stuck on what you ought to do with that idea. It does some conceptual blockbusting, I think.
And I played around with this idea with some of my dissertation students, who do contextually bound, sometimes politically fraught, work to make a change in their organization. Sometimes it's a school, sometimes it's a nonprofit or a college. And they know what they want to do, but they're stuck on how to approach that with their stakeholders or what kinds of initial questions they should ask.
And we played with ChatGPT so that it gave them some confidence that they're on the right track about what kinds of questions they might ask. They give it a scenario and then ask it: what sorts of things would I say to a stakeholder about this problem? It'll spit out some ideas.
It'll spit out seven ideas, and one of them sparks an idea that's realistic for them. The other six are, oh, that won't work here. But that gets them out of being stuck.
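(A sketch of the brainstorming pattern John describes with his dissertation students, assuming the same OpenAI Python client as the earlier example. The function name and scenario text are made-up placeholders, not anything used on the show.)

```python
# A sketch of the "give it a scenario, ask for stakeholder questions" pattern.
# Assumes openai>=1.0 and an API key in the environment; the scenario below
# is an invented placeholder.
from openai import OpenAI

client = OpenAI()

def stakeholder_questions(scenario: str, n: int = 7) -> str:
    """Ask for n candidate questions to open a conversation with stakeholders."""
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {
                "role": "user",
                "content": (
                    f"Here is my change-initiative scenario:\n{scenario}\n\n"
                    f"Suggest {n} initial questions I could ask stakeholders. "
                    "Keep them open-ended and non-leading."
                ),
            },
        ],
    )
    return reply.choices[0].message.content

print(stakeholder_questions(
    "I lead curriculum at a small rural district and want to pilot "
    "project-based assessment, but the school board is skeptical."
))
```

The point is not to adopt all of the suggestions; as John notes, one realistic idea out of seven is often enough to get someone unstuck.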
[00:27:10] Jason Johnston: Yeah. I love the idea of helping to get people unstuck when it comes to writing. There's something about that blank page that is so daunting.
And there are a zillion other things I'd rather think about than the thing I have to write on that blank page. And sometimes just a little prompting to get you going in a direction would be helpful.
Now, this is getting maybe a little meta, but one: are we losing something from our humanity when we don't go through the struggle of staring at that blank page? And two: do we miss out on the thing we would've gotten to in 30 minutes of staring at the blank page? Because I'm impatient now that it can do things so quickly. So, I'm going to give myself two minutes to think about this, and if I don't have a good idea, I'm just going to jump into ChatGPT and see what it has to say about it.
[00:28:15] John Nash: That's a great question. Because of what you just said, I've noticed I have been thinking in this direction: I'm trying to be more thoughtful about when I want to use it. If it's for what I'll just call drudgery, the perfunctory, transactional things we have to write up as a director of something or a leader of something, summarizing points from an email, things like that, it's pretty good at those, and they don't carry a high intellectual cost or a risk, on the back end, of being called out for, you know, cheating or plagiarizing or whatever.
[00:28:48] Jason Johnston: Right, right. Your proverbial real estate ads.
[00:28:51] John Nash: Yeah. It's my little assistant who writes real estate ads and little emails and summaries of things.
Great. But when I look at some of the reflections I've put on LinkedIn lately, or other things like that, when I've been in the zone and really trying to make a point, that feels really good. I like that feeling. ChatGPT can't do that, because it's so formulaic in a lot of its writing; it wouldn't touch it the way I do.
I guess that's coming, I know. Now I can say, write this post like Seth Godin, and I heard someone say it does a pretty good David Sedaris. I haven't tried that yet.
[00:29:27] Jason Johnston: Oh boy. Okay. Well, I know what I'll be doing after this conversation. Yes.
Yeah.
Summarizing, I'll be writing my next standup routine. Yeah.
[00:29:34] John Nash: Or summarizing your experience with me on this podcast, except as if David Sedaris said it, right? So, I'm trying to be thoughtful about where it's useful.
It is fun for me to play with. I did write something recently. I hate writing abstracts, introductions, and conclusions; I think we all do. And I've tried it: I finish the body of something and say, write a conclusion for this. And I was like, "eh," it's okay. Actually, I became motivated to write my own conclusion because the first one was so crummy; it was even worse than what I could do or wanted to do. I guess I can write good conclusions, but I don't know why we all hate doing those. I don't want to put you in that boat, but I know.
[00:30:15] Jason Johnston: No, I know. I'm with you. I think it's hard. It is a bit of a drudgery, especially abstracts.
And there are formulas that obviously make it easier, and so on, that you can go through, but yeah, it's a hard practice. It did get me thinking, though. Okay, so I don't know if you heard about this or not, but there is a 24/7 nonstop AI-generated Seinfeld show going on right now.
No, I had no idea.
And it looks old, worse than Minecraft, like old, blocky kind of characters going around and doing some really random things. But it's completely original as it's being played, and somehow it learned the sense of Seinfeld. If you go check it out, it's kind of creepy in some ways, because it has a sense of Seinfeld without any actual humanness to it.
It has no idea what is funny and what is not funny, but it has created this sense of how Seinfeld flows. It has a sense of jokes without being funny at all, and it's almost just a little creepy. But I thought about that with ChatGPT: so far, anyways, it has a sense of humanness, but it is still missing that element, that human touch.
Yes. At this point.
[00:31:46] John Nash: Yes.
[00:31:48] Jason Johnston: And then on the other side of things, part of what I think gives things a human touch is that there's obviously a human behind it, but a human who didn't just spit it out as it was happening. The human behind whatever it is, the great David Sedaris monologue or a great Seinfeld bit, struggled with it. They didn't just stick it out there and call it done. They crafted it, they worked on it, they probably practiced it, they figured it out on people.
They figured out timing. They have a sense of all these kinds of things. And somehow it was human, not just because of the human, but because of the humanness that it took to get to the great goal at the end. Yes.
[00:32:35] John Nash: Yeah. I think of, you know, I saw some film of Chris Rock with his stack of index cards, and he's out in the small clubs testing out the new stuff. Comics are their own worst enemy; they're merciless on themselves in getting things right, because they know it has to land. So that's really interesting.
[00:32:58] Jason Johnston: Well, you should check it out anyway. There's something interesting about it: you want to keep watching to see what's going to happen next, is the thing.
Right. But there's nothing enjoyable about it, like watching a Seinfeld episode, you know what I mean?
[00:33:16] John Nash: You mean when it was Seinfeld?
[00:33:18] Jason Johnston: Yeah, the actual Seinfeld episodes versus this. They're two completely different experiences, obviously. Yeah.
Anyways.
Well, we should probably wrap this conversation up a little bit here. Anything else left to be said on this? Or have we said everything there is to say eight weeks into,
[00:33:38] John Nash: eight weeks into ChatGPT. And, you know, I think it'd be nice if we could start to think about how to put this in the context of our interest in humanizing online ed.
And I think it all points in that direction: how this can help you, me, and our colleagues who are interested in teaching in online spaces advance a good human experience for our learners. That's what I'm most interested in.
[00:34:08] Jason Johnston: Yeah. It would be an interesting question whether AI could be a help or a hindrance to actually humanizing online education, and what people think in terms of their own experience so far. Yeah.
[00:34:20] John Nash: Yeah. And are you afraid of what AI can do or are you embracing what AI can do? Yeah. That would be interesting to know.
[00:34:27] Jason Johnston: Yeah. And I think that's really one of the big questions right now. It feels like a fair bit of the last eight weeks has been driven by fear, I would say, about what it could do. It feels a little bit like the phonograph experience,
[00:34:39] John Nash: yeah,
[00:34:39] Jason Johnston: and we've talked briefly about the television as it came in as well. The internet probably had some of this too, or maybe I just wasn't in the conversations, but there was probably concern. And now with ChatGPT,
[00:34:53] John Nash: yeah, I think, well, in 75 years all of this hand-wringing will look laughable, just as we look back on the changes that happened 75 years ago from today.
[00:35:06] Jason Johnston: Yeah. If we are laughing at the AI's horrible jokes as we watch their AI shows, right? As we grovel at their feet, maybe. So, we'll see. It could go a couple of different directions, John. You know this.
[00:35:21] John Nash: That's true.
[00:35:22] Jason Johnston: I've really enjoyed chatting with you today, and I look forward to having more conversations. I think if nothing else, this conversation about ChatGPT helps us think about what we are about, you know, as universities, as educators. And so that's why I just love the conversation; wherever this is going, it's a good conversation.
Just to talk about, you know, if the five-paragraph essay is the litmus test for whether or not my high school student is educated, then I'm looking in the wrong place, because it's just the wrong assessment for a full education, right? And so, I think that's what's helped me, and hopefully it will help other people as this is coming out: it's sparking conversation, and I think it's exciting just from that standpoint, even if we do get taken over by robots.
[00:36:10] John Nash: Yeah, I agree. It's also helped me reflect on what I value as a teacher, what I want my students to learn, and how I want them to learn it. I think that's been part of the journey for me too.
[00:36:24] Jason Johnston: Yeah, that's good. John, thanks so much for this conversation. This was great.
[00:36:28] John Nash: Yeah, absolutely. This was a lot of fun. And I want to make sure that folks listening out there have an opportunity to continue the conversation with us when we're not talking here. A good place to start doing that is over on LinkedIn, in our LinkedIn group called Online Learning Podcast.
Join our group and tell us what you want to talk about.
[00:36:49] Jason Johnston: Absolutely. And you can always find this podcast and our show notes at onlinelearningpodcast.com. That's onlinelearningpodcast.com. I still can't believe we've got that URL, John, so that's pretty cool, and I hope people visit us there.
[00:37:05] John Nash: Yeah, it's a good one. So, bye for now.
[00:37:08] Jason Johnston: Bye for now.

Friday Mar 10, 2023
Friday Mar 10, 2023
In this episode, Jason and John talk about what this podcast is about. It’s not a show about nothing. We hope.
Join Our LinkedIn Group - Online Learning Podcast
https://www.linkedin.com/groups/14199494/
Readings
Boyd, D. (2016). What would Paulo Freire think of Blackboard: Critical pedagogy in an age of online learning. The International Journal of Critical Pedagogy, 7(1).
Freire, P. (2005). Pedagogy of the oppressed (30th Anniversary). Continuum.
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
S1E2 - What is this Podcast About?
[00:00:00] Jason Johnston: Do you hear the squirrel complaining outside my window?
[00:00:02] John Nash: No.
[00:00:03] Jason Johnston: This is what happens.
I've got the cat scratching at my door. I've got a squirrel complaining outside my window.
Get rid of the squirrel. Let the cat in.
Welcome to podcasting,
Music Intro
[00:00:21] John Nash: Hey everyone, I'm John Nash and I'm here with Jason Johnston.
[00:00:24] Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning Podcast.
[00:00:29] John Nash: Hey. We're doing this podcast to let you in on a conversation we've been having for the last two years about online education. So, look, online learning's had its chance to be great, and some of it is no doubt, but there's also a long way to go.
So how are we going to get this to the next stage?
[00:00:47] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:00:51] John Nash: Perfect. You know what? Let's talk about this today; I've been thinking about it. Jason, what is this podcast about? That's what I've been thinking about, because we are two people who have been neck-deep in the workings, the process, the enterprise of teaching and learning online, particularly in higher education.
And we come to it with a bit of a critical eye, because we've been doing the work and we see opportunities for being better. And yet, in the run-up to us talking about this, I find myself just sounding like a curmudgeon. So, I'm trying to figure that out, and maybe that's what our entering conversation here is about: where we're seeing opportunities for improvement, hopefully.
And then, to the extent that anybody's out there listening and wants to guide us on where the conversation should go or where the winds are blowing, we could do that. But I'm wondering, for you, what do you think our overarching conversation is about here?
[00:01:55] Jason Johnston: I love the idea of having a section that just starts with kids today dot, dot, dot, and we just complain about what kids today are doing.
[00:02:05] John Nash: I feel like I need it. I don't know.
[00:02:07] Jason Johnston: And I think for me that's a great lead-in to that idea of here we are, peering into the second half of our own personal lives, but also thinking about what's going on in the online space and higher education, and how post-ish we are, looking at our education laid out for us into the next decade or decades and wondering what's going to happen next.
And what can we be aspirational about? What can we look back on and be proud of? What have we come to at this point, and where would we like to see things go if we had the choice and power to help steer this ship?
[00:02:51] John Nash: Yeah, I think that's part of it. And we've both been doing some reading of the works of James Hollis, and I was looking, early on in his book Living an Examined Life,
at where he talks about a familiar proverb in Japan, which declares that it is the protruding nail that gets hammered. And yeah. So, I've just been thinking a lot about all the protruding nails.
[00:03:12] Jason Johnston: So, what do you think about first? When you think about protruding nails, what are the first things that pop up?
[00:03:18] John Nash: I think that coming out of the pandemic, one of the protruding nails is the vast number of educators out there who were thrown into online learning and who are maybe in a position, or put in a position rather, to stick with it, either by their institution or because they think they want to. But the nail that's protruding, that needs to be smoothed out, hammered down, is the professional development needed to really do well while teaching online.
I think a bit of a myth has come around that it's simple to take your curriculum over. I don't think it is, and I think we've got a lot of colleagues out there who could be supported.
[00:03:57] Jason Johnston: Do you think, post-pandemic, or "ish" (I keep saying post-pandemic and then I catch myself, so I'm throwing "ish" on the end: post-pandemic-ish), that instructors and faculty who were thrown into online are feeling more confident about it, that they feel like they know what they're doing and so don't perhaps feel like they need that professional development? Or do you think they're coming to some awareness of, oh, that was interesting, we can do some things online,
we can throw things remote, but now I've been opened up to this world and realize that I need a lot more work than I thought?
[00:04:35] John Nash: I think there are a few buckets there. I was remembering our notes from past conversations you and I have had, and we had one down here that said emergency remote learning is not a mature online experience for learners.
So, I think we've got folks who are pining to get back into the classroom, who are more comfortable there. I think there are those who had an interesting experience or a good experience and would like to stay there, because they found the flexibility of the platform appealing, but maybe weren't satisfied with the way the teaching and learning transferred from the former face-to-face setting.
I think I said I had three, and then there are those who just did not have a good experience and probably are wondering whether they can maintain quality there if they don't have a place to go back to, if they're not going back into the classroom.
[00:05:25] Jason Johnston: Yeah. And as we think about the second half of online learning's life, different from our own human lives, there is not necessarily a foreseeable end to this.
We can't say that in 40 years we'll be laying this entity in the ground and saying our goodbyes. It's difficult to understand, and even believe, that right now we're at this kind of real turning point, one that has been building for the last 25 years maybe, and has now opened up in a new way, even over the last couple of years, such that it's hard to imagine a future without online learning, at least a future where humans are existing,
if we want to get that far into it, without some level of online learning.
[00:06:16] John Nash: No, it's very hard to imagine that. We've taken the convenient handle of the second half of online learning's life from Hollis's work, but we may be in the first one-tenth of one percent of its life. But I think there are always going to be inflection points, and reflection points, where we can think about how it can improve.
[00:06:38] Jason Johnston: and depending on who you're talking to, it feels a little bit like we're at a turning point, a midlife crisis on some levels of thinking about online learning. But it may very well be that all we've seen is adolescence so far,
and now we've hit our first crisis, and we are thinking about what the next thing is.
[00:06:59] John Nash: Yeah. And so, as you talk about that first crisis, like going into the pandemic, I think particularly about the P-12 space and how they definitely threw themselves into a panic mode of using online learning, for good or ill.
I was thinking about how much progress has been made in improving online learning in, say, the 10 years up to the pandemic, and how the platforms have changed. Blackboard was the leader, and then Canvas came in and did some disruption.
But I still find that most of what we do is confined by the constraints and affordances of those platforms. And so, I'm wondering what the next thing will be. Canvas, for all its improvements over Blackboard, in my personal opinion, is still a fairly didactic, module-driven, structured platform that lends itself well to someone who teaches in a module-based, didactic fashion.
So, it lends itself well to shovelware. It also lends itself well to someone who has aspirations to create a more constructivist approach with maybe some participatory things, but that's not so obvious out of the box to someone who's just coming to use online learning as part of their daily toolkit.
[00:08:18] Jason Johnston: Yeah. In recent years, as I've been trying to reflect a little bit more on online learning and the student experience and, as you said, shovelware, I've been thinking about some of the critical approaches to education, specifically Paulo Freire, and his idea, from well before online learning, that too often we use this banking model of education, where we imagine the empty minds of our students and make one-way deposits into their brains.
And I'm afraid that our online learning platforms have guided us there. Technology is not neutral. Do you agree with me that technology is not neutral?
[00:09:01] John Nash: Yes, I agree with you.
[00:09:01] Jason Johnston: And I'm afraid that the way our technology is set up has almost guided us to recreate this again, a way that we can just
make these one-way deposits into our students who are logging on.
And I think there's other technology, though, at the same time, that recognizes this and is trying to do things to mix it up a little bit. But
I think it's a difficulty. So, I hear you talking about that technology, which is probably another protruding nail, right?
[00:09:31] John Nash: And you're right, because I think it's easy to be lulled into a false sense of complacency, actually a false sense of advancement,
[00:09:40] Jason Johnston: yeah.
[00:09:40] John Nash: just by realizing how easily we settled for poor standards with Blackboard and other platforms. I remember somebody using a term like "scaling up direct instruction." So, we've just got a better way to mechanize the efficiency of the learning.
I think Canvas does that well. I like using Canvas, but if we're really honest about it, you're right, technology is not neutral. And what does it allow me to do really well? It allows me to port last year's class into this year's class and repeat the stuff. And as we think about trying to humanize education further and let technology and online capabilities be a resource for that, yeah, I don't know. We're not there.
[00:10:29] Jason Johnston: No, we're not. We have really perfected the shovelware, the banking model, if we choose to use that term.
And I think there are a few people trying to break out of that, some of the new innovators. But, as much as I also like Canvas, they're not there anymore. They're not in an innovation stage right now; they're in a stage of continuing what they've already done. And so, my guess is we're probably not going to see a lot of innovation from Canvas moving forward, because they're going to have to really support and cater to the status quo that we've already created with it.
[00:11:06] John Nash: Yeah. So, I think we have some hurdles in front of us. There's an overarching desire, I know at least on your part and mine, to humanize, maybe even democratize, the educational experiences that occur online by involving the learner more in the design of those experiences and making sure they're aligned with what learners want in their own lives.
And that in and of itself is a challenge, even if we're not teaching online. To have a professor or an instructor or lecturer or teacher do that in their own face-to-face classroom is sometimes antithetical to their own notions of how teaching ought to be done: the power differentials, the politics of teaching, involving students in the discussions of their destiny as learners.
And then add the online component to that. So, you've got professional development, bringing folks along to think about how they can be more humanistic and more of a co-designer with learners of that experience. And then, to what extent can that be done with the affordances of the tools we have that let us do it online?
Yeah. There are two nails that are protruding.
[00:12:17] Jason Johnston: So, what do you think from an online professor's standpoint, if you could blank slate it and you're coming into this for the first time and you're going to teach online, what would some of your aspirations be?
[00:12:30] John Nash: I don't know why that question is so difficult for me to answer. That's interesting. I didn't expect that.
I think maybe I've become so accustomed to the tools that have been handed to me that I haven't thought about it. I'm constantly thinking about how to innovate inside that box. So, let's talk about Canvas again. I'm thinking about ways to do assessments that don't involve a multiple-choice test.
I'm thinking about ways to provide video and other non-didactic material, not publishing a PowerPoint and having them read that and figure something out from it, avoiding straight-up lectures. And so, what would it be? Maybe I would ditch the platform altogether and
go in the direction we were just talking about, what we wanted for ourselves as learners: how would I create a learning community around an important outcome I wanted students to attain in the course, and then backward-map out of that to think about materials and topics? I think there would probably be synchronous components to it, maybe more than we typically think about.
And they would be on Zoom or some other video platform where we could commune and talk and solve problems together. I think there are ways in which one can have enriching asynchronous threaded discussions, but ultimately it's that discussion in the moment
that I think can really drive a conversation to a point where new outcomes can be attained, new solutions that you wouldn't otherwise get.
[00:14:10] Jason Johnston: I wonder if professors would be in the same boat of not necessarily knowing what's good for them. I know that would be hard for some professors to admit, but I feel like what instructors maybe want is also convenience when it comes to teaching online. They like the flexibility as much as the students do. Asynchronous is super convenient, because it means I can take my dog to the vet this afternoon and I don't have to be stuck in class, and then I can come back to it later and respond to people or whatever.
But I wonder about that personal satisfaction of being part of a learning community, that it's not just about the students, but about professors and instructors having a sense of satisfaction and a desire to really be learning alongside the students, connecting with the students. Maybe we need to just start planning on more of that synchronous time to really make that connection, even though it's inconvenient to have an hour a week to plan it out, which sounds ridiculous as I'm saying it. What do you think?
[00:15:19] John Nash: I think that's interesting, because what I'm realizing now as we're talking is that part of what I've come to appreciate in, say, a 16-week course that I teach at my institution is that I'm interested in understanding the student's trajectory of growth from the beginning to the end: how they've changed, how their thinking may have changed, how they've attained the key outcomes, knowledge, and skills we're interested in having them attain.
And I like learning about that trajectory from them themselves, in a series of conversations. In an asynchronous course, one might attempt to get that by having a reflective three-, four-, or five-pager at the end: tell me what you've learned in this course and what resources you relied upon to come to those conclusions.
That's fine, and that tells me a little bit, but it really doesn't tell me how you changed as a person, how you're leaving
differently and thinking differently as you apply these things in your future endeavors. That's what I'm thinking about.
[00:16:25] Jason Johnston: Which, I guess, if we're going to bring it all down here, gets at those early, optimistic, could we call them almost naive, hopeful feelings we all had as we started as teachers: that at the end of the day we would change lives, right?
That they would leave the other end of the semester different than they entered. Yes. And I've found very few teachers for whom that's not still the hope. And yet we find ourselves also stuck in this assembly line, it almost feels like sometimes, right? This almost-industrialization of education that continues online, where we're stuck in these modalities, these ways in which it feels difficult to help the change happen.
And it feels difficult to see the change happen. You're not going to see that, as you mentioned before, through a multiple-choice test.
[00:17:23] John Nash: No. And it's hard to see also when, say, the degree program or the overarching curriculum across the program is so compartmentalized that each course stands on its own as just a chunk of learning and the connections across them are not as clear.
I can feel good that I don't have to remember that stuff anymore, because I'm going to go to the next class and do that. And I think that's fighting against our desire for this. And it's rooted in an overarching, deep system of credit-hour production: how many hours are needed to create a degree, how many transfer credits are allowed in, and all of that fun stuff.
[00:18:01] Jason Johnston: Yeah. It reminds me of something you said before in our conversations, that it's easy to kitchen-sink things online. We can just throw everything in there that we want to throw in there, whether it's the disconnected classes, oh, I want to do something on this, I want to do something on that,
and maybe from a student perspective as well. Even within our classes, there's so much available, and we're all drowning in the sea of information, that it's so easy to want to put everything, including the kitchen sink, in there. Yes,
[00:18:35] John Nash: I do remember saying that, because I had fallen victim to it.
Sure. Teaching a class, I see a resource, I'll make a Canvas page, and I'll drop it in a module. And now they'll go do that, and they'll be smarter now. I fall victim to that. I recently taught, this term, a course face-to-face that I've been teaching online for several years.
And that whole thing really came home to roost, because in a face-to-face class you don't have the luxury of just throwing everything at them. You have to be very conscious about choosing the right learning resource, lecture, or help session, whatever is going to advance the learning outcome.
There's just no time to do anything else. You can't. And so, it's taught me a lot about going back to teaching this course online again. I'm going to be very careful about what I select to go in those sections on those learning outcomes, because I've thrown so much in there.
Yeah, kitchen-sinked it. And I just want to add that Canvas is a double-edged sword. It's helpful in this regard, but it's also a problem, because you can drop in these plugins. Any number of vendors have come along that'll let you add videos or badge things or you name it; it's a dropdown menu and a plug-and-play away from adding yet another resource that you think will help things along.
I just think we have to be critical consumers of what we're putting in there to advance the learning outcomes.
[00:20:06] Jason Johnston: Yes. And as we're looking to what we think is problem-solving with our technology, we're faced with the shadow side of these limitless possibilities within our screens here. And sometimes that can be just trying to give everything at once, or see everything at once, or provide everything at once.
[00:20:29] John Nash: Yeah. Look, if I drop this down in front of them, they'll consume it and then they'll be smarter, right? I fall for that as well. You have to be very thoughtful about what you're putting in front of learners, why you want them to do it, and where it takes them next.
I think it has to be a building journey.
[00:20:45] Jason Johnston: I really think we hit on some pretty big topics here. And this is a great time to also invite other people into the conversation. So, if you're listening, we would love to hear from you. Find us on LinkedIn, in the Online Learning Podcast LinkedIn group, and you can let us know what you want to talk about as well as jump in on the conversation around these various podcasts.
[00:21:07] John Nash: Yeah, absolutely. And also find us online at our website for show notes and all other good things about the podcast. That's onlinelearningpodcast.com. Onlinelearningpodcast.com.
[00:21:23] Jason Johnston: Yes, we'd love to hear from you. Thank you so much for listening.
[00:21:26] John Nash: Yeah. See you later, Jason. Yeah.
[00:21:28] Jason Johnston: See you, John.





