In this episode, John and Jason talk about whether technology in education will always let us down, and about the concerns and opportunities that come with AI.
Links & Resources
- Join Our LinkedIn Group - Online Learning Podcast
- Jason’s YouTube video of the chat with Bing: Bing Chat Broke My Heart
- Dr. Ian Malcolm Ethics Speech
- Will elementary children in China put up with robot teachers?
Research Papers
Review of Literature on AI, Higher Ed, and Distance Learning: Where are the social scientists?
- Systematic review of research on artificial intelligence applications in higher education – where are the educators? Link
- The Use of Artificial Intelligence (AI) in Online Learning and Distance Education Processes: A Systematic Review of Empirical Studies. Link
Ideas for Supporting Doctoral Student Writing
From Mushtaq Bilal’s tips on Twitter about smartly using AI to improve written academic work.
AI Businesses on the Horizon
- Inflection AI, a startup working on a personal assistant for the web, could raise $675m, after raising $225m in 2022.
- Character.ai — a company that hosts ChatGPT-like conversations in the style of various characters, from Elon Musk to Mario — is now valued at ~$1B.
Upcoming Presentations and Talks
- UT TLI conference, March 23rd, 3:00–4:50 PM. The first 500 registrations are free; $20 after that.
- “How Might We Use Design Thinking to Humanize Online Education?”
- John & Jason’s design thinking challenge session at OLC Innovate in Nashville
- Wednesday April 19, 2023 - 3:45 PM to 4:30 PM
- Message us on LinkedIn for details about the live meet and greet on April 20th
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcription and human editing. Please check against the recorded file before quoting anything, and reach out if you have questions or can help with corrections!
False Start
[00:00:00] Jason Johnston: You can always find this podcast in our show notes, onlinelearningpodcast.com. That’s onlinelearningpodcast.com. I still can’t believe we’ve got that URL, John. We’re this far into it, and nobody’s done onlinelearningpodcast.com.
[00:00:14] John Nash: That’s crazy.
[00:00:15] Jason Johnston: I just figured we’d be at some sort of weird, really weird name to get anybody to find us.
[00:00:21] John Nash: Underscore 87 52 B,
[00:00:24] Jason Johnston: Right? Exactly. Dash.
[00:00:28] John Nash: Yeah. Our birthdates are already taken. Our names are taken entirely. Shot.
Right.
Intro
[00:00:39] John Nash: I’m John Nash, here with Jason Johnston.
[00:00:42] Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:00:48] John Nash: Yeah. We are doing this podcast. I’ll let you in on a conversation that we’ve been having for the last two years about online education. Look, online learning’s had its chance to be great, and some of it is, and a lot of it isn’t. So, how are we going to get to the next stage, Jason?
[00:01:02] Jason Johnston: That is a great question. How about we do a podcast to talk about it?
[00:01:07] John Nash: Perfect. What do you want to talk about today?
[00:01:11] Jason Johnston: John, Bing Chat broke my heart.
[00:01:15] John Nash: Come on. Really?
[00:01:17] Jason Johnston: Yes. Could I tell you the sad story?
[00:01:20] John Nash: Yes. Tell me the story.
[00:01:21] Jason Johnston: So, we were talking the other day, and we were talking a little bit about practices to have more emotionally stable and emotionally healthy kinds of responses in work or in life during the day. I was curious about it, almost from a therapeutic standpoint. And it asked me if I wanted some resources on it and so on, and I said, “No, that’s fine. I just want to chat.”
And I like the idea of observing and labeling the emotions without judging. This was an idea that Bing Chat came up with as we were chatting.
[00:01:58] John Nash: This is what Chat said to you?
[00:02:01] Jason Johnston: That’s what Bing Chat said to me previously. And, jumping back into the conversation, I said, “I like that idea, Bing Chat, about observing and labeling the emotions without judging.”
And Bing Chat says, “Okay, no problem. I’m happy to chat with you.” And it goes on and talks a little bit more about that. Then it says, “Do you have any examples of how you observed and labeled your emotions recently?”
And this is one of the things about Bing Chat. It’s a little more back and forth. It feels more like a chat to me than ChatGPT, as we talked about in our previous episodes. And honestly, I thought, what the heck? Bing doesn’t save this stuff. I’m sure nobody’s recording this anywhere, right, John?
[00:02:42] John Nash: Sure. Yeah, absolutely.
[00:02:44] Jason Johnston: And so, I actually said sometimes I… and I explained some emotions that I’ve had recently and how I’ve labeled them and how it’s been helpful.
And guess what Bing Chat said to me.
[00:02:54] John Nash: What?
[00:02:55] Jason Johnston: It said, “Unfortunately, we need to move on. Click ‘New topic’ to chat more.”
And I said, “Bing Chat, I just opened my heart in a way that I’ve never done to an AI. Are you sure we can’t continue?” And Bing Chat says, “Sorry, it looks like we need to chat about something else. Click ‘New topic.’ Please stop.”
[00:03:20] John Nash: Because you only get to speak eight times.
[00:03:23] Jason Johnston: Exactly. And then, John—and this is the kicker.
Okay, then John, I went back to Bing Chat a couple days later. I was a little wounded. You could see my face. I’m a little… all this is tongue in cheek, everyone. I hope you realize that. I have some very good, human, healthy relationships. You shouldn’t be concerned about my—
[00:03:46] John Nash: We need, um, we need audible emojis.
[00:03:49] Jason Johnston: Yes, we do. So, you can see. But anyways, this was the kicker, John, is a couple days later I was put back on the waiting list.
[00:04:01] John Nash: Oh my gosh.
[00:04:02] Jason Johnston: And I wondered what happened between me and Bing Chat. Was it that it couldn’t handle my intimacy? It couldn’t handle me really… sure, it asked to know some examples and then immediately closed down after I had really opened my heart in a way that I had not to AI before.
Anyways, yeah. So, Bing Chat broke my heart. And when this happened, I did think about… we’ve done three podcasts now about AI in education, talking about online learning a little bit. And it did make me think about the fact that, John, is technology always going to break our hearts when it comes to education?
And what are we not seeing here? You and I have been really enamored with AI and ChatGPT and then Bing Chat over the last few months. Everybody’s talking about it. We’ve been really interested in that conversation. We tend to come at it a little bit more from the standpoint of what can we do about this rather than building fences to keep it out of our lives.
But I wanted to take a little bit of a turn in this episode as we wrap up a little conversation, and I wondered if we could talk about some of our concerns to begin with. Will technology always break our hearts, and what are our concerns as we think about what we’ve learned over the last few months in relation to online learning and AI?
[00:05:23] John Nash: When you asked that question, the first thing that popped into my mind wasn’t some of the things that I’ve been thinking about as concerns. But will AI, will technology always break our hearts? I think that technology, pretty soon in the form of the way AI is evolving, will put teachers in the position to inadvertently break learners’ hearts, because they will rely too much on the technology to get the job done. Does that make sense?
[00:06:00] Jason Johnston: Say it again in a different way for me.
[00:06:02] John Nash: I think that technology could contribute to teachers breaking learners’ hearts. Present-day teaching and learning has become so politically fraught in the public sector, with so much pressure to do well, particularly in the P–12 sector, that as vendors come along and technology advances to bring more AI tools that are supposed to humanize things, it may inadvertently fool some of us into thinking it can handle the human side of the teaching, and we’ll let the machine do a little too much.
Particularly those who have not been using technology to teach. If we have another pandemic-like event that pushes a lot of pedagogues into teaching and learning practices they’re not accustomed to, they will rely on what the vendors have created.
[00:07:04] Jason Johnston: Yes. Yeah. It makes me think of a couple of phrases that kind of drive me up the wall, and apologies to vendors out there: “one-button solutions”
[00:07:17] John Nash: Yes.
[00:07:18] Jason Johnston: and “seamless integration.”
[00:07:20] John Nash: Yeah. Yes. I think that lulls people into a false sense of security and safety, that this process will be managed for them.
[00:07:29] Jason Johnston: Yeah.
[00:07:31] John Nash: Yeah. It’s very attractive too, especially if people find themselves in a crisis situation and they’re like, “You mean all I have to do is hit this button, like the easy button, and we’re just… we’re cruising. It’s just taken care of.”
[00:07:43] Jason Johnston: Yes.
[00:07:44] John Nash: Yeah. I worry about that.
[00:07:46] Jason Johnston: It also got me thinking about a quote by a very astute Dr. Ian Malcolm. I don’t know if you know this scientist or not, but he said, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
[00:08:04] John Nash: Yes. Very astute words. Tell everybody who Dr. Ian—
[00:08:08] Jason Johnston: Well, you know, for our highly academic listeners out there, they’ll know immediately who this is, of course, who was the Jeff Goldblum character in Jurassic Park movies. And it got me thinking about this quote from the standpoint of AI, in that I think some people are asking the questions, which I’m really glad about, but it continues to be a concern.
Like, I’m part of some task forces and so on at our school. They’re asking the question whether or not they should on some levels. On other levels, I hear very few people taking any kind of Luddite stance here. Most people have resigned themselves to say it’s a foregone conclusion.
We’re going to have AI anyway, so we might as well learn to live in harmony with our AI friends slash partners slash co-pilots slash overlords, right? So, some of my concern is around that—whether or not… and as we know from the Jurassic Park movies one, two, three, four, five, six, seven, they keep doing it over and over again, right?
But it all started with a good heart—people that just loved the dinosaurs. They wanted to see them again. They wanted to roam with them in the forest. But every time it turned out poorly. I love sci-fi movies because they often speak to some of the fears and give us some times for caution.
So, that’s my main concern, is that this is going to get away from us. What we just said about the one-button solution—that we’re going to do plug-and-play online learning, and humans are going to be left behind, both teachers and students.
[00:09:49] John Nash: And speaking of humans getting left behind, a piece of research crossed my desk last week on whether or not service robots are going to be a promising solution for mitigating the challenge of the global teacher shortage. And so, this study was done out of China, looking at whether or not children will tolerate a robot teacher. Now, the study doesn’t ask if students reach learning outcomes that are as good or better than a human teacher or whether they would like to have a human teacher instead of a robot teacher.
But—
[00:10:24] Jason Johnston: So, it was more like student satisfaction with the robot teacher versus a human teacher.
[00:10:29] John Nash: Yeah, basically like, will they put up with a cute little humanoid robot as their teacher? And this was elementary children too in China. Really interesting. Because actually the authors cite some research by some other authors who say, “Service robots are believed to be a promising solution for mitigating the challenge of the global teacher shortage, and it’s of critical importance to develop an educational service robot into a full-fledged robotic instructor that can be fully accepted by students.”
This is now a critical call according to these authors. I don’t know.
[00:11:03] Jason Johnston: Yeah, that’s interesting because so many of our conversations, even around the university, certainly haven’t been thinking towards actual physical robots.
Some of it is because we’re thinking online, and so it would be a little redundant in some ways to stick a robot in front of a Zoom camera. But I’ve seen some of those AI chat and video creators out there where people look real on screen.
It could be a really, a much quicker solution in that regard. Robots are probably a little further out in terms of physical robots that can actively walk the halls of a classroom building and navigate opening doors and so on.
[00:11:44] John Nash: This one won’t open doors, but in this study—we’ll put it in the show notes—this was a physical robot that had a humanoid appearance. The authors said, “The robot this study employed for development was an AvatarMind iPal, produced by” a robot technology company. And they said it “has advantages over other robotic products in the current Chinese market, such as feasibility, customization, and affordability. This robot was chosen for four reasons. What we first considered was robot appearance. It looks like a cute child due to its humanoid shell, few angles, and no exposed mechanical parts, which eliminates pupils’ fear and cold feelings of the robot, according to the uncanny valley theory.”
I don’t know. I’m the first person to run to getting something cool and technological in my house, but this just sounds awful.
Jason Johnston: Yes, and it feels a little bit like a slippery slope in this regard. I have heard there are places where they’re testing robots for taking care of our elderly as well.
And we talked before about this kind of AI washing machine idea where, you know, maybe we can get AI to do things that we don’t really want to be doing, and so we can spend our time doing other things. But of course, this is what all this is about—is that, when is it at the expense of the student? And is this really moving in the right direction for students?
[00:13:07] John Nash: No, I think that goes right back to your Dr. Ian Malcolm quote, that we’re just… we’re doing this because we can. Now, I do not want to besmirch the global teacher shortage. This is an important problem. But yeah, I don’t know. I don’t know if humanoid robots in elementary school are the first-best solution.
[00:13:29] Jason Johnston: Yeah. It reminds me of a tour that I took. Have you been to the Toyota plant north of Lexington?
[00:13:34] John Nash: No, I haven’t. I’ve lived here a decade, and I’ve been meaning to go up.
[00:13:38] Jason Johnston: Yeah, you should go sometime.
[00:13:40] John Nash: For the folks who don’t know about it, let’s just take a second—all the Toyota Camrys in the United States are built there. Is that fair?
[00:13:47] Jason Johnston: Yeah. Yeah. In Georgetown, Kentucky. And I think a lot of Lexus and so on—they have certain vehicles that they do. I taught robotics actually at a high school for a time period, so we went there to see the robots. What was interesting about that tour is that I was excited to be seeing the robots and so were the students.
But they really tried to downplay their use of robots.
[00:14:07] John Nash: Interesting.
[00:14:09] Jason Johnston: There were very few humans on the floor, it felt like, in comparison to these huge robots that were doing these tasks. And they kept saying over and over again that robots are here to help humans, not replace them.
That was their mantra about all this. However, I guarantee you that there were more humans working the automotive factory floor fifty years ago than there are now. And I think it’s a great company; they make a really good product, all those kinds of things.
And while that might be somewhat of a comforting mantra for us humans, I’m not sure it’s completely accurate. And I think we have to be aware of that part in terms of concern as well. You’re talking about the robots coming out or AI replacing teachers and so on. I don’t want to put… our main response has not been a knee-jerk fear response to this.
We’re more interested, excited about the potential, but still I think we need to have some awareness of being able to think critically through any lines we might be fed by vendors that are trying to sell us the next AI educational tool and so on. And keep a critical mind and eye towards those things.
[00:15:26] John Nash: Why do you think they were so keen to be sure to say that robots are not going to replace humans, but are here to help humans?
[00:15:34] Jason Johnston: I think one thing I realized is that I thought I was on an educational tour, but it really was a PR tour. And so, there are amazing ways that they showed us in terms of manufacturing—like we could talk all day from a leadership standpoint about how they do manufacturing there.
Every person on the line can stop the line because quality control is at—
[00:16:02] John Nash: The andon cord. You can pull the andon cord.
[00:16:03] Jason Johnston: That’s right. They have the Kaizen method. I’m probably saying that—
[00:16:07] John Nash: Kaizen.
[00:16:07] Jason Johnston: Kaizen, yeah. And the amazing leadership thoughts and take-homes from that place. And they emphasized some of those things. And so, maybe they didn’t want to get people too caught off guard by all the robots, but I, for one, was there for the robots.
Okay. Excellent.
Should we talk about some of the… so, those are some of our concerns. So, I felt maybe we need to air a few of those kind of things. Are there any other concerns you wanted to air before we move on to the positives?
John Nash: Not concerns per se, but I’ve got a couple of… I think I see some opportunities—some other research that crossed my desk.
Okay.
A couple of systematic reviews of research on artificial intelligence in higher ed and in distance learning. And I’m not surprised, I guess, that much of the research has got a weak connection to the pedagogical perspectives of the way in which this can be used and the ethical perspectives in which this can be used.
And maybe it’s not so much, as I say, a concern, but I see an opportunity for the social scientists like us to take the mantle up and look at ways in which we can now humanize online education with the help of AI. This one study I looked at by Zawacki-Richter et al. concluded, “A stunning result of this review is the dramatic lack of critical reflection on the pedagogical and ethical implications, as well as risks, of implementing AI applications in higher education.”
Jason Johnston: And it sounds like at least the beginning—that’s your exploratory research right there, saying here are some of the concerns, and we need to get into more research about this. Yeah.
[00:17:43] John Nash: Yes. Yeah, that’s good. Yeah. The other study concluded that most of the “artificial intelligence applications in online distance ed are purely technical studies that ignore such issues as pedagogy, curriculum, and instructional learning and design.”
We’re back to… so, this is back to my sort of “one-button design,” I’ll call it now, that you nicely labeled—that there’s already a plethora of artificial intelligence applications for use in online distance ed, but it’s all purely technical implementation, data mining, looking at ways to personalize education based upon responses of students in prior exercises, things like that.
Soon all this stuff will come. It’ll be integrated into Canvas. And so, then the question is, will it be used in a way to really humanize and improve the process, not just simplify it or speed it up?
[00:18:34] Jason Johnston: Yeah. And I think that’s a good thought. It made me think of… so, we have, at University of Tennessee, we have a task force going on right now.
And they’re essentially assembling smaller task forces with different lenses to think about AI. And just what you’re saying—if we only look at it from a technology standpoint, then we miss some of the other implications. I’m not leading this thing at all. I’m very glad to be part of the conversation. I’m having some very robust conversations about it.
So, I’m very impressed by the people who have set this up really quickly because of their thoughtfulness about this. There is a technology section. There’s also pedagogy. There’s philosophy—it’s one of the ones that I’m part of, which is a fascinating conversation.
That’s excellent.
Research and also policy. So, five really thoughtful buckets or frames or ways to think about it.
And we’re having some really robust conversations. We’re trying to come up with some almost like white papers of recommendations we may turn into digestible videos or other things, so that we can continue the conversation but also put out some recommendations for teachers.
Because lots of teachers are asking right now.
[00:19:46] John Nash: They are. It’s interesting you said policy, because one of the papers that crossed my desk by Dogan et al. actually focuses on policy. They said, “Developing policies and strategies is a high priority for educational institutions to better benefit from AI technologies and”—get ready—“design human-centered online learning processes.”
[00:20:08] Jason Johnston: Oh, interesting.
[00:20:09] John Nash: This in an AI systematic review.
[00:20:12] Jason Johnston: Yeah. Yeah. They’re thinking about it from a human-centered perspective.
[00:20:16] John Nash: Yes. Because much of the work, as I’ve noted thus far, does not talk about how to apply this AI work ethically, doesn’t talk about what you do with all the human-generated data.
[00:20:29] Jason Johnston: Yeah. Yeah. That’s great. Can I put a small plug in here since we’re talking about it?
[00:20:35] John Nash: Absolutely.
[00:20:35] Jason Johnston: Here at UT, our Teaching and Learning Innovation center is putting on a conference. I think the first five hundred sign-ups are free. We’ll put the link in the show notes. You and I are going to be part of a ChatGPT panel; I’m going to be co-moderating it on March twenty-third. A bunch of people from the task forces I just mentioned are going to be on that panel, as well as a few other friends from across the country.
So, yeah, if anybody wants to join us on that—and if you are listening to this before March twenty-third, twenty twenty-three—you’ll be able to join us.
[00:21:11] John Nash: I’m looking forward to that. Going to be talking about application of generative language models in teaching and learning in higher ed, talking about some of the promises and perils regarding our own work.
[00:21:25] Jason Johnston: Yeah, and we have some writing experts as well, so they’re going to be talking specifically about some of the writing challenges, concerns, as well as opportunities there. So, I think it’ll, in many ways, be laid out like this episode, talking about some of the concerns and then some of the opportunities as well.
And what’s next. Yeah.
So, what other positives are you thinking in terms of AI for the future of learning, and specifically online learning, John?
[00:21:52] John Nash: Yeah, Jason, I’ve been thinking about this a little bit. I think a couple of things come to mind, and they remain the same as I felt about three, four, five weeks ago.
There are these continuing affordances that I see, that I like, and that I use in my work and my work with my students. One is this idea of breaking inertia. The law of inertia states that if a body is at rest or moving at a constant speed in a straight line, it will remain at rest or keep moving in that straight line until something bumps it off or bumps it forward.
I think we can all relate to inertia as being the “blank page syndrome.” So, you’re staring at a blank page, you don’t know what to do next. I think that for me, ChatGPT and, to some extent, the Bing AI model have been helpful in breaking inertia. So, you know the direction you’d like to take, but you’re stuck for ideas or a start, then I think that’s where it’s really helpful.
I think also a positive so far has been its ability to help me and my students expand our thinking beyond an initial set of concepts. And so, a short version of a prompt for that sort of thing might be: you already know something that you want to think more about, or you think you’ve hit the end of your rope in terms of ideas.
You would say, maybe, “Give me more examples of blank.” And it’s very powerful for that sort of thing. And then the third thing, again related to my work with my doctoral students, is that it’s improving their writing and my writing—but it’s improving writing so that we humans can get to the thinking.
And I know that sounds a little weird as I say that, but that’s what we’re good at. We humans are very good at thinking and connecting, but it’s hard to be able to consider the value of the ideas if they’re not clearly expressed on the page. And in an example recently with a student of mine who had written an introductory paragraph to a section of a dissertation, I thought it could have used some improvement. And so, I asked him if he wouldn’t think about just putting that piece of text into ChatGPT with the following prompt: “Here is some text. Tell me how clear the argument is.”
And ChatGPT says that the argument is fairly clear but the writing needs some mechanical support—we know where they want to go with their ideas. After it does that, you say a prompt like, “How could I make the argument clearer?” And it gives four ideas on how you can make the argument clearer. Then we asked it, “Remove the redundant words in the passage, make it coherent and cohesive, and rewrite it using the tips you just gave.” And instead of one paragraph, it writes two pretty cogent paragraphs. The student and I both agreed that it improved the way he was expressing it, but it didn’t change his logic or his ideas—they were still his ideas throughout. I think that’s been an interesting way to think about using this tool.
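[Editor’s note: the three-step revision dialogue John describes could be scripted against any chat model. Here is a minimal sketch; the `ask` callable is a hypothetical stand-in for whatever chat-model client you use, and none of these names come from a real library.]

```python
# Sketch of the three-step revision dialogue: critique, suggestions, rewrite.
# The model call itself is left abstract as `ask`, a callable that takes the
# running message history and returns the model's reply string.

REVISION_PROMPTS = [
    "Here is some text. Tell me how clear the argument is.\n\n{text}",
    "How could I make the argument clearer?",
    ("Remove the redundant words in the passage, make it coherent and "
     "cohesive, and rewrite it using the tips you just gave."),
]

def revision_dialogue(text, ask):
    """Run the three prompts in sequence, carrying the full chat history
    forward so each follow-up builds on the model's previous answer."""
    messages = []
    replies = []
    for i, template in enumerate(REVISION_PROMPTS):
        # Only the first prompt embeds the student's draft text.
        prompt = template.format(text=text) if i == 0 else template
        messages.append({"role": "user", "content": prompt})
        reply = ask(messages)
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies  # [critique, suggestions, rewrite]
```

The point, as John notes, is that the sequence improves the expression while leaving the student’s logic and ideas untouched.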
[00:24:58] Jason Johnston: Yeah. And the point is, you are the professor at the end of the day. It’s up to you if that’s cheating or not. But I also think that this student is being given a gift of learning in this moment, where they’re able to see something recrafted and ideally, they don’t just fall back on, “Now Chat can do this for me from now on whenever I have a paragraph that doesn’t make sense.”
Rather, they see the differences. They recognize the differences. They use some reflection to change, you know, and we’ve talked about this, but learn and change their writing moving forward.
[00:25:37] John Nash: Yes. Yeah, absolutely. It’s just like the high school student at Frankfurt International School who said, in a webinar about using Grammarly, that they write better because Grammarly corrected their paper; now they remember the rule and don’t make the same mistake. I think the same principle applies here—this is how you convey a complex, politically and socially embedded idea that takes place in a school I’m trying to improve with my research. ChatGPT can’t do that. It can’t know where the study’s taking place or how or why the people will react to the ideas.
[00:26:19] Jason Johnston: Yeah. Yeah. That’s good. One of the things for me, as we are getting into more the Bing Chat and this conversational aspect, stuck out to me as a… I guess it was two episodes maybe ago when we were talking about the first one of Bing Chat, about what Satya said about it being a co-pilot—
[00:26:41] John Nash: Right.
[00:26:41] Jason Johnston: and a co-pilot of our learning. And I thought about how something like a Bing Chat could be an amazing co-pilot for the curious. If you are interested in learning, you’re interested in things, interested in whatever it is that you’re really wanting to dig into, it could be a great companion or co-pilot for the curious.
And some might say it’s artificial, but at the same time, you know what those curious people are doing now when they’re on their own, when they don’t have somebody to talk to? They’re doing web searches that are taking them longer to get through things and find different things and whatever.
It could actually create a real synergy with this great body of information in a way that’s not overwhelming, that instead turns into a conversation that develops and moves it down the path. Now, ideally it doesn’t get cut off after eight lines, right, like my example at the beginning.
Because I think anybody really wanting to go deep into something is going to want more than eight prompts. You know, they want to get into a flow and talk about something for a couple of hours. But I see some potential there when it comes to the curious who really want to ask the questions—potentially for this idea of teacher shortage, or teacher capacity, to be able to respond to that, both in terms of the conversational side and in terms of some of the laundry-list things to do. For teachers to get to the place, just like you’re talking about, where you can really talk to your students about more conceptual things, versus spending time correcting their use of adjectives or their split infinitives or whatever it is that makes their writing confusing.
So, those are the two things for me that I think… there’s a lot of potential here if we wield it with thoughtfulness,
[00:28:35] John Nash: definitely.
[00:28:36] Jason Johnston: for learning.
John Nash: I think that your point is well taken, because what I see it doing is taking something that someone’s curious about and moving them quicker to a product or an outcome—something out the door. The ideas become real sooner because they can be turned into an outcome.
[00:29:12] Jason Johnston: Yeah. And within that, potentially meeting the student where they are in their curiosity or in their level of knowledge as well, through asking questions back to that student about what they know already. There’s lots of possibilities there.
Exactly. As we try to wrap this up here—I think that was good, John. I do want to point out the fact that you told me you now have access to Bing Chat and you revealed this. I hope that relationship is going really well for you, John. I’m happy for you. I’m not upset at all. I’m just very happy for you.
So, I just hope you know that.
[00:29:44] John Nash: I’m glad you’re happy. I’ll ask it what to do with all that happiness.
[00:29:48] Jason Johnston: Okay, good.
[00:29:49] John Nash: We will label it, but we won’t judge it.
[00:29:51] Jason Johnston: We will label it and not judge it. Exactly. That’s good. And as we’re looking to the future of AI here, anything else we need to be looking out for? Because we’ll probably move on to some other topics and we’ll return to this again another day. But anything else to be looking out for in the next little bit?
[00:30:10] John Nash: I don’t know. I just saw some business news cross my desk. I subscribe to a newsletter called The Hustle.
It’s pretty interesting—just what’s happening in the tech and business-y world. We might want to come back in a year and see what’s happened with all these AI applications. But I’ll note two things: Inflection AI is a startup working on a personal assistant for the web, and it looks like they’re going to raise six hundred seventy-five million dollars to get going. There was a big surge of interest, particularly after Timothy Ferriss’s book, The 4-Hour Workweek, came out, in getting a virtual or remote assistant to do the sort of thing that I think AI is going to start doing for us now.
We’ll see that. And then Character AI is a company that hosts—get ready for this—ChatGPT-like conversations in the style of various characters, from Elon Musk to Mario. They are now valued at around one billion dollars. Wow. So, here we go.
[00:31:17] Jason Johnston: What it would be like to get taught by… yeah, by like Fred Rogers.
I would love to take a class from Fred Rogers, have him teach us about empathy or something like that.
So, as we talk about whether or not tech is going to break our hearts, there are businesses popping up in different sectors that are inevitably going to play in the space of online learning.
[00:31:39] John Nash: Mm-hmm.
[00:31:40] Jason Johnston: Inflection AI is a startup working on a personal assistant for the web. They could raise six hundred seventy-five million dollars. I think many higher ed and other pedagogues in P–12 could benefit from a personal assistant.
Maybe this will be something that will be a personal assistant that will be a teaching assistant. And then a company that hosts ChatGPT-like conversations in the style of various characters—Fred Rogers—
[00:32:04] John Nash: I mean, this could all converge.
[00:32:06] Jason Johnston: Yeah. That’s really interesting. Thanks for bringing those up, and we’ll put those… yeah, links to those articles as well in the—
[00:32:12] John Nash: Definitely.
[00:32:13] Jason Johnston: in the notes.
I did want to say one thing too. If you are listening before April nineteenth, we’re going to be at OLC Innovate on the nineteenth doing a design thinking session on humanizing online education. And then we’re going to do a little meet and greet the next day, on the twentieth. If you’re interested in meeting up, or you’ve got something to say on the podcast, just send us a message. I don’t think I’m going to announce the location publicly. So, find us on LinkedIn. We’ve got a growing LinkedIn community called the “Online Learning Podcast,” or you can look up John Nash or Jason Johnston. We’re happy to connect there. Send us a message if you’re interested in the meet and greet or in talking with us. We’ll bring a couple of microphones and do a few short guest sessions. We’ll pick a topic and discuss a couple of things while we’re there.
So, it’d be great to meet some of you. And yeah, we just love… we love these conversations. So, please join in online as you can. Yeah.
I think it’s going to be a great opportunity to do a remote episode and hear some experts’ views on everything that’s happening.
And our website is onlinelearningpodcast.com.
And that will take you to our podcast. Please subscribe on any of your podcast platforms, and leave us some reviews. More importantly, connect with us, because we’d love to hear what you think and what you would like to talk about.
[00:33:48] John Nash: Yeah. Are there other things to talk about besides AI?
[00:33:50] Jason Johnston: I think so.
I think so. But we don’t want to give any spoilers, except we already had one. It’s still a little bit AI, but in the next episode we’ll probably be talking about our, or rather my, experience with a self-driving car—
[00:34:03] John Nash: Yes.
[00:34:03] Jason Johnston: and how it relates to online learning.
[00:34:06] John Nash: Excellent. Cool. Thanks.
[00:34:09] Jason Johnston: All right. Thanks John.
[00:34:10] John Nash: Yeah.