Monday Feb 19, 2024
EP 24 - I Cancelled My Midjourney Account - The Great Big Fat AI Ethics Episode
In this episode, John and Jason talk about the ethics of AI, including how ethics are formed and a few scenarios like if it’s ethical to use Midjourney. Listen in to find out who says no! See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
- Article: Harvard Business Review Ethics in the Age of AI Series: Part 1, Part 2, and Part 3
- Article: It's Not Like a Calculator, so What Is the Relationship between Learners and Generative Artificial Intelligence?
- Jason’s FAFSA Assistant GPT
- ”Right Choices: Ethics of AI in Education” - John hosts Jason in an episode of the School Leadership + Generative AI series
- John’s School Leader AI Bootcamp
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions!
Podcast Episode on AI Ethics - January 29, 2024
False Start
[00:00:00] John Nash: Should we do the intro?
[00:00:01] Jason Johnston: Yeah, let's do the intro.
[00:00:03] John Nash: I'm John Nash here with Jason Johnston.
[00:00:06] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning Podcast. The Online Learning Podcast. Let's try it again.
[00:00:12] John Nash: I'm John Nash here with Jason Johnston.
[00:00:14] Jason Johnston: That reminded me of do you ever watch The Office? My name is Kevin, because that's my name. My name is Kevin, because that's my name. So this is the Online Learning Podcast, the Online Learning Podcast.
Episode
[00:00:30] John Nash: I'm John Nash here with Jason Johnston.
[00:00:32] Jason Johnston: Hey, John. Hey, everyone. And this is Online Learning in the Second Half, the Online Learning Podcast.
[00:00:38] John Nash: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last couple of years about online education. Look, online learning's had its chance to be great, and some of it is, but still a lot of it isn't. How are we going to get to the next stage, Jason?
[00:00:52] Jason Johnston: That is a great question. Why don't we do a podcast and talk about it?
[00:00:56] John Nash: That's perfect. What do you want to talk about today?
[00:00:59] Jason Johnston: John, I've got some ethical questions for you.
[00:01:02] John Nash: You do?
[00:01:03] Jason Johnston: I've been wondering about the ethics of using AI for certain tasks. And maybe we'll get back to some specifics later on.
But how do we form our ethics to begin with when it comes to AI and using AI these days when we think about education?
[00:01:19] John Nash: I'm stealing your line from the intro. That is a great question. How do we form our ethics? I think they're formed by the values and the beliefs we bring to anything we do. You've had a longer background and thinking and considering about ethics, both in your professional life and your education life.
What do you think about in terms of what sensibilities people bring to any task?
[00:01:45] Jason Johnston: Yeah, I think so. I like where you started there because sometimes people start externally. They think ethics are clear, right? We're not supposed to steal people's cars and we're not supposed to, kill people when we walk in front of them or whatever. And, but it's not that clear when it comes to certain things.
Certainly we can follow the ethics of a country or a city or institution, AI is something new. We haven't dealt with some of these questions before. And because of that, it does take some ethical reasoning. I happened to talk to a number of PhD students taking an instructional systems design course.
I was asked to come in by one of our previous guests, Dr. Anilda Romero Hall, and to talk about ethics in instructional design. And where I started with that was this question of what do we bring to the table? If we can understand what forms our ethics, our beliefs, our positionality to begin with, then we can start to understand why we might have some knee jerk reactions to certain things.
[00:02:49] Jason Johnston: And we might be more willing to concede on some things for the sake of the common good. And as we talk about ethics within a context or within a a group of people or a community or what have you.
[00:03:02] John Nash: Do you think the ethics of the companies that are creating these models drive how people feel ethically about using them, or is it the other way around? Did the companies decide they needed to sound ethical because they knew people were going to clamor about whether these models might be used in unethical ways?
[00:03:26] Jason Johnston: Yeah, this is a great question. Yeah, it feels like, to me they're aspects, if I'm reading down, like, and they've all got them, right? So you can look these up OpenAI, IBM, Anthropic. If you start to read down those ethics, typically you re, resonate with a lot of those ethics. They're good things, typically, about security And inclusivity and being non biased and private and so on, but then you've got to ask yourself what is really driving these companies to do what they do and what is not being said, right?
What's between the lines here and what are missing? And this is where I think we need to go beyond what the companies are saying and think ethically about our own context.
As educational institutions, I don't think we can just rely on these, do you think we can rely on these ethics to help guide our use of AI? Are they good enough, John?
[00:04:19] John Nash: we rely on them?
[00:04:21] Jason Johnston: Yes.
[00:04:22] John Nash: To what extent? I think, of course, they're a good start. They're a start. I think maybe even good gets left off of that last statement. They're a start. They're certainly not unethical, what's been put out there. I don't think that, But the companies are no fools.
They know that they're for profit companies and if they were to put out statements around ethics that didn't seem to meet with what general morally accepted principles look like, they would be derided in the marketplace.
[00:04:50] Jason Johnston: So do you think these ethical guidelines are crafted by philosophers within their midst or marketing people within their midst?
[00:04:58] John Nash: Certainly, I think it's more of the latter than the former. Many of them are Bay Area companies and there's ethos of the Bay Area and these guys and how they think.
I think they probably want to be ethical. Google once infamously now said, "do no evil." And then of course later got into many different kinds of arrangements that were not unevil.
[00:05:19] Jason Johnston: Yeah, you'd sent me an article a little while ago in the Harvard Business Review. They had a AI ethics series that I can put the links into the show notes here and where they looked at avoiding the ethical nightmares of emerging technology and questions about AI responsibility. And one of the questions was, what does the tech industry value? And it looked at some of the ideologies around the culture of speed.
And so I think my question with some of these, if you look at it, any of these big companies, Google, IBM, Anthropic with Claude, OpenAI, they have a list of ethics, but I think we always have to ask the question, what's not there, that's driving them. And I think this is one of those, is this culture of speed and the fact that it almost seems like their guiding point is that we need to do this as quickly as possible and get out there in front of other people. And, and that guides them ethically in terms of the choices that they make.
[00:06:22] John Nash: I agree with you. I think that they have two books of ethics, maybe, almost as though like a business that's got a second set of books. And so they've got the public ethics around keeping people safe and data safe and responses of our machines, that are very human like in their responses, the responses are safe. And then the other set of ethical books say we need to move on this like our board members want because shareholder value.
[00:06:52] Jason Johnston: Yeah. Yeah. And because of that, they may be willing to let some of those guardrails down a little bit to allow for the speed. And some of these post humanists or transhumanists kind of people that are running a lot of these companies think about the, from an ethical standpoint, , they're taking a more of a teleological approach, which is just looking at this ends justify the means. If in my mind, this is going to improve society so radically that we're willing to let a few things slide here along the way.
And I think that's where the speed comes in, is that if we can get there quicker, and we can improve society sooner, then we're willing to let a few, little ethical oversights go by while we're building whatever it is we're building.
[00:07:42] John Nash: Yes, because if you take what Mark Andreessen recently said, there is a belief amongst some of these founders that they are actually saving the world, that these are technologies that are going to save humans.
[00:07:56] Jason Johnston: I resonate with that idea of there being two books and we got to ask what the closed book, the secret book of ethics is, and what the open book of ethics is. The open book of ethics is almost always now talks about safety and inclusivity and privacy and these kind of things, whereas the closed book probably more things like like speed about having a perception of what the public needs in order to adopt it versus it actually being there.
So, managing basically your market and managing what the market perception is of a particular thing is more important in these cases than the actual thing itself.
[00:08:44] John Nash: Yeah, or what problem the thing is solving. We've not been privy to the real internal discussions at say Open AI when they said we will publicly release 3.5. I don't know what the problem was that they saw was being solved in the marketplace by releasing this.
[00:09:01] Jason Johnston: Right?
[00:09:01] John Nash: I don't know that there was one exactly, except that it's just, it's a fascinating technology and fun to play with and mind blowing.
But that's about it. And yet they were able to monetize that because people wanted to, play with it and actually do work with it. Yeah, I don't, I think this was all, these were all products with solutions in search of a problem.
[00:09:21] Jason Johnston: Yeah, it's strange. And this is what makes it really unlike a lot of other inventions. And I think because it's so open ended, it's so user driven,
[00:09:30] John Nash: Yes.
[00:09:31] Jason Johnston: And inquiry based that it doesn't need to be a solution to any one problem. That it's like an open ended potential solution,
[00:09:41] John Nash: Yeah, unlike the sundial, or the scientific calculator, or the phonograph, or the chalkboard, or go on. Yeah.
[00:10:05] Jason Johnston: On paper and in their heads, but you're right.
We continue to press math forward. However, again, here's a piece of technology, just like you mentioned there, with a very specific use in mind, right? And it has certain limitations to it. I think AI is more like the internet where it's wide open or as a recent article I read said : it's more like electricity somebody told me this about their about their grandfather that lived in Eastern Kentucky, didn't have electricity and they're asking do you want us to run the lines out to your home?
And he's like, why would I need lines? I don't have anything to run on the electricity. Which is true, right? It's an absolutely true statement. But electricity was almost like a, solution without a problem because as soon as you got it, then you figured out ways to use it.
[00:10:51] John Nash: I've been wrestling in my head whether or not this is like a utility. I don't think it's necessarily a public good. And, but it is, people are paying for it like it's a utility, they pay a monthly fee, they pay for their electricity, they pay for access to chat, GPT 4. 0. And so, but is it in doing so, is it just creating a situation where people need to get a bunch of stuff or do things that they didn't necessarily need?
[00:11:19] Jason Johnston: Yeah, I think my own use of it is probably a mixed bag. I sometimes come away and it feels like I've been on the internet and didn't get anywhere and then sometimes you go on the internet and you get some places, right?
[00:11:30] John Nash: Right.
[00:11:30] Jason Johnston: And you find the answers that you need or, sometimes you get lost and, a string of cat videos and you don't know how you got there.
And I feel like because it has such a lack of focus, There's a lot of experimenting still to be done with it that doesn't necessarily give you helpful results for your time investment.
[00:11:50] John Nash: What do you think about the ethics of all of the little GPTs that are getting built in the marketplace? Some of them are completely frivolous, some of them are a little malevolent, others could be useful. Do you think that the people who create a little GPT also need to have an ethical code?
[00:12:12] Jason Johnston: Yeah, that's a great question. I think, and this could lead into some other discussions about more contextual ethics. I do think that one can rely a lot on whatever the bigger ethics are in the system that you find yourself in, or the community, or the organization, or the country. So they can rely a lot on those larger ethics, but typically those larger ethics are general enough that they cannot always be helpful to guide what you should and shouldn't do in the specifics. Does that make sense?
[00:12:51] John Nash: think so.
[00:12:53] Jason Johnston: So like, maybe somebody running a little GPT might be generally guided by a care ethic, or ethic of how this might respond about certain races or stereotypes or people or whatever. I think it behooves the person who's making that to ensure that's true and do enough testing and to think about enough use cases that it might be used to get around these kind of general ethics to help guide it to keep it on track.
I really think a lot of people don't really start with ethics. When it comes to developing these things, I think it starts a lot with innovation, which is okay. I understand that, they're trying to, like you said, solve a problem. I've got a, this is a good time to plug my own GPTs, so people can ,use them.
And
I don't know, is this some sort of pyramid scheme? If I get people to use my GPTs or make GPTs, do I make money off of their GPTs?
[00:13:47] John Nash: Yeah yeah, no, I don't think so.
But I think you should, if you'd like, I would pose that to OpenAI to see if
[00:13:54] Jason Johnston: really I'm trying to find certain solutions, so I made a GPT because I've got questions, my kids are coming of age, and I've got FAFSA questions, and so I made a FAFSA GPT that is trained specifically on the information from the government so that it could answer questions from a reliable source.
And I think it was helpful for me personally. And so maybe it'd be helpful for other people, but honestly, I didn't really necessarily think of the ethics of that. It was just a utility.
[00:14:27] John Nash: You did think about the ethics tacitly because you wouldn't punk your kids on the FAFSA GPT.
[00:14:35] Jason Johnston: that's true. And I said things like, yeah, I think there would be maybe some specific ethics that we know, for instance, the, of the many qualities that GPT had, especially in the beginning, we still know that it can be very confidently wrong, right? And a lot of the other things it's grown away from, but it still can be very confidently wrong about certain things and it can hallucinate and so on.
And so I told it specifically to only give truthful answers. If it doesn't know, then say it doesn't know, and those kind of things. Whether or not that works, I don't know. Sometimes it does, I think, sometimes it doesn't. But by guiding it to, only use these resources, bang, then hopefully it will provide what I was hoping for was a truthful answering of my questions for myself and hopefully for other people, so people wouldn't get steered wrong. So I guess you're right, yeah.
[00:15:23] John Nash: Yeah. So what do you think our advice is for teachers as they think about how they might integrate ChatGPT, Claude, other large language models into their work routines, either as an instructional design assistant, which I use them for a lot. I use it more that way than I do as a tool for my students to solve a problem, for, students doing their work, or some hybrid of both, what if we're thinking about our notion of being human centered in our work, and encouraging others to be that way, what do you think we should say?
[00:16:06] Jason Johnston: Yeah, that's a great question. I would say on the front end that whatever institution or community that you're in that we should be at the place where people should have some pretty clear ethical guidelines to help guide as a community things that some principles that were agreed upon from a number of stakeholders across the community, institution, whatever that could be more general. Like, I was very thankful to be part of the a committee that developed some of these principles at UT, which can be really guiding principles. And so there are things like "we use AI intentionally, it's human centered, it's inclusive, it's open and transparent, we engage with it critically" and so on.
But then what I found when I'm working with my media team and my instructional designers, as we're talking about the use within our day to day work, I found that these guidelines were good, overarching guidelines for us that we could all agree upon. But then it came down to really specific kind of questions that we needed to talk about.
For instance, do we use AI image generators, right? And if we do which ones do we use? Do we open handedly use them? Do we just use specific ones? Are we concerned about things like copyright? Are we concerned beyond copyright? What other questions do we have in our smaller community? Questions that didn't even come up around faculty around creative works, not just about whether or not copyright is taken care of, but is there work creep happening when this person who's not a graphic designer uses AI to create graphics where another human would have typically done that, right? And so it starts to create much more of a specific kind of context for principles. And we were able to come up and we're still working on some more guiding principles , that can help inform our day to day work within our team.
[00:18:02] John Nash: Yeah, the graphic example is great because if you've got graphic designers, illustrators on your team, they take a brief from a client and they have to interpret that contextually and then they create an illustration, let's say. If they or someone else uses an image, generation model like DALI or MIDJOURNEY, they put in a prompt and it puts out something technically beautiful and maybe aesthetic, but does it hit the mark in terms of what the contextual interpretation was that was desired by the call?
That's very different. And if that can be created and say it does hit the mark and it's created by someone who's an 18-year-old intern, let's say that you hire, you have a new power dynamic problem. If we're, now we're back to my original problem, right?
[00:18:49] Jason Johnston: Yeah.
[00:18:49] John Nash: you are usurping traditional power dynamics about who's supposed to do what.
[00:18:55] Jason Johnston: And that's where it becomes so contextual, right? Because as you said, yeah there's a lot of ethical ways that you can talk about this, right? There's the copyright part of things. You can just lay it aside and say we're not gonna cross copyright laws and so we're just not gonna do it at this point or whatever.
But there are other ethical considerations beyond that someone's livelihood, potentially there could be some power dynamics, there could be some lack of care and respect for people who have done this job for a lifetime, and they're trained to do this, and they have the tools, and then all of a sudden some idiot with a Midjourney account
[00:19:28] John Nash: Yeah.
[00:19:30] Jason Johnston: that they can make graphics better than they do, and it's just not, it's not kind, and so I think that there are many ways to do that. Now, there could be another situation where somebody has a one person shop, and they're doing tech, they're doing instructional design, they're doing a little teaching and professional development, and they're expected to do graphics on top of this, and they don't have the budget. They've been told you can't hire anybody else. You don't have the budget, whatever. , it may be in those situations that that the ethical thing to do could, be to go ahead and use those,
um,
graphics .
[00:20:00] John Nash: you've hit the nail on the head. Context is everything. Because you're right. If you're a solopreneur who, say, makes logos for a living, then you are doing client development, you're doing billing, invoicing, and you're doing the creative work. I think you're probably using LLMs and image generation models all day long to help manage that process.
But that's different from a general ethic of care for just understanding how to deal with humans in the context of an organization, and whether you usurp their work without talking to them.
[00:20:32] Jason Johnston: Yeah. Yeah.
Let's do one other thought experiment here. What if two, I'll do two thought experiments.
[00:20:38] John Nash: A a 20 year old junior at university uses the LLM to critically examine the assignments given to them by a professor and writes back giving them a critique on how it doesn't really help them achieve the learning goals intended for the course.
Or a parent decides to write the lesson plans for an English composition, 10th grade teacher.
This sort of power still sits there. And so could a teacher's aide do the design work for a course instead of the teacher? Or should they? I think those are, leadership questions. Those are ethical questions. Those are organizational culture questions.
[00:21:18] Jason Johnston: Yeah. , I liked how your sentence changed there because this is a great indicator that we're doing some moral reasoning. A great indicator that we're doing some moral reasoning, is when your question shifts from could to should,
right?
And so could that parent do that?
Yeah, certainly can. Everything's there. Should they? That is the ethical question, and I think that takes some reflection. Probably takes some conversation, perhaps even to be able to work in empathy with other people. And so if we're trying to follow an ethic of care, then empathy is pretty high up there in terms of understanding.
And I'll be honest, and I, and this is also this completely contextual. I'm not saying anybody else should do this especially, present company, but I canceled my Midjourney subscription. I, hands down, it's making the best AI images out there, without question. It was worth it to me from that standpoint, and so on. But I canceled it because of some of these conversations I was having with creatives, and it didn't feel good anymore to have it,
[00:22:32] John Nash: Say more. In what context? Like, would you stop using DALL-E now? You could still make images with MidJourney without a subscription, right? And so, even if you can't, but I'm just curious, like, so you would never use it under any circumstances now?
I guess is what I'm trying to understand.
[00:22:51] Jason Johnston: in my current context the things that tipped me over, it was the, some of the copyright issues in terms of using artists work without their payment or their knowledge. Which didn't feel good to artists in general. The fact that I was paying for it as well, so somebody's making some bank off of this, right?
And so it's not experimental. This is a business. And then really thinking about this idea of why and should I be, right? Why am I doing this? Do I really need to be making images of this high quality? If it's important to somebody else that I am doing this, is it that important to me that I'm doing it?
So that's what was my reasoning around it. I'm not saying I would never for any circumstance but I, and partly a little bit of a statement, to be able to say, Oh, yeah, I just decided not to. It was an interesting experiment for a few months. And we have an Adobe Firefly subscription.
They have an ethic that includes paying artists and only using works that they have full license to. And. It's not as good, but I'm willing to do that for now if I need to use AI. And to be thinking about if there is anything that somebody should be doing that has the skills, then to be thinking about what place do they have in all this?
Should I be giving them opportunity and chance to do this?
[00:24:20] John Nash: Fantastic rationale. You've, yeah. You've convinced me I need to think about dropping mine.
[00:24:27] Jason Johnston: Again, I believe it's context. I think that people need to think about it for themselves. I'm not going to go around wagging my finger at people via LinkedIn about it. Although I have considered at least putting my thoughts out there. So maybe this will spur me to put my, some of my
[00:24:41] John Nash: Well, you know, there's, There's nothing worse than a reformed anybody.
[00:24:47] Jason Johnston: That's right. Nobody wants to talk to that person. Yeah.
This has been good John. I feel like we've covered a fair bit of ground. We partly started talking about this because we did a video, which we'll also put in there, where you and I broke some general ethics down in about 15 minutes.
You invited me to come talk to you, and this is part of a boot camp you're doing as well, tied in with that. Perfect.
[00:25:12] John Nash: yeah, you and I had a chat in a series I've launched called School Leadership and Generative AI, all in about 15 minutes where we cover pretty big topics on the top of mind of school leaders, but we get to it as quick as we can so they can gain some ground on some of these bigger issues. I did one with Dr.
Kurt Reese on data privacy with students. And then, yeah, with you on ethics and it's yeah, it's connected to my school leadership AI boot camp that I've got on Maven that people can enroll in, put a link to that in the show notes too. But yeah.
This was a good conversation today, I think. I think made me rethink some things.
Made me really think about context.
I was going to say earlier too, maybe this, maybe we fit this into the other part of the conversation. There were some articles six months ago or so, maybe about a firm in China that was going to have its CEO be a generative AI bot, and it was going to run the company.
And I don't know where that's landed since, but it made me think, could or should an AI bot run a school district? Could it even run a school? Could we have an AI LLM provost at a university? How difficult are those decisions anyway? That'll rankle some folks for me just even asking, but I think it's interesting to think about because this is the direction these are going.
Already with the terrible news of deep fakes that are coming out around Taylor Swift and others, and then with the election coming up with malevolent actors using this, these tools in bad ways. I think we're on the cusp of seeing the same sort of thing happening for leadership in organizations and maybe not malevolently, but it's going to be there.
We're going to have avatars that look very real, that'll get past the uncanny valley that will be driven by large language models that sound like they know what they're doing. So I think that we're another level of ethical discussions are coming around how badly do we need all these personnel?
[00:27:15] Jason Johnston: Yeah. All of those coming along. I'm convinced more than ever that we need to be thinking ethically about these. We need to be not just thinking about it for ourselves. We're talking about it in our communities, coming up with standards that we can support one another with, and that we bring people, all kinds of people into those circles so that we can think about not just ourselves and those ethics, but how it affects the people around us.
[00:27:39] John Nash: Yeah.
[00:27:40] Jason Johnston: Yeah, this is good. thank you, John, for this great conversation. And all of you, if you want these show notes, of course, we're at OnlineLearningPodcast. com. And you can check out all of our podcasts there as well as show notes. Yeah. Thanks for listening. And as well, if you have a chance, if you find us on Apple podcast, you can leave us a review, send us a note there.
You can always find us on LinkedIn as well. And connect with us there. We've got a community as well as you can just connect with us and we've got the links as well in the show notes for those.
[00:28:10] John Nash: Is it ethical for me to say that we found out that the algorithms like it when people go on Apple podcast and rate us and leave a comment?
Or is that just stating a fact? Am I just stating a fact without ethical considerations? It's okay to state
[00:28:28] Jason Johnston: I think that if it's true, it's ethical. And the fact that we're being transparent about this, we would like you to leave comments, not just for our own egos, but also to help the algorithm so other people can find this podcast. So, yeah, as long as we're being transparent, I think that's ethical, right? All about the algo. Talk to
[00:28:49] John Nash: Cool. Talk to you later.
[00:28:51] Jason Johnston: you soon. Bye..
[00:28:53] John Nash: Yeah, fun. I'll talk to you soon.
[00:28:55] Jason Johnston: Bye.
Comments (0)
To leave or reply to comments, please download free Podbean or
No Comments
To leave or reply to comments,
please download free Podbean App.