In EP 41, John and Jason discuss the evolving challenge of moving beyond 'copy-paste' AI policies to create syllabus guidelines that encourage students to engage in the 'productive struggle' of learning.
See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group: Online Learning Podcast (Also feel free to connect with John and Jason on LinkedIn too)
Host Bios:
Walk into schools today and generative AI is on the agenda—and many leaders aren’t sure what to do with it. John Nash helps them figure it out. An associate professor at the University of Kentucky and founding director of the Laboratory on Design Thinking, he makes AI practical and useful, not just theoretical. He’s on two generative AI advisory boards at the University of Kentucky and one at MidPacific Institute in Honolulu, advising educators from local superintendents to teachers in international schools. He teaches courses in design thinking, leading deeper learning, and mixed methods research, and his research focuses on the application of human-centered design in organizational leadership.
Jason Johnston is the Executive Director of Online Learning & Course Production in Digital Learning at the University of Tennessee, Knoxville. His background includes developing and launching online degree programs, directing educational technology, teaching, and working as an audio engineer. Holding a PhD in Educational Leadership, an M.Ed. in Educational Technology, and an M.Div., Jason advocates for humanity and equity in online education while helping educators leverage technology for the future. He co-hosts the podcast Online Learning in the Second Half (www.onlinelearningpodcast.com) and enjoys playing guitar, building Lego, and traveling with his family.
Resources:
- University of Kentucky Syllabus Policy: https://celt.uky.edu/ai-course-policy-examples
- University of Tennessee, Knoxville Syllabus Policy: https://writingcenter.utk.edu/sample-syllabus-statements-for-ai-guidelines/
- Jason’s Policy Icons: https://docs.google.com/document/d/1MG9h68__uqPSz6HXNeVymJhal1VNapjyK-2PFa5QFxI/edit?usp=sharing
- John’s Policy Example: https://johnnash.notion.site/John-Nash-s-Stance-on-Generative-AI-Use-by-Students-in-Courses-2eff24fd17cc8043ae2be34712680c28
- Chronicle article by Geoff Watkinson “I’m an AI Power User. It Has No Place in the Classroom. Learning to think for yourself has to come first.“: https://www.chronicle.com/article/im-an-ai-power-user-it-has-no-place-in-the-classroom (paywalled - should be able to read for free with login)
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Battle Hymn of the Republic is public domain from the Library of Congress https://www.loc.gov/item/jukebox-767050/
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check with the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
[00:00:00] Jason: Can we do the quick intro?
[00:00:02] John Nash: Yeah,
hold on.
[00:00:03] Jason: That was the intro to your other podcast.
[00:00:06] John Nash: Yeah,
[00:00:06] Jason: John, have you
[00:00:07] John Nash: exactly.
[00:00:08] Jason: been podcasting behind my back?
[00:00:10] John Nash: No, I am not podcasting behind your back.
I'm John Nash, here with Jason Johnston.
[00:00:15] Jason: John. Hey everyone. And this is Online Learning in the Second Half, the Online Learning Podcast.
[00:00:19] John Nash: Mm-hmm.
[00:00:20] John Nash: Yeah, we're doing this podcast to let you in on a conversation we've been having for the last three years about online education. Look, online learning has had its chance to be great, and some of it is, but a lot still has a way to go. How are we going to get to the next stage, Jason?
[00:00:35] Jason: That is a great question. How about we do a podcast and talk about it?
[00:00:39] John Nash: Perfect. What do you want to talk about today?
[00:00:41] Jason: You know, you always ask me that question and I really appreciate it. But what do you want to talk about today, John?
[00:00:47] John Nash: Oh, you know what I want to talk about today? I want to talk about the struggle that instructors are having to set guidelines for the use of generative AI in their classes.
[00:00:57] Jason: I think that sounds like a great conversation, especially at the front end of a semester here.
[00:01:02] John Nash: Yeah, is it the lawyers that say, or the justices that say, "this is not settled law?"
[00:01:07] Jason: Hmm.
[00:01:07] John Nash: This is definitely not settled law. We are not lawyers. We do not play them on podcasts. We are just a couple of folks who are trying to think this through.
So, Jason, we just came off of a really cool episode with Megan Haselschwerdt at University of Tennessee, one of your colleagues, who engaged your office to think about ways to deal with how her students were using generative AI in her class.
And that's really made me think about how a lot of us are wondering how we can have guidelines that work in our classes, no matter what it is we're teaching.
I think it'd be good if we could talk about that. I've had some really big evolving thoughts around my own stance on this. Even after three years in, I think I've finally written down something I can live with. But there are a lot of options out there for faculty, teachers, and instructors.
There are stoplight protocols. There are guidelines out there that universities have put out that faculty can adopt right out of the box. But do they really fit? I talk with my colleagues at work, and sometimes they're saying, "just tell me what I should put in my syllabus," and I'm replying with things like, "well, it's hard to say that this will fit your syllabus, because really it's an ethical conversation you need to have with yourself and your students."
And so, I think it'd be really good for us to sort of lay out what we're doing, what we're hearing, and get people to give us feedback on what they're doing and even call into question our own stances.
[00:02:43] Jason: Yeah, at the end of the podcast with Megan, we were kind of asking for advice for faculty. One of the things she talked about was that she wished she had a little more coaching when she was designing her class, a little more time to talk about it.
And I think that makes complete sense. I think what we're seeing is that, without a lot of time spent on it, faculty are kind of defaulting to the two extremes. Either they're not saying anything about it, maybe just letting anything happen with regard to AI, or they're putting in a really quick statement, or maybe they say something in their first class about how they don't allow any AI use whatsoever. Megan's story, and this is a spoiler about how good it is, was just a great story about how faculty might wrestle with that and figure out where there might be a middle ground that would actually increase learning and engagement for the students, but also potentially decrease the actual use of AI.
[00:03:51] John Nash: Yeah, maybe talk a little bit about what the overarching approaches are at the University of Tennessee, Knoxville, and I can say what has been promulgated here at the University of Kentucky. I think they're a little bit similar. There are some I call stoplight protocols.
You get a red, yellow, or green kind of approach that faculty can copy and paste and drop into their syllabus. And can you also talk a little bit about the pros and cons of that copy-paste approach, without maybe thinking about the actual work students might do in your class?
[00:04:27] Jason: Yeah, we have a similar thing, it sounds like, to UK, where the provost's office has the strict no-AI, the, you know, AI-freedom kind of thing, and then a moderate approach. And we can put in the links to both of those, both at UK and the University of Tennessee, Knoxville. And so that's essentially the guidance we've been giving for the syllabus.
And I think people do copy and paste those in. But our approach as we're helping faculty design online courses has been a little bit more customized, to spend some more time in that messy middle of "moderate," because when it comes to "moderate," it does take, I think, more intentionality, more communication, and more thought as you approach it. And probably more specific policies that would apply to certain assignments. So there may be one assignment that has a different kind of moderation than another assignment. For instance, I teach a couple of online classes a year for UT, and in my human-computer interaction class in the fall, in their reflection pages, I ask for no AI use whatsoever. So I have a strict policy when it comes to reflections because, up to a level anyway, I want to see the mistakes, I want to hear their thinking, I want to walk with them, even if it's some rambling. But on a lot of the other projects, I have more of a human-first, human-last kind of approach, or what we've called an AI-human sandwich with AI in the middle, a sort of human-in-the-loop approach.
Most of my assignments are that way, and I just really want more transparency. I'm able to articulate that, and I'll put in the link as well; you're free to take a look at and use what we use. I try to communicate with icons as well, to show what that process would look like.
[00:06:37] John Nash: Right.
[00:06:37] Jason: So, it might look like a human, then an AI, or just a human and AI symbol, and then a human again. Or it might just be human, then AI. And so, we expect you to do most of the work with a human first, and then you can use AI to kind of clean up at the end. Or I might even have an assignment where I ask them to use AI to come up with the initial idea,
Right?
So, AI first
[00:07:02] John Nash: right.
[00:07:03] Jason: but then human last
[00:07:04] John Nash: Yeah.
[00:07:05] Jason: to evaluate and critique it.
[00:07:07] John Nash: And those icons, you can put those in the syllabus by each assignment, so at a glance, they know in advance like, oh, okay. Yeah.
[00:07:13] Jason: Yeah, without even reading, they get it from the beginning, and I put the examples in,
[00:07:17] John Nash: Mm-hmm.
[00:07:18] Jason: and then in the actual Canvas assignments, they can see at a glance what my expectations are. You know, I think a lot of policy is about communication, right? And trying to communicate what your expectations are.
And I think if students are left without that communication, then they will do whatever they want to do, which is, I think, completely fair.
[00:07:41] John Nash: Yeah, yeah, yeah. Our stance at the University of Kentucky is similar, and we have a few sites where faculty could consult to get guidance on what to put in their syllabi.
One is a group that was convened by the provost called the Advance Group, and I happened to sit on that. We recommended that course policies exhibit four characteristics: that the AI policy is people-centered, is adaptable to the circumstances of the course, is thinking about the effectiveness of what the policy is trying to do, and keeps awareness upfront for the learners too. And our Center for Enhancement of Learning and Teaching, called CELT, has a site where they've got real boilerplate: no use, conditional use, and unrestricted use, with some examples.
I think, you know, it's interesting as we listened to Megan's episode: even with a "no use" policy, students will use AI. And so, I'm wondering, suppose you've got a rationale that is very simple, where you say, "well, idea generation and analytical thinking and critical analysis are key outcomes in this course," and I'm reading verbatim from an example from our university site.
And so, "as a result, all assignments should be submitted by the student, a hundred percent original work." Great. I think you can say that all day long and students will still jump into ChatGPT. And so, this has been the challenge for me: you can put these in, but there's another level that has to happen in the actual human interaction in the course to get students to hopefully adhere to the rationale and interest you have for why you don't want them to use ChatGPT. Isn't that the case?
[00:09:29] Jason: I think it is, and I think it's fair that students know why. Right.
[00:09:33] John Nash: Mm-hmm.
[00:09:34] Jason: A lot of students are coming into these classes with more understanding than teachers have about AI, how it's used, and how they use it on a daily basis. Right? And so, I think it's really fair to do that.
[00:09:47] John Nash: I think it's important that we have these templates for faculty to use. But I think they're just the tip of the iceberg in the kinds of capacity building we still need to do to help faculty understand what happens when their assignments collide with students' desires to use AI, in spite of whatever's written in the syllabus.
[00:10:10] Jason: Right. Well, and I had an example recently, too, on a dissertation committee. The topic is about AI and instructional design, and we didn't have a policy about dissertations, and so,
[00:10:25] John Nash: Yeah.
[00:10:25] Jason: these cookie-cutter syllabus ideas about AI aren't going to fully explain how we make this a rigorous experience for the student, and how we can, with assurance as a committee, sign off on it and say, this is the student's work, right? How do we foster that transparency, but also recognize it's 2026, you know? We don't expect them to go back to the card catalog anymore, do we? And start writing down all their references and checking out books and putting them on the photocopier.
[00:11:05] John Nash: Yes,
[00:11:06] Jason: You know, there are lots of technologies that we have adopted in the last 30, 40 years, and we need to be adaptive in thinking about this.
At the same time, I think it serves the student well to communicate well, so they know the expectations and none of us ends up in a tough spot at the end. And so, we were able to develop something that seems to work in that regard.
[00:11:32] John Nash: I think a lot of our listeners are lecturers, instructors, professors, and instructional designers, and you mentioned the dissertation work, so let's take a second and talk about that weird special stripe of instruction, which is dissertation advising.
We're talking about that now in our department. We're a chiefly graduate department with a lot of dissertating students, and we're wondering now: what is the stance that dissertation advisors should take? It's different from a course; there's no syllabus for the dissertation advising process.
It's a six-month to two-year mentoring process where the student is expected to engage in independent research to demonstrate that they can be a scholar and execute a study. And boy, can AI creep in just about anywhere. So how shall advisors talk to their mentees about the use of AI? Should there be a department policy?
I'm not sure there should be. Or there could be, but I'm not sure there can be, because every dissertation is different and every dissertation advisor's ethical stance is going to be different. We're a small department, seven or eight faculty, with very different attitudes towards how each believes students may use generative AI in their dissertation writing process. Some are quite liberal with it, some are very scared by it, and I think it's been difficult to get down to the fact that it's a personal mentor-mentee discussion and decision about where each stands. It's a little different from just having a syllabus for an undergraduate design course or a writing course.
It's, yeah. What do you think? Am I going too far here, or is this a special stripe?
[00:13:16] Jason: I don't think you are.
[00:13:17] John Nash: Yeah.
[00:13:17] Jason: I think about how really every classroom, every teacher-student connection and relationship, is built on trust, at least on some level. I think that with a dissertation it's probably 10X what you would need in a master's class, even in independent learning, right? In terms of
[00:13:38] John Nash: Yeah.
[00:13:38] Jason: building trust. And I think that's a lot of it, trust and transparency. It needs to be there, because if the trust starts breaking down on one side or the other, in terms of how this is being produced, and this is,
[00:13:51] John Nash: Yes.
[00:13:52] Jason: pre-AI as well, as we know, right? The hardest dissertation experiences have been when that trust has been eroded, especially between the chair and the student. And so, this is nothing new, but I think AI just brings in another little possible foil to that relationship, and something that on the front end we need to communicate, we need to put on the table and talk about, and also not just come down with some sort of edict from on high: this is the way it'll be. But let's figure out what this looks like between you and me, our own relationships with technology, the content that we're trying to make, the kind of analysis that we're doing. You want to create new knowledge? Well, in some cases you're going to have to use AI in order to take this a step further than what the last person who studied this was doing.
[00:14:48] John Nash: Yeah. And the other wrinkle that comes into this is that generative AI, while it can generate text that can be technically accurate, particularly in study design and in thinking about framing literature, the dissertation process is not just the end written product, which everybody's very familiar with.
You go to the library, you see the printed dissertation, but to get there, that student must orally defend their ideas in front of a committee. So, dissertations, if you're not in this world day to day, are challenging and tricky political and scholarly activities, because the advisor's reputation is at stake with the rest of the committee members, and the student's reputation is at stake with their committee members in how they talk about their ideas. If the ideas are not flowing well from the student in a defense, then the other committee members can wonder if the chair of the dissertation was doing their job.
It's fraught with all kinds of pitfalls, and with generative AI in play now, I'll speak only for myself as a dissertation advisor: going forward, I'm going to be thinking about more mini defenses with me and my student, mini oral defenses, to make sure that they can actually talk through what they're putting out there and do a public demonstration of their learning. Because if I already know they're a good writer, and I've read their writing, and then they start using AI and I can't tell the difference, then I think they understand what they're talking about, but they don't.
I'm in trouble. They're in trouble.
[00:16:21] Jason: Yeah, I love that. You know, it kind of points to things we've been talking about the last two years: how rethinking what pedagogy looks like in the classroom because of AI has actually
[00:16:34] John Nash: Yes.
[00:16:34] Jason: uncovered some of the cracks. I think there are lots of times when someone gets to a dissertation defense and they're not prepared. And it is partly their fault, and maybe mostly the student's fault, we might say; this is their defense, right? They're supposed to be the experts. But it also leans on the committee and the chair.
[00:16:51] John Nash: Yeah.
[00:16:52] Jason: And how much better would it be to have those check-ins, to scaffold not just the writing of the chapters, but the defending of the chapters and the ideas, and putting
[00:17:07] John Nash: Yes.
[00:17:07] Jason: it to the test in terms of speaking it out loud, with somebody who knows a little something too and can push back.
[00:17:15] John Nash: So, I read something recently, Jason, in the Chronicle of Higher Ed, actually within a few days of us recording this. It was a January 9th essay by Geoff Watkinson, and the title really caught me.
He said, "I'm an AI Power User. It Has No Place in the Classroom." And this immediately struck me. I thought he was talking to me. I'm an AI power user. I'm using this tool nearly daily; I think you are too. And I've been concerned about what place it has in the classroom. I've talked before in our episodes and with others about how I teach a design thinking course that I think is almost un-AIable.
But I also teach dissertation writing courses that are AIable. And so I'm thinking, wow, what did he have to say here? And he said what I think I've been thinking. He's been using generative AI since the very beginning, and he saw a change immediately in his own work. Tasks that took him a full day now took 30 minutes. He paid for a premium version of Claude, and he taught himself, through an AI certification, how to use these tools. His output increased and his quality increased. And for the tedious administrative work he was doing, writing proposals for a tech company, it was great.
And then he said: I'm able to do this because I have been an expert in this field for years, and the way I'm using AI is to advance work that I already know how to do. But this is a world apart from teaching 18-year-olds how to put their thoughts on paper. He talks about how it's really the productive struggle in teaching and learning that's important, and that AI can eviscerate it. Therefore, he's being very careful as a power user to be thoughtful about the way it comes into his classes, and it almost doesn't come in at all. And I thought this might be the way, this might be the way to think about this. But how do you frame it in a way that works for you in almost every way you teach?
And that's what I've been struggling with.
[00:19:24] Jason: That's really interesting, and it points to this idea that AI is best used in the hands of experts. But the very
[00:19:34] John Nash: Yeah.
[00:19:35] Jason: reason you're in school is because you're not an expert.
[00:19:37] John Nash: Right. And he's not worried about cheating. You're like, wait, what? No. He's worried about losing the moments of revelation and growth that a student gets when they go through that productive struggle.
[00:19:52] Jason: The aha moments, the light bulb moments
[00:19:53] John Nash: mm-hmm.
[00:19:54] Jason: which are
[00:19:54] John Nash: Yeah.
[00:19:55] Jason: those are teacher payback times too, right? To be able to see those things happen. I think for any of us who teach, those are the very reasons why we get into it. So, yes, you would hate to lose those.
[00:20:05] John Nash: Yeah, yeah, yeah. And so, late last week, for the very first time in three years of using generative AI, I wrote down a policy, a kind of guidelines, that I think I can live with in almost all of my classes.
It's built on what Watkinson said, and also from some folks that you've introduced me to through our podcast related to the ideas of feminist pedagogy and feminist pedagogical frameworks.
And for those who are not familiar with that, it's really about strategies that support learners' goals by promoting learner-centered approaches, presenting community-driven content, and keeping the learning experiences as transformative as possible and close to the ground of the work they're doing, so that you can achieve the goals you want to.
And I drew on some of those thoughts from a book called "Feminist Pedagogy for Teaching Online," edited by one of our past guests, Enilda Romero-Hall, along with Jacquelyne Howard, Clare Daniel, Niya Bond, and Liv Newman.
And by the way, we will be interviewing Jacquelyne Howard and Enilda Romero-Hall about this book. But these notions, along with Watkinson's ideas, bring me to this place where I have an ethical stance about why I think the productive struggle is important, and we'll talk together every time about whether AI is appropriate. And if I catch it being used in a way that doesn't seem right, we're going to talk about that and help you get back to the productive struggle. I think that's kind of where my head is at now.
[00:21:37] Jason: I love that. Can you give us a little sample from your syllabus?
[00:21:41] John Nash: Yeah.
[00:21:42] Jason: Could you do a dramatic reading of it, or,
[00:21:44] John Nash: No. Yes. Like, yeah, a dramatic reading. You pick the voice, one of the ones I like to do, like a Sean Connery voice. Or no, I'm not going to do that.
[00:21:52] Jason: Music.
[00:21:53] John Nash: I can do it as Barney the Dinosaur.
[00:21:55] Jason: Mm,
Yeah, you could do that.
[00:21:57] John Nash: Yeah.
[00:21:58] Jason: I was thinking, with an orchestra playing
[00:22:01] John Nash: oh, right. Yeah.
[00:22:02] Jason: behind you, kind of like escalating to the final moments of the syllabus. Sorry, go ahead.
[00:22:08] John Nash: no, no. Thank you for the offer.
I'll give
[00:22:10] Jason: With all of that aside, just give us a little sample from it.
[00:22:14] John Nash: Yeah, I think it does three things.
It starts off by talking about how AI is powerful, and it's powerful for work you've already mastered, and it's dangerous if it's a shortcut for work that you're learning. In it, I confess that I've been using AI extensively in my professional work for tasks I already know how to do well, and it makes me faster and better at that work.
But education, and the work that we would do in a class, requires struggle, cognitive work, the messiness of not knowing, ambiguity, all the things. And AI can eliminate that struggle. And for me, and I say to the student, to you, that's what I want us to have. So, the second section tells them what I'm asking of them, and I'm asking some of the typical things you would expect to hear, but maybe expanded upon a bit.
I want them to own their thinking. I want them to be transparent about their AI use. I want them to do the foundational work that needs to happen to have the productive struggle. And I want them to name it when it's not working. So, if they're struggling, I'm there. I'm not throwing them into the deep end of the pool without anything. I'm there.
And so, my last section is a commitment to them. I tell them I'm going to make my criteria explicit, I will show my own thinking and my uncertainty, and I'm going to acknowledge when things might not be working and how we can work together, because it's really for us to do this together. And maybe we take a page from Michelle Miller's "same-side pedagogy," which I'm always talking about.
I don't want to be adversarial here, but I want to also admit that you're here to learn something and I'm here to say, I think I can help you do that.
[00:23:55] Jason: I love that. It really is an invitation into that messy middle of the syllabus approach to AI, in a sense, right? Because
[00:24:07] John Nash: Mm.
[00:24:08] Jason: You are welcoming the student then into that productive struggle, even through your approach to how AI may or may not be used.
[00:24:16] John Nash: Yes, and so we'll see how it goes. I think that we as instructors need to be helpful to ourselves and to our students by saying, "we're in this together, and I think I have some guidance that can help you get to a goal you're hoping to reach. And if you take a shortcut and I notice it, I mean, I can look squinty-eyed at what you turned in and tell that you probably skipped some steps and used AI. I'm going to politely call you out on that and tell you maybe you should do this again, because I want you to really learn this stuff." And I think that's where we've got to get.
[00:24:53] Jason: Yeah. Yeah, I love that. Would you share that, perhaps, with our listeners? Can
[00:24:58] John Nash: Yeah, absolutely.
[00:25:00] Jason: you share a copy of that? We'll put it into the resources as well.
[00:25:02] John Nash: And I invite anyone listening to this and reading it to push back on it. Poke holes in it. Tell me how we can make this better. I mean, maybe I'm being pollyannaish here and hopeful, but I think it's, I don't know. We'll see.
[00:25:12] Jason: Yeah. No, I love it.
[00:25:13] Jason: Thank you, John, for that. And yeah, this has been a great little conversation about AI and syllabus policies. I think it's really helpful at the front of a semester to talk about these things. I think we need to have more conversations, and so we welcome conversations from you all.
We're on LinkedIn, and at our website, onlinelearningpodcast.com, you'll see all the notes for this show, as well as places you can reach us on LinkedIn to have more conversation about this. And if there's any way we
[00:25:42] John Nash: Yes.
[00:25:42] Jason: can help you, either through ourselves or through these resources, then yeah, please reach out.
[00:25:49] John Nash: Yeah. If you're a first-time listener, please hit follow and subscribe to our podcast, and you'll get this in your feed. If you like what you're hearing, you'll get more. We've got some good guests coming up.
[00:25:59] Jason: Oh, man, we've got some great ones. I'm really excited to release our upcoming episodes. They just keep getting better and better, and there are a lot of themes happening this year, as expected, I guess, around wrestling with policies and technology use. Of course, these are our ongoing themes, but yeah, we've just got some great podcasts coming.
[00:26:18] John Nash: Good talking to you, Jason.
[00:26:19] Jason: Good talking to you, John. And with that, we will leave you, listeners, with a dramatic reading of a selection of John's AI syllabus.
[00:26:32] John Nash: AI is powerful, and it's powerful for work you've already mastered, and it's dangerous if it's a shortcut for work that you're learning. I confess that I've been using AI extensively in my professional work for tasks I already know how to do well, and it makes me faster and better at that work.
But education, and the work that we would do in a class, requires struggle, cognitive work, the messiness of not knowing, ambiguity, all the things. And AI can eliminate that struggle. And I say to the student, to you, that's what I want us to have.
And I'm going to acknowledge when things might not be working and how we can work together, because it's really for us to do this together.
I don't want to be adversarial here, but I want to also admit that you're here to learn something and I'm here to say, I think I can help you do that.





