S3xE12: Socially Responsible Computing UTA Program
Episode Summary
How do you infuse a class with socially responsible computing in a way that engages students? Kathi Fisler from Brown University discusses Brown's undergraduate teaching assistant (UTA) program, where UTAs were hired specifically to find ways to do just that in the classes they were embedded in. In this episode, we talk about the program, how she teaches socially responsible computing in her intro computer science (CS) classes, and how her goal is to get students to ask the right questions, while letting go of needing to know the answers, or even how to answer them.
You can also download this episode directly.
Resources
Paper that inspired this episode: Lena Cohen, Heila Precel, Harold Triedman, and Kathi Fisler. 2021. A New Model for Weaving Responsible Computing Into Courses Across the CS Curriculum. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE '21). Association for Computing Machinery, New York, NY, USA, 858–864. https://doi.org/10.1145/3408877.3432456
Transcript
Kristin[00:07] Hello and welcome to the CS-Ed Podcast, a podcast where we talk about teaching computer science with computer science educators. I am your host, Kristin Stephens-Martinez, an assistant professor of the practice at Duke University. Joining me today is Kathi Fisler, a full research professor and associate director of the CS undergrad program at Brown University and co-director of Bootstrap.
Kathi[00:27] Hi, Kristin. Thanks for having me.
Kristin[00:28] So, Kathi, first I’d like to ask, how did you get to where you are today? So, for example, what is a full research professor?
Kathi[00:37] Well, so a full research professor. I am a full professor in rank. I am a research professor because I'm not on the regular tenure track. I was previously a tenured full professor of computer science at Worcester Polytechnic Institute (WPI), and I moved to Brown five years ago to take on the role of associate director of undergrad studies and to continue some of my research projects, which have been joint with people at Brown for a while. So it was a good professional move, and this title was the one that best fit out of the catalog of things that Brown offers. Bigger picture of how I got here: I started as an undergraduate math major, and they required intro computer science of all math majors at the time. So I had an open slot in the schedule second semester freshman year and decided to get this required CS course out of the way. It did not go well. I was not a natural computer scientist. I didn't really get it. I basically got through the class because I had a friend a couple of years older who had gotten through it and was helping me a lot with it. And I came out of that first class saying, I can't be that bad at this. This is giving logical instructions to a hunk of metal. And if I'm not doing this well, it's all on me. It's not on the computer. So I decided I needed to take the second class to see if I could figure out why I had struggled so much with the first class. So I took the second course. I was failing it at mid-semester, but I was at a small college with supportive faculty who kept their office doors open. And I just spent a lot of time in the office of the professor. And about two-thirds of the way through the course, it all started to make sense.
Kathi[02:38] And my work in the back third of the course was really strong. The professor decided to ignore the first two-thirds of the course and graded me on where I was at the end. So then I decided I had to take the third course to prove that the back part of the second course hadn't been a fluke. And around that point, I realized that I was having far more fun in computer science, and I wasn't really cut out to be a college math major. And here I am.
Kristin[03:07] I love that, how you were like, I have to take the next class because I can't be that bad.
Kathi[03:13] I don't know what I was thinking at the time, but I'm glad it turned out the way it did. A lot of my friends laughed at me, and I did have a second major as an undergrad. So if this whole computer science thing didn't work out, I would still graduate.
Kristin[03:27] Nice. So this semi-segues into our main topic, which is about UTAs and socially responsible computing. This conversation, this recording, got started because I read your paper from SIGCSE TS 2021. So how about you first give us an overview of that? And since it's been a while, I'm assuming, since you submitted that paper, give us an update on what's been happening since then.
Kathi[03:54] So the big picture is that the Brown Computer Science Department made a decision a couple of years ago, and this was through the leadership of the then chair, Ugur Cetintemel, who's a database researcher by background. The decision had been made in the department to address the social impact of computing by weaving it into many courses in the department. We were not going to do a standalone computer ethics class. We didn't think that was scalable to the number of majors we had, and we didn't think that was an effective approach for having students understand where social impact issues come up in practice. So we had decided to embark on this department-wide effort to embed this content in as many courses as possible. Like many computer science departments, we don't have a faculty full of people with expertise in these areas. We have a department full of faculty who are very open to working this content in, but not confident in how to do it themselves. I'm also in a department at Brown that has had a very strong undergraduate TA culture for a very long time. Our undergraduate courses, by and large, run on undergraduate TAs, with very few graduate student TAs. So that context, I think, is important for understanding this paper and how it came to be.
Kathi[05:31] So we decided to hire a special group of teaching assistants specifically to work on socially responsible computing. Many of these students either had a second major or a lot of coursework in something broader, whether it was society and technology studies or philosophy, or an interest in ethics. These undergrad teaching assistants were assigned to work with courses to try to figure out how to integrate this material into the assignments and the flow of the courses as they already existed. The paper was written by me and three of the original socially responsible computing TAs, who wanted to step back and analyze how the program was going a year in. They approached me about doing an independent study research project to assess the program. And when we did that and saw we had some findings from it, we decided to submit a SIGCSE paper. That's what the paper's about and how it came about. It describes the program and presents some of the findings from the assessments we've done. And since then, we've been making a lot of updates and changes to the program to respond to what we've been learning about how to do this well, in general and in our context.
Kristin[07:02] So, can you fill in some details for me? Like, how big are your classes? How many TAs are assigned to a typical class, what's the TA-student ratio? And maybe give an example of how socially responsible computing was integrated into a course, maybe for CS1 or CS2, just because most people are familiar with those so it'd be easier, or pick some upper-level one and talk about that. Fill in the picture for us a bit?
Kathi[07:36] Sure, I teach both a CS1 and a CS2 that have integrated this, so I can give the most useful detail from those courses. But I can also mention what we do at the upper level. We maintain a 10-to-1 student-to-TA ratio at the upper level and an 8-to-1 ratio in the first two years. Our intro-level classes are anywhere from 200 to 400 students. So we have some courses with staffs of 30 to 45 teaching assistants as part of the course infrastructure. There is a lot of hands-on work between students and TAs in the department. Something else that I think is unusual relative to some other departments I've talked to: our TAs are allowed to grade. They can grade final exams, though they cannot assign course grades. Our TAs are used to having a degree of responsibility that I know many schools reserve for graduate TAs, and this is a program that's been in place for 40-odd years at Brown. So it's an established culture of a lot of TA interaction.
Kathi[08:54] When it came to the social responsibility content, there were lots of different ways that courses approached it at first. I would say many courses initially tacked on some article readings to assignments that were already going out. So you have an assignment going out on a particular machine learning approach; you add a reading and reflection question to the assignment. I think what we did in the intro courses was try to be a little more interwoven. So, for example, in my CS2 class, which for us is data structures and object-oriented programming, we do an assignment that starts with a lecture in class where we look at a YouTube-style recommendation algorithm. This was based on danah boyd's article, The Fragmentation of Truth. It's a really nice piece, looking at some of the fairness and algorithmic flaws in recommendation systems. And I basically built a lecture around it, where we look at the architecture of a recommendation system from a model-view-controller perspective, and we ask: what role does each of the model, the view, and the controller potentially play in housing these unfair results that can come from these algorithms? So we talk about who gets to see what through the view. In the controller, something is deciding the priority of articles to show you, possibly leaving out key information that might represent users or videos; that decision lives in the controller part. But the model has to store sufficient information about the videos and the users in the first place. The way I like to think about this is, I want students to realize that the decisions they make as entry-level programmers can have implications for social impact. This is not just a matter of C-suite executives deciding we're going to make fair products, and the world is a happy place. These seemingly innocent decisions we make in the choice of data structures and sorting criteria and things like that can have positive and negative social impacts. And so I wanted to use that to bring this point home to students, that this is not an abstract concept. This is something they need to be thinking about.
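To make that model-view-controller framing concrete, here is a minimal sketch in Python, not the actual course assignment; the video attributes, ranking key, and feed size are all hypothetical stand-ins:

```python
# A hypothetical sketch of where social-impact decisions hide in a
# model-view-controller recommender. Not Brown's actual assignment.

from dataclasses import dataclass, field

@dataclass
class Video:
    title: str
    click_count: int
    # MODEL decision: which attributes get stored at all? Leaving out,
    # say, source reliability means no later layer can rank on it.
    source_reliability: float = 0.0

@dataclass
class User:
    watched: list = field(default_factory=list)

def rank_videos(user: User, videos: list) -> list:
    # CONTROLLER decision: the sort key encodes what "best" means.
    # Ranking purely by clicks rewards engagement, not accuracy; the
    # user is available here but this simple ranker ignores them.
    return sorted(videos, key=lambda v: v.click_count, reverse=True)

def render_feed(videos: list, limit: int = 3) -> list:
    # VIEW decision: who gets to see what, and how much of it.
    return [v.title for v in videos[:limit]]

videos = [
    Video("Verified news report", click_count=120, source_reliability=0.9),
    Video("Viral conspiracy clip", click_count=5000, source_reliability=0.1),
]
print(render_feed(rank_videos(User(), videos)))
# ['Viral conspiracy clip', 'Verified news report']: an "innocent"
# sort key puts the unreliable video on top.
```

Each layer's seemingly small choice, a stored field, a sort key, a display cutoff, is exactly the kind of entry-level decision being described as socially consequential.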
Kristin[11:41] In that lecture, because now I'm really curious about this lecture, do you try to connect it to their personal experience at all?
Kathi[11:52] We do the personal experience part in other ways. In that CS2 course, we have roughly five assignments on socially responsible computing that come in in different ways, and this is similar in my CS1 course. So, for example, in each of those courses, I do a background survey at the beginning of the semester, collecting prior experience and students' self-assessments of various technical skills, so I have a sense of where students feel they're at. And we'll put a question on there along the lines of: you've been asked to develop something like a Yelp or a Dàzhòng Diǎnpíng, which is the Chinese version of Yelp. You've been asked to develop one of these recommendation systems for restaurants. What would make it fair? And so we invite the students to tell us what they think makes it a fair system. There's another question we ask about what criteria you look at in deciding whether an algorithm is good. Most of them say efficiency, and not many of them say anything about the impact it's having on people. So we kind of set the stage. We're not directly asking them what their experiences are.
Kathi[13:15] I've been uncomfortable doing that in an anonymous or large-scale space like that, because I feel like it's asking students to put themselves on the line before we've worked together enough to have a trust relationship as to what I'm going to do with that information. But we do invite students to talk about this through systems where they can raise issues that they've experienced or that they worry about. And that's some of how I try to bring that perspective in. For me, this is one of the things I think about in how to talk about this content respectfully, knowing that there are students in my class who have been harmed, or have families or communities that have been harmed, by these technologies. I don't want to put them on the spot and say, here, represent these problems for the rest of your classmates. That's a burden I don't think should be on them. But I also want to find ways for them to contribute to conversations in ways that we can share out to the rest of the class. And that's a tension I'm still trying to figure out how to work with in a reasonable way.
Kristin[14:35] So when you ask them these open text box questions, how do you analyze that data and then talk about it to the students? Because I’ve been thinking about wanting to do something like this, but part of me is like, with so many students, I know how to do qualitative research, but the turnaround time is potentially prohibitive.
Kathi[15:01] So I think the first thing to realize is I am not trying to do publishable quality research out of those baseline studies.
Kristin[15:08] Okay.
Kathi[15:09] What I am trying to do is get a temperature check on where we are in the room coming into this. I tend to use the data from that in two ways. One is we'll go through and tag everybody: who mentioned efficiency in the open-ended one, who mentioned lines of code, who mentioned resource consumption. So we have these five or six categories that we tag as to who came up with what. And then we can produce a very simple chart that says, here's the frequency with which students in the class raised these issues in the open-ended part. Then, for the fairness question, I mine the responses for different, interesting perspectives on who might be affected and how. I pull out a set of highlight quotes. And when we're doing this model-view-controller architecture lesson in the course, I pull up the set of 12 to 15 quotes I collected and say, now I want you to map these onto where in the system they get controlled. Are they model? View? Controller? Outside policy? So that's how I'm trying to use it. I'm saying, here are voices from your classmates on these issues; let's use that as a seed in this exercise that I can tailor quite a bit. So I don't see that as where the research is. I am curious how much students are aware of these issues coming into college and into a CS class. What I'm really trying to get a sense of is, as a whole, are more students thinking about societal impact as a key evaluation criterion for a system, alongside performance?
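For listeners curious what that lightweight tagging could look like, here is a minimal sketch; the categories, keywords, and responses below are hypothetical stand-ins, and a real pass might well be done by hand rather than by keyword matching:

```python
# A hypothetical sketch of tagging open-ended survey responses into a
# handful of categories and printing their frequency. Keyword matching
# is only a stand-in for the manual tagging described above.

from collections import Counter

CATEGORIES = {
    "efficiency": ["fast", "efficient", "runtime"],
    "lines of code": ["short", "concise", "lines of code"],
    "resource consumption": ["memory", "energy"],
    "social impact": ["fair", "people", "bias"],
}

responses = [
    "A good algorithm is fast and uses little memory.",
    "It should be fair to the people it affects.",
    "Efficient runtime and concise code.",
]

counts = Counter()
for text in responses:
    lower = text.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in lower for word in keywords):
            counts[category] += 1

# The "very simple chart": frequency of each category across the class.
for category, n in counts.most_common():
    print(f"{category}: {n}")
```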
Kristin[17:11] Okay.
Kathi[17:13] And that's what I want out of CS2. I want them to say: how do we evaluate an algorithm? We look at its runtime, we look at its space, and we look at its social impact. I want those three things on par when they think about the criteria they should be attending to as they go on from CS2.
Kristin[17:33] Besides recommender systems, is there another example where those three questions are salient? Because I think most people will get recommender systems, but if you pulled out something like sorting algorithms, where you still have to consider these three things, I feel like that would be a really tangible example of why all three are important to consider.
Kathi[18:02] I think you can get at some of these, and I do get at some of these in my CS1, by looking at things like table schemas. How do you represent names? Is it first name, last name? Okay, now you've knocked out some cultures. There's a nice series of articles called Falsehoods Programmers Believe About X: falsehoods programmers believe about names, about dates, and I think there might be one on time. They are these long lists of 50 or so things that people building real-world-scale systems have learned: you can't make this assumption about the structure of a name or a date if you're going to try to be inclusive. I use that as a basis, say, in my intro class for how we're going to set up a table that's storing information about people, say, for a medical research project. That's a similar place where you're thinking about what's an effective organization of this table for the questions I'm trying to answer. So you also get at that performance-versus-impact dimension a little bit there. You can get into usability and accessibility dimensions with, for example, language internationalization. If you are building some sort of tool that's going to support people giving voice-driven feedback, well, now you have to deal with international languages and how that gets represented. So that comes up. I have a colleague who teaches the upper-level programming languages class, and he does a lot with this question of who's left in and who's left out in designing schemas, and even in some decisions in programming languages and programming language implementations. What does a name look like, and how do you make a naming scheme in a programming language that's reflective of how different cultures will think of things? So I think the algorithmic decision systems are the easy places to start. The more interesting challenges come in trying to go beyond that, as you're pointing out.
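As a concrete version of the falsehoods-about-names point, here is a small hypothetical sketch contrasting a rigid first/last schema with a more inclusive one; the names and field choices are illustrative only:

```python
# Hypothetical illustration: a first/last name schema quietly excludes
# many real names; storing the name as given avoids that assumption.

def split_name_naive(full_name: str):
    # Rigid schema: assumes exactly two space-separated name parts.
    first, last = full_name.split(" ")
    return first, last

def make_person(full_name: str, sort_key: str = ""):
    # More inclusive schema: keep the name as the person gives it,
    # with an optional separate sort key instead of a forced split.
    return {"name": full_name, "sort_key": sort_key or full_name}

for name in ["Ada Lovelace", "Björk", "Gabriel García Márquez"]:
    try:
        print(split_name_naive(name))
    except ValueError:
        print(f"naive schema rejects: {name}")
    print(make_person(name))
```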
Kristin[20:34] Yeah, now I really want to read those articles about things that programmers assume that aren't true. One thing we should mention before we go back to our original topic: your CS1 is not the typical CS1, I believe, because it's very data-oriented, right? Do you want to give a brief blurb on that? And then we'll backtrack to the original topic of this podcast.
Kathi[21:00] Sure. So the CS1 that I teach is what I call a data-centric introduction to computer science. It serves as both the intro course for the data science program and one of our possible intro courses for the CS major. We start the course looking at tabular data; our first data structure is a two-dimensional table. So we do a lot of work on decomposing problems and basic scripting by doing operations on two-dimensional tables like you might find in a Google sheet: filtering rows, computing new columns, things of that nature. From there, we go into lists and trees and other forms of data organization, all motivated by when we run into a limitation with tables. In that course, I'm assuming that I'm working with students, many of whom are not going to be CS majors, who are going to work on projects where they want to know how to programmatically deal with a large spreadsheet's worth of data. And that's the framing. It's a fantastic framing if you want to think about social responsibility, because so many issues that have social impacts end up relying on data sets. So it's been a very natural weaving for me to do those things together. For example, we talk about things like how you compute the relevance of an ad to a particular person. When we're doing basic conditionals, we say, well, look, we can take in some variables as inputs, and we can write a conditional statement. Then we get to tables: maybe we want to do a similar computation, but looking at different cells of the table. So we scale it a little more. And then we get to filtering rows of tables, and we expand it a little bit more. So I use that sequence of understanding ad matching to run through this progression of computing topics, from conditionals to tables to function-based extraction from a data set. And I give them a little overview of machine learning and how this stuff isn't actually done with if statements. So that's a way to work some of that into a CS1.
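Here is a rough Python sketch of that ad-matching progression (the course's own language and data differ; the attributes below are made up): first relevance as a plain conditional, then the same predicate scaled up to filtering rows of a table:

```python
# Hypothetical sketch of the ad-matching progression: a conditional on
# a few inputs, then the same predicate filtering rows of a table.

# Step 1: relevance of one ad to one person, as a basic conditional.
def ad_relevant(age: int, interests: list) -> bool:
    return age >= 18 and "running" in interests

# Step 2: the same idea applied across a table of people (a list of
# dicts standing in for a spreadsheet-style two-dimensional table).
people = [
    {"name": "Avery", "age": 22, "interests": ["running", "music"]},
    {"name": "Blake", "age": 16, "interests": ["running"]},
]

matched = [row for row in people
           if ad_relevant(row["age"], row["interests"])]
print([row["name"] for row in matched])  # ['Avery']

# The predicate itself encodes who gets targeted and who is left out,
# which is where the social-impact conversation enters.
```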
Kristin[23:26] Yeah. So to backtrack to the original topic about using TAs: I think you touched on it a little when you were talking about how Brown has a long history of having undergrad teaching assistants. But you could have done this with other people besides undergrads, because some people would ask why a faculty member would listen to an undergrad TA who is just learning this stuff rather than a colleague. So why did you choose to run this through the undergrad program versus some other way of starting this kind of socially responsible computing initiative?
Kathi[24:05] Partly, the idea of giving undergraduate TAs a significant role in framing our courses has been Brown culture for a really long time, so it was the natural thing to do in our culture. When we are starting new courses, for example, there's heavy involvement of undergrads in brainstorming assignments and interesting ways to go at material. So partly it came out of that culture. And partly we thought that working with the undergrads would help us tap into ways to talk about this content that would relate to current students.
Kathi[24:49] And I think that's been one of the strengths of this: we get some information on how the current undergrads are seeing these issues and what aspects of it they are either tuned into or not quite tuned into. That's really useful calibration for us. Why didn't we go the route of hiring postdocs with expertise in philosophy or social justice, as some other universities have done in their programs? Partly, when we asked around at the university, there weren't people with available cycles to work on something like this. We had talked about that, but we wanted to start making the move in that direction, and we didn't have other people on campus. Stepping out of Brown's shoes and looking at other universities in general, working with undergrads is, frankly, much more cost-effective than hiring postdocs in other areas. And many of the assignments that I have seen from places that did this through postdocs in the social sciences are done more as add-on assignments; they're not deeply woven into the technical content. I think we have embraced the idea that the closer we can get this to the technical content students already respect having to know, the higher the likelihood that we're going to get through to them about it.
Kathi[26:28] That is absolutely not to say that we have undergrads running off all on their own, trying to come up with how to teach this material. That doesn't work. It was one of the shortcomings of the program that emerged coming out of the first year, and we talked about it a little in the SIGCSE paper. We now have a philosophy professor who specializes in tech ethics overseeing the program, along with an advanced Ph.D. student from media and communication studies who's doing a lot of work. They work very closely with the undergrad TAs in planning out what topics they want to cover and how they see integrating them into the courses. So we make up for the expertise the faculty don't have by bringing in these colleagues to help the students flesh out their ideas, and then we still use the undergrads, who have the knowledge of the course structure, to figure out how to pull it into different assignments.
Kathi[27:37] I would say the other thing we really learned about training and expertise was the need for expertise in pedagogy around these more social-science- and humanities-facing issues. Many of us had no idea what kinds of assignments you create. You can say, read this article and tell us what you think. But I'm pretty sure colleagues in the humanities and social sciences don't ask students, hey, what did you think, and then grade that.
Kathi[28:10] Right. They have sophisticated, well-established ways of asking these kinds of questions pedagogically. And that was another form of expertise that we have been bringing in, both through our colleagues in other disciplines and through me coming in as an expert in CS education and CS pedagogy. So we now have this kind of interdisciplinary team of faculty supervising this group of undergraduate TAs, who then work with the faculty in their courses.
Kristin[28:43] So pushing a little bit more on the pedagogy piece, because I watched your video and I looked through the paper, and I was like, this sounds great, but how do you get students to actually believe you that this is important, as opposed to a checkbox they just check, walk away from, and immediately forget?
Kathi[29:03] I think there are a couple of things that help students take it seriously. And to be clear, not all students do take it seriously. We still get pushback from students saying this is not relevant. But we are increasingly having juniors and seniors coming back and saying they're being asked questions about this in interviews for internships at certain software companies. Frankly, I think that will be the greatest driver of student response to this: if they think it's going to affect them in interviewing for jobs, they're going to take it more seriously. So that has been one effective way to look at this. Frankly, I also have the benefit of being at a fairly liberal-leaning university in the Northeast. A lot of our students are already concerned about these kinds of issues, about social justice in a broader context, so we're able to tap into that. There are some students who are not going to be that excited about it. But then again, there are students who aren't excited about learning how to implement hash tables either. So I think it's one of many things. When we introduce it as another skill you need, students are going to accept some of the skills we teach them and reject others. And that's fine; that's going to happen. So I kind of look at it from that perspective. I don't have to make everybody love this material. My job is to teach them what questions they should be thinking about, and to hope that some of them take that into their practice, the same way I hope they will take testing strategies into their practice, and that they will eventually work for organizations that expect and value them for being able to have these conversations.
Kristin[31:05] So, transitioning a little bit: let's say you wanted to give advice to a faculty member in a department who wants to try to start an initiative like this. They might not even be the chair of the department, but someone who thinks this is an important thing to try. What advice would you give them? And perhaps provide a bit of a framework of the pros and cons of how you did it, and how they should think through trying to do it at their own institution.
Kathi[31:38] The first thing I would suggest is take a look at what's in your course, and where you see issues of representation, fairness, or privacy potentially coming up in the topics you are already doing. There are a fair number of people working in this space that you can look at for ideas. There is a socially responsible computing teaching handbook that emerged from a Mozilla challenge: Mozilla funded a group of faculty at different institutions across the country to work on projects like this, and Brown was one of the institutions that got some of that funding. So I would look at the handbook that came out of the Mozilla socially responsible computing effort, because there are a lot of articles and references in there to different people in this space and what they have tried. Perhaps to me, the most interesting finding we reported in the paper was from students saying, I take this more seriously when the professor gives it lecture time.
Kathi[32:55] And so the students were noticing the difference between classes that tacked this on as a reading to a homework, which kind of everybody knew you didn't really have to take seriously, and a professor choosing to say, this is important enough that I'm going to pull some lecture time for us to talk about it, whether as a discussion or as a guided activity like what I'm doing. The sense I got is that students want to hear this from the people they respect, and those are the faculty. So another piece of advice I would give somebody is: figure out how to talk about some of this in your class. Don't panic about not being an expert, because there are ways to talk about these topics that don't require a lot of expertise. And again, I'll refer you to the materials referenced off the Mozilla effort for examples of how you can do this. You do not have to be an expert in machine learning, AI, or social justice to begin the conversation.
Kathi[34:13] The one last piece of advice I would give is: be realistic about your expectations of yourself. I know for myself that I am not an expert in social impacts. I'm not trying to be an expert. I'm trying to get students to learn to ask questions. And learning to ask questions is a different skill from knowing how to answer them. Teaching somebody how to answer these questions, how to build fair and reliable systems, requires a lot of expertise outside of computer science to do well. There's been a really nice report that came out just in the last couple of months from an NSF-based working group on increasing responsible computing in the practice and teaching of computing, and it emphasizes the need for expertise beyond what computer scientists have. We are not teaching students to solve the problems. We are teaching students to be aware of the problems and to know to ask questions, so that when they are on a job and asked to do something, they might go to their manager and say, hey, is anybody thinking about X? That can be a way to get over the imposter syndrome, or the lack of expertise, that a faculty member might feel in actually teaching students how to address these issues, because we can't go at this without respecting the considerable work in areas outside of computer science that's needed to do justice to these questions.
Kristin[35:59] Yeah, though I feel like in some ways I can let go of the need to have an answer to those questions, but another part of me still wants to try to have one. And then I feel like the students will also be like, how come you're making me ask these questions, but you're not giving me a way to answer them? Is there a way to resolve that discontent?
Kathi[36:24] One thing I would eventually like to see, and I'm hoping we are moving toward this at Brown as we do more faculty hiring, is a model where we have dedicated courses on the social impacts of computing that students who really want to learn more about the solutions and the approaches can take. In some places, I've heard this question of how to teach responsible computing framed as standalone course versus integrated, and I don't think that's a useful dichotomy. You need both. We need the integration to reach students where they are and raise the issues. We need the standalone courses to be able to go deeper, with the expertise that some people will have, for learning how to address these questions. So one of the ways I make myself feel better about this is, first of all, trying to read a lot so that I improve my own understanding of it, but also seeing myself as a piece of a system. I don't pretend that I'm going to be the whole system. I'm doing my part to try to set up a system in which all the pieces will come together. And this is partly something I'm doing in my role in undergrad studies: trying to set foundations and ways that we go about this program so that it will get to the kind of robustness and respect for other disciplines that it needs to have, but not pretending that I can do it all on my own.
Kristin[38:01] Yeah. Part of me is very much like, I wish I had all the answers, but I know that's not possible.
Kathi[38:09] You know, I think my experience is that the more you read, the more you get disabused of the notion that you will have all those answers. Partly that's because so much of what forms an adverse impact is based on context: you take the same algorithm and drop it in two different settings, and you get two completely different experiences, sets of affordances, and sets of harms. So to try to tell ourselves, oh yeah, I'm going to totally master this; I am not going to totally master the health care system, the prison-industrial context, public school education, the financial system. Context matters. And I think this is a lesson that in CS education, in general, we are learning slowly. We can't take a study that we did in a course at one school and assume that we're going to get the same results at another school just because both studies used Java in CS1.
Kathi[39:13] Right. The community is slowly learning that the context of what students were taught, what they knew coming in, and the nature of the assignments all actually matter to having robust research results. This is another example of that. The context is bigger, the context is messier, and it matters even more. So I think the only way to do this respectfully is to acknowledge that we don't understand all these contexts. You have to work with people with expertise in those contexts. And that's why, again, this mindset of "can you ask the questions" matters, because then you can get together with somebody with expertise in the context and together work toward figuring out how to not build irresponsible tech.
Kristin[40:05] Yeah. I think for me, I'm in the phase where I've accepted that I can't have all the answers, but I'm not past the phase of feeling like I should have all the answers anyway. Maybe once I get past that, I'll be like, alright, at least I can ask good questions.
Kathi[40:28] Right.
Kristin[40:29] All right. So as we close out, let's do the TL;DL: too long, didn't listen. What would you say is the most important thing you would want our listeners to get out of our conversation?
Kathi[40:40] I think what I would like people to take away is that responsible computing is not just about putting readings and articles in your machine learning class. There are scenarios and problems and questions that you can bring into the technical courses and the technical content you're teaching, so that students can see it where they are, in ways that may show up in the job interviews they're doing now, and that don't require the professor to feel that they have extensive expertise in algorithmic fairness and some of these issues in order to have an impact in introducing these ideas to students.
Kristin[41:25] You're reminding me a little bit of an analogy. It's kind of funny, in some ways, how faculty feel like they have to be an expert in something before they can teach it. And I'm like, but are you really an expert in how the bare metal works all the way up to the top of the software engineering stack? I would suspect not. But you still feel like you're expert enough to teach the material you are teaching. So why are you changing your standards?
Kathi[41:50] I mean, there's a huge professional development piece here for faculty as well. And, you know, none of this is on the topic we're supposedly talking about, TAs and everything else. But I think this is all an ecosystem, right? To get this stuff into classes, there's going to have to be this ecosystem of professors getting comfortable, outside experts contributing, and students able to help us understand what is and isn't landing with their peers. No one piece of this solves it. It's a systemic change to how we think about talking about computing and responsibility.
Kristin[42:31] Yep, that’s right.
Kathi[42:35] It’s wonderfully exciting if you’re into this stuff. I mean, it’s a fascinating systems design and engineering question.
Kristin[42:43] Thank you so much for joining us, Kathi.
Kathi[42:45] Thanks. It’s been a lot of fun.
Kristin[42:47] And this was the CS-Ed Podcast, hosted by me, Kristin Stephens-Martinez, and produced by Amarachi Anakaraonye. And remember, teaching computer science is more than just knowing computer science. I hope you found something useful for your teaching today.