  • Stacey Forsyth, Director, CU Science Discovery
    https://www.linkedin.com/in/stacey-forsyth-8595b99/
  • Bridget Dalton, Associate Professor
    http://www.colorado.edu/education/bridget-dalton
  • Ellie Haberl Foster, Research Associate
  • Scott Sieke, STEM Education Designer
    https://www.linkedin.com/in/scott-sieke-06715a77/
  • Jackie Smilack, Graduate Research Assistant
  • Ben Walsh, Graduate Research Assistant

Project: STEM+C: Integrating AI Ethics Into Robotics Learning Experiences
https://colorado.edu/project/imagine-ai
University of Colorado Boulder
Facilitators’ Choice

Public Discussion

Continue the discussion of this presentation on the Multiplex.

  • Stacey Forsyth

    Lead Presenter
    Director, CU Science Discovery
    May 11, 2021 | 12:48 a.m.

    Thanks for stopping by to check out our video! Our STEM+C project is inspired by the need for educational materials that support youth in exploring ethical issues related to artificial intelligence (AI). As AI technologies become increasingly prevalent in our daily lives, it is critical to educate and empower youth to be both critical consumers and ethical designers in our digital world.

    Although this project was originally designed to be integrated into robotics summer camps, we shifted our plan last spring, due to COVID-19. When the pandemic limited our ability to run in-person summer camps, we transitioned to offering online camps (for middle and high school students) that integrated literacy (short stories), computer science (online AI demonstrations and activities) and multimedia design (comics, videos and chatbots), developing new activities that could work well in a virtual setting. Following our positive experience last summer, we then tested the materials in two 9th grade English Language Arts classes, held online in fall 2020.

    We are currently analyzing data collected during these four programs, including pre-/post- surveys and AI drawings, as well as student artifacts and interviews. We’re interested in hearing from other educators working to bring AI education into K-12, as well as those integrating computer science (CS) with other disciplines. How have you approached exploring ethical issues related to AI and related technologies? What topics or projects have been most interesting to your students? What challenges have you faced in tackling some of these issues in formal and/or informal learning settings?

  • Andres Colubri

    Facilitator
    Assistant Professor
    May 11, 2021 | 10:45 a.m.

    Thanks for a great video, and for taking on such an important problem! The use of stories to engage students with the challenges posed by the widespread use of AI in society sounds like a great idea. A couple of questions: first, are you anticipating creating new stories, perhaps contributed by the students themselves? Second, you mention in the video that students will build their own AI systems. That's very interesting, but it also brings a whole other element of complexity into the project. How are you planning to do it? Do you have specific software or tools in mind?

  • Stacey Forsyth

    Lead Presenter
    Director, CU Science Discovery
    May 11, 2021 | 05:15 p.m.

    Great questions, Andres! Yes, we're certainly planning to create additional stories for the project, and we currently have a new story (about algorithmic bias in the criminal justice system) in review that we're hoping to test with youth soon. Working directly with teens and learning about the issues that most resonate with them helps provide ideas for new stories -- but if anyone has ideas for other stories that they'd like to see, please let us know! We hope to get to the point where students are contributing their own short stories to the project, but in this first year we focused on supporting students in telling their own stories through comics (using the comic-making software, Pixton EDU). This enabled students to create their own stories about issues of interest or concern, but in an approachable way that seemed to be really fun and engaging for most students (and, added bonus, worked well over Zoom, too!).

    In terms of designing their own AI systems, we used Google's Teachable Machine to introduce students to machine learning and in particular, to highlight the impacts of biased training data. Teachable Machine is easy for students to use and understand, regardless of their prior experience with computer programming, which made it a good fit for our purposes. Some other online tools, like App Inventor and Machine Learning for Kids, offer similar functionality but provide opportunities to integrate some basic programming as well.
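
    For anyone who wants a code-level version of the biased-training-data demo, here is a minimal sketch in Python (illustrative only, with synthetic data -- this is not part of our curriculum, and the group/feature setup is entirely hypothetical):

    ```python
    # Minimal illustration of biased training data: a classifier trained mostly
    # on examples from one group performs worse on the under-represented group.
    # (Synthetic data; hypothetical sketch, not project code.)
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_group(n, shift):
        """Two-class data for one group; `shift` moves that group's features."""
        X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
        y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
        return X, y

    # Skewed training set: 950 examples from group A, only 50 from group B.
    Xa, ya = make_group(950, shift=0.0)
    Xb, yb = make_group(50, shift=2.0)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

    # Balanced evaluation: accuracy drops for the group the model rarely saw.
    for name, shift in [("group A", 0.0), ("group B", 2.0)]:
        Xt, yt = make_group(500, shift)
        print(name, "accuracy:", round(model.score(Xt, yt), 2))
    ```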

    One challenge we face in our project, particularly in our online summer camps (due to time limitations), is finding the right balance between time dedicated to exploring and tinkering with the technical/AI tools and time spent diving into the relevant ethical issues. It can be a tricky balance, allocating sufficient time for each component so that students are able to develop their understanding of how AI works while also having time to reflect on its broader societal impacts.

  • Andres Colubri

    Facilitator
    Assistant Professor
    May 12, 2021 | 10:19 a.m.

    People at the Processing Foundation (https://processingfoundation.org/), including Daniel Shiffman, who runs the popular Coding Train channel on YouTube, are very interested in AI ethics. That's a community that could be interested in your project, and in user-contributed stories.

  • Stacey Forsyth

    Lead Presenter
    Director, CU Science Discovery
    May 13, 2021 | 12:24 p.m.

    Thanks for the suggestion, Andres - we'll look into that!

  • Michael Chang

    Facilitator
    Postdoctoral Researcher
    May 11, 2021 | 11:56 a.m.

    I loved the opening and closing with the Echo (?) device. I appreciate the idea of stimulating young people’s imaginations with stories featuring AI-based ethical dilemmas. One question I had was how you chose those stories, and whether there was a particular focus on highlighting specific ways that AI bias manifests (e.g., racial bias). The other question was whether young people were able to extract core concepts of AI, identify how AI bleeds into their day-to-day lived experiences, and, consequently, develop critiques and reimaginations of how those AI applications could be reformed for the better.

  • Stacey Forsyth

    Lead Presenter
    Director, CU Science Discovery
    May 11, 2021 | 08:24 p.m.

    Thanks, Michael – glad to hear you enjoyed the Alexa opening.  : )  (With so few photos and videos from this year’s remote programming, we had to get a little creative with the video!)

    In developing this first set of short stories (which were authored by Ellie Haberl, a member of our team), we initially identified key AI ethics issues that we wanted to highlight, including algorithmic bias, data privacy, etc. Ellie then worked her magic to create original short stories, including realistic fiction and speculative/dystopian fiction, that served as anchors for class discussion, activities and reflection. (Unfortunately, due to time limitations, we had to trim some of Ellie’s discussion of the story development process from the video; hopefully, she’ll be able to chime in here with some additional detail!)

    And yes, I think the experience was quite eye-opening for the teens who participated, many of whom came into the class without a clear idea of how AI was relevant to their lives. Students frequently referenced the stories when discussing different issues and in some cases, commented that a story had inspired them to make some type of change in their use of technology (e.g., checking their privacy settings, turning off notifications, etc.).

  • Bridget Dalton

    Co-Presenter
    Associate Professor
    May 14, 2021 | 11:57 a.m.

    Hi Michael,

    I'm on the team as well, and wanted to add a bit about our story development process. We intentionally made the stories short (we tried to keep them at 1,000 words, although in a few cases they run closer to 1,200), featured teens as the protagonists who experience an AI-related ethics issue in a strong plot, balanced gender across characters, and left the stories open-ended so as not to suggest solutions, since that is part of the conversation that follows. We also checked the readability of the texts so that they were appropriate for middle and high school students. The stories are open education resources and can be accessed at our website. We would love to hear from you if you decide to use the stories or have suggestions for us.
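
    If you'd like to script a quick readability check of your own, the standard Flesch-Kincaid grade-level formula is easy to compute; here is a rough sketch in Python (illustrative only, with a crude syllable heuristic -- not the exact process we used):

    ```python
    # Rough readability check using the Flesch-Kincaid grade-level formula:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    # (Illustrative sketch; the syllable counter is a crude heuristic.)
    import re

    def count_syllables(word):
        """Approximate syllables as vowel groups, dropping a trailing silent 'e'."""
        word = word.lower()
        n = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    def flesch_kincaid_grade(text):
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    sample = "Maya asked her phone for directions. The phone already knew where she was going."
    print(round(flesch_kincaid_grade(sample), 1))  # approximate U.S. grade level
    ```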

  • Chelsea LeNoble

    Higher Ed Faculty
    May 11, 2021 | 04:24 p.m.

    I absolutely love the rich engagement of students throughout the learning process. I feel like this project really exemplifies the humanization of STEM education by using stories and various forms of media to teach about ethics and AI, which has absolutely critical implications for our lives now and in the future.

    Similar to Michael, I'm curious about the various forms of learning content that were used to engage students. How did your team decide on the different assignments of stories, comics, drawings, etc.? Are you finding that some of these are more effective than others? If they're equally effective, are there some that seemed more popular with students or easier for instructors to implement?

  • Stacey Forsyth

    Lead Presenter
    Director, CU Science Discovery
    May 12, 2021 | 01:37 p.m.

    Thanks, Chelsea! To some degree, the different projects are a result of our team’s collective expertise (which includes computer science, STEM/maker education, and multimodal literacy) and the pandemic. Our original plan was to integrate the AI ethics modules into robotics summer camps, but when COVID-19 restricted our ability to run in-person programs, we had to find other options that would allow students to explore and reflect on these critical issues, even when participating remotely. It was important to our team that we preserve the creative making and design aspects of the project, as we wanted to ensure that the program effectively engaged middle and high school youth (most of whom were already feeling burned out by online school) and provided students with opportunities to reimagine AI and design new solutions that addressed the critical issues we were discussing.

    Comics provided a way for students to contribute their own stories in a way that was creative and fun and not intimidating. Pixton EDU was a great tool for this, as it provides all the art assets that students need so that the focus is on the storytelling process, rather than on drawing the comics themselves. It also allowed students to create avatars, which worked well for e-introductions and getting to know each other in a virtual space. We used Adobe Spark (and in the ELA class, WeVideo) to create posters and videos, again because these tools were easy for students to access and use online while still offering a lot of room for creative expression. We selected Juji as a chatbot platform because it aligned with our class goals while not requiring extensive previous technical experience.

    At the end of each camp or class, students selected one of the tools that we had worked with to create a final project and they were fairly evenly divided across the three tools, which was interesting. Pixton and Spark were easy to facilitate online, but we did run into some technical glitches with the chatbot software (Juji) initially. Fortunately, the developer was fairly responsive and we had a better experience working with that program in the fall classes. We’re planning to test some additional tools this summer and based on that experience, we hope to add some new resources to our website in the coming months.

     
  • Phillip Eaglin, PhD

    Founder and CEO
    May 11, 2021 | 08:32 p.m.

    Hi, are the Ethics in AI curriculum and the stories available to share?  How are the course materials culturally relevant for Black and Hispanic youth?  Thx.

  • Stacey Forsyth

    Lead Presenter
    Director, CU Science Discovery
    May 13, 2021 | 11:26 a.m.

    Great question, Phillip! One of the key issues we focus on in the class is algorithmic bias. There are numerous examples of biased decision-making by algorithms across different fields, including education, health care, employment/HR, the criminal justice system, etc. AI is playing an increasingly important role in decisions like who is eligible for medical treatment, a mortgage, or parole, and in many cases, these biased algorithms disproportionately impact people of color. In the class, we dive into this in a few different ways. For example, a story that introduces algorithmic bias (in a future dystopian world) is complemented by a video and news stories related to racial and gender bias in facial recognition systems. (If you haven’t already seen it, I recommend viewing the film Coded Bias, now on PBS, which focuses on Joy Buolamwini’s research on bias in facial recognition technologies. In the class, we show a shorter video about her work, called Gender Shades.) Students then work with Google’s Teachable Machine to see how biased training data impacts the accuracy of their models. This fall, we’re hoping to test a new story (currently being reviewed) that addresses algorithmic bias in the criminal justice system.
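
    The core move in the Gender Shades work is disaggregated evaluation: reporting a model's accuracy separately for each demographic subgroup instead of as a single overall number. A tiny sketch of that idea (hypothetical labels and predictions, purely for illustration):

    ```python
    # Disaggregated evaluation: overall accuracy can hide large per-group gaps.
    # (Hypothetical data for illustration; the column values are made up.)
    import pandas as pd

    results = pd.DataFrame({
        "group":     ["lighter-male"] * 4 + ["darker-female"] * 4,
        "true":      [1, 0, 1, 0, 1, 0, 1, 0],
        "predicted": [1, 0, 1, 0, 0, 1, 1, 0],
    })

    correct = results["true"] == results["predicted"]
    print("overall accuracy:", correct.mean())        # 0.75 -- looks fine
    print(results.assign(correct=correct)             # 1.00 vs. 0.50 -- the kind
              .groupby("group")["correct"].mean())    #   of gap Gender Shades exposed
    ```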

    You can find the stories and some related resources on our website (https://www.colorado.edu/project/imagine-ai/), but it’s still a work in progress. We’ll be adding additional resources to the site over the next few months.

  • Romelia Rodriguez

    Graduate Student
    May 12, 2021 | 04:47 p.m.

    I really enjoyed watching your video. It is great to see how you addressed such an important topic in a simple but transcendent way. I wonder what the criteria for selecting the readings are?

    I invite you to provide feedback to our project: https://stemforall2021.videohall.com/presentations/2139

  • Jeremy Roschelle

    Facilitator
    Executive Director, Learning Sciences
    May 12, 2021 | 07:34 p.m.

    This is great stuff. As others have said, the issues are SOOO important. And you are combining consciousness-raising with student learning about AI -- so they learn what it is along with what the problems are. I just want to know MORE! Especially about the last bit -- how students are responding. I couldn't easily infer from the drawings shown that students' understanding was getting richer. Please say more about how you are studying student conceptual change and how you are making inferences about what students are learning. What makes you most confident that students are learning? And what have you observed about who isn't learning as well as you'd like -- and why might that be? 

    p.s. great production values on the video as well. Congrats!

  • Ben Walsh

    Co-Presenter
    Graduate Research Assistant
    May 13, 2021 | 10:02 p.m.

    Thanks for the question, Jeremy. We are building on work by researchers who have used pre/post drawings to understand shifts in student understanding of science concepts, and on another group of researchers using pre/post drawings to understand shifts in how students think about social problems. Because of the interdisciplinary nature of AI ethics, both of these lines of inquiry are relevant to our work.

    In addition to the drawings, our analysis will look across multiple data sources, including pre/post surveys, student-created comics, videos and chatbots, chat transcripts from the Zoom sessions, field notes, and interviews. We're in the middle of data analysis now (and we have a lot of data) and have two papers in the works. We're not ready to share our findings just yet, but I'd be happy to contact you when those papers are ready.
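
    For readers less familiar with pre/post survey designs, the simplest version of the comparison is a paired test on each student's two scores; a minimal sketch (hypothetical numbers, not our study data):

    ```python
    # Basic pre/post comparison: a paired t-test on each student's two scores.
    # (Hypothetical ratings for illustration only -- not data from our study.)
    from scipy import stats

    pre  = [2.1, 3.0, 2.4, 3.5, 2.8, 2.2, 3.1, 2.6]  # e.g., self-rated AI understanding
    post = [3.0, 3.4, 2.9, 3.6, 3.5, 3.0, 3.3, 3.1]  # same students, after the program

    t, p = stats.ttest_rel(pre, post)
    gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
    print(f"mean gain: {gain:.2f}, t = {t:.2f}, p = {p:.4f}")
    ```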

  • Bridget Dalton

    Co-Presenter
    Associate Professor
    May 14, 2021 | 12:02 p.m.

    Hi Jeremy,

    Thanks for your interest and questions! In addition to Ben's information about our varied data sources, we're also exploring using or adapting existing instruments being developed to assess conceptual change in core AI concepts. Do you have any recommendations for us? Thanks!

  • Jeremy Roschelle

    Facilitator
    Executive Director, Learning Sciences
    May 17, 2021 | 12:43 a.m.

    I wish I had a recommendation for you -- but you are in a cutting-edge area, so I don't. I guess I'd suggest you look up Shuchi Grover at Stanford and ask her -- she's a leader in assessment of computational thinking concepts.

  • Jessica Sickler

    Researcher
    May 13, 2021 | 05:06 p.m.

    This is a really interesting discussion! I'm curious to hear more about how you -- and your teacher-partners -- are grappling with the balance (or is it a push-pull?) between the learning goals on the technology side (what it is, how it works, tinkering with it) and the learning goals around critical thinking about the ethical debate. I can see both being super-compelling to youth, and both are important, but in limited time, what are the critical choices?

    Great work!

  • Ben Walsh

    Co-Presenter
    Graduate Research Assistant
    May 13, 2021 | 10:25 p.m.

    Great question, Jessica. I think that balance you speak of is very much context specific. I believe that AI ethics is a topic that belongs in a Social Studies class or Language Arts class as much as it does in a CS class. But it can't be taught the same way across disciplines. In a CS class, more time spent experiencing and developing an understanding of the underlying technology makes sense. In a Social Studies class there would likely be an emphasis on the positive and negative influences AI is having on our society (and will increasingly have in the future). In an ELA class I would expect more focus on how AI is influencing the way we communicate and consume and create media. There can't be just one way to teach this topic.

    But a complete treatment of the topic in any discipline requires us to move between disciplines. We recognize teachers as experts in their own spaces, but they may need help delving into topics outside their individual comfort zones. This is why we are creating AI ethics modules focused around stories that can be adapted to different contexts, rather than a single fixed curriculum. We hope these resources can support, for example, the humanities teacher who has never had to explain algorithms or machine learning to students, or the CS teacher who has never facilitated conversation about racial bias.

  • Kara Dawson

    Higher Ed Faculty
    May 13, 2021 | 07:52 p.m.

    What an awesome idea to engage students in AI through stories. I also enjoyed this discussion thread as it answered many of the questions I had while watching the video. Although our content is quite different we are also trying to engage students via story - in our case a comic book about diverse characters who must learn about cryptology and cybersecurity in order to escape from an unexpected cyber adventure. Kudos to you and your team! 

  • Bridget Dalton

    Co-Presenter
    Associate Professor
    May 14, 2021 | 12:04 p.m.

    Hi Kara,

    We would love to hear more about your project (I'm going to look at your video now!). Let's stay in touch, and perhaps we can have our teams meet via Zoom to share what we're learning about the power of stories in engaging with AI.

  • Kara Dawson

    Higher Ed Faculty
    May 14, 2021 | 01:59 p.m.

    Bridget - This sounds wonderful. You can reach me via email at: dawson@coe.ufl.edu. Have a great weekend -Kara

  • Bridget Dalton

    Co-Presenter
    Associate Professor
    May 14, 2021 | 02:07 p.m.

    Thanks, Kara, we will be in touch soon!