Presenters
Project: Advancing Computational Grounded Theory for Audiovisual Data from STEM Classrooms (https://tca2.education.illinois.edu/)

  • Christina Krist, Assistant Professor, University of Illinois at Urbana-Champaign
  • Nigel Bosch, Assistant Professor, University of Illinois at Urbana-Champaign (https://pnigel.com)
  • Cynthia D'Angelo, Assistant Professor, University of Illinois at Urbana-Champaign (https://cynthiadangelo.com)
  • Erika David Parr, Postdoctoral Researcher, Middle Tennessee State University
  • Elizabeth Dyer, Assistant Director, TN STEM Education Center, Middle Tennessee State University (https://www.mtsu.edu/faculty/elizabeth-dyer)
  • Nessrine Machaka, Research Assistant, University of Illinois at Urbana-Champaign (https://www.linkedin.com/in/nessrine-machaka/)
  • Joshua Rosenberg, Assistant Professor, STEM Education, University of Tennessee Knoxville (https://joshuamrosenberg.com)
Public Discussion


  • Christina Krist

    Lead Presenter
    Assistant Professor
    May 11, 2021 | 11:12 a.m.

    Thank you for your interest in our video! We'd love to hear your feedback and questions about our project. Here are some additional questions that we are interested in discussing:

    Qualitative research:

    • Have you seen phenomena related to teachers' or students' physical positioning in your work? What role(s) have you seen it play?
    • Do you have ideas about how some sort of automated detection could support your qualitative analysis?

    Computational analysis:

    • Have you done work with OpenPose or other visual detection/tracking software on video? How have you used it?
    • What patterns in audio and/or visual data from classrooms have been important in your work?

    Combined analysis:

    • Our goal is to utilize OpenPose-based tracking algorithms to identify potential moments in video where important shifts in participation might be occurring. Are there other ways of combining computational and qualitative analysis that you could imagine being impactful for your work?
  • Andres Colubri

    Facilitator
    Assistant Professor
    May 11, 2021 | 11:31 a.m.

    Hi Christina, interesting research, thank you for sharing! I wonder how you plan to incorporate context about what's going on in the class at the moment the videos are taken. I'm asking this as a complete neophyte in this area :-) I'd imagine that the interpretation of body poses depends on the activity going on in the class, and even on more specific information that only the teacher knows about. So I'm wondering whether it would be possible to annotate the videos and inferred poses in your system. I'm also thinking that the automatic pose inference might make mistakes from time to time (especially in crowded environments such as a classroom), so the teacher would need to enter manual corrections. Thank you!

  • Nigel Bosch

    Co-Presenter
    Assistant Professor
    May 11, 2021 | 11:48 a.m.

    Hi Andres, I can speak to the automatic pose inference part: it definitely makes the occasional mistake! However, the advantage of doing it automatically is that it can handle large volumes of fine-grained video data. So, on average, we anticipate that occasional errors will be offset by the volume of correct inferences in surrounding frames of the video -- though that remains to be seen, and we are also considering ways for a teacher/researcher to get involved in the process and evaluate or correct these scenarios.
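
    To make the frame-level consensus idea concrete, here is a minimal sketch (not the project's actual pipeline) of smoothing one tracked person's keypoints across neighboring frames, assuming OpenPose-style (x, y, confidence) triples; the window size and confidence threshold below are illustrative assumptions.

        # Illustrative sketch only: median-filter one person's keypoints over a
        # sliding window of frames, ignoring low-confidence detections, so that
        # occasional per-frame errors are outvoted by neighboring frames.
        import numpy as np

        def smooth_keypoints(keypoints, window=5, min_conf=0.3):
            """keypoints: (n_frames, n_joints, 3) array of (x, y, confidence)."""
            n_frames, n_joints, _ = keypoints.shape
            smoothed = keypoints.copy()
            half = window // 2
            for t in range(n_frames):
                lo, hi = max(0, t - half), min(n_frames, t + half + 1)
                neighborhood = keypoints[lo:hi]                  # frames around t
                for j in range(n_joints):
                    ok = neighborhood[:, j, 2] >= min_conf       # confident detections only
                    if ok.any():
                        smoothed[t, j, :2] = np.median(neighborhood[ok, j, :2], axis=0)
            return smoothed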

    With respect to context, we do have a bit of additional data regarding the types of activities (e.g., group work, student presentations) going on in the classroom at each point. From the qualitative perspective, this information is certainly relevant to interpretation, though it is a bit tougher to incorporate into computational analyses. This is an area we plan to explore, however!

  • Andres Colubri

    Facilitator
    Assistant Professor
    May 12, 2021 | 10:38 a.m.

    I see, so the pose inference occurs on a per-frame basis, and then you get an average or consensus inference over time?

  • Paul Hur

    Graduate Student
    May 12, 2021 | 10:53 a.m.

    Hi Andres -- I work with Nigel on the automatic pose inference for this project. We still need to explore how the overall poses change over time, but I think an important first step will be to see whether our team's qualitative observations about classroom poses are corroborated by the fluctuations in pose inferences on the computational side!
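
    As one illustrative example of such a fluctuation signal (an assumption for the sketch, not the project's code), the mean frame-to-frame keypoint displacement gives a simple per-person movement series that could be plotted against the qualitative observations.

        # Illustrative sketch: per-frame movement for one tracked person,
        # computed as the average displacement of all joints between frames.
        import numpy as np

        def movement_signal(keypoints):
            """keypoints: (n_frames, n_joints, 3) of (x, y, confidence); returns (n_frames - 1,)."""
            diffs = np.diff(keypoints[:, :, :2], axis=0)     # joint displacement between frames
            return np.linalg.norm(diffs, axis=2).mean(axis=1)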

  • Michael Chang

    Facilitator
    Postdoctoral Researcher
    May 11, 2021 | 12:07 p.m.

    This seems like a very useful tool! I've always wondered if/how researchers should use AI in qualitative research. How do you imagine qualitative researchers interacting with this tool? What sorts of analytics do you think are useful to provide to the qualitative researchers? And finally, what happens when a qualitative researcher identifies a new positioning movement that was previously uncoded? Do you think this tool will indirectly constrain what types of behaviors qualitative researchers look for in their video analysis?

  • Nigel Bosch

    Co-Presenter
    Assistant Professor
    May 11, 2021 | 12:26 p.m.

    Great questions! We have some future plans to explore exactly how qualitative researchers can take advantage of the tool; for the scope of the current grant project, the tool is a bit too much of an early prototype. But eventually, we plan to explore the tool as a sort of filtering mechanism to help qualitative researchers find interesting and relevant parts of large video datasets, which might consist of visualizations of behaviors over time and key moments in time that could fit with a particular behavior. This may indeed somewhat constrain the types of behaviors qualitative researchers examine, but at the same time it might expand the types of behaviors they examine, if they were not already looking for these types. For new qualitative researchers, such as research students, it might be especially effective for pushing them in new (to them) directions.
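
    A minimal sketch of that kind of filtering, assuming some per-frame behavioral signal (for example, a movement series like the one sketched above); the window length and z-score cutoff are illustrative assumptions, not the prototype's actual logic.

        # Illustrative sketch: flag time windows whose mean signal is unusually
        # high, as candidate "key moments" for a qualitative analyst to review.
        import numpy as np

        def flag_key_moments(signal, fps=30.0, window_s=10, z=2.0):
            """Return start times (seconds) of windows more than z SDs above the mean window."""
            win = int(fps * window_s)
            starts = range(0, len(signal) - win + 1, win)
            means = np.array([signal[i:i + win].mean() for i in starts])
            cutoff = means.mean() + z * means.std()
            return [k * window_s for k, m in enumerate(means) if m > cutoff]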

  • Paola Sztajn

    Higher Ed Faculty
    May 12, 2021 | 08:44 a.m.

    This is so interesting! In our work on discourse, it is definitely the case that when teachers remove themselves from the center of the classroom, for example, it encourages student discussion. I look forward to learning about what you are learning.

  • Paul Hur

    Graduate Student
    May 12, 2021 | 10:41 a.m.

    Yes, absolutely. We are also looking at how the encouragement of student discussion in the absence of immediate teacher presence (as you mentioned) relates to shifts in students' physical movement. Maybe the students become more relaxed, and more expressive with hand gestures or head movements? We'd love to hear your thoughts based on the observations you and your team have made while analyzing math classroom discourse!

  • Jeremy Roschelle

    Facilitator
    Executive Director, Learning Sciences
    May 12, 2021 | 07:42 p.m.

    Hi all, congrats on all you've accomplished -- this is difficult stuff. Last week I watched Coded Bias on Netflix, so now I can't help but wonder what issues of bias in the visual algorithms you may have found -- and how you are protecting against bias?

    Also, are you aware of Jacob Whitehill's work? He has done some nice stuff with body posture as captured in a corpus of preschooler videos. See, for example, this paper.

    Keep up the good work or one could say... um.... lean in!

     
  • Paul Hur

    Graduate Student
    May 12, 2021 | 10:17 p.m.

    That is a great and important question regarding bias in the visual algorithms, but one that I'm not sure I can sufficiently answer (perhaps someone more knowledgeable could jump in, if available!). I will say, though, that within our application of OpenPose's pose detection methods, we are able to see some limitations of the algorithms. Characteristics of the classroom videos, such as the low camera angles, the occlusion of students at the back of the class, and the density/close proximity of individuals' bodies, lead to inconsistent pose detections in the OpenPose output. It is difficult to say confidently whether this is due to biases, or even to tell in which situations (certain contrast, brightness, etc.) the inconsistencies tend to manifest, since there are instances when pose detection fails (for a few frames) even when it has been working fine up to that point. While it is difficult to account for and protect individuals against these situations, we hope to eventually explore more privacy options for individuals who do not want to be included in the detection output.
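
    For illustration only (a sketch of one way such failures could be located, not the project's method), frames where a tracked person's mean keypoint confidence drops could be flagged so those spans are inspected or excluded rather than trusted; the threshold is a guess.

        # Illustrative sketch: find frames where detection likely failed for a
        # tracked person, based on mean keypoint confidence.
        import numpy as np

        def dropout_frames(keypoints, min_mean_conf=0.2):
            """keypoints: (n_frames, n_joints, 3); returns indices of suspect frames."""
            mean_conf = keypoints[:, :, 2].mean(axis=1)
            return np.where(mean_conf < min_mean_conf)[0]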

    Thank you for linking that paper -- I have not seen it before! We are aware of Jacob Whitehill's other work and are inspired by it and other work in the area of developing automatic methods for analyzing classroom data. 

     
  • Daniel Heck

    Researcher
    May 14, 2021 | 02:15 p.m.

    Really appreciate the project team sharing this great work. I am especially curious about how you've structured work on the same data between computational video analytics and human researcher methods of looking at video. When and why do you pass data, in both directions, between computer algorithmic analysis and human researcher/observers? What has worked well in this process? And what has been surprising or challenging?

  • Joshua Rosenberg

    Co-Presenter
    Assistant Professor, STEM Education
    May 14, 2021 | 03:30 p.m.

    Hi Daniel, thanks for viewing our video and for asking about this - we're at a point in the project at which we'll soon be working to integrate output from the two methods (computational and human researcher-driven), but we haven't done this just yet. We do have some ideas about how we'll be doing this, some high-level and more at the conceptual stage for now, and some quite concrete/technical. 

    At a high level, we're curious about whether the computational output can serve as a starting point for qualitative analysts, such as by identifying segments of the data at which certain kinds of activities of interest might be more likely to occur. Then a qualitative analyst could investigate those segments in an in-depth way. Christina and project colleagues presented a paper at the most recent AERA conference on this and some other ways we can integrate the data.

    At a concrete/technical level, we've discussed how selecting the time scale at which we'll join data (e.g., based on second or millisecond units) requires us to make certain decisions about what a meaningful unit of data is. But, we're very much working out these details now. I hope we can share more next year on what's working (and what's not) after we've explored this in greater depth.
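
    To illustrate the kind of join being described (the one-second unit and the column names below are assumptions for the sketch, not the project's actual schema), pose-based features could be aggregated to a chosen time unit and matched to whichever qualitative code covers that unit.

        # Illustrative sketch: aggregate a per-frame signal into one-second bins
        # and attach the qualitative code whose segment covers each second.
        import pandas as pd

        def join_at_seconds(movement, fps, codes):
            """movement: per-frame values; codes: DataFrame with start_s, end_s, code columns."""
            frames = pd.DataFrame({"movement": movement})
            frames["second"] = (frames.index / fps).astype(int)
            per_second = frames.groupby("second", as_index=False)["movement"].mean()

            def code_at(sec):
                hit = codes[(codes["start_s"] <= sec) & (sec < codes["end_s"])]
                return hit["code"].iloc[0] if len(hit) else None

            per_second["code"] = per_second["second"].map(code_at)
            return per_second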

    Thanks again for this!

    Josh