Aggregated Annotation of Noticing in Video through Frame+
- Video is recognized in the learning sciences as a valuable medium for motivating and supporting practices of 'noticing' key events in complex interactions. Approaches to supporting the development of noticing have included in-person group discussions and threaded text linked to time segments within a given video. This paper reports on an initial effort to use an aggregated representation of time-stamped text annotations across users as a way of supporting university teaching activities. The imagined use cases are courses with a dozen or more students who can identify and annotate specific moments they notice in a video. The aggregated representation can then provide feedback to the entire class about where everyone is noticing key events and can help anchor discussion around the video and the content the video is helping students explore.
|Type of resource
|Dates: August 11, 2023; June 30, 2022
|Subject: Teachers > Training of
- Use and reproduction
- The user agrees that, where applicable, content will not be used to identify individuals or to otherwise infringe their privacy or confidentiality rights. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.
- This work is licensed under a Creative Commons Attribution 4.0 International license (CC BY).
Graduate School of Education Open Archive