Cognitive Models for Abacus Gesture Learning

In Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024).

In this paper, we developed three ACT-R cognitive models to simulate the learning process of abacus gestures. Abacus gestures are mid-air gestures, each representing a number between 0 and 99. Our models learn to predict the response time of making an abacus gesture. We found that the accuracy of a model's predictions depends on the structure of its declarative memory. A model with 100 chunks cannot simulate human response times, whereas models using fewer chunks can, because segmenting chunks increases both the frequency and recency of information retrieval. Furthermore, our findings suggest that the mind is more likely to represent abacus gestures by dividing attention between two hands rather than by memorizing and outputting all gestures directly. These insights have important implications for future research in cognitive science and human-computer interaction, particularly in developing vision and motor modules for mental states in existing cognitive architectures and in designing intuitive, efficient mid-air gesture interfaces.
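
The "frequency and recency" effect the abstract refers to is ACT-R's standard base-level learning mechanism: a chunk's base-level activation is B = ln(sum over past retrievals of t^-d), where t is the time since a given retrieval and d is a decay parameter (conventionally 0.5), and retrieval latency scales as T = F * exp(-B). The Python sketch below illustrates why reusing a small set of per-hand chunks yields higher activation, and hence faster retrieval, than maintaining 100 gesture-specific chunks; the retrieval histories and the parameters d and F are invented for illustration, not values fitted in the paper.

    import math

    def base_level_activation(ages, d=0.5):
        # ACT-R base-level learning: B = ln(sum of t^-d over past
        # retrievals), where each t is the time (in seconds) since
        # one retrieval of the chunk and d is the decay rate.
        return math.log(sum(t ** -d for t in ages))

    def retrieval_latency(activation, F=1.0):
        # ACT-R retrieval latency: T = F * exp(-B). F is a latency
        # scale parameter; 1.0 is a placeholder, not a fitted value.
        return F * math.exp(-activation)

    # Hypothetical retrieval histories. With one chunk per gesture
    # (100 chunks), any single chunk is practiced rarely, so its past
    # retrievals are few and old. Segmenting by hand reuses each chunk
    # across many gestures, so retrievals are frequent and recent.
    whole_gesture_chunk = [600.0, 1800.0]         # seconds since each use
    per_hand_chunk = [30.0, 90.0, 300.0, 600.0]   # reused far more often

    for label, ages in [("100-chunk model", whole_gesture_chunk),
                        ("segmented model", per_hand_chunk)]:
        b = base_level_activation(ages)
        print(f"{label}: activation {b:+.2f}, "
              f"latency {retrieval_latency(b):.2f}s")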

Files

  1. Cognitive Models for Abacus Gesture Learning.pdf

Metadata

Work Title Cognitive Models for Abacus Gesture Learning
Access
Open Access
Creators
  1. Lingyun He
  2. Duk Hee Ka
  3. Md Ehtesham-Ul-Haque
  4. Syed M. Billah
  5. Farnaz Tehranchi
Keyword
  1. Finger counting
  2. Abacus gesture
  3. Mid-air interaction
  4. Cognitive model
  5. ACT-R
  6. Cognitive architectures
License In Copyright (Rights Reserved)
Work Type Conference Proceeding
Acknowledgments
  1. We thank the anonymous reviewers for their insightful reviews and comments. This work was supported by The Pennsylvania State University. We also thank the Human-Centered AI and A11y Lab members at PSU for their help.
Publisher
  1. Proceedings of the 46th Annual Meeting of the Cognitive Science Society
Publication Date 2024
Related URLs
  1. https://escholarship.org/uc/item/6mk359vh
Deposited June 25, 2024

Work History

Version 1 (published)

  • Created
  • Updated Description and Publication Date
  • Updated Acknowledgments
  • Added Creator Lingyun He
  • Added Cognitive Models for Abacus Gesture Learning.pdf
  • Updated License (https://rightsstatements.org/page/InC/1.0/)
  • Published
  • Updated Keyword, Publisher, Description, and Related URLs
  • Added Creator Duk Hee Ka
  • Added Creator Md Ehtesham-Ul-Haque
  • Added Creator Syed M. Billah
  • Added Creator Farnaz Tehranchi