
Cognitive Models for Abacus Gesture Learning
In Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024).
In this paper, we developed three ACT-R cognitive models to simulate the learning of abacus gestures. Abacus gestures are mid-air gestures, each representing a number between 0 and 99. Our models learn to predict the response time of producing an abacus gesture. We found that the accuracy of a model's predictions depends on the structure of its declarative memory: a model with 100 chunks cannot reproduce human response times, whereas models using fewer chunks can, because segmenting chunks increases both the frequency and recency of information retrieval. Furthermore, our findings suggest that the mind is more likely to represent abacus gestures by dividing attention between the two hands than by memorizing and outputting every gesture directly. These insights have implications for future research in cognitive science and human-computer interaction, particularly for developing vision and motor modules for mental states in existing cognitive architectures and for designing intuitive, efficient mid-air gesture interfaces.
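As a rough illustration of the mechanism behind this result, the sketch below applies ACT-R's standard base-level learning and retrieval-latency equations (B_i = ln Σ_j t_j^{-d} and T = F·e^{-B_i}) to a hypothetical practice history. The chunk counts, trial spacing, and rehearsal schedule are illustrative assumptions, not parameters or data from the paper; the point is only that chunks retrieved more often and more recently accrue higher activation and thus shorter retrieval latencies.

```python
import math

def base_level_activation(times_since_use, d=0.5):
    """ACT-R base-level learning: B_i = ln(sum over past uses of t_j^-d).
    times_since_use: seconds elapsed since each prior retrieval of the chunk."""
    return math.log(sum(t ** -d for t in times_since_use))

def retrieval_latency(activation, F=1.0):
    """ACT-R retrieval latency: T = F * exp(-B_i)."""
    return F * math.exp(-activation)

# Hypothetical practice history (not from the paper):
# a whole-gesture chunk out of 100 is rehearsed rarely, while a per-hand
# digit chunk out of 10 is rehearsed often, so its uses are frequent and recent.
trial_spacing = 5.0  # assumed seconds between practice trials
whole_gesture_uses = [trial_spacing * 100 * k for k in range(1, 6)]    # sparse, old
per_hand_digit_uses = [trial_spacing * 10 * k for k in range(1, 51)]   # dense, recent

for name, history in [("100-chunk model", whole_gesture_uses),
                      ("per-hand model", per_hand_digit_uses)]:
    b = base_level_activation(history)
    print(f"{name}: activation={b:.2f}, latency={retrieval_latency(b):.3f}s")
```

Under these assumed histories the per-hand chunk ends up with markedly higher activation and a much shorter predicted retrieval time, which is the qualitative pattern the segmented-chunk models exploit.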
Metadata
Work Title | Cognitive Models for Abacus Gesture Learning
---|---
License | In Copyright (Rights Reserved)
Work Type | Conference Proceeding
Publication Date | 2024
Deposited | June 25, 2024