Applied Computational Intelligence and Soft Computing
Volume 2014, Article ID 101642, 8 pages
Research Article

Frequent Pattern Mining of Eye-Tracking Records Partitioned into Cognitive Chunks

Noriyuki Matsuda1 and Haruhiko Takeuchi2

1Department of Social Systems & Management, University of Tsukuba, Tsukuba 305-8573, Japan
2National Institute of Advanced Industrial Science & Technology (AIST), Tsukuba 305-8566, Japan

Received 23 July 2014; Accepted 27 October 2014; Published 23 November 2014

Academic Editor: Yongqing Yang

Copyright © 2014 Noriyuki Matsuda and Haruhiko Takeuchi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Assuming that scenes are visually scanned by chunking information, we partitioned the fixation sequences of web page viewers into chunks, using isolated gaze point(s) as delimiters. Fixations were coded in terms of the segments of a mesh imposed on the screen. The identified chunks were mostly short, consisting of one or two fixations. The chunks were analyzed with respect to the within- and between-chunk distances in the overall records and to the patterns (i.e., subsequences) frequently shared among the records. Although both types of distance were dominated by zero- and one-block shifts, the primacy of these modal shifts was less prominent between chunks than within them; the lower primacy was compensated for by longer shifts. The patterns extracted at three frequency thresholds were mostly simple, consisting of one or two chunks, and revealed interesting properties regarding segment differentiation and the directionality of attentional shifts.
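The chunking and coding steps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the event representation, the function names, the screen dimensions, and the 5×5 mesh are all assumptions made for the example.

```python
# Hypothetical sketch of the chunking step: a viewer's gaze record is
# split into chunks wherever an isolated gaze point (a point not grouped
# into any fixation) occurs, and each fixation is coded by the mesh
# segment (grid cell) that contains it. Screen size and mesh granularity
# here are illustrative assumptions.

def code_segment(x, y, width=1024, height=768, cols=5, rows=5):
    """Map a fixation's screen coordinates to a mesh-segment index."""
    col = min(int(x / width * cols), cols - 1)
    row = min(int(y / height * rows), rows - 1)
    return row * cols + col  # row-major segment index within the mesh

def partition_into_chunks(events):
    """Split a sequence of ('fix', x, y) and ('isolate',) events into
    chunks of segment-coded fixations, using isolated gaze points
    as delimiters."""
    chunks, current = [], []
    for ev in events:
        if ev[0] == 'isolate':      # delimiter: close the current chunk
            if current:
                chunks.append(current)
                current = []
        else:                       # fixation: code it by mesh segment
            _, x, y = ev
            current.append(code_segment(x, y))
    if current:
        chunks.append(current)
    return chunks

# Example record: two fixations, an isolated point, one fixation, etc.
events = [('fix', 100, 100), ('fix', 120, 110), ('isolate',),
          ('fix', 900, 700), ('isolate',), ('fix', 500, 400)]
print(partition_into_chunks(events))  # → [[0, 0], [24], [12]]
```

Consistent with the abstract, most chunks produced this way are short (one or two fixations); the coded chunks can then be scanned for subsequences shared across viewers' records at a chosen frequency threshold.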