A searchable list of some of my publications is below. You can also access my publications from the following sites.
My ORCID is https://orcid.org/0000-0002-6236-2969
Publications:
1.
Jonathan Bidwell, Irfan Essa, Agata Rozga, Gregory Abowd
Measuring child visual attention using markerless head tracking from color and depth sensing cameras Proceedings Article
In: Proceedings of International Conference on Multimodal Interfaces (ICMI), 2014.
@inproceedings{2014-Bidwell-MCVAUMHTFCDSC,
title = {Measuring child visual attention using markerless head tracking from color and depth sensing cameras},
author = {Jonathan Bidwell and Irfan Essa and Agata Rozga and Gregory Abowd},
url = {https://dl.acm.org/doi/10.1145/2663204.2663235 http://icmi.acm.org/2014/},
doi = {10.1145/2663204.2663235},
year = {2014},
date = {2014-11-01},
urldate = {2014-11-01},
booktitle = {Proceedings of International Conference on Multimodal Interfaces (ICMI)},
abstract = {A child's failure to respond to his or her name being called is an early warning sign for autism, and response to name is currently assessed as a part of standard autism screening and diagnostic tools. In this paper, we explore markerless child head tracking as an unobtrusive approach for automatically predicting child response to name. Head turns are used as a proxy for visual attention. We analyzed 50 recorded response-to-name sessions with the goal of predicting if children, ages 15 to 30 months, responded to name calls by turning to look at an examiner within a defined time interval. The child's head turn angles and hand-annotated child name call intervals were extracted from each session. Human-assisted tracking was employed using an overhead Kinect camera, and automated tracking was later employed using an additional forward-facing camera as a proof of concept. We explore two distinct analytical approaches for predicting child responses, one relying on a rule-based approach and another on random forest classification. In addition, we derive child response latency as a new measurement that could provide researchers and clinicians with finer-grained quantitative information currently unavailable in the field due to human limitations. Finally, we reflect on steps for adapting our system to work in less constrained natural settings.
},
keywords = {autism, behavioral imaging, computer vision, ICMI},
pubstate = {published},
tppubtype = {inproceedings}
}
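The second analytical approach in this entry's abstract, random forest classification of head-turn behavior, can be illustrated with a minimal sketch. The feature set below (peak turn angle, turn latency, turn duration) and the synthetic data are assumptions for illustration only; the paper's actual features and pipeline are not reproduced here.

```python
# Hypothetical sketch: predicting response-to-name from per-session
# head-turn features with a random forest. Features and data are
# synthetic stand-ins, not the paper's measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 50  # the paper analyzed 50 recorded sessions

# Illustrative per-session features:
X = np.column_stack([
    rng.uniform(0, 120, n),   # peak head-turn angle toward the examiner (degrees)
    rng.uniform(0, 5, n),     # latency from name call to turn onset (seconds)
    rng.uniform(0.2, 3, n),   # duration the head stays turned (seconds)
])
y = rng.integers(0, 2, n)     # 1 = child responded within the defined interval

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())
```

With real labels, the same setup would also expose per-feature importances (`clf.feature_importances_` after fitting), which is one reason tree ensembles are a natural fit for small, interpretable behavioral datasets like this.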
2.
Ravi Ruddarraju, Antonio Haro, Kris Nagel, Quan Tran, Irfan Essa, Gregory Abowd, Elizabeth Mynatt
Perceptual User Interfaces using Vision-Based Eye Tracking Proceedings Article
In: International Conference on Multimodal Interfaces (ICMI), Vancouver, Canada, 2003.
@inproceedings{2003-Ruddarraju-PUIUVT,
title = {Perceptual User Interfaces using Vision-Based Eye Tracking},
author = {Ravi Ruddarraju and Antonio Haro and Kris Nagel and Quan Tran and Irfan Essa and Gregory Abowd and Elizabeth Mynatt},
url = {https://doi.org/10.1145/958432.958475},
doi = {10.1145/958432.958475},
year = {2003},
date = {2003-01-01},
urldate = {2003-01-01},
booktitle = {International Conference on Multimodal Interfaces (ICMI)},
address = {Vancouver, Canada},
abstract = {We present a multi-camera vision-based eye tracking method to robustly locate and track user's eyes as they interact with an application. We propose enhancements to various vision-based eye-tracking approaches, which include (a) the use of multiple cameras to estimate head pose and increase coverage of the sensors and (b) the use of probabilistic measures incorporating Fisher's linear discriminant to robustly track the eyes under varying lighting conditions in real-time. We present experiments and quantitative results to demonstrate the robustness of our eye tracking in two application prototypes.
},
keywords = {computer vision, eye-tracking, ICMI, multimodal interfaces},
pubstate = {published},
tppubtype = {inproceedings}
}
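This entry's abstract mentions probabilistic measures built on Fisher's linear discriminant for tracking eyes under varying lighting. A minimal sketch of that idea, scoring candidate image patches as "eye" vs. "non-eye", is below; the patch size, class statistics, and data are synthetic assumptions, not the paper's implementation.

```python
# Hypothetical sketch: Fisher's linear discriminant as a probabilistic
# eye/non-eye patch scorer. Training data here is synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
d = 64  # flattened 8x8 grayscale patch (illustrative size)

# Synthetic training patches: two classes with shifted mean intensity.
eye_patches = rng.normal(0.3, 0.1, size=(200, d))
non_eye_patches = rng.normal(0.6, 0.1, size=(200, d))
X = np.vstack([eye_patches, non_eye_patches])
y = np.array([1] * 200 + [0] * 200)  # 1 = eye, 0 = non-eye

# LDA finds the projection maximizing between-class over within-class
# scatter (Fisher's criterion) and yields class posteriors.
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Probabilistic score for a new candidate patch drawn near the eye class.
candidate = rng.normal(0.3, 0.1, size=(1, d))
p_eye = lda.predict_proba(candidate)[0, 1]
print(round(p_eye, 3))
```

In a tracker, such per-patch posteriors could weight candidate eye locations frame to frame, which matches the abstract's framing of the discriminant as part of a probabilistic measure rather than a hard classifier.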
Other Publication Sites
Copyright/About
Please see the Copyright Statement that may apply to the content listed here.
This list of publications is produced using the teachPress plugin for WordPress.