Paper in MICCAI (2015): "Automated Assessment of Surgical Skills Using Frequency Analysis"
Paper Abstract We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. A video analysis technique for extracting motion quality via frequency coefficients is introduced. The framework is tested in a case study that involved analysis of videos of medical students with different expertise […]
2015 C+J Symposium
Source: 2015 C+J Symposium. Participated in the 4th Computation+Journalism Symposium, October 2-3, in New York, NY, at The Brown Institute for Media Innovation, Pulitzer Hall, Columbia University. Keynotes were Lada Adamic (Facebook) and Chris Wiggins (Columbia, NYT), with 2 curated panels and 5 sessions of peer-reviewed papers. Past symposia were held in
Presentation at Max-Planck-Institut für Informatik in Saarbrücken (2015): "Video Analysis and Enhancement"
Video Analysis and Enhancement: Spatio-Temporal Methods for Extracting Content from Videos and Enhancing Video Output Irfan Essa (prof.irfanessa.com) Georgia Institute of Technology School of Interactive Computing Hosted by Max-Planck-Institut für Informatik in Saarbrücken (Bernt Schiele, Director of Computer Vision and Multimodal Computing) Abstract In this talk, I will start by describing the pervasiveness of image and video content, […]
Dagstuhl Workshop 2015: "Modeling and Simulation of Sport Games, Sport Movements, and Adaptations to Training"
Participated in the Dagstuhl Workshop on “Modeling and Simulation of Sport Games, Sport Movements, and Adaptations to Training” at Dagstuhl Castle, September 13–16, 2015. Motivation Source: Schloss Dagstuhl: Seminar Homepage. Past seminars on this topic include
Presentation at Max-Planck-Institute for Intelligent Systems in Tübingen (2015): "Data-Driven Methods for Video Analysis and Enhancement"
Irfan Essa, Georgia Institute of Technology. Thursday, September 10, 2 pm, Max Planck House Lecture Hall (Spemannstr. 36). Hosted by Max-Planck-Institute for Intelligent Systems (Michael Black, Director of Perceiving Systems) Abstract In this talk, I will start by describing the pervasiveness of image and video content, and how such content is growing with the ubiquity of cameras. I will use […]
Paper in Ubicomp 2015: "A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing"
Paper Abstract Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday usage, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation […]
Paper in ISWC 2015: "Predicting Daily Activities from Egocentric Images Using Deep Learning"
Paper Abstract We present a method to analyze images taken from a passive egocentric wearable camera along with contextual information, such as time and day of the week, to learn and predict the everyday activities of an individual. We collected a dataset of 40,103 egocentric images over 6 months with 19 activity classes and demonstrate […]
Paper in ACM IUI 2015: "Inferring Meal Eating Activities in Real-World Settings from Ambient Sounds: A Feasibility Study"
Citation Abstract Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging but often requires that eating activities be detected automatically. In this work, we describe results from a feasibility […]