Scientific Workshop & 40th Anniversary Reunion
Wednesday, May 28 - Friday, May 30, 2008
University of Toronto, Toronto, Ontario


  • Autodesk
  • Side Effects Software
  • Microsoft Research
  • University of Toronto Department of Computer Science
  • University of Toronto Alumni Association

Posters & Demos Session

As part of the DGPis40 Scientific Workshop, a poster and demo session is planned for the evening of May 29 (6:00-9:00 pm). This will be run in conjunction with a lab tour and will include a reception.

Last updated: May 27, 2008


Precomputed Curvature for real-time line drawing of dynamic scenes
Evangelos Kalogerakis
PhD Candidate, Dynamic Graphics Project, University of Toronto

We present a method for real-time line drawing of deforming objects. Object-space line drawing algorithms for many types of curves, including suggestive contours, highlights, ridges and valleys, rely on surface curvature and curvature derivatives. Unfortunately, these curvatures and their derivatives cannot be computed in real-time for animated, deforming objects. In a preprocessing step, our method learns a mapping from a low dimensional set of animation parameters (e.g. joint angles) to surface curvatures for a deforming 3D mesh. The learned model can then accurately predict curvatures and their derivatives in real-time, enabling real-time object-space rendering of suggestive contours and other such curves.
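As a rough illustration of the idea (a sketch under assumed names, not the authors' method, which may use a richer model than plain least squares), one can learn a per-vertex linear map from low-dimensional pose parameters to curvature values offline and evaluate it cheaply per frame:

```python
import numpy as np

# Illustrative sketch: learn a linear map from animation parameters
# (e.g. joint angles) to per-vertex curvature values. Function and
# variable names are our own, not the paper's.

def fit_curvature_model(poses, curvatures):
    """poses: (n_frames, n_params); curvatures: (n_frames, n_vertices).
    Returns weights W and bias b such that curvatures ~ poses @ W + b."""
    X = np.hstack([poses, np.ones((poses.shape[0], 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(X, curvatures, rcond=None)
    return coef[:-1], coef[-1]

def predict_curvature(W, b, pose):
    """Evaluate the learned model for a single pose vector (fast at runtime)."""
    return pose @ W + b

# Tiny example: 2 animation parameters, 3 vertices, exactly linear data.
rng = np.random.default_rng(0)
poses = rng.standard_normal((50, 2))
W_true = rng.standard_normal((2, 3))
curv = poses @ W_true + 0.5
W, b = fit_curvature_model(poses, curv)
print(np.allclose(predict_curvature(W, b, poses[0]), curv[0]))  # True
```

The offline fit is the expensive step; at runtime, predicting curvatures for one pose is a single small matrix-vector product per frame.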

Mobile Augmented Reality Activities
Radek Grzeszczuk
Senior Scientist, Nokia

We summarize recent research activities in mobile augmented reality conducted at the Nokia Research Center in Palo Alto. We present a system for robust image-based retrieval for cell phones, and describe a number of applications built using the system, including visual search and landmark-based pedestrian navigation.

DirectCam: A Gestural System for Animatic Creation
Noah Lockwood
PhD Candidate, Dynamic Graphics Project, University of Toronto

We present a system that uses optical tracking of hand postures and gestures to allow users to create animatics on large-scale displays, supporting interactive design of rough animation.

The Motion Lab at UC Davis
Michael Neff
Assistant Professor, University of California at Davis

The Motion Lab is the focal point at UC Davis for computer animation and 3D interaction research. With one foot in Computer Science and one in Technocultural Studies, the lab bridges the art and technology communities, conducting computer science research, collaborating on dance productions, and working with researchers ranging from geologists to movement analysts. The lab is also part of the larger Davis computer graphics research community, consisting of seven faculty centred in the Institute for Data Analysis and Visualization.

Discovering Communities in Online Collaborative Environments
Alvin Chin
PhD Candidate, Interactive Media Lab, University of Toronto

A novel methodology and framework for identifying communities is described, based on cohesive subgroups, node connectivity measures, and similarity in subgroupings between adjacent time periods. We present and contrast results from case studies involving a video sharing site, an online discussion group, and a social networking site.

Emotional Response as a Measure of Human Performance
Danielle Lottridge
PhD Candidate, Interactive Media Lab, University of Toronto

Emotional reactions are a key part of the user experience, and are particularly of interest to the design of systems that consider user emotions. This dissertation studies methods of measuring emotional responses through a novel two-dimensional tool, based on the model of valence and arousal.

Shadowed Relighting of Dynamic Geometry with 1D BRDFs
Derek Nowrouzezahrai
PhD Candidate, Dynamic Graphics Project, University of Toronto

We present a method for synthesizing the dynamic self-occlusion of an articulating character in real-time (> 170 Hz) while incorporating reflection effects from 1D BRDFs under dynamic lighting and view conditions. Per-vertex spherical harmonics visibility vectors are generated on-the-fly using an efficient linear model of the character's pose, and the product of these vectors with circularly symmetric BRDFs is computed in real-time on graphics hardware. We are able to shadow and shade an animating character under dynamic lighting, camera and BRDF settings.
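As a hedged illustration of the general pipeline (not the authors' implementation; all names, shapes, and the toy data here are assumptions), per-vertex spherical-harmonics visibility coefficients can be predicted by a linear model of the pose and shaded with a per-vertex dot product against the light's SH coefficients:

```python
import numpy as np

# Toy dimensions for illustration only.
N_SH = 16      # 4 SH bands -> 16 coefficients
N_VERTS = 3    # a real mesh has thousands of vertices
N_POSE = 2     # low-dimensional pose parameters

rng = np.random.default_rng(1)
# Linear visibility model; in practice this would be learned offline.
basis = rng.standard_normal((N_VERTS, N_SH, N_POSE))
mean_vis = rng.standard_normal((N_VERTS, N_SH))

def predict_visibility(pose):
    """On-the-fly per-vertex SH visibility vectors from a pose vector."""
    return np.tensordot(basis, pose, axes=([2], [0])) + mean_vis  # (N_VERTS, N_SH)

def shade(vis, light_sh):
    """SH double product: shading = <visibility, light> per vertex."""
    return vis @ light_sh  # (N_VERTS,)

pose = np.array([0.3, -0.7])
light = rng.standard_normal(N_SH)
shading = shade(predict_visibility(pose), light)  # one value per vertex
```

Because both steps are small matrix products per vertex, this structure maps naturally onto graphics hardware.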

The Kneed Walker for Human Pose Tracking
Marcus Brubaker
PhD Candidate, University of Toronto

We explore a physically realistic abstraction of biped locomotion. We combine this with a more complex model of human pose to perform tracking using only a single camera.

Usable speech recognition: toward improved access to webcast archives
Cosmin Munteanu
PhD Candidate, University of Toronto

A growing number of webcasts are archived after being delivered live. In the absence of transcripts, users face increased difficulty in performing tasks easily achieved with text documents. The goal of my research is to improve the usefulness and usability of automatically-generated transcripts of webcasts, in particular for lectures and presentations. I achieve this by integrating novel speech recognition techniques aimed specifically at increasing the accuracy of webcast transcriptions with the development of an interactive, collaborative interface that facilitates users' contributions to improving machine-generated transcripts.


Sketching Piecewise Clothoid Curves
James McCrae
MSc Candidate, Dynamic Graphics Project, University of Toronto

We present a novel approach to sketching 2D curves with minimally varying curvature as piecewise clothoids. A stable and efficient algorithm fits a sketched piecewise linear curve using a number of clothoid segments with G2 continuity based on a specified error tolerance. Our formulation is ideally suited to conceptual design applications where aesthetic fairness of the sketched curve takes precedence over the precise interpolation of geometric constraints. We show the effectiveness of our results within a system for sketch-based road and robot-vehicle path design, where clothoids are already widely used.
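The fitting algorithm itself is beyond a short sketch, but the underlying primitive is easy to state: a clothoid is a curve whose curvature varies linearly with arclength. A minimal numerical sampler, with our own illustrative names (not the authors' code):

```python
import numpy as np

def sample_clothoid(k0, k1, length, n=256):
    """Sample a clothoid whose curvature varies linearly from k0 at
    arclength 0 to k1 at arclength `length`. Returns (n, 2) points,
    starting at the origin with an initially horizontal tangent."""
    s = np.linspace(0.0, length, n)
    kappa = k0 + (k1 - k0) * s / length          # linear curvature profile
    ds = np.diff(s)
    # Tangent angle theta(s) is the integral of curvature (trapezoidal rule).
    theta = np.concatenate([[0.0], np.cumsum(0.5 * (kappa[:-1] + kappa[1:]) * ds)])
    # Integrate the unit tangent (cos theta, sin theta) to get positions.
    x = np.concatenate([[0.0], np.cumsum(0.5 * (np.cos(theta[:-1]) + np.cos(theta[1:])) * ds)])
    y = np.concatenate([[0.0], np.cumsum(0.5 * (np.sin(theta[:-1]) + np.sin(theta[1:])) * ds)])
    return np.column_stack([x, y])

pts = sample_clothoid(0.0, 0.0, 2.0)   # zero curvature: a straight segment
print(np.allclose(pts[-1], [2.0, 0.0]))  # True
```

Line segments (zero curvature) and circular arcs (constant curvature) are special cases, which is one reason clothoids splice so cleanly into road and path design.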

Handheld Projector Interaction
Xiang Cao
PhD Candidate, Dynamic Graphics Project, University of Toronto

The recent trend towards miniaturization of projection technology indicates that handheld devices will soon have the ability to project information onto any surface, thus enabling interfaces that are not possible with current handhelds. We demonstrate a research prototype of interacting with information spaces embedded in a physical environment using a handheld projector and a passive pen.

One Laptop per Child (OLPC)
Patrick Dubroy
MSc Candidate, Dynamic Graphics Project, University of Toronto

The XO-1 laptop is the first product from the One Laptop per Child (OLPC) association. I will provide a guided tour of this fun new gadget and highlight unique aspects of the Sugar UI. Come prepared to get your hands dirty.

ShapeShop
Ryan Schmidt
PhD Candidate, Dynamic Graphics Project, University of Toronto

ShapeShop is sketch-based 3D design software intended to be both easier to use than existing tools, and more expressive. ShapeShop incorporates research in volumetric implicit modeling, dynamic surface parameterization, layered surfaces, gestural 3D manipulation, and crossing interfaces. Freely available on the internet, ShapeShop is in use today by a wide range of users, from traditional artists to elementary school teachers.

Staggered Poses: A Motion Representation for Character Animation
Patrick Coleman
PhD Candidate, Dynamic Graphics Project, University of Toronto

The coordinated timing among body parts is an important part of articulated character motion. We present a model of motion--staggered poses--which explicitly represents these timing relationships, allowing us to design new motion editing approaches that preserve or emphasize such coordination. Our approaches apply both to new keyframe-based motion and to dense motion data, such as recorded motion, by reverse-engineering the data to fit our model.

Motion Quickview: Interactive Visualization of Character Motion
Patrick Coleman
PhD Candidate, Dynamic Graphics Project, University of Toronto

We describe a system for interactively exploring long motions using a combination of automatically generated cameras and interaction tools that provide both local and global temporal navigation. Our camera generation algorithms model idealized handheld cameras with preferred vantage points and view directions, from which we create either global, static views or local, animated views. We then map mouse motion to either local time changes that cause important objects to follow the mouse or global time changes that provide random access to the entire time domain of the motion.

BumpTop - Rethink your Desktop
Anand Agarawala
Founder, BumpTop

BumpTop is a fresh and engaging new way to interact with your computer desktop. You can pile and toss documents like on a real desk, interacting by pushing, pulling and piling documents with elegant, self-revealing gestures. BumpTop's interface makes use of 3D presentation and smooth physics-based animations for an engaging, vivid user experience. Featured in the New York Times, TED, PC World and Digg, among others. http://bumptop.com

LumaPix::FotoFusion - high-end interactivity for a consumer audience
Michael Sheasby
Founder, LumaPix

LumaPix is a Montreal company started by DGP alumnus Michael Sheasby. LumaPix::FotoFusion is a powerful consumer tool for image layout whose user interface design reflects the team's experience in designing high-performance media production tools, gathered while working at Softimage/Microsoft. Michael will sketch the LumaPix company history and demonstrate the application.

Global IP Video
Dave Abrams
CTO, TrueSentry

Navigate the globe with 3D mapping (Google Earth); view remote live cameras and video panoramas from megapixel cameras; use optical zoom to control pan/tilt cameras; and share pictures with a community of users. See computer vision used to analyze surveillance cameras, integrated into a command-and-control system used by 911 centers, with notification and collaboration tools that let users manage incidents.

Polytrim
Prof. John Danahy
Co-Director, Centre for Landscape Research, University of Toronto

Polytrim, 20 years old this term, is a real-time CAD-GIS research testbed for landscape architectural design, built as part of the visualization work that began when I was working at DGP with Alain Fournier, John Amanatides, Ron Baecker and Bill Buxton. Polytrim is used for community visualization projects and teaching.

NECTAR, the Network for Effective Collaboration Technologies through Advanced Research
Annette Mayer
Network Manager, NECTAR, University of Toronto

NECTAR is a network of Canada's leading researchers in human-computer interaction (HCI) and computer-supported cooperative work (CSCW). Led by principal investigator Professor Ron Baecker, its 12 participants come from 6 universities across Canada (Toronto, Saskatchewan, Calgary, British Columbia, Dalhousie and Queen's) and are world-class experts in these two fields. Its focus is investigating technological and social issues to make computer-supported collaboration more efficient, productive and natural. NECTAR is funded by CDN $4.4M from NSERC, Canada's Natural Sciences and Engineering Research Council, and $1.1M from industry. Major sponsors include Microsoft and Smart Technologies.

Real World Interactive 3D Design Applications
Stephen Bohus
Lead 3D Programmer, Parallel World Labs

This demo will highlight various projects, including research from the University of Toronto's Centre for Landscape Research (CLR), virtual modeling of the University of Toronto campus, HCI work done at Siemens' telecom division, and work done at Immersion Studios and Parallel World Labs.

ePresence Interactive Media
Rhys Causey and Christian Damianidis
Knowledge Media Design Institute, University of Toronto

ePresence Interactive Media, http://epresence.tv, is the world's first open source webcasting, conferencing and publishing solution. It is designed to support conferences, online meetings, seminars, and demonstrations by broadcasting them live over the internet, or making them available as on-demand webcasts. If you're interested in online multimedia communications or open source software then you won't want to miss this demo.

Technology for the Elderly to Support Cognition
Masashi Crete-Nishihata
Research Assistant, Knowledge Media Design Institute, University of Toronto
Kent Fenwick
MSc Candidate, Dynamic Graphics Project, University of Toronto

The next 25 years will see significant aging in our population and a tripling in the prevalence of cognitive challenges caused by afflictions such as Alzheimer's disease (AD) and mild cognitive impairment (MCI). As advances in mobile, ubiquitous, and multimedia computing allow us to create powerful new aids to cognition, we present design projects for cognitive prostheses.

Our current work is based on results from a three-year grant (supported by the Alzheimer's association and Intel Corporation) on the participatory design of DVD-based multimedia biographies for AD and MCI individuals and their families. We are also working on four other projects in more preliminary stages.

The first project (supported by NSERC) is developing collaborative tools for information management among individuals with amnesia and their caregivers. The second (supported by MSR Cambridge) compares and contrasts two different media capture methods for stimulating recall and reminiscence of daily experiences among individuals with mild AD. The third (supported by BUL) is designing a context-aware cell phone to aid individuals with MCI who have difficulty remembering names. The fourth is developing a gaming web site to support cognitive and social stimulation for seniors, and rigorous prospective studies on the effectiveness of promising mental fitness regimens.

The Mobile Sensing Platform
Beverly Harrison
Senior Scientist, Intel Corporation

The Mobile Sensing Platform is a pager-sized hardware and software platform (publicly available) that enables real time, embedded processing of sensor traces, activity inference, speech processing, etc. http://seattle.intel-research.net/MSP

UbiFit
Beverly Harrison
Senior Scientist, Intel Corporation

UbiFit is a mobile application exploring how on-body sensing (real time activity inference) and personal displays can encourage people to incorporate physical activity into everyday life.


Additional Information

If you have any questions regarding the poster/demo session, please feel free to directly contact Martin de Lasa or Delia Couto.