Education

  • 2013 – 2017

    Ph.D. in Media Design

    Graduate School of Media Design, Keio University

  • 2011 – 2013

    Master of Media Design

    Graduate School of Media Design, Keio University

  • 2006 – 2010

    Bachelor of Applied Science in Computer Engineering

    University of British Columbia

Work

  • 2019.3 – Present

    Huawei Technologies Canada

    HCI Researcher

  • 2018.4 – 2019.2

    wrnchAI

    Deep Learning Engineer

  • 2017.4 – 2018.3

    AIST Digital Human Research Center

    Postdoctoral Researcher

  • 2016.4 – 2017.3

    AIST Digital Human Research Center

    Research Assistant

  • 2015.5 – 2015.11

    Microsoft Research Asia

    Research Intern

  • 2013.11 – 2014.1

    Singapore University of Technology and Design

    Research Intern

  • 2012.4 – 2013.4

    RIKEN Brain Science Institute

    Research Assistant

News

  • May 2020
    Our paper, "Tent Mode Interactions: Exploring Collocated Multi-User Interaction on a Foldable Device", has been accepted at MobileHCI 2020.
  • Mar. 2019
    Started an HCI researcher position at Huawei Technologies Canada.
  • Apr. 2018
    Started a deep learning engineer position at wrnchAI.
  • Oct. 2017
    Journal paper published in Augmented Human Research.
    "Multi-Embodiment of Digital Humans in Virtual Reality for Assisting Human-Centered Ergonomics Design" [Link]
  • Apr. 2017
    Started a postdoctoral researcher position at AIST Digital Human Research Center. [Link]
  • Mar. 2017
    Received Ph.D. in Media Design from the Graduate School of Media Design, Keio University.
  • Aug. 2016
    Talk at Digital Human Technology Consortium. [Link]
  • Jul. 2016
    Demo at SIGGRAPH 2016 VR Village.
    "VR Planet: Interface for Meta-View and Feet Interaction of VR Contents"
  • Apr. 2016
    Started a research assistant position at AIST Digital Human Research Center. [Link]
  • Feb. 2016
    Demo at AH 2016.
    "Electrosmog Visualization through Augmented Blurry Vision"
  • May 2015
    Started an internship with the HCI group at Microsoft Research Asia.
  • Oct. 2014
    Awarded "Microsoft Research Asia Fellowship" by MSRA. [Link]
  • Aug. 2014
    Poster presentation at SIGGRAPH 2014.
    "Ubiquitous Substitutional Reality: Re-Experiencing the Past in Immersion"
  • Mar. 2014
    Paper presentation at AH 2014.
    "SpiderVision: Extending the Human Field of View for Augmented Awareness"
  • Nov. 2013
    Started an internship at the Augmented Senses group at SUTD. [Link]
  • Nov. 2013
    Demo at SIGGRAPH Asia 2013 Emerging Technologies.
    "Cuddly: enchant your soft objects with a mobile phone"
  • Sep. 2013
    Received Master's degree in Media Design from the Graduate School of Media Design, Keio University.
  • Apr. 2013
    Paper presentation at CHI 2013. [Talk]
    "Reality Jockey: Lifting the Barrier between Alternate Realities through Audio and Haptic Feedback"
  • Mar. 2013
    Awarded "Promising Young Researcher" award by VRSJ. [Link]
    "To Confuse the Perception of Reality through Mixing the Past with Audio and Haptic Feedback"
  • Aug. 2012
    MIRAGE, a performance art piece with the Substitutional Reality system.
  • Apr. 2012
    Started a research assistant position at RIKEN Adaptive Intelligence Lab.

Research Projects

  • Tent Mode Interactions

    Exploring Collocated Multi-User Interaction on a Foldable Device

    Foldable handheld displays have the potential to offer a rich interaction space for collocated multi-user interaction, particularly when folded into a convex form factor. In this paper, we explore Tent mode, a convex configuration of a foldable device partitioned into a primary and a secondary display, as well as a tertiary Edge display that sits at the intersection of the two. We specifically explore the design space for a wide range of scenarios, such as co-browsing a gallery or co-planning a trip. Through a first set of interviews, end-users identified a suite of apps that could leverage Tent mode for multi-user interaction. Based on these results, we propose an interaction design space that builds on unique Tent mode properties, such as folding, flattening, or tilting the device, and on the interplay between the three sub-displays. Through a user study, we examine how end-users exploit this rich interaction space when presented with a set of collaborative tasks, and we elicit potential interaction techniques. We implemented these techniques and report on the preliminary user feedback we collected. Finally, we discuss the design implications for collocated interaction in Tent mode configurations. (A brief illustrative sketch of Tent mode detection follows the publication list below.)


    Related Publications

    Tent Mode Interactions: Exploring Collocated Multi-User Interaction on a Foldable Device

    Gazelle Saniee-Monfared, Kevin Fan, Qiang Xu, Sachi Mizobuchi, Lewis Zhou, Pourang Irani, Wei Li.
    Conference Paper In Proc. MobileHCI 2020. ACM, 12 pages.
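
    The interplay above depends on the device knowing when it sits in Tent mode. As a loose illustration (not from the paper), the sketch below classifies the posture from a hinge-angle reading and routes content to the three sub-displays; the angle range, names, and content choices are all assumptions.

    ```python
    # Hypothetical sketch: classify a foldable's posture from its hinge angle
    # and route content to the primary, secondary, and Edge displays.
    # TENT_MIN/TENT_MAX and the content strings are illustrative assumptions.

    from dataclasses import dataclass

    TENT_MIN, TENT_MAX = 60.0, 160.0  # assumed hinge-angle range for Tent mode (degrees)

    @dataclass
    class DisplayLayout:
        primary: str    # content facing the device owner
        secondary: str  # content facing the collocated user
        edge: str       # shared controls on the fold itself

    def layout_for_hinge(angle_deg: float) -> DisplayLayout:
        """Pick a layout from the fold angle reported by the hinge sensor."""
        if TENT_MIN <= angle_deg <= TENT_MAX:
            # Convex (Tent) configuration: partition into three sub-displays.
            return DisplayLayout("gallery_grid", "gallery_fullscreen", "shared_toolbar")
        # Flat configuration: a single continuous display.
        return DisplayLayout("fullscreen", "off", "off")

    print(layout_for_hinge(120.0))  # Tent mode -> three sub-displays
    ```
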
  • Multi-Embodiment

    of Digital Humans in Virtual Reality for Assisting Human-Centered Ergonomics Design

    We present a multi-embodiment interface aimed at assisting human-centered ergonomics design, where traditionally the design process is hindered by the need to recruit diverse users or to rely on disembodied simulations when designing for most groups of the population. The multi-embodiment solution actively embodies the user in the design and evaluation process in virtual reality while simultaneously superimposing additional simulated virtual bodies on the user's own body. The superimposed body acts as the target and enables simultaneous anthropometric ergonomics evaluation for both the user's self and the target. Both the self and target virtual bodies are generated using digital human modeling from statistical data; the self-body is animated via motion capture, while the target body is moved using a weighted inverse kinematics approach with end effectors on the hands and feet (a minimal solver sketch follows the publication list below). We conducted user studies evaluating human ergonomics design in five virtual reality scenarios, comparing multi-embodiment with single embodiment. Similar evaluations were repeated in the physical environment after the virtual reality evaluations to explore the post-VR influence of the different virtual experiences.


    Related Publications

    Multi-Embodiment of Digital Humans in Virtual Reality for Assisting Human-Centered Ergonomics Design

    Kevin Fan, Akihiko Murai, Natsuki Miyata, Yuta Sugiura, Mitsunori Tada.
    Journal Paper In Augmented Human Research 2017. Springer, Volume 2, Article 7, 14 pages.
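
    To make the weighted inverse-kinematics idea concrete, here is a minimal sketch of a damped-least-squares IK step for a two-joint planar limb, with a per-effector weight on the task-space error. This is a generic illustration of the approach, not the paper's actual solver; the link lengths, damping, and weight are assumed values.

    ```python
    # Minimal weighted damped-least-squares IK sketch for a 2-joint planar limb.
    # Illustrative only: values and structure are assumptions, not the paper's solver.

    import numpy as np

    L1, L2 = 0.35, 0.30  # assumed link lengths (m)

    def fk(q):
        """Forward kinematics: joint angles -> end-effector position."""
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def jacobian(q):
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                         [ L1 * c1 + L2 * c12,  L2 * c12]])

    def ik_step(q, target, weight=1.0, damping=0.1):
        """One weighted damped-least-squares update toward the target."""
        e = weight * (target - fk(q))  # weighted task-space error
        J = jacobian(q)
        dq = J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), e)
        return q + dq

    q = np.array([0.3, 0.6])
    target = np.array([0.4, 0.3])
    for _ in range(50):
        q = ik_step(q, target)
    print(fk(q), "~", target)  # end effector converges onto the target
    ```
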
  • VR Planet

    Interface for Meta-View and Feet Interaction of VR Contents

    The emergence of head-mounted displays (HMDs) has enabled us to experience virtual environments in an immersive manner. At the same time, omnidirectional cameras, which capture real-life environments in all 360-degree angles as either still images or motion video, are also gaining attention. Using HMDs, we can view these captured omnidirectional images in immersion, as though we are actually "being there". However, as a requirement for immersion, our view of these omnidirectional images in the HMD is usually presented as a first-person view and limited by our natural field of view (FOV); i.e., we only see the fraction of the environment that we are facing, while the rest of the 360-degree environment is hidden from view. This is even more problematic in telexistence situations where the scene is live, so setting a default facing direction for the HMD is impractical. We can often observe people wearing HMDs turn their heads frantically, trying to locate interesting occurrences in the omnidirectional environment they are viewing.


    Related Publications

    VR Planet: Interface for Meta-View and Feet Interaction of VR Contents

    Kevin Fan, Liwei Chan, Daiya Kato, Kouta Minamizawa, and Masahiko Inami.
    Conference Demo In ACM SIGGRAPH 2016 VR Village (SIGGRAPH '16). ACM, Article 24, 2 pages.
  • AnyOrbit

    Fluid 6DOF Spatial Navigation of Virtual Environments using Orbital Motion

    Emerging media technologies such as 3D film and head-mounted displays (HMDs) call for new types of spatial interaction. Here we describe and evaluate AnyOrbit: a novel orbital navigation technique that enables flexible and intuitive 3D spatial navigation in virtual environments (VEs). Unlike existing orbital methods, we exploit toroidal rather than spherical orbital surfaces, which allow independent control of orbital curvature in vertical and horizontal directions. This control enables intuitive and smooth orbital navigation between any desired orbital centers and between any vantage points within VEs. AnyOrbit leverages our proprioceptive sense of rotation to enable navigation in VEs without inconvenient external motion trackers. In user studies, we demonstrate that within a sports spectating context, the technique allows smooth shifts in perspective at a rate comparable to broadcast sport, is fast to learn, and is without excessive simulator sickness in most users. The technique is widely applicable to gaming, computer-aided-design (CAD), data visualisation, and telepresence. (A minimal sketch of the toroidal orbit parametrization follows the publication list below.)


    Related Publications

    AnyOrbit: Fluid 6DOF Spatial Navigation of Virtual Environments using Orbital Motion

    Benjamin I Outram, Yun Suen Pai, Kevin Fan, Kouta Minamizawa, Kai Kunze.
    Conference Poster In Proc. 2016 Symposium on Spatial User Interaction (SUI '16). ACM, 1 page.
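
    As a worked illustration of the toroidal idea, the sketch below places a camera on a torus around an orbital center: R sets the horizontal orbit radius and r the vertical one, so the two curvatures can be tuned independently. The parametrization and symbol names are our assumptions, not necessarily the paper's formulation.

    ```python
    # Camera position on a toroidal orbit surface (illustrative parametrization):
    # P(theta, phi) = C + ((R + r*cos(phi))*cos(theta), r*sin(phi), (R + r*cos(phi))*sin(theta))

    import math

    def orbit_position(center, R, r, theta, phi):
        """Camera position on a torus around `center`.
        theta: horizontal orbit angle, phi: vertical orbit angle (radians)."""
        cx, cy, cz = center
        ring = R + r * math.cos(phi)  # radius of the horizontal circle at this phi
        return (cx + ring * math.cos(theta),
                cy + r * math.sin(phi),
                cz + ring * math.sin(theta))

    # Horizontal sweep keeps height fixed; vertical sweep arcs over the target.
    print(orbit_position((0, 0, 0), R=5.0, r=2.0, theta=0.0, phi=0.0))
    print(orbit_position((0, 0, 0), R=5.0, r=2.0, theta=math.pi / 2, phi=math.pi / 4))
    ```
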
  • Electrosmog Visualization

    through Augmented Blurry Vision

    Electrosmog is the electromagnetic radiation emitted by wireless technology such as Wi-Fi hotspots or cellular towers, and it poses a potential hazard to humans. Electrosmog is invisible, and we rely on detectors that report electrosmog levels as abstract warnings such as numbers. Our system detects the electrosmog level from the number of Wi-Fi networks and from the connected cellular towers and their signal strengths, and shows it in an intuitive representation by blurring the vision of users wearing a head-mounted display (HMD). The HMD displays, in real time, the users' augmented surrounding environment with blurriness, as though the electrosmog actually clouds the environment. For demonstration, participants can walk around in a video-see-through HMD and observe their vision gradually blur as they approach our prepared dense wireless network. (A minimal blur-mapping sketch follows the publication list below.)


    Related Publications

    Electrosmog Visualization through Augmented Blurry Vision

    Kevin Fan, Jean-Marc Seigneur, Suranga Nanayakkara, and Masahiko Inami.
    Conference Demo In Proc. 7th Augmented Human International Conference (AH '16). ACM, Article 35, 2 pages.
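
    A minimal sketch of the blur mapping, assuming the electrosmog estimate is simply the count of visible Wi-Fi networks and using OpenCV's Gaussian blur; the scaling constants are illustrative, not the system's actual calibration.

    ```python
    # Hypothetical mapping: more visible networks -> stronger blur on the see-through view.

    import cv2
    import numpy as np

    def blur_for_networks(frame: np.ndarray, n_networks: int) -> np.ndarray:
        sigma = min(0.8 * n_networks, 12.0)  # cap so the scene stays recognizable
        if sigma <= 0:
            return frame
        k = int(2 * round(3 * sigma) + 1)    # odd kernel size covering ~3 sigma
        return cv2.GaussianBlur(frame, (k, k), sigma)

    frame = np.full((240, 320, 3), 128, dtype=np.uint8)  # stand-in camera frame
    blurred = blur_for_networks(frame, n_networks=8)
    ```
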
  • Ubiquitous Substitutional Reality

    Re-experiencing the Past in Immersion

    We propose an immersive Substitutional Reality (SR) system that enables users to experience alternate realities as they walk around in the live reality's environment, by substituting or blending in pre-recorded realities. SR is fundamentally a concept of presenting realities from a time other than the live reality through seamless transitions, so that users perceive one coherent experience. Suzuki et al. constructed alternate realities from pre-recorded panoramic video of the past, aiming to study the brain's behavior when people perceive the past reality as happening before their eyes and interact within their SR system. We install our system in the home environment so as to provide an immersive way for people to record and re-experience their treasured memories. To achieve seamless transitions, we integrate sensors into the furniture that sense the users' interactions, so transitions can be implicitly triggered as users naturally interact with the furniture (a minimal trigger sketch follows the publication list below). As the sensor interfaces are integrated invisibly with the furniture and the experience occurs naturally upon interaction, we consider this a kind of ubiquitous experience.

    Related Publications

    Ubiquitous Substitutional Reality: Re-Experiencing the Past in Immersion

    Kevin Fan, Yuta Sugiura, Kouta Minamizawa, Sohei Wakisaka, Masahiko Inami, and Naotaka Fujii.
    Conference Poster In ACM SIGGRAPH 2014 Posters (SIGGRAPH '14). ACM, 1 page.
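
    A minimal sketch of the implicit trigger, assuming a pressure sensor in a chair and a fixed-duration crossfade between the live feed and a recorded reality; the sensor, threshold, and transition time are all assumptions.

    ```python
    # Hypothetical implicit trigger: sitting down eases the view toward the past reality.

    SIT_THRESHOLD = 200.0  # assumed raw pressure reading meaning "seated"

    class RealityMixer:
        def __init__(self):
            self.alpha = 0.0  # 0.0 = fully live, 1.0 = fully recorded past

        def update(self, chair_pressure: float, dt: float) -> float:
            """Ease toward the past reality while the chair is occupied."""
            target = 1.0 if chair_pressure > SIT_THRESHOLD else 0.0
            step = dt / 2.0  # assumed 2-second seamless transition
            self.alpha += max(-step, min(step, target - self.alpha))
            return self.alpha  # blend weight: past*alpha + live*(1 - alpha)

    mixer = RealityMixer()
    for _ in range(6):
        print(mixer.update(chair_pressure=350.0, dt=0.5))  # ramps 0.25 ... 1.0
    ```
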
  • SpiderVision

    Extending the Human Field of View for Augmented Awareness

    We present SpiderVision, a wearable device that extends the human field of view to augment a user's awareness of things happening behind their back. SpiderVision leverages a front and a back camera to enable users to focus on the front view while employing intelligent interface techniques to cue the user about activity in the back view. The extended back view is only blended in when the scene captured by the back camera is analyzed to be dynamically changing, e.g. due to object movement (a minimal change-detection sketch follows the publication list below). We explore factors that affect the blended extension, such as view abstraction and blending area. We contribute results of a user study that explores 1) whether users can perceive the extended field of view effectively, and 2) whether the extended field of view is considered a distraction. Quantitative analysis of the users' performance and qualitative observations of how users perceive the visual augmentation are described.


    Related Publications

    SpiderVision: Extending the Human Field of View for Augmented Awareness

    Kevin Fan, Jochen Huber, Suranga Nanayakkara, and Masahiko Inami.
    Conference Paper In Proc. 5th Augmented Human International Conference (AH '14). ACM, 8 pages.
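
    A minimal sketch of the "blend only when changing" rule, using plain frame differencing on the back camera with OpenCV; the motion threshold and blending curve are assumptions, and the paper's actual change analysis may differ.

    ```python
    # Hypothetical change detection: fade the back view in only when it is moving.

    import cv2
    import numpy as np

    MOTION_THRESHOLD = 2.0  # assumed mean absolute gray-level difference

    def blend_views(front, back, prev_back_gray):
        """Overlay the back view onto the front view only when it is changing."""
        back_gray = cv2.cvtColor(back, cv2.COLOR_BGR2GRAY)
        if prev_back_gray is None:
            return front, back_gray  # first frame: nothing to compare against
        motion = float(np.mean(cv2.absdiff(back_gray, prev_back_gray)))
        if motion > MOTION_THRESHOLD:
            alpha = min(motion / 20.0, 0.5)  # fade in proportionally to motion
            out = cv2.addWeighted(front, 1.0 - alpha, back, alpha, 0.0)
        else:
            out = front
        return out, back_gray

    front = np.zeros((240, 320, 3), np.uint8)
    back = np.full((240, 320, 3), 90, np.uint8)
    out, prev = blend_views(front, back, None)       # no blend yet
    out, prev = blend_views(front, back + 40, prev)  # motion -> back view blended in
    ```
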
  • Cuddly

    Enchant Your Soft Objects with a Mobile Phone

    Cuddly is a mobile phone application that enchants soft objects to enhance humans' interaction with them. Cuddly utilizes the mobile phone's camera and flashlight (LED) to measure the surrounding brightness captured by the camera. When one embeds Cuddly in a soft object and compresses the object, the brightness level captured by the camera decreases. Using the measured change in brightness, we can implement diverse entertainment applications that exploit the phone's built-in functions, such as animation, sound, and Bluetooth communication (a minimal sensing sketch follows the publication list below). For example, we created a boxing game by connecting two devices through Bluetooth, with one device inserted into a soft object and the other acting as a screen.


    Related Publications

    Cuddly: Enchant Your Soft Objects with a Mobile Phone

    Suzanne Low, Yuta Sugiura, Kevin Fan, and Masahiko Inami.
    Conference Demo In SIGGRAPH Asia 2013 Emerging Technologies (SA '13). ACM, Article 5, 2 pages.

    Cuddly: Enchant Your Soft Objects with a Mobile Phone

    Suzanne Low, Yuta Sugiura, Kevin Fan, and Masahiko Inami.
    Conference Paper In Proc. Advances in Computer Entertainment 2013 (ACE '13). 138-151.
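
    A minimal sketch of the sensing principle, assuming the phone's LED is lit inside the object and grayscale camera frames are available; the baseline calibration and scaling are illustrative, not Cuddly's actual implementation.

    ```python
    # Hypothetical squeeze sensing: compressing the object dims the camera image,
    # so the drop in mean brightness approximates squeeze strength.

    import numpy as np

    def squeeze_level(frame_gray: np.ndarray, baseline: float) -> float:
        """0.0 = not squeezed; 1.0 = fully compressed (brightness near zero)."""
        brightness = float(frame_gray.mean())
        return max(0.0, min(1.0, (baseline - brightness) / baseline))

    baseline = 180.0                                    # calibrated while relaxed
    relaxed = np.full((120, 160), 180, dtype=np.uint8)  # stand-in camera frames
    squeezed = np.full((120, 160), 60, dtype=np.uint8)
    print(squeeze_level(relaxed, baseline))   # ~0.0
    print(squeeze_level(squeezed, baseline))  # ~0.67
    ```
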
  • Reality Jockey

    Lifting the Barrier between Alternate Realities through Audio and Haptic Feedback

    We present Reality Jockey, a system that confuses the participant's perception of reality by mixing in a recorded past reality. The participant is immersed in a spatialized 3D sound environment that mixes sounds from the live reality with sounds from the past. The past sound environment is augmented with cross-modal haptic feedback, associated with certain sounds, such as a vibration in the table when an object is placed on it, to create the illusion that the event is happening live (a minimal mixing sketch follows the publication list below). The seamless transition between live and past creates an immersive experience of past events, and the blending of live and past allows interactivity. To validate our system, we conducted user studies on 1) whether blending live sensations improves such experiences, and 2) how beneficial it is to provide haptic feedback in recorded pasts. Potential applications are suggested to illustrate the significance of Reality Jockey.

    Related Publications

    Reality Jockey: Lifting the Barrier between Alternate Realities through Audio and Haptic Feedback

    Kevin Fan, Hideyuki Izumi, Yuta Sugiura, Kouta Minamizawa, Sohei Wakisaka, Masahiko Inami, Naotaka Fujii, and Susumu Tachi.
    Conference Paper In Proc. SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, 2557-2566.
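
    A minimal sketch of the live/past audio mix with a cross-modal haptic cue, assuming block-based audio buffers and a simple amplitude trigger standing in for the system's authored haptic events; the fader, threshold, and trigger rule are all assumptions.

    ```python
    # Hypothetical live/past mixer: a fader blends the two streams, and loud
    # transients in the past stream (e.g. an object set on the table) fire haptics.

    import numpy as np

    HAPTIC_THRESHOLD = 0.6  # assumed normalized amplitude that fires the vibrator

    def mix_block(live: np.ndarray, past: np.ndarray, fader: float):
        """fader: 0.0 = all live, 1.0 = all past. Returns (audio, haptic_on)."""
        audio = (1.0 - fader) * live + fader * past
        haptic_on = fader > 0.0 and float(np.max(np.abs(past))) > HAPTIC_THRESHOLD
        return audio, haptic_on

    live = 0.2 * np.random.randn(512)   # stand-in microphone block
    past = np.zeros(512)
    past[100] = 0.9                     # a transient: something placed on the table
    print(mix_block(live, past, fader=0.8)[1])  # True -> pulse the table's vibrator
    ```
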