RAA #5 – Skinput

1. APA Citation :

Chris Harrison, Desney Tan, and Dan Morris. 2010. Skinput: appropriating the body as an input surface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10). ACM, New York, NY, USA, 453-462. DOI=10.1145/1753326.1753394 http://doi.acm.org/10.1145/1753326.1753394

2. Purpose :

     In this paper, the authors present a technology called ‘Skinput’ – a system that turns the body into an input surface by analyzing the mechanical vibrations that propagate through the body when the user taps the skin (here, the skin of the arm), a process the authors call bio-acoustic signal acquisition.  This device and paper are a valuable addition to wearable computing and finger-input technologies.  The device also opens the door to an always-available system with which we can interact at any time.

3. Methods : 

i. Construction of the Device :

The researchers built a custom device for the ‘Skinput’ system from the following important components :

a. Pico-projector – projects the system’s output for the user to see.

b. Bio-acoustic sensors – detect the vibrations produced by the user’s taps.

c. Armband – houses the bio-acoustic sensors so the user can wear them, and keeps the sensors positioned consistently against the arm.

d. Sample applications – several demonstration applications used for testing the device, the user study, and performance analysis.

e. Audio output device – provides audio feedback.

For clarification, the paper includes a figure showing the input locations and the placement of the bio-acoustic sensors along the arm that detect the user’s taps.
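To make the sensing pipeline concrete, here is a minimal sketch of the kind of classifier the paper describes: features are extracted from windows of multichannel vibration data and fed to a support vector machine that predicts the tap location.  The paper does train an SVM per user, but the specific features, channel count, and synthetic data below are illustrative assumptions, not the authors’ implementation.

```python
# Illustrative sketch (not the authors' code): classify the tap location
# from features computed over the bio-acoustic sensor channels. The
# features and synthetic data here are stand-ins for real recordings.
import numpy as np
from sklearn.svm import SVC

def extract_features(window):
    """window: (n_channels, n_samples) array of vibration samples for one tap."""
    amplitude = np.abs(window).max(axis=1)           # per-channel peak amplitude
    energy = (window ** 2).mean(axis=1)              # per-channel mean energy
    spectrum = np.abs(np.fft.rfft(window, axis=1))   # per-channel magnitude spectrum
    bins = np.arange(spectrum.shape[1])
    centroid = (spectrum * bins).sum(axis=1) / spectrum.sum(axis=1)  # spectral centroid
    return np.concatenate([amplitude, energy, centroid])

# Synthetic training data: 5 tap locations, 20 taps each, 10 sensor channels.
rng = np.random.default_rng(0)
X, y = [], []
for location in range(5):
    for _ in range(20):
        window = rng.normal(scale=1.0 + 0.3 * location, size=(10, 256))
        X.append(extract_features(window))
        y.append(location)

clf = SVC(kernel="linear").fit(np.array(X), y)
test_tap = rng.normal(scale=1.6, size=(10, 256))  # statistically resembles location 2
print(clf.predict([extract_features(test_tap)]))
```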

ii.  Evaluation of the Device :

For evaluating the device, a user study was conducted with 13 participants (6 male, 7 female), aged 20 to 56, with a range of body mass indexes.  (A person’s BMI is an important criterion in this evaluation because it affects how vibrations propagate through the skin, which in turn affects how well the sensors detect the user’s taps.)

Apart from the primary testing locations on the skin (i.e., fingers, whole arm, and forearm), a set of supplemental tests examined the device under realistic environmental conditions and in targeted settings.  The supplemental tests were :

a. Walking / Jogging

b. Single Handed Gestures

c. Surface and Object Recognition

d. Identification of Finger Tap Type

e. Segmenting Finger Input

The first exercise used 1 male and 1 female participant.  For the rest of the experiments, 7 new participants (3 of them female) were invited to test the device.

4. Main Findings :

After the User Study, the following were the main results :

a. The device performed well, with an average accuracy of 87.6% across all experiments.

b. The results were also analyzed against the participants’ body mass index, sex, and age to inform future improvements to the device.

c. Numerically, the device detected inputs from male participants at a higher rate than from female participants.

d. Overall, the supplemental tests showed that environmental conditions and the targeted test settings did not affect the device’s performance.

5. Analysis :

     On every front, this is very good research.  It has the potential to be successful if it is harnessed and used the right way, so that it becomes usable and useful in every dimension.  The authors included several methods for testing the device, which convinces me that the user study was performed well.  The separate analysis of BMI’s impact on the device was also a good touch.

   Personally, though I cannot find any flaws in this paper, I felt it was incremental research rather than a phenomenal game changer.  I chose this paper because it is similar to one of my favorite research articles of all time – SixthSense from the MIT Media Lab.  Though only two years old, the paper already has more than 96 citations in total.  I am interested in following up on this paper’s citations and on the authors’ future research directions.

RAA #4 – Inflatable Touch Display

1. APA Citation :

Andrew Stevenson, Christopher Perez, and Roel Vertegaal. 2011. An inflatable hemispherical multi-touch display. In Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction (TEI ’11). ACM, New York, NY, USA, 289-292. DOI=10.1145/1935701.1935766 http://doi.acm.org/10.1145/1935701.1935766

2. Purpose :

The researchers introduce a multi-touch display surface that can be dynamically deformed from its initial flat state into either a concave or a convex shape.  The display can also be deformed by the amount of pressure a user exerts on the surface.  This research is part of the organic user interfaces field and provides the user with a real-time deformable surface for virtual objects.  It also allows geometric objects to be projected as three-dimensional virtual objects, overcoming a drawback of traditional flat displays.

3. Methods :

The main design philosophy the authors considered before starting to design the product was to unify the input and output interfaces, so as to create a natural, user-oriented organic interface.  From that concept they derived the display’s main interaction property : the shape of the inflatable display should relate to the image being displayed on the device.  For example, if the device is simulating a steel drum application, the surface should deform to give it the drum’s shape.  The user should also be able to deform the display through finger pressure.

Based on these concepts, the authors designed the device using the following components, each for the purpose noted (a sketch of the touch-sensing path follows the list) :

     a. Latex Sheet – acts as the surface of the display, which can inflate and deflate

     b. Infrared LED Strip – floods the display surface with infrared light so touches are visible to the camera

     c. Projector – projects the image onto the mirror, which in turn reflects it onto the display

     d. Internal Pump – supplies air to inflate the display

     e. Mirror – redirects the projected image onto the latex surface

     f. Infrared Camera – senses infrared light to track the user’s touch input

     g. External Pump – lets air out of the device during deflation
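The paper does not give implementation code, but in camera-based multi-touch rigs of this kind, fingers on the surface appear as bright blobs in the infrared camera image.  As a rough sketch of that sensing path (my illustration, with made-up threshold values, not the authors’ pipeline) :

```python
# Illustrative sketch (not the authors' code): fingers touching the
# surface show up as bright blobs in the infrared image. Thresholding
# plus contour detection recovers the touch points.
import cv2
import numpy as np

def find_touches(ir_frame, min_area=30):
    """ir_frame: 8-bit grayscale infrared camera frame.
    Returns a list of (x, y) touch centroids."""
    blurred = cv2.GaussianBlur(ir_frame, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return touches

# Synthetic frame with two bright "finger" blobs for demonstration.
frame = np.zeros((480, 640), np.uint8)
cv2.circle(frame, (100, 200), 8, 255, -1)
cv2.circle(frame, (400, 300), 8, 255, -1)
print(find_touches(frame))  # roughly [(100.0, 200.0), (400.0, 300.0)]
```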

4. Main Findings : 

After building the device, they tested its usability and interaction methods using three applications – Google Earth, a steel pan drum simulator, and the display of medical scans.

a. Google Earth was a perfect choice for testing: the map of the earth is a hemisphere before zooming in, and becomes flat when zoomed in on a specific place.  The device was built to simulate exactly that.

b. The steel pan drum simulator demonstrates negative inflation, i.e., concave deflation of the surface.

c. Finally, medical scans were rendered to show volumetric data, e.g., brain scans, whose layers can be revealed for more detail.

   They stated that the applications performed well, showing all the details and working as designed.  They also mentioned that the device displayed the Z-axis information and handled its inflation/deflation properties properly.

5. Analysis : 

     This is very recent research, published just one year ago, in 2011, and I am sure many papers will follow up on the idea.  Personally, I felt it would have very good applications in the field of 3-D medical scans.  The authors spoke of improving the device in the future with more precise airflow control and further improvements to the touch capability.  I observed two flaws in this paper.  First, the authors could have run user studies to observe how the product is perceived by users.  Second, the use of latex rubber for the display: I worry about how smooth a finger touch feels on a rubber display compared with a glass one, and about the clarity of the rendered images, since they are projected through the rubber.

   I chose this research paper in particular because of the advanced Z-axis displays the humans in the movie Avatar use to explore Pandora.  I was so excited to watch that on screen.  I have no doubt that we will develop that kind of technology someday in the near future.

RAA #3 – Haptic Wristwatch

1. APA Citation :

Jerome Pasquero, Scott J. Stobbe, and Noel Stonehouse. 2011. A haptic wristwatch for eyes-free interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’11). ACM, New York, NY, USA, 3257-3266. DOI=10.1145/1978942.1979425 http://doi.acm.org/10.1145/1978942.1979425

2. Purpose :

     The researchers present a haptic wristwatch that can acquire information from a mobile device paired with it through Bluetooth.  The device provides feedback by notifying users of certain events using haptic stimuli on the wrist.  Currently, the device performs various actions like interpreting notifications, changing the ringing profile, or consulting a profile.  The authors’ main inspiration for creating the device was the everyday human ability to operate certain things without looking at them, like door knobs, switches, etc.

3. Methods :

     The authors built the wristwatch using a custom-made piezoelectric actuator combined with a few sensors to create a natural gesture-based interface.  The device belongs to the wearable computing family and uses an eyes-free interaction method, responding to users’ gestures.  It acts as a proxy between the user and the mobile device to which it is connected.

   To develop the gesture set for the wristwatch, the authors identified and filmed day-to-day interactions with a watch.  The gestures were analyzed in a brainstorming session between the authors and a dozen expert mobile users.  From that large set, they eliminated gestures lasting more than 5 seconds and classified the remaining gestures into 3 main groups (a sketch of how such a gesture-to-action mapping might be organized follows the list), namely :

   a. Reactive Gestures – made in response to a notification (e.g., mute a phone call, snooze a calendar alert)

   b. Control Gestures – made to control the device (e.g., change the ringing profile, switch tracks)

   c. Query Gestures – made to request information from the device (e.g., the number of unread e-mails)
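The paper does not describe the software side in code, but one natural way to organize these three categories is a dispatch table from recognized gestures to actions on the paired phone.  A minimal sketch, with hypothetical gesture names and actions :

```python
# Illustrative sketch (not the authors' code): a dispatch table mapping
# recognized wristwatch gestures to actions on the paired phone. The
# gesture names and actions are made-up examples.
from enum import Enum

class Category(Enum):
    REACTIVE = "reactive"   # responds to a notification
    CONTROL = "control"     # controls the paired device
    QUERY = "query"         # requests information

GESTURE_TABLE = {
    # gesture name: (category, action description)
    "cover_watch_face": (Category.REACTIVE, "mute incoming call"),
    "double_tap":       (Category.CONTROL,  "switch audio track"),
    "swipe_across":     (Category.QUERY,    "report unread e-mail count"),
}

def handle_gesture(name):
    category, action = GESTURE_TABLE[name]
    print(f"{name}: {category.value} gesture -> {action}")

handle_gesture("cover_watch_face")  # reactive gesture -> mute incoming call
```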

   After conceiving the device and how it works, the authors tested it in targeted experiments with 3 different groups of people, one for each gesture type (reactive, control, and query), to collect information from varied sources.  To qualitatively validate the product in a realistic mobile environment, the watch was then presented to a small set of users (the number was not mentioned) with an introduction to how the gestures work with the watch.

4. Main Findings :

     The main findings of the user study experiment were :

   a. Users were comfortable using the gesture set with the watch to communicate with their mobile device.

   b. Users reported that the interaction with their mobile device was intuitive and comfortable.

   c. A few users reported that their shirt sleeves invoked gestures by touching the wristwatch, in turn controlling the mobile device unintentionally.

   d. One user was able to hear the actuator of the watch moving while going about his activity in a very quiet environment.

   e. Still, most users were happy controlling the mobile device using the watch.

5. Analysis : 

     Personally, I felt this research work is a breakthrough in the way it contributes to both bodies of knowledge – eyes-free tactile interaction and wearable computing.  Using 2 rounds of user studies was a very smart approach: the authors addressed the issues faced by the first set of users by improving the device before presenting it for the final user study.  However, the test environment, the set of users, and their backgrounds would have been worth mentioning in this research paper.

   I was interested in this article because of Sony’s SmartWatch, which was released back in May this year; I liked the idea and chose this paper for analysis solely because of that interest.  Now I am interested in knowing much more about the SmartWatch’s development stages.  I also think it was not much of a hit among the crowd – most people do not even know it exists.  I want to know whether it failed because of bad hardware or just because of bad marketing strategy.  It has even received comments like “may be the worst thing Sony has ever made” on the technology blog Gizmodo.

RAA #2 – Reality Based Interaction

1. APA Citation :

Robert J.K. Jacob, Audrey Girouard, Leanne M. Hirshfield, Michael S. Horn, Orit Shaer, Erin Treacy Solovey, and Jamie Zigelbaum. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems (CHI ’08). ACM, New York, NY, USA, 201-210. DOI=10.1145/1357054.1357089 http://doi.acm.org/10.1145/1357054.1357089

2. Purpose : 

     In this paper, the authors propose the theme of ‘Reality-Based Interaction (RBI)’ to unify and tie together emerging interaction styles, making them easier for users to understand and easier to test.  The authors suggest the following purposes for RBI in this paper :

a. Showing how ‘Reality-Based Interaction’ differs from ‘WIMP’ (Window, Icon, Menu and Pointing Device) and direct-manipulation interaction styles, for example by not depending on interactions with 2-D widgets like menus and icons.

b. Identifying the existing concepts of ‘Reality-Based Interaction’ within current interaction styles by presenting case studies.

c. Providing insights for new designs and clear pathways toward future research directions in the field of Human-Computer Interaction.

3. Methods : 

The authors use 4 main reality-based interaction themes, together with a set of design qualities for which one of the RBI concepts can be given up.  The main RBI concepts are derived from day-to-day life and together define the whole concept of ‘Reality-Based Interaction’; if an interaction concept mimics these themes without any trade-offs, it is categorized under the ‘Reality-Based Interaction’ conceptualization.  The interaction themes are as follows :

a. Naïve Physics : People have a common-sense understanding of their physical environment.

b. Body Awareness & Skills : People are aware of their own bodies and have the ability to control them.

c. Environment Awareness & Skills : The skills of people for negotiating, manipulating, and navigating within their environment.

d. Social Awareness & Skills : The knowledge of others present in the environment and the ability to interact with them.

The following are the design qualities for which one of the RBI principles can be sacrificed.  The authors also explain how realism can be given up for each of these in exchange for greater gains in user experience; in one of their examples, they explain how walking can be more efficient in certain situations than driving a car or riding a bike.  The qualities are (a sketch of encoding such a trade-off analysis follows the list) :

a. Expressive Power

b. Efficiency

c. Versatility

d. Ergonomics

e. Accessibility

f. Practicality
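The paper presents RBI as an analytic framework rather than software, but its application reads like a checklist.  As a small illustration (my encoding, not the authors’; the URP entry loosely paraphrases the first case study below), each interface can be recorded with the themes it draws on and the qualities it trades realism for :

```python
# Illustrative sketch (not from the paper): encode an RBI analysis as
# which themes an interface draws on and which design qualities it
# trades realism away for. The encoding itself is mine.
RBI_THEMES = {"naive_physics", "body_awareness",
              "environment_awareness", "social_awareness"}
TRADEOFFS = {"expressive_power", "efficiency", "versatility",
             "ergonomics", "accessibility", "practicality"}

urp_analysis = {
    "interface": "URP (tangible urban planning)",
    "themes_used": {"naive_physics", "environment_awareness",
                    "social_awareness"},
    # e.g., touching a model building to recolor it is not possible in reality
    "reality_traded_for": {"expressive_power"},
}

assert urp_analysis["themes_used"] <= RBI_THEMES
assert urp_analysis["reality_traded_for"] <= TRADEOFFS
print(f"{urp_analysis['interface']} trades realism for: "
      f"{', '.join(urp_analysis['reality_traded_for'])}")
```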

     They applied their concept to the following innovative, emerging interfaces to prove the validity of their claims through case studies.  Each interface’s interaction type is given after the arrow :

Case Study 1 : URP, a tangible urban planning workbench -> Tangible Environment

Case Study 2 : Apple iPhone -> Touch Based Interaction

Case Study 3 : Electronic Tourist Guide Application -> Location and Orientation Aware Interactive GPS based Application

Case Study 4 : Visual-Cliff Virtual Environment -> Virtual Reality Environment 

4. Main Findings : 

     The researchers found that every interaction system they analyzed adhered to the concepts of RBI but sometimes sacrificed a principle for improved user experience.  The main findings of the case studies were the following :

a. Each system they analyzed drew inspiration from the real world, which made it more innovative, but also gave up a few aspects of reality when necessary to obtain additional design goals and an enriched user experience.  For example :

     i. Case Study 1 : The URP system gave up reality by letting users touch the virtual buildings to change their color, which is not possible in reality.

ii. Case Study 2 : The iPhone trades reality for improved accessibility in several of its built-in applications, such as the Safari browser displaying the full web page at the cost of readability.

iii. Case Study 3 : The tourist guide trades reality for the user’s expressive power by reducing interaction to a few button presses.

iv. Case Study 4 : For walking in the virtual world, real-world distance is sacrificed: the human controller covers more navigational distance with fewer physical steps.  This is a practicality trade-off.

b. Each system oriented itself strongly around the concepts proposed by the authors.

c. The 4 main defining themes proved practical for analyzing the emerging interfaces.

5. Analysis : 

     On the research front, I liked how the authors connected emerging Human-Computer Interaction trends with the Reality-Based Interaction (RBI) concept.  It would have been nicer, though, if a software framework for analyzing interfaces against the RBI methodology had accompanied the case studies; that could also have served as a good industry-standard reference for developing future standards and improving future interaction trends.  This paper was published back in 2008, and there have since been huge improvements in all the HCI types analyzed in the case studies.  I am interested in reading other publications that build on it.

   Personally, I am much interested in HCI devices in general, ever since the finger blisters I earned from playing games all day long.  Until now, Natural User Interfaces (NUI) would grab my interest wherever and whenever I read about them; from now on, I will also be looking for RBI, analyzing every current interaction trend against these principles.

RAA #1 – User-Defined Gestures

1. APA Citation :

Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson. 2009. User-defined gestures for surface computing. In Proceedings of the 27th international conference on Human factors in computing systems (CHI ’09). ACM, New York, NY, USA, 1083-1092. DOI=10.1145/1518701.1518866 http://doi.acm.org/10.1145/1518701.1518866

2. Purpose : 

     Most of the gestures used in touch-based tablet and surface devices are chosen by the product design and development team, and these do not necessarily speak the user’s language (mental model), sometimes making the devices tough to use.  The paper tries to solve this issue in the following ways :

a. Understanding the gestures non-technical users make and their characteristics, and how they differ from the systematic gestures currently used in touch-based devices.

b. Improving interaction gestures for tabletop devices based on the day-to-day natural gestures of non-technical users.

c. Providing a dictionary of gestures that designers can use to create better gesture sets, informed by data on user behavior.

3. Methods : 

     The authors conducted a user study by first portraying the effect of a gesture and then asking users to perform its cause.  Only non-technical people with no prior experience using touch-based devices were recruited (20 individuals in all), since they would approach the interactive tabletop with their own mental models, which would certainly differ from those of experienced technical users.  The interactive tabletop device was a Microsoft Surface prototype with custom software running at a modest resolution (1024 x 768) comfortable for everyone to view.  The study consisted of the following sub-activities :

a. A ‘guessability study methodology’, which presents the effects of gestures to the participants first and then elicits the causes meant to invoke them (a sketch of the agreement measure such studies use follows this list).

b. A ‘think-aloud protocol with video analysis’, to obtain rich qualitative data that showcases users’ mental models.

c. Custom software for logging all the quantitative measures and details regarding gesture timing, activity, and preferences.
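Guessability studies of this kind quantify how strongly participants converge on the same gesture for a given command using Wobbrock et al.’s agreement measure: for each referent, identical proposals are grouped, and agreement is the sum of the squared fractions of each group.  A minimal sketch (the gesture labels are made-up examples, not data from the paper) :

```python
# Minimal sketch of the agreement measure used in guessability studies
# (Wobbrock et al.): for each referent r, group identical gesture
# proposals P_i within all proposals P_r, then
#   A_r = sum_i (|P_i| / |P_r|)^2
# Overall agreement averages A_r over all referents.
from collections import Counter

def agreement(proposals):
    """proposals: list of gesture labels proposed for one referent."""
    n = len(proposals)
    return sum((count / n) ** 2 for count in Counter(proposals).values())

move_proposals = ["drag", "drag", "drag", "flick"]              # 3 of 4 agree
delete_proposals = ["scratch", "drag_off", "x_mark", "drag_off"]

for name, p in [("move", move_proposals), ("delete", delete_proposals)]:
    print(f"{name}: A = {agreement(p):.3f}")
# move: A = 0.625   delete: A = 0.375
```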

4. Main Findings : 

    The main finding is a detailed report of non-technical users’ self-defined gestures, their mental models, and the performance associated with them.  The important part of the finding is that these gestures differed markedly from the touch gestures previously employed in devices like the Apple iPhone: they were non-systematic, non-procedural, and free-flowing, like human thought.

   The most interesting facts from the 1080 gestures the users were asked to perform were these: users rarely cared about the number of fingers they employed, preferred one hand over two, and were strongly influenced in their mental models by existing desktop idioms from the real world.  The results were characterized both quantitatively and qualitatively as a taxonomic reference report for designing user-based gestures for future touch devices, which will help designers create better gesture sets.

5. Analysis : 

     This study was part of Microsoft Research’s work during the development of surface computing technology, aimed at designing the gestures the device would use.  The device implements only the gestures that were part of the research findings, i.e., gestures based entirely on users’ mental models and real-world behavior.  Still, surface computing did not find a huge welcome among consumers.  I would like to know much more about why some products fail even after having all the features users desire – whether because of a weak marketing strategy or the emerging and growing tablet market.

   In my personal opinion, a user study based on 20 non-technical users is an over-generalized study; it does not represent the entire population of such users, and the scope should have been much broader than the strategy used here.  Also, the ethnicity of the participants is not mentioned; the sample should have been wider and inclusive of every ethnic group, since there will certainly be cultural influences on personal gestures.

   Though this article does not align directly with my research interests, it helped me understand the importance and influence of user studies before designing usable gestures for any product we work on.  It also served as an addendum to the previous week’s reading on user studies and yesterday’s class activity on user personas.  In the future, I will be designing custom tools for existing pipelines in the animation industry; I will make sure to understand the artists’ needs and allocate time for a user persona study within whatever time limit exists, because I learned from the previous class that some data is better than no data.