
Papers about assistive technologies for visual impairments, with Japanese summaries


Blind_Accessibility_Papers

About This Repository

This repository lists HCI papers about assistive technologies for visual impairments.

For each paper, we create an issue that includes the abstract (in English), a link to the paper, labels for the research field (e.g., Localization for research that estimates the user's position, Screen Reader for papers that propose a new screen reader system, Collision Avoidance for papers that propose a collision-avoidance system, and so on), figures, and a short summary (in Japanese).

I will continue to add new papers in the future.
I would be so happy if anyone added a new paper or some comments!!
(You definitely don't need to write a short summary in Japanese.)

How to Use

  • You can search for papers with keywords.
  • You can narrow down the paper list with labels (see the examples below).
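
For example, GitHub's issue search lets you combine free-text keywords with label qualifiers; the label names used here are taken from the lists that follow:

```
is:issue label:"Global Navigation" museum
is:issue label:"Screen Reader" label:CHI
```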

Label List

Feel free to add a new label!!

If you want to manage labels, please let me know via an issue comment. I will add you as a collaborator.

Keyword | Description
Audio | Research using an audio interface
Tactile | Research using a tactile interface
Robotics | Research using a robotic system
AR/VR | Research using AR/VR technologies
Scan 3D | Research using 3D scanning technologies
Mobility | Research that helps blind people walk in the real world
Global Navigation | Papers that propose a turn-by-turn navigation system
Local Navigation | Research that helps blind users approach a specific object, such as a door or a chair
Collision Avoidance | Papers that propose a collision-avoidance system
Localization | Research that estimates the user's position and orientation
Recognition | Research that recognizes the surroundings and conveys them to the user
Map | Papers that propose a new map for blind people
Input Method | Papers that propose a new input or interaction method
Developer | Research for developers with visual impairments
VAPA | Research about voice-activated personal assistants (e.g., Siri, Alexa)
Screen Reader | Papers that propose a new screen reader system
Web Accessibility | Research about web interfaces
Captioning | Research that assists users by describing images in words
RSA | Research about Remote Sighted Assistance (assisting blind users remotely)
Sports | Research about sports assistance
Survey Paper | Survey papers about assistive technologies for visual impairments
Investigation/Discussion | Papers that discuss or investigate blind people's experiences
Case Study | A detailed study of a specific subject, such as a person, group, or event
Low Vision | Papers that propose an assistive system for people with low vision
TVI | Research about Teachers of the Visually Impaired (TVI)
Label | Conference/Journal Name
ASSETS | ACM SIGACCESS Conference on Computers and Accessibility
CHI | ACM CHI Conference on Human Factors in Computing Systems
UIST | ACM Symposium on User Interface Software and Technology
TACCESS | ACM Transactions on Accessible Computing
TOCHI | ACM Transactions on Computer-Human Interaction
MobileHCI | ACM Conference on Human-Computer Interaction with Mobile Devices and Services
UbiComp | ACM Conference on Pervasive and Ubiquitous Computing
ICMI | ACM International Conference on Multimodal Interaction
DIS | ACM SIGCHI Conference on Designing Interactive Systems
ISS | ACM International Conference on Interactive Surfaces and Spaces
W4A | International Web for All Conference
WWW | International World Wide Web Conference
ICCHP | International Conference on Computers Helping People with Special Needs
ECCV | European Conference on Computer Vision
CSCI | IEEE International Conference on Computational Science and Computational Intelligence
ISAECT | IEEE International Symposium on Advanced Electrical and Communication Technologies
UAIS | Universal Access in the Information Society
MobiQuitous | International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services
HCIS | Human-centric Computing and Information Sciences
FHN | Frontiers in Human Neuroscience
Sensors | Journal on the science and technology of sensors

Contributors

Thank you for your contribution!!

Introduction

This repository collects research papers on assistive technologies for visually impaired people that were presented at Human-Computer Interaction venues.

An issue is created for each paper, containing a link to the paper, figures, a short summary in Japanese, and labels for the research field.

Oral papers from major HCI conferences such as CHI (2013~), UIST (2013~), and ASSETS (2016~) should all be covered (apologies for any omissions).

I plan to keep updating the list as I find new papers.

I hope this repository is of some help to people working on accessibility research.

Disclaimer

Each entry is a summary I wrote after skimming the paper for about 5 to 15 minutes, so please do not expect detailed explanations.
Also, because of my own research interests (I work on mobility assistance for visually impaired people), the amount and quality of the summaries vary from genre to genre.

For these reasons, this repository is best used to:

  • Pick out papers on assistive technologies for visual impairments from the vast body of HCI literature
  • Get an overview of which topics are being studied in assistive technology research for visually impaired people

Contributions Welcome

  • Anyone who finds a relevant paper is welcome to create a new issue (I would be delighted).
  • Any paper related to assistive technologies for visual impairments is welcome, whether or not it is an HCI paper.
  • As the existing entries show, the current Japanese summaries are fairly rough, so please do not feel any barrier to contributing.
  • Even an issue containing nothing but a paper title is perfectly fine (I will fill in the rest as I get to it).

Acknowledgments

  • For papers presented at CHI, the slides published by the CHI勉強会 (CHI Study Group) contained more than enough information for this purpose, so they are pasted as-is, with the slide authors' names clearly credited.
  • I would like to take this opportunity to thank the slide authors and everyone involved in the CHI勉強会.


blind_accessibility_papers's Issues

Understanding the Impact of TVIs on Technology Use and Selection by Children with Visual Impairments

Links

Abstract

  • The use of technology in educational settings is extremely common. For many visually impaired children, educational settings are the first place they are exposed to the assistive technology that they will need to access mainstream computing devices. Current laws provide support for students to receive training from Teachers of the Visually Impaired (TVIs) on these assistive devices. Therefore, TVIs play an important role in the selection and training of technology. Through our interviews with TVIs, we discovered the factors that impact which technologies they select, how they attempt to mitigate the stigma associated with certain technologies, and the challenges that students face in learning assistive technologies. Through this research, we identified three needs that future research on assistive technology should address: (1) increasing focus on built-in accessibility features, (2) providing support for independent learning and exploration, and (3) creating technologies that can support users with progressive vision loss.

Screenshot 2019-10-27 21 28 22

Slide source: CHI勉強会2019
Slide author: 上智大学 秋元弘太さん

“Bursting the Assistance Bubble”: Designing Inclusive Technology with Children with Mixed Visual Abilities

Links

Abstract

  • Children living with visual impairments (VIs) are increasingly educated in mainstream rather than special schools. But knowledge about the challenges they face in inclusive schooling environments and how to design technology to overcome them remains scarce. We report findings from a field study involving interviews and observations of educators and children with/without VIs in mainstream schools, in which we identified the "teaching assistant bubble" as a potential barrier to group learning, social play and independent mobility. We present co-design activities blending elements of future workshops, multisensory crafting, fictional inquiry and bodystorming, demonstrating that children with and without VIs can jointly lead design processes and explore design spaces reflective of mixed visual abilities and shared experiences. We extend previous research by characterising challenges and opportunities for improving inclusive education of children with VIs in mainstream schools, in terms of balancing assistance and independence, and reflect on the process and outcomes of co-designing with mixed-ability groups in this context.

Screenshot 2019-10-29 21 53 21

Slide source: CHI勉強会2018
Slide author: 公立はこだて未来大学 美馬義亮さん

Feeling Fireworks: An Inclusive Tactile Firework Display

Links

Abstract

  • This paper presents a novel design for a large-scale interactive tactile display. Fast dynamic tactile effects are created at high spatial resolution on a flexible screen, using directable nozzles that spray water jets onto the rear of the screen. The screen further has back-projected visual content and touch interaction. The technology is demonstrated in Feeling Fireworks, a tactile firework show. The goal is to make fireworks more inclusive for the Blind and Low-Vision (BLV) community. A BLV focus group provided input during the development process, and a user study with BLV users showed that Feeling Fireworks is an enjoyable and meaningful experience. A user study with sighted users showed that users could accurately label the correspondence between the designed tactile firework effects and corresponding visual fireworks. Beyond the Feeling Fireworks application, this is a novel approach for scalable tactile displays with potential for broader use.

  • A tactile system that lets even visually impaired people enjoy fireworks
    Screenshot 2019-10-27 17 21 22 / Screenshot 2019-10-27 17 21 26

Approach

  • Fireworks footage is projected onto a screen, and water jets are sprayed onto the rear of the screen in sync with the video
  • Users enjoy the fireworks by feeling with their hands the vibrations where the water hits
    Screenshot 2019-10-27 17 21 31 / Screenshot 2019-10-27 17 21 38

Strong Point

  • Because the visuals are also projected, both sighted and visually impaired people can enjoy it
  • Various kinds of fireworks are reproduced by controlling the nozzle movements

Evaluation

  • Visually impaired participants used the system and were given the task of guessing which kind of firework was shown

Result

  • The firework types were reasonably distinguishable
  • Congenitally blind participants enjoyed it as well

Limitation & Future Work

  • The current screen is large, which is a limitation (making it smaller proved difficult)

An Independent and Interactive Museum Experience for Blind People

Links

Abstract

  • Museums are gradually becoming more accessible to blind people, who have shown interest in visiting museums and in appreciating visual art. Yet, their ability to visit museums is still dependent on the assistance they get from their family and friends or from the museum personnel. Based on this observation and on prior research, we developed a solution to support an independent, interactive museum experience that uses the continuous tracking of the user's location and orientation to enable a seamless interaction between Navigation and Art Appreciation. Accurate localization and context-awareness allow for turn-by-turn guidance (Navigation Mode), as well as detailed audio content when facing an artwork within close proximity (Art Appreciation Mode). In order to evaluate our system, we installed it at The Andy Warhol Museum in Pittsburgh and conducted a user study where nine blind participants followed routes of interest while learning about the artworks. We found that all participants were able to follow the intended path, immediately grasped how to switch between Navigation and Art Appreciation modes, and valued listening to the audio content in front of each artwork. Also, they showed high satisfaction and an increased motivation to visit museums more often.

  • A system that helps visually impaired people enjoy museums on their own
    Screenshot 2019-10-27 23 56 47 / Screenshot 2019-10-27 23 56 38

Approach

  • Provides two modes: one that guides the user along a route (Navigation Mode) and one that describes an artwork when the user arrives in front of it (Art Appreciation Mode); see the sketch below
  • Navigation Mode relies on a localization system that uses beacons and a smartphone
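
To illustrate the seamless mode switching described above, here is a minimal sketch of a location-and-orientation check; the thresholds, coordinates, and function names are illustrative assumptions, not the paper's actual design:

```python
import math

def select_mode(user_xy, user_heading_deg, artwork_xy,
                near_m=1.5, facing_deg=30.0):
    """Return 'art_appreciation' when the user is close to the artwork
    and roughly facing it, else 'navigation'."""
    dx = artwork_xy[0] - user_xy[0]
    dy = artwork_xy[1] - user_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))  # direction to the artwork
    # Smallest angular difference between heading and bearing, in [0, 180]
    offset = abs((bearing - user_heading_deg + 180) % 360 - 180)
    if distance <= near_m and offset <= facing_deg:
        return "art_appreciation"
    return "navigation"

print(select_mode((0.0, 0.0), 45.0, (1.0, 1.0)))  # -> art_appreciation
```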

Strong Point

  • When the user stands in front of an artwork and faces it, the system automatically enters Art Appreciation Mode and describes the artwork (the two modes switch seamlessly)

Evaluation

  • The system was installed in a museum, and visually impaired participants were given the task of touring it

Result

  • Participants were able to tour the artworks using the system
  • Participants said they would want to visit museums if this system were available

Limitation & Future Work

  • The system does not provide a way to avoid other people, e.g., in crowded environments

A Face Recognition Application for People with Visual Impairments: Understanding Use Beyond the Lab

Links

Abstract

  • Recognizing others is a major challenge for people with visual impairments (VIPs) and can hinder engagement in social activities. We present Accessibility Bot, a research prototype bot on Facebook Messenger, that leverages state-of-the-art computer vision and a user's friends' tagged photos on Facebook to help people with visual impairments recognize their friends. Accessibility Bot provides users information about identity and facial expressions and attributes of friends captured by their phone's camera. To guide our design, we interviewed eight VIPs to understand their challenges and needs in social activities. After designing and implementing the bot, we conducted a diary study with six VIPs to study its use in everyday life. While most participants found the Bot helpful, their experience was undermined by perceived low recognition accuracy, difficulty aiming a camera, and lack of knowledge about the phone's status. We discuss these real-world challenges, identify suitable use cases for Accessibility Bot, and distill design implications for future face recognition applications.

  • Proposes Accessibility Bot, a system that helps visually impaired people recognize friends so that they can strike up a conversation

Screenshot 2019-10-30 21 51 45

Approach

  • Investigated needs through discussions with eight visually impaired people
    • They want to know who is around them
    • They usually find friends by calling out, but hesitate to do so in noisy environments or, conversely, in very quiet ones
    • They would be glad to know whether a person is a friend
    • They want to know which side the person is on, their appearance (hair, eye color, clothing), and their facial expression (in order of importance)
  • Based on the findings, implemented Accessibility Bot on Facebook Messenger
    • Uses Facebook's face recognition results
    • To help the user aim the camera, the number of faces in the frame is announced by voice
    • Detects metadata for each detected face (left or right of the user, facial expression, identity label)
    • Sends the list of recognized people back through Messenger (see the sketch below)
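
A toy sketch of how face metadata like the above could be turned into a Messenger reply; the dictionary keys and the message format are hypothetical, not the bot's actual data format:

```python
def describe_faces(faces):
    """faces: list of dicts with hypothetical keys
    'name', 'side' ('left'/'right'), and 'expression'."""
    if not faces:
        return "No faces in the frame."
    parts = []
    for f in faces:
        name = f.get("name", "an unknown person")
        parts.append(f"{name} on your {f['side']}, looking {f['expression']}")
    return f"{len(faces)} face(s) detected: " + "; ".join(parts) + "."

print(describe_faces([
    {"name": "Alice", "side": "left", "expression": "happy"},
]))
# -> 1 face(s) detected: Alice on your left, looking happy.
```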

Strong Point

  • Uses a state-of-the-art neural-network face detector
  • Instead of a one-off lab experiment, participants used the system in their daily lives for a week to validate its effectiveness

Evaluation

  • Six visually impaired participants used the system in their daily lives for seven days

Result, Limitation & Future Work

  • The bot was effective in social activities
  • However, the perceived recognition accuracy was not satisfactory to users
  • Aiming the camera and framing photos was particularly difficult (camera shake and similar problems)
  • The face recognition accuracy itself was not evaluated
  • There are also privacy concerns

StoryBlocks: A Tangible Programming Game To Create Accessible Audio Stories

Links

Abstract

  • Block-based programming languages can support novice programmers through features such as simplified code syntax and user-friendly libraries. However, most block-based programming languages are highly visual, which makes them inaccessible to blind and visually impaired students. To address the inaccessibility of block-based languages, we introduce StoryBlocks, a tangible block-based game that enables blind programmers to learn basic programming concepts by creating audio stories. In this paper, we document the design of StoryBlocks and report on a series of design activities with groups of teachers, Braille experts, and students. Participants in our design sessions worked together to create accessible stories, and their feedback offers insights for the future development of accessible, tangible programming tools.

Screenshot 2019-10-27 21 29 34

Slide source: CHI勉強会2019
Slide author: 上智大学 秋元弘太さん

Environmental Factors in Indoor Navigation Based on Real-World Trajectories of Blind Users

Links

Abstract

  • Indoor localization technologies can enhance quality of life for blind people by enabling them to independently explore and navigate indoor environments. Researchers typically evaluate their systems in terms of localization accuracy and user behavior along planned routes. We propose two measures of path-following behavior: deviation from optimal route and trajectory variability. Through regression analysis of real-world trajectories from blind users, we identify relationships between a) these measures and b) elements of the environment, route characteristics, localization error, and instructional cues that users receive. Our results provide insights into path-following behavior for turn-by-turn indoor navigation and have implications for the design of future interactions. Moreover, our findings highlight the importance of reporting these environmental factors and route properties in similar studies. We present automated and scalable methods for their calculation and to encourage their reporting for better interpretation and comparison of results across future studies.

Screenshot 2019-10-28 18 17 08

Slide source: CHI勉強会2018
Slide author: 北海道大学 渡邉拓貴さん

Accessible Maps for the Blind: Comparing 3D Printed Models with Tactile Graphics

Links

Authors

  • Leona Holloway, Kim Marriott, Matthew Butler

Abstract

  • Tactile maps are widely used in Orientation and Mobility (O&M) training for people with blindness and severe vision impairment. Commodity 3D printers now offer an alternative way to present accessible graphics, however it is unclear if 3D models offer advantages over tactile equivalents for 2D graphics such as maps. In a controlled study with 16 touch readers, we found that 3D models were preferred, enabled the use of more easily understood icons, facilitated better short term recall and allowed relative height of map elements to be more easily understood. Analysis of hand movements revealed the use of novel strategies for systematic scanning of the 3D model and gaining an overview of the map. Finally, we explored how 3D printed maps can be augmented with interactive audio labels, replacing less practical braille labels. Our findings suggest that 3D printed maps do indeed offer advantages for O&M training.

Screenshot 2019-10-29 18 07 30

Slide source: CHI勉強会2018
Slide author: 北海道大学 渡邉拓貴さん

Identifying how Visually Impaired People Explore Raised-line Diagrams to Improve the Design of Touch Interfaces

Links

Abstract

  • Raised-line diagrams are widely used by visually impaired (VI) people to read maps, drawings or graphs. While previous work has identified general exploration strategies for raised-line drawings, we have limited knowledge on how this exploration is performed in detail and how it extends to other types of diagrams such as maps or graphs, frequently used in specialized schools. Such information can be crucial for the design of accessible interfaces on touchscreens. We conducted a study in which participants were asked to explore five types of raised-line diagrams (common drawings, perspective drawings, mathematical graphs, neighborhood maps, and geographical maps) while tracking both hands fingers. Relying on a first set of results, we proposed a set of design guidelines for touch interfaces.

Screenshot 2019-10-30 11 32 16

Slide source: CHI勉強会2017
Slide author: 東京大学 杉本 貴章さん

Making Sense of Art: Access for Gallery Visitors with Vision Impairments

Links

Abstract

  • While there is widespread recognition of the need to provide people with vision impairments (PVI) equitable access to cultural institutions such as art galleries, this is not easy. We present the results of a collaboration with a regional art gallery who wished to open their collection to PVIs in the local community. We describe a novel model that provides three different ways of accessing the gallery, depending upon visual acuity and mobility: virtual tours, self-guided tours and guided tours. As far as possible the model supports autonomous exploration by PVIs. It was informed by a value sensitive design exploration of the values and value conflicts of the primary stakeholders.

Screenshot 2019-10-27 21 40 39

Slide source: CHI勉強会2019
Slide author: 神戸大学 西田健志さん

The RAD: Making Racing Games Equivalently Accessible to People Who Are Blind

Links

Abstract

  • We introduce the racing auditory display (RAD), an audio-based user interface that allows players who are blind to play the same types of racing games that sighted players can play with an efficiency and sense of control that are similar to what sighted players have. The RAD works with a standard pair of headphones and comprises two novel sonification techniques: the sound slider for understanding a car's speed and trajectory on a racetrack and the turn indicator system for alerting players of the direction, sharpness, length, and timing of upcoming turns. In a user study with 15 participants (3 blind; the rest blindfolded and analyzed separately), we found that players preferred the RAD's interface over that of Mach 1, a popular blind-accessible racing game. We also found that the RAD allows an avid gamer who is blind to race as well on a complex racetrack as casual sighted players can, without a significant difference between lap times or driving paths.

Screenshot 2019-10-28 16 47 43

Slide source: CHI勉強会2018
Slide author: 東京大学 大伏仙泰さん

Rich Representations of Visual Content for Screen Reader Users

Links

Abstract

  • Alt text (short for "alternative text") is descriptive text associated with an image in HTML and other document formats. Screen reader technologies speak the alt text aloud to people who are visually impaired. Introduced with HTML 2.0 in 1995, the alt attribute has not evolved despite significant changes in technology over the past two decades. In light of the expanding volume, purpose, and importance of digital imagery, we reflect on how alt text could be supplemented to offer a richer experience of visual content to screen reader users. Our contributions include articulating the design space of representations of visual content for screen reader users, prototypes illustrating several points within this design space, and evaluations of several of these new image representations with people who are blind. We close by discussing the implications of our taxonomy, prototypes, and user study findings.

Screenshot 2019-10-28 16 44 52

Slide source: CHI勉強会2018
Slide authors: 明治大学 亀田裕也さん,吉田直矢さん,中野内涼也さん

Understanding Trust, Transportation, and Accessibility through Ridesharing

Links

Abstract

  • Relatively few studies of accessibility and transportation for people with vision impairments have investigated forms of transportation besides public transportation and walking. To develop a more nuanced understanding of this context, we turn to ridesharing, an increasingly used mode of transportation. We interviewed 16 visually-impaired individuals about their active use of ridesharing services like Uber and Lyft. Our findings show that, while people with vision impairments value independence, ridesharing involves building trust across a complex network of stakeholders and technologies. This data is used to start a discussion on how other systems can facilitate trust for people with vision impairments by considering the role of conversation, affordances of system incentives, and increased agency.

  • A study investigating the problems visually impaired people face when using ridesharing services such as Uber and Lyft

Hands Holding Clues for Object Recognition in Teachable Machines

Links

Abstract

  • Camera manipulation confounds the use of object recognition applications by blind people. This is exacerbated when photos from this population are also used to train models, as with teachable machines, where out-of-frame or partially included objects against cluttered backgrounds degrade performance. Leveraging prior evidence on the ability of blind people to coordinate hand movements using proprioception, we propose a deep learning system that jointly models hand segmentation and object localization for object classification. We investigate the utility of hands as a natural interface for including and indicating the object of interest in the camera frame. We confirm the potential of this approach by analyzing existing datasets from people with visual impairments for object recognition. With a new publicly available egocentric dataset and an extensive error analysis, we provide insights into this approach in the context of teachable recognizers.

Screenshot 2019-10-27 21 08 40

Slide source: CHI勉強会2019
Slide author: 北海道大学 秋葉翔太さん

Weaving Lighthouses and Stitching Stories: Blind and Visually Impaired People Designing E-textiles

Links

Abstract

  • We describe our experience of working with blind and visually impaired people to create interactive art objects that are personal to them, through a participatory making process using electronic textiles (e-textiles) and hands-on crafting techniques. The research addresses both the practical considerations about how to structure hands-on making workshops in a way which is accessible to participants of varying experience and abilities, and how effective the approach was in enabling participants to tell their own stories and feel in control of the design and making process. The results of our analysis is the offering of insights in how to run e-textile making sessions in such a way for them to be more accessible and inclusive to a wider community of participants.

Hybrid-Brailler: Combining Physical and Gestural Interaction for Mobile Braille Input and Editing

Links

Abstract

  • Braille input enables fast nonvisual entry speeds on mobile touchscreen devices. Yet, the lack of tactile cues commonly results in typing errors, which are hard to correct. We propose Hybrid-Brailler, an input solution that combines physical and gestural interaction to provide fast and accurate Braille input. We use the back of the device for physical chorded input while freeing the touchscreen for gestural interaction. Gestures are used in editing operations, such as caret movement, text selection, and clipboard control, enhancing the overall text entry experience. We conducted two user studies to assess both input and editing performance. Results show that Hybrid-Brailler supports fast entry rates as its virtual counterpart, while significantly increasing input accuracy. Regarding editing performance, when compared with the mainstream technique, Hybrid-Brailler shows performance benefits of 21% in speed and increased editing accuracy. We finish with lessons learned for designing future nonvisual input and editing techniques.
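
For readers unfamiliar with chorded Braille input: each letter corresponds to a chord of the six Braille dots, so decoding is a simple lookup. A minimal sketch follows; the assignment of the device's physical back keys to dot numbers is an assumption:

```python
# Toy chord-to-letter decoder for six-dot Braille chorded input,
# using standard Braille dot numbering for the first few letters.
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode_chord(pressed_dots):
    """pressed_dots: iterable of dot numbers (1-6) held simultaneously."""
    return BRAILLE.get(frozenset(pressed_dots), "?")

print(decode_chord([1, 2]))  # -> b
```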

Screenshot 2019-10-29 21 12 30

Slide source: CHI勉強会2018
Slide author: 東京大学 眞鍋美祈さん

Towards Enabling Blind People to Independently Write on Printed Forms

Links

Abstract

  • Filling out printed forms (e.g., checks) independently is currently impossible for blind people, since they cannot pinpoint the locations of the form fields, and quite often, they cannot even figure out what fields (e.g., name) are present in the form. Hence, they always depend on sighted people to write on their behalf, and help them affix their signatures. Extant assistive technologies have exclusively focused on reading, with no support for writing. In this paper, we introduce WiYG, a Write-it-Yourself guide that directs a blind user to the different form fields, so that she can independently fill out these fields without seeking assistance from a sighted person. Specifically, WiYG uses a pocket-sized custom 3D printed smartphone attachment, and well-established computer vision algorithms to dynamically generate audio instructions that guide the user to the different form fields. A user study with 13 blind participants showed that with WiYG, users could correctly fill out the form fields at the right locations with an accuracy as high as 89.5%.

  • A system that helps visually impaired people sign documents (e.g., receipts) on their own

Screenshot 2019-10-28 14 22 40

Approach

  • A custom 3D-printed attachment is mounted on a smartphone
  • Computer-vision methods recognize the correct locations, and voice instructions guide the user

Screenshot 2019-10-28 14 22 46

Screenshot 2019-10-28 14 22 56

Strong Point

  • Systems that support reading (e.g., reading the contents of a receipt) have been proposed before, but this is the first system to support writing (signing)

Evaluation

  • Participants used the system to fill in signatures, and the success rate was evaluated

Result

  • About a 90% success rate

Limitation & Future Work

  • They want to explore a tactile interface as well (audio can be noisy)
  • They also want the system to automatically detect where to write and what to write

Voice User Interfaces in Schools: Co-designing for Inclusion with Visually-Impaired and Sighted Pupils

Links

Abstract

  • Voice user interfaces (VUIs) are increasingly popular, particularly in homes. However, little research has investigated their potential in other settings, such as schools. We investigated how VUIs could support inclusive education, particularly for pupils with visual impairments (VIs). We organised focused discussions with educators at a school, with support staff from local authorities and, through bodystorming, with a class of 27 pupils. We then ran a series of co-design workshops with participants with mixed-visual abilities to design an educational VUI application. This provided insights into challenges faced by pupils with VIs in mainstream schools, and opened a space for educators, sighted and visually impaired pupils to reflect on and design for their shared learning experiences through VUIs. We present scenarios, a design space and an example application that show novel ways of using VUIs for inclusive education. We also reflect on co-designing with mixed-visual-ability groups in this space.

  • A study discussing what happens when voice-controlled systems (VUIs: Voice User Interfaces) such as Google Home and Amazon Echo are used in schools for visually impaired pupils

Approach

  • Held discussions at a school that teaches visually impaired pupils
  • Ran workshops to co-design an educational VUI together with visually impaired pupils
  • Based on the results, proposed a design space and a sample application for using VUIs in education

Screenshot 2019-10-28 15 58 09

Screenshot 2019-10-28 15 58 37

Let's Play Together: Adaptation Guidelines of Board Games for Players with Visual Impairment

Links

Abstract

  • Board games present accessibility barriers for players with visual impairment since they often employ visuals alone to communicate gameplay information. Our research focuses on board game accessibility for those with visual impairment. This paper describes a three-phase study conducted to develop board game accessibility adaptation guidelines. These guidelines were developed through a user-centered design approach that included in-depth interviews and a series of user studies using two adapted board games. Our findings indicate that participants with and without visual impairment were able to play the adapted games, exhibiting a balanced experience whereby participants had complete autonomy and were provided with equal chances of victory. Our paper also contributes to the game and accessibility communities through the development of adaptation guidelines that allow board games to become inclusive irrespective of a player's visual impairment.

Screenshot 2019-10-27 22 09 29

Slide source: CHI勉強会2019
Slide author: 筑波大学 橋爪智さん

VIPBoard: Improving Screen-Reader Keyboard for Visually Impaired People with Character-Level Auto Correction

Links

Abstract

  • Modern touchscreen keyboards are all powered by the word-level auto-correction ability to handle input errors. Unfortunately, visually impaired users are deprived of such benefit because a screen-reader keyboard offers only character-level input and provides no correction ability. In this paper, we present VIPBoard, a smart keyboard for visually impaired people, which aims at improving the underlying keyboard algorithm without altering the current input interaction. Upon each tap, VIPBoard predicts the probability of each key considering both touch location and language model, and reads the most likely key, which saves the calibration time when the touchdown point misses the target key. Meanwhile, the keyboard layout automatically scales according to users' touch point location, which enables them to select other keys easily. A user study shows that compared with the current keyboard technique, VIPBoard can reduce touch error rate by 63.0% and increase text entry speed by 12.6%.
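
The abstract's key idea, combining the touch location with a language model at each keystroke, can be sketched as a naive product of a touch likelihood and a language-model prior. Everything below (the toy layout, the Gaussian touch model) is an illustrative assumption, not VIPBoard's actual model:

```python
import math

KEY_CENTERS = {"q": (0, 0), "w": (1, 0), "e": (2, 0)}  # toy 3-key layout

def touch_likelihood(touch, key, sigma=0.5):
    """Gaussian likelihood of the touch point given the key center."""
    cx, cy = KEY_CENTERS[key]
    d2 = (touch[0] - cx) ** 2 + (touch[1] - cy) ** 2
    return math.exp(-d2 / (2 * sigma ** 2))

def predict_key(touch, lm_probs):
    """lm_probs: P(next char | context) from some language model."""
    scores = {k: touch_likelihood(touch, k) * lm_probs.get(k, 1e-6)
              for k in KEY_CENTERS}
    return max(scores, key=scores.get)  # read the argmax key aloud

# A tap lands between 'w' and 'e', but the language model prefers 'e':
print(predict_key((1.5, 0.0), {"q": 0.1, "w": 0.2, "e": 0.7}))  # -> e
```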

Screenshot 2019-10-27 21 43 20

Slide source: CHI勉強会2019
Slide author: 立命館大学大学院 出口大樹さん

Caption Crawler: Enabling Reusable Alternative Text Descriptions using Reverse Image Search

Links

Abstract

  • Accessing images online is often difficult for users with vision impairments. This population relies on text descriptions of images that vary based on website authors' accessibility practices. Where one author might provide a descriptive caption for an image, another might provide no caption for the same image, leading to inconsistent experiences. In this work, we present the Caption Crawler system, which uses reverse image search to find existing captions on the web and make them accessible to a user's screen reader. We report our system's performance on a set of 481 websites from alexa.com's list of most popular sites to estimate caption coverage and latency, and also report blind and sighted users' ratings of our system's output quality. Finally, we conducted a user study with fourteen screen reader users to examine how the system might be used for personal browsing.
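
For intuition, the caption-harvesting half of such a pipeline might look like the sketch below. It assumes you already know which pages embed the target image; the reverse image search step itself, and the paper's actual implementation, are not shown:

```python
# Minimal sketch: collect alt text for one image URL across known pages.
# Assumes the `requests` and `beautifulsoup4` packages are installed;
# the page URLs you pass in are placeholders, not from the paper.
import requests
from bs4 import BeautifulSoup

def harvest_alt_text(image_url, candidate_pages):
    captions = []
    for page in candidate_pages:
        html = requests.get(page, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for img in soup.find_all("img"):
            if img.get("src") == image_url and img.get("alt"):
                captions.append(img["alt"].strip())
    return captions  # pick the best candidate for the screen reader
```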

Screenshot 2019-10-29 22 26 28

Slide source: CHI勉強会2018
Slide author: はこだて未来大学 奥野茜さん

Accessible Gesture Typing for Non-Visual Text Entry on Smartphones

Links

Abstract

  • Gesture typing--entering a word by gliding the finger sequentially over letter to letter-- has been widely supported on smartphones for sighted users. However, this input paradigm is currently inaccessible to blind users: it is difficult to draw shape gestures on a virtual keyboard without access to key visuals. This paper describes the design of accessible gesture typing, to bring this input paradigm to blind users. To help blind users figure out key locations, the design incorporates the familiar screen-reader supported touch exploration that narrates the keys as the user drags the finger across the keyboard. The design allows users to seamlessly switch between exploration and gesture typing mode by simply lifting the finger. Continuous touch-exploration like audio feedback is provided during word shape construction that helps the user glide in the right direction of the key locations constituting the word. Exploration mode resumes once word shape is completed. Distinct earcons help distinguish gesture typing mode from touch exploration mode, and thereby avoid unintended mix-ups. A user study with 14 blind people shows 35% increment in their typing speed, indicative of the promise and potential of gesture typing technology for non-visual text entry.

Screenshot 2019-10-27 21 53 47

Slide source: CHI勉強会2019
Slide author: 山梨大学 松澤光希さん

"Accessibility Came by Accident": Use of Voice-Controlled Intelligent Personal Assistants by People with Disabilities

Links

Abstract

  • From an accessibility perspective, voice-controlled, home-based intelligent personal assistants (IPAs) have the potential to greatly expand speech interaction beyond dictation and screen reader output. To examine the accessibility of off-the-shelf IPAs (e.g., Amazon Echo) and to understand how users with disabilities are making use of these devices, we conducted two exploratory studies. The first, broader study is a content analysis of 346 Amazon Echo reviews that include users with disabilities, while the second study more specifically focuses on users with visual impairments, through interviews with 16 current users of home-based IPAs. Findings show that, although some accessibility challenges exist, users with a range of disabilities are using the Amazon Echo, including for unexpected cases such as speech therapy and support for caregivers. Richer voice-based applications and solutions to support discoverability would be particularly useful to users with visual impairments. These findings should inform future work on accessible voice-based IPAs.

Screenshot 2019-10-28 16 32 24

Slide source: CHI勉強会2018
Slide authors: 青山学院大学 近藤匠海さん,越智喬之さん,畠雅和さん,加藤岳大さん

Headlock: A wearable navigation aid that helps blind cane users traverse large open spaces

Links

Abstract

  • Traversing large open spaces is a challenging task for blind cane users, as such spaces are often devoid of tactile features that can be followed. Consequently, in such spaces cane users may veer from their intended paths. Wearable devices have great potential for assistive applications for users who are blind as they typically feature a camera and support hands and eye free interaction. We present HEADLOCK; a navigation aid for an optical head-mounted display that helps blind users traverse large open spaces by letting them lock onto a salient landmark across the space, such as a door, and then providing audio feedback to guide the user towards the landmark. A user study with 8 blind users evaluated the usability and effectiveness of two types of audio feedback (sonification and text-to-speech) for guiding a user across an open space to a doorway. Qualitative results are reported, which may inform the design of assistive wearable technology for users who are blind.

  • A system that guides a visually impaired user toward a specific door

Screenshot 2019-10-20 14 49 08

Approach

  • Uses Google Glass to detect the door's position and tells the user which direction the door is in
  • While the user walks, the system announces the door's direction or the distance to it

Screenshot 2019-10-20 14 49 23

Screenshot 2019-10-20 14 49 14

Strong Point

  • Implemented and compared the following two kinds of audio feedback:
    • Text-to-speech: natural language ("Right", "Left")
    • Sonification: a sound whose parameters vary continuously with the heading offset and the distance (see the sketch below)
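
As a rough illustration of what such a sonification mapping can look like; the concrete pan/pitch/rate mapping below is an assumption for illustration, not Headlock's published design:

```python
def sonification_params(offset_deg, distance_m, max_dist=20.0):
    """Map heading offset (-180..180, negative = door is to the left)
    and distance to stereo pan, beep pitch, and beep rate."""
    pan = max(-1.0, min(1.0, offset_deg / 90.0))   # -1 left .. +1 right
    pitch_hz = 440.0 + 440.0 * (1.0 - abs(pan))    # on-target sounds higher
    closeness = 1.0 - min(distance_m, max_dist) / max_dist
    beeps_per_sec = 1.0 + 7.0 * closeness          # faster when closer
    return pan, pitch_hz, beeps_per_sec

print(sonification_params(0.0, 2.0))    # centered and close: high pitch, fast beeps
print(sonification_params(-60.0, 15))   # door to the left, far: panned left, slow beeps
```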

Evaluation

  • Participants were given a task to find the door with the proposed system (Discovery); task completion time was measured
  • Participants were given a task to walk toward the found door (Guidance); task completion time was measured
  • The offset between the user's heading and the door's direction (Veering) was computed and compared
  • Various interviews were conducted

Result

  • Text-to-speech yielded shorter task completion times for Discovery and Guidance
  • No significant difference was found for Veering
  • Users also found text-to-speech easier to understand

Limitation & Future Work

  • A smartphone-based approach is also conceivable, but no comparison was made against one
  • The door detection itself is quite crude (color-based detection with OpenCV) and specific to the doors used here, so it should be improved
  • They also want to support access to objects other than doors

"I can do everything but see!" -- How People with Vision Impairments Negotiate their Abilities in Social Contexts

Links

Abstract

  • This research takes an orientation to visual impairment (VI) that does not regard it as fixed or determined alone in or through the body. Instead, we consider (dis)ability as produced through interactions with the environment and configured by the people and technology within it. Specifically, we explore how abilities become negotiated through video ethnography with six VI athletes and spectators during the Rio 2016 Paralympics. We use generated in-depth examples to identify how technology can be a meaningful part of ability negotiations, emphasizing how these embed into the social interactions and lives of people with VI. In contrast to treating technology as a solution to a 'sensory deficit', we understand it to support the triangulation process of sense-making through provision of appropriate additional information. Further, we suggest that technology should not try and replace human assistance, but instead enable people with VI to better identify and interact with other people in-situ.

Screenshot 2019-10-28 16 29 26

Slide source: CHI勉強会2018
Slide authors: 青山学院大学 近藤匠海さん,越智喬之さん,畠雅和さん,加藤岳大さん

Enabling People with Visual Impairments to Navigate Virtual Reality with a Haptic and Auditory Cane Simulation

Links

Abstract

  • Traditional virtual reality (VR) mainly focuses on visual feedback, which is not accessible for people with visual impairments. We created Canetroller, a haptic cane controller that simulates white cane interactions, enabling people with visual impairments to navigate a virtual environment by transferring their cane skills into the virtual world. Canetroller provides three types of feedback: (1) physical resistance generated by a wearable programmable brake mechanism that physically impedes the controller when the virtual cane comes in contact with a virtual object; (2) vibrotactile feedback that simulates the vibrations when a cane hits an object or touches and drags across various surfaces; and (3) spatial 3D auditory feedback simulating the sound of real-world cane interactions. We designed indoor and outdoor VR scenes to evaluate the effectiveness of our controller. Our study showed that Canetroller was a promising tool that enabled visually impaired participants to navigate different virtual spaces. We discuss potential applications supported by Canetroller ranging from entertainment to mobility training.

Future Work

A follow-up, extended version of this work has since been published by the same research group:
Virtual Reality Without Vision: A Haptic and Auditory White Cane to Navigate Complex Virtual Worlds
link: #166

Screenshot 2019-10-29 18 22 14

Slide source: CHI勉強会2018
Slide author: 立命館大学大学院 山本拓也さん

Virtual Showdown: An Accessible Virtual Reality Game with Scaffolds for Youth with Visual Impairments

Links

Abstract

  • Virtual Reality (VR) is a growing source of entertainment, but people who are visually impaired have not been effectively included. Audio cues are motivated as a complement to visuals, making experiences more immersive, but are not a primary cue. To address this, we implemented a VR game called Virtual Showdown. We based Virtual Showdown on an accessible real-world game called Showdown, where people use their hearing to locate and hit a ball against an opponent. Further, we developed Verbal and Verbal/Vibration Scaffolds to teach people how to play Virtual Showdown. We assessed the acceptability of Virtual Showdown and compared our scaffolds in an empirical study with 34 youth who are visually impaired. Thirty-three participants wanted to play Virtual Showdown again, and we learned that participants scored higher with the Verbal Scaffold or if they had prior Showdown experience. Our empirical findings inform the design of future accessible VR experiences.

Screenshot 2019-10-27 21 47 56

Slide source: CHI勉強会2019
Slide authors: 慶應義塾大学 住友聡馬さん, 松井駿さん

NavCog: A Navigational Cognitive Assistant for the Blind

Links

Abstract

  • Turn-by-turn navigation is a useful paradigm for assisting people with visual impairments during mobility as it reduces the cognitive load of having to simultaneously sense, localize and plan. To realize such a system, it is necessary to be able to automatically localize the user with sufficient accuracy, provide timely and efficient instructions and have the ability to easily deploy the system to new spaces.
    We propose a smartphone-based system that provides turn-by-turn navigation assistance based on accurate real-time localization over large spaces. In addition to basic navigation capabilities, our system also informs the user about nearby points-of-interest (POI) and accessibility issues (e.g., stairs ahead). After deploying the system on a university campus across several indoor and outdoor areas, we evaluated it with six blind subjects and showed that our system is capable of guiding visually impaired users in complex and unfamiliar environments.

  • Proposes NavCog, an indoor turn-by-turn navigation system for visually impaired people

Screenshot 2019-10-25 20 53 04

Approach

  • Uses Bluetooth Low Energy (BLE) beacons and a smartphone to estimate the user's current position in real time (see the sketch below)
  • Guides the user along the route to the destination with voice instructions
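
For background, BLE-beacon localization is commonly implemented by fingerprinting received signal strengths (RSSI). The 1-nearest-neighbor sketch below shows the general idea only; it is not NavCog's actual algorithm, which is considerably more sophisticated:

```python
def locate(observed, fingerprints, missing_rssi=-100.0):
    """observed: {beacon_id: rssi}; fingerprints: {(x, y): {beacon_id: rssi}}.
    Return the surveyed position whose RSSI pattern is closest (1-NN)."""
    def dist(fp):
        ids = set(observed) | set(fp)
        return sum((observed.get(i, missing_rssi) - fp.get(i, missing_rssi)) ** 2
                   for i in ids)
    return min(fingerprints, key=lambda pos: dist(fingerprints[pos]))

# Two surveyed positions with their recorded beacon signal strengths:
fingerprints = {
    (0.0, 0.0): {"b1": -50, "b2": -80},
    (5.0, 0.0): {"b1": -80, "b2": -50},
}
print(locate({"b1": -55, "b2": -78}, fingerprints))  # -> (0.0, 0.0)
```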

Strong Point

  • Achieves accurate indoor localization, which is difficult with GPS and similar technologies
  • Implemented and released as a smartphone app

Evaluation

  • Evaluated the localization accuracy
  • Six visually impaired participants used the system on a university campus; events along the way (numbers of wrong turns, re-orientations, etc.) were observed
  • Usability was investigated through interviews

Result

  • Localization accuracy was about 0.5 m, which is good
  • Participants were able to reach destinations even in places they were visiting for the first time, and rated the system highly

Limitation & Future Work

  • Users commented that they would like the system to correct them when they head in the wrong direction
  • Because navigation success depends on localization accuracy, they plan to improve it, for example by concentrating beacons at points that matter for wayfinding (such as intersections)

CodeTalk: Improving Programming Environment Accessibility for Visually Impaired Developers

Links

Abstract

  • In recent times, programming environments like Visual Studio are widely used to enhance programmer productivity. However, inadequate accessibility prevents Visually Impaired (VI) developers from taking full advantage of these environments. In this paper, we focus on the accessibility challenges faced by the VI developers in using Graphical User Interface (GUI) based programming environments. Based on a survey of VI developers and based on two of the authors' personal experiences, we categorize the accessibility difficulties into Discoverability, Glanceability, Navigability, and Alertability. We propose solutions to some of these challenges and implement these in CodeTalk, a plugin for Visual Studio. We show how CodeTalk improves developer experience and share promising early feedback from VI developers who used our plugin.

Screenshot 2019-10-28 16 40 35

Slide source: CHI勉強会2018
Slide authors: 明治大学 亀田裕也さん,吉田直矢さん,中野内涼也さん

Taking into Account Sensory Knowledge: The Case of Geo-technologies for Children with Visual Impairments

Links

Abstract

  • This paper argues for designing geo-technologies supporting non-visual sensory knowledge. Sensory knowledge refers to the implicit and explicit knowledge guiding our uses of our senses to understand the world. To support our argument, we build on an 18 months field-study on geography classes for primary school children with visual impairments. Our findings show (1) a paradox in the use of non-visual sensory knowledge: described as fundamental to the geography curriculum, it is mostly kept out of school; (2) that accessible geo-technologies in the literature mainly focus on substituting vision with another modality, rather than enabling teachers to build on children's experiences; (3) the importance of the hearing sense in learning about space. We then introduce a probe, a wrist-worn device enabling children to record audio cues during field-trips. By giving importance to children's hearing skills, it modified existing practices and actors' opinions on non-visual sensory knowledge. We conclude by reflecting on design implications, and the role of technologies in valuing diverse ways of understanding the world.

  • Paper: https://dl.acm.org/citation.cfm?id=3173810

SeeingVR: A Set of Tools to Make Virtual Reality More Accessible to People with Low Vision

Links

Abstract

  • Current virtual reality applications do not support people who have low vision, i.e., vision loss that falls short of complete blindness but is not correctable by glasses. We present SeeingVR, a set of 14 tools that enhance a VR application for people with low vision by providing visual and audio augmentations. A user can select, adjust, and combine different tools based on their preferences. Nine of our tools modify an existing VR application post hoc via a plugin without developer effort. The rest require simple inputs from developers using a Unity toolkit we created that allows integrating all 14 of our low vision support tools during development. Our evaluation with 11 participants with low vision showed that SeeingVR enabled users to better enjoy VR and complete tasks more quickly and accurately. Developers also found our Unity toolkit easy and convenient to use.

Screenshot 2019-10-27 21 47 51

Slide source: CHI勉強会2019
Slide authors: 慶應義塾大学 住友聡馬さん, 松井駿さん

Nonvisual Interaction Techniques at the Keyboard Surface

Links

Abstract

  • Web user interfaces today leverage many common GUI design patterns, including navigation bars and menus (hierarchical structure), tabular content presentation, and scrolling. These visual-spatial cues enhance the interaction experience of sighted users. However, the linear nature of screen translation tools currently available to blind users make it difficult to understand or navigate these structures. We introduce Spatial Region Interaction Techniques (SPRITEs) for nonvisual access: a novel method for navigating two-dimensional structures using the keyboard surface. SPRITEs 1) preserve spatial layout, 2) enable bimanual interaction, and 3) improve the end user experience. We used a series of design probes to explore different methods for keyboard surface interaction. Our evaluation of SPRITEs shows that three times as many participants were able to complete spatial tasks with SPRITEs than with their preferred current technology.

Screenshot 2019-10-29 20 49 08

Slide source: CHI勉強会2018
Slide author: 東京大学 眞鍋美祈さん

"I Bought This for Me to Look More Ordinary": A Study of Blind People Doing Online Shopping

Links

Abstract

  • Online shopping, by reducing the needs for traveling, has become an essential part of lives for people with visual impairments. However, in HCI, research on online shopping for them has only been limited to the analysis of accessibility and usability issues. To develop a broader and better understanding of how visually impaired people shop online and design accordingly, we conducted a qualitative study with twenty blind people. Our study highlighted that blind people's desire of being treated as ordinary had significantly shaped their online shopping practices: very attentive to the visual appearance of the goods even they themselves could not see and taking great pain to find and learn what commodities are visually appropriate for them. This paper reports how their trying to appear ordinary is manifested in online shopping and suggests design implications to support these practices.

  • A study investigating how visually impaired people use online shopping services

Approach

  • Interviewed twenty blind people

Strong Point

  • Previous HCI research had only analyzed the accessibility and usability of online shopping, so this work investigates how blind people actually use it

Result

  • Users (visually impaired people) also care about how goods look
  • Features can be gleaned from reviews and product descriptions, but working out whether an item looks good is time-consuming and burdensome

EarTouch: Facilitating Smartphone Use for Visually Impaired People in Mobile and Public Scenarios

Links

Abstract

  • Interacting with a smartphone using touch input and speech output is challenging for visually impaired people in mobile and public scenarios, where only one hand may be available for input (e.g., while holding a cane) and using the loudspeaker for speech output is constrained by environmental noise, privacy, and social concerns. To address these issues, we propose EarTouch, a one-handed interaction technique that allows the users to interact with a smartphone using the ear to perform gestures on the touchscreen. Users hold the phone to their ears and listen to speech output from the ear speaker privately. We report how the technique was designed, implemented, and evaluated through a series of studies. Results show that EarTouch is easy, efficient, fun and socially acceptable to use.

Screenshot 2019-10-27 21 23 45

Slide source: CHI勉強会2019
Slide authors: 慶應義塾大学 加藤花歩さん,斎藤彩音さん,宗形篤恭さん

SteeringWheel: A Locality-Preserving Magnification Interface for Low Vision Web Browsing

Links

Abstract

  • Low-vision users struggle to browse the web with screen magnifiers. Firstly, magnifiers occlude significant portions of the webpage, thereby making it cumbersome to get the webpage overview and quickly locate the desired content. Further, magnification causes loss of spatial locality and visual cues that commonly define semantic relationships in the page; reconstructing semantic relationships exclusively from narrow views dramatically increases the cognitive burden on the users. Secondly, low-vision users have widely varying needs requiring a range of interface customizations for different page sections; dynamic customization in extant magnifiers is disruptive to users' browsing. We present SteeringWheel, a magnification interface that leverages content semantics to preserve local context. In combination with a physical dial, supporting simple rotate and press gestures, users can quickly navigate different webpage sections, easily locate desired content, get a quick overview, and seamlessly customize the interface. A user study with 15 low-vision participants showed that their web-browsing efficiency improved by at least 20 percent with SteeringWheel compared to extant screen magnifiers.

Screenshot 2019-10-29 22 19 47

Slide source: CHI勉強会2018
Slide author: 公立はこだて未来大 土居将史さん

Editing Spatial Layouts through Tactile Templates for People with Visual Impairments

Links

Abstract

  • Spatial layout is a key component in graphic design. While people who are blind or visually impaired (BVI) can use screen readers or magnifiers to access digital content, these tools fail to fully communicate the content's graphic design information. Through semi-structured interviews and contextual inquiries, we identify the lack of this information and feedback as major challenges in understanding and editing layouts. Guided by these insights and a co-design process with a blind hobbyist web developer, we developed an interactive, multimodal authoring tool that lets blind people understand spatial relationships between elements and modify layout templates. Our tool automatically generates tactile print-outs of a web page's layout, which users overlay on top of a tablet that runs our self-voicing digital design tool. We conclude with design considerations grounded in user feedback for improving the accessibility of spatially encoded information and developing tools for BVI authors.

Screenshot 2019-10-27 20 59 18

Slide source: CHI勉強会2019
Slide author: 北海道大学 渡邉拓貴さん

Haptic Navigation Cues on the Steering Wheel

Links

Abstract

  • Haptic feedback is used in cars to reduce visual inattention. While tactile feedback like vibration can be influenced by the car's movement, thermal and cutaneous push feedback should be independent of such interference. This paper presents two driving simulator studies investigating novel tactile feedback on the steering wheel for navigation. First, devices on one side of the steering wheel were warmed, indicating the turning direction, while those on the other side were cooled. This thermal feedback was compared to audio. The thermal navigation lead to 94.2% correct recognitions of warnings 200m before the turn and to 91.7% correct turns. Speech had perfect recognition for both. In the second experiment, only the destination side was indicated thermally, and this design was compared to cutaneous push feedback. The simplified thermal feedback design did not increase recognition, but cutaneous push feedback had high recognition rates (100% for 200 m warnings, 98% for turns).

[Screenshot]

Slide source: CHI勉強会2019
Slide author: 渡邉拓貴 (Hokkaido University)

BSpeak: An Accessible Voice-based Crowdsourcing Marketplace for Low-Income Blind People

Links

Abstract

  • BSpeak is an accessible crowdsourcing marketplace that enables blind people in developing regions to earn money by transcribing audio files through speech. We examine accessibility and usability barriers that 15 first-time users, who are low-income and blind, experienced while completing transcription tasks on BSpeak and Mechanical Turk (MTurk). Our mixed-methods analysis revealed severe accessibility barriers in MTurk due to the absence of landmarks, unlabeled UI elements, and improper use of HTML headings. Compared to MTurk, participants found BSpeak significantly more accessible and usable, and completed tasks with higher accuracy in less time due to its voice-based implementation. In a two-week field deployment of BSpeak in India, 24 low-income blind users earned ₹7,310 by completing over 16,000 transcription tasks to yield transcriptions with 87% accuracy. Through our analysis of BSpeak's strengths and weaknesses, we provide recommendations for designing crowdsourcing marketplaces for low-income blind people in resource-constrained settings.

[Screenshot]

Slide source: CHI勉強会2018
Slide author: 玉井由良 (Future University Hakodate)

Improving Gesture Recognition Accuracy on Touch Screens for Users with Low Vision

Links

Abstract

  • We contribute in this work on gesture recognition to improve the accessibility of touch screens for people with low vision. We examine the accuracy of popular recognizers for gestures produced by people with and without visual impairments, and we show that the user-independent accuracy of $P, the best recognizer among those evaluated, is small for people with low vision (83.8%), despite $P being very effective for gestures produced by people without visual impairments (95.9%). By carefully analyzing the gesture articulations produced by people with low vision, we inform key algorithmic revisions for the $P recognizer, which we call $P+. We show significant accuracy improvements of $P+ for gestures produced by people with low vision, from 83.8% to 94.7% on average and up to 98.2%, and 3x faster execution times compared to $P.
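
For readers unfamiliar with $P: it treats a gesture as an unordered point cloud and scores a candidate against a template with a greedy nearest-point matching distance. The sketch below shows roughly that core matching step, abridged from the published $P pseudocode (preprocessing such as resampling and normalization is omitted, and matching runs in one direction only); the $P+ revisions themselves are described in the paper.

```python
import math

def greedy_cloud_distance(points, template, start):
    """One directional pass of $P-style greedy matching: each point in
    `points` is matched to its nearest unmatched template point, with
    earlier matches weighted more heavily. Assumes both clouds are already
    resampled to the same size and normalized (omitted here)."""
    n = len(points)
    matched = [False] * n
    total, i = 0.0, start
    for step in range(n):
        best_d, best_j = float("inf"), -1
        for j in range(n):
            if not matched[j]:
                d = math.dist(points[i], template[j])
                if d < best_d:
                    best_d, best_j = d, j
        matched[best_j] = True
        total += (1 - step / n) * best_d   # confidence weight, abridged
        i = (i + 1) % n
    return total

def cloud_distance(points, template, tries=8):
    """$P tries several start indices and keeps the best alignment."""
    n = len(points)
    step = max(1, n // tries)
    return min(greedy_cloud_distance(points, template, s)
               for s in range(0, n, step))

gesture  = [(0, 0), (1, 0), (2, 0), (3, 1)]
template = [(0, 0), (1, 0), (2, 1), (3, 1)]
print(cloud_distance(gesture, template))   # smaller = better match
```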

[Screenshot]

Slide source: CHI勉強会2017
Slide authors: 黒澤紘生, 瀧見彰太 (Hokkaido University)

Airport Accessibility and Navigation Assistance for People with Visual Impairments

Links

Abstract

  • People with visual impairments often have to rely on the assistance of sighted guides in airports, which prevents them from having an independent travel experience. In order to learn about their perspectives on current airport accessibility, we conducted two focus groups that discussed their needs and experiences in-depth, as well as the potential role of assistive technologies. We found that independent navigation is a main challenge and severely impacts their overall experience. As a result, we equipped an airport with a Bluetooth Low Energy (BLE) beacon-based navigation system and performed a real-world study where users navigated routes relevant for their travel experience. We found that despite the challenging environment participants were able to complete their itinerary independently, presenting none to few navigation errors and reasonable timings. This study presents the first systematic evaluation posing BLE technology as a strong approach to increase the independence of visually impaired people in airports.
  • Investigates the challenges people with visual impairments face when moving through airports
  • Deploys NavCog, a navigation system for visually impaired users, in a real airport and evaluates its effectiveness

[Screenshot]

Approach

  • Held focus-group discussions with visually impaired participants to learn what difficulties and needs arise when moving through an airport
  • Deployed an existing beacon- and smartphone-based navigation system (NavCog) in an airport and had users try it to assess its usefulness there (a minimal sketch of the beacon-ranging idea follows)
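
NavCog's actual localization is considerably more sophisticated, but the basic idea of BLE-beacon positioning can be illustrated with a log-distance path-loss model. The tx_power and path-loss exponent below are generic placeholder values, not figures from the paper.

```python
def rssi_to_distance(rssi, tx_power=-59, n=2.0):
    """Rough distance estimate (meters) from a BLE beacon's RSSI using a
    log-distance path-loss model. tx_power is the expected RSSI at 1 m and
    n the path-loss exponent; both are assumed placeholder values."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def nearest_beacon(readings, beacon_positions):
    """Naive localization: snap the user to the closest-sounding beacon.
    readings: {beacon_id: rssi}; beacon_positions: {beacon_id: (x, y)}."""
    best_id = max(readings, key=readings.get)   # strongest RSSI wins
    return beacon_positions[best_id], rssi_to_distance(readings[best_id])

pos, d = nearest_beacon({"b1": -72, "b2": -60}, {"b1": (0, 0), "b2": (12, 3)})
print(pos, round(d, 1))   # "b2" is stronger -> (12, 3), roughly 1.1 m away
```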

[Screenshot]

Result

  • Even after being guided to the gate, users may be unsure what to do next (e.g., finding a seat or learning what shops are nearby)
  • Airports have large open spaces with no walls to follow, so users can lose track of their position and orientation
  • With the proposed system, participants were able to reach their destinations inside the airport

Limitation & Future Work

  • Open challenges for airport navigation remain, such as avoiding the queues that form at shops and boarding gates, handling obstacles like trash cans, and guiding users through large open spaces

Easy Return: An App for Indoor Backtracking Assistance

Links

Abstract

  • We present a system that, implemented as an iPhone app controllable from an Apple Watch, can help a blind person backtrack a route taken in a building. This system requires no maps of the building or environment modifications. While traversing a path from a starting location to a destination, the system builds and records a path representation in terms of a sequence of turns and of step counts between turns. If the user wants to backtrack the same path, the system can provide assistance by tracking the user's location in the recorded path, and producing directional information in speech form about the next turns and step counts to follow. The system was tested with six blind participants in a controlled indoor experiment.
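
The route representation the abstract describes (step counts between turns, replayed in reverse) can be sketched as follows; the data format and the spoken phrasing are assumptions for illustration, not the app's actual design.

```python
# Illustrative sketch of Easy Return's route idea: record the outbound
# path as (steps, turn) segments, then backtrack by replaying it reversed
# with each turn mirrored, since the user now faces the opposite way.

OPPOSITE = {"left": "right", "right": "left", "straight": "straight"}

def record_segment(path, steps, turn):
    path.append((steps, turn))        # e.g. (12, "left"): 12 steps, turn left

def backtrack_directions(path):
    """Walk the recorded segments in reverse order, mirroring each turn."""
    directions = []
    for steps, turn in reversed(path):
        directions.append(f"Walk {steps} steps, then turn {OPPOSITE[turn]}")
    return directions

path = []
record_segment(path, 12, "left")
record_segment(path, 30, "right")
record_segment(path, 8, "straight")   # final stretch to the destination
for line in backtrack_directions(path):
    print(line)                        # spoken via text-to-speech in the app
```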

[Screenshot]

Slide source: CHI勉強会2018
Slide author: 渡邉拓貴 (Hokkaido University)

Comparing Computer-Based Drawing Methods for Blind People with Real-Time Tactile Feedback

Links

Abstract

  • In this paper, we present a drawing workstation for blind people using a two-dimensional tactile pin-matrix display for in- and output. Four different input modalities, namely menu-based, gesture-based, freehand-stylus and a Time-of-Flight (ToF) depth segmentation of real-world object silhouettes, are utilized to create graphical shapes. Users can freely manipulate shapes after creation. Twelve blind users evaluated and compared the four image creation modalities. During evaluation, participants had to copy four different images. The results show that all modalities are highly appropriate for non-visual drawing tasks. There is no generally preferred drawing modality, but most participants rated the robust and well-known menu-based interaction as very good. Furthermore, menu was second in performance and the most accurate drawing modality. Our evaluation demonstrated that direct manipulation works well for blind users at the position of the reading hand. In general, our drawing tool allows blind users to create appealing images.

[Screenshot]

Slide source: CHI勉強会2018
Slide author: 眞鍋美祈 (The University of Tokyo)

Blocks4All: Overcoming Accessibility Barriers to Blocks Programming for Children with Visual Impairments

Links

Abstract

  • Blocks-based programming environments are a popular tool to teach children to program, but they rely heavily on visual metaphors and are therefore not fully accessible for children with visual impairments. We evaluated existing blocks-based environments and identified five major accessibility barriers for visually impaired users. We explored techniques to overcome these barriers in an interview with a teacher of the visually impaired and formative studies on a touchscreen blocks-based environment with five children with visual impairments. We distill our findings on usable touchscreen interactions into guidelines for designers of blocks-based environments.

[Screenshot]

Slide source: CHI勉強会2018
Slide author: 美馬義亮 (Future University Hakodate)

Baang: A Viral Speech-based Social Platform for Under-Connected Populations

Links

Abstract

  • Speech is more natural than text for a large part of the world including hard-to-reach populations (low-literate, poor, tech-novice, visually-impaired, marginalized) and oral cultures. Voice-based services over simple mobile phones are effective means to provide orality-driven social connectivity to such populations. We present Baang, a versatile and inclusive voice-based social platform that allows audio content creation and sharing among its open community of users. Within 8 months, Baang spread virally to 10,721 users (69% of them blind) who participated in 269,468 calls and shared their thoughts via 44,178 audio-posts, 343,542 votes, 124,389 audio-comments and 94,864 shares. We show that the ability to vote, comment and share leads to viral spread, deeper engagement, longer retention and emergence of true dialog among participants. Beyond connectivity, Baang provides its users with a voice and a social identity as well as means to share information and get community support.

[Screenshot]

Slide source: CHI勉強会2018
Slide author: Prof. 宮下芳明 (Meiji University)

Audible Beacons and Wearables in Schools: Helping Young Visually Impaired Children Play and Move Independently

Links

Abstract

  • Young children with visual impairments tend to engage less with their surroundings, limiting the benefits from activities at school. We investigated novel ways of using sound from a bracelet, such as speech or familiar noises, to tell children about nearby people, places and activities, to encourage them to engage more during play and help them move independently. We present a series of studies, the first two involving visual impairment educators, that give insight into challenges faced by visually impaired children at school and how sound might help them. We then present a focus group with visually impaired children that gives further insight into the effective use of sound. Our findings reveal novel ways of combining sounds from wearables with sounds from the environment, motivating audible beacons, devices for audio output and proximity estimation. We present scenarios, findings and a design space that show the novel ways such devices could be used alongside wearables to help visually impaired children at school.

[Screenshot]

Slide source: CHI勉強会2017
Slide author: 羽鹿諒 (Ritsumeikan University)

Playing Blind: Revealing the World of Gamers with Visual Impairment

Links

Abstract

  • Previous research on games for people with visual impairment (PVI) has focused on co-designing or evaluating specific games - mostly under controlled conditions. In this research, we follow a game-agnostic, "in-the-wild" approach, investigating the habits, opinions and concerns of PVI regarding digital games. To explore these issues, we conducted an online survey and follow-up interviews with gamers with VI (GVI). Dominant themes from our analysis include the particular appeal of digital games to GVI, the importance of social trajectories and histories of gameplay, the need to balance complexity and accessibility in both games targeted to PVI and mainstream games, opinions about the state of the gaming industry, and accessibility concerns around new and emerging technologies such as VR and AR. Our study gives voice to an underrepresented group in the gaming community. Understanding the practices, experiences and motivations of GVI provides a valuable foundation for informing development of more inclusive games.

[Screenshot]

Slide source: CHI勉強会2019
Slide author: 橋爪智 (University of Tsukuba)

FLight: A Low-Cost Reading and Writing System for Economically Less-Privileged Visually-Impaired People Exploiting Ink-based Braille System

Links

Abstract

  • Reading printed documents and writing on a paper pose a great challenge for visually-impaired people. Existing studies that attempt to solve these challenges are expensive and not feasible in low-income context. Moreover, these studies solve reading and writing problems separately. On the contrary, in this study, we propose FLight, a low-cost reading and writing system for economically less-privileged people. FLight uses ink-based Braille characters as the medium of textual representation. This helps in keeping a compact spatial representation of texts, yet achieving a low-cost status. Additionally, FLight utilizes a low-cost wearable device to enhance ease of reading by visually-impaired people. We conduct a participatory design and iterative evaluation involving five visually-impaired children in Bangladesh for more than 18 months. Our user evaluation reveals that FLight is easy-to-use, and exhibits a potential low-cost solution for economically less-privileged visually-impaired people.

[Screenshot]

Slide source: CHI勉強会2017
Slide author: 杉本貴章 (The University of Tokyo)

How Teens with Visual Impairments Take, Edit, and Share Photos on Social Media

Links

Abstract

  • We contribute a qualitative investigation of how teens with visual impairments (VIP) access smartphone photography, from the time they take photos through editing and sharing them on social media. We observed that they largely want to engage with photos visually, similarly to their sighted peers, and have developed strategies around photo capture, editing, sharing, and consumption that attempt to mitigate usability limitations of current photography and social media apps. We demonstrate the need for more work examining how young people with low vision engage with smartphone photography and social media, as they are heavy users of such technologies and have challenges distinct from their totally blind counterparts. We conclude with design considerations to alleviate the usability barriers we uncovered and for making smartphone photography and social media more accessible and relevant for VIPs.

[Screenshot]

Slide source: CHI勉強会2018
Slide author: 奥野茜 (Future University Hakodate)

BBeep: A Sonic Collision Avoidance System for Blind Travellers and Nearby Pedestrians

Links

Abstract

  • We present an assistive suitcase system, BBeep, for supporting blind people when walking through crowded environments. BBeep uses pre-emptive sound notifications to help clear a path by alerting both the user and nearby pedestrians about the potential risk of collision. BBeep triggers notifications by tracking pedestrians, predicting their future position in real-time, and provides sound notifications only when it anticipates a future collision. We investigate how different types and timings of sound affect nearby pedestrian behavior. In our experiments, we found that sound emission timing has a significant impact on nearby pedestrian trajectories when compared to different sound types. Based on these findings, we performed a real-world user study at an international airport, where blind participants navigated with the suitcase in crowded areas. We observed that the proposed system significantly reduces the number of imminent collisions.

  • A system that helps blind people walk through crowded environments without colliding with nearby pedestrians

  • By warning nearby pedestrians of an impending collision, it prompts them to make way for the user

[Screenshot]

Approach

  • A stereo camera mounted on the suitcase detects and tracks pedestrians and predicts collisions (a simplified sketch of the prediction step follows).
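
A minimal sketch of such a prediction step, assuming simple linear extrapolation of tracked positions; the time horizon, time step, and collision radius below are made-up parameters, and the paper's predictor may well differ.

```python
import math

def predict_collision(ped_pos, ped_vel, user_pos, user_vel,
                      horizon=3.0, dt=0.1, radius=0.8):
    """Linearly extrapolate pedestrian and user positions and report the
    first time within `horizon` seconds that they come within `radius`
    meters of each other. All parameters are illustrative placeholders."""
    t = 0.0
    while t <= horizon:
        px = ped_pos[0] + ped_vel[0] * t
        py = ped_pos[1] + ped_vel[1] * t
        ux = user_pos[0] + user_vel[0] * t
        uy = user_pos[1] + user_vel[1] * t
        if math.hypot(px - ux, py - uy) < radius:
            return t          # imminent collision at time t -> beep now
        t += dt
    return None               # no predicted collision within the horizon

# A pedestrian 6 m ahead walking toward a user who advances at 1 m/s:
t = predict_collision((6, 0), (-1.2, 0), (0, 0), (1.0, 0))
print(f"collision in {t:.1f} s" if t else "clear")
```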

[Screenshot]

  • When a collision is likely, the system emits a beep from a speaker, making pedestrians aware of the user and prompting them to give way.

[Screenshot]

Strong Point

  • In prior work, users had to avoid pedestrians on their own based on feedback from the system; the novelty here is that the system actively interacts with the surroundings, asking sighted pedestrians, who can move more freely, for cooperation.
  • Experimented with different kinds of warning sounds to design the most effective alert.
  • Demonstrated the system's usefulness through experiments in a realistic environment (inside an airport).

Evaluation

  • Participants were asked to walk toward a line of passengers waiting to board at an international airport, and the number of collisions was compared.
  • The baseline system played the same sound as BBeep, but only to the user through a headset.

[Screenshot]

Result

  • The proposed system significantly reduced the number of collisions.

Limitation & Future Work

  • Collision prediction uses only pedestrians' positions, so the alarm may also go off at pedestrians who have already noticed the user and are moving to avoid them (future work: use cues such as head orientation to judge awareness).
  • How nearby pedestrians perceive the system has not yet been investigated.

Towards a Multisensory Augmented Reality Map for Blind and Low Vision People: a Participatory Design Approach

Links

Abstract

  • Current low-tech Orientation & Mobility (O&M) tools for visually impaired people, e.g. tactile maps, possess limitations. Interactive accessible maps have been developed to overcome these. However, most of them are limited to exploration of existing maps, and have remained in laboratories. Using a participatory design approach, we have worked closely with 15 visually impaired students and 3 O&M instructors over 6 months. We iteratively designed and developed an augmented reality map intended for use in O&M classes in special education centers. This prototype combines projection, audio output and use of tactile tokens, and thus allows both map exploration and construction by low vision and blind people. Our user study demonstrated that all students were able to successfully use the prototype and showed high user satisfaction. A second phase with 22 international special education teachers allowed us to gain more qualitative insights. This work shows that augmented reality has potential for improving the access to education for visually impaired people.

[Screenshot]

Slide source: CHI勉強会2018
Slide author: 渡邉拓貴 (Hokkaido University)
