Miraikan Accessibility Lab. is a research consortium that works with partner companies and institutions to invent technologies supporting the future lives of visually impaired people.
An introductory video of the Accessibility Lab.'s work.
Vision
Science and technology have transformed the lives of people with disabilities. Speech synthesis and mobile devices are now indispensable to the daily lives of the visually impaired in areas such as education and employment. Miraikan Accessibility Lab. is a research consortium that collaborates with companies and institutions possessing advanced AI and robotics technologies, striving to create technologies that help visually impaired people move freely about the city, recognize the wealth of information around them, and live independently within it. We will expedite the social implementation of these technologies by having visitors actually experience them throughout Miraikan, and together consider both their issues and their possibilities.
Research
AI Suitcase
The AI Suitcase is an autonomous navigation robot that guides visually impaired people. It looks like a commercially available suitcase, yet it can recognize obstacles and people in order to guide its user safely to their destination. We are improving its functions through collaboration with partners.
Cooperation: Advanced Assistive Mobility Platform
Publications
Masaya Kubota*, Masaki Kuribayashi*, Seita Kayukawa, Hironobu Takagi, Chieko Asakawa, and Shigeo Morishima (* equal contribution). 2024. Snap&Nav: Smartphone-based Indoor Navigation System For Blind People via Floor Map Analysis and Intersection Detection. In Proceedings of the 26th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2024).
[Paper Link]
[Project Page]
Yuka Kaniwa*, Masaki Kuribayashi*, Seita Kayukawa, Daisuke Sato, Hironobu Takagi, Chieko Asakawa, and Shigeo Morishima (* equal contribution). 2024. ChitChatGuide: Conversational Interaction Using Large Language Models for Assisting People with Visual Impairments to Explore a Shopping Mall. In Proceedings of the 26th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2024).
[Paper Link]
[Project Page]
Masaki Kuribayashi, Kohei Uehara, Allan Wang, Daisuke Sato, Simon Chu, and Shigeo Morishima. 2024. Memory-Maze: Scenario Driven Benchmark and Visual Language Navigation Model for Guiding Blind People. arXiv.
[Paper Link]
Seita Kayukawa, Daisuke Sato, Masayuki Murata, Tatsuya Ishihara, Hironobu Takagi, Shigeo Morishima, and Chieko Asakawa. 2023. Enhancing Blind Visitor’s Autonomy in a Science Museum Using an Autonomous Navigation Robot. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023).
[Project Page]
Seita Kayukawa, Daisuke Sato, Masayuki Murata, Tatsuya Ishihara, Akihiro Kosugi, Hironobu Takagi, Shigeo Morishima, and Chieko Asakawa. 2022. How Users, Facility Managers, and Bystanders Perceive and Accept a Navigation Robot for Visually Impaired People in Public Buildings. In Proceedings of the 31st IEEE International Conference on Robot & Human Interactive Communication (IEEE RO-MAN 2022).
[Project Page]
"Touchable" Exhibits
Telescopes and microscopes are essential scientific instruments that allow us to see the unseeable. To enable visually impaired people to access such visual information by touch, we are developing "touchable" exhibits paired with an interactive audio guide.
Publications
Xiyue Wang, Seita Kayukawa, Hironobu Takagi, Giorgia Masoero, and Chieko Asakawa. 2024. Direct or Immersive? Comparing Smartphone-based Museum Guide Systems for Blind Visitors. In The 21st International Web for All Conference (W4A’24). (To Appear) [Best Technical Paper Award]
[PDF (preprint)]
Xiyue Wang, Seita Kayukawa, Hironobu Takagi, and Chieko Asakawa. 2023. TouchPilot: Designing a Guidance System That Assists Blind People in Learning Complex 3D Structures. In The 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’23).
[Paper Link]
[PDF]
[Project Page]
Xiyue Wang, Seita Kayukawa, Hironobu Takagi, and Chieko Asakawa. 2022. BentoMuseum: 3D and Layered Interactive Museum Map for Blind Visitors. In The 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’22), October 23–26, 2022, Athens, Greece.
[Project Page]
Researcher
Xiyue Wang
Ph.D. (Information Science) / Researcher, Miraikan - The National Museum of Emerging Science and Innovation
My current research focuses on developing tangible and tactile user interfaces that provide barrier-free museum experiences to people with vision impairments and diverse needs.
Keywords: Human-Computer Interaction, 3D Printing, Touch Sensing, Behavioral Data Analysis
Personal Page: https://xiyue-w.github.io/
Allan Wang
Ph.D. (Robotics) / Researcher, Miraikan - The National Museum of Emerging Science and Innovation
My research interest is Social Navigation: how to let a robot navigate smoothly while conforming to the various social rules of pedestrian-rich environments. It has many applications, such as navigation for the AI Suitcase robot.
Keywords: Social Navigation, Visual Navigation, Motion Prediction, Human-Robot Interaction.
Personal Page: https://allanwangliqian.com/
Renato Ribeiro
Researcher, Miraikan - The National Museum of Emerging Science and Innovation
I have researched methods for making navigation in virtual environments accessible for people with visual impairments. I currently explore ways to enable robots to navigate smoothly in pedestrian-rich environments.
Keywords: Human-Computer Interaction, Virtual Reality, Navigation
Kohei Uehara
Ph.D. (Information Science and Technology) / Part-time Researcher, Miraikan - The National Museum of Emerging Science and Innovation
My research focuses on assistive technologies for the visually impaired using technologies such as computer vision and natural language processing/generation.
Keywords: Vision and Language, Image Caption Generation, Computer Vision, Natural Language Generation, Visual Language Navigation
Personal Page: https://uehara-mech.github.io/
Masaki Kuribayashi
Part-time Researcher, Miraikan - The National Museum of Emerging Science and Innovation
My research focuses on developing robots that assist visually impaired people in navigating and exploring unfamiliar environments, such as places they are visiting for the first time.
Keywords: Human-Computer Interaction, Map-less Navigation, Visual Language Navigation
Personal Page: https://www.masakikuribayashi.com/
Alumni
Open Data / Open Source
- Miraikan Accessibility Lab. GitHub
  https://github.com/miraikan-research
- AI Suitcase Project GitHub
  https://github.com/CMU-cabot/
- Miraikan 360-degree Video Dataset
  A dataset of omnidirectional images of Miraikan captured with a 360-degree camera. Anyone can use it for research and development purposes.
  Click here for dataset details and application
- Model data for 3D printing
  The model data for 3D printing, such as the "touchable" exhibits, is published on Thingiverse.
  https://www.thingiverse.com/accessibilitylabmiraikan/designs
Participating Organizations
Sponsorship
Miraikan (Accessibility Lab.) sponsors the following events and organizations.
Recruitment of Researchers and Experimental Participants
Miraikan Accessibility Lab. is seeking researchers who aspire to work with us at Miraikan, as well as people who can participate in experiments as users.
- About Recruitment of Researchers
- If you wish to participate in experiments as a user: Please send us your request via the "Contact through the Internet" link below.
Contact
Miraikan - The National Museum of Emerging Science and Innovation
Contact through the Internet