Miraikan Accessibility Lab. is a research consortium that works with partners to invent technologies supporting the future lives of visually impaired people.

Logo of Miraikan Accessibility Lab

Introduction video of the Accessibility Lab's work

Vision

Science and technology have transformed the lives of people with disabilities. Speech synthesis and mobile devices are now indispensable to the daily lives of the visually impaired in areas such as education and employment. Miraikan Accessibility Lab. is a research consortium that collaborates with companies and institutions possessing advanced AI and robotics technologies, striving to create technologies that help visually impaired people move freely about the city, recognize the information that surrounds them, and live independently within it. We will expedite the social implementation of these technologies by letting visitors experience them throughout Miraikan and, together, considering both their issues and their possibilities.

Research

AI Suitcase

AI Suitcase

The AI Suitcase is an autonomous navigation robot that guides visually impaired people. It looks like a commercially available suitcase, yet it can recognize obstacles and people and guide its visually impaired user safely to their destination. We are improving its functions through collaboration with partners.

Cooperation: Advanced Assistive Mobility Platform

See more about AI Suitcase

Publications

Masaya Kubota*, Masaki Kuribayashi*, Seita Kayukawa, Hironobu Takagi, Chieko Asakawa, and Shigeo Morishima (* equal contribution). 2024. Snap&Nav: Smartphone-based Indoor Navigation System For Blind People via Floor Map Analysis and Intersection Detection. In Proceedings of the 26th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2024).
[Paper Link]
[Project Page]

Yuka Kaniwa*, Masaki Kuribayashi*, Seita Kayukawa, Daisuke Sato, Hironobu Takagi, Chieko Asakawa, and Shigeo Morishima (* equal contribution). 2024. ChitChatGuide: Conversational Interaction Using Large Language Models for Assisting People with Visual Impairments to Explore a Shopping Mall. In Proceedings of the 26th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI 2024).
[Paper Link]
[Project Page]

Masaki Kuribayashi, Kohei Uehara, Allan Wang, Daisuke Sato, Simon Chu, Shigeo Morishima. 2024. Memory-Maze: Scenario Driven Benchmark and Visual Language Navigation Model for Guiding Blind People. arXiv.
[Paper Link]

Seita Kayukawa, Daisuke Sato, Masayuki Murata, Tatsuya Ishihara, Hironobu Takagi, Shigeo Morishima, and Chieko Asakawa. 2023. Enhancing Blind Visitor’s Autonomy in a Science Museum Using an Autonomous Navigation Robot. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 2023).
[Project Page]

Seita Kayukawa, Daisuke Sato, Masayuki Murata, Tatsuya Ishihara, Akihiro Kosugi, Hironobu Takagi, Shigeo Morishima, and Chieko Asakawa. 2022. How Users, Facility Managers, and Bystanders Perceive and Accept a Navigation Robot for Visually Impaired People in Public Buildings. In Proceedings of the 31st IEEE International Conference on Robot & Human Interactive Communication (IEEE RO-MAN 2022).
[Project Page]

"Touchable" Exhibits

"Touchable" 3D model of the Miraikan building

Telescopes and microscopes are essential scientific instruments that allow us to see the unseeable. To enable visually impaired people to understand visual information by touch, we are developing "touchable" exhibits, paired with interactive audio guides.

Publications

Xiyue Wang, Seita Kayukawa, Hironobu Takagi, Giorgia Masoero, and Chieko Asakawa. 2024. Direct or Immersive? Comparing Smartphone-based Museum Guide Systems for Blind Visitors. In The 21st International Web for All Conference (W4A’24). (To Appear) [Best Technical Paper Award]
[PDF (preprint)]

Xiyue Wang, Seita Kayukawa, Hironobu Takagi, and Chieko Asakawa. 2023. TouchPilot: Designing a Guidance System That Assists Blind People in Learning Complex 3D Structures. In The 25th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’23).
[Paper Link]
[PDF]
[Project Page]

Xiyue Wang, Seita Kayukawa, Hironobu Takagi, and Chieko Asakawa. 2022. BentoMuseum: 3D and Layered Interactive Museum Map for Blind Visitors. In The 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’22), October 23–26, 2022, Athens, Greece.
[Project Page]

Researcher

Xiyue Wang

Ph.D. (Information Science) / Researcher, Miraikan - The National Museum of Emerging Science and Innovation

My current research focuses on developing tangible and tactile user interfaces that provide barrier-free museum experiences to people with vision impairments and diverse needs.
Keywords: Human-Computer Interaction, 3D Printing, Touch Sensing, Behavioral Data Analysis
Personal Page: https://xiyue-w.github.io/

Allan Wang

Ph.D. (Robotics) / Researcher, Miraikan - The National Museum of Emerging Science and Innovation

My research interest is social navigation: how a robot can navigate smoothly while conforming to the various social rules of a pedestrian-rich environment. It has many application scenarios, such as navigation for the AI Suitcase robot.
Keywords: Social Navigation, Visual Navigation, Motion Prediction, Human-Robot Interaction
Personal Page: https://allanwangliqian.com/

Renato Ribeiro

Researcher, Miraikan - The National Museum of Emerging Science and Innovation

I have researched methods for making navigation in virtual environments accessible for people with visual impairments. I currently explore ways to enable robots to navigate smoothly in pedestrian-rich environments.
Keywords: Human-Computer Interaction, Virtual Reality, Navigation

Kohei Uehara

Ph.D. (Information Science and Technology) / Part-time Researcher, Miraikan - The National Museum of Emerging Science and Innovation

My research focuses on assistive technologies for the visually impaired using technologies such as computer vision and natural language processing/generation.
Keywords: Vision and Language, Image Caption Generation, Computer Vision, Natural Language Generation, Visual Language Navigation
Personal Page: https://uehara-mech.github.io/

Masaki Kuribayashi

Part-time Researcher, Miraikan - The National Museum of Emerging Science and Innovation

My research focuses on developing robots that assist visually impaired people in navigating and exploring unfamiliar environments or places they are visiting for the first time.
Keywords: Human-Computer Interaction, Map-less Navigation, Visual Language Navigation
Personal Page: https://www.masakikuribayashi.com/

Alumni

Seita Kayukawa

2021/04–2023/03, Researcher. Personal Page: https://wotipati.github.io/

Open Data / Open Source

Miraikan 360-degree Video Dataset

Participating Organizations

Sponsorship

Miraikan (Accessibility Lab.) sponsors the following events and organizations.

Recruitment of Researchers and Experimental Participants

Miraikan Accessibility Lab. is seeking researchers who aspire to work with us at Miraikan, as well as people who can participate in experiments as users.

  • About Recruitment of Researchers
  • If you wish to participate in experiments as a user, please send us your request via the “Contact through the Internet” link below.

Contact

Miraikan - The National Museum of Emerging Science and Innovation
Contact through the Internet