
Enjoy 3D modeling but have trouble seeing? New Stanford invention opens up maker-world to the visually impaired

Researchers, working with those who are visually impaired, have developed a touch-based display that can produce physical, temporary models of objects.

Son Kim wants to design his own 3D printed coffee cup. After programming the dimensions of his cup into 3D modeling software, he reaches his hand toward a tall field of plastic pegs beside his computer, which have just now taken on the shape of his cup. By touching these pegs, Son, who is blind, is able to get an idea of his design.

Recognizing that people who are blind or visually impaired lack an easy way to interact with 3D modeling software, researchers at Stanford created a touch-based -- or tactile -- display that can produce quick, temporary, physical models of objects from software-based instructions. It forms these shapes by moving rectangular pegs up and down, and adjusts them whenever the user changes the design. The technology was presented at the Association for Computing Machinery (ACM) ASSETS conference on Oct. 29.

In a Stanford News story, lead author of the study Alexa Siu, a graduate student in mechanical engineering at Stanford, explained why she thinks this work is important.

"Design tools empower users to create and contribute to society but, with every design choice, they also limit who can and cannot participate," said Siu. "This project is about empowering a blind user to be able to design and create independently without relying on sighted mediators because that reduces creativity, agency and availability."

To make computer-aided design more accessible to people who are blind and visually impaired, Stanford researchers developed a tool that can quickly produce touchable representations of a user's work-in-progress. (Video by Farrin Abbott)

The researchers co-designed the tactile display, and the software that accompanies it, with people who are blind and visually impaired, including Kim, who is an assistive technology specialist for the Vista Center for the Blind in Palo Alto and a co-author of the paper.

Together, this team created a system capable of rendering 2.5D shapes -- not quite 3D because the bottom of the display doesn't change shape -- that also offers the user the ability to rotate their object, zoom in or out on it, and separate it into parts, such as showing the top and bottom of a cup side-by-side.
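The core idea of a 2.5D display is that each pin gets exactly one height, so a shape is rendered as a heightmap over the pin grid (which is why overhangs and the underside of an object can't be shown). A minimal sketch of that idea in Python, rendering a hollow cylinder like Kim's cup viewed from above -- the function name, grid size, and dimensions are hypothetical illustrations, not the researchers' actual software:

```python
import math

def cup_heightmap(grid=12, radius=0.8, wall=0.15, height=10):
    """Render a hollow cylinder (a cup seen from above) as a 2.5D
    heightmap: one pin height per grid cell. Coordinates are
    normalized so the pin grid spans [-1, 1] in x and y."""
    pins = []
    for row in range(grid):
        y = 2 * (row + 0.5) / grid - 1
        line = []
        for col in range(grid):
            x = 2 * (col + 0.5) / grid - 1
            r = math.hypot(x, y)           # distance from the cup's axis
            if radius - wall <= r <= radius:
                line.append(height)        # cup wall: pins fully raised
            elif r < radius - wall:
                line.append(1)             # cup floor: pins slightly raised
            else:
                line.append(0)             # outside the cup: pins flat
        pins.append(line)
    return pins

# Quick ASCII preview: ' ' flat, '.' floor, '#' wall
for line in cup_heightmap():
    print("".join(" .#"[min(h, 2)] for h in line))
```

Operations like zoom or rotation then amount to transforming the model before resampling it onto the same fixed pin grid, which is why the display can update the physical shape on command.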

"I really am excited about this project," said Kim, in the Stanford News story. "If it moves toward implementation or mass distribution in such a way that is cost effective that would enable future, visually-impaired or blind designers coming out of college to have a tool, which would give that person or persons the level of accessibility to enhance their learning; it contributes to the principle of individual, universal access and promotes independence."

Next steps for the tactile display include making versions that are less expensive, bigger and that have smaller pegs to allow for more detailed renderings.

