University of Maryland

Featured Paper

(a) Home screen of the app, with Fritos in the camera view and three buttons at the bottom, left to right: Scan, View, and Teach.
(b) Home screen with recognition result, reached by a purple arrow from the Scan button in (a): Fritos is in the camera view with an overlaid "Fritos" text label. A purple box on the left of this screenshot says: "Recognize an object in the camera view."
(c) List of items screen, reached by a red arrow from the View button in (a). Three objects are listed, top to bottom: today Cheetos, today Lays, and today Fritos. A red box on the left of this screenshot says: "View objects in your model."
(d) Item information screen (top), reached by a red arrow from the Fritos item in (c). It shows the following information for the Fritos object: background variation 34%, side variation 15%, distance variation 64.3%, small object 0 out of 30, cropped object 4 out of 30.
(e) Item information and audio description screen (bottom), reached by a red arrow labeled "scroll down." It shows a list of photos and, at the bottom, an audio bar with a play button for the audio description.
(f) Teach screen with descriptors, reached by a green arrow from the Teach button in (a). A cropped Fritos is in the camera view, with overlaid text saying: "photo attributes: cropped object."
(g) Teach screen indicating the number of training photos left, reached by a green arrow from (f): Fritos is in the camera view with overlaid text saying "10 left."
(h) Review screen with descriptors, reached by a green arrow from (g). A small picture of Fritos appears at the top with "training quality" below it, listing background variation 32.6% and side variation 30%. Two buttons sit at the bottom, left to right: OK and Retrain. A gray dotted line goes from the Retrain button back to (f).
(i) Labeling screen, reached by a green arrow from the OK button in (h). A white box shows "name your object" at the top, a placeholder saying "enter the name of this object," and an OK button at the bottom.
(j) Home screen with training in progress, reached by a green arrow from the OK button in (i): Fritos is in the camera view with overlaid text saying "Training in progress." A gray dotted line leads from this screen back to (a).
The user flow of MYCam. MYCam has three main parts: recognizing an object in the camera view (purple, top left thread), reviewing and editing the information of the objects (red, top right thread), and teaching an object to the model (green, bottom thread).

Blind Users Accessing Their Training Images in Teachable Object Recognizers (Summary)

To improve blind users' access to the training of teachable object recognizers, researchers from the University of Maryland's iSchool conducted a study exploring the potential of "data descriptors" as a form of non-visual access to training data. The results of this study are reported in "Blind Users Accessing Their Training Images in Teachable Object Recognizers," a paper by former and current UMD students Jonggi Hong, Jaina Gandhi, Ernest Essuah Mensah, Farnaz Zamiri Zeraati, Ebrima Haddy Jarjue, and Kyungjun Lee, together with Dr. Hernisa Kacorri (core faculty and principal investigator at the Trace R&D Center). The paper, a Best Paper Nominee at ASSETS '22, was presented at the 24th International ACM SIGACCESS Conference on Computers and Accessibility in Athens, Greece, on October 24, 2022.

To explore more work by Dr. Kacorri and her team on Teachable Interfaces, visit the project page for this research. This work is supported by the National Science Foundation (#1816380). Kyungjun Lee is supported by the Inclusive Information and Communications Technology RERC (#90REGE0008) from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), Administration for Community Living (ACL), Department of Health and Human Services (HHS). Learn more about the work of the Inclusive ICT RERC.


Read the summary

Featured News

An overhead shot of a child, hand outstretched and taped to a handpiece on the surface of a communication board with a grid of letters, numbers, and symbols; a man sits on the child's left and a woman on the child's right.
Gregg Vanderheiden demonstrating the AutoCom with children at the OCCC in Toronto (1973).

New History of the Trace Center Released: Technology and Disability: 50 Years of Trace R&D Center Contributions and Lessons Learned

The new book from Springer recounts Trace's 50-year history, its enduring contributions to the field, and the lessons learned along the way. Reviewers have called the book "a tour de force," "a catalyst for a global movement," and "an indispensable engine propelling accessibility forward for people with disabilities."


Dr. Vanderheiden, professor in the College of Information Studies at the University of Maryland, recounts Trace's origin story at the beginning of the new book Technology and Disability: 50 Years of Trace R&D Center Contributions and Lessons Learned (Springer, 2023). (Available in print or ebook format from Springer or for Kindle on Amazon.) The book is co-authored by the Trace Center's core faculty, in place since the Center's 2016 move to the University of Maryland College of Information Studies: Trace's new director, Professor Jonathan Lazar; Assistant Professor Hernisa Kacorri; Assistant Professor Amanda Lazar (no relation to Jonathan); Assistant Research Scientist J. Bern Jordan; and Professor Gregg Vanderheiden, the Trace Center Director Emeritus. The book is a chronological journey through the Center's remarkable achievements, peppered with compelling anecdotes and lessons learned from a half-century of pioneering work at the intersection of technology and disability.

Read the story


Celebrating 50 Years

Trace 50th Anniversary Logo

The Trace R&D Center has been a leader in the field of Information and Communication Technology since 1971 and is celebrating its 50th anniversary during the 2021-22 academic year.

Trace R&D Center History


About Trace

Trace PIs (left to right): Hernisa Kacorri, Gregg Vanderheiden, J. Bern Jordan, Amanda Lazar, Jonathan Lazar

The Trace Research & Development Center has been a pioneer in the field of technology and disability, known for high-impact research and development.

About Trace R&D Center