University of Maryland

Featured Paper

The figure shows ten screenshots, (a) through (j), connected by colored arrows:
(a) Home screen: Fritos in the camera view, with three buttons at the bottom of the screen, left to right: Scan, View, and Teach.
(b) Home screen with recognition result (purple arrow from the Scan button in (a)): Fritos in the camera view with a “Fritos” text label overlaid on the screen. A purple box on the left reads: “Recognize an object in the camera view.”
(c) List of items screen (orange arrow from the View button in (a)): three objects listed top to bottom: today Cheetos, today Lays, and today Fritos. An orange box on the left reads: “View objects in your model.”
(d) Item information screen, top (orange arrow from the Fritos item in (c)): shows the following information for the Fritos object: background variation 34%, side variation 15%, distance variation 64.3%, small object 0 out of 30, cropped object 4 out of 30.
(e) Item information and audio description screen, bottom (orange arrow labeled “scroll down” from (d)): a list of photos, with an audio bar and play button for the audio description at the bottom.
(f) Teach screen with descriptors (green arrow from the Teach button in (a)): a cropped Fritos in the camera view, with overlaid text: “Photo attributes: cropped object.”
(g) Teach screen indicating the number of training photos left (green arrow from (f)): Fritos in the camera view, with overlaid text: “10 left.”
(h) Review screen with descriptors (green arrow from (g)): a small picture of Fritos at the top, with “training quality” text below it and two measures listed: background variation 32.6% and side variation 30%. Two buttons at the bottom, left to right: OK and Retrain. A gray dotted line goes from the Retrain button back to (f).
(i) Labeling screen (green arrow from the OK button in (h)): a white box with “Name your object” at the top, a placeholder reading “Enter the name of this object,” and an OK button at the bottom.
(j) Home screen with training in progress (green arrow from the OK button in (i)): the home screen with Fritos in the camera view and overlaid text: “Training in progress.” A gray dotted line goes from this screen back to (a).
The user flow of MYCam. MYCam has three main parts: Recognizing an object in the camera view (purple top left thread), reviewing and editing the information of the objects (orange top right thread), and teaching an object to the model (green bottom thread).

Blind Users Accessing Their Training Images in Teachable Object Recognizers (Summary)

To improve blind users’ access to the training of teachable object recognizers, researchers from the University of Maryland’s iSchool conducted a study exploring the potential of ‘data descriptors’ as a form of non-visual access to training data. The results are reported in “Blind Users Accessing Their Training Images in Teachable Object Recognizers,” a paper by former and current UMD students Jonggi Hong, Jaina Gandhi, Ernest Essuah Mensah, Farnaz Zamiri Zeraati, Ebrima Haddy Jarjue, and Kyungjun Lee, together with Dr. Hernisa Kacorri (core faculty and principal investigator at the Trace R&D Center). The paper, a Best Paper Nominee, was presented at ASSETS ’22, the 24th International ACM SIGACCESS Conference on Computers and Accessibility, in Athens, Greece, on October 24, 2022.

To explore more work by Dr. Kacorri and her team on Teachable Interfaces, visit the project page for this research. This work is supported by the National Science Foundation (#1816380). Kyungjun Lee is supported by the Inclusive Information and Communications Technology RERC (#90REGE0008) from the National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), Administration for Community Living (ACL), Department of Health and Human Services (HHS). Learn more about the work of the Inclusive ICT RERC.


Read the summary

Featured News

Woman wearing headphones typing on a keyboard in a carrel
A growing pool of users at libraries, colleges, and job centers across the US and Canada has the opportunity to take advantage of Morphic, a software tool that makes computers easier to use.

Morphic: Advancing Social Equity Through Digital Inclusion

We live in a world of digital technology. Computers have become embedded in our everyday lives to the point where many of us hardly notice how often we interact with technology: to buy our groceries or train tickets, check out books from the library, communicate with colleagues, friends, and family, do our banking, enter secure spaces, and use household appliances. But for some users, this reliance on computers to complete everyday tasks leads to constant struggle, frustration, and, in some cases, giving up altogether. Too many older adults, people with disabilities, and people who lack “digital affinity” (i.e., the ability to understand and use technology, irrespective of education or intelligence) are being left behind.

All of these user groups were top of mind for the developers of Morphic, an open-source software application that makes computers easier to use. Morphic was a decade in the making, growing out of work by an international consortium of organizations (industry, universities, and NGOs) led by the Trace R&D Center (first at the University of Wisconsin and then, after its 2016 move, at the University of Maryland), Raising the Floor, and the Inclusive Design Research Centre at OCAD University.

At its core, Morphic is focused on social equity through digital inclusion...


Read the story


Celebrating 50 Years

Trace 50th Anniversary Logo

The Trace R&D Center has been a leader in the field of Information and Communication Technology since 1971 and will celebrate its 50th anniversary during the 2021-22 academic year.

Trace R&D Center History


About Trace

Trace R&D Center Investigators (left to right): Hernisa Kacorri, Gregg Vanderheiden, J. Bern Jordan, Amanda Lazar, Jonathan Lazar

The Trace Research & Development Center has been a pioneer in the field of technology and disability, known for high-impact research and development.

About Trace R&D Center