Multimodal tagging for disaster response

design, course (Berkeley)

Overview

My Role: Conducting several stakeholder interviews and leading the development of the prototype for NLP-based tagging and spatial mapping
Tools/Skills: User Interviews, Natural Language Processing, Speech Recognition, Data Visualization, Python, Google Cloud API
Timeline: January - May 2021 (5 months)
Team: 5 (MechE, Cognitive Science, Data Science, Business)

Context

Military branches, such as the US Navy, are often tasked with emergency response during natural disasters. However, in addition to the military, FEMA, local police and fire departments, and even civilians must work together to address rapidly shifting incidents in an environment constrained by latency and low bandwidth. Emergency responders across the board therefore face the challenge of ensuring that information from a wide variety of sources is properly organized, prioritized, and acted upon. We worked with Naval Information Warfare Center (NIWC) Pacific to tackle this challenge (see the intro video here).

[Image: glowing globe representing information and data]

Goal

Use technology to improve information sharing for decision-makers in disaster response scenarios

Outcomes

1) Pinpointed the difficulties of communicating and synthesizing information during and after a disaster through stakeholder interviews

2) Developed a mockup of a dashboard that enables automatic data organization and visualization for decision-makers

3) Developed a prototype for tagging unstructured audio and text from disaster-related incidents, then mapping them geospatially. The integrated example of audio and text tagging and mapping is shown below:
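At its core, the prototype turns unstructured incident text into tags and map points. A minimal sketch of that idea, with illustrative placeholder categories, keywords, and a small hypothetical gazetteer standing in for a real geocoding service:

```python
# Sketch: tag incident reports by keyword category, then map mentioned
# places to coordinates. All categories, keywords, and locations below
# are illustrative placeholders, not the prototype's actual data.

INCIDENT_TAGS = {
    "fire": ["fire", "smoke", "burning"],
    "flood": ["flood", "water level", "inundated"],
    "medical": ["injured", "casualty", "medic"],
}

# Hypothetical gazetteer: place name -> (latitude, longitude)
GAZETTEER = {
    "main street bridge": (37.87, -122.27),
    "city hall": (37.88, -122.26),
}

def tag_report(text):
    """Return the set of incident tags whose keywords appear in the text."""
    lowered = text.lower()
    return {tag for tag, keywords in INCIDENT_TAGS.items()
            if any(kw in lowered for kw in keywords)}

def geolocate(text):
    """Return (place, coords) for the first known place name mentioned,
    or (None, None) if no gazetteer entry matches."""
    lowered = text.lower()
    for place, coords in GAZETTEER.items():
        if place in lowered:
            return place, coords
    return None, None

report = "Smoke and fire reported near Main Street Bridge, two injured."
print(tag_report(report))   # tags derived from matched keywords
print(geolocate(report))    # place name and its coordinates
```

The actual prototype used NLP models rather than fixed keyword lists, but the pipeline shape (tag, then locate, then plot) is the same.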

Design Process

We conducted stakeholder interviews to surface pain points, supplemented them with market research, and used those findings to shape the final prototype, introduced below.

See audio-only prototype.
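The audio path follows the same idea with a transcription step in front. A sketch with a stubbed transcriber standing in for a real speech-to-text call (in our prototype this step was backed by the Google Cloud API, which requires credentials and audio files):

```python
# Sketch of the audio path: transcribe, then tag. The transcriber is a
# stub with canned output standing in for a real speech-to-text service.

def transcribe(audio_path):
    """Stub: a real implementation would send the audio file to a
    speech-to-text service and return its transcript."""
    canned = {
        "radio_call_01.wav": "Flooding reported at City Hall, roads inundated.",
    }
    return canned.get(audio_path, "")

def tag_transcript(transcript, tag_keywords):
    """Tag a transcript using the same keyword matching as the text path."""
    lowered = transcript.lower()
    return {tag for tag, keywords in tag_keywords.items()
            if any(kw in lowered for kw in keywords)}

# Illustrative placeholder tags, not the prototype's actual taxonomy.
TAGS = {"flood": ["flood", "inundated"], "fire": ["fire", "smoke"]}

transcript = transcribe("radio_call_01.wav")
print(tag_transcript(transcript, TAGS))
```

Once transcribed, audio reports flow through the same tagging and mapping steps as text reports, which is what the integrated prototype demonstrates.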

NLP Prototype with Audio and Text

See text-only prototype.

Future Steps

Reflection

It was interesting to work with NIWC on this project (as part of ME292C, Innovation in Disaster Response, Recovery, and Resilience) because they brought deep technical domain expertise, while we took a more user-centric approach to this complex problem. I think we were able to blend both sides into our final prototypes, although some of the problems, such as latency and low bandwidth in communication, likely have to be solved from the more technical side.