
Real-Time Emergency Response

Submitted on March 31, 2013

What problem are you intending to solve?

Detection, observation, and assessment of situations requiring intervention by emergency responders depend on high-quality "live" data.

What is the technological approach, or development roadmap?

In the third development round, we began with a proof of concept and finished with an alpha product, viable for in-the-field testing by partner emergency response groups: the Ministry for Public Safety of Quebec and the police and emergency services of Red Wing, MN. The main advances realized in this round include:

  1. Incorporating feedback and feature requests from two different emergency response teams who are testing the rtER apps.

  2. Moving to improved real-time video from mobile devices and implementing an iPhone version of the smartphone app, since not all responders seeking to try it have access to Android devices.

  3. Extending the HTML5 web application to better filter more content types (Twitter, YouTube, etc.), and to improve real-time collaboration between responders via tagging.

The collaborative grid view of video streams, Twitter feeds, and web pages related to a fire being monitored in Montreal

Early feedback from both groups guided many of the new developments. The Red Wing users wanted the ability to create group and individual workspaces so that responders in different areas or with different scopes of responsibility could interact with the system simultaneously without interfering with each other. In Quebec City, the users asked about support for multiple filtered information streams for parallel events of unequal importance.

We handled both of these by making taxonomy a key component of our software architecture. Any item -- video stream, Twitter search, YouTube video -- can be tagged, either through the web interface or by automated filters configured on the server, and each tag has its own workspace that can be opened and sorted collaboratively by drag and drop within its grid view. As a result, items can be sent and shared between tag workspaces simply by adding and removing tags. For example, Red Wing could create both city EOC and state EOC tags during a major crisis to facilitate data sharing between these different levels. Similarly, the Quebec ministry would have a general feed via the Quebec-all tag, but during a flood an automatic tag for live video in a certain area, video-zone1, would be created, confining its contents to video from the affected area.
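To make the tag-based sharing concrete, here is a minimal sketch of how tagged items and per-tag workspaces might be represented; the class and field names are illustrative assumptions, not the actual rtER schema.

```python
# Minimal sketch of a tag-based item model (hypothetical names, not the rtER schema).
from dataclasses import dataclass, field

@dataclass
class Item:
    """Any piece of content: a video stream, Twitter search, YouTube video, etc."""
    item_id: str
    kind: str                       # e.g. "video", "twitter", "youtube"
    tags: set = field(default_factory=set)

class Workspace:
    """One workspace per tag; items are shared simply by adding or removing tags."""
    def __init__(self, tag):
        self.tag = tag
        self.order = []             # collaborative drag-and-drop ordering of item ids

    def contains(self, item):
        return self.tag in item.tags

# Sharing between city and state EOC workspaces during a major crisis:
stream = Item("cam-42", "video", tags={"city-EOC"})
stream.tags.add("state-EOC")        # now visible in both workspaces
stream.tags.discard("city-EOC")     # or moved out of the city workspace entirely
```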

The Quebec City responders, who have spent more time in the field with our system, made numerous user interface suggestions and bug reports that were also incorporated into the design. In particular, their field work underlined the necessity of a decent frame rate for video streaming. This was not possible with the earlier prototype, which frustrated users. Our latest version now offers a substantially improved frame rate.

Going forward, we want to ensure that we can not only bring data into our system easily, but also make this data available to other services as part of the emergency response ecosystem. To this end, our data is available through a standard RESTful API, which means that data aggregation tools can be extended to leverage our filtering tools and our live streaming platform.
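For instance, an external aggregation tool could pull tagged items over HTTP roughly as follows; the host, endpoint path, and field names below are hypothetical placeholders rather than the documented rtER API.

```python
# Hypothetical example of consuming the REST API; endpoint and fields are illustrative only.
import requests

BASE = "https://rter.example.org/api"   # placeholder host

def items_for_tag(tag):
    """Fetch all items carrying a given tag, e.g. for an external dashboard."""
    resp = requests.get(f"{BASE}/items", params={"tag": tag})
    resp.raise_for_status()
    return resp.json()

for item in items_for_tag("video-zone1"):
    print(item.get("kind"), item.get("url"))
```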


The remainder of this section provides a retrospective history of the previous rounds:

In the first development round, our efforts revolved around a prototype visualization environment for the Emergency Operations Center (EOC), assisting coordinators and dispatch operators in their decision-making and resource allocation. The prototype integrates live video streams and mash-up data of relevance to emergency management within an immersive, navigable, street-view representation of the environment.

Visualization Environment from Round 1

However, such a tool is only useful if the data can be pre-processed and curated before it arrives at the EOC. Prompted in large part by our discussions with emergency network representatives and disaster management personnel, we began to focus on the problem of team-based data management. Thus, for the second round, we shifted gears to prototype the tools and user interface that would be used by a Virtual Operations Support Team (VOST). As noted in our conversation with David Black, one of the managers of Crisis Commons, VOSTs serve as a valued community of trusted volunteers with the training necessary to help emergency coordinators in disaster response, and are considered critical in recovery operations.

Information Tiers, showing interaction between Emergency Dispatch, VOST Volunteers, and the general public

In particular, we aim to support the VOST activities of discovering, collating, filtering, and organizing information of relevance to disasters, during times of crisis when emergency coordinators are easily overwhelmed with the masses of incoming data. "During large-scale emergency events millions of new posts, pictures and videos are added to YouTube, Twitter, Facebook, etc. every day. How can a small local public health, first response, or emergency management agency sort through all of that?" Cheryl Bledsoe, Emergency Manager for Clark County WA, notes, "I’m way too busy to watch the internet. Prior to use of a VOST, we just simply wouldn’t listen to the community. We would connect with emergency response organizations and limit our engagement with those who would just call 911. That is a very limited view of the world ..."

IVOST interface for filtering and managing video data in geospatial context

We were pleasantly surprised to discover that the same tools we were building for the VOST community also address certain needs of the emergency response organizations of Red Wing, MN and the Ministry of Public Safety in Quebec, who both desire access to geolocalized video views, e.g., highway traffic, status of ice floes on rivers, to make decisions regarding management of potential flood situations, large-scale fires, and weather-related situations requiring possible evacuation of nearby communities.

The tools we developed help these user groups effectively receive, understand, and take action on live picture or video streams relevant to the current disaster, and integrate accompanying text data, e.g., from chat sessions and RSS feeds, all linked to a geospatial representation of the corresponding locations within a map. In addition, the interface allows for remote interaction with smartphone users in the field, guiding them to orient their cameras in a particular direction to obtain a particular view as desired by an analyst, as shown in our round #2 demo video.
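The camera-guidance interaction can be pictured as the analyst requesting a compass heading that the phone compares against its own sensor reading to display a turn indicator; the following is a simplified sketch under that assumption, not the actual rtER protocol.

```python
# Simplified illustration of analyst-to-phone orientation guidance (not the actual rtER protocol).
def guidance_arrow(desired_heading_deg, current_heading_deg):
    """Return which way the camera holder should turn, based on compass headings in degrees."""
    delta = (desired_heading_deg - current_heading_deg + 180) % 360 - 180
    if abs(delta) < 10:
        return "hold"          # close enough; keep the camera where it is
    return "turn right" if delta > 0 else "turn left"

# Analyst asks for a view toward 270 degrees (west); the phone currently points at 200 degrees.
print(guidance_arrow(270, 200))   # -> "turn right"
```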

Clicking on any thumbnail brings up a full-frame video of the selected feed with indicated orientation on map

In the future, much of this source video content may be generated by the public. Just as people post to Twitter and Facebook from disaster areas today, tools like Instagram, Cinemagram and Snapchat are also seeing use in the same contexts, generating significant network load. By definition, the virtual nature of the VOST-type volunteers that receive and process this information means that they will often be far from the disaster area, perhaps even in remote countries. In this regard, an additional motivation for us in developing prototype technology intended for use by the VOST community is that this helps move our project toward the goal of tapping into crowdsourced resources during crises.

Interaction during a sample emergency scenario

We illustrate with reference to the scenario described in further detail on our website, loosely based on actual events during and after Hurricane Sandy, with an eye toward how our rtER app as demonstrated in the demo video would be used.

The rtER smartphone application is installed on the phones of people in the National Guard, military reservists, etc., who have some background in emergency response, and who know how to keep themselves safe in a disaster area. Further video would be streamed from the public via other applications, but would lack the guidance system shown in our demonstration video. Assuming a pre-existing communications network (or an alternative channel) can be kept operating, our rtER technology will help improve detection and response time in such critical events requiring urgent attention.

At approximately 3 am, part of the VOST team is tasked with monitoring the Breezy Point neighborhood, which has flooded and where dozens of homes are now burning; more than 190 firefighters are responding to contain a blaze that will ultimately burn approximately 100 homes to the ground (National Post | NBC News).

Violet and Victor, both VOST volunteers, have been tasked with monitoring the situation in Breezy Point. On their screens, they see:

  1. Twitter feeds, visualized and filtered geographically to the Breezy Point area, plus any other tweets with the hashtag #BreezyPoint (see the sketch after this list).
  2. Video feeds from the area, visualized geographically on an interactive map and linked to a grid where they can be organized by priority.
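A filter of the kind described in item 1 can be approximated as a bounding-box test combined with a hashtag match; the snippet below is a rough sketch with approximate coordinates, not the production filter.

```python
# Rough sketch of the geographic + hashtag tweet filter used in the scenario (illustrative only).
BREEZY_POINT_BBOX = (40.55, -73.94, 40.57, -73.91)   # (lat_min, lon_min, lat_max, lon_max), approximate

def matches(tweet):
    """Keep tweets geotagged inside the area OR carrying the #BreezyPoint hashtag."""
    lat, lon = tweet.get("lat"), tweet.get("lon")
    in_area = (lat is not None and lon is not None and
               BREEZY_POINT_BBOX[0] <= lat <= BREEZY_POINT_BBOX[2] and
               BREEZY_POINT_BBOX[1] <= lon <= BREEZY_POINT_BBOX[3])
    return in_area or "#breezypoint" in tweet.get("text", "").lower()
```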

Violet sees a tweet: "Fire in houses across the street in #BreezyPoint, strong winds spreading flames!" Using the map view to correlate the tweet with live video feeds, she finds a firefighter from the Roxbury Volunteer Fire Department who is also tweeting and streaming video from his phone using the rtER mobile app, showing a large glow in the direction of Breezy Point. Violet moves that video to the top priority position by dragging it, and messages Victor, telling him that emergency responders and displaced citizens are going to need a safe place to stage and regroup near the fire. The position of the video is already updated on his screen. Zooming in on the map, Victor finds another streaming video, being sent by Anne, who is closer to the fire.

Anne is focusing on the fire itself, but Victor needs to find a safe area that is not flooded, and is not burning. He selects this video feed, and prompts Anne to pan to the right, looking for a safe spot. Anne sees an indicator telling her to move right. She does so, and Victor sees an empty parking lot across a wide road from the fire.

Victor messages the Public Information Officer (PIO) responsible for Queens, who is situated in a remote Emergency Operations Center: "Fire in Breezy Point <link to first video feed> - safe area here: <link>". The PIO, who is viewing the unfolding situation in a large-scale immersive display environment that allows her to keep tabs on additional trouble areas at various locations and view live video streams from each of them, can now direct people toward the parking lot and make sure there are responders there to meet them.

After getting confirmation from the PIO that responders will be using the parking lot to stage relief operations and help displaced citizens, Victor annotates the video feed with this information. Finally, he places a static marker on the map with this information and promotes it to the top. Meanwhile, Violet, still monitoring social media, sees a tweet: "Scrapes, bruises and cold in #BreezyPoint. Why is no one helping us?" Noticing that the tweet is near the new staging area Victor just flagged, she responds to the tweet with directions to nearby help.

How will end users interact with it, and how will they benefit?

Further to the text above, interaction is the key: the types of user activity rtER supports are intensive, time-constrained group efforts in which every minute counts. Our demo videos illustrate some of this interaction, but what is most important to note is how rtER differs from existing mapping and information-filtering tools such as those based on Ushahidi. First, it makes real-time video a first-class citizen for emergency response. Second, it offers real-time interaction with the people contributing video content from mobile devices. Last, it provides a novel UI for managing content while collaborating with others to analyze and organize it.

How will your app leverage the 1Gbps, sliceable and deeply programmable network?

In the first development round, we demonstrated an immersive prototype for an Emergency Operations Centre (EOC) that included an interactively updating street view display incorporating real-time video from mobile devices. Our statement from the first development round still stands: "The demands of routing potentially large numbers of live video streams, e.g., from such on-site smartphone cameras in an arbitrary location to specific control sites, requires the features of programmable networks such as UCLP to manage bandwidth allocation and possibly multicast distribution of the data with minimal latency." Smart, fast, reliable networks are the only effective way to enable widely dispersed volunteers to collaborate in such a scenario. In addition, our development round 2 demonstration highlights real-time round-trip communication with those in disaster areas, such as guiding someone with a smartphone to gather footage of another section of the scene. Managing this on a large scale will pose many challenges in ensuring that such collaborations can be carried out in real time. Only by designing usable interfaces for doing this kind of work can we make the underlying network useful in practice. The GENI network deployment recognizes that mobile is a significant aspect of next-generation networking, via its WiMax deployments (http://www.geni.net/?p=2001). This is exactly the sort of functionality, perhaps even deployed on short notice to support a disaster area, that a system like rtER would need. Moreover, we might anticipate rapidly deployed unmanned aerial vehicles (UAVs) equipped with cameras being used to supply real-time video feeds from locations where environmental conditions prevent direct access by responders.

Further application information

Additional supporting information, materials and resources

Read about project updates - project blog

Take a look at the existing code - project repository

Will your work be beta-ready by the end of the Development Challenge?

Yes. In addition to our previous Android app, developed in Round #2, we have now added an iOS version, and significant enhancements to the server-side capabilities. Full video support is presently limited to certain web browsers but will be expanded over the next few weeks. These tools are now being shared with our users both in Quebec and Red Wing, MN, for further testing and feedback. The EOC tool, developed in Round #1, requires additional work to benefit fully from our recent emphasis on the video streaming and collaborative information management tools developed in Rounds #2 and #3.

How much effort do you expect this work to take?

The prototypes developed for the first three rounds of the development challenge have involved steadily increasing efforts as we shifted from proof-of-concept exploration of early ideas to tailoring our system in response to the needs of real users. In particular, the design has been influenced heavily by discussions and feedback from early tests with emergency response coordinators in Red Wing, MN, and Quebec City, QC, which motivated the iOS port and implementation of a low-latency video streaming protocol for mobile devices. Now that the core technology is reasonably functional and capable of running on the client devices of both user groups, we expect to carry out significant testing in various use cases for the final round, working with these user communities to refine the tools further.

Do you need help?

Thanks not only to the financial support in the previous two rounds, but also to contacts made via Mozilla and publicity through Ignite, we have recently benefited from valuable input from experts in emergency and disaster response. These have included a director of Crisis Commons, the Quebec Ministry for Public Safety, police and sheriffs' offices, and a Virtual Volunteer Team Leader. This addressed our urgent need for guidance, from those who work in the field, on where to focus our technology. We also benefited from the valuable assistance of a User Experience expert, who volunteered through Ignite, to refine our use case scenario and develop the user personas around which we are continuing our work. Although further such support will continue to be crucial, we expect that assistance will be most needed on the technical and testing side as we investigate integration with larger codebases such as Ushahidi and carry out wider-scale testing, including exploration of crowdsourced content contributions.

If you can help, let them know in the comments below.

Jeremy

and team members

We are a group of computer engineers, combining backgrounds in human-computer interaction, high-bandwidth videoconferencing, immersive media, hardware prototyping, and mobile device interaction. The heavy lifting is currently being undertaken by a talented core of B.Eng. students and interns with diverse experience in embedded systems, mash-ups, mobile platforms, and geospatial navigation: Severin Smith, Stephane Beniak, Stepan Salenikovich, Nehil Jain, and Alex Eichhorn. Additional assistance was provided by Jonah Model in generating and refining our user personas. General input and guidance is provided by Jeremy Cooperstock, Jeff Blum and Jan Anlauff.
