All the information collected by the TeamAware sensors has its role to play in the system. The Command-and-Control (C2) operators can manipulate and analyze all this data in detail before making decisions. However, the first responders (FRs) deployed in the field cannot stop their operation to browse through all the available data.
For this reason, the Mobile and AR interfaces are critical if the project is to add value for the deployed FRs. These interfaces must filter the data and display only the information that is really needed, in a way that does not hinder the FRs’ ability to do their job. That way we can enhance the FRs’ decision making without changing their current way of operating.
To achieve this, we are following some common design principles in both applications:
Hick's Law: keeping the number of options in each view reasonable minimizes the time it takes a user to complete an action
Aesthetic-Usability Effect: the more pleasing an application is to use, the more motivated the end user feels to use it
Fitts’s Law: the time to acquire a target is a function of the distance to the target and its size (a reference formulation is given after this list)
Ergonomics: movements required for gesture interaction need to be easy to perform even in extreme and dangerous conditions
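For reference (this formulation is not part of the project documentation), Fitts’s Law is commonly written in its Shannon form as T = a + b·log2(D/W + 1), where T is the time to acquire the target, D the distance to it, W its size, and a and b are empirically fitted constants for the input device; larger and closer targets are therefore faster to select.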
The Mobile Interface (MI) is being built with the Ionic Framework, which uses Capacitor native plugins to access both Android and iOS capabilities and deliver a multi-platform application. In addition, the Angular Framework, a component-based framework for building scalable applications, is used for the application code encapsulated inside the Ionic app.
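As an illustration of this stack (a minimal sketch, not the actual TeamAware code; the component and plugin choices here are assumptions), an Angular component inside the Ionic app can reach native device capabilities through a Capacitor plugin:

```typescript
import { Component } from '@angular/core';
import { Geolocation } from '@capacitor/geolocation';

// Hypothetical component: reads the device position through the Capacitor
// Geolocation plugin; the same TypeScript code runs on Android and iOS builds.
@Component({
  selector: 'app-fr-position',
  template: `<p>Lat: {{ lat }} / Lon: {{ lon }}</p>`,
})
export class FrPositionComponent {
  lat?: number;
  lon?: number;

  async refreshPosition(): Promise<void> {
    const pos = await Geolocation.getCurrentPosition();
    this.lat = pos.coords.latitude;
    this.lon = pos.coords.longitude;
  }
}
```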
The aim of the MI is to deliver real-time information to the FRs in any situation without introducing additional stress into their operations. The FRs should remain mostly hands-free so they can perform their activities efficiently. Therefore, the main goal of the MI is to provide the information needed at each step of the operation, although it will also provide a direct communication channel between the C2 and the FRs.
Following this approach, the map will handle the main interaction with the FRs, displaying several types of alerts, FR locations, and sensors. This is an easy and fast way to make the FRs aware of what currently surrounds them.
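As a rough sketch of this map-centered view (illustrative only; Leaflet is used here as a stand-in map library, and the actual MI map component may differ), FR positions could be rendered as clickable markers:

```typescript
import * as L from 'leaflet';

// Hypothetical helper: draws one marker per first responder on a map bound
// to a DOM element with id "map"; clicking a marker reveals basic details.
export function showOverviewMap(frPositions: [number, number][]): L.Map {
  const map = L.map('map').setView(frPositions[0] ?? [0, 0], 15);
  L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png').addTo(map);

  frPositions.forEach((pos, i) => {
    L.marker(pos).addTo(map).bindPopup(`FR #${i + 1}`);
  });
  return map;
}
```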
Moreover, to give the end users more detail, the MI provides a view that allows them to navigate through all FRs or sensors and check their current state. In the case of the sensors, a wide variety of parameters will be displayed dynamically depending on their danger levels, which eases readability.
Similarly, the “Alerts View” will provide a list of alerts sorted by danger level, each of which can be clicked for more information.
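A minimal sketch of how such a list could be ordered follows (the danger scale and alert fields are assumptions for illustration, not the project’s actual data model):

```typescript
// Hypothetical danger scale and alert shape.
export enum DangerLevel { Low = 0, Medium = 1, High = 2, Critical = 3 }

export interface Alert {
  id: string;
  title: string;
  dangerLevel: DangerLevel;
  timestamp: number; // epoch milliseconds
}

// Most dangerous alerts first; newer alerts first within the same level.
export function sortAlerts(alerts: Alert[]): Alert[] {
  return [...alerts].sort(
    (a, b) => b.dangerLevel - a.dangerLevel || b.timestamp - a.timestamp,
  );
}
```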
Finally, the “Audio Notes View” will support communication from the FRs to the C2. This feature allows them to send an audio note together with information such as geo-position, health status, hazard conditions, and some predefined alert descriptions. Once this information is sent, the C2 should analyze it and raise the appropriate alert.
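The payload of such an audio note could look roughly like the following (field names and types are assumptions based on the description above, not the project’s actual message format):

```typescript
// Hypothetical shape of an audio note sent from the MI to the C2.
export interface AudioNote {
  frId: string;                            // identifier of the reporting FR
  audio: Blob;                             // the recorded audio clip
  position: { lat: number; lon: number };  // geo-position at recording time
  healthStatus?: string;                   // e.g. "ok", "injured"
  hazardConditions?: string[];             // observed hazards at the location
  predefinedAlert?: string;                // optional predefined alert description
  sentAt: string;                          // ISO 8601 timestamp
}
```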
We’ll begin by exploring the different approaches to AR technology and some examples of the types of applications that can be built with each one. After that, we’ll analyze how we plan to work together with the end users to design our interface.
This approach uses the camera of a mobile device and displays the augmented reality scene on the screen of the device. With this technology, the real world and the virtual world are mixed and can interact, but everything is displayed in a flat 2D view. It’s the most common and approachable form of AR technology, but it has its limitations. The main disadvantages are:
It’s not very immersive because you are looking at a screen (2D)
Mobile device sensors and processing power are limited and not designed specifically for AR applications
Using mobile devices for AR still looks odd (an issue that we have with most AR technology)
It’s not really an AR technology. A HUD is just a screen that shows information in 2D in front of the user. It has its applications; for example, it could be useful to show navigation data to someone driving a vehicle, so they don’t have to take their eyes off the road to see the instructions. However, it does not take the real world into account when displaying the data.
These interfaces require a specialized device to work (HoloLens, Magic Leap…) and provide the best AR experience. They come equipped with dedicated hardware and software for AR tasks, which allows them to understand the surrounding environment and present data in 3D, integrated into the user’s real space. The main disadvantages are:
The AR devices are pricey and usually suited to very specific use cases (industrial, research…)
The AR devices are bulky, and people using them may look odd (which may cause rejection when they are proposed as a solution)
AR technology is still in development, which means there are still limitations to overcome and users have to be trained to use it (unlike mobile AR, which is closer to a “normal” mobile use)
In this first phase of the project, we have collected the end users’ needs to prioritize what information they want to see in our application. One of the main obstacles we face when designing an AR application is that conveying how things look and how the interaction methods work through pictures and video is not easy.
To overcome this challenge, we have decided to make our interface as modular as possible to allow quick iteration. With this idea in mind, we plan to build several prototypes for the midterm demonstration so that the end users can try different interaction methods and ways of organizing the information on the screen.
Our objective is to incorporate the end users’ input in all the development steps, so that our final application not only complies with the technical requirements but is also tailored to their needs.
Contact
Monica Florea
Administrative Coordinator
European Projects Department
SIMAVI
Soseaua Bucuresti-Ploiesti 73-81 COM
Bucuresti/ROMANIA
Email:
Çağlar Akman
Technical Coordinator
Command and Control Systems
HAVELSAN
Eskişehir Yolu 7 km
Ankara/TURKEY
Email:
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101019808.