We were able to take a look at two mission-critical systems at the DLR (German Aerospace Center) open day yesterday, and noted some significant differences between these and “normal” desktop or web apps.

Tsunami warning system

This system was a €50m+ prezzie from the German state to Indonesia following the 2004 Tsunami. It is an early warning system which compiles data from a number of sources (GPS, satellite, seismographs, buoys in various ports, etc.) and informs the operator whether a Tsunami is likely or not.

(Note to self: next time bring a proper camera)

The user interface consists of four screens, all with fairly typical desktop app user interfaces (think Outlook, but with more panels). Each screen has an assigned function, ranging from observation to action, and includes maps, figures and so on.

The action screen is the only one to introduce a more web-app-like pattern, namely big buttons. This is the screen which allows the user to alert the population, so (thank god) it is used nowhere near as often as the rest of the screens. It is also operated under pressure, so it makes sense that it would present the kind of simplifications and guidance seen in web apps, which make them easier to learn and to operate occasionally.

When triggered, the system then exposes another interaction: Disseminating the warning to the population. This happens through a number of channels such as SMS, PA etc. Crucial to this is the reduction of false alarms, which would be frequent without some pretty advanced tech, since only one in ten earthquakes results in a Tsunami. Naturally a population that fled nine times for no reason is unlikely to heed the tenth – this time correct – alert, making the whole system worthless. It is a scary thought that the effectiveness of millions of Euros of life saving technology can be undone at the last minute by a simple human factor known to kids as "crying wolf".
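The one-in-ten base rate makes the “crying wolf” problem easy to quantify with Bayes' rule. A minimal back-of-the-envelope sketch (all numbers here are illustrative assumptions, not DLR figures):

```python
# Illustrative only: how detector quality affects the share of alerts
# that correspond to a real tsunami. Figures are assumptions for the sketch.

def alarm_precision(base_rate, sensitivity, false_positive_rate):
    """Probability that a raised alarm is a real tsunami (Bayes' rule)."""
    true_alarms = base_rate * sensitivity
    false_alarms = (1 - base_rate) * false_positive_rate
    return true_alarms / (true_alarms + false_alarms)

# Naive policy: alert on every significant earthquake.
# With a 1-in-10 base rate, 9 out of 10 alerts are false alarms.
naive = alarm_precision(base_rate=0.1, sensitivity=1.0, false_positive_rate=1.0)

# Fusing extra sensors (buoys, GPS, seismographs) drives false positives down,
# so most alerts that do go out are real.
fused = alarm_precision(base_rate=0.1, sensitivity=0.99, false_positive_rate=0.05)

print(f"naive precision: {naive:.2f}, fused precision: {fused:.2f}")
```

The point of the multi-sensor fusion, in other words, is less about detecting tsunamis (an earthquake alone is a decent trigger) and more about pushing the false-positive rate low enough that the population still trusts the alerts.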

International Space Station control center

The classic “mission control” room, from where the European ISS module is constantly monitored. The other modules (US, Japan, Russia) have their own mission controls in their respective countries.

The UI is once again split over four screens with vast amounts of data covering all aspects of the operational health of the module, as well as map visualisation and some larger key figures. Occasionally they need to respond to some warning or other, but the bulk of the time is spent working through procedures to manage the experiments being run.

Interestingly when a warning does occur, they are supposed to pull out the manual to find out the procedure to follow. I would have thought this would be directly integrated in the UI, but possibly keeping the displays absolutely constant is more important.

Key differences between mission-critical and ‘mere mortal’ systems

  • A high level of training and knowledge of the system is assumed
  • The screens are mostly very information dense
  • The hardware config is designed to work with the software, and the software is single purpose. So it can be designed with no consideration for platform flexibility or cohabitation.
  • They use a lot of multi-screen displays, within which the UI is divided into panels. This amounts to a sort of Russian doll of info panels.

As a consequence of this, all the information is visible at all times in a predictable position, and incurs no navigation or window management overhead. The daily cognitive load is massively reduced, and there’s no hunting through taskbars, docks or browser tabs in a crisis.

Could they be improved?

The two key UX goals for such a system are reducing training costs and reducing mistakes, especially under pressure.

It's impossible to know how those could be improved from such a cursory glance though - we'd need to study users in context and run pretty extensive tests before giving any advice that isn't pure speculation.

A couple of small things did strike me:

  • On the ISS screens in particular, the information design seems quite "windows-esque" in terms of typography, colors etc. I believe some application of "Tufte" principles could make them both clearer and more pleasant to stare at all day
  • There is obviously something not quite right when the user brings his laptop and dumps it in front of the dedicated screens. Maybe a space for laptop docking would be tidier, and would offer the advantage of a full screen and keyboard without taking away from the integrity of the primary system
  • The ISS people get to run through procedures, but what on earth do the Tsunami monitoring people do on all those days when nothing happens? And how do they avoid nodding off?

Lessons for the rest of us

The main lesson for “normal” UX designers is: Don't try this at home. These interfaces are heavily optimized for expert use and they are completely off-putting for new / occasional users.

The idea of a screen with a dedicated function is very attractive. I couldn't say what percentage, but a large amount of my cognitive energy is dissipated daily in hunting around for the right application or browser tab.

On the other hand, if I look at the current state of my machine, I have 9 applications open, with a total of 36 documents / browser tabs. And I’m pretty sure that my desk is not certified to hold the weight of 36 screens.

Perhaps the issue could be mitigated if I took the trouble to close my unused documents a bit more often. But that’s the crux of the problem: They are not in sight so I don’t care. And when they do get in the way it is because I am trying to get something else done, and stopping that process to do housekeeping is a pain. I’m starting to think my optimal setup would be some combination of multi-screen panelling with virtual desktops / Spaces.

Steffi suggests we could have dedicated rooms for each task: Communication room with screens dedicated to Outlook, Skype, etc; Design room, dev room, web browsing room. Maybe we could do it open plan and replicate the office layout of the ISS control centre. Now I just need to find a large warehouse studio ...