Over the last year, several people have asked for my advice about the viability and efficacy of information dashboards. What’s an information dashboard? The instrument panel behind the steering wheel of a car is the most common form of dashboard. It provides useful information to the driver, such as the speed of the car and how much fuel is left. The car dashboard has been carefully designed so that information can be read at a glance, with only the most important information positioned front-and-centre. If it were otherwise, the driver would have a difficult time reading the dashboard while concentrating on the road.

An information dashboard is roughly the same idea. A well-designed information dashboard is a single computer screen full of facts and figures. Instead of monitoring the conditions within a complex machine, such as a car, an information dashboard monitors the conditions within an organization, process, or field of practice. Usually this amounts to a collection of visual elements, by which I mean charts, graphs, gauges, tables, lists, and text snippets. The ideal is for a decision-maker to be able to glance at the dashboard and decide whether a decision is required. If a decision is required, then the dashboard would also provide information to help make that decision a good one—or, more likely, point to where the right information might be found. I say this is an ideal because the activity within an organization is usually much messier and less predictable than the functioning of a machine.

Information dashboards are in widespread use in several professions. Take the most famous case: the Bloomberg Machine, a computer terminal for financial investors that features a built-in information dashboard. It is said that this dashboard provides all the (continually updated) information that a finance rascal might need (news alerts, asset values, indicators of market conditions, and the like). It can be argued that panic- and mania-prone investment analysts ought to throw their Bloomberg Machines into the Hudson River, do intensive research, and adopt a more prudent long-term strategy. (And study the wonderfully contrarian Grant’s Interest Rate Observer.) But that’s a different matter entirely.

Or is it? One of the criticisms of this type of information management is that it encourages reaction to short-term conditions over longer-term considerations. And that easily quantified information is highlighted at the expense of messier qualitative information. And that standardized information is the focus, even when the most relevant information can’t be standardized to fit the dashboard format. And that people within organisations “game” the marquee indicators so as to favourably exaggerate personal or organisational performance. (And, and …) Thus, there is a need to make sure that an information dashboard is not implemented for purposes to which it is not suited. Many financial traders will admit that they have an unhealthy addiction to their Bloomberg boxes. As with many forms of addiction, acknowledging the harm doesn’t necessarily lead to kicking the habit.

Information Dashboard Design: The Effective Visual Communication of Data by Stephen Few (O’Reilly, 2006), pp. ix, 211.

In his book Information Dashboard Design, Stephen Few acknowledges the problem in general and describes some additional technical challenges. Among those who peddle them, there isn’t even a consensus about what an information dashboard actually is; vendors have marketed a number of software packages and information services under the fashionable moniker. Mr. Few defines an information dashboard as a “visual display of the most important information needed to achieve one or more objectives which fits entirely on a single computer screen so it can be monitored at a glance.” (p. 34) The display must also be concise, clear, and easily customizable. Unfortunately, most off-the-shelf information dashboards are needlessly ornate, cluttered, and confusing. Little thought is given to the science of perception and how it can aid the visual communication of data. Mr. Few wants to fix that. His book is mostly about which visual elements are ideal for dashboards and which are not, as well as how to arrange them on the screen in the optimal way.

I’ll start with Few’s discussion of perception and design because it has implications for other formats of data reporting, not just dashboards.

  • First, the reason why a dashboard is only a single screen is that iconic memory (i.e., the visual sensory register or “visual sketchpad”) and short-term memory (working memory) can hold only a few pieces of information for a brief period. If the information is scattered over several screens, it is difficult to make meaningful comparisons without overtaxing memory. This does not preclude hypertext linking to other information so that a person can “drill down” into greater levels of detail. But it does suggest that all the most relevant information must be on the same screen. As an aside, slide presentations (e.g., PowerPoint) are so bad at communicating data because they break data down into small bites for each (relatively low-resolution) screen. This undermines comparison by heavily taxing iconic and short-term memory.
  • Second, the visual perception that happens before you have enough time to think about what you are seeing (i.e., pre-attentive processing) can only handle certain attributes of colour, form, spatial position, and motion. For example, most of us can quickly perceive the length, width, and direction of a line when presented in two-dimensional space. We cannot, however, judge with precision the relative sizes of circles. This has implications for the selection of visual elements to enhance rapid perception of the dashboard (e.g., bar charts are well suited but bubble charts are not).
  • The way the mind interprets patterns has implications for how graphic elements are designed and arranged on the screen. This is where Few raises the Gestalt Principles of perception. These principles stipulate that we perceive things to be in the same group if they are: (a) placed close to each other; (b) similar in shape, colour, size, and orientation; (c) bound by an enclosure, such as a square, even if that “enclosure” isn’t entirely closed, such as brackets; (d) aligned so as to appear as a continuous flow; (e) linked with a connector, such as a line. If you want to invite comparisons between certain indicators, for example, then design them so that they appear to be in the same group.
  • Finally, certain areas of the screen benefit from greater attention than others. For example, the top-left corner of the screen has a higher profile than the top-right corner (unless, I suppose, we’re talking about people whose first language is written from right to left). The centre of the screen is also more likely to get attention. Thus, the most important information should be placed in these areas.

There is also the issue of aesthetics. According to Few, a dashboard should be pleasant to look at because most of us are disinclined to stare at an ugly screen. Yet, the dashboard should not contain visual embellishments that have no function. Such “eye candy” takes up valuable screen space and distracts the eye.

So what, specifically, do these ideas about perception suggest about dashboard design?

Understandably, many design choices depend on the purpose the dashboard is intended to serve. Mr. Few lists three general types of dashboard. First, strategic dashboards (such as the “executive dashboard”) are designed to monitor organizational performance, threats, and opportunities. Here the emphasis is usually on summary (“high level”) indicators of those things most relevant to the general direction of an organization, such as business profitability. Second, analytical dashboards are used to monitor complicated systems, such as markets, and diagnose the implications of changes. Here, interpretation of data is more dependent on detail and context. Such dashboards should explicitly facilitate comparisons between indicators or trends over time, plus allow the analyst to drill down to reveal more detailed data. Finally, operational dashboards monitor constantly changing activities and events, such as the flow of a manufacturing assembly line. This information is usually very specific and continually updated. This allows for rapid and decisive responses if any action is needed.

Mr. Few advocates directness and simplicity of design. If a pixel isn’t being used to enhance the communication of data, then that pixel doesn’t belong. This includes summarizing data with indicators instead of littering the screen with data that is too finely grained. A great deal of thought should be given to where blank space is placed on the screen because blank space is extremely important for guiding the eye—a standard piece of graphic design advice. Muted and tasteful colour palettes are ideal because bright colours can result in sensory overload. According to Few, instructions, logos, and descriptions should not be placed on the main screen. Small and unobtrusive legends can be used to remind someone what, say, a colour on a graph means. The same goes for labels that remind the reader what the unit of measurement is. The most important data and alerts should be highlighted using colour, bold type, blinking, or a simple symbol (such as a circle). Mr. Few also puts a great deal of stock in multi-foci displays, which render the same data side-by-side but at different levels of aggregation. This technique allows a viewer to see the big picture (such as a long-term trend) while also taking note of detailed changes.

What graphic elements are most appropriate for dashboards and which are not? The following figure provides a summary of the elements Few seems to despise.

Mr. Few claims that one of the biggest problems with dashboards is the needless variety of graph, gauge, and chart types on a single screen. Mr. Few argues against using too much text and raw numbers, as with large tables and scorecards, because it is difficult to quickly pinpoint the most useful facts. Pie charts and bubble charts (or anything round, for that matter) are bad because they take up too much space and size differences can’t be perceived accurately. Radar graphs make accurate comparison between variables difficult; they become a complete mess, especially when more than one set of figures is plotted on the same graph. Finally, Few also singles out faux dials, gauges, and meters for scorn, such as speedometers and thermometers. These elements might be intuitive but they are needlessly ornate and often misrepresent or obscure units of measurement. This is especially the case for those decorated with bevels, colour gradients, and shadows so as to look like part of a physical instrument panel. The analogy often doesn’t work unless you’re actually trying to simulate a real instrument panel.

What graphic elements are suited to dashboards? Mr. Few favours straightforward bar charts and line graphs, including combinations of the two (e.g., the Pareto chart). That doesn’t include stacked bar charts, which take too much concentration to interpret. Lists (e.g., top ten clients for the month) and very simple tables have their place if used sparingly. Box plots (which specify a range of values plus a measure of central tendency) are useful, especially for probabilistic statistics, although the variant known as the box-and-whisker plot is too complicated for most dashboards. Scatterplots are permissible but usually require a line of best fit to make the cloud of points easy to interpret at a glance. Of course, if you are exploring anything more complicated than a linear relationship, it is difficult to automate the right type of line; if there is any risk of non-linearity, I would argue that automation can be downright dangerous. Spatial maps (e.g., a map of a geographical region with data specified for various sub-regions) can be used so long as they do not become too busy or pose an awkward fit for the data.
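
The worry about automated trend lines can be made concrete with a small sketch. The code below is my own illustration, not anything from Few’s book: it fits an ordinary least-squares line and computes an R² score, so a dashboard could refuse to draw a confident-looking line through a cloud of points that isn’t actually linear.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

def r_squared(xs, ys, slope, intercept):
    """Share of variance explained by the line; near zero warns
    that the fitted line is misleading."""
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```

A dashboard might suppress the fitted line whenever R² falls below some threshold. Feed a parabola through `linear_fit` and it happily returns a flat line with an R² of zero, which is precisely the danger of automating the choice.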

I want to dwell on three graphic elements because they are relatively recent in origin and may not be familiar to you. I’m talking about Edward Tufte’s sparkline, Ben Shneiderman’s treemap, and Stephen Few’s own creation, the bullet graph. I’ve illustrated each of these below.

Sparklines are graphs that are about as tall as a line of text and are designed to provide context to aid the interpretation of a figure. In the example above, the revenue figure shows a steady increase over a period of time with one major drop. The second sparkline shows the history of an indicator and whether that indicator is within an acceptable range (the gray band). Strangely, Few doesn’t include the third type of sparkline, one that uses bars. This omission is curious because I’ve seen some examples that are both compact and easy to read on computer screens. This particular type can display more information, as is the case here with the weekdays and weekend days coloured differently (something that explains the short-term drops in the sequence of numbers). I’ll have more to say about sparklines in my forthcoming review of Edward Tufte’s Beautiful Evidence (2006), which devotes an entire chapter to this graphing technique.
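
For readers who want to experiment, a bare-bones sparkline takes only a few lines of code. This is my own sketch, not anything from the book: it maps each value onto the eight Unicode block characters, producing a graph exactly the height of a line of text.

```python
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Render a sequence of numbers as a line-of-text graph."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid dividing by zero on flat data
    return "".join(
        BARS[round((v - lo) / span * (len(BARS) - 1))] for v in values
    )
```

For example, `sparkline([1, 2, 3, 4, 5, 6, 7, 8])` renders a steady rise. Colouring weekday and weekend bars differently, as in the bar variant Few omits, would be a straightforward extension.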

The bullet graph is a cross between a gauge and a bar chart. The unit of measurement appears at the bottom. The background bar is coloured differently to show qualitative interpretations of different ranges (e.g., low, medium, and high performance). The dark bar in the middle shows the actual indicator. A crosshatching line can also be used to show a comparative measure, such as a performance benchmark, value from a previous time period, or sales target. The second bullet graph is similar to the first but the range of possible values doesn’t begin at zero. In such cases, Few suggests that it is more sensible to use a symbol (in this case, a circle) to mark the indicator’s value. Bullet graphs are well suited to dashboards because they can be stacked in a tidy way and, therefore, use less room than other gauges. Unfortunately, as far as Few knows, there are no software packages that use his invention at the moment.
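
The anatomy of a bullet graph is simple enough to sketch in plain text. This toy rendering is my own, not Few’s specification: shaded characters stand in for the qualitative background bands, a dark bar for the featured measure, and a marker for the comparative value.

```python
def bullet(value, target, bands, width=40):
    """ASCII bullet graph.

    bands: ascending upper bounds of the qualitative ranges,
           e.g. [50, 75, 100] for low / medium / high.
    """
    scale = bands[-1]

    def col(x):  # map a measurement onto a column index
        return min(width - 1, int(x / scale * width))

    shades = " .:"  # lighter-to-darker background bands
    row = []
    for i in range(width):
        point = (i + 0.5) * scale / width  # value at this column's centre
        band = sum(point > b for b in bands[:-1])
        row.append(shades[min(band, len(shades) - 1)])
    for i in range(col(value) + 1):
        row[i] = "#"        # the featured measure (the dark bar)
    row[col(target)] = "|"  # comparative marker, e.g. a sales target
    return "".join(row)
```

Because each graph is one row of characters, they stack tidily, which is exactly the property Few prizes in the real (graphical) version.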

Finally, the treemap uses the size of an area and the area’s colour to communicate two values simultaneously. Imagine that the above example reports sales figures for ten different regions. The greater a region’s sales, the larger its box. The colour could then indicate qualitative ranges, such as below sales target, at sales target, and above sales target. The treemap is useful because over- and under-performers stand out clearly by virtue of area size or colour.
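
The layout arithmetic behind a treemap is also easy to sketch. Below is a single level of slicing in the spirit of Shneiderman’s slice-and-dice algorithm (a full treemap alternates direction at each level of a hierarchy); the function and its names are mine, for illustration only.

```python
def slice_layout(values, x, y, w, h, vertical=True):
    """Split the rectangle (x, y, w, h) into strips whose areas are
    proportional to values; vertical=True slices along the x axis."""
    total = sum(values)
    rects, offset = [], 0.0
    for v in values:
        frac = v / total
        if vertical:
            rects.append((x + offset, y, w * frac, h))
            offset += w * frac
        else:
            rects.append((x, y + offset, w, h * frac))
            offset += h * frac
    return rects
```

Each rectangle would then be filled with a colour keyed to, say, performance against the sales target; for ten regions, `slice_layout(sales, 0, 0, 800, 600)` gives the boxes directly.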

All told, I highly recommend Few’s book if you are at all interested in how data can be displayed visually on a computer screen. This is a medium that many of the gurus of information design stay away from because screens offer very little choice compared to paper … but that’s what makes information design ideas even more crucial. There is also quite a bit of advice about charting and graphing techniques more generally, with plenty of good and bad examples. However, if you’re looking for analysis of the appropriateness of implementing an information dashboard, you won’t get that here.

Mr. Few defers much of this discussion of rationale to Wayne Eckerson’s Performance Dashboards (2005), which talks about the management of dashboard systems. There is a need for some scepticism that Eckerson’s book does not offer, I would argue. The point is best put by the aforementioned Edward Tufte, who is quite hostile towards the dashboard concept, partly because the metaphor is overwrought. While consulting for a company, Tufte observed: “At one point it appeared to me that too many resources were devoted to collecting data. It is worth thinking about why employees are filling out forms for management busybody bureaucrats rather than doing something real, useful, productive.” (Tufte, 2001)

Susan Webber recently wrote an excellent essay that expands on Tufte’s point (Webber, 2006). She looks at why managers and administrators crave numbers for the wrong reasons. Webber forewarns:

When numbers begin to assume a reality of their own, independent of the reality they are meant to represent, it’s time for a reality check. Metrics presuppose that situations are orderly, predictable, and rational. When that tenet collides with situations that are chaotic, non-linear, and subject to the force of personalities, that faith—the belief in the sanctity of numbers—often trumps seemingly irrefutable facts. At that point, the addiction begins to have real-world consequence. While business managers must recognize the limitations of metrics, this does not mean that metrics and measurement are inherently bad things … But quantitative measures can be and frequently are used naïvely. It’s all too easy to abdicate judgement to the output of a model or scorecard. (p. 1)

Webber goes on to point out how managers can become preoccupied with numbers because it’s easier than looking at the messy activities of people. More worrisome, most managers just aren’t very good at data analysis because, for example, they are often not versed in the principles of statistical probability. And people often react irrationally to numbers. For example, psychologists have found that if a random number is mentioned to a person who is attempting to make a numerical estimate, that suggestion will influence the estimate (a cognitive bias called anchoring).

Obviously, Tufte and Webber aren’t the first to draw these conclusions. Webber’s contribution is timely, however, because recent technologies have made dashboards more viable. This is similar to what happened in the late 1970s and 1980s when spreadsheet software was proliferating throughout the workplace. This technology allowed managers to model their organisations using available data (such things as finance, sales, and personnel data). This led many to test hypotheses and plan initiatives using spreadsheet data alone. Soon enough, the numbers became the organisation in the minds of these people. It wasn’t until the late 1990s that the prevailing fashion in management stopped neglecting the fact that organisations are fundamentally collections of people. Even today, “spreadsheet thinking” has a powerful grip over how managers perceive the workplace. I wonder whether “dashboard thinking” will develop as an extension of this.

Review by Peter Stoyko

Update (24.02.10)

I couldn’t mention this at the time, but when I wrote this review I was a member of an ad hoc roundtable tracking seven information dashboard projects. That’s what the first sentence in this review refers to. Six project leaders of varying levels of success were advising an eager, fledgling dashboard initiative. And so, kinda like the plot of Akira Kurosawa’s film Seven Samurai (or the Western remake The Magnificent Seven), I was one of seven swordsmen recruited to help the hapless village. I was invited as the pointy-headed one without actual battle medals on my chest. I was known for my research in the fields of information design and knowledge management; that is, the two unwed, poorly matched, squabbling parents of information dashboard design. Alas, after a year-and-a-half of toil, the poor village fell to the callous villains (i.e., a new executive team with different priorities). Or were they felled by liberating heroes? I’ll let you decide. Thankfully, in the end, I was just a bystander who sifted through the wreckage, learned a lot, and lived to tell the tale.

I hail Stephen Few for his continuing efforts to bring these squabbling parents together. His new book Now You See It (2009) is worth a look.

Update (09.03.10)

The software company Panic has created an interesting information dashboard called the Panic Status Board. It’s a team dashboard. Team members consider it a useful way to coordinate themselves without overbearing supervision. It even tracks bus arrivals at the end of the day so that employees don’t miss their bus ride home.

Update (28.07.10)

I came across an interesting example of the grand status-board at the headquarters of Baidu, China’s most popular Web search engine. I offer some pondering here.


Wayne W. Eckerson, Performance Dashboards: Measuring, Monitoring, and Managing Your Business (Indianapolis: Wiley Publishing, 2005).
Stephen Few, Now You See It: Simple Visualization Techniques for Quantitative Analysis (Oakland: Analytics Press, 2009).
Edward Tufte, Beautiful Evidence (Cheshire: Graphics Press, 2006).
Edward Tufte, “Response to Executive Decision Support Systems: Ideas for Monitoring Business and Other Processes,” August 27, 2001.
Susan Webber, “Counting on the Numbers Game … Management’s Great Addiction,” The Conference Board, Executive Action Series, no. 192, May 2006.