more difficult to handle very large numbers (above 1,000) and very small numbers (such as small fractions of 1, like 0.0032). Wherever possible it is best to try to rebase index numbers to run from 0 to 100, the number range that all readers are most comfortable operating with. In this case, however,
such an effort would mean rebasing to cataract treatments per 10,000 people, and it would have two drawbacks. First, decimal points would be needed to differentiate some observations from each other. And second, this measurement unit could be rather misleading, suggesting to readers that the Scottish health boards being covered actually have populations of that order, whereas none of them do. So here rebasing on cataract treatments per 100,000 population delivers tractable numbers running from 217 up to 723. It also makes visible the differences between observations, but without any clutter of decimal points.
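The rebasing arithmetic here is simple enough to sketch. In the snippet below the treatment and population figures are hypothetical, chosen only to illustrate the per-100,000 calculation, and `rate_per_100k` is my own helper name, not anything from the source data:

```python
# Sketch of rebasing a raw count to a rate per 100,000 people.
# The figures are hypothetical illustrations, not the National Audit
# Office data behind Table 7.2.
def rate_per_100k(treatments: int, population: int) -> int:
    """Rebase a raw treatment count to treatments per 100,000 people."""
    return round(treatments / population * 100_000)

print(rate_per_100k(750, 103_700))          # 723 per 100,000: a tractable whole number
print(round(750 / 103_700 * 10_000, 1))     # 72.3 per 10,000: needs decimal points
```

The second line shows why the 0-to-100 rebasing was rejected: at that base, decimals become necessary to separate nearby observations.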
HANDLING ATTENTION POINTS
Table 7.2  How Scotland's health boards compared in treating cataracts, 1998–9 financial year

Health board             Treatment rates per 100,000 people
Borders                  723
Tayside                  503
Highland                 339
Ayrshire and Arran       332
Argyll and Clyde         332
Lothian                  318
Greater Glasgow          318
Dumfries and Galloway    317
Western Isles            308
Forth Valley             297
Shetland                 282
Grampian                 277
Lanarkshire              239
Fife                     229
Orkney                   217
Mean treatment rate      335

Notes: The range is 506; the midspread (dQ) is 55. There are two upper outliers (Borders and Tayside) and no lower outliers.
Source: National Audit Office, 1999.
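The summary figures in the table's notes can be reproduced from the rates column. A sketch, computing the quartiles as medians of the lower and upper halves excluding the overall median (the convention that matches the stated dQ of 55), and flagging outliers with the common 1.5 × dQ rule of thumb:

```python
# Treatment rates per 100,000 people, read off Table 7.2 top to bottom.
rates = [723, 503, 339, 332, 332, 318, 318, 317, 308, 297, 282, 277, 239, 229, 217]

def median(xs):
    """Middle value of a sorted copy, averaging the two middle values if needed."""
    xs = sorted(xs)
    mid = len(xs) // 2
    return xs[mid] if len(xs) % 2 else (xs[mid - 1] + xs[mid]) / 2

s = sorted(rates)
half = len(s) // 2                       # halves exclude the overall median (odd n)
lower_q, upper_q = median(s[:half]), median(s[len(s) - half:])
dq = upper_q - lower_q                   # midspread

print(max(s) - min(s))                                # range: 506
print(dq)                                             # midspread (dQ): 55
print([r for r in rates if r > upper_q + 1.5 * dq])   # upper outliers: [723, 503]
print([r for r in rates if r < lower_q - 1.5 * dq])   # lower outliers: []
```

Note that other quartile conventions (for example the interpolating default in some statistics libraries) would give a slightly different midspread.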
The principles here can easily be extended to any kind of data numbers. Express very large figures in units of hundreds of millions, millions or thousands as appropriate. And multiply very small ratio numbers to get rid of fractions of 1 and the need for several decimal points. You can also go a long way by rounding numbers up or down (so that 10.51 becomes 11, for instance, while 10.49 becomes 10), or you can just cut numbers by eliminating all decimal points (which would mean that both 10.51 and 10.49 are expressed as 10). Some people find it helpful to design tables using the rule of thumb that there should never be more than three effective digits in any cell, and hence no more than three digits varying from one cell to another.
On this rule you might enter 1,215,689 in a table either as 1.22 million, or as 1,220,000 (that is, rounding to the nearest 10,000). If you went to four effective digits the same number would be 1,216,000 (rounding to the nearest 1,000). In any table showing such large numbers, rounding to the nearest 100 is almost always sensible in cutting away pointless detail, and often rounding to the nearest 1,000 is too. This is especially appropriate where numbers are being analysed in main-text tables, but the same data are also included in a reference annex or a data CD. Here there is no need to overburden the main-text tables simply in order to read a precise number into the record.
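The "three effective digits" rule can be mechanised. Below is a minimal sketch; `round_sig` is a helper name of my own, not anything defined in the text:

```python
from math import floor, log10

def round_sig(x, digits=3):
    """Round x to the given number of effective (significant) digits."""
    if x == 0:
        return 0
    # How many decimal places to keep: negative values round above the point.
    shift = digits - 1 - floor(log10(abs(x)))
    rounded = round(x, shift)
    return int(rounded) if shift <= 0 else rounded

print(round_sig(1_215_689))      # 1220000, i.e. 1.22 million (nearest 10,000)
print(round_sig(1_215_689, 4))   # 1216000 (nearest 1,000)
print(round_sig(10.51, 2))       # 11
print(round_sig(10.49, 2))       # 10
```

One caveat: Python's built-in `round` uses banker's rounding on exact halves, so a value like 10.5 rounds to 10 rather than 11; for table-making purposes this rarely matters.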
Numerical progression.
The sequence of rows in Table 7.1 is set alphabetically, so that the data in the second column are completely jumbled, with one number succeeding another in an unpredictable way. Readers will find such a table very hard to follow, and must fend for themselves in trying to work out the central level of the data, or which health board is doing well or badly. By contrast, Table 7.2 reorders the rows to give a clear downward numerical progression. Health boards' performances here are visible at a glance, with strongly performing boards at the top of the table and weakly performing ones at the bottom.
AUTHORING A PHD

Never keep data arranged in an alphabetical ordering of rows, or some other customary order, if this obscures the numerical progression in the table. Some authors argue against this advice because they want to present data for cases or other units in the same standard sequence from one table to another. Most of the time, though, this strategy helps the author, who is very familiar with the data's complexities, but actually only confuses readers by creating badly jumbled numbers in the tables.
Always apply the need-to-know criterion rigorously before accepting any deviation from numerical progression. A numerical progression is desirable in all tables, with only two clear exceptions: those showing over-time data, and those covering categorical data which have to be kept in a fixed order to be meaningful (for example, survey response options on a scale like agree strongly/somewhat agree/somewhat disagree/strongly disagree). Other departures from numerical progression are only very occasionally justified. There might be one or two cases where readers need to make comparisons across a small
set of easy-to-read tables, and where they would be helped slightly more by having a standard row sequence across tables,
rather than being given a clear pattern in each table’s data.
In the case of larger tables with multiple columns, achieving numerical progression is a little trickier. You need to determine which is the most important column and rearrange the rows so as to get a numerical progression on that column. Make sure that the progression column is visible by placing it first (closest to the row labels) or last (where it will stand out as the salient column). If you can, try to achieve a numerical progression not just down the rows but also across columns in the table, either ascending (smallest data numbers in the first column and largest in the last) or descending (largest data numbers in the first column and smallest in the last). Here you reorder both the sequence of rows and the sequence of columns to maximize a table's readability.
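In code terms, achieving a numerical progression is just a sort on the chosen key column. A sketch with a hypothetical three-column extract (the board names and rates echo Table 7.2; the third column's values are invented for illustration):

```python
# Hypothetical multi-column rows: (label, key column, a second, invented column).
rows = [
    ("Fife", 229, 14),
    ("Tayside", 503, 31),
    ("Highland", 339, 22),
]

# Sort descending on the most important column to give the table
# its downward numerical progression, as in Table 7.2.
rows.sort(key=lambda row: row[1], reverse=True)
print([label for label, *_ in rows])  # ['Tayside', 'Highland', 'Fife']
```

Reordering columns so that values also ascend or descend across each row is then a matter of permuting the tuple positions in the same way.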