Dr Aleks Krotoski Asks: Do Social Media Make The World More Boring?

Real Time Club, 23 November 2010

This event garnered a significantly higher than average attendance, just squeezing into the room we had at the National Liberal Club. Higher attendance meant more demanding acoustics, and our guest of honour did her very best to speak to such a crowd without a proper sound system.  That said, she rose above the obstacles to set out some excellent arguments and lead a vibrant debate with the audience.

The Eternal Coin: Physical Endurance or Digital Failure

On 15 June 2010, the Real Time Club evening’s proposition for discussion was that new technology will move beyond a facsimile of current exchange to new means of exchange that are better for society as a whole.  For speculative fiction writers, future synthetic e-money currencies shouldn’t be, “that will be ten galactic credits, thank you”, but rather, “you owe me a return trip to Uranus and a kilogram of platinum for delivery in 12 months”.  Well, that’s what our payments autodroid bots (i.e., mobile phones) will agree amongst themselves.  Dave sets out his stall: “When you digitise something, you have the opportunity to re-engineer it.  So it is with money.  As money has changed from barter to bullion, from paper to PayPal, it has changed the markets and societies that depend on it.  Where next?”

The Eternal Coin

Dave Birch opened in front of about 40 members with a reminder that Hayek always believed that money was too important to be left to governments.  Dave argued that we ideally needed many units of account for many things but that multiple currencies increased the cost of transactions markedly – how could the cash register be large enough?  He pointed out though that in border areas people seemed able to handle concepts of multiple currencies easily.  This led to a quick reminder of the many new currencies emerging online, e.g. QQ in China, but Dave emphasised the crucial role of the mobile phone, e.g. M-Pesa in Africa.  Finally, Dave touched on new currencies related more closely to real value, e.g. based on commodities, such as people in Norway using future kWh of electricity as currency.

The core of the argument was that:

1 – we have reached a time of great change in the nature of money

2 – the mobile phone is the most important technological part of the change

3 – some of the nascent currencies will transform our view of money

Dave concluded by musing on what these changes might mean for definitions of communities and community values across space rather than being confined by geography.

Malcolm Cooper opened his reply by asserting that the mobile phone is a transient technology, witness the iPad.  He believed that Dave confused the communications device with the technology.  Malcolm, drawing from some of the themes in his book, “In Search of the Eternal Coin: A Long Finance View of History”, felt the aberration over history was currency.  The norm is trading and storing value in a multiplicity of ways.  As an example Malcolm pointed to the extent of the Carthaginian trading empire and its relatively low use of coinage.

The discussion was, as ever with the Real Time Club, quite vibrant and funny.  Some comments and ripostes included:

  • shouldn’t we conclude from Dave’s arguments that Nokia ought to be a bank? This led to a further reminder of the 1994 paper by Edward de Bono  published by the Centre for the Study of Financial Innovation, “The IBM Dollar”;
  • would Carthage have been better off or stronger with currency?
  • Michael King of WDX (commercial interest) spoke of his firm’s Wocu (World Currency Unit), a basket of the currencies of the top 20 nations by GDP, weighted by GDP;
  • Michael Mainelli raised a point about trading currencies versus stores of value (reserve currencies) and pointed out other initiatives directed at that, e.g. the UTU;
  • wasn’t the deeper problem removing swings in markets, or was it perhaps that swings in markets were exacerbated by our reliance on currency?
  • the use of the quote from Dostoevsky, “money is coined liberty” (House of the Dead, part 1, chapter 2) led to a ponder as to whether we are at our most vulnerable when everything is cash;
  • people were reminded that fiat currency is fiat because the government only accepts the currency for tax purposes, giving government other opportunities to tax through debasement, devaluation and inflation;
  • was the importance of the mobile phone the global connective power and little else, followed by a comment that these days a mobile phone was hardly that, rather a computer with a phone attached;
  • a discussion kicked off on the importance of anonymity to money, including the withdrawal of cheques in the UK, and of course the Real Time Club’s interest in many things cryptographic.
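A basket unit like the Wocu mentioned above is straightforward to compute once the weights are fixed.  The sketch below uses hypothetical weights and exchange rates purely for illustration – the real Wocu weighting methodology is WDX’s own.

```python
# Sketch of a GDP-weighted basket currency unit, in the spirit of the Wocu.
# All weights and exchange rates below are made up for illustration.

def basket_value(weights, usd_rates):
    """Value of one basket unit in USD.

    weights   -- fraction of the basket assigned to each currency (sums to 1)
    usd_rates -- USD per unit of each currency
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * usd_rates[c] for c in weights)

weights = {"USD": 0.40, "EUR": 0.30, "JPY": 0.20, "GBP": 0.10}       # hypothetical
usd_rates = {"USD": 1.00, "EUR": 1.10, "JPY": 0.007, "GBP": 1.25}    # hypothetical

print(round(basket_value(weights, usd_rates), 4))
```

Because the basket averages across several economies, a swing in any one currency moves the unit by only its weighted share – which is exactly the stability argument made for such units as stores of value.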

The evening closed with a poem, “Liquidity”, composed on the night and read by long-standing member Andy Low:

The lake of commerce gives life its pace,

For on its smooth and shiny face,

Ripples form, surge forth and race.

(What do I want, what can I get)

They cross, connect and intersect

The lives of people who’ve never met.

About the speakers

Dave Birch is a Director of Consult Hyperion, the IT management consultancy that specialises in electronic transactions.  Here he provides specialist consultancy support to clients around the world, including all of the leading payment brands, major telecommunications providers, government bodies and international organisations including the OECD.  Before helping to found Consult Hyperion in 1986, he spent several years working as a consultant in Europe, the Far East and North America.  He graduated from the University of Southampton with a BSc (Hons) in Physics.

Described by The Telegraph as “one of the world’s leading experts on digital money”, by The Independent as a “grade-A geek”, by the Centre for the Study of Financial Innovation as “one of the most user-friendly of the UK’s uber-techies” and by Financial World as “mad”, Dave is a member of the editorial board of the E-Finance & Payments Law and Policy Journal, a columnist for SPEED and well-known for his blogs on Digital Money and Digital Identity.  He has lectured to MBA level on the impact of new information and communications technologies, contributed to publications ranging from the Parliamentary IT Review to Prospect and wrote a Guardian column for many years.  He is a media commentator on electronic business issues and has appeared on BBC television and radio, Sky and other channels around the world.  For much more, see www.dgwbirch.com

Dr Malcolm Cooper holds a First Class Bachelor of Arts in History from Dalhousie University, a Master of Arts in History from the University of Western Ontario, and a Doctorate of Philosophy in Modern History from Oxford University.  His thesis on the formation of the Royal Air Force was subsequently developed into a book, The Birth of Independent Air Power, and published in 1986.  His career has included a Research Fellowship at Downing College, Cambridge, management of the research programme of the Institute of Chartered Accountants in England and Wales, equity research management with three different investment banks (none of which, alas, exist today under their original name), and a five-year spell as Head of Research for the City of London Corporation.  His most recent post was as Head of Research for the independent public policy think tank Centre for Cities.

Malcolm was the first foreigner to take up coverage of the Istanbul and Athens stock markets and spent most of his investment banking career in European emerging markets, his last post being as Head of EMEA Equity Research for ABN-AMRO (a job he gave up in 2000 – not because he could see the dot.com crash coming, but because he decided he really didn’t want to be on the Central Line at 6.30 in the morning any more).  Most of his recent work has been in the UK public policy field but he retains an active interest in the more challenging parts of the world, and is still inordinately proud of having a letter published in The Times pointing out some of the more obvious problems with the UK’s current military commitments in Afghanistan.  He has also published several pieces on Turkey, including an article in International Affairs, a written submission to the Commons Select Committee and a contribution to a Chatham House forecast of likely regional scenarios following the second Iraq war.

ABCB – New Alphabet For Standards – Association of British Certification Bodies

My Lord, Ladies and Gentlemen.  It is a real honour for me to have this opportunity to address the Association of British Certification Bodies, a group of people who share my passion for the idea that standards markets can improve the world.  This is a personal address; I am not speaking as a non-executive director of UKAS, though I realise that role probably had a bearing on my invitation here.  My remarks to follow are not meant in any way as UKAS policy.

Yet non-executive directorships are not filled for money – the risks are high, the time commitments always exceed the estimates and the thanks are low – nor are directorships filled with love.  If seeking a non-executive directorship is the first sign of madness, the second sign is probably taking one.  In return, we non-executive directors can be your worst nightmare.  In my case it’s because I have a passion for world trade and sustainable economies that I would like to share with you.

To start with, I’d like to explore standards themselves.  Standards are funny things.  Because of my accent, I’m going to start with pronunciation standards.  Because the name of the International Organization for Standardization (ISO) would have different abbreviations in different languages (IOS in English, OIN in French), back in the 1940’s it was decided to use a language-independent word derived from the Greek, isos, meaning “equal”.  Therefore, the short form of the Organization’s name is always ISO – “I-S-O” – and ISO follows the “z” spelling as in “organization” and “standardization”.  Is that clear?  ISO’s recommendation on their website is to pronounce their name whichever way comes most naturally.  “So, you can pronounce it “EZO”, “EYE-ZOH” or “EYE-ESS-OH”, we don’t have any problem with that.”  What a great credential for flexible standards!

And then we have date standards – oh no, I’m not talking about the US month first versus the UK day first, or even the Chinese year first.  For 38 years ISO has designated World Standards Day to recognize the thousands of experts worldwide who collaboratively develop voluntary international standards that facilitate trade, spread knowledge and share technological advances.  ISO officially began to function on 23 February 1947, but 14 October was chosen as World Standards Day because on 14 October 1946 delegates from 25 countries met in London and decided to found ISO.  Of course, in the spirit of standards, in 2006 India, Ghana and others celebrated World Standards Day on 13 October while Nigeria celebrated from 12 to 14 October.  In 2007 the European Commission held its World Standards Day conference on 17 October, while the United States celebrated World Standards Day on 18 October.  Need I say more?

The truth is that the world is a messy place.  Moreover, it’s human nature to resist standards, or at least male nature.  A friend of mine, Paul, is raising three boys on his own.  When I asked him how he kept the house clean he explained, “Michael, men don’t have standards, women do.  Men have thresholds.”

The objective of standards is to help the evolution from complete mess to complete order by putting things in boxes at the right time.  Managing evolution isn’t easy.  Sometimes we try to box things in too early.  Other times we’re so late we just add cost and unnecessary complexity to existing commodities.  But done right – ahh, there we add a lot of value to consumers, to business and to society.  I recently conducted a large study with PricewaterhouseCoopers and the World Economic Forum looking at solving global risks, “Collaborate or Collapse”.  We concluded that society solved global risks using four collaborative approaches – sharing knowledge, implementing policies, markets and, yes, standards.

Sometimes I wish we could have a better sense of humour about it all.  I’d like us to avoid going down the path of political correctness and keep our largely scientific and engineering outlook on life and its problems.  Sadly, certification and standard jokes are rarer than one might like.  Perhaps we should have a comedy kite-mark.  Fortunately, the only after-dinner/lunch joke on standards I know doesn’t concern an ABCB member.  It is about certification taken to extremes over wine.

Two oenologists are trying to outdo each other on their exacting standards.  They both grab a tasting glass of red wine from the examination table in front of them.  Inhaling deeply, the first wine expert remarks that “this wine is an outstanding Bordeaux”.  The second interjects, “particularly when you recognise the difficulties inherent in raising vines of character in Côtes de Bordeaux-Saint-Macaire”.  “Indeed”, says the first, “and as this wine is from Saint-Macaire, the terroir in that area most suited to this interpretation of the Malbec grape is, I’d suspect, Château Malromé”.  “Ahhh”, counters the second, “self-evidently Château Malromé, but clearly the south-facing side, near the old well.”  “Elementary really”, replies the first oenologist, “and probably the fifth row, slightly higher up the hill”.  “Mais bien sûr”, adds the second oenologist gaining the upper hand by saying, “though I’d say a late summer picking from the eighth vine in the row and, dare I add, probably picked by Pierre.”  “Well”, says the first, now delivering what he believes to be the fatal blow, “of course I detected Pierre’s hand on the grapes, after a cool morning and a late déjeuner.  Though his post-prandial micturition infuses this wine with a somewhat disagreeable undertone.”  “Naturally it does” says the second oenologist rather coolly, “as one must certainly ask why-oh-why did Pierre drink such an inferior claret for lunch?!”.

But standards are not just about quality and one-up-manship.  Adam Smith advanced the metaphor of “the invisible hand”, that an individual pursuing trade tends to promote the good of his community.  Yet the Doha round is stymied.  Valid social and ethical concerns transmogrify into trade restrictions.  Property rights are a battleground, from carbon emissions to intellectual capital.  Already, emerging carbon standards are being sharpened as weapons in future carbon dumping wars.  Our state sectors swell out of recognition, crowding out the private sector that delivers value.  Standards and certification markets exist to improve the functioning of global markets and trade, and even to inject market approaches into monopolistic service delivery.

Adam Smith knew that markets alone are not enough.  Smith’s argument is too rich to take after an excellent meal, but what I admire about certification bodies is that you exemplify Smith’s Moral Sentiments of Propriety, Prudence, and Benevolence, combined with Reason.  As ABCB members you do set high standards, think to the long-term, explore new ways to help society advance and make business and government think about risk.  You realize that there is more to economic life than money – as comedian Steve Wright says – “You can’t have everything, where would you put it?”

Things change fast with trade.  Looking back to post-war Japan and thinking of Japan today reminds me of the apocryphal quality control tale about relevant standards.  A western company had some components manufactured in Japan in a trial project.  In the specification to the Japanese, the company said that it would accept three defective parts per 10,000.  When the shipment arrived from Japan, the accompanying letter stated something like: “as you requested, the three defective parts per 10,000 have been separately manufactured and have been included in the consignment.  We hope this pleases you.”

Today China is the sobering reminder of the importance of trade.  I heard a great sound-bite at the IOD China Interest Group two years ago, “we’ve had a commercial break these past 200 years, but now we’re back, on air”.  In the 18th century China was the world’s biggest economy, with a GDP seven times that of Britain’s.  But China closed its doors to trade, missing the industrial revolution, the capital revolution and the information revolution.  There is a children’s joke that “you should never meddle in the affairs of dragons, because you are crunchy and taste good with brown sauce.”  But we must mix it up with the Dragon.  Money is odourless and poverty stinks.  We must reach out to all the returnees to world trade.  And we must ensure that our own standards markets are open and competitive, in turn helping world trade be open and competitive.

So, is today’s luncheon talk supposed to be slick & humorous, a call to arms or an academic lecture?  Actually I want to end by emphasising the importance of conflict.  Regulatory capture is a phenomenon in which a regulatory agency which is supposed to be acting in the public interest becomes dominated by the vested interests of the existing incumbents in the industry that it oversees.  In public choice theory, regulatory capture arises from the fact that vested interests have a concentrated stake in the outcomes of political decisions, thus ensuring that they will find means – direct or indirect – to capture decision makers.  Conflict and competition, not calm quiescence or silence, are key signs that things are working well in standards markets.

Accreditation and certification only work when the entire system is a market system, not a bureaucratic one.  We are good, but we can do better.  For example,

  • development of a standard should be an open process involving interested stakeholders, but many ISO affiliates typically charge three figures for short documents that could be supplied electronically at no charge;
  • despite our claims for openness, transparency and public benefit, certification agencies often fail to be open to the general public about whom they’ve audited for what. Outputs such as certifications and grades awarded could be better published so that they can be validated – yet the industry complains about the ‘grey’ certification market;
  • accreditors must be vigilant regulators and ensure the separation of standards development from the commercial elements of implementation and review. Yet accreditors must be realistic and engage in meaningful dialogue with the industry while avoiding regulatory capture.

I could go further and talk about the widest view of standards from financial audit through to social, ethical and environmental standards with which I also work.  I’d even mention that to me, ideally, certifiers should bear some indemnity that can, with the price paid by the buyer, be made publicly available.  Developing countries rightfully worry that “the things that come to those that wait may be the things left by those who got there first”.  Sustainable commerce means doing things differently.  We must clasp the hands of the developing countries, support the invisible hand of commerce, restrain the visible hand of government and slap the grabbing hands of special interests.  We must prove that a global Commerce Manifesto deserves to replace a soiled Communist Manifesto.  We must keep our standard and certification markets open, transparent and competitive.

Standards markets are the great alternative to over-regulation or naked greed.  We professionals committed to standards prevent both the abuse of capitalism, red in tooth and claw, and the abuse of government regulation, 1984 but without Orwell’s sense of humour.  We open up trade.  Let’s sell standards markets as the new third way to the sustainable economics everyone wants.

On behalf of all the guests I salute the ABCB’s hospitality and its great work on behalf of standards markets.  Thank you!



[this April Fool’s spoof came out on 1 April 1985 in the corporate magazine of British Leyland’s IT arm, ISTEL, in Redditch, Warwickshire]

In a move almost guaranteed to establish a major market share in the booming consumer defence market, ISTEL Limited today announced the first defence product of its newly-formed Military and Aerospace Division (MAD), formerly the Entertainment and Videodisc Action Group. The announcement has dealt a serious blow to several companies working under the umbrellas of both the American Strategic Defence Initiative (Star Wars) and the European Eureka programs.

ISTEL’s product fulfils several market needs, specifically:

  • Military combat training systems
  • Security alertness assurance systems
  • Rooftop security dissuasion systems
  • Simulated combat evaluation systems
  • Urban nocturnal mass-entertainment and crowd control systems
  • System systems

The product is a “high-tech”, cost-effective solution to a variety of problems, yet its modular construction allows customers to tailor the system to their needs. The product’s intended market is medium to large companies headquartered in urban centres and the most likely first purchaser is rumoured to be a large, unnamed art merchandising centre in Trafalgar Square. Beta testing of the product was performed in the aviary at Regent’s Park, a particularly target-rich environment [no captive birds were injured in the testing of this product]. ISTEL has received serious inquiries from several airline companies who wish to evaluate the system for hijack prevention.

Components of the “PIGEONHOLER” (Precise Infra-red Guided Eradication of Night, High, Or Low-Flying Evil Rodents) System are a target tracking system, command chair and console system, hacker-proof scoring system, night-vision enhancement system, remote sensing systems, and four roof-mounted high energy laser cannon systems (although particle beam devices may be substituted).

The system requires little training for typical security guard staff to operate because it was designed for familiarity and resembles a number of arcade games, yet is high on realism. Training and honing of staff skills is almost continuous, a definite military plus with the threat of electronic warfare looming ominously as a disgusting grey bird only inches above the ISTEL flag. In fact, staff have been known to fight in their eagerness to use the new equipment.

Operation is simple. A trained staff member occupies the command chair and initiates the remote sensors and target tracking system. Using the simple controls provided and following the action projected onto the visor of his Dead-Turkey helmet, the operator ‘homes’ in on his specially targeted pigeon while PIGEONHOLER supplies automated support facilities until termination. Scoring is automated and all kills are validated before reporting over the Infotrac network. The target recognition system distinguishes feral pigeons from show, game, and homing pigeons automatically disabling the weapons system when appropriate, as well as providing intermediate racing results via cellular radio.

PIGEONHOLER is a cost-effective solution to a problem which has flown out of proportion in urban centres. Great savings on both cleaning and poison are anticipated by several companies planning to use the system. A typical comment from executives has been “We can hardly wait to blast the s_____s”. Continued maintenance is guaranteed under ISTEL’s exclusive hole-in-one policy.

ISTEL’s R&D and artificial intelligence sections prototyped the knowledge-based expert system in virtual assembler cross-compiled into LISP simulated in C running through Infotrac under the central BBC 6502-processor system at CDC with an attached Motorola 260 processor emulating UNIX (a trademark of AT&T) supporting twin Winchester rifles and IBM mainframe peripherals giving real-time response in under a day.

Prototyping involved several ‘snail hunts’, as the staff humorously termed them, before being taken to the moors for a grousing debugging session where several turtles were eliminated. The finished product has been successful in a variety of situations and has been pronounced “F_____ god-d____ combat ready” by Dr M F Smith of the Research and Disarmament section.

Although PIGEONHOLER is necessary in today’s increasingly tense and dangerous modern world, ISTEL hopes to provide an uplifting experience to the public as its product scours the skies and covers concrete. The wavelengths of the lasers were deliberately chosen by the R&D section to enhance the aesthetic appearance of the skyline. The next generation can look up at the pigeon-free skies of our capital and city centres safe in the knowledge that the intricate criss-crossing patterns of light and colour prove that the security of the nation rests alert in the command chairs of the night guards.

As with too many of my spoofs, it became real –

And in August 2019 I had the privilege of seeing Nathan Myhrvold’s Intellectual Ventures team and their Photonic Fence, “This Bug-zapper Has Laser-guided Precision” in Seattle.

Computer, Where Is Poughkeepsie? An Introduction To Computer Cartography (1984)


The paradox of computers is that they seem to be able to do something of everything and yet nothing fully. Computers have assisted people in thousands of applications, yet computers have not been able to fully replace people in any but the simplest applications. One area of application, computer mapping, illuminates both parts of this paradox.

Computer cartography is a significant portion of the large industry of computer graphics. Companies and government agencies as diverse as the CIA and your local gas utility are major users of computer cartography. It is estimated that in the petroleum industry alone, over two thousand maps are produced by computers worldwide each day. Computer maps are produced by the Census Bureau to show income distribution and population; the New York Times evaluates its markets with computer maps; defense agencies use computer maps to guide missiles and simulate battles; local governments and planning boards update their maps by computer; and utilities use computer maps to simplify their extensive service maps. The combination of computers and maps occurs more frequently as time goes by. The future may allow personal computer users to customize and produce their own unique maps at home. Such maps could range from simplified world maps to detailed maps of local bike paths.

Computers have affected cartography in three major ways. One, they have aided in the basic production of maps. Maps can be produced partially or wholly by machines. There are certain problems which will be discussed later, but, on the whole, computers can reproduce any map. This assistance has made maps more widely available and has led to maps being used in places where they would have earlier been considered extravagant. Two, using computers has changed the way people examine, create, judge, and use maps. Using computers for mapping has altered the use of maps themselves. Computers even provide new ways of evaluating maps. Three, new uses of maps and the newer definitions of mapping threaten values we hold today, particularly privacy. Cartography may not be a benign discipline.

State of the Art (1983), picture of a Tektronix 4014 screen.

The Computer Revolution

The increasing use of computing machines is the most heralded change of the second half of the twentieth century. Although the trumpet of change sounds loudly, certain problems elude the call better than others. The early applications, for which computers were developed, were strictly numerical. Computers had to compute. Throughout the 1960’s and 1970’s more non-numerical uses were found. In the fourth generation of computing, the 1980’s, personal computing, expert systems, and artificial intelligence are the hardly recognizable descendants of the number-crunching applications deemed their ancestors. Nevertheless, all uses of computers remain, ultimately, numerical uses.

In short, computers only represent objects which people or programs have described to them in numbers. Computers only operate on numbers, hence, all operations, from word processing to choosing an airline ticket, are for them operations upon numbers. It is natural then that problems which are numerical, or easily represented numerically, are the first problems solved by computing machines. It is also natural that problems too complex for people to easily describe numerically are solved last. This is why computers can multiply thousands of numbers a second without error, but still cannot automatically correct a misspelling.

Representing problems numerically can be seen as a theoretical problem with a probable solution in most cases. Still, practical considerations are important. Certain problems can be solved in theory, but not in reality. Practical constraints, such as the amount of storage or speed of the processor, can render solution of problems impossible or improbable due to the time or resources necessary to solve them.

Computer Cartography

Cartography is a discipline of contrasts. On one hand, cartography is an exacting, scientific discipline requiring very precise numerical answers. Cartographers pursue an unattainable goal of absolute accuracy. On the other hand, cartography is an art. There are good cartographers and bad cartographers. Despite the fact that both good and bad cartographers may work with the same data or achieve the same precision, maps are subjective representations of reality. Representations of locations are subject to evaluation by people who will compare their values with the map maker’s values.

Petroconsultants (Computer Exploration Services), Cambridge, England (1983), computer room with VAX in background and Versatec 44 inch plotter in foreground

Naturally, computers were first applied to the numerical aspects of cartography. Physical maps can be considered mathematical ‘maps’. A three-dimensional world is ‘mapped’ (projected) to a plane: (x, y, z) → (x′, y′). Time sequences of populations (four dimensions) are ‘mapped’ onto a flat surface. Map projections are numerical functions easily calculated by computers, while manual calculations are time-consuming. Today, virtually all strictly numerical cartographic operations are performed by machines. These operations include adjustment of triangulations, datum shifts, scaling, coordinate transformations, great circle measurements, and area calculations. Seeing maps as mathematical representations of the world is a prerequisite for performing these operations.
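A projection of the kind described above really is just a small numerical function. The sketch below implements the spherical Mercator projection as one illustrative example (the article does not prescribe a particular projection, and production software would use an ellipsoidal earth model):

```python
import math

# Spherical Mercator projection: (longitude, latitude) in degrees -> (x, y)
# on the plane.  A sketch of a projection as a "numerical function";
# real cartographic systems use ellipsoidal models and datum parameters.
def mercator(lon_deg, lat_deg, radius=1.0):
    lon = math.radians(lon_deg)
    lat = math.radians(lat_deg)
    x = radius * lon
    y = radius * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

# The equator maps to y = 0, and y grows without bound toward the poles --
# the reason Greenland looks so large on Mercator world maps.
x, y = mercator(-73.93, 41.70)  # roughly the longitude/latitude of Poughkeepsie, NY
```

Running thousands of points through such a function is exactly the tedious, error-prone arithmetic that computers took over from manual calculation.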

As symbolic representations of the world, maps boggle computers. Computers can scan and store exact images of maps, and with computer-controlled devices like plotters and scribers, computers can produce maps. But computers cannot interpret the images they plot or scribe as anything other than a duplicate of what they originally scanned without more information and extensive software to utilize the information. Given raw information, they are not aware of a map’s meaning. A line is not considered a road, or even a line, simply a sequence of numbers or discrete dots. When features on a map are given symbolic representations, e.g., a person tells the computer that a red line represents a road, or that a particular dot and associated text represent a city named Poughkeepsie, it is possible to use the computer to perform selective plotting not capable with unexplained raw input, for instance, plotting only cities adjacent to a road. Interpreting and representing maps as humans do is well nigh impossible at present, but it is a goal of computer cartographers.

There are many advantages in using computers for cartography. Computers can take the data they have stored and plot it at a variety of scales very quickly. They can simplify and generalize the information so that a small map of the world does not need to contain all of the boundaries or detail within the world’s countries. They can amplify, by interpolating and smoothing, in order to produce maps far larger than the data originally warranted. They can quickly alter the appearance of maps – one time plotting roads in red, the next time plotting roads with black dashes. Computers can update and correct maps quickly, because the specific information can be altered and the entire map redrawn in a matter of minutes. All of this adds up to the easy production of maps far faster than traditional draftsmen can work, for the one-time input of a cartographic database.
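The “simplify and generalize” step mentioned above is classically done with line-simplification algorithms. As one concrete example (my choice, not named in the article), the Ramer-Douglas-Peucker algorithm keeps only the points of a line that deviate most from a straight-line approximation:

```python
# Ramer-Douglas-Peucker line simplification: a classic way to generalize
# a cartographic line so it can be plotted legibly at a smaller scale.
def simplify(points, tol):
    """Return a subset of points whose polyline stays within tol of the original."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Perpendicular distance from each interior point to the endpoint chord.
    dists = [abs(dy * (x - x1) - dx * (y - y1)) / norm for x, y in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1  # index into points
    if dists[i - 1] <= tol:
        # Every interior point is close to the chord: keep only the endpoints.
        return [points[0], points[-1]]
    # Otherwise keep the farthest point and recurse on both halves.
    left = simplify(points[: i + 1], tol)
    right = simplify(points[i:], tol)
    return left[:-1] + right

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(simplify(line, 1.0))
```

The tolerance plays the role of scale: a coarser map uses a larger tolerance and so retains fewer points, which is how one cartographic database can serve maps at many scales.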

The History of Computer Cartography

The historical problems of computer cartography are with us today. The very first numerical uses of computers indicated that tedious projection calculations (e.g., transforming survey locations to a Mercator projection) would be simplified, but the first implementations also indicated that there would be difficulties. In 1963, Ivan Sutherland developed a graphic display program for his doctorate at M.I.T. called “Sketchpad”. This aroused interest in computer graphics in general, and pointed the way to later computer production of maps.

That the history of computer cartography is primarily a history of governments and defense should not come as a surprise. Cartography has been bound with war since the first battle plan. The initial impetus for mapping in most countries has been military, as attested by the names “Ordnance Survey”, and “Admiralty Charts”. The armed forces have vast quantities of information that computers can use, e.g., sonar depth tracks. Armed forces have a particular need to arrange their cartographic information for swift retrieval and updating. In today’s world, the U.S. Defense Mapping Agency has the most modern computer cartographic center.

While the most immediate use of computer cartography has been defense, intelligence agencies have been just as busily applying computers to their mapping problems. In the early 1970’s the Central Intelligence Agency was the first organization to compile a comprehensive database of the world. This database was called the World Data Bank and contained over six million points. Containing only water bodies and political boundaries, the World Data Bank was useful at scales from 1:1 million to 1:5 million. During the same period, the Census Bureau compiled a more detailed cartographic database of the United States showing its census tracts and Standard Metropolitan Statistical Areas. Both of these databases were available to the public and widely used. However, creating them was prohibitively expensive, which demonstrated a fundamental problem of computer cartography: large, hard-to-construct databases are necessary to make maps. The last major governmental impetus for computer mapping came from resource management agencies, such as the Forestry Service, the Department of Agriculture, and state agencies which manage parkland, water, or minerals. These agencies carried out extensive surveys which they wanted to integrate with other databases of land use and resources. Resource management agencies constructed their own databases or combined existing ones, especially with the computerized information starting to be supplied by national mapping agencies, e.g., the United States Geological Survey.

Within the private sector, computer cartography advanced in universities, utilities, transportation companies, and mineral companies (primarily oil firms). Universities were intrigued by the unique problems of graphics and cartography. Important contributions came from laboratories at M.I.T., Harvard, Utah, and Edinburgh University, to name some of the more important research centers. Each university made contributions to specialized hardware or software which simplified the problems specific to mapping. The utility industries, gas, telephone, and electric, frequently revise databases of their services and began to contribute new applications and research. Their databases changed frequently and were best displayed as maps. Thus, utilities investigated many of the updating problems of computer cartography. Transportation companies, especially railroads, needed extensive mapping services to reflect the changes in their networks. Another major private sector input came from the mineral companies. Most particularly, oil companies needed mapping to describe their far-flung reserves and potential reserves, and to plan new exploration. Oil companies combined new computer techniques in seismic exploration and mapping to develop comprehensive cartographic capabilities, and specifically developed many of the standards for map accuracy and error estimation. The private sector focused on presenting computer map data together with spatial (coordinate) data that was already being used by computers in another application. Today, the latest commercial use of computer mapping is in marketing, where marketeers evaluate potential markets and market penetration using computer-produced maps.

Laser line-following digitiser designed by Geodat team (1983).

Satellites and Mapping

A near revolution in cartography came from the flood of data provided by satellites. The interpretation of sensor data from satellites (remote sensing) has produced some astonishing results in the last decade, but these results fall short of the expectations which many experts in remote sensing held. Since space travel became available in the late 1950’s, scientists have used space to learn as much about the earth as they have about the outer reaches of the universe. Pictures of the earth, meteorological satellites, and the latest earth resources satellites have had the study of the earth as their function, not the study of the moon, the planets, or the stars.

The most important series of earth-studying satellites has been the U.S. Landsat series: Landsat-1 (1972, also known as ERTS), Landsat-2 (1975), Landsat-3 (launched as Landsat C, 1978), and Landsat-4 (launched as Landsat D, 1982). Similar satellites have been scheduled for operation in the next three years by France and Japan. The scanning device carried on these satellites has been a Multi-Spectral Scanner, MSS for short. An MSS records the electromagnetic energy which falls on it when it is pointed at the earth. Landsat D also carried a Thematic Mapper (TM) with increased resolution, but the TM failed before it could be fully evaluated.

The Landsat D satellite orbits the earth roughly from pole to pole, circling it about fourteen times a day and recording data in a 185 km wide swath. The entire area covered is divided into 80 meter squares, known as pixels, for ‘picture element’. Thus, most of the globe is represented by pixels recording primarily the sunlight reflected from each square at a certain time, which in turn indicates what features are contained in the area the pixel represents.

The MSS data is relayed to earth and distributed at a nominal charge in the U.S. through the EROS Data Center in South Dakota. Because the data is cheap and readily available, Landsat results have been used widely. Agricultural yield estimation, crop disease detection, mineral prospecting, resource evaluation, and the discovery of new lakes in Colombia are some of the remarkable things which have been achieved with the data. Once the area has been interpreted, using procedures common to aerial photography, the computerized data can be used to produce statistics or it can be combined with other computerized data for composite analyses. These results are achieved by using the Landsat data to make photographs or maps of the areas under consideration. All analysis depends on making maps from the data – and there are problems.

Landsat users have received both less and more than they bargained for. On one hand, the data has been voluminous – so much so that producing even a single map requires large amounts of computer storage, processing time, and special programming if maps are to be produced on a regular basis. Special processing requires special computers, generally high-volume, high-speed graphic computers known as image processors. On the other hand, totally automated mapping of Landsat data has eluded researchers despite vast efforts on their part and hefty bank balances on the part of firms selling image processors. Progress has been such that the U.S. Geological Survey has produced detailed 1:200,000 scale maps of test areas, but a large amount of manual interpretation has been necessary.

The most immediate problem has been accurately locating what the MSS has scanned. The satellites wobble a bit in orbit, so they are not necessarily looking straight down. An interpreter must determine specific known points on the image (tiepoints) and use these to correctly position the rest of the data (rubber-sheeting). In the future, better instrumentation will give increased accuracy. For the present, however, the rubber-sheeting necessary for accurate interpretation is time-consuming, in both human and computer time.
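
In its simplest form, tiepoint registration derives a transform from the known control points and applies it to everything else. Below is a minimal sketch fitting an exact affine transform through three tiepoints via Cramer's rule; real rubber-sheeting uses many tiepoints and local warping, and the function names are my own:

```python
def affine_from_tiepoints(src, dst):
    """Derive the affine transform mapping three source tiepoints (image
    coordinates) exactly onto three destination tiepoints (map coordinates).
    Returns a function (x, y) -> (x', y')."""
    (x1, y1), (x2, y2), (x3, y3) = src
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

    def solve(v1, v2, v3):
        # Cramer's rule for a*x + b*y + c = v at the three tiepoints.
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c

    ax, bx, cx = solve(*(p[0] for p in dst))
    ay, by, cy = solve(*(p[1] for p in dst))
    return lambda x, y: (ax * x + bx * y + cx, ay * x + by * y + cy)
```

Every scanned pixel is then pushed through the returned function, which is why rubber-sheeting a whole scene is expensive in computer time as well as in the human time spent picking tiepoints.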

A second large problem has been classifying what each pixel means. Each pixel is a sometimes confusing conglomeration of different spectral readings. For instance, each pixel can contain a variety of features all jumbled together; houses, cars, roads, trees, and water features can all meet in one place. Clouds can obscure part of the picture. In different seasons snow, leaves, or flooding blur the picture. Furthermore, as any pilot would agree, recognizing many features from above is difficult under the best of circumstances. In addition, features that people find important, such as the only road within 200 miles in central Brazil, may not be apparent to the satellite. Some progress has been made: in one specific case, oil companies can identify varieties of surface plants and use characteristic plants to locate oil-bearing rock below. However, progress is slow in categorizing the data accurately on a large scale.

In theory, an accurate up-to-the-minute map of town-as-it-looked-last-week is possible and such map production is a goal of researchers. Some hope to achieve a Star Trek-like computer response, “Captain, sensor readings indicate a newly constructed bypass ahead”. In the pursuit of modernization, cartographers are changing some of their old methods for use with computers and using Landsat data to produce some maps. Despite the computerization, annotation is only semi-automated, simple maps need lots of expensive processing, and conventional maps and surveys are required for accurate identification of ‘cultural’ features and tiepoints.

All this work has resulted in a thirst for more data. Users would like more coverage and more detail. There has been talk of 10 meter square pixels, which could result in maps at scales of 1:20,000. Governments are considering the effects on privacy; you can almost count individual automobiles at such scales. Governments are also considering the cost. To date, the U.S. has provided much of the funding, and both the U.S. and resource management agencies have benefited in assessing their domains. Nevertheless, the U.S. government questions the need and usefulness of Landsat data with better resolution. French and Japanese satellites will give a closer look, but users are worried that the data may not be freely available.

Close-up of head of laser line-following digitiser. Laser hits map and goes back to photoelectric cells. Gain from cells drives feedback to stepper motors to reposition the head to stay in the centre. The feeling was as if you were a needle following a phonograph groove. For a skilled operator, this meant that a complex bathymetric or topographic chart might take a day to digitise rather than one to three weeks.

How Computer Cartography Differs from Other Computer Applications

Having seen the background to computer cartography and the effects of new satellite information, we can examine specific differences between cartographic applications and other computer applications. Computer cartography, and computer graphics in general, differ from other uses of computers in two major ways. The first difference is that the volume of data is astoundingly large. Storing a simple map of the United States showing the outlines of the states requires a minimum of six hundred points to be recognizable, two thousand points to look decent, and can reach twenty thousand points without difficulty. Ignoring the overhead, this means a significant 160 kilobytes of storage are required to store and use a relatively simple map. More complicated maps can easily need ten times more space. Also, unlike other applications, cartographic transformations must be performed on all of the data, regardless of the relative interest of particular portions. An entire map needs to be projected, not just the portion which is going to be used. If you want to plot Massachusetts surrounded by the other New England states, you must work on all of the data for New England.
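
The 160-kilobyte figure follows directly from the point count, assuming each point is an (x, y) pair of 4-byte coordinates:

```python
# Back-of-the-envelope check of the storage figure, ignoring overhead.
points = 20_000
bytes_per_point = 2 * 4              # two 32-bit coordinates per point
kilobytes = points * bytes_per_point / 1_000
```

By the same arithmetic, a ten-times more complicated map needs on the order of 1.6 megabytes, a substantial commitment for the machines of the day.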

The second difference is that in graphics and cartography the data used is spatial data, not tabulated data. Unlike typical computer applications, say, maintaining inventories, maps combine mathematical transformations and database manipulations. The information looks different every time it is displayed but, contrary to perception, remains the same data. Dealing with spatial data involves two different problems. First, the data needs to be rotated, projected, scaled, etc. These are computationally intensive mathematical transformations which are to some degree dependent upon the data being transformed. For instance, different mathematical projections are used in different countries to give the best results for that country’s location, size, and shape. The particular use of the map is also important. A map projection good for navigation is not a projection good for assessing population distribution.

Second, in addition to the above, the data also needs to be manipulated in a traditional database fashion, i.e., a user needs to retrieve the data by attributes. But this retrieval is not as traditional as it looks. Asking for a combination of spatial characteristics involves different calculations than asking for a combination of names and addresses. As an example, a person could want to plot a map of all the rivers in Wisconsin. To be able to do this, the data needs an attribute describing what type of feature it is (city, river, lake, road, railroad, boundary…) and the data must be stored in such a way that all data is associated with a state (this river is a river in Wisconsin). If the data is not associated with a state, then the computer will need to establish what Wisconsin is (a boundary outline) and determine whether the river is within Wisconsin, and if so, totally or partially within it – another lengthy calculation. Obviously, any manipulation is compounded by the first problem, the amount of data, which means that people may wait a long time for results.
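
That "lengthy calculation" of deciding whether a feature falls inside a boundary outline is typically a point-in-polygon test. A sketch of the standard ray-casting method, with a toy rectangular 'state' standing in for a real boundary (data invented for illustration):

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: cast a ray east from pt and count crossings with
    the polygon's edges; an odd count means the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

# A toy rectangular 'Wisconsin':
state = [(0, 0), (10, 0), (10, 10), (0, 10)]
```

Repeating this for every vertex of every river, against a state outline of thousands of points, shows why an un-indexed spatial query can keep a user waiting.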

The differences between computer cartography and other computer applications are demonstrated by asking a computer database of the United States, “Where is Poughkeepsie?” The answer to such a question immediately requires an extensive database holding, as a minimum, most cities in New York State. The answer also requires a new way of dealing with data. To get the answer, the computer must deal with the data spatially. Does the user need coordinates in latitude and longitude, does he need a representation on a general map of New York or the United States, or does he require coordinates in some national grid system, for instance United States Geological Survey coordinates? Other questions which can be usefully asked are: How far is Poughkeepsie from Albany? What is the area of Poughkeepsie? What is the nearest railroad to Poughkeepsie? What county is Poughkeepsie in? These questions differ from traditional ones asked of employee or inventory databases. These questions deal with the spatial characteristics of the data. People process spatial information easily. If a person looking at a map is asked whether or not Poughkeepsie is close to Albany, he will respond quickly and without much thought, because he can easily glance at the map, evaluate the distance, and use that evaluation to judge the relative value of ‘closeness’. Just posing the problem to a computer is difficult.
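
For the computer, "How far is Poughkeepsie from Albany?" reduces to a great-circle calculation from stored coordinates. A sketch using the haversine formula (the coordinates below are approximate and assumed for illustration):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance, in km, between two points
    given as (latitude, longitude) in degrees."""
    r = 6371.0  # mean earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Approximate coordinates for Poughkeepsie and Albany, NY:
d = great_circle_km(41.70, -73.93, 42.65, -73.76)
```

The answer comes back as a number, roughly a hundred kilometres; whether that counts as 'close' is a judgment the human still has to supply.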

Two Basic Approaches – Raster & Vector

At the heart of computer graphics, and thus of computer cartography, are two distinct ways of storing and handling data – the raster method and the vector method. These two distinct methods are an outgrowth both of different philosophies and of different technologies. Simply put, the raster method is a brute force approach of dealing with data by computer, while the vector method is an attempt to make computers deal with data as humans do. Both approaches have their positive and negative aspects. The use of one approach determines the kinds of manipulations that can be performed upon the cartographic data.

The raster approach developed from the way in which computers initially handled changing images. Changing images were stored on a screen similar or identical to a television screen. These screens were divided into lines and the lines into rasters. Rasters are to all intents and purposes the same as pixels. The raster approach is a way of managing data in a fashion computers find acceptable. Landsat data is in a raster format. One advantage of the raster approach is that less interpretation is carried out between acquiring the data and displaying the data; what you see is what you have. Another advantage of the raster approach is that the basic operations upon the data are simple and the concepts are easily understood. The ‘map’ is a series of cells, each holding a value corresponding to a value of the cell in the real world. People most often want to ‘zoom in’ on a portion of the data or enhance a particular area. If people want to ‘overlay’, i.e., combine, two databases of the same area, they can just combine the appropriate pixels with each other.
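
The overlay operation really is as simple as it sounds: two co-registered grids are combined cell by cell. A toy sketch (invented values; a real system would combine spectral bands or thematic codes, and the choice of `max` as the combining rule is just one possibility):

```python
# Two co-registered rasters of the same area, as rows of cell values.
land_use = [
    [1, 1, 2],
    [1, 3, 2],
]
flood_risk = [
    [0, 2, 0],
    [2, 2, 1],
]

# Overlay: combine corresponding cells, here keeping the larger value.
overlay = [
    [max(a, b) for a, b in zip(row_a, row_b)]
    for row_a, row_b in zip(land_use, flood_risk)
]
```

Because both inputs share the same grid, no geometry is involved at all; that is the raster method's great conceptual simplicity.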

However, the raster approach is not a cure-all. To store a map with a few different values for an area takes just as much storage as a map with many different values for the same area. The choice of pixel size is crucial in determining at what scale the data can be effectively used. If the chosen pixels are too small, there is a large overhead in processing each pixel. If the chosen pixels are too large, the data at best looks bad, and at worst is unusable. Pixels used to draw a line at an angle to the pixel grid exhibit the ‘staircase effect’. The ‘staircase effect’ appears when a line that people feel should be straight, say northwest to southeast between two cities, has been represented by square pixels, which can only represent the line as whole squares. The resulting picture looks like a jagged string of squares when it should look like a straight line. It is similar to trying to copy the Mona Lisa by coloring in a checkerboard: with small enough squares you can copy the painting adequately, but the squares must be invisibly small before the jagged edges disappear. Finally, combining pixels of one size from one database with pixels of a different size from another database is a complicated task subject to a large degree of interpretation.
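
The staircase effect can be seen directly by rasterizing a line, i.e., choosing which whole squares approximate it. Bresenham's classic line algorithm is the standard way to make that choice (a sketch; the integer endpoints are illustrative):

```python
def rasterize(x0, y0, x1, y1):
    """Bresenham's line algorithm: return the grid cells chosen to
    approximate the line from (x0, y0) to (x1, y1)."""
    pixels = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        pixels.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy
    return pixels
```

A 45-degree line happens to fall exactly on the diagonal cells, but a shallower line such as `rasterize(0, 0, 4, 2)` must jog between rows, and those jogs are the 'stairs' the eye objects to.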

The vector approach tries to imitate the way humans deal with pictures and maps. People draw lines. The vector approach simulates all of the complicated lines drawn by people with straight line segments – ‘vectors’. As an example, a circle is represented by a polygon: a polygon formed from a large number of vectors, say forty or more, appears virtually the same as a circle. The vector approach is more attuned to the way people actually work. Although people feel that a circle is continuous, physically drawing a circle requires a large number of short segments. To change a line drawing of a house, people think of increasing the roof height by moving the junction of the two lines at the apex to a new, higher apex. Thus they think of a vector operation, moving lines, rather than a raster operation, moving all of the pixels making up the two lines.
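
The forty-vector circle can be checked numerically: the perimeter of such a polygon is already within about a tenth of a percent of the true circumference. A sketch (function names my own; requires Python 3.8+ for `math.dist`):

```python
import math

def circle_polygon(cx, cy, r, n=40):
    """Approximate a circle by an n-sided polygon of short vectors."""
    return [
        (cx + r * math.cos(2 * math.pi * i / n),
         cy + r * math.sin(2 * math.pi * i / n))
        for i in range(n)
    ]

def perimeter(poly):
    """Total length of the polygon's segments, closing the ring."""
    return sum(
        math.dist(poly[i], poly[(i + 1) % len(poly)])
        for i in range(len(poly))
    )
```

At plotting resolution the forty short vectors and the continuous curve are indistinguishable, which is exactly the point of the approach.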

While vector data reflects the way people draw, transformed vector data may not have the same characteristics as its source. For instance, the boundary of the U.S. with Canada runs along a line of latitude. On some projections this line is curved (Lambert Conformal Conic), on some straight (Mercator). If the original vector data was in latitude and longitude and merely stored the endpoints of the boundary between the Great Lakes and Washington State, then under the Lambert Conformal Conic projection there would be a straight line between the projected endpoints instead of a curved one. The vector approach does save storage space and does represent certain features better than rasters (e.g., circles, sine curves), but at the expense of completeness and sometimes greater computation requirements.
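
The usual remedy is to densify the boundary with intermediate vertices before projecting, so that each short vector is projected individually and the chain of segments traces the parallel's curve. A sketch (the 49th-parallel endpoints are illustrative round numbers):

```python
def densify(p0, p1, n):
    """Insert n evenly spaced vertices, in (lat, lon), between two
    endpoints, returning the full list including both endpoints."""
    (lat0, lon0), (lat1, lon1) = p0, p1
    return [
        (lat0 + (lat1 - lat0) * i / (n + 1),
         lon0 + (lon1 - lon0) * i / (n + 1))
        for i in range(n + 2)
    ]

# The 49th-parallel boundary, stored as endpoints only vs. densified:
sparse = [(49.0, -123.0), (49.0, -95.0)]
dense = densify(sparse[0], sparse[1], 27)
```

Projecting the two `sparse` points yields one misleading straight segment; projecting all of `dense` yields a chain of short segments that follows the curved parallel, at the cost of more storage and more projection arithmetic.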

The two approaches tend to be exclusive of each other, although there are methods of converting between the two. Conversions between raster and vector tend to be computationally intensive and are not yet practical for large databases. Until recently, the vector method predominated in storing and handling data. Raster devices were prohibitively expensive and required more of that expensive storage. Also, very little data was in raster format. People traced maps as they would if they were drawing them, i.e., using vector techniques. With the decreasing cost of computer hardware and memory, and the easy availability of raster data (Landsat), raster devices have become as common as vector devices.

The vector approach is still an important one, and it should be noted that it is still the preferred method of storing databases for most applications because it uses less storage. The production of both raster and vector databases is difficult and tedious. Expensive equipment is still necessary to accurately scan maps and produce detailed images in a raster format. Producing vector databases is very labor-intensive. People must trace, in detail on a digitiser, every vector used to create a cartographic database. If you can imagine the detail and number of different types of features on a typical topographic map, you can imagine the length of time necessary to input a single map. Because of the large expenditures both methods require, countries like the U.S. will not be covered by detailed, comprehensive cartographic databases at topographic scales of 1:25,000 until the next century.

The Impact of Computer Cartography

It would be hard to find a less controversial topic than computer cartography. To most people it is simply another “gee whiz” application of computers. On the surface, the only socially significant aspect is the possibility of rising unemployment among cartographers. However, when we examine computer cartography as an all-embracing discipline which relies upon computers and satellites, we can discern trends that may affect the man in the street.

The easiest point to establish is that there is a steady trend toward greater map production, and the man in the street is going to be exposed to more maps than ever before. Just as computerized listings from databases have inundated us with information, computerized maps are starting to be produced in floodlike proportions. In the past, a large number of highly-skilled cartographers produced a relatively small number of maps. Today, once the cartographic database has been created, an infinite number and variety of maps can be made. Greater exposure to maps will require the man in the street to be better informed about their meaning. Maps are useful media for propaganda – take a look at the widely different sizes of the U.S.S.R. on different world maps. Formerly, cartographers were trained to evaluate maps to avoid misinterpretation. They avoided certain color combinations, used standard symbols, and adhered to high standards of accuracy. Because many non-cartographers now produce their own maps, the old standards are not used. The computer can give a false aura of accuracy, and people can be deluded into believing that a flashy computer map is a good map. Greater map production does not mean better maps.

A second noteworthy point is that computer cartography will change our basic notion of a map. Already, satellite data is often considered map data. Because people will be using raster methods, they will stop thinking of maps as lines on a piece of paper and start thinking of maps as an array of cell values in computer storage. The nightly television weather broadcast displays computer-produced meteorological maps which are a combination of satellite photos and boundary maps. Such composite maps are becoming more common. With the addition of labels and grid markings, Landsat data is often used as a substitute for a map. Previously, cartographers interpreted everything they placed on a map. For instance, roads on highway maps are up to a mile wide if they are measured strictly according to the scale. Obviously, this type of enhancement is important, because if roads were drawn to scale they would hardly be visible; in some cases even a pen width would be too wide. Interpreted maps are useful summaries of the world, while raw data from satellites can give detail where it is needed. A balance must be struck between interpreting data and dealing directly with basic data.

A third point to note is that creative uses of maps will increase. By freeing people from the labor-intensive parts of map-making, computer cartography has encouraged experimentation in maps. Such experimentation has changed some notions of maps and created new ones: three-dimensional representations of population are no longer time-consuming; the Census Bureau has developed a new method of displaying tract values as mixtures of different colors; and statistics are as frequently mapped as tabulated. Cartographic databases permit us to give a spatial component to information previously devoid of (x, y) coordinate content. An address is no longer just a label for a mass-mailing; it can be used to calculate the density and distribution of the mass-mailing. Plenty of information which has been thought of as strictly numerical or textual will now be tied to coordinates in the world, and thus will be mappable. Although we are unable to foresee the future, it can surely be stated that change is inevitable and will increase.

The fourth point should warn us that another piece of technology will reveal its two-edged character. Personal privacy will diminish with two cutting advances in computer cartography – increasingly detailed data and increasingly sophisticated data manipulation. Satellites will provide the better detail. Although most satellites are designed for peaceful purposes, everyone has heard of their military potential. The information is classified, but sensitive and sophisticated military spy satellites are probably capable of distinguishing at least 0.3 meter pixels. This allows individual people on the earth’s surface to be tracked. During the day they can be tracked with visible light, while at night they can be tracked by infrared spectral readings. It is not paranoically Orwellian to imagine an extensive series of geostationary satellites and computers providing information about the exact location of everyone on earth and recording their every action. Despite the positive potential of crime prevention, there exists a serious potential for abuse. Even the relatively low resolutions used today produce complaints from farmers who are unable to conceal the cash potential of their crops from outside evaluation.

Satellites are not the only source of detailed cartographic information. Other detailed databases are being constructed today. Used in combination, databases of addresses, zip code boundaries, county plans, and housing plans can be used to invade privacy. Sophisticated data manipulation is being built into today’s hardware and software. A good bit into the future, a credit company could, in theory, evaluate a person from his address alone, using it to access databases that let the company count the cars in his parking lot, examine his house plans, check all deliveries to and from the address, and take note of the comings and goings of people, perhaps even following them to their own addresses.

It is generally agreed that people have a right to privacy. Although techniques for violating that privacy exist, from electronic ‘bugs’ to bulldozers, such techniques are illegal. Satellite data has been virtually free, yet restricting the data or making its use illegal could concentrate abusive power in fewer hands; how, then, will the potential abuse of detailed databases be curbed? The potential benefits of such databases must be balanced against their harmful effects, and a solution found that will keep the equilibrium.

It is a common problem with technology, from genetics research labs to teflon-coated cookware, that the application of the technology has its dangers. However, there is a distinction between global effects and local effects. A person chooses to purchase teflon cookware; a genetic research lab and local residents negotiate where the lab will be built. In both cases the effects are local and people have some choice. Nuclear weapons have a potential for global effects, and the people affected have little choice in participating. Detailed cartographic databases and the manipulation of spatial data are the last links necessary to make the effects of information abuse as global as weaponry. Although theoretical abuses will remain theoretical without extensive software and hardware development, this development cannot be effectively regulated and, on the contrary, will expand because of the push for benevolent uses alone.

The fifth and final point to be made is that the volume of information computer cartography will soon make available will reveal new ways of looking at the world. Just as the rapid development of mapping during the age of colonial expansion fostered a world view of events, the new cartography will shrink the world and once again reshape our conceptions of the world. The recent timely interpretation of world-wide deforestation has only been possible with new cartographic overlays showing up-to-the-minute forest cover. We can almost hear, “Captain, last year Burma lost 275 square miles of dense forest”, and this statement may prompt us to do something about it. Our planet could become a better place because we will know more about the earth and how we change it.

Proud Director of Geodat Project (and author) with state-of-the-art Tektronix 4027 colour screen, Petroconsultants (Computer Exploration Services), Cambridge, England (1983).


We have seen that, despite some unique problems of volume and spatial orientation, computers can produce maps. These problems will diminish in time, and computer maps will be extremely common. Our ideas of map use will change and new uses will appear. Unfortunately, cartographic databases and techniques can tie data together in harmful ways. Lastly, the sheer volume of maps, where previously they were few or unavailable, will provide new insights and interpretations of the world.

There is no simple way of getting only the good returns from the expansion of mapping and databases. If the data is strictly controlled, there is the risk of misuse by the controlling agency, probably a government. If the data is freely available, we will have to change our basic concepts of privacy. Thought may become the last bastion of personal privacy in a shrinking world. For clues to action, we should look at the last major historical database creation, the census. This data has great detail rendered harmless by secure anonymity. If new databases can be secured for individuals in the same way as census databases are, perhaps a pragmatic solution can be found for privacy. Future computing progress will develop mapping further and produce benefits, but even this seemingly benign technology has implications in the year 1984.