In an earlier article, I mused on the role of “thought leaders” in indirectly influencing the popularity of websites. These are further rough thoughts on the topic. Caveat: this text is not well researched.
My basic premise is this: web authors and bloggers are creating trust-based filters for information. Many online writers are looking to provoke discussion and change. But most readers, most of the time, are just trying to get through the day, and aren’t much interested in discussion and change. For them, it is enough that an author “sounds like they know what they’re talking about”. That creates a sense of trust, and validates the author as a reliable filter for information on that topic. Even the most objective and discerning people don’t have time to review everything themselves; they merely spend more time determining which source to trust. Others, in turn, are likely to trust them, which makes the source they trust a very important actor indeed.
So we create a network of trust for information. If I want to know something about topic x I might follow the recommendation of author y, because I trust their depth of reading on that topic.
Of course, that doesn’t mean that all webmasters and bloggers are automatically trusted. Far from it. The internet, or “blogosphere”, is so easy to publish to that it fills up with low-grade content faster than any other medium in history. Authors have to earn trust, at least from their early readers. Subsequent readers may be more prepared to trust because others are already trusting (a herd or celebrity mentality).
Why is this happening? Take Herbert Simon’s observation that the rapid growth of information causes a scarcity of attention. The sentiment is repeated in Davenport and Beck’s “The Attention Economy”. We simply can’t manage all the available information any more.
Is that really a new problem? It probably hasn’t been possible to know everything there is to know since the early Victorian era. In some cases there are now technical barriers to knowledge: Simply being well educated isn’t enough to allow one to understand most cutting-edge scientific developments in depth. In most cases the prime problem is the volume of information: In the World of Warcraft example from my earlier article, more information is written than any human could possibly read. Finding the important or useful information within it can be immensely time-consuming.
Trusting people one barely knows to filter information does not automatically turn these authors into celebrities. In a few cases it may do – some readers will feel the need to trust only those who appeal to many. However, if there is a trend towards writing in narrow niches with in-depth content, rather than content with mass-appeal, an individual author may never be known to millions of people, because the topics they write about aren’t sufficiently mainstream.
Those narrow niches will similarly prevent most authors from emulating the role of pre-internet mass media, notably newspapers. They do, none the less, retain the same duty to their readers: Their readers may be inclined to trust them, but that trust will be eroded if abused. Of course, much like modern mass media, readers can still be subtly manipulated…
As I noted, Google biases sources by the number and strength of links to the source. This automated approach fails to weigh who is creating the links, so it has become less valuable as the internet has become more mainstream and prone to abuse. There does not yet seem to be an effective automated equivalent of personalised networks of trust – perhaps because emulating humans is hard to do?
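The link-based biasing described above can be sketched as a toy power-iteration ranking in Python. This is only an illustration of the general idea (rank flows along links, so well-linked pages score higher); the graph, function name and parameters are my own invention, not Google’s actual algorithm or data:

```python
# Toy link-based ranking sketch (hypothetical; not Google's real algorithm).
# Each page's score is repeatedly redistributed along its outgoing links.

def link_rank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outs in links.items():
            if not outs:
                # A page with no outgoing links spreads its rank evenly.
                for q in pages:
                    new[q] += damping * rank[page] / n
            else:
                for q in outs:
                    new[q] += damping * rank[page] / len(outs)
        rank = new
    return rank

# A tiny illustrative "web" of three pages.
web = {
    "a": ["b", "c"],  # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
}
ranks = link_rank(web)
# "c" outranks "b": both are linked from "a", but "c" is also linked from "b".
```

The weakness the paragraph identifies is visible even here: every link counts the same regardless of who made it, so the score says nothing about whether the linking author is themselves trustworthy.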
The United Kingdom’s local public transport network is likely to become part of Google Transit. Technically that should be far easier in the UK than in North America, where Google Transit was first developed: The UK has a decade of bitter experience in putting all the data together. In practice it is raising wider issues over data control and availability that the public sector is somewhat reluctant to tackle.
This article describes how the UK’s public transport data is being integrated into Google. It questions why data is being made available based solely on the business model adopted. It explores the real value of this information, and presents a case for the liberalisation of data.
Readers unfamiliar with the topic area should read my earlier Introduction to UK Local Public Transport Data, which contains non-technical background information, and defines many of the terms used (such as “local”). The original research for this was done in June/July 2007, so may now be slightly out of date.
The illustration on the right is the Google part of a visual representation of web trends, based on the Tokyo metro map, by Information Architects Japan.
This article provides a basic non-technical introduction to the United Kingdom’s electronic local public transport data: The data sources primarily used to produce passenger travel information. It does not cover purely operational data, such as financial, patronage or staff-rostering records.
The article is intended to provide a background for anyone wishing to understand how these data sources might be used. It was written to support my commentary on the Implications of Google Transit in the UK. The article first introduces the local public transport sector (primarily bus and rail), then explores the development of different data formats, before summarising data availability.
This page contains a demographic profile of the area I’ve called “Caleys”: a single block of Edinburgh tenements, which all share a common “backgreen”. The area is bounded by Caledonian Road, Caledonian Crescent, Caledonian Place and Dalry Road.