Forster N. – Maximum Performance (2005)

430 MAXIMUM PERFORMANCE

changes during this century as the very nature of organizations, ‘work’ in the Taylorist sense and (perhaps) even what it means to be human may be changed forever. Some commentators have described the technological transition we are currently going through as being, by far, the most radical in human history, superseding the agrarian (First Wave) and industrial (Second Wave) revolutions that transformed human civilizations and societies in the past. In the industrial age, business growth stemmed from the ability to make things big. In the 21st century, growth will come from the power to make things tiny. We are entering a century in which the Third Wave of technological innovation will profoundly affect every aspect of human existence, including our professions, the organizations we work in and the manner in which we lead and manage people in these environments (Kurzweil, 1999; Dyson, 1999; Warwick, 1998). We’ve also seen in several earlier sections of this book that one of the primary roles of leaders is to anticipate the future and map the path their followers can take towards it. Hence, towards the end of this chapter, we will be looking in some detail at a variety of radical and bizarre scenarios about the future of both organizations and humanity.

The word ‘technology’ is derived from the Greek technē (art, skill or craft), and is defined here as the application of mechanical and applied sciences, technical knowledge, expertise, tools, machines, techniques and methodologies. Technological development has perhaps been the most distinctive characteristic of human evolution over the last 130 000 years. Even more remarkable is the pace at which technological innovation has been accelerating over the last 10 000 years. To illustrate this, imagine for a few minutes that you are going to walk from the San Francisco coastline, on the far-west coast of the USA, to the Chrysler Building on Manhattan Island in New York on the eastern seaboard. This journey is going to represent the length of time from when our hominid ancestors were living in Africa, some 500 000 years ago, to the present day. Recent archeological evidence indicates that the first basic stone tools, spears and fire were being used as long ago as 400 000 before the Common Era (BCE), but for the next 370 000 years technological evolution largely stood still. By 30 000BCE, when you have reached Philadelphia on your journey across the USA, there is the first evidence of the use of iron and copper tools and cave art.
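The year-to-distance mapping behind this analogy is a simple linear scaling, and can be sketched in a few lines of Python. (The figure of roughly 4700 km between San Francisco and the Chrysler Building is an assumed round number for illustration; the 500 000-year span is the one used in the text.)

```python
# Sketch of the walking analogy: the 500 000-year timeline is mapped
# linearly onto the walk, so the distance still left to cover is
# proportional to the number of years remaining before the present.
# The ~4700 km total is an assumed round figure, not from the text.

TOTAL_KM = 4700        # assumed San Francisco -> Chrysler Building distance
TOTAL_YEARS = 500_000  # span of the analogy, as given in the text

def km_remaining(years_ago: float) -> float:
    """Distance left to walk at the point representing `years_ago`."""
    return TOTAL_KM * years_ago / TOTAL_YEARS

# 30 000 BCE (roughly 32 000 years ago) leaves about 300 km to go --
# broadly the Philadelphia stage of the walk.
print(round(km_remaining(32_000)))  # -> 301

# The wheel and plough, c. 3500 BCE (~5500 years ago): about 52 km,
# the outskirts of New York.
print(round(km_remaining(5_500)))   # -> 52
```

On this scale the entire 20th century occupies less than a single kilometre of the walk.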

LEADERSHIP AND PEOPLE MANAGEMENT 431

From about 12 000BCE, after the end of the last ice age, the biggest growth in human technological evolution occurred. The earliest evidence of settled human communities, based around agriculture and the domestication of animals, is from 10 000–8000BCE, in what is now the Middle East. Around this time, human cultural evolution really started to kick in and, in the process, generated many new inventions and innovations. Cuneiform writing was developed during the third millennium BCE. The wheel and the plough also emerged some time between 3500 and 3000BCE (as you have just arrived at the outskirts of New York) and these two innovations marked the ascendancy of the first truly modern human civilizations in the Middle East, Greece and Italy. In turn, these generated an explosion of innovations and developments in irrigation, agricultural production, commerce, international trade, boat design and navigation, military technologies and strategies, political and civil governance, architecture, art, philosophy, astronomy and mathematics. One of the most important – paper – was developed by Tsai Lun in about 100CE.

As the Roman Empire collapsed during the fourth century CE, you are just arriving on Manhattan Island. Gunpowder and printing were in use by about 500CE in China, the most technologically advanced civilization of the first millennium CE. After the European Dark Ages and the Renaissance comes the next burst of technological innovation, driven in large part by the Gutenberg printing press, unveiled in 1455. Copernicus turned our geocentric view of the universe on its head with the publication of De Revolutionibus Orbium Coelestium in 1543 (although the little-known Greek scientist Aristarchus is credited with having first deduced that the Earth revolves around the Sun, after watching a lunar eclipse). It would take the Roman Catholic Church until 1992, nearly 450 years, to accept this and apologize for its persecution of Copernicus and Galileo. The first modern system of physics, the progenitor of many subsequent inventions and innovations, was developed by Sir Isaac Newton between 1666 and 1715. The first steam engine was unveiled in 1770. The first electrical battery was produced in 1800. The first locomotive, Stephenson’s Rocket, made its maiden journey in 1829. Electromagnetic induction was discovered in 1831, and the first electrical generator was unveiled in 1832. The first working telegraph was displayed in 1837 by Samuel Morse. By 1876, Alexander Graham Bell had invented an ‘electric speech machine’ with Thomas Watson. In 1889 the first coin-operated public phone appeared in Connecticut, USA.

The first internal combustion engine appeared in 1877, the radio in 1893, the airplane in 1903, oil refining in 1913, the first transcontinental phone line from San Francisco to New York in 1915, and the first radio transmission from a plane to the ground was made in 1919. The first liquid fuel rockets were built in the 1920s. The first working prototype of a television was unveiled in 1923. In 1927, the first intercontinental phone line service between New York and London was established. In 1937, the first commercial transatlantic airplane service flew and the domestic vacuum cleaner was launched. The first atomic bombs were detonated in 1945. The first electronic computer and microwave oven appeared in 1946, and the first electrical transistor in 1947. The first satellite, Sputnik, flew in 1957. In 1959, one of the most important inventions of the 20th century made its debut – the silicon chip. The world’s first commercial communication satellite, Telstar, was launched in 1962. The combined handset phone also appeared in the same year and the first steam-or-dry electric iron in 1963. The first commercially available personal computer appeared in 1977. In 1982, Nokia launched the first generation of mobile phones in Finland and Sweden, in the form of the Mobira Senator. In 1990, the first universal HTML code for the world-wide web was launched. The first prototype phone with Internet access, fax and email facilities appeared in 1994, and in July 2000 the first commercially available mobile videophone, the VP-210, was launched (Paul, 1999). All the innovations and inventions described in these two paragraphs have appeared in the time it has taken you to walk from the edge of Manhattan Island to the Chrysler Building (Bronowski, 1980; various history of science websites, June 2003).

The analogy of a journey across the USA highlights some truly extraordinary facts about human technological development. First, about 95 per cent of all human inventions and innovations have appeared in less than 5 per cent of the time that modern humans have lived on this planet. Second, almost all of these have appeared in just the last 200 years. There were more inventions and innovations in the 20th century alone than in all of preceding human history. Third, the pace of technological innovation is going to continue to accelerate at an even faster rate for the foreseeable future. For example, it took 50 years from the development of the first electrical generator to the opening of the first power station in the USA in 1882. It took another 50 years before 50 million American homes were connected to electricity supplies. It took radio 37 years to reach a global audience of 50 million and television 15 years. It took the Internet just three years to achieve this figure (Coyle and Kay, 2000). There will be more technological innovations in the next 50 years than have emerged in the entire span of time humans have inhabited the planet. This has one other important implication: technological evolution is now outpacing human biological evolution and yet we are, genetically, virtually identical to our ancestors of 130 000 years ago (Kurzweil, 1999).1 The profound implications of these developments will be described later in this chapter.


Recent scientific and technological developments

Exercise 11.1

Some other examples of recent technological advances are the events that occurred on 23 February 1997, 12 May 1997, 24 August 1998, 26 June 2000, 11 November 2001, 16 June 2002 and 15 March 2003. These are seven of the most significant dates in human evolution. Why?

Who first predicted the event that took place on 23 February 1997, and in which year did he make this prediction? And what didn’t happen on 31 December 1999?

The answers can be found in note 2.

There are many factors driving the current Third Wave of technological evolution. The most important of these are humanity’s age-old curiosity about its environment, combined with a hard-wired desire to understand and master it, and a unique ability to utilize tools and innovate with them.3 More recent drivers of technological change include the emergence of empirical science in the 18th and 19th centuries, the arrival of universal education, the development of the silicon chip, the globalization of business, the emergence of the Internet, the evolution of the knowledge economy, increasing competition for markets, shortening product life cycles, and fast-changing consumer and customer demands. However, perhaps the single most important factor has been the speed with which computing power advanced during the 20th century (see Figures 11.1 and 11.2). These two graphs highlight the dramatic growth in computer-processing power, particularly during the last two decades of the 20th century.

The first commercially available chip, produced by Intel in 1972, was able to perform 3500 calculations per second (CPS). By 1982, this had grown to 134 000 CPS and by 1985 to 275 000. From 1989 to 1995 the number increased from 1 200 000 to 5 500 000 CPS. By 2000 this had grown to 28 million CPS. The Intel Pentium IV, launched in 2001, was capable of performing 42 million CPS, and Intel expects its standard chips to be able to perform 400 million CPS by 2007 (Hawking, 2001: 166). Computers of the 1940s and 1950s took up a whole room. By the late 1960s, they were the size of desks and by the mid-1970s the size of suitcases. Now they fit into our pockets and, in the not too distant future, they will be the size of dust particles. This growth in computing power is mirrored by the miniaturization of devices that we now take for granted. For example, when the first transistor was developed in

Figure 11.1 The exponential growth of computing, 1900–1998
[Chart: calculations per second that $US1000 of computing buys, on a log scale from 10^–6 to 10^10, plotted across the years 1900–2000 for 49 machines. See note 4 for an explanation of numbers 1–49.]

Figure 11.2 The exponential growth of computing, 1900–2100
[Chart: calculations per second plotted against year.]

Source: Kurzweil (1999: 24, 104); used with permission.

1947, it was the size of a modern mobile phone. The equipment for the first mobile car phone, developed by Ericsson in the mid-1950s, weighed more than 40 kilograms. The company’s website once joked that the first mobile telephone users could only make two calls; the first was to say hello to their friends, the second was to let them know that the car battery was about to go flat (Ericsson website, 30 June 2002).

The first commercially available mobile phone, Nokia’s Mobira Senator, came with a battery pack weighing 9.8 kilograms, complete with carrying handle. Now, several million micro-transistors can be embedded on a computer chip smaller than a baby’s fingernail, and a modern mobile phone has more processing power than a 1950s mainframe computer, yet weighs less than 100 grams. Mobile phones are now capable of performing a range of functions that would have been regarded as science fiction just ten years ago, including transmitting some 100 billion SMS messages a month. More than two-thirds of all adults in the USA now own a mobile phone, which has even made the transition from being regarded simply as a functional communication device to becoming an essential fashion accessory for most young people (Iwatini, 2003; Grayson, 2003). Even a humble greeting card that sings ‘Happy Birthday’ has more computing power than first-generation desktop computers of the 1970s. A run-of-the-mill 2004 laptop had far more computing power than the spaceships that flew to the moon in the late 1960s and early 1970s. Kurzweil (1999) has observed that, if car technologies had advanced as rapidly as computing technologies over the last 50 years, a typical car would now cost 5 cents to build and travel at the speed of light.
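The chip-performance figures quoted earlier in this chapter (3500 calculations per second in 1972, 42 million in 2001) imply a remarkably steady doubling time, which a few lines of Python can recover. This is a back-of-the-envelope sketch, not a rigorous fit to all the data points:

```python
import math

def doubling_time_years(v_start: float, v_end: float, years: float) -> float:
    """Doubling time implied by exponential growth from v_start to v_end."""
    annual_rate = math.log(v_end / v_start) / years  # continuous growth rate
    return math.log(2) / annual_rate

# Intel chip performance, 1972 -> 2001, using the figures cited above.
t = doubling_time_years(3500, 42_000_000, 2001 - 1972)
print(round(t, 2))  # -> 2.14 (years per doubling)
```

A doubling time of just over two years sits close to the 18–24 months popularly associated with Moore’s Law, discussed below.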

Until very recently, this growth in computing power had been governed by ‘Moore’s Law’. First suggested by Gordon Moore, in a 1965 article for Electronics magazine, this stipulated that the number of transistors contained on a silicon chip doubles, on average, every 18 months, and will continue to do so until about 2010 (Schlender, 2002: 52). This law has held good, enabling the science fiction of one decade to be converted into the new consumer items of the next. However, Intel has since built the world’s fastest silicon transistors, running at speeds approaching 20 GHz, and this will give Moore’s Law at least another decade of life beyond 2010. One billion of these new transistors can be packed onto a single processor, making them 33 per cent smaller and 25 per cent faster than previously available chips (Gengler, 2001). In 2002, IBM researchers announced that they had developed ultra-fast transistors out of carbon nano-tubes. Nano-tubes are pipe-shaped molecules built of carbon atoms, one nanometre thick (or 50 000 times thinner than a human hair). In time, these will outperform the most advanced silicon chips (Gengler, 2002a). Computer chips will continue to become faster, more efficient and smaller, and so ubiquitous that they will merge into our environments and become increasingly transparent in the process. Computer chips are becoming so powerful, cheap and small that all manufactured objects, including clothes and the fabric of our houses, will soon contain tiny embedded chips. ‘The computer’, as we currently think of it, will gradually merge into our physical environments and become effectively invisible, in the same way that small electric motors in everyday appliances did during the first half of the 20th century.
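Moore’s Law as stated above, a doubling every 18 months, compounds dramatically over even modest time spans. A minimal sketch (the starting count of one million transistors is an arbitrary illustrative figure, not a value from the text):

```python
def project_transistors(count_now: float, years_ahead: float,
                        doubling_months: float = 18) -> float:
    """Project a transistor count forward under a fixed doubling period."""
    doublings = years_ahead * 12 / doubling_months
    return count_now * 2 ** doublings

# Ten doublings -- fifteen years at the 18-month pace -- multiply a
# chip's transistor count roughly a thousandfold.
print(project_transistors(1_000_000, 15))  # -> 1024000000.0 (about a billion)
```

The same compounding is why a law that holds for only “another decade” still transforms the hardware landscape several times over.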


The PC is dead.

(Lou Gerstner, former CEO of IBM, cited in The Australian, 14 November 2000)

As a result of these developments, we are now passing over the crest of what Andy Grove, the co-founder of Intel, has described as ‘a strategic inflection point’: a point at which human beings may fundamentally alter the way they work and live (cited by Isaacson, 1997: 25). The microchip has become, like the printing press, the steam engine, electricity and the assembly line before it, the primary driver of a new economic paradigm and, as we will see later in this chapter, potentially the precursor of a radical shift in human evolution. The new economy has several features: it is truly global, networked, based on information and knowledge, decentralized (for now) and rewards transparency and the sharing of information. It is, in theory, open to anyone who has a web-linked PC or access to the Internet – the most ubiquitous feature of this new economy.

A consensual hallucination experienced daily by millions of legitimate operators, in every nation . . . a graphic representation of data abstracted from the bank of every computer in the human system. Unthinkable complexity. Lines of light ranged in the non-space of the mind, clusters and constellations of data, like city lights receding.

(William Gibson, creator of the term ‘cyberspace’, in the 1984 sci-fi novel, Neuromancer)

The origins of the Internet can be traced back to a US Defense Department initiative in the 1970s, which ‘escaped’ from the Pentagon in 1984 and has since spread rapidly. In the USA, Japan, Australia, New Zealand and parts of Europe the number of Internet users grew at more than 1000 per cent a year from 1985 onwards (De Witt, 1995: 7). It represents a web of millions of computers and databases, connected by a telecommunications infrastructure encompassing satellites, land-based telephone and fibre-optic networks, and rapidly expanding wireless services. This vast array of networks being created in cyberspace has already provided a platform for a revolution in human communication. At the beginning of January 2004, about 600 million people in 160 countries had direct access to the world-wide web. Many more have access through commercial cyber-cafes and Internet shops. Traffic on the web now doubles every 100 days and it has been estimated that three billion people (about half the world’s population) will be linked to the web by 2010. Only now is the true potential of the web being realized as it reaches a critical user threshold, and recent advances in computer technologies continue to act as accelerants to the evolution of this global communication and information system.

Those who fail to get online in this information revolution may well be swept aside and left behind in the rush to access ‘The Information Superhighway’ (a description popularized by former US Vice President, Al Gore). Many significant players initially bought into the potential of the Internet in the late 1990s. For example, George Shaheen – the man who built Arthur Andersen into one of the biggest global consulting firms in the world before its ignominious collapse in 2002 – could have stayed in a comfortable job with large stock options and a large income. Instead, he chose to start up a web-based grocery business called webvan.com. When asked why he had chosen to do this he replied, ‘Arthur Andersen is history. I can’t wait to join the future’ (cited in The Australian, 12 July 1999). Alas, this business, like so many e-companies, went bust less than 24 months later, for reasons that will be described in the next section.

We continue to envelop our planet with a network in which millions of people can be connected, work and do business together. The ‘egalitarian’ nature of cyberspace has effectively created a level playing field, where people of all backgrounds can communicate directly and instantaneously, not only on the one-to-one basis common to traditional forms of communication, but also one-to-many and many-to-many, simultaneously. It is still largely open and non-proprietary and anybody with the appropriate hardware and software can link up with it. The Internet is rapidly becoming the primary communication medium of the 21st century, because almost everything can now be done through it, including phone calls, emails, liaising with customers and clients, banking, medical consultations, education and learning, creative design, computer dating, downloading music and so forth. Cyberspace has revolutionized the way people communicate with each other and the way information is disseminated within and between organizations.

The Internet and other technological advances over the last two decades have already changed the ways in which people work and organizations operate – forever. Communication in the Information Age no longer requires employees to be in physical contact with each other, their customers or even the companies that employ them, as was the case in the industrial age. Organizations now have greater flexibility to restructure rapidly to suit fast-changing competitive environments, and technology has provided businesses with the opportunity to communicate and transact business more efficiently and effectively than at any time in history. The Internet will also drive the emergence of the world’s first true cyber-cities, similar to the ones emerging in Singapore, Cyber-Jaya in Malaysia, and Silicon Valley in California.

Another example of a technology that has become widespread is virtual reality. Just a few years ago this was the subject matter of science fiction movies like The Lawnmower Man and Johnny Mnemonic. In the near future, it may become a normal feature of the working environments of most organizations in industrialized countries. While prototype technologies for virtual meeting places (VMPs) have been under development since the 1980s, many early systems had teething problems. These included blurred and jerky images, time lags between audio-sound and visual images on the screen, dim and underpowered images, and a lack of real-time eye contact as well as the loss of communication through body language. However, recent advances in augmented reality systems, in which graphical images are overlaid onto the physical world to overcome motion-sickness, the development of infinite reality graphics engines and daybright display systems, at Boeing, Philips, BMW, NASA and Daimler Benz, all promise to make these problems a thing of the past in the near future.

A computer-graphic representation of a conference room is one example of a modern VMP. To enter this environment, a person virtucommutes, by donning a lightweight head-mounted display and data-gloves. Other people appear in the environment as holographic images. It is possible to ‘move’ around the room, to ‘gesture’, ‘look’ someone else in the face and engage in ‘real’ conversations. Participants can ‘sit down’ on a virtual chair, ‘pick up’ a virtual pen, ‘write’ on virtual white-boards and ‘pass’ virtual memos. Objects within a virtual meeting place can be ‘held’ and ‘examined’. In a technological first, scientists in London and Boston, USA, ‘picked up’ a computer-generated virtual cube between them and moved it, each responding to the force the other exerted on it. The devices that allowed them to do this have been named ‘phantoms’, which recreate the sense of touch by sending very small impulses at high frequencies over the net, using newly developed fibre-optic cable and high bandwidths (abridged from Long, 2002). The virtual meeting place offers many benefits to users and significant value to organizations that adopt this technology. And, since the events of 11 September 2001 and subsequent terrorist acts, there has been an increased interest in these technologies in organizations.

The world’s first real-time video networking system, Access Grid Node, was launched in October 2001. This allowed groups at multiple sites to interact simultaneously and to share data and software systems. Data windows from participants’ laptops, PowerPoint images, DVD and video clips, and spreadsheets can all be integrated into these virtual meetings. In turn, these media can now be transmitted on plasma screens the size of an office wall, to create the first cybertapestry of virtual communication. Meeting in virtual environments, in the guise of personal avatars, is already commonplace, allowing people across the globe to interact and socialize in virtual reality. Three-dimensional modelling experts are revolutionizing virtual simulations of proposed building projects. Using powerful silicon graphics computers and advanced three-dimensional modelling software, architects can provide customers with 3D 180° wraparound views of new projects. The screen image is provided by three projectors, which create sharper and clearer images than ever before. This software is also capable of making real-time adjustments to the on-screen image and has the ability to quickly reconfigure aspects of a project’s image, such as adding an extra couple of floors or changing the colour scheme of a building (Lynch, 2002; Foreshew, 2001).

Virtualization and information networks will continue to evolve and their effects on organizations will be significant. High-speed data networks, electronic mail and groupware facilities are fast overcoming limits of time and geography. They are enabling companies to harness resources worldwide, achieve greater economies of scale and provide an instantaneous, global responsiveness to their customers and clients. By interconnecting everything with everything, networks exponentially increase the number of commercial relationships that businesses are exposed to. And, as we saw in Chapter 10, from these new knowledge relationships, new products, services, ideas and information can quickly emerge. Take the example of banks. Before the advent of call centres in the early 1990s, if you had a query about your banking facilities or accounts, you would have gone to the bank to talk to an adviser, or called your bank and talked to a bank officer. Today, you are expected to use your bank’s website facilities, or put your queries through centralized and largely automated call centres, responding to verbal instructions with the phone’s keypad. Such automation, operating under the acronym IVR (Interactive Voice Response), has eliminated the need for hundreds of tele-staff. It has been estimated that bank websites and automated call centres reduced the cost of customer relations by as much as 90 per cent between 1982 and 1992 (Darroch, 2002). In technologically advanced countries, close to 100 per cent of all customer transactions with banks, and other financial institutions, will be done via automated or electronic means by 2005. Many accountants and financial advisers may also find themselves out of work as intelligent software takes over the day-to-day management of our accounts, inventories, billing, salaries and tax returns. They may even make investment and share portfolio decisions on our behalf (Joy, 2000).

New technologies continue to emerge in many different forms, and are now being used in a multitude of ways. These include Sales Force Automation (SFA), which began to revolutionize front-office operations in the early 1990s, by enabling sales and customer liaison staff to take