Forster N. - Maximum performance (2005)(en)

480 MAXIMUM PERFORMANCE

technologies have failed to deliver on most of the promises made about them 30 years ago. For example, whatever happened to the extensive leisure time we should all have been enjoying in the 2000s, a scenario confidently predicted by many commentators in the 1960s and 1970s? Baby-boomers reading this book may recall something called the ‘leisure society’ that was going to emerge in the 1990s. Alvin Toffler, in his 1970 book, Future Shock, suggested that computers and robots would take over so many mundane and routine work tasks that most people living in industrialized countries would be able to start work at 25 and retire before they reached 50. They would all be independently wealthy, enjoy six months’ holiday a year and four-day working weeks, and might even require leisure counsellors to help them cope with their newfound freedom from the drudgery of full-time work. The 21st century reality is very different from this utopian vision. For most employees, new technologies have instead meant greater flexibility and multi-skilling, work intensification, ever-increasing expectations of higher performance and productivity, less job security, 24-hour accessibility, the blurring of work/family boundaries, longer working hours and far higher levels of occupational stress.

Furthermore, surveillance technologies allow organizations to monitor their employees secretly, and specialist snooping programmes are becoming widespread. Systems such as ProtectCom's Orvell Monitoring 2002 allow employers to monitor every website that employees visit and all emails sent and received. It can identify all the software applications used by employees, and can even monitor what is on their PC screens in real time (Klimpel, 2002). The new generation of interactive TVs will routinely monitor consumers' programming, viewing and purchasing choices, further diminishing personal privacy. With web access, video-on-demand and targeted advertising comes unprecedented power to collect data about consumers from the programmes and adverts they choose to watch (Hopper, 2001). Voice- and face-recognition systems are becoming commonplace. They can be found in most public spaces in industrialized cities around the world, raising the spectre of 'Big Brother' monitoring of people. There are also Global Positioning System satellites that can spot and monitor individuals from space, as portrayed in the 1998 movie, Enemy of the State. This scenario is no longer science fiction. In the UK, the waste-management company ONYX has, since 2000, used a satellite positioning system fitted to its garbage collection trucks to monitor the movements of its garbage collectors. In response to these developments, there have already been several legal cases in the USA concerning covert surveillance. Shortly before this book was published, the American Civil Liberties Union had been planning a class action against the use

LEADERSHIP AND PEOPLE MANAGEMENT 481

of covert video surveillance in workplaces as a violation of the Fourth Amendment of the US Constitution.

Privacy is dead – deal with it.

(Scott McNealy, CEO of Sun Microsystems, 2001)

These technologies also provide companies with the freedom to quickly uproot their operations and move to 'innovation hotspots', meaning that some businesses will gradually lose their national identities and loyalties. They have triggered a 'workplace implosion', with the destruction of many jobs and the rise of a new underclass, the 'techno-peasants'. Paul James has referred to the emergence of a '20/60/20 society'. In this society, a privileged minority of the population, an economic techno-elite of skilled knowledge workers, will have secure and well-paid employment. The bulk of the population may well be employed on a series of short-term contracts, as a peripheral or non-core workforce. The remainder will come to form an economically and technologically disenfranchised underclass in the near future. There are clear indications that this has already started to happen in most industrialized countries (Hamilton, 2003).

While all of these new technologies are extremely seductive and their progress is probably unstoppable, hardly any management and organizational researchers have begun to get to grips with their potential impact on organizations and the world of work over the next 20 years. This chapter has outlined some suggestions for a new paradigm that can, conceptually and practically, come to terms with the possible effects of these new technologies on both people and organizations. It is vitally important that we do this, because the first 20 years of the 21st century will be when we gain mastery over life (through the DNA revolution), over matter (through the quantum revolution) and over intelligence and creativity (through the bio-computer revolution). Later this century we will, in all probability, be redesigning the human race and perhaps, as Ray Kurzweil believes, become the first species in history to engineer its own extinction, by creating the next dominant life forms on Earth. However, three important and, as yet, unanswered questions remain:

What are the real benefits of new technologies?

Whose interests do they serve?

Can we retain control over new technologies, or will they control us in the future?

They improve productivity, but never seem to improve the quality of our working lives. They mean we are accessible 24 hours a day, but we never get a real break from work. They mean we are able to do more during the working day, but our work hours never decrease. We can access huge quantities of information and knowledge resources with amazing rapidity, but we all suffer from increasing levels of information overload and technostress. We can communicate instantaneously with anyone on the planet, but are exposed to ridiculous quantities of unsolicited spam, junk mail and computer viruses. We can buy labour-saving and communication gizmos by the score, but feel left behind if we don't buy the latest ones that appear with monotonous regularity on the market. We have a global Internet and a quasi-global economy, yet primitive nationalistic, religious and tribal forces continue to threaten the economic and political stability of our planet. We can communicate instantaneously with thousands of people, but may not know the names of our next-door neighbours. Standards of living, at least in industrialized countries, rise inexorably, but we may at the same time be destroying the fragile ecology of our planet.

Globally, inequalities of wealth grow year by year, and these will continue to cause conflict and war within and between nation states for many decades. Because of the remarkable growth of technological innovation in the 19th and 20th centuries, the citizens of industrialized capitalist countries enjoy the highest standards of living and material affluence in human history, and yet they have an insatiable – and apparently unquenchable – hunger for acquiring more and more things. Why? This is an important question to address, because research evidence accumulated over the last decade indicates that ever-increasing levels of material consumption have not made people living in rich industrialized countries any happier or more content with their lives over the last 50 years (for example, Hamilton, 2003: 22–92). If we are going to cope actively with the impact of new technologies on our working and personal lives, and use them to serve our best collective interests, these issues must be debated by politicians, policy makers, business leaders, intellectuals and the community at large. Unfortunately for humanity, there does not appear to be anyone with the vision, imagination or intellect to deal with these issues, for the simple reason that technological development has an unstoppable, inexorable impetus and life force of its own. This means that we have not even begun to address perhaps the biggest question that will face humanity in the first half of the 21st century: can we control the emergence and nature of new technologies and use them to improve and enhance our lives and organizations – or will they end up controlling us?7

The post-human world could be one that is far more hierarchical and competitive than the one that currently exists, and full of social conflict as a result. It could be one in which any notion of 'shared humanity' is lost, because we have mixed human genes with those of so many species that we no longer have any clear idea about what a human being is. It could be one in which the median person is living well into his or her second century, sitting in a nursing home hoping for an unattainable death.

Or it could be the kind of soft tyranny envisaged in Brave New World, in which everyone is healthy and happy, but has forgotten the meaning of hope, fear or struggle. We do not have to regard ourselves as slaves to inevitable technological progress when that progress does not serve human ends. True freedom means the freedom of political communities to protect the values they hold most dear, and it is that freedom we need to exercise with regard to the biotechnology revolution today.

(Francis Fukuyama, Our Post-Human Future: Consequences of the Biotechnology Revolution, 2003)

Sometime in the 21st century, our self-deluded recklessness will collide with our growing technological power. One area where this will occur is in the meeting place of nanotechnology, biotechnology and computer technology. What all three have in common is the ability to release self-replicating entities into the environment. We may hope that by the time they emerge, we will have settled upon international controls for self-reproducing technologies. But, of course, it is always possible that we will not establish controls. Or, that someone will manage to create artificial, self-reproducing organisms far sooner than anyone expected. If so, it is difficult to anticipate what the consequences might be.

(Abridged from the introduction to Michael Crichton’s Prey, 2002)

Before the 21st century is over, human beings will no longer be the most intelligent or capable type of entity on this planet. Actually, let me take that back. The truth of that last statement depends on how we define human.

(Ray Kurzweil, The Age of Spiritual Machines, 1999)

Exercise 11.2

Having read through this chapter, think about how new technologies may impact on your leadership and management practices in the near future, and how you will stay on top of emergent technologies over the next five to ten years.

The near future:

1.

2.

3.

4.

The next five to ten years:

1.

2.


3.

4.

 

 

 

Notes

1. This frenetic pace of technological innovation looks even more astonishing if we set it against the backdrop of the evolution of our planet. It is now believed that the Earth formed about 4.5 billion years ago, with the Moon being created from the impact of a Mars-sized planet about 500 million years later. Without the Moon's stabilizing influence on the Earth's erratic orbital spin at this time, it is highly unlikely that any life forms would have evolved. The first primitive single-cell creatures emerged about four billion years ago, but for the next two billion years evolution largely stood still. Approximately one billion years ago the first multicellular organisms appeared and, about 540 million years ago, there was an explosion of life forms during the Cambrian era. It is now believed that this was triggered initially by a massive asteroid slamming into southern Australia. This created mass extinctions, similar to the one that was later to wipe out the dinosaurs, but also created opportunities for new life forms to emerge (eventually including the first mammals).

By 200 million years ago a huge variety of plants and animals had appeared, including the dinosaurs, which reigned as the dominant land animals for millions of years, until another massive asteroid struck the Gulf of Mexico 65 million years ago, creating new opportunities for mammalian species to emerge and spread over the planet. About four million years ago, the first ape-like creatures appeared in Africa, hybrid primitive-modern humans appeared some 200 000 years ago and Homo sapiens about 130 000 years ago (reported in Nature, 423, 12 June 2003). The analogy that has often been used to illustrate this dramatic evolutionary acceleration is to compress the history of life on Earth into twenty-four hours. On this clock, multicellular organisms appeared in the last twelve hours, dinosaurs in the last hour, the first hominids in the last forty seconds, and modern humans less than one second ago.
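The twenty-four-hour analogy is easy to recompute. The following is a minimal Python sketch, assuming life began roughly four billion years ago (the date given in this note); the resulting clock times are approximations and differ slightly from the rounded figures quoted above:

```python
DAY_SECONDS = 24 * 60 * 60           # 86 400 seconds in the compressed "day"
LIFE_SPAN_YEARS = 4_000_000_000      # first single-cell life, per this note

def clock_seconds_ago(years_ago):
    """Map 'years before the present' onto seconds before midnight."""
    return years_ago * DAY_SECONDS / LIFE_SPAN_YEARS

# Approximate dates drawn from the note above
events = {
    "multicellular organisms": 1_000_000_000,
    "dinosaurs appear":          230_000_000,
    "first hominids":              4_000_000,
    "modern humans":                 130_000,
}

for name, years in events.items():
    s = clock_seconds_ago(years)
    print(f"{name:25s} {s:12,.1f} seconds before midnight")
```

On this scaling, modern humans arrive under three seconds before midnight, consistent with the 'less than one second ago' order of magnitude in the text.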

It is remarkable that the evolution of our species came about because of at least six massive and planet-threatening asteroid impacts millions of years ago. In addition to these, there have been many other cataclysmic events, such as the planet's magnetic polarity reversing several times, several highly destructive super-volcano explosions, lengthy periods of global warming, lengthy ice ages and, quite possibly, 'super-solar' flares hitting the planet and causing widespread extinctions in the past. These, in conjunction with plate tectonics, have all had profound short- and long-term effects on the climate and temperature of the planet and the evolution of animal and plant species. It is only because of mass extinctions, and other substantial changes during the evolution of the Earth, that a small and insignificant mouse-sized mammal was able to emerge and find an environmental niche in which it could survive; an animal that would, after hundreds of millions of years, eventually evolve into Homo sapiens. It is a miraculous accident that our species survived and evolved to colonize the whole planet. The fact that you now exist to read this note challenges all laws of probability.

2. Quiz answers: (1) Sunday 23 February 1997 saw the arrival of the first cloned mammal, Dolly the sheep. She died prematurely in June 2003. (2) On Monday 12 May 1997, IBM's Deep Blue supercomputer defeated the world chess champion, Garry Kasparov. (3) 24 August 1998 was the day that Professor Kevin Warwick became the first human being in history to have an implant inserted in his body that enabled him to communicate remotely with a computer. (4) 26 June 2000 was the day the first working draft of the human genome was announced. (5) 11 November 2001 saw the cloning of the first human embryo by the US biotech company Advanced Cell Technology. (6) 16 June 2002 witnessed the announcement of the first teleportation of photons by two Australian scientists (in theory, opening up the possibility of teleporting matter in the future). (7) On 15 March 2003, scientists with the Human Genome Project in Bethesda, Maryland announced that their work on mapping the human genome was complete. In essence, these men and women succeeded in identifying and sequencing about three billion base pairs of DNA (the chemical building blocks that produce human beings).

This genetic map will enable revolutionary breakthroughs to be made in biomedical sciences, and in the health and welfare of humanity. However, this marks just the beginning of a very long journey of discovery. We now have a basic understanding of what we are made of, but we are a long way from understanding how all this works. The quest now is to crack the far more complicated code of the human proteome: the library of information that creates proteins. To give you an idea of how difficult this will be, there can be as many as one hundred million proteins at work in a single human cell and several thousand of these can fit into the full stop at the end of this sentence.

Bonus points: In 1945, the mathematical genius Alan Turing (who worked on the Enigma code-breaking programme during World War II) first predicted that a computer would beat a human being at chess by 2000. Chess Grand Masters now routinely use computers for match analysis and practice, and can no longer compete without this back-up.

From the original Deep Blue project, IBM developed an even more powerful computer, Blue Gene, at a cost of $US100 million, to model the folding of human proteins in gene studies. This will be capable of multi-petaflop processing (one petaflop = one million gigaflops; one gigaflop is equivalent to the processing power of a single top-grade PC in 2003). In 2006, this machine will be capable of 1000 trillion operations a second. At the time this book was published, the world's fastest computer, built by NEC, could 'only' perform 36 trillion operations a second (Horovitz, 2002). In November 2003, this initiative was given a further boost when it was announced that the US government was to invest $US516 million in the development of Blue Gene and another computer, called ASCI Purple.
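The unit arithmetic in this note can be made explicit. A short Python sketch, using only the machine figures quoted in the text:

```python
# FLOPS unit prefixes
GIGA = 10**9      # one gigaflop = a billion operations per second
TERA = 10**12     # one teraflop = a trillion operations per second
PETA = 10**15     # one petaflop = a quadrillion operations per second

# One petaflop is a million gigaflops, as the note states
assert PETA // GIGA == 1_000_000

blue_gene_target = PETA          # projected Blue Gene: 1000 trillion ops/s
nec_machine = 36 * TERA          # fastest machine cited for 2002

# Blue Gene's target is roughly 28 times the NEC machine's speed
print(f"speed-up: {blue_gene_target / nec_machine:.1f}x")
```

The same conversion shows why 'multi-petaflop' was such a leap: the projected machine alone would out-compute tens of thousands of the top-grade PCs of 2003.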

What didn’t happen on 31 December 1999 was the meltdown of the world’s computer systems, as a result of the ‘Millennium Bug’. Amongst the very few events of note that occurred at this time were the following:

Andrea Scancaralla, a 29-year-old from Florence in Italy, fearful about losing his money after Y2K, withdrew all his savings from his bank account on 20 December 1999. Outside, two men on a scooter drove past and snatched his bag. He lost 11 million lira (c. $US4500) which was never recovered.

Alonzo Andersen, of Michigan in the USA, fearing possible post-Y2K shortages, decided to stockpile (along with other survival supplies) gas cylinders. On 16 December 1999, these exploded and completely destroyed his house.

3. Some animals, such as birds and apes, do use primitive tools, but they are unable to innovate with them.

4. Key to Figure 11.1 (Kurzweil, 1999: 22–3; reproduced with permission)

Mechanical computing devices
1. 1900 Analytical Engine
2. 1908 Hollerith Tabulator
3. 1911 Monroe Calculator
4. 1919 IBM Tabulator
5. 1928 National Ellis 3000

Electromechanical (relay-based)
6. 1939 Zuse 2
7. 1940 Bell Calculator Model 1
8. 1941 Zuse 3

Vacuum-tube computers
9. 1943 Colossus
10. 1946 ENIAC
11. 1948 IBM SSEC
12. 1949 BINAC
13. 1949 EDSAC
14. 1951 Univac I
15. 1953 Univac 1103
16. 1953 IBM 70
17. 1954 EDVAC
18. 1955 Whirlwind
19. 1955 IBM 704

Discrete transistor computers
20. 1958 Datamatic 100
21. 1958 Univac II
22. 1959 Mobidic
23. 1959 IBM 7090
24. 1960 IBM 1620
25. 1960 DEC PDP-1
26. 1961 DEC PDP-4
27. 1962 Univac III
28. 1964 CDC 6600
29. 1965 IBM 1130
30. 1965 DEC PDP-8
31. 1966 IBM 360 Model 75

Integrated circuit computers
32. 1968 DEC PDP-10
33. 1973 Intellec-8
34. 1973 Data General Nova
35. 1975 Altair 8800
36. 1976 DEC PDP-11 Model 70
37. 1977 Cray 1
38. 1977 Apple II
39. 1979 DEC VAX 11 Model 780
40. 1980 Sun-1
41. 1982 IBM PC
42. 1982 Compaq Portable
43. 1983 IBM AT-80286
44. 1984 Apple Macintosh
45. 1986 Compaq Deskpro 386
46. 1987 Apple Mac II
47. 1993 Pentium PC
48. 1996 Pentium PC
49. 1998 Pentium II PC

5. If you're thinking of setting up an e-commerce venture, there are several websites (for example, www.businessplanarchive.com and www.webmergers.com) containing information on the collapses of dozens of e-businesses during the dotcom crash of 2000–2002.

6. The title 'Gattaca' was derived from the letters of the four nitrogenous bases of DNA: guanine, adenine, thymine and cytosine.

7. For a detailed analysis of the moral, ethical and legal implications of the biotechnology revolution, see Fukuyama (2003).

12 Leadership and business ethics

Objectives

To define ethics and business ethics.

To look at the impact of unethical business practices on organizations, and their effects on economic development in industrializing countries.

To help you evaluate your business values and ethical beliefs.

To examine the implications of ignoring ethical issues when doing business in other countries.

To establish the business case for promoting high standards of ethical conduct in organizations, leadership and people management.

Introduction

The point is, ladies and gentlemen, that greed, for lack of a better word, is good. Greed is right. Greed works. Greed will save the USA!

(Michael Douglas, as Gordon Gekko, in Wall Street, 1987)

Greed is good. I think greed is healthy. You can be greedy and still feel good about yourself.

(Ivan Boesky, the arbitrageur, during a talk to business studies students at Berkeley, California, in 1986. Soon after, he was arrested, prosecuted and imprisoned for insider trading.)

In Chapter 1 we saw that honesty and integrity were two of the most cherished qualities of successful, respected and admired leaders, and it is in the area of business ethics that the true value of these qualities is fully realized. 'Ethics' is derived from the Greek word ethos, meaning character or custom. Ethics encompass things we are (or should be) familiar with: a sense of honesty and fairness, prudence, respect for and service to others, keeping promises, being truthful and developing business relationships based on trust and integrity. The study of ethics is concerned with disciplined inquiry into the basis of morality and law. Business ethics are defined here in the conventional sense, as that which constitutes acceptable behaviour in organizational, commercial and business contexts. Business ethics have four dimensions: legal, economic, social and personal. As an academic discipline, business ethics is concerned with the study of how personal values fit the cultural, moral and managerial values of an organization, and the environment in which it operates. In this chapter, we will look at several examples of unethical conduct in organizations, and the negative effects of these on business, capitalism and national economic development. We will then consider why business ethics have been gaining greater credibility in recent years, and why some business leaders now believe that the operation and management of their organizations must be underpinned by solid ethical standards and a sense of corporate social responsibility that goes beyond simply making money and generating profits.

The impact of unethical business practices on organizations

Unethical, corrupt and illegal practices have been part and parcel of doing business for centuries, in spite of the considerable damage that such activities have caused. In the 20th century alone, there were thousands of instances of these. For example, while the roles of the Swiss banking industry, German industrialists and the inactivity of the Papacy during World War II have been well documented, it is less well known that several major US firms were also complicit in collaborating with the Nazi regime. Prominent amongst these were General Motors and Ford. When American GIs arrived in Germany in 1945, they were very surprised to discover that the basic design of German army trucks was similar to their own. This was because they had been built to the same specifications by GM's subsidiary company, Opel. Henry Ford was an anti-Semite and a known admirer of Adolf Hitler, who in turn had a picture of Ford on his office wall in Munich and awarded him the Grand Cross of the German Eagle in 1938. A US army report by the war crimes investigator Henry Schneider, dated 5 September 1945, accused the German branch of Ford of serving as 'an arsenal of Nazism, at least for military vehicles', with the consent of the US parent company. It was later revealed that Ford and GM had done little to prevent their German subsidiaries from retooling their factories to provide war materials to the German army after 1933 (abridged from Dobbs, 2000). IBM's Hollerith card sorters were used to identify and classify Jews and other 'undesirables' in round-ups during the 1930s, prior to the genocidal holocaust that would follow during World War II. The CEO of IBM, Thomas Watson (another anti-Semite), did little to prevent the use of these machines for this purpose, and IBM quickly regained control of its German subsidiary and employees after the war ended (Black, 2001).

Moving forward into the 1970s, we find the case of the Ford Pinto. Soon after this new car was introduced, it was discovered that Pintos turned into fireballs when they were involved in low-speed collisions. The company discovered that a badly designed, poorly positioned and unprotected gas tank caused this. Ford's accountants worked out that it would have cost $US110 per vehicle to solve this problem (or about $137 million a year at that time). However, the company's senior management calculated that the cost of out-of-court payments and litigation would amount to only $50 million a year. So, even though Ford had a patent on a much safer petrol tank, the company did nothing until Ralph Nader exposed this scandal in the early 1970s. It was estimated that as many as 900 people burned to death as a direct consequence of this problem. Not surprisingly, the company's advertising agency, J. Walter Thompson, quickly dropped a line from the end of a Ford radio advertisement of the day: 'The Pinto leaves you with a warm feeling'. The court cases that followed this scandal led to multimillion-dollar payouts to the victims and their families (Dowie, 2002). Ralph Nader also forced the automotive industry in the USA to adopt seatbelts, airbags and crumple zones – all vigorously opposed by GM, Ford and the rest on the grounds of 'cost'. Several million people now owe their lives to this pioneering consumer advocate. He was also the first person to suggest (in 1987) that airlines should install secure cockpit doors to prevent terrorists from hijacking planes. All the major US airlines objected loudly to this proposal, because it would have added 50 cents to the cost of an average domestic airline ticket.
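The cost-benefit arithmetic attributed to Ford can be reconstructed from the figures quoted above. A minimal Python sketch; the implied annual vehicle count is a back-calculation from those figures, not a number given in the source:

```python
# Figures as quoted in the text (US$)
fix_cost_per_vehicle = 110          # cost to fix the tank, per car
total_fix_cost = 137_000_000        # estimated cost per year for the whole fleet
litigation_estimate = 50_000_000    # projected annual out-of-court payouts

# Back-calculate how many vehicles a year the $137m figure implies
implied_vehicles = total_fix_cost / fix_cost_per_vehicle
print(f"implied vehicles per year: {implied_vehicles:,.0f}")   # ~1.25 million

# The annual 'saving' management would have seen from not fixing the tank
annual_saving = total_fix_cost - litigation_estimate
print(f"annual saving from inaction: ${annual_saving:,}")
```

The calculation makes the ethical failure concrete: an $87 million annual 'saving' was weighed, in effect, against predictable deaths and injuries.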

More recently, in 2000, General Motors, du Pont and Standard Oil were accused of deliberately introducing lead into petrol in the 1920s, knowing that it would poison millions of people and cause brain damage in tens of millions of children throughout the world. They covered up their scientists’ findings on these dangers for more than 50 years. Although the use of lead in the USA was prohibited in 1976, it is still used in petrol in many industrializing countries. Ninety per cent of this market is now supplied by one British company, Octel (Brown, 2000). This conduct was compared to that of the tobacco companies who had systematically lied about the effects of their products on people’s health for more than 40 years, leading to the successful prosecution of most of the world’s major cigarette manufacturers during the late 1990s and early 2000s. Tobacco companies knew by the early 1960s that