I break rules in order to create public attention and public transparency.
I break rules in order for you... the public... to understand the system.
Originally published April 9, 2013...
Dark Secrets:
Face-work, Organizational Culture and Disaster Prevention
by
Marc S. Gerstein and Edgar H. Schein
All the world’s a stage,
And all the men and women merely players:
They have their exits and their entrances;
And one man in his time plays many parts.
Shakespeare, As You Like It.
Part 1: Face-work and other forces preventing reporting
Shakespeare had it
right and sociologist Erving Goffman went further: Men and women play many parts at the same time, each
representing his or her ‘self’ in different ways in different settings as well
as at different times. These differentiated
representations of self are called faces and the work we do to create
and maintain them is called face-work.
Organizations,
too, reveal different faces, perhaps aggressive in some dealings and compliant
in others. They may treat their employees, customers, suppliers, and regulators similarly, or with great
differentiation. Yet this facile summary is a vast oversimplification—the variations in any
organization’s face are far more subtle and just as variable as our
own—especially when it comes to matters of risk.
Since an organization is not alive, it is the behavior of its people that animates its
faces. Nevertheless, there is often great commonality in the way organization
members behave and in the faces they present to each other and to outsiders
when playing their organizational roles. This commonality is a key aspect of
the organization’s culture, the pattern of assumptions, beliefs, practices and
behaviors that define what it is like to live and work within a particular
setting.
Through
assumptions that are widely shared—even if they are not always visible or even
conscious to its members—an organization’s culture imprints its common ways
upon its community even across generations as people come and go. This
imprinting is particularly important when it comes to matters of safety and
ethics because organizational expectations and incentives sometimes encourage
behavior that people might otherwise avoid or even find repugnant.
One of the most
fundamental aspects of any culture is the set of rules by which we maintain
face and self-esteem. We are all trained early to believe that the social order
will not survive if we say to each other exactly what we feel. Instead, we have
learned to grant each other what we claim so that life can go on smoothly. Weaknesses,
errors, and sometimes behaviors that are far worse are overlooked unless we
specifically set out to criticize them.
This same tendency to uphold each other’s positive self-images and the
social order that depends on them occurs within government, the military, and
industry, even among organizations that compete with one another.
In consequence, organization members avoid reporting the worrying and unethical things they see not only out of fear, greed, or social pressure, but also because they don’t want to acknowledge things that are wrong and don’t want to upset themselves, their bosses, or colleagues by pointing them out, even though grievous harm may be done or laws broken.
We may all love to
jokingly criticize what we do wrong or complain about how awful things are in
our workplaces. But we then rationalize it away with a smile and a shrug of the shoulders (Weeks, 2004). Taking our observations seriously by reporting them might be too anxiety-provoking and disruptive even if we were not punished for
doing so. In this sense all groups, organizations, and societies have faces
just as individuals do, and we learn not to destroy these collective faces just
as we learn not to destroy those of individuals.
An insider exposing organizational secrets is thus a ‘social revolutionary’, so it should not be surprising when she gets a violent response from the establishment. From the vantage point of most leaders, it is better to have one’s wrongs discovered by an outsider than to have them revealed by one’s own members. Insiders who do reveal them—whistleblowers—are considered disloyal and are usually severely punished.
From this
perspective, insider-reported misbehavior should not really be expected unless
lawmakers, regulators, auditors, leaders and managers make special efforts to create
the structures, incentives, and requisite sense of safety to counterbalance the
powerful drives of face-work that keep embarrassing secrets from getting out.
People also have
other reasons not to speak up. We show in Table 1 an extensive list of reasons why risks are not surfaced by insiders. The length of this list is testament to
the ubiquity and strength of the forces that maintain silence. After the table,
we will focus in greater depth on the particular cultural role of face-work and
its implications for solutions.
Table 1.
Reasons Why People Don’t Report Hazards,
Safety Violations and Acts of Malfeasance
Edgar H. Schein & Marc S. Gerstein
INDIVIDUAL
Self-interest
- Concrete rewards discourage reporting of facts or judgments concerning risk because it might affect employment, bonus payments, or chances for promotion. (Also see ‘Fear’, below.)
Ignorance
- Lack of awareness or understanding of the relevant conditions and warning signs.
Carelessness or indifference
- Too much trouble
to bother—need or desire to remain aloof and uninvolved.
- Lack of energy
or motivation to see it through if not initially listened to.
- Complacency,
indifference to problems—’who cares if they screw up, it doesn’t affect me.’
Machismo
- Visible signs
are considered trivial, and to report them would appear to be a sign of
weakness.
Resignation/apathy
- Belief that
‘they never listen anyway’ or ‘I’ve tried before and they didn’t listen then so
they
won’t listen now.’
Overload
- Too much is
visible that could be reported, so nothing is reported. The truly important
becomes lost in the clutter of the mundane.
- Unwillingness to
expend the extra energy required to look into an abnormal condition and fix it if
something is found.
- Just as the
subordinate may be unwilling to ‘rock the boat’, ‘make waves’, or ‘turn over rocks’,
the boss also does not want to dig too deeply out of concern about what might be
found.
Fear
- Fear of being
punished for bringing bad news (shooting the messenger).
- Fear of personal
embarrassment or criticism from higher-ups that something may not be right in
one’s own area of responsibility whether or not the source of the problem is beyond
one’s control.
- Fear that what I see may not be valid, leading to embarrassment or to a perception of oneself as having poor judgment or lacking expertise.
- Fear that what I
report may be denied (making me look foolish) or I may be asked to prove it by mustering
more proof than I have (making me look unprofessional for being unprepared).
- Fear of reprisal
arising from raising a point of view counter to prevailing dogma or in
opposition to policy or operational decisions concerning the nature of risks
being faced.
- Fear of
legitimizing the presence of ‘undiscussable’ risks or conditions whose presence
higher-ups wish to deny.
- Fear of getting a reputation as a trouble-maker, complainer, or worrier, and therefore less likely to be considered loyal and team-spirited. More instrumentally, concern that pointing out problems will reduce chances of being re-hired (as in the case of merchant seamen), given choice assignments, or promoted.
GROUP/INTERGROUP
Unwillingness to challenge, disrupt, or embarrass one’s group, department,
or the organization
- Unwillingness to
challenge the ideal self-image because it is too threatening to consider that
things are not working properly. (This leads to an initial attitude of
skepticism toward any negative feedback and asking for escalating levels of
evidence or proof, i.e., defensively ignoring ‘weak signals’.[1])
- Reluctance to
‘make waves’ by upsetting the social order.
- Fear of being
ostracized by the peer group for telling something bad that is going on in the
group.
- Loyalty to
buddies or to one’s superiors who might be hurt or embarrassed by the new
information.
Hostility to others laterally or upward
- Not reporting
something in order to let others get into trouble, or not reporting something
soon enough, knowing that it will get worse if left alone.
- Anger—‘they
deserve to fail’, etc.
- Passive-aggressive
complacency—‘who cares if they screw up’.
Intramural rivalry and competition
- Cutting corners
and not reporting known risks in order to save time and money or to make one’s
own group look better.
Structural/cultural barriers to systemic risk identification and
team-based diagnosis
- Organization
design or cultural factors encourage stove-piped views of the enterprise and
inhibit the identification of potentially dangerous interaction effects.
Productivity pressures
- Unwillingness to
report hazards or attend to potential risks due to perception and/or reality
that doing so will reduce productivity and efficiency.
ORGANIZATIONAL
Normalization of deviance
- Unacceptable
levels of risk persist without incident and thus become acceptable conditions
that do not merit special attention.
- Lax regulatory
or other enforcement redefines the de facto meaning of rules and regulations.
Weak or compromised safety, compliance, audit, and risk management
functions
- Formal risk
control groups lack the power, motivation, and resources to compel action or
set demanding safety, risk control, and ethical standards.
- Incentives and
rewards for risk control and compliance groups encourage greater risk-taking or
strongly motivate ‘pleasing the client’ rather than ensuring high risk control
standards.
Weak protections for truth tellers
- Personnel
wishing to raise concerns lack adequate advice, counseling, legal assistance
and protection from management or peer group retaliation.
- Lack of
meaningful punishment (and sometimes rewards) for retaliators.
‘No-win’ demands for managers and supervisors ‘in the middle’
- Supervisors and managers are caught between pressures to simultaneously meet production, quality, and financial targets and to uphold safety and ethical standards. Faced with unavoidable real-world trade-offs, instead of help and support they are told, ‘You take care of it, that’s what you are paid for.’
Part 2: The role of leadership, outside forces, and regulation
by Marc Gerstein
Beyond the factors
listed in Part 1, further erosion of safety and ethics often occurs when
actions are undertaken in the name of desired organizational outcomes initiated
from the top. Cost cutting, in particular, gives rise to all manner of risky
actions, especially since delaying inspections; skimping on upgrades,
maintenance and training; increasing staff workloads; and other commonplace
expense reduction initiatives tend to increase danger by subtly changing the
odds of an adverse outcome over time. For example, training and equipment
cut-backs contributed to the mistaken 1994 shoot-down by U.S. Air Force F-15s
of two U.S. Army helicopters carrying a contingent of twenty-six high-level
allied dignitaries, soldiers, and air crew on a tour of the no-fly-zone in
northern Iraq after the first Gulf War. This shootdown is considered by many to
be the most serious friendly fire accident in U.S. military history (Gerstein,
Ellsberg & Ellsberg, 2008: 146-169; Snook, 2000).
The inability of
the fighters and the helicopters to communicate with each other and the failure
of the AWACS crew to identify the helicopters and notify the fighters—especially since the extra wing tanks made the helicopters look more like enemy Iraqi helicopters—is
perhaps as attributable to the cost-cutting efforts that had been occurring
over several years as it is to human errors. For example, financial pressures and
budget cutbacks precluded the rewrite of AWACS pre-mission training simulator software
to conform to the actual mission’s requirement to manage routine helicopter flights.
In addition, limited funds precluded radio equipment upgrades that would have
allowed the helicopters to communicate with the F-15s’ more advanced combat radios[2],
and also eliminated a liaison role between the army helicopter and air force commands
that might have improved coordination between the two services.
Although only a small minority of leaders would willfully harm their organization’s members, customers, or the public, most of them (like the rest of us) are nevertheless likely to focus on their own priorities rather than those of others. Some people of power, however, are willing to take self-interest considerably further. Daniel Ellsberg helps explain their possible motives:
When the potentially disastrous gamble offers the possibility of
avoiding loss altogether, coming out even or a little ahead; and when the
alternative to taking the gamble assures the certainty of loss in the
short-run, a loss that impacts the leader personally.
The sure loss that is rejected may appear small or even trivial to an
observer, compared to the much greater damage, perhaps societally-catastrophic,
that is risked and often subsequently experienced. The latter damage, however,
may be to ‘other people’, outside the decision-maker’s organization or even
nation, and inflicted in ‘the long run’: thus, less easily attributed to this
decision-maker, who may well have moved on by the time of the disaster. In
effect, the decision-maker acts as if a sure, short-term loss to his own
position—a perceived failure, risking his job or reelection or his
influence—were comparably disastrous to a possible social catastrophe that costs
many lives: an avoidable war, a widely-used drug that proves to have lethal
side-effects, a dangerous product, the explosion of a nuclear plant or space
vehicle.
In the leader’s eyes, both of these outcomes are ‘disasters.’ One of
them, resulting from a particular course of action, is sure to occur. The other
is uncertain, a possibility under another course of action, though perhaps very
likely; and it is combined with the possibility, not available with the other
course, of coming out even or perhaps ahead, winning or at least not losing. In
choosing the latter option, he sees himself as accepting the possibility of a
loss—in hopes of coming out even—rather than accepting a certainty of a
failure, defeat. It seems—and it is so presented to him by some advisors—a
simple, inescapable decision, ‘no real choice’: the possibility of winning, or
at least of avoiding or postponing defeat, versus a ‘no-win’ course of action,
or worse, a sure loss in the short run. He and these advisors simply ignore the
fact that the scale of the respective losses, and who it is that mainly suffers
them, are vastly different in the two courses (Gerstein, Ellsberg & Ellsberg,
2008: 287-288).
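One stylized way to restate Ellsberg’s logic is in expected-value terms. The sketch below is purely illustrative, and its symbols are hypothetical quantities rather than Ellsberg’s own formalism: let $L_s$ be the sure short-run loss to the leader personally, $L_c$ the possible catastrophic loss (borne mostly by others), $p$ the probability of catastrophe, and $\alpha \ll 1$ the small fraction of the catastrophe’s cost that ever falls on the leader himself, since blame is diffuse and he may have moved on by the time disaster strikes. The leader prefers the gamble whenever

\[
p \,\alpha\, L_c \;<\; L_s ,
\]

a condition that can easily hold even when the social comparison runs the other way, $p\,L_c \gg L_s$. Because the catastrophe is discounted by $\alpha$, the gamble appears to the decision-maker as ‘no real choice’ while remaining, in expectation, vastly worse for everyone else.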
Many of the major
disasters we have researched—Bhopal[3], Space Shuttles Challenger and Columbia
(Gerstein, Ellsberg & Ellsberg, 2008: 11-22, 66-91), Chernobyl (Gerstein,
Ellsberg & Ellsberg: 91-125), Thailand’s preparations for the 2004 tsunami
(Gerstein, Ellsberg & Ellsberg: 240-245; PBS News Hour With Jim Lehrer,
2005), Vioxx (Gerstein, Ellsberg & Ellsberg: 126-142; United States Senate Committee
on Finance, 2004; Loudon, 2005), Xerox’s extensive accounting fraud (described
below), among others—fit Ellsberg’s description: the taking on of a significant risk in order to avoid a guaranteed short-run failure, whether a major financial loss, political embarrassment, or personal humiliation.
Beyond taking
imprudently large risks, leaders may also contribute to harm by engaging in
‘damage control’: covering up or otherwise seeking to minimize the negative
impact of organizational errors, misjudgments, or acts of malfeasance about which
they may or may not have been aware but which nevertheless occurred on their
watch and for which the organization will pay a price and current leadership may well be blamed (Gerstein, 2010).
In the 2005 BP
Texas City Refinery explosion of vent stack gases that killed 15 and injured
180, for example, the company first blamed employees by firing the workers it held responsible, then claimed that it could not have anticipated the complex interaction
of events that led to the explosion. As the government-led investigation and
legal cases unfolded, however, it was revealed that the plant had previously sustained
a number of ‘near miss’ incidents related to vent stack over-filling, failed to
fix faulty instrumentation linked to the accident, repeatedly postponed major equipment
upgrades to save money, violated its own rules concerning the safe location of
employee work trailers on the site as well as the explicit restrictions on the
use of motor vehicles in the vicinity of explosive vapors, and ignored repeated
union safety complaints. Not only had leadership deliberately and over many years put employees and the facility at considerable risk to keep costs down, it also tried to shift the blame for one of the U.S.’s most serious
industrial accidents onto the victims (Gerstein, Ellsberg & Ellsberg, 2008:
142-145).
While preventing
and mitigating disasters are our clear priorities, the exposure and punishment
of those who directly contribute to harm by their actions would likely aid such
prevention. Unfortunately, such accountability occurs far too rarely, even when thousands die.
Organizational culture and ‘dark secrets’
Let us now look
more closely at the cultural impacts on reporting. Through an understanding of
face-work, we realize that the representations that individuals and organizations
make to the outside world are not always consistent with all the known facts in
their possession. We all ‘put our best foot forward’ when making a presentation,
selling ourselves or our organization, or writing ad copy.
Representing oneself and one’s organization in the best light is not the exception, it is the
rule, and most organizational leaders have learned that projecting a positive self-image
eventually leads to more effective performance. Naturally, once we develop a
positive self-image and express it to others we work zealously to preserve it.
Taking some liberties with the facts—or at least not revealing all that we
might know or suspect—is not only natural, it very much appears necessary when
others do the same.
We all know this,
of course, which is why claims such as ‘Brand-X detergent gets your clothes
“whiter-than-white” ’ are not always taken literally, or even seriously.
But what if,
hypothetically speaking, as a 1960s-era employee of a big soap company you knew
of research that showed that phosphate-based so-called whiter-than-white detergents
contribute to the growth of algae in lakes and rivers, a process known as eutrophication
that results in the premature aging of these waters? Would you go to your
bosses? Or to the environmental authorities or the media if the company didn’t heed
your warnings?
While the U.S.
Environmental Protection Agency did eventually break this story in 1970, thus
sparing would-be whistleblowers from possible company sanctions arising from
their public disclosure, it’s not much of a stretch to say that most employees
would keep silent, bystanders to potential harm. While phosphate pollution may
not appear to affect health and safety in a big way (although it’s arguable
that water quality is an important environmental issue), the difference between phosphate contamination of surface waters and tobacco’s injurious impact on cancer and health, cold remedy ingredient PPA’s relationship to hemorrhagic stroke[4], and gasoline’s tetra-ethyl lead (TEL) pollution of the air and soil is one more of extent than of kind (Bryson, 2003: 149-160; Kitman, 2000; Hamilton, 1997; Agency for Toxic Substances and Disease Registry, 2007).
In particular,
TEL’s toxicity had been known to its manufacturers since the 1920s, and its widespread environmental effects had been publicly known since the 1950s. Nevertheless, it
took another 45 years—over 50 years from the initial rash of TEL manufacturing
deaths—for it to be banned as a fuel additive in the U.S. Today we know that
lead is extremely dangerous, even in minute quantities.
During the sixty years TEL remained in widespread use, the powerful lead lobby promoted doubt, funded self-serving research, and prevented the leading expert on atmospheric lead, Clair Patterson, from participating in the 1971 U.S. National Research Council investigation of lead poisoning, among other delaying tactics. (Patterson also found his research funding withdrawn, and the trustees of his university were pressured to fire him or keep him quiet.) Merck used the same methods against Vioxx critics Stanford Medical School professor Dr. Gurkirpal Singh and Cleveland Clinic provost Dr. Eric Topol (Gerstein, Ellsberg & Ellsberg, 2008: 130-131).
While such tactics appear extreme, the harm done by a relatively small
number of institutions is startling: the tobacco industry kills 5.4 million
people a year from lung cancer, heart disease and other illnesses, a number that
will increase to more than eight million a year by 2030, according to WHO (Why
is tobacco a public health priority?; Deaths); TEL has demonstrably lowered intelligence
(Canfield, 2003) and increased antisocial behavior for many children who were
exposed to lead poisoning from vehicle exhausts; the gas leak at Union Carbide’s (now Dow Chemical’s) Bhopal, India chemical plant killed 7,000 people and injured 555,000, 100,000 of whom suffer from untreatable chronic conditions and 15,000 of whom have died over the years from lung cancer, kidney failure, and liver disease (Amnesty International, 2004); and Merck’s Vioxx is estimated to have killed between 26,000 and 55,000 people and caused well over 100,000 heart attacks before
it was withdrawn from the market in 2004 according to testimony by Dr. David
Graham, senior drug safety researcher at the FDA (United States Senate
Committee on Finance, 2004).
Although many organizations do not make, use, or trade in products that
can produce grievous harm even in worst case scenarios, there are a surprising
number of industries in which massive harm is possible. These include food
production, distribution, and retailing; health care; pharmaceuticals and
medical devices; automobiles; air travel; transportation; natural resources and
mining; marine shipping and passenger transportation; energy; heating
equipment; home products and construction; toys; and banking and finance, among
others.
To these industries we must also add the governmental and
non-governmental regulatory or watchdog organizations that exist to protect us.
Failures there, such as the subprime crisis that went unrecognized by the
entire spectrum of U.S. and international financial regulators and credit
rating agencies, eloquently underscore the watchdog’s critical role.
While the risks to members and the public posed by some organizations
are obviously larger than others, to sustain their self-image all organizations
engage in protective behavior to some degree. They may:
• Repress and deny bad news, or the possibility of bad news (Cases
include Enron, BP Texas City refinery, Merck’s Vioxx, NASA).
• Promote doubt, hide data, and support biased, self-serving research
in pursuit of financial gain and avoidance of blame for harm. (Cases include
asbestos, tobacco, PPA, TEL, Vioxx, Columbia University Medical Center
(Gerstein, 2010)).
• Take advantage of scientific uncertainties and political debates to
promote their own interests, continuing their traditional business models for
as long as possible, even when there are viable alternatives. (The most visible
case is tobacco, but cases also include beryllium, lead, mercury, vinyl
chloride, chromium, benzene, benzidine, nickel, and many other toxic substances (Michaels, 2005)).
• Employ legal, public relations and political options after a tragedy
or disaster to avoid blame and liability as well as minimize compensation to
victims and their families. (Cases include asbestos, lead, PPA, tobacco,
Merck’s Vioxx, BP Texas City, Columbia University Medical Center).
And to some degree all organizations expect their members to demonstrate immoral loyalty by:
- Abandoning their escalation of concern if those in charge accept the identified large-scale risk or deny it in spite of the evidence. (This applies to government agencies, NGOs, regulators, and watchdogs as well as to commercial firms.)
• Minimizing the relevance (or simply covering up the existence) of compromising
research or evidence, especially when results were known by superiors.
• Remaining silent about known dangers or acts of malfeasance. If
necessary, equivocating or lying to authorities to cover up the transgression
to protect the organization and their superiors.
These characteristics sound melodramatic, and perhaps they are when
stated this baldly. Yet there seems little doubt that having some secrets
increases one’s chance of success as well as helping to preserve the existing social
order upon which one’s success so often appears to depend. Inevitably, however,
this means that among these secrets some organizations will have so-called dark
secrets—those harmful or unethical aspects of organizational practice whose
revelation would deeply threaten the external faces that any organization, public
or private, presents to its constituents—members, customers, investors, regulators,
and the public. When dark secrets exist, one’s external face cannot be wholly
truthful.
In addition, since dark secrets tend to engender cover-ups to contain
them, there are additional rounds of secrets to be kept and increasingly more
people who must keep them. Faced with evasive responses from their bosses, most
organization members learn that certain topics are ‘undiscussable’, and it is best
to keep one’s questions and criticisms to oneself. The near universal and often
ruthless retaliation against whistleblowers is a convincing argument that keeping
your head down is usually wise.
As human beings have confronted such situations over millennia, it
seems likely that we have evolved not only to create secrets but to be ‘loyal’
by protecting those of our family, clan, village, and tribe. Betrayal by ancient
man almost certainly meant ostracism and, with it, the likelihood of suffering and
death. We have carried this ancient fear of banishment into the present day,
extending our loyalty to the workplace. Such loyalty helps explain why so many of
us remain bystanders even in the face of unambiguous harm to others or ethical
transgressions we know to be wrong. To permit us to live with ourselves, we
have developed a capacity to psychologically deny or rationalize our
complicity. Denial and rationalization are virtually universal human characteristics.
An interesting question is whether organizations follow these same
practices. We know that to maintain the social order we work to maintain each
other’s faces and avoid exposing faults or embarrassing facts unless we wish to
cause harm. Do organizations also protect one another, preserving their dark
secrets and expecting others to do the same? History suggests that when an
industry or government body shares a common but questionable practice, such as
unregulated lobbying, or a profitable but dangerous raw material, such as lead,
asbestos, tobacco, or PPA, it is likely that they will support one another by
going to great lengths to protect their shared secrets for as long as possible.
Some possible ways forward
The challenge we face as shapers of organizations is how to harmonize
the unavoidable need for secrets and the loyalty they command with the equally important
need to protect the public interest and create a safe, moral and just society. While
the preservation of the social order – or at least its orderly change – is arguably
essential for stability, it must not come at the cost of ethical conduct or result
in grievous social harm.
Two major barriers must be surmounted. First is the obvious need to
transcend the social order’s rules about keeping secrets, particularly in the
workplace where the drives toward conformity and bystander behavior are unusually
potent because of the normative power of these cultures and the rewards and
punishments created to ensure compliance. The second and far more difficult
challenge is to deal with those risks and unethical practices instigated by
leadership itself. Such initiatives typically harness the full power of the
organization’s structures, systems and rewards. Over time, they also co-opt and
poison the culture itself.
Fortunately, the most numerous and straightforward cases involve
preventing the potentially dangerous or unethical acts committed by employees
outside the purview and against the wishes of their leaders. Such actions
inevitably run counter to the organization’s safety protocols and ethical
standards and are simply undesirable by any objective standard. Unfortunately,
however, most organizational remedies are conceived and implemented piecemeal
when a systemic approach is actually necessary because organizational behavior
tends to be over-determined, the product of many overlapping forces, and thus
largely immune to simple solutions such as policy statements, rule-making, and
training. While the more routine actions listed in Table 2 will address many of
these undesirable behaviors, simple fixes will not dismantle the rewards for risky behavior, eliminate the social benefits of remaining a bystander, or reduce the freedom of most public or private sector managers to arbitrarily punish ‘disloyal’ subordinates.
Beyond these already formidable obstacles, history has shown that some
of the greatest harm is created as a side-effect of misguided leadership
policies[5] or as a set of deliberate acts conceived to save money, make money, or
achieve other legitimate organizational objectives, albeit by questionable and
sometimes illegal means. Considerably stronger medicine is needed to address
such transgressions.
Creating a robust context for safety and ethics
In simplest terms, to change toward organizations in which information
about hazards and malfeasance is willingly reported, one must overcome the
cultural rules that make us want to believe in ourselves and maintain our
self-esteem and the existing social order. Put this way, it should be clear
that only a force that is partially outside of the existing
organization’s or industry’s culture can initiate and sustain such change.
The most important changes needed to create the proper incentives are:
1. Strengthening the responsibility, independence and power of external
oversight.
2. Ensuring that internal whistleblowers and truth-tellers are free
from the retaliation that serves as both a punishment for ‘disloyalty’ and as a
warning to others.
Creating such change is a burden that
clearly falls to each organization’s top leadership, to its employee
representatives or union (if it is more than a ‘company union’), and to
law-makers, regulators, auditors, and external watchdogs.
On both counts, many will claim that changes are unnecessary because an
extensive web of laws, regulations, and professional practice support systems
are already in place. While a considerable infrastructure certainly exists,
there is extensive evidence that it often fails when it is most needed.
For example, virtually all of the major U.S. accounting-based
frauds—Waste Management, Enron, Xerox, among others—involved collusion on the
part of their audit firms.[6] Similarly, according to many critics the U.S.
Food and Drug Administration’s Office of New Drugs played a complicit role in
the Vioxx crisis. Dr. Sidney Wolfe, Director of the Public Citizen’s Health
Research Group, has argued that many re-labeling compromises negotiated by FDA,
such as that affecting the use of Vioxx at the high doses known to increase the
risk of heart attacks five-fold, effectively sustain dangerous drugs on the market even when there are
safer alternatives (Wolfe, 2006; Molotsky, 1987).
In addition, the frequent criticism that private sector boards of directors are often too close to their CEOs appears valid, and their involvement in matters of risk is especially important when auditor or regulator independence is compromised.
After Merck announced the withdrawal of Vioxx in September 2004, for
example, its stock price dropped precipitously and did not return to its prior
relationship to the S&P index for approximately three years. Since dramatic
stock price drops in the wake of a crisis tend to ‘reset’ market price levels,
and such resets effectively deprive shareholders of a significant return on
their investment, it is the board’s fiduciary responsibility to try to prevent
them.[7]
While Merck has continuously defended its actions in the Vioxx case,
court papers and congressional testimony reveal that well before the drug’s
withdrawal the company was in possession of worrying clinical trial data, had systematically intimidated critics, violated its own rules regarding the
professional staffing of its clinical trials, and published positive research results
that were based upon questionable research protocols (Gerstein, Ellsberg &
Ellsberg, 2008: 126-142). In light of these facts, Merck’s board arguably
failed in its responsibilities for not having put in place systems to detect such widespread transgressions in
a drug that was such an important engine of company revenues and shareholder
value. Like other ‘insiders’, board members are expected to keep secrets
when, in fact, their larger responsibilities demand that they sometimes do
exactly the opposite.
Similarly, the late 1990s Xerox accounting fraud was characterized by the SEC as one of the most blatant because of its duration, the company’s lack of cooperation with the government investigation, and the complicity of company executives including board chairman and former CEO Paul Allaire.[8] KPMG, Xerox’s auditors, paid a large fine for its role, as did a number of company executives, including Allaire, all of whom were barred from serving as an officer or director of a public company for a period of time.[9]
However, when the possibility of wrongdoing prompted an SEC investigation in the summer of 2000, an in-depth internal investigation by the board’s audit committee would likely have revealed the lies Xerox was feeding
the media. In fact, by that time James Bingham, a Xerox assistant treasurer,
had written a scathing memo and presented his conclusion to Xerox’s CFO and
other executives that there was a ‘high likelihood’ that the company had issued
‘misleading financial statements and public disclosures.’ Bingham was fired
immediately thereafter—which should itself have been an important red flag for
the board. In spite of the seriousness of the fraud, and despite the symbolically ‘large’ fines, none of the participants was required to admit guilt, and no criminal penalties were involved. One must wonder whether the relatively modest sanctions for both Xerox staff and KPMG will dissuade others from similar acts (Bandler & Hechinger, 2001).
As much as a governing board can and should do, however, it remains the responsibility of an organization’s operating head and its top managers to specify and reinforce the behaviors that will lead, in time, to a change in culture. We assume, of course, that management will issue the proclamations and establish the policies that appear, on paper at least, to require high standards of behavior and encourage truth telling.
While policy statements are necessary and useful, the difficulty arises with the practicalities of achieving them, especially when people are under time and financial pressure or are encouraged by performance measures or incentives to engage in risky practices.
Unfortunately, there are no simple answers to conflicts between the
drive toward organizational success and safety or ethics. In the infamous B.F. Goodrich
A7D Corsair II attack aircraft brake scandal, the company won an important
contract on the basis of an innovative but unproven low cost brake design.
Unfortunately, the new design didn’t work. Rather than take their lumps and
redesign the brake system after internal tests failed (as eventually occurred),
BFG chose to falsify test data, a step that only delayed the inevitable (as
well as violating U.S. federal law). In a subsequent real-world test, the brake
overheated and fused, causing the test aircraft to skid dangerously 1,500 feet
down the runway upon landing. A Senate investigation followed, although no one
at the company was indicted and the managers who ordered the falsification were
promoted, a surprisingly common occurrence. Unfortunately, Kermit Vandivier,
the BFG whistleblower who brought the fraud to FBI and Senate attention, fared
much less well (Vandivier, 1972).
The plight of whistleblowers
As with Vandivier and Xerox’s Bingham, much has been written about the
severe punishment whistleblowers receive after revealing dark secrets (Alford,
2001; Glazer, 1983). The U.S. federal government is perhaps the most striking
example of failure to support dissenting employees despite superficially good
intentions. The various attempts to protect U.S. federal government whistleblowers in law have all had limited effect in practice because the legislation excludes certain groups, such as the intelligence services, and provides inadequate protection against retaliation, most notably by limiting access to trial by jury in federal court as an escalation beyond existing bureaucratic administrative procedures.
While the situation is arguably somewhat better in the U.S. private
sector, the retaliation against corporate truth-tellers is much the same—and
perhaps more arbitrary because of near universal employment-at-will and the
enormous expense and time involved for restitution through the courts which, in
any case, place the burden of proof on the employee. Even professional associations,
such as those for engineers, which might be expected to stand up for their
members on matters of safety and ethics, fail to defend them. While members must
often sign their association’s ethical code of conduct, those who abide by it
by speaking out are often on their own if they are punished or dismissed.
In contrast to such iniquitous results, one might think that employees
revealing the truth in areas in which management wants visibility—primarily
safety and malfeasance—would be free from retaliation. That is incorrect. While
there are certainly cases of danger or individual misbehavior that can be
safely reported, as the B.F. Goodrich, BP Texas City, and Xerox fraud cases
illustrate, many dangerous or unethical acts are performed with tacit or
explicit management permission in order to accomplish a sanctioned
organizational objective by dubious means. Management’s desired response to an
employee’s discovery of the dark side of such an initiative is for the employee to go along with it and keep his mouth shut. Any alternative behavior is seen as disloyal and
often grounds for reprimands, sanctions or dismissal.
Conclusion
Since organizational culture is formed primarily through the pattern of
leadership actions rather than through words alone, it must be these actions
that counter the ‘normal’ range of individual, group, and organizational
phenomena that act in concert to preserve the social order and the organization’s
dark secrets. This means that the successful reporting of hazards and ethical
violations requires that behavior typically considered countercultural must be
transformed to become culturally acceptable.
In particular, conditions must be changed such that truth-tellers are
confident that they are not putting their careers and families at risk by
speaking out. Counterintuitively, this is much less about protecting the
rights of those few who may be forced by an unresponsive management to ‘go
public’ than it is about creating an organizational culture that values dissent
and protects all truth-tellers from punishment, even those whose concerns
ultimately prove ill-founded. Therefore, one must view the recommendations summarized
in Table 2 as intended for those leaders who are dedicated to greater transparency
and dissent. Even for such committed leaders, however, achieving success will
be difficult. External oversight and strong sanctions for truth-teller retaliation
should therefore be considered ‘critical success factors’ (Rockart, 1986).
For the reasons we have described in this paper, while progress can be
made, we believe that changes in
voluntary reporting cannot be sustained over time by either internal or
external actions alone. Unless we concurrently alter both internal organizational
culture and the larger societal context that contains it, our struggle to prevent
disasters will ultimately remain a losing one.
Table 2.
Specific Suggestions for Committed Organizations
Edgar H. Schein & Marc S. Gerstein
A. Leadership and governance
- Explicit main board-level responsibility for safety and risk, supported by regular external audits and reviews in finance, safety, and risk by truly independent firms.
- Clear executive
leadership policy position regarding safety and risk.
- Political and
financial support for government and NGO regulators and other watchdogs to set standards, provide oversight and critique, and impose sanctions.
- Top management
reinforcement actions to ensure that safety and open communications are
genuinely rewarded (praise, financial rewards, responsibility
taking, consistent promotion practices, etc.)
- Policy of fair compensation for victims of whistleblower retaliation, consistent with the true harm done (loss of long-term employment, psychological harm, etc.)
B. Education, training, and review
- Top management
education regarding the inadvertent risk-seeking incentives they create for the levels below them.
- Training for managers about the appropriate responses to employee
worries, concerns, and
reports of danger or abuse.
- Employee
education on the possible negative consequences of silence.
- Real world
practice/simulation, especially of unusual conditions (i.e., ‘flight simulator’
approach to training for potentially catastrophic rare events).
- Formal ‘after
action reviews’ for both good and bad experiences to build on success and identify areas for improvement in areas of safety, risk, and ethics.
- ‘Intergroup’
interventions, such as membership exchanges between interdependent or rival teams to reduce dysfunctional intergroup competition and add
multiple perspectives to diagnosis and problem-solving.
C. Measurement, reward, discipline
and incentive systems
- Organization-wide
risk-reduction goals/priorities.
- Improved
reporting of risks, identified risk conditions, and near misses.
- Measurement and
reward/incentive systems at all levels to better balance productivity/financial
objectives and risk, especially positive incentives to make communication of
dangers and concerns worthwhile.
- Redesign of financial reward systems to incorporate long-term
risk/reward provisions.
- Rewards and
recognition for risk-prevention activism.
- ‘Zero tolerance’
approach to truth-teller/whistleblower retaliation.
D. Organizational structure and
process
- Truly independent
internal safety/audit units reporting to knowledgeable and sympathetic
senior executives.
- Disciplined
reporting and follow-up of ‘weak signals’.
- Anonymous
truth-telling reporting mechanisms via independent outside parties and independent, anonymous legal counsel and advice for truth tellers if
necessary.
E. Culture
- Reinforcement for
danger/ethics reporting by employees from supervisors and rank-and-file opinion leaders.
- New heroic myths
and positive role models (with formal and informal rewards/recognition to
reinforce them).
- Redefinition of
‘loyalty’ under conditions of risk or malfeasance.
- New norms of
supervisory and work group behavior concerning matters of risk and malfeasance.
F. Budgets/funding practices
- Special resource pools to deal with unbudgeted risks. (Note: It must be culturally valid for such pools to be tapped, and considered a violation for them to be abused.)
- Adequate
budgets/staff for risk control, safety, and audit functions.
- Funding for
outside counsel/advice for truth-tellers/whistleblowers.
G. Staffing, selection, and careers
- Selection
criteria and financial compensation of key safety, risk and audit functions
consistent with other high importance organizational activities.
- Staffing of risk
management, safety and audit with high potential individuals who then go on to significant
senior positions.
- Attractive career
paths for functional alumni who spend time in safety, audit, etc. (In other words, these functions are not ‘dead ends’ or second class careers.)
Notes:
1 The concept of ‘weak signals’ and their role in warning
systems is credited to Igor Ansoff
(1976:133). According to Ansoff, weak signals have to be carefully
interpreted because they are not by themselves ‘adequate for estimating the
potential profit impact [and other performance impacts].’ He argues that
‘surprises’ could be anticipated if the organization’s information processing
capabilities were capable of such interpretations. Also see Tsoukas and
Shepherd (2004), Schwartz (1991), Schoemaker and Day (2009).
2 While the F-15s had the capability to talk to the Army
helicopters using conventional radios, it was their practice to rely on their
secure and highly sophisticated HAVE QUICK II radios that utilized both
encryption and frequency hopping. Some but not all of the Army helicopters had
HAVE QUICK II capabilities but the Army’s helicopter command had decided not to
enable them (a task that required the loading of daily codes) because not all the
helicopters in use were so equipped.
3 The material on the Bhopal disaster is voluminous. For an
excellent overview see (Wikipedia, Bhopal_disaster) and particularly the extensive
references therein. An important point of debate, and Union Carbide’s primary defense, is the claim that the proximate cause of the accident was employee sabotage, a
conclusion contested by others. In any case, a widely shared view is that many
other factors contributed to the release of the poisonous gas, including the
basic design of the plant, especially the raw materials used and the decisions made
about the storage of hazardous materials and safety systems. There is also a
clear consensus that many critical safety systems were not operational at the
time of the accident. Union Carbide’s (Dow Chemical) position is articulated on
their dedicated web site (Dow Chemical, 2001-2009).
4 Phenylpropanolamine (PPA) was a widely used over-the-counter
ingredient employed since the 1930s as a decongestant in cold remedies such as
Alka-Seltzer Plus, Dimetapp, and Contac, and as an appetite suppressant in
so-called ‘diet pills’. Before it was withdrawn, six billion doses of PPA were consumed annually by approximately seven and a half million people in the U.S. (PPA is still sold over the counter in the UK and Europe, although
it is a different isomer of the molecule with a different side-effects profile.
Appetite suppressants (‘diet pills’) containing PPA are not sold in these
markets.) Starting in the late 1970s, scattered reports of hemorrhagic
(bleeding) stroke began to surface in the U.S. among young women who had used
diet pills containing PPA. Although substitutes for PPA were available (and
have successfully replaced it since the FDA’s decision that the drug be
withdrawn), PPA manufacturers rejected the FDA’s concerns, employing scientists
and lobbyists to pressure for continued PPA use in their OTC preparations. The
delays were successful: It took nearly twenty years from the initial reports for
a systematic epidemiological study to be undertaken. Prior to PPA’s withdrawal,
the FDA estimated that between 200 and 500 strokes per year among 18- to
49-year-old women were caused by the drug. Although these are small numbers in
percentage terms, viewed through the lens of public health the deaths were
unnecessary since effective alternatives to PPA were available. Curiously,
while the industry sponsored and approved the Hemorrhagic Stroke Project (Kernan, 2000), a five-year study conducted during the 1990s by the Yale Medical
School with FDA approval, industry defense lawyers attacked its conclusions in
a series of victims’ lawsuits filed after the FDA’s decision to withdraw the
drug.
5 Arguably the mistreatment of detainees initiated under the
banner of the U.S. ‘War On Terror’ provides a vivid example.
6 Carl Bass, a member of auditor Arthur Andersen’s Professional
Practice Group, and the firm’s expert on audit rule interpretation, was removed
from the Enron account at the request of CFO Andrew Fastow because he objected
to a number of Enron’s more aggressive accounting practices. See Eichenwald
(2005:426).
7 Gerstein’s analysis of stock market data was performed for
Merck, Xerox, and BP. One should note that since each stock trade involves a
buyer and seller—and the vast majority involve institutional investors and
market-makers—all stock movements reset prices to some degree, and significant
changes in price levels often (but not always) do so for an extended period.
Contact the author for more information.
8 For the SEC documents concerning the Xerox accounting
fraud, see U.S. Securities and Exchange Commission (Various documents
concerning the late 1990s Xerox accounting fraud). In addition, The New York Times and The Wall Street Journal extensively covered the story
from 2000, when the first signs of the fraud emerged, to 2006 when the final settlement
was reached.
9 Since Xerox executives were not convicted of a crime or required to admit guilt, under the bylaws of the corporation the company repaid their lost stock sale profits and bonuses for all but the one million dollar per person fines imposed by the SEC. In Allaire’s case, this means that the company repaid his disgorgement of $5.2 million he received from selling shares of Xerox during the fraud, $500,000 in bonuses he received for meeting profit goals the S.E.C. determined were met because of the fraud’s earnings impact, and $1.9 million of interest payments on these amounts. Xerox’s shareholders, therefore, paid bonuses for profits never earned and provided restitution to Allaire for stock prices inflated by these fraudulent profits (Norris, 2003).