Risk Management and the Next Generation of Aircraft
Understanding the inherent risks in flying, combined with establishing well-founded safety principles and response planning, is paramount for survival in the aviation industry. Start-ups and companies new to the aviation industry must
understand that there is always risk in flying. Above all, gravity always
wins. However, realizing that accidents do happen, in addition to being
prepared for that possibility, will go a long way in contributing to a
company’s success: that and a whole lot of cash and a stellar aircraft design.
In October 2019, one such stellar eVTOL design experienced a hard lesson – in
addition to a hard landing – after logging just 4.7 hours of total flight
time. According to the National Transportation Safety Board’s (NTSB) Final Report,
dated March 27, 2020, on October 17, 2019, Kitty Hawk's Heaviside 2 crashed
after the aircraft ground operator forgot to disable the battery charging
script (computer program) prior to the accident flight. The still-running
script caused multiple flight computer limits to be exceeded, affecting the
aircraft’s system performance. The controllability of the aircraft degraded,
resulting in substantial damage when the ground operator attempted to remotely
land the aircraft.
According to the accident report form (NTSB Form 6020.1) filed by Kitty Hawk, the company identified “small gaps” in its Standard Operating Procedures (SOPs) that contributed to the accident. Kitty Hawk, in its post-accident
safety recommendations, notes that the “SOPs will be carefully evaluated
routinely to ensure they represent the safe and effective operation of the
system as a whole.”
During my military flight training, a ground instructor once pointed out the
difference between a “WARNING,” “CAUTION,” and “NOTE” in the helicopter
operations manual. It seemed excessive that almost every page of the
manual had at least one (or several) such alerts, telling the flight crew not
to exceed this temperature or not to exceed that operating limit – almost as if
it was written by an overbearing parent concerned for the safety of his or her
child. At least that was my initial impression.
My instructor, however, pointed out that each warning, caution, and note has meaning and often a story behind it. Unfortunately, many of the warnings were written in blood, cautions were the result of an expensive mistake, and notes exist because common sense is not that common.
If design risks cannot be mitigated through preferred design changes or fail-safe measures, then the appropriate procedures and warnings must be in place to minimize the risk of inherent design flaws carrying over into crewed flights. The operating procedures and instructions should have a
corresponding warning, caution, or note for the level of risk:
A WARNING indicates that if the condition is not promptly monitored and corrective action taken, loss of life or injury to persons or damage to property could result. Warning: Do not approach the helicopter from the front with rotor blades turning – seems obvious, right?
A CAUTION indicates that damage to the aircraft or a hazard to humans could occur. Turn off the pitot tube heat
after landing to prevent damage to the heating system or to prevent burn
injuries to the mechanic.
A NOTE generally describes an essential operating procedure. Ensure the radio power is on before
attempting to transmit.
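The three-tier alert scheme can be sketched as a simple severity model. This is purely illustrative – the enum and function names are my own, not taken from any flight manual or regulation:

```python
from enum import Enum

class AlertLevel(Enum):
    """Hypothetical mapping of flight-manual alert levels to their consequences."""
    WARNING = "loss of life or injury to persons or property could result"
    CAUTION = "damage to the aircraft or hazard to humans could occur"
    NOTE = "essential operating procedure"

def format_alert(level: AlertLevel, text: str) -> str:
    # Render an alert the way an operations manual might present it.
    return f"{level.name}: {text}"

print(format_alert(
    AlertLevel.WARNING,
    "Do not approach the helicopter from the front with rotor blades turning."))
```

The point of the hierarchy is that the label itself tells the crew the stakes before they read the instruction.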
When the HH-60G Pavehawk was first introduced to the Air Force in the early 1980s,
its design weight was 16,825 pounds (there are specific numbers I still recall
about this helicopter, and its design weight is one of them). That was
before the Pavehawk was weighed down with cabin fuel tanks and other
modifications (Pavehog?). At its design weight, the Pavehawk was viewed
as the "Ferrari" of helicopters. Pilots knew it, and would
intentionally exceed 90 degrees bank ("overbank"), which the
helicopter could do quite well with its fully-articulated rotor system and
"power for days."
During a particular flight, my instructor recalls, the pilot flying kept the helicopter inverted for just a tad too long after executing a fighter-jet-like overbank while crossing up and over the crest of a mountain ridge. The "extended-stay" overbank was followed by a Christmas-like show of warning lights in the cockpit, alerting the pilots to low engine oil pressure. That particular event resulted in the pilot’s need of a change
of underwear, in addition to a new "WARNING" that excessive bank
angles could result in a loss of oil pressure since the engine oil sumps were
positioned on the bottom of the engine.
It would seem evident that a helicopter should not be inverted. "C’mon, who in their right mind would
intentionally invert a helicopter in-flight!” – thought the engineer who
designed the engine oil system with the oil sumps at the bottom of the engine,
assuming gravity would naturally ensure continuous oil flow. In aviation,
Murphy's Law prevails, and one should always expect the unexpected.
Circling back to the Kitty Hawk accident, you can bet that the SOPs will be modified to include a shiny new warning concerning the battery charging script – at least until a more robust design change to the system is implemented.
WARNING: Failure to disable the battery script prior to flight may result in degraded
system performance and a loss of aircraft controllability.
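A more robust design change would move that warning out of the manual and into the software itself – an interlock that refuses to arm the aircraft while a ground-only script is still running. The sketch below is hypothetical; it is not Kitty Hawk's actual ground-control software, and the script name is an assumption:

```python
# Illustrative pre-flight interlock (hypothetical; not Kitty Hawk's actual
# system). The design-level fix makes the check automatic rather than
# relying on the ground operator's memory and an SOP warning.

class PreflightError(Exception):
    pass

def preflight_check(active_ground_scripts: set) -> None:
    """Refuse to arm for flight while any ground-only script is running."""
    FORBIDDEN_IN_FLIGHT = {"battery_charging"}  # assumed script name
    still_running = active_ground_scripts & FORBIDDEN_IN_FLIGHT
    if still_running:
        raise PreflightError(
            f"Ground scripts still active: {sorted(still_running)}. "
            "Disable before flight.")

# Usage: the interlock blocks arming instead of trusting procedure alone.
try:
    preflight_check({"battery_charging", "telemetry_logger"})
except PreflightError as e:
    print(e)
```

The design choice here is the classic hierarchy of controls: an engineered interlock sits above a written warning, because it does not depend on a human remembering the checklist item.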
Equally important is a company's response following an accident. Being prepared
will make a difference as to whether the company survives following a major
accident. Part of that preparation occurs during the aircraft design
stage, i.e., identifying risks and accounting for the potential for human
error. Additional preparation comes in the form of instilling a strong
safety culture throughout the company to identify risks.
Finally, a company must have in place a well-prepared emergency response plan for when
Murphy's Law comes into play (typically at the worst possible time).
Airline carriers know this all too well. Every airline has in place an
emergency response plan and regularly conducts drills so they are prepared for
when an accident occurs.
Identifying design risks early on
A standard allegation in every product liability lawsuit is a claim for “failure to warn” – that is, the aircraft manufacturer failed to provide adequate warning to the crew or passenger of inherent risk in the design or use of the product. This liability exposure is also a reason why operational manuals are chock-full of warnings, cautions, and notes. Insufficient or inadequate instructions could contribute to an accident and lead to liability if (hindsight being 20/20) the manufacturer failed to warn of a potential hazard.
Kitty Hawk, like many eVTOL developers, conducts unmanned test flights (to the extent possible) to reduce the risk of injury to an onboard pilot. Unmanned flights allow for rapid flight-testing but increase the potential risk to the aircraft, since test flights may shift focus to the testing of software or equipment, as opposed to the safety of the aircraft. However, future test
flights will be conducted with a pilot, and finally operational flights with
passengers.
When developing aircraft procedures (testing or otherwise) and operating manuals,
keep in mind:
- The consequence if a
particular instruction is not followed. What happens if the operator fails
to deactivate a particular system and attempts to fly the aircraft?
- Foreseeable misuse of the
product. Helicopters are not intended to be inverted, but it’s still
possible.
- Hazards are likely
obvious to an experienced flight crew, but not so apparent to the
layperson. You would have to be an idiot to walk in front of the aircraft
radome with the weather radar transmitting. (It’s happened before, more than once.)
A strong company safety culture
Every aviation company should (whether required by regulation or not) have a safety
policy in place. Promoting a strong safety culture is not just about
putting a wet floor sign in place after mopping, but also means, for example,
employees know they can report safety hazards due to their own unintentional
mistakes without fear of punishment for self-reporting.
An employee who understands they can report such errors without fear of losing
their job is more likely to report the safety risk, as opposed to sweeping it
under the rug, praying no one finds out about that tiny but expensive crack the
mechanic just caused when drilling a hole in the airframe. When
developing a safety policy, consider, for example, the following criteria:
- Safety objectives/goals:
Set specific, measurable, and relevant safety goals/objectives, e.g.,
ensure all new pilots and operators are trained on operational safety
checks and procedures prior to flight.
- Safety processes and
procedures: Ensure proper processes and procedures are in place so that
safety performance is maintained at the appropriate level (e.g., unmanned
vs. manned flights) and specified objectives/goals are achieved.
- Non-punitive reporting
policy: Where human errors are made without deliberate intent to cause
harm or damage, they are "normal errors."
Self-reporting “normal errors” should not result in punitive action being
taken against individuals.
- Safety review and audit
policies: Conduct periodic reviews of operating procedures and safety
measures to ensure the appropriate level of safety.
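The non-punitive distinction above can be reduced to a simple intake rule: was the act deliberate, and was it self-reported? The sketch below is illustrative only – the field names and policy function are my own assumptions, not any regulator's scheme:

```python
from dataclasses import dataclass

@dataclass
class SafetyReport:
    description: str
    deliberate: bool      # was there deliberate intent to cause harm or damage?
    self_reported: bool   # did the employee report the error themselves?

def punitive_action_warranted(report: SafetyReport) -> bool:
    """Under a non-punitive policy, self-reported 'normal errors'
    (no deliberate intent) do not result in punitive action."""
    if not report.deliberate and report.self_reported:
        return False  # normal error, self-reported: no punishment
    return report.deliberate  # deliberate acts remain punishable

# The mechanic who self-reports the crack drilled into the airframe:
print(punitive_action_warranted(
    SafetyReport("drilled into airframe, caused crack", False, True)))
# prints: False
```

Making the rule this explicit is the point: employees report hazards only when the outcome of self-reporting is predictable.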
An emergency plan for when all else fails
A robust emergency response plan should be prepared well before electric
generators are turning. Every
emergency plan should include the following general information:
- Develop a checklist and
include a list of key persons and their contact information. Include both
company employees and outside contacts such as the Federal Aviation
Administration, NTSB, outside counsel, and insurance representative.
Identify when each should be contacted and what information must be
relayed. eVTOL developers/operators should alert first responders
and investigators to the presence of crash hazards, e.g., lithium-ion
batteries or ballistic parachutes.
- Preserve aircraft
wreckage. The operator of an aircraft is responsible for preserving
aircraft wreckage, cargo, and data recorders until the NTSB takes custody
of it or issues a release. (See 49 CFR §
830.10(a).) For accidents that occur in remote
locations (or when a pandemic occurs), NTSB representatives may not be
able to visit the accident site for several days. If so, operators
may need to arrange 24-hour security (e.g., off-duty officer) for the
accident wreckage.
- Identify and secure key
documents. The operator of an aircraft must retain “all records, reports,
internal documents, and memoranda dealing with the accident or incident
until authorized by the NTSB to the contrary.” (See 49 CFR §
830.10(d).) Understand what company documents must
be released to the government, and in what manner. For eVTOL
developers/operators, this includes all data and related software
programs.
- Brief all personnel
involved in the investigation on legal ground rules and the nature of the
NTSB investigative process. For example, identify who within the company
will speak with the press and government officials.
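The call-sheet portion of the checklist above can be sketched as a small data structure. The specific contacts and triggers below are illustrative assumptions (the 49 CFR 830.5 immediate-notification reference is real; the rest should come from your own counsel and policy):

```python
from dataclasses import dataclass

@dataclass
class EmergencyContact:
    """One entry in a pre-built emergency response call sheet
    (names and trigger conditions are illustrative assumptions)."""
    name: str
    role: str
    notify_when: str  # trigger condition for the call

CHECKLIST = [
    EmergencyContact("NTSB Response Operations Center", "investigator",
                     "immediately after a reportable accident (49 CFR 830.5)"),
    EmergencyContact("FAA duty officer", "regulator",
                     "immediately after any accident"),
    EmergencyContact("Outside counsel", "legal",
                     "before employees speak with investigators or press"),
    EmergencyContact("Insurance representative", "insurer",
                     "within the policy's notice period"),
]

def call_sheet(checklist: list) -> list:
    # Return the notifications in listed (priority) order.
    return [f"{c.name} ({c.role}): {c.notify_when}" for c in checklist]

for line in call_sheet(CHECKLIST):
    print(line)
```

Keeping the plan in a single, ordered artifact means the person executing it at 2 a.m. decides nothing; they just work down the list.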
As electric air-taxis become a reality, you can bet any accident will be highly scrutinized and subject to extensive media coverage. Mishap occurrences are not a matter of if but, statistically, a matter of when, and like the overbearing parent, start-ups and new entrants must understand the risks and prepare for the unlikely as part of their emergency/contingency planning.
Erin I. Rivera is an aviation attorney
with Fox Rothschild LLP. He also served in the U.S. Air Force as a combat
search and rescue (CSAR) flight engineer on board the Sikorsky HH-60G PaveHawk
helicopter. Erin holds a private pilot license and previously interned as
an air accident investigator with the National Transportation Safety
Board. Erin is particularly well-versed in current developments in
aircraft certification regulations, eVTOL aircraft enabling technology, and advanced
air mobility/urban air mobility (UAM/AAM) initiatives.