Errors happen. That is not a failure of the system; it is a consequence of how the system works. Human performance is variable. Attention is finite. Conditions change. The two-crew cockpit exists in part because errors will occur and need to be caught. Accepting that errors are possible is not complacency. It is the honest starting point from which effective error management begins.
What matters is what comes next. "Identifies errors and undesirable aircraft states and acts in a timely manner to remedy" is the behaviour that closes the loop, converting an error from an event into a managed consequence. But closing that loop requires two things that are easily assumed and rarely examined: the knowledge to recognise the error, and the capacity to act on it before it compounds.
Why Errors Happen
An error is rarely the whole story. It is the visible end of a chain — distraction, fatigue, complacency, imprecise procedure execution, a system designed in a way that invites misuse, an SOP that does not quite match the reality it describes. The error is what you can see. The cause is usually one or more of those things operating upstream of it.
This matters because identifying the error without understanding its cause is correction without learning. The aircraft state is restored. The immediate consequence is managed. But the condition that produced the error remains — and will produce another. The behaviour asks for identification and remedy. Both are necessary. But the deeper identification — of cause rather than just symptom — is what prevents recurrence.
Two crew members provide mutual protection against individual error. But that protection only functions if both are actively monitoring, if the atmosphere makes identifying an error in a colleague a normal and valued act, and if the crew member who spots the error has the confidence to say so without hesitation. All of that depends on conditions that have been built across the rest of the competency framework — trust, open communication, psychological safety. Error identification is not a standalone skill. It is the product of everything else working.
An error is the visible end of a chain. Correcting the error closes the immediate loop. Understanding the chain prevents the next one.
Knowledge Is the Detection Mechanism
There is a prerequisite to error identification that is rarely stated directly: you have to know what right looks like. The deviation is only detectable against a baseline. If the baseline is imprecise — if the expected state of the aircraft, the automation, the procedure at this point in the flight is only approximately known — the ability to detect departure from it is correspondingly imprecise.
This is where knowledge and procedural discipline intersect with situational awareness in a way that is operationally significant. A crew member who executes procedures with genuine consistency has a reliable internal reference for what each phase of a procedure feels like when complete. When that feeling is absent, when a callout that always happens here has not happened, when a confirmation that should have come has not come, the gap registers. The crew member who is imprecise in their execution has no such reference. The variation that signals an error is indistinguishable from the variation that is simply their normal approach.
Currency of knowledge matters in the same way. The crew member who knows their aircraft systems accurately — who understands what normal behaviour looks like across the range of conditions they operate in — will detect anomalies that a crew member with approximate knowledge will not. You cannot identify an undesirable state if you are uncertain what the desired state should be. The knowledge is not just professional background. It is the detection instrument.
Effective monitoring is not passive observation. It is active comparison — a continuous process of checking actual state against expected state, and registering the gap when one exists. That comparison requires both elements: the actual state, which requires attention directed at the right instruments and outputs; and the expected state, which requires knowledge of what those instruments and outputs should show at this point in the flight.
A crew member who is monitoring without a clear expected state is looking without knowing what they are looking for. The error may be present. The gap between actual and expected may be significant. But without that reference to compare against, the gap cannot be reliably detected.
The Timely Manner
The qualifier in this behaviour is doing significant work. Not "identifies and corrects". "Identifies and acts in a timely manner to remedy". The timing is the behaviour, and it is more demanding than it first appears.
Small errors, unaddressed, grow. A minor deviation from the desired flight path, not corrected promptly, becomes a larger one that requires a larger correction. A procedure item missed early in a sequence, not caught, creates downstream assumptions that are built on an incorrect foundation. An automation mode that has transitioned unexpectedly, not noticed promptly, may have already begun driving the aircraft in a direction that will take time to recover from. The cost of correction rises with the delay. The window to correct cheaply is short.
Timely action also requires spare capacity. This is the thread that connects error identification directly to workload management. A crew that is saturated, whose cognitive resource is fully committed to managing the current demands of the operation, has reduced capacity to detect errors and reduced capacity to act on them. The monitoring that catches errors early requires headspace. The decision and the action that correct them require more. Both depend on workload being managed well enough that the capacity exists when it is needed.
This is why workload management is not simply a performance efficiency behaviour. It is a safety behaviour. The crew that manages workload well enough to maintain spare capacity is the crew that can identify and correct errors before they compound. The crew that is consistently at or near saturation is the crew that will be slow to detect and slow to correct — and will occasionally miss the window entirely.
The Two-Crew Advantage
Two crew members bring two independent pictures of the operation. When those pictures are both current and accurate, and when the atmosphere allows them to be compared openly, the crew has a detection capability that neither individual possesses alone. The error that one crew member did not catch — because their attention was directed elsewhere, because they were the one who made it, because their particular knowledge gap meant they did not register it as an error — may be entirely visible to the other.
That capability depends on several conditions all being met simultaneously. Both crew members must be actively monitoring, not just the pilot flying their sector. The atmosphere must be one where identifying an error in a colleague is a straightforward professional act — unremarkable, welcomed, expected. And both must have the knowledge and the baseline against which to make the comparison. A two-crew cockpit where one crew member is not monitoring, or where the atmosphere makes error identification feel risky, or where knowledge is approximate, is a cockpit operating significantly below its error detection potential.
The two-crew cockpit is a detection system. Whether it functions as one depends on both crew members monitoring, both having the knowledge to detect, and the atmosphere making it safe to say what they see.
On the Line
High Performance Pilot structures your development of Identifies Errors and Undesirable Aircraft States and Acts in a Timely Manner to Remedy across three levels — Foundation, Proficient, and Mastery. Each session takes minutes. The development happens on every flight. Free to start.
Start Free — highperformancepilot.com