Of the nine Core Competencies, Flight Path Management — Automatic is the one that most resists being treated in isolation. Every other competency has a relatively direct human output: a decision made, a threat assessed, a message sent, a procedure followed. This one is different. Its primary output — when it is working well — is the conditions in which everything else can be done properly. Used with judgment, automation creates spare capacity. It reduces error. It frees the crew to manage the operation rather than be consumed by it. That is not a small thing. It is, in many respects, the whole point.
The behaviours within this competency are therefore not really about the automation. They are about the quality of thinking that surrounds it. When to engage it, at what level, with what understanding of what it will do next — and how to ensure that both pilots share the same picture of what the aircraft is doing and why. Get those things right, and the automation works for you. Get them wrong, and it works against you in ways that are silent, gradual, and occasionally catastrophic.
Automation as a Workload Decision
The pilot who engages the autopilot early on departure, who selects managed modes during the climb, who arms the approach automation well before it is needed — that pilot is making active workload management decisions. They are not being passive. They are deliberately creating headroom: capacity that will be available for traffic conflicts, weather deviations, non-standard clearances, and the hundred other things that compete for attention on a busy sector. The automation is the tool. The decision to use it is the competency.
The inverse is equally true. The pilot who delays automation engagement, who reverts to selected modes when managed modes are available, who without good reason hand-flies segments that could be automated — that pilot is consuming workload that could have been preserved. Sometimes this is deliberate and appropriate, particularly for training purposes or when the situation genuinely benefits from direct control. But when it is habitual or unconsidered, it is a quiet tax on the crew's available capacity, paid at a time when the balance sheet may not be able to afford it.
The decision to use automation is not a passive act. It is workload management in real time.
Knowledge Is the Prerequisite
There is a version of automation use that is fluent without being understood. The pilot who can select the right modes in the right sequence, whose hands move across the automation controls without hesitation, who can fly a full procedure without ever consciously thinking about what the aircraft is doing — that pilot may be highly proficient in normal operations. They are also carrying a significant hidden risk. Because automation competence without automation knowledge is proficiency that works until the moment it doesn't.
Mode confusion is the most visible expression of this problem. An aircraft in an unexpected configuration, climbing when it should be levelling, descending when it should be climbing, accelerating when the crew believes it is in speed protection — these events share a common origin. Somewhere in the sequence, a pilot made an input without fully understanding what the automation would do in response. The aircraft did exactly what it was told. The crew did not know what it had been told.
Modern flight management systems are extraordinarily capable. They are also extraordinarily complex. The number of possible mode combinations, the transitions between them, the conditions under which the automation will revert or change behaviour — this is not knowledge that can be acquired by feel. It requires deliberate study, and it requires regular reinforcement.
The pilot who can answer the question "what will the aircraft do if I select that now?" before making the selection is operating with genuine automation competence. The pilot who selects and observes — waiting to see what happens — is not. At low workload, the difference rarely matters. At high workload, it can matter enormously.
The Shared Picture Problem
Two pilots occupy the same cockpit. They are operating the same automated system. They may have entirely different mental models of what it is doing. This is not a hypothetical risk — it is one of the most consistent findings in automation-related incident investigations. The handling pilot selects a mode. The monitoring pilot notes a different FMA annunciation and assumes a different state. Neither challenges the other's understanding. The aircraft proceeds in a configuration that neither pilot has consciously accepted.
This is where Flight Path Management — Automatic becomes inseparable from Leadership and Teamwork and Communication. The shared picture of automation state is not created by the FMS. It is created by the crew — through callouts, through cross-checks, through the discipline of verbalising mode selections and their expected consequences, and through a crew environment in which either pilot feels confident saying "I'm not sure what it's doing." The automation provides the capability. The crew provides the understanding.
The FMS doesn't create a shared picture. The crew does.
The Monitoring Paradox
There is a persistent misconception about what the non-handling pilot does during an automated segment. The framing — "the autopilot is flying, so I'm monitoring" — implies a reduction in cognitive demand. The reality is the opposite. Effective monitoring of an automated system is harder than manual flying in one important respect: it requires sustained attention to a system that is doing exactly what it should be doing, for extended periods, with no immediate feedback loop to keep the monitor engaged.
Manual flight is self-correcting in its demands. Deviations from the desired flight path create physical feedback — a change in attitude, a shift in instrument readings — that immediately calls for a response. Automated flight removes this feedback loop. The aircraft proceeds smoothly and the instruments change slowly and the crew's attention, absent deliberate effort, will drift toward more immediately engaging tasks. This is not a character flaw. It is a well-documented feature of human attention when monitoring automated systems. Recognising it is the first step to managing it.
Research on automation monitoring consistently shows that human vigilance degrades significantly over periods as short as twenty to thirty minutes when the monitored system is functioning normally. The monitoring pilot who has been passive for the first hour of a sector is not the same monitor as the one who began the flight. Structured scans, regular cross-checks, and deliberate re-engagement with the automation state are not optional extras — they are the mechanism by which effective monitoring is maintained.
The Pilot Monitoring who asks "what's it doing now and what will it do next?" at regular intervals is not being anxious. They are doing their job.
When Automation Should Not Be Used
A competency in the use of automation necessarily includes knowing when not to use it. There are situations in which automation adds complexity rather than reducing it — where the cognitive overhead of programming and monitoring a system exceeds the workload cost of flying manually. A short sector with multiple level changes and heading constraints. An approach in rapidly changing conditions where the managed profile may not reflect what is actually needed. A go-around at low level where the immediate demands of control and configuration leave no capacity for FMS management.
The pilot who defaults to automation in every situation is not demonstrating automation competence. They are demonstrating automation dependence. The distinction matters because dependence fails at exactly the moments when competence is most needed — when the workload is highest, the conditions most demanding, and the margin for error smallest. The competent pilot chooses the level of automation appropriate to the situation. That choice requires judgment, knowledge, and an accurate assessment of what the crew can manage — which brings us back, inevitably, to Workload Management.
Flight Path Management — Automatic is, in the end, a competency about the quality of judgment that surrounds a tool. The tool itself is neutral. It will do what it is told, correctly and without complaint, whether or not the crew understands what it has been told, whether or not both pilots share the same picture, and whether or not the workload balance made the decision sensible. The judgment is the competency. The automation is just the evidence of it.