Designing lines of authority: how humans and AI share work
Models of AI integration are emerging, and they raise questions of control, accountability and decision-making
By Jason DaPonte, Managing Director, and Alex Holley, Senior Innovation Consultant
From our perspective, AI strategy in most workplaces still starts with a tool and ends with a hope; most pilots start with that hope and end in waste. We take a different path: map the relationship. Who sets intent, who handles the drudgery, who carries the risk – and where does judgement sit?
Three models are emerging: copilots that speed up craft without stealing accountability; partnerships that divide work by intrinsic strengths; and fully AI‑managed workflows with humans in audit mode. All three raise one practical question leaders must answer before they buy the next model: who manages whom?
The first is the Augmented Human model. The human is the decision‑maker; AI is the copilot.
Think of research assistants that surface the right signals, legal reviewers that flag risky clauses, or coding engines that clear the path for deeper problem‑solving. A study of 140 radiologists interpreting chest X‑rays found that AI assistance improved calibration and sensitivity, but that the effects varied by task and by individual – reinforcing why human judgement belongs in the loop. Speed goes up; responsibility stays put.
The second is the Symbiotic Partnership. Work is divided by intrinsic strengths.
AI handles data‑heavy, repetitive and real‑time tasks, such as supply‑chain logistics or instant, personalised responses in customer service. Humans pivot to strategy, creativity and ethics. Consider Unilever’s ice cream supply chain, which runs from factory production lines to an estimated three million freezer cabinets. The planning team uses weather‑aligned forecasts to adjust production and routing, improving forecast accuracy (+10% in Sweden) and cutting waste. Planners focus on strategic allocation and market timing. Distinct domains, shared outcomes.
The most challenging model involves an inversion of control: the AI‑Managed Workflow.
This is where complex processes are run end‑to‑end by AI: automated trading algorithms, self‑optimising campaigns, autonomous network management. Telefónica has achieved Level 4 (highly advanced) autonomous networks in some use cases, where closed loops handle traffic, faults and spectrum decisions while engineers monitor KPIs and step in on anomalies. Here, the leader becomes an auditor, reviewing outputs, governance and exceptions.
Across all three models, the most credible implementations share two traits: clear human accountability and empirical validation of AI performance in the workflow. That’s where value compounds and risks stay bounded.
This inversion forces leaders to shift from managing tasks to governing values. If a sophisticated AI‑driven process goes wrong, accountability must remain human. Governance is anchored to people, not just processes. Leaders must decide when human deliberation, such as pausing for quality, ethics or equity, should override the machine’s relentless efficiency.
Success must also be measured beyond speed and cost: sustainability and fairness matter. If AI gains come at the cost of social harm or excessive energy use, they aren’t true advantages. The competitive edge now belongs to organisations that balance the machine’s scale with human judgement, insight and moral compass.
That balance requires organisational redesign: measuring productivity by strategic impact rather than output volume, redefining innovation beyond rapid iteration and embedding clear human accountability at the core of operating models.
To talk about AI, accountability and operating models, email Jason on