If AI Is Doing the Work, Should We Still Pay for the Results?

Reward Integrity in the Age of Hybrid Intelligence:

Why Output, Accountability and Initiative Remain the Bedrock of Variable Pay – Insights by Dr Chris Blair

This article, written by Dr Chris Blair, forms part of a continuing exploration of leadership, work and the human consequences of technological change. While earlier discussions have rightly highlighted the transformative power of AI in augmenting performance, a closer examination of variable pay practices reveals that the fundamentals of effective reward design have not been overturned. In an era of hybrid intelligence, organisations should continue to reward delivered results and the human initiative that harnesses technology, rather than attempting to dissect the precise contribution of man and machine.

Recent research strongly supports this stance. The PwC Global AI Jobs Barometer (2025) demonstrates that AI-exposed industries have seen productivity growth nearly quadruple since the proliferation of generative AI, with wages growing twice as fast as in less-exposed sectors. Workers with AI skills command an average 56% wage premium in 2024 – up sharply from 25% the previous year – across virtually every industry analysed. Far from punishing “tool privilege”, the market enthusiastically rewards those who seek out, master and deploy these technologies (PwC, 2025). This pattern echoes historical precedents: the consultant who mastered Excel modelling in the late 1990s was not asked to forfeit part of their bonus because the spreadsheet performed the calculations. They were rewarded for superior output and commercial impact. The same logic applies today.

The proposed shift from “pay for results” to “pay for responsible contribution”

The suggestion that organisations should move from rewarding results to rewarding “responsible contribution to results, accounting for how much AI did the heavy lifting” sounds philosophically appealing. In practice, however, it introduces unnecessary complexity into systems already under pressure to remain agile. Traditional pay-for-performance models, refined over decades, rest on the observable link between effort, output and business value. AI does not break the link between effort, output and value, but it does make contribution harder to interpret. Our view is that reward systems should respond by sharpening accountability and performance expectations, rather than trying to isolate human and machine input too precisely.

Empirical evidence from executive remuneration practices bears this out. Leading organisations such as Microsoft and Salesforce have incorporated AI-related strategic objectives into incentive plans, but they do so by linking variable pay to tangible business outcomes – AI-driven revenue growth, platform adoption, efficiency gains and innovation metrics – rather than attempting to apportion credit between human judgement and machine assistance (Farient Advisors, 2026; Equilar, 2026). Variable pay as a proportion of total fixed pay has remained remarkably stable worldwide, with continued differentiation for high performers (GECN, 2025), underscoring a sustained focus on performance delivery.

At senior and executive levels – precisely where accountability is non-negotiable – the market has always operated on a clear principle: deliver the results and the reward follows. How those results are achieved, including the intelligent deployment of available tools, is the executive’s responsibility. Attempts to retroactively adjust for AI “heavy lifting” risk undermining the very accountability that boards and shareholders demand.

Adjusting key performance measures for AI-augmented productivity

Rather than attempting to unpick human versus machine contribution in every transaction, a more practical and effective approach in variable pay design is to adjust key performance indicators (KPIs) (Mercer, 2025) to reflect the increased effectiveness, productivity and efficiency that AI enables. With AI tools allowing professionals to achieve significantly more in the same timeframe, organisations can and should raise performance targets, output expectations and efficiency benchmarks accordingly.

This recalibration maintains the integrity of “pay for results” while capturing the productivity gains. For example, sales targets, report turnaround times, code output, or analytical deliverables can be set at higher thresholds that incorporate typical AI augmentation. High performers who leverage AI effectively will naturally meet or exceed these elevated standards, earning their variable pay through superior results. Those who do not adopt the tools will find it more challenging to hit the new benchmarks – which fairly rewards initiative and adaptability without introducing subjective attribution debates.
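As a purely illustrative sketch of this recalibration logic (all figures and the uplift factor are hypothetical assumptions, not drawn from the cited research), raising a target to absorb typical AI augmentation and paying out against the new threshold might look like this:

```python
# Illustrative sketch only: the 20% uplift and all monetary figures are
# hypothetical assumptions, not benchmarks from the cited research.

def recalibrated_target(baseline_target: float, ai_uplift: float) -> float:
    """Raise a performance target to absorb an assumed AI productivity uplift."""
    return baseline_target * (1 + ai_uplift)

def payout_ratio(actual: float, target: float, cap: float = 1.5) -> float:
    """Simple linear pay-for-results curve: payout tracks achievement, capped."""
    return min(actual / target, cap)

baseline = 1_000_000          # pre-AI annual sales target (hypothetical)
uplift = 0.20                 # assumed typical AI augmentation (hypothetical)
new_target = recalibrated_target(baseline, uplift)   # 1,200,000

# An AI-enabled performer delivering 1,320,000 earns 110% of on-target bonus;
# a non-adopter delivering only the old baseline earns roughly 83%.
print(payout_ratio(1_320_000, new_target))
print(round(payout_ratio(1_000_000, new_target), 2))
```

The design point is that the attribution question never arises: both individuals are measured against the same elevated benchmark, and the payout curve rewards delivered results regardless of how they were produced.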

This approach is consistent with current leading practices. Many organisations are updating scorecards and incentive plans by increasing productivity metrics or incorporating AI-enabled outcome targets while keeping the focus firmly on measurable business results (Equilar, 2026; Mercer, 2025; WorldatWork, 2025). It avoids the governance overhead of microscopic contribution analysis and instead channels energy into higher overall performance.

Re-framing the equity argument

The concern that uneven access to AI tools creates an “unearned advantage” that skews reward also merits scrutiny. In high-performance environments, the ability to identify, learn and harness new tools is not an unearned privilege; it is a core demonstration of initiative, adaptability and commercial acumen – precisely the behaviours organisations have always rewarded. Designing remuneration systems mainly to correct for differential tool access may weaken incentives for initiative and innovation. In our view, the better response is to widen access, build capability and keep reward linked to business outcomes.

Where genuine disparities in access exist – more commonly at operational or junior levels – the appropriate response is not to dilute rewards for high performers but to accelerate equitable provision of tools and training. The PwC data confirm that AI skills premiums exist even in automatable roles and that augmented jobs are growing faster than purely automated ones (PwC, 2025). Organisations that invest in broad access and celebrate adoption through recognition, skills-based pay elements or targeted incentives are already seeing stronger engagement and retention (HBR, 2026).

Latest practice in variable pay design

Contemporary variable pay practices increasingly embrace AI without abandoning output orientation. Remuneration teams are deploying AI themselves for real-time pay equity analysis, dynamic merit planning, benchmarking and personalised incentive recommendations, always with human oversight to preserve judgement and context (Sequoia, 2025; Payscale, 2026). Some organisations are introducing explicit AI fluency or integration metrics into scorecards where they directly drive business results – for example, forecast accuracy improved through predictive tools or revenue attributable to AI-enabled processes. These are additive to, not replacements for, traditional performance metrics.

Crucially, studies show that performance-based (as opposed to fixed) remuneration actually increases appropriate reliance on AI advice, leading to better decision quality (Cornell University, 2025). This reinforces the case for mature pay-for-performance systems that evolve with technology rather than retreat from it.

What boards and remuneration committees should prioritise

Boards should continue to ask rigorous questions about performance and value creation, but these questions are best framed around outcomes and strategic execution rather than microscopic attribution:

  • Have we delivered the agreed results and strategic priorities, including those enabled by AI?
  • Are we attracting, retaining and rewarding talent with the skills to thrive in hybrid intelligence environments?
  • Does our reward structure incentivise responsible risk-taking and innovation while maintaining clear accountability?

These questions align governance with commercial reality. They avoid overly complex attempts to measure what cannot yet be measured consistently (the precise division of labour between human and machine in every output) while still demanding explainability of overall contribution and risk ownership.

Responsible reward governance in practice

Four updated principles can anchor variable pay governance without over-complicating it:

  1. Outcome primacy with process transparency: Reward primarily for delivered business value while encouraging open discussion of methods, including AI leverage, in performance calibration.
  2. Access and adoption equity: Ensure comparable roles have equitable tool access and invest in training; reward demonstrated mastery through differentiated pay and recognition.
  3. Accountability at every level: Maintain clear ownership of outcomes, especially where judgement, ethics and risk are involved – qualities AI cannot replicate.
  4. Defensibility through results: Pay decisions should be justifiable on the basis of impact delivered, not the inputs or tools employed.

These principles build on, rather than replace, established pay-for-performance frameworks. They recognise that AI is a powerful tool in the hands of capable professionals, not a substitute for human contribution.

Keeping human value visible

At its heart, effective reward in the age of hybrid intelligence continues to signal what organisations truly value: results, accountability, initiative and the distinctly human capacity to direct technology toward meaningful ends. AI can generate volume, speed and polish; it cannot assume ultimate responsibility for outcomes, exercise moral judgement under uncertainty, or drive the organisational courage required for genuine transformation.

People care deeply about fairness, but in high-performing environments that fairness must balance strong results with systems that remain credible and transparent. The real risk to legitimacy is not that high performers benefit from AI; it is that organisations fail to recognise and reward the adaptability that turns technological capability into sustained competitive advantage.

The challenge of the next decade is not whether reward systems can perfectly disentangle human from machine contribution. It is whether they can continue to motivate the human ingenuity that harnesses AI to create superior value. By staying firmly anchored in pay for results, organisations will not only maintain reward integrity; they will reinforce the meritocratic principles that have always driven performance in competitive markets.

This article is based on research conducted by Dr Chris Blair of 21st Century, one of the largest remuneration and HR consultancies in Africa. Please contact us at [email protected] for any further information.

Submitted on behalf of

  • Company: 21st Century
  • Contact #: 0760781723
  • Website

Media Contact

  • Agency/PR Company: The Lime Envelope
  • Contact person: Bronwyn Levy
  • Contact #: 0760781723
  • Website
