Kasyapp Ivaaturi: An automated action should be as explainable and accountable as a human action. Otherwise, instead of innovation, you get an incident generator

Summary

Kasyapp Ivaaturi, Vice President – Applications at Framestore and experienced ERP and transformation leader, argues that agentic automation must be governed like human decision-making: clear ownership, constrained permissions, audit-grade evidence and designed exception paths. He draws on recovery work across 50+ business units and large-scale finance and ticketing system projects to show practical steps: diagnose operating alignment, run structured workshops, reset the target operating model, create a Centre of Excellence, and design for repeatability and auditability. His rule for build vs buy prioritises ownership and explainability where controls and sensitive data are involved. Leaders must reward early problem reporting and insist on clear accountability to avoid automation becoming an “incident generator” rather than an advantage.

Key Points

  • Automated actions must be as explainable and accountable as human actions: who owns the decision, what the agent may do, and what evidence proves correct behaviour.
  • Put an operating model in place first: decision rights, tight access boundaries, defined exception paths and audit-grade evidence by default.
  • Recovery of failed ERP programmes requires diagnosing alignment issues, stakeholder workshops, a reset operating model and a Centre of Excellence to secure lasting ownership.
  • Build vs buy should be judged on ownership, control, explainability and how frequently workflows change; keep critical controls in-house or wrapped by an internal layer.
  • Design for repeatability: automate routine decisions (eg daily exchange-rate feeds) to reduce manual judgement and increase reconciliation stability.
  • Adoption is execution: workshops, training and stakeholder anchoring make systems practical day to day and reduce informal workarounds.
  • In reliability-critical systems, innovation means predictable behaviour under load: throughput, integrity, recovery, safe rollback and clear reconciliation.
  • Start small with one measurable workflow and a short control specification that defines allowed actions, approvals, data access, evidence and uncertain-state behaviour.
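The closing point describes a short control specification: a document that pins down allowed actions, approvals, data access, evidence and uncertain-state behaviour for a single workflow. A minimal sketch of how such a spec might be encoded is shown below; all names (`ControlSpec`, `daily_fx_rate_load`, the field names) are illustrative assumptions, not from the article.

```python
# Hypothetical sketch of a short control specification for one agent workflow:
# allowed actions, approvals, data access, evidence fields and behaviour in
# uncertain states. Everything not explicitly allowed is denied by default.
from dataclasses import dataclass


@dataclass(frozen=True)
class ControlSpec:
    workflow: str                      # the single workflow this spec governs
    allowed_actions: frozenset         # deny-by-default action whitelist
    approval_required: frozenset       # actions needing human sign-off
    data_access: frozenset             # datasets the agent may touch
    evidence_fields: tuple             # what every action must log
    on_uncertainty: str = "halt_and_escalate"  # never guess in an unclear state

    def permits(self, action: str) -> bool:
        """An action is permitted only if it is explicitly allowed."""
        return action in self.allowed_actions

    def needs_approval(self, action: str) -> bool:
        return action in self.approval_required


# Example spec for the article's daily exchange-rate feed scenario.
fx_spec = ControlSpec(
    workflow="daily_fx_rate_load",
    allowed_actions=frozenset({"fetch_rates", "post_rates"}),
    approval_required=frozenset({"post_rates"}),
    data_access=frozenset({"fx_rates_staging"}),
    evidence_fields=("actor", "timestamp", "input_hash", "result", "approver"),
)
```

In use, the agent would check `fx_spec.permits(action)` before acting and `fx_spec.needs_approval(action)` before committing, so an unlisted action such as changing a vendor record is refused by default rather than attempted.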

Context and Relevance

This interview sits squarely in ongoing industry discussions about agentic AI governance, security and auditability. As businesses deploy agents into ERP, finance and operations, the risk of opaque automation causing costly incidents rises unless organisations embed operating discipline up front. The article links practical transformation experience to current WSJ Technology Council concerns about permissions, authentication and accountability for AI agents.

Why should I read this?

Because if you’re rolling out bots that can post journals, change vendor records or trigger payments, this is the checklist you want before someone else has to clean up the mess. It’s short, practical and written by someone who’s fixed the real-world chaos that happens when control gets left behind in the rush to automate.

Author style

Punchy. The piece isn’t theoretical: it underscores why getting governance, ownership and evidence right is essential. For execs deciding where to invest in automation capability, the interview highlights the difference between durable innovation and risky tool adoption.

Source

Source: https://ceoworld.biz/2026/04/08/kasyapp-ivaaturi-an-automated-action-should-be-as-explainable-and-accountable-as-a-human-action-otherwise-instead-of-innovation-you-get-an-incident-generator/