
Trust the agency, not just the agent: a conversation with Vanta's Khush Kashyap

Episode 06 of Amer Altaf's Studio — Vanta's Senior Director of GRC on agentic governance, the talent pipeline, and the Trust Is the Growth Engine series starting Monday.

On the morning of 14 May 2026, episode six of The Control Layer is going live. My guest is Khush Kashyap, Senior Director of Governance, Risk and Compliance at Vanta; we recorded in a London studio in the days following VantaCon UK 2026. The conversation was nominally about the platform — the agentic trust platform launched in November 2025, the Vanta AI Agent 2.0, the live trust graph, and the Common Control Framework for enterprise buyers announced at the conference on 7 May 2026.[1] The conversation was actually about a larger and harder question, and that is the question I want to flag this morning, on the day the episode lands.


The framing the episode settles on, in the closing minutes, is a question I have been carrying through board papers and strategy decks for six months: are we automating the work, or are we automating the workers? Khush, asked the question directly, gave the kind of answer that only comes from a senior practitioner who has had to live with its consequences. “At this point of time, we are definitely at a place where we are automating the work and not the workers. We are just freeing up their time to do more meaningful things.”[2] He then flagged the harder follow-on with the same directness: “will the size be decreased or are we going to have smaller teams is really hard to say.”[2]

I want to label that as my analytical reading rather than as the podcast’s narration: the work-versus-workers question is not yet decided in the GRC profession; it is being decided, in real time, by every senior leader making this year’s hiring plan. The conversation with Khush is a window into how one of those leaders is currently making it.

The 24/7 GRC engineer, on the record

The headline product launches at VantaCon UK 2026 have been covered elsewhere, and will be covered with considerably more analytical depth in three pieces of mine going up across the next ten days; the schedule sits at the bottom of this post. What the studio conversation adds is the operational picture from inside Vanta’s own GRC function. Khush is, by his own description, the platform’s customer zero — using the product he sells, making the same decisions about what to automate and what to keep in human hands that every other Senior Director of GRC in the UK and EU is making.

Three lines from the episode are worth pulling out before you press play.

The first is on whether continuous monitoring has crossed from maturity-marker to operational baseline. “Earlier, continuous monitoring used to be like a maturity thing. We are mature, we have these technical controls which are being continuously monitored. Now it is table stakes.”[2] The phrase “table stakes” lands harder coming from a Senior Director of GRC at a continuous-assurance platform than it does in a vendor pitch deck — Khush is, in effect, conceding the ceiling of his own maturity-curve sales argument. The buyer who treats continuous monitoring as a maturity initiative in the second half of 2026 is, on this account, two years late.

The second is on the talent pipeline below the player-coach. “It does concern me about the people who are graduating now and the people who are coming in the industry fresh and new... Without a lot of growth and headcount, there will be certain restrictions. So maybe they need to pivot and retrain on AI.”[2] That is candour about a structural concern most of the industry is not yet talking about with this level of directness. The false-economy risk — automating the entry-level roles in 2026 and discovering in 2030 that the senior bench has no replacement candidates — is the part of the AI-rewriting-GRC thesis that warrants its own podcast, and it is the thread I expect to pull on hardest in Part 2 of the Trust Is the Growth Engine series on Wednesday.

The third is on accountability when the agent is wrong. “Don’t just trust the agents — trust the agency that you build, and trust the systems that you build around the agents. Because in the end, your audits are not happening on your agents and how well the agents are doing. The audits are happening on your systems and your scope.”[2] That is the cleanest single articulation of the agentic-AI accountability thesis I have heard in six months of conference attendance. The audit happens at the system and at the scope. The agent is a tool that operates inside both. The accountability stays with the agency that designs the system and defines the scope. Khush gave me that line; I am stealing it for Part 2 of the series; he gets the citation.

The Asimov problem in modern dress

The conversation kept landing on a framing I want to surface explicitly because the podcast itself only implies it. An agentic GRC system is, structurally, an Asimov problem in modern dress — a class of intelligent operator that follows its instructions logically, reaches conclusions that look correct on the rules-as-given, and produces outcomes the principals did not anticipate. Isaac Asimov’s Three Laws of Robotics, first articulated in the 1942 story “Runaround” and collected in I, Robot in 1950, are now a literary device more than seventy-five years old.[3] They remain, in 2026, the cleanest analytical handle for the question of who is accountable when the agent is wrong. Asimov wrote the laws to surface the exact regress problem an agentic platform now operationalises commercially: rules followed faithfully, outcomes that surprise the rule-givers, and a category of accountability question that the legal and regulatory layers have not yet caught up to.

Khush is, on the evidence of this episode, working that problem at three layers simultaneously — the agent layer (evals against ground truth defined by ex-GRC, ex-auditor, ex-assessor subject-matter experts inside Vanta), the agency layer (the controls, the systems, the human-in-the-loop sign-off), and the scope layer (what is actually being audited, by whom, against which framework). The point of the “trust the agency” framing is that the second and third layers are where the operating reality lives. The first layer is a technical question; the second and third are the organisational ones that decide whether the technical question gets answered well.

The DeLorean question

Towards the end of the episode I asked Khush the question I have been asking every guest of the studio: if you were in the DeLorean from Back to the Future and you only had fuel to go back two years, what would you tell yourself about what was coming?[4] His answer was specific. Two years ago, he had been struggling to automate as many security controls as possible and getting steady feedback from product and engineering teams that the audit-prep work was eating the calendar. “I would tell myself this is all going to be solved... With agents on top of really good GRC automation tools and trust management platforms like Vanta, this will all become a reality. I wouldn’t have known at that time.”[2]

That is the operational answer. The strategic answer Khush gave alongside it is the one that should travel further. “Governance as meetings and committees and people and RACIs is not going to keep up for AI.”[2] When a senior practitioner describes the orthodox governance machinery as structurally inadequate for the technology it is meant to govern, the question is no longer whether the model needs to change. The question is who is currently writing the next one. The conference-circuit answer in May 2026 is that the platforms are writing it faster than the regulators are. That is observably true, and it is the question the Trust Is the Growth Engine series argues UK and European boards should be commissioning answers to rather than waiting on.


What is coming in the series

The Trust Is the Growth Engine series — three pieces, paid-subscription, each behind a 200-word free preview — uses the VantaCon UK 2026 keynote and panel content, plus the Khush interview that aired today, to make the analytical case across three distinct vantage points.

Part 1 publishes on Monday 18 May 2026. It is the platform piece — a constructive analysis of the agentic trust platform, the trust graph, and the three places the architecture is asking to be tested: the regress problem, the lock-in problem, and the sovereignty problem that every UK board should now be commissioning a paper on for every US-headquartered SaaS platform that holds critical metadata. The Khush interview informs the regress-problem and sovereignty critiques in particular; the picture on his account is more textured than the conference narrative implied.

Part 2 publishes on Wednesday 20 May 2026. It is the player-coach CISO piece — built around the VantaCon panel where senior security leaders at Intercom, Synthesia, and Dashlane rated the change in their job over the past eighteen months at eight, eight, and nine out of ten, and extended with Khush’s account of the GRC team transformation at Vanta itself. This is the piece that picks up the talent-pipeline thread Khush raised in the episode.

Part 3 publishes on Monday 25 May 2026. It is the Nando’s piece — Jason Kirk, a CISO of one, the Minimum Viable Security framing, and what the British consumer economy actually looks like beneath the FTSE 100 surface. This is the piece I expect to travel furthest on social.

Each is a paid post; each carries a 200-word free preview; each ends on a falsifiable predictive judgement with a date and a list of signals to track.

The bottom line

The most useful thing the conversation with Khush did was not to give me a single quotable handle to lift onto a social card. It was to confirm, from inside the platform of record, that the agentic GRC architecture is being built with the methodological discipline that buyers about to commit to multi-year contracts need to see — and that the talent-pipeline question and the sovereignty question are real, are unresolved, and are being held openly by senior practitioners rather than papered over. That is not a vendor posture. That is a working professional articulating the limits of his own product alongside its capabilities, on the record. It is also, by my reading, the register every UK and European GRC conversation should be operating in over the next eighteen months, and for the most part currently is not.

The episode is live now. The series starts Monday.

Don’t just trust the agents. Trust the agency you build, and trust the publication that calls its predictions in writing.


The Control Layer publishes weekly. Subscribe free.

Decision-grade analysis on AI, cybersecurity, technology sovereignty, and the geopolitics of the technology stack — written for the board paper, not the timeline. By Amer Altaf, Founder & CEO of Arkava and Managing Editor of The Control Layer.


One email a week. No paywalls on the analytical pieces. Unsubscribe in one click.


References

[1]: Vanta. “Vanta Introduces Agentic Trust Platform to Unify Compliance, Risk, and Security Assessments.” Press release, 18 November 2025; VantaCon UK 2026 conference announcements, 7 May 2026.

[2]: Khush Kashyap, Senior Director of Governance, Risk and Compliance at Vanta, in interview with Amer Altaf for The Control Layer podcast, Episode 06, recorded May 2026, published 14 May 2026. All direct quotations from Kashyap in this article are drawn from the recorded interview transcript; the full audio is available at https://thecontrollayer.arkava.ai.

[3]: Isaac Asimov. I, Robot. Gnome Press, 1950. The Three Laws of Robotics are introduced in the short story “Runaround” (1942) and developed across the Robot and Foundation sequences. The narrative engine of the Robot stories is the systematic surfacing of edge cases in which an actor follows its instructions correctly and produces outcomes the principals did not anticipate — the cleanest mid-twentieth-century literary articulation of the regress problem agentic AI now operationalises commercially.

[4]: Robert Zemeckis (dir.). Back to the Future. Universal Pictures, 1985. The DeLorean question: “if you could go back two years, what would you tell yourself about what was coming?”


Author

Amer Altaf is Founder and CEO of Arkava, a UK and European sovereign AI agentic automation business, and Managing Editor of The Control Layer, the publication where he tracks the convergence of cybersecurity, AI, and the geopolitics of the technology stack. A techUK member, he contributes to industry engagement on UK technology sovereignty policy.
