<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Agentic on</title><link>https://www.ngallo.it/tags/agentic/</link><description>Recent content in Agentic on</description><generator>Hugo</generator><language>en-uk</language><managingEditor>nicola.gallo@nitroagility.com (Nicola Gallo)</managingEditor><webMaster>nicola.gallo@nitroagility.com (Nicola Gallo)</webMaster><lastBuildDate>Tue, 05 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://www.ngallo.it/tags/agentic/index.xml" rel="self" type="application/rss+xml"/><item><title>Trusting AI Agents, Who Is Acting?</title><link>https://www.ngallo.it/blog/2026-05-05/trusting-ai-agents-who-is-acting/</link><pubDate>Tue, 05 May 2026 00:00:00 +0000</pubDate><author>nicola.gallo@nitroagility.com (Nicola Gallo)</author><guid>https://www.ngallo.it/blog/2026-05-05/trusting-ai-agents-who-is-acting/</guid><description>&lt;figure class="post-banner"&gt;
 &lt;img src="https://www.ngallo.it/images/2026-05-05/taia-who/trust-ai-agents-who-is-acting.png" alt="Trusting AI Agents, Who Is Acting?" loading="lazy"&gt;
 &lt;figcaption&gt;Who Is Acting?&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;AI agent security is usually approached as an &lt;strong&gt;identity problem&lt;/strong&gt;: the question becomes &lt;em&gt;&amp;ldquo;what is the agent&amp;rsquo;s identity, and what is it allowed to do?&amp;rdquo;&lt;/em&gt;. This framing is inherited from decades of human-centric and client-centric authorization design, and it works reasonably well as long as the entity being authorized is stable, persistent, and accountable in its own right. AI agents are none of those things in practice, and the framing produces a steady accumulation of edge cases that &lt;strong&gt;identity-centric security models&lt;/strong&gt; keep trying to patch without changing the underlying assumption.&lt;/p&gt;</description></item></channel></rss>