At Microsoft Build 2025, it became clear that the company is no longer content with building smarter assistants. It’s building agent ecosystems — interconnected, identity-aware, and secure by design. This isn’t a Copilot upgrade. It’s a strategic escalation.
For years, AI in the enterprise meant scattered copilots embedded in apps, each with their own narrow domain. But Microsoft is now reshaping that paradigm — connecting these task-specific agents into collaborative systems that don’t just execute prompts, but coordinate missions. Multi-agent orchestration, as unveiled in the latest Copilot Studio announcements, lets enterprises build a network of AI agents that combine skills, share context, and tackle complex workflows together, much like how functional teams operate in real-world organizations.
This is no minor interface tweak. It’s a full-blown architectural shift. Instead of humans stitching together APIs, now agents coordinate across tasks, with built-in memory, tool use, and identity. This orchestration layer sits at the heart of a broader rethink: what happens when AI agents become collaborators, not components?
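To make the shift concrete, consider what an orchestration layer actually does. The sketch below is a minimal, vendor-neutral illustration in Python — the class and method names are our own, not Microsoft's API — showing agents with individual identities and memory sharing context while an orchestrator routes the steps of a single mission.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A task-specific agent with its own identity and memory (illustrative only)."""
    agent_id: str          # a managed identity, in the spirit of Entra Agent ID
    skill: str             # the task domain this agent handles
    memory: list = field(default_factory=list)

    def run(self, task: str, context: dict) -> str:
        # A real agent would call an LLM and tools here; we just record the step.
        self.memory.append(task)
        result = f"{self.skill}:{task}"
        context[self.skill] = result   # share context with peer agents
        return result

class Orchestrator:
    """Routes each step of a workflow to the agent with the matching skill."""
    def __init__(self, agents):
        self.agents = {a.skill: a for a in agents}
        self.context = {}              # shared, cross-agent context

    def run_workflow(self, steps):
        # steps: a list of (skill, task) pairs forming one mission
        return [self.agents[skill].run(task, self.context) for skill, task in steps]

orchestrator = Orchestrator([Agent("a-1", "research"), Agent("a-2", "drafting")])
outputs = orchestrator.run_workflow([("research", "gather facts"),
                                     ("drafting", "write summary")])
```

The point of the sketch is the division of labour: humans define the mission, the orchestrator owns routing and shared context, and each agent owns only its domain — the same separation of concerns Copilot Studio's orchestration is promising at enterprise scale.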
The answer lies in a coordinated stack that combines Azure AI Foundry’s Agent Service to deploy and observe agents, Entra Agent ID to give them managed identities, and Copilot Tuning to align them with business context.
In many ways, this is Microsoft’s clearest answer yet to the enterprise anxiety around “AI sprawl.” By moving from single-agent deployment to multi-agent orchestration with lifecycle governance, Microsoft is building a stack that doesn’t just scale technically — it scales safely and strategically.
A Greyhound Fieldnote from a VP of IT at a global pharma firm captured the urgency: “We’ve built a dozen agents. None of them talk to each other. It’s like running a team where no one knows who’s on shift. What Microsoft’s doing here could finally give us an AI workforce — not just a collection of disconnected temps.”
Greyhound Standpoint: At Greyhound Research, we believe this moment marks a foundational inflection: enterprises don’t just need better agents. They need agents that work together, with rules, roles, and runtime observability. Microsoft’s Build 2025 announcements signal the company is designing for agent economies, not just agent experiences.
Azure AI Foundry Agent Service: Observability, Interoperability, and Custom LLMs
The real test of any AI agent platform isn’t how clever the agents are in isolation — it’s how observable, interoperable, and governable they are at scale. With the new Agent Service introduced in Azure AI Foundry, Microsoft is directly responding to that challenge. This is where the company is putting serious architectural muscle behind the future of agent-based enterprise AI.
The Azure AI Foundry Agent Service isn’t just about deploying agents faster — it’s about managing them with surgical precision. Developers can now monitor agent performance, cost, quality, and safety metrics in real time. Built-in observability lets enterprises understand not only what agents are doing, but how they’re behaving across workloads. That’s critical for regulated industries, where the difference between acceptable variance and a compliance breach can come down to a single misfired prompt.
And unlike many closed ecosystems, Microsoft is opening the gates to broader interoperability. The platform supports open agent protocols like A2A and MCP, allowing developers to orchestrate agents across different clouds and frameworks. This openness means that enterprises aren’t locked into a single LLM or service boundary. In fact, the Agent Service allows users to bring their own LLMs — whether hosted on Azure, AWS, or elsewhere — and still plug into the broader Foundry orchestration layer. It’s Microsoft’s strongest statement yet in support of a post-monolithic model strategy.
According to the Greyhound CIO Pulse 2025, 61% of CIOs globally now prioritise AI observability over raw model performance when evaluating GenAI investments. Many have already deployed small fleets of agents internally, only to find themselves blind to task flow errors, hidden cost inflation, or hallucinated handoffs. Observability is no longer optional — it’s the only way forward.
A Greyhound Fieldnote from the Head of Automation at a large Southeast Asian telecoms provider put it plainly: “We were building agents like mad last year. But when they failed, we had no idea why, and even less idea who approved what. What Azure’s Agent Service gives us isn’t just metrics. It gives us accountability.”
This launch shows Microsoft playing a long game. Rather than simply enabling more agent creation, the Agent Service is about governing agent operations. It gives engineering leaders the same kind of maturity model they’ve come to expect from modern CI/CD and AIOps stacks — now applied to agentic systems. It’s not just a DevTools upgrade. It’s infrastructure for the agent-native enterprise.
Greyhound Standpoint: At Greyhound Research, we believe this layer — observability, governance, and interoperability — is what will separate agent hype from agent reliability in 2025 and beyond. Microsoft’s Agent Service is not flashy. It’s foundational. And in enterprise AI, foundational wins.
Identity for AI Agents: Why Entra Agent ID Is More Than Just an Add-On
If observability is the nervous system of AI agent governance, identity is the spine. And with the introduction of Microsoft Entra Agent ID, Microsoft is signaling that it understands what few vendors have yet articulated: agents are not just tools — they are operational actors. And actors need identities, credentials, and boundaries.
In a world where AI agents can execute workflows, access data, generate content, and now interoperate across services, the risk of “agent sprawl” isn’t theoretical. It’s already showing up in enterprise security reviews, where agents built in sandboxes graduate to production use without identity, logging, or lifecycle management. Entra Agent ID assigns a unique, manageable identity to each agent created via Microsoft Copilot Studio or Azure AI Foundry — and crucially, integrates those identities into the organization’s existing Entra directory. This is not a convenience feature. It’s a governance anchor.
The value of this integration isn’t just in authentication — it’s in accountability. With Entra, security teams can apply policies, monitor activity, set least-privilege roles, and ensure agents don’t persist beyond their intended lifespan. And because Agent IDs are now first-class entities, they can be managed the same way as human or machine users — with full compliance, auditability, and deactivation paths.
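The lifecycle mechanics are worth spelling out. The sketch below is a deliberately simplified, hypothetical directory — not Entra's actual API — showing the three properties the paragraph describes: least-privilege scopes, a bounded lifespan, and a clean deactivation path that fails closed for unknown, expired, or revoked identities.

```python
from datetime import date, timedelta

class AgentDirectory:
    """Toy identity store: scoped permissions, expiry, and revocation for agents."""
    def __init__(self):
        self._entries = {}

    def register(self, agent_id: str, scopes: set, lifespan_days: int):
        # Least privilege: the agent holds only the scopes it was registered with.
        self._entries[agent_id] = {
            "scopes": set(scopes),
            "expires": date.today() + timedelta(days=lifespan_days),
            "active": True,
        }

    def is_authorized(self, agent_id: str, scope: str) -> bool:
        entry = self._entries.get(agent_id)
        if entry is None or not entry["active"] or date.today() > entry["expires"]:
            return False   # unknown, revoked, or expired identities fail closed
        return scope in entry["scopes"]

    def revoke(self, agent_id: str):
        """Deactivation path: the identity (and its audit trail) is retained, access is not."""
        self._entries[agent_id]["active"] = False

directory = AgentDirectory()
directory.register("onboarding-agent", {"hr.read"}, lifespan_days=90)
directory.register("temp-agent", {"hr.read"}, lifespan_days=90)
directory.revoke("temp-agent")   # agent decommissioned, but its record survives
```

The design choice that matters here is the fail-closed default: an agent nobody registered, or one past its intended lifespan, gets nothing — which is the inverse of how most sandbox-grown agents behave today.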
According to Greyhound CIO Pulse 2025, 57% of CISOs in Global 2000 firms have cited agent identity ambiguity as a top-5 security concern for GenAI deployment. Many enterprises have raced to develop agents with access to internal systems, only to discover later that those agents had no standardized identity, no scoped permissions, and no revocation trail. It’s not just a risk — it’s a liability.
A Greyhound Fieldnote from the Chief Information Security Officer of a global insurance firm drove the point home: “We don’t allow unbadged consultants to walk into our datacenter. Why are we letting nameless agents run inside our architecture? Entra Agent ID closes that loop.”
This is Microsoft leveraging its identity and security DNA in a way that directly addresses enterprise pain. It’s a step beyond best practice — it’s a recognition that AI agents will soon require the same operational rigor as traditional users and APIs. In fact, the better these agents become, the more dangerous they are when left unmanaged. Entra Agent ID isn’t just a feature — it’s a policy control plane for agent-based computing.
Greyhound Standpoint: At Greyhound Research, we believe this move is long overdue in the market. Vendors have been too eager to show what agents can do — and far too quiet about how they’ll be governed. Microsoft’s approach with Entra Agent ID is a rare moment of security-first thinking in an industry still obsessed with capability demos. And that’s exactly what enterprises need now.
Domain-Tuned Agents in Microsoft 365: Precision, Security, and Style
For all the hype around AI agents, most enterprise leaders have the same unspoken skepticism: “Can this thing actually work the way we work?” With Microsoft 365 Copilot Tuning, Microsoft is finally addressing that head-on. It’s offering enterprises a way to build agents that don’t just understand tasks — they reflect internal process, tone, and context.
The tuning process is low-code by design. Business users and developers alike can train Copilot agents on their organization’s specific data, workflows, and styles securely and from within the Microsoft 365 boundary. This means an HR team can tune an agent to handle onboarding in line with company policy. A legal department can train one to draft documents in firm-specific language. A marketing team can generate brand-compliant copy without ever leaving Outlook or Teams. The real win here isn’t just productivity. It’s precision.
Crucially, Microsoft is making it possible to do this without forcing enterprises to send sensitive data into external LLM pipelines. The agents operate within the Microsoft 365 security model, which, for many regulated firms, is already vetted and approved. That keeps governance teams happy — and IT out of escalation hell.
And tuning isn’t where it stops. These domain-specific agents can also participate in multi-agent workflows. Using Copilot Studio’s orchestration capabilities, enterprises can chain tuned agents together to handle more complex, cross-functional flows. This isn’t just horizontal automation — it’s vertical domain intelligence, stitched together by design.
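The chaining idea can be sketched in a few lines. The agents below are stand-ins — hypothetical functions, not Copilot Studio's actual interface — but they show the composition pattern: each tuned agent applies its domain's rules, and the orchestration layer threads one agent's output into the next.

```python
def hr_agent(text: str) -> str:
    # Hypothetical tuned agent: applies company onboarding policy to a request.
    return f"[HR-policy-checked] {text}"

def legal_agent(text: str) -> str:
    # Hypothetical tuned agent: rewrites output in firm-specific contract language.
    return f"[legal-reviewed] {text}"

def chain(*agents):
    """Compose tuned agents into one cross-functional flow."""
    def workflow(text: str) -> str:
        for agent in agents:
            text = agent(text)   # each agent transforms the running result
        return text
    return workflow

onboarding_flow = chain(hr_agent, legal_agent)
result = onboarding_flow("draft offer letter for new hire")
```

Trivial as it looks, this is the "vertical domain intelligence, stitched together" claim in miniature: each function encodes one department's tuning, and the chain — not any single agent — owns the end-to-end outcome.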
According to Greyhound CIO Pulse 2025, 62% of enterprise technology decision-makers believe GenAI’s biggest limitation today isn’t cost, compute, or hallucination — it’s contextual irrelevance. Enterprises don’t want agents that write generic copy or analyze out-of-context spreadsheets. They want agents that understand how things are done here. Microsoft 365 Copilot Tuning aims to close that gap, not with another model, but with tools that empower the enterprise to embed its DNA into the agent.
A Greyhound Fieldnote from a CIO at a leading European law firm summed up the appeal: “We’ve tested LLM agents before — some were brilliant in theory, but unusable in practice. With tuning, we don’t need to rebuild the wheel. We just teach the agent how we already roll.”
Microsoft’s play here isn’t about mass-market GenAI. It’s about enterprise-grade specificity. By embedding customization and security directly into the existing Microsoft 365 experience, the company is positioning its platform not as a place to test AI, but as the place where enterprise AI gets production-ready.
Greyhound Standpoint: At Greyhound Research, we believe this is the most practical and immediately impactful piece of Microsoft’s agent strategy. It doesn’t scream innovation, but it delivers integration. And in the enterprise, that’s what wins adoption.
Not Just Smarter Agents — Safer Ones That Scale
With the pace at which AI agents are being rolled out, the question is no longer “Can we build them?” It’s “Can we control them at scale?” Microsoft’s Build 2025 announcements offer a clear, if quietly radical, answer: control doesn’t come after capability — it must be designed in from the start.
Taken together, the launches across Azure AI Foundry, Copilot Studio, Microsoft 365 Copilot Tuning, and Entra Agent ID represent something larger than a feature set. They reflect an architecture designed for multi-agent security posture, identity-aware governance, and cross-cloud interoperability. This isn’t just AI in enterprise — it’s AI as enterprise infrastructure.
That distinction matters. Most vendors still pitch agents as “power users with LLMs.” But Microsoft’s approach sees them as operational entities — with entitlements, boundaries, logs, and accountability trails. The combination of observability tooling, role-based identity management, and orchestration logic changes the frame. These aren’t plugins. They’re programmable actors embedded in enterprise workflows, and they must be held to the same standards as any other system actor.
According to Greyhound CIO Pulse 2025, 69% of CIOs surveyed now say their top concern with AI agents is not performance or hallucination — it’s “silent privilege creep.” Agents begin with modest scopes but expand unchecked, often accumulating more access and autonomy than originally intended. The result? Audit exposure, integration fragility, and compliance landmines. Microsoft’s zero-trust approach is timely — and, frankly, necessary.
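Silent privilege creep is easy to describe and easy to audit for — if you keep a baseline. The sketch below (our own illustration, with hypothetical agent and scope names) diffs each agent's current entitlements against what it was originally granted and reports only the excess.

```python
def privilege_creep(granted: dict, current: dict) -> dict:
    """Report scopes each agent holds today that it was never originally granted."""
    report = {}
    for agent_id, scopes in current.items():
        baseline = granted.get(agent_id, set())   # unknown agents have no baseline
        extra = set(scopes) - set(baseline)
        if extra:
            report[agent_id] = sorted(extra)
    return report

# Baseline at deployment vs. entitlements observed today (hypothetical data).
granted = {"expense-agent": {"expenses.read"}}
current = {"expense-agent": {"expenses.read", "expenses.approve", "payroll.read"}}
findings = privilege_creep(granted, current)
```

The audit only works because the baseline exists — which is precisely why identity-anchored registration (the Entra Agent ID model) has to come before, not after, agents go live.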
A Greyhound Fieldnote from a global bank’s Head of Risk made it plain: “The risk isn’t rogue code. It’s silent agents — ones you forgot were live, connected to systems you no longer monitor. If we don’t treat them as first-class actors in our identity and observability stack, they’ll eventually cause first-class problems.”
What stands out in Microsoft’s posture is the framing of AI agents not as a novelty, but as a new class of digital citizen. That framing has profound implications for governance, compliance, and architecture. Enterprises will need to evolve their security practices, not just to manage AI outputs, but to govern AI operations.
Greyhound Standpoint: At Greyhound Research, we believe Microsoft is one of the few major vendors actually treating this challenge with the weight it deserves. These announcements aren’t a sprint for headlines. They’re blueprints for enterprise resilience. Because in 2025, it’s not the smartest agents that will survive — it’s the safest ones that can scale without setting off alarms.

Analyst In Focus: Sanchit Vir Gogia
Sanchit Vir Gogia, or SVG as he is popularly known, is a globally recognised technology analyst, innovation strategist, digital consultant and board advisor. SVG is the Chief Analyst, Founder & CEO of Greyhound Research, a Global, Award-Winning Technology Research, Advisory, Consulting & Education firm. Greyhound Research works closely with global organizations, their CxOs and the Board of Directors on Technology & Digital Transformation decisions. SVG is also the Founder & CEO of The House Of Greyhound, an eclectic venture focusing on interdisciplinary innovation.
Copyright Policy. All content contained on the Greyhound Research website is protected by copyright law and may not be reproduced, distributed, transmitted, displayed, published, or broadcast without the prior written permission of Greyhound Research or, in the case of third-party materials, the prior written consent of the copyright owner of that content. You may not alter, delete, obscure, or conceal any trademark, copyright, or other notice appearing in any Greyhound Research content. We request our readers not to copy Greyhound Research content and not republish or redistribute them (in whole or partially) via emails or republishing them in any media, including websites, newsletters, or intranets. We understand that you may want to share this content with others, so we’ve added tools under each content piece that allow you to share the content. If you have any questions, please get in touch with our Community Relations Team at connect@thofgr.com.
