In this blog post, The New Enterprise AI Stack: Identity, Observability, and Control, we explain why getting AI into the business is no longer just about choosing a model. For most organisations, the real work now sits behind the scenes in identity, observability, and control.
If that sounds abstract, here is the plain-English version. Identity decides who can use AI and what they can access. Observability means you can actually see what the AI is doing, what it costs, and where it is making mistakes. Control puts rules around data, outputs, and approvals so AI helps the business without creating a security or compliance problem.
This matters because many mid-sized businesses are in the same position right now. A few teams are using ChatGPT. Someone is trialling Microsoft 365 Copilot. Another team wants an internal chatbot. The business is excited, but nobody is fully sure who has access, whether sensitive information is being pasted into public tools, or how to prove the setup is safe if a customer, auditor, or board member asks.
That is why the new enterprise AI stack matters. The model still matters, of course. But the companies that get value from AI over the next few years will not be the ones with the flashiest demo. They will be the ones with the cleanest operating model.
Why the model is not the stack anymore
A year or two ago, many AI discussions were about which model was smartest. That is still part of the picture, but it is not the business bottleneck anymore. The real challenge is connecting AI to your people, your data, your devices, your policies, and your risk controls in a way that is manageable at scale.
In Australia, that is more than an IT housekeeping issue. The OAIC has made it clear that the Privacy Act applies when AI is used with personal information, and it recommends, as a best practice, that organisations do not put personal information, especially sensitive information, into publicly available generative AI tools. That moves AI governance out of the innovation bucket and into the risk, compliance, and executive decision-making bucket.
1 Identity comes first
If you only fix one part of your AI stack this year, start with identity. In plain English, identity is how the business knows who a person is, whether they should have access, and under what conditions. It also covers non-human access, such as an AI app, workflow, or agent acting on behalf of a user.
Microsoft Entra, which manages identity and access for Microsoft environments, makes these decisions through Conditional Access policies. That means you can create practical if-then rules based on things like the user, their device, their location, the app they are trying to reach, and increasingly the identity of the AI agent itself. A simple example is allowing access to an approved AI tool only from a company-managed device with multi-factor sign-in completed.
That same thinking is now showing up across enterprise AI platforms. OpenAI supports enterprise single sign-on, which lets staff use their normal work login, and automated user provisioning, which creates and removes accounts from your identity system. Anthropic offers the same sort of enterprise controls, including single sign-on, automated user management, audit logs, and role-based permissions.
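To make provisioning concrete, here is a minimal sketch of what automated user management looks like under the SCIM 2.0 standard that enterprise AI vendors typically support. The endpoint URL, token, and field choices below are placeholders for illustration, not any specific vendor's exact API.

import requests

SCIM_BASE = "https://scim.example-ai-vendor.com/scim/v2"   # hypothetical endpoint
HEADERS = {
    "Authorization": "Bearer <provisioning-token>",        # placeholder credential
    "Content-Type": "application/scim+json",
}

def provision_user(email, display_name):
    # Create the account when someone joins; returns the vendor's user id.
    payload = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": email,
        "displayName": display_name,
        "active": True,
    }
    resp = requests.post(f"{SCIM_BASE}/Users", json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["id"]

def deprovision_user(user_id):
    # Deactivate the account the moment someone is offboarded.
    patch = {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [{"op": "replace", "path": "active", "value": False}],
    }
    requests.patch(f"{SCIM_BASE}/Users/{user_id}", json=patch, headers=HEADERS).raise_for_status()

The point of the sketch is the lifecycle: the same system that creates the account is the one that switches it off, so access ends the day employment does.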
For a business leader, the outcome is straightforward. Identity reduces the chance of ex-employees keeping access, limits who can reach sensitive data, speeds up onboarding and offboarding, and gives you a cleaner answer when someone asks, “Who in this business can use AI, exactly?”
If employee is in Finance
and device is company-managed
and multi-factor sign-in is complete
then allow access to the approved AI app
else require extra verification or block access
This is what a modern AI access rule looks like in practice. Not exciting, but very valuable.
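For readers who want to see the plumbing, here is roughly what that rule becomes as a Microsoft Entra Conditional Access policy created through the Microsoft Graph API. The group and app IDs are placeholders, and the policy starts in report-only mode so you can see who it would affect before anyone is actually blocked.

import requests

policy = {
    "displayName": "Finance - approved AI app - require MFA and managed device",
    "state": "enabledForReportingButNotEnforced",   # report-only while you test
    "conditions": {
        "users": {"includeGroups": ["<finance-group-object-id>"]},
        "applications": {"includeApplications": ["<approved-ai-app-id>"]},
        "clientAppTypes": ["all"],
    },
    "grantControls": {
        "operator": "AND",                          # every control below must pass
        "builtInControls": ["mfa", "compliantDevice"],
    },
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies",
    json=policy,
    headers={"Authorization": "Bearer <graph-access-token>"},
)
resp.raise_for_status()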
2 Observability stops AI becoming a black box
Once AI is live, the next problem appears quickly. You know people are using it, but you cannot see enough. Which prompts are failing? Which teams are driving costs? Which answers are slow, inaccurate, or risky? If nobody can answer those questions, AI becomes a black box, and black boxes do not survive budget reviews for long.
That is where observability comes in. Observability means collecting the right signals so you can monitor, understand, and troubleshoot an AI system over time. In Microsoft Foundry, Microsoft’s current enterprise AI platform on Azure, that includes evaluation, monitoring, and tracing. In practical terms, teams can track things like response quality, safety, token consumption, latency, error rates, and the execution path an AI agent took to produce an answer.
This is more important than it sounds. Traditional software usually does the same thing each time. AI does not. The same question can produce different answers, and a helpful answer can still be wrong, poorly sourced, or based on the wrong internal document. Microsoft’s observability tooling is designed to measure quality and safety, not just uptime.
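As a flavour of what this looks like at the code level, here is a minimal per-request observability wrapper around an OpenAI-style client. The log_metric function is a stand-in for whatever monitoring pipeline you already run; only the client call and the usage fields follow the real SDK.

import time

def log_metric(name, value, tags=None):
    # Stand-in for your real monitoring pipeline (App Insights, Datadog, etc.).
    print(name, value, tags or {})

def observed_call(client, model, messages):
    # Wrap every AI call so latency, token usage, and errors are always recorded.
    start = time.monotonic()
    try:
        resp = client.chat.completions.create(model=model, messages=messages)
    except Exception:
        log_metric("ai.request.error", 1, tags={"model": model})
        raise
    log_metric("ai.request.latency_seconds", time.monotonic() - start, tags={"model": model})
    log_metric("ai.request.total_tokens", resp.usage.total_tokens, tags={"model": model})
    return resp.choices[0].message.content

Once every request flows through a wrapper like this, questions such as "which team is driving costs" become a dashboard query rather than a guessing game.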
Enterprise AI vendors are also adding better audit visibility around how people use their platforms. OpenAI’s enterprise tooling includes compliance data and logs that can feed into legal discovery, data loss prevention, and security monitoring tools. Anthropic has also added audit-focused enterprise controls and a Compliance API for better governance and visibility.
The business outcome here is huge. Better observability means tighter cost control, faster issue resolution, clearer reporting to executives, and fewer surprises after rollout. It also gives you a much better basis for deciding whether AI is genuinely improving productivity or just creating impressive-looking usage numbers.
3 Control is what makes AI safe to scale
If identity answers “who gets in” and observability answers “what is happening,” control answers “what is allowed.” This is the layer that protects sensitive information, enforces business rules, and stops a promising AI pilot from turning into a data leakage event.
In the Microsoft ecosystem, this is increasingly handled through tools such as Microsoft Purview, which manages data security and compliance, and Microsoft Defender, which helps discover and monitor risky activity. Microsoft Purview now supports governance and protection across AI apps including Microsoft 365 Copilot, Microsoft Foundry, and ChatGPT Enterprise. Microsoft also recommends using Defender for Cloud Apps to discover, monitor, or block generative AI apps in use across the organisation.
Put simply, this is how you stop staff from sending confidential information into the wrong place. Sensitivity labels let you classify information such as public, internal, confidential, or highly confidential. Data loss prevention rules can then use those labels to stop sensitive content being overshared. That is far more useful than a blanket “don’t use AI” email that nobody follows.
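Purview does this enforcement for you, but the underlying rule is simple enough to sketch. The label names and the blocked set below are assumptions for illustration, not Purview's actual configuration.

BLOCKED_LABELS = {"confidential", "highly confidential"}   # assumed label names

def may_send_to_ai(document_label):
    # Allow a document into the AI tool only if its sensitivity label permits it.
    return document_label.lower() not in BLOCKED_LABELS

assert may_send_to_ai("internal")
assert not may_send_to_ai("Highly Confidential")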
Control also matters inside the AI application itself. Good enterprise setups limit which internal systems an AI tool can connect to, which actions it can perform, what content it should refuse, and when a human must approve the next step. Microsoft’s current AI tooling also supports workflow patterns with branching logic and human-in-the-loop approval steps, which is exactly what many business processes need.
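A human-in-the-loop gate is equally simple in outline. This sketch assumes your own risk scoring and an approval queue living in your workflow tool; both are placeholders, not any particular platform's feature.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    risk: str   # "low" or "high", assigned by rules you define

def execute_with_approval(action, approval_queue):
    # High-risk actions wait for a named human; low-risk actions run straight through.
    if action.risk == "high":
        approval_queue.append(action)
        return "pending human approval"
    return f"executed: {action.description}"

queue = []
print(execute_with_approval(ProposedAction("email 500 customers", "high"), queue))
print(execute_with_approval(ProposedAction("summarise a meeting", "low"), queue))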
For Australian businesses, this is where privacy and cyber requirements start to meet day-to-day operations. If customer records, HR files, or commercial documents may contain personal information, the control layer needs to be designed deliberately from day one, not added after a scare. The OAIC’s guidance is a useful reminder that AI does not sit outside existing privacy obligations just because it is new.
The technology behind the stack in plain English
At a high level, the technology works like this.
A user signs in using the company identity system, such as Microsoft Entra or another approved identity provider.
The identity layer checks whether that user, device, and application should be allowed to access the AI tool.
The AI app pulls only the approved data or connected systems it is allowed to use.
The model generates an answer, while observability tools record performance, safety, quality, and usage signals.
The control layer applies policies such as data labels, content filters, logging, approval workflows, and retention rules before the result is shown or actioned.
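If you prefer to see those five steps as code, here is a deliberately simplified sketch. Every function is a trivial stand-in for the real layer it names.

def identity_allows(user, device):
    return device == "company-managed"                 # steps 1 and 2: identity check

def fetch_approved_data(user, question):
    return f"documents {user} may see"                 # step 3: scoped data only

def model_generate(question, context):
    return f"answer to '{question}' using {context}"   # step 4: the model call

def record_signals(user, answer):
    print(f"observability: logged a request for {user}")   # step 4: usage signals

def apply_policies(answer):
    return answer                                      # step 5: control layer

def handle_ai_request(user, device, question):
    if not identity_allows(user, device):
        return "access denied"
    context = fetch_approved_data(user, question)
    answer = model_generate(question, context)
    record_signals(user, answer)
    return apply_policies(answer)

print(handle_ai_request("alex", "company-managed", "What is our leave policy?"))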
That is the new stack. Not just model plus prompt. Model plus identity, logging, policy, and governance.
A real-world scenario
Imagine a 180-person professional services firm in Melbourne. Staff are already using AI, but mostly in an unplanned way. Some use public tools in the browser. Some have paid accounts on different platforms. Leadership likes the productivity gains, but the IT manager cannot confidently answer three basic questions: who is using what, what data is going in, and how much the business is really spending.
The fix is not to ban AI. It is to simplify it. Move staff onto approved tools, connect those tools to the business identity system, restrict access based on role and device, classify sensitive data, and turn on monitoring that shows usage, cost, and risk in one place.
Once that happens, AI stops being a free-for-all and starts becoming an operating capability. Offboarding becomes easier. Security reviews become faster. Business units get a safer path to experiment. And leadership can finally judge AI on outcomes instead of noise.
What decision-makers should do next
Choose your approved AI entry points instead of letting every team pick its own tool.
Connect AI access to your identity platform so users, groups, and devices are governed consistently.
Turn on observability early so you can measure quality, cost, and risk from the beginning.
Use data classification and loss prevention rules before broader rollout.
Review your setup against privacy obligations and broader cyber controls, including your Essential Eight roadmap where relevant.
The companies that get this right will not necessarily be the ones spending the most on AI. They will be the ones treating AI like a business system that needs the same discipline as email, finance, or customer data.
At CloudPro Inc, this is the work we do every day for organisations that want practical AI, not AI theatre. As a Melbourne-based Microsoft Partner and Wiz Security Integrator with more than 20 years of enterprise IT experience, we help businesses put the right foundations under Azure, Microsoft 365, Intune, Windows 365, OpenAI, Claude, Defender, and Wiz without turning the project into a science experiment.
If you are not sure whether your current AI setup is secure, governable, or costing more than it should, we are happy to take a look and give you a straight answer — no strings attached.