In this blog post, Goldman Sachs Is Automating Accounting With AI: What It Means for You, we will unpack what’s really happening when a major bank starts automating accounting with AI, and what practical, low-risk moves Australian businesses can take next.

If you’re a CIO, IT manager, CTO, or tech lead in a 50–500 person business, the headline isn’t “banks are doing cool AI stuff.” It’s this: the cost and speed expectations for back-office work are changing, and your finance and operations teams will start asking why month-end still hurts.

Let’s keep this grounded. Most organisations don’t need a “robot accountant.” They need fewer manual steps, fewer spreadsheet hand-offs, tighter controls, and better audit trails. AI can help—but only if you wrap it in the right guardrails.

What Goldman Sachs is doing in plain English

Goldman Sachs has been working with AI models (notably Anthropic’s Claude) to build AI agents that can complete multi-step internal tasks. Think of an agent as software that can: read information, follow rules, take actions in other systems, and report what it did.

According to the reporting, these agents have been aimed at workflows such as trade and transaction accounting and other operational processes. That matters because these are high-volume, rules-heavy tasks where small errors create big downstream costs.

The main technology behind it (high-level first)

The core technology is a large language model (LLM). An LLM is the “brain” that can read and write human language, extract meaning, and follow instructions. You’ve seen this style of AI in tools like ChatGPT and Claude.

On its own, an LLM is just a conversation engine. The big leap is when you combine an LLM with:

  • Your business context (policies, procedures, chart of accounts, process maps).
  • Tools and permissions (APIs and service accounts to safely read/write data in finance systems).
  • Workflow orchestration (logic that says “first do A, then B, then ask for approval, then post”).
  • Controls (logging, approvals, segregation of duties, and model safety checks).
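
To make the combination above concrete, here is a minimal Python sketch of the pattern: the LLM drafts, your business context constrains it, and a control flag ensures nothing posts without a human. Every name below (fetch_policy_context, llm_complete, the sample accounts) is an illustrative stub, not a real vendor API.

    # Sketch only: an "agent" = LLM + context + tools + orchestration + controls.
    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        description: str                  # e.g. a suggested journal entry
        evidence: list[str]               # source documents the draft relied on
        requires_approval: bool = True    # control: nothing posts without a human

    def fetch_policy_context() -> str:
        # Knowledge: chart of accounts, close procedures, approval thresholds.
        return "Accruals over $10,000 require controller approval."

    def llm_complete(prompt: str) -> str:
        # Brain: stand-in for a call to an LLM such as Claude or an Azure OpenAI deployment.
        return "Suggested journal: DR 6200 Consulting $12,000 / CR 2100 Accruals $12,000"

    def run_agent(task: str) -> ProposedAction:
        context = fetch_policy_context()                    # business context
        draft = llm_complete(f"{context}\n\nTask: {task}")  # LLM does the drafting
        return ProposedAction(description=draft, evidence=["invoice_1042.pdf"])

    action = run_agent("Draft the month-end accrual for unbilled consulting work")
    print(action)   # a human reviews this output before anything is posted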

AI assistant vs AI agent (why this matters)

An AI assistant is typically “help me write/summarise/analyse.” It’s useful, but it often leaves humans to do the operational steps.

An AI agent is “do the work” within defined boundaries, like reconciling exceptions, drafting journals for approval, or preparing a variance explanation using source data.

That shift from “help me” to “handle it” is what makes executives pay attention. It moves AI from productivity to process cost reduction.

What this means for mid-sized Australian businesses

Goldman’s scale is obviously different. But the underlying pain points—manual handling, fragmented systems, inconsistent processes—are very similar in mid-market organisations.

1) Month-end close will become a competitive advantage

If your month-end close takes 10–15 business days, you’re making decisions on old information. That creates real cost: delayed hiring decisions, late pricing adjustments, and slow responses to budget blowouts.

AI can reduce close time by automating the “glue work”:

  • Chasing missing invoices and approvals
  • Matching transactions to the right categories based on rules and history
  • Explaining variances using notes pulled from source systems
  • Preparing reconciliation packs and checklists
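
As a rough illustration of the matching point above, coding suggestions can come from explicit rules first and previously approved history second, with anything unknown routed to a human. This is a sketch only; the vendors and account codes are made up.

    # Sketch: rule-plus-history matching for transaction coding suggestions.
    RULES = {"Telstra": "6400-Telecommunications", "CabCharge": "6300-Travel"}
    HISTORY = {"Officeworks": "6100-Office Supplies"}   # last human-approved coding

    def suggest_code(vendor: str) -> tuple[str, str]:
        if vendor in RULES:
            return RULES[vendor], "rule"
        if vendor in HISTORY:
            return HISTORY[vendor], "history"
        return "UNCODED", "needs human review"   # never guess silently

    for vendor in ["Telstra", "Officeworks", "New Vendor Pty Ltd"]:
        code, source = suggest_code(vendor)
        print(f"{vendor}: {code} ({source})")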

Business outcome: faster decisions and less overtime during close.

2) The real ROI is less rework, not fewer people

A lot of AI talk gets stuck on headcount reduction. In practice, the first win is usually reducing rework: fewer mistakes, fewer handoffs, fewer “who changed this spreadsheet?” moments.

In many organisations, finance doesn’t want to cut staff. It wants its best people spending time on analysis, not copy/paste.

Business outcome: the same team can support more growth without burning out.

3) Audit and compliance expectations are rising (even if you’re not regulated like a bank)

When AI starts touching financial processes, auditors will ask basic questions:

  • Who approved it?
  • What data did it use?
  • What changed, and when?
  • Can we reproduce the result?

This is where a lot of “quick AI wins” fail. If someone is using an AI tool ad-hoc, outside governed systems, you can end up with invisible risk.

Business outcome: safer automation that stands up to audit scrutiny.

4) Security becomes more important, not less

Accounting automation touches sensitive information: payroll, vendor bank details, invoices, customer contracts, and sometimes M&A or pricing.

In Australia, you also have to consider privacy obligations and breach impact. Even if you’re not a bank, a data leak can still become an incident that hits customers and your reputation.

That’s why we treat AI as part of your security program, not a side experiment. At CloudPro Inc, we typically anchor this in:

  • Microsoft 365 (your core productivity suite—email, documents, collaboration)
  • Microsoft Purview (which helps classify and protect sensitive data)
  • Microsoft Defender (which helps detect and respond to security threats)
  • Microsoft Intune (which manages and secures all your company devices)
  • Essential 8 (the Australian government’s cybersecurity framework many organisations are expected to align to)

Business outcome: lower risk of AI becoming a new “leak path” for sensitive finance data.

A realistic scenario (what this looks like outside a global bank)

Imagine a 200-person professional services firm in Melbourne. Month-end takes 12 business days. The finance manager spends days chasing timesheets, matching expenses to cost centres, and manually explaining project margin variances to leadership.

They don’t need “AI to do accounting.” They need a better workflow:

  • An AI assistant that drafts variance explanations using data from project systems and the general ledger
  • An agent that flags exceptions (missing approvals, unusual transactions) and routes them to the right person
  • Automated creation of a close checklist and evidence pack for audit

The result is not magic. It’s fewer late nights, fewer errors, and leadership getting cleaner numbers earlier.

How to implement this safely (practical steps)

Step 1: Pick one process with measurable pain

Good candidates are repetitive, rules-based, and high-volume:

  • Invoice coding suggestions
  • Reconciliation exception handling
  • Vendor onboarding checks
  • Close commentary drafts (not final posting)

Step 2: Separate “recommend” from “post”

In early phases, AI should recommend actions, not execute irreversible ones.

Example: AI drafts a journal with supporting evidence and routes it for approval. A human approves, and only then does it get posted.
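
A minimal sketch of that gate, assuming nothing about your ERP: the agent can only create drafts, and the posting step refuses to run unless a human approval has been recorded. All function names here are hypothetical.

    # Sketch: separate "recommend" from "post".
    def draft_journal(description: str, amount: float) -> dict:
        return {"description": description, "amount": amount, "status": "DRAFT"}

    def approve(journal: dict, approver: str) -> dict:
        journal["status"] = "APPROVED"
        journal["approver"] = approver
        return journal

    def post(journal: dict) -> None:
        # The irreversible step is blocked unless a human approval exists.
        if journal.get("status") != "APPROVED":
            raise PermissionError("Refusing to post: no human approval recorded")
        print(f"Posted: {journal['description']} (approved by {journal['approver']})")

    journal = draft_journal("Accrue unbilled consulting revenue", 12000.00)
    post(approve(journal, approver="finance.controller@example.com"))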

Step 3: Put guardrails around data and identity

This is where your Microsoft stack matters. Use the same identity controls you rely on for everything else:

  • Single sign-on and multi-factor authentication
  • Least-privilege access to finance data
  • Conditional access policies (e.g., block risky logins)
  • Device compliance via Intune
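
One simple guardrail you can also enforce in code, regardless of vendor: check that the agent’s identity has only the scopes it needs before it runs. The scope names below are illustrative; in practice they map to roles in your identity provider and finance system.

    # Sketch: enforce least-privilege at the agent boundary.
    ALLOWED_SCOPES = {"gl.read", "invoices.read"}   # no write/post scopes in phase one

    def assert_least_privilege(granted_scopes: set[str]) -> None:
        excess = granted_scopes - ALLOWED_SCOPES
        if excess:
            raise PermissionError(f"Agent identity has more access than it needs: {sorted(excess)}")

    assert_least_privilege({"gl.read", "invoices.read"})   # passes
    # assert_least_privilege({"gl.read", "gl.post"})       # would raise PermissionError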

Step 4: Log everything and make it auditable

If an AI agent touched a process, you should be able to answer: what happened, who approved it, and what evidence was used.
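
As a sketch of what “auditable” can mean in practice, each agent step can emit one structured log entry that captures exactly those answers. The field names are an assumption, not a standard.

    # Sketch: one structured, append-only log entry per agent action.
    import json
    from datetime import datetime, timezone

    def log_agent_step(action: str, approver: str, evidence: list[str]) -> str:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,                       # what happened
            "approver": approver,                   # who approved it
            "evidence": evidence,                   # what evidence was used
            "model": "example-model-and-version",   # which model produced the draft
        }
        return json.dumps(entry)

    print(log_agent_step(
        action="Drafted variance explanation for project margin",
        approver="finance.controller@example.com",
        evidence=["GL extract June", "timesheet export week 26"],
    ))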

Step 5: Align to Essential 8 (even if you’re not required yet)

Essential 8 is a practical baseline for reducing cyber risk. AI doesn’t replace it. In many cases, AI increases the need to get the basics right: patching, access control, backups, application control, and Office macro governance.

For developers and IT leaders (a simple reference architecture)

Here’s a non-scary way to think about the “agent” pattern in a Microsoft environment:

  • Interface: a chat UI in Teams or a small internal web app
  • Brain: an LLM (OpenAI or Anthropic Claude) with a system prompt and policies
  • Knowledge: secure retrieval from finance policies and process docs (so it uses your rules, not internet guesses)
  • Actions: connectors/APIs to finance systems, approvals, ticketing, and document storage
  • Controls: approvals, logging, rate limits, and monitoring

Example pseudo-flow:

// Pseudocode: “recommend, don’t post” close assistant
onCloseDay():
    data = fetchGLExtract(read_only=true)        // read-only access to the general ledger
    exceptions = findAnomalies(data)

    for item in exceptions:
        evidence = gatherEvidence(item)          // invoices, approvals, prior-month patterns
        draft = llm.generate(
            task="Draft variance explanation and suggested journal",
            inputs={ item, evidence, companyPolicy }
        )

        createApprovalRequest(
            approver=financeController,
            content=draft,
            attachments=evidence,
            action="RecommendOnly"               // a human approves before anything is posted
        )

    logAllSteps()                                // every step is recorded for audit

This is intentionally boring. Boring is good in finance.

Where CloudPro Inc fits (and why most AI projects stall)

Most mid-market AI initiatives don’t fail because the model is “not smart enough.” They fail because:

  • Data is messy and scattered
  • Access control is inconsistent
  • There’s no clear approval workflow
  • No one owns the process end-to-end

CloudPro Inc is a Melbourne-based Microsoft Partner and Wiz Security Integrator with 20+ years of enterprise IT experience. We’re hands-on, and we build AI into the systems you already rely on (Azure, Microsoft 365, Intune, Windows 365, and your security controls) so it’s usable, supportable, and safe.

Wrap-up

Goldman Sachs automating accounting with AI is a signal: AI is moving from “helpful chat” to real operational automation. For your business, the opportunity isn’t copying a global bank. It’s choosing one painful finance workflow, tightening controls, and using AI to reduce rework and speed up decision-making.

If you’re not sure whether your current month-end process, security posture, or Microsoft setup is costing you more than it should, we’re happy to take a look and map out a practical next step—no pressure, no strings attached.

