RelayOps
How a Sprint works

From documented policy to audit-ready evidence — in 30 days.

A worked example focused on the Standard 6 preference-to-plate evidence trail — the gap most visible in published audit findings since 1 November 2025. Walked through week by week, based on a representative regional NSW provider we'll call Ironbark Aged Care Group.

Ironbark Aged Care Group, Sarah Mitchell, and the specific scenarios on this page are illustrative composites used for educational purposes. They do not reference any real provider, resident, or assessor. The scenario is grounded in public-domain audit findings, Commission bulletins, and sector commentary published since 1 November 2025.
The scenario

Who Ironbark is.

The provider

Ironbark Aged Care Group

120 beds across 2 residential homes in Maitland and Cessnock, NSW (Lower Hunter). Not-for-profit, board-governed. Tech stack: Microsoft 365 + AutumnCare clinical system. Operating margin -2.4%.

The buyer

Sarah Mitchell — Quality & Compliance Manager

RN by training, 11 years in aged care quality. Reports to the Director of Care. Last assessment flagged a Standard 6 finding around the preference-to-plate evidence trail. Next assessment visit expected between July and October.

The Sprint outcome — day 30

A workflow that captures preference, action, and evidence per resident per day — particularly around modified-texture diets and food-related preferences. Automatically, on-device, timestamped to the system clock, linked to the care plan, and surfaceable as a one-click Standard 6 audit pack.
Week 1

Map the workflow as it actually runs.

We start with what's already there. Sarah and one operational lead from each home spend half a day with us mapping one specific Standard 6 evidence trail — from documented food preference, through daily delivery to the table, to where evidence is supposed to be captured.

Before — what runs today
  • Preferences live in three places: care plan PDF, kitchen folder, handover notes
  • When a preference changes mid-cycle, Sarah emails the kitchen and asks the floor supervisor to brief the next shift verbally
  • Daily delivery evidence is captured on paper sheets near each resident's room — ~60% returned, ~40% lost
  • Sarah spends 4–6 hours every Wednesday reconstructing the evidence trail
After Week 1 — what we know
  • Every node in the current workflow mapped — who, where, what tool, what fails when
  • The three places preferences live, marked as the source-of-truth problem
  • Five specific failure modes documented with examples from the last 30 days
  • Agreement on the one workflow to redesign in Week 2 — not three, not five — one
Sarah's commitment this week: 3.5 hours (workshop + walk-the-floor session). No tools changed yet.
Week 2

Design the target workflow.

Design first. Tool second. We sketch the to-be workflow on a single page before opening any platform — who does what, where each piece of evidence is captured, what triggers what, and where the audit pack assembles from.

Before — design problems
  • Preference data duplicated three times
  • No system event fires when preference changes — human-to-human pass-through
  • Evidence capture comes last, on paper, with no feedback loop
  • Audit pack reconstructed weekly from scratch — Sarah's manual labour
After Week 2 — target design
  • Preference lives in one place — the resident's care plan record
  • Change triggers events to kitchen prep list, handover note, supervisor brief — automatically
  • Daily capture moves from paper to a Forms checklist on iPad. Three taps. Timestamped to system clock.
  • Missing captures surface in real-time on a SharePoint dashboard
  • Audit pack assembles itself — one-click PDF, ready for an assessor
Sarah's commitment this week: 1.5 hours (design review + sign-off). We do the build prep alongside.
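The fan-out at the heart of the target design can be sketched in a few lines. In the real build this would be a flow inside the Microsoft 365 tenant reading from the care plan record; the sketch below is illustrative only, and every name in it (`PreferenceChange`, `fan_out`, the downstream targets) is an assumption for the purposes of the example, not Ironbark's actual configuration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: one preference-change event fans out to three
# downstream updates, replacing the human-to-human pass-through.

@dataclass
class PreferenceChange:
    resident_id: str
    preference: str  # e.g. "minced & moist (IDDSI level 5)"
    changed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def fan_out(change: PreferenceChange) -> dict:
    """One event, three automatic updates -- kitchen, handover, supervisor."""
    stamp = change.changed_at.isoformat()
    return {
        "kitchen_prep_list": f"{change.resident_id}: {change.preference} ({stamp})",
        "handover_note":     f"Preference update for {change.resident_id} at {stamp}",
        "supervisor_brief":  f"Brief next shift: {change.resident_id} -> {change.preference}",
    }

event = fan_out(PreferenceChange("R-014", "minced & moist (IDDSI level 5)"))
assert set(event) == {"kitchen_prep_list", "handover_note", "supervisor_brief"}
```

The point of the design is visible in the shape of the code: the change is written once, and everything downstream is derived from that single source of truth.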
Week 3

Build the workflow. Test it with real data.

We configure the workflow inside Ironbark's Microsoft 365 tenant. No new platform. No licence uplift. The kitchen team and one floor supervisor run it live for the week against real residents, real preferences, real shifts.

Before — what worried the team
  • "The kitchen team won't use another system"
  • "Floor supervisors are flat out — they'll skip the checklist on busy days"
  • "What if the iPad battery dies?"
  • "What if a preference changes after the shift starts?"
After Week 3 — what the live week showed
  • Kitchen team didn't need to use the new system — it reads from the care plan and updates their existing prep list. Net change for them: zero.
  • Floor supervisor checklist took 90 seconds per round, not 5 minutes. Three taps per resident. Capture rate over the week: 94%.
  • Offline mode on Forms held captures locally; they synced when the iPad came back online. No data loss.
  • A mid-shift preference change was the one real edge case; we adjusted the workflow on Friday to handle it.
Sarah's commitment this week: 2 hours (live observation + daily debrief). The workflow runs without her.
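The offline behaviour the live week relied on can be sketched as a local queue. The key property is that each capture is timestamped at the moment of the tap, not at the moment of sync, so a late sync cannot distort the evidence trail. The class and field names below are assumptions for illustration, not the Forms internals.

```python
from datetime import datetime, timezone

class CaptureQueue:
    """Minimal sketch of offline capture: hold locally, sync unchanged later."""

    def __init__(self):
        self.local = []    # captures held on-device while offline
        self.synced = []   # captures received by the server

    def capture(self, resident_id: str, meal: str, delivered: bool):
        self.local.append({
            "resident_id": resident_id,
            "meal": meal,
            "delivered": delivered,
            # Timestamp is fixed at capture time, not sync time.
            "captured_at": datetime.now(timezone.utc).isoformat(),
        })

    def sync(self):
        """Device back online: flush the local queue without altering records."""
        self.synced.extend(self.local)
        self.local.clear()

q = CaptureQueue()
q.capture("R-014", "lunch", delivered=True)  # iPad offline: held locally
q.sync()                                     # back online: no data loss
assert len(q.synced) == 1 and q.local == []
```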
Week 4

Handover and the evidence pack.

Final test against an extended week of data. Documentation. Training video. Policy-to-evidence map the assessor can read. A one-page Board summary Sarah can hand to the CEO. We're gone on day 30.

Before — what Sarah used to walk into
  • Wednesday: 4–6 hours reconstructing the evidence trail
  • Monthly Board pack assembled from spreadsheets and memory
  • Assessor visit anxiety: "will I find the trail for any resident, any day?"
  • Standard 6 evidence trail listed as a current control gap on the risk register
After Week 4 — what she walks into now
  • Wednesday: 20 minutes to review the auto-assembled audit pack. Four hours back.
  • Monthly Board pack: live SharePoint dashboard, exported on demand.
  • Assessor visit response: one-click audit pack for any resident, any 90-day window.
  • Standard 6 evidence trail control closed on the risk register. Sarah signed it off.
Sarah's total commitment over 30 days: ~9 hours. We promised ≤6. The extra 3 were her choice.
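The one-click audit pack is, at bottom, a query: for one resident and any 90-day window, pair each day with its capture and flag the gaps. In Ironbark's build this is a SharePoint view exported to PDF; the sketch below only models the query logic, and the function name, record fields, and sample dates are all illustrative assumptions.

```python
from datetime import date, timedelta

def audit_pack(captures: list[dict], resident_id: str,
               end: date, days: int = 90) -> dict:
    """Pair each day in the window with its capture and list missing days."""
    window = {end - timedelta(d): None for d in range(days)}
    for c in captures:
        if c["resident_id"] == resident_id and c["date"] in window:
            window[c["date"]] = c
    return {
        "resident_id": resident_id,
        "window": (min(window), end),
        "captured": sum(1 for v in window.values() if v),
        "missing_days": sorted(d for d, v in window.items() if v is None),
    }

# Illustrative data: a full 90 days of captures for one resident.
captures = [
    {"resident_id": "R-014", "date": date(2026, 3, 1) - timedelta(d),
     "delivered": True}
    for d in range(90)
]
pack = audit_pack(captures, "R-014", end=date(2026, 3, 1))
assert pack["captured"] == 90 and pack["missing_days"] == []
```

A gap in the trail shows up immediately: remove one day's capture and that date appears in `missing_days`, which is the same signal the real-time dashboard surfaces.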
How the decision got made

Why Ironbark said yes.

Sarah's CEO read the engagement letter Tuesday and signed it Thursday. The decision wasn't about the A$9,950. It was about three things, in order:

1. Specificity

The Sprint named one gap — the Standard 6 food-and-nutrition evidence trail. Not "compliance." Not "audit readiness." One specific thing.

2. Risk reversal

The 50% refund meant the worst-case downside was A$4,975. Easier to approve than a year-long platform commitment.

3. No platform sprawl

Everything sat inside the Microsoft tenant they already paid for. IT didn't need to be in the room.

If the Ironbark scenario looks like yours.

The 25-minute scoping call covers your current Standards exposure, which gap to close first, and whether a Sprint is the right fit. No pitch deck.

Book the call