AI Transparency Notice

Last updated: 24 April 2026

Halo Education uses AI-assisted workflows to help generate draft educational outputs such as intervention plans, Individual Education Plan (IEP) drafts, parent summaries, referrals, risk reports, transition notes, and case reviews.

This notice explains how those AI features work.

1. AI is used to assist, not replace, human judgment

Halo's AI features generate drafts, suggestions, and structured options. They are designed to support educators and authorised users, not to replace professional judgment. Users are expected to review, edit, and approve outputs before they are relied on.

2. Human review is part of the workflow

Halo is built around guided workflows with checkpoints. Depending on the feature, users may:

  • Review evidence
  • Choose between AI-generated options
  • Edit suggested goals or actions
  • Approve or discard the final draft

Outputs should not be treated as autonomous decisions.

3. What the AI may receive

When AI features are used, Halo may transmit only the minimised context needed to provide the feature, including:

  • De-identified evidence packets
  • Pseudonymised records
  • Aggregate metrics
  • Schema or structural information needed for mapping

Halo is designed so that direct student identifiers remain on the user's device during normal AI workflows.

4. What the AI does not guarantee

AI-generated outputs may be incorrect, incomplete, biased, outdated, or inappropriate for a particular context. Users must not rely on Halo outputs as the sole basis for decisions affecting a student, family, or staff member.

5. Typical AI-assisted outputs

Halo may use AI to help draft:

  • Intervention plans
  • IEP components
  • Parent-facing summaries
  • Case reviews
  • Meeting preparation notes
  • Referrals
  • Transition documents
  • Cohort summaries and risk narratives
  • Other structured school-workflow drafts

6. Auditability and traceability

Where applicable, Halo is designed to preserve visibility into:

  • The workflow step
  • The type of input used
  • The user's approval or selection step
  • The resulting draft output

This is intended to support transparency, governance, and human accountability. Audit information maintained on a user's device is a working record of the user's own workflow and is not a tamper-evident provenance system. Enterprise deployments can opt into server-side audit capture for stronger accountability.

7. Limitations

Halo does not guarantee legal, clinical, diagnostic, or regulatory correctness. It is not a substitute for legal advice, psychological assessment, medical advice, or mandatory institutional review.

8. Safe use expectations

Users should:

  • Verify important claims
  • Review sensitive outputs carefully
  • Not treat AI output as a final decision without human review
  • Ensure they are authorised to process the underlying information

9. Australian automated-decision disclosure

From 10 December 2026, the Australian Privacy Principles require privacy policies to include information about substantially automated decision-making that significantly affects an individual's rights or interests. Halo's AI features are designed as human-in-the-loop drafting tools. We do not intend for AI outputs to be used as substantially automated decisions; the educator who reviews and approves each output is the substantive decision-maker. If we introduce any feature that departs from this model, we will update this notice and our Privacy Policy before the feature is made available.

10. Questions

Questions about AI-assisted features can be sent to privacy@haloeducation.app or hello@haloeducation.app.