To cut a long story short: the difference lies in the extent to which the human is present. Do they approve, do they monitor, do they get a dashboard, or are they only called upon when there is an issue?
In more detail:
HITL: Human-in-the-Loop
Summary:
Humans are actively involved in the training, testing, and feedback loop of the AI system. They validate, correct, or fine-tune outputs before the AI acts on them. This is often used in high-stakes, ambiguous, or evolving environments.
UX Patterns & Behaviors:
- Active validation UI: Users are prompted to confirm or adjust AI suggestions before proceeding.
- Thumbs up/down feedback: Common for quick sentiment input on individual results.
- Editable suggestions: AI provides a draft, and humans make final changes.
- Confidence indicators: Show how sure the AI is, encouraging user scrutiny in low-confidence scenarios.
- Interruptible flows: Humans can override or halt actions easily.
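The patterns above can be sketched as a simple approval gate. This is a minimal illustration with hypothetical names (`Suggestion`, `hitl_gate`, `cautious_reviewer`), not a prescribed implementation: the AI proposes a draft with a confidence score, and a human reviewer must approve, edit, or reject it before anything acts on it.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Suggestion:
    text: str
    confidence: float  # 0.0-1.0, surfaced to encourage scrutiny when low

def hitl_gate(suggestion: Suggestion,
              reviewer: Callable[[Suggestion], Optional[str]]) -> Optional[str]:
    """Return the human-approved (possibly edited) text, or None if rejected."""
    approved = reviewer(suggestion)
    return approved  # only approved output proceeds downstream

# Example reviewer policy: reject low-confidence drafts outright.
def cautious_reviewer(s: Suggestion) -> Optional[str]:
    if s.confidence < 0.5:
        return None    # too uncertain to act on without rework
    return s.text      # accept the draft as-is
```

The key point is structural: nothing downstream ever sees an output that has not passed through the reviewer callback.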
HATL: Human-Above-the-Loop
Summary:
Humans oversee the AI system from a supervisory position. The AI acts autonomously, but humans define the rules, monitor outcomes, and intervene as needed.
UX Patterns & Behaviors:
- Dashboard-based monitoring: Users interact with summary views showing system performance and potential flags.
- Audit logs: Traceable histories of AI actions to support transparency and governance.
- Threshold settings: Users can tune decision thresholds, parameters, or escalation rules.
- Periodic interventions: Users engage primarily in exception handling or system updates.
- Alerts and notifications: Passive until anomalies arise.
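A rough sketch of this supervisory stance, using hypothetical names (`record_decision`, `ESCALATION_THRESHOLD`): the human tunes a threshold up front, every AI decision is appended to an audit log, and an alert is raised only when an outcome crosses the threshold.

```python
audit_log = []   # traceable history of every AI action
alerts = []      # surfaced to the human only when anomalies arise

ESCALATION_THRESHOLD = 0.8   # tuned by the human supervisor, not the AI

def record_decision(action: str, risk_score: float) -> None:
    """Log the action unconditionally; escalate only above the threshold."""
    audit_log.append({"action": action, "risk": risk_score})
    if risk_score > ESCALATION_THRESHOLD:
        alerts.append(f"Review needed: {action} (risk={risk_score:.2f})")

record_decision("approve_refund", 0.2)   # silent: within policy
record_decision("close_account", 0.95)   # escalated to the supervisor
```

The human never sits in the decision path; they shape the rules and handle the exceptions the log surfaces.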
HOTL: Human-On-the-Loop
Summary:
Humans remain available during AI operations and can intervene, but are not required to act unless there’s a problem. AI has more autonomy than in HITL.
UX Patterns & Behaviors:
- Real-time status feeds: Users track work as it happens, as on a live operations panel.
- Intervention tools: UI includes options to pause, stop, or adjust processes if anomalies are detected.
- Confidence thresholds with color coding: Visual indicators help prioritize what might need attention.
- “Set it and watch it” behavior: Users trust the system but want assurance it’s working correctly.
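As a sketch (hypothetical names: `halt`, `status_color`, `run_pipeline`), the combination of a live feed, a kill switch, and color-coded confidence might look like this: the pipeline runs on its own, checks an operator-controlled stop flag at each step, and maps each step's confidence to a traffic-light color for the status panel.

```python
import threading

halt = threading.Event()   # the operator's "pause/stop" control

def status_color(confidence: float) -> str:
    """Map a confidence score to a traffic-light color for the ops panel."""
    if confidence >= 0.8:
        return "green"
    if confidence >= 0.5:
        return "amber"
    return "red"

def run_pipeline(steps):
    """Process (name, confidence) steps until done or the operator halts."""
    processed = []
    for name, confidence in steps:
        if halt.is_set():   # operator intervened: stop cleanly
            break
        processed.append((name, status_color(confidence)))
    return processed
```

The human is not required at any step, but the `halt` flag means intervention is always one action away.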
HBTL: Human-Below-the-Loop
Summary:
The AI operates entirely autonomously, and humans only review or assess the results after the fact. This is typical in automated environments where human review is rare or only needed for auditing.
UX Patterns & Behaviors:
- Post-hoc reporting interfaces: Users engage with analytics dashboards or outcome summaries.
- Quality sampling UI: Random or targeted review of outputs with flagging options.
- Batch review workflows: Evaluate AI performance over time or across large data sets.
- Detached interaction: Users engage only periodically, often through performance or compliance lenses.
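The quality-sampling and batch-review patterns can be sketched as follows, with hypothetical names (`build_review_batch`, a `flagged` field on each outcome record): the system has already acted, and a reviewer later pulls every auto-flagged record plus a deterministic random sample of the rest for post-hoc audit.

```python
import random

def build_review_batch(outcomes, sample_rate=0.1, seed=0):
    """Return all flagged records plus a random sample of the remainder."""
    rng = random.Random(seed)   # fixed seed makes the audit reproducible
    flagged = [o for o in outcomes if o.get("flagged")]
    rest = [o for o in outcomes if not o.get("flagged")]
    k = max(1, int(len(rest) * sample_rate)) if rest else 0
    return flagged + rng.sample(rest, k)
```

Because review happens after the fact, the sampling policy (rate, seed, flag criteria) is where the human's influence actually lives.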