
Upscend Team
February 22, 2026
Clear xAPI statements use actor, verb, and object plus optional result and context to power reliable learning analytics. This article covers naming conventions and verb selection, provides JSON examples, and ends with a validation checklist. Follow the design process and enforce templates to prevent inconsistent IDs, synonym verbs, and noisy data.
xAPI statements are the atomic records that describe learning experiences across systems. In our experience, well-formed statements are the difference between actionable learning analytics and noisy, unusable logs. This article explains how xAPI statements are structured, breaks down the anatomy of a statement, shows naming and verb-selection conventions, provides xAPI statement examples, and gives a practical validation checklist you can use immediately.
At its core, an xAPI statement follows a simple triplet model augmented with optional metadata. Understanding each part helps you author clear, reliable data. A typical statement contains actor, verb, and object, plus optional result, context, and metadata fields such as timestamp and statementId.
The actor declares who performed the action. Use a stable identifier (email, ORCID, or a persistently mapped UUID). In our experience, inconsistent actor identifiers cause the most downstream aggregation problems.
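One way to keep actor identifiers stable is to derive them deterministically from a normalized source identifier. A minimal Python sketch, assuming a fixed org-wide namespace UUID and `example.com/users` as a placeholder identity provider (both are assumptions, not part of the xAPI spec):

```python
import uuid

# Hypothetical org-wide namespace; any fixed UUID works, but it must never change
# once statements are in production, or identities will fragment.
ACTOR_NAMESPACE = uuid.UUID("6ba7b810-9dad-11d1-80b4-00c04fd430c8")

def actor_for(email: str) -> dict:
    """Map an email to a stable, account-based xAPI actor.

    uuid5 is deterministic: the same normalized email always yields the
    same UUID, so statements from different systems aggregate cleanly.
    """
    stable_id = uuid.uuid5(ACTOR_NAMESPACE, email.strip().lower())
    return {
        "objectType": "Agent",
        "account": {
            "homePage": "https://example.com/users",  # assumed identity provider
            "name": str(stable_id),
        },
    }
```

Because the email is lowercased and trimmed before hashing, case-variant or whitespace-variant inputs resolve to the same actor instead of creating duplicates.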
The verb expresses the action. Verbs should use canonical IRIs whenever possible (for example, verbs from the ADL or Tin Can registry). Choose a single verb per activity type and avoid synonyms in the same dataset to prevent fragmentation.
The object is the activity acted upon (a course, quiz, simulation). The optional result records outcomes like score or success. Context supplies group, instructor, platform, or session-level details. Together these fields let you ask rich questions later, such as "Which cohort had the highest simulation success rate?"
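As a sketch of the kind of question these fields enable, the following Python assumes the cohort is carried in `context.team.name` (one possible convention; your implementation may store it elsewhere) and computes a per-cohort success rate from `result.success`:

```python
from collections import defaultdict

def success_rate_by_cohort(statements: list) -> dict:
    """Aggregate result.success per cohort (cohort assumed in context.team.name)."""
    totals = defaultdict(lambda: [0, 0])  # cohort -> [successes, attempts]
    for stmt in statements:
        cohort = (stmt.get("context", {}).get("team", {}) or {}).get("name")
        success = stmt.get("result", {}).get("success")
        if cohort is None or success is None:
            continue  # skip statements missing the fields this report needs
        totals[cohort][1] += 1
        if success:
            totals[cohort][0] += 1
    return {c: s / n for c, (s, n) in totals.items()}
```

Note how the report silently drops statements missing either field, which is exactly why consistent authoring matters: gaps here mean gaps in the dashboard.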
Consistent naming is the fastest win when you author xAPI statements. We've found teams that enforce a small set of standards get clean analytics within weeks. Below are practical rules to implement immediately.
Keep resource identifiers stable and human-readable. Use a namespace pattern: domain/resource-type/resource-id (for example, example.com/course/module-3). Avoid embedding user- or session-specific data in the resource ID. That keeps objects reusable and comparable.
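A small helper can enforce that pattern at authoring time. A hypothetical Python sketch (the slug rule below is an assumption; adapt it to your own naming conventions):

```python
import re

def activity_id(domain: str, resource_type: str, resource_id: str) -> str:
    """Build a stable activity IRI: https://domain/resource-type/resource-id.

    Rejects identifiers that are not simple lowercase slugs, which helps
    keep user- or session-specific data out of object IDs.
    """
    slug = resource_id.strip().lower()
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]*", slug):
        raise ValueError(f"resource id {resource_id!r} is not a stable slug")
    return f"https://{domain}/{resource_type}/{slug}"
```

Centralizing ID construction in one function means a malformed or session-tainted ID fails loudly at authoring time instead of polluting the LRS.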
Pick verbs from reliable registries and map your UI actions to those canonical verbs. Treat synonyms as aliases in your implementation layer, not in statements. For example, normalize “completed”, “finished”, and “done” to a single canonical verb IRI.
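That normalization can live in a simple alias table in your implementation layer. A minimal Python sketch using the ADL verb IRIs referenced elsewhere in this article (the alias list itself is illustrative):

```python
# Aliases live in the implementation layer; statements only ever carry canonical IRIs.
VERB_ALIASES = {
    "completed": "http://adlnet.gov/expapi/verbs/completed",
    "finished": "http://adlnet.gov/expapi/verbs/completed",
    "done": "http://adlnet.gov/expapi/verbs/completed",
    "answered": "http://adlnet.gov/expapi/verbs/answered",
}

def canonical_verb(ui_action: str) -> dict:
    """Map a UI action label to a canonical xAPI verb object."""
    iri = VERB_ALIASES.get(ui_action.strip().lower())
    if iri is None:
        raise KeyError(f"unmapped UI action: {ui_action!r}")
    return {"id": iri, "display": {"en-US": iri.rsplit("/", 1)[-1]}}
```

Raising on unmapped actions (rather than passing them through) is deliberate: it forces new verbs through a governance decision instead of letting synonyms leak into the dataset.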
Below are concise xAPI statement examples for typical learning activities. Use them as templates and adapt identifiers and IRIs to your environment.
Example JSON (copy and adapt):
{
  "actor": {
    "mbox": "mailto:learner@example.com",
    "name": "Jordan"
  },
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/completed",
    "display": { "en-US": "completed" }
  },
  "object": {
    "id": "https://example.com/courses/intro",
    "definition": { "name": { "en-US": "Intro Course" } }
  },
  "result": { "success": true, "duration": "PT1H20M" },
  "timestamp": "2024-07-01T10:15:00Z"
}
Example JSON (copy and adapt):
{
  "actor": { "mbox": "mailto:learner@example.com" },
  "verb": {
    "id": "http://adlnet.gov/expapi/verbs/answered",
    "display": { "en-US": "answered" }
  },
  "object": {
    "id": "https://example.com/quizzes/q1",
    "definition": { "name": { "en-US": "Quiz 1" } }
  },
  "result": {
    "score": { "scaled": 0.86, "raw": 86, "min": 0, "max": 100 },
    "success": true
  },
  "context": {
    "contextActivities": {
      "category": [ { "id": "https://example.com/courses/intro" } ]
    }
  },
  "timestamp": "2024-07-01T10:45:00Z"
}
When you author xAPI statements, run a strict validation process before sending to an LRS. A short checklist prevents many common problems we've seen in production systems.
Validation checklist:
- The actor uses a stable, consistently formatted identifier (one identifier scheme per learner).
- verb.id is a canonical IRI from your approved registry; no synonyms in the same dataset.
- object.id follows your namespace pattern and contains no user- or session-specific data.
- Required result fields for the activity type (for example success and score) are present.
- timestamp is present and in ISO 8601 UTC format.
- The statement passes schema validation and a smoke test in a staging LRS before production.
Tooling: Validate against the xAPI specification, use schema validators, and capture statement samples in a staging LRS. For environments that need real-time feedback and monitoring, consider platforms that provide live statement inspection and dashboards (available in platforms like Upscend). Using a staging LRS for smoke tests prevents messy data from reaching production analytics.
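A pre-flight validator can automate the structural basics before statements leave your pipeline. This Python sketch checks only a handful of rules and is not a substitute for full xAPI schema validation:

```python
REQUIRED_TOP = ("actor", "verb", "object")

def validate_statement(stmt: dict) -> list:
    """Return a list of problems; an empty list means these checks pass.

    Covers structural basics only, not the full xAPI specification.
    """
    problems = []
    for key in REQUIRED_TOP:
        if key not in stmt:
            problems.append(f"missing required field: {key}")
    if "verb" in stmt and not str(stmt["verb"].get("id", "")).startswith("http"):
        problems.append("verb.id must be an IRI")
    actor = stmt.get("actor", {})
    ifis = ("mbox", "mbox_sha1sum", "openid", "account")
    if "actor" in stmt and not any(k in actor for k in ifis):
        problems.append("actor needs an inverse functional identifier")
    if "object" in stmt and "id" not in stmt["object"]:
        problems.append("object.id is required")
    if "timestamp" not in stmt:
        problems.append("timestamp missing (recommended for analytics)")
    return problems
```

Running this in CI or at the edge of your event layer catches most of the malformed statements that would otherwise surface weeks later as broken reports.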
Structure depends on the questions you want to answer later. Start from analytics use-cases and model your statements to support them. We recommend designing from the end: define reports, KPIs, and dashboards, then map required statement fields to those outputs.
Enforcing templates reduces ambiguity. For example, for simulations include result.success, result.score, and a standardized context that identifies the simulation scenario and difficulty level. This enables reliable cross-cohort comparisons.
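Template enforcement can be as simple as a required-fields check per activity type. A sketch assuming hypothetical extension IRIs under `example.com` for scenario and difficulty (substitute your own registered extension IRIs):

```python
# Hypothetical template for simulation statements; IRIs are placeholders.
SIMULATION_TEMPLATE = {
    "result": ["success", "score"],
    "context_extensions": [
        "https://example.com/xapi/extensions/scenario",
        "https://example.com/xapi/extensions/difficulty",
    ],
}

def check_simulation(stmt: dict) -> list:
    """Return the template fields a simulation statement is missing."""
    missing = []
    result = stmt.get("result", {})
    missing += [f"result.{k}" for k in SIMULATION_TEMPLATE["result"] if k not in result]
    ext = stmt.get("context", {}).get("extensions", {})
    missing += [e for e in SIMULATION_TEMPLATE["context_extensions"] if e not in ext]
    return missing
```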
Common pain points are inconsistent verbs, divergent object IDs, and too much optional data sent inconsistently. These create "messy statements" that make downstream analysis expensive.
Root causes and remedies:
- Inconsistent verbs: maintain a verb allow-list and normalize UI-level synonyms to canonical IRIs before statements are emitted.
- Divergent object IDs: enforce the namespace pattern with automated checks at authoring time.
- Inconsistently sent optional data: define required result and context fields per activity type and reject ad hoc extras.
In our experience, a short governance document (1–2 pages) that shows allowed verbs, object ID patterns, and required result fields prevents most inconsistencies. Pair that with automated QA: run nightly jobs that detect new verbs or unexpected object ID patterns and alert owners.
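The nightly verb check can be a few lines over a statement export. A sketch assuming the governance allow-list is maintained as a set of IRIs (the list below is illustrative):

```python
# Governance allow-list; in practice, load this from your 1-2 page governance doc.
KNOWN_VERBS = {
    "http://adlnet.gov/expapi/verbs/completed",
    "http://adlnet.gov/expapi/verbs/answered",
}

def find_unknown_verbs(statements: list) -> set:
    """Nightly QA sketch: flag verb IRIs not in the governance allow-list."""
    seen = {s.get("verb", {}).get("id") for s in statements}
    seen.discard(None)  # ignore statements with no verb at all
    return seen - KNOWN_VERBS
```

A non-empty result from this job is the alert to owners: either the new verb gets added to the governance document, or the emitting system gets fixed.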
Authoring robust xAPI statements is a mix of clear modeling, small governance, and automated validation. Start by defining the analytics questions you care about, build canonical statement templates, and enforce them in your event layer. Use a staging LRS and the validation checklist above before moving to production.
Quick next steps:
- Define the reports and KPIs your statements must support.
- Write a 1-2 page governance document covering allowed verbs, object ID patterns, and required result fields.
- Build canonical statement templates and enforce them in your event layer.
- Validate a statement sample in a staging LRS using the checklist above before going to production.
Call to action: If you're building or auditing xAPI implementations, export a 3-day sample of statements and validate them against the checklist above — use this to prioritize fixes and improve your analytics faster.