Measure Impact in CE: Assessments, AI, and Telemetry
Years ago, I had the chance to work with psychometricians while building assessments for large-scale degree programs. That experience completely changed how I think about education.
I used to see assessments as the “end” of a course, something you tack on after the learning is done. Now I see them as the most important design tool we have. They’re not just about testing knowledge; they’re about understanding how people think, learn, and apply.
And in customer education, that’s everything.
Why assessments matter so much
When you’re creating customer education, your goal isn’t just to inform, it’s to transform how someone uses your product or service. Assessments help you measure whether that behavior change actually happened.
They let you move beyond “Did they watch it?” and get to “Did they understand it, can they apply it, and did they apply it within the timeframe we expected?”
Working with psychometricians also taught me that good assessments take as much design care as the rest of the learning experience. Each question is a data point about what people need more of, what’s confusing, and what’s making sense. When you treat assessments as continuous feedback loops instead of final exams, your training gets more effective and it’s easier to show ROI.
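To make “each question is a data point” concrete, here’s a minimal item-analysis sketch in Python. Everything in it is hypothetical: it assumes an export where each row is one learner’s attempt and each column is a question scored 1 (correct) or 0 (incorrect). It computes two classic psychometric signals: item difficulty (the share of learners who answered correctly) and discrimination (whether learners who got the item right also did well on the rest of the quiz).

```python
# Minimal item-analysis sketch. Assumes a hypothetical export where each row
# is one learner's attempt and each column (q1, q2, ...) is scored 1 or 0.
import pandas as pd

responses = pd.DataFrame({
    "q1": [1, 1, 1, 0, 1, 1],
    "q2": [0, 1, 0, 0, 1, 0],
    "q3": [1, 0, 1, 0, 1, 1],
})

total = responses.sum(axis=1)  # each learner's total score

for item in responses.columns:
    difficulty = responses[item].mean()  # share answering correctly (higher = easier)
    # Point-biserial discrimination: correlate the item with the score on the
    # *other* items, so the item can't inflate its own correlation.
    rest_score = total - responses[item]
    discrimination = responses[item].corr(rest_score)
    print(f"{item}: difficulty={difficulty:.2f}, discrimination={discrimination:.2f}")
```

An item with reasonable difficulty but near-zero (or negative) discrimination is usually a question problem, not a learner problem, and that’s exactly the feedback loop described above.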
In an AI world, measurement matters more than ever
AI is accelerating training creation. We can generate outlines, scripts, and visuals in a fraction of the time it used to take. At the end of the day, though, it’s garbage in, garbage out. The internet is about to be flooded with material that looks good but may not teach well.
The gap now isn’t in production, it’s in quality. Anyone can create faster. Fewer can tell what’s actually worth keeping.
Assessment results are the best performance indicator we have as instructional designers. They anchor AI-assisted creativity in human outcomes and keep us focused on what learners are actually taking away.
What to track (and why telemetry matters)
Assessments tell me what learners know. Telemetry tells me what they do.
Telemetry tools are software that automatically collects, transmits, and analyzes data about user activity and system performance, often in real time. In the context of customer education, telemetry goes beyond quiz scores or course completions: it captures the behavioral signals that reveal how learners actually interact with your content or product.
Telemetry tools collect behavioral data: where learners drop off, which modules they revisit, how long they engage, and whether they actually use product features afterward. (A generic event sketch follows the examples below.)
📊 Examples of What Telemetry Tools Track
Which learning modules or videos customers open (and how long they stay)
Where customers drop off or rewatch
Click paths and navigation patterns
Time spent on interactive elements or exercises
Feature usage in the product after training
Correlations between learning behavior and outcomes (adoption, retention, renewal)
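Under the hood, most of these signals arrive as simple structured events. Here’s a generic sketch of one, not any vendor’s actual SDK (Mixpanel, Amplitude, Heap, and Pendo each wrap the same idea in their own APIs); all names and values are illustrative.

```python
# A generic telemetry-event sketch, not any vendor's actual SDK. Real tools
# wrap the same idea in their own APIs: a named event, a user ID,
# a timestamp, and properties describing what happened.
import json
from datetime import datetime, timezone

def track(user_id: str, event: str, properties: dict) -> dict:
    """Build one telemetry event; a real SDK would queue and send this."""
    return {
        "user_id": user_id,
        "event": event,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "properties": properties,
    }

# Example: a learner pausing halfway through an onboarding video.
payload = track(
    user_id="learner-123",  # hypothetical ID
    event="video_progress",
    properties={"module": "onboarding-1", "percent_watched": 52},
)
print(json.dumps(payload, indent=2))
```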
When you combine the two—assessment results and telemetry signals—you start to see a complete picture of learning impact. That’s when customer education stops being a cost center and starts becoming a growth engine.
⚙️ Common Telemetry Tools (and Where They’re Used)
Common telemetry tools fall into a few main categories.
For learning engagement, platforms like Skilljar, Intellum, Docebo, and LearnUpon track course starts, completions, quiz data, and dwell time.
Product usage telemetry tools such as Pendo, Mixpanel, Heap, and Amplitude capture in-product feature usage, session data, and post-training behavior.
Customer journey analytics platforms like Gainsight PX, HubSpot, and Totango monitor onboarding engagement and lifecycle milestones.
Finally, data visualization and business intelligence tools such as Looker Studio, Power BI, and Tableau combine telemetry and assessment data into dashboards that reveal patterns and insights across the entire learner journey.
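If it helps to picture what those dashboards sit on top of, here’s a tiny sketch of the underlying join: per-learner quiz scores from an LMS export merged with a telemetry-derived adoption flag. All column names and values are made up for illustration.

```python
# Hypothetical exports: assessment scores from the LMS, and a flag derived
# from telemetry for whether each learner ever used Feature X.
import pandas as pd

scores = pd.DataFrame({"user_id": ["u1", "u2", "u3"], "quiz_score": [0.9, 0.6, 0.8]})
adoption = pd.DataFrame({"user_id": ["u1", "u3"], "adopted_feature_x": [True, True]})

# One row per learner: the table a BI dashboard would visualize.
dashboard = scores.merge(adoption, on="user_id", how="left")
dashboard["adopted_feature_x"] = dashboard["adopted_feature_x"].fillna(False)

# Compare average quiz score for adopters vs. non-adopters.
print(dashboard.groupby("adopted_feature_x")["quiz_score"].mean())
```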
Now, getting product usage telemetry tools up and running can feel intimidating if you don’t come from a data or engineering background. The good news: you don’t need to become a data scientist or developer to make meaningful progress.
Reach out to your product analytics, engineering, or data insights team and explain: “Our education team wants to understand whether training is driving adoption. Could we look at ways to track product usage for trained vs. untrained users?” They may already have telemetry tools (like Mixpanel, Amplitude, Heap, or Pendo) capturing product data, so you may only need to access or tag that data for your learner cohorts.
To keep that conversation going, bring examples of what you’d like to see (e.g., “Did users who took the onboarding course use Feature X within 7 days?”). Concrete asks like that build credibility and trust between you and the product owner you’re collaborating with.
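To show what that Feature X question could look like in practice, here’s a rough pandas sketch. It assumes two hypothetical exports, course completions from your LMS and feature-usage events from your analytics tool; every table, column, and value below is invented for illustration.

```python
# A rough cohort comparison, assuming hypothetical CSV-style exports from
# your LMS and product-analytics tool; the schema here is illustrative only.
import pandas as pd

completions = pd.DataFrame({          # LMS export: who finished onboarding, when
    "user_id": ["u1", "u2"],
    "completed_at": pd.to_datetime(["2024-05-01", "2024-05-03"]),
})
events = pd.DataFrame({               # product telemetry: feature usage events
    "user_id": ["u1", "u2", "u3", "u4"],
    "feature": ["feature_x", "feature_x", "feature_x", "feature_y"],
    "used_at": pd.to_datetime(["2024-05-04", "2024-05-20", "2024-05-02", "2024-05-05"]),
})

usage = events[events["feature"] == "feature_x"]
merged = usage.merge(completions, on="user_id", how="left")

# Trained users: did they use Feature X within 7 days of finishing the course?
trained = merged.dropna(subset=["completed_at"])
within_7 = (trained["used_at"] - trained["completed_at"]).dt.days.between(0, 7)
print(f"Trained users adopting within 7 days: {within_7.mean():.0%}")

# Untrained users serve as the baseline: any Feature X usage at all.
all_users = set(events["user_id"])
untrained = all_users - set(completions["user_id"])
untrained_adopters = set(usage["user_id"]) & untrained
print(f"Untrained adoption rate: {len(untrained_adopters) / len(untrained):.0%}")
```

Even a rough comparison like this gives you and the product owner a shared starting point; the analytics team can later harden it into a properly tracked cohort.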
What I’ve learned
Good assessments don’t just evaluate the learner, they evaluate the quality of our ID work.
They reveal how well we’ve designed with learning science. They expose what’s not making sense to our customers. And they give us the data we need to show our bosses that we’re getting results.