Teaching AI in Context: Use Company Earnings to Help Students Critically Evaluate AI Hype


Avery Morgan
2026-04-08
7 min read

Turn earnings reports and executive AI commentary into classroom case studies to teach AI literacy, critical thinking, and responsible skepticism.


Quarterly earnings reports and CEO commentary are rich, underused resources for teaching AI literacy, business judgment, and ethical skepticism. Instead of abstract debates about whether "AI will change everything," use real-world financial reporting and executive remarks to help students and early-career mentees distinguish substantive AI strategy from marketing spin. This practical approach builds critical thinking, financial fluency, and responsible skepticism—skills essential for future managers, product builders, and thoughtful citizens.

Why earnings reports are ideal case studies for AI literacy

Earnings calls and 10-Q/10-K filings force companies to reconcile promises with numbers. When executives mention "AI" on a call, analyst questions and disclosure obligations push them to connect the claim to revenue, margins, customer adoption, and risks. That makes earnings reports excellent primary sources for classroom analysis because they combine:

  • Quantitative data (revenue, guidance, R&D spend)
  • Qualitative narrative (CEO/CFO commentary, prepared remarks)
  • Regulatory context (risk disclosures in filings)
  • Market reaction (stock price, analyst questions)

Using these materials trains students to map marketing language onto measurable business outcomes and ethical trade-offs. As an instructor or mentor, you can leverage current events—like roundtables such as the "AI Playbook for Earnings Prep and Analysis"—to make lessons timely and engaging.

Learning objectives

  • Interpret financial statements and identify where AI-related investments show up (R&D, capex, revenue lines).
  • Differentiate substantive product strategy from marketing rhetoric.
  • Develop evidence-based assessments of AI claims tied to business metrics.
  • Practice ethical evaluation of AI deployment risks and stakeholder impacts.
  • Communicate findings clearly to different audiences: investors, customers, and peers.

Practical classroom case: 6-step lesson plan

This modular plan fits a single class or a multi-week project. Each step includes concrete deliverables and assessment ideas.

Step 1 — Select the company and materials

Choose a public company that frequently mentions AI in earnings calls (tech platforms, enterprise software, or media companies). Provide students with:

  • Latest 10-Q/10-K and recent quarterly slides
  • Transcript or recording of the earnings call
  • Recent press releases and product announcements
  • Analyst notes or news coverage (optional)

Tip for remote classes: assign different companies to small groups so students compare approaches.

Step 2 — Guided reading: locate AI in the financials

Ask students to find where AI investments or AI-driven revenue could appear in the numbers:

  1. R&D and capital expenditure lines—are they rising?
  2. Revenue segmentation—are there new AI-based product lines?
  3. Gross margins—does AI materially improve efficiency?
  4. Guidance—does management tie future growth to AI adoption?

Deliverable: a 1-page summary that cites specific filings and figures.
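Before students dig into the numbers, a tiny script can surface where a filing or transcript mentions AI at all. This is an illustrative sketch only: the filing excerpt and keyword list below are invented placeholders, not text from any real company.

```python
import re

def find_ai_mentions(text, keywords=("AI", "artificial intelligence", "machine learning")):
    """Return (matched_keywords, sentence) pairs for sentences mentioning AI terms."""
    results = []
    for sentence in re.split(r"(?<=[.?!])\s+", text.strip()):
        sentence = " ".join(sentence.split())  # collapse line breaks inside a sentence
        hits = [k for k in keywords
                if re.search(rf"\b{re.escape(k)}\b", sentence, re.IGNORECASE)]
        if hits:
            results.append((hits, sentence))
    return results

# Hypothetical excerpt from a 10-Q MD&A section (placeholder, not a real filing).
filing_text = """
Research and development expense rose 18% year over year, driven by
investments in our AI-powered recommendation engine. We expect
artificial intelligence features to contribute to subscription revenue
growth in fiscal 2026. Machine learning infrastructure capex increased.
"""

for hits, sentence in find_ai_mentions(filing_text):
    print(f"[{', '.join(hits)}] {sentence}")
```

Students can paste in a transcript section and use the flagged sentences as starting points for the 1-page summary, rather than as a substitute for reading the filing.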

Step 3 — Qualitative coding: statements vs. substance

Students should annotate the earnings transcript for types of AI claims. Create a simple coding schema such as:

  • Vision statements (long-term, vague)
  • Product claims (specific features, timelines)
  • Monetization plans (how AI generates revenue)
  • Risk disclosures (privacy, bias, regulation)

Deliverable: annotated transcript with short notes on credibility (high/medium/low) and justification.
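For classes comfortable with a little code, the coding schema above can be represented as structured annotations and tallied automatically. The quotes and ratings below are invented placeholders to show the shape of the exercise, not excerpts from a real transcript.

```python
from collections import Counter

# Illustrative annotations a student might produce using the schema above
# (vision / product / monetization / risk); all quotes are hypothetical.
annotations = [
    {"category": "vision",       "quote": "AI will transform every product we ship.", "credibility": "low"},
    {"category": "product",      "quote": "The summarization feature ships in Q3.",   "credibility": "high"},
    {"category": "monetization", "quote": "AI tier pricing starts at $20/seat.",      "credibility": "medium"},
    {"category": "risk",         "quote": "We discuss model bias risks in our 10-K.", "credibility": "high"},
]

# Tally how credibility ratings distribute across claim types.
tally = Counter((a["category"], a["credibility"]) for a in annotations)
for (category, credibility), count in sorted(tally.items()):
    print(f"{category:<12} {credibility:<6} x{count}")
```

A pattern of many low-credibility vision statements and few high-credibility product or monetization claims is itself a finding worth writing up.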

Step 4 — Cross-check with external signals

Teach students to validate claims with external evidence:

  • Customer case studies or pilot announcements
  • Open-source repos, patents, or research publications
  • Hiring trends (are they hiring ML engineers at scale?)
  • Competitive landscape—are competitors delivering similar features?

Deliverable: a short annotated bibliography linking evidence to management claims.

Step 5 — Build a simple scorecard

Create an evidence-based rubric to rate the company's AI credibility on dimensions like:

  • Financial commitment (spend & guidance)
  • Operational integration (product vs. marketing)
  • Customer traction (contracts, usage metrics)
  • Ethical preparedness (risk disclosures & governance)

Deliverable: a public-facing 1-page memo summarizing the rating and key support points.
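The scorecard can be made concrete as a weighted average over the four dimensions. The weights and example ratings below are illustrative choices for a classroom exercise, not a standard methodology; students should defend their own weighting in the memo.

```python
# Hypothetical weights over the four scorecard dimensions (must sum to 1.0).
DIMENSIONS = {
    "financial_commitment":    0.30,  # spend & guidance
    "operational_integration": 0.30,  # product vs. marketing
    "customer_traction":       0.25,  # contracts, usage metrics
    "ethical_preparedness":    0.15,  # risk disclosures & governance
}

def credibility_score(ratings):
    """Combine per-dimension ratings (0-5) into a weighted 0-5 score."""
    assert set(ratings) == set(DIMENSIONS), "rate every dimension"
    return sum(DIMENSIONS[d] * ratings[d] for d in DIMENSIONS)

# Invented ratings for a hypothetical company.
example = {
    "financial_commitment": 4,
    "operational_integration": 2,
    "customer_traction": 3,
    "ethical_preparedness": 1,
}
print(f"AI credibility score: {credibility_score(example):.2f} / 5")
```

The point of forcing numeric weights is pedagogical: students must argue why, say, customer traction deserves more weight than ethical preparedness, or vice versa.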

Step 6 — Present and reflect

Students present findings in short briefs aimed at different audiences: investors (numbers-focused), customers (benefit-focused), and peers (methodology-focused). Follow with a reflection prompt: What type of evidence would change your assessment?

Signals that separate substantive AI strategy from marketing spin

Use this checklist as a quick reference when evaluating executive AI commentary:

  • Specific KPIs: Does management cite measurable KPIs tied to AI (e.g., customer retention uplift, cost-per-transaction reduction)?
  • Line-item spend: Is there growth in R&D or a clear budget reallocation to AI initiatives?
  • Customer proof points: Are paying customers referenced by name with outcomes?
  • Time-bound milestones: Does the roadmap include specific dates, or only vague timelines like "soon"?
  • Regulatory and ethical transparency: Does the company discuss safeguards or governance?
  • Hiring and partnerships: Are they scaling teams or entering meaningful partnerships?

Classroom assignments and project ideas

Practical tasks keep learning active and assessment clear. Try these assignments:

  • One-week "Earnings Brief"—students produce a 500-word memo evaluating the AI claims on the latest earnings call.
  • Portfolio comparison—groups compare two companies in the same sector and debate which has a stronger AI moat.
  • Ethics case study—students propose mitigation plans for risks disclosed in filings (privacy, bias, job impacts).
  • Director memo—students draft a short memo for a board considering an AI investment, focusing on financial and ethical trade-offs.

Assessment rubric (suggested)

Use a transparent rubric to evaluate both analysis and communication:

  • Evidence integration (30%)—use of filings, transcripts, and external data.
  • Analytical rigor (30%)—logical mapping from evidence to claims and ratings.
  • Ethical consideration (15%)—identification and mitigation of risks.
  • Clarity & audience adaptation (15%)—presentation tailored to a target audience.
  • Original insight (10%)—novel synthesis or practical recommendations.
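To keep grading transparent, the rubric percentages above can be applied mechanically. This is a minimal sketch; the component scores below are invented for illustration.

```python
# Suggested rubric weights from the assessment rubric (percentages sum to 100).
RUBRIC = {
    "evidence_integration":  30,
    "analytical_rigor":      30,
    "ethical_consideration": 15,
    "clarity_and_audience":  15,
    "original_insight":      10,
}
assert sum(RUBRIC.values()) == 100

def final_grade(component_scores):
    """Weighted average of 0-100 component scores using the rubric percentages."""
    return sum(RUBRIC[name] * component_scores[name] for name in RUBRIC) / 100

# Hypothetical student scores per component.
scores = {
    "evidence_integration": 85,
    "analytical_rigor": 78,
    "ethical_consideration": 90,
    "clarity_and_audience": 82,
    "original_insight": 70,
}
print(f"Final grade: {final_grade(scores):.1f}")
```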

Mentoring tips for teachers and early-career mentors

When guiding students and mentees through this work, prioritize process over verdicts. Encourage curiosity and a habit of checking sources. Practical mentoring moves include:

  • Model evidence-based skepticism by sharing your own annotated transcripts.
  • Pair novices with students who have finance or ethics experience for cross-disciplinary learning.
  • Provide templates—sample memo, scorecard, and annotated transcript—to lower the barrier to entry.
  • Assign roles in group work (analyst, ethicist, presenter) to replicate workplace collaboration.

For mentors looking to integrate AI tools into coaching, consider guided AI learning resources to accelerate skills—see our guide on AI as a Mentor for practical tips on using AI without losing critical judgment.

Ethics and responsible skepticism

Discernment should not be cynical dismissal. Teaching AI literacy includes evaluating the social impacts of AI deployment. Use earnings disclosures to discuss:

  • Potential workforce impacts and reskilling commitments
  • Data governance and user consent practices
  • Bias risk and transparency measures
  • Long-term societal risks versus short-term marketing gains

Prompt students to propose specific, feasible governance steps a company could take and how those steps might appear in future filings or calls.

Resources and extensions

Useful follow-ups include:

  • Track hiring via public profiles to corroborate technical scale-up.
  • Monitor patent filings and open-source contributions for technical depth.
  • Compare analyst models to management guidance to spot optimism bias.
  • Use past earnings cycles as longitudinal case studies to see whether claims materialized.

Related reading on mentorship, adapting to change, and building communication skills can help frame these projects—see our posts on Adapting to Change and Crafting a Winning Resume for broader career-context lessons.

Final classroom-ready checklist

Before you launch the assignment, make sure you have:

  • Selected companies and provided all primary materials
  • Shared templates for annotations, memo, and scorecard
  • Set evaluation criteria and timelines
  • Prepared extension activities for high-performing groups

By turning earnings reports into active case studies, educators and mentors can give students a powerful, practical toolkit: how to read between the lines of executive soundbites, how to tie AI hype to financial and ethical consequences, and how to communicate clear, evidence-based judgments. This method builds not just technical literacy, but the kind of business and moral reasoning that will matter in any career touched by AI.

Source inspiration: Industry roundtables like the "AI Playbook for Earnings Prep and Analysis" (see industry commentary such as the recent Inside Adobe discussion) highlight how professionals prepare for AI-focused earnings dialogue—use those conversations as live teaching material.


Related Topics

#education #AI #critical-thinking

Avery Morgan

Senior SEO Editor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
