# AI Layoff Pulse — Dashboard Guide

## What this dashboard is for

AI Layoff Pulse tracks the **human side of AI-linked job disruption**.

It is not a generic layoff counter and it is not a perfect labor-market truth machine. It is a curated signal surface for finding and understanding **first-hand narratives** about layoffs, displacement, fear, adaptation, and rebuilding in an AI-shaped job market.

The dashboard combines:
- **promoted stories** in `layoff_stories`
- **raw staged candidates** in `layoff_raw_candidates`
- **derived daily metrics** in `layoff_daily_metrics`

That means some panels show the curated published dataset, while others show the messy intake layer before and after moderation.

---

## Core data model

### 1) Raw candidates
These are staged items collected from X and Reddit before or during gating.

Raw candidates include:
- accepted items
- rejected items
- rejection reasons
- score breakdowns
- feature flags used during scoring

This layer exists so tuning is evidence-based instead of guesswork.

### 2) Stories
These are the promoted stories that passed the collector gate or were manually promoted during review.

A story is what the main dashboard analytics use for:
- story count
- sentiment
- adaptation metrics
- emotion metrics
- company mentions
- recent stories feed

### 3) Daily metrics
These are aggregated daily snapshots used by timeseries and KPI views.

They summarize:
- story count
- average sentiment
- adaptation rate
- anxiety index
- total engagement
- average emotions
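
Under one plausible shape (field names here are assumptions, not the real table schemas), the three layers can be sketched as Python dataclasses:

```python
from dataclasses import dataclass, field

# Hypothetical field names; the real schemas for layoff_raw_candidates,
# layoff_stories, and layoff_daily_metrics may differ.

@dataclass
class RawCandidate:
    """Staged intake item from layoff_raw_candidates."""
    platform: str                  # "x" or "reddit"
    text: str
    accepted: bool = False
    rejection_reason: str = ""     # e.g. "news_not_personal"
    narrative_score: float = 0.0   # 0-10
    depth_score: float = 0.0       # 0-10
    features: dict = field(default_factory=dict)  # flags used during scoring

@dataclass
class Story:
    """Promoted story from layoff_stories."""
    platform: str
    text: str
    sentiment: float               # roughly -1..1
    adaptation: str                # "pivot" | "gig" | "job_hunting" | "none"
    emotions: dict                 # fear / anger / sadness / hope
    engagement: int = 0

@dataclass
class DailyMetrics:
    """Aggregated snapshot from layoff_daily_metrics."""
    day: str
    story_count: int
    avg_sentiment: float
    adaptation_rate: float
    anxiety_index: float
    total_engagement: int
    avg_emotions: dict
```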

---

## Top KPIs

### Stories
**What it is:** Total number of promoted stories in the selected time window.

**Why it matters:** This is the main signal volume for the curated dataset.

**How to read it:**
- rising count = more stories are entering the promoted narrative layer
- falling count = fewer relevant stories are being collected or passing the gate

**Caution:** This is influenced by query quality, collector strictness, and moderation—not just real-world layoff volume.

### Avg Sentiment
**What it is:** Weighted average sentiment across promoted stories.

**Why it matters:** It gives a rough emotional temperature of the narrative stream.

**How to read it:**
- negative = more pain, fear, anger, collapse language
- neutral = mixed narratives, uncertainty, or balancing loss with adaptation
- positive = more recovery, rebuilding, opportunity, or resilient framing

**Caution:** It is heuristic sentiment, not deep semantic truth.
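
A minimal sketch of the weighted average, assuming engagement counts are the weights (the production weighting scheme is not specified here):

```python
def avg_sentiment(stories):
    """Engagement-weighted mean sentiment; the weighting is an assumption."""
    if not stories:
        return 0.0
    weights = [max(s["engagement"], 1) for s in stories]  # floor at 1
    weighted = sum(s["sentiment"] * w for s, w in zip(stories, weights))
    return weighted / sum(weights)

stories = [
    {"sentiment": -0.8, "engagement": 10},  # viral distress post
    {"sentiment": 0.4, "engagement": 2},    # quieter recovery story
]
print(round(avg_sentiment(stories), 2))  # -0.6, pulled toward the viral post
```

Note how a single high-engagement story can dominate the KPI, which is one reason the caution above matters.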

### Top Adaptation
**What it is:** Most common adaptation path in the selected story set.

**Examples:**
- `pivot`
- `gig`
- `job_hunting`
- `none`

**Why it matters:** It shows how people are responding, not just suffering.
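
The KPI itself is just a mode over the adaptation labels, which can be sketched as:

```python
from collections import Counter

def top_adaptation(stories):
    """Most common adaptation label in the selected story set."""
    counts = Counter(s["adaptation"] for s in stories)
    return counts.most_common(1)[0][0] if counts else "none"

labels = [{"adaptation": a} for a in ["pivot", "gig", "pivot", "job_hunting"]]
print(top_adaptation(labels))  # pivot
```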

### Anxiety Index
**What it is:** A synthetic metric derived from emotional features, weighted over time.

**Why it matters:** It acts as a quick stress indicator for the overall story stream.

**How to read it:**
- higher = stronger fear / instability / distress signal
- lower = calmer or more adaptation-oriented dataset

**Caution:** This is a directional indicator, not a clinical or statistically standardized score.
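
One way such a metric could be computed is a recency-weighted blend of distress emotions. The formula, component weights, and half-life below are illustrative assumptions, not the production definition:

```python
def anxiety_index(daily_emotions, half_life_days=7.0):
    """Recency-weighted distress blend, clamped at 0.

    daily_emotions: list of dicts ordered oldest -> newest, each with
    'fear', 'sadness', 'hope' in [0, 1]. All weights are assumptions.
    """
    if not daily_emotions:
        return 0.0
    num = den = 0.0
    for age, day in enumerate(reversed(daily_emotions)):  # age 0 = most recent
        w = 0.5 ** (age / half_life_days)                 # recent days count more
        distress = 0.6 * day["fear"] + 0.4 * day["sadness"] - 0.3 * day["hope"]
        num += w * distress
        den += w
    return max(0.0, num / den)
```

This is why the index reads as directional: it moves with the emotional mix and with recency, not against any standardized baseline.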

### Candidates Staged
**What it is:** Number of raw candidates staged in the review layer for the selected window.

**Why it matters:** This is intake volume before final curation.

### Accepted
**What it is:** Number of staged candidates currently marked as accepted.

**Why it matters:** Shows how much intake is actually becoming curated dataset.

### Rejected
**What it is:** Number of staged candidates currently marked rejected.

**Why it matters:** Helps measure noise, drift, or intentional strictness.

### Top Rejection Reason
**What it is:** Most common rejection reason in the selected review slice.

**Why it matters:** It tells you what kind of noise is dominating the intake.

---

## Chart-by-chart guide

### Volume & Sentiment Over Time
**What it shows:**
- story count over time
- sentiment trend over time

**How to read it:**
- the filled area shows story volume
- the line shows sentiment movement

**Use it for:**
- spotting bursts of narrative activity
- seeing whether spikes are panic-heavy or adaptation-heavy
- comparing emotional tone against intake/promoted volume

**Common pitfall:** A volume spike does not necessarily mean a worse labor market. It may reflect a single viral story cluster or a query shift.
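
The chart's underlying series is a simple group-by-day over promoted stories. A sketch, assuming each story carries a `date` (YYYY-MM-DD) and a `sentiment`:

```python
from collections import defaultdict

def daily_series(stories):
    """Group stories into (day, count, avg_sentiment) rows for the chart."""
    buckets = defaultdict(list)
    for s in stories:
        buckets[s["date"]].append(s["sentiment"])
    return [
        {"day": day, "count": len(vals), "avg_sentiment": sum(vals) / len(vals)}
        for day, vals in sorted(buckets.items())
    ]
```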

---

### Adaptation Paths
**What it shows:** Distribution of promoted adaptation types.

**Use it for:**
- understanding response patterns to layoff/displacement pressure
- seeing whether the dataset is dominated by rebuilding, freelancing, job hunting, or pure distress

**Interpretation examples:**
- high `pivot` = many people trying to build, launch, switch lanes, or reposition
- high `job_hunting` = classic re-entry struggle
- high `gig` = survival mode or freelance fallback

---

### Emotion Breakdown
**What it shows:** Average emotional mix across promoted stories.

**Current axes:**
- fear
- anger
- sadness
- hope

**Use it for:**
- distinguishing despair from rage from resilient rebuilding
- reading emotional composition, not just net sentiment

**Caution:** A dataset can be negative overall while still carrying a meaningful hope signal.

---

### Platform Distribution
**What it shows:** Share of promoted stories by platform.

**Current sources:**
- X
- Reddit

**Use it for:**
- understanding where signal is coming from
- noticing if one source dominates or if source balance changes over time

**Important operational note:** X is currently noisier than Reddit and typically needs stricter curation.

---

### Top Companies Mentioned
**What it shows:** Most frequently mentioned companies in promoted stories, plus sentiment toward them.

**Use it for:**
- identifying concentration of named-employer narratives
- spotting which firms are emotionally central in the dataset

**Caution:** A company appearing often may reflect media cycles or layoff discussion clusters, not necessarily direct verified layoffs.

---

### Recent Stories
**What it shows:** The promoted story feed used for the main narrative layer.

Each row includes:
- author
- platform
- story text excerpt
- date
- sentiment badge
- adaptation label
- engagement counts

**Use it for:**
- human review of what the analytics actually represent
- checking whether the promoted layer still matches project intent

---

### Raw Candidate Review
This is the most operationally important panel for maintaining quality.

**What it shows:**
- staged intake candidates
- accepted vs rejected decisions
- rejection reasons
- feature flags
- per-item scores
- source links

**Use it for:**
- auditing collector quality
- finding false positives
- manually overriding decisions
- refining source-specific heuristics

#### Decision controls
Each row supports:
- **promote** — mark accepted and create a story record if needed
- **reject** — mark rejected using the selected rejection reason

#### Review filters
The review table supports filtering by:
- platform
- decision state

Useful modes:
- `X + rejected` to inspect noise leakage on X
- `Reddit + accepted` to inspect clean narrative examples
- `all + rejected` to study drift in the pipeline
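
Both filters compose as simple predicates, with `all` disabling a dimension. A sketch over hypothetical row dicts:

```python
def filter_candidates(candidates, platform="all", decision="all"):
    """Apply the two review filters; 'all' disables that dimension."""
    return [
        c for c in candidates
        if (platform == "all" or c["platform"] == platform)
        and (decision == "all" or c["decision"] == decision)
    ]

rows = [
    {"platform": "x", "decision": "rejected"},
    {"platform": "x", "decision": "accepted"},
    {"platform": "reddit", "decision": "accepted"},
]
# "X + rejected" mode: inspect noise leakage on X
print(len(filter_candidates(rows, platform="x", decision="rejected")))  # 1
```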

---

## Rejection reasons
These exist to make moderation analytically useful, not just operational.

### `manual_reject`
Fallback bucket when a reviewer rejects something manually without a more precise reason.

### `x_commentary_not_firsthand`
Used when an X post discusses layoffs or AI job loss as commentary, but does not read like a first-hand personal narrative.

### `prefilter_narrative_lt_4`
Narrative score fell below the prefilter minimum of 4, so the candidate was dropped before deeper review.

### `promotion_threshold_not_met`
Candidate was stage-worthy enough to inspect, but not strong enough to promote.

### `duplicate`
Same underlying story already exists in the dataset.

### `off_topic`
Mentions layoffs/AI adjacent concepts but does not fit the project’s target narrative.

### `news_not_personal`
News resharing, article commentary, or broadcast-style reporting without first-hand experience.

### `spam_link_post`
Low-value link-heavy or promotional material.

---

## Scoring model

### Narrative Score (0–10)
Signals for whether a candidate reads like a real first-hand narrative.

Typical components:
- first-person pronouns
- layoff-related verbs
- timeline signal
- emotional language
- employer/team context
- adaptation/action signal
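
An illustrative version of such a scorer is below. The keyword lists, regexes, and per-component weights are assumptions for demonstration, not the production rules:

```python
import re

def narrative_score(text):
    """Illustrative 0-10 first-hand-narrative heuristic (assumed weights)."""
    t = text.lower()
    score = 0
    if re.search(r"\b(i|my|me|we)\b", t):
        score += 3  # first-person pronouns
    if re.search(r"\b(laid off|let go|fired|rif)\b", t):
        score += 3  # layoff-related verbs
    if re.search(r"\b(yesterday|last week|last month|today|months? ago)\b", t):
        score += 1  # timeline signal
    if re.search(r"\b(scared|anxious|devastated|relieved|angry)\b", t):
        score += 1  # emotional language
    if re.search(r"\b(my team|my manager|the company|our org)\b", t):
        score += 1  # employer/team context
    if re.search(r"\b(applying|interviewing|freelancing|pivoting|building)\b", t):
        score += 1  # adaptation/action signal
    return min(score, 10)
```

A first-person story with layoff verbs, a timeline, team context, and an action signal scores high; a headline-style reshare with none of those components scores near zero, which is the separation the gate relies on.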

### Depth Score (0–10)
Signals for whether the candidate contains enough substance to be useful.

Typical components:
- longer form content
- multi-sentence structure
- past → present → future arc
- multiple themes
- engagement threshold

### Promotion gate
A candidate may be:
- rejected immediately
- staged then rejected
- staged then promoted

Current pipeline behavior is intentionally stricter on X than on Reddit because the X stream contains more commentary, resharing, and low-context noise.
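
The gate's shape can be sketched as below; the threshold values and the combined-score rule are assumptions, illustrating only the structure (prefilter, then a platform-specific promotion bar):

```python
def gate(candidate):
    """Illustrative promotion gate; thresholds are assumptions.

    X gets a stricter bar than Reddit because the X stream carries
    more commentary, resharing, and low-context noise.
    """
    narrative = candidate["narrative_score"]
    depth = candidate["depth_score"]
    if narrative < 4:  # rejected immediately
        return "rejected", "prefilter_narrative_lt_4"
    bar = 14 if candidate["platform"] == "x" else 11  # assumed thresholds
    if narrative + depth >= bar:  # staged then promoted
        return "promoted", None
    return "staged_rejected", "promotion_threshold_not_met"
```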

---

## How to use the dashboard operationally

### If you want to assess dataset quality
Start with:
1. Raw Candidate Review
2. Top Rejection Reason
3. Platform Acceptance Split
4. Recent Stories

### If you want to assess emotional climate
Start with:
1. Avg Sentiment
2. Anxiety Index
3. Emotion Breakdown
4. Volume & Sentiment Over Time

### If you want to understand behavior after layoff/displacement
Start with:
1. Top Adaptation
2. Adaptation Paths
3. Recent Stories

### If you want to tune the collector
Work in this order:
1. inspect rejected X candidates
2. inspect accepted X candidates
3. inspect manual overrides
4. adjust query/filter/scoring rules
5. rerun collector
6. re-check review layer before trusting aggregate metrics

---

## Known limitations
- Sentiment is heuristic.
- Emotion extraction is heuristic.
- X is noisier than Reddit.
- Story volume is affected by collection strategy, not only real-world events.
- Manual promotion inserts are useful but may carry lighter metadata than collector-native promotions.
- This system is for curated signal analysis, not definitive labor-market measurement.

---

## Recommended workflow for Deca
1. Use the main KPIs for quick orientation.
2. Use Raw Candidate Review before making collector changes.
3. Use explicit rejection reasons whenever rejecting manually.
4. Treat X with skepticism until proven otherwise.
5. Use Recent Stories as the human reality check against the charts.

---

## Future improvements
- Inline tooltips for key KPIs and charts
- About / Methodology panel in UI
- Bulk moderation actions
- Manual promotion badge in stories feed
- Better duplicate detection
- More source-specific heuristics
- Review analytics over time (acceptance rate by platform, rejection drift, manual override trends)