Reporting
| Key | Value |
|---|---|
| Status | Active |
| Owner | QA Automation |
| Updated | 2026-03-26 |
| Scope | Weekly and monthly reports, Slack delivery, PDF output, and report setup |
Periodic reports give managers and stakeholders a structured view of system health over time. Unlike per-run Slack alerts, reports answer the longer-range question: are things getting better, worse, or holding steady?
Report Types
| Report | Coverage Window | When It Runs | Channel |
|---|---|---|---|
| weekly | 7 days | Monday morning | #qa-reports |
| monthly | 30 days | 1st of each month | #qa-reports |
Both reports are triggered by GitLab CI schedules and can also be run manually from the command line.
What The Weekly Report Contains
| Section | What It Shows |
|---|---|
| overall pass rate | percentage across all suites for the past 7 days |
| failure breakdown | per-site and per-suite failure counts |
| top failing tests | tests that failed most often during the period |
| top flaky tests | tests with mixed pass/fail outcomes |
| trend direction | pass rate compared to the prior week |
| root-cause rollup | failures grouped by incident or category, not listed one by one |
What The Monthly Report Contains
| Section | What It Shows |
|---|---|
| 30-day pass rate | overall quality trend for the month |
| stability patterns | which sites and suites are most or least reliable |
| recurring failures | incidents that appeared repeatedly |
| recovery rate | how quickly failures were addressed after first appearing |
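The recovery-rate metric can be sketched as a small calculation. This is illustrative only: the `Incident` shape with `firstSeen`/`resolvedAt` fields is an assumption, not the generator's actual data model.

```typescript
// Sketch: estimate recovery rate from incident records.
// The Incident shape (firstSeen/resolvedAt) is an assumed model
// for illustration, not the generator's real schema.
interface Incident {
  id: string;
  firstSeen: Date;
  resolvedAt?: Date; // undefined while the failure is still open
}

// Returns the fraction of incidents resolved in the period and the
// mean number of days from first appearance to resolution.
function recoveryStats(incidents: Incident[]): { rate: number; meanDays: number } {
  const resolved = incidents.filter((i) => i.resolvedAt !== undefined);
  const rate = incidents.length === 0 ? 0 : resolved.length / incidents.length;
  const meanDays =
    resolved.length === 0
      ? 0
      : resolved.reduce(
          (sum, i) =>
            sum + (i.resolvedAt!.getTime() - i.firstSeen.getTime()) / 86_400_000,
          0,
        ) / resolved.length;
  return { rate, meanDays };
}
```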
PDF Output
Reports are generated as PDF documents using Playwright's page.pdf(). They are saved to test-results/reports/ and uploaded as Slack file attachments when SLACK_BOT_TOKEN is set.
| Artifact | Location |
|---|---|
| PDF report | test-results/reports/ |
| Slack attachment | uploaded to the thread in #qa-reports |
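The rendering step might look like the sketch below. The `<kind>-<date>` file-naming scheme and the `PdfCapablePage` interface are assumptions; the interface is only a structural stand-in for the slice of Playwright's Page API used here, so the sketch compiles without the playwright package installed.

```typescript
import { mkdirSync } from "node:fs";
import { dirname, join } from "node:path";

// Structural stand-in for the slice of Playwright's Page API used here,
// so this sketch is self-contained without the playwright package.
interface PdfCapablePage {
  setContent(html: string, opts?: { waitUntil?: "networkidle" }): Promise<void>;
  pdf(opts: { path: string; format?: string; printBackground?: boolean }): Promise<unknown>;
}

// Output path under the documented test-results/reports/ location;
// the <kind>-<date> naming scheme is an assumption for this sketch.
function reportPdfPath(kind: "weekly" | "monthly", isoDate: string): string {
  return join("test-results", "reports", `${kind}-${isoDate}.pdf`);
}

// Render an HTML report to PDF. In the real generator `page` would come
// from chromium.launch() -> browser.newPage(); note page.pdf() is
// Chromium-only in Playwright.
async function renderReportPdf(
  page: PdfCapablePage,
  html: string,
  outPath: string,
): Promise<void> {
  mkdirSync(dirname(outPath), { recursive: true }); // ensure the reports dir exists
  await page.setContent(html, { waitUntil: "networkidle" });
  await page.pdf({ path: outPath, format: "A4", printBackground: true });
}
```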
Commands
| Command | What It Does |
|---|---|
| npm run report:weekly | generate and post a 7-day report |
| npm run report:monthly | generate and post a 30-day report |
| npm run data:weekly | pull weekly summary data without posting |
How To Read A Report
Pass Rate
The headline number is the overall pass rate for the period. A drop of more than 2-3 percentage points is worth investigating. Isolated drops on one site are usually selector or CMS changes. Broad drops across sites suggest infra or CI issues.
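The 2-3 point guideline can be expressed as a small check. This is a sketch of the reading rule above, not code from the generator; the function name and category labels are illustrative.

```typescript
// Classify a week-over-week pass-rate change, mirroring the guideline:
// drops beyond ~3 points are clearly worth investigating, 2-3 is borderline.
type Trend = "improving" | "steady" | "borderline" | "investigate";

function classifyTrend(currentPct: number, priorPct: number): Trend {
  const delta = currentPct - priorPct; // positive = pass rate went up
  if (delta >= 1) return "improving";
  if (delta > -2) return "steady";
  if (delta > -3) return "borderline";
  return "investigate";
}
```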
Failure Breakdown
Look at the per-site breakdown before the total number. If one site is responsible for most failures, the problem is site-specific. If multiple sites fail on the same test type, the issue is more likely in a shared component or external dependency.
Top Flaky Tests
Flaky tests are not failures, but they add noise and reduce confidence in results. If a test appears in the flaky list consistently across multiple weekly reports, it is a candidate for review.
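The "mixed pass/fail outcomes" definition can be sketched as a filter over per-test run outcomes. The input shape (test id mapped to a list of outcomes) is an assumption for illustration.

```typescript
// A test is "flaky" for the period when it has both passing and failing
// runs. The Map<testId, outcomes> input shape is assumed for this sketch.
function topFlakyTests(
  runs: Map<string, Array<"pass" | "fail">>,
  limit = 5,
): string[] {
  return [...runs.entries()]
    .filter(([, o]) => o.includes("pass") && o.includes("fail"))
    // Rank by number of failed runs, noisiest first.
    .sort(
      (a, b) =>
        b[1].filter((o) => o === "fail").length -
        a[1].filter((o) => o === "fail").length,
    )
    .slice(0, limit)
    .map(([id]) => id);
}
```

Note that a test failing on every run is excluded: that is a plain failure, not flakiness.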
Root-Cause Rollup
This section groups failures by their matched incident or category. If the same root cause appears repeatedly, it is a known recurring issue. If a cause is new, it warrants attention.
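The grouping itself is a simple count by matched cause. The `Failure` shape below is illustrative; the real generator's model may differ.

```typescript
// Group failures by their matched incident or category instead of
// listing each one. The Failure shape is an assumption for this sketch.
interface Failure {
  test: string;
  cause: string; // matched incident id or category label
}

function rollupByCause(failures: Failure[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const f of failures) {
    counts.set(f.cause, (counts.get(f.cause) ?? 0) + 1);
  }
  return counts;
}
```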
Slack Channel Setup
| Variable | Purpose | Default |
|---|---|---|
| SLACK_REPORTS_CHANNEL | target channel for reports | #qa-reports |
| SLACK_BOT_TOKEN | enables PDF upload and thread chaining | required for PDF delivery |
| SLACK_WEBHOOK_URL | fallback delivery (message only, no PDF) | used when bot token is absent |
Set SLACK_REPORTS_CHANNEL in GitLab CI variables or the local .env file to direct reports to the correct channel.
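The precedence in the table can be sketched as a selection function, assuming the generator checks the bot token first and falls back to the webhook. The function and type names are illustrative.

```typescript
// Pick a Slack delivery mode from the environment, mirroring the table:
// bot token enables PDF upload; webhook is the message-only fallback.
type Delivery =
  | { mode: "bot"; channel: string } // can upload the PDF
  | { mode: "webhook" }              // message only, no attachment
  | { mode: "none" };                // nothing configured

function pickDelivery(env: Record<string, string | undefined>): Delivery {
  if (env.SLACK_BOT_TOKEN) {
    // Channel defaults to the documented #qa-reports when unset.
    return { mode: "bot", channel: env.SLACK_REPORTS_CHANNEL ?? "#qa-reports" };
  }
  if (env.SLACK_WEBHOOK_URL) return { mode: "webhook" };
  return { mode: "none" };
}
```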
Common Issues
| Issue | What To Check |
|---|---|
| report not posting to Slack | check SLACK_BOT_TOKEN or SLACK_WEBHOOK_URL is set in the environment |
| PDF not attached | SLACK_BOT_TOKEN is required for file uploads; webhook-only delivery cannot attach files |
| report shows no data | check GRAFANA_SERVICE_ACCOUNT_TOKEN and OPENSEARCH_URL are set; generator falls back to local history files if OpenSearch is unreachable |
| wrong channel receiving reports | check SLACK_REPORTS_CHANNEL value in CI variables |
| duplicate reports | check whether both a schedule trigger and a manual trigger ran for the same period |
Data Sources
The report generator uses data in this order:
- OpenSearch via Grafana proxy (requires GRAFANA_SERVICE_ACCOUNT_TOKEN) — covers all CI runs
- test-results/history/{YYYY-MM-DD}.json — per-day local run records
- test-results/results.json — latest Playwright JSON reporter output (single-run fallback)
In CI, OpenSearch is the primary source. Locally, history files are typically used.
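The ordered fallback can be sketched as trying each loader in turn. The loader functions below are placeholders; the real ones would query OpenSearch or read the local JSON files.

```typescript
// Try each data source in the documented order, falling back to the
// next one on failure. Loader signatures here are assumptions.
type RunSummary = { passRate: number; source: string };
type Loader = () => Promise<RunSummary>;

async function loadWithFallback(loaders: Loader[]): Promise<RunSummary> {
  let lastError: unknown;
  for (const load of loaders) {
    try {
      return await load(); // first source that succeeds wins
    } catch (err) {
      lastError = err; // remember and try the next source
    }
  }
  throw new Error(`all data sources failed: ${lastError}`);
}
```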
Related Pages
| Need | Go To |
|---|---|
| Slack alerts and run summaries | Reporting |
| OpenSearch and Grafana | Observability |
| CLI commands | CLI Reference |