Not all SOC 2 reports are created equal.

We applied the SOC 2 Quality Guild's Reliability Rubric to 25 real audit reports. Here's what we found.

100% scored D or F
52.1 average score (out of 100)
68% "No exceptions" rate
21 reports issued under the same license

Signal Breakdown

Average score per signal across 22 SOC 2 reports. The strongest signals for detecting rubber-stamped audits are S7 (test procedures) and S8–S10 (firm credibility).

Pillar Profile

Average scores across 22 SOC 2 reports with min–max range. Note the characteristic shape: high Structure (the template is correct), collapsed Substance and Source (the work wasn't done).


The Three Pillars

The Guild's rubric evaluates reports across three dimensions. These reports pass on Structure — the template is correct. But Substance and Source tell a different story.

Structure: 84.4 (report formatting and AICPA compliance)
Substance: 50.8 (depth of audit testing and specificity)
Source: 22.5 (audit firm credibility and independence)
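As a rough illustration of how pillar scores might roll up into an overall grade, here is a minimal sketch. The pillar names and scores come from the data above; the equal weighting is an assumption for illustration, not the Guild's actual formula.

```python
# Pillar scores from the article; equal weighting is assumed, not the
# Guild's published aggregation.
PILLARS = {
    "Structure": 84.4,
    "Substance": 50.8,
    "Source": 22.5,
}

def composite(pillars: dict[str, float]) -> float:
    """Unweighted mean of pillar scores (assumed aggregation)."""
    return sum(pillars.values()) / len(pillars)

print(round(composite(PILLARS), 1))  # 52.6
```

An unweighted mean lands near the 52.1 average reported above, though the rubric's real weighting may differ.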

Spot the Difference

Real test procedures describe specific evidence, sample sizes, and time periods. Rubber-stamped reports use identical boilerplate for every control.

Rubber-stamped
“Inquired of the control owner to ascertain the appropriateness of the control performed. Selected a sample of active employees. For the selected samples, inspected evidence to observe that employees are required to acknowledge the Code of Conduct.”
Result: “No exceptions noted” — on every control, every report.
Rigorous
“Selected 25 of 142 access provisioning tickets from Q1–Q3 2025. For each, reperformed the approval workflow by tracing to the manager's documented authorization in Jira. Confirmed provisioned permissions matched the approved role profile in Okta.”
Result: “2 of 25 samples lacked documented manager approval. Exception noted.”
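The contrast above suggests one mechanical check: if every control in a report uses near-identical test-procedure language, the audit was likely rubber-stamped. Here is a hypothetical sketch of such a signal using pairwise text similarity; the function name, sample strings, and thresholds are illustrative, not part of the Guild's rubric.

```python
from difflib import SequenceMatcher
from itertools import combinations

def boilerplate_ratio(procedures: list[str]) -> float:
    """Mean pairwise similarity of test-procedure text.

    Values near 1.0 indicate copy-pasted boilerplate across controls.
    """
    pairs = list(combinations(procedures, 2))
    if not pairs:
        return 0.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Illustrative inputs modeled on the examples above.
rubber_stamped = [
    "Inquired of the control owner to ascertain the appropriateness of the control.",
    "Inquired of the control owner to ascertain the appropriateness of the control.",
    "Inquired of the control owner to ascertain the appropriateness of the control.",
]
rigorous = [
    "Selected 25 of 142 access provisioning tickets from Q1-Q3 2025.",
    "Reperformed backup restoration for 3 of 12 monthly jobs.",
    "Traced 40 change tickets to documented CAB approval.",
]

print(boilerplate_ratio(rubber_stamped) > 0.95)  # True: identical boilerplate
print(boilerplate_ratio(rigorous) < 0.6)         # True: distinct procedures
```

A production scorer would need more than string similarity (sample sizes, time periods, named systems), but even this crude measure separates the two examples.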