Validates the survey you wrote. Surfaces bias, double-barreled items, scale mismatches, and construct gaps. Tells you which trade-offs are yours to defend — and gives you a documented methodology trail.
Diagnose, revise, defend. Three steps. No black box.
Most survey tools score one dimension. We run ten.
Acquiescence bias (yea-saying stems), leading questions ("don't you think…"), loaded language ("our excellent…"), social desirability bias, and filler preambles ("Overall," "In general"), vague qualifiers that add no information.
Catches items where the question stem ("How satisfied…?") doesn't match the response scale (Strongly agree → Strongly disagree). The kind of mismatch that quietly invalidates your data analysis.
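A stem/scale mismatch check can be sketched as a keyword mapping from question stems and scale anchors to response families. This is an illustrative toy, not the product's detector; the keyword lists and family names here are assumptions.

```python
# Toy stem/scale mismatch check: a "How satisfied…" stem expects a
# satisfaction scale, not an agreement scale. Keyword lists are
# illustrative assumptions, not the shipped heuristic.
STEM_FAMILIES = {
    "satisfied": "satisfaction",
    "agree": "agreement",
    "likely": "likelihood",
}
SCALE_FAMILIES = {
    "Very satisfied": "satisfaction",
    "Strongly agree": "agreement",
    "Very likely": "likelihood",
}

def mismatch(stem: str, scale_anchor: str) -> bool:
    # Map the stem and the scale's top anchor to families; flag when
    # both are recognised and they disagree.
    stem_fam = next((f for k, f in STEM_FAMILIES.items() if k in stem.lower()), None)
    scale_fam = SCALE_FAMILIES.get(scale_anchor)
    return stem_fam is not None and scale_fam is not None and stem_fam != scale_fam
```

For example, "How satisfied are you?" paired with a Strongly agree → Strongly disagree scale is flagged; the same stem with a Very satisfied anchor passes.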
Flags double-barreled items asking two things at once ("How clear and helpful was the instructor?") so you can split them or rewrite. Two ideas, one response = uninterpretable data.
Compares your questions to your stated goal. Names dimensions you're measuring but didn't list, and dimensions in your goal that no items measure. Helps you decide what to add or drop.
Audience-aware grade-level thresholds. Employees get grade 12, students grade 8, general public grade 10. Items above your audience's threshold are flagged with concrete shortening suggestions.
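The audience thresholds above can be sketched as a readability gate. The sketch below uses the standard Flesch-Kincaid grade formula with a naive syllable counter; the counter and the flagging rule are illustrative assumptions, not the product's implementation.

```python
# Sketch of an audience-aware readability gate. Thresholds come from
# the copy above; the syllable counter is a deliberately naive
# vowel-group count, adequate for a sketch only.
import re

GRADE_THRESHOLDS = {"employees": 12, "students": 8, "general_public": 10}

def syllables(word: str) -> int:
    # Count vowel groups as syllables; never return zero.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syl = sum(syllables(w) for w in words)
    return 0.39 * (n / sentences) + 11.8 * (syl / n) - 15.59

def flag_item(item: str, audience: str) -> bool:
    # Flag the item when it reads above the audience's threshold.
    return fk_grade(item) > GRADE_THRESHOLDS[audience]
```

A dense, polysyllabic item gets flagged for a student audience; "How happy are you?" does not.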
Cronbach's α and McDonald's ω computed on synthetic responses with auto-reverse-scoring. Tells you whether your subscales would likely hold together at deployment — before you collect a single real response.
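The α computation itself is standard: α = k/(k−1) · (1 − Σ item variances / variance of totals). A minimal sketch, with a helper for reverse-scoring negatively keyed items; the synthetic-response generation and McDonald's ω are omitted here.

```python
# Minimal Cronbach's alpha sketch. The product also computes
# McDonald's omega and generates the synthetic responses; both are
# out of scope for this illustration.
def cronbach_alpha(items):
    """items: list of per-item response lists, all the same length."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

def reverse_score(item, scale_max=5, scale_min=1):
    # Auto-reverse-scoring: flip a negatively keyed item onto the
    # same Likert scale (1 <-> 5, 2 <-> 4, ...).
    return [scale_max + scale_min - x for x in item]
```

Perfectly parallel items yield α = 1.0; reverse-scoring maps 1 → 5 and 5 → 1 on a five-point scale.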
Two AI models (Claude Opus 4.7 + GPT-5.5) score each item independently. Disagreement is surfaced as a half-point deduction, a calibration signal single-model tools can't provide.
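The consensus rule described above is simple to state in code. The averaging, the agreement tolerance, and the floor at zero are all assumptions for illustration, not the published scoring spec.

```python
# Sketch of a two-model consensus score with a half-point deduction
# on disagreement. Tolerance and aggregation are illustrative
# assumptions, not the product's published rule.
def consensus_score(score_a: float, score_b: float, tolerance: float = 0.5) -> float:
    base = (score_a + score_b) / 2
    if abs(score_a - score_b) > tolerance:
        base -= 0.5  # surface the disagreement in the final score
    return max(base, 0.0)
```

Agreement passes through unchanged; a 4.0 vs 2.0 split averages to 3.0 and lands at 2.5 after the deduction.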
An 8-dimension assessment of structural integrity: scale defined, construct coverage, demographic segmentation, framing balance, item independence, open-end design, question wording, length/load. Severity-rated. Not just "what's wrong" but "which calls are yours to make."
We can't promise your survey will produce more profitable decisions. No one honest can — that's not how survey methodology works. What we can promise is cleaner data and a methodology you can defend. The decisions you make from that data are still yours.
We don't sell pre-vetted instruments. If a Gallup Q12 fits your construct, use that. We catch the problems in the survey you wrote when no off-the-shelf instrument fits.
We don't replace a psychometrician. We give you a structured diagnosis, AI-assisted revision, and a documented trade-off list. That's a real piece of methodological hygiene most teams skip.
One price. One link. No subscription.
1 survey, unlimited iteration.
Prices in USD. CAD card payments accepted; Stripe converts at ~2% FX margin. Refunds honoured within 30 days of purchase if the PIN hasn't been used.
One distinct upload, identified by the content of its question text. Re-validating the same survey, applying AI wording revisions, running structural revisions, and re-uploading the revised version of the same questions are all free. You only burn a credit when you validate a meaningfully different set of questions.
No. The AI rewrites wording when wording is the problem. It also proposes structural moves — split a double-barreled item, drop an item with no construct, merge redundant pairs, add an item to fill a construct gap. You review each move with a checkbox and accept only the ones you want.
Surveys you submit are processed by Anthropic (Claude Opus 4.7) and OpenAI (GPT-5.5) under their respective API terms. We don't retain a copy of your survey content beyond your session. The audit trail is yours — a Word document you download. Nothing is published or shared.
Yes — that's one of the audiences we built for. The methodology appendix is designed to be cite-able in a paper's methods section. Pilot your instrument here, run the validator, document the trade-offs, then proceed to formal validation if needed.
Email support@validsurvey.app with your course completion confirmation. We honour LearnFormula graduates with a discount code.
Buy again with the same email. The Worker recognises your email, tops up your existing PIN with 1 more survey, and resets the 365-day timer. You don't need a new PIN.
The PIN is hashed with PBKDF2-SHA256 + a random salt. We don't have access to the plaintext after sending. Store the welcome email safely. If lost, contact support@validsurvey.app — we can verify your purchase via Stripe and issue a replacement if the original hasn't been disabled.
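The hashing scheme named above maps directly onto Python's standard library. A minimal sketch of the hash-and-verify flow; the iteration count and salt length here are illustrative choices, not our production values.

```python
# Illustrative PBKDF2-SHA256 PIN storage and verification using the
# Python stdlib. Iteration count and salt size are assumptions;
# tune them per current password-storage guidance.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative; raise as hardware improves

def hash_pin(pin: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # random per-PIN salt
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERATIONS)
    return salt, digest  # store both; the plaintext PIN is never kept

def verify_pin(pin: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```

Verification recomputes the digest from the stored salt, so the plaintext never needs to exist server-side after the welcome email is sent.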
Yes, within 30 days, if the PIN hasn't been used to validate a survey. Email support@validsurvey.app. Refunds process through Stripe; we mark the PIN as refunded in the same step.