Independent analytical verification for biomedical, clinical, and healthcare research
RIQA provides third-party verification of analytical outputs across biomedical, clinical, and healthcare analytics domains — ensuring that conclusions drawn from data are internally consistent, transparently derived, and reproducible under defined conditions. AI-assisted detection. Human-reviewed interpretation.
Core assurance principles
01 / Provenance
Data lineage verification
Source, transformation logic, and analytical inputs traced and documented end-to-end.
02 / Consistency
Internal logical coherence
Reported metrics evaluated against the structure of the underlying data.
03 / Reproducibility
Independent reconstruction
Results reconstructed from provided data under defined, standardized conditions.
04 / Transparency
Structured audit output
Signed reports in human-readable and machine-readable formats with full provenance.
100% · Conflict-free independence
3 · Research domains covered
2 · Modules stable for production
4 · Phase audit pipeline
SHA-256 · Immutable audit provenance
Framework status
Validation module roadmap
RIQA develops modular validation engines for each analytical technique. Current production status:
RIQA-qPCR Livak
RT-qPCR Validation
Stable v1.1
RIQA-Flow Herzenberg
Flow Cytometry Validation
Stable v1.1
RIQA-RNAseq
RNA-seq Module
Advanced prototype
RIQA-Translational
Translational Biomarker Analytics
Advanced prototype
RIQA-Claims
Healthcare Analytics Assurance
Planned
Scope of coverage
Three domains, one consistent standard
RIQA applies the same four-phase verification framework across three primary domains. Select a domain to explore the full scope of coverage and what RIQA specifically verifies.
Domain 01
Preclinical & Biomedical Research
Studies involving experimental or computational data generated from laboratory, genomic, imaging, or in vivo techniques — where quantitative results are statistically summarized to support biological or translational conclusions.
Domain 02
Clinical Research & Trial Analytics
Preclinical and clinical study analyses where the integrity of outcome reporting has direct implications for regulatory review, publication, and clinical practice — verified by a neutral, non-sponsoring party.
Domain 03
Healthcare Analytics & CMS
Claims data, risk adjustment models, population health analytics, and policy simulations where data-driven conclusions influence reimbursement decisions and regulatory compliance at scale.
The audit pipeline
How a RIQA audit works
01
Data provenance review
Source, lineage, and all transformation steps applied to input data are traced and documented before analysis begins.
→ provenance-trace.pdf
02
Methodology assessment
Statistical methods evaluated for appropriateness. Reconstruction class assigned. Concordance standards pre-specified before outcomes are known.
→ reconstruction-declaration.json
03
Independent reconstruction
Results reconstructed from provided data by RIQA. Findings classified by severity. Sensitivity analysis across alternate assumptions.
→ reconciliation-findings.csv
04
Structured audit report
Signed report issued combining human-readable summary with machine-readable artifacts. Full provenance archive retained.
→ riqa-audit-report.pdf + audit-trail.json
Framework integrity
Why RIQA findings are defensible
Every RIQA audit is built on deterministic, reproducible, version-controlled analytical infrastructure.
Deterministic scoring
The same inputs always produce the same findings. No stochastic or black-box components in the scoring logic.
Version-locked pipelines
Every audit runs against a specific, documented framework version. RIQA-qPCR Livak v1.1 and RIQA-Flow Herzenberg v1.1 are stable for production.
SHA-256 audit provenance
Input files are hashed at intake. The audit trail records every analytical step with cryptographic traceability.
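In code terms, intake hashing is straightforward; the sketch below is illustrative (the function name and chunk size are assumptions, not RIQA's actual implementation):

```python
import hashlib

def hash_intake_file(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a submitted input file, streamed in
    chunks so large datasets can be hashed without loading them into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The recorded hex digest lets any party later confirm that the audited inputs are byte-identical to what was submitted.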
Pre-specified concordance
Reconstruction standards are declared before analysis begins — not after outcomes are known. Findings cannot be reverse-engineered to reach a desired conclusion.
Human-reviewed outputs
AI-assisted detection accelerates pattern recognition. Human expertise governs all finding classification, interpretation, and report issuance.
Catalog-driven findings
Finding language is defined in a versioned catalog, not generated ad hoc. Consistent, legally defensible language across all engagements.
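A versioned catalog can be as simple as a code-keyed table. The entries below are illustrative placeholders (the production catalog is versioned, domain-specific, and far larger):

```python
# Illustrative catalog entries keyed by stable finding code.
# These placeholders mirror codes used in the demonstration studies.
FINDING_CATALOG = {
    "F-101": ("Moderate", "Censoring rule implementation gap"),
    "F-103": ("Minor", "SAP tie-handling underspecified"),
}

def issue_finding(code: str) -> str:
    """Render a finding with catalog-defined, not ad hoc, language."""
    severity, text = FINDING_CATALOG[code]
    return f"{code} · {severity} · {text}"
```

Because the language comes from the catalog entry rather than free text, the same finding reads identically across every engagement and framework version.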
Who RIQA serves
Built for organizations where data integrity is non-negotiable
Preclinical & Biomedical Research
Academic research laboratories
Core facilities and shared instrumentation
Contract research organizations (CROs)
Translational and preclinical research groups
Grant-funded research consortia
Journal authors seeking pre-submission assurance
Clinical Research & Trial Analytics
Clinical trial sponsors and biotech organizations
Biostatistics teams preparing NDA submissions
Regulatory affairs and clinical operations groups
Trial monitoring and data safety organizations
NIH grant applicants with preliminary data
Healthcare Analytics & CMS
Healthcare analytics vendors and consultancies
CMS contractors and Medicare Advantage plans
Payment integrity and fraud detection teams
Population health and quality reporting organizations
Health system analytics and informatics teams
What an engagement looks like
Typical RIQA engagements
Every engagement follows the same four-phase pipeline. Scope, inputs, and deliverables are defined up front, before analysis begins.
Biomedical
Pre-publication verification
A research group submits raw qPCR Cq values and flow cytometry summaries. RIQA independently reconstructs fold changes, p-values, and gating coherence. A signed assurance report is referenced in the manuscript methods section before journal submission.
Clinical
Sponsor-side SAP concordance review
A pharmaceutical sponsor submits patient-level data and the Statistical Analysis Plan ahead of a regulatory or publication submission. RIQA reconstructs primary and secondary endpoints, evaluates censoring rule implementation against the SAP, and delivers a structured concordance report with sensitivity analyses.
Healthcare analytics
CMS analytics implementation audit
An analytics team submits model documentation, production code, and claims data outputs before a quality reporting or compliance submission. RIQA verifies whether the implemented logic matches the declared specification — evaluating look-back windows, ICD mappings, and ETL transformation consistency.
Research funding
NIH grant reproducibility assessment
A PI submits preliminary data from an R01 application. RIQA verifies that the statistical conclusions in the specific aims are analytically defensible and issues a brief reproducibility certificate referenced in the grant application.
Translational
Biomarker concordance review
A translational research group submits biomarker panel data from a clinical assay. RIQA evaluates endpoint concordance, directional consistency across cohort subgroups, and assay-to-conclusion traceability before submission to a translational journal or regulatory authority.
Institutional
External methodological assurance
A research institution or core facility engages RIQA as an external quality assurance layer for studies generated within their infrastructure — providing structured reproducibility documentation to support submissions or reporting cycles.
"RIQA does not position itself as a gatekeeper or decision-maker, but rather as an enabling layer that enhances trust, supports peer review, and strengthens the overall quality of data-driven work."
— RIQA Mission Statement · riqassure.com
Ready to verify your research?
Start with a conversation — no commitment required.
Framework documentation
The RIQA Assurance Framework
A four-phase, modular methodology for third-party analytical result reconstruction across biomedical, clinical, and healthcare analytics domains. Every audit is pre-specified, version-controlled, and reproducible under defined conditions.
Four-phase pipeline
How every audit proceeds
The same four phases apply across all three domains. Domain-specific verification procedures are applied within each phase.
Phase 01
Data Provenance & Transformation Review
Evaluates the lineage and transformation pathway from source data to reported outputs. In biomedical research this covers normalization procedures, batch correction documentation, and gating strategy definitions. In clinical research it covers endpoint derivation files, censoring rule implementations, and SAP alignment. In healthcare analytics it covers ETL logic, look-back window specifications, and crosswalk file versioning.
Phase 02
Methodology Assessment
Evaluates alignment between the declared analytical methodology and the structure of the underlying data. Each analytical component is assigned to a reconstruction class within the RIQA taxonomy. A Reconstruction Methodology Declaration is issued specifying the concordance standard to be applied in Phase 03. Analytical drift is evaluated where relevant.
Phase 03
Independent Reconstruction
Results are independently reconstructed from source data and documented analytical procedures. For exact and near-deterministic methods, numerical reconstruction is performed. For architectural audit methods, specification-to-implementation alignment is evaluated. A sensitivity analysis is separately performed to evaluate conclusion stability under reasonable alternate assumptions.
Phase 04
Structured Audit Report
Generates structured assurance outputs including the signed audit report, findings register classified by severity, integrity score across four dimensions, and machine-readable artifacts. SHA-256 hashes of all input files are recorded in the audit trail. Every finding traces to a specific catalog entry with stable finding code, severity, and remediation guidance.
Reproducibility holds; minor or moderate items warrant attention.
70–84
Methodological concerns
Direction of conclusions holds, but specific issues should be addressed.
< 70
Material reproducibility concerns
One or more results cannot be reproduced from submitted data; revision required.
Audit provenance flow
End-to-end analytical pipeline
From submission intake to signed report — every step is documented, version-controlled, and reproducible under defined conditions.
Severity framework
Finding classification
All findings are classified using a four-tier severity framework. Deductions are subtractive from a 100-point base per dimension.
Material
−25 pts
Reported endpoint does not reproduce from submitted data, or methodological flaw of substance. Revision required.
Moderate
−9 pts
Methodological concern not changing the direction of effect; or partial reproducibility. Disclosure recommended.
Minor
−4 pts
Reporting or documentation gap with no effect on the result. Better documentation recommended for future submissions.
Informational
0 pts
Best-practice recommendation. No defect identified; surfaced to support continuous improvement.
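The subtractive arithmetic is simple enough to sketch. Note that this is a single-dimension illustration; the actual framework scores four dimensions, and its aggregation rules are not reproduced here:

```python
# Points deducted per finding severity, subtracted from a
# 100-point base for one integrity dimension.
DEDUCTIONS = {"material": 25, "moderate": 9, "minor": 4, "informational": 0}

def dimension_score(finding_severities: list[str]) -> int:
    """Deterministic subtractive score for one dimension, floored at zero."""
    return max(0, 100 - sum(DEDUCTIONS[s] for s in finding_severities))
```

For example, two moderate findings plus one minor finding yield 100 − 9 − 9 − 4 = 78 on that dimension.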
Research-use and scope of findings
RIQA provides analytical assurance for research and quality-review purposes. RIQA does not function as a regulatory authority, certifying body, or legal compliance organization. RIQA findings are reproducibility and methodological consistency statements derived from submitted data and declared methodology — not determinations of scientific truth, biological validity, or regulatory compliance. Audit reports are intended to support author due diligence, pre-submission review, and internal quality assurance programs.
Anchor standards
Standards alignment
RIQA's provenance requirements are anchored to established community standards. RIQA goes beyond checklist compliance to provide independent quantitative reconstruction.
MIQE 2.0 — Minimum Information for qPCR Experiments
Bustin et al. 2025, Clin Chem 71:634. The RIQA-qPCR Livak v1.1 provenance catalog is structured along MIQE 2.0 sections, with severity assignments consistent with the essential/desirable distinction.
MIFlowCyt — Minimum Information about a Flow Cytometry Experiment
Lee et al. 2008, Cytometry A 73A:926. The RIQA-Flow Herzenberg v1.1 provenance layer is anchored to MIFlowCyt, covering panel, gating tree, compensation, FMO controls, and viability declarations.
ICH E9(R1) — Estimands and Sensitivity Analysis
The RIQA sensitivity analysis framework for clinical trials aligns with ICH E9(R1) estimand principles, evaluating conclusion stability under alternate analytical assumptions.
RIQA audit outputs include machine-readable JSON artifacts structured for downstream integration, consistent with FAIR data principles for research provenance infrastructure.
Download the RIQA White Paper
Full framework documentation · RIQA-WP-001 · 2026 · Open Access CC BY-NC 4.0
Scope of coverage
Three domains. One consistent standard.
RIQA applies the same four-phase verification framework across three primary research and analytics domains. Each domain has domain-specific verification procedures anchored to established community standards.
Domain 01
Preclinical & Biomedical Research
Quantitative experimental and computational workflows from laboratory, genomic, imaging, and in vivo techniques. Two production-ready validation modules currently available.
Domain 02
Clinical Research & Trial Analytics
Clinical endpoint and survival analyses where integrity of outcome reporting has direct regulatory implications. Independent verification of SAP compliance, ITT construction, and sensitivity analyses.
Domain 03
Healthcare Analytics & CMS
Claims data, risk adjustment models, and quality reporting systems where implementation errors carry direct financial and compliance consequences. Architectural audit approach for specification-to-implementation verification.
Domain 01
Preclinical & Biomedical Research
Independent validation of quantitative experimental workflows — from single-cell transcriptomics and flow cytometry to RT-qPCR and imaging — where analytical implementation choices directly affect reported biological conclusions.
Production modules
Available validation frameworks
Stable v1.1
RIQA-qPCR Livak
Anchor standard: MIQE 2.0 (Bustin et al. 2025)
Independent reconstruction of fold changes via the 2^−ΔΔCt method from submitted per-replicate Cq values. Evaluates reference gene stability, statistical methodology, and methodological provenance against MIQE 2.0 expectations.
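The 2^−ΔΔCt arithmetic itself is the standard Livak & Schmittgen (2001) formulation and can be sketched directly. The replicate handling below (simple means per condition) is an illustrative simplification:

```python
from statistics import mean

def fold_change_livak(target_ctrl, ref_ctrl, target_trt, ref_trt):
    """Fold change via 2^-ddCt from per-replicate Cq values.

    Each argument is a list of Cq replicates for one gene/condition:
    target or reference gene, under control or treated conditions.
    """
    # dCt per condition: target Cq minus reference-gene Cq.
    ddct = (mean(target_trt) - mean(ref_trt)) - (mean(target_ctrl) - mean(ref_ctrl))
    return 2.0 ** (-ddct)
```

For instance, if the target's mean Cq drops from 25 to 23 while the reference gene holds at 20, ΔΔCt = −2 and the reconstructed fold change is 4.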
Stable v1.1
RIQA-Flow Herzenberg
Anchor standard: MIFlowCyt (Lee et al. 2008)
Independent reconstruction of frequency endpoints in percentage-point space and MFI endpoints in log₂-ratio space. A gating tree arithmetic coherence layer verifies parent-child population consistency across every gate transition.
Frequency endpoints (pp space) · MFI endpoints (log₂ space) · Gating tree coherence · Path A and Path B support · MIFlowCyt provenance
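The coherence idea can be shown with a minimal parent-child event check. The real layer evaluates frequencies across every gate transition; this sketch, with assumed data shapes, checks only event counts:

```python
def gating_coherence_issues(tree: dict[str, dict]) -> list[str]:
    """Flag gates whose recorded event count exceeds their parent's.

    `tree` maps gate name -> {"parent": parent name or None, "events": int}.
    A child population can never contain more events than the gate it was
    drawn from, so any violation indicates an internal inconsistency.
    """
    issues = []
    for name, gate in tree.items():
        parent = gate["parent"]
        if parent is not None and gate["events"] > tree[parent]["events"]:
            issues.append(f"{name} ({gate['events']}) exceeds parent "
                          f"{parent} ({tree[parent]['events']})")
    return issues
```

An empty result means every reported population is arithmetically consistent with its parent; any entry is a candidate finding.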
Advanced prototype
RIQA-RNAseq
In development · Target: v1.0
Differential expression concordance, pathway enrichment sensitivity, batch correction documentation, and normalization method consistency for bulk RNA-seq workflows. Pilot engagements in scoping.
Western blot densitometry and loading control validation. Histology and image-based quantification. Proteomics and metabolomics normalization workflows. scRNA-seq cluster stability and UMAP embedding assessment.
Bulk RNA-seq
Alignment tool, normalization method, DE tool, gene set database version
Software-tolerant · Architectural audit
scRNA-seq
Clustering resolution, random seed, batch correction method, cell filtering thresholds
Structural verification · Architectural audit
Imaging / IHC
Segmentation threshold, ROI definition, reader variability, quantification software version
Near-deterministic · Structural verification
Western blot
Loading control linear range, ROI definition, normalisation reference
Exact · Near-deterministic
Submit a biomedical study for review
Initial scoping consultation at no cost.
Domain 02
Clinical Research & Trial Analytics
Independent verification of clinical endpoint analyses, survival studies, and regulatory submission outputs — where the integrity of analytical implementation directly affects regulatory decisions, publication conclusions, and patient care.
Verification scope
What RIQA evaluates in clinical research
Modern clinical studies involve implementation-sensitive analytical choices that are rarely subjected to external reconstruction. RIQA evaluates whether the analytical logic is sound, the SAP was followed, and conclusions hold under alternate assumptions.
Verify correction method and scope; recompute adjusted p-values
Concordance standard
When exact reconstruction is not expected
A critical principle of the RIQA clinical framework: exact numerical reconstruction is not always achievable or analytically meaningful. MMRM analyses across SAS, R, and Python use different optimization algorithms. RIQA evaluates directional stability and significance preservation, not identity.
Illustrative concordance assessment
Scenario · HR · p-value
Sponsor reported · 0.74 · 0.041
RIQA reconstruction · 0.79 · 0.067
Alternate censoring · 0.83 · 0.110
Breslow tie-handling · 0.81 · 0.089
Illustrative example from RIQA-CS-CLI-001 demonstration study.
RIQA assessment
Treatment direction preserved across all scenarios. Statistical significance was not maintained under four of six sensitivity scenarios evaluated, including the RIQA primary reconstruction. This class of finding — analytically subtle, methodologically significant — is RIQA's primary value proposition in the pharmaceutical domain.
F-102 · Moderate
Significance not maintained under alternate analytical assumptions. Recommendation: strengthen SAP censoring specification and pre-register sensitivity analyses.
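In code terms, the concordance question reduces to two checks, sketched here with hypothetical inputs (the scenario values mirror the illustrative table above, not a real study):

```python
def concordance_summary(reported_hr: float,
                        scenarios: dict[str, tuple[float, float]],
                        alpha: float = 0.05) -> dict:
    """Evaluate directional stability and significance preservation.

    `scenarios` maps scenario name -> (hazard ratio, p-value). Direction
    is preserved when every HR falls on the same side of 1.0 as the
    reported estimate; significance is assessed per scenario at `alpha`.
    """
    benefit = reported_hr < 1.0
    return {
        "direction_preserved": all((hr < 1.0) == benefit
                                   for hr, _ in scenarios.values()),
        "significance_lost_in": sorted(name for name, (_, p)
                                       in scenarios.items() if p >= alpha),
    }
```

Direction preserved with significance lost in some scenarios is exactly the "analytically subtle, methodologically significant" pattern described above.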
Engagement models
When to engage RIQA in clinical research
Pre-NDA / pre-submission
Sponsor-engaged assurance
Independent reconstruction of primary and secondary endpoints before regulatory submission. RIQA assurance report included in the submission package.
Pre-publication
Author-engaged assurance
Research groups seeking methodological assurance before submission to high-impact journals. Signed report referenced in methods section.
Grant applications
NIH grant proposal assurance
PIs seeking neutral third-party verification of preliminary data statistical conclusions before R01 or STTR submission. RIQA certificate referenced in application.
Submit a clinical study for review
Pre-NDA, pre-publication, or grant proposal assurance.
Domain 03
Healthcare Analytics & CMS
Architectural audit of claims-based models, risk adjustment systems, and quality reporting pipelines — where specification-to-implementation gaps carry direct financial and regulatory consequences at scale.
Audit approach
Architectural audit, not numerical reconstruction
In healthcare analytics, exact numerical reconstruction is often structurally impossible without access to the same data vintage, crosswalk file version, and processing environment. RIQA verifies whether the implemented analytics match the declared specifications — identifying inconsistencies between documented logic and production implementation, and confirming that population construction and output derivation are consistent with what the method description requires.
Quality measure verification
HEDIS measure specification compliance, numerator and denominator derivation, measure version tracking
Specification-to-code gaps, measure version drift across reporting periods
ETL governance
Transformation logic documentation, data lineage tracing, schema version tracking, field-level mapping verification
Undocumented transformations, schema changes without version control, field mappings deviating from specification
HCC drift analysis
Longitudinal consistency of HCC assignments across model updates, software version changes, and crosswalk revisions
Silent output changes across model versions, specification-implementation divergence accumulating over time
Claims anomaly reconstruction
Verification of reported rate calculations, outlier provider identification logic, and risk score distribution consistency
Rate calculation errors, provider threshold misclassification, risk score discrepancies across reporting systems
Concrete anomaly example
What a typical implementation gap looks like
HCC flag miscalculation
Chronic condition look-back mismatch
Specification requires a 24-month look-back window for chronic HCC conditions. Production code implements 12 months. Result: 3,847 members missing qualifying diagnoses. RAF scores systematically understated by an average of 0.14 per affected member.
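This class of error is easy to see in miniature. A hypothetical sketch of the window logic, where the only difference between compliant and non-compliant output is the `lookback_months` argument (all names and data shapes here are illustrative):

```python
from datetime import date

def months_between(earlier: date, later: date) -> int:
    """Whole calendar months from `earlier` to `later` (day-of-month ignored)."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def qualifying_diagnoses(diagnoses: list[tuple[str, date]], as_of: date,
                         lookback_months: int = 24) -> set[str]:
    """Diagnosis codes whose service date falls inside the look-back window."""
    return {code for code, dx_date in diagnoses
            if 0 <= months_between(dx_date, as_of) < lookback_months}
```

A chronic-condition diagnosis recorded 19 months before the as-of date qualifies under a 24-month window but silently drops out under a 12-month implementation.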
ETL transformation drift
Revenue code crosswalk version mismatch
Model documentation references FY2022 ICD-10-CM crosswalk. Production pipeline silently retained FY2021 mappings after a scheduled update. 214 procedure codes mapped to incorrect HCC categories across all claims processed in Q1–Q3.
Quality measure reconstruction
Denominator construction error
Reported measure denominator: 18,440 eligible members. RIQA reconstruction from enrollment files: 16,982. Discrepancy of 1,458 members traced to an enrollment gap period not excluded per HEDIS specification. Compliance rate overstated by 2.3 percentage points.
These examples are illustrative. All figures are synthetic and constructed for demonstration purposes.
Illustrative finding
The stakes of implementation gaps
RIQA-CS-CMS-001 · Demonstration study
HCC look-back window mismatch
A Medicare Advantage analytics team submitted a hierarchical logistic regression model for risk-adjusted readmission rate verification. RIQA's architectural audit identified that the HCC chronic condition look-back window had been implemented as 12 months in production code despite a 24-month specification in the model documentation.
No evidence of misconduct was identified. The error arose from a specification-to-implementation gap between the model documentation and the production code — a class of error that is invisible to any review mechanism that does not externally examine the code against the specification.
Resubmission completed with corrected covariate derivation and updated ICD-10 crosswalk.
Submit an analytics model for review
Risk adjustment, quality reporting, or claims analytics.
Demonstration series
RIQA Analytical Demonstration Studies
These studies illustrate how the RIQA framework evaluates analytical integrity across three primary domains. They demonstrate the audit methodology, finding classification system, and integrity scoring framework — not client outcomes.
Framework demonstration series. These studies utilize synthetic and reconstructed datasets derived from publicly available analytical patterns and are intended solely to illustrate the RIQA audit methodology and reporting framework. They are not client engagements, real studies, or validated deployments. Initial collaborative pilot engagements with research institution partners are currently in development.
A research group engaged RIQA for pre-publication methodological assurance of a multi-modal vascular biology study integrating bulk RNA-seq, single-cell transcriptomics, in vivo phenotyping, and image-based lesion quantification. The engagement focused on cluster stability assessment and cell-type specificity verification.
Biological direction preserved across all reconstruction workflows.
RIQA-CS-CLI-001 · Clinical Research & Trial Analytics
Pre-NDA Assurance · Phase III Oncology Survival Analysis
A pharmaceutical sponsor engaged RIQA for pre-NDA external analytical assurance of a Phase III randomized trial in advanced NSCLC. Primary endpoint: progression-free survival. RIQA independently reconstructed the primary PFS analysis and evaluated conclusion stability across six sensitivity scenarios.
F-101 · Censoring rule implementation gap · Moderate
F-102 · p-value sensitivity to censoring · Moderate
F-103 · SAP tie-handling underspecified · Minor
82 / 100 · Verified with notes
Treatment direction preserved. Significance not maintained across 4 of 6 sensitivity scenarios.
A healthcare analytics team submitted a hierarchical logistic regression model for risk-adjusted readmission rate verification prior to CMS quality reporting. RIQA verified whether the implemented model matched the declared specification across 142,800 beneficiaries and 14 providers, evaluating look-back window logic, ICD mapping versions, and ETL transformation consistency.
F-201 · HCC look-back window mismatch (12 vs 24 mo) · Material
F-202 · ICD-10 crosswalk version discrepancy · Moderate
F-203 · ETL transformation undocumented · Minor
72 / 100 · Methodological concerns
8.3% of beneficiaries affected. 3 of 14 providers crossed compliance threshold. Resubmission required.
About this series
Methodology over narrative
These demonstration studies were developed to show how the RIQA framework reasons through complex analytical integrity problems — not to simulate a client portfolio. The value is in the methodology: the finding classification logic, the concordance vocabulary, the integrity scoring framework, and the structured audit report format. Each of these elements is identical to what a real engagement would produce.
RIQA's first real pilot engagements are currently in development with collaborating research institution partners. As those engagements are completed, this page will be updated with real-world findings (subject to confidentiality agreements). If you are interested in participating in a pilot engagement, contact us through the form below.
Interested in a pilot engagement?
Real data. Real findings. No commitment required at this stage.
Get in touch
Submit a study or start a conversation
Initial scoping consultations are conducted at no cost. Use the form below to describe your study or engagement interest and we will respond within two business days.
By submitting this form you agree to be contacted by RIQA LLC. Your information will not be shared with third parties. All engagement discussions are held in strict confidence.
Initial scoping consultations at no cost. No commitment required.
STTR collaboration
Research institution partners
RIQA is actively seeking research institution co-investigators for an STTR grant application in biomedical computational reproducibility. Select "STTR collaboration inquiry" above.