Independent Assurance Initiative · Est. 2026

Independent analytical verification for biomedical, clinical, and healthcare research

RIQA provides third-party verification of analytical outputs across biomedical, clinical, and healthcare analytics domains — ensuring that conclusions drawn from data are internally consistent, transparently derived, and reproducible under defined conditions. AI-assisted detection. Human-reviewed interpretation.

Core assurance principles
01 / Provenance
Data lineage verification
Source, transformation logic, and analytical inputs traced and documented end-to-end.
02 / Consistency
Internal logical coherence
Reported metrics evaluated against the structure of the underlying data.
03 / Reproducibility
Independent reconstruction
Results reconstructed from provided data under defined, standardized conditions.
04 / Transparency
Structured audit output
Signed reports in human-readable and machine-readable formats with full provenance.
Framework status
Validation module roadmap
RIQA develops modular validation engines for each analytical technique. Current production status:
RIQA-qPCR Livak
RT-qPCR Validation
Stable v1.1
RIQA-Flow Herzenberg
Flow Cytometry Validation
Stable v1.1
RIQA-RNAseq
RNA-seq Module
Advanced prototype
RIQA-Translational
Translational Biomarker Analytics
Advanced prototype
RIQA-Claims
Healthcare Analytics Assurance
Planned
Scope of coverage
Three domains, one consistent standard
RIQA applies the same four-phase verification framework across three primary domains. Select a domain to explore the full scope of coverage and what RIQA specifically verifies.
Domain 01
Preclinical & Biomedical Research
Studies involving experimental or computational data generated from laboratory, genomic, imaging, or in vivo techniques — where quantitative results are statistically summarized to support biological or translational conclusions.
Domain 02
Clinical Research & Trial Analytics
Clinical trial and study analyses where the integrity of outcome reporting has direct implications for regulatory review, publication, and clinical practice — verified by a neutral, non-sponsoring party.
Domain 03
Healthcare Analytics & CMS
Claims data, risk adjustment models, population health analytics, and policy simulations where data-driven conclusions influence reimbursement decisions and regulatory compliance at scale.
The audit pipeline
How a RIQA audit works
01
Data provenance review
Source, lineage, and all transformation steps applied to input data are traced and documented before analysis begins.
→ provenance-trace.pdf
02
Methodology assessment
Statistical methods evaluated for appropriateness. Reconstruction class assigned. Concordance standards pre-specified before outcomes are known.
→ reconstruction-declaration.json
03
Independent reconstruction
Results reconstructed from provided data by RIQA. Findings classified by severity. Sensitivity analysis across alternate assumptions.
→ reconciliation-findings.csv
04
Structured audit report
Signed report issued combining human-readable summary with machine-readable artifacts. Full provenance archive retained.
→ riqa-audit-report.pdf + audit-trail.json
Framework integrity
Why RIQA findings are defensible
Every RIQA audit is built on deterministic, reproducible, version-controlled analytical infrastructure.
Deterministic scoring
The same inputs always produce the same findings. No stochastic or black-box components in the scoring logic.
Version-locked pipelines
Every audit runs against a specific, documented framework version. RIQA-qPCR Livak v1.1 and RIQA-Flow Herzenberg v1.1 are stable for production.
SHA-256 audit provenance
Input files are hashed at intake. The audit trail records every analytical step with cryptographic traceability.
Pre-specified concordance
Reconstruction standards are declared before analysis begins — not after outcomes are known. Findings cannot be reverse-engineered to reach a desired conclusion.
Human-reviewed outputs
AI-assisted detection accelerates pattern recognition. Human expertise governs all finding classification, interpretation, and report issuance.
Catalog-driven findings
Finding language is defined in a versioned catalog, not generated ad hoc. Consistent, legally defensible language across all engagements.
Who RIQA serves
Built for organizations where data integrity is non-negotiable
Preclinical & Biomedical Research
  • Academic research laboratories
  • Core facilities and shared instrumentation
  • Contract research organizations (CROs)
  • Translational and pre-clinical research groups
  • Grant-funded research consortia
  • Journal authors seeking pre-submission assurance
Clinical Research & Trial Analytics
  • Clinical trial sponsors and biotech organizations
  • Biostatistics teams preparing NDA submissions
  • Regulatory affairs and clinical operations groups
  • Trial monitoring and data safety organizations
  • NIH grant applicants with preliminary data
Healthcare Analytics & CMS
  • Healthcare analytics vendors and consultancies
  • CMS contractors and Medicare Advantage plans
  • Payment integrity and fraud detection teams
  • Population health and quality reporting organizations
  • Health system analytics and informatics teams
What an engagement looks like
Typical RIQA engagements
Every engagement follows the same four-phase pipeline. The scope, inputs, and deliverables are defined upfront before analysis begins.
Biomedical
Pre-publication verification
A research group submits raw qPCR Cq values and flow cytometry summaries. RIQA independently reconstructs fold changes, p-values, and gating coherence. A signed assurance report is referenced in the manuscript methods section before journal submission.
Clinical
Sponsor-side SAP concordance review
A pharmaceutical sponsor submits patient-level data and the Statistical Analysis Plan ahead of a regulatory or publication submission. RIQA reconstructs primary and secondary endpoints, evaluates censoring rule implementation against the SAP, and delivers a structured concordance report with sensitivity analyses.
Healthcare analytics
CMS analytics implementation audit
An analytics team submits model documentation, production code, and claims data outputs before a quality reporting or compliance submission. RIQA verifies whether the implemented logic matches the declared specification — evaluating look-back windows, ICD mappings, and ETL transformation consistency.
Research funding
NIH grant reproducibility assessment
A PI submits preliminary data from an R01 application. RIQA verifies that the statistical conclusions in the specific aims are analytically defensible and issues a brief reproducibility certificate referenced in the grant application.
Translational
Biomarker concordance review
A translational research group submits biomarker panel data from a clinical assay. RIQA evaluates endpoint concordance, directional consistency across cohort subgroups, and assay-to-conclusion traceability before submission to a translational journal or regulatory authority.
Institutional
External methodological assurance
A research institution or core facility engages RIQA as an external quality assurance layer for studies generated within their infrastructure — providing structured reproducibility documentation to support submissions or reporting cycles.
"RIQA does not position itself as a gatekeeper or decision-maker, but rather as an enabling layer that enhances trust, supports peer review, and strengthens the overall quality of data-driven work."
— RIQA Mission Statement · riqassure.com
Ready to verify your research?
Start with a conversation — no commitment required.
Framework documentation

The RIQA Assurance Framework

A four-phase, modular methodology for third-party analytical result reconstruction across biomedical, clinical, and healthcare analytics domains. Every audit is pre-specified, version-controlled, and reproducible under defined conditions.

Four-phase pipeline
How every audit proceeds
The same four phases apply across all three domains. Domain-specific verification procedures are applied within each phase.
Phase 01
Data Provenance & Transformation Review
Evaluates the lineage and transformation pathway from source data to reported outputs. In biomedical research this covers normalization procedures, batch correction documentation, and gating strategy definitions. In clinical research it covers endpoint derivation files, censoring rule implementations, and SAP alignment. In healthcare analytics it covers ETL logic, look-back window specifications, and crosswalk file versioning.
Output → provenance-trace.pdf · transformation-log.json
Phase 02
Statistical & Methodological Assessment
Evaluates alignment between the declared analytical methodology and the structure of the underlying data. Each analytical component is assigned to a reconstruction class within the RIQA taxonomy. A Reconstruction Methodology Declaration is issued specifying the concordance standard to be applied in Phase 03. Analytical drift is evaluated where relevant.
Output → reconstruction-declaration.json · methodology-assessment.pdf
Phase 03
Independent Result Reconstruction
Results are independently reconstructed from source data and documented analytical procedures. For exact and near-deterministic methods, numerical reconstruction is performed. For architectural audit methods, specification-to-implementation alignment is evaluated. A sensitivity analysis is separately performed to evaluate conclusion stability under reasonable alternate assumptions.
Output → reconciliation-findings.csv · sensitivity-analysis.json
Phase 04
Structured Assurance Reporting
Generates structured assurance outputs including the signed audit report, findings register classified by severity, integrity score across four dimensions, and machine-readable artifacts. SHA-256 hashes of all input files are recorded in the audit trail. Every finding traces to a specific catalog entry with stable finding code, severity, and remediation guidance.
Output → riqa-audit-report.pdf · audit-trail.json · findings-register.csv
Reconstruction taxonomy
How RIQA classifies analytical methods
Before reconstruction begins, each analytical component is assigned to a class that defines the applicable concordance standard.
Class | Method examples | Concordance standard
Exact | t-test, chi-square, ANOVA, Kaplan-Meier, ΔΔCt reconstruction | Full numerical agreement within rounding tolerance. Any deviation is a finding.
Near-deterministic | Cox PH, logistic regression, ANCOVA, log-rank | Point estimates within defined tolerance; significance and direction must match.
Software-tolerant | MMRM, mixed models, GEE, GLMM | Direction, significance, and ordering verified; numerical differences attributable to optimization algorithms are documented.
Structural verification | Multiple imputation, Bayesian MCMC, adaptive designs | Correct methodological implementation verified; seed and software version documentation required.
Architectural audit | ML pipelines, CMS-HCC models, federated systems | Logic consistency, population construction, covariate derivation, and specification-to-implementation alignment.
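A minimal sketch of how class-specific concordance standards might be applied programmatically. Only two of the five classes are shown, and the class labels, tolerances, and return shape are illustrative assumptions:

```python
def check_concordance(cls: str, reported: float, reconstructed: float,
                      tol: float = 1e-6, rel_tol: float = 0.05):
    """Return (passed, note) under a class-specific concordance standard.

    Illustrative sketch: 'exact' requires agreement within rounding
    tolerance; 'near-deterministic' requires agreement within a relative
    tolerance plus a matching sign (direction of effect).
    """
    if cls == "exact":
        ok = abs(reported - reconstructed) <= tol
        return ok, "full numerical agreement" if ok else "deviation -> finding"
    if cls == "near-deterministic":
        within = abs(reported - reconstructed) <= rel_tol * abs(reported)
        same_direction = (reported > 0) == (reconstructed > 0)
        ok = within and same_direction
        return ok, "within tolerance" if ok else "tolerance exceeded -> finding"
    raise ValueError(f"unknown reconstruction class: {cls}")
```

The point of the class assignment is that the pass/fail rule is fixed before any numbers are compared.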
Integrity scoring
Four dimensions, one weighted score
Every engagement produces an overall integrity score and four per-dimension scores on a 0–100 scale.
30%
Independent result reconstruction
Whether RIQA could reproduce the reported quantitative claims from the raw data.
25%
Statistical methodology
Soundness of test choice, test scale, and multiple-comparisons handling.
25%
Conclusion-to-result alignment
Whether the manuscript's conclusions match what the data support.
20%
Data provenance & transformation
Completeness and traceability of inputs, reagents, and normalization.
Score bands and classifications:
Score range | Classification | Interpretation
95–100 | Verified | Reproducibility fully demonstrated. Informational notes only.
85–94 | Verified with notes | Reproducibility holds; minor or moderate items warrant attention.
70–84 | Methodological concerns | Direction of conclusions holds, but specific issues should be addressed.
< 70 | Material reproducibility concerns | One or more results cannot be reproduced from submitted data; revision required.
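The weighted roll-up and band classification described above can be sketched directly from the published weights and score bands. The dimension keys are illustrative shorthand:

```python
# Dimension weights as published (keys are illustrative shorthand).
WEIGHTS = {
    "reconstruction": 0.30,  # independent result reconstruction
    "methodology":    0.25,  # statistical methodology
    "alignment":      0.25,  # conclusion-to-result alignment
    "provenance":     0.20,  # data provenance & transformation
}

# Score bands, highest floor first.
BANDS = [
    (95, "Verified"),
    (85, "Verified with notes"),
    (70, "Methodological concerns"),
    (0,  "Material reproducibility concerns"),
]

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Weighted sum of the four per-dimension scores (each 0-100)."""
    return sum(WEIGHTS[d] * s for d, s in dimension_scores.items())

def classify(score: float) -> str:
    """Map an overall score to its band classification."""
    for floor, label in BANDS:
        if score >= floor:
            return label
    return BANDS[-1][1]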
Audit provenance flow
End-to-end analytical pipeline
From submission intake to signed report — every step is documented, version-controlled, and reproducible under defined conditions.
RIQA audit pipeline: Submission → Data Intake (SHA-256 hash) → Phase 01 Provenance Review (Lineage · ETL · SAP) → Phase 02 Method Assessment (Pre-specified concordance) → Phase 03 Reconstruction (Findings classified) → Phase 04 Audit Report (Signed + archived)
Artifacts: intake.json → provenance-trace.pdf → recon-declaration.json → findings.csv → audit-report.pdf


Severity framework
Finding classification
All findings are classified using a four-tier severity framework. Deductions are subtractive from a 100-point base per dimension.
Material
−25 pts
Reported endpoint does not reproduce from submitted data, or methodological flaw of substance. Revision required.
Moderate
−9 pts
Methodological concern not changing the direction of effect; or partial reproducibility. Disclosure recommended.
Minor
−4 pts
Reporting or documentation gap with no effect on the result. Better documentation recommended for future submissions.
Informational
0 pts
Best-practice recommendation. No defect identified; surfaced to support continuous improvement.
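The subtractive deduction logic can be sketched as follows, with point values taken from the tiers above. Flooring each dimension at zero is an assumption, not a stated rule:

```python
# Point deductions per finding severity, as published.
DEDUCTIONS = {"material": 25, "moderate": 9, "minor": 4, "informational": 0}

def dimension_score(finding_severities: list[str]) -> int:
    """Apply subtractive deductions to a 100-point base for one dimension.

    Flooring at zero is an assumption of this sketch.
    """
    total = sum(DEDUCTIONS[sev] for sev in finding_severities)
    return max(0, 100 - total)
```

One material plus one moderate finding in a dimension would leave that dimension at 66 under this sketch.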
Research-use and scope of findings
RIQA provides analytical assurance for research and quality-review purposes. RIQA does not function as a regulatory authority, certifying body, or legal compliance organization. RIQA findings are reproducibility and methodological consistency statements derived from submitted data and declared methodology — not determinations of scientific truth, biological validity, or regulatory compliance. Audit reports are intended to support author due diligence, pre-submission review, and internal quality assurance programs.
Anchor standards
Standards alignment
RIQA's provenance requirements are anchored to established community standards. RIQA goes beyond checklist compliance to provide independent quantitative reconstruction.
MIQE 2.0 — Minimum Information for qPCR Experiments
Bustin et al. 2025, Clin Chem 71:634. RIQA-qPCR Livak v1.1 provenance catalog is structured along MIQE 2.0 sections with severity assignments consistent with the essential/desirable distinction.
MIFlowCyt — Minimum Information about a Flow Cytometry Experiment
Lee et al. 2008, Cytometry A 73A:926. RIQA-Flow Herzenberg v1.1 provenance layer is anchored to MIFlowCyt, covering panel, gating tree, compensation, FMO controls, and viability declarations.
ICH E9(R1) — Estimands and Sensitivity Analysis
The RIQA sensitivity analysis framework for clinical trials aligns with ICH E9(R1) estimand principles, evaluating conclusion stability under alternate analytical assumptions.
FAIR Principles — Findable, Accessible, Interoperable, Reusable
RIQA audit outputs include machine-readable JSON artifacts structured for downstream integration, consistent with FAIR data principles for research provenance infrastructure.
Download the RIQA White Paper
Full framework documentation · RIQA-WP-001 · 2026 · Open Access CC BY-NC 4.0
Scope of coverage

Three domains. One consistent standard.

RIQA applies the same four-phase verification framework across three primary research and analytics domains. Each domain has domain-specific verification procedures anchored to established community standards.

Domain 01
Preclinical & Biomedical Research
Quantitative experimental and computational workflows from laboratory, genomic, imaging, and in vivo techniques. Two production-ready validation modules currently available.
qPCR Livak v1.1 (Stable) · Flow Herzenberg v1.1 (Stable) · RNA-seq (Prototype)
Domain 02
Clinical Research & Trial Analytics
Clinical endpoint and survival analyses where integrity of outcome reporting has direct regulatory implications. Independent verification of SAP compliance, ITT construction, and sensitivity analyses.
Survival analysis · SAP reconstruction · Endpoint derivation · Sensitivity analysis
Domain 03
Healthcare Analytics & CMS
Claims data, risk adjustment models, and quality reporting systems where implementation errors carry direct financial and compliance consequences. Architectural audit approach for specification-to-implementation verification.
HCC risk adjustment · Claims analytics · ETL governance · Quality measures
Domain 01
Preclinical & Biomedical Research

Independent validation of quantitative experimental workflows — from single-cell transcriptomics and flow cytometry to RT-qPCR and imaging — where analytical implementation choices directly affect reported biological conclusions.

Production modules
Available validation frameworks
Stable v1.1
RIQA-qPCR Livak
Anchor standard: MIQE 2.0 (Bustin et al. 2025)
Independent reconstruction of fold changes via the 2^−ΔΔCt method from submitted per-replicate Cq values. Evaluates reference gene stability, statistical methodology, and methodological provenance against MIQE 2.0 expectations.
ΔΔCt reconstruction · Reference gene stability · Path A: BH/Holm/Bonferroni · Path B: ANOVA+Tukey, KW+Dunn · MIQE 2.0 provenance
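A minimal sketch of the Livak 2^−ΔΔCt fold-change reconstruction from per-replicate Cq values. It assumes ~100% amplification efficiency, as the standard method does; function and argument names are illustrative:

```python
from statistics import mean

def fold_change_2ddct(target_ctrl: list[float], ref_ctrl: list[float],
                      target_trt: list[float], ref_trt: list[float]) -> float:
    """Reconstruct fold change via the Livak 2^-ΔΔCt method.

    ΔCt = mean Cq(target) - mean Cq(reference) per condition;
    ΔΔCt = ΔCt(treated) - ΔCt(control); fold change = 2^-ΔΔCt.
    Assumes ~100% amplification efficiency for both assays.
    """
    dct_ctrl = mean(target_ctrl) - mean(ref_ctrl)
    dct_trt = mean(target_trt) - mean(ref_trt)
    ddct = dct_trt - dct_ctrl
    return 2.0 ** (-ddct)
```

Because the arithmetic is closed-form, this sits in the Exact reconstruction class: any deviation beyond rounding tolerance is a finding.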
Stable v1.1
RIQA-Flow Herzenberg
Anchor standard: MIFlowCyt (Lee et al. 2008)
Independent reconstruction of frequency endpoints in percentage-point space and MFI endpoints in log₂-ratio space. Unique gating tree arithmetic coherence layer verifies parent-child population consistency across every gate transition.
Frequency endpoints (pp space) · MFI endpoints (log₂ space) · Gating tree coherence · Path A and Path B support · MIFlowCyt provenance
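The gating tree arithmetic coherence check can be sketched as below. The data structures are illustrative, and the sketch assumes that the child gates listed under each parent are mutually exclusive:

```python
def check_gating_coherence(tree: dict[str, list[str]],
                           counts: dict[str, int],
                           tol: float = 0.0) -> list[str]:
    """Verify parent-child arithmetic coherence of a gating tree.

    Two rules, applied at every gate transition:
      1. No child gate may contain more events than its parent.
      2. Mutually exclusive children may not sum past their parent
         (within an optional tolerance fraction).
    `tree` maps parent gate -> list of child gates; `counts` maps
    gate name -> event count. Returns a list of finding strings.
    """
    findings = []
    for parent, children in tree.items():
        for child in children:
            if counts[child] > counts[parent]:
                findings.append(f"{child} exceeds parent {parent}")
        if sum(counts[c] for c in children) > counts[parent] * (1 + tol):
            findings.append(f"children of {parent} sum past parent")
    return findings
```

A violation at any transition indicates that reported population frequencies cannot all be simultaneously correct.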
Advanced prototype
RIQA-RNAseq
In development · Target: v1.0
Differential expression concordance, pathway enrichment sensitivity, batch correction documentation, and normalisation method consistency for bulk RNA-seq workflows. Pilot engagements in scoping.
DEG concordance · Normalisation sensitivity · Pathway FDR analysis · Batch effect documentation
Planned
Future modules
Roadmap items
Western blot densitometry and loading control validation. Histology and image-based quantification. Proteomics and metabolomics normalisation workflows. scRNA-seq cluster stability and UMAP embedding assessment.
Western blot · Imaging / IHC · Proteomics · scRNA-seq
Implementation sensitivity
What RIQA evaluates per technique
Technique | Implementation-sensitive parameters | RIQA reconstruction class
RT-qPCR | Reference gene selection, ΔΔCt calculation, efficiency correction, multiple-testing strategy | Exact (fold change) · Near-deterministic (p-value)
Flow cytometry | Gating strategy, compensation matrix, MFI summary statistic, doublet exclusion | Exact (frequencies) · Near-deterministic (MFI)
Bulk RNA-seq | Alignment tool, normalisation method, DE tool, gene set database version | Software-tolerant · Architectural audit
scRNA-seq | Clustering resolution, random seed, batch correction method, cell filtering thresholds | Structural verification · Architectural audit
Imaging / IHC | Segmentation threshold, ROI definition, reader variability, quantification software version | Near-deterministic · Structural verification
Western blot | Loading control linear range, ROI definition, normalisation reference | Exact · Near-deterministic
Submit a biomedical study for review
Initial scoping consultation at no cost.
Domain 02
Clinical Research & Trial Analytics

Independent verification of clinical endpoint analyses, survival studies, and regulatory submission outputs — where the integrity of analytical implementation directly affects regulatory decisions, publication conclusions, and patient care.

Verification scope
What RIQA evaluates in clinical research
Modern clinical studies involve implementation-sensitive analytical choices that are rarely subjected to external reconstruction. RIQA evaluates whether the analytical logic is sound, the SAP was followed, and conclusions hold under alternate assumptions.
Analytical area | Implementation sensitivity | RIQA verification approach
Survival analysis | Tie-handling method (Efron, Breslow, exact), censoring rules, software platform | Reconstruct HR and p-value; evaluate stability across tie-handling methods
SAP compliance | Deviations between pre-specified SAP and conducted analysis | Independent review of SAP vs analysis; document undisclosed deviations
ITT population construction | Post-randomization exclusions, withdrawal handling, missing data | Reconstruct ITT from enrollment records; document undisclosed exclusions
Endpoint derivation | Adjudication rules, response criteria, derived variable logic | Verify derivation logic against protocol; reconstruct derived variables
Sensitivity analysis | Alternate censoring, alternate populations, alternate assumptions | Independently evaluate conclusion stability under pre-specified alternatives
Multiplicity adjustment | Hierarchical testing, family-wise error control, FDR correction | Verify correction method and scope; recompute adjusted p-values
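As one example of multiplicity recomputation, Holm step-down adjusted p-values can be recomputed in pure Python. This is a sketch of the kind of check performed, not the production implementation:

```python
def holm_adjust(pvalues: list[float]) -> list[float]:
    """Recompute Holm step-down adjusted p-values (family-wise error control).

    Sort p-values ascending; the i-th smallest (0-based rank i) is
    multiplied by (m - i), capped at 1, with a running maximum enforcing
    monotonicity. Results are returned in the original order.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * pvalues[idx])
        running_max = max(running_max, adj)
        adjusted[idx] = running_max
    return adjusted
```

Recomputing adjusted p-values independently catches both the wrong correction method and the wrong correction scope (e.g. a family that silently excludes some comparisons).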
Concordance standard
When exact reconstruction is not expected
A critical principle of the RIQA clinical framework: exact numerical reconstruction is not always achievable or analytically meaningful. MMRM analyses across SAS, R, and Python use different optimization algorithms. RIQA evaluates directional stability and significance preservation, not identity.
Illustrative concordance assessment
Scenario | HR | p-value
Sponsor reported | 0.74 | 0.041
RIQA reconstruction | 0.79 | 0.067
Alternate censoring | 0.83 | 0.110
Breslow tie-handling | 0.81 | 0.089
Illustrative example from RIQA-CS-CLI-001 demonstration study.
RIQA assessment
Treatment direction preserved across all scenarios. Statistical significance was not maintained under four of six sensitivity scenarios evaluated, including the RIQA primary reconstruction. This class of finding — analytically subtle, methodologically significant — is RIQA's primary value proposition in the pharmaceutical domain.
F-102 · Moderate
Significance not maintained under alternate analytical assumptions. Recommendation: strengthen SAP censoring specification and pre-register sensitivity analyses.
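The distinction between directional stability and significance preservation can be sketched as below. The scenario structure and the 0.05 threshold are illustrative assumptions:

```python
def sensitivity_summary(scenarios: dict[str, tuple[float, float]],
                        alpha: float = 0.05) -> dict:
    """Summarize conclusion stability across sensitivity scenarios.

    `scenarios` maps scenario name -> (hazard_ratio, p_value).
    Direction is considered preserved when HR < 1 (benefit) in every
    scenario; significance is maintained only if every scenario stays
    below alpha. Illustrative sketch.
    """
    direction = all(hr < 1.0 for hr, _ in scenarios.values())
    significant = [name for name, (_, p) in scenarios.items() if p < alpha]
    return {
        "direction_preserved": direction,
        "significant_in": significant,
        "significance_maintained": len(significant) == len(scenarios),
    }
```

Run against the illustrative table above, this reports direction preserved in all four scenarios but significance surviving only in the sponsor-reported analysis.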
Engagement models
When to engage RIQA in clinical research
Pre-NDA / pre-submission
Sponsor-engaged assurance
Independent reconstruction of primary and secondary endpoints before regulatory submission. RIQA assurance report included in the submission package.
Pre-publication
Author-engaged assurance
Research groups seeking methodological assurance before submission to high-impact journals. Signed report referenced in methods section.
Grant applications
NIH grant proposal assurance
PIs seeking neutral third-party verification of preliminary data statistical conclusions before R01 or STTR submission. RIQA certificate referenced in application.
Submit a clinical study for review
Pre-NDA, pre-publication, or grant proposal assurance.
Domain 03
Healthcare Analytics & CMS

Architectural audit of claims-based models, risk adjustment systems, and quality reporting pipelines — where specification-to-implementation gaps carry direct financial and regulatory consequences at scale.

Audit approach
Architectural audit, not numerical reconstruction
In healthcare analytics, exact numerical reconstruction is often structurally impossible without access to the same data vintage, crosswalk file version, and processing environment. RIQA verifies whether the implemented analytics match the declared specifications — identifying inconsistencies between documented logic and production implementation, and confirming that population construction and output derivation are consistent with what the method description requires.
Audit area | What RIQA evaluates | Common findings
HCC risk adjustment | ICD-10 code mapping, look-back window implementation, chronic condition flag logic, crosswalk file versioning | Look-back window shorter than specification, wrong crosswalk year applied, missing HCC categories
Claims analytics | Denominator construction, attribution logic, enrollment period definitions, member eligibility sequencing | Population size errors, enrollment boundary misspecification, attribution logic deviating from documentation
Encounter data validation | Encounter record completeness, duplicate detection, claim type classification, revenue code mapping | Duplicate encounters inflating denominators, missing revenue code crosswalk updates
Quality measures | HEDIS measure specification compliance, numerator and denominator derivation, measure version tracking | Specification-to-code gaps, measure version drift across reporting periods
ETL governance | Transformation logic documentation, data lineage tracing, schema version tracking, field-level mapping verification | Undocumented transformations, schema changes without version control, field mappings deviating from specification
HCC drift analysis | Longitudinal consistency of HCC assignments across model updates, software version changes, and crosswalk revisions | Silent output changes across model versions, specification-implementation divergence accumulating over time
Claims anomaly reconstruction | Verification of reported rate calculations, outlier provider identification logic, and risk score distribution consistency | Rate calculation errors, provider threshold misclassification, risk score discrepancies across reporting systems
Concrete anomaly example
What a typical implementation gap looks like
HCC flag miscalculation
Chronic condition look-back mismatch
Specification requires a 24-month look-back window for chronic HCC conditions. Production code implements 12 months. Result: 3,847 members missing qualifying diagnoses. RAF scores systematically understated by an average of 0.14 per affected member.
ETL transformation drift
Revenue code crosswalk version mismatch
Model documentation references FY2022 ICD-10-CM crosswalk. Production pipeline silently retained FY2021 mappings after a scheduled update. 214 procedure codes mapped to incorrect HCC categories across all claims processed in Q1–Q3.
Quality measure reconstruction
Denominator construction error
Reported measure denominator: 18,440 eligible members. RIQA reconstruction from enrollment files: 16,982. Discrepancy of 1,458 members traced to an enrollment gap period not excluded per HEDIS specification. Compliance rate overstated by 2.3 percentage points.
These examples are illustrative. All figures are synthetic and constructed for demonstration purposes.
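The look-back window gap in the first example can be illustrated with a simple membership check. The month-granularity cutoff here is a simplification; a real audit applies the specification's exact day-count rule:

```python
from datetime import date

def qualifying_diagnoses(diagnosis_dates: list[date],
                         service_date: date,
                         lookback_months: int = 24) -> list[date]:
    """Return diagnoses inside the look-back window ending at service_date.

    Cutoff is approximated to the first day of the month that begins the
    window. A 12-month production implementation of a 24-month
    specification silently drops every diagnosis in the 13-24 month range.
    """
    months = service_date.year * 12 + (service_date.month - 1) - lookback_months
    cutoff = date(months // 12, months % 12 + 1, 1)
    return [d for d in diagnosis_dates if cutoff <= d <= service_date]
```

A diagnosis 17 months before the service date qualifies under the 24-month specification but vanishes under the 12-month implementation, which is exactly the class of member-level discrepancy the architectural audit surfaces.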
Illustrative finding
The stakes of implementation gaps
RIQA-CS-CMS-001 · Demonstration study
HCC look-back window mismatch
A Medicare Advantage analytics team submitted a hierarchical logistic regression model for risk-adjusted readmission rate verification. RIQA's architectural audit identified that the HCC chronic condition look-back window had been implemented as 12 months in production code despite a 24-month specification in the model documentation.
8.3%
Beneficiaries affected
3/14
Providers cross threshold
$8.5M
Annual revenue impact
RIQA outcome
F-201 · Material finding
Look-back window implemented as 12 months; specification required 24 months. Resubmission required.
No evidence of misconduct was identified. The error arose from a specification-to-implementation gap between the model documentation and the production code — a class of error that is invisible to any review mechanism that does not externally examine the code against the specification.
Integrity score: 72 / 100 · Methodological concerns
Resubmission completed with corrected covariate derivation and updated ICD-10 crosswalk.
Submit an analytics model for review
Risk adjustment, quality reporting, or claims analytics.
Demonstration series

RIQA Analytical Demonstration Studies

These studies illustrate how the RIQA framework evaluates analytical integrity across three primary domains. They demonstrate the audit methodology, finding classification system, and integrity scoring framework — not client outcomes.

Framework demonstration series. These studies utilize synthetic and reconstructed datasets derived from publicly available analytical patterns and are intended solely to illustrate the RIQA audit methodology and reporting framework. They are not client engagements, real studies, or validated deployments. Initial collaborative pilot engagements with research institution partners are currently in development.
RIQA-CS-BIO-001 · Preclinical & Biomedical Research
Pre-Publication Assurance · Computational Vascular Biology
A research group engaged RIQA for pre-publication methodological assurance of a multi-modal vascular biology study integrating bulk RNA-seq, single-cell transcriptomics, in vivo phenotyping, and image-based lesion quantification. The engagement focused on cluster stability assessment and cell-type specificity verification.
F-301 · scRNA-seq cluster boundary instability · Moderate
F-302 · Cell-type specificity attenuation · Moderate
F-303 · Pathway FDR threshold sensitivity · Moderate
F-304 · SELE fold change sensitivity · Minor
81 / 100 · Verified with notes
Biological direction preserved across all reconstruction workflows.
RIQA-CS-CLI-001 · Clinical Research & Trial Analytics
Pre-NDA Assurance · Phase III Oncology Survival Analysis
A pharmaceutical sponsor engaged RIQA for pre-NDA external analytical assurance of a Phase III randomized trial in advanced NSCLC. Primary endpoint: progression-free survival. RIQA independently reconstructed the primary PFS analysis and evaluated conclusion stability across six sensitivity scenarios.
F-101 · Censoring rule implementation gap · Moderate
F-102 · p-value sensitivity to censoring · Moderate
F-103 · SAP tie-handling underspecified · Minor
82 / 100 · Verified with notes
Treatment direction preserved. Significance not maintained across 4 of 6 sensitivity scenarios.
RIQA-CS-CMS-001 · Healthcare Analytics & CMS
Pre-Submission Verification · Medicare Advantage Risk Adjustment
A healthcare analytics team submitted a hierarchical logistic regression model for risk-adjusted readmission rate verification prior to CMS quality reporting. RIQA verified whether the implemented model matched the declared specification across 142,800 beneficiaries and 14 providers, evaluating look-back window logic, ICD mapping versions, and ETL transformation consistency.
F-201 · HCC look-back window mismatch (12 vs 24 mo) · Material
F-202 · ICD-10 crosswalk version discrepancy · Moderate
F-203 · ETL transformation undocumented · Minor
72 / 100 · Methodological concerns
8.3% of beneficiaries affected. 3 of 14 providers crossed compliance threshold. Resubmission required.
About this series
Methodology over narrative
These demonstration studies were developed to show how the RIQA framework reasons through complex analytical integrity problems — not to simulate a client portfolio. The value is in the methodology: the finding classification logic, the concordance vocabulary, the integrity scoring framework, and the structured audit report format. Each of these elements is identical to what a real engagement would produce.
RIQA's first real pilot engagements are currently in development with collaborating research institution partners. As those engagements are completed, this page will be updated with real-world findings (subject to confidentiality agreements). If you are interested in participating in a pilot engagement, contact us through the form below.
Interested in a pilot engagement?
Real data. Real findings. No commitment required at this stage.
Get in touch

Submit a study or start a conversation

Initial scoping consultations are conducted at no cost. Use the form below to describe your study or engagement interest and we will respond within two business days.

By submitting this form you agree to be contacted by RIQA LLC. Your information will not be shared with third parties. All engagement discussions are held in strict confidence.
Typical submission materials
· Raw datasets or per-sample summary outputs
· Statistical Analysis Plan or model specification
· Reported values (fold changes, p-values, frequencies)
· Metadata and experimental design documentation
· Analysis code or scripts if available
· Supplementary figures, tables, or appendices
· Software and version information used in analysis
· Manuscript draft or methods section (if available)
Not all materials are required at first contact. RIQA will confirm what is needed during the initial scoping conversation.
Contact
Dr. Manish Mittal
Founder, RIQA LLC
Adjunct Instructor, Dept. of Biomedical and Health Information Sciences
University of Illinois Chicago
Email
contact@riqassure.com
For submissions: submit@riqassure.com
Location
Chicago, Illinois
RIQA LLC · riqassure.com
Conflict-free · Independent · CC BY-NC 4.0
Response time
Within 2 business days
Initial scoping consultations at no cost. No commitment required.
STTR collaboration
Research institution partners
RIQA is actively seeking research institution co-investigators for an STTR grant application in biomedical computational reproducibility. Select "STTR collaboration inquiry" above.