Common Research Weakness Enumeration (CRWE)

The Common Research Weakness Enumeration (CRWE) is modeled on the Common Weakness Enumeration (CWE) framework used to categorize hardware and software weaknesses; an initial high-level taxonomy is outlined below. In addition, a weakness scoring formula similar to the Common Vulnerability Scoring System (CVSS) used in cybersecurity could help inform both bounties and overall paper reliability.
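
As a rough illustration, here is a minimal Python sketch of what such a scoring formula might look like, loosely modeled on the CVSS base-score idea. The metric names, ranges, and weights are hypothetical assumptions chosen for illustration, not part of any published standard.

```python
# Illustrative sketch of a CRWE "weakness score", loosely modeled on the
# CVSS base-score idea. All metric names, ranges, and weights below are
# hypothetical assumptions.

def crwe_score(severity: float, detectability: float, scope: float) -> float:
    """Combine three metrics in [0, 1] into a 0-10 score.

    severity      -- how much the weakness undermines the paper's conclusions
    detectability -- how hard the weakness is to spot from the published record
    scope         -- how much of the paper's results the weakness affects
    """
    for metric in (severity, detectability, scope):
        if not 0.0 <= metric <= 1.0:
            raise ValueError("all metrics must be in [0, 1]")
    # A weighted geometric mean keeps the score low unless every factor
    # is high; the result is rescaled to the familiar 0-10 range.
    raw = (severity ** 0.5) * (detectability ** 0.2) * (scope ** 0.3)
    return round(10 * raw, 1)

# Example: a severe, hard-to-detect weakness affecting most results.
print(crwe_score(severity=0.9, detectability=0.8, scope=0.7))  # ≈ 8.2
```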

1. Data Quality

1.1 Insufficient Sample Size
  • 1.1.1 Below Minimum Sample Size for Statistical Power

    Power-analysis tools (e.g., G*Power) or standard tables (e.g., Cohen's) can be used to determine the required sample size; see the sketch after this list.

  • 1.1.2 Inadequate Justification for Sample Size

    • 1.1.2.1 Absence of Power Analysis

    • 1.1.2.2 Reliance on Arbitrary Benchmarks
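
As a concrete counterpart to the G*Power tooling mentioned under 1.1.1, here is a minimal sketch of an a-priori power analysis in Python using statsmodels; the effect size, alpha, and power values are assumed inputs for illustration, not recommendations.

```python
# Required sample size per group for a two-sample t-test,
# given an assumed effect size, significance level, and power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # Cohen's d (a "medium" effect by Cohen's convention)
    alpha=0.05,       # two-sided significance level
    power=0.8,        # desired probability of detecting the effect
)
print(f"required n per group: {n_per_group:.0f}")  # ~64
```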

1.2 Inappropriate Sampling Methods

  • 1.2.1 Non-random Sampling without Justification

  • 1.2.2 Sampling Bias Not Addressed

  • 1.2.3 Lack of Stratification for Heterogeneous Populations

1.3 Non-representative Sample
  • 1.3.1 Mismatch Between Sample and Population

    • 1.3.1.1 Demographic Discrepancy

    • 1.3.1.2 Clinical or Behavioral Discrepancy

  • 1.3.2 Restrictive Exclusion Criteria

    • 1.3.2.1 Excluding Common Comorbid Conditions

  • 1.3.3 Attrition Bias

    • 1.3.3.1 High Differential Attrition Rates

1.4 Missing Data
  • 1.4.1 High Levels of Missing Data

  • 1.4.2 Inadequate Handling of Missing Data

    • 1.4.2.1 Use of Listwise Deletion without Rationale (see the sketch after this list)

  • 1.4.3 Missing Data Analysis Not Reported

    • 1.4.3.1 Failure to Report Impact on Study Outcomes
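
For 1.4.2.1, here is a minimal pandas sketch contrasting listwise deletion with a simple mean-imputation alternative; the toy data are an illustrative assumption, and neither strategy is endorsed here. The point is that either choice needs a stated rationale.

```python
import pandas as pd

# Toy data with missing values (illustrative assumption).
df = pd.DataFrame({
    "age":   [34, 51, None, 29, 46],
    "score": [7.2, None, 5.9, 8.1, 6.4],
})

# Listwise deletion: every row with any missing value is dropped,
# shrinking the sample and potentially biasing it when data are not
# missing completely at random.
listwise = df.dropna()

# Simple mean imputation keeps all rows but understates variance.
imputed = df.fillna(df.mean())

print(f"rows kept by listwise deletion: {len(listwise)} of {len(df)}")
```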

1.5 Outliers
  • 1.5.1 Outlier Identification and Rationale Lacking

    • 1.5.1.1 Undefined Criteria for Outlier Exclusion

  • 1.5.2 Inconsistent Handling of Outliers

    • 1.5.2.1 Variable Exclusion Criteria across Analyses

  • 1.5.3 Impact of Outliers Not Evaluated

    • 1.5.3.1 Absence of Sensitivity Analysis
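
For 1.5.3.1, a minimal sensitivity analysis reports the estimate with and without the flagged points instead of silently dropping them; the data and the 3×IQR fence below are illustrative assumptions.

```python
import numpy as np

values = np.array([4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 9.7])  # 9.7 is suspect

# Flag points outside a 3*IQR fence (one common, but not universal, rule).
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
mask = (values >= q1 - 3 * iqr) & (values <= q3 + 3 * iqr)

print(f"mean with all points:  {values.mean():.2f}")
print(f"mean without outliers: {values[mask].mean():.2f}")
print(f"points excluded:       {np.count_nonzero(~mask)}")
```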

1.6 Measurement Errors
  • 1.6.1 Inadequate Validation of Measurement Instruments

    Lack of prior studies establishing the instrument's reliability and validity for the target population and research question.

  • 1.6.2 Measurement Bias Not Addressed

    • 1.6.2.1 Non-blinded Outcome Assessment

  • 1.6.3 Variability in Measurement Not Assessed

    • 1.6.3.1 No Report on Inter-rater or Intra-rater Reliability
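
For 1.6.3.1, here is a minimal sketch of an inter-rater reliability check using Cohen's kappa from scikit-learn; the two raters' labels are illustrative assumptions.

```python
from sklearn.metrics import cohen_kappa_score

# Binary codings of the same ten items by two independent raters.
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Kappa corrects raw agreement for agreement expected by chance:
# 1.0 is perfect agreement, 0 is chance-level.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")
```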

1.7 Additional Considerations in Data Quality
  • 1.7.1 Biased Data Collection

    Addressing both selection bias and measurement bias at the collection phase. Selection bias occurs when the sampling method favors certain participants over others, while measurement bias occurs when the instrument used systematically over- or underestimates the outcome of interest.

  • 1.7.2 Inappropriate Data Normalization

    Highlighting incorrect adjustments made to data before analysis. This can involve transforming data in a way that is not justified or that introduces artifacts.

  • 1.7.3 Data Source Transparency

    Ensuring clarity about the origins of the data to maintain its reliability and validity. This includes specifying the source of the data, any collection procedures used, and any potential limitations of the data source.

2. Methodology

2.1 Inappropriate Causal Inference Framework
  • 2.1.1 Mismatch Between Objectives and Causal Framework

  • 2.1.2 Absence of Comparative Analysis of Design Options

  • 2.1.3 Failure to Address Design Limitations

2.2 Selection Bias
  • 2.2.1 Inadequate Criteria for Participant Selection

  • 2.2.2 Non-transparent Participant Recruitment

  • 2.2.3 Exclusion of Relevant Populations

2.3 Confounding, Mediating, and Colliding Variables in Causal Inference
  • 2.3.1 Inadequate Identification of Confounders, Mediators and Colliders

  • 2.3.2 Lack of Control for Identified Confounders

  • 2.3.3 Improper Use of Statistical Methods to Address Confounding

  • 2.3.4 Overuse of Controls (e.g., Adjusting for Mediators or Colliders)
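
For 2.3.2 and 2.3.4, here is a minimal sketch of adjusting for a measured confounder with ordinary least squares; the variables and data-generating process are illustrative assumptions. Conditioning on a mediator or collider in the same mechanical way would introduce bias rather than remove it, which is the over-controlling problem flagged in 2.3.4.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
confounder = rng.normal(size=n)              # affects both variables below
treatment = confounder + rng.normal(size=n)  # exposure of interest
outcome = 0.5 * treatment + 2.0 * confounder + rng.normal(size=n)

# Unadjusted model: the confounder inflates the apparent effect.
naive = sm.OLS(outcome, sm.add_constant(treatment)).fit()

# Adjusted model: including the confounder recovers roughly the true 0.5.
X = sm.add_constant(np.column_stack([treatment, confounder]))
adjusted = sm.OLS(outcome, X).fit()

print(f"naive estimate:    {naive.params[1]:.2f}")     # ~1.5, biased
print(f"adjusted estimate: {adjusted.params[1]:.2f}")  # ~0.5
```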

2.4 Randomized Controls and Causal Model Variable Measurement
  • 2.4.1 Absence of Control Group in Experimental Studies

  • 2.4.2 Use of Inappropriate Control Interventions

2.5 Inadequate Blinding
  • 2.5.1 Non-Blinded Study Design When Blinding is Feasible

  • 2.5.2 Partial Blinding with Insufficient Justification

  • 2.5.3 Lack of Information on Blinding Effectiveness

3. Statistical Analysis

3.1 Incorrect Use of Statistical Tests
  • 3.1.1 Application of Parametric Tests on Non-normally Distributed Data (see the sketch after this list)

  • 3.1.2 Misapplication of Tests for Independent Samples to Paired Data

  • 3.1.3 Use of Two-tailed Tests for One-directional Hypotheses
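
For 3.1.1, here is a minimal sketch of checking the normality assumption before choosing between a parametric and a rank-based test; the samples are illustrative assumptions, and since formal pre-testing for normality is itself debated, this shows only the mechanics of the check.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(size=40)            # skewed, non-normal
group_b = rng.lognormal(mean=0.3, size=40)

# Shapiro-Wilk: small p-value suggests departure from normality.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    result = stats.ttest_ind(group_a, group_b)
else:
    # The rank-based test avoids the normality assumption.
    result = stats.mannwhitneyu(group_a, group_b)
print(result)
```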

3.2 Misinterpretation of p-values
  • 3.2.1 Equating Statistical Significance with Practical Significance

  • 3.2.2 Over-reliance on p-values for Research Conclusions

  • 3.2.3 Misinterpretation of Non-significance as Evidence of No Effect

3.3 Inadequate Effect Size Reporting
  • 3.3.1 Failure to Report Effect Sizes and Confidence Intervals (see the sketch after this list)

  • 3.3.2 Misleading Representation of Effect Sizes
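
For 3.3.1, here is a minimal sketch of reporting an effect size with a confidence interval rather than a bare p-value: Cohen's d with a large-sample approximate CI. The data are illustrative assumptions.

```python
import numpy as np

def cohens_d_with_ci(a, b, z=1.96):
    """Cohen's d for two independent samples with an approximate 95% CI."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * np.var(a, ddof=1) +
                         (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    d = (np.mean(a) - np.mean(b)) / pooled_sd
    # Large-sample standard error of d (Hedges & Olkin approximation).
    se = np.sqrt((na + nb) / (na * nb) + d**2 / (2 * (na + nb)))
    return d, (d - z * se, d + z * se)

rng = np.random.default_rng(2)
a = rng.normal(0.4, 1.0, size=60)
b = rng.normal(0.0, 1.0, size=60)
d, (lo, hi) = cohens_d_with_ci(a, b)
print(f"Cohen's d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```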

3.4 Multiple Comparisons Without Correction
  • 3.4.1 Lack of Correction for Multiple Testing (see the sketch after this list)

  • 3.4.2 Inappropriate Application of Multiple Testing Corrections
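
For 3.4.1, here is a minimal sketch of applying multiple-testing corrections with statsmodels, contrasting the conservative Bonferroni adjustment with Benjamini-Hochberg FDR control on the same hypothetical p-values.

```python
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.008, 0.020, 0.041, 0.049, 0.210]  # hypothetical

for method in ("bonferroni", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [f"{p:.3f}" for p in p_adj], reject.tolist())
```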

3.5 Overfitting
  • 3.5.1 Overfitting in Predictive Modeling

  • 3.5.2 Lack of Validation for Model Generalizability
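
For 3.5.1 and 3.5.2, here is a minimal sketch of using cross-validation to expose overfitting: an unconstrained decision tree fits its training data almost perfectly while its cross-validated performance is far lower. The model choice and synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))
y = X[:, 0] + rng.normal(scale=0.5, size=100)  # only feature 0 matters

model = DecisionTreeRegressor()  # fully grown tree, prone to memorizing
train_r2 = model.fit(X, y).score(X, y)
cv_r2 = cross_val_score(model, X, y, cv=5).mean()

print(f"training R^2:        {train_r2:.2f}")  # ~1.00 (memorized)
print(f"cross-validated R^2: {cv_r2:.2f}")     # substantially lower
```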

3.6 Interpretation
  • 3.6.1 Unsupported Conclusions

  • 3.6.2 Overgeneralization of Results

  • 3.6.3 Exaggeration of Results

  • 3.6.4 Failing to Account for Limitations

  • 3.6.5 Misinterpretation of Confidence Intervals

  • 3.6.6 Misinterpretation of Statistical Power

4. Unsupported Conclusions

4.1 Overinterpretation of Results
  • 4.1.1 Claiming Causality from Correlational Data

  • 4.1.2 Extrapolating Beyond the Data

  • 4.1.3 Ignoring Statistical Significance

4.2 Overgeneralization
  • 4.2.1 Ignoring Sample Diversity

  • 4.2.2 Inferences to Unstudied Phenomena

4.3 Exaggeration of Results
  • 4.3.1 Overstating Effect Sizes

  • 4.3.2 Highlighting Positive Findings While Ignoring Null Results

  • 4.3.3 Claiming Novelty Unjustifiably

4.4 Ignoring Alternative Explanations
  • 4.4.1 Disregarding Existing Literature

  • 4.4.2 Over-reliance on a Single Interpretation

4.5 Failing to Account for Limitations
  • 4.5.1 Incomplete Disclosure of Study Limitations

  • 4.5.2 Not Acknowledging Data Quality Issues

  • 4.5.3 Lack of Consideration for Methodological Constraints

5. Ethical Issues

5.1 Conflicts of Interest
  • 5.1.1 Undisclosed Financial Conflicts of Interest

  • 5.1.2 Non-financial Conflicts Not Addressed

  • 5.1.3 Inadequate Management of Declared Conflicts

5.2 Plagiarism
  • 5.2.1 Direct Plagiarism

  • 5.2.2 Self-plagiarism

  • 5.2.3 Paraphrasing Without Citation

5.3 Data Fabrication or Falsification
  • 5.3.1 Fabrication of Data

  • 5.3.2 Falsification of Data

  • 5.3.3 Manipulation of Images or Figures

5.4 Ethical Approval and Animal Welfare
  • 5.4.1 Lack of Ethical Approval for Research

  • 5.4.2 Inadequate Attention to Animal Welfare

  • 5.4.3 Use of Illegally Obtained Samples

6. Reporting and Presentation

6.1 Incomplete or Unclear Reporting
  • 6.1.1 Lack of Specificity in Methods

  • 6.1.2 Absence of Statistical Analysis Details

  • 6.1.3 Results Not Aligned with Objectives or Hypotheses

6.2 Inaccurate or Misleading Visuals
  • 6.2.1 Misrepresentation in Graphical Data

  • 6.2.2 Lack of Necessary Labels or Scale in Visuals

  • 6.2.3 Overuse of Complex Visuals

6.3 Lack of Reproducibility
  • 6.3.1 No Access to Underlying Data

  • 6.3.2 Software or Code Unavailable

  • 6.3.3 Inadequate Description of Experimental Setup

6.4 Inadequate Citation of Sources
  • 6.4.1 Omission of Relevant Literature

  • 6.4.2 Incorrect Attribution of Sources

  • 6.4.3 Reliance on Non-peer-reviewed or Unreliable Sources

7. Reproducibility Issues

7.1 Inadequate Description of Methodology
  • 7.1.1 Omission of Key Methodological Details

  • 7.1.2 Vague Description of Equipment or Materials

  • 7.1.3 Insufficient Explanation of Data Collection Processes

7.2 Insufficient Data Sharing
  • 7.2.1 Data Availability Not Specified

  • 7.2.2 Restricted Access to Data Without Justification

  • 7.2.3 Incomplete Data Sets Provided

7.3 Lack of Code Sharing or Open-Source Software
  • 7.3.1 Proprietary Code Without Alternatives

  • 7.3.2 Absence of Code for Data Analysis

  • 7.3.3 No Version Control Information

7.4 Unavailability of Necessary Resources or Materials
  • 7.4.1 Unique Materials Without Deposition

  • 7.4.2 Lack of Information on Obtaining Resources

  • 7.4.3 No Shared Protocols for Custom Procedures

7.5 Pre-registration Issues
  • 7.5.1 Absence of Pre-registration

  • 7.5.2 Discrepancies Between Pre-registration and Reporting

  • 7.5.3 Selective Reporting Not Addressed

7.6 Failure to Adhere to Pre-registered Plans
  • 7.6.1 Unjustified Changes to Methodology

  • 7.6.2 Omission of Pre-registered Outcomes

  • 7.6.3 Presenting Post Hoc Analyses as Pre-registered
