Common Research Weakness Enumeration (CRWE)

The Common Research Weakness Enumeration (CRWE) is a framework for categorizing weaknesses in research, analogous to the Common Weakness Enumeration (CWE) used to categorize hardware and software weaknesses. In addition, a weakness scoring formula similar to the Common Vulnerability Scoring System (CVSS) used in cybersecurity could help inform both bounties and overall paper reliability.
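
As a rough illustration, here is a minimal sketch of what such a scoring formula might look like. The factor names, weights, and 0-10 range are assumptions made purely for demonstration; no official CRWE formula exists.

```python
# Illustrative sketch of a CVSS-style score for a research weakness.
# All factor names, weights, and the 0-10 range are assumptions for
# demonstration only; CRWE does not define an official formula.

def crwe_score(severity: float, detectability: float, scope: float) -> float:
    """Combine three factors in [0, 1] into a 0-10 score.

    severity:      how strongly the weakness undermines the conclusions
    detectability: how easily the weakness can be demonstrated by a reviewer
    scope:         how much of the paper's results the weakness affects
    """
    for name, value in (("severity", severity),
                        ("detectability", detectability),
                        ("scope", scope)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must be in [0, 1], got {value}")
    # Weighted geometric mean: the score stays low unless all factors are high.
    combined = severity ** 0.5 * detectability ** 0.2 * scope ** 0.3
    return round(10 * combined, 1)

# Example: a severe, easily demonstrated weakness affecting most results.
print(crwe_score(severity=0.9, detectability=0.8, scope=0.7))  # 8.2
```

A weighted geometric mean is used here so that the overall score stays low unless all factors are high; an additive form, as in CVSS, would be an equally plausible choice.

An initial high-level taxonomy could be as follows: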

1. Data Quality

1.1 Insufficient Sample Size
  • 1.1.1 Below Minimum Sample Size for Statistical Power

    Power-analysis tools (e.g., G*Power) or standard tables (e.g., Cohen's) can be used to determine the required sample size; see the power-analysis sketch after this list.

  • 1.1.2 Inadequate Justification for Sample Size

    • 1.1.2.1 Absence of Power Analysis

    • 1.1.2.2 Reliance on Arbitrary Benchmarks
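
A minimal sketch of the power analysis referenced above, using statsmodels; the effect size, alpha, and power values are conventional illustrative choices (Cohen's "medium" d, 5% two-sided alpha, 80% power), not recommendations.

```python
# Minimal sketch: required sample size per group for an independent-samples
# t-test, computed with statsmodels (the same calculation G*Power performs).
from math import ceil

from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # Cohen's d; "medium" by Cohen's conventional benchmarks
    alpha=0.05,       # two-sided significance level
    power=0.80,       # desired statistical power
)
print(f"Required sample size per group: {ceil(n_per_group)}")  # 64
```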

1.2 Inappropriate Sampling Methods

  • 1.2.1 Non-random Sampling without Justification

  • 1.2.2 Sampling Bias Not Addressed

  • 1.2.3 Lack of Stratification for Heterogeneous Populations

1.3 Non-representative Sample
  • 1.3.1 Mismatch Between Sample and Population

    • 1.3.1.1 Demographic Discrepancy

    • 1.3.1.2 Clinical or Behavioral Discrepancy

  • 1.3.2 Restrictive Exclusion Criteria

    • 1.3.2.1 Excluding Common Comorbid Conditions

  • 1.3.3 Attrition Bias

    • 1.3.3.1 High Differential Attrition Rates

1.4 Missing Data
  • 1.4.1 High Levels of Missing Data

  • 1.4.2 Inadequate Handling of Missing Data (see the sketch after this list)

    • 1.4.2.1 Use of Listwise Deletion without Rationale

  • 1.4.3 Missing Data Analysis Not Reported

    • 1.4.3.1 Failure to Report Impact on Study Outcomes
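
A minimal sketch of the reporting and handling steps in 1.4, using pandas; the toy data and the mean-imputation alternative are illustrative assumptions, not endorsed defaults.

```python
# Minimal sketch: quantify missingness and contrast listwise deletion with a
# simple imputation strategy. The toy data are illustrative assumptions.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "outcome":   [2.1, 3.4, np.nan, 4.0, 2.8, np.nan],
    "predictor": [1.0, np.nan, 2.2, 3.1, 1.8, 2.5],
})

# 1.4.3: report the extent of missing data per variable.
print(df.isna().mean())  # fraction missing per column

# 1.4.2.1: listwise deletion silently drops rows; report how many.
complete_cases = df.dropna()
print(f"Rows retained after listwise deletion: {len(complete_cases)}/{len(df)}")

# One alternative, assumed here purely for illustration: mean imputation.
imputed = df.fillna(df.mean())
```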

1.5 Outliers
  • 1.5.1 Outlier Identification and Rationale Lacking

    • 1.5.1.1 Undefined Criteria for Outlier Exclusion

  • 1.5.2 Inconsistent Handling of Outliers

    • 1.5.2.1 Variable Exclusion Criteria across Analyses

  • 1.5.3 Impact of Outliers Not Evaluated

    • 1.5.3.1 Absence of Sensitivity Analysis (see the sketch below)
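
A minimal sketch of the sensitivity analysis named in 1.5.3.1: report the estimate with and without flagged outliers under a criterion stated in advance. The data and the 3×IQR rule are illustrative assumptions.

```python
# Minimal sketch of a sensitivity analysis (1.5.3.1): report the estimate
# with and without flagged outliers. The data and the 3*IQR flagging rule
# are illustrative assumptions; the rule should be stated in advance.
import numpy as np

values = np.array([4.8, 5.1, 5.3, 4.9, 5.0, 5.2, 11.7])  # last point suspect

q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
kept = (values >= q1 - 3 * iqr) & (values <= q3 + 3 * iqr)

print(f"Mean with all points:  {values.mean():.2f}")       # 6.00
print(f"Mean without outliers: {values[kept].mean():.2f}")  # 5.05
print(f"Points flagged: {(~kept).sum()}")
```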

1.6 Measurement Errors
  • 1.6.1 Inadequate Validation of Measurement Instruments

    Lack of prior studies establishing the instrument's reliability and validity for the target population and research question.

  • 1.6.2 Measurement Bias Not Addressed

    • 1.6.2.1 Non-blinded Outcome Assessment

  • 1.6.3 Variability in Measurement Not Assessed

    • 1.6.3.1 No Report on Inter-rater or Intra-rater Reliability (see the sketch after this list)
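
A minimal sketch for 1.6.3.1: quantifying inter-rater agreement with Cohen's kappa via scikit-learn. The two raters' binary codings are illustrative assumptions.

```python
# Minimal sketch for 1.6.3.1: quantify inter-rater agreement with Cohen's
# kappa. The two raters' binary codings below are illustrative assumptions.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance
```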

1.7 Additional Considerations in Data Quality
  • 1.7.1 Biased Data Collection

    Addressing both selection bias and measurement bias at the collection phase. Selection bias occurs when the sampling method favors certain participants over others, while measurement bias occurs when the instrument used systematically over- or underestimates the outcome of interest.

  • 1.7.2 Inappropriate Data Normalization

    Highlighting incorrect adjustments made to data before analysis, such as transformations that are not justified or that introduce artifacts; see the sketch after this list.

  • 1.7.3 Data Source Transparency

    Ensuring clarity about the origins of the data to maintain its reliability and validity. This includes specifying the source of the data, any collection procedures used, and any potential limitations of the data source.
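
A minimal sketch of one common normalization error under 1.7.2: fitting a scaler on the combined data leaks test-set statistics into training. The split and scaler choice are illustrative assumptions.

```python
# Minimal sketch of one common normalization error (1.7.2): fitting the
# scaler on the full data set leaks test-set statistics into training.
# The split and scaler choice are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(0).normal(size=(100, 3))
X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

# Wrong: normalization statistics computed over train AND test data.
leaky_scaler = StandardScaler().fit(X)

# Right: fit on training data only, then apply to both splits.
scaler = StandardScaler().fit(X_train)
X_train_z = scaler.transform(X_train)
X_test_z = scaler.transform(X_test)
```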

2. Methodology

2.1 Inappropriate Causal Inference Framework
  • 2.1.1 Mismatch Between Objectives and Causal Framework

  • 2.1.2 Absence of Comparative Analysis of Design Options

  • 2.1.3 Failure to Address Design Limitations

2.2 Selection Bias
  • 2.2.1 Inadequate Criteria for Participant Selection

  • 2.2.2 Non-transparent Participant Recruitment

  • 2.2.3 Exclusion of Relevant Populations

2.3 Confounding, Mediating, and Colliding Variables in Causal Inference
  • 2.3.1 Inadequate Identification of Confounders, Mediators, and Colliders

  • 2.3.2 Lack of Control for Identified Confounders (see the sketch after this list)

  • 2.3.3 Improper Use of Statistical Methods to Address Confounding

  • 2.3.4 Overuse of Controlling (e.g., adjusting for mediators or colliders)
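
A minimal sketch for 2.3.2, on simulated data: a confounder drives both the exposure and the outcome, so a naive regression reports a spurious effect that disappears once the confounder is included.

```python
# Minimal sketch for 2.3.2, on simulated data: the confounder z drives both
# the exposure x and the outcome y. The true effect of x on y is zero, yet
# the naive model reports one; adjusting for z removes the spurious effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5_000
z = rng.normal(size=n)            # confounder
x = 0.8 * z + rng.normal(size=n)  # exposure, driven by z
y = 1.5 * z + rng.normal(size=n)  # outcome, driven by z (not by x)

naive = sm.OLS(y, sm.add_constant(x)).fit()
adjusted = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()

print(f"Naive effect of x:    {naive.params[1]:+.2f}")     # spurious, ~+0.73
print(f"Adjusted effect of x: {adjusted.params[1]:+.2f}")  # ~0
```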

2.4 Randomized Controls / Causal Model Variable Measurement and Analysis
  • 2.4.1 Absence of Control Group in Experimental Studies

  • 2.4.2 Use of Inappropriate Control Interventions

2.5 Inadequate Blinding
  • 2.5.1 Non-Blinded Study Design When Blinding is Feasible

  • 2.5.2 Partial Blinding with Insufficient Justification

  • 2.5.3 Lack of Information on Blinding Effectiveness

3. Statistical Analysis

3.1 Incorrect Use of Statistical Tests
  • 3.1.1 Application of Parametric Tests on Non-normally Distributed Data (see the sketch after this list)

  • 3.1.2 Misapplication of Tests for Independent Samples to Paired Data

  • 3.1.3 Use of Two-tailed Tests for One-directional Hypotheses
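
A minimal sketch for 3.1.1: check distributional assumptions before choosing between a parametric test and its nonparametric counterpart. The data and the 0.05 threshold are illustrative assumptions; a formal analysis plan should fix the test in advance.

```python
# Minimal sketch for 3.1.1: check distributional assumptions before choosing
# between a parametric t-test and its nonparametric counterpart. The data and
# the 0.05 threshold are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.lognormal(size=40)            # skewed, non-normal
group_b = rng.lognormal(mean=0.3, size=40)

# Shapiro-Wilk normality check on each group (index 1 is the p-value).
if min(stats.shapiro(group_a)[1], stats.shapiro(group_b)[1]) < 0.05:
    result = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
else:
    result = stats.ttest_ind(group_a, group_b)
print(result)
```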

3.2 Misinterpretation of p-values
  • 3.2.1 Equating Statistical Significance with Practical Significance

  • 3.2.2 Over-reliance on p-values for Research Conclusions

  • 3.2.3 Misinterpretation of Non-significance as Evidence of No Effect

3.3 Inadequate Effect Size Reporting
  • 3.3.1 Failure to Report Effect Sizes and Confidence Intervals (see the sketch after this list)

  • 3.3.2 Misleading Representation of Effect Sizes
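
A minimal sketch for 3.3.1: report an effect size and a confidence interval alongside the p-value. The data are simulated, and the confidence_interval method on the t-test result requires SciPy 1.10 or later.

```python
# Minimal sketch for 3.3.1: report an effect size (Cohen's d) and a 95%
# confidence interval alongside the p-value. The data are simulated;
# confidence_interval() on the t-test result requires SciPy >= 1.10.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
treated = rng.normal(loc=5.5, scale=1.0, size=50)
control = rng.normal(loc=5.0, scale=1.0, size=50)

# Cohen's d with a pooled standard deviation.
pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd

result = stats.ttest_ind(treated, control)
ci = result.confidence_interval(confidence_level=0.95)
print(f"Cohen's d = {d:.2f}, 95% CI for the mean difference = "
      f"[{ci.low:.2f}, {ci.high:.2f}], p = {result.pvalue:.3f}")
```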

3.4 Multiple Comparisons Without Correction
  • 3.4.1 Lack of Correction for Multiple Testing (see the sketch after this list)

  • 3.4.2 Inappropriate Application of Multiple Testing Corrections
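
A minimal sketch for 3.4.1, using the Benjamini-Hochberg procedure from statsmodels; the raw p-values are illustrative assumptions.

```python
# Minimal sketch for 3.4.1: adjust a family of p-values with the
# Benjamini-Hochberg false discovery rate procedure. The raw p-values
# below are illustrative assumptions.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

for p_raw, p_adj, significant in zip(p_values, p_adjusted, reject):
    print(f"raw p = {p_raw:.3f}  adjusted p = {p_adj:.3f}  keep: {significant}")
```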

3.5 Overfitting
  • 3.5.1 Overfitting in Predictive Modeling

  • 3.5.2 Lack of Validation for Model Generalizability (see the sketch below)
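
A minimal sketch for 3.5.2: estimate generalizability with k-fold cross-validation rather than training-set fit. The model choice and simulated data are illustrative assumptions.

```python
# Minimal sketch for 3.5.2: estimate out-of-sample performance with k-fold
# cross-validation instead of reporting training-set fit. The model choice
# and simulated data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = 2.0 * X[:, 0] + rng.normal(size=200)  # only the first feature matters

scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```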

3.6 Interpretation
  • 3.6.1 Unsupported Conclusions

  • 3.6.2 Overgeneralization of Results

  • 3.6.3 Exaggeration of Results

  • 3.6.4 Failing to Account for Limitations

  • 3.6.5 Misinterpretation of Confidence Intervals

  • 3.6.6 Misinterpretation of Statistical Power

4. Unsupported Conclusions

4.1 Overinterpretation of Results
  • 4.1.1 Claiming Causality from Correlational Data

  • 4.1.2 Extrapolating Beyond the Data

  • 4.1.3 Ignoring Statistical Significance

4.2 Overgeneralization
  • 4.2.1 Ignoring Sample Diversity

  • 4.2.2 Inferences to Unstudied Phenomena

4.3 Exaggeration of Results
  • 4.3.1 Overstating Effect Sizes

  • 4.3.2 Highlighting Positive Findings While Ignoring Null Results

  • 4.3.3 Claiming Novelty Unjustifiably

4.4 Ignoring Alternative Explanations
  • 4.4.1 Disregarding Existing Literature

  • 4.4.2 Over-reliance on a Single Interpretation

4.5 Failing to Account for Limitations
  • 4.5.1 Incomplete Disclosure of Study Limitations

  • 4.5.2 Not Acknowledging Data Quality Issues

  • 4.5.3 Lack of Consideration for Methodological Constraints

5. Ethical Issues

5.1 Conflicts of Interest
  • 5.1.1 Undisclosed Financial Conflicts of Interest

  • 5.1.2 Non-financial Conflicts Not Addressed

  • 5.1.3 Inadequate Management of Declared Conflicts

5.2 Plagiarism
  • 5.2.1 Direct Plagiarism

  • 5.2.2 Self-plagiarism

  • 5.2.3 Paraphrasing Without Citation

5.3 Data Fabrication or Falsification
  • 5.3.1 Fabrication of Data

  • 5.3.2 Falsification of Data

  • 5.3.3 Manipulation of Images or Figures

5.4 Ethical Approval and Animal Welfare
  • 5.4.1 Lack of Ethical Approval for Research

  • 5.4.2 Inadequate Attention to Animal Welfare

  • 5.4.3 Use of Illegally Obtained Samples

6. Reporting and Presentation

6.1 Incomplete or Unclear Reporting
  • 6.1.1 Lack of Specificity in Methods

  • 6.1.2 Absence of Statistical Analysis Details

  • 6.1.3 Results Not Aligned with Objectives or Hypotheses

6.2 Inaccurate or Misleading Visuals
  • 6.2.1 Misrepresentation in Graphical Data

  • 6.2.2 Lack of Necessary Labels or Scale in Visuals

  • 6.2.3 Overuse of Complex Visuals

6.3 Lack of Reproducibility
  • 6.3.1 No Access to Underlying Data

  • 6.3.2 Software or Code Unavailable

  • 6.3.3 Inadequate Description of Experimental Setup

6.4 Inadequate Citation of Sources
  • 6.4.1 Omission of Relevant Literature

  • 6.4.2 Incorrect Attribution of Sources

  • 6.4.3 Reliance on Non-peer-reviewed or Unreliable Sources

7. Reproducibility Issues

7.1 Inadequate Description of Methodology
  • 7.1.1 Omission of Key Methodological Details

  • 7.1.2 Vague Description of Equipment or Materials

  • 7.1.3 Insufficient Explanation of Data Collection Processes

7.2 Insufficient Data Sharing
  • 7.2.1 Data Availability Not Specified

  • 7.2.2 Restricted Access to Data Without Justification

  • 7.2.3 Incomplete Data Sets Provided

7.3 Lack of Code Sharing or Open-Source Software
  • 7.3.1 Proprietary Code Without Alternatives

  • 7.3.2 Absence of Code for Data Analysis

  • 7.3.3 No Version Control Information

7.4 Unavailability of Necessary Resources or Materials
  • 7.4.1 Unique Materials Without Deposition

  • 7.4.2 Lack of Information on Obtaining Resources

  • 7.4.3 No Shared Protocols for Custom Procedures

7.5 Pre-registration Issues
  • 7.5.1 Absence of Pre-registration

  • 7.5.2 Discrepancies Between Pre-registration and Reporting

  • 7.5.3 Selective Reporting Not Addressed

7.6 Failure to Adhere to Pre-registered Plans
  • 7.6.1 Unjustified Changes to Methodology

  • 7.6.2 Omission of Pre-registered Outcomes

  • 7.6.3 Introduction of Post Hoc Analyses as Pre-registered
