Research methods: Know when your numbers are significant
Scientific method: Statistical errors
Weak statistical standards implicated in scientific irreproducibility
The fickle P value generates irreproducible results
Vital statistics
Experimental biology: Sometimes Bayesian statistics are better
A call for transparent reporting to optimize the predictive value of preclinical research
Power failure: why small sample size undermines the reliability of neuroscience
Basic statistical analysis in genetic case-control studies
Erroneous analyses of interactions in neuroscience: a problem of significance
Analyzing 'omics data using hierarchical models
Advantages and pitfalls in the application of mixed-model association methods
Quality control and conduct of genome-wide association meta-analyses
Circular analysis in systems neuroscience: the dangers of double dipping
A solution to dependency: using multilevel analysis to accommodate nested data
How does multiple testing correction work?
What is Bayesian statistics?
What is a hidden Markov model?
The articles below really just cover the statistics topics found in an ordinary textbook, but once published in a Nature journal they suddenly look far more impressive.
Points of Significance: Importance of being uncertain
Points of Significance: Error bars
Points of Significance: Significance, P values and t-tests
Points of Significance: Power and sample size
Points of Significance: Visualizing samples with box plots
Points of Significance: Comparing samples part I
Points of Significance: Comparing samples part II
Points of Significance: Nonparametric tests
Points of Significance: Designing comparative experiments
Points of Significance: Analysis of variance and blocking
Points of Significance: Replication
Points of Significance: Nested designs
Points of Significance: Two-factor designs
Points of Significance: Sources of variation
Points of Significance: Split plot design
Points of Significance: Bayes' theorem
Points of Significance: Bayesian statistics
Points of Significance: Sampling distributions and the bootstrap
Points of Significance: Bayesian networks
A study with low statistical power has a reduced chance of detecting a true effect, but it is less well appreciated that low power also reduces the likelihood that a statistically significant result reflects a true effect. Here, we show that the average statistical power of studies in the neurosciences is very low. The consequences of this include overestimates of effect size and low reproducibility of results. There are also ethical dimensions to this problem, as unreliable research is inefficient and wasteful. Improving reproducibility in neuroscience is a key priority and requires attention to well-established but often ignored methodological principles.
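The claim in this abstract turns on the positive predictive value (PPV): the probability that a statistically significant finding reflects a true effect. If a fraction prior of tested hypotheses are real effects, and a test has power power at significance level alpha, then PPV = power * prior / (power * prior + alpha * (1 - prior)). A minimal sketch in Python; the function name and the example numbers (alpha = 0.05, prior = 0.1) are illustrative assumptions, not taken from the paper:

```python
def ppv(power, alpha=0.05, prior=0.1):
    """Positive predictive value: P(effect is real | result is significant)."""
    true_positives = power * prior          # real effects that the test detects
    false_positives = alpha * (1 - prior)   # null effects that pass the threshold
    return true_positives / (true_positives + false_positives)

# Compare a well-powered study with an underpowered one (illustrative values):
for power in (0.8, 0.2):
    print(f"power = {power:.1f} -> PPV = {ppv(power):.2f}")
# power = 0.8 -> PPV = 0.64
# power = 0.2 -> PPV = 0.31
```

Under these assumed numbers, dropping power from 0.8 to 0.2 cuts the PPV roughly in half, which is the sense in which low power "reduces the likelihood that a statistically significant result reflects a true effect."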