How to guides
Before submission / reviews
Learning R and R scripts
General R guides
 Datacamp is a good starting point
 R for Data Science; Garrett Grolemund & Hadley Wickham
 How to Learn R (R bloggers)
 Winter R Bootcamp  Sean Kross December 23, 2015
 R Bootcamp  good slides
 Robust statistical methods: a primer for clinical psychology and experimental psychopathology researchers (Andy Field and Rand R. Wilcox, BRAT, 2017)
Courses:
 Sentiment Analysis in R  DataCamp
Styling
Research design
Data prep
Data check
 “Methods to Detect Low Quality Data and Its Implication for Psychological Research”, preprint  https://osf.io/cv2bn/ ; video: https://www.youtube.com/watch?v=QE5_qktry5I ; Shinyapp: https://github.com/doomlab/shinyserver/tree/master/lqscreen ; package: https://osf.io/x6t8a/ ; https://github.com/doomlab/botbotbot
Statistical tests
 Reliability from alpha to omega: a tutorial (preprint, 2018)
 problems with Cronbach's alpha and a guide  Thanks Coefficient Alpha, We’ll Take it From Here (PsycMethods, 2017)
 skimr  a frictionless approach to dealing with summary statistics
Citations in R
Graphics / Plotting
 Top 50 ggplot2 Visualizations  The Master List (With Full R Code)
 Summary Plotting  Erica Solomon
 Create Diagrams  DiagrammeR
Visualization galleries:
Reporting
 grateful  very easy to cite the R packages used in any report or publication
Reproducibility
 A practical guide for transparency in psychological science (preprint, 2018)
 Eight things I do to make my open research more findable and understandable (Uri Sim, Data Colada, 2018)

 Example: Searching for Dumbfounding
 StatTag is a free, open-source software plugin for conducting reproducible research (intro video)
 gramr  checking an R Markdown document for grammatical errors
SEM in R
 Plot it with the wonderful OnyxR
 SEM and causality  Defending the Causal Interpretation of SEM (or, SEM Survival Kit)
Preregistering SEM analyses
 Why social psychologists using Structural Equation Modelling need to preregister their studies (presentation Matt Williams, Massey University)
R Studio
Text mining
Advanced R uses
Cheat sheets
 R powered web applications with Shiny (a tutorial and cheat sheet with 40 example apps)
General stats courses
T-tests and Equivalence testing
 Equivalence Testing for Psychological Research: A Tutorial (preprint, 2017)
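The tutorial above uses dedicated tooling, but the underlying TOST (two one-sided tests) logic is simple enough to sketch in base R. This is a minimal illustration with simulated data; the equivalence bounds and alpha are placeholder values you would justify for your own study.

```r
# Two One-Sided Tests (TOST): equivalence is claimed when BOTH one-sided
# t-tests against the equivalence bounds are significant, i.e. the observed
# difference is reliably above the lower bound and below the upper bound.
tost <- function(x, y, low, high, alpha = 0.05) {
  t_low  <- t.test(x, y, mu = low,  alternative = "greater")  # diff > lower bound?
  t_high <- t.test(x, y, mu = high, alternative = "less")     # diff < upper bound?
  list(p_low = t_low$p.value, p_high = t_high$p.value,
       equivalent = t_low$p.value < alpha && t_high$p.value < alpha)
}

set.seed(1)
x <- rnorm(100, mean = 0)
y <- rnorm(100, mean = 0.05)
tost(x, y, low = -0.5, high = 0.5)
```

For real analyses the TOSTER package (by Lakens, who wrote the tutorial above) handles standardized bounds, paired designs, and power.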
Basic correlations
Regressions
 Repeated measures MANCOVA (change in performance, etc.) / Repeated measures  3 methods with SPSS
 comparing different predictors in a regression: Relative Weight Analysis with this tool
Mediation
 Statistical mediation analysis with a multicategorical independent variable (basically create dummies, insert one, and control for the other)
 Within-subject mediation analysis (OSF preprint)
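The "create dummies" idea from the multicategorical-IV paper above can be sketched in base R. This is a simulated toy example (group names, effect sizes, and the `relative_indirect` label are mine, not from the paper), showing point estimates only; for inference you would bootstrap these products.

```r
# Mediation with a multicategorical IV (3 groups): code k-1 dummies against a
# reference group, then estimate a relative indirect effect a_i * b per dummy.
set.seed(42)
n <- 300
group <- factor(rep(c("control", "treat1", "treat2"), each = n / 3))
d <- model.matrix(~ group)[, -1]          # two dummy columns, control = reference
m <- 0.5 * d[, "grouptreat1"] + 1.0 * d[, "grouptreat2"] + rnorm(n)  # mediator
y <- 0.6 * m + rnorm(n)                                              # outcome

a_paths <- coef(lm(m ~ d))[-1]            # effect of each dummy on the mediator
b_path  <- coef(lm(y ~ m + d))["m"]       # mediator -> outcome, controlling group
relative_indirect <- a_paths * b_path     # one indirect effect per dummy
relative_indirect
```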
Interactions
 The best Excel I found to do plots / My version (unprotected, more help)
 Do/plot an interaction with Interaction! software / Useful excels
 Interaction between a categorical and continuous variable  1 / 2 / used this (sample paper using this)
ANOVA two-way interaction (with contrasts):
 VIDEO: ANOVA two-way interactions explained right (with the way to do contrasts through SPSS syntax)
Basic code:
UNIANOVA DV BY IV1 IV2
  /METHOD=SSTYPE(3)
  /INTERCEPT=INCLUDE
  /POSTHOC=IV1(TUKEY)
  /PLOT=PROFILE(IV1*IV2)
  /EMMEANS=TABLES(IV2) COMPARE ADJ(LSD)
  /EMMEANS=TABLES(IV1) COMPARE ADJ(LSD)
  /EMMEANS=TABLES(IV1*IV2) COMPARE(IV2)
  /CONTRAST(IV1)=Simple
  /CONTRAST(IV2)=Simple
  /PRINT=OPOWER ETASQ HOMOGENEITY DESCRIPTIVE
  /CRITERIA=ALPHA(.05)
  /DESIGN=IV1 IV2 IV1*IV2.
(you might want to change “EMMEANS=TABLES(IV1*IV2) COMPARE(IV2)” to “EMMEANS=TABLES(IV1*IV2) COMPARE(IV1)” to see which is more convenient)
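The same two-way design can be run in base R. A minimal sketch with simulated data (variable names mirror the SPSS syntax above; `contr.sum` plus `drop1` is one common base-R route to Type III tests, matching SSTYPE(3)):

```r
# Two-way ANOVA with interaction in base R. Type III sums of squares (as in
# the SPSS /METHOD=SSTYPE(3) above) require effect coding via contr.sum.
set.seed(7)
dat <- data.frame(
  IV1 = factor(rep(c("a", "b"), each = 50)),
  IV2 = factor(rep(c("x", "y"), times = 50))
)
dat$DV <- rnorm(100) + ifelse(dat$IV1 == "b" & dat$IV2 == "y", 1, 0)

options(contrasts = c("contr.sum", "contr.poly"))   # effect coding for Type III
fit <- aov(DV ~ IV1 * IV2, data = dat)
summary(fit)                                        # main effects + interaction
drop1(fit, ~ ., test = "F")                         # Type III-style F tests
tapply(dat$DV, list(dat$IV1, dat$IV2), mean)        # cell means (cf. /EMMEANS)
```

For LSD-style pairwise follow-ups of the estimated marginal means, the emmeans package is the usual next step.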
U-curve / quadratic regressions
Excel Magic
 Daniel's XL toolbox  Annotate Chart function to show significant differences
General SPSS magic
 Centering variables and other useful SPSS macros
Generally highly recommended  UITS Tutorials and Working Papers.
Effect size
General
 Reporting effect sizes in original psychological research: A discussion and tutorial. (Psychological Methods, 2018)
 The Power Dialogues (basic conversational explanation of power with links  Brent Roberts UIUC)
 Required Sample Size to Detect the Mediated Effect (Psychological Science, 2007)
Tools:
 Pearson correlation confidence intervals tools (why doesn't SPSS report these?!)
 Calculating and Reporting Effect Sizes to Facilitate Cumulative Science: A Practical Primer for t-tests and ANOVAs (Daniel Lakens on Frontiers and OSF)
 Effect Size Calculators (an increasingly important factor to report in psychological science)
 Interpreting Cohen's d effect size: an interactive visualization and other funky tools  R psychologist
 How to Use a Monte Carlo Study to Decide on Sample Size and Determine Power (Muthén & Muthén, 2009)
 The Meta Analysis Calculator  convert between effect sizes
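For the confidence intervals the first tool above computes (and SPSS doesn't report), the standard approach is the Fisher z transformation; it only needs r and n, so it also works on published summary statistics. A minimal base-R sketch:

```r
# Confidence interval for a Pearson correlation via the Fisher z
# transformation (the same method cor.test() uses internally).
r_ci <- function(r, n, conf = 0.95) {
  z    <- atanh(r)                   # Fisher z = 0.5 * log((1+r)/(1-r))
  se   <- 1 / sqrt(n - 3)            # standard error of z
  crit <- qnorm(1 - (1 - conf) / 2)  # e.g. 1.96 for a 95% CI
  tanh(c(lower = z - crit * se, upper = z + crit * se))  # back-transform
}

r_ci(r = 0.30, n = 100)
```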
Interpreting effect sizes, Andy Field summarizes in his methods book:
Cohen (1992, 1988) has made some widely accepted suggestions about what constitutes a large or small effect:
 r = 0.10 (small effect): in this case, the effect explains 1% of the total variance.
 r = 0.30 (medium effect): the effect accounts for 9% of the total variance.
 r = 0.50 (large effect): the effect accounts for 25% of the variance.
Table: effectsize.png
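The variance-explained figures in Cohen's benchmarks above are just the squared correlations, which you can verify directly:

```r
# A correlation's square is the proportion of variance explained (r^2):
r <- c(small = 0.10, medium = 0.30, large = 0.50)
round(r^2, 2)   # 0.01, 0.09, 0.25 -> 1%, 9%, 25% of the variance
```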
Readings about effect size estimates in psychology:
 One Hundred Years of Social Psychology Quantitatively Described (Richard et al., 2003)  the average effect size in psychology is about r = .21
 Effect size guidelines for individual differences researchers (PID, 2016)  guidelines for effect sizes in psychology
Multi level power calculations
 Multilevel Power Tool (Calculations are based on the article written by Mathieu, Aguinis, Culpepper, & Chen (2012) in the Journal of Applied Psychology)
 Cross-Level Interaction Effect Calculator, an R Shiny app for this article: Best-practice recommendations for estimating cross-level interaction effects using multilevel modeling (JOM, 2013) (page with sample data file and R code)
 Power Analysis and Effect Size in Mixed Effects Models: A Tutorial (Journal of Cognition, 2018)
Multilevel / HLM
 Multilevel Logistic Modeling (IRSP, 2017)
CFA
 Run a CFA with AMOS to see whether two scales are separate: construct two models, one with the two scales as separate factors and one with them combined into a single factor. Note the chi-square for each model, compute the difference, and check it against the chi-square table (with df equal to the difference in degrees of freedom) to see whether the models differ significantly. See this blog post for further explanation.
 Confirmatory factor analysis using AMOS video on Youtube
 Interpreting fit indices (RMSEA, SRMR, CFI, chi-square)
 Confirmatory Factor Analysis Using AMOS on Academia.edu
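The chi-square difference test described in the first CFA item above (the computation AMOS users do by hand against a chi-square table) is one line of base R. The fit values below are made-up numbers for illustration:

```r
# Chi-square difference test for two nested CFA models: the difference in
# chi-square is itself chi-square distributed with df = difference in df.
chisq_diff_test <- function(chisq1, df1, chisq2, df2) {
  d_chisq <- abs(chisq2 - chisq1)
  d_df    <- abs(df2 - df1)
  p <- pchisq(d_chisq, d_df, lower.tail = FALSE)
  c(diff = d_chisq, df = d_df, p = p)
}

# Hypothetical values: two-factor model vs. one-factor (combined) model
chisq_diff_test(chisq1 = 85.2, df1 = 43, chisq2 = 160.7, df2 = 44)
```

A significant p here means the more constrained (combined) model fits significantly worse, i.e. the two scales are better treated as separate.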
Meta analysis
Forest plots:
 Meta analysis  metafor
 Forest plots  Manylabs1 or Forest Plot with Subgroups
 p-uniform (meta-analysis methods that correct for publication bias)
Combining two metas (SEM):
 Meta-Analytical Structural Equation Modeling: An Easy Introduction to the Two-Step Approach (Holger Steinmetz & Isidor, unpublished)
 Random-effects models for meta-analytic structural equation modeling: review, issues, and illustrations (Research Synthesis Methods, 2014)
 Meta-Analytic Structural Equation Modeling: A Two-Stage Approach (Psychological Methods, 2005)
 Meta-analyzing dependent correlations: An SPSS macro and an R script (Behavior Research Methods, 2014)
Best way to assess publication bias:
 Correcting bias in metaanalyses: What not to do (metashowdown Part 1)
Pooling means and SD/variance:
New directions:
Reporting
Writing
Preregistration
 As predicted (explained here)
Versioning/Github
Data Transformations
Multi choice data bank
 Use software like Respondus
Detecting cheating
Common method bias using AMOS
Bayesian
 Introduction to Bayesian Inference for Psychology (preprint by Alex Etz)
 The Bayesian New Statistics: Hypothesis testing, estimation, metaanalysis, and power analysis from a Bayesian perspective (Psychonomic Bulletin & Review, 2017)
 ANOVA with Bayes  Bayesian Inference for Psychology. Part II: Example Applications with JASP
If you still want to do NHST:
Absence of an effect
Analyzing change
 Analysis of Pretest and Posttest Scores with Gain Scores and Repeated Measures  UCCS Dr. Lee Becker
 Gain Score Analysis  Keith Smolkowski
In experiments:
 The Analysis of Pretest/Posttest Experiments  Gerard E. Dallal, Ph.D.
 From Gain Score t to ANCOVA F (and vice versa) (Knapp and Schafer, 2009)
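Both approaches compared in the Knapp and Schafer paper above can be sketched in base R with simulated data (the group labels and effect sizes are made up for illustration):

```r
# The two standard analyses of a pretest/posttest experiment:
# (1) a t-test on gain scores, (2) ANCOVA on posttest with pretest covariate.
set.seed(123)
n <- 50
pre   <- rnorm(2 * n, mean = 10)
group <- factor(rep(c("control", "treatment"), each = n))
post  <- 0.7 * pre + ifelse(group == "treatment", 1, 0) + rnorm(2 * n)

gain <- post - pre
t.test(gain ~ group)                      # (1) gain score analysis

fit <- lm(post ~ pre + group)             # (2) ANCOVA
summary(fit)$coefficients["grouptreatment", ]  # adjusted treatment effect
```

With randomized groups, ANCOVA is typically the more powerful of the two because it uses the pretest to reduce error variance rather than subtracting it with weight 1.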
APA style
Helping others with their MTurk
In the last few lab meetings some of you mentioned that you are interested in collecting data with Amazon Mechanical Turk but that you do not have an account and asked if I might be able to help. I have an account, and have had some experience with this, so I’d be happy to help you run your studies and to help you adjust them to MTurk to increase the chances of getting reliable, high-quality data. The email below is to give you an idea of how we can work on that.
First, a quick background. Although MTurk does have some limitations, many of the issues that came up in the various lab meetings can definitely be addressed with MTurk data collection – a replication attempt, scale validation, a pretest to determine power, quick access to working people and/or underprivileged sectors, etc. I personally believe that MTurk is also good enough for running an independent study complementary to other collected data – if done correctly. In my experience, MTurk data collection is comparable to, if not better than, student participant pool data. I also use a tool called TurkPrime that rides on top of MTurk and is intended for academics; it completely automates academic data collection and increases data reliability (preventing duplicates and lots of potential problems, etc.).
My experience is summarized in the following two posts:
 Generally about MTurk  http://mgto.org/runningexperimentswithamazonmechanicalturk/
 About working with TurkPrime  http://mgto.org/turkprimeeasyandpowerfulmturkdatacollection/
If you’d like my help to run something with MTurk, what you’ll need to do is:
 Have a working Qualtrics survey that you’ve tested. I can share a Qualtrics demo with you designed for MTurk that includes a consent form, funneling section, demographics, and debriefing.
 Make sure you can get reimbursed and have the money available. Once MTurk runs it automatically pays from my account, so I would ask that you transfer the funds to me before I start running this, so that I don’t have to bear the financial costs of waiting for everyone’s reimbursements. If you can’t, we can talk about that.
 Plan how many participants you’ll need (N), the intended pay (the minimum for IRB in some places is US$0.05 a minute; most do a minimum of US$0.10 a minute), and sample characteristics (location, qualifications, etc.).
 Costs you’ll need to take into account: (N + 10+ participants for the pretest run) * pay + 20% Amazon MTurk commission. [Note: The default Amazon commission is actually 40%, but TurkPrime uses a feature called MicroBatch that reduces it to 20%. In some study designs this feature cannot be used, and I’ll alert you if that is the case.]
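The cost formula above is easy to sanity-check in R before committing to a run (the function name and example figures are mine; adjust the commission to 0.40 if MicroBatch can't be used for your design):

```r
# Estimated MTurk cost: (N + pretest participants) * pay, plus commission.
mturk_cost <- function(n, pay_per_participant, pretest = 10, commission = 0.20) {
  base <- (n + pretest) * pay_per_participant
  base * (1 + commission)
}

mturk_cost(n = 200, pay_per_participant = 1.00)   # (200 + 10) * 1.00 * 1.2 = 252
```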
The procedure would usually be:
 You pretest your study with one or two other people to make sure it’s working well (preferably several times, to test all conditions if you have manipulations).
 We run a pretest of 10 participants on MTurk to see that it all goes well (possibly more if you have many conditions).
 We run the full sample.
Depending on the kind of survey you’re running and your intended sample size, data collection could take anything from half an hour to a few days.
Please let me know if you have any other questions.
Also:
You’ll need to work out your sample size (+ small pretest) and payment for each participant with the formula above, confirm your reimbursement, and then transfer that money to me in advance. In the meanwhile, make a copy of your survey on Qualtrics and share the copy with me, and I’ll make sure it’s all set.
Also, it’s not a must, of course, but I would strongly encourage you to:
 Calculate the needed sample size based on a power analysis using G*Power and an expected effect size.
 Take 15–30 minutes to preregister your survey layout and your main predictions on OSF or “As predicted”.
To me, personally, as a reviewer, these two things significantly increase my confidence in the findings (and, in a way, in the researcher).
Control variables
IRB (Maastricht)
Build a scholar website
 Use Wordpress
 Some WordPress themes  Author
 Using R  Dan Quintana guide on Twitter
Twitter research
 TACIT  Text Analysis, Crawling and Interpretation Tool (e.g., used in “Purity homophily in social networks”, JEP:G, 2016)
Games in surveys
Dealing with outliers
 Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median (JESP, 2013) [with R/SPSS code]
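The median/MAD rule the JESP paper above recommends is a few lines of base R (the 2.5 cutoff follows that paper's "moderately conservative" suggestion; the data here are simulated):

```r
# Flag outliers using absolute deviation around the median rather than
# standard deviations around the mean: values beyond median +/- 2.5 * MAD.
mad_outliers <- function(x, cutoff = 2.5) {
  center <- median(x)
  dev    <- mad(x)          # R's mad() scales by 1.4826 to estimate the SD
  abs(x - center) / dev > cutoff
}

x <- c(rnorm(99), 42)       # one blatant outlier
which(mad_outliers(x))
```

Unlike mean/SD cutoffs, the outliers themselves barely move the median and MAD, so extreme values can't mask each other.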
Country level analyses
 What Can Cross-Cultural Correlations Teach Us about Human Nature? (Human Nature, 2014)
Spatial dependence analyses
 SPATDWM: Stata module for US state and county spatial distance matrices (not only for STATA, has useful CSV for states centroids)
 Coordinates of countries (to be used with world values survey and data archive research) (file: country_latlon.csv)
MPlus
Lab participant management
 SONA (commercial)
Mobile data collection
Create a mobile app
Machine learning
 Analyzing emotions  Analyzing Emotions using Facial Expressions in Video with Microsoft AI and R
Collaborations
Share presentations and data
Simulations
Social network analysis
 An Introduction to Social Network Analysis for Personality and Social Psychologists (Clifton & Webster, 2017, SPPS)
Network analysis
 Network Analysis on Attitudes (SPPS, 2017)
Machine learning
Conducting replications
Various
See all kinds of great resources from the SIPS workshop:
 How to Promote Transparency and Replicability as a Reviewer (Stephen Lindsay and Roger Giner-Sorolla)
 Writing papers to be transparent, reproducible, and fabulous (Bobbie Spellman & Simine Vazire)
 Intro to Single Paper MetaAnalysis (Courtney Soderberg)
 Fundamentals of Rmarkdown (Chris Hartgerink and Mike Frank)
 IRBs and Best Practices for Data Sharing (Rick Gilmore and Gustav Nilsonne)
 Preparing and Curating Data for Sharing (Simon Podhajsky and David Condon)
 Fundamentals of R (Elizabeth PageGould and Alex Danvers)
 Sample Size and Effect Size Workshop (Daniel Lakens and Jeremy Biesanz)
 Getting Started with Preregistration (Charlie Ebersole, Hans IJzerman, Mike Wagner, & Randy McCarthy)
 Are manipulation checks necessary? (preprint, 2018)
 Recovering data from summary statistics: Sample Parameter Reconstruction via Iterative TEchniques (SPRITE) (preprint, 2018)
 Address sample selection bias using the Heckman method (Heckman, 1979) (example for use in a follow-up on the WVS) / SPSS
 A Refresher on Statistical Significance (HBR, 2016)  great for students
 Understanding type 1 errors  The positive predictive value by Daniel Lakens
 Quickcalcs  for calculating chi-square and such
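The positive predictive value from the Lakens piece above follows a simple Bayes-rule formula, sketched here in base R (the example power/prior figures are illustrative, not from the piece):

```r
# Positive predictive value: the probability that a significant finding
# reflects a true effect, given power, alpha, and the prior odds that the
# tested hypothesis is true.
ppv <- function(power, alpha, prior) {
  (power * prior) / (power * prior + alpha * (1 - prior))
}

ppv(power = 0.80, alpha = 0.05, prior = 0.50)   # well-powered: PPV ~ 0.94
ppv(power = 0.35, alpha = 0.05, prior = 0.10)   # typical underpowered study
```

The second call shows why low power is so costly: even with p < .05, most such "findings" are false positives.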
Content mining
 Academic papers  ContentMine
Open Science
Research workflow
Lab manuals
Some examples:
Remove line breaks in Office (when copying from PDFs)
Do this macro:
Sub RemoveParaOrLineBreaks()
    Dim oRng As Range
    Dim oFind As Range
    Set oRng = Selection.Range
    Set oFind = Selection.Range
    With oFind.Find
        .Replacement.Text = "\1"
        Do While .Execute(FindText:="[^13^l]{1,}([a-z])", _
                          MatchWildcards:=True, _
                          Replace:=wdReplaceAll)
        Loop
    End With
lbl_Exit:
    Set oRng = Nothing
    Set oFind = Nothing
    Exit Sub
End Sub
Add a keyboard shortcut from options→ribbon→keyboard→macros→RemoveParaOrLineBreaks