# How to guides

## Before submission / reviews

## Learning R and R scripts

### General R guides

- Datacamp is a good starting point
- R for Data Science; Garrett Grolemund & Hadley Wickham
- How to Learn R (R bloggers)
- Winter R Bootcamp - Sean Kross December 23, 2015
- R Bootcamp - good slides
- Robust statistical methods: a primer for clinical psychology and experimental psychopathology researchers (Andy Field and Rand R. Wilcox, BRAT, 2017)

### Data prep

### Statistical tests

- Problems with Cronbach's alpha, and a guide - Thanks Coefficient Alpha, We'll Take It From Here (Psychological Methods, 2017)
- skimr - a frictionless approach to summary statistics

### Graphics / Plotting

- Summary Plotting - Erica Solomon
- Create Diagrams - DiagrammeR

### Reproducibility

- StatTag is a free, open-source software plug-in for conducting reproducible research (intro video)
- gramr - check an RMarkdown document for grammatical errors

### SEM in R

- Plot it with the wonderful OnyxR
- SEM and causality - Defending the Causal Interpretation of SEM (or, SEM Survival Kit)

### R Studio

### Text mining

### Advanced uses

## General stats courses

## T-test and Equivalence testing

## Basic correlations

## Regressions

- Repeated measures MANCOVA (change in performance, etc.) / Repeated measures - 3 methods with SPSS
- comparing different predictors in a regression: Relative Weight Analysis with this tool

## Mediation

- Statistical mediation analysis with a multicategorical independent variable (basically create dummies, insert one, and control for the other)
- Within-subject mediation analysis (OSF preprint)

## Interactions

- The best Excel sheet I found to do plots / My version (unprotected, with more help)
- Do/plot an interaction with Interaction! software / Useful excels
- Interaction between a categorical and continuous variable - 1 / 2 / used this (sample paper using this)

ANOVA two-way interaction (with contrasts):

- VIDEO: ANOVA two-way interactions explained right (with the way to do contrasts through SPSS syntax)

Basic code:

    UNIANOVA DV BY IV1 IV2
      /METHOD=SSTYPE(3)
      /INTERCEPT=INCLUDE
      /POSTHOC=IV1(TUKEY)
      /PLOT=PROFILE(IV1*IV2)
      /EMMEANS=TABLES(IV2) COMPARE ADJ(LSD)
      /EMMEANS=TABLES(IV1) COMPARE ADJ(LSD)
      /EMMEANS=TABLES(IV1*IV2) COMPARE(IV2)
      /CONTRAST(IV1)=Simple
      /CONTRAST(IV2)=Simple
      /PRINT=OPOWER ETASQ HOMOGENEITY DESCRIPTIVE
      /CRITERIA=ALPHA(.05)
      /DESIGN=IV1 IV2 IV1*IV2.

(You might want to change "EMMEANS=TABLES(IV1*IV2) COMPARE(IV2)" to "EMMEANS=TABLES(IV1*IV2) COMPARE(IV1)" to see which comparison is more convenient.)

## Excel Magic

- Daniel's XL toolbox - Annotate Chart function to show significant differences

## General SPSS magic

- Centering variables and other useful SPSS macros

Generally highly recommended - UITS Tutorials and Working Papers.

## Effect size

General:

- The Power Dialogues (basic conversational explanation of power with links - Brent Roberts UIUC)
- Reporting Effect Sizes in Original Psychological Research: A Discussion and Tutorial (psychological methods, 2016)
- Required Sample Size to Detect the Mediated Effect (Psychological Science, 2007)

Tools:

- Pearson correlation confidence intervals tools (why doesn't SPSS report these?!)
- Calculating and Reporting Effect Sizes to Facilitate Cumulative Science: A Practical Primer for t-tests and ANOVAs (Daniel Lakens on Frontiers and OSF)
- Effect Size Calculators (an increasingly important factor to report in psychological science)
- Interpreting Cohen's d effect size an interactive visualization and other funky tools - R psychologist
- How to Use a Monte Carlo Study to Decide on Sample Size and Determine Power (Muthén & Muthén, 2009)
- Multilevel Power Tool (Calculations are based on the article written by Mathieu, Aguinis, Culpepper, & Chen (2012) in the Journal of Applied Psychology)
- Cross-Level Interaction Effect Calculator - an R Shiny app for this article: Best-practice recommendations for estimating cross-level interaction effects using multilevel modeling (JOM, 2013) (page for sample data file and R code)
- The Meta Analysis Calculator - convert between effect sizes

Interpreting effect sizes - Andy Field summarizes in his methods book:

Cohen (1988, 1992) has made some widely accepted suggestions about what constitutes a large or small effect:

- r = 0.10 (small effect): in this case, the effect explains 1% of the total variance.
- r = 0.30 (medium effect): the effect accounts for 9% of the total variance.
- r = 0.50 (large effect): the effect accounts for 25% of the variance.
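
The variance figures above are just r squared. A minimal Python sketch (the helper names are made up for illustration) computing variance explained from r, plus the standard d-to-r conversion mentioned alongside the effect size calculators:

```python
import math

def variance_explained(r):
    """Share of total variance explained by a correlation r, as a percentage (r^2 * 100)."""
    return r ** 2 * 100

def d_to_r(d):
    """Convert Cohen's d to r, assuming equal group sizes: r = d / sqrt(d^2 + 4)."""
    return d / math.sqrt(d ** 2 + 4)

# Cohen's benchmarks: r = .10 -> 1%, r = .30 -> 9%, r = .50 -> 25%
for r in (0.10, 0.30, 0.50):
    print(f"r = {r:.2f}: {variance_explained(r):.0f}% of variance")
```
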

Table: effect-size.png

Readings about effect size estimates in psychology:

- One Hundred Years of Social Psychology Quantitatively Described (Richard et al., 2003) - the average effect size in psychology is about r = .21
- Effect size guidelines for individual differences researchers (PID, 2016) - guidelines for effect sizes in psychology

## CFA

- Run a CFA with AMOS to see whether two scales are separate: construct two models, one with the two scales kept separate and one with them combined into a single factor. Compute the difference between the two models' chi-square values (and degrees of freedom) and check it against the chi-square distribution to see whether the difference is significant. See this blog post for further explanation.
- Confirmatory factor analysis using AMOS video on Youtube
- Interpreting fit indices (RMSEA, SRMR, CFI, chi-square)
- Confirmatory Factor Analysis Using AMOS on Academia.edu
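
The chi-square difference test described in the first bullet can be sketched numerically. This is a hedged sketch with made-up fit values; it only handles a df difference of 1 (the usual one- vs two-factor comparison), where the chi-square survival function reduces to a closed form:

```python
import math

def chi_square_diff_p(chisq_combined, chisq_separate, df_diff=1):
    """p-value for a chi-square difference test between nested models.

    Only df_diff == 1 is handled in this sketch; for 1 df the chi-square
    survival function equals erfc(sqrt(x / 2)).
    """
    if df_diff != 1:
        raise NotImplementedError("sketch covers df_diff == 1 only")
    delta = chisq_combined - chisq_separate
    return math.erfc(math.sqrt(delta / 2))

# Hypothetical fit values: combined (one-factor) model chi2 = 58.3,
# separate (two-factor) model chi2 = 51.1 -> delta = 7.2 on 1 df,
# so the two-factor model fits significantly better (p < .01).
p = chi_square_diff_p(58.3, 51.1)
print(f"p = {p:.4f}")
```
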

## Multi-level / HLM

## Meta analysis

- Meta analysis - metafor
- Forest plots - Manylabs1 or Forest Plot with Subgroups
- puniform (meta-analysis methods that correct for publication bias)

Combining two metas (SEM):

- Meta-Analytical Structural Equation Modeling: An Easy Introduction to the Two-Step Approach (Holger Steinmetz & Isidor, unpublished)
- Random-effects models for meta-analytic structural equation modeling: review, issues, and illustrations (Research Synthesis Methods, 2014)
- Meta-Analytic Structural Equation Modeling: A Two-Stage Approach (Psychological Methods, 2005)
- Meta-analyzing dependent correlations: An SPSS macro and an R script (Behavior Research Methods, 2014)

Best way to assess publication bias:

- Correcting bias in meta-analyses: What not to do (meta-showdown Part 1)

## Reporting

## Writing

## Pre-registration

- As predicted (explained here)

## Other

- Address sample selection bias using Heckman method (Heckman, 1979) (example for use in a followup on the WVS) / SPSS
- A Refresher on Statistical Significance (HBR, 2016) - great for students
- Understanding type 1 errors - The positive predictive value by Daniel Lakens
- Quickcalcs - for calculating Chisquare and such

## Data Transformations

## Multi choice data bank

- Use software like Respondus

## Detecting cheating

## Common method bias using AMOS

## Bayesian

- Introduction to Bayesian Inference for Psychology (preprint by Alex Etz)
- The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective (Psychonomic Bulletin & Review, 2017)
- ANOVA with Bayes - Bayesian Inference for Psychology. Part II: Example Applications with JASP

If you still want to do NHST:

## Absence of an effect

## Analyzing change

- Analysis of Pretest and Posttest Scores with Gain Scores and Repeated Measures - UCCS Dr. Lee Becker
- Gain Score Analysis - Keith Smolkowski

In experiments:

- The Analysis of Pre-test/Post-test Experiments - Gerard E. Dallal, Ph.D.
- From Gain Score t to ANCOVA F (and vice versa) (Knapp and Schafer, 2009)

## APA style

## Helping others with their MTurk

In the last few lab meetings some of you mentioned that you are interested in collecting data with Amazon Mechanical Turk but that you do not have an account and asked if I might be able to help. I have an account, and have had some experience with this, so I’d be happy to help you run your studies and to help you adjust them to MTurk to increase the chances of getting reliable high-quality data. The email below is to give you an idea of how we can work on that.

First, a quick background. Although MTurk does have some limitations, many of the issues that came up in the various lab meetings can definitely be addressed with MTurk data collection - a replication attempt, scale validation, a pre-test to determine power, quick access to working people and/or underprivileged sectors, etc. I personally believe that MTurk is also good enough for running an independent study complementary to other collected data - if done correctly. In my experience, MTurk data collection is comparable to, if not better than, student participant pool data. I also use a tool called TurkPrime that rides on top of MTurk and is intended for academics; it completely automates academic data collection and increases data reliability (preventing duplicates and lots of potential problems, etc.).

My experience is summarized in the following two posts:

- Generally about MTurk - http://mgto.org/running-experiments-with-amazon-mechanical-turk/
- About working with TurkPrime - http://mgto.org/turkprime-easy-and-powerful-mturk-data-collection/

If you'd like my help to run something with MTurk, what you'll need to do is:

1. Have a working Qualtrics survey that you've tested. I can share a Qualtrics demo with you designed for MTurk that includes a consent form, funneling section, demographics, and debriefing.
2. Make sure you can get reimbursed and have the money available. Once MTurk runs, it automatically pays from my account, so I would ask that you transfer the funds to me before I start running this, so that I don't have to bear the financial costs of waiting for everyone's reimbursements. If you can't, we can talk about that.
3. Plan how many participants you'll need (N), the intended pay (the minimum for IRB in some places is US$0.05 a minute; most do a minimum of US$0.10 a minute), and sample characteristics (location, qualifications, etc.).

Costs you'll need to take into account: (N + 10+ participants for the pretest run) * pay + 20% Amazon MTurk commission. [Note: The default Amazon commission is actually 40%, but TurkPrime uses a feature called Micro-Batch that reduces it to 20%. In some study designs this feature cannot be used, and I'll alert you if that is the case.]
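
The cost formula above can be sketched as a small calculator (the function name is made up; the defaults follow the text: 10 pretest participants and the 20% TurkPrime Micro-Batch commission, with 40% as the fallback when Micro-Batch cannot be used):

```python
def mturk_cost(n, pay_per_participant, pretest=10, commission=0.20):
    """Estimated total cost: (N + pretest) * pay, plus the MTurk commission.

    commission defaults to 0.20 (TurkPrime Micro-Batch); use 0.40 for
    study designs where Micro-Batch cannot be used.
    """
    participants = n + pretest
    return participants * pay_per_participant * (1 + commission)

# e.g. 200 participants at $1.00 each: (200 + 10) * 1.00 * 1.20 = $252.00
print(f"${mturk_cost(200, 1.00):.2f}")
```
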

The procedure would usually be:

1. You pretest your study with one or two other people to make sure it's working well (preferably a number of times, to test all conditions if you have manipulations).
2. We run a pretest of 10 participants on MTurk to see that it all goes well (possibly more if you have many conditions).
3. We run a full sample.

Depending on the kind of survey you're running and your intended sample size, data collection could take anything from half an hour to a few days.

Please let me know if you have any other questions.

Also:

You’ll need to work out your sample size (+small pretest) and payment for each participant with the formula before, confirm your reimbursement, and then transfer that money to me in advance. In the meanwhile, make a copy of your survey on Qualtrics and share the copy with me, and I’ll make sure it’s all set.

Also, it’s not a must, of course, but I would strongly encourage you to:

- Calculate the needed sample size based on a power analysis using G*Power and an expected effect size.
- Take 15-30 minutes to pre-register your survey layout and your main predictions on OSF or “As predicted”.

To me, personally, as a reviewer, these two things significantly increase my confidence in the findings (and, in a way, in the researcher).
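
The G*Power sample size calculation mentioned above can be approximated in a few lines using the standard normal approximation n = 2 * ((z_(1-alpha/2) + z_power) / d)^2 per group for an independent-samples t-test. This is a sketch, not a replacement for G*Power: the normal approximation slightly underestimates the exact noncentral-t answer (for d = 0.5 it gives 63 per group, where G*Power's exact answer is 64).

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group N for a two-tailed independent-samples t-test,
    using the normal approximation n = 2 * ((z_crit + z_power) / d) ** 2.
    Slightly underestimates the exact noncentral-t result."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_crit + z_power) / d) ** 2)

# Medium effect (d = 0.5), alpha = .05 two-tailed, 80% power:
print(n_per_group(0.5))  # -> 63 per group (G*Power's exact answer is 64)
```
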

## Control variables

## IRB (Maastricht)

## Build a scholar website

## Twitter research

- TACIT - Text Analysis, Crawling and Interpretation Tool (e.g., used in “Purity homophily in social networks”, JEP:G, 2016)

## Games in surveys

## Dealing with outliers

- Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median (JESP, 2013) [with R/SPSS code]
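
The median-based approach from the JESP (2013) paper can be sketched in a few lines: scale the median absolute deviation by 1.4826 (so it is consistent with the SD under normality) and flag values whose scaled deviation from the median exceeds a threshold. The function name is made up; the paper discusses thresholds of 2, 2.5, or 3.

```python
from statistics import median

def mad_outliers(values, threshold=3.0):
    """Flag outliers by absolute deviation around the median.

    MAD is scaled by 1.4826 for consistency with the SD under normality;
    common thresholds per the paper are 2, 2.5, or 3.
    """
    med = median(values)
    mad = median(abs(x - med) for x in values) * 1.4826
    return [x for x in values if abs(x - med) / mad > threshold]

data = [2.1, 2.4, 2.2, 2.6, 2.3, 9.8]
print(mad_outliers(data))  # -> [9.8]
```
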

## Country level analyses

- What Can Cross-Cultural Correlations Teach Us about Human Nature? (Human Nature, 2014)

## Spatial dependence analyses

- SPATDWM: Stata module for US state and county spatial distance matrices (not only for Stata; has a useful CSV for state centroids)
- Coordinates of countries (to be used with world values survey and data archive research) (file: country_latlon.csv)

## MPlus

## Lab participant management

- SONA (commercial)

## Mobile data collection

### Create a mobile app

### Machine learning

- Analyzing emotions - Analyzing Emotions using Facial Expressions in Video with Microsoft AI and R

### Collaborations

### Share presentations and data

### Simulations

### Social network analysis

- An Introduction to Social Network Analysis for Personality and Social Psychologists (Clifton & Webster, 2017, SPPS)

### Network analysis

- Network Analysis on Attitudes (SPPS, 2017)