Last updated: June 09 2021

See Package References for help documentation on semoutput functions.

NOTE: We will use two data sets in this example output. They come from the lavaan tutorial data sets; the CFA data set is HolzingerSwineford1939 and the SEM data set is PoliticalDemocracy.

NOTE: You can download the RMarkdown file associated with this output by selecting the ‘Code’ button on the top-right of this document and selecting “Download Rmd”. You can also display the code used to produce each output by selecting the ‘Code’ buttons along the right of the document.

NOTE: For each CFA or SEM model output block there are 4 tabs:

  • The “Summary Output” tab will display nice-looking tables summarizing the model results

  • The “Diagram Output” tab will display a model diagram

  • The “Residual Correlation Matrix” tab will display the residual correlation matrix

  • The “Full Output” tab will display the results from summary() along with parameter estimates and modification indices. This way you can still get the full output from a lavaan model, which provides more information than the “Summary Output” tables. You can also add additional output to this section if you need more info about the model.
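
If you want to add more to the “Full Output” tab, lavaan’s extractor functions can be called there directly. A minimal self-contained sketch (it refits the same three-factor CFA used later in this document, with simpler fitting options than the template uses):

```r
library(lavaan)

# The standard three-factor CFA on the built-in lavaan data set
model <- '
visual  =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed   =~ x7 + x8 + x9
'
fit <- cfa(model, data = HolzingerSwineford1939, std.lv = TRUE)

# Extra output you might add to the "Full Output" tab:
fitMeasures(fit, c("cfi", "tli", "rmsea", "srmr"))          # selected fit indices
parameterEstimates(fit, standardized = TRUE)                # full parameter table
modificationIndices(fit, sort. = TRUE, minimum.value = 10)  # largest MIs only
```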

Once you install the package, you will be able to access an R Markdown template by going to:

File -> New File -> R Markdown… -> From Template -> CFA/SEM (lavaan)

Setup

Required Packages

library(readr)
library(here)
library(dplyr)
library(lavaan)
library(psych)
library(semoutput)
library(semPlot)
library(sjPlot)

Import Data

## Import Data
# data <- read_csv(here("relative file path", "file name"))
cfa_data <- dplyr::select(HolzingerSwineford1939, -id, -school)
sem_data <- PoliticalDemocracy


Descriptives

Typically only one descriptive table is displayed, but since this example uses two data sets, two descriptive tables are shown.

# Prints basic descriptive statistics
sem_descriptives(cfa_data)
Descriptive Statistics
Variable n Mean SD min max Skewness Kurtosis % Missing
agemo 301 5.38 3.45 0.00 11.00 0.09 -1.21 0.00
ageyr 301 13.00 1.05 11.00 16.00 0.70 0.25 0.00
grade 300 7.48 0.50 7.00 8.00 0.09 -2.00 0.33
sex 301 1.51 0.50 1.00 2.00 -0.06 -2.01 0.00
x1 301 4.94 1.17 0.67 8.50 -0.26 0.36 0.00
x2 301 6.09 1.18 2.25 9.25 0.47 0.38 0.00
x3 301 2.25 1.13 0.25 4.50 0.39 -0.89 0.00
x4 301 3.06 1.16 0.00 6.33 0.27 0.12 0.00
x5 301 4.34 1.29 1.00 7.00 -0.35 -0.53 0.00
x6 301 2.19 1.10 0.14 6.14 0.87 0.88 0.00
x7 301 4.19 1.09 1.30 7.43 0.25 -0.27 0.00
x8 301 5.53 1.01 3.05 10.00 0.53 1.24 0.00
x9 301 5.37 1.01 2.78 9.25 0.21 0.34 0.00

Total N = 301
sem_descriptives(sem_data)
Descriptive Statistics
Variable n Mean SD min max Skewness Kurtosis % Missing
x1 75 5.05 0.73 3.78 6.74 0.26 -0.66 0
x2 75 4.79 1.51 1.39 7.87 -0.36 -0.46 0
x3 75 3.56 1.41 1.00 6.42 0.09 -0.86 0
y1 75 5.46 2.62 1.25 10.00 -0.10 -1.10 0
y2 75 4.26 3.95 0.00 10.00 0.33 -1.44 0
y3 75 6.56 3.28 0.00 10.00 -0.62 -0.62 0
y4 75 4.45 3.35 0.00 10.00 0.12 -1.16 0
y5 75 5.14 2.61 0.00 10.00 -0.24 -0.68 0
y6 75 2.98 3.37 0.00 10.00 0.93 -0.34 0
y7 75 6.20 3.29 0.00 10.00 -0.58 -0.63 0
y8 75 4.04 3.25 0.00 10.00 0.46 -0.88 0

Total N = 75

Correlation Matrix

There are also two correlation matrices, one for each data set.

This is a publication-quality correlation matrix that can be inserted into a manuscript.

# Uses sjPlot to print a nice looking correlation table
tab_corr(cfa_data, na.deletion = "pairwise", digits = 2, triangle = "lower")
  sex ageyr agemo grade x1 x2 x3 x4 x5 x6 x7 x8 x9
sex                          
ageyr -0.16**                        
agemo 0.02 -0.24***                      
grade -0.03 0.51*** -0.00                    
x1 -0.08 -0.06 0.05 0.17**                  
x2 -0.12* -0.02 0.05 0.14* 0.30***                
x3 -0.18** 0.04 0.03 0.14* 0.44*** 0.34***              
x4 0.12* -0.20*** -0.02 0.21*** 0.37*** 0.15** 0.16**            
x5 0.06 -0.22*** -0.04 0.17** 0.29*** 0.14* 0.08 0.73***          
x6 0.01 -0.17** 0.01 0.16** 0.36*** 0.19*** 0.20*** 0.70*** 0.72***        
x7 0.12* 0.11 0.06 0.35*** 0.07 -0.08 0.07 0.17** 0.10 0.12*      
x8 -0.04 0.24*** 0.01 0.30*** 0.22*** 0.09 0.19** 0.11 0.14* 0.15** 0.49***    
x9 0.05 0.10 0.03 0.22*** 0.39*** 0.21*** 0.33*** 0.21*** 0.23*** 0.21*** 0.34*** 0.45***  
Computed correlation used pearson-method with pairwise-deletion.
tab_corr(sem_data, na.deletion = "pairwise", digits = 2, triangle = "lower")
  y1 y2 y3 y4 y5 y6 y7 y8 x1 x2 x3
y1                      
y2 0.60***                    
y3 0.68*** 0.45***                  
y4 0.69*** 0.72*** 0.61***                
y5 0.74*** 0.54*** 0.58*** 0.65***              
y6 0.65*** 0.71*** 0.43*** 0.66*** 0.56***            
y7 0.67*** 0.58*** 0.65*** 0.68*** 0.68*** 0.61***          
y8 0.67*** 0.61*** 0.53*** 0.74*** 0.63*** 0.75*** 0.71***        
x1 0.38*** 0.21 0.33** 0.47*** 0.56*** 0.34** 0.39*** 0.46***      
x2 0.32** 0.25* 0.31** 0.44*** 0.52*** 0.35** 0.40*** 0.46*** 0.89***    
x3 0.25* 0.21 0.23 0.39*** 0.43*** 0.33** 0.35** 0.37** 0.80*** 0.85***  
Computed correlation used pearson-method with pairwise-deletion.


EFA

efa_data <- dplyr::select(cfa_data, dplyr::starts_with("x"))

## Conduct the EFA with the number of factors specified by nfactors
efa_fit <- fa(efa_data, fm = "pa", nfactors = 3, rotate = "varimax")

Summary Output

efa_method(efa_fit)
Extraction Method
Sample.Size Method Factors.Extracted Rotation
301 Principal Axis Factoring 3 Varimax
efa_var(efa_fit)
Total Variance Explained
Factor Eigenvalue Proportion Var Cumulative Var
1 2.187 0.243 0.243
3 1.341 0.149 0.392
2 1.328 0.148 0.540
efa_loadings(efa_fit)
Factor Loadings
Factors
Variable F1 F3 F2 h2
x1 0.279 0.613 0.152 0.477
x2 0.102 0.494 -0.030 0.256
x3 0.038 0.659 0.129 0.453
x4 0.832 0.161 0.099 0.728
x5 0.859 0.088 0.089 0.753
x6 0.799 0.214 0.086 0.692
x7 0.093 -0.081 0.705 0.512
x8 0.051 0.170 0.702 0.524
x9 0.130 0.414 0.522 0.461
efa_rotmatrix(efa_fit)
Factor Rotation Matrix
Factor 1 2 3
1 0.784 0.387 0.486
2 -0.583 0.727 0.361
3 -0.213 -0.566 0.796

Diagram Output

fa.diagram(efa_fit)

## Determine the number of factors to extract
VSS.scree(efa_data)

fa.parallel(efa_data, fa = "fa")

## Parallel analysis suggests that the number of factors =  3  and the number of components =  NA
VSS(efa_data, n = 4, rotate = "varimax")

## 
## Very Simple Structure
## Call: vss(x = x, n = n, rotate = rotate, diagonal = diagonal, fm = fm, 
##     n.obs = n.obs, plot = plot, title = title, use = use, cor = cor)
## VSS complexity 1 achieves a maximimum of 0.7  with  3  factors
## VSS complexity 2 achieves a maximimum of 0.85  with  3  factors
## 
## The Velicer MAP achieves a minimum of 0.06  with  2  factors 
## BIC achieves a minimum of  -45.93  with  3  factors
## Sample Size adjusted BIC achieves a minimum of  -9.72  with  4  factors
## 
## Statistics by number of factors 
##   vss1 vss2   map dof chisq    prob sqresid  fit RMSEA BIC SABIC complex eChisq
## 1 0.61 0.00 0.080  27 358.3 1.4e-59     6.4 0.61 0.202 204 289.8     1.0  468.7
## 2 0.65 0.76 0.063  19 129.2 2.0e-18     3.9 0.76 0.139  21  81.0     1.2  154.1
## 3 0.70 0.85 0.068  12  22.6 3.2e-02     2.1 0.87 0.054 -46  -7.9     1.3    7.9
## 4 0.66 0.84 0.120   6   5.5 4.8e-01     1.8 0.89 0.000 -29  -9.7     1.4    1.7
##     SRMR eCRMS eBIC
## 1 0.1471 0.170  315
## 2 0.0843 0.116   46
## 3 0.0191 0.033  -61
## 4 0.0088 0.021  -33

Full Output

efa_fit
## Factor Analysis using method =  pa
## Call: fa(r = efa_data, nfactors = 3, rotate = "varimax", fm = "pa")
## Standardized loadings (pattern matrix) based upon correlation matrix
##     PA1   PA3   PA2   h2   u2 com
## x1 0.28  0.61  0.15 0.48 0.52 1.5
## x2 0.10  0.49 -0.03 0.26 0.74 1.1
## x3 0.04  0.66  0.13 0.45 0.55 1.1
## x4 0.83  0.16  0.10 0.73 0.27 1.1
## x5 0.86  0.09  0.09 0.75 0.25 1.0
## x6 0.80  0.21  0.09 0.69 0.31 1.2
## x7 0.09 -0.08  0.70 0.51 0.49 1.1
## x8 0.05  0.17  0.70 0.52 0.48 1.1
## x9 0.13  0.41  0.52 0.46 0.54 2.0
## 
##                        PA1  PA3  PA2
## SS loadings           2.19 1.34 1.33
## Proportion Var        0.24 0.15 0.15
## Cumulative Var        0.24 0.39 0.54
## Proportion Explained  0.45 0.28 0.27
## Cumulative Proportion 0.45 0.73 1.00
## 
## Mean item complexity =  1.3
## Test of the hypothesis that 3 factors are sufficient.
## 
## The degrees of freedom for the null model are  36  and the objective function was  3.05 with Chi Square of  904.1
## The degrees of freedom for the model are 12  and the objective function was  0.08 
## 
## The root mean square of the residuals (RMSR) is  0.02 
## The df corrected root mean square of the residuals is  0.03 
## 
## The harmonic number of observations is  301 with the empirical chi square  7.87  with prob <  0.8 
## The total number of observations was  301  with Likelihood Chi Square =  22.54  with prob <  0.032 
## 
## Tucker Lewis Index of factoring reliability =  0.963
## RMSEA index =  0.054  and the 90 % confidence intervals are  0.016 0.088
## BIC =  -45.95
## Fit based upon off diagonal values = 1
## Measures of factor score adequacy             
##                                                    PA1  PA3  PA2
## Correlation of (regression) scores with factors   0.93 0.81 0.84
## Multiple R square of scores with factors          0.87 0.66 0.70
## Minimum correlation of possible factor scores     0.74 0.32 0.41


CFA

# specify the model
model <- '
# latent factors
visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9

# correlated errors

# constraints

'
# fit model
fit <- cfa(model = model, data = cfa_data, mimic = "lavaan", 
           estimator = "ML", missing = "ML", 
           std.lv = TRUE, std.ov = FALSE, test = "standard", 
           se = "standard", bootstrap = 1000)
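
The empty “# correlated errors” and “# constraints” sections of the model string are placeholders for additional lavaan syntax. A hypothetical sketch of how they might be filled in (model_alt and fit_alt are illustrative names, not part of the template; the correlated error and the equality constraint shown here are examples, not recommendations for this data set):

```r
library(lavaan)

# Variant of the CFA above with the placeholder sections filled in:
# a correlated error between x7 and x8, and the x2 and x3 loadings
# constrained to be equal via the shared label "a".
model_alt <- '
# latent factors
visual  =~ x1 + a*x2 + a*x3
textual =~ x4 + x5 + x6
speed   =~ x7 + x8 + x9

# correlated errors
x7 ~~ x8
'
fit_alt <- cfa(model_alt, data = HolzingerSwineford1939, std.lv = TRUE)
```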

Summary Output

sem_sig(fit)
Model Significance
Sample.Size Chi.Square df p.value
301 85.306 24 0
sem_fitmeasures(fit)
Model Fit Measures
CFI RMSEA RMSEA.Lower RMSEA.Upper AIC BIC
0.931 0.092 0.071 0.114 7535.49 7646.703
sem_factorloadings(fit, standardized = TRUE, ci = "standardized")
Factor Loadings
Standardized
Latent Factor Indicator Loadings sig p Lower.CI Upper.CI SE z
visual x1 0.772 *** 0 0.659 0.885 0.058 13.416
visual x2 0.424 *** 0 0.301 0.547 0.063 6.752
visual x3 0.581 *** 0 0.467 0.696 0.058 9.942
textual x4 0.852 *** 0 0.807 0.896 0.023 37.612
textual x5 0.855 *** 0 0.812 0.899 0.022 38.530
textual x6 0.838 *** 0 0.792 0.884 0.024 35.598
speed x7 0.570 *** 0 0.455 0.684 0.058 9.767
speed x8 0.723 *** 0 0.601 0.845 0.062 11.608
speed x9 0.665 *** 0 0.535 0.795 0.066 10.063
sem_factorcor(fit)
Latent Factor Correlations
Factor 1 Factor 2 r sig p Lower.CI Upper.CI SE
visual textual 0.459 *** 0 0.334 0.583 0.063
visual speed 0.471 *** 0 0.302 0.640 0.086
textual speed 0.283 *** 0 0.143 0.423 0.071

Diagram Output

semPaths(fit, whatLabels = "std", layout = "tree2", 
         rotation = 2, style = "lisrel", optimizeLatRes = TRUE, 
         intercepts = FALSE, residuals = TRUE, curve = 1, curvature = 3, 
         sizeLat = 10, nCharNodes = 8, sizeMan = 11, sizeMan2 = 4, 
         edge.label.cex = 1.2, edge.color = "#000000")

Residual Correlation Matrix

sem_residuals(fit)
x1 x2 x3 x4 x5 x6 x7 x8 x9
x1
x2 -0.03
x3 -0.01 0.09
x4 0.07 -0.01 -0.07
x5 -0.01 -0.03 -0.15 0.01
x6 0.06 0.03 -0.03 -0.01 0.00
x7 -0.14 -0.19 -0.08 0.04 -0.04 -0.01
x8 -0.04 -0.05 -0.01 -0.07 -0.04 -0.02 0.07
x9 0.15 0.07 0.15 0.05 0.07 0.06 -0.04 -0.03

Full Output

Summary

summary(fit, fit.measures = TRUE, standardized = TRUE)
## lavaan 0.6-8 ended normally after 42 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        30
##                                                       
##   Number of observations                           301
##   Number of missing patterns                         1
##                                                       
## Model Test User Model:
##                                                       
##   Test statistic                                85.306
##   Degrees of freedom                                24
##   P-value (Chi-square)                           0.000
## 
## Model Test Baseline Model:
## 
##   Test statistic                               918.852
##   Degrees of freedom                                36
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.931
##   Tucker-Lewis Index (TLI)                       0.896
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -3737.745
##   Loglikelihood unrestricted model (H1)      -3695.092
##                                                       
##   Akaike (AIC)                                7535.490
##   Bayesian (BIC)                              7646.703
##   Sample-size adjusted Bayesian (BIC)         7551.560
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.092
##   90 Percent confidence interval - lower         0.071
##   90 Percent confidence interval - upper         0.114
##   P-value RMSEA <= 0.05                          0.001
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.060
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Observed
##   Observed information based on                Hessian
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual =~                                                             
##     x1                0.900    0.083   10.808    0.000    0.900    0.772
##     x2                0.498    0.081    6.164    0.000    0.498    0.424
##     x3                0.656    0.078    8.458    0.000    0.656    0.581
##   textual =~                                                            
##     x4                0.990    0.057   17.458    0.000    0.990    0.852
##     x5                1.102    0.063   17.601    0.000    1.102    0.855
##     x6                0.917    0.054   17.051    0.000    0.917    0.838
##   speed =~                                                              
##     x7                0.619    0.074    8.337    0.000    0.619    0.570
##     x8                0.731    0.075    9.682    0.000    0.731    0.723
##     x9                0.670    0.078    8.642    0.000    0.670    0.665
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   visual ~~                                                             
##     textual           0.459    0.063    7.225    0.000    0.459    0.459
##     speed             0.471    0.086    5.457    0.000    0.471    0.471
##   textual ~~                                                            
##     speed             0.283    0.071    3.959    0.000    0.283    0.283
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1                4.936    0.067   73.473    0.000    4.936    4.235
##    .x2                6.088    0.068   89.855    0.000    6.088    5.179
##    .x3                2.250    0.065   34.579    0.000    2.250    1.993
##    .x4                3.061    0.067   45.694    0.000    3.061    2.634
##    .x5                4.341    0.074   58.452    0.000    4.341    3.369
##    .x6                2.186    0.063   34.667    0.000    2.186    1.998
##    .x7                4.186    0.063   66.766    0.000    4.186    3.848
##    .x8                5.527    0.058   94.854    0.000    5.527    5.467
##    .x9                5.374    0.058   92.546    0.000    5.374    5.334
##     visual            0.000                               0.000    0.000
##     textual           0.000                               0.000    0.000
##     speed             0.000                               0.000    0.000
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1                0.549    0.119    4.612    0.000    0.549    0.404
##    .x2                1.134    0.104   10.875    0.000    1.134    0.821
##    .x3                0.844    0.095    8.881    0.000    0.844    0.662
##    .x4                0.371    0.048    7.739    0.000    0.371    0.275
##    .x5                0.446    0.058    7.703    0.000    0.446    0.269
##    .x6                0.356    0.043    8.200    0.000    0.356    0.298
##    .x7                0.799    0.088    9.130    0.000    0.799    0.676
##    .x8                0.488    0.092    5.321    0.000    0.488    0.477
##    .x9                0.566    0.091    6.250    0.000    0.566    0.558
##     visual            1.000                               1.000    1.000
##     textual           1.000                               1.000    1.000
##     speed             1.000                               1.000    1.000

Parameter Estimates

standardizedSolution(fit)
##        lhs op     rhs est.std    se      z pvalue ci.lower ci.upper
## 1   visual =~      x1   0.772 0.058 13.416      0    0.659    0.885
## 2   visual =~      x2   0.424 0.063  6.752      0    0.301    0.547
## 3   visual =~      x3   0.581 0.058  9.942      0    0.467    0.696
## 4  textual =~      x4   0.852 0.023 37.612      0    0.807    0.896
## 5  textual =~      x5   0.855 0.022 38.530      0    0.812    0.899
## 6  textual =~      x6   0.838 0.024 35.598      0    0.792    0.884
## 7    speed =~      x7   0.570 0.058  9.767      0    0.455    0.684
## 8    speed =~      x8   0.723 0.062 11.608      0    0.601    0.845
## 9    speed =~      x9   0.665 0.066 10.063      0    0.535    0.795
## 10      x1 ~~      x1   0.404 0.089  4.551      0    0.230    0.578
## 11      x2 ~~      x2   0.821 0.053 15.438      0    0.716    0.925
## 12      x3 ~~      x3   0.662 0.068  9.748      0    0.529    0.795
## 13      x4 ~~      x4   0.275 0.039  7.126      0    0.199    0.350
## 14      x5 ~~      x5   0.269 0.038  7.084      0    0.194    0.343
## 15      x6 ~~      x6   0.298 0.039  7.546      0    0.220    0.375
## 16      x7 ~~      x7   0.676 0.066 10.173      0    0.545    0.806
## 17      x8 ~~      x8   0.477 0.090  5.298      0    0.301    0.654
## 18      x9 ~~      x9   0.558 0.088  6.346      0    0.385    0.730
## 19  visual ~~  visual   1.000 0.000     NA     NA    1.000    1.000
## 20 textual ~~ textual   1.000 0.000     NA     NA    1.000    1.000
## 21   speed ~~   speed   1.000 0.000     NA     NA    1.000    1.000
## 22  visual ~~ textual   0.459 0.063  7.225      0    0.334    0.583
## 23  visual ~~   speed   0.471 0.086  5.457      0    0.302    0.640
## 24 textual ~~   speed   0.283 0.071  3.959      0    0.143    0.423
## 25      x1 ~1           4.235 0.182 23.272      0    3.878    4.592
## 26      x2 ~1           5.179 0.219 23.669      0    4.750    5.608
## 27      x3 ~1           1.993 0.100 20.010      0    1.798    2.188
## 28      x4 ~1           2.634 0.122 21.617      0    2.395    2.873
## 29      x5 ~1           3.369 0.149 22.623      0    3.077    3.661
## 30      x6 ~1           1.998 0.100 20.027      0    1.803    2.194
## 31      x7 ~1           3.848 0.167 23.030      0    3.521    4.176
## 32      x8 ~1           5.467 0.230 23.754      0    5.016    5.918
## 33      x9 ~1           5.334 0.225 23.716      0    4.893    5.775
## 34  visual ~1           0.000 0.000     NA     NA    0.000    0.000
## 35 textual ~1           0.000 0.000     NA     NA    0.000    0.000
## 36   speed ~1           0.000 0.000     NA     NA    0.000    0.000

Modification Indices

modificationIndices(fit, sort. = TRUE, minimum.value = 3)
##        lhs op rhs     mi    epc sepc.lv sepc.all sepc.nox
## 42  visual =~  x9 36.411  0.519   0.519    0.515    0.515
## 88      x7 ~~  x8 34.145  0.536   0.536    0.859    0.859
## 40  visual =~  x7 18.631 -0.380  -0.380   -0.349   -0.349
## 90      x8 ~~  x9 14.946 -0.423  -0.423   -0.805   -0.805
## 45 textual =~  x3  9.151 -0.269  -0.269   -0.238   -0.238
## 67      x2 ~~  x7  8.918 -0.183  -0.183   -0.192   -0.192
## 43 textual =~  x1  8.903  0.347   0.347    0.297    0.297
## 63      x2 ~~  x3  8.532  0.218   0.218    0.223    0.223
## 71      x3 ~~  x5  7.858 -0.130  -0.130   -0.212   -0.212
## 38  visual =~  x5  7.441 -0.189  -0.189   -0.147   -0.147
## 62      x1 ~~  x9  7.335  0.138   0.138    0.247    0.247
## 77      x4 ~~  x6  6.221 -0.235  -0.235   -0.646   -0.646
## 78      x4 ~~  x7  5.920  0.098   0.098    0.180    0.180
## 60      x1 ~~  x7  5.420 -0.129  -0.129   -0.195   -0.195
## 89      x7 ~~  x9  5.183 -0.187  -0.187   -0.278   -0.278
## 48 textual =~  x9  4.796  0.137   0.137    0.136    0.136
## 41  visual =~  x8  4.295 -0.189  -0.189   -0.187   -0.187
## 75      x3 ~~  x9  4.126  0.102   0.102    0.147    0.147
## 79      x4 ~~  x8  3.805 -0.069  -0.069   -0.162   -0.162
## 55      x1 ~~  x2  3.606 -0.184  -0.184   -0.233   -0.233
## 57      x1 ~~  x4  3.554  0.078   0.078    0.173    0.173
## 47 textual =~  x8  3.359 -0.120  -0.120   -0.118   -0.118


SEM

# specify the model
model <- '
# measurement model
ind60 =~ x1 + x2 + x3
dem60 =~ y1 + y2 + y3 + y4
dem65 =~ y5 + y6 + y7 + y8

# regressions
dem60 ~ ind60
dem65 ~ ind60 + dem60

# covariances
y1 ~~ y5
y2 ~~ y4 + y6
y3 ~~ y7
y4 ~~ y8
y6 ~~ y8

# variances

'
# fit model
fit <- sem(model = model, data = sem_data, mimic = "lavaan", 
           estimator = "ML", missing = "ML", 
           std.lv = FALSE, std.ov = FALSE, test = "standard", 
           se = "standard", bootstrap = 1000)
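
If you want to go beyond the tabs shown below, nested lavaan models can also be compared with a likelihood-ratio test via anova(). A hypothetical sketch (fit_full and fit_nodirect are illustrative names; the correlated errors from the model above are omitted here for brevity):

```r
library(lavaan)

# Does the direct ind60 -> dem65 path improve fit? Compare the model
# with the direct path to a constrained version without it.
model_full <- '
ind60 =~ x1 + x2 + x3
dem60 =~ y1 + y2 + y3 + y4
dem65 =~ y5 + y6 + y7 + y8
dem60 ~ ind60
dem65 ~ ind60 + dem60
'
model_nodirect <- '
ind60 =~ x1 + x2 + x3
dem60 =~ y1 + y2 + y3 + y4
dem65 =~ y5 + y6 + y7 + y8
dem60 ~ ind60
dem65 ~ dem60
'
fit_full     <- sem(model_full, data = PoliticalDemocracy)
fit_nodirect <- sem(model_nodirect, data = PoliticalDemocracy)
anova(fit_nodirect, fit_full)  # likelihood-ratio test of the direct path
```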

Summary Output

sem_sig(fit)
Model Significance
Sample.Size Chi.Square df p.value
75 38.125 35 0.329
sem_fitmeasures(fit)
Model Fit Measures
CFI RMSEA RMSEA.Lower RMSEA.Upper AIC BIC
0.995 0.035 0 0.092 3179.582 3276.916
sem_factorloadings(fit, standardized = TRUE, ci = "standardized")
Factor Loadings
Standardized
Latent Factor Indicator Loadings sig p Lower.CI Upper.CI SE z
ind60 x1 0.920 *** 0 0.874 0.965 0.023 39.658
ind60 x2 0.973 *** 0 0.941 1.005 0.017 58.917
ind60 x3 0.872 *** 0 0.812 0.933 0.031 28.304
dem60 y1 0.850 *** 0 0.765 0.936 0.044 19.435
dem60 y2 0.717 *** 0 0.592 0.843 0.064 11.207
dem60 y3 0.722 *** 0 0.596 0.849 0.064 11.221
dem60 y4 0.846 *** 0 0.759 0.933 0.044 19.020
dem65 y5 0.808 *** 0 0.713 0.903 0.048 16.698
dem65 y6 0.746 *** 0 0.634 0.858 0.057 13.031
dem65 y7 0.824 *** 0 0.734 0.913 0.046 18.063
dem65 y8 0.828 *** 0 0.738 0.918 0.046 18.030
sem_paths(fit, standardized = TRUE, ci = "standardized")
Regression Paths
Standardized
Predictor DV Path Values SE z sig p Lower.CI Upper.CI
ind60 dem60 0.447 0.105 4.267 *** 0.000 0.242 0.652
ind60 dem65 0.182 0.073 2.498 * 0.013 0.039 0.325
dem60 dem65 0.885 0.052 17.100 *** 0.000 0.784 0.987
sem_factorcor(fit)
Latent Factor Correlations
Factor 1 Factor 2 r sig p Lower.CI Upper.CI SE
sem_factorvar(fit)
Latent Factor Variance/Residual Variance
Factor 1 Factor 2 var var.std sig p
ind60 ind60 0.448 1.000 *** 0.000
dem60 dem60 3.956 0.800 *** 0.000
dem65 dem65 0.172 0.039 0.434
sem_rsquared(fit)
R-Squared Values
Variable R-Squared
dem60 0.1995522
dem65 0.9609949

Diagram Output

Compared to the CFA figure above, I modified the R code chunk option for figure width to fig.width = 10 to make the image wider. In the ‘Code’ I also changed the parameter for edge labels to edge.label.cex = .8.
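
In the .Rmd source, a chunk header that widens the figure might look like this (the chunk label is made up for illustration):

````
```{r sem-diagram, fig.width = 10}
# semPaths(...) call from this section goes here
```
````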

semPaths(fit, whatLabels = "std", layout = "tree2", 
         rotation = 2, style = "lisrel", optimizeLatRes = TRUE, 
         intercepts = FALSE, residuals = TRUE, curve = 1, curvature = 3, 
         sizeLat = 10, nCharNodes = 8, sizeMan = 11, sizeMan2 = 4, 
         edge.label.cex = .8, edge.color = "#000000")

Residual Correlation Matrix

sem_residuals(fit)
x1 x2 x3 y1 y2 y3 y4 y5 y6 y7 y8
x1
x2 0.00
x3 0.00 0.00
y1 0.03 -0.05 -0.08
y2 -0.08 -0.06 -0.07 -0.01
y3 0.03 0.00 -0.06 0.06 -0.07
y4 0.12 0.08 0.06 -0.03 0.01 0.00
y5 0.14 0.07 0.02 -0.02 -0.02 0.01 -0.01
y6 -0.05 -0.06 -0.04 0.04 0.02 -0.09 0.05 -0.04
y7 -0.05 -0.06 -0.06 0.00 0.01 0.00 0.01 0.01 -0.01
y8 0.02 -0.01 -0.05 -0.01 0.03 -0.05 0.03 -0.04 0.01 0.03

Full Output

Summary

summary(fit, fit.measures = TRUE, standardized = TRUE)
## lavaan 0.6-8 ended normally after 83 iterations
## 
##   Estimator                                         ML
##   Optimization method                           NLMINB
##   Number of model parameters                        42
##                                                       
##   Number of observations                            75
##   Number of missing patterns                         1
##                                                       
## Model Test User Model:
##                                                       
##   Test statistic                                38.125
##   Degrees of freedom                                35
##   P-value (Chi-square)                           0.329
## 
## Model Test Baseline Model:
## 
##   Test statistic                               730.654
##   Degrees of freedom                                55
##   P-value                                        0.000
## 
## User Model versus Baseline Model:
## 
##   Comparative Fit Index (CFI)                    0.995
##   Tucker-Lewis Index (TLI)                       0.993
## 
## Loglikelihood and Information Criteria:
## 
##   Loglikelihood user model (H0)              -1547.791
##   Loglikelihood unrestricted model (H1)      -1528.728
##                                                       
##   Akaike (AIC)                                3179.582
##   Bayesian (BIC)                              3276.916
##   Sample-size adjusted Bayesian (BIC)         3144.543
## 
## Root Mean Square Error of Approximation:
## 
##   RMSEA                                          0.035
##   90 Percent confidence interval - lower         0.000
##   90 Percent confidence interval - upper         0.092
##   P-value RMSEA <= 0.05                          0.611
## 
## Standardized Root Mean Square Residual:
## 
##   SRMR                                           0.041
## 
## Parameter Estimates:
## 
##   Standard errors                             Standard
##   Information                                 Observed
##   Observed information based on                Hessian
## 
## Latent Variables:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   ind60 =~                                                              
##     x1                1.000                               0.670    0.920
##     x2                2.180    0.139   15.685    0.000    1.460    0.973
##     x3                1.819    0.152   11.949    0.000    1.218    0.872
##   dem60 =~                                                              
##     y1                1.000                               2.223    0.850
##     y2                1.257    0.186    6.775    0.000    2.794    0.717
##     y3                1.058    0.148    7.131    0.000    2.351    0.722
##     y4                1.265    0.151    8.391    0.000    2.812    0.846
##   dem65 =~                                                              
##     y5                1.000                               2.103    0.808
##     y6                1.186    0.171    6.920    0.000    2.493    0.746
##     y7                1.280    0.160    7.978    0.000    2.691    0.824
##     y8                1.266    0.163    7.756    0.000    2.662    0.828
## 
## Regressions:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##   dem60 ~                                                               
##     ind60             1.483    0.397    3.733    0.000    0.447    0.447
##   dem65 ~                                                               
##     ind60             0.572    0.234    2.449    0.014    0.182    0.182
##     dem60             0.837    0.099    8.476    0.000    0.885    0.885
## 
## Covariances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##  .y1 ~~                                                                 
##    .y5                0.624    0.369    1.690    0.091    0.624    0.296
##  .y2 ~~                                                                 
##    .y4                1.313    0.699    1.879    0.060    1.313    0.273
##    .y6                2.153    0.726    2.964    0.003    2.153    0.356
##  .y3 ~~                                                                 
##    .y7                0.795    0.621    1.280    0.201    0.795    0.191
##  .y4 ~~                                                                 
##    .y8                0.348    0.458    0.761    0.447    0.348    0.109
##  .y6 ~~                                                                 
##    .y8                1.356    0.572    2.371    0.018    1.356    0.338
## 
## Intercepts:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1                5.054    0.084   60.127    0.000    5.054    6.943
##    .x2                4.792    0.173   27.657    0.000    4.792    3.194
##    .x3                3.558    0.161   22.066    0.000    3.558    2.548
##    .y1                5.465    0.302   18.104    0.000    5.465    2.090
##    .y2                4.256    0.450    9.461    0.000    4.256    1.093
##    .y3                6.563    0.376   17.460    0.000    6.563    2.016
##    .y4                4.453    0.384   11.598    0.000    4.453    1.339
##    .y5                5.136    0.301   17.092    0.000    5.136    1.974
##    .y6                2.978    0.386    7.717    0.000    2.978    0.891
##    .y7                6.196    0.377   16.427    0.000    6.196    1.897
##    .y8                4.043    0.371   10.889    0.000    4.043    1.257
##     ind60             0.000                               0.000    0.000
##    .dem60             0.000                               0.000    0.000
##    .dem65             0.000                               0.000    0.000
## 
## Variances:
##                    Estimate  Std.Err  z-value  P(>|z|)   Std.lv  Std.all
##    .x1                0.082    0.020    4.136    0.000    0.082    0.154
##    .x2                0.120    0.070    1.712    0.087    0.120    0.053
##    .x3                0.467    0.089    5.233    0.000    0.467    0.239
##    .y1                1.891    0.469    4.035    0.000    1.891    0.277
##    .y2                7.373    1.346    5.479    0.000    7.373    0.486
##    .y3                5.067    0.968    5.233    0.000    5.067    0.478
##    .y4                3.148    0.756    4.165    0.000    3.148    0.285
##    .y5                2.351    0.489    4.810    0.000    2.351    0.347
##    .y6                4.954    0.895    5.532    0.000    4.954    0.443
##    .y7                3.431    0.728    4.715    0.000    3.431    0.322
##    .y8                3.254    0.707    4.603    0.000    3.254    0.315
##     ind60             0.448    0.087    5.170    0.000    1.000    1.000
##    .dem60             3.956    0.945    4.188    0.000    0.800    0.800
##    .dem65             0.172    0.220    0.783    0.434    0.039    0.039

Parameter Estimates

standardizedSolution(fit)
##      lhs op   rhs est.std    se      z pvalue ci.lower ci.upper
## 1  ind60 =~    x1   0.920 0.023 39.658  0.000    0.874    0.965
## 2  ind60 =~    x2   0.973 0.017 58.917  0.000    0.941    1.005
## 3  ind60 =~    x3   0.872 0.031 28.304  0.000    0.812    0.933
## 4  dem60 =~    y1   0.850 0.044 19.435  0.000    0.765    0.936
## 5  dem60 =~    y2   0.717 0.064 11.207  0.000    0.592    0.843
## 6  dem60 =~    y3   0.722 0.064 11.221  0.000    0.596    0.849
## 7  dem60 =~    y4   0.846 0.044 19.020  0.000    0.759    0.933
## 8  dem65 =~    y5   0.808 0.048 16.698  0.000    0.713    0.903
## 9  dem65 =~    y6   0.746 0.057 13.031  0.000    0.634    0.858
## 10 dem65 =~    y7   0.824 0.046 18.063  0.000    0.734    0.913
## 11 dem65 =~    y8   0.828 0.046 18.030  0.000    0.738    0.918
## 12 dem60  ~ ind60   0.447 0.105  4.267  0.000    0.242    0.652
## 13 dem65  ~ ind60   0.182 0.073  2.498  0.013    0.039    0.325
## 14 dem65  ~ dem60   0.885 0.052 17.100  0.000    0.784    0.987
## 15    y1 ~~    y5   0.296 0.142  2.081  0.037    0.017    0.574
## 16    y2 ~~    y4   0.273 0.121  2.259  0.024    0.036    0.509
## 17    y2 ~~    y6   0.356 0.098  3.652  0.000    0.165    0.547
## 18    y3 ~~    y7   0.191 0.137  1.387  0.166   -0.079    0.460
## 19    y4 ~~    y8   0.109 0.135  0.803  0.422   -0.157    0.374
## 20    y6 ~~    y8   0.338 0.111  3.032  0.002    0.119    0.556
## 21    x1 ~~    x1   0.154 0.043  3.606  0.000    0.070    0.238
## 22    x2 ~~    x2   0.053 0.032  1.655  0.098   -0.010    0.116
## 23    x3 ~~    x3   0.239 0.054  4.454  0.000    0.134    0.345
## 24    y1 ~~    y1   0.277 0.074  3.719  0.000    0.131    0.423
## 25    y2 ~~    y2   0.486 0.092  5.293  0.000    0.306    0.666
## 26    y3 ~~    y3   0.478 0.093  5.142  0.000    0.296    0.660
## 27    y4 ~~    y4   0.285 0.075  3.787  0.000    0.137    0.432
## 28    y5 ~~    y5   0.347 0.078  4.439  0.000    0.194    0.500
## 29    y6 ~~    y6   0.443 0.085  5.192  0.000    0.276    0.611
## 30    y7 ~~    y7   0.322 0.075  4.281  0.000    0.174    0.469
## 31    y8 ~~    y8   0.315 0.076  4.139  0.000    0.166    0.464
## 32 ind60 ~~ ind60   1.000 0.000     NA     NA    1.000    1.000
## 33 dem60 ~~ dem60   0.800 0.094  8.557  0.000    0.617    0.984
## 34 dem65 ~~ dem65   0.039 0.050  0.785  0.433   -0.058    0.136
## 35    x1 ~1         6.943 0.579 12.001  0.000    5.809    8.077
## 36    x2 ~1         3.194 0.285 11.199  0.000    2.635    3.753
## 37    x3 ~1         2.548 0.238 10.709  0.000    2.082    3.014
## 38    y1 ~1         2.090 0.206 10.129  0.000    1.686    2.495
## 39    y2 ~1         1.093 0.146  7.507  0.000    0.807    1.378
## 40    y3 ~1         2.016 0.201 10.034  0.000    1.622    2.410
## 41    y4 ~1         1.339 0.159  8.423  0.000    1.028    1.651
## 42    y5 ~1         1.974 0.198  9.943  0.000    1.585    2.363
## 43    y6 ~1         0.891 0.136  6.535  0.000    0.624    1.158
## 44    y7 ~1         1.897 0.193  9.816  0.000    1.518    2.276
## 45    y8 ~1         1.257 0.154  8.140  0.000    0.955    1.560
## 46 ind60 ~1         0.000 0.000     NA     NA    0.000    0.000
## 47 dem60 ~1         0.000 0.000     NA     NA    0.000    0.000
## 48 dem65 ~1         0.000 0.000     NA     NA    0.000    0.000

Modification Indices

modificationIndices(fit, sort. = TRUE, minimum.value = 3)
##      lhs op rhs    mi    epc sepc.lv sepc.all sepc.nox
## 52 ind60 =~  y4 4.796  0.862   0.577    0.174    0.174
## 53 ind60 =~  y5 4.456  0.835   0.559    0.215    0.215
## 70 dem65 =~  y4 4.261  1.420   2.986    0.898    0.898
## 99    y1 ~~  y3 3.771  0.849   0.849    0.274    0.274
## 74    x1 ~~  y2 3.040 -0.155  -0.155   -0.200   -0.200


Session Info

citation("lavaan")

To cite lavaan in publications use:

  Yves Rosseel (2012). lavaan: An R Package for Structural Equation
  Modeling. Journal of Statistical Software, 48(2), 1-36. URL
  https://www.jstatsoft.org/v48/i02/.

A BibTeX entry for LaTeX users is

  @Article{,
    title = {{lavaan}: An {R} Package for Structural Equation Modeling},
    author = {Yves Rosseel},
    journal = {Journal of Statistical Software},
    year = {2012},
    volume = {48},
    number = {2},
    pages = {1--36},
    url = {https://www.jstatsoft.org/v48/i02/},
  }
citation()

To cite R in publications use:

  R Core Team (2021). R: A language and environment for statistical
  computing. R Foundation for Statistical Computing, Vienna, Austria.
  URL https://www.R-project.org/.

A BibTeX entry for LaTeX users is

  @Manual{,
    title = {R: A Language and Environment for Statistical Computing},
    author = {{R Core Team}},
    organization = {R Foundation for Statistical Computing},
    address = {Vienna, Austria},
    year = {2021},
    url = {https://www.R-project.org/},
  }

We have invested a lot of time and effort in creating R, please cite it
when using it for data analysis. See also 'citation("pkgname")' for
citing R packages.
sessionInfo()
R version 4.1.0 (2021-05-18)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Big Sur 10.16

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRblas.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] sjPlot_2.8.8    semPlot_1.1.2   semoutput_1.0.0 psych_2.1.3    
[5] lavaan_0.6-8    dplyr_1.0.6     here_1.0.1      readr_1.4.0    

loaded via a namespace (and not attached):
  [1] backports_1.2.1     Hmisc_4.5-0         systemfonts_1.0.2  
  [4] plyr_1.8.6          igraph_1.2.6        splines_4.1.0      
  [7] ggplot2_3.3.3       digest_0.6.27       htmltools_0.5.1.1  
 [10] matrixcalc_1.0-4    fansi_0.5.0         magrittr_2.0.1     
 [13] Rsolnp_1.16         checkmate_2.0.0     lisrelToR_0.1.4    
 [16] cluster_2.1.2       openxlsx_4.2.3      modelr_0.1.8       
 [19] svglite_2.0.0       jpeg_0.1-8.1        sem_3.1-11         
 [22] colorspace_2.0-1    rvest_1.0.0         xfun_0.23          
 [25] crayon_1.4.1        jsonlite_1.7.2      lme4_1.1-27        
 [28] regsem_1.8.0        survival_3.2-11     glue_1.4.2         
 [31] kableExtra_1.3.4    gtable_0.3.0        emmeans_1.6.1      
 [34] webshot_0.5.2       mi_1.0              sjstats_0.18.1     
 [37] sjmisc_2.8.7        abind_1.4-5         scales_1.1.1       
 [40] mvtnorm_1.1-1       DBI_1.1.1           ggeffects_1.1.0    
 [43] Rcpp_1.0.6          viridisLite_0.4.0   xtable_1.8-4       
 [46] performance_0.7.2   htmlTable_2.2.1     tmvnsim_1.0-2      
 [49] foreign_0.8-81      proxy_0.4-25        Formula_1.2-4      
 [52] stats4_4.1.0        truncnorm_1.0-8     httr_1.4.2         
 [55] htmlwidgets_1.5.3   RColorBrewer_1.1-2  ellipsis_0.3.2     
 [58] pkgconfig_2.0.3     XML_3.99-0.6        nnet_7.3-16        
 [61] sass_0.4.0          kutils_1.70         utf8_1.2.1         
 [64] tidyselect_1.1.1    rlang_0.4.11        reshape2_1.4.4     
 [67] effectsize_0.4.5    munsell_0.5.0       tools_4.1.0        
 [70] generics_0.1.0      sjlabelled_1.1.8    broom_0.7.6        
 [73] fdrtool_1.2.16      evaluate_0.14       stringr_1.4.0      
 [76] arm_1.11-2          yaml_2.2.1          knitr_1.33         
 [79] zip_2.2.0           purrr_0.3.4         glasso_1.11        
 [82] pbapply_1.4-3       nlme_3.1-152        xml2_1.3.2         
 [85] compiler_4.1.0      rstudioapi_0.13     png_0.1-7          
 [88] e1071_1.7-7         tibble_3.1.2        bslib_0.2.5.1      
 [91] pbivnorm_0.6.0      stringi_1.6.2       highr_0.9          
 [94] parameters_0.14.0   qgraph_1.6.9        rockchalk_1.8.144  
 [97] lattice_0.20-44     Matrix_1.3-3        nloptr_1.2.2.2     
[100] vctrs_0.3.8         pillar_1.6.1        lifecycle_1.0.0    
[103] jquerylib_0.1.4     OpenMx_2.19.5       estimability_1.3   
[106] data.table_1.14.0   insight_0.14.1      corpcor_1.6.9      
[109] R6_2.5.0            latticeExtra_0.6-29 gridExtra_2.3      
[112] boot_1.3-28         MASS_7.3-54         gtools_3.8.2       
[115] assertthat_0.2.1    rprojroot_2.0.2     mnormt_2.0.2       
[118] bayestestR_0.10.0   parallel_4.1.0      hms_1.1.0          
[121] grid_4.1.0          rpart_4.1-15        tidyr_1.1.3        
[124] coda_0.19-4         class_7.3-19        minqa_1.2.4        
[127] rmarkdown_2.8       carData_3.0-4       base64enc_0.1-3    
---
title: "Document Title"
output: 
  html_document:
    code_download: yes
    code_folding: hide
    toc: true
    toc_float:
      collapsed: false
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(error = TRUE, message = FALSE, warning = TRUE)
```

Last updated: `r format(Sys.Date(), "%B %d %Y")`

See [Package References](https://dr-jt.github.io/semoutput/reference/index.html){target="_blank"} for help documentation on `semoutput` functions

**NOTE: We will use two data sets in this example output. They come from the `lavaan` tutorial data sets; the CFA data set is `HolzingerSwineford1939` and the SEM data set is `PoliticalDemocracy`.**

**NOTE: You can download the R Markdown file associated with this output by selecting the 'Code' button at the top-right of this document and selecting "Download Rmd". You can also display the code used to produce each output by selecting the 'Code' buttons along the right side of the document.**

**NOTE: For each CFA or SEM model output block there are 4 tabs:**

-   The "Summary Output" tab displays nice-looking tables summarizing the model results

-   The "Diagram Output" tab will display a model diagram

-   The "Residual Correlation Matrix" tab will display the residual correlation matrix

-   The "Full Output" tab will display the results from `summary()` along with parameter estimates and modification indices. This way you can still get the full output from a `lavaan` model, which provides more information than the "Summary Output". You can also add additional output to this section if you need more info about the model.

**Once you install the package, you can access an R Markdown template by going to:**

File -\> New File -\> R Markdown... -\> From Template -\> CFA/SEM (lavaan)
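If you prefer the console to the RStudio menus, `rmarkdown::draft()` can create the same file. The template id `"cfa_sem"` below is an assumption, not confirmed by this document; check the folder names under `inst/rmarkdown/templates/` in the installed `semoutput` package for the actual id.

```r
# Create a new Rmd file from the semoutput template without using the menus.
# NOTE: the template id "cfa_sem" is a guess; list the installed template
# folders to find the real one:
# dir(system.file("rmarkdown/templates", package = "semoutput"))
rmarkdown::draft("my_analysis.Rmd", template = "cfa_sem",
                 package = "semoutput", edit = FALSE)
```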

# Setup

Required Packages

```{r warning=FALSE}
library(readr)
library(here)
library(dplyr)
library(lavaan)
library(psych)
library(semoutput)
library(semPlot)
library(sjPlot)
```

Import Data

```{r warning=FALSE}
## Import Data
# data <- read_csv(here("relative file path", "file name"))
cfa_data <- dplyr::select(HolzingerSwineford1939, -id, -school)
sem_data <- PoliticalDemocracy
```


------------------------------------------------------------------------

------------------------------------------------------------------------

# Descriptives

**Typically only one descriptive table is displayed, but since this example output uses two data sets, two descriptive tables are shown.**

```{r}
# Prints basic descriptive statistics
sem_descriptives(cfa_data)
sem_descriptives(sem_data)
```

------------------------------------------------------------------------

# Correlation Matrix

**There are also two correlation matrices.**

**This is a publication quality correlation matrix that can be inserted into a manuscript.**

```{r}
# Uses sjPlot to print a nice looking correlation table
tab_corr(cfa_data, na.deletion = "pairwise", digits = 2, triangle = "lower")
tab_corr(sem_data, na.deletion = "pairwise", digits = 2, triangle = "lower")
```

------------------------------------------------------------------------

------------------------------------------------------------------------

# EFA {.tabset .tabset-pills}

```{r}
efa_data <- dplyr::select(cfa_data, dplyr::starts_with("x"))

## Conduct the EFA, extracting the specified number of factors (nfactors)
efa_fit <- fa(efa_data, fm = "pa", nfactors = 3, rotate = "varimax")
```

## Summary Output

```{r}
efa_method(efa_fit)
efa_var(efa_fit)
efa_loadings(efa_fit)
efa_rotmatrix(efa_fit)
```

## Diagram Output

```{r}
fa.diagram(efa_fit)

## Determine the number of factors to extract
VSS.scree(efa_data)
fa.parallel(efa_data, fa = "fa")
VSS(efa_data, n = 4, rotate = "varimax")
```

## Full Output

```{r}
efa_fit
```

------------------------------------------------------------------------

------------------------------------------------------------------------

# CFA {.tabset .tabset-pills}

```{r}
# specify the model
model <- '
# latent factors
visual =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed =~ x7 + x8 + x9

# correlated errors

# constraints

'
# fit model
fit <- cfa(model = model, data = cfa_data, mimic = "lavaan", 
           estimator = "ML", missing = "ML", 
           std.lv = TRUE, std.ov = FALSE, test = "standard", 
           se = "standard", bootstrap = 1000)
```
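Note that in the chunk above `se = "standard"`, so the `bootstrap = 1000` argument has no effect; in `lavaan`, `bootstrap` only sets the number of draws when `se = "bootstrap"` (or a Bollen-Stine test) is requested. A minimal sketch of a fit with actual bootstrapped standard errors, using a smaller single-factor model and fewer draws to keep it fast:

```r
library(lavaan)

# Single-factor toy model on the built-in Holzinger-Swineford data
mini_model <- 'visual =~ x1 + x2 + x3'

# se = "bootstrap" activates the bootstrap argument (number of draws)
fit_boot <- cfa(mini_model, data = HolzingerSwineford1939,
                std.lv = TRUE, se = "bootstrap", bootstrap = 100)

parameterEstimates(fit_boot)  # standard errors are now bootstrap-based
```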

## Summary Output

```{r}
sem_sig(fit)
sem_fitmeasures(fit)
sem_factorloadings(fit, standardized = TRUE, ci = "standardized")
sem_factorcor(fit)
```

## Diagram Output

```{r}
semPaths(fit, whatLabels = "std", layout = "tree2",
         rotation = 2, style = "lisrel", optimizeLatRes = TRUE, 
         intercepts = FALSE, residuals = TRUE, curve = 1, curvature = 3, 
         sizeLat = 10, nCharNodes = 8, sizeMan = 11, sizeMan2 = 4, 
         edge.label.cex = 1.2, edge.color = "#000000")
```

## Residual Correlation Matrix

```{r}
sem_residuals(fit)
```

## Full Output

### Summary

```{r}
summary(fit, fit.measures = TRUE, standardized = TRUE)
```

### Parameter Estimates

```{r}
standardizedSolution(fit)
```

### Modification Indices

```{r}
modificationIndices(fit, sort. = TRUE, minimum.value = 3)
```

<hr>
<hr>

# SEM {.tabset .tabset-pills}

```{r}
# specify the model
model <- '
# measurement model
ind60 =~ x1 + x2 + x3
dem60 =~ y1 + y2 + y3 + y4
dem65 =~ y5 + y6 + y7 + y8

# regressions
dem60 ~ ind60
dem65 ~ ind60 + dem60

# covariances
y1 ~~ y5
y2 ~~ y4 + y6
y3 ~~ y7
y4 ~~ y8
y6 ~~ y8

# variances

'
# fit model
fit <- sem(model = model, data = sem_data, mimic = "lavaan", 
           estimator = "ML", missing = "ML", 
           std.lv = FALSE, std.ov = FALSE, test = "standard", 
           se = "standard", bootstrap = 1000)
```

## Summary Output

```{r}
sem_sig(fit)
sem_fitmeasures(fit)
sem_factorloadings(fit, standardized = TRUE, ci = "standardized")
sem_paths(fit, standardized = TRUE, ci = "standardized")
sem_factorcor(fit)
sem_factorvar(fit)
sem_rsquared(fit)
```

## Diagram Output

**Compared to the CFA figure above, I modified the R code chunk option for figure width to `fig.width = 10` to make the image wider. In the 'Code' I also changed the parameter for edge labels to `edge.label.cex = .8`.**

```{r fig.width=10}
semPaths(fit, whatLabels = "std", layout = "tree2",
         rotation = 2, style = "lisrel", optimizeLatRes = TRUE, 
         intercepts = FALSE, residuals = TRUE, curve = 1, curvature = 3, 
         sizeLat = 10, nCharNodes = 8, sizeMan = 11, sizeMan2 = 4, 
         edge.label.cex = .8, edge.color = "#000000")
```

## Residual Correlation Matrix

```{r}
sem_residuals(fit)
```

## Full Output

### Summary

```{r}
summary(fit, fit.measures = TRUE, standardized = TRUE)
```

### Parameter Estimates

```{r}
standardizedSolution(fit)
```

### Modification Indices

```{r}
modificationIndices(fit, sort. = TRUE, minimum.value = 3)
```

------------------------------------------------------------------------

------------------------------------------------------------------------

# Session Info

```{r comment=""}
citation("lavaan")
citation()
sessionInfo()
```
