Summary of conclusions
| Aspect | Rönkkö et al.’s (2023) position | Our response |
|---|---|---|
| “Fallacy #1”: PLS-SEM maximizes explained variance or R² | PLS-SEM’s optimization criterion is ambiguous. The method does not maximize the R², and canonical correlations achieve higher levels of explained variance | PLS-SEM seeks to minimize the residuals in the relationships between composites and indicators (i.e., in the measurement models) as well as in the relationships between composites (i.e., in the structural model). While related, canonical correlation analysis (CCA) and PLS-SEM rely on different models, making the authors’ empirical comparison of the two methods meaningless. Specifically, the two methods produce equivalent results only for a two-construct model estimated via PLS-SEM Mode B |
| “Fallacy #2”: PLS-SEM weights improve reliability | PLS-SEM-based weights do not improve reliability, and using equal weights is a simpler and more robust solution | PLS-SEM’s ability to improve reliability has been shown both analytically and through simulation studies (see the formula sketch after the table). Assuming equal weights overlooks the reliability and validity issues this simplification introduces and limits the model’s practical utility |
| “Untold fact”: PLS-SEM weights can bias correlations | When two constructs are only weakly correlated, PLS-SEM inflates path coefficients. Cross-loadings further inflate these biases | PLS-SEM inflates path coefficient estimates only in models where the constructs are perfectly uncorrelated. Such a setting constitutes a well-known boundary condition for PLS-SEM that is extremely unlikely to occur in empirical applications. More importantly, this feature has no consequences for inference testing, as it does not produce false positive rates notably different from the expected level (e.g., 5%). Researchers should nevertheless avoid models in which an endogenous construct is related to only one other construct (e.g., chain-like models). Cross-loadings violate a fundamental requirement of the PLS-SEM method. Future research should assess the impact of cross-loadings on model estimates and establish measures to assess the severity of their effect |
| The composite equivalence index (CEI) | Researchers should routinely use the CEI to assess whether the indicator weighting provides any value-added beyond equal weights | We do not address this aspect in this article but refer to Hair et al. (2024b). Their article shows that the CEI lacks discriminatory power and conceals both reliability concerns in reflective measurement models and differences in relative indicator contributions in formative measurement models (a minimal sketch of the underlying score comparison follows the table). Researchers should therefore not use the CEI, as doing so would compromise the validity of their results |
| “Fallacy #3”: using AVE and composite reliability with PLS-SEM to validate measurement | The AVE, the Fornell-Larcker criterion, and the composite reliability (ρA) do not disclose model misspecifications | The critics selectively use metrics and settings in which PLS-SEM does not identify misspecified models. Considering the standard range of model evaluation metrics discloses the misspecifications in all cases (the metrics at issue are defined after the table). In addition, content validity concerns would prevent any researcher from using the model setups the authors considered |
| General conclusion | PLS-SEM use should generally be avoided | PLS-SEM fits perfectly into the marketing research landscape, which aims not only to test theories but also to derive managerial implications that are predictive in nature. PLS-SEM works well in achieving this objective, as the method follows a causal-predictive paradigm, where the aim is to test the predictive power within the confines of a model carefully developed on the grounds of theory and logic |
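To make the reliability argument in the “Fallacy #2” row concrete, the following formula sketch states the reliability of a weighted composite under a standard congeneric measurement model. The notation (loadings λ, error variances θ, weights w) is ours, introduced for illustration, and is not drawn from either article.

```latex
% Congeneric model: x_i = \lambda_i \xi + \epsilon_i with \operatorname{Var}(\xi) = 1,
% mutually uncorrelated errors, and \theta_i = \operatorname{Var}(\epsilon_i).
% Reliability of the composite C = \sum_i w_i x_i:
\rho(C) = \frac{\bigl(\sum_i w_i \lambda_i\bigr)^{2}}
               {\bigl(\sum_i w_i \lambda_i\bigr)^{2} + \sum_i w_i^{2}\,\theta_i}
% Equal weights set w_i = 1/K. Weights proportional to \lambda_i / \theta_i
% maximize \rho(C), so when loadings are unequal, data-driven weighting can
% only match or exceed the reliability achieved with equal weights.
```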
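For the “Fallacy #3” row, the metrics under discussion are defined as follows for a construct ξ_j measured by K_j standardized indicators. These are the textbook definitions, restated here for reference rather than taken from either article.

```latex
% Average variance extracted (AVE) with standardized indicators:
\mathrm{AVE}_j = \frac{1}{K_j} \sum_{i=1}^{K_j} \lambda_{ij}^{2}
% Fornell-Larcker criterion: a construct should share more variance with its
% own indicators than with any other construct k, where r_{jk} denotes the
% correlation between constructs j and k:
\mathrm{AVE}_j > \max_{k \neq j} r_{jk}^{2}
```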
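To illustrate the kind of score comparison the CEI formalizes, the following minimal Python sketch correlates composite scores built with estimated indicator weights against equal-weight (sum score) composites. The simulated data, the weight vectors, and all variable names are illustrative assumptions; this is not the published specification of the index.

```python
import numpy as np

def composite_scores(X, weights):
    """Return standardized composite scores for an n x K indicator matrix X."""
    scores = X @ weights
    return (scores - scores.mean()) / scores.std(ddof=1)

# Simulated setup (assumed): 200 cases, one construct, three indicators with
# deliberately unequal population loadings.
rng = np.random.default_rng(42)
latent = rng.normal(size=200)
loadings = np.array([0.9, 0.7, 0.5])
noise = rng.normal(scale=np.sqrt(1 - loadings**2), size=(200, 3))
X = latent[:, None] * loadings + noise

w_estimated = np.array([0.55, 0.30, 0.15])  # stand-in for estimated PLS-SEM weights
w_equal = np.full(3, 1 / 3)                 # equal (sum score) weights

# The CEI-style quantity: how similar are the two sets of composite scores?
r = np.corrcoef(composite_scores(X, w_estimated),
                composite_scores(X, w_equal))[0, 1]
print(f"Correlation between weighted and equal-weight composites: {r:.3f}")
```

With correlated indicators, this correlation is typically very high even when the weights differ markedly, which is the core of Hair et al.’s (2024b) argument that such a score-level comparison lacks discriminatory power and can conceal reliability differences and unequal indicator contributions.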