Decálogo para el Análisis Factorial de los Ítems de un Test

Authors:

  1. Pere J. Ferrando (2)
  2. Urbano Lorenzo-Seva (2)
  3. Ana Hernández-Dorado (1)
  4. José Muñiz (1)

Affiliations:

  1. Universidad Nebrija, Madrid, Spain. ROR: https://ror.org/03tzyrt94
  2. Universitat Rovira i Virgili, Tarragona, Spain. ROR: https://ror.org/00g5sqv46

Journal: Psicothema

ISSN: 0214-9915 / 1886-144X

Year of publication: 2022

Volume: 34

Issue: 1

Pages: 7-17

Type: Article

DOI: 10.7334/psicothema2021.456 (open access)

Abstract

Background: when studying the psychometric properties of test items, a fundamental aspect is the analysis of their structure. The aim of the present work is to provide guidelines for carrying out the factor analysis of test items in a rigorous and systematic way. Method: a review of the recent literature was carried out to identify the essential steps to follow in order to perform an adequate factor analysis of test items. Results: ten main recommendations for the factor analysis of test items were identified: adequacy of the data and the sample, univariate statistics, justification of the analysis, selection of the items to be analyzed, type of model, most appropriate solution, parameter estimation, adequacy of the factor solution, substantive coherence of the model, and final version of the test. Conclusion: systematically following the ten proposed recommendations will optimize the quality of tests and of the decisions based on the score estimates obtained from them. These guidelines are recommended both in research settings and in more applied and professional contexts.
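
The article develops each of these ten recommendations in detail; as a rough illustration only, the sketch below shows how a few of them (univariate statistics, justification of the analysis, and fitting an exploratory solution) might be checked. It is a minimal sketch, not the authors' procedure, and it assumes the third-party Python packages pandas and factor_analyzer; the file name "items.csv" and the choice of two factors are hypothetical placeholders.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Load the item responses: one row per respondent, one column per item
# ("items.csv" is a hypothetical placeholder).
items = pd.read_csv("items.csv")

# Univariate statistics: means, standard deviations, and ranges of each item.
print(items.describe())

# Justification of the analysis: Bartlett's sphericity test and the KMO index
# indicate whether the correlation matrix is worth factor-analyzing.
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"Bartlett p = {p_value:.4f}, overall KMO = {kmo_total:.3f}")

# Exploratory solution: minimum-residual extraction with an oblique (oblimin)
# rotation; the number of factors (2 here) is an arbitrary example and should
# be decided with a proper retention criterion.
fa = FactorAnalyzer(n_factors=2, rotation="oblimin", method="minres")
fa.fit(items)
print(fa.loadings_)              # rotated pattern matrix
print(fa.get_factor_variance())  # variance explained by each factor
```

Dedicated tools such as the authors' FACTOR program implement these and the remaining recommendations more completely than this sketch.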
