Salvador Castro

Recently Published

Psychometrics: Computerized Adaptive Testing
This project demonstrates a full-scale implementation of a CAT framework grounded in IRT. It integrates theoretical derivations, statistical modeling, and simulation-based validation to illustrate how adaptive algorithms optimize measurement precision while reducing test length. The CAT engine dynamically selects items based on Fisher Information and updates ability estimates using Fisher Scoring or Bayesian EAP methods. It includes a modular R codebase that builds an item bank, simulates response data, computes information functions, and applies realistic stopping rules.
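The item-selection step described above can be sketched in a few lines. The project's engine is written in R, but the same logic is shown here in Python under illustrative assumptions: a hypothetical item bank of 2PL `(a, b)` parameter pairs, and selection by maximal Fisher information at the current ability estimate.

```python
import math

def p_correct(theta, a, b):
    # 2PL item response function: P(correct | theta) for discrimination a, difficulty b
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    # Fisher information of a 2PL item at ability theta: a^2 * p * (1 - p)
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def select_next_item(theta, bank, administered):
    # Pick the not-yet-administered item with maximal information at theta
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: fisher_info(theta, *bank[i]))

# Hypothetical item bank of (a, b) pairs; item 0 has already been given
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]
next_item = select_next_item(0.4, bank, administered={0})
```

Information peaks where the examinee has a 50% chance of success (theta near b), which is why a CAT keeps steering item difficulty toward the current ability estimate.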
Psychometrics: Item Response Theory (IRT)
This project presents a systematic examination of Item Response Theory (IRT), a psychometric framework for modeling the relationship between respondents’ latent traits (e.g., ability, attitude) and their responses to test items. The approach ensures a comprehensive understanding of IRT, progressing from theoretical foundations to practical validation, model comparison, and fairness testing. Each phase builds on the preceding one, culminating in a coherent framework for advanced psychometric analysis.
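The core IRT idea, relating a latent trait to item responses through a probabilistic model, can be illustrated with the simplest member of the family, the Rasch model. The project itself works in R; this Python sketch uses a hypothetical three-item difficulty vector and estimates ability by a crude grid-search MLE.

```python
import math

def rasch_p(theta, b):
    # Rasch model: P(correct) depends only on the gap theta - b
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_likelihood(theta, difficulties, responses):
    # Log-likelihood of a 0/1 response pattern given ability theta
    ll = 0.0
    for b, x in zip(difficulties, responses):
        p = rasch_p(theta, b)
        ll += x * math.log(p) + (1 - x) * math.log(1.0 - p)
    return ll

def mle_theta(difficulties, responses):
    # Crude maximum-likelihood estimate by grid search over theta in [-4, 4]
    grid = [i / 100.0 - 4.0 for i in range(801)]
    return max(grid, key=lambda t: log_likelihood(t, difficulties, responses))

# Hypothetical data: three items of increasing difficulty, two answered correctly
theta_hat = mle_theta([-1.0, 0.0, 1.0], [1, 1, 0])
```

In practice one would use Newton-type or Bayesian estimators rather than a grid, but the grid makes the likelihood surface easy to inspect.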
Psychometrics: Questionnaire Development, Likert Scales and Classical Test Theory (CTT)
This project systematically develops, validates, and analyzes psychological and educational assessments with a focus on scale reliability and validity. The workflow proceeds in two phases: (1) Questionnaire Development & Validation and (2) Classical Test Theory Analysis. Together, these phases ensure that the scale is statistically sound, measures the intended construct accurately, and is broadly applicable across psychology, education, and the social sciences.
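A central CTT reliability statistic in this workflow is Cronbach's alpha. The project computes it in R; the following is a minimal dependency-free Python sketch on hypothetical Likert item scores, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of totals).

```python
def cronbach_alpha(items):
    # items: one inner list of scores per item, aligned across respondents
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total score per respondent, summed across items
    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var_sum = sum(sample_var(item) for item in items)
    return (k / (k - 1)) * (1.0 - item_var_sum / sample_var(totals))

# Hypothetical 3-item scale answered by 4 respondents
alpha = cronbach_alpha([[2, 4, 3, 5],
                        [3, 4, 4, 5],
                        [2, 5, 3, 4]])
```

When items are perfectly parallel, alpha reaches 1; values around 0.7-0.9 are the usual targets for a scale intended for research use.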
Analysis of Incomplete Data
This project outlines a framework for handling missing data in applied research, moving from theory to practice with modern imputation techniques. It reviews the limits of traditional deletion methods, classifies missingness mechanisms (MCAR, MAR, MNAR), and applies screening tools such as Little’s MCAR test. Imputation strategies range from simple methods (mean substitution, hot-deck, regression) to advanced model-based approaches, maximum likelihood, and the EM algorithm for multiple imputation.
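Two of the simple imputation strategies mentioned above, mean substitution and regression imputation, can be contrasted in a short sketch. The project applies these methods in R; this Python version uses a hypothetical column with `None` marking missing values.

```python
def mean_impute(column):
    # Replace missing values with the mean of observed values
    # (simple, but attenuates variance and correlations)
    observed = [x for x in column if x is not None]
    m = sum(observed) / len(observed)
    return [m if x is None else x for x in column]

def regression_impute(y, x):
    # Fill missing y values from a least-squares line fit on complete (x, y) pairs
    pairs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(pairs)
    mx = sum(p[0] for p in pairs) / n
    my = sum(p[1] for p in pairs) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs)
    sxx = sum((p[0] - mx) ** 2 for p in pairs)
    slope = sxy / sxx
    intercept = my - slope * mx
    return [intercept + slope * xi if yi is None else yi for xi, yi in zip(x, y)]

# Hypothetical data: the third y value is missing
filled = regression_impute([2.0, 4.0, None, 8.0], [1.0, 2.0, 3.0, 4.0])
```

Regression imputation preserves the relationship with the predictor that mean substitution destroys, though both understate uncertainty compared to the model-based multiple-imputation approaches the project ultimately favors.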
Data Analysis
This project provides a comprehensive framework for data preparation and analysis, beginning with missing data handling, outlier detection, and reliability checks for Likert scale data. It then applies a range of statistical methods, including descriptive statistics, correlation, bivariate linear regression, chi-square tests, t-tests with effect sizes, and both parametric and nonparametric comparisons, such as ANOVA and the Kruskal-Wallis test. Supporting appendices detail test development, math anxiety measurement, statistical assumptions, diagnostic tools, and effect size computation in R. Overall, the project integrates careful data screening with diverse inferential techniques to ensure robust and interpretable results in psychological and educational research.
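The pairing of t-tests with effect sizes is worth a concrete illustration. The project computes effect sizes in R; here is a minimal Python sketch of Cohen's d with the pooled standard deviation, on hypothetical group scores.

```python
import math

def cohens_d(a, b):
    # Standardized mean difference between two independent groups,
    # using the pooled sample standard deviation
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical scores for two groups
d = cohens_d([1.0, 2.0, 3.0], [3.0, 4.0, 5.0])
```

Reporting d alongside the t statistic separates practical magnitude from statistical significance, which is the point of the effect-size appendix described above.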
Computational Linear Algebra for Statistical Modeling: From Matrix Foundations to Regression and ANOVA
This project explores computational linear algebra in statistics, beginning with matrix structures such as data, deviation, sums of squares, and variance-covariance matrices. It introduces the SWEEP operator for efficient inversion, determinant calculation, and correlation matrices, as well as Cholesky decomposition. These methods are applied to multiple regression to derive coefficients, variance-covariance estimates, leverage, residuals, and the hat matrix, and then extended to ANOVA to analyze variation, degrees of freedom, mean squares, the F-statistic, and model fit. Together, these techniques show how linear algebra supports estimation, diagnostics, and model evaluation.
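The SWEEP operator's appeal is that one pass over the augmented cross-products matrix [X'X, X'y; y'X, y'y] yields the regression coefficients, -(X'X)^-1, and the residual sum of squares all at once. The project implements this in R; this Python sketch follows Goodnight's convention on a hypothetical perfectly linear dataset (y = 2x with intercept 0).

```python
def sweep(A, k):
    # SWEEP operator on pivot k of square matrix A (Goodnight's convention)
    n = len(A)
    d = A[k][k]
    # General elements: Gaussian-elimination-style update
    B = [[A[i][j] - A[i][k] * A[k][j] / d for j in range(n)] for i in range(n)]
    # Pivot row and column are divided by the pivot
    for i in range(n):
        B[i][k] = A[i][k] / d
        B[k][i] = A[k][i] / d
    # Pivot element becomes its negative reciprocal
    B[k][k] = -1.0 / d
    return B

# Augmented SSCP matrix [X'X, X'y; y'X, y'y] for x = (1,2,3), y = (2,4,6),
# with an intercept column in X
A = [[3.0, 6.0, 12.0],
     [6.0, 14.0, 28.0],
     [12.0, 28.0, 56.0]]
S = sweep(sweep(A, 0), 1)    # sweep both predictor rows
coefs = [S[0][2], S[1][2]]   # intercept, slope
rss = S[2][2]                # residual sum of squares
```

Sweeping a pivot a second time undoes it, which is what makes the operator so convenient for stepwise model building.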
Numerical Analysis
This project provides a structured introduction to numerical analysis, focusing on methods for solving equations, approximating integrals, and handling systems of linear equations. It begins with root-finding techniques, including the method of false position, the Newton-Raphson method, the secant method, and fixed-point iteration. Numerical integration is addressed through Riemann sums, Simpson’s rule, and the Romberg algorithm, illustrating different approaches to approximation and accuracy. Finally, the project examines Gaussian elimination, both in its basic form and with scaled partial pivoting, to solve linear systems efficiently and reliably. Overall, it offers a concise yet comprehensive foundation in numerical methods, blending theoretical insight with computational practice.
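Two of the methods above, Newton-Raphson root finding and composite Simpson's rule, fit in a compact sketch. The illustrative targets (the root of x^2 - 2 and the integral of x^2 over [0, 1]) are chosen here because their exact answers are known.

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    # Iterate x <- x - f(x)/f'(x) until the step falls below tol
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def simpson(f, a, b, n=100):
    # Composite Simpson's rule on n (even) subintervals of [a, b]
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

# Root of x^2 - 2 (i.e., sqrt(2)) from starting point 1.0
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
# Integral of x^2 over [0, 1], exact value 1/3
area = simpson(lambda x: x * x, 0.0, 1.0)
```

Newton-Raphson converges quadratically near a simple root but needs the derivative; the secant and false-position methods covered in the project trade some speed for derivative-free iteration.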
Computational Statistics: From Descriptive Analysis to Bayesian Methods and Stochastic Algorithms
This work traces a progression from the foundations of statistical computing to advanced probabilistic modeling. It begins with descriptive statistics and core inferential tools, including hypothesis testing, confidence intervals, and Bayesian analysis. Both continuous (normal, t, chi-squared, gamma, beta, uniform) and discrete (Bernoulli, binomial, Poisson, etc.) distributions are covered, leading into computational methods like random number generation, simulation, Monte Carlo, and MCMC. The text advances to reverse-engineering unknown distributions and maximum likelihood, before concluding with modern frameworks including the EM algorithm, Gaussian Mixture Models, and the multivariate normal distribution.
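The MCMC material can be illustrated with the simplest such algorithm, random-walk Metropolis. This sketch targets the standard normal density (an illustrative choice, not a distribution singled out by the text) and works on the log scale for numerical stability.

```python
import math
import random

def metropolis_normal(n, step=1.0, seed=0):
    # Random-walk Metropolis chain targeting the standard normal density
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n):
        proposal = x + rng.uniform(-step, step)
        # Accept with probability min(1, target(proposal) / target(x));
        # for the standard normal, the log ratio is (x^2 - proposal^2) / 2
        if math.log(rng.random()) < (x * x - proposal * proposal) / 2.0:
            x = proposal
        samples.append(x)
    return samples

samples = metropolis_normal(50_000)
```

Because the acceptance rule only needs density ratios, the normalizing constant never appears, which is exactly what makes MCMC usable for the posterior distributions arising in the Bayesian analyses covered earlier.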