This full-day workshop introduces a selection of statistical learning methods for analyzing process data, that is, log data from computer-based assessments. Covered topics include (1) data-driven methods for extracting features from response processes; (2) sequence segmentation and subtask analysis with neural language modelling; (3) an introduction to ProcData, an R package for process data analysis; and (4) applications of process features to practical testing and learning problems, including scoring, differential item functioning correction, and computerized adaptive testing. The mode of instruction will be a blend of presentations, for topics (1) and (2), and concrete illustrations in R, for topics (3) and (4). The intended audience is researchers and practitioners interested in data-driven methods for analyzing process data from assessments and learning environments. To fully engage in the hands-on activities, familiarity with R and RStudio is expected. Running the ProcData package requires installation of R and Python; installation instructions and support will be provided. Participants are expected to bring their own laptop running Windows or macOS. By the end of the workshop, participants should have a composite picture of process data analysis and know how to conduct various analyses using the ProcData package.
Registration is required through the 2021 NCME annual meeting website.
This workshop will be held on June 4, 2021. Below is the workshop schedule. All times are in Eastern Daylight Time (EDT).
Time | Lecturer | Session Title |
---|---|---|
09:00 am — 11:00 am | Jingchen Liu | Overview of process data analysis |
11:20 am — 01:00 pm | Xueying Tang | Introduction to ProcData package |
01:40 pm — 03:40 pm | Susu Zhang | Partial scoring and DIF correction |
03:50 pm — 05:30 pm | Xueying Tang | Subtask analysis |
Recent advances in information technology have led to the increasing popularity of computer-based interactive items, which require test-takers to complete specific tasks within a simulated environment. In addition to the final outcomes, the entire log of interactions between the test-taker and the item, i.e., the sequence of actions and their timestamps, is recorded as process data. Process data contain rich information about test-takers’ problem-solving processes that is not recoverable from the final responses. In this overview, we summarize our main research developments in process data analysis. These include feature extraction via multidimensional scaling and neural-network-based autoencoders. An important question is how process data can assist specific psychometric research. To address this question, we present two applications: improving test reliability by constructing a process-data-based partial score system and removing or reducing differential item functioning by including process data in the scoring rules.
We introduce ProcData, an R package designed for processing, examining, and analyzing process data. The topics covered in this session include

- the `proc` class and its `print` and `summary` methods
- the `cc_data` dataset
- `read.seqs` and `write.seqs`
- `seq2feature_mds`
- `seq2feature_seq2seq`
- `seqm`
We will demonstrate the features of ProcData through live R sessions.
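To give a flavor of the live sessions, the sketch below loads ProcData, inspects the bundled `cc_data` dataset with the `summary` method for `proc` objects, and extracts features with `seq2feature_mds`. It is a minimal sketch, assuming ProcData and its Python dependencies are already installed; the number of features (`K = 10`) is an illustrative choice, not a recommended setting.

```r
# A minimal sketch, assuming ProcData (and the Python backend it relies on)
# has been installed following the workshop instructions.
library(ProcData)

data(cc_data)            # climate control item data shipped with the package
seqs <- cc_data$seqs     # a "proc" object holding action and timestamp sequences

summary(seqs)            # summary method for "proc" objects

# Extract K process features per respondent via multidimensional scaling;
# K = 10 is an arbitrary illustrative choice.
mds_res  <- seq2feature_mds(seqs, K = 10)
features <- mds_res$theta
head(features)
```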
We provide two specific applications of process data analysis to psychometric problems. These two examples illustrate how to make use of the additional information in process data and to what extent it adds value beyond the existing literature.
Accurate assessment of examinees’ abilities is the key task of testing. Traditional assessments are based on final item responses, while problem-solving processes contain additional information about a student’s proficiency on the measured trait. We establish a framework for systematically constructing a process-data-based scoring system that is substantially more accurate, in terms of reliability, than traditional IRT-model-based assessment.
Differential item functioning (DIF) can jeopardize test fairness and validity. Various methods have been developed to identify DIF, but few results are available on reducing or correcting it. We develop a framework that identifies DIF and further constructs a scoring rule to reduce it. This new scoring rule is based on an individualized score adjustment using process data.
In this session, we provide step-by-step instructions for these two methods using simulated data.
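The session walks through both methods step by step; as a rough preview of the first one, the sketch below illustrates the general idea behind a process-data-based partial score on the packaged `cc_data`: extract process features, then relate the final binary outcome to those features so the fitted values give a finer-grained score than the 0/1 response alone. The logistic regression and the choice of `K = 10` features are simplifying assumptions for illustration, not the scoring framework presented in the workshop.

```r
# A simplified stand-in for the process-data-based partial score idea;
# the framework presented in the workshop is more elaborate.
library(ProcData)

data(cc_data)
seqs <- cc_data$seqs       # response processes
y    <- cc_data$responses  # final binary outcomes (correct/incorrect)

# Step 1: extract process features (K = 10 chosen only for illustration).
feat <- seq2feature_mds(seqs, K = 10)$theta

# Step 2: relate the outcome to the process features; the fitted probabilities
# act as a continuous partial score carrying more information than the
# dichotomous final response.
dat <- data.frame(y = y, feat)
fit <- glm(y ~ ., data = dat, family = binomial())
partial_score <- fitted(fit)
head(partial_score)
```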
The presence of process data raises new problems in psychometrics. In this session, we focus on examining respondents’ problem-solving strategies via process data. We introduce a data-driven method to segment a lengthy response process into a sequence of short subprocesses. Each subprocess can be interpreted as a subtask. We demonstrate how to use the results to identify respondents’ problem-solving strategies and to perform other exploratory analyses of process data in live R sessions.
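The segmentation method itself is covered in the session. Purely to illustrate the form of its output, the toy snippet below cuts a single hypothetical action sequence into subprocesses at hand-picked boundary positions; in the actual analysis the boundaries are estimated from the data rather than supplied by hand.

```r
# Toy illustration of segmenting one action sequence into subtasks.
# Both the action labels and the cut points are hypothetical; the workshop
# method learns the segmentation from data.
actions <- c("start", "reset", "adjust_top", "diagram", "apply",
             "adjust_bottom", "diagram", "apply", "answer", "end")
cuts <- c(5, 8)  # hand-picked boundary positions, for illustration only

# Label each action with the subtask it falls into and split accordingly.
subtask_id <- findInterval(seq_along(actions), cuts + 1) + 1
split(actions, subtask_id)
```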
Dr. Jingchen Liu is Professor of Statistics at Columbia University. He holds a Ph.D. in Statistics from Harvard University. He is the recipient of the 2018 Early Career Award given by the Psychometric Society, the 2013 Tweedie New Researcher Award given by the Institute of Mathematical Statistics, and the 2009 Best Publication in Applied Probability Award given by the INFORMS Applied Probability Society. His research interests include statistics, psychometrics, applied probability, and Monte Carlo methods. He is currently an associate editor of Psychometrika, British Journal of Mathematical and Statistical Psychology, Journal of Applied Probability/Advances in Applied Probability, Extremes, Operations Research Letters, and STAT. Email: jcliu@stat.columbia.edu
Dr. Xueying Tang is an Assistant Professor of Statistics in the Department of Mathematics at the University of Arizona. Prior to joining the University of Arizona, she was a postdoctoral research scientist in the Department of Statistics at Columbia University. Her research interests include high-dimensional Bayesian statistics, latent variable models, and their applications in education and psychology. Email: xytang@math.arizona.edu
Dr. Susu Zhang is an Assistant Professor of Psychology and Statistics at the University of Illinois at Urbana-Champaign. Her research interests include latent variable modeling, the analysis of complex data (e.g., log data) in computer-based educational and psychological assessments, and longitudinal models for learning and interventions. Email: szhan105@illinois.edu