Research
How do people and machines communicate, learn, and make decisions together?
Much of my research has been organized around that question. Over the past three decades I have worked at the intersection of natural language processing, human–computer interaction, intelligent agents, and computing education. The common thread has been understanding how interactive systems represent language, reason about users, and support people in real tasks.
Across these areas I have built systems that talk and listen, model user behavior, simulate conversational agents, analyze language and affect, and support learning environments. The work has been collaborative and cumulative, often involving students directly in design, data collection, and evaluation.
Conversational Systems and Dialogue
My early work focused on mixed-initiative dialogue systems and user modeling. A central question in that research was how control shifts between a human and an intelligent system during collaborative tasks. I developed models and algorithms for managing those initiative shifts and for evaluating dialogue behavior in interactive systems.
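The notion of tracking which party holds the initiative can be sketched as a simple state update. This is a hypothetical illustration of the general idea, not the actual models or algorithms from that research; the cue names and rules are invented for the example:

```python
# Illustrative sketch of tracking dialogue initiative (hypothetical,
# not the models developed in this research).
from dataclasses import dataclass


@dataclass
class InitiativeState:
    """Who currently directs the dialogue: 'user' or 'system'."""
    holder: str = "system"

    def update(self, cue: str) -> str:
        """Shift initiative based on a simplified discourse cue.

        The cues stand in for the richer signals a real model would
        use (questions, commands, silences, task progress).
        """
        if cue == "user_question":     # user asks: system takes initiative
            self.holder = "system"
        elif cue == "user_command":    # user directs: user takes initiative
            self.holder = "user"
        elif cue == "user_silence":    # user stalls: system steps in
            self.holder = "system"
        return self.holder
```

A real mixed-initiative model would condition these shifts on task state and user goals rather than single surface cues, but the core bookkeeping is the same: an explicit, inspectable record of who is driving the interaction.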
From those foundations I later worked on responsive virtual human technology, including synthetic characters that interact with users in real time and adapt their responses to user behavior. These systems were designed for training environments where communication and decision making matter. The work explored how dialogue control, planning structures, and user models can produce robust interaction in complex scenarios.
These early projects shaped many of the questions that still guide my research today. Interactive systems must represent context, reason about user goals, and respond in ways that are understandable and useful to people.
Natural Language Processing and Language Technology
A second thread of my work examines how natural language processing can transform unstructured language into structured knowledge. In collaboration with government and industry partners, my teams have developed systems that analyze spoken and written language in applied settings.
One example involved extracting activity and exposure information from written and spoken diaries for environmental health research. The work combined language processing with domain modeling to convert narrative descriptions into structured datasets that could support scientific analysis. Another project explored spoken language interfaces for home automation, integrating language understanding with device state management and error recovery.
These projects emphasized building complete systems that move from raw language input to usable information. They also reinforced the importance of evaluation. Language technologies are only useful when they operate reliably in realistic environments.
Human-Centered AI and Evaluation
A recurring theme in my research is the human side of artificial intelligence. Intelligent systems are not simply algorithms; they are tools that people interact with, interpret, and sometimes misunderstand. Understanding those interactions is essential for building systems that are both effective and trustworthy.
Some of my work has examined how affect and personality appear in language. Earlier projects modeled emotive language for classification and detection. More recent work has explored whether large language models can infer personality traits from written text and how reliable those inferences are.
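One simple way to probe the reliability of such inferences is to query a trait scorer repeatedly and measure how much its answers vary. The sketch below is hypothetical and uses a stub in place of a real language model; the jitter in the stub merely simulates run-to-run variation:

```python
# Hypothetical reliability probe: repeated queries to a trait scorer.
# The scorer here is a stand-in stub, not a real language model.
import random
import statistics


def stub_trait_scorer(text: str, rng: random.Random) -> float:
    """Placeholder for a model that rates, e.g., extraversion in [0, 1].

    A real system would prompt a language model; the Gaussian jitter
    simulates the variation such models show across runs.
    """
    base = (len(text) % 10) / 10  # arbitrary text-dependent score
    return min(1.0, max(0.0, base + rng.gauss(0, 0.05)))


def reliability(text: str, trials: int = 20, seed: int = 0) -> tuple[float, float]:
    """Return (mean score, standard deviation) over repeated queries.

    A large standard deviation flags an unreliable inference.
    """
    rng = random.Random(seed)
    scores = [stub_trait_scorer(text, rng) for _ in range(trials)]
    return statistics.mean(scores), statistics.stdev(scores)
```

The same harness works with any black-box scorer, which is the point: reliability can be assessed without access to the model's internals.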
This line of research also focuses on the limits of modern AI systems. I study how language models represent knowledge, where they fail, and how their uncertainty should be interpreted. Issues such as hallucination, overconfidence, and brittle reasoning under changing conditions are central challenges for the next generation of AI tools.
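One standard diagnostic for overconfidence is expected calibration error (ECE), which compares a model's stated confidence with its actual accuracy across confidence buckets. A minimal sketch, offered as a general illustration rather than a metric tied to any specific system in this research:

```python
# Expected calibration error (ECE): a common diagnostic for
# overconfidence. A well-calibrated model that says "90% sure"
# should be right about 90% of the time.


def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE over paired (confidence, was-correct) data, equal-width bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        # Weight each bucket's confidence/accuracy gap by its size.
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece
```

A model that reports 90% confidence but is right only half the time yields a large ECE, making the overconfidence visible and quantifiable.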
Computing Education and AI
Alongside my work in language technology, I have maintained a parallel line of research on computing education. Many of my studies examine how students learn programming and how instructional design can improve outcomes.
Earlier work analyzed predictors of success in core computer science courses and explored interventions that support students who are struggling. More recently I have examined how generative AI tools are changing programming practice and computer science instruction.
This work asks practical questions. When do coding assistants help students learn? When do they obscure gaps in understanding? And how can instructors design assignments and evaluation frameworks that encourage genuine competence rather than superficial success?
Philosophy of AI and the Limits of Intelligent Systems
Alongside my technical work, I have written and spoken about broader philosophical questions surrounding artificial intelligence. These include issues such as the limits of machine knowledge, the possibility of artificial general intelligence, the idea of technological singularity, and the relationship between human intelligence and increasingly capable computational systems.
Some of this work examines how claims about AI capabilities are framed and interpreted. Modern language models and other large-scale systems can produce impressive results, but they also reveal important limits in reasoning, explanation, and knowledge representation. Understanding those limits is essential for both researchers and the broader public conversation about AI.
These questions connect computer science with philosophy, cognitive science, and the humanities. They also influence how I approach teaching. Students should learn not only how to build intelligent systems, but also how to think critically about what those systems can and cannot do.
Current Directions
My current research focuses on two closely related areas. The first is trustworthy language technology for education and professional development. I am developing datasets and evaluation protocols that measure how coding assistants, retrieval-augmented systems, and automated feedback tools affect learning in programming-intensive courses.
The second area explores dialogue systems that explain their reasoning and expose their limitations. Building on earlier work in mixed-initiative interaction and user modeling, I am interested in systems that communicate uncertainty, invite correction, and support collaboration between human users and intelligent agents.
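The response policy such a system needs can be sketched in a few lines. This is an illustrative example in the spirit of the paragraph above; the function name, thresholds, and phrasings are invented for the sketch:

```python
# Hypothetical response policy that surfaces uncertainty and invites
# correction; thresholds and wording are illustrative only.


def frame_response(answer: str, confidence: float) -> str:
    """Wrap an answer so the user can see, and challenge, the system's
    certainty rather than receiving every reply with equal authority."""
    if confidence >= 0.8:
        return answer
    if confidence >= 0.5:
        return (f"I think {answer}, but I'm not certain. "
                "Does that match what you know?")
    return (f"I'm unsure. My best guess is {answer}. "
            "Please correct me if this is wrong.")
```

The design choice is that low confidence changes not just the wording but the interaction: hedged replies explicitly hand the initiative back to the user, connecting this work to the earlier research on initiative shifts.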
Together these projects continue a long-standing goal of my research: building language technologies that help people learn, reason, and make decisions with greater clarity and confidence.
For a complete list of publications, projects, and grants, see my CV.