Publications

Six peer-reviewed papers and conference proceedings, sorted newest first.

Featured · 1st Author

Who Holds the Pen? Preserving Clinicians' Role in Human-AI Interpretation for Child Language Therapy

Ko, K., Lim, C., Oh, M., Park, J., Oh, J., Kang, S., Gim, B., Kim, W., Kim, S.

Under Review

Abstract

Child language therapy for developmental language disorder relies on clinicians' interpretive judgments, which directly shape children's developmental trajectories. As artificial intelligence (AI) and large language models (LLMs) are integrated into therapeutic workflows, questions arise regarding how to support interpretation without undermining professional agency and accountability. We examine this through a within-subjects study with eight speech-language pathologists, comparing four human-AI collaboration modes that vary in AI autonomy and interaction structure. By situating these configurations within the domain of clinical interpretation, the study provides an empirical account of how different forms of AI assistance are experienced in practice. While greater autonomy reduced effort and increased perceived usefulness, it diminished clinicians' sense of control and professional identity. Despite comparable efficiency, participants consistently preferred a collaborative dialogue mode. These findings show that differences in adoption intention across AI-assisted configurations cannot be fully explained by perceived usefulness or effort reduction alone; they are also associated with how interaction structures position clinicians within the interpretive process. They further suggest that, in contrast to more general-purpose domains, the use of AI in clinical interpretive work is shaped by the need to preserve authorship and professional agency. This study highlights the role of interaction structure in shaping professional experience and informs design principles for AI systems that support clinical interpretation.

May 2026
Featured · 2nd Author

PardonMix: Unpacking Ambiguity in Semantic Audio Mixing for Adaptive Interface Design

Kim, D., Ko, K., Oh, J., Kim, S.

Extended Abstracts of the CHI Conference on Human Factors in Computing Systems

Abstract

Audio mixing requires sophisticated parameter control and accumulated listening experience. In collaborative workflows, novice musicians often struggle to convey their auditory intent to professional engineers, creating a semantic gap between abstract descriptors and technical execution. While prior studies have mapped descriptive terms to parameters with a focus on minimizing ambiguity, we propose that identifying types of ambiguity can serve as a key to bridging this communication gap. We conducted a formative study analyzing mixing parameter vectors from 10 professional engineers for 18 abstract descriptors, identifying three distinct types of ambiguity: High Consensus, Intensity Variance, and Directional Divergence. We propose PardonMix, an adaptive interface strategy that dynamically allocates disambiguation widgets: direct automation for high-consensus terms, degree control for intensity-varying terms, and an exploratory gallery for divergent terms. PardonMix seeks to bridge the semantic gap by embracing ambiguity, facilitating smoother collaboration that ensures the novice's artistic vision is precisely translated into technical execution.

Read Paper
May 2026
Featured

Demonstrating InteractiSense: An Interactive Tool for Supporting Social Engagement for Children with ASD

Kim, W.*, Lim, C.*, Seong, M., Ko, K., Kim, S.

Extended Abstracts of the CHI Conference on Human Factors in Computing Systems

Abstract

Children with autism spectrum disorder (ASD) are prone to disengagement during social activities, which limits opportunities for communication and peer interaction. We present InteractiSense, a spherical, sensing-integrated tangible prototype and cooperative serious game (SG) that targets social and collaborative scenarios for children with ASD. The prototype embeds seven sensors in a soft graspable body and streams multimodal data during play, while SG modules translate imitation, joint attention, and turn-taking objectives into shared gameplay. Prior to this demo, a preliminary study with 15 children with ASD examined data stability and in-game experience. In this Interactive Demo, attendees experience InteractiSense through paired gameplay and a real-time visualization interface that exposes engagement-related signals. The demo aims to prompt discussion within the CHI community on inclusive SG content, tangible input mechanisms, and opportunities for physical AI systems in developmental contexts.

Read Paper
May 2025
Featured · 1st Author

LEGOLAS: Learning & Enhancing Golf Skills through LLM-Augmented System

Ko, K., Oh, M., Seong, M., Kim, S. J.

Extended Abstracts of the CHI Conference on Human Factors in Computing Systems

Abstract

LEGOLAS is an LLM-augmented system designed to enhance golf skill learning and training. It combines computer vision with large language models to analyze swing mechanics and provide personalized, natural language feedback to golfers of all levels. The system offers real-time analysis, customized training plans, and progress tracking, creating a more accessible and efficient approach to golf improvement. This paper presents findings from a user study with 24 participants, demonstrating significant skill improvements compared to traditional training methods.

Read Paper
May 2024
1st Author

Leveraging voice for early detection of chronic kidney disease: Enabling continuous monitoring in remote healthcare

Ko, K., Ryu, J., Kim, S.

Proceedings of eTELEMED 2024

Read Paper
December 2022

A Survey on 3D Scene Graphs: Definition, Generation and Application

Bae, J.*, Shin, D.*, Ko, K., Lee, J., Kim, U. H.

International Conference on Robot Intelligence Technology and Applications

Read Paper