Open coding, a key inductive step in qualitative research, discovers and constructs concepts from human datasets. However, capturing extensive and nuanced aspects, or "coding moments," can be challenging, especially with large discourse datasets. While some studies explore machine learning (ML)/Generative AI (GAI)'s potential for open coding, few evaluation studies exist. We compare open coding results from five recently published ML/GAI approaches and four human coders, using a dataset of online chat messages about a mobile learning software. Our systematic analysis reveals the strengths and weaknesses of the ML/GAI approaches, uncovering complementary potential between humans and AI. Line-by-line AI approaches effectively identify content-based codes, while humans excel at interpreting conversational dynamics. We discuss how embedded analytical processes could shape the results of ML/GAI approaches. Instead of replacing humans in open coding, researchers should integrate AI with and according to their analytical processes, e.g., as parallel co-coders.
@article{chen2025_2504.02887,
  title={Processes Matter: How ML/GAI Approaches Could Support Open Qualitative Coding of Online Discourse Datasets},
  author={John Chen and Alexandros Lotsos and Grace Wang and Lexie Zhao and Bruce Sherin and Uri Wilensky and Michael Horn},
  journal={arXiv preprint arXiv:2504.02887},
  year={2025}
}