%0 Conference Paper
%B Workshop on Interactive Language Learning, Visualization, and Interfaces, 52nd Annual Meeting of the Association for Computational Linguistics
%D 2014
%T Design of an Active Learning System with Human Correction for Content Analysis
%A Jasy Liew Suet Yan
%A Nancy McCracken
%A Kevin Crowston
%X Our research focuses on the role of humans in supplying corrected examples in active learning cycles, an important aspect of deploying active learning in practice. In this paper, we discuss sampling strategies and sampling sizes for setting up an active learning system for human experiments in the task of content analysis, which involves labeling concepts in large volumes of text. The cost of conducting comprehensive human subject studies to experimentally determine the effects of sampling strategies and sampling sizes is high. To reduce those costs, we first applied an active learning simulation approach to test the effect of different sampling strategies and sampling sizes on machine learning (ML) performance, in order to select a smaller set of parameters to be evaluated in human subject studies.
%C Baltimore, MD
%8 06/2014
%> https://crowston.syr.edu/sites/crowston.syr.edu/files/ILLWorkshop.ACLFormat.04.28.14.final_.pdf

%0 Conference Paper
%B Workshop on Language Technologies and Computational Social Science, 52nd Annual Meeting of the Association for Computational Linguistics
%D 2014
%T Optimizing Features in Active Machine Learning for Complex Qualitative Content Analysis
%A Jasy Liew Suet Yan
%A Nancy McCracken
%A Shichun Zhou
%A Kevin Crowston
%X We propose a semi-automatic approach for content analysis in which a machine learning (ML) model, initially trained on a small set of hand-coded data, performs a first pass of coding; human annotators then correct the machine annotations, producing more examples used to incrementally retrain the model for better performance. In this “active learning” approach, it is equally important to optimize the creation of the initial ML model from limited training data so that the model captures most if not all positive examples and filters out as many negative examples as possible for human annotators to correct. This paper reports our attempt to optimize the initial ML model through feature exploration in a complex content analysis project that uses a multidimensional coding scheme and contains codes with sparse positive examples. While different codes respond optimally to different combinations of features, we show that it is possible to create an optimal initial ML model using only a single combination of features for codes with at least 100 positive examples in the gold standard corpus.
%C Baltimore, MD
%8 06/2014
%> https://crowston.syr.edu/sites/crowston.syr.edu/files/9_Paper.pdf

%0 Conference Paper
%B iConference
%D 2014
%T Semi-Automatic Content Analysis of Qualitative Data
%A Jasy Liew Suet Yan
%A Nancy McCracken
%A Kevin Crowston
%C Berlin, Germany
%8 03/2014
%> https://crowston.syr.edu/sites/crowston.syr.edu/files/iConference_Poster_Published.pdf