NTTMU system in the 2nd social media mining for health applications shared task

Chen Kai Wang, Nai Wun Chang, Emily Chia Yu Su, Hong Jie Dai

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

In this study, we describe our methods for automatically classifying Twitter posts that describe adverse drug reaction (ADR) events and medication intake. We developed classifiers using linear support vector machine (SVM) and Naïve Bayes Multinomial (NBM) models. We extracted features to build our models and conducted experiments to examine their effectiveness as part of our participation in the AMIA 2017 Social Media Mining for Health Applications shared task. For both tasks, the best-performing models on the test sets were trained with NBM using n-gram, part-of-speech, and lexicon features, achieving F-scores of 0.295 and 0.615, respectively.
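The feature-based setup named in the abstract (n-gram features fed to a Multinomial Naïve Bayes classifier, with a linear SVM as the alternative model) can be sketched roughly as below, assuming a scikit-learn-style pipeline. The toy posts, labels, and feature settings are invented for illustration and are not the authors' actual data, preprocessing, or configuration; part-of-speech and lexicon features are omitted here and would in practice be concatenated with the n-gram features (for example via a FeatureUnion) before training.

# Minimal sketch, assuming scikit-learn; posts, labels, and settings below
# are illustrative assumptions, not the authors' actual data or configuration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy Twitter-style posts labelled for adverse drug reaction (ADR) mentions.
posts = [
    "this new med gives me terrible headaches",      # ADR
    "just picked up my prescription, feeling fine",  # no ADR
    "the pills made me so dizzy i had to lie down",  # ADR
    "took my vitamins with breakfast today",         # no ADR
]
labels = [1, 0, 1, 0]

# Word n-gram features (unigrams and bigrams) feeding a Multinomial Naive
# Bayes classifier, mirroring the NBM + n-gram setup named in the abstract.
nbm_model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), lowercase=True),
    MultinomialNB(),
)
nbm_model.fit(posts, labels)

# The linear SVM alternative over the same n-gram features.
svm_model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), lowercase=True),
    LinearSVC(),
)
svm_model.fit(posts, labels)

print(nbm_model.predict(["my medication is making me nauseous"]))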

Original language: English
Pages (from-to): 83-86
Number of pages: 4
Journal: CEUR Workshop Proceedings
Volume: 1996
Publication status: Published - Jan 1, 2017

Fingerprint

  • Health
  • Support vector machines
  • Classifiers
  • Experiments

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

@article{4835a243ab854abb846fa0bea1a11063,
title = "NTTMU system in the 2nd social media mining for health applications shared task",
abstract = "In this study, we describe our methods to automatically classify Twitter posts describing events of adverse drug reaction and medication intake. We developed classifiers using linear support vector machines (SVM) and Na{\"i}ve Bayes Multinomial (NBM) models. We extracted features to develop our models and conducted experiments to examine their effectiveness as part of our participation in AMIA 2017 Social Media Mining for Health Applications shared task. For both tasks, the best-performed models on the test sets were trained by using NBM with n-gram, part-of-speech and lexicon features, which achieved F-scores of 0.295 and 0.615, respectively.",
author = "Wang, {Chen Kai} and Chang, {Nai Wun} and Su, {Emily Chia Yu} and Dai, {Hong Jie}",
year = "2017",
month = "1",
day = "1",
language = "English",
volume = "1996",
pages = "83--86",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",
publisher = "CEUR-WS",

}

TY - JOUR
T1 - NTTMU system in the 2nd social media mining for health applications shared task
AU - Wang, Chen Kai
AU - Chang, Nai Wun
AU - Su, Emily Chia Yu
AU - Dai, Hong Jie
PY - 2017/1/1
Y1 - 2017/1/1
N2 - In this study, we describe our methods to automatically classify Twitter posts describing events of adverse drug reaction and medication intake. We developed classifiers using linear support vector machines (SVM) and Naïve Bayes Multinomial (NBM) models. We extracted features to develop our models and conducted experiments to examine their effectiveness as part of our participation in AMIA 2017 Social Media Mining for Health Applications shared task. For both tasks, the best-performed models on the test sets were trained by using NBM with n-gram, part-of-speech and lexicon features, which achieved F-scores of 0.295 and 0.615, respectively.
AB - In this study, we describe our methods to automatically classify Twitter posts describing events of adverse drug reaction and medication intake. We developed classifiers using linear support vector machines (SVM) and Naïve Bayes Multinomial (NBM) models. We extracted features to develop our models and conducted experiments to examine their effectiveness as part of our participation in AMIA 2017 Social Media Mining for Health Applications shared task. For both tasks, the best-performed models on the test sets were trained by using NBM with n-gram, part-of-speech and lexicon features, which achieved F-scores of 0.295 and 0.615, respectively.
UR - http://www.scopus.com/inward/record.url?scp=85037031509&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85037031509&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:85037031509
VL - 1996
SP - 83
EP - 86
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
SN - 1613-0073
ER -