DOI: 10.1145/3341162.3344859 · research-article

Can a simple approach identify complex nurse care activity?

Published: 09 September 2019

ABSTRACT

Over the last two decades, increasingly complex methods have been developed to identify human activities using various types of sensors, e.g., data from motion capture, accelerometer, and gyroscope sensors. To date, most research has focused mainly on identifying simple human activities, e.g., walking, eating, and running. However, many of our daily-life activities are considerably more complex than those. To instigate research in complex activity recognition, the "Nurse Care Activity Recognition Challenge" [1] was initiated, in which six nurse activities are to be identified based on location, air pressure, motion capture, and accelerometer data. Our team, "IITDU", investigates the use of simple methods for this purpose. We first extract features from the sensor data and then use one of the simplest classifiers, namely K-Nearest Neighbors (KNN). Experiments using an ensemble of KNN classifiers demonstrate that it is possible to achieve approximately 87% accuracy on 10-fold cross-validation and 66% accuracy on leave-one-subject-out cross-validation.
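The pipeline the abstract describes (a KNN classifier, a majority-vote ensemble, and leave-one-subject-out evaluation) can be sketched as follows. This is not the authors' code: the toy feature vectors, the choice of Euclidean distance, and the per-model ensemble structure are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of a KNN classifier,
# a majority-vote ensemble of KNNs, and leave-one-subject-out splits.
# All data shapes and distance choices here are assumptions.
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Predict the label of x by majority vote among its k nearest
    training points under Euclidean distance."""
    dists = sorted(
        (math.dist(x, tx), ty) for tx, ty in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def ensemble_predict(models, x):
    """Majority vote over several KNN 'models', each given here as a
    (train_X, train_y, k) triple -- e.g. one per sensor modality."""
    votes = Counter(knn_predict(X, y, x, k) for X, y, k in models)
    return votes.most_common(1)[0][0]

def loso_splits(subject_ids):
    """Yield (train_indices, test_indices) pairs, holding out one
    subject at a time (leave-one-subject-out cross-validation)."""
    for held_out in sorted(set(subject_ids)):
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train, test

# Toy usage: two classes of 2-D feature vectors.
train_X = [(0, 0), (0, 1), (5, 5), (6, 5)]
train_y = ['walk', 'walk', 'run', 'run']
print(knn_predict(train_X, train_y, (0.2, 0.3), k=3))  # -> walk
```

In a setting like the challenge's, one would replace the toy vectors with features extracted per window from each sensor stream, and use `loso_splits` over the subject labels to reproduce the leave-one-subject-out protocol.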

References

  1. 2019. Nurse Care Activity Recognition Challenge. Last accessed: Jun. 26, 2019.
  2. Pritom Saha Akash, Md. Eusha Kadir, Amin Ahsan Ali, and Mohammad Shoyaib. 2019. Inter-node Hellinger Distance based Decision Tree. In IJCAI.
  3. Leo Breiman. 2001. Random forests. Machine Learning 45, 1 (2001), 5--32.
  4. Evelyn Fix and J. L. Hodges. 1989. Discriminatory Analysis. Nonparametric Discrimination: Consistency Properties. International Statistical Review 57, 3 (1989), 238--247. http://www.jstor.org/stable/1403797
  5. Sozo Inoue, Naonori Ueda, Yasunobu Nohara, and Naoki Nakashima. 2016. Understanding Nursing Activities with Long-term Mobile Activity Recognition with Big Dataset. Proceedings of the ISCIE International Symposium on Stochastic Systems Theory and its Applications 2016 (2016), 1--11.
  6. Xiaoqiang Li, Yi Zhang, and Dong Liao. 2017. Mining key skeleton poses with latent SVM for action recognition. Applied Computational Intelligence and Soft Computing 2017 (2017).
  7. Diogo Carbonera Luvizon, Hedi Tabia, and David Picard. 2017. Learning features combination for human action recognition from skeleton sequences. Pattern Recognition Letters 99 (2017), 13--20.
  8. J. Ross Quinlan. 1986. Induction of decision trees. Machine Learning 1, 1 (1986), 81--106.
  9. Sayeda Shamma Alia, Paula Lago, Shingo Takeda, Tittaya Mairittha, Nattaya Mairittha, Farina Faiz, Yusuke Nishimura, Kohei Adachi, Tsuyoshi Okita, Sozo Inoue, and François Charpillet. 2019. Nurse Care Activity Recognition Challenge: Summary and Results. In Proc. HASCA.
  10. Sadia Sharmin, Mohammad Shoyaib, Amin Ahsan Ali, Muhammad Asif Hossain Khan, and Oksam Chae. 2019. Simultaneous feature selection and discretization based on mutual information. Pattern Recognition 91 (2019), 162--174.
  11. Benyue Su, Huang Wu, Min Sheng, and Chuansheng Shen. 2019. Accurate Hierarchical Human Actions Recognition From Kinect Skeleton Data. IEEE Access 7 (2019), 52532--52541.
  12. Jubil T. Sunny, Sonia Mary George, and Jubilant J. Kizhakkethottam. 2015. Applications and challenges of human activity recognition using sensors in a smart environment. IJIRST Int. J. Innov. Res. Sci. Technol 2 (2015), 50--57.
  13. Songyang Zhang, Yang Yang, Jun Xiao, Xiaoming Liu, Yi Yang, Di Xie, and Yueting Zhuang. 2018. Fusing geometric features for skeleton-based action recognition using multilayer LSTM networks. IEEE Transactions on Multimedia 20, 9 (2018), 2330--2343.

Published in

    UbiComp/ISWC '19 Adjunct: Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers
    September 2019
    1234 pages
    ISBN: 9781450368698
    DOI: 10.1145/3341162

    Copyright © 2019 ACM


    Publisher

    Association for Computing Machinery, New York, NY, United States



    Acceptance Rates

    Overall acceptance rate: 764 of 2,912 submissions (26%)
