DOI: 10.1145/2783258.2788619
Research article · Open Access

Predicting Voice Elicited Emotions

Published: 10 August 2015

ABSTRACT

We present the research, product development, and deployment of Voice Analyzer by Jobaline Inc., a patent-pending technology that analyzes voice data and predicts the human emotions elicited by the paralinguistic elements of a voice. Human voice characteristics, such as tone, complement verbal communication; in many contexts, "how" things are said is just as important as "what" is being said. This paper provides an overview of our deployed system, the raw data, the data processing steps, and the prediction algorithms we experimented with. A case study is included in which, given a voice clip, our model predicts the degree to which a listener will find the voice "engaging". Our predictions were verified through independent market research, with 75% agreement on how an average listener would feel. One application of the Jobaline Voice Analyzer technology is helping companies hire workers in the service industry, where customers' emotional response to a worker's voice may affect the service outcome. Jobaline Voice Analyzer is deployed in production as a product offering that helps our clients identify workers who will better engage with their customers. We also share some discoveries and lessons learned.
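
The abstract does not disclose Jobaline's actual features or model, so the following is only a hedged sketch of the general pipeline it describes: summarize paralinguistic cues (pitch, loudness, timbre) from a voice clip and regress them onto listener engagement ratings. The feature set, the librosa/scikit-learn tooling, and names such as paralinguistic_features and train_engagement_model are illustrative assumptions, not the deployed system.

    # Illustrative sketch only; not Jobaline's deployed Voice Analyzer.
    import numpy as np
    import librosa
    from sklearn.ensemble import GradientBoostingRegressor

    def paralinguistic_features(clip_path):
        """Summarize pitch, loudness, and timbre for a single voice clip."""
        y, sr = librosa.load(clip_path, sr=16000, mono=True)
        f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)        # pitch (tone) contour
        rms = librosa.feature.rms(y=y)[0]                     # loudness contour
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # coarse timbre summary
        return np.hstack([f0.mean(), f0.std(),                # tone level and variation
                          rms.mean(), rms.std(),              # energy level and variation
                          mfcc.mean(axis=1), mfcc.std(axis=1)])

    def train_engagement_model(clip_paths, listener_scores):
        """Fit a generic regressor on panel-rated 'engagement' scores."""
        X = np.vstack([paralinguistic_features(p) for p in clip_paths])
        return GradientBoostingRegressor().fit(X, listener_scores)

    def predict_engagement(model, clip_path):
        """Predict how engaging an average listener would find a new clip."""
        x = paralinguistic_features(clip_path).reshape(1, -1)
        return float(model.predict(x)[0])

A sketch like this still depends on independently collected listener ratings as training labels, analogous to the market-research verification described in the case study; the paper's 75% agreement figure refers to its own evaluation, not to this illustration.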


Published in

KDD '15: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
August 2015, 2378 pages
ISBN: 9781450336642
DOI: 10.1145/2783258

Copyright © 2015 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States

Acceptance Rates

KDD '15 paper acceptance rate: 160 of 819 submissions (20%). Overall acceptance rate: 1,133 of 8,635 submissions (13%).
