In Conjunction with First IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR 2018), Miami, Florida, USA, April 11, 2018
Title: Multimodal Sensing of Human Behavior.
Abstract: Much of what we do today is centered around humans, whether it is building cars, developing the next generation of smartphones, or creating new social media platforms. A better understanding of people can not only answer fundamental questions about "us" as humans, but can also facilitate the development of enhanced, personalized technologies. In our lab, we have developed a multimodal framework that allows us to capture multiple diverse signals reflective of human behavior, thereby enabling us to understand several human-centric phenomena such as discomfort, alertness, stress, affect, and deception. In this talk, I will describe our framework and the setup that supports it, and present several of the projects we have been working on in this space.
This is joint work with Mihai Burzo, Veronica Perez-Rosas, and Mohamed Abouelenien.
Speaker Biography: Rada Mihalcea is a Professor in the Computer Science and Engineering department at the University of Michigan, where she directs the Artificial Intelligence lab. Her research interests are in computational linguistics, with a focus on lexical semantics, graph-based algorithms for natural language processing, and multilingual natural language processing. She serves or has served on the editorial boards of the journals Computational Linguistics, Language Resources and Evaluation, Natural Language Engineering, Research on Language and Computation, IEEE Transactions on Affective Computing, and Transactions of the Association for Computational Linguistics. She was a program co-chair for the Conference of the Association for Computational Linguistics (2011) and the Conference on Empirical Methods in Natural Language Processing (2009), and a general chair for the Conference of the North American Chapter of the Association for Computational Linguistics (2015). She is the recipient of a National Science Foundation CAREER award (2008) and a Presidential Early Career Award for Scientists and Engineers (2009). In 2013, she was made an honorary citizen of her hometown of Cluj-Napoca, Romania.
Multimodal Representation, Retrieval, and Analysis of Multimedia Content (MR2AMC) 2018 is the IEEE MIPR workshop that provides an international forum for researchers and practitioners from both academia and industry to present original research contributions and practical system designs, implementations, and applications in the processing, analysis, search, mining, representation, management, and retrieval of multimedia data leveraging multimodal information. MR2AMC 2018 invites research papers in the areas of multimodal multimedia content analysis, search and retrieval, semantic computing, and affective computing. Accepted papers of MR2AMC 2018 will be published as part of the workshop proceedings in the IEEE Digital Library. Extended versions of the accepted workshop papers will be invited for publication in Springer Cognitive Computation and IEEE Computational Intelligence Magazine.
Multimodal Representation, Retrieval, and Analysis of Multimedia Content (MR2AMC) is the IEEE Multimedia Information Processing and Retrieval (MIPR) workshop series on the understanding of multimodal multimedia content. The broader context of the workshop series encompasses Web mining, AI, the Semantic Web, multimedia information retrieval, event understanding, and natural language processing. For more information, write to email@example.com
Advances in digital devices and affordable network infrastructure, together with the proliferation of social media platforms, have created an abundance of multimedia content on the web. Anyone with an Internet connection can now easily create and share ideas, opinions, updates, and preferences as multimedia content with millions of other people around the world. This necessitates novel techniques for the efficient processing, analysis, mining, and management of multimedia data in support of multimedia-related services; such techniques should also be able to search and retrieve information from within multimedia content. Since significant contextual information, such as spatial, temporal, and crowd-sourced information, is available in addition to the multimedia content itself, it is important to leverage multimodal information, because different representations capture different knowledge structures. However, decoding such knowledge structures into useful knowledge from huge amounts of multimedia content is complex for several reasons. To date, most semantic analysis, sentiment analysis, multimedia representation, multimedia search and retrieval, opinion mining, and event understanding engines work in a unimodal setting, and very limited work uses multimodal information for these tasks. In this light, this workshop will focus on the use of multimodal information to analyze, represent, mine, and manage multimedia content in support of semantic- and sentiment-based multimedia analytics problems. It will also focus on interesting multimedia systems that build upon semantic and sentiment information derived from multimedia data.
The primary goal of the proposed workshop is to investigate whether fusing multimedia content with other modalities (e.g., contextual, crowd-sourced, and relationship information) can enhance the performance of unimodal (i.e., content-only) multimedia systems. The broader context of the workshop encompasses Multimedia Information Processing (e.g., Natural Language Processing, Image Processing, Speech Processing, and Video Processing), Multimedia Embedding (e.g., Word Embedding and Image Embedding), Web Mining, Machine Learning, Deep Neural Networks, and AI. Topics of interest include but are not limited to: