
Call for IEEE Intelligent Systems Special Issue Papers
Explicable Artificial Intelligence for Affective Computing
Guest Editors:
Rui Mao, Nanyang Technological University, Singapore
Erik Cambria, Nanyang Technological University, Singapore
Yang Li, Northwestern Polytechnical University, China
Newton Howard, University of Oxford, United Kingdom
Corresponding Guest Editor:
Rui Mao
Background
As Artificial Intelligence (AI) advances, the need for transparency and interpretability in its decision-making processes becomes more pronounced, especially within the domain of affective computing. The capacity of AI systems to comprehend and react to human emotions introduces ethical considerations, necessitating a delicate equilibrium between innovation and accountability. Stakeholders across the spectrum, from end-users to developers and policymakers, share a need for a deeper understanding of these systems, particularly in emotionally charged situations.
The motivation for this Special Issue stems from the inherent challenges of creating AI models that not only accurately recognize and respond to human emotions but also provide clear, interpretable insights into their decision-making processes. The Special Issue also aims to enrich the notion of Explicable AI with diverse and comprehensive dimensions. Expanding the meaning of explicability is not just about deciphering the “black box” nature of AI models; it involves a broader understanding that encapsulates various facets crucial for fostering user trust, ethical considerations, and interdisciplinary collaboration.
Topics
- Explainable sentiment analysis, emotion detection, and figurative language processing
- Neurosymbolic affective computing
- Multimodal affective computing with explainability
- Intention-aware AI for affective computing
- Trustworthy AI for affective computing
- Multidisciplinary ensembles and explainability in affective computing
- Affective computing for scientific research, e.g., healthcare, education, behavioural, cognitive, and social science
- Granular task decomposition for affective computing
- Ethical analysis pertaining to Explicable AI for affective computing
Highlights
The Special Issue will consider papers on the above topics that demonstrate humanitarian value. While achieving state-of-the-art performance is commendable, acceptance priority will be given to works that contribute to advancing the seven pillars of future AI: Multidisciplinarity, Task Decomposition, Parallel Analogy, Symbol Grounding, Similarity Measure, Intention Awareness, and Trustworthiness. All submissions to the Special Issue undergo a rigorous editorial pre-screening process to assess their relevance, quality, and originality. This initial screening ensures that manuscripts align with the thematic focus of the Special Issue and meet the Journal’s standards.
Evaluation Criteria
The evaluation of submitted papers will be guided by the following key questions:
a) Does the paper contribute to explicable AI in the context of affective computing?
b) Does the paper provide an adequate level of technical innovation and/or analytical insights?
c) Are the findings or contributions supported by experimental evidence and/or theoretical underpinning?
d) Is the paper appropriate for publication in IEEE Intelligent Systems?
Peer Review
The papers will be peer-reviewed by at least three independent reviewers with expertise in the area.
Paper Submission
For author information and guidelines on submission criteria, visit the Author Information page. Please submit papers through the IEEE Author Portal and be sure to select the special issue or special section name. Manuscripts must not have been published elsewhere or be currently under submission elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. If requested, abstracts should be sent by email directly to the guest editors.
Important Dates
Submission deadline: 1 February 2025
Publication date: Sep/Oct 2025
About the Guest Editors
RUI MAO is a Research Scientist and Lead Investigator at Nanyang Technological University, Singapore. He received his Ph.D. in Computing Science from the University of Aberdeen. His research interests include computational metaphor processing, affective computing, and cognitive computing. He and the company he founded developed the first neural network search engine (https://wensousou.com) for searching ancient Chinese poems using modern language, as well as a system (https://metapro.ruimao.tech) for linguistic and conceptual metaphor understanding. He has published several first-author papers in top-tier conferences and journals, e.g., ACL, AAAI, IEEE ICDM, Information Fusion, and IEEE Transactions on Affective Computing. He has served as an Area Chair for COLING and EMNLP and as an Associate Editor of Expert Systems, Information Fusion, and Neurocomputing.
ERIK CAMBRIA is the Founder of SenticNet, a Singapore-based company offering B2B sentiment analysis services, and a Professor at NTU, where he also holds the appointment of Provost Chair in Computer Science and Engineering. Prior to joining NTU, he worked at Microsoft Research Asia and HP Labs India and earned his PhD through a joint programme between the University of Stirling and the MIT Media Lab. He is a recipient of many awards, e.g., the 2018 AI’s 10 to Watch and the 2019 IEEE Outstanding Early Career Award, is an IEEE Fellow, and is often featured in the news, e.g., in Forbes. He is an Associate Editor of several journals, e.g., NEUCOM, INFFUS, KBS, IEEE CIM, and IEEE Intelligent Systems (where he manages the Department of Affective Computing and Sentiment Analysis), and is involved in many international conferences as a PC member, program chair, and speaker.
YANG LI is an Associate Professor in the School of Automation at Northwestern Polytechnical University, Xi’an, China. After obtaining his bachelor’s and doctoral degrees from Northwestern Polytechnical University in 2014 and 2018, respectively, he worked as a research fellow in the SenticTeam under Professor Erik Cambria at Nanyang Technological University, Singapore, and was also an adjunct research fellow at the A*STAR Institute of High Performance Computing (IHPC). His research interests include adversarial attack and defense in AI, NLP, recommender systems, and explainable artificial intelligence. He has secured several national projects, such as NSFC grants, and has published several papers on these topics in international conferences and peer-reviewed journals. He is an Associate Editor of IEEE Transactions on Affective Computing and Progress in Artificial Intelligence, and also serves as a guest editor for Future Generation Computer Systems, Mathematics, and other journals.
NEWTON HOWARD is a brain and cognitive scientist, and the founder and former director of the MIT Mind Machine Project at the Massachusetts Institute of Technology (MIT). He is a professor of computational neurology and functional neurosurgery at Georgetown University. He was previously a professor at the University of Oxford, where he directed the Oxford Computational Neuroscience Laboratory. He is also the director of MIT’s Synthetic Intelligence Lab, the founder of the Center for Advanced Defense Studies, and the chairman of the Brain Sciences Foundation. Professor Howard is also a senior fellow at the John Radcliffe Hospital in Oxford, a senior scientist at INSERM in Paris, and a P.A.H. at the CHU Hospital in Martinique. His research areas include cognition, memory, trauma, machine learning, comprehensive brain modeling, natural language processing, nanotechnology, medical devices, and artificial intelligence.