ACM Transactions on Recommender Systems (TORS) has announced a call for papers for a special issue on Large Language Models for Recommender Systems. The guest editors are Yongfeng Zhang (Rutgers University, USA), Lei Li (Hong Kong Baptist University, China), and Luyang Kong (Amazon Research, USA).
Recommender systems play a pivotal role in personalizing content and enhancing user experiences across online platforms. In recent years, the advent of Large Language Models (LLMs), such as T5, GPT, LLaMA, and their variants, has ushered in a transformative era for recommender systems. LLMs can understand, generate, and manipulate natural language text at an unprecedented scale and quality. This capability has profound implications for recommendation algorithms, as it enables more sophisticated and context-aware recommendations, ultimately enhancing user satisfaction and engagement.
This special issue aims to explore the intersection of Large Language Models and Recommender Systems, highlighting their importance and potential to revolutionize the field. The main topic of this special issue revolves around the integration of LLMs into recommender systems, encompassing various aspects such as model architectures, algorithms, evaluation methodologies, and real-world applications. The goal of the special issue is to provide a comprehensive platform for researchers and practitioners to share their insights, innovations, and findings in the context of LLMs for recommender systems. By bringing together experts from both the natural language processing and recommendation communities, we aim to foster interdisciplinary collaboration and advance the state-of-the-art in this burgeoning field.
In summary, this special issue seeks to explore the pivotal role of Large Language Models in Recommender Systems, fostering collaboration, innovation, and ethical considerations in this exciting and rapidly evolving field. This initiative will contribute to the ongoing evolution of recommender systems, ultimately leading to more personalized, context-aware, and responsible recommendations across the digital landscape.
We aim to attract research on the latest ideas and achievements of large language models for recommendation, discuss the advantages and disadvantages of existing approaches, and share ideas for future directions. The special issue will not only present the latest research achievements but also connect researchers in the community who are interested in the topic to promote this direction in the following years. The main themes and topics of the special issue include, but are not limited to:
- Novel LLM-based Recommendation Models
- Multi-Modal Recommendation with LLMs
- Explainability and Interpretability of LLM-based Recommendation
- Fairness and Unbiasedness of LLM-based Recommendation
- Scalability and Efficiency of LLM-based Recommender Systems
- Privacy-Preserving LLMs for Recommendation
- Cross-domain and Cross-platform Recommendations with LLMs
- Multilingual and Cross-Lingual LLM-based Recommendation
- LLM-based Content Generation and Personalization
- Innovative User Interfaces for LLM-based Recommender Systems
- LLM-based Recommender System Agents
- Evaluation of LLM-based Recommender Systems
- Real-world Deployments of LLMs in Recommender Systems
Important dates:
- Submission deadline: December 15, 2023
- First-round review decisions: March 15, 2024
- Deadline for revision submissions: May 15, 2024
- Notification of final decisions: July 15, 2024
The special issue welcomes technical research papers, survey papers, and opinion/reflective papers. Each paper should address one or more of the above-mentioned topics or otherwise fall within the scope of Large Language Models for Recommender Systems. The special issue will also consider peer-reviewed journal versions (with at least 30% new content) of top papers from related recommender system conferences such as ACM RecSys, SIGIR, KDD, CIKM, WSDM, ACL, etc. The new content must consist of new intellectual contributions, technical experiments, and findings.
Submissions must be prepared according to the TORS submission guidelines and must be submitted via Manuscript Central.
For questions and further information, please contact the guest editors at yongfeng.zhang@rutgers.edu.