Real-World Recommender Systems for Academia: The Pain and Gain in Building, Operating, and Researching them [Beel & Dinesh]
The article “Real-World Recommender Systems for Academia: The Pain and Gain in Building, Operating, and Researching Them” provides an in-depth look at the practical and technical challenges of developing recommender systems tailored for academia. (Long version on arXiv; short version on CEUR-WS).
Over a period of six years, the authors built and operated three systems designed to help researchers find relevant articles: SciPlore MindMapping, Docear, and Mr. DLib. Each system addressed different needs in academic literature management and recommendation, but all faced the common challenge of handling noisy and incomplete data, which degraded the quality of the recommendations. The article also highlights how little guidance the existing literature offered for building such systems, since many published recommendation algorithms are never evaluated in real-world environments or cannot be reproduced.
One significant challenge discussed is the difficulty of running randomized A/B tests in a live recommender system. A robust randomization engine was needed to compare different recommendation strategies effectively, but designing one proved complex: it had to be able not only to test different algorithms against each other but also to vary their parameters dynamically, such as whether key-phrases or bibliometric data are used to generate recommendations. Another key issue is the impact of hardware limitations, such as slow processing speeds, on user satisfaction: recommendation sets that took longer to generate received fewer clicks, demonstrating how important fast response times are for keeping users engaged.
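To make the idea of such a randomization engine concrete, here is a minimal sketch in Python. It is not the actual Mr. DLib implementation; the algorithm names, parameter choices, and the RecommendationSet structure are assumptions for illustration only. The core idea is that each incoming request is randomly assigned an algorithm and a parameter combination, and the delivered recommendation set is stored together with that assignment and its processing time, so that click-through rates can later be compared per variant.

```python
import random
import time
import uuid
from dataclasses import dataclass, field

# Hypothetical variants; the names below are illustrative,
# not the actual Mr. DLib configuration.
ALGORITHMS = ["content_based_keyphrases", "content_based_terms", "stereotype"]
PARAMETER_CHOICES = {
    "feature_source": ["key_phrases", "plain_terms"],
    "bibliometric_reranking": [True, False],
    "number_of_candidates": [50, 100, 200],
}

@dataclass
class RecommendationSet:
    """One delivered set of recommendations, labelled for later A/B analysis."""
    set_id: str
    algorithm: str
    parameters: dict
    processing_time_ms: float
    items: list = field(default_factory=list)

def choose_variant():
    """Randomly assign an algorithm and a parameter combination to a request."""
    algorithm = random.choice(ALGORITHMS)
    parameters = {name: random.choice(values) for name, values in PARAMETER_CHOICES.items()}
    return algorithm, parameters

def recommend(document_id: str) -> RecommendationSet:
    """Generate recommendations for a document under a randomly chosen variant."""
    algorithm, parameters = choose_variant()
    start = time.perf_counter()
    # Placeholder for the actual retrieval step (e.g. a full-text or citation-based query).
    items = [f"related-to-{document_id}-{i}" for i in range(parameters["number_of_candidates"])][:10]
    elapsed_ms = (time.perf_counter() - start) * 1000
    return RecommendationSet(
        set_id=str(uuid.uuid4()),
        algorithm=algorithm,
        parameters=parameters,
        processing_time_ms=elapsed_ms,
        items=items,
    )

# Click events would later be joined against set_id, so that click-through rates
# can be compared per algorithm, per parameter value, and per processing-time bucket.
```

Logging the processing time alongside the variant assignment is what makes it possible to relate slower response times to lower click-through rates, as described above.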
Beyond technical hurdles, the authors also explore the human and organizational factors involved in building real-world recommender systems. As more stakeholders became involved, such as partners from digital libraries and reference-management software companies, managing their differing expectations grew more complex. Aligning the goals of academic research with the profit-driven motives of industry partners was an ongoing challenge, especially when the need for high-quality recommendations had to be balanced against a limited budget and time frame. Despite these obstacles, the authors found that building recommender systems for academia is ultimately a rewarding endeavor, offering rich opportunities for meaningful research and collaboration.
Disclaimer: I am one of the authors of this article.