Evaluations & Reproducibility
Google AI releases ‘RecSim NG’, a “Flexible, Scalable, Differentiable Simulation of Recommender Systems”
In 2019, Google AI released RecSim — A Configurable Simulation Platform for Recommender Systems. Now, Martin Mladenov from Google AI has announced the next generation (NG) of RecSim: RecSim NG. RecSim NG is a powerful and flexible simulator designed to help researchers and practitioners in the field of recommender systems model […]
‘Papers with Code’ partners with arXiv to increase reproducibility
This is exciting news, not only for the recommender-system community but for many others: Robert Stojnic announced that Papers with Code is partnering with arXiv. Preprints on arXiv can now link directly to their code on Papers with Code and GitHub.
An Exhaustive List of Methods to Evaluate Recommender Systems [Muffaddal Qutbuddin @TowardsDataScience]
Imagine we have built an item-based recommender system that recommends movies to users based on their rating history, and we now want to assess how our model will perform. Is it any good at recommending movies that users will actually like? Will it help users find new and exciting movies from the plethora of movies available in our […]
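As a minimal sketch of the kind of offline evaluation such a list typically covers (this example is illustrative, not taken from the linked article): precision@k and recall@k compare a user's top-k recommendations against a held-out set of movies the user actually liked.

```python
def precision_recall_at_k(recommended, relevant, k):
    """Compute precision@k and recall@k for a single user.

    recommended: ranked list of recommended item ids (best first)
    relevant:    set of held-out item ids the user actually liked
    """
    top_k = recommended[:k]
    # Count how many of the top-k recommendations were actually liked
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall


# Hypothetical data: 5 ranked recommendations, 3 held-out liked movies
recommended = ["m1", "m2", "m3", "m4", "m5"]
relevant = {"m2", "m4", "m9"}

p, r = precision_recall_at_k(recommended, relevant, k=5)
# p = 2/5 (2 of 5 recommendations were liked)
# r = 2/3 (2 of the 3 liked movies were recommended)
```

In practice these per-user scores are averaged over all users, and are complemented by ranking-aware metrics such as MAP or nDCG, which is exactly the landscape the article above surveys.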
‘Papers with Code’ / NeurIPS Guidelines for Publishing Research Code: A Role Model For Recommender-Systems?
The recommender-system community is facing a reproducibility crisis. This was recently demonstrated by the authors of the paper Are we really making much progress? A worrying analysis of recent neural recommendation approaches (Maurizio Ferrari Dacrema, Paolo Cremonesi, Dietmar Jannach). However, the crisis is not new, and was recognized at least a decade ago […]