Evaluations & Reproducibility

‘Papers with Code’ partners with arXiv to increase reproducibility

This is exciting news, not only for the recommender-system community but for many others: Robert Stojnic announced that Papers with Code is partnering with arXiv. Preprints on arXiv can now link directly to their code on Papers with Code and GitHub.


An Exhaustive List of Methods to Evaluate Recommender Systems [Muffaddal Qutbuddin @TowardsDataScience]

Imagine we have built an item-based recommender system that recommends movies to users based on their rating history. Now we want to assess how our model will perform. Is it any good at recommending movies that users will actually like? Will it help users find new and exciting movies from the plethora of movies available in our […]
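The excerpt above asks how to judge whether a top-N recommender is any good. A common starting point is precision@k and recall@k over each user's recommendation list. As a minimal sketch (the function and variable names are illustrative, not taken from the linked article):

```python
def precision_recall_at_k(recommended, relevant, k):
    """Precision@k and recall@k for one user's ranked recommendation list.

    recommended: items in ranked order, best first.
    relevant:    items the user actually liked (e.g., rated highly).
    """
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: the user liked movies 1, 3, and 7; we recommended 3, 5, 1, 9.
p, r = precision_recall_at_k([3, 5, 1, 9], [1, 3, 7], k=3)
# Two of the top-3 recommendations were relevant: precision@3 = recall = 2/3.
```

In practice these per-user scores are averaged across all test users, and ranking-aware metrics such as MAP or nDCG are often reported alongside them.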


‘Papers with Code’ / NeurIPS Guidelines for Publishing Research Code: A Role Model For Recommender-Systems?

The recommender-system community is facing a reproducibility crisis. This was recently demonstrated by the authors of the paper Are we really making much progress? A worrying analysis of recent neural recommendation approaches (Maurizio Ferrari Dacrema, Paolo Cremonesi, Dietmar Jannach). However, the crisis is not new and was recognized at least a decade ago […]
