LensKit 2026.1.0 has arrived!
LensKit 2026.1.0 (Website; GitHub) has been released, and it is best understood as a continuation of the refactoring effort that reshaped the library in 2025. That earlier redesign moved LensKit away from its legacy experiment abstractions toward a more explicit, pipeline-oriented structure. The current release does not introduce a new direction. It stabilizes that transition.
The most visible change is the removal of RunAnalysis. In earlier versions, this abstraction bundled execution and evaluation in a way that was convenient but difficult to inspect. Its removal forces a more explicit formulation of experiments. This aligns LensKit more closely with how many of us already work: combining custom data processing, model training, and evaluation code rather than relying on a single orchestration layer. The addition of is_trained() and the simplification of batch inference follow the same logic. They reduce implicit assumptions in the API and make model state and data flow more transparent.
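To make the shift concrete, here is a plain-Python sketch of the pattern the release encourages: data preparation, training, a state check, and batch scoring as separate, inspectable steps. The `MeanScorer` class and its methods are hypothetical illustrations, not LensKit's actual API; only the `is_trained()` idea is taken from the release.

```python
class MeanScorer:
    """Toy scorer illustrating explicit model state (hypothetical, not LensKit API)."""

    def __init__(self):
        self.item_means = None

    def train(self, ratings):
        # ratings: iterable of (user, item, value) tuples
        sums, counts = {}, {}
        for _, item, value in ratings:
            sums[item] = sums.get(item, 0.0) + value
            counts[item] = counts.get(item, 0) + 1
        self.item_means = {i: sums[i] / counts[i] for i in sums}

    def is_trained(self):
        # explicit, inspectable model state instead of implicit orchestration
        return self.item_means is not None

    def score(self, items):
        if not self.is_trained():
            raise RuntimeError("model must be trained before scoring")
        return {i: self.item_means.get(i, float("nan")) for i in items}


# an explicit experiment: each stage is a visible step, not hidden in a runner
ratings = [("u1", "a", 4.0), ("u1", "b", 2.0), ("u2", "a", 5.0)]
model = MeanScorer()
model.train(ratings)
scores = model.score(["a", "b"])
```

The point is not the (deliberately trivial) model but the shape of the experiment: nothing happens implicitly, so model state and data flow can be checked at every stage.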

Algorithmically, the release deepens existing lines of work rather than opening new ones. FlexMF continues to evolve and now incorporates LightGCN-style representations. The inclusion of SLIM and fsSLIM is notable mainly because these methods remain strong baselines but are often missing from modern experimental pipelines. LensKit here positions itself as a place where such baselines are readily available and comparable under consistent conditions.
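For readers unfamiliar with SLIM: it learns a sparse item-item weight matrix W with zero diagonal, so that a user's scores are simply their interaction row times W. The sketch below is a simplified, ridge-only variant (real SLIM uses elastic-net regularization with nonnegativity constraints, and LensKit's implementation differs); it only illustrates the structure of the model.

```python
import numpy as np

def fit_slim_ridge(A, lam=1.0):
    """Ridge-only SLIM sketch: item-item weights W with a zero diagonal.

    A: user-item interaction matrix (users x items). For each target item j,
    fit its column from all *other* item columns via closed-form ridge.
    """
    n_items = A.shape[1]
    W = np.zeros((n_items, n_items))
    for j in range(n_items):
        others = [k for k in range(n_items) if k != j]
        X = A[:, others]   # predictors: every item except the target
        y = A[:, j]        # response: the target item's column
        # closed-form ridge solution: (X'X + lam*I)^{-1} X'y
        w = np.linalg.solve(X.T @ X + lam * np.eye(len(others)), X.T @ y)
        W[others, j] = w   # diagonal stays zero by construction
    return W

# toy implicit-feedback matrix: 4 users x 3 items
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)
W = fit_slim_ridge(A, lam=0.5)
scores = A @ W  # rank each user's unseen items by these scores
```

The zero-diagonal constraint is what keeps the model from trivially predicting each item from itself; it is also why SLIM remains such a robust baseline despite its simplicity.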
LensKit remains a research infrastructure. Its purpose is not to compete with production frameworks or large-scale deep learning libraries. Instead, it provides a controlled environment for experimentation: data handling, model interfaces, and evaluation procedures that make assumptions explicit. This role has not diminished. If anything, it has become more relevant as recommender systems research increasingly relies on complex models whose evaluation is harder to reproduce and interpret.
From a practical perspective, this is precisely where LensKit is particularly effective. In my own teaching, I use it in lab sessions to introduce students to the fundamentals of recommender systems. The framework strikes a useful balance: it is structured enough to guide students toward sound experimental practice, yet flexible enough to expose the underlying mechanics. It has also become one of our standard choices for baseline implementations, alongside RecBole. In both cases, the value lies less in cutting-edge models and more in having reliable, well-understood reference points that can be integrated into controlled experiments.
Michael Ekstrand has been central to shaping this perspective. He is currently an associate professor at Drexel University, where he works on recommender systems, information retrieval, and algorithmic fairness. Beyond his publications, his role in the community is worth noting. He serves on the ACM RecSys Steering Committee, which coordinates the long-term development of the conference series (RecSys), and has taken on multiple organizational roles, including general co-chair of RecSys 2018 and program co-chair of RecSys 2022 (Drexel University). He is also active in related venues such as FAccT and regularly organizes workshops, for example on responsible and alternative recommender systems (RecSys).
This combination of roles is reflected in LensKit. The project is not only software; it encodes a particular view of recommender systems research, one that emphasizes evaluation, comparability, and increasingly also questions of fairness and societal impact. Recent work by Ekstrand and collaborators, for example on fairness in information access and on evaluation methodology, follows the same line of thought.
In that sense, the 2026.1.0 release is modest but consistent. It removes convenience abstractions where they obscure experimental structure and strengthens those parts of the framework that support careful evaluation. For a field that still struggles with reproducibility, this is a meaningful contribution, even if it does not introduce new models or paradigms.
