ACM TORS enters the Chinese CCF ranking: Does it matter and what are the consequences?
ACM Transactions on Recommender Systems (ACM TORS) has now been included in the China Computer Federation ranking (CCF Ranking). In the newly updated list, ACM TORS appears as a C-ranked journal in the area of databases, data mining, and information retrieval. It is listed at the same level as ACM TIST, which is a notable benchmark for such a young venue. For comparison, the journals ACM TKDD and ACM TWEB as well as the ACM Recommender Systems Conference (RecSys) are ranked ‘B’, and the IEEE TKDE and ACM TOIS journals are ranked ‘A’.
At first sight, some readers may focus on the letter grade ‘C’, the lowest of the three possible grades. But that would be the wrong takeaway. ACM TORS is a relatively new journal, launched only a few years ago, and for a young specialist journal to enter a major international ranking system this early is already a strong sign of recognition. Academic visibility takes time to build: reputation, submission quality, citation patterns, and community trust do not appear overnight. In that context, inclusion itself is the real story. Dozens, if not hundreds, of journals relating to information retrieval or recommender systems are not listed in the CCF ranking at all. Moreover, to the best of my knowledge, CCF is the first major ranking to include ACM TORS: the journal does not yet have an Impact Factor and is not listed on SCImago or any other major journal ranking list. Being listed in CCF is therefore a genuine milestone.
This is also a good moment to explain why the CCF Ranking matters. Outside China and Asia, many researchers are more familiar with rankings such as the Journal Impact Factor, SCImago, or (for conferences) CORE. But the CCF list is one of the most influential venue rankings in computer science, especially because of the size and global importance of the Chinese research community. It is widely used as a reference point in academia, and being included there substantially increases a venue’s visibility.
A CCF ranking is not just a label; it is a mechanism that changes incentives. In several Chinese universities and funding schemes, publications are evaluated with point systems that explicitly weight venues by CCF tier. If ACM TORS is on that list (and especially if it is placed in a strong tier), then a TORS paper becomes worth more in hiring, promotion, graduation requirements, internal bonuses, and grant reporting. For many researchers this shifts the decision from “Does this venue fit my community?” to “Which venue maximizes credit for a given effort?”. The predictable consequence is that TORS becomes more attractive to a much broader pool of authors, including those who previously would not have considered it. That can be good: higher perceived value typically yields more submissions and, by simple selection effects, the potential for stronger accepted papers because the journal can be more selective.
But we should be frank: the same incentive shift also increases the likelihood of many more weak or off-target submissions. The reason is structural, not moral. When a venue is “worth points,” it gets treated as a high-yield target. Some authors (or their institutions) optimize for venue-level credit rather than topical fit, novelty, or methodological rigor. That increases: (i) spray-and-pray submissions (low tailoring, low alignment to scope), (ii) template-driven manuscripts produced quickly to meet evaluation quotas, (iii) incremental or repackaged work aimed at passing minimum thresholds, and (iv) paper-mill-adjacent behavior in the broader sense of industrialized production of manuscripts with weak grounding. None of this requires bad faith from individuals; if careers hinge on counting weighted outputs, rational behavior will include sending marginal work to any venue that is newly valuable and plausibly passable.
This interacts with TORS’s prior positioning. So far, TORS has been somewhat niche: fewer (yet enough) submissions, and a relatively high acceptance rate. That does not imply lax standards; it reflects a pipeline dominated by well-known researchers who understand the expectations of the recommender-systems community, know what reviewers will scrutinize (experimental design, baselines, reproducibility, offline/online evaluation validity, etc.), and deliberately choose TORS because it reaches the right audience. Once a CCF ranking increases visibility beyond the core RecSys circle, the submitter population changes. We can expect more authors who simply do not know the implicit norms of the field and the journal—e.g., what constitutes an adequate baseline set, how to report significance and ablations, how to handle leakage, how to justify datasets and protocols, or what “incremental” means in this literature. Even well-intentioned submissions can therefore arrive “below the bar” because authors miscalibrate the bar.
My take: the ranking may improve the top end and worsen the bottom end at the same time, and this effect is likely to strengthen if ACM TORS receives an even higher CCF rank or is listed in other ranking systems, such as the Journal Impact Factor or SCImago. The editorial workload grows nonlinearly because weak submissions still consume triage time, desk-reject decisions, reviewer recruiting, and dispute handling. If TORS wants to benefit from the upside of being (highly) ranked — more and better submissions — without drowning in the downside, it will likely need explicit countermeasures: clearer scope statements, stricter and faster desk-reject criteria, submission checklists (baselines, data availability, evaluation protocol), stronger reviewer guidance, and possibly tighter formatting and replication requirements that raise the cost of low-effort submissions. In other words, CCF and any other ranking can be a quality lever, but only if the journal actively manages the incentive-driven surge rather than assuming the old niche dynamics will hold.
Ultimately, the inclusion of ACM TORS in the CCF ranking is a clear positive development. As the journal matures, its visibility, reputation, and institutional relevance will continue to grow, and it is likely to be included over time in additional ranking and indexing systems such as the Journal Impact Factor in the Web of Science, SCImago, Scopus-based metrics such as CiteScore, the ABDC Journal Quality List, and other national or disciplinary journal lists. With that broader recognition, ACM TORS will naturally attract a larger and more heterogeneous submission pool, including more low-quality, opportunistic, or off-scope work. That is not a sign of failure, but a normal consequence of becoming an established and valuable venue. Every major journal has to manage that dynamic, and the pressure exists precisely because the venue matters. In that sense, inclusion in the CCF and other recognized ranking lists is not a mixed blessing but an unambiguously positive milestone: it increases the journal’s legitimacy, broadens its reach, and marks the beginning of ACM TORS being treated as a serious publication outlet in the global evaluation system.

