“The most important application of the last 15 years is called RecSys, recommender systems,” says Jensen Huang (NVIDIA)
Jensen Huang, CEO of NVIDIA, is not usually the one you expect to put recommender systems at centre stage. Yet at the U.S.–Saudi Arabia Investment Forum, sitting next to Elon Musk and answering a question about a possible “AI bubble,” he does exactly that. After describing three big shifts in computing – the end of classical Moore’s law, the move from general-purpose CPUs to accelerated computing, and the rise of generative and “agentic” AI – he drops the sentence that makes many of us in the RecSys community smile: “The most important application of the last 15 years is called RecSys – recommender systems.” He then spells out why: social feeds, ad targeting, book and movie suggestions all rely on recommenders to squeeze a gigantic internet into the tiny surface of a smartphone screen, and this “engine of the internet” has already moved from CPUs to GPUs, filling the AI factories he wants to build.
The rest of the conversation paints a broader picture of how Huang and Musk think about the future. Musk talks about reusable rockets, electric cars, and humanoid robots, and predicts that within 10–20 years “work will be optional”, with AI and robotics eliminating poverty by making many goods and services extremely cheap to produce. Huang complements this with his idea of “AI factories”: a shift from retrieval-based computing to real-time generation, where large clusters everywhere generate personalized content on the fly. They discuss worries about job losses, and Huang points to radiology as an example where AI has actually increased demand for experts. From there, they move to questions of scale and energy: multi-hundred-megawatt data centres, solar-powered AI satellites, and the claim that terawatt-scale compute will eventually have to move into space. The closing question about an AI bubble is where Huang again returns to his three pillars – accelerated computing, recommender systems, and agentic AI – to argue that the demand for infrastructure is grounded in real workloads, not just hype.
NVIDIA, of course, has strong incentives to treat recommender systems as central: modern recommenders are exactly the kind of GPU-intensive workload their hardware is designed for. But they have also invested in the RecSys ecosystem in a way that goes beyond slogans. Their open-source Merlin framework is one of the most complete toolchains for deep learning recommenders on GPUs, covering feature engineering (NVTabular), large-scale CTR and ranking models (HugeCTR), session and sequence models (Transformers4Rec), and deployment, all optimised for data sets that can easily reach hundreds of terabytes.
Around Merlin, NVIDIA has created a steady stream of resources: talks, white papers, hands-on examples of multi-stage recommenders, and detailed guidance on GPU-accelerated training and inference, which many practitioners now use as a starting point for industrial systems. They also show up visibly at ACM RecSys, with sponsorship, virtual theatres, tutorials, and technical papers, and their teams have repeatedly won or placed near the top of major RecSys-related competitions such as the ACM RecSys Challenge and recommender-focused KDD Cups, often using Merlin as the backbone of their solutions. On top of that, they support community initiatives such as awards and diversity efforts, which helps sustain the non-commercial side of our field.
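The multi-stage pattern that these resources describe – a cheap retrieval step that narrows millions of items down to a few hundred candidates, followed by a heavier ranking model – can be sketched in a few lines of plain Python. This is a minimal illustration with synthetic data and made-up scoring functions, not Merlin code; in a real system each stage would be a learned model running on GPU.

```python
import numpy as np

# Illustrative two-stage recommender: stage 1 (retrieval) narrows a large
# catalogue to a small candidate set; stage 2 (ranking) orders the candidates
# with a more expensive scorer. All data here is synthetic.
rng = np.random.default_rng(0)
N_ITEMS, DIM = 10_000, 32
item_embeddings = rng.standard_normal((N_ITEMS, DIM)).astype(np.float32)

def retrieve(user_vec, k=200):
    """Stage 1: candidate retrieval, here brute-force dot products.
    Production systems would use an approximate nearest-neighbour index."""
    scores = item_embeddings @ user_vec
    return np.argpartition(-scores, k)[:k]

def rank(user_vec, candidates, k=10):
    """Stage 2: a 'heavier' ranker; here exact similarity blended with a
    hypothetical side feature, standing in for a learned CTR model."""
    sim = item_embeddings[candidates] @ user_vec
    side_feature = rng.random(len(candidates))  # placeholder signal
    final = 0.9 * sim + 0.1 * side_feature
    order = np.argsort(-final)[:k]
    return candidates[order]

user = rng.standard_normal(DIM).astype(np.float32)
top10 = rank(user, retrieve(user))
print(top10.shape)  # (10,)
```

The design point the sketch makes is the economics: retrieval must be cheap enough to touch the whole catalogue, while ranking can afford rich features because it only sees a few hundred items.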
At the same time, it is healthy to keep a bit of academic distance. When one company provides the dominant hardware, influential software frameworks, and a strong presence at key conferences, there is a natural risk of lock-in – technical, intellectual, and even cultural. Huang’s framing of recommender systems as a major justification for ever larger GPU clusters is technically plausible, but it also aligns perfectly with NVIDIA’s business interests. As a community, we benefit from the tools and investments, but we also need open benchmarks, independent implementations, careful energy measurements, and space for ideas that do not fit neatly into the current GPU-centric paradigm. A vibrant RecSys ecosystem will always be a mix of industry-scale platforms, smaller companies, open-source projects, and academic labs, not just one stack.
So is Huang right that recommender systems are among the most important applications of the past 15 years? I think he is, for at least three concrete reasons. First, recommenders shape how we navigate culture and information. On Netflix, estimates suggest that roughly three-quarters or more of viewing comes from recommendations rather than direct search. YouTube openly states that its homepage and “watch next” recommendations rely heavily on personalised signals such as watch history and satisfaction feedback, and these systems drive a large share of watch time. When we talk about news consumption, political videos, music discovery, or what our kids see on screens, we are really talking about the behaviour of recommender algorithms. Huang’s image of a “tiny phone” trying to make sense of a gigantic internet is not an exaggeration; recommenders are the compression layer between the web and our attention.
Second, recommender systems sit at the heart of the digital economy. They decide which products are surfaced on e-commerce sites, which ads get shown and paid for, which songs are promoted on streaming platforms, and which creators find an audience. A strong recommender reduces friction and churn; a weak one makes platforms feel empty and expensive to run. At Netflix, for example, recommendation data is used not only to keep you watching tonight, but also to inform multi-million-dollar content decisions. Technically, modern recommenders are also some of the heaviest compute users in the AI toolbox: huge embedding tables, sparse features, and strict latency requirements make them natural candidates for GPU acceleration and specialized frameworks like Merlin. When Huang says that data processing and RecSys workloads alone can justify massive GPU data centers, that is not just sales talk; it reflects how many internet companies already work today.
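A quick back-of-envelope calculation shows why embedding tables, in particular, push recommenders toward specialised hardware. The figures below are illustrative assumptions, not numbers from any specific NVIDIA or platform system, but they are in the range commonly cited for industrial CTR models.

```python
# Rough memory footprint of a single CTR-style embedding table.
# All parameters are illustrative assumptions.
n_ids = 1_000_000_000   # distinct sparse IDs (users, items, crosses)
dim = 128               # embedding width
bytes_per_float = 4     # fp32

table_bytes = n_ids * dim * bytes_per_float
print(f"{table_bytes / 1e12:.1f} TB")  # → 0.5 TB for one table
```

Half a terabyte for a single table – and real models carry many tables plus optimiser state – is why these workloads are memory- and bandwidth-bound, and why frameworks like Merlin focus on sharding embeddings across GPUs rather than on raw FLOPs.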

Third, the importance of recommender systems is now visible in regulation and public policy. The EU’s Digital Services Act contains specific provisions about recommender transparency and user choice, requiring large platforms to explain how ranking works and to offer real alternatives to their default personalization. Enforcement actions increasingly target recommender algorithms directly – for example when authorities look at content feeds, product ranking, or ad delivery as potential sources of systemic risk, from disinformation to addictive design to unfair competition. If lawmakers and regulators design rules around your algorithms, you are no longer “just another application.”
Seen from this angle, Huang’s statement feels less like a provocation and more like an overdue recognition. Recommender systems already decide which news many people see, which products they buy, which artists they listen to, and which videos their children watch. They already generate substantial revenue and drive significant hardware and research investment. The real challenge for us as a community is to make that importance count in a positive way: more transparent, more robust, more energy-aware, and more aligned with long-term user and societal well-being. In that regard, CEOs can shine a spotlight, but the hard work will continue to happen in code, experiments, standards, and – of course – at conferences and workshops, where we keep asking why a particular ranking changed on somebody’s phone.
