Google medical researchers humbled when AI screening tool falls short in real-life testing [Devin Coldewey @TechCrunch]
April 28, 2020
Google achieved great ‘theoretical accuracy’ in detecting diabetic retinopathy. In the real world, however, their approach did not perform nearly as well. While this is not directly related to recommender systems, it illustrates a problem that also exists in recommender-systems research: algorithms may perform excellently on offline datasets, yet sometimes fail to perform well in the real world.
AI is frequently cited as a miracle worker in medicine, especially in screening processes, where machine learning models boast expert-level skills in detecting problems. But like so many technologies, it’s one thing to succeed in the lab, quite another to do so in real life — as Google researchers learned in a humbling test at clinics in rural Thailand.
Google Health created a deep learning system that looks at images of the eye and looks for evidence of diabetic retinopathy, a leading cause of vision loss around the world. But despite high theoretical accuracy, the tool proved impractical in real-world testing, frustrating both patients and nurses with inconsistent results and a general lack of harmony with on-the-ground practices.