Nvidia CEO’s Defense Of DLSS 5 Gets Contradicted By One Of His Employees
March 20, 2026
Earlier this week, Nvidia unveiled DLSS 5: an “AI-powered breakthrough” in visual upscaling tech that takes a “game’s color and motion vectors as input for each frame, then infuses the scene with photoreal lighting and materials.” The internet immediately reacted very poorly to its announcement, decrying it as an AI-gen slop filter.
Nvidia CEO Jensen Huang rejected that framing at a live event later in the week, saying everyone is “completely wrong” and DLSS 5 isn’t actually “post-processing at the frame level.” That would suggest a finer degree of nuance and control than the alleged “slop filter,” which would simply modify the final 2D image based on broad internet training data.
But new details from Nvidia’s own “GeForce Evangelist” marketing specialist Jacob Freeman appear to contradict Huang’s framing of the controversial technology. PC gaming hardware YouTuber Daniel Owens asked Freeman if DLSS 5 is “effectively taking a single 2D frame as an input (with motion vectors) to create the output frame?” The Nvidia rep’s response was: “Yes, DLSS5 takes a 2D frame plus motion vectors as an input.” They continued, “DLSS 5 is trained end to end to understand complex scene semantics such as characters, hair, fabric, and translucent skin, along with environmental lighting conditions like front-lit, back-lit or overcast – all by analyzing a single frame.”
The less technically inclined among you may be asking what the gotcha is here. The issue is that this directly contradicts Jensen Huang’s previous statement on March 17. “It’s not post-processing, it’s not post-processing at the frame level, it’s generative control at the geometry level,” Huang said to Tom’s Hardware during a Q&A. “All of that is in the control — direct control — of the game developer. This is very different than generative AI; it’s content-control generative AI. That’s why we call it neural rendering.”
Basically, the Nvidia employee is saying DLSS 5 is a generative AI filter that uses a single image as its reference, while Huang is saying it isn’t working from a single frame at all; it’s drawing on every aspect of the scene data, including the 3D geometry.
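To make the disagreement concrete, here’s a minimal sketch of the two descriptions as hypothetical function signatures. Every name here is an illustrative assumption, not Nvidia’s actual API; the point is only what data each version of the story says the model gets to see.

```python
# Hypothetical sketch of the two competing descriptions of DLSS 5's input.
# All names and structures are illustrative assumptions, not Nvidia's API.
from dataclasses import dataclass, field

@dataclass
class Frame:
    pixels: list          # the final rendered 2D image
    motion_vectors: list  # per-pixel motion data

@dataclass
class SceneData(Frame):
    geometry: list = field(default_factory=list)  # 3D meshes, materials, lights

def upscale_frame_level(frame: Frame) -> list:
    """Freeman's description: only a 2D frame plus motion vectors go in."""
    # Placeholder for the generative pass over the flat image.
    return frame.pixels

def upscale_geometry_level(scene: SceneData) -> list:
    """Huang's description: the model also has direct access to geometry."""
    # Placeholder for a pass that could condition on scene.geometry too.
    return scene.pixels
```

Under Freeman’s version, `upscale_geometry_level` never exists: whatever the model infers about characters, hair, or lighting, it infers from the pixels and motion vectors alone.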
In short, as Owens puts it, DLSS 5 is just taking a screenshot and slapping a filter on top. This is why people online, already in backlash mode over the original demo, are now crying foul and accusing Huang of lying about DLSS 5’s capabilities in his most recent statement. It wouldn’t be the first time he has been accused of misleading consumers.
It sounds like DLSS 5 isn’t actually pulling any extra information beyond that. That would also explain why some of the lighting effects in the first demonstration look like garbage: DLSS 5 is generating something new from an image of the lighting and nothing else. DLSS 5 isn’t some newfangled geometry-level rendering tech; it’s just AI slop 2.0, because it isn’t doing anything a bog-standard generative AI filter doesn’t already do.