• ricecake@sh.itjust.works · 2 days ago

It depends on which type of AI upscaling is being used.
Some are basically neural nets that understand how pixelation interacts with light, shadow, and color gradients, and those can work really well. They leave the original pixels intact, figure out a best guess for the gaps using traditional methods, and then correct the guesses using feedback from the neural net.
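As a minimal sketch of that first kind (assuming PyTorch; `RefineNet` and the exact paste-back alignment are made up for illustration, not any real camera's pipeline):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefineNet(nn.Module):
    """Tiny correction network; the layout here is purely illustrative."""
    def __init__(self, channels=3, hidden=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

@torch.no_grad()
def upscale(lowres, net, scale=2):
    # Traditional interpolation makes the first guess for the missing pixels...
    guess = F.interpolate(lowres, scale_factor=scale, mode="bicubic",
                          align_corners=False)
    # ...then the net's feedback nudges those guesses toward plausible detail.
    refined = guess + net(guess)
    # The measured pixels get pasted back in, so the originals stay intact.
    refined[..., ::scale, ::scale] = lowres
    return refined.clamp(0, 1)
```

The point is the last two steps: the net only ever adjusts the interpolated guesses, and the original pixels go back in untouched (e.g. `upscale(torch.rand(1, 3, 64, 64), RefineNet())` returns a 128×128 image whose every other pixel is the input).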
Others are way closer to “generate me an image that looks exactly the same as this one but has three times the resolution”. Those use a lot more information about how people look (in the photos they were trained on) than just how light and structure interact.

The former is closer to how your brain works. Shadow and makeup can be separated because you (at the squishy level, not consciously) know shadows don’t do that, the light reflection hints at depth, and so on.
The latter is more concerned with fixing “errors”, which might involve changing the original image data if it brings the total error down, or it’ll just make up things that aren’t there because they’re plausible.
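You can see the same split in the training objectives. A hedged sketch (PyTorch again; `discriminator` is a hypothetical network that scores how photo-like an image is, as a probability):

```python
import torch
import torch.nn.functional as F

def inferred_detail_loss(pred, target):
    # First kind: the originals were pasted back in, so the error signal can
    # only ever move the guessed pixels toward the ground truth.
    return F.mse_loss(pred, target)

def generated_detail_loss(pred, target, discriminator, realism_weight=0.01):
    # Second kind: a realism term (non-saturating GAN generator loss) competes
    # with fidelity, so rewriting original pixels is fine if it lowers the total.
    realism = -torch.log(discriminator(pred).clamp_min(1e-8)).mean()
    return F.mse_loss(pred, target) + realism_weight * realism
```

With the second loss, a pixel that was actually captured can still get overwritten whenever the realism term outweighs the fidelity term.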

Inferring detail tends to look nicer, because it’s using information that’s there to fill the gaps. Generating detail is just smearing in shit that fits and tweaking it until it passes a threshold of acceptability.
The first is more likely to be built into a phone camera to offset a smaller lens. The second is showing up a lot more to “make your pictures look better” by tweaking them to look like photos people like.