I don’t doubt that there are lessons to learn from the SPE, but it’s also worth noting that it’s been widely criticized for various biases, experimenter influence, and a lack of controls, and that no other researchers have been able to replicate its findings. Some might call it debunked, others perhaps not, but I think it’s fair to say it isn’t generally accepted as gospel.
If you’re talking about your computer and you have access to its keyboard, you can’t beat screenshot keyboard shortcuts!
But if you’re talking about your TV or some screen you’re not in control of, fair enough. For anyone wondering, the reason this is tough to correct with an app is that your little bitty lens is trying to image a grid of millions of LEDs onto your itty bitty camera’s sensor, which has its own pixel grid that almost certainly doesn’t line up with the grid you’re photographing. Photographing a colored light source also makes white balance tricky for any camera, and here it’s a whole array of light sources that are effectively in motion, because most LED screens emit rapid pulses of light rather than a steady glow. Modern camera apps are getting better at antialiasing to smooth it all out and at using AI models to guess what the image was supposed to look like, but you’ll usually still see some moiré from those mismatched grids. I wonder if we’ll ever see a real fix for this as long as LED screens exist in their current form.
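If you want to see the grid-mismatch part in numbers, here’s a minimal 1-D sketch (assuming Python with numpy; the pitch values are made up purely for illustration) of how sampling one pixel grid with a slightly different one aliases into slow moiré bands:

```python
import numpy as np

# Toy 1-D sketch of the grid-mismatch problem: the "screen" is a periodic
# brightness pattern with one cycle per display pixel, and the "camera"
# samples it at a slightly different pitch. Both pitches are made-up
# numbers chosen just to show the effect.

screen_pitch = 1.00   # display pixel spacing (arbitrary units)
camera_pitch = 1.03   # sensor photosite spacing, slightly different

def screen_brightness(x):
    # Brightness peaks at each display pixel centre.
    return 0.5 + 0.5 * np.cos(2 * np.pi * x / screen_pitch)

# Sample the screen at the camera's photosite positions.
positions = np.arange(0, 120, camera_pitch)
captured = screen_brightness(positions)

# The difference between the two spatial frequencies sets the moiré spacing:
# a tiny mismatch in pitch turns into wide bright/dark bands.
beat_period = 1 / abs(1 / screen_pitch - 1 / camera_pitch)
print(f"expected moiré band spacing: about {beat_period:.1f} units")

# The captured samples swing slowly between bright and dark even though
# the underlying pattern repeats every single display pixel.
print(np.round(captured[::4], 2))
```

Same idea in 2-D with two full grids, which is why the bands show up as curved fringes in an actual photo of a screen.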
We’re pretty lucky we can capture a shitty image of what’s onscreen, though. Just ask anybody who’s tried to photograph a CRT.