“We developed a deep neural network that maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The results of the study reveal that our model can estimate the dense pose of multiple subjects, with comparable performance to image-based approaches, by utilizing WiFi signals as the only input.”
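A rough sketch of what that input-to-output mapping could look like, in PyTorch. This is not the paper’s actual architecture — the layer sizes, the assumed CSI tensor layout (amplitude/phase × 3×3 antennas × 30 subcarriers × 10 samples), and the 56×56 output resolution are all placeholders — it only illustrates the shape of the problem the abstract describes: WiFi amplitude/phase in, a 24-part segmentation map plus per-part UV coordinates out.

```python
# Hedged sketch, NOT the paper's architecture: map a window of WiFi CSI
# amplitude/phase to DensePose-style outputs (24 body parts + background,
# plus per-part UV coordinates). All tensor shapes below are assumptions.
import torch
import torch.nn as nn

class WiFiDensePoseSketch(nn.Module):
    def __init__(self, n_parts=24, out_hw=(56, 56)):
        super().__init__()
        # Encoder: flatten the CSI window into a latent vector.
        self.encoder = nn.Sequential(
            nn.Flatten(),                          # (B, 2*3*3*30*10) -> vector
            nn.Linear(2 * 3 * 3 * 30 * 10, 1024),
            nn.ReLU(),
            nn.Linear(1024, 64 * 7 * 7),
            nn.ReLU(),
        )
        # Decoder: upsample into spatial maps, like an image-based DensePose head.
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, 7, 7)),
            nn.Upsample(size=out_hw, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 64, 3, padding=1),
            nn.ReLU(),
        )
        self.part_head = nn.Conv2d(64, n_parts + 1, 1)   # part segmentation logits
        self.uv_head = nn.Conv2d(64, 2 * n_parts, 1)     # U and V per body part

    def forward(self, csi):
        # csi: (batch, 2, 3, 3, 30, 10) — assumed layout of
        # amp/phase × TX antennas × RX antennas × subcarriers × time samples
        feats = self.decoder(self.encoder(csi))
        return self.part_head(feats), self.uv_head(feats)

csi = torch.randn(1, 2, 3, 3, 30, 10)                    # dummy CSI window
parts, uv = WiFiDensePoseSketch()(csi)
print(parts.shape, uv.shape)                             # (1, 25, 56, 56) (1, 48, 56, 56)
```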

  • CleoTheWizard@lemmy.world · 11 months ago
    For VR I don’t see why we wouldn’t use a variety of other technologies before we ever use WiFi. The main issue with the WiFi approach is polling rate and interference (which itself limits polling rate). They’re also using a neural net here, which takes processing power and time, so the latency is far beyond what VR can tolerate. That’s before you get to the tracking you’d need for higher spatial resolution, which this doesn’t have yet either. So it’s not impossible to use this, just nowhere near practical right now.
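    Rough numbers to make the latency point concrete. The sampling rate, window size, and inference time below are assumptions rather than measurements from the paper; the point is only that capture time plus neural-net inference easily blows past a typical VR motion-to-photon budget.

    ```python
    # Back-of-the-envelope latency budget with assumed numbers: commodity CSI
    # tools often report on the order of 100-1000 samples/s, the inference time
    # is a placeholder guess, and VR motion-to-photon latency is commonly
    # targeted at roughly 20 ms for comfort.
    csi_sample_rate_hz = 100          # assumed CSI polling rate
    window_samples = 10               # assumed samples aggregated per prediction
    inference_ms = 30.0               # assumed per-frame neural-net inference time

    capture_ms = window_samples / csi_sample_rate_hz * 1000  # time to fill one window
    total_ms = capture_ms + inference_ms

    vr_budget_ms = 20.0               # common motion-to-photon target
    print(f"capture {capture_ms:.0f} ms + inference {inference_ms:.0f} ms "
          f"= {total_ms:.0f} ms vs ~{vr_budget_ms:.0f} ms VR budget")
    ```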

    The real fix for that is improving existing tech, or maybe lidar. Given the progress that’s been made on the Quest with hand tracking, I’d bet their next goal is body and face tracking, so you’ll see this soon.

    As for the government having this, I doubt they need anything this specific to track poses or body parts. If you have a cell phone on you, they likely know exactly where you are in a room. If you don’t, I’m betting they have access to other useful data: motion detection, number of people, room shape and some contents, interference sources.
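    For what it’s worth, the coarse stuff in that list doesn’t need a neural net at all. A hypothetical sketch (the readings and threshold below are made up): a person moving in a room perturbs the channel enough that a simple variance check on signal strength can flag motion.

    ```python
    # Hedged illustration of coarse WiFi sensing without pose estimation:
    # flag motion when signal-strength variance in a window exceeds a threshold.
    # All readings and the threshold are fabricated for illustration.
    import statistics

    def motion_detected(rssi_window, threshold_db=2.0):
        """Return True if signal-strength spread in the window is high."""
        return statistics.pstdev(rssi_window) > threshold_db

    still_room = [-52, -52, -53, -52, -52, -53]       # idle readings (dBm)
    person_walking = [-52, -58, -49, -61, -50, -57]   # perturbed readings (dBm)
    print(motion_detected(still_room), motion_detected(person_walking))  # False True
    ```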