• DannyBoy@sh.itjust.works · 6 months ago

    That’s not the worst idea ever. Say a screenshot is 10 MB: 10 MB × 60 screenshots per hour × 8 hours = 4800 MB per work day, so 30 days is roughly 150 GB worst case. I suppose you could check the previous screenshot and, if it’s the same, not write a new file. Combine that with OCR and a utility to scroll forward and backward through time, and it might be a useful tool.
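
    A minimal sketch of that skip-if-unchanged idea, assuming Pillow’s ImageGrab for capture (the hashing scheme, file layout, and once-a-minute interval are just illustrative):

    ```python
    import hashlib
    import time
    from pathlib import Path

    from PIL import ImageGrab  # Pillow; assumes a desktop session it can grab

    OUT_DIR = Path("screen_history")  # hypothetical storage directory
    OUT_DIR.mkdir(exist_ok=True)

    last_hash = None
    while True:
        shot = ImageGrab.grab()
        # Hash the raw pixels: an unchanged screen yields an identical digest,
        # so the write is skipped entirely.
        digest = hashlib.sha256(shot.tobytes()).hexdigest()
        if digest != last_hash:
            shot.save(OUT_DIR / f"{int(time.time())}.png")
            last_hash = digest
        time.sleep(60)
    ```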

      • DannyBoy@sh.itjust.works · 6 months ago

        Once a minute, and only if the screen contents change. I imagine there’s something lightweight enough.

        • MacN'Cheezus@lemmy.today · 6 months ago

          In order to be certified for running Recall, machines currently must have an NPU (Neural Processing Unit, basically an AI coprocessor). I assume that is what makes it practical, since it offloads the required computation from the CPU.

          Apparently it IS possible to circumvent that requirement using a hack, which is what some of the researchers reporting on it have done, but I haven’t read any reports on how that affects CPU usage in practice.

          • wick@lemm.ee · 6 months ago

            Recall analyses each screenshot and uses AI or whatever to add tags to it. I’d assume that’s what the NPU is used for.

    • RandomLegend [He/Him]@lemmy.dbzer0.com · 6 months ago

      Are you on 16k resolution or something?

      When I take a screenshot of my 3440×1440 display, it’s 1 MB. This doesn’t change the core issue, but it does dramatically shrink it.

        • RandomLegend [He/Him]@lemmy.dbzer0.com · 6 months ago

          Also, that 1 MB is at full resolution. You could downscale the images dramatically after you OCR them. So let’s say we shoot in full res, OCR, and then downscale to 50%. That’s still enough for everything to be human-readable, and combined with searchable OCR you’re down to 7.5 GB for a whole month.

          Absolutely feasible. Let’s say we’re up to 8 GB including the OCR text and additional metadata, and we just reserve 10 GB on your system to make doubly sure.

          Now you have 10 GB to track your whole 3440×1440 display.
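
          As a quick sanity check on that arithmetic (one screenshot a minute over an 8-hour work day, per the figures upthread; the 50% size reduction from downscaling is an assumption):

          ```python
          MB_PER_SHOT = 1.0       # full-res screenshot at 3440x1440, per the comment
          SHOTS_PER_DAY = 60 * 8  # once a minute over an 8-hour work day
          DAYS = 30

          full_res_gb = MB_PER_SHOT * SHOTS_PER_DAY * DAYS / 1024
          downscaled_gb = full_res_gb * 0.5  # assume downscaling halves the file size

          print(f"full res:   {full_res_gb:.1f} GB/month")    # ~14.1 GB
          print(f"downscaled: {downscaled_gb:.1f} GB/month")  # ~7.0 GB
          ```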

    • takeheart@lemmy.world · 6 months ago

      I mean taking the screenshot is the easy part, getting reliable OCR on the other hand …

      In my experience (Tesseract), current OCR works well for continuous text blocks, but it has a hard time with tables, illustrations, graphs, GUI widgets, etc.
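
      For reference, the minimal Tesseract call over a saved screenshot, assuming the pytesseract wrapper and Pillow are installed (plus a tesseract binary on PATH); the file path is hypothetical:

      ```python
      from PIL import Image
      import pytesseract  # thin wrapper around the tesseract CLI

      img = Image.open("screen_history/1718000000.png")  # e.g. a frame from a capture loop

      # Plain-text extraction: solid on continuous text blocks, but tables,
      # graphs, and GUI widgets tend to come out mangled, as noted above.
      text = pytesseract.image_to_string(img)
      print(text)
      ```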

    • Evotech@lemmy.world · 6 months ago

      That’s what Recall is… It’s literally screenshotting and OCR / AI parsing, combined with a SQLite database.
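
      A toy version of that pipeline, assuming SQLite’s FTS5 full-text index over the OCR output (table and column names are made up):

      ```python
      import sqlite3

      con = sqlite3.connect("recall_clone.db")
      # FTS5 virtual table: timestamp and path stored alongside the searchable text.
      con.execute(
          "CREATE VIRTUAL TABLE IF NOT EXISTS shots "
          "USING fts5(taken_at, path, ocr_text)"
      )

      def index_shot(taken_at: str, path: str, ocr_text: str) -> None:
          con.execute("INSERT INTO shots VALUES (?, ?, ?)", (taken_at, path, ocr_text))
          con.commit()

      def search(query: str) -> list:
          # MATCH runs a full-text search over the indexed columns.
          return con.execute(
              "SELECT taken_at, path FROM shots WHERE shots MATCH ?", (query,)
          ).fetchall()

      index_shot("2024-06-10T09:15", "screen_history/1718000100.png", "quarterly report draft")
      print(search("quarterly"))
      ```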

      • barsquid@lemmy.world · 6 months ago

        I think it would be hugely useful.

        But obviously I don’t want a malware company like Microsoft doing that “for me” (the actual purpose being hyperspecific ads, if not a long-term plan to exfiltrate the data).

        Not sure if I even trust myself with the security that data would require.

      • Cargon@lemmy.ml · 6 months ago

        If only MS used DuckDB then they wouldn’t have such a huge PR disaster on their hands.

    • renzev@lemmy.world · 6 months ago

      > I suppose you could check the previous screenshot and if it’s the same

      Hmmm… this gives me an idea… maybe we could even write a special algorithm that checks whether only certain parts of the picture have changed, and store only those, while re-using the parts that haven’t changed. It would be a specialized compression algorithm for Moving Pictures. But that sounds difficult; it would probably need a whole Group of Experts to implement. Maybe we can call it something like Moving Picture Experts Group, or MPEG for short : )
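
      In that spirit, a sketch of letting an actual video codec do the delta compression, assuming ffmpeg with libx264 is installed and a capture loop wrote PNGs to a screen_history/ directory (both are assumptions here):

      ```python
      import subprocess

      # At one frame per screenshot, H.264 stores only the regions that changed
      # between frames -- which is exactly the "special algorithm" above.
      subprocess.run(
          [
              "ffmpeg",
              "-framerate", "1",             # one screenshot = one frame
              "-pattern_type", "glob",
              "-i", "screen_history/*.png",  # hypothetical capture directory
              "-c:v", "libx264",
              "-crf", "28",                  # higher CRF = smaller file, lower quality
              "screen_history.mkv",
          ],
          check=True,
      )
      ```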