That’s not the worst idea ever. Say a screenshot is 10 MB. One per minute is 10 MB × 60 × 8 hours = 4800 MB per work day. 30 days is about 150 GB, worst case. I suppose you could check the previous screenshot and, if it’s the same, not write a new file. Combine that with OCR and a utility to scroll forward and backward through time, and it might be a useful tool.
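For reference, the same back-of-the-envelope math as a quick Python check (assuming one 10 MB screenshot per minute over an 8-hour work day):

```python
# Worst-case storage: one 10 MB screenshot per minute, 8-hour work day, 30 days.
mb_per_shot = 10
shots_per_day = 60 * 8                      # one per minute for 8 hours = 480

mb_per_day = mb_per_shot * shots_per_day    # 4800 MB, about 4.8 GB
gb_per_month = mb_per_day * 30 / 1000       # 144 GB, call it 150 GB worst case
print(mb_per_day, gb_per_month)             # 4800 144.0
```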
Running OCR every second sounds like a great way to choke your CPU
Once a minute, and only if the screen contents change. I imagine there’s something lightweight enough.
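Something along these lines would probably do; a minimal sketch assuming Pillow's ImageGrab works on your platform (the output folder and file names are made up):

```python
import hashlib
import time
from pathlib import Path

from PIL import ImageGrab  # pip install Pillow; screen grabbing works on Windows/macOS, X11/Wayland support varies

OUT_DIR = Path("recall_shots")   # hypothetical output folder
OUT_DIR.mkdir(exist_ok=True)

last_hash = None
while True:
    shot = ImageGrab.grab()                            # full-screen screenshot
    digest = hashlib.sha256(shot.tobytes()).hexdigest()

    if digest != last_hash:                            # only keep frames that actually changed
        last_hash = digest
        shot.save(OUT_DIR / f"{int(time.time())}.png")
        # OCR would go here (e.g. hand `shot` to Tesseract), only for changed frames

    time.sleep(60)                                     # once a minute, as suggested above
```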
In order to be certified for running Recall, machines currently must have an NPU (Neural Processing Unit, basically an AI coprocessor). I assume that is what makes it practical to do by offloading the required computation from the CPU.
Apparently it IS possible to circumvent that requirement using a hack, which is what some of the researchers reporting on it have done, but I haven’t read any reports on how that affects CPU usage in practice.
Recall analyses each screenshot and uses AI or whatever to add tags to it. I’d assume that’s what the NPU is used for.
Are you on 16k resolution or something?
When I take a screenshot of my 3440x1440 display it’s about 1 MB. I mean this doesn’t change the core issue, but it shrinks the numbers dramatically.
they’re running 10 screens in parallel
I just chose a number haha. That makes it much more feasible then.
Also, that 1 MB is at full resolution. You could downscale the images dramatically after you OCR them. So let’s say we shoot in full res, OCR, and then downscale to 50%, which is still enough for everything to be human-readable. Combined with searchable OCR, you’re down to about 7.5 GB for a whole month.
Absolutely feasible. Let’s say we’re up to 8 GB including the OCR text and additional metadata, and just reserve 10 GB on your system to make doubly sure.
Now you have 10 GB to track your whole 3440x1440 display.
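A sketch of what that downscale step could look like with Pillow; the 50% factor and the JPEG quality are just the assumptions from above:

```python
from PIL import Image

def shrink_after_ocr(path: str) -> None:
    """Downscale an already-OCR'd screenshot to 50% and recompress it."""
    img = Image.open(path)
    w, h = img.size
    small = img.resize((w // 2, h // 2), Image.LANCZOS)   # 3440x1440 -> 1720x720, still readable
    small.save(path.rsplit(".", 1)[0] + "_small.jpg", "JPEG", quality=80)

# Roughly 0.5 MB per downscaled shot * 480 shots/day * 30 days is about 7 GB/month,
# i.e. the ~7.5 GB figure above; OCR text and metadata push it toward the 10 GB budget.
```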
What’s OCR?
Optical Character Recognition. Basically just extracting text from an image.
Optical Character Recognition
Making a program read a text in image form and make it computer-readable / searchable / selectable
I mean taking the screenshot is the easy part, getting reliable OCR on the other hand …
In my experience (Tesseract), current OCR works well for continuous text blocks, but it has a hard time with tables, illustrations, graphs, GUI widgets, etc.
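For what it’s worth, the usual pytesseract call plus a page-segmentation hint; switching the mode sometimes helps a bit with sparse GUI text, but tables and widgets stay rough:

```python
import pytesseract          # pip install pytesseract; also needs the tesseract binary installed
from PIL import Image

img = Image.open("screenshot.png")

# Default mode works well on continuous text blocks...
text = pytesseract.image_to_string(img)

# ...for scattered GUI text, "sparse text" mode (--psm 11) is sometimes less bad,
# but don't expect miracles on tables, graphs, or widgets.
sparse = pytesseract.image_to_string(img, config="--psm 11")
```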
That’s what Recall is… It’s literally screenshotting and OCR / AI parsing combined with an SQLite database.
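And the searchable part is not hard to replicate; a sketch with Python’s built-in sqlite3 and an FTS5 table (table and column names are made up, and this assumes your SQLite build ships FTS5, which most do):

```python
import sqlite3

con = sqlite3.connect("recall.db")

# Full-text index over the OCR'd text; path and timestamp are stored but not indexed.
con.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(text, path UNINDEXED, ts UNINDEXED)"
)

def store(text: str, path: str, ts: int) -> None:
    con.execute("INSERT INTO shots (text, path, ts) VALUES (?, ?, ?)", (text, path, ts))
    con.commit()

def search(query: str):
    # "Scroll back through time": full-text match, newest first.
    return con.execute(
        "SELECT path, ts FROM shots WHERE shots MATCH ? ORDER BY ts DESC", (query,)
    ).fetchall()
```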
I think it would be hugely useful.
But obviously I don’t want a malware company like Microsoft doing that “for me” (actually the purpose is hyperspecific ads if not long term planning to exfiltrate the data).
Not sure if I even trust myself with the security that data would require.
If only MS used DuckDB then they wouldn’t have such a huge PR disaster on their hands.
Hmmm… this gives me an idea… maybe we could even write a special algorithm that checks whether only certain parts of picture have changed, and store only those, while re-using the parts that haven’t changed. It would be a specialized compression algorithm for Moving Pictures. But that sounds difficult, it would probably need a whole Group of Experts to implement. Maybe we can call it something like Moving Picture Experts Group, or MPEG for short : )
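Joking aside, a naive version of that idea is pretty easy; a sketch using Pillow’s ImageChops that keeps only the bounding box of whatever changed between two consecutive shots:

```python
from PIL import Image, ImageChops

def changed_region(prev_path: str, curr_path: str):
    """Return (box, crop) for the part of curr that differs from prev, or None if nothing changed."""
    prev = Image.open(prev_path)
    curr = Image.open(curr_path)

    diff = ImageChops.difference(prev, curr)   # per-pixel absolute difference
    box = diff.getbbox()                       # bounding box of non-zero pixels
    if box is None:
        return None                            # identical frames: store nothing
    return box, curr.crop(box)                 # store only the changed rectangle
```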