I run the midwest.social instance. I’m also active on lemmy.ml. @firstname.lastname@example.org
This is why I’m glad I joined some leftist communities in my area. We try to expand our community to as many people as we can, through as many organizations as we can, but we can only do so much. The US is probably the least class-conscious place in the world, so we’re in a constant uphill battle with the Cold War propaganda brain worms infecting most of the population.
I was a teenager in the early 2000s, when social media was just becoming a thing. While I did partake in early social media like MySpace and YouTube, we also did other, usually dumb, stuff. I played airsoft with friends, went spelunking in the storm water sewer system lol, watched movies with friends. I remember we found an abandoned house in the woods near a park and explored it multiple times.
Someone in IT security explained to me once that in the US the government usually pays far less than private-sector jobs, so it doesn’t get a very good pool of applicants who want to “serve”. Instead they often leverage cyber criminals’ punishment by making them work for the FBI and the like. Maybe the Chinese government pays well?
My self-hosting area is a disaster so I won’t post pics, but I run an Asustor NAS with openmediavault installed on it, and docker containers running transmission-openvpn, radarr, sonarr, and jellyfin.
I have a Pi-hole set up with recursive DNS, as well as PiVPN, so I can remote in from anywhere, access my movies/shows on the NAS, and keep ad-blocking on the go.
I run Syncthing on my daily-driver Linux computer to sync photos from my phone, so I don’t have to email myself files when I need them.
Finally, I host my Lemmy and Mastodon instances on two VPSes at DigitalOcean. Not sure that counts as self-hosting when I don’t own the hardware.
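For anyone curious what a stack like that looks like, here’s a rough docker-compose sketch of the media containers I mentioned. All paths, ports, and environment values are placeholders, not my actual config:

```yaml
# Hypothetical sketch of the container stack; images are the common
# community ones, but volumes/env values here are made-up placeholders.
version: "3"
services:
  transmission-openvpn:
    image: haugene/transmission-openvpn
    cap_add: [NET_ADMIN]
    environment:
      - OPENVPN_PROVIDER=CHANGEME   # your VPN provider goes here
    volumes:
      - /srv/downloads:/data
  radarr:
    image: lscr.io/linuxserver/radarr
    volumes:
      - /srv/movies:/movies
      - /srv/downloads:/downloads
  sonarr:
    image: lscr.io/linuxserver/sonarr
    volumes:
      - /srv/tv:/tv
      - /srv/downloads:/downloads
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"                 # Jellyfin web UI
    volumes:
      - /srv/movies:/media/movies
      - /srv/tv:/media/tv
```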
Interesting. I had the pleasure of comparing very large, complex JSON files when I was a NASA contractor. I ended up writing a somewhat custom JSON crawler in Node.js. It would first parse the entire ~500 MB JSON files into JS objects, then “crawl” through the objects asynchronously, comparing each and every value for every property and array index and writing the differences to a text file along with the full path to each value, so you’d get something like “object.array1.velocity”, but waaay longer of a path than that a lot of the time.
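The core of the crawl is just a recursive walk over both objects at once. Here’s a minimal sketch of the idea (not the actual tool; the function and path names are made up, and it collects diffs into an array rather than streaming them to a file):

```javascript
// Recursively walk two parsed JSON values in lockstep and record a
// dotted path for every leaf value that differs between them.
function diffJson(a, b, path = 'object', out = []) {
  if (typeof a !== typeof b || (a === null) !== (b === null)) {
    // Type mismatch (including a key missing on one side) is a difference.
    out.push(`${path}: ${JSON.stringify(a)} != ${JSON.stringify(b)}`);
  } else if (Array.isArray(a) && Array.isArray(b)) {
    // Compare arrays index by index, covering the longer of the two.
    const len = Math.max(a.length, b.length);
    for (let i = 0; i < len; i++) diffJson(a[i], b[i], `${path}.${i}`, out);
  } else if (typeof a === 'object' && a !== null) {
    // Compare objects key by key, over the union of both key sets.
    const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
    for (const k of keys) diffJson(a[k], b[k], `${path}.${k}`, out);
  } else if (a !== b) {
    // Primitive leaf values that don't match.
    out.push(`${path}: ${JSON.stringify(a)} != ${JSON.stringify(b)}`);
  }
  return out;
}

// Example: one differing leaf deep inside an array.
const diffs = diffJson(
  { array1: [{ velocity: 1 }] },
  { array1: [{ velocity: 2 }] }
);
console.log(diffs);
```

For the real ~500 MB files you’d stream the output to a file as you go instead of accumulating everything in memory, but the path-building logic is the same.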
Be as evil as fuck so much so that even hell doesn’t want you.