• 1 Post
  • 13 Comments
Joined 1 month ago
Cake day: December 15, 2024

  • Good idea yeah! The 20x20 one will look a little crunchier than the actual icon would end up looking, since the actual image file still has a much higher resolution than 20x20; it’s just the displayed height/width that gets smaller.

    The 20x20 one will probably be closer to the same quality as the 120x120 one you posted (probably a bit more blurry), and the 120x120 size will be closer to the full quality one.
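    If you want to eyeball the difference yourself, here’s a quick Pillow sketch that just downsizes a high-res source to both display sizes (the filenames are made up, swap in the actual icon file):

    ```python
    # Preview roughly what the icon looks like at each display size.
    # Requires Pillow; filenames here are hypothetical.
    from PIL import Image

    icon = Image.open("icon_full.png")  # e.g. a 512x512 source image

    # These previews approximate what the icon looks like when the browser
    # scales the full-resolution source down to each display size.
    icon.resize((20, 20), Image.LANCZOS).save("preview_20.png")
    icon.resize((120, 120), Image.LANCZOS).save("preview_120.png")
    ```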



  • Yeah, those are basically my thoughts too lol. Even if it ends up not working out, the process of trying it will still be good since it’ll give me more experience. The aspects you’re wary of are definitely my 2 biggest concerns too. I think (or at least hope) that with the rules I’m thinking of for how trust is generated, it would mostly affect behaviour positively? I’m imagining that rewarding trust for receiving positive replies, combined with a small reward for making positive replies in the first place, would mostly just lead to more positive interactions overall (there’s a rough sketch of the rules at the end of this comment). And I don’t think I’d ever want a system like this to punish making a negative reply, only maybe punish getting negative replies in response, since hopefully that keeps people from avoiding confrontation of harmful content just to dodge a punishment. Honestly it might even be better to only ever reward trust and never retract it except via decay over time, but that’s something worth testing I imagine.

    And in terms of gaming the system, I do think that’s kinda my bigger concern tbh. I feel like the most likely negative outcome is something like bots/bad actors finding a way to scam it, or the community turning into an echo chamber where ideas (that aren’t harmful) get pushed out, or the community drifting towards the center and becoming less safe for marginalized people. I do feel like that’s part of the reason 196 would be a pretty good community to use a system like this in though, since there’s already a very strong foundation of super cool people that could be made the initial trusted group, which would hopefully lead to a better result.

    There are examples of similar sorts of systems that exist, but it’s mostly various cryptocurrencies or other P2P systems that use trust just to verify that peers aren’t malicious, and it’s never really been tested for moderation afaik (I could have missed an example of it online, but I’m fairly confident in saying this). I think stuff like the Fediverse and other decentralized or even straight-up P2P networks are a good place for this sort of thing to work though, as a lot of the culture is already conducive to decentralizing previously centralized systems, and the communities tend to be smaller, which helps it feel more personal and discourages bad actors/botting attempts since there aren’t a ton of incentives and they become easier to recognize.
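    To make the rules above a bit more concrete, here’s a minimal sketch assuming a simple per-user trust score and a sentiment score for each reply; all the names and constants are made up, not anything I’ve settled on:

    ```python
    # Hypothetical reward constants, nothing final.
    REWARD_RECEIVED_POSITIVE = 1.0   # main reward: your comment got a positive reply
    REWARD_MADE_POSITIVE = 0.2       # small reward: you made a positive reply
    PENALTY_RECEIVED_NEGATIVE = 0.5  # optional: only penalize *receiving* negative replies

    def update_trust(trust: dict[str, float], author: str, replier: str, sentiment: float) -> None:
        """Adjust trust after `replier` replies to a comment by `author`.

        `sentiment` is a score in [-1, 1] from some sentiment model.
        Making a negative reply is never punished, so calling out harmful
        content doesn't cost the replier anything.
        """
        if sentiment > 0:
            trust[author] = trust.get(author, 0.0) + sentiment * REWARD_RECEIVED_POSITIVE
            trust[replier] = trust.get(replier, 0.0) + sentiment * REWARD_MADE_POSITIVE
        else:
            # The reward-only variant would just drop this branch and rely on decay alone.
            trust[author] = trust.get(author, 0.0) + sentiment * PENALTY_RECEIVED_NEGATIVE
    ```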



  • I’ve been thinking recently about chain-of-trust algorithms and decentralized moderation, and am considering making a bot that functions a bit like fediseer but is designed more for individual users, where people can be vouched for by other users. Ideally you end up with a network where trust is generated semi-automatically based on interactions between users, and reports could be used to gauge whether a post should be removed based on the trust level of the people making the reports vs the person getting reported. It wouldn’t necessarily be a perfect system, but I feel like there would be a lot of upsides to it, and it could hopefully mean mods/admins only need to remove the most egregious stuff while anything more borderline gets handled via community consensus. (The main issue is lurkers would get ignored with this, but idk if there’s a great way to avoid something like that happening tbh)

    My main issue atm is how to do vouching without it being too annoying for people to keep up with. Not every instance enables downvotes, plus upvote/downvote totals in general aren’t necessarily reflective of someone’s trustworthiness. I’m thinking maybe it can be based on interactions, where replies to posts/comments get scored by a sentiment analysis model and that positive/negative score is what gets used (there’s a rough sketch at the end of this comment)? I still don’t think that’s a perfect solution or anything, but it would probably be a decent starting point.

    If trust decays over time as well then it rewards more active members somewhat, and means that it’s a lot harder to build up a bot swarm. If you wanted any significant number of accounts you’d have to have them all posting at around the same time which would be a lot more obvious an activity spike.

    Idk, this was a wall of text lol, but it’s something I’ve been considering for a while and whenever this sort of drama pops up it makes me want to work on implementing something.
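    To make the wall of text a bit more concrete, here’s roughly how I picture the decay and the report weighting working, as a sketch (the function names, the decay factor, and the threshold are all made up):

    ```python
    def decay_trust(trust: dict[str, float], factor: float = 0.99) -> None:
        """Apply a small periodic decay so inactive accounts (and dormant
        bot swarms) slowly lose influence."""
        for user in trust:
            trust[user] *= factor

    def should_remove(trust: dict[str, float], reporters: list[str], reported: str,
                      threshold: float = 2.0) -> bool:
        """Remove a post when the combined trust of the reporters sufficiently
        outweighs the trust of the person being reported."""
        reporter_trust = sum(trust.get(u, 0.0) for u in reporters)
        return reporter_trust > trust.get(reported, 0.0) + threshold
    ```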


  • Thanks! This is the first time I used nail polish remover first and put a top coat on at the end, and it feels way stronger and hasn’t chipped at all yet. I’ll have to get a base coat though; the brand I’ve been ordering from doesn’t sell one, but I’ve heard good things about using one online too, so I’ll look for one. I appreciate all the advice lol, I’m still figuring out proper technique and stuff to prevent bubbling, or getting it all over my fingers, or having the top end up textured like the brush, etc. This attempt felt way better than my first couple for sure, but I’m also definitely still learning lol.




  • One rule I think might be a good idea is that mods aren’t allowed to moderate their own posts/comment chains. Not that it’s really been an issue on 196 in the past afaik, but there are some communities where the mods will get into an argument with another user and then remove their comments under an incivility rule or something similar, which obviously has massive potential for abuse. Assuming there are enough mods that it’s not a problem to do so (which seems very likely given the number of people interested in moderating), preventing situations like that entirely seems beneficial.



  • I posted this in another thread but I also wanted to say it here so it’s more likely one of you will see it. I get the intention behind this, and I think it’s well meant, but it’s also definitely the wrong way to go about things. By lumping opposing viewpoints and misinformation together, all you end up doing is implying that having a difference in opinion on something more subjective is tantamount to spreading a proven lie, and lending credence to misinformation. A common tactic used to try to spread the influence of hate or misinformation is to present it as a “different opinion” and ask people to debate it. Doing so means others who come across the misinfo see responses discussing it, and even if most of those are attempting to argue against it, it makes the claim seem like a debatable opinion instead of an objective falsehood. Someone posting links to sources that show how being trans isn’t a mental health issue for the 1000th time won’t convince the poster that they’re wrong for believing so, but it will add another example of people arguing about the idea, making those without an opinion see both positions as equally worthy of consideration. Forcing moderators to engage in debate is the exact scenario people who post this sort of disguised hate would love.

    Even if the person posting it genuinely believes the statement to be true, there are studies that show presenting someone with sources that refute something they hold as fact doesn’t get them to change their mind.

    If the thread in question is actually subjective, then preventing moderators from removing it just because they disagree is great. The goal of preventing overmoderation of dissenting opinions is extremely important. You cannot achieve it by equating those opinions with blatant lies and hate though, as that will run counter to both goals this policy has in mind. Blurring the line between them like this will just make misinformation harder to spot, and disagreements easier to mistake for falsehoods.



  • Holy shit this is such a bad policy lol. World is known for being too aggressive at deleting a lot of content they really shouldn’t be deleting, but this policy really doesn’t seem like it will improve that. The issue is that most of the time, if they want something removed they do so and then add a policy afterwards to justify it, meaning that regardless of this rule people can’t “advocate for violence”, but they will be able to post misinformation and hate speech, since apparently “LGBTQ people are mentally ill” hasn’t been debunked enough elsewhere and a random comment chain on Lemmy is where it needs to be done. Never mind the actual harm those sorts of statements cause to individuals and the community at large.

    All I can see this doing is that the kinds of content that already get wrongly over-censored will keep getting removed, since the world admins believe they are justified in doing so, while other provably false information will be required to stay up, since the admins believe the mods aren’t justified in removing it.

    This policy seems to only apply to actual misinformation too, not just subjective debates. So a comment thread about whether violence is justified in protest would likely have one side removed, while I guess someone arguing that every trans person is a pedophile would be forced to stay up and be debated. It’s like the exact opposite of how moderation should work lol.