• 0 Posts
  • 145 Comments
Joined 10 months ago
Cake day: February 10th, 2024

  • I haven’t seen the actual error message that gets displayed, but “failed external validation” definitely matches how the scanning process works.

    By illegal, we are not referring to copyrighted content or anything like that, only much more serious things.

    Unfortunately, this will sometimes falsely flag content that should be allowed. In the past, such an image would have been silently erased shortly after upload, which often only became noticeable days later due to caching.
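    In case it helps to picture the flow, here is a minimal sketch of what such an external validation hook could look like, assuming the media backend POSTs each upload to a configured URL and treats any non-2xx response as a rejection (the endpoint, port, and status code here are made up for illustration, not the real scanner's contract):

```typescript
import { createServer } from "node:http";

// Hypothetical validation endpoint: the media backend is assumed to POST each
// upload here and to treat any non-2xx response as "failed external validation".
const server = createServer(async (req, res) => {
  const chunks: Buffer[] = [];
  for await (const chunk of req) chunks.push(chunk as Buffer);
  const upload = Buffer.concat(chunks);

  // scanUpload stands in for the actual classifier; it always allows in this sketch.
  const flagged = await scanUpload(upload);

  if (flagged) {
    // Rejecting at upload time surfaces the error immediately instead of
    // silently erasing the image later.
    res.writeHead(406).end("failed external validation");
  } else {
    res.writeHead(200).end("ok");
  }
});

async function scanUpload(_data: Buffer): Promise<boolean> {
  return false; // placeholder decision
}

server.listen(8080);
```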







  • Generally that is true, but when you access a remote community for the first time, Lemmy attempts to backfill several posts from it. This is limited to posts only, so comments and votes are not included. You can also “resolve” a post (or a comment, for that matter) on an instance from its fedilink (the colorful icon next to posts and comments), so when someone links to something elsewhere, many apps will try to open (resolve) it on the current instance instead, which can also make posts or comments show up even when there isn’t a subscriber. Resolving can also be done manually by entering the URL in the search. This doesn’t always work reliably on the first try, so it can help to try again if resolving something fails at first (see the sketch after this comment).

    I think there is also something that updates community information in the background from time to time. I’m not sure whether that only happens under certain conditions or at regular intervals, and I’m not sure whether it fetches new posts at that point either. If it does, it could explain new posts appearing at a roughly daily interval but without any comments or votes. Backfill itself should only happen when the community is discovered for the first time though.
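    For illustration, here is a rough sketch of resolving an object by URL through the HTTP API, assuming the /api/v3/resolve_object endpoint and that the instance allows unauthenticated resolution (newer versions may require an auth token); the retry loop reflects the “try again” advice above:

```typescript
// Sketch of resolving a remote post/comment on a chosen instance via its API.
async function resolveOnInstance(instance: string, objectUrl: string, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(
      `https://${instance}/api/v3/resolve_object?q=${encodeURIComponent(objectUrl)}`
    );
    if (res.ok) return await res.json(); // the resolved post, comment, community or person
    // Resolution doesn't always succeed on the first try, so wait a bit and retry.
    await new Promise((resolve) => setTimeout(resolve, 2_000 * (i + 1)));
  }
  throw new Error(`could not resolve ${objectUrl} on ${instance}`);
}

// Example (hypothetical URLs):
// resolveOnInstance("discuss.online", "https://lemmy.world/post/12345").then(console.log);
```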



  • I couldn’t tell you the reason for this, but several posts in !tech_memes@lemmy.world have been locked by a moderator: https://lemmy.world/modlog/959443

    As far as I know, locking a post does not affect voting; it only prevents new comments from being federated.

    For the other example you mentioned, I assume you’re referring to the inconsistency on lemmyverse.net? I haven’t looked at how that application works, but it’s unlikely to be using ActivityPub/federation; most likely it just connects to the various instances and uses their APIs. I’ve also left a comment over there about that.

    I already explained before why these posts don’t see votes on startrek.website: there is no local subscriber on that instance. Once at least one person from that instance subscribes to the community, it will start receiving updates, which includes votes. There has also been a comment by one of the startrek.website admins about the federation issues caused by them accidentally blocking certain traffic from other instances here.

    For discuss.online, there does not seem to have been a longer federation delay according to this dashboard, only a delay of about 1.5 hours at some point that was recovered from fairly quickly. It is also very possible that the first subscriber to the community on discuss.online only subscribed after the post was created, as the more recent posts seem to be doing just fine with their vote counts when comparing discuss.online and lemmy.world numbers. Looking at our database, I can see that the first subscriber to that community from discuss.online joined about 5 hours after the post was published, which would easily explain the partial votes.
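    That check can be done with a query along these lines; the table and column names (community_follower, person, instance, community) are assumptions based on my understanding of Lemmy’s schema and may differ between versions:

```typescript
import { Client } from "pg"; // node-postgres

// Rough sketch of finding when the first subscriber from a given instance
// followed a community, to compare against the post's published timestamp.
async function firstFollowFromInstance(communityName: string, followerDomain: string) {
  const db = new Client({ connectionString: process.env.LEMMY_DATABASE_URL });
  await db.connect();
  try {
    const { rows } = await db.query(
      `SELECT MIN(cf.published) AS first_follow
         FROM community_follower cf
         JOIN person p    ON p.id = cf.person_id
         JOIN instance i  ON i.id = p.instance_id
         JOIN community c ON c.id = cf.community_id
        WHERE c.name = $1
          AND i.domain = $2`,
      [communityName, followerDomain]
    );
    return rows[0]?.first_follow ?? null;
  } finally {
    await db.end();
  }
}
```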


  • There seem to be two separate issues relating to that.

    The number at the top includes “all” communities, including those marked as NSFW.

    At a quick glance, it seems all the NSFW-marked ones are correctly labeled as such, in the sense of also being NSFW on Lemmy.

    There are also a large number of communities missing overall, but at least the number next to the community tab matches the number of listed communities when the filter is set to also show NSFW communities (see the sketch after this comment).

    There is also either some kind of data corruption going on, or there may have been some strange spam communities on lemmy.world in the past, as it shows a bunch of communities with random numbers in their names and display names like oejwfiojwwqpofioqwfiowqiofkwqeifjwefwefoejwfiojwwqpofioqwfiowqiofkwqeifjwefwefoejwfiojwwqpofioqwfiowqiofkwqeifjwefwefoejwfiojwwqpofioqwfiowqiofkwqeifjwefwefoejwfiojwwqpofioqwfiowqiofkwqeifjwefwefoejwfiojwwqpofioqwfiowqiofkwqeifjwefwef which don’t currently exist on lemmy.world.
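    To reproduce the count comparison, a quick sketch that tallies an instance’s local communities with and without NSFW included, assuming the /api/v3/community/list endpoint with type_, show_nsfw, limit and page parameters (parameter names are to the best of my knowledge and may differ between API versions):

```typescript
// Sketch: count the local communities an instance reports, optionally
// including NSFW ones, by paging through /api/v3/community/list.
async function countLocalCommunities(instance: string, showNsfw: boolean): Promise<number> {
  let count = 0;
  for (let page = 1; ; page++) {
    const url =
      `https://${instance}/api/v3/community/list` +
      `?type_=Local&show_nsfw=${showNsfw}&limit=50&page=${page}`;
    const res = await fetch(url);
    const { communities } = await res.json();
    count += communities.length;
    if (communities.length < 50) break; // fewer than a full page means we reached the end
  }
  return count;
}

// Hypothetical comparison:
// console.log(await countLocalCommunities("lemmy.world", false));
// console.log(await countLocalCommunities("lemmy.world", true));
```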


  • There is indeed a cutoff. There is an exponential delay between retries, and at some point Lemmy will stop trying until it sees the instance as active again.

    There is also a scheduled task that runs once a week and deletes local activities older than a week. Downtimes of a day or two can generally be recovered from easily, although depending on latency it can take a lot more time. If an instance is down for an extended period, it shouldn’t expect to still receive activities from the entire time it was offline.
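    Purely as an illustration (not Lemmy’s exact numbers), this is roughly what an exponential retry schedule looks like; the base delay and cap below are made up:

```typescript
// Illustrative exponential retry schedule: every failed delivery roughly
// doubles the wait before the next attempt, up to some cap, until the
// instance is eventually treated as inactive. Base delay and cap are invented.
function retryDelaySeconds(attempt: number, baseSeconds = 60, capSeconds = 60 * 60 * 24): number {
  return Math.min(baseSeconds * 2 ** attempt, capSeconds);
}

// attempt 0 -> 60 s, 1 -> 120 s, 2 -> 240 s, ... capped at one day in this sketch.
for (let attempt = 0; attempt < 12; attempt++) {
  console.log(attempt, retryDelaySeconds(attempt));
}
```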


  • Downtime should not result in missing content when the sending instance is running Lemmy 0.19.0 or newer. 0.19.0 introduced a persistent federation queue, which means Lemmy will keep retrying the same activities until the instance is available again. Depending on the type of downtime, it is also possible that a misconfiguration (e.g. a “wrong” HTTP status code on a maintenance page) makes the sending instance think the activity was delivered successfully. If the receiving instance was unreachable (timeout) or returning HTTP 5xx errors, everything should be preserved (a rough sketch of this classification follows below).

    We are planning to post an announcement about the current situation with Lemmy updates and our future plans in the coming days, so stay tuned for that. You can find some info in my comment history already if you are curious.
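    As mentioned above, and only as an illustration rather than Lemmy’s actual code, the way a sender might classify delivery outcomes could look roughly like this (HTTP signatures and the persistent queue itself are omitted):

```typescript
// Sketch of how a sender might classify a delivery attempt, mirroring the
// behaviour described above: 2xx counts as delivered (even if it actually
// came from a misconfigured maintenance page), while timeouts and 5xx
// responses stay queued for retry, so nothing is lost.
type DeliveryOutcome = "delivered" | "retry_later" | "rejected";

async function deliverActivity(inboxUrl: string, activity: object): Promise<DeliveryOutcome> {
  try {
    const res = await fetch(inboxUrl, {
      method: "POST",
      headers: { "content-type": "application/activity+json" },
      body: JSON.stringify(activity),
      signal: AbortSignal.timeout(10_000), // treat slow/unreachable servers as a timeout
    });
    if (res.ok) return "delivered"; // includes a maintenance page that wrongly returns 200
    if (res.status >= 500) return "retry_later"; // server errors are retried
    return "rejected"; // other responses treated as permanent failures in this sketch
  } catch {
    return "retry_later"; // timeout or connection failure: keep the activity queued
  }
}
```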