I've been thinking about what kbin can do to combat spam accounts, which are currently on the rise again on kbin.social.

In the past this prevalence of spam has caused issues with federation, so it’s potentially a major problem not just for kbin.social but the fediverse overall if spam accounts aren’t identified and blocked/deleted quickly.

USER LEVEL

Individual users can block accounts, which is good for silencing accounts that annoy you but might otherwise contribute positively, though not so good for addressing instance-wide spammers.

MAGAZINE/COMMUNITY LEVEL

Moderators can block accounts at a magazine/community level, which is good for addressing trolls or bots that infest a single magazine, but not so good for addressing instance-wide spammers.

The other downside is that, as most magazines have only a single moderator, it may take days for spammers to be blocked, depending on how active the mod is. In addition, there are thousands of magazines on kbin which are abandoned (ie not actively moderated), so spammers posting to these communities won't be blocked at all.

Increasing the number of mods would help (especially if they could ensure 24/7 coverage), but it's important to keep in mind that the fediverse is still tiny compared to places like reddit, and there are very few people willing and able to take on these roles, especially on a volunteer basis.

INSTANCE LEVEL

Reporting spam

There is a "report" function, and presumably reports generate messages for the instance administrator (@ernest in the case of kbin.social) to action.

I don't know what the admin interface for this looks like, but it may influence how easily spam accounts can be blocked. For example, if users report 100 posts belonging to 10 different accounts as spam, does this generate 100 separate messages that ernest has to review and action (which could be laborious), or does it group them into 10 "queues", one per reported account (which would be far less laborious to review and action)?
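To illustrate the difference, here's a toy Python sketch — the field names and data shapes are made up for illustration, not kbin's actual schema:

```python
from collections import defaultdict

# Hypothetical report records; kbin's real data model will differ.
reports = [
    {"reported_user": "spammer1", "post_id": 101},
    {"reported_user": "spammer1", "post_id": 102},
    {"reported_user": "spammer2", "post_id": 203},
    # ... imagine 100 reports spread across 10 accounts
]

# Ungrouped: one queue item per report (100 items to review).
flat_queue = list(reports)

# Grouped: one queue item per reported account (10 items to review),
# each carrying all of that account's reported posts.
grouped_queue: dict[str, list[int]] = defaultdict(list)
for report in reports:
    grouped_queue[report["reported_user"]].append(report["post_id"])

for user, post_ids in grouped_queue.items():
    print(f"{user}: {len(post_ids)} reported posts -> one review decision")
```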

The other limitation, of course, is that, as with magazine-level modding, we're constrained by the fact that kbin.social currently has only one administrator, who has a job and a personal life, and is also working hard on further developing the platform.

Tools/approaches that could be used/developed to manage spam at an instance level

I’m not sure what spam combatting abilities are built into (or envisaged for) kbin at an instance level, over and above the “report” function, but some ideas I had are:

A) Appoint more administrators (or other system roles with the ability to block/delete spam accounts)

Ernest could appoint more administrators (or people in other system-level roles, ie not necessarily full administrators) with the ability to deal with spam.

Upsides:
- Probably relatively easy to implement (depending on what system-level roles already exist)

Downsides:
- As with community moderators, there are potential issues of coverage and commitment.
- We may decry corporate-owned social media platforms like reddit, but - being businesses with plenty of money coming in - they can at least pay people to keep an eye on the community (by which I mean admins, not mods), ensure the stability and uptime of the site, and develop enhancements. All of these are more difficult for small, privately-funded systems. But that's a much bigger topic, best left for another day.

B) Limit accounts by IP address

Most spammers create multiple accounts, so limiting the number of new accounts per IP address could help, although that limit shouldn't necessarily be as low as 1 (you wouldn't want to prevent genuine alt accounts).
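A minimal sketch of how such a limit might work — the 7-day window, the cap of 3, and the in-memory storage are all assumptions for illustration, not anything kbin actually implements:

```python
import time
from collections import defaultdict

REGISTRATION_WINDOW = 7 * 24 * 3600  # look-back window: 7 days (assumed)
MAX_ACCOUNTS_PER_IP = 3              # allow a few genuine alts (assumed)

# ip -> timestamps of recent registrations; a real instance would persist this
registrations_by_ip: dict[str, list[float]] = defaultdict(list)

def may_register(ip: str) -> bool:
    """Allow registration unless this IP has hit the cap within the window."""
    now = time.time()
    recent = [t for t in registrations_by_ip[ip] if now - t < REGISTRATION_WINDOW]
    registrations_by_ip[ip] = recent  # drop expired entries
    if len(recent) >= MAX_ACCOUNTS_PER_IP:
        return False
    recent.append(now)  # record this registration
    return True
```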

Upsides:
- Prevents too many accounts from being created from a single IP address (ie most likely by a single person)

Downsides:
- Can be bypassed relatively easily by using VPNs (though it adds an extra step that spammers have to take)
- Could prevent genuine users from registering (eg if multiple genuine users share an IP address)

C) Manually review and approve new accounts

Some instances require new accounts to answer a few questions so admins can assess their suitability (and humanity). kbin could institute something like this.

Upsides:
- This could at least limit the creation of new spam accounts, which currently seem to spring up like weeds.

Downsides:
- This approach requires time and resources to set up and keep running.
- It impedes the sign-up experience for genuine users (especially if it takes hours or days to be approved).
- It could be bypassed by sophisticated responses to the challenge questions.

D) Rate limit new accounts

New accounts could be throttled so that they can only post one thread/reply per (let's say) 15 minutes. This limitation could be removed after a certain amount of time or number of posts.
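A rough sketch of what the throttle check could look like, with the 15-minute interval and one-week probation period as made-up stand-ins for admin-definable parameters:

```python
import time
from dataclasses import dataclass

POST_INTERVAL = 15 * 60           # one post per 15 minutes (assumed)
PROBATION_PERIOD = 7 * 24 * 3600  # throttle lifted after 7 days (assumed)

@dataclass
class Account:
    created_at: float
    last_post_at: float = 0.0

def may_post(account: Account) -> bool:
    """Throttle posting while an account is still in its probation period."""
    now = time.time()
    if now - account.created_at >= PROBATION_PERIOD:
        return True  # probation over: no throttle
    if now - account.last_post_at >= POST_INTERVAL:
        account.last_post_at = now  # allow this post and restart the timer
        return True
    return False  # still inside the 15-minute window
```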

Upsides:
- Limits the “productivity” of spam accounts, making it more difficult for spammers.

Downsides:
- Requires time and effort to build
- Impedes user experience for genuine users
- Depending on how the posting throttle is relaxed, this system could be gamed. For instance, if the throttling is removed after (say) one week, all a spammer has to do is wait a week before starting to spam.

E) Tie posting limits to reputation or mod reports

The "rate limit new accounts" approach above could be supplemented so that posting limits are only removed if the account has a neutral or positive reputation, and/or if the account has not been repeatedly reported for spamming.

So, for example, someone registers a new account. For the first week (or whatever period is set by an admin-definable parameter), that account can only post once every 15 minutes (or whatever interval is set by an admin-definable parameter).

After that first week the system reviews the status of the account. (Alternatively this review could be run after the first X posts rather than after X days.)

If the overall net reputation of the account is below an admin-definable threshold (let's say, lower than -5), the restrictions remain in place and the account is flagged for an admin (or similar role) to manually review and either block/delete or approve. If the net reputation is at or above the threshold, the posting limits are removed automatically, ie without manual intervention being required.

Alternatively (or additionally) the system could check how often posts by that account have been reported. If it has been reported more than an admin-definable number of times, posting limits remain in place and the account is flagged for an admin to review.
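A sketch of what that end-of-probation review might look like — the threshold values are placeholders for the admin-definable parameters mentioned above:

```python
REPUTATION_THRESHOLD = -5  # admin-definable (placeholder value)
REPORT_THRESHOLD = 3       # admin-definable (placeholder value)

def review_account(net_reputation: int, report_count: int) -> str:
    """End-of-probation review.

    Note that a failing account is never blocked automatically; it just
    stays rate-limited and is queued for a human decision.
    """
    if net_reputation < REPUTATION_THRESHOLD or report_count > REPORT_THRESHOLD:
        return "keep_limits_and_flag_for_admin_review"
    return "remove_posting_limits"
```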

Upsides:
- Limits the “productivity” of spam accounts
- Uses the collective user base to identify spam accounts in a more sophisticated way than just reporting them to mods/admins, ie by creating a dataset that an inbuilt system can use to throttle/block spammers more easily

Downsides:
- Requires considerably more time and effort to build
- Still requires a level of ongoing manual administration
- Could be "gamed" by malicious users who downvote/report even worthwhile posts. (This is why I think the system should not outright block users automatically but only rate limit them, and why an admin should have the ability to manually approve users for normal posting. Just because someone posts unpopular opinions doesn't mean they're posting spam, and a manual review could accommodate this.)

THE WAY FORWARD

The above are only some potential ideas; I'm sure there are others, and I'm sure there are issues I haven't identified either.

Perhaps the way forward is to look at what can be done:

  • short term
  • longer term

as what's required right now to stomp the current spammers on the head may not be the optimal long-term solution.

  • rhythmisaprancer@kbin.social · 11 months ago

    I don't have anything to add but appreciate you beginning the discussion. It seems that all instances will periodically face this issue; perhaps it would be good to have a proactive plan in place for when it becomes relentless.