• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: March 8th, 2024



  • If they actually wanted to protect children, the answer is simple: reverse the responsibilities. Require porn sites to include metadata indicating the content isn’t safe for minors. Require browsers to recognize that metadata and filter out such content when parental controls are enabled. If parents are still too lazy to turn it on, make it the default (like “safe search”, but more effective). The fact that none of them have even suggested this is proof they don’t care about children or even porn; they just want to be seen as if they do.
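
    To make the mechanism concrete, here is a minimal sketch (in Python) of what the browser-side check might look like. It assumes sites label themselves with the existing “rating”/RTA-style meta tag; the function names and the blocking policy are my own illustration, not any real browser API.

    ```python
    # Sketch: client-side filtering based on self-declared site metadata.
    # Assumes sites mark adult content with a <meta name="rating"> tag
    # (e.g. the RTA label or "adult"); everything else here is hypothetical.
    import re
    import urllib.request
    from typing import Optional

    ADULT_RATINGS = {"adult", "mature", "rta-5042-1996-1400-1577-rta"}

    def page_is_adult(html: str) -> bool:
        """Return True if the page declares itself adult-only via a rating meta tag."""
        pattern = r'<meta[^>]+name=["\']rating["\'][^>]+content=["\']([^"\']+)["\']'
        for match in re.finditer(pattern, html, re.IGNORECASE):
            if match.group(1).strip().lower() in ADULT_RATINGS:
                return True
        return False

    def fetch_with_parental_controls(url: str, controls_enabled: bool = True) -> Optional[str]:
        """Fetch a page, but refuse to hand it over if controls are on and it self-rates as adult."""
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        if controls_enabled and page_is_adult(html):
            return None  # the browser would show a "blocked by parental controls" page instead
        return html
    ```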


  • Big food is kind of a marketing thing in America. Restaurants want to give their customers more “bang for their buck” (or at least appear to), but they don’t want to lower prices. Instead, they increase portions. This has led to a size arms race where every restaurant wants to claim it has the biggest food in town. This is especially the case for burger joints. It doesn’t matter to the restaurant whether customers eat all their food, since they pay for all of it either way. I’m guessing Americans are more culturally susceptible to this marketing tactic, since bigger-is-better is common here, and hence things have been taken further than in other countries.

    This seems to be another case of someone throwing reason out the door for the sake of insulting Americans. There is no way you would be getting “shit eating grins” for ordering a kids’ meal. And if your large burgers are smaller than a kids’ meal, you either have very little size variation, or the small would be like a single bite.



  • Absolutely none of this law was ever about privacy or mental health. No one ever claimed it was. The law bans TikTok because it is based in China; that is the reason given in the law itself. The possibility that Meta or Google or some other American company will buy or replace TikTok and operate the same way is not an unintended outcome. Getting ByteDance to sell TikTok to an American company is literally the whole point of the law.



  • You are misrepresenting a lot of stuff here.

    it’s behavior is unpredictable

    This entirely depends on the quality of the AI and the task at hand. A well-made AI can be relatively predictable. However, most tasks that AI excels at are tasks which themselves do not have a predictable solution. For instance, handwriting recognition can be solved by a neural network with better-than-human accuracy. That task does not have a perfect solution, and there is no single ideal answer for each possible input (one person’s ‘a’ could look exactly the same as another’s ‘o’). The same can be said for almost all games, especially those involving a human player.
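
    As a toy-sized but concrete illustration of that point, here is a short sketch using scikit-learn’s bundled handwritten-digits dataset; the dataset choice and hyperparameters are mine, picked only to show a small neural network learning a task that has no single “correct” mapping from pixels to labels.

    ```python
    # Sketch: a small neural network classifying handwritten digits.
    # Uses scikit-learn's bundled 8x8 digits dataset; settings are illustrative.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    # There is no "ideal" answer for every possible scribble, so the only meaningful
    # measure is accuracy on held-out examples the network has never seen.
    print(f"test accuracy: {model.score(X_test, y_test):.3f}")
    ```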

    and therefore cannot be tested

    Unpredictable things can be tested; that’s pretty much what the entire field of statistics and probability is about. Also, testability is a fundamental requirement for any kind of machine learning. It isn’t just a good-practice kind of thing: if you can’t test your model, you don’t even have a model in the first place. The whole point is to create many candidate models and test them to find the best one.
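
    For example (reusing the toy digits setup above; the candidate list is my own), “testing the unpredictable” just means evaluating each candidate on data it never saw and keeping whichever scores best:

    ```python
    # Sketch: compare several candidate models with cross-validation and keep the best.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)

    candidates = {
        "small net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
        "wider net": MLPClassifier(hidden_layer_sizes=(128,), max_iter=500, random_state=0),
        "two layers": MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
    }

    for name, model in candidates.items():
        scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
        print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
    ```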

    It would cheat and find ways to know things about the game state that it’s not supposed to know

    A neural network only knows what you tell it. If you don’t tell it where the player is, it’s not going to magically deduce it from nothing. Also, its output has to be interpreted to even be used: the raw output is a vector of numbers, and how that gets transformed into usable actions is entirely up to the developer. If that transformation allows violating the rules, that’s the developer’s fault, not the network’s. The same can be said of human input; it is the developer’s responsibility to transform it into permissible in-game actions.
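
    A minimal sketch of that interpretation step (the action names, shapes, and “legal moves” check are all made up for illustration): whatever the raw output vector is, the developer’s mapping only ever produces moves the rules allow.

    ```python
    # Sketch: turning a raw network output vector into a legal game action.
    # The network never acts on the game directly -- the developer's mapping does.
    import numpy as np

    ACTIONS = ["move_left", "move_right", "jump", "wait"]

    def choose_action(raw_output: np.ndarray, legal_actions: set[str]) -> str:
        """Pick the highest-scoring action that the game rules currently allow."""
        order = np.argsort(raw_output)[::-1]   # action indices from best to worst score
        for idx in order:
            if ACTIONS[idx] in legal_actions:
                return ACTIONS[idx]
        return "wait"                          # safe fallback if nothing is legal

    # Even if the network "prefers" jumping, it cannot jump when the rules forbid it.
    raw = np.array([0.1, 0.2, 0.9, 0.05])
    print(choose_action(raw, legal_actions={"move_left", "move_right", "wait"}))  # -> move_right
    ```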

    it would hide in a corner as far away from the player as possible because it’s parameters is to avoid death

    That is possible, which is why you should design a performance metric that reflects what you actually want the AI to do. This is a very common issue and is just part of the process of making an AI; it is not an insurmountable problem.
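
    A sketch of the kind of metric adjustment meant here (the terms and weights are invented purely for illustration): if “avoid death” on its own rewards hiding in a corner, add terms for actually engaging the player.

    ```python
    # Sketch: reshaping a fitness metric so "survive" doesn't collapse into "hide in a corner".
    def fitness(survived_seconds: float, time_near_player: float, shots_at_player: int) -> float:
        survival_term = survived_seconds          # original goal: don't die
        engagement_term = 2.0 * time_near_player  # reward staying in the fight
        aggression_term = 5.0 * shots_at_player   # reward actually threatening the player
        return survival_term + engagement_term + aggression_term

    # A corner-camper that survives 60s but never engages now scores worse than
    # an agent that survives only 40s while pressuring the player.
    print(fitness(60, 0, 0))    # 60.0
    print(fitness(40, 20, 4))   # 100.0
    ```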

    Neural networks have been used to play countless games before. It’s probably one of the most studied use cases simply because it is so easy to do.


  • That’s not how copyright works (at least not in the US). When a corporation creates a copyrighted work (by way of paying the person(s) who actually made it), the duration is 95 years after publication or 120 years after creation, whichever expires first. The lifetime of any employee is not taken into account. When a copyright is held by a person, it lasts until 70 years after that person dies. You cannot swap out that person for someone else, even if the owner of the copyright changes.

    You are probably thinking of a method used to make private agreements last basically forever. A private contract technically isn’t allowed to last forever; there has to be some point of expiration. To make a contract last effectively forever anyway, the parties pick a condition that probably won’t occur for a ridiculously long time, such as the death of the last descendant of the king of England (I assume this is used because the royal family keeps good genealogy records). If a currently living person is required, they might pick some infant relative to make it last as long as possible.




  • Why do you need so much info on Mike? Can’t you just evaluate his statements and work on their own merits? The whole point of open-source, federated platforms is that you don’t have to trust him. If he decides to enshittify it, you can just go with a fork or another instance. A nomadic identity isn’t a centralized alternative to the fediverse; it’s just a way of bringing some of the features of a centralized identity to a decentralized one (at least, that’s how I interpreted the article).