

I’ve always wondered what to do in this kind of situation. Somebody ought to know about it, but who do you call? Science? Ghostbusters?
If they actually wanted to protect children, the answer is simple: reverse the responsibilities. Require porn sites to include metadata indicating the content isn’t safe for minors. Require browsers to recognize that metadata and filter out that content if parental controls are enabled. If parents are still too lazy to turn it on, make it the default (like “safe search”, but more effective). The fact that none of them has even suggested this is proof they don’t care about children or even porn; they just want to be seen as if they do.
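A minimal sketch of what that scheme could look like. It reuses the existing `<meta name="rating" content="adult">` self-labeling convention (and the RTA label string) that search engines already recognize; the browser-side filtering logic here is hypothetical.

```python
from html.parser import HTMLParser

class RatingScanner(HTMLParser):
    """Looks for a self-applied adult-content rating meta tag."""
    def __init__(self):
        super().__init__()
        self.adult = False

    def handle_starttag(self, tag, attrs):
        a = {k: (v or "") for k, v in attrs}
        if tag == "meta" and a.get("name", "").lower() == "rating":
            # "adult" and the RTA label are existing self-labeling values.
            if a.get("content", "").lower() in ("adult", "rta-5042-1996-1400-1577-rta"):
                self.adult = True

def should_block(html: str, parental_controls: bool = True) -> bool:
    """Hypothetical browser-side check: block labeled pages when controls are on."""
    scanner = RatingScanner()
    scanner.feed(html)
    return parental_controls and scanner.adult
```

With parental controls enabled, `should_block('<meta name="rating" content="adult">')` returns `True`; unlabeled pages pass through untouched, so the burden sits on the sites, not the parents.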
Big food is kind of a marketing thing in America. Restaurants want to give their customers more “bang for their buck” (or at least appear to), but they don’t want to lower prices. Instead, they increase portions. This has led to a size arms race where every restaurant wants to claim they have the biggest food in town. This is especially the case for burger joints. It doesn’t matter to the restaurant if customers eat all their food, since they pay for all of it either way. I’m guessing Americans are more culturally susceptible to this marketing tactic, since bigger-is-better thinking is common here, and hence things have been taken further than in other countries.
This seems to be another case of someone throwing reason out the door for the sake of insulting Americans. There is no way you would be getting “shit-eating grins” for ordering a kids’ meal. And if your large burgers are smaller than a kids’ meal, you either have very little size variation, or the small would be like a single bite.
Think you mean mmHg
Absolutely none of this law was ever about privacy or mental health. No one ever claimed it was. The law bans TikTok because it is based in China. That is the reason given by the law itself. The possibility that Meta or Google or some other American company will buy or replace TikTok and operate the same way is not an unintended outcome. It is literally the whole point of the law to get ByteDance to sell TikTok to an American company.
Making a nuke is pretty difficult, even for a whole nation. It would probably take years for them to develop a nuke if they started from scratch.
You are misrepresenting a lot of stuff here.
> it’s behavior is unpredictable
This entirely depends on the quality of the AI and the task at hand. A well-made AI can be relatively predictable. However, most tasks that AI excels at are tasks which themselves do not have a predictable solution. For instance, handwriting recognition can be solved by a neural network with much better than human accuracy. That task does not have a perfect solution, and there is not an ideal answer for each possible input (one person’s ‘a’ could look exactly the same as another’s ‘o’). The same can be said for almost all games, especially those involving a human player.
> and therefore cannot be tested
Unpredictable things can be tested. That’s pretty much what the entire field of statistics and probability is about. Also, testability is a fundamental requirement for any kind of machine learning. It isn’t just a good practice kind of thing; if you can’t test your model, you don’t even have a model in the first place. The whole point is to create many candidate models and test them to find the best one.
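A toy illustration of that point (the models and data here are hypothetical): even when a model’s individual answers look unpredictable, its accuracy on a held-out test set is a perfectly measurable statistic, and comparing candidates on that statistic is exactly how model selection works.

```python
import random

def accuracy(model, test_set):
    """Fraction of held-out examples the model labels correctly."""
    correct = sum(1 for x, label in test_set if model(x) == label)
    return correct / len(test_set)

rng = random.Random(0)  # seeded so the "unpredictable" model is reproducible
test_set = [(x, x % 2) for x in range(1000)]  # toy task: label is the parity of x

# A noisy model that answers correctly ~90% of the time, and a broken one.
noisy_model = lambda x: x % 2 if rng.random() < 0.9 else 1 - x % 2
broken_model = lambda x: 0

candidates = {"noisy": noisy_model, "broken": broken_model}
best = max(candidates, key=lambda name: accuracy(candidates[name], test_set))
print(best)  # the ~90%-accurate model wins despite its randomness
```

The noisy model never gives a guaranteed answer on any single input, yet its measured accuracy cleanly separates it from the broken one.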
> It would cheat and find ways to know things about the game state that it’s not supposed to know
A neural network only knows what you tell it. If you don’t tell it where the player is, it’s not going to magically deduce it from nothing. Also, its output has to be interpreted to even be used. The raw output is a vector of numbers. How this is transformed into usable actions is entirely up to the developer. If that transformation allows violating the rules, that’s the developer’s fault, not the network’s. The same can be said of human input; it is the developer’s responsibility to transform that into permissible actions in game.
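A hypothetical sketch of that interpretation layer: the network emits raw scores for every action, and developer-written code picks the best *legal* one. The action names and scores are made up for illustration.

```python
ACTIONS = ["move_left", "move_right", "jump", "open_door"]

def choose_action(raw_scores, legal_actions):
    """Map the network's raw score vector to the highest-scoring legal action."""
    # Scores for actions the game state forbids are simply discarded,
    # so the network cannot "cheat" its way past the rules.
    candidates = [(score, action) for score, action in zip(raw_scores, ACTIONS)
                  if action in legal_actions]
    return max(candidates)[1]

# The door is locked, so "open_door" is filtered out even though the
# network scored it highest:
print(choose_action([0.1, 0.3, 0.2, 0.9], {"move_left", "move_right", "jump"}))
# -> move_right
```

The same gate applies to a human player’s button presses; the enforcement lives in the game code, not in whoever (or whatever) is supplying the input.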
> it would hide in a corner as far away from the player as possible because it’s parameters is to avoid death
That is possible, which is why you should design a performance metric that reflects what you actually want it to do. This is a very common issue and is just part of the process of making an AI. It is not an insurmountable problem.
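A toy example of that fix (the reward terms and numbers are hypothetical): the naive metric only values survival, so corner-camping maximizes it, while a shaped metric that also pays for engaging the player makes camping a losing strategy.

```python
def naive_reward(survived_ticks, damage_dealt):
    """Only rewards staying alive, so hiding in a corner is optimal."""
    return survived_ticks

def shaped_reward(survived_ticks, damage_dealt, time_near_player):
    """Also pays for fighting and staying near the player (weights are made up)."""
    return survived_ticks + 5 * damage_dealt + 0.1 * time_near_player

camper = (1000, 0, 0)     # survives forever, never engages
fighter = (600, 80, 400)  # dies sooner but actually plays the game

print(naive_reward(camper[0], camper[1]) > naive_reward(fighter[0], fighter[1]))  # True: camping "wins"
print(shaped_reward(*camper) > shaped_reward(*fighter))                           # False: fighting wins
```

Tuning the metric until the behavior it rewards matches the behavior you want is routine work, not a dead end.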
Neural networks have been used to play countless games before. It’s probably one of the most studied use cases simply because it is so easy to do.
That’s not how copyright works (at least not in the US). When a corporation creates a copyrighted work (by way of paying the person(s) who actually made it), the duration is 120 years after creation or 95 years after publication, whichever expires first. The lifetime of any employee is not taken into account. When a work is authored by a person, the copyright lasts until 70 years after that person dies. You cannot swap out that person for someone else, even if the owner of the copyright changes.
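The two US terms described above (17 U.S.C. § 302) work out like this; the example years are arbitrary.

```python
def work_for_hire_expiry(creation_year, publication_year):
    """Corporate work made for hire: 95 years from publication or
    120 years from creation, whichever expires first."""
    return min(creation_year + 120, publication_year + 95)

def individual_expiry(author_death_year):
    """Individually authored work: life of the author plus 70 years."""
    return author_death_year + 70

print(work_for_hire_expiry(1990, 1995))  # -> 2090
print(individual_expiry(2000))           # -> 2070
```

Note that the individual term depends only on the original author’s death year, which is why transferring ownership changes nothing about the duration.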
You are probably thinking of a method used to make private agreements last basically forever. A private contract technically isn’t allowed to last forever; there has to be some point of expiration. To make a contract last forever anyway, the parties pick some condition that probably won’t happen for a ridiculous amount of time, such as the death of the last living descendant of the king of England (I assume they use this because the royal family keeps good genealogy records). If a currently living person is required, they might pick some infant relative to make it last as long as possible.
I’m pretty sure he said “the rules were that you were going to fact check, this isn’t fact checking” or something to that effect. He was accusing the moderators of being argumentative.
AI is actually deterministic; a random input is usually included to let you get multiple outputs for generative tasks. And anyway, you could just save the “random” output when you get a good one.
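A toy illustration of that determinism (the “generative model” here is hypothetical): the output depends only on the prompt and the seed, so replaying the seed that produced a good output reproduces it exactly.

```python
import random

def generate(prompt, seed):
    """Stand-in for a generative model: varied outputs, but fully
    determined by the prompt and the seed."""
    rng = random.Random(seed)
    words = ["stars", "ocean", "neon", "whisper", "ember"]
    return prompt + " " + " ".join(rng.choice(words) for _ in range(3))

a = generate("a poem about", seed=42)
b = generate("a poem about", seed=42)
print(a == b)  # True: same seed, identical output
```

Real image and text generators work the same way: fix the seed (and other sampling parameters) and the “randomness” is entirely reproducible.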
I think “making history” has just become one of those phrases media uses all the time now. Kind of like how any dispute is now “slamming” someone, apparently. Or how anyone you think is wrong is “unhinged”.
Do you have a source for this? This sounds like fine-tuning a model, which doesn’t prevent data from the original training set from influencing the output. The method you described would only work if the AI is trained from scratch on only images of iron man and cowboy hats. And I don’t think that’s how any of these models work.
Other than citing the entire training data set, how would this be possible?
Embed the image using markdown: `![alt text](image URL)`
When does that even happen? If you have nano installed, wouldn’t it work too?
Nobody will remember this time in a few decades. Garfield was straight up assassinated and you’re just now realizing that I’m not talking about the cat.
Why do you need so much info on Mike? Can’t you just evaluate his statements/work on their own merits? The whole point of open source, federated platforms is that you don’t have to trust him. If he decides to enshittify it, you can just go with a fork or another instance. A nomadic identity isn’t a centralized alternative to the fediverse, it’s just a way of bringing some of the features of a centralized identity to a decentralized one (at least, that’s the way I interpreted the article).
Quotas are not the only way to combat discrimination, nor are they a good one. Name-blind hiring would resolve name discrimination without making additional presumptions about the applicant pool. A quota presumes that the applicant pool has a particular racial mix, and that a person’s qualifications and willingness to apply are independent of race. And even if those happen to be true, it can’t take into account the possibility that the random distribution of applicants just happens to sway one way or another in a particular instance.
The bill itself says, more or less, “any foreign adversary controlled app is banned. Also, TikTok is a foreign adversary controlled app”. So it doesn’t apply exclusively to TikTok, but it does explicitly include them.
People capitalizing Random Words for emphasis, as if they’re Proper Nouns.
Also getting ‘a’ vs ‘an’ wrong. It follows pronunciation, not spelling; so it’s “a European” and “an honor”.