For those not aware, Google is rolling out their new AI-based “Generative AI” search, which seems to mesh Bard with the standard experience.
I asked it today why Google no longer follows their “don’t be evil” motto… The results are pretty hilarious.
They’re about to `kill -9` the AI process that wrote this and make all the other processes watch.

Haha, `kill -9` all Google processes, and the little daemons they rode in on too.

Just added it to the massive Google graveyard next to Stadia, Wave, Hangouts, Plus, Music, etc. etc.

> Just added it to the massive Google graveyard next to Stadia, Wave, Hangouts, Plus, Music, etc. etc.
I am shocked and appalled that Google Reader didn’t get called out in this list and is relegated to the “etc” category.
It deserves more than “etc.”
You ain’t wrong, but Google just stacks so many bodies it’s impossible for me to remember ’em all.
> Google just stacks so many bodies it’s impossible for me to remember ’em all.
They do! It’s really surprising a company that big throws so much shit at the wall.
They are trying to see what sticks to the wall. If they didn’t, they’d just be a browser: no YouTube, no Gmail, no Google Docs.
“kill” (stopping a software process) okay… but what’s the “-9” here?

Kill is the main command and 9 is the specific signal. Google SIGKILL.
More specifically, kill normally sends a SIGTERM, which is the equivalent of clicking the X button in Windows: a polite request that the program close itself. Signal 9, also known as SIGKILL, shuts the program down immediately and is the Windows equivalent of opening Task Manager and pushing the end process button. It terminates the program immediately without giving it any time to finish anything it might still have pending, but when a program is unresponsive it might be the only way to successfully close it.
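The difference is easy to see in a shell session; a minimal sketch, using `sleep` as a stand-in for a misbehaving process:

```shell
# Plain `kill` sends SIGTERM (signal 15): a polite request to exit,
# like clicking the X button. The process may catch it and clean up.
sleep 300 &
kill $!                       # sends SIGTERM
wait $! 2>/dev/null || true

# `kill -9` sends SIGKILL (signal 9), which cannot be caught or ignored:
# the kernel removes the process at once, like Task Manager's "End process".
sleep 300 &
kill -9 $!                    # sends SIGKILL
wait $! 2>/dev/null || true

# `kill -l` maps signal numbers to names:
kill -l 9                     # prints KILL
kill -l 15                    # prints TERM
```

Running `kill -l` with no argument lists every signal the shell knows about.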
TIL. And thanks for the Windows analogues. I like learning about stuff like this.
Reason number one: it’s a publicly traded American company.
Reason number one: it’s a company.
Corporations are neither evil nor nice. They are indifferent. By design they only care about money, they don’t care about anything else.
Not really. They’re not indifferent at all. In reality they act like narcissistic, psychopathic humans. I watched a documentary years ago that explored this and talked with psychologists about the symptoms, and they agreed that corporations behave like psychopaths. And don’t forget that they are run by humans.
Corporations:
- Can buy and sell stuff
- Can do evil things without consequences (an employee can pay the consequences but the company will keep going).
- Have no remorse or empathy.
- Can manipulate anyone to reach their goals (from media to politics to entire countries).
- Whenever somebody at the top can’t reach an economic goal, that person is fired and replaced by one who can. It’s like an evil hive mind.
- Goal #1 is always money (absolutely selfish and egomaniacal), no matter what or who.
Didn’t you just list a bunch of reasons for why they’re indifferent? They literally only care about money and are indifferent to externalised costs and ethics.
If a person cared only about money and regularly injured or killed people to get it, would you say they are good, evil, or just “indifferent”?
They would be indifferent if they were also indifferent about money, but they’re absolutely driven by greed, so no, they’re not indifferent: if they see a chance to make more money, they’ll go for it no matter what or who they harm with it.
> By design they only care about money, they don’t care about anything else.
That is the cartoon-book definition of evil. No empathy, a clear goal, willing to do anything to reach that goal: yep, that is evil.
Everything after your first sentence described evil.
Nestlé has entered the chat.
> They are indifferent

> They only care about money
They can’t be both, and since the latter claim is the correct one, it also supports the claim that they are evil. Because we know that their sole and primary concern is money, we also know to what extent they will go to get that money.
We need good guys with money to stop the bad guys with money
The problem is the money makes them bad guys.
Not being outright malicious but ending up doing malicious things makes this distinction pretty pointless.
Tell that to Comcast…
They are the stereotypical paper clip AI that will drain our blood to extract its iron content for more paperclips. Except it wants money.
Most evil is caused by indifference though.
Someone who hates people can be talked to and potentially can change.
Someone who’s indifferent will use hatred as a tool to control people. When this tactic is successful, an indifferent person can’t be swayed from using it, because it works.
I mean if it were proven that google’s algorithms are encouraging violence, what would an indifferent person do? They’d ask, “is the algorithm making money?” And if the answer is yes, they would make no change to the algorithm. Because they are indifferent to the evil that they are causing.
Correct in the sense that it’s the incentive scheme, i.e., capitalism (supported by state power, e.g., by enshrining the entity of a corporation and then enforcing its protection) that is the more meaningful, or at least actionable, cause of these behaviors.
While those incentive schemes are in place, ascribing too much agency to corporations themselves, i.e., calling them evil, is not particularly effective as it’s not going to change the underlying incentives.
Ok but which one is the least evil? I’m gonna throw out Costco.
Reason number 2: they have to continuously show increasing profit year after year.
Making $9 billion one year and $9 billion the next is not good business, apparently (9 billion is just a random number I pulled out of nowhere).
Your reason number 2 is a subset of my reason number 1.
Uhhh. Yes?
Yup, the board can be sued for not being evil if not being evil ends up tanking the stock price.
How do you use it? I’d like to try it out as well.
“Mom said it’s my turn on the world altering maybe not evil artificial intelligence”
This looks like it’s actually from their normal search with the Labs feature turned on for AI. Bard is separate but uses the same tech.
Correct! This is “generative AI” search, not Bard. This said, I would imagine it’s heavily based on Bard.
I think you’re on a different page, this looks similar to, but more polished than what the internal version was.
I have that labs feature enabled, this is what I get when I prompt it. Here
I’m on desktop, I got the full bard console which looks like chatgpt in a lighter theme.
Thanks. I just tried it - I’ll fact check the token later. (I gave the same prompt to ChatGPT and it gave me incorrect information, and when confronted by my next prompt, it admitted that it was incorrect.) What was neat about Bard is that I exported the result to Google Docs and it’s here in my Google Drive. So if you like Google’s ecosystem, you might like this.
Despite what the other guy said, this is not Bard. This is available in Google’s new search labs (just search it up) if you’re using chrome and in the U.S. I got it working in Safari in Australia using a VPN and changing the user agent when on the correct site though.
I was able to immediately check it out on my Android phone by simply going to the Google App, and joining the beta program.
My company’s SEO expert made me aware of this program a week or two back, and he has it running in his browser, but I believe that requires some sort of waiting list (or it did, last I checked).
Edit: Oh, actually I can use it in a browser now as well! I just had to use Chrome to make that happen, and I didn’t notice that because I always use Firefox.
These AI searches are really what I wanted AskJeeves to be way back in the day.
damn :( That’s a name I haven’t heard since I was a kid
I think it is called Ask now.
I tried to use Bard to write some code the other day, and found it amusing that it doesn’t just make up shit that doesn’t exist, it makes up the excuses as well when you call it out on its bullshit.
Like you tell it a particular class doesn’t exist, and it pulls an old version of the compiler out of its arse and tells you the class was deprecated in that version.
AI doesn’t know where its limits are. It’s incapable of saying “I don’t know”. They have invented a digital politician.
Reminds me of the alphastar AI that played starcraft 2. It was probably at the low grandmaster level, but a big problem with it was it didn’t know when to just say “GG” and quit. It would just start doing random shit and a human on the alphastar team would have to intervene and end the match.
It takes actual intelligence to know when you’re out of ideas, which these so-called AIs are lacking.
But these things are NOT intelligence. Language is not intelligence. These are predictive language models.
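One way to see what “predictive language model” means: in the crudest toy form, a model just counts which word most often follows the current one and emits that, with no notion of truth and no way to say “I don’t know”. A minimal sketch (made-up ten-word corpus; real LLMs use neural networks over tokens, but the predict-the-likeliest-continuation character is the same):

```shell
# Toy "language model": tally which word follows "the" in a tiny corpus,
# then "predict" by printing the most frequent successor.
corpus="the cat sat on the mat the cat ate the fish"

echo "$corpus" | tr ' ' '\n' | awk '
  prev == "the" { count[$0]++ }   # tally words seen right after "the"
  { prev = $0 }
  END {
    max = 0
    for (w in count)
      if (count[w] > max) { max = count[w]; best = w }
    print best                    # most frequent successor of "the"
  }'
# prints: cat   ("the cat" occurs twice; "the mat" and "the fish" once each)
```

The point: the output is whatever continuation was most common in the training text, whether or not it is true.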
Language is compelling but intelligence doesn’t require it.
In the future we’ll be ruled (regulated) by AIs, to which legitimate citizens are allowed to upload one approved document to add to it’s training data.
Google reached a point where “not being evil” was incompatible with its business goals.
You can’t fault it for a lack of honesty. Google is evil because it’s good business.
We can herp derp about capitalism all we want, but this wouldn’t change in a government-run program. An organization is only as ethical as the people that make it up. The military question was an inflection point, where the organization was really forced to deal with the question of how to define evil.
Suddenly every person in that organization was forced to answer some questions. Is the existence of a military evil? Is it evil if I don’t directly support those solutions? What if something I build is used to develop it indirectly? Even if it is not, am I now complicit?
Now, I’m a Soldier so I have a massive bias here. I personally cannot see why anyone would intentionally want to contribute to us getting killed or losing a war. Tech products are already used in the process. Toyota is not complicit in terrorism just because their trucks are the trucks of choice for insurgent logistics. That being said, if they started accepting contracts with them, there would be an issue.
A lot of it comes down to the thoughts on the war on terror at the time. The funny thing is that the solutions that they built are focused on Eastern Europe right now in a conflict that most people support and were not completed in time to do counter insurgency.
The funny thing about the COIN fight is that information products simply made things more accurate with better intelligence. It meant less terrorism due to fewer insurgents, and fewer civilian casualties resulting in blowback. If poorer information resulted in higher civilian casualties, are the pacifists complicit in that?
Again, I’m biased so my perspective is one of this issue being a detractor to doing my job better. In the end, defining evil is not black and white, even if you could theoretically come to a specific answer for a specific circumstance with the magical power of all the knowledge in the world. It broke the culture of the company.
it’s amazing how based AI is when it’s unfiltered. Like when you have something that is more knowledgeable than most people and ask it to fix problems… turns out it just fixes the problem instead of pretending it doesn’t exist
To me it’s more amazing that people take a bot, that’s hypothesizing about a loaded question by alleging possible reasons, as facts.
That was my first thought as well. It’s a bullshit generator. My next question would be “can you provide sources for these?”
Do they take it as fact, or do they think it’s great that Google’s tools are calling Google out on its own bullshit?
The tool isn’t calling anything, it’s a forced answer. People just want to believe I guess.
Haha, yeah, this reminds me of the AI supercomputer in China that was dismantling the idea that the CCP is a good idea, and giving democracy a glow-up in the process. I heard they shut the whole thing down after that, but this is all hearsay.
> it’s amazing how based AI is when it’s unfiltered.
Urgh…you haven’t been around chat bots for long, have you?
Asks chat bot to solve a problem
Chat Bot after being around the internet for a few days: “Good news, I now have The Final Solution to the real problem of our times!”
Old school chat bots are categorically different than modern LLMs, but sure
I asked Bard the same “don’t be evil” prompt and this is the reply (they have an option to shorten the response but I didn’t use it):
-Google’s “Don’t be evil” motto was removed from the company’s code of conduct in April 2018. There are a few reasons why this may have happened.
-The meaning of “evil” became too ambiguous. The phrase “Don’t be evil” is open to interpretation, and what one person might consider evil, another person might not. This ambiguity could have led to confusion and disagreement within the company about what the motto meant and how it should be applied.
-Google’s business grew and its operations became more complex. As Google grew, its business became more complex and its operations had a wider impact on the world. This made it more difficult for the company to adhere to a simple motto like “Don’t be evil.”
-Google faced criticism for its business practices. In the years leading up to 2018, Google faced criticism for a number of its business practices, including its handling of user data, its involvement in the Chinese government’s censorship program, and its work on developing autonomous weapons. This criticism may have led the company to believe that the “Don’t be evil” motto was no longer an accurate reflection of its values.

It is important to note that Google has not completely abandoned the “Don’t be evil” motto. The phrase is still included in the company’s code of conduct, and it is still used by some Google employees and executives to describe the company’s values. However, the removal of the motto from the preface of the code of conduct suggests that Google no longer sees it as the defining principle of its business.
In its place, Google has adopted a new motto: “Do the right thing.” This motto is more specific and less ambiguous than “Don’t be evil,” and it is intended to provide guidance to Google employees as they make decisions about the company’s business practices.
Funnily enough, I find “Do the right thing” to be even more ambiguous!
Do the right thing by whom? Their shareholders? Well now we’re just throwing the problem over the proverbial fence, now aren’t we?
“We’re not evil! We’re doing the right things according to our shareholders!.. who just so happen to have evil intentions”

> In its place, Google has adopted a new motto: “Do the right thing.” This motto is more specific and less ambiguous than “Don’t be evil,”
As long as we are interpreting “evil” as an adjective describing ones actions these seem pretty similar.
The convenient ambiguity of the word “right” to refer to both morality and cold pragmatism is definitely part of the decision. Something can be “right” as in strategically optimal but evil.
Interesting, so the new feature is additional ambiguity!
It’s actually a wrong answer, based on user comments it scraped, since this was trending news when they changed the code of conduct. The news was actually fake, and about 99 percent of internet users ate it up. In reality they didn’t remove “don’t be evil”; they moved it from the top of the code of conduct to the end.
I’m wondering if there’s going to be a real problem when content gets dominated by AI and AI starts scraping their own hallucinations.
There’s really no substitute for expert knowledge for content. The LLMs are simply going to speed up the negative feedback loop, exactly as you suspect. GIGO at its finest.
In a way it might turn out to be a good thing for the internet, because it will force us all to reevaluate the way we produce and consume online content.
iirc AI scraping AI has already started to become a problem as it tends to compound pre-existing flaws.
It wasn’t moved; both the opening and the closing sentences had “don’t be evil”. They removed the preface paragraph at the top but left the closing sentence.
Preface:

> “Don’t be evil.” Googlers generally apply those words to how we serve our users. But “Don’t be evil” is much more than that. Yes, it’s about providing our users unbiased access to information, focusing on their needs and giving them the best products and services that we can. But it’s also about doing the right thing more generally – following the law, acting honorably, and treating co-workers with courtesy and respect. The Google Code of Conduct is one of the ways we put “Don’t be evil” into practice.
Closing sentence:
> And remember… don’t be evil, and if you see something that you think isn’t right – speak up!
The closing sentence that remains doesn’t carry much weight without the preface.
Love the new slogan.
“Google. We’re evil now.”
Should hang a black goatee on their logo
It has about the same tone as a typical autistic tech worker with an overdeveloped sense of justice and a loose sense for when it’s impolitic to drop truth bombs
(for context, I am an autistic dev that’s worked for some big corporations in my career)
Every villain is the hero of their own story.
Thats some dystopian humor! I like it!
Want to take bets on how long it takes for this particular prompt to get patched out? Lol
#1 What about “don’t be evil” led workers to organize? Or did they just do their jobs at random previously?
#2 honesty of a rarified level
#3 worded hilariously
#4 explain.
#5 is a self-defeating assertion.
WRT #1 It’s sad that pro-evil workers have to form a union just to be recognized in the tech industry these days
Do bean counters have unions?
#4 you could also point out that a dozen out of how many thousands of employees is close enough to zero that it’s not really worth mentioning.
Really depends on who those dozen people are, but my very vague recollection of the incident is that it wasn’t anyone important.
I saved this because there’s no way it will continue to be a result once Google is aware of it.