The problem is that a fuckton of the web is SEO poisoned, so even a better search engine will find garbage because for a lot of subjects garbage is all that’s available.
The best chef in the world can’t turn shit into anything you want to eat.
This is what they want you to think, but they’re hardly even trying. Google is shitty on purpose: if initial search results are bad, you “engage” more and see more ads. And they’re not worried about competition, because Google is the default search nearly everywhere (most people don’t even know there are alternatives; Google is synonymous with search), so they can enshittify their search as much as they want. https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/ https://www.wheresyoured.at/the-men-who-killed-google/
That’s what I enjoy about kagi: because I can block and rank sources, I get to do some reverse-SEO, and the results are really good with remarkably few adjustments.
I don’t understand what you are saying. If another search engine used the exact same methodology as Google, paying attention to the same SEO signals, then obviously it would fall prey to the same thing that killed Google. Likewise, if its method were not 100% identical but still leaned on SEO signals, it too would be poisoned.
However, if the search method ignored SEO entirely and focused purely on the content of the page, plus other metrics such as the number of links to that page from other highly-ranked websites, independently of SEO, then it would not be poisoned by SEO. It might still suck for other reasons, related or not.
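To make that concrete, here’s a rough PageRank-style sketch of ranking purely by who links to whom, with links from already-high-ranking pages counting for more. The link graph, site names, and damping factor are made up for illustration; this is not any real engine’s setup.

```python
# Toy link-based ranking: a page's score comes only from who links to it.
# The link graph below is invented for the example.
links = {
    "blog.example": ["docs.example", "news.example"],
    "docs.example": ["blog.example"],
    "news.example": ["blog.example", "docs.example"],
    "spam.example": ["blog.example"],  # links out, but nothing links back to it
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # every page keeps a small baseline score...
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        # ...and passes the rest of its current score along its outgoing links
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page:>15}  {score:.3f}")  # spam.example ends up at the bottom
```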
Anyway, it all depends so heavily on what you want to find - e.g. a replacement for Google Maps is harder to come by, and Google Images is also still fairly great.
I can’t tell if you know more or less than me about what SEO is.
Probably less, in its modern formulation. I knew more about the older techniques, but it’s been a while and I have no idea how much of that is even still relevant.
A lot of content sites have altered how they write articles to line up with Google SEO and drive traffic. In doing that, they’ve made the content that any search engine can find lower quality.
focus purely on the content of the page, plus other metrics such as the number of links to that page from other highly-ranked websites

This is already something that every search engine does. That’s what SEO seeks to maximise. You can’t just “ignore” SEO if you have algorithmic search, unless your algorithm ignores everything that SEO touches, including the content of the page and the backlinks.
99% of users marked this site as shit -> no longer display it
100000 “users” then popped up and left the same glowing 5-star review -> must be a great site.
It really isn’t that hard to tell the difference between bot activity and humans. If Facebook can detect a nipple in a picture in microseconds, they can tell that “hmm, a surprisingly high number of IoT fridges have strong opinions about this anti-Putin blogger, starting last week” isn’t valid.
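Something like this crude heuristic, say: flag a site whose ratings arrive in a sudden burst, mostly from near-identical clients, all leaving the same score. The vote record fields and thresholds here are invented for the sketch; a real system would use far more signals.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical vote record; field names are made up for this sketch.
@dataclass
class Vote:
    site: str
    stars: int                # 1-5 rating
    timestamp: float          # seconds since epoch
    client_fingerprint: str   # e.g. hash of user agent + network details

def looks_astroturfed(votes, window=86_400, burst_ratio=0.8, dup_ratio=0.5):
    """Flag a rating pile-on: most votes landed within one day, mostly from
    one client fingerprint, all leaving the same score. Toy thresholds."""
    if len(votes) < 100:      # too few votes to call it a campaign
        return False
    latest = max(v.timestamp for v in votes)
    recent = [v for v in votes if latest - v.timestamp <= window]
    top_client = Counter(v.client_fingerprint for v in recent).most_common(1)[0][1]
    top_score = Counter(v.stars for v in recent).most_common(1)[0][1]
    return (
        len(recent) / len(votes) >= burst_ratio    # sudden burst of activity
        and top_client / len(recent) >= dup_ratio  # mostly the same "device"
        and top_score / len(recent) >= dup_ratio   # identical glowing reviews
    )
```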
That sounds like the millions of doctored Amazon reviews and bot-boosted social media posts should be dealt with by next week, then.
If there were an incentive to do so, they would be.
Alphabet doesn’t have real competition. If they start getting some, they will be motivated to improve. Amazon is the same way: you order crap from them and they still make their money. Social media is the same way; there is just no particular reason for them to do anything about bots when bots don’t impact ad numbers.
The corporation I work for puts a captcha in front of pretty much anything useful on the website. We have an incentive to not have bots.
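For what it’s worth, the server-side half of that kind of gate is tiny. A rough sketch using reCAPTCHA’s documented siteverify endpoint; the Flask route, environment variable name, and endpoint path protected here are illustrative choices, not our actual setup.

```python
import os

import requests
from flask import Flask, abort, request

app = Flask(__name__)
RECAPTCHA_SECRET = os.environ["RECAPTCHA_SECRET"]  # assumed env var name

def captcha_passed(token: str) -> bool:
    # reCAPTCHA's verification endpoint; the JSON reply carries a "success" flag.
    reply = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token},
        timeout=5,
    )
    return reply.json().get("success", False)

@app.route("/do-something-useful", methods=["POST"])
def do_something_useful():
    # "g-recaptcha-response" is the field the reCAPTCHA widget adds to the form.
    token = request.form.get("g-recaptcha-response", "")
    if not captcha_passed(token):
        abort(403)  # no valid captcha solution: treat the request as a bot
    return "ok"
```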
That’s an interesting idea, but I think you’d very quickly see far-right groups dogpiling onto and sinking small leftist websites and blogs while promoting their own shit-tier content.