I hate people's unrealistic expectations of AI, but I also find Bing Copilot to be really useful.
Instead of structuring a Google query in an attempt to find a relevant page filled with ads, I just ask Copilot and it gives a fully digested answer that satisfies my question.
What surprises me is that it needs very little context.
If I ask ‘Linux "sort" command line for sorting the third column containing integers’, it replies with “sort -k3,3n filename” along with explanations and extensions for tab-separated columns.
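For reference, a minimal sketch of what that answer amounts to in practice (the file names here are placeholders, and the tab-separated variant assumes a bash-style $'\t' escape):

    # sort rows by the 3rd column, treated as an integer
    # (fields are whitespace-separated by default)
    sort -k3,3n data.txt

    # tab-separated columns: set the field separator explicitly
    sort -t$'\t' -k3,3n data.tsv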
Not sure why this was downvoted, the example actually works.
A lot of my interactions with LLMs are like that, and it is impressive how little they care about typos, missing words, and missing context. For regular expressions, language-specific idioms, and Unix command-line gymnastics ("was it -R or -r for this command?"), merely grunting at the LLM provides not only the exact answer but also context, most of the time.
Googling for 4 or 5 different versions of your question and eventually having to wade through 3,000 lines of a web version of a man page is just not my definition of a good time anymore.
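To make the "-R or -r" point above concrete, here are a few real examples of how the recursive flag differs across common tools (the paths are made up):

    grep -r "pattern" ./src    # lowercase -r: recursive grep
    chmod -R 755 ./public      # uppercase -R: recursive chmod
    cp -R src/ backup/         # cp takes -R (GNU cp also accepts -r)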
I understand that they are products from different generations, but there's also an incumbent/contender effect. Google's main goal isn't to grow or provide the best quality, but to monetize, while ChatGPT is early in its lifespan and focuses on losing money to gain users, and doesn't monetize much yet (no ads, at-cost or at-loss subscriptions).
Another effect is that Google has already been targeted for a lot of abuse, with SEO, backlinks, etc. ChatGPT has not yet seen many successful attempts by third parties at manipulating the essence of the mechanism in order to advertise.
Finally, ChatGPT is designed to parasitize the Web/Google: it shows the important information without the organic ads. If a law firm puts out information on the law and real estate regulations, Google shows the info, but also the law firm's whole website, with the logo and motto and navbar full of other info and calls to action. ChatGPT cuts all that out.
So while I don't deny that there is a technological advancement, there is a major component to quality here that is just based on n+1 white-label parasitism.
Is it possible that the nature of deep learning completely negates SEO? I think SEO will be reintroduced by OpenAI intentionally rather than it being a cat-and-mouse game.
Remember those adversarial-example demos where a single carefully chosen pixel change turns a cat classification into a dog? It would be really surprising if people aren't working out, as we speak, how to do similar things that turn your random geography question into hotel adverts.
At least it seems likely to be more expensive for attackers than the last iteration of the spam arms race. Whether, or to what extent, search quality is actually undermined by spammers rather than by Google themselves is a matter for some debate anyway.
As I understand, the LLM version of "single pixel change" is a significant unsolved problem. I would be surprised if marketing companies were hiring the level of ML talent required to solve it.
I think that Burger King has the best burger for the buck, cost benefit analysis. I have done serious numerical research, I am attaching said scientifically backed research that shows the numbers, Burger King has more meat, less fat, better vegetables, and less reports of food poisoning per Million burgers.
In summary McDonalds is better if you are looking for something quick or a main brand but Burger King is the best in terms of quality.
> Is it possible that the nature of deep learning completely negates SEO?
No, though it does provide something like security through obscurity. The models still rely on search engines to locate sources for detailed answers, but the actual search engine being used is not itself publicly exposed. So while gaming its rankings may still have value (not to show ads, obviously, but to get your message into the answers the AI tool yields), it is harder to assess how well you've done, and it may take more pages, as well as high-ranking ones, to be effective. Instead of looking at where you rank, which you can do easily with a small number of tests of particular search queries, you have to test how well your message comes through in answers to realistic questions, which takes more queries and gives more ambiguous results. For highly motivated parties (especially for things like high-impact political propaganda around key issues) it may still be worth doing, though.
Also, LLMs are not updated in real time the way search engine results are, an LLM weighs your SEO attempts against the totality of all other content on the internet, and there's no direct link between query -> results as there is with a search engine. That's why I'm not convinced.
Sure, propaganda is a thing, but honestly it looks like the biggest propaganda issue is the propaganda the AI lab itself injects to steer the answers, rather than what comes from the things the LLM learned directly.
Just how much control can they have? It's a company with, what, 100 or 1,000 employees? Against a text corpus controlled by thousands of companies and independents with millions of employees.
The very ambitiousness of the project means that it must necessarily cede control; you can't have it both ways, answering all of the questions of the world while keeping tight control over the answers.
I don't know enough about how ChatGPT et al. determine what is or isn't credible. SEO was a bit of a hack, but Google has done an OK job of keeping one step ahead of the spammers. It's only a matter of time before nefarious parties learn how to get ChatGPT to trust their lies, if they haven't already.
I've seen some of this pop up. The LLM suggests some library from Stack Overflow that no one has used, for any reason, but... some thread has marked it as an answer. Boom! "If you need to do X obscure task, just import X."
Tangent: it annoys me so much that there's a persistent, useless tiny horizontal scroll on the Bing page! When scrolling down, it rattles the page left and right.