
Yes, there is a kind of anarchic movement to poison LLM crawlers with senseless data that no computer can distinguish from actual human text.

However, it is hard to attribute it to any particular political tendency. You will find authors of poisoners in every more or less anti-system circle, from NS to the far left.

For example, the author of this nice piece of software - https://zadzmo.org/code/nepenthes/ - describes himself as

Open source software; Lua programming; synth DIY; ecology; botany; photography; aerospace; NetBSD user; Map Nerd; railfan; cyclist; NAFO Fella; #actuallyADHD

here - https://chirp.zadzmo.org/@aaron. An average leftist with a Jewish nickname, arguing the need to fight "AI" from a leftist angle here - https://zadzmo.org/code/nepenthes/FAQ.md.

It is hard to find poisoning software through Google or other search engines for obvious reasons, but you can search for it directly on source code hubs like GitHub and others.
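The basic idea behind a tarpit like Nepenthes can be sketched in a few lines: answer every URL with machine-generated filler text plus links to more random URLs, so a crawler that follows links never runs out of pages to fetch. This is only a rough illustration, not Nepenthes itself - the real tool is written in Lua and does more (Markov-chain text, deliberately slow responses); the word list and page sizes here are made up for the example.

```python
# Minimal sketch of a crawler tarpit: every path returns a page of
# gibberish plus links to more random paths. Hypothetical example,
# not how Nepenthes actually generates its pages.
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["data", "signal", "archive", "protocol", "vector", "node",
         "lattice", "cipher", "relay", "quorum", "manifold", "flux"]

def gibberish_page(path):
    # Seed the RNG from the path so the same URL always yields the
    # same page - the maze looks like static content to a crawler.
    rng = random.Random(path)
    sentences = []
    for _ in range(rng.randint(5, 12)):
        words = rng.choices(WORDS, k=rng.randint(6, 14))
        sentences.append(" ".join(words).capitalize() + ".")
    links = []
    for _ in range(5):
        slug = "".join(rng.choices(string.ascii_lowercase, k=8))
        links.append('<a href="/%s">%s</a>' % (slug, rng.choice(WORDS)))
    return ("<html><body><p>" + " ".join(sentences) + "</p>"
            + " ".join(links) + "</body></html>")

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = gibberish_page(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve the endless maze on localhost.
    HTTPServer(("127.0.0.1", 8080), TarpitHandler).serve_forever()
```

Any crawler that starts at `/` and follows links will wander through an unbounded set of deterministic gibberish pages, which is the whole point of the poisoning approach.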

I think it is a morally good idea to completely ruin all that LLM idiocy, but I don't think poisoning will add anything noticeable to the orders-of-magnitude wider process of LLM self-poisoning, where LLMs are trained on their own output. I estimate around 10% of text content on the internet is already produced by LLMs, and it is actively fed into current training sets.

There is no need for that anti-LLM virtue-signalling activity; LLMs will bury themselves eventually, without any help from activists.

And you are completely wrong about spam. LLMs are spam generators; spam is not what people generate to poison them. Spam started as advertising, continued as advertising, and I assume any advertising is spam by default. Today LLMs are mostly used (commercially and for free) to produce advertising; even e-mail spam today seems to be mostly LLM-generated. :)

182 days ago
1 score