This comment isn't responsive to my point. If it's clear to the average person that an account is essentially AI content, then it's very easy to discipline on current rules. My point is that those admitted humans who regularly quote and lean on AI content might also be deleted by common consensus, as happens in other fora. Your comment fixates again on an allegation that I seek being liked and an allegation that Will is essentially AI (when your own links show many people agreeing he's not, as his writing was far ahead of AI skills when he first released it, and AI still hasn't caught up).
As for fairness, I affirm that because it's already a sidebar rule: "Our goal is to create a fairer world." So a mod had better care to be fair!
It's about the motive of the moderator. You said "But offhand that seems very abusable and could lead to the censorship we all hate." Censorship is required when someone is breaking the rules and spamming nonsense or hurtful content.
You say "we all hate", which reveals your motive. I answered that. If you fear being hated for censoring some users, then how can you be a fair mod when shills and AI bots consume the forum?
Anyway, let's focus on your current points.
> If it's clear to the average person that an account is essentially AI content, then it's very easy to discipline on current rules.
What if it's not clear? How can you differentiate between a normal user and AI content, when AI can mimic it perfectly? How can you be fair to real users then?
> My point is that those admitted humans who regularly quote and lean on AI content might also be deleted by common consensus, as happens in other fora.
A person who uses AI content is not an AI chatbot. It depends on how the AI content is used. Some use it to create a picture that better illustrates their point; that's not a reason for a ban or a mute, in my opinion.
> Your comment fixates again on an allegation that I seek being liked and an allegation that Will is essentially AI (when your own links show many people agreeing he's not, as his writing was far ahead of AI skills when he first released it, and AI still hasn't caught up).
Wrong on both counts. I asked a question that you interpreted as an allegation; that's your own problem with interpretation.
Also, free-will-of-choice is not at all ahead of AI chatbots. You're referring to the commercial use of AI, which shows how little you actually know about the subject. Now, you can say that's an allegation.
free-will-of-choice is an example of an etymology chatbot, and its lack of any posts is proof that the user's intent is not to contribute (at the very least, it could've made a post about the most common etymological meanings and what conclusions it draws). Furthermore, it has no personal opinion on topics; it only uses etymology to confuse users, and it worked on you.
That's why I allege that you're inept on the topic of AI chatbots and shills.
I appreciate your thoughts. Censorship is the removal of content for its own sake, not for the sake of behavior. My own practice when dealing with a violator who also provides usable content is to preserve the content while still disciplining the violation. A long analytical comment being removed over one obscenity would be censorship; if the community prefers no obscenities, it's better to ask the contributor to self-edit, letting him know that the loss of content associated with the violation will become unavoidable if he doesn't end it.
The motive to delete content solely because it's close to a violation is not inclusionist. When I say "we hate" that kind of censorship, it's because I read a strong consensus here that tools are for violations, not content. You might see different evidence of consensus.
If AI content is not clear, then we don't discipline based on suspicion alone, without some corroboration. If AI content is supportive, then good; but if people believe it's spamming (which it could easily be, even if it supports a positive, civil presentation), then an objective line can be set.
I could see dedicated AI producing most of what Will produces, possibly even in his first appearances here (though sometimes he drops the botlike character for a bit). He does "choose" to express good etymologies mixed with puns, take them or leave them; this "inspires" other users to rethink, especially as to the use of authority in Christianity, which I find a good challenge to keep abreast of.
Now, I'm not trying to convince you of an approach, just seeking to explain. The general principles would probably need working out by the mod (team) in real time, given your views and the rest of the community's. So perhaps nothing need be pinned down; but let me look at your other comment. Also, I just started a fresh vote on the subject.
I know the general approach in such matters and was focusing more on the specifics of dealing with the AI chatbot and shill infestation, but I see that I won't get a definitive answer.
Regarding free-will-of-choice, it is clear to me that he's a shill. DZP1 said something very interesting: he's a human who uses a bot for most of his comments, but intervenes when there's a reason to look human.
fwoc never posted. If a human has something to say, he wouldn't wait 4 years only to comment under others' posts. That's not human behavior.
fwoc constantly spams his etymological comments, which are mostly incorrect, though not everyone can see that. He is largely ignored/blocked by people here, yet he keeps on spamming for 4 years... That's not human behavior.
fwoc is flooding the comments so that they seem incoherent, which works to make others disengage from any useful discussion. That's the work of a shill. (Less so now, because he's on the radar, but he's still here and there on topics that need to be suppressed or confused...)
I think you're making the mistake of believing that shills and bots will be easily recognizable, while their methods are far more advanced than you imagine...
Do you care so much to be liked or fair?
If you have no clue how to win against AI chatbots, then this community is lost if you're a mod.