I have a suspicion that some of the replies given in this forum might be bots. Is there any sure way of knowing, or do I simply have to tread carefully and not trust an LLM-generated reply?
I often use LLM-generated replies, but I’m not a bot myself…
Edit:
If a reply offers something worthwhile, I don’t think it matters much whether it came from a bot, an alien, or a paramecium…
Working code and verifiable facts are useful. The rest is just a matter of whether it’s enjoyable or not.
Fair enough. I already had an LLM-generated answer but wasn’t happy with it, so I wanted to check in this forum to see if I could get human experience and expertise that the LLM couldn’t provide.
I have no intention of throwing shade, and I appreciate the honesty, but it would be nice if you could mention in the post itself that it was provided by an LLM, as I don’t want to quote the reply in my college assignment and get hit with an AI plagiarism violation.
Yeah… it’s a pain…
If we’re just talking about my replies, there is a way to tell them apart: the answers stored in my dataset repository are LLM-generated. Also, within a reply, I usually separate things so that anything after a `---` line is the LLM’s response.
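For what it’s worth, that convention is easy to handle programmatically. Here’s a minimal Python sketch, assuming replies are plain text and the separator is a line containing only `---`; `split_reply` is a hypothetical helper I made up for illustration, not part of any forum API:

```python
def split_reply(text: str) -> tuple[str, str]:
    """Return (human_part, llm_part); llm_part is '' if no separator is found.

    Hypothetical helper: splits a reply at the first line consisting
    solely of '---', treating everything after it as LLM-generated.
    """
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if line.strip() == "---":
            return ("\n".join(lines[:i]).strip(),
                    "\n".join(lines[i + 1:]).strip())
    return text.strip(), ""


reply = "Here is my own take.\n---\nThis part was generated by an LLM."
human, llm = split_reply(reply)
print(human)  # -> Here is my own take.
print(llm)    # -> This part was generated by an LLM.
```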
As for others, they’ll likely have their own flexible operating policies…
Back when I was a college student, using anything other than paper books or newspapers as references was practically forbidden, so I can’t say I don’t understand the university’s nonsensical rules…
But just as a side note, I suspect a lot of abstracts for papers online these days are generated by LLMs (including RAG pipelines). Setting aside the papers themselves, code and documentation are even more suspect. And then there’s the huge volume of AI-generated curation sites…
“AI plagiarism violation” is a philosophical question!