Avoiding Disinformation

#Draft

Neither Reddit nor ChatGPT is immune to disinformation.

Reddit - vulnerable to comment-chain manipulation: reply-and-block tactics, with or without moderator assistance. This is dangerous because Reddit initially gives the impression that it is immune to manipulation.

This works for two reasons: 1) if a reply to a comment is deleted before it receives a reply of its own that isn't also deleted, it won't show up as a deleted reply when someone browses the thread; it also won't appear deleted to its author, so they won't know that manipulation is at play; and 2) if someone replies to you and then blocks you, you cannot reply back, which makes it look as though you had no better answer - in political discussions, for example, someone can ask you for proof and then block you so that you can't respond with it. If you instead edit your original comment, and that comment is then deleted with moderator assistance to suppress your opinion, it is as if you never replied at all. They will try to frame you if they can, and if they fail, they will remove your opinion. Framing is more effective than suppression at destroying adversaries.
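The visibility rule in point 1 can be sketched as a toy model. This is only an assumption about how threads render, not Reddit's actual code, and all names here are hypothetical:

```python
# Hypothetical sketch of the display rule described above: a removed comment
# is shown as a "[deleted]" stub only if it still has a surviving reply
# underneath it; otherwise it vanishes from the thread without a trace.

class Comment:
    def __init__(self, author, removed=False, replies=None):
        self.author = author
        self.removed = removed
        self.replies = replies or []

def render(comment, depth=0):
    """Return the visible lines of a comment subtree."""
    lines = []
    visible_children = []
    for reply in comment.replies:
        visible_children.extend(render(reply, depth + 1))
    if comment.removed:
        # A removed comment only leaves a "[deleted]" stub when a visible
        # reply underneath it forces the slot to stay in the tree.
        if visible_children:
            lines.append("  " * depth + "[deleted]")
            lines.extend(visible_children)
        # No surviving replies: the removal is invisible to browsers.
    else:
        lines.append("  " * depth + comment.author)
        lines.extend(visible_children)
    return lines

# A rebuttal removed before anyone replied to it simply disappears, while a
# removed comment that had been answered leaves a visible stub:
thread = Comment("op", replies=[
    Comment("manipulator", replies=[
        Comment("rebuttal", removed=True),          # vanishes entirely
        Comment("answered", removed=True,
                replies=[Comment("bystander")]),    # leaves "[deleted]"
    ]),
])
for line in render(thread):
    print(line)
```

Running the sketch prints `op`, `manipulator`, a single `[deleted]` stub, and `bystander` - the removed rebuttal never appears, which is exactly why a browsing reader cannot tell that anything was suppressed.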

This is also bad because it happens at the subreddit level, so Reddit moderators cannot be fully blamed for what happens.

ChatGPT - regurgitates misinformation from its training data, and never self-corrects unless you force it to, separately in each conversation.

Lemmy - much more dangerous in that it gives dangerous people access to a platform. Many Lemmy instances have no privacy policy, and they retain usernames even after a reply is deleted. It would be risky to participate in an extremist space on Lemmy, only to later realize that you could be doxxed through some link somewhere; you won't be able to delete your account and escape easily. Additionally, the same comment-chain manipulation seen on Reddit applies, and it could be even worse.

Trust should be the foundation of all platforms and communications. You cannot trust an LLM that is not sentient and has no ethics of its own, unless you fully trust the platform providing the LLM and also trust that the information fed to it is accurate. One example I found: a Sanskrit term important to Advaita Vedanta was mistranslated by many Western writers, and as a result, ChatGPT continually misinformed everyone who tried to study the subject with it, leading them to feel that Advaita Vedanta was like Neo-Advaita (Advaita Vedanta stripped of Vedanta and its nuances).

Because of this, I'd sooner trust Zucky any day than the Fediverse. Also, be wary of participating in communities where the community admin controls who can respond.
