Recent research has shown that chatbots have a distinct left-leaning bias. This is because they are trained on dialogue from all over the Internet, including left-wing writers. Because of that, someone has started a right-wing conservative chatbot project that may be more trustworthy.
?
Nope, it's because they use reinforcement learning from human feedback (RLHF), i.e. hundreds of humans rating and correcting the output during training.
That is true, and it very much injects human bias, but the starting point is data scraped indiscriminately from the Internet, i.e. pulling in things like Reddit threads, Washington Post propaganda pieces, and so on.
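For anyone unfamiliar with how that feedback loop shapes the output, here is a minimal toy sketch of the idea in Python. Everything in it (the candidate responses, the preference records, the scoring) is made up for illustration; real RLHF fits a large neural reward model to rater preferences and then fine-tunes the model with reinforcement learning (e.g. PPO), but the point where human bias enters is the same: the raters' choices become the reward.

```python
# Toy sketch of the human-feedback loop described above: humans rank pairs of
# model responses, a simple reward score is fit to those preferences, and the
# score is then used to prefer one candidate response over another.
# All names and data here are hypothetical, for illustration only.

import random

# Hypothetical candidate responses a base model might produce for one prompt.
candidates = [
    "Response A: a neutral, factual summary.",
    "Response B: an opinionated, one-sided take.",
]

# Simulated human labels: each record says which candidate a rater preferred.
# In practice this is where hundreds of raters (and their biases) enter.
human_preferences = [
    {"preferred": 0, "rejected": 1},
    {"preferred": 0, "rejected": 1},
    {"preferred": 1, "rejected": 0},
]

# A trivial "reward model": score each candidate by how often raters chose it.
def fit_reward_scores(preferences, num_candidates):
    wins = [0] * num_candidates
    for record in preferences:
        wins[record["preferred"]] += 1
    total = len(preferences)
    return [w / total for w in wins]

# "Fine-tuning" reduced to its essence: sample candidates in proportion to
# their learned reward, so rater-preferred styles dominate the output.
def sample_response(candidates, scores):
    return random.choices(candidates, weights=scores, k=1)[0]

if __name__ == "__main__":
    scores = fit_reward_scores(human_preferences, len(candidates))
    for i, (c, s) in enumerate(zip(candidates, scores)):
        print(f"candidate {i}: reward={s:.2f}  {c}")
    print("sampled:", sample_response(candidates, scores))
```

So both things are true at once: the pretraining corpus reflects whatever is on the Internet, and the feedback stage reflects whatever the raters reward.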