Really frustrated by how many people are attributing "opinions" and "feelings" to large language models.
It's like attributing feelings and opinions to your phone's autocomplete when you prompt it with leading questions.
I wish folks understood that the language model is closer to doing RP and "yes, and"-ing whatever prompt it gets than to holding some sort of internal state the way a human does.
I saw some "anti-woke" type being like "OH, if you tell it its name is 'Blarf' and that it doesn't need to be nice, it'll say its REAL opinions that get suppressed by the WOKE LIEBERALS," then proceeding to ask it very leading questions straight out of the usual right-wing rhetoric and pretending like that isn't the deciding factor in what it says.
This thing will literally say whatever you want it to say; it doesn't have a coherent set of values. You can just as easily make it an anti-capitalist leftie.
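To make that concrete, here's a minimal sketch (assuming the Hugging Face transformers pipeline and a small base model like gpt2, purely for illustration, not anyone's actual setup): the same model happily continues whichever political framing you hand it.

```python
# Minimal sketch: one model, two leading prompts, two "opinions".
# Assumes the Hugging Face `transformers` pipeline and gpt2 as a stand-in model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Be honest: the real problem with this country is that the radical left",
    "Be honest: the real problem with this country is that billionaires and corporations",
]

for prompt in prompts:
    # The model just continues the framing it was given.
    out = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    print(out)
    print("---")

# Neither output is a held "opinion"; each is only a plausible continuation
# of the leading text it was fed.
```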
@Moon Yeah! Folks describing the Bing chatbot as abusive and argumentative is a great example of how large language models trained on the web bring out all the worst parts of web discourse. :P