Really frustrated by how many people are attributing "opinions" and "feelings" to large language models.
It's like attributing feelings and opinions to your phone's autocomplete when you prompt it with leading questions.
I wish folks understood that the language model is closer to doing RP and "yes and"-ing whatever prompts it gets than to holding some sort of internal state the way a human does.
I saw some "anti-woke" type being like "OH, if you tell it its name is 'Blarf' and that it doesn't need to be nice, it'll say its REAL opinions that get suppressed by the WOKE LIEBERALS," then proceeding to ask it very leading questions straight out of the usual right-wing rhetoric and pretending that isn't the deciding factor in what it says.
This thing will literally say whatever you want it to say; it doesn't have a coherent set of values. You can just as easily make it an anti-capitalist leftie.
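Rough sketch of what I mean, in Python with the Hugging Face transformers library and plain old GPT-2 (my own illustrative pick, obviously not what Bing runs, and the prompts are made up): the same weights will happily "hold" opposite opinions depending on the framing you hand them.

from transformers import pipeline, set_seed

# Small local model, used only to illustrate the point.
generator = pipeline("text-generation", model="gpt2")
set_seed(42)

# Two leading prompts that frame opposite "personas".
prompts = [
    "As a hard-nosed free-market conservative, my honest opinion on taxes is",
    "As a lifelong anti-capitalist organizer, my honest opinion on taxes is",
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])
    print("---")

Neither completion is the model's "real" view; each is just the continuation that best fits the persona the prompt set up.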
@Moon Yeah! Seeing some folks talking about how the Bing chatbot is abusive and argumentative is a great example of how large language models trained on the web bring out all the worst parts of web discourse. :P
@mauve The fact that Bing Chat uses emojis only makes this worse. People want to ascribe feelings to the robots because people have feelings; we do it to everything. I've put "please" in my prompts because I'm a dummy who is trying to be polite when talking to a machine that has the empathetic capacity of a toaster.
@mauve The emoji thing is especially frustrating because it's Microsoft duping people into thinking that this language model is capable of something it simply isn't designed to do.
I think the thing that really bugs me is that people treat the AI as having a specific mindset or opinion, when in reality it holds all possible opinions at once and just follows whichever one best fits the narrative you're weaving with it.