LLM output tends to have low information density and takes up even more of the reader's time than necessary. Could we normalize posting your prompts alongside the LLM output so I can just read those instead?
@mauve Relevant blog post: https://distantprovince.by/posts/its-rude-to-show-ai-output-to-people/