With all the hype around DeepSeek R1, I thought I'd give it a try to see if things have gotten better in the hyped world of LLMs.

Alas, it still hallucinates and provides nonsensical answers to simple questions. The main difference seems to be that both o1 and R1 now do a “thinking” step, which takes a lot of time to compute locally and slows the actual response considerably (not that the response made much sense after that).

Is it just me?


@indutny did you run the full R1 or one of the distilled models? R1 is like 600+B params, and the small models can't really compare to that since they're just qwen/llama with some tuning to make them yap more.

@mauve I didn’t, but isn’t the distilled model the one everyone runs and gets excited about?

@indutny I'm not sure; I think folks are excited by the prospect of an open source alternative to o1, in which case that'd be the massive 600B model, which IIRC is what powers the DeepSeek app. I found the distilled models to be no less useful than regular qwen2.5 for my use cases 😅 I think you could get more out of them with the right prompting and a multi-shot approach. Maybe have them ask for more human guidance instead of looping.
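A rough sketch of what that could look like, assuming a distilled R1 checkpoint served through a local OpenAI-compatible endpoint (e.g. Ollama); the base URL, model tag, and system prompt below are placeholders, not anything from the thread:

```python
# Sketch: nudge a distilled R1 model to ask for human guidance instead of
# looping on its own reasoning. Assumes an OpenAI-compatible local server
# (e.g. Ollama at localhost:11434) and a distilled model tag (placeholder).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

SYSTEM = (
    "Think step by step, but if you are unsure about any fact or about what "
    "the user wants, stop and ask a clarifying question instead of guessing."
)

history = [{"role": "system", "content": SYSTEM}]

def ask(user_msg: str) -> str:
    """One turn of a multi-shot loop: append the user message, return the reply."""
    history.append({"role": "user", "content": user_msg})
    resp = client.chat.completions.create(
        model="deepseek-r1:14b",  # placeholder tag for a distilled model
        messages=history,
        temperature=0.6,
    )
    reply = resp.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(ask("How many r's are in 'strawberry'?"))
```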

@indutny If you have 20 GB of RAM, this model might be more representative of its capabilities: unsloth.ai/blog/deepseekr1-dyn
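For context, a minimal sketch of loading a dynamically quantized R1 GGUF locally with llama-cpp-python; the file name, context size, and GPU-offload settings are assumptions on my part, so check the linked unsloth post for the actual download and recommended settings:

```python
# Sketch: run a dynamically quantized DeepSeek R1 GGUF with llama-cpp-python.
# The shard name and tuning parameters below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",  # first shard of a split GGUF (placeholder name)
    n_ctx=8192,        # context window; raise it if you have the memory
    n_gpu_layers=20,   # offload what fits on your GPU; 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain why the sky is blue in two sentences."}],
    max_tokens=512,
    temperature=0.6,
)
print(out["choices"][0]["message"]["content"])
```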
