With all the hype around DeepSeek R1, I thought I'd give it a try to see if things have gotten any better in the world of LLMs.
Alas, it still hallucinates and provides nonsensical answers to simple questions. The main difference seems to be that both o1 and R1 now do a “thinking” step, which takes a lot of time to compute locally and slows the actual response considerably (not that the response made much sense after that).
Is it just me?
@indutny did you run the full R1 or one of the distilled models? R1 is like 600+ B params, and the small models can't really compare to that since they're just Qwen/Llama but with some tuning to make them yap more.
@indutny If you have 20 GB of RAM, this model might be more representative of R1's capabilities: https://unsloth.ai/blog/deepseekr1-dynamic
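For anyone who wants to try that route, here's a minimal sketch of loading the Unsloth dynamic-quant GGUF with llama-cpp-python. The repo id, quant name, and shard filename are assumptions based on the linked post, so adjust them to whatever variant actually fits your RAM:

```python
# Sketch: download the dynamic-quant shards and load them with llama-cpp-python.
# Repo/quant/shard names below are assumptions from the Unsloth post, not verified.
from huggingface_hub import snapshot_download
from llama_cpp import Llama

local_dir = snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",   # assumed repo name
    allow_patterns=["*UD-IQ1_S*"],        # assumed: the 1.58-bit dynamic quant shards
)

llm = Llama(
    # llama.cpp loads split GGUFs from the first shard; filename is an assumption
    model_path=f"{local_dir}/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",
    n_ctx=2048,       # keep the context small to stay within RAM
    n_gpu_layers=0,   # CPU-only; raise this if you have VRAM to offload layers to
)

out = llm("Why is the sky blue? Answer briefly.", max_tokens=256)
print(out["choices"][0]["text"])
```

Even quantized it will be slow on CPU, but at least you'd be judging something closer to the actual R1 rather than a distill.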