Which DeepSeek R1 Model Can Your Mac Handle?

The Ultimate Guide to Choosing the Perfect Model Size

Joe T. Santhanavanich
Level Up Coding
5 min read · Feb 5, 2025
Hello everyone! Today I'd like to show you which DeepSeek R1 model you can run on your Mac, based on the specific hardware you have. I've spent some time experimenting with different Mac configurations (M1, M2, even the beefier M4 Max) and digging into the setup approaches and quantization options discussed in the community. In the end, choosing the right model comes down to two main factors: how much memory you have and how comfortable you are setting up local tools.

Why Run DeepSeek Locally?

DeepSeek R1 is a powerful large language model (LLM) that’s open-source, meaning you can download it and run it yourself. The big advantage of running models locally is that you avoid sending your data to external servers. The official DeepSeek website might be convenient, but if you’re concerned about privacy — or just want more control over your system — running it locally on your Mac is a great option.
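
If you want to try this yourself, one popular route is Ollama, which handles downloading and running the model for you. Here's a minimal sketch using Ollama's Python client; the deepseek-r1:8b tag is just an example, and you'd pick whichever size fits your Mac (more on that below):

# Minimal sketch: chat with a local DeepSeek R1 model through the Ollama
# Python client (pip install ollama). Assumes the Ollama app is installed
# and running, and that you've already pulled a model, for example with
# `ollama pull deepseek-r1:8b`.
import ollama

response = ollama.chat(
    model="deepseek-r1:8b",  # example tag; choose a size your memory allows
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])  # nothing leaves your machine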

However, be aware that bigger models need more memory to run efficiently. On Apple Silicon (M1/M2/M3/M4) Macs, the GPU shares the system's unified memory, so the key question is simply how much RAM your machine has: the larger the model, the more of it you need.
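
As a rough rule of thumb, a model's weight footprint is its parameter count times the bytes per weight, which is why quantization (4-bit instead of 16-bit) shrinks the requirement so dramatically. The quick calculation below uses my own back-of-envelope assumptions, including the 1.2x overhead factor for the KV cache and runtime buffers; these are not official figures:

# Back-of-envelope memory estimate for a quantized model.
# The 1.2x overhead factor (KV cache, runtime buffers) is a rough assumption.
def approx_memory_gb(params_billions: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    bytes_per_weight = bits_per_weight / 8
    # billions of params * bytes per param = gigabytes of weights
    return params_billions * bytes_per_weight * overhead

for params in (7, 14, 32, 70):   # common distilled R1 sizes, in billions
    for bits in (16, 8, 4):      # full precision down to 4-bit quantization
        print(f"{params}B at {bits}-bit: ~{approx_memory_gb(params, bits):.1f} GB")

By that estimate, a 16 GB Mac sits comfortably in 7B-or-8B-at-4-bit territory, while the 70B distill wants roughly 48 GB of unified memory or more.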

Model Sizes and Memory…
