Local integration of DeepSeek-r1 in VSCode [Eng/Esp] by darthgexe
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/EogMQ4rBhAabFxGjEzfAdVvBc8dJywNxDcvm1HeQGVvburhqv7g1EjTfoTf5PRamNfx.jpeg"><center><sup><a href="https://ideogram.ai/assets/image/lossless/response/9PjS1p8RTI-_1ZAJrP-rJg">IA generated image</a></sup></center></center>

<p>
Hello Hivers!
<br>
Today I wanted to share a practical guide to integrate DeepSeek-R1, a language model specialized in code and mathematical reasoning, into our local development environment using Ollama and Visual Studio Code. This allows us to leverage this AI without relying on the cloud and optimize our productivity.

<h1>What is DeepSeek-R1?</h1> 
DeepSeek-R1 is an open-source language model trained for programming tasks, data analysis, and technical problem-solving. It excels at: 
<ul> 
<li>Advanced code understanding (Python, JavaScript, Java, etc.).</li>
<li>Mathematical and logical capabilities.</li> 
<li>Efficient and contextual responses.</li> 
<li>Flexibility to customize its behavior.</li> 
</ul> 

<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23swgSgSfsKXJDkGCDrPQ9UKxvPoBVEua6u9Z5ZLx7PQcDqnMCH7TEkP6XFVKz5tzUoep.png"><center><sup><a href="https://chat.deepseek.com/0">DeepSeek Chat</a></sup></center></center>

<h2>Why use it locally?</h2> 
<ol> 
<li><b>Privacy:</b> Your data and code never leave your machine.</li> 
<li><b>Speed & Availability:</b> While dependent on your computer's resources, it remains always available with no remote server latency.</li>
<li><b>Offline:</b> Works without an internet connection.</li> 
<li><b>Customization:</b> Tailor it to your specific needs.</li> 
</ol> 

Integrating it into VS Code enables: 
<ol> 
<li>Smart autocompletion.</li> 
<li>Assisted debugging.</li> 
<li>Rapid code generation (boilerplate).</li> 
<li>Real-time technical explanations.</li> 
</ol> 

<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23xejtCCVWrhJWUB1nKeQnRSnB8HrNYLvcTtVYKM9itH236fQaqnto2137zRbB7oWnGhW.png"></center>

<h1>Step 1: Install Ollama</h1> 
Ollama is an open-source platform designed to run and manage large language models (LLMs) locally on your machine. It acts as a model manager, simplifying the download, configuration, and use of LLMs (like DeepSeek-R1, Llama 3, Mistral, etc.). 

<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23tGVqVk3bosg6Pv1jFj4S2U6dZKmDMt3WnAxbMgQhSJ5VhKSwD8F4SJPwUCRTwcrmard.png"><center><sup><a href="https://ollama.com/">Ollama</a></sup></center></center> 

<h5>Installation on Linux:</h5> 
<pre>
curl -fsSL https://ollama.com/install.sh | sh 
</pre> 

<h5>Installation on Windows:</h5>
Download the installer from the <a href="https://ollama.com/download">Ollama website</a>.
Run the .exe file and follow the prompts.
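
Either way, we can confirm the installation from a terminal (the installer also starts a local Ollama service on port 11434, which we will use later):
<pre>
ollama --version   # prints the installed version
ollama list        # lists the models downloaded so far (empty for now)
</pre>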

<h1>Step 2: Download DeepSeek-R1</h1> 
Open your terminal and run: 
<pre> 
ollama run deepseek-r1 
</pre>
 
Ollama will automatically download the model the first time. By default, it downloads the 7b version, which has 7 billion parameters and needs at least 8GB of RAM to run well. To use a different version, specify it explicitly.

For example, if you have a low-resource PC, you could use the 1.5b version by running:

<pre>
ollama run deepseek-r1:1.5b
</pre>

Check the available distilled versions of DeepSeek-R1 <a href="https://ollama.com/library/deepseek-r1"> here</a>.
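
Besides the interactive prompt, Ollama exposes a local HTTP API on port 11434, which is exactly what editor integrations connect to. A quick sanity check from the terminal (the prompt text is just an example):
<pre>
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Write a one-line hello world in Python",
  "stream": false
}'
</pre>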

<h1>Step 3: Integrate with Visual Studio Code</h1>
 
<b>Install the Continue extension in VS Code:</b> Search for the extension in the VS Code Marketplace and click Install. 

<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23xp4YBUxbYZsDyHyNBHHSkcrYgM3356e7wNNXnnkMiHteDWbApAHnjW1wkcEhktbg8dw.png"></center>
<br>

<b>Configure the Ollama connection:</b>

<ul> 
<li>Open the command palette in VS Code (Ctrl/Cmd + Shift + P).</li> 
<li>Type and select: <i>Continue: Open Config JSON</i>.</li> 
<li>Modify the config.json file to include DeepSeek-R1: 
<pre><code>
{
  "models": [
    {
      "title": "DeepSeek-R1 (Local)",
      "model": "deepseek-r1",
      "apiBase": "http://localhost:11434",
      "provider": "ollama",
      "temperature": 0.3
    }
  ]
}
</code></pre>

<b>Parameter explanations:</b>
<ul> 
<li><b>model:</b> The downloaded distilled model.</li> 
<li><b>apiBase:</b> Ollama's URL (default port 11434).</li> 
<li><b>temperature:</b> Controls creativity (0 = precise, 1 = creative).</li> 
<li><b>provider:</b> Use "ollama" for local connections.</li> 
</ul>
<b>Note:</b> If you installed a different version of DeepSeek, specify it in the model field, e.g.:

<pre><code> 
"model": "deepseek-r1:8b", 
</code></pre> 
</li> 
</ul>
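
If you keep several distilled versions downloaded, the "models" array accepts one entry per model, letting you switch between them from Continue's model selector. A sketch following the same format as above (the tags must match what <i>ollama list</i> reports):
<pre><code>
{
  "models": [
    {
      "title": "DeepSeek-R1 7B (Local)",
      "model": "deepseek-r1",
      "apiBase": "http://localhost:11434",
      "provider": "ollama",
      "temperature": 0.3
    },
    {
      "title": "DeepSeek-R1 1.5B (Local)",
      "model": "deepseek-r1:1.5b",
      "apiBase": "http://localhost:11434",
      "provider": "ollama",
      "temperature": 0.3
    }
  ]
}
</code></pre>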
 
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23xejtCCVWrhJWUB1nKeQnRSnB8HrNYLvcTtVYKM9itH236fQaqnto2137zRbB7oWnGhW.png"></center> 

<h1>Use DeepSeek-R1 in Your Workflow</h1>
<b>Quick shortcut:</b> Select a code snippet and use Ctrl/Cmd + Shift + L to open Continue.
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23swiJgS2ti5wBtHYn97ARFMT5haWugazZhwLn2MPDWMKigUpyz6Xw2NkH3UipA8MHpfT.png"></center> 
<br>

<b>Interactive chat:</b> Open the Continue sidebar tab (icon in the activity bar).
Ask questions or make queries:
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/Eo1wMUEJTJw4jTSygb4oanSMrXicqAkwz8QJdL2RuPmpszCHP9b9k962qfvERAwytJW.png"></center> 
<br>

<b>Generate code:</b> Write a comment requesting a code snippet.
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/EnynyGBfBtkaJcmZknzjhu4mYSVQguZi8Hidr8jhg1rghASdwoYaiFQNvcqTQ1RwUyB.png"></center> 
<br>
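For instance, writing a comment like the one below and letting the model complete it typically yields something along these lines (illustrative only; the actual output varies with the model version and temperature):
<pre><code>
# Write a function that returns the first n Fibonacci numbers
def fibonacci(n):
    """Return a list with the first n Fibonacci numbers."""
    sequence = []
    a, b = 0, 1
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence
</code></pre>
<br>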

<b>Docstrings & comments:</b> Highlight a code block and use "Write a Docstring for this code" or "Write Comments for this code" to document it.
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23swi9xYmE9gBcz3oeuDLNCNC82vmKcMiPvXFozkPFd7dKFx1ENn4Vmj9upmT8ZnSB8Bu.png"></center> 

<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23xejtCCVWrhJWUB1nKeQnRSnB8HrNYLvcTtVYKM9itH236fQaqnto2137zRbB7oWnGhW.png"></center>

Using DeepSeek-R1 locally with Ollama gives you full control over the AI model, while integrating it into VS Code transforms your editor into a powerful productivity tool.
</p>

<details>
<summary>versión en español</summary>

<p>
Hola Hivers!
 
Hoy les quería compartir una guía práctica para integrar DeepSeek-R1, un modelo de lenguaje especializado en código y razonamiento matemático, en nuestro entorno de desarrollo local usando Ollama y Visual Studio Code, de manera que podamos aprovechar esta IA sin depender de la nube y optimizar nuestra productividad.

<h1>¿Qué es DeepSeek-R1?</h1>
DeepSeek-R1 es un modelo de lenguaje de código abierto entrenado para tareas de programación, análisis de datos y resolución de problemas técnicos. Destaca por:
<ul>
<li>Entendimiento avanzado de código (Python, JavaScript, Java, etc.).</li>
<li>Capacidades matemáticas y lógicas.</li>
<li>Respuestas eficientes y contextuales.</li>
<li>Flexibilidad para personalizar su comportamiento.</li>
</ul>

<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23swgSgSfsKXJDkGCDrPQ9UKxvPoBVEua6u9Z5ZLx7PQcDqnMCH7TEkP6XFVKz5tzUoep.png"><center><sup><a href="https://chat.deepseek.com/0">DeepSeek Chat</a></sup></center></center>

<h2>¿Por qué usarlo en local?</h2>
<ol>
<li><b>Privacidad:</b> Tus datos y código nunca salen de tu máquina.</li>

<li><b>Velocidad y disponibilidad:</b> Aunque depende en gran medida de los recursos de nuestro ordenador, estará disponible siempre y sin latencia de servidores remotos.</li>

<li><b>Offline:</b> Funciona sin conexión a internet.</li>

<li><b>Personalización:</b> Lo ajustas a tus necesidades específicas.</li>
</ol>
Integrarlo en VS Code nos permite:
<ol>
<li>Autocompletado inteligente.</li>
<li>Debugging asistido.</li>
<li>Generación rápida de código (boilerplate).</li>
<li>Explicaciones técnicas en tiempo real.</li>
</ol>
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23xejtCCVWrhJWUB1nKeQnRSnB8HrNYLvcTtVYKM9itH236fQaqnto2137zRbB7oWnGhW.png"></center>

<h1>Paso 1: Instalar Ollama</h1>
Ollama es una plataforma open source diseñada para ejecutar y gestionar modelos de lenguaje grandes (LLMs) de forma local en tu máquina. Funciona como un gestor de modelos que simplifica la descarga, configuración y uso de LLMs (como DeepSeek-R1, Llama 3, Mistral, etc.).


<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23tGVqVk3bosg6Pv1jFj4S2U6dZKmDMt3WnAxbMgQhSJ5VhKSwD8F4SJPwUCRTwcrmard.png"><center><sup><a href="https://ollama.com/">Ollama</a></sup></center></center>

<h5>Instalación en Linux:</h5>
<pre>
curl -fsSL https://ollama.com/install.sh | sh  
</pre>
<h5>Instalación en Windows:</h5>

Descargamos el instalador desde la página web de <a href="https://ollama.com/download">Ollama</a>.
Ejecutamos el archivo .exe y seguimos los pasos.
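
En cualquier caso, podemos comprobar la instalación desde la terminal (el instalador también inicia un servicio local de Ollama en el puerto 11434, que usaremos más adelante):
<pre>
ollama --version   # muestra la versión instalada
ollama list        # lista los modelos descargados hasta ahora (vacío por ahora)
</pre>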

<h1>Paso 2: Descargar DeepSeek-R1</h1>
Abre tu terminal y ejecuta:
<pre>
ollama run deepseek-r1
</pre>  
Ollama descargará automáticamente el modelo la primera vez.
Por defecto descargará la versión 7b, con 7 mil millones de parámetros, que necesita al menos unos 8GB de RAM para un buen funcionamiento. Si queremos una versión distinta, debemos especificarla.

Por ejemplo, si tenemos una PC con recursos muy limitados, podríamos usar la versión 1.5b. Para ello debemos ejecutar el comando:

<pre>
ollama run deepseek-r1:1.5b
</pre>  

<a href="https://ollama.com/library/deepseek-r1">Aqui</a> puedes ver las distintas versiones destiladas de DeepSeek-r1 que se pueden usar. 

<h1>Paso 3: Integrar con Visual Studio Code</h1>
<b>Instalar la extensión Continue en VS Code:</b>
Busca la extensión en el marketplace de VS Code y haz clic en Install.

<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23xp4YBUxbYZsDyHyNBHHSkcrYgM3356e7wNNXnnkMiHteDWbApAHnjW1wkcEhktbg8dw.png"></center>



<b>Configurar la conexión a Ollama:</b>
<ul>
<li>Abre la paleta de comandos en VS Code (Ctrl/Cmd + Shift + P).</li>
<li>Escribe y selecciona: Continue: Open Config JSON.</li>
<li>Modifica el archivo config.json para incluir DeepSeek-R1:
<pre><code>
{
  "models": [
    {
      "title": "DeepSeek-R1 (Local)",
      "model": "deepseek-r1",
      "apiBase": "http://localhost:11434",
      "provider": "ollama",
      "temperature": 0.3
    }
  ]
}
</code></pre>


<b>Explicación de parámetros:</b>
<ul>
<li><b>model:</b> El modelo destilado que hemos descargado.</li>
<li><b>apiBase:</b> URL de Ollama (puerto 11434 por defecto).</li>
<li><b>temperature:</b> Controla la creatividad (0 = preciso, 1 = creativo).</li>
<li><b>provider:</b> El proveedor "ollama" para conexiones locales.</li>
</ul>

<b>Nota:</b> Si anteriormente instalamos otra versión de DeepSeek, debemos especificarla en el campo model, por ejemplo:
<pre><code>
      "model": "deepseek-r1:8b",
</code></pre>
</li>
</ul>
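
Si tenemos varias versiones destiladas descargadas, el array "models" admite una entrada por modelo, lo que permite alternar entre ellos desde el selector de Continue. Un esbozo con el mismo formato anterior (los tags deben coincidir con lo que reporta <i>ollama list</i>):
<pre><code>
{
  "models": [
    {
      "title": "DeepSeek-R1 7B (Local)",
      "model": "deepseek-r1",
      "apiBase": "http://localhost:11434",
      "provider": "ollama",
      "temperature": 0.3
    },
    {
      "title": "DeepSeek-R1 1.5B (Local)",
      "model": "deepseek-r1:1.5b",
      "apiBase": "http://localhost:11434",
      "provider": "ollama",
      "temperature": 0.3
    }
  ]
}
</code></pre>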

<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23xejtCCVWrhJWUB1nKeQnRSnB8HrNYLvcTtVYKM9itH236fQaqnto2137zRbB7oWnGhW.png"></center>

<h1>Usar DeepSeek-R1 en tu flujo de trabajo</h1>

<b>Atajo clave:</b> Selecciona una porción de código y usa Ctrl/Cmd + Shift + L para abrir Continue.
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23swiJgS2ti5wBtHYn97ARFMT5haWugazZhwLn2MPDWMKigUpyz6Xw2NkH3UipA8MHpfT.png"></center>
<br>

<b>Chat interactivo:</b> Abre la pestaña lateral de Continue (icono en la barra de actividades).
Escribe alguna pregunta o realiza una consulta:
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/Eo1wMUEJTJw4jTSygb4oanSMrXicqAkwz8QJdL2RuPmpszCHP9b9k962qfvERAwytJW.png"></center>
<br>

<b>Generar código:</b> Escribe un comentario para pedir un fragmento de código.
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/EnynyGBfBtkaJcmZknzjhu4mYSVQguZi8Hidr8jhg1rghASdwoYaiFQNvcqTQ1RwUyB.png"></center>
<br>
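Por ejemplo, al escribir un comentario como el siguiente y dejar que el modelo lo complete, se suele obtener algo parecido a esto (solo ilustrativo; la salida real varía según la versión del modelo y la temperatura):
<pre><code>
# Escribe una función que devuelva los primeros n números de Fibonacci
def fibonacci(n):
    """Devuelve una lista con los primeros n números de Fibonacci."""
    secuencia = []
    a, b = 0, 1
    for _ in range(n):
        secuencia.append(a)
        a, b = b, a + b
    return secuencia
</code></pre>
<br>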

<b>Docstrings y comentarios:</b> Resalta un bloque y usa "Write a Docstring for this code" o "Write Comments for this code" para documentar el código.
<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23swi9xYmE9gBcz3oeuDLNCNC82vmKcMiPvXFozkPFd7dKFx1ENn4Vmj9upmT8ZnSB8Bu.png"></center>


<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23xejtCCVWrhJWUB1nKeQnRSnB8HrNYLvcTtVYKM9itH236fQaqnto2137zRbB7oWnGhW.png"></center>

Usar DeepSeek-R1 en local con Ollama te da control total sobre el modelo de IA, mientras que integrarlo en VS Code transforma tu editor en una potente herramienta de trabajo.
</p>
</details>

<center><img src="https://files.peakd.com/file/peakd-hive/darthgexe/23wMcbCanUwy8mq3z8GL39qEEGRWkX6vY9HUixD6xJe2HDScv4yuq9vM7jckQJ8KZNxDe.png"></center>
@genesisledger ·
That's interesting. Is it resource-intensive?
@darthgexe ·
Hello, it depends on the model you use. There are lightweight models like the 1.5b that don't consume much and run without problems on an average PC. There are also large models, up to the 671b, which is impossible to run on a single PC; for that you need clusters of several powerful computers.

If your PC has less than 8GB of RAM I recommend the 1.5b, and if you have more than that you can try the 7b or 8b.