New coding models & integrations · Ollama Blog


[Illustration: Ollama coding]

GLM-4.6 and Qwen3-Coder-480B are now available on Ollama's cloud service, with easy integrations into the tools you already use. Qwen3-Coder-30B has been updated for faster, more reliable tool calling in Ollama's new engine.

Get started

GLM-4.6

ollama run glm-4.6:cloud

Qwen3-Coder-480B

ollama run qwen3-coder:480b-cloud

For users with more than 300GB of VRAM, qwen3-coder:480b is also available locally.

Qwen3-Coder-30B

ollama run qwen3-coder:30b
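The `ollama run` commands above open an interactive session; the same models can also be called programmatically through Ollama's local REST API (`POST http://localhost:11434/api/chat`). A minimal Python sketch using only the standard library; it assumes the Ollama server is running locally and the model has already been pulled:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a non-streaming payload for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one JSON object instead of a token stream
    }


def chat(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return the reply text."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]
```

For example, `chat("qwen3-coder:30b", "Write a haiku about alpacas")` returns the model's reply as a string.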

Example prompts

Create a single-page app in a single HTML file with the following requirements:

Name: Ollama's Adventure 
Goal: Jump over obstacles to survive as long as possible.
Features: Increasing speed, high score tracking, retry button, and funny sounds for actions and events.

The UI should be colorful, with parallax scrolling backgrounds.
The characters should look cartoonish, be related to alpacas, and be fun to watch.
The game should be enjoyable for everyone.

Example code generated by GLM-4.6 from a single prompt

[Screenshots of the generated HTML game running in the browser]

Usage with VS Code

First, pull the coding models so they can be accessed via VS Code:

ollama pull glm-4.6:cloud
ollama pull qwen3-coder:480b-cloud
  1. Open the Copilot chat sidebar
  2. Select the model dropdown → Manage models
  3. Click on Ollama under the provider dropdown, then select the desired models
  4. Select the model dropdown again and choose a model (e.g. glm-4.6)

Usage with Zed

First, pull the coding models so they can be accessed via Zed:

ollama pull glm-4.6:cloud
ollama pull qwen3-coder:480b-cloud

Then, open Zed (now available for Windows!)

  1. Click on the agent panel button (glittering stars)
  2. Click on the model dropdown → Configure
  3. Select LLM providers โ†’ Ollama
  4. Confirm the Host URL is http://localhost:11434, then click Connect
  5. Select a model under Ollama
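Step 4 hinges on the Ollama server answering at the host URL. A quick way to verify that outside Zed is a tiny Python check (an illustrative helper, not part of Zed or Ollama):

```python
import urllib.error
import urllib.request

DEFAULT_HOST = "http://localhost:11434"  # Ollama's default host URL


def ollama_reachable(host: str = DEFAULT_HOST, timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at the given host URL."""
    try:
        # The server's root path responds with "Ollama is running" when it is up.
        with urllib.request.urlopen(host, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

If this returns False, start the server with `ollama serve` (or launch the Ollama app) and retry.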

Usage with Droid

First, install Droid:

curl -fsSL https://app.factory.ai/cli | sh

Add the following configuration to ~/.factory/config.json (replace the api_key value with your own key from ollama.com):

{
  "custom_models": [
    {
      "model_display_name": "GLM-4.6",
      "model": "glm-4.6:cloud",
      "base_url": "https://ollama.com/v1",
      "api_key": "your_ollama_api_key",
      "provider": "generic-chat-completion-api"
    },
    {
      "model_display_name": "Qwen3-Coder-480B",
      "model": "qwen3-coder:480b-cloud",
      "base_url": "https://ollama.com/v1",
      "api_key": "your_ollama_api_key",
      "provider": "generic-chat-completion-api"
    }
  ]
}

Then run Droid and type /model to switch models:

╭──────────────────────────────────────────────────╮
│ > GLM-4.6 [current]                              │
│   Qwen3-Coder-480B                               │
│                                                  │
│ ↑/↓ to navigate, Enter to select, ESC to go back │
╰──────────────────────────────────────────────────╯

Integrations

Ollama's documentation now includes sections on using Ollama with popular coding tools, including VS Code, Zed, and Droid.

Cloud API access

Cloud models such as glm-4.6 and qwen3-coder:480b can also be accessed directly via ollama.com's cloud API:

First, create an API key and set it in your environment:

export OLLAMA_API_KEY="your_api_key_here"

Then, call ollama.com's API:

curl https://ollama.com/api/chat \
    -H "Authorization: Bearer $OLLAMA_API_KEY" \
    -d '{
      "model": "glm-4.6",
      "messages": [
        {"role": "user", "content": "Why is the sky blue?"}
      ],
      "stream": false
    }'
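The same call can be made from Python. The sketch below builds the authenticated request using only the standard library, assuming OLLAMA_API_KEY is exported as shown above (the payload follows Ollama's documented /api/chat format):

```python
import json
import os
import urllib.request

CLOUD_URL = "https://ollama.com/api/chat"


def build_cloud_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an authenticated, non-streaming chat request for ollama.com's cloud API."""
    api_key = os.environ["OLLAMA_API_KEY"]  # set via `export OLLAMA_API_KEY=...`
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        CLOUD_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


def cloud_chat(model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_cloud_request(model, prompt)) as resp:
        return json.load(resp)["message"]["content"]
```

For example, `cloud_chat("glm-4.6", "Why is the sky blue?")` returns the reply text, mirroring the curl command above.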

For more information, see Ollama's API documentation.


Posted on October 16, 2025
