Using Codex CLI with gpt-oss:120b on an NVIDIA DGX Spark via Tailscale
This technical blog post details how to configure and use OpenAI's Codex CLI coding agent with the gpt-oss:120b model running locally in Ollama on an NVIDIA DGX Spark. The setup uses a Tailscale network so the author can run the Codex CLI from their laptop anywhere in the world against this self-hosted model, demonstrated by building a Space Invaders clone.
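The wiring described above boils down to pointing Codex CLI at the Ollama server's OpenAI-compatible endpoint over the Tailscale network. A minimal sketch of what that configuration might look like in `~/.codex/config.toml` follows; the hostname `spark` and the provider id `ollama` are assumptions for illustration, not taken from the original post:

```toml
# Hypothetical ~/.codex/config.toml sketch.
# "spark" is the DGX Spark's Tailscale hostname (an assumed name);
# 11434 is Ollama's default port, and /v1 is its OpenAI-compatible API.
[model_providers.ollama]
name = "Ollama on DGX Spark"
base_url = "http://spark:11434/v1"

# Tell Codex to use that provider and the locally hosted model.
model_provider = "ollama"
model = "gpt-oss:120b"
```

With this in place, running `codex` on the laptop sends requests across the tailnet to the Spark, so the heavy model never leaves the home machine.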
Top of the Week

1. Using A Hidden Submit Button To Ensure Unnamed Submissions • Ben Nadel • 3 votes
2. uv+just for testing multiple Python versions • Daniel Feldroy • 3 votes
3. ServiceNow and Microsoft Copilot • Marius Sandbu • 2 votes
4. 🧠 Build an Agent Chat that Remembers — Persisting Conversations with Microsoft Agent Framework • Bruno Capuano • 2 votes
5. Agentic AI and Security • Martin Fowler • 2 votes
6. Springs and Bounces in Native CSS • Josh Comeau • 2 votes
7. Importing vs fetching JSON • Jake Archibald • 2 votes
8. Hire Me in Japan • Dan Abramov • 1 vote
9. In the economy of user effort, be a bargain, not a scam • Lea Verou • 1 vote
10. The Learning Loop and LLMs • Martin Fowler • 1 vote