One doc tagged with "claude-code"

Run Ollama on Google Colab's GPU and spin up a free LLM server.

When you use coding assistants like Claude Code or Continue, API costs can add up quickly, or you may prefer not to send your code to an external service. There's also the frustration of wanting to try lightweight local LLMs, only to find that your local GPU is too slow to be practical.
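
The linked doc covers the full setup; as a rough sketch, the core steps boil down to something like the following Colab cell (the model tag, wait time, and test prompt below are illustrative assumptions, not details from the doc):

```python
import json
import subprocess
import time
import urllib.request

# Install Ollama with the official Linux install script (works in Colab's VM)
subprocess.run("curl -fsSL https://ollama.com/install.sh | sh", shell=True, check=True)

# Start the Ollama server in the background and give it a moment to come up
server = subprocess.Popen(["ollama", "serve"])
time.sleep(5)

# Pull a lightweight model (this tag is just an example; pick any model you like)
subprocess.run(["ollama", "pull", "qwen2.5-coder:7b"], check=True)

# Send a test prompt to the local API; Ollama listens on port 11434 by default
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(
        {"model": "qwen2.5-coder:7b", "prompt": "Say hello.", "stream": False}
    ).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.loads(urllib.request.urlopen(req).read())["response"])
```

Once the server is running on the Colab GPU, a coding assistant can be pointed at it, typically through a tunnel such as ngrok or cloudflared, since Colab does not expose local ports publicly.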