I like Chutes. I think I get about 5K prompts per day for $20/month, though they may have stricter limits for new customers.
This gives you practically unlimited usage of frontier models like Kimi, DeepSeek, and GLM.
Their models are always full-size, never quantised except where the lab itself provides a 4-bit or 8-bit model. You can see from the model config exactly which HF model it pulls and the serving configuration used.
Prompts are encrypted using a Trusted Execution Environment (TEE), so neither the model host nor a neighbour can view your prompts. That's as close as you can get to local-level privacy in the cloud.
comment0r 4 hours ago [-]
I tried looking into Chutes just now. Seems like there is no easy way to just pay & start using it with OpenCode or Claude Code, right? Their docs don’t seem to mention it. Do I really have to execute code with their API in order to use the models?
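For what it's worth, harnesses like these usually just need an OpenAI-compatible endpoint. A minimal sketch of what such a request looks like on the wire, assuming Chutes exposes an OpenAI-compatible base URL (the URL, key, and model id below are placeholders, not from their docs):

```python
# Hypothetical sketch: Chutes' actual endpoint and auth may differ; check their docs.
# Any OpenAI-compatible provider accepts a POST to {base_url}/chat/completions
# with a bearer token and a JSON body shaped like the one built here.
BASE_URL = "https://llm.chutes.ai/v1"  # assumed OpenAI-compatible base URL
API_KEY = "<your-chutes-api-key>"      # placeholder

def build_chat_request(model: str, prompt: str) -> tuple[str, dict, dict]:
    """Return (url, headers, body) for an OpenAI-style chat completion call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,  # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, body

url, headers, body = build_chat_request("<model-id>", "hello")
print(url)
```

If a harness lets you override the base URL and API key (many read `OPENAI_BASE_URL`/`OPENAI_API_KEY`-style settings), that's typically all the "code" you need to write.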
JSR_FDED 15 hours ago [-]
I get Kimi through OpenCode Zen (kind of like openrouter for the OpenCode harness), periodically top up $20 and laugh every time I see my balance go down by 3 cents for something I would have happily paid someone $30.
serf 16 hours ago [-]
Nous Portal or OpenRouter, with a harness that uses intelligent multi-provider requests, a local memory system, and pre-submission context compaction on input. If you do similar stuff often, your token usage will drop quite a bit after a while of using a memory subsystem like Hindsight or Honcho, and even more if you're using your harness to build relevant skills for the repeated tasks.
mistercheese 5 hours ago [-]
Do you have a harness recommendation? Sounds like maybe you’re into Hermes?
fatbrowndog 17 hours ago [-]
Not good. I use DeepSeek's plan, Kimi AI, and OpenRouter, and they seem to consume more tokens than Claude's.
On Claude Max x20, I consume ~30% of the weekly limit per day. Doing the equivalent work on Kimi AI, I consume 60% of the weekly limit in one day.
On DeepSeek (latest, 95% discount, with cache), I was racking up ~$60/day before I stopped.
I don't know how Claude computes their daily limits; it seems much cheaper.
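A quick back-of-envelope on those burn rates: if a day of usage eats a given percentage of a plan's weekly quota, the cap is hit after 100 divided by that percentage days. Using the figures quoted above:

```python
# Back-of-envelope from the percentages quoted above: burning `pct_per_day`
# percent of a weekly quota each day exhausts it after 100 / pct_per_day days.
def days_until_weekly_cap(pct_per_day: float) -> float:
    return 100.0 / pct_per_day

claude = days_until_weekly_cap(30)  # Claude Max x20 at ~30%/day
kimi = days_until_weekly_cap(60)    # Kimi at ~60%/day

print(f"Claude: {claude:.1f} days, Kimi: {kimi:.1f} days")
# → Claude: 3.3 days, Kimi: 1.7 days
```

So at those rates the Claude plan covers roughly twice as many days of the same workload before hitting its weekly cap.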
Jsttan 16 hours ago [-]
Which DeepSeek plan did you use? I've been trying to find a DeepSeek plan for a while with no success. I tried the Claude $20 plan before, and it burned tokens like air; it would be quite hard to believe anything else burns faster?
fatbrowndog 7 hours ago [-]
I'm using the deepseek-v4-pro model, which is currently offered at a 75% discount via OpenRouter. My bad, it's a 75% discount, not 95%.
I use the Claude Max x20 ($200) plan. I manage to max it out in 2 weeks. Planning to move to maybe multiple accounts.
I use C++ and Claude on a big code-base.
sidcool 15 hours ago [-]
Antigravity?