Feature hasn't been suggested before.
Describe the enhancement you want to request
I would like opencode to support user-overridable input limits for OpenAI models such as gpt-5.4, or otherwise align its effective limits more closely with the official OpenAI model documentation.
Background
OpenAI documents gpt-5.4 as having a 1,050,000 token context window and 128,000 max output tokens.
In opencode, openai:gpt-5.4 currently appears to be loaded with:
- context: 1,050,000
- input: 272,000
- output: 128,000
Because opencode's compaction logic prefers model.limit.input when it is present, long conversations may be compacted much earlier than users would expect from the official GPT-5.4 context window.
Requested enhancement
Please consider one of the following:
- Allow provider.models.<model>.limit.input to be overridden in user config, and make sure that override is respected at runtime.
- Add an explicit advanced setting to opt into the full documented long-context budget for supported models/providers.
- Revisit the metadata source used for OpenAI GPT-5.x models so opencode does not enforce a much smaller effective input budget unless there is a confirmed upstream reason.
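For the first option, a user-facing override might look roughly like the following in the opencode config file. This is only a sketch: the exact schema (key nesting, whether limits live under provider.models or elsewhere) is an assumption, not the documented config format:

```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5.4": {
          "limit": {
            "context": 1050000,
            "input": 1050000,
            "output": 128000
          }
        }
      }
    }
  }
}
```

The point is simply that a user who understands the trade-offs could raise the client-side input budget to match the documented context window.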
Why this would help
- It would let advanced users fully use the long-context models they are already paying for.
- It would reduce unnecessary early compaction and summarization.
- It would make model behavior more predictable and closer to official provider documentation.
- It would improve parity with OpenAI's own tooling, where client-side context budget is configurable.
Additional note
This request is not asking opencode to bypass any upstream provider limits. It is only asking for either:
- more accurate defaults, or
- a user-visible override for the client-side/model-metadata limit that opencode uses for compaction decisions.
If the current 272,000 input limit is intentional because of a known upstream constraint, documenting that clearly would also be very helpful.