github.com/go-openapi/runtime is a runtime library used to work with OpenAPI.
At the moment, it only supports OpenAPI v2 (aka Swagger).
It is used by clients and servers generated with go-swagger,
or directly by applications that build untyped OpenAPI / Swagger clients or servers.
It ships:
a configurable HTTP client transport (client.Runtime): TLS, proxy,
timeouts, OpenTelemetry tracing, pluggable authentication
a server middleware pipeline that turns an analyzed OpenAPI spec into a
working http.Handler: routing, security, parameter binding, validation
and operation execution
a dependency-free server-middleware module with media-type processing, content
negotiation and doc-UI helpers, usable from any plain net/http server
Using only the dependency-free middleware (media types, negotiation, doc UIs):
go get github.com/go-openapi/runtime/server-middleware
Where to go next
Features
Features supported by our client and server, with normative references.
→ usage/features
Core
The five interfaces (Consumer, Producer, Authenticator, Authorizer,
OperationHandler) every other piece is built on, plus content-type and
validation plumbing.
Educational deep-dives and FAQs covering both simple and advanced
usage of the runtime. Start with the FAQ for quick answers; jump
to a topic page for the full algorithm or behaviour reference.
Repo-level information for github.com/go-openapi/runtime.
Cross-org contributing and maintainer guides live in the shared
go-openapi doc-site.
Subsections of go-openapi runtime
Usage
Pick a section below. The core pages explain the interfaces and
layering that the rest of the library is built on; start there if you want a
mental model before diving into any specific area.
Runnable snippets covering common runtime usage scenarios: server
assembly, client setup, custom middleware and authentication.
Subsections of Usage
Features
A primer on what this runtime implementation supports, with normative
references to the standards each feature implements. Citations point at
the canonical specification rather than secondary sources.
Client & Server
HTTP/1.1 and HTTP/2 over plaintext or TLS. HTTP/2 is inherited
transparently from Go’s net/http stack on both client and server
when ALPN negotiates h2; no runtime-specific wiring. See
HTTP core below for the supporting RFCs.
Built-in OpenTelemetry tracing (OpenTelemetry spec);
legacy OpenTracing support remains in a sibling compatibility module.
Debug mode: request / response dumping enabled via the
Runtime.Debug field (or Runtime.SetDebug(true)); useful while
iterating on a generated client.
The runtime parses the standard auth headers and dispatches to
application-supplied callbacks for credential / token validation.
Token issuance, JWT signature checking, and OIDC ID-token validation
are out of scope; they belong in the callback.
API Key in header, query, or cookie: OpenAPI security scheme
convention; no dedicated RFC.
Bearer tokens: header parsing per RFC 6750. The
runtime treats the bearer value as an opaque string; downstream
parsing (JWT, opaque tokens, …) is the callback’s responsibility.
OAuth 2.0: the runtime exposes the same Bearer hook with the
OAuth-2 framing (RFC 6749; RFC 8252 for native
apps). All four grant flows (authorization code, implicit, client
credentials, password) work because the runtime sees only the
resulting access token.
Not supported (yet)
Language negotiation: Accept-Language / Content-Language
headers and language-tag parsing.
Compression: Accept-Encoding / Content-Encoding negotiation
and the content-coding registry (gzip, Brotli, zstd).
middleware reuses the server-middleware primitives (the dotted
arrow): negotiation, media-type matching and the doc-UI handlers all
live in server-middleware.
Backward-compatibility note
The legacy entry points pre-existing in package middleware before v0.30.0 (NegotiateContentType, SwaggerUI, …)
are still available as a shim (middleware/seam.go) that now forwards to
the new module; see server / deprecated shims.
Read on for what each interface does, the built-in content-type codecs and
the validation hooks.
The five core interfaces (Consumer, Producer, Authenticator,
Authorizer, OperationHandler) and where each one fires on the
client and server sides.
Validatable and ContextValidatable interfaces, when the runtime
invokes them, and how they interact with spec-based validation.
Subsections of Core
Interfaces & layering
All interfaces live in the root package
github.com/go-openapi/runtime.
Each one comes with a companion *Func adapter so plain functions can be
used wherever an implementation is required.
server: serialize the operation handler’s return value into the
response body
client: serialize a request body before sending
The split between Consumer and Producer is deliberate: request
deserialization and response serialization are independent concerns and a
given content type may want different behaviour on each side (think of
streaming uploads vs. buffered downloads).
Authenticator: turn raw auth data into a principal
Authentication answers who; authorization answers may they do this?
Authorizer runs after a principal has been resolved. A non-nil error blocks
the request.
Server-only. There is no built-in authorizer; you wire your own.
Server-only. The argument is the bound (and validated) parameter struct;
the return value is whatever Producer will then turn into the response
body. error propagates to the configured error handler.
Server lifecycle: where each interface fires
For a request that reaches a matched route, the conventional pipeline
runs the following stages. Each stage is a separate http.Handler in
the chain, composable via middleware.Builder.
flowchart TD
req(((HTTP request)))
router["Router<br/>match path/method against spec"]
sec["Security · Context.Authorize<br/>Authenticator → principal<br/>Authorizer → may proceed?"]
neg["ContentType / Accept negotiation<br/>pick Consumer + target Producer<br/>(part of BindValidRequest)"]
bind["Binder<br/>path/query/header/body params<br/>(uses Consumer)"]
val["Validator<br/>param validation + Validatable"]
op["OperationExecutor<br/>call OperationHandler.Handle"]
resp["Responder<br/>(uses Producer)"]
out(((HTTP response)))
req --> router --> sec --> neg --> bind --> val --> op --> resp --> out
A few things worth knowing:
The order above is a convention, not a runtime invariant. It is what
the runtime’s untyped path
(middleware.newRoutableUntypedAPI)
does (it wraps the bind+validate closure with newSecureAPI so that
security runs first) and what go-swagger’s generated typed handlers do
(each operation’s ServeHTTP calls Context.Authorize, then
Context.BindValidRequest, then the handler). You can compose a
different chain via middleware.Builder if you have a reason to.
Security comes before binding and validation. That way an
unauthenticated request short-circuits with 401 without paying for
parameter binding or body deserialization.
Auth is a single call site, not two. Context.Authorize runs the
configured authenticators in order and, on success, calls the optional
Authorizer. An Authenticator returning (false, nil, nil) means
“this scheme does not apply” and the next one is tried; a non-nil
error short-circuits with 401.
The pipeline’s stages are documented in detail under
server / pipeline.
The CSV and bytestream factories accept option functions (e.g.
runtime.ClosesStream to make a stream consumer close the underlying
reader). See the godoc
for the full option list.
Registering codecs on a server
The server side keeps two map[mediaType]Consumer / …Producer lookups,
populated at API construction time. For an untyped API:
// SPDX-License-Identifier: Apache-2.0

// Package customcodec illustrates how to implement a runtime.Consumer for a
// custom wire format. Uint32Consumer decodes a single big-endian 32-bit
// unsigned integer from the request body into a *uint32 target.
package customcodec

import (
	"encoding/binary"
	"fmt"
	"io"

	"github.com/go-openapi/runtime"
)

// Uint32Consumer returns a runtime.Consumer that reads a single big-endian
// uint32 from r and stores it at v (which must be a *uint32).
func Uint32Consumer() runtime.Consumer {
	return runtime.ConsumerFunc(func(r io.Reader, v any) error {
		p, ok := v.(*uint32)
		if !ok {
			return fmt.Errorf("uint32 consumer: target %T is not *uint32", v)
		}
		return binary.Read(r, binary.BigEndian, p)
	})
}
Register the resulting Consumer under whatever MIME types should dispatch to it.
Selection rules
How the runtime chooses which consumer / producer to use for a given
request (including wildcards, MIME parameters, and the asymmetric matching
rule) is documented in
tutorials / media-type selection
and surfaced site-side under
standalone / content negotiation.
Client-side override: ContentTyper
A request body value can declare its own Content-Type by implementing
runtime.ContentTyper:
type ContentTyper interface {
	ContentType() string
}
When a body payload set via SetBodyParam is a stream and ContentType()
returns a non-empty value, that value wins over the operation’s consumes
default. Same goes for individual file values inside a multipart upload.
The full algorithm is in
tutorials / media-type selection.
Validation hooks
OpenAPI specifies most validation declaratively (required fields, pattern,
min/max, enum, etc.). go-swagger turns those rules into code on the
generated model types via two interfaces:
Both live in the root runtime
package β see runtime.Validatable
and runtime.ContextValidatable
for the authoritative definitions. The strfmt.Registry argument carries
the active string-format registry (date-time, UUID, …) so format-aware
validation has access to it.
ContextValidatable is the context-aware version; it should be preferred
in new code because some validations (read-only / write-only flags,
async-driven cross-field checks) genuinely need request scope.
When the runtime calls them
Server-side, validation runs as part of Context.BindValidRequest,
which fires after security
and just after parameter binding:
flowchart TD
sec["Security · Context.Authorize<br/>Authenticator → Authorizer"]
bind["Binder<br/>Consumer decodes body into the parameter struct"]
val["Validator (per parameter)<br/>1. spec-driven validation (required, pattern, …)<br/>2. if Validatable: Validate(formats)<br/>3. if ContextValidatable: ContextValidate(ctx, formats)"]
err{{"errors.CompositeValidationError<br/>aggregates every parameter-level violation<br/>(does not stop on first failure)"}}
sec --> bind --> val
val -. on error .-> err
Two consequences worth being aware of:
Multiple errors per parameter set. A request with three invalid
fields produces a CompositeValidationError containing three entries,
not a single one.
Both layers run. Implementing Validatable does not turn off
spec-driven validation; the two layers compose. Use Validatable for
rules the spec cannot express (cross-field invariants, business rules).
Client-side, generated request models implement the same interfaces, and
the generated Validate method runs before the body is serialised; a
malformed payload fails locally instead of producing a server-side 422.
Custom validation in your own types
Most users never write these by hand; they fall out of swagger generate.
But for hand-rolled types you can add cross-field checks like this:
// DateRange illustrates a cross-field invariant on a hand-written type.
type DateRange struct {
	From strfmt.Date `json:"from"`
	To   strfmt.Date `json:"to"`
}

// Validate enforces that To is not before From. The strfmt.Registry
// argument is unused here because the rule does not involve any
// registered string format.
func (d DateRange) Validate(_ strfmt.Registry) error {
	if time.Time(d.To).Before(time.Time(d.From)) {
		return errors.New("DateRange.to must not be before DateRange.from")
	}
	return nil
}

// ContextValidate enforces that on_behalf_of is only set when the
// request context carries an authenticated user.
func (r MyRequest) ContextValidate(ctx context.Context, _ strfmt.Registry) error {
	if reqUser(ctx) == "" && r.OnBehalfOf != "" {
		return errors.New("on_behalf_of is only valid when authenticated")
	}
	return nil
}
Both methods take a strfmt.Registry, which is how the runtime carries
named formats (date-time, uuid, email, …) into the validator. You
rarely build one by hand: the server’s *Context and the client Runtime
each carry one and pass it down. To register a custom format
(x-go-type style), call strfmt.Default.Add(...) once at startup; the
default registry is what both sides use unless overridden.
Client
The client package provides
client.Runtime,
the configurable HTTP transport that go-swagger-generated clients use
under the hood. You can also drive it directly for untyped API calls.
ClientOperation, BuildHTTP and SubmitContext, and the recent pivot
to context-only request building.
Subsections of Client
Transport
client.Runtime
wraps a *http.Client plus the wire-format codecs needed to call an
OpenAPI-described API. This page covers the knobs that shape the
underlying HTTP behaviour; auth, tracing and request submission live
on their own pages.
New builds a runtime against http.DefaultTransport. NewWithClient
takes an explicit *http.Client; use it when you need a non-default
transport (custom TLS, a proxy, an instrumented round-tripper, etc.)
or want to share a client across runtimes.
schemes lists allowed URL schemes ("https", "http"); the runtime
picks one when building a request, preferring HTTPS.
context.Background() (legacy field; see requests)
Debug
enabled if SWAGGER_DEBUG or DEBUG env var is set
You can replace any of these after construction. Example: register a
custom codec for a vendor JSON content type that the client will
encounter on responses.
Option highlights (the full struct is in the godoc):
Group
Fields
Client cert (paths)
Certificate, Key
Client cert (loaded)
LoadedCertificate, LoadedKey
Server CAs
CA, LoadedCA, LoadedCAPool (combined with each other; otherwise the system pool is used)
Hostname / verify
ServerName, InsecureSkipVerify (ignored when ServerName is set), VerifyPeerCertificate, VerifyConnection
Resumption
SessionTicketsDisabled, ClientSessionCache
TLSClientAuth always returns a config with MinVersion = TLS 1.2.
Timeouts
Two layers of timeout apply:
Per-request timeout: set via the operation’s
Params.SetTimeout(d) (any generated parameter type implements this).
This becomes the deadline of the request context.Context derived
inside BuildHTTPContext (see requests).
HTTP client timeout: set on the *http.Client you pass to
NewWithClient. This is the standard Client.Timeout field; it
applies regardless of the per-request value.
There is also a package-level
DefaultTimeout = 30 * time.Second. It is not wired up
automatically; it exists for callers building their own *http.Client
that want to use the same default the runtime advertises.
// Force a specific proxy.
proxyURL, _ := url.Parse("http://proxy.internal:3128")
tr := &http.Transport{Proxy: http.ProxyURL(proxyURL)}
httpClient := &http.Client{Transport: tr}
rt := client.NewWithClient(host, base, schemes, httpClient)
Some servers never close the response body, which prevents Go from
reusing the underlying TCP connection. EnableConnectionReuse
installs a transport middleware that drains the unread body on
Close() so the connection can return to the pool:
This is not enabled by default because for some servers the
response stream never completes and draining would block forever.
Turn it on when you’ve confirmed the server you’re talking to does
the right thing.
Debug logging
Two ways to enable wire-level dumps of requests and responses (both
go through httputil.DumpRequest / DumpResponse):
Set the SWAGGER_DEBUG (or DEBUG) environment variable before the
process starts. client.New picks this up.
Call rt.SetDebug(true) at runtime.
rt.SetLogger(myLogger) swaps the destination away from the default
standard-library logger.
For most production debugging you’ll get more value out of the
OpenTelemetry tracing than from raw dumps.
Authentication
Client-side authentication is a pure encoding concern: take some
credentials, write the right header / query parameter on the outbound
request. It is decoupled from the server-side Authenticator /
Authorizer interfaces (core / interfaces):
those answer “is this request allowed?”, these answer “how do I
sign it?”.
See runtime.ClientAuthInfoWriter
for the authoritative definition. Anything with that signature can be
used as auth. The ClientRequest argument exposes SetHeaderParam,
SetQueryParam, SetBodyParam: i.e. the same surface generated
parameter types use to encode themselves.
Where to attach it
Two places, with predictable precedence:
// 1. Per operation: overrides the runtime default
op.AuthInfo = client.BearerToken(token)

// 2. Per runtime: used when the operation does not set its own
rt.DefaultAuthentication = client.BasicAuth("alice", "s3cret")
APIKeyAuth(name, in, value): RFC-undefined but ubiquitous
// As an HTTP header
rt.DefaultAuthentication = client.APIKeyAuth("X-Api-Key", "header", apiKey)

// Or as a query parameter
rt.DefaultAuthentication = client.APIKeyAuth("api_key", "query", apiKey)
Sets Authorization: Bearer <token>. For OAuth2 client flows that
need to acquire and refresh the token, build the writer around an
oauth2.TokenSource from golang.org/x/oauth2 and re-attach it on
every call (or use a custom writer that calls Token()).
Compose(auths…): combine multiple writers
For APIs that require more than one credential header on the same
request (say an API key plus a bearer token), chain them:
Nil writers in the list are skipped silently. The first non-nil
writer that returns an error short-circuits the chain.
PassThroughAuth: explicit “no auth”
A no-op writer. Use it when the operation requires some writer
(for instance because it’s defined as security: [[]] in the spec)
but no actual credential should be attached.
A common case: an HMAC-signed request that needs to compute the
signature over the body. Implement ClientAuthInfoWriter directly:
// HMACSignature returns a ClientAuthInfoWriter that signs the request
// body with the given HMAC-SHA256 key and attaches the signature plus
// key ID as headers.
func HMACSignature(keyID string, key []byte) runtime.ClientAuthInfoWriter {
	return runtime.ClientAuthInfoWriterFunc(func(r runtime.ClientRequest, _ strfmt.Registry) error {
		body := r.GetBody()
		mac := hmac.New(sha256.New, key)
		mac.Write(body)
		sig := hex.EncodeToString(mac.Sum(nil))
		if err := r.SetHeaderParam("X-Sig-Key", keyID); err != nil {
			return err
		}
		return r.SetHeaderParam("X-Sig", sig)
	})
}
The runtime calls AuthenticateRequest after the operation’s
parameters have been bound but before the request is sent, so
r.GetBody() returns the encoded body for buffered payloads. For
streaming bodies (multipart, raw streams) the runtime arranges a
body-copy closure so the signer sees the bytes that will go on the
wire; see BuildHTTPContext in
client/internal/request
for the gory details.
Tracing
client.Runtime ships first-class OpenTelemetry support. There are
no extra modules to import beyond the runtime itself
(it already depends on go.opentelemetry.io/otel).
Returns a runtime.ClientTransport that delegates to the underlying
runtime and creates a client span for every request. Use it as the
transport you hand to a generated client:
For untyped use you call traced.Submit(op) directly.
A span only appears when one is already active
If the operation’s context does not contain an active span, the
transport does not start a root span. This is intentional:
telemetry boundaries belong to the application, not to the transport
library. Wrap your call site in a span and the client span attaches
beneath it.
Options: OpenTelemetryOpt
Option
What it sets
Default
WithTracerProvider(provider)
The trace.TracerProvider to acquire a tracer from.
the global provider (otel.GetTracerProvider)
WithPropagators(ps)
The propagation.TextMapPropagator used to inject context into outbound headers.
the global propagator (otel.GetTextMapPropagator)
WithSpanOptions(optsβ¦)
Extra trace.SpanStartOptions applied to every new span (kind, attributes, etc.).
none
WithSpanNameFormatter(fn)
Function that derives the span name from the *runtime.ClientOperation.
op.ID if non-empty, otherwise "{method}_{pathPattern}"
Runtime.WithOpenTracing exists but is deprecated. It silently
returns an OpenTelemetry transport, ignoring opts that are not
OpenTelemetryOpt. The OpenTracing project is archived; new code
should call WithOpenTelemetry.
If you still need OpenTracing semantics (for example because your
collector is OpenTracing-only), import the compatibility add-on:
go get github.com/go-openapi/runtime/client-middleware/opentracing
The compat module lives in its own Go module so the rest of the
runtime no longer pulls the OpenTracing dependency.
Building & submitting requests
Runtime exposes a small set of entry points for turning a
runtime.ClientOperation into a sent request and a typed result. The
public surface has been pivoting from “use the cached context on the
operation/runtime” to “pass the context explicitly”. This page covers
both shapes and explains which to use when.
The descriptor: ClientOperation
Authoritative definitions live in the
runtime
package:
Generated clients build one of these per operation method and call
Submit (or, increasingly, SubmitContext). For untyped use you
populate the fields by hand.
Entry points
The runtime offers four methods, paired by purpose:
Purpose
Legacy (cached ctx)
Context-aware (preferred)
Send the request, return the typed result
Runtime.Submit(op)
Runtime.SubmitContext(ctx, op)
Build the *http.Request only
Runtime.CreateHttpRequest(op) (deprecated)
Runtime.CreateHTTPRequestContext(ctx, op)
Warning: CreateHttpRequest is deprecated. It does not return the
context’s cancel function, so any per-request timeout set via
Params.SetTimeout is silently leaked. Use CreateHTTPRequestContext
instead.
Submit vs SubmitContext
Submit consults its context in this order:
op.Context if non-nil
otherwise rt.Context
otherwise context.Background()
SubmitContext(ctx, op) ignores those cached values entirely and uses
ctx as the parent context. This is the only way to pass a
caller-controlled context that can be cancelled, deadlined or
trace-instrumented from the call site.
// legacy: cached context, hard to cancel from the call site
result, err := rt.Submit(op)

// preferred: explicit context
ctx, cancel := context.WithTimeout(parent, exampleTimeout)
defer cancel()
result, err = rt.SubmitContext(ctx, op)
The per-request timeout set via Params.SetTimeout(d) (i.e.
runtime.ClientRequestWriter.SetTimeout) is honoured by both
forms: it is applied when the request context is derived inside
BuildHTTPContext, on top of whatever deadline ctx already carries.
Build-only: CreateHTTPRequestContext
When you need the prepared *http.Request but want to drive
http.Client.Do yourself (for retries, custom logging, response-body
inspection), use:
req, cancel, err := rt.CreateHTTPRequestContext(ctx, op)
if err != nil {
	return nil, err
}
defer cancel() // MUST run after the response is fully read

resp, err := myClient.Do(req)
// ...
cancel releases the per-request timeout timer and any other
resources held by the derived context. Calling it before the
response body is fully drained will cancel the in-flight request;
defer it to the end of the read.
On error the returned cancel is a no-op, so deferring it
unconditionally is safe.
What happens during a SubmitContext call
flowchart TD
in(((SubmitContext ctx, op)))
prep["prepareRequest<br/>resolve scheme + media type<br/>pick AuthInfoWriter (op.AuthInfo or rt.DefaultAuthentication)"]
build["BuildHTTPContext<br/>WriteToRequest → ctx with timeout<br/>buffered or streaming body<br/>AuthenticateRequest"]
do["http.Client.Do"]
decode["resolveConsumer · ReadResponse<br/>decode into typed result"]
out(((result, err)))
cancel["cancel()<br/>(deferred)"]
in --> prep --> build --> do --> decode --> out
build -.-> cancel
BuildHTTPContext chooses one of two assembly paths:
buffered body: for URL-encoded forms, producer output, or no
body. The body is materialised in memory before AuthenticateRequest
runs, so writers like HMAC signers see the final bytes.
streaming body: for multipart uploads or stream payloads
(io.Reader body). The body flows through an io.Pipe. Auth
writers receive a body-copy closure so signers can still see the
bytes, at the cost of one extra read.
Multipart uploads honour context cancellation
A long-standing rough edge (the multipart upload goroutine ignoring
the request context) was fixed in feat(client): honor context cancellation in multipart upload goroutine. Cancelling the context
mid-upload now stops the writer goroutine cleanly instead of leaking
it for the lifetime of the connection.
Migration from the legacy form
If your codebase calls Submit and stashes contexts on op.Context
or rt.Context, the change is usually mechanical:
op.Context and rt.Context are still read by Submit for
compatibility with existing callers and generated code that has not
yet been regenerated; SubmitContext ignores both. New code (and
freshly regenerated clients) should pass the context explicitly.
For CreateHttpRequest callers the move is more important β the
deprecated form leaks the per-request timer when Params.SetTimeout
is non-zero. Switch to CreateHTTPRequestContext and remember to
defer the returned cancel.
Server
The middleware package wires an analyzed OpenAPI spec into a working HTTP
handler. Requests flow through a chain of stages, by default
Router → Security → ContentType/Accept → Binder → Validator → OperationExecutor → Responder,
composable via middleware.Builder. Generated typed APIs assemble an
equivalent chain explicitly per operation; either way the runtime does
not enforce a single fixed pipeline.
How an inbound HTTP request flows through middleware.Context: from
routing to security to binding/validation to operation execution
and response writing.
Doc-UI handlers, content negotiation and the header package have
moved to the standalone server-middleware module; this page lists
the old entry points and shows the migration.
Subsections of Server
Request pipeline
The middleware
package wires an analyzed OpenAPI spec into a working http.Handler.
Every request goes through the same conventional sequence of stages,
covered briefly on core / interfaces, and
expanded here with the actual call sites.
The full picture
flowchart TD
req(((HTTP request)))
router["Router · NewRouter / Context.RouteInfo<br/>match path/method against the analyzed spec<br/>404 / 405 if no route"]
sec["Security · Context.Authorize<br/>RouteAuthenticators.Authenticate<br/>then optional Authorizer<br/>401 / 403 on failure"]
bvr["BindValidRequest"]
neg["ContentType / Accept negotiation<br/>pick Consumer + target Producer<br/>400 / 415 / 406 on failure"]
bind["Binder<br/>path / query / header / body params<br/>(uses Consumer)"]
val["Validator<br/>spec rules + Validatable<br/>422 with CompositeValidationError on failure"]
op["OperationHandler.Handle<br/>your business logic"]
resp["Responder · Context.Respond<br/>(uses Producer)"]
out(((HTTP response)))
req --> router --> sec --> bvr
bvr --> neg --> bind --> val
val --> op --> resp --> out
The middle three stages (negotiation, binding, validation) all live
inside the single call Context.BindValidRequest. Splitting them out
in the diagram makes the failure modes (400, 415, 406, 422) easier to
trace.
The diagram shows the typical sequence: what the runtime’s
default untyped wiring does and what go-swagger’s generated typed
handlers do. The actual ordering and composition is an
implementation detail of the RoutableAPI
plugged into the middleware.Context; a custom one can compose the
per-route handler differently.
The RoutableAPI seam
The middleware package handles routing, negotiation, validation
and the high-level lifecycle helpers (RouteInfo, Authorize,
BindValidRequest, Respond). Everything that has to know about
your API (the per-operation handler, the registered codecs, the
auth schemes) sits behind a single interface:
the http.Handler that runs the per-operation pipeline for this route
ServeErrorFor
the error-rendering function for a given path (defaults to the API’s)
ConsumersFor
a mediaType → Consumer map for the given list (route’s consumes)
ProducersFor
a mediaType → Producer map for the given list (route’s produces)
AuthenticatorsFor
a scheme name → Authenticator map for the security schemes in scope
Authorizer
the optional Authorizer to gate the principal post-authentication
Formats
the strfmt.Registry used by validation
DefaultProduces / DefaultConsumes
the API-level defaults to fall back to when the route is unspecified
The router calls HandlerFor(method, path) once per matched route
and serves whatever it gets back. What that handler does is
entirely up to the implementation: the RoutableAPI decides how
the bind/validate/security/operation/respond steps are composed.
Constructors that take a custom RoutableAPI
// Default: untyped.API wrapped in a routableUntypedAPI.
ctxDefault := middleware.NewContext(spec, api, nil)

// Custom: anything that implements RoutableAPI.
ctxCustom := middleware.NewRoutableContext(spec, myAPI, nil)

// Same, with a pre-analyzed spec to skip re-analysis.
ctxAnalyzed := middleware.NewRoutableContextWithAnalyzedSpec(spec, analyzed, myAPI, nil)
Use NewRoutableContext when you have your own implementation
(typically the one go-swagger generates for typed APIs, but any
type satisfying the interface works). Reach for
NewRoutableContextWithAnalyzedSpec if you have already produced an
*analysis.Spec and want to avoid the second analysis pass.
Two implementations the runtime sees in practice
The runtime ships one RoutableAPI implementation:
routableUntypedAPI, internal to the middleware package. It wraps
untyped.API
and is what middleware.Serve / ServeWithBuilder builds for you.
go-swagger generates a second implementation per spec: the
*operations.MyAPI type implements every method on RoutableAPI
directly, with HandlerFor returning the per-operation ServeHTTP
shown below.
The next section walks through both.
Two assembly paths
The two RoutableAPI implementations introduced above produce
equivalent pipelines, but differ in where the per-route handler is
assembled: the untyped one builds it in the runtime via a closure;
the typed one is generated source you can read directly.
Internally middleware.newRoutableUntypedAPI builds one
http.Handler per route. The bind/validate/handle/respond logic
lives in a single closure; if the route declares any security
requirement, that closure is wrapped with newSecureAPI so security
runs first:
Same primitives, same order. Neither shape is enforced by the
runtime: a route is just an http.Handler, and you can wrap or
replace it. middleware.Builder exists precisely to compose your
own chain on top.
Composing extra middleware: Builder
type Builder func(http.Handler) http.Handler
Builder is the standard http.Handler decorator type, aliased so
the API reads cleanly. The runtime exposes several entry points that
take one:
Entry point
Purpose
middleware.Serve(spec, api)
Untyped, no extra middleware (uses PassthroughBuilder).
middleware.ServeWithBuilder(spec, api, builder)
Untyped, decorate the routes handler with builder.
Context.APIHandler(builder, optsβ¦)
Mounts the routes plus the default Swagger UI / spec serve middleware.
Context.APIHandlerWithUI(builder, ui, optsβ¦)
Same, but pick the UI flavour (docui.SwaggerUI / RapiDoc / Redoc).
Context.RoutesHandler(builder)
Just the routes; no UI middleware. Useful when you mount under your own mux.
A typical pattern with the justinas/alice
middleware library: log, rate-limit, then hand off to the runtime:
Each stage stashes its result in the request context so downstream
middleware can read it without re-doing the work:
Helper
Returns
middleware.MatchedRouteFrom(r) *MatchedRoute
the route matched by the router
middleware.SecurityPrincipalFrom(r) any
the principal returned by Authorize
middleware.SecurityScopesFrom(r) []string
the union of scopes for the matched scheme
Use these inside extra middleware mounted via Builder.
Parameter binding & validation
The two stages combined as
Context.BindValidRequest
turn the incoming *http.Request into a populated parameter struct
and surface every spec-level violation in a single response.
What gets bound, in what order
BindValidRequest runs four sub-steps. Any non-recoverable error
short-circuits before the binder runs; otherwise binder-level errors
are aggregated alongside negotiation errors:
Content-Type validation: runtime.HasBody(r) early-outs for
bodyless requests; otherwise runtime.ContentType(r.Header) parses
the header (a malformed value is a 400) and validateContentType
matches it against the operation’s consumes (no match → 415; a
match picks the registered Consumer; a missing Consumer → 500).
Response format selection: negotiate.ContentType(r, route.Produces, …)
picks the offer that best satisfies Accept; an empty result → 406
(errors.InvalidResponseFormat).
Parameter binding: for each declared parameter, the binder
reads the right place (path / query / header / formData / body),
converts the string(s) to the target Go type and applies any
default declared in the spec.
Per-parameter validation: the spec’s declarative rules
(required, pattern, minLength, enum, format, …) plus any
Validatable / ContextValidatable your model implements.
All errors collected during binding and validation are returned as
one errors.CompositeValidationError. The validator does not
stop on the first failure: a request with three problems produces three
entries, so callers learn about everything in one round-trip.
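The collect-everything behaviour can be mimicked with the stdlib's errors.Join; this is a sketch of the idea, not the runtime's CompositeValidationError type (validateParams and its rules are made up):

```go
package main

import (
	"errors"
	"fmt"
)

// validateParams runs every check and aggregates all failures instead of
// stopping at the first one.
func validateParams(name string, age int) error {
	var errs []error
	if name == "" {
		errs = append(errs, errors.New("name is required"))
	}
	if age < 0 {
		errs = append(errs, errors.New("age must be >= 0"))
	}
	if age > 150 {
		errs = append(errs, errors.New("age must be <= 150"))
	}
	return errors.Join(errs...) // nil when no check failed
}

func main() {
	// Two violations surface together, in one round-trip.
	fmt.Println(validateParams("", -3))
}
```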
Where each parameter in: reads from
in:
Source
Notes
path
the matched route’s RouteParams
Names come from the {placeholder} segments. Required by definition (no default).
query
r.URL.Query()
Multi-valued: see collectionFormat (csv, ssv, tsv, pipes, multi).
header
r.Header
Multi-valued via the same collectionFormats; multi repeats the header name.
formData
r.PostForm for application/x-www-form-urlencoded or r.MultipartForm for multipart/form-data
File parts come back as runtime.File.
body
r.Body, decoded via the chosen Consumer
Validation runs against the resulting Go value, including any Validatable hook.
The binder is reflection-based for the untyped path; generated code
uses the same primitives by calling
Context.BindValidRequest(r, route, &Params) where &Params is the
generated parameter struct.
See core / validation for the full picture
of the hooks; BindValidRequest is the call site.
Where this fits in the pipeline
Conventionally after security and before the operation
handler; see pipeline for the diagram and the
rationale (failed auth short-circuits with 401 before paying the cost
of binding/validation).
Disabling spec-driven parameter validation
If you need to bypass the parameters block entirely (typically for
test harnesses or proxy layers that re-validate downstream),
Context.SetIgnoreParameters(true) skips spec-driven parameter
validation while leaving the rest of the pipeline intact.
Validatable / ContextValidatable hooks on the model still run.
Reading the bound parameters from extra middleware
Bound parameters are cached in the request context. From middleware
mounted via Builder you can re-fetch them without re-binding:
// inside a middleware.Builder
match := middleware.MatchedRouteFrom(r)
// (no public accessor for the bound struct itself today;
// re-call BindValidRequest if you need it; the result is cached
// so a second call is cheap)
MatchedRouteFrom plus SecurityPrincipalFrom and
SecurityScopesFrom cover the most common middleware needs (audit
logging, per-tenant rate limiting, …).
Security schemes
security
ships ready-made runtime.Authenticator implementations for the four
auth flavours OpenAPI 2.0 understands. Each comes in two shapes: a
plain variant and a *Ctx variant that threads context.Context
through to your authenticate function.
The user-supplied callback
You don’t implement Authenticator directly; you implement a
verification callback and pass it to one of the constructors below.
The runtime handles the wire-format details (header parsing, scheme
selection, scope handling, etc.).
A successful callback returns the authenticated principal, typed
however your application likes. The principal is then handed to any
configured Authorizer and stashed in the request context (read with
middleware.SecurityPrincipalFrom).
Why *Ctx?
Most real authenticators want request scope: a request-scoped
database handle, a tracing span, or a deadline that should propagate
into the auth lookup. The *Ctx constructors give your callback the
request context and let it return a (possibly enriched) context that
the runtime then attaches to the request.
The non-*Ctx variants exist for compatibility with code from before
context propagation was the norm. New code should default to *Ctx.
BasicAuth (RFC 7617)
// principal type is up to you
type Principal struct {
	ID    string
	Email string
}

authn := security.BasicAuth(func(user, _ string) (any, error) {
	if user == "" {
		return nil, errors.Unauthenticated("basic")
	}
	return Principal{ID: user, Email: user + "@example.com"}, nil
})
BasicAuth reads r.BasicAuth() and calls your callback with the
decoded credentials. Use BasicAuthRealm("my-realm", fn) to set the
challenge realm advertised in WWW-Authenticate on failure (default:
"Basic Realm").
When the request has no Authorization header, the authenticator
returns (false, nil, nil), meaning “scheme does not apply”, so the next
configured scheme is tried. A non-nil error from your callback is
treated as a 401.
security.FailedBasicAuth(r) / FailedBasicAuthCtx(ctx) returns the
realm name when basic auth has been attempted and failed. Useful from
custom error handlers that want to render a WWW-Authenticate
challenge.
in must be "header" or "query" β anything else panics at
construction time (it is a programmer error). The callback receives
the raw token; an empty value short-circuits with
(false, nil, nil) so other schemes can apply.
The access_token form field is also accepted when Content-Type is
application/x-www-form-urlencoded or multipart/form-data.
That covers RFC 6750 §2.
requiredScopes is whatever the operation declared in its
security: block. Combine multiple security entries (per the spec)
and you’ll see the union or intersection per call;
RouteAuthenticator.AllScopes() and CommonScopes() expose those if
you need to inspect them yourself.
The “scheme name” you pass ("oauth2" here) is recoverable from the
request via security.OAuth2SchemeName(r) /
security.OAuth2SchemeNameCtx(ctx). That’s the hook point for code
that needs to know which OAuth2 entry was applied (handy when a
spec declares multiple OAuth2 flows).
Authorizer
Authentication says who; authorization says may they do this?
Authorizer runs after a principal has been resolved.
Anything more interesting (RBAC, ABAC, OPA / casbin / your own…) you
write yourself. A non-nil return blocks the request:
a return value implementing errors.Error is propagated as-is;
any other error is wrapped as errors.New(403, err.Error()).
The single Authorize call on Context (core / interfaces)
runs Authenticator and Authorizer in sequence; Authorizer only
runs if the authenticator returned a principal.
Composing schemes: RouteAuthenticators
A spec can declare multiple security requirements per operation. The
runtime turns each one into a RouteAuthenticator and groups them
into RouteAuthenticators. RouteAuthenticators.Authenticate walks
the list and:
returns the first one that returned (true, principal, nil);
collects errors from any that applied but failed (last one wins for
the response status);
returns AllowsAnonymous() == true if no scheme was required;
in that case the request proceeds without a principal.
You don’t construct RouteAuthenticators directly β the runtime
builds them from your registered Authenticators (typed APIs do this
in generated code; untyped APIs via untyped.API.AddAuth and
related). The grouping and short-circuit semantics are worth knowing
about when you wonder why “scheme A is rejecting and scheme B never
runs”: that’s by design, because the first applicable scheme decides.
Reading the principal back
Inside your operation handler, the typed signature gives you the
principal directly. From extra middleware mounted via Builder, read
it back with middleware.SecurityPrincipalFrom(r) and
middleware.SecurityScopesFrom(r); the scopes are the AllScopes() of
the matching RouteAuthenticator, which is useful for audit logging that
needs to record which token (or token shape) authorised the request.
Deprecated shims
In v0.30 the server-side helpers that don’t actually need any OpenAPI
machinery were extracted into the
server-middleware module. The old entry points
in middleware still compile (and forward to the new ones) so
existing imports keep building, but they are tagged deprecated and
will be removed in a future major release.
This page is a cheat-sheet for the migration. New code should target
the right-hand column directly.
The shim package
(middleware/header)
re-exports everything via type aliases and forwarding functions, so
existing code is binary-compatible. Update imports when convenient.
Doc UI handlers: SwaggerUI, RapiDoc, Redoc
The middleware shims preserve the option-struct calling convention.
The new docui package uses functional options and accepts
(next http.Handler, opts ...Option).
Methods on *Opts types that were only used to manipulate option
structs (e.g. SwaggerUIOpts.EnsureDefaults) have been removed;
they were not load-bearing.
See standalone / doc UIs for the full
options reference, the middleware-factory shape (UseSwaggerUI,
etc.) and a complete net/http example.
Why the split?
Two reasons:
Dependency hygiene. The doc UI and negotiation helpers don’t
need any OpenAPI machinery. Pulling them through middleware made
every consumer transitively depend on go-openapi/spec,
go-openapi/loads and go-openapi/validate. The standalone module
has zero such transitive deps, which is handy for a service that only
wants to serve a static spec and a Swagger UI from a vanilla net/http
mux.
API hygiene. The new functional options are easier to extend
than option-struct fields, and let us keep adding knobs without
growing struct surfaces. The deprecated shims paper over the older
shape so old code keeps building.
The plan is to remove the shims in a future major release. Migrating
when convenient is enough: there’s no urgency, but there’s no reason
to keep new code on the old paths either.
Standalone middleware
github.com/go-openapi/runtime/server-middleware is a separate Go module
that ships the negotiation, media-type and doc-UI primitives without
inheriting the OpenAPI spec / loads / validate dependency tree. Drop it
into any vanilla net/http application.
Install
go get github.com/go-openapi/runtime/server-middleware
Server-side selection from Accept (ContentType) and Accept-Encoding (ContentEncoding). Honours MIME parameters by default; opt out with WithIgnoreParameters.
Low-level RFC 7231 header parsing primitives reused by negotiate. Use it directly if you need raw Accept/Accept-Encoding parsing without the typed media-type layer.
Stdlib-only handlers that serve Swagger UI, RapiDoc or Redoc, plus the spec document itself. Mountable on any net/http mux.
The module has zero transitive dependencies on go-openapi/spec,
go-openapi/loads, go-openapi/validate, or even on the rest of
go-openapi/runtime. Standard library only.
Stdlib-only Swagger UI, RapiDoc, Redoc and spec-serving handlers
from the docui package.
Subsections of Standalone middleware
Media types
server-middleware/mediatype
provides the parsed value type, the matching rule and the helper used by
both server-side Content-Type validation and Accept-header
negotiation.
The MediaType value
type MediaType struct {
	Type    string            // lowercased on parse
	Subtype string            // lowercased on parse
	Params  map[string]string // keys lowercased; values verbatim
	Q       float64           // extracted from "q="; not stored in Params
}
See
MediaType
on pkg.go.dev for the authoritative definition.
Parameter values are preserved verbatim, but comparisons are
case-insensitive (charset=UTF-8 matches charset=utf-8). Wildcards
*/* and type/* are accepted on either side; */subtype is invalid
and Parse rejects it.
Specificity
MediaType.Specificity() returns one of the constants below, which is
useful when writing custom selection logic:
Constant
Example
SpecificityAny
*/*
SpecificityType
text/*
SpecificityExact
text/plain
SpecificityExactWithParams
text/plain;charset=utf-8
The asymmetric matching rule
MediaType.Matches(other) is asymmetric. The receiver is the bound
(an allowed entry on the server side, or a candidate offer when matching
against an Accept entry); the argument is the constraint (the actual
request value, or the Accept entry being satisfied).
The rule:
Bare type/subtype must agree (with wildcards on either side).
If the receiver carries no parameters, any constraint is accepted
regardless of its parameters.
Otherwise every (key, value) pair on the constraint must be present
on the receiver, with case-insensitive value comparison. The receiver
may carry additional parameters that the constraint does not list.
q-values are not considered by Matches β they belong to the negotiator
(see Set.BestMatch).
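The three-step rule can be restated in a few lines of plain Go; this is a sketch of the rule as described, not the mediatype package itself:

```go
package main

import (
	"fmt"
	"strings"
)

// mediaType is a cut-down stand-in for the package's MediaType.
type mediaType struct {
	typ, sub string
	params   map[string]string
}

// matches implements the asymmetric rule: the receiver is the bound,
// the argument is the constraint.
func (m mediaType) matches(constraint mediaType) bool {
	// 1. bare type/subtype must agree, with wildcards on either side
	if m.typ != constraint.typ && m.typ != "*" && constraint.typ != "*" {
		return false
	}
	if m.sub != constraint.sub && m.sub != "*" && constraint.sub != "*" {
		return false
	}
	// 2. a receiver without parameters accepts any constraint
	if len(m.params) == 0 {
		return true
	}
	// 3. every constraint parameter must be present on the receiver,
	//    compared case-insensitively; extra receiver params are fine
	for k, v := range constraint.params {
		got, ok := m.params[k]
		if !ok || !strings.EqualFold(got, v) {
			return false
		}
	}
	return true
}

func main() {
	plain := mediaType{typ: "text", sub: "plain"}
	utf8 := mediaType{typ: "text", sub: "plain", params: map[string]string{"charset": "utf-8"}}
	ascii := mediaType{typ: "text", sub: "plain", params: map[string]string{"charset": "ascii"}}

	fmt.Println(plain.matches(utf8)) // true: receiver carries no params
	fmt.Println(utf8.matches(ascii)) // false: charset values disagree
	fmt.Println(utf8.matches(plain)) // true: constraint lists no params
}
```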
The same direction is used at both call sites in the runtime.
See
MatchFirst
on pkg.go.dev for the authoritative signature.
Used when you need a yes/no answer plus the matched bound. Short-circuits
on the first allowed entry that accepts actual, so the returned
MediaType is not necessarily the most specific match; use
Set.BestMatch if you need ranked selection.
Return
Meaning
(matched, true, nil)
first allowed entry that accepts actual
(zero, false, nil)
actual is well-formed but no allowed entry accepts it (HTTP 415 territory)
(zero, false, err)
actual failed to parse; err wraps ErrMalformed (HTTP 400 territory β errors.Is it)
Allowed entries that themselves fail to parse are skipped silently
(they cannot match a well-formed actual).
Returns the offer most acceptable to the request’s Accept header. If
two offers match with equal weight, the more specific offer wins
(text/* trumps */*; type/subtype trumps type/*); after that the
earlier entry in offers wins. If no offer is acceptable,
defaultOffer is returned.
// Pet is the demo resource served by the negotiation handler.
type Pet struct {
	XMLName xml.Name `json:"-" xml:"pet"`
	Name    string   `json:"name" xml:"name"`
}

func pickContentType() {
	pet := Pet{Name: "Lassie"}
	offers := []string{mediaTypeJSON, mediaTypeXML}
	http.HandleFunc("/pet", func(w http.ResponseWriter, r *http.Request) {
		chosen := negotiate.ContentType(r, offers, mediaTypeJSON)
		w.Header().Set("Content-Type", chosen)
		switch chosen {
		case mediaTypeXML:
			_ = xml.NewEncoder(w).Encode(pet)
		default:
			_ = json.NewEncoder(w).Encode(pet)
		}
	})
	srv := &http.Server{
		Addr:              ":8080",
		ReadHeaderTimeout: readHeaderTimeout,
	}
	log.Fatal(srv.ListenAndServe())
}
When Accept is absent entirely, the first offer is returned
unchanged.
Behaviour change in v0.30: MIME parameters honoured
Pre-v0.30 the negotiator stripped MIME-type parameters before matching:
an Accept of text/plain;charset=utf-8 matched an offer of
text/plain;charset=ascii (the charset was thrown away). That was
expedient but wrong; v0.30 honours parameters by default:
Accept: text/plain;charset=utf-8 matches an offer of bare
text/plain (the offer carries no parameters, so under the asymmetric
rule it accepts any constraint).
Accept: text/plain;charset=utf-8 does not match an offer of
text/plain;charset=ascii (charset values disagree).
If your producers and Accept clients use mismatched charset or
version params that you treat as informational, opt out per call with
WithIgnoreParameters.
Returns the best-matching offered encoding for the request’s
Accept-Encoding header. Two offers tied on q go to the earlier one;
no acceptable offer returns "" (so the caller can choose to send no
encoding rather than substituting identity).
Encoding tokens have no parameters, so this function is unaffected by
the v0.30 parameter-honouring change.
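Selecting an encoding comes down to parsing q-weights and keeping the best offered token; a stdlib sketch of that selection, not the package's implementation (bestEncoding is illustrative and only handles the simple ";q=" syntax):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// bestEncoding returns the offered encoding with the highest q-weight in
// the Accept-Encoding header; ties go to the earlier offer, and "" means
// no offer was acceptable.
func bestEncoding(acceptEncoding string, offers []string) string {
	weights := map[string]float64{}
	for _, part := range strings.Split(acceptEncoding, ",") {
		token, q := strings.TrimSpace(part), 1.0
		if name, qs, ok := strings.Cut(token, ";q="); ok {
			token = strings.TrimSpace(name)
			if v, err := strconv.ParseFloat(strings.TrimSpace(qs), 64); err == nil {
				q = v
			}
		}
		weights[strings.ToLower(token)] = q
	}
	best, bestQ := "", 0.0
	for _, offer := range offers {
		// strictly greater: an equal-weight later offer does not displace
		// an earlier one
		if q, ok := weights[strings.ToLower(offer)]; ok && q > bestQ {
			best, bestQ = offer, q
		}
	}
	return best
}

func main() {
	fmt.Println(bestEncoding("gzip;q=0.8, br", []string{"gzip", "br"})) // br
	fmt.Println(bestEncoding("identity", []string{"gzip", "br"}))       // "" (empty)
}
```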
If you only need raw header parsing without the typed MediaType
layer (for example when implementing a different selection rule), drop
down to negotiate/header.
The full server pipeline calls ContentType (and the matching
Content-Type validation through mediatype.MatchFirst) inside
Context.BindValidRequest; see
core / interfaces.
The standalone module exposes the same primitives so you can drive
negotiation from any net/http handler, with or without an OpenAPI
spec in the picture.
Doc UIs & spec serving
server-middleware/docui
ships ready-to-mount http.Handlers that serve the three popular
OpenAPI documentation UIs and the spec document itself. Standard
library only: no template engine, no asset bundler, no transitive
OpenAPI dependency.
Two equivalent patterns
Each UI is exposed in two shapes; pick whichever fits your wiring style.
Direct handler wrap: SwaggerUI(next, opts...)
For when you already have an http.Handler you want to decorate.
Use* returns a func(http.Handler) http.Handler, the standard
Go-style middleware adapter.
Available UIs
UI
Direct
Middleware factory
Swagger UI
docui.SwaggerUI
docui.UseSwaggerUI
Swagger UI OAuth2 cb
docui.SwaggerUIOAuth2Callback
docui.UseSwaggerUIOAuth2Callback
RapiDoc
docui.RapiDoc
docui.UseRapiDoc
Redoc
docui.Redoc
docui.UseRedoc
The OAuth2 callback handler is the small static page Swagger UI redirects
to after an OAuth2 authorization β mount it at the path you configure in
your OAuth provider.
Common options
Option
Purpose
Default
WithUIBasePath(string)
Base path the UI is served from. Slash is prepended if missing.
/
WithUIPath(string)
Sub-path under the base path (final URL: {base}/{path}).
WithUITemplate
Replace the bundled HTML template entirely (~string or ~[]byte).
bundled minimal template
WithSpecURL(string)
URL the UI fetches the spec from.
/swagger.json
WithSwaggerUIOptions(opts)
Pass-through for Swagger-UI-specific knobs (OAuth2 client id, layout, …).
zero value
WithUITemplate panics at request time if the supplied template fails
to parse or execute: fail loud, not silent. Reference docs for the
templates each UI accepts:
http://localhost:8080/openapi.yaml serves the spec document
http://localhost:8080/v1/ping serves the application
Examples
Each page below is a self-contained snippet using the untyped API
setup so the runtime primitives are visible. Typed (go-swagger
generated) servers call exactly the same primitives; the wiring
file is just generated for you. Where a topic has more material than
fits on a page (like authentication), it gets its own subsection.
For a fully runnable copy of any of these patterns, the
go-swagger/examples
sibling repo has end-to-end programs you can clone and run.
Adding new wire formats, registering vendor MIME types, streaming
bodies, per-payload Content-Type overrides, and using the
standalone negotiator from a vanilla net/http handler.
Composing third-party HTTP middleware around the runtime: recipes
that wrap or extend the http.Handler returned by
middleware.Serve.
Subsections of Examples
Authentication & authorization
OpenAPI 2.0 defines four auth flavours; the runtime covers all four
plus the orthogonal Authorizer step. The pages below each walk one
concrete scenario: the first three cover the simplest cases,
the rest progressively layer on scopes, composition and custom
business rules.
Pluggable Authorizer that gates the principal for the matched
operation: a worked role-based access control example, orthogonal
to whichever Authenticator was used.
securityDefinitions:
  key:
    type: apiKey
    in: header
    name: X-Token
# default: every operation requires the key
security:
  - key: []
Wiring
doc, _ := loads.Spec("swagger.yml")
api := untyped.NewAPI(doc).WithJSONDefaults()

// 1. Authenticator: token -> principal
api.RegisterAuth("key", security.APIKeyAuth(
	"X-Token", "header",
	func(token string) (any, error) {
		// Use subtle.ConstantTimeCompare to avoid leaking the
		// expected token byte-by-byte via response timing.
		if subtle.ConstantTimeCompare([]byte(token), []byte("abcdefuvwxyz")) == 1 {
			return "alice", nil
		}
		return nil, errors.New(http.StatusUnauthorized, "invalid api key")
	},
))

// 2. Authorizer: every authenticated principal allowed.
// (Skip this line if you have no business-rule gating.)
api.RegisterAuthorizer(security.Authorized())

// 3. Operation handlers (one per spec operation)
api.RegisterOperation("get", "/customers/{id}", runtime.OperationHandlerFunc(
	func(_ any) (any, error) {
		// params is the bound parameter struct;
		// principal is on r.Context() via middleware.SecurityPrincipalFrom
		return map[string]string{"id": "42"}, nil
	},
))

handler := middleware.Serve(doc, api)
log.Fatal(http.ListenAndServe(":35307", handler))
Query param instead of header: change in: to query in the
spec and the second arg of APIKeyAuth to "query". The token then
comes from ?api_key=….
Context-aware lookup (DB call honouring request cancellation):
use security.APIKeyAuthCtx
instead; same idea, but the callback gets the request context.Context.
Per-operation override: a route can opt out by setting
security: []; opt into a different scheme by replacing the list.
HTTP Basic
Same shape as the API key example, but with
username:password decoded by the runtime and a realm advertised on
the failure response.
BasicAuthRealmCtx is the context-aware variant of BasicAuthRealm;
the non-*Ctx form
security.BasicAuthRealm("petstore", fn)
takes a func(user, pass string) (any, error) instead.
Replying with WWW-Authenticate on 401
The runtime stashes the realm name in the request context when basic
auth has been attempted and failed. Recover it from a custom error
handler to render a proper challenge.
Basic + Bearer is a common “either credential works” requirement.
That’s the AND/OR composition case; see
composed for how to declare and wire it.
Bearer + JWT
The runtime extracts the token from Authorization: Bearer … (or
the access_token query / form field; see
server / security).
Your callback verifies it and decides whether the token’s claimed
scopes satisfy the operation’s required scopes.
Spec
OpenAPI 2.0 only declares scopes under type: oauth2. Use that
declaration even if you’re not running an OAuth2 dance; the runtime
treats it as “extract a Bearer token and pass me the required scopes”.
securityDefinitions:
  hasRole:
    type: oauth2
    flow: accessCode
    authorizationUrl: 'https://issuer.example.com/auth'  # documentary only
    tokenUrl: 'https://issuer.example.com/token'         # documentary only
    scopes:
      customer: regular customer
      admin: administrative actions
security:
  - hasRole: [customer]  # default: any operation needs at least "customer"
Wiring
JWT parsing is shown here via a parseJWT stub so the doc-examples module
does not lock you into a specific library; swap it for
jwt.ParseWithClaims
(or an introspection call) in your own code.
The first argument to BearerAuth is the scheme name: match the
key under securityDefinitions. It is recoverable from the request
via security.OAuth2SchemeName(r) when an operation declares more
than one OAuth2 entry.
Token sources, in order
The runtime tries, in this order:
Authorization: Bearer <token>
?access_token=… query parameter
access_token form field if Content-Type is
application/x-www-form-urlencoded or multipart/form-data
Remote verification (introspection): replace the local
jwt.ParseWithClaims with an HTTP call to your auth server’s
/introspect endpoint. Use BearerAuthCtx so the introspection
call honours the request context.
OIDC / Google bearer tokens: the
oauth2-access-code example shows the
full handshake plus the token-validation callback.
Multiple bearer schemes: not supported; the runtime extracts
one token and passes it to whichever bearer authenticator applies
for the route. The
composed example walks the standard workaround.
Composed schemes (AND / OR)
Mirrors the
go-swagger/examples/composed-auth
example, condensed. That sibling repo has the full runnable code,
the JWT helpers, the keypair-generation script and a curl exerciser.
The composition rule
Inside one security list entry, all schemes must succeed (AND).
Between entries, any successful entry wins (OR). The runtime
stops at the first entry that authenticates.
security:
  # OR
  - isRegistered: []       # entry 1: AND of one scheme
    hasRole: [customer]
  - isReseller: []         # entry 2: AND of two schemes
    hasRole: [inventoryManager]
  - isResellerQuery: []    # entry 3: alternative carrier
    hasRole: [inventoryManager]
That reads as: (registered AND customer-scoped) OR (reseller-by-header AND inventory-manager-scoped) OR (reseller-by-query AND inventory-manager-scoped).
The callbacks (authenticateBasic, verifyResellerToken,
verifyBearerWithScopes) each return the same principal type; the
runtime hands the principal of the winning entry to the operation
handler, regardless of which schemes participated.
One principal, many origins
A common consequence of OR composition is that you can’t tell from
the operation handler alone which path authorized the call. Two
patterns:
Annotate inside the callback: stash the auth flavour on the
principal struct (principal.Source = "basic" etc.) before
returning it.
Read it back from the request context: for OAuth2 entries, use
security.OAuth2SchemeName(r)
to recover the matched scheme name. For Basic, FailedBasicAuth
reports the realm only on failure.
Caveats (from the example’s own README)
At most one Authorization header. Mixing Authorization: Basic
and Authorization: Bearer is not supported by HTTP itself; the
Bearer carrier should fall back to the access_token query/form
field when Basic is also in play.
At most one scoped scheme per route. If a spec declares two
oauth2 entries, both will see the same Bearer token; the runtime
has no way to tell them apart at the wire level.
OpenAPI 2.0 only allows scopes on oauth2. That’s why the
example uses type: oauth2 for what is really plain JWT-with-claims.
All schemes share one principal type. Aggregate intermediary
state inside the principal struct itself.
Run it end-to-end
The full runnable program, including the JWT keypair generator, a
curl exerciser script and the JWT-claims-based authorizers, lives
at
go-swagger/examples/composed-auth.
The runtime side of that example is exactly what you see above; the
rest is application glue (DB lookups, JWT verification helpers, the
RSA keypair) that you’d write the same way against any HTTP framework.
OAuth2 access-code (Google)
Mirrors
go-swagger/examples/oauth2.
Most of this example is OAuth2-flow plumbing (redirect, callback,
token exchange) that lives in your code, not in the runtime; the
runtime only enters the picture for the protected endpoints, where
the bearer token is validated.
The bearer-jwt example is the right starting point
if all you need is validating an inbound bearer; come here when you
also want to issue the redirect dance.
Spec
securityDefinitions:
  OauthSecurity:
    type: oauth2
    flow: accessCode
    authorizationUrl: 'https://accounts.google.com/o/oauth2/v2/auth'
    tokenUrl: 'https://www.googleapis.com/oauth2/v4/token'
    scopes:
      user: regular user
      admin: administrative
security:
  - OauthSecurity: [user]
paths:
  /login:
    get:
      security: []  # public: kicks off the redirect
  /auth/callback:
    get:
      security: []  # public: receives the code from Google
  /customers:
    get:
      # uses the default `OauthSecurity: [user]`
      ...
validateAtUserInfoURL is a plain HTTP call to Google’s userinfo
endpoint with the bearer token; see the
full implementation
in the sibling repo.
State parameter, briefly: the example uses a global string for
brevity. In production this MUST be a per-session unguessable
value, stored alongside the user’s session and validated on the
callback; otherwise the redirect is open to CSRF.
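A sketch of minting and checking such a value with crypto/rand (session storage omitted; newState and checkState are illustrative names):

```go
package main

import (
	"crypto/rand"
	"crypto/subtle"
	"encoding/base64"
	"fmt"
)

// newState returns an unguessable OAuth2 state value to stash in the
// user's session before issuing the redirect.
func newState() (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(buf), nil
}

// checkState compares the state echoed on the callback against the one
// stored in the session, in constant time.
func checkState(stored, echoed string) bool {
	return subtle.ConstantTimeCompare([]byte(stored), []byte(echoed)) == 1
}

func main() {
	state, _ := newState()
	fmt.Println(checkState(state, state))      // true
	fmt.Println(checkState(state, "tampered")) // false
}
```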
Exercise
# 1. Visit the login URL in a browser
open http://127.0.0.1:12345/api/login
# → redirected to Google sign-in
# → after consent, redirected back to /auth/callback
# → the response includes the access_token

# 2. Call a protected endpoint with that token
curl -i -H "Authorization: Bearer $TOKEN" http://127.0.0.1:12345/api/customers

# Wrong token → 401
curl -i -H "Authorization: Bearer garbage" http://127.0.0.1:12345/api/customers
# {"code":401,"message":"unauthenticated for invalid credentials"}
Run the full example
The complete runnable program, including the userinfo validator,
the redirect/callback handlers wired through middleware, and the
client-secret bootstrap, lives at
go-swagger/examples/oauth2.
Clone it, drop your Google client ID/secret into
restapi/implementation.go, and run.
Custom Authorizer (RBAC)
Authentication answers who. Authorization answers may they do
this?, a separate decision the runtime asks of your Authorizer
after the principal has been resolved
(core / interfaces).
The runtime ships one trivial authorizer (security.Authorized(),
which always allows). Anything more interesting you write yourself.
A custom authorizer is also a natural place for audit logging,
per-tenant rate limiting, or surfacing a
“why was this denied?” message in error responses.
Variations
OPA / Casbin / your own engine: same shape; call out to the
policy evaluator from inside the AuthorizerFunc.
Skip authorization for some routes: combine the ACL with a
short-circuit on the matched route (route.Operation.ID,
route.PathPattern, etc.) before consulting the engine.
Per-method body inspection: Authorizer runs after
authentication but before parameter binding, so the request body
has not been consumed at this point. For body-based decisions
(“the document the user is editing must belong to them”), do the
check inside the operation handler, where the bound params are
available.
Client-side credentials
Server-side authentication is the Authenticator story. Client-side
authentication is the ClientAuthInfoWriter story: pure encoding.
Take credentials and set the right header / query parameter on the
outgoing request. See client / authentication
for the full reference; this page is a recipe collection.
The runtime calls AuthenticateRequest after the operation’s
parameters have been bound, so for buffered bodies r.GetBody()
returns the encoded payload. For streaming bodies (multipart, raw
streams) the runtime arranges a body-copy closure so the signer sees
the bytes that go on the wire; see
client / requests
for the exact assembly path.
Explicit “no auth”
For operations whose spec lists a security requirement that should be
satisfied by sending nothing (rare but legal):
A nil writer would have the same effect; PassThroughAuth
is the explicit version, useful when you want the intent to read
clearly in review.
Content types & negotiation
The runtime ships codecs for JSON, XML, CSV, plain text, byte streams
and YAML (core / content-types). Anything
beyond that (a different format, vendor MIME types, large streaming
bodies, per-payload Content-Type overrides) is a few lines of glue.
Use server-middleware/negotiate from a vanilla net/http handler:
no OpenAPI spec, no go-openapi/runtime dependency.
Subsections of Content types & negotiation
Custom codec (MessagePack)
Consumer and Producer are functions; adding a codec for a new
wire format is just writing two of them and registering them under
the right MIME type. This page uses
github.com/vmihailenco/msgpack/v5
as the worked example because it’s the most widely-used Go MessagePack
implementation; any third-party codec works the same way.
Pick a Content-Type
MessagePack has no IANA-registered MIME. Two conventions are common:
application/x-msgpack (older x- style)
application/msgpack (newer)
Pick one and stick to it across spec, server registration and client
expectation. The examples below use application/x-msgpack.
The Consumer + Producer pair
// Mime is the content-type the recipe registers under. MessagePack has
// no IANA-registered MIME; application/x-msgpack and application/msgpack
// are both common: pick one and stick to it.
const Mime = "application/x-msgpack"

// Consumer returns a runtime.Consumer that decodes a MessagePack body
// into the target value v.
func Consumer() runtime.Consumer {
	return runtime.ConsumerFunc(func(r io.Reader, v any) error {
		return msgpack.NewDecoder(r).Decode(v)
	})
}

// Producer returns a runtime.Producer that serialises v as MessagePack
// onto w.
func Producer() runtime.Producer {
	return runtime.ProducerFunc(func(w io.Writer, v any) error {
		return msgpack.NewEncoder(w).Encode(v)
	})
}
Two-line implementations are typical; the runtime never inspects
codec internals. Anything more sophisticated (configurable encoder
options, format-specific error wrapping) goes inside the closure.
Register on the server
Spec – declare the new MIME under consumes / produces:
The runtime now picks MessagePack whenever the inbound Content-Type
matches and the route lists application/x-msgpack under consumes,
or Accept: application/x-msgpack selects it from produces.
# Server happily decodes a MessagePack body
curl -i -H 'Content-Type: application/x-msgpack' \
  --data-binary @payload.msgpack \
  http://127.0.0.1:8080/v1/items

# And produces MessagePack on request
curl -i -H 'Accept: application/x-msgpack' \
  http://127.0.0.1:8080/v1/items/42
A request with Content-Type outside the operation’s consumes
list yields 415 Unsupported Media Type; an Accept outside
produces yields 406 Not Acceptable. See
server / pipeline
for the full failure-mode mapping.
Variations
Vendor MIME types (application/vnd.acme.v1+msgpack) need
separate registrations even when they delegate to the same codec –
see vendor types.
Streaming bodies: Consumer / Producer get an io.Reader /
io.Writer directly, so streaming codecs work the same way. The
streaming bodies page covers raw-byte
payloads and the ClosesStream option.
Vendor MIME types
API versioning by vendor MIME type – application/vnd.acme.v1+json
and friends – is a common alternative to /v1/ URL prefixes. The
runtime supports it, but each MIME registers as its own entry: the
+json structural suffix is not sniffed automatically.
For the response side, the runtime has already chosen a producer
that matches the client’s Accept – your handler returns a value
and the matched Producer writes it. If you need the response shape
to differ between versions, branch on the negotiated content-type
the same way (see Context.ResponseFormat).
Matching rules – what about MIME parameters?
The
asymmetric matching rule
applies. If your spec lists a parameterised type
(application/vnd.acme+json;version=1), an inbound request with no
version parameter does not match. Prefer the simpler form –
parameter-distinct types are rarely worth the surprise.
See custom codec for the msgpackcodec package
itself.
When not to do this
Vendor MIME types compose poorly with browser clients
(Accept: */* is unspecific), with caches that key on URL alone, and
with HTTP middleware that inspects the URL. URL-based versioning
(/v1/...) sidesteps all three. Pick vendor MIME types when the API
is server-to-server and you genuinely need the same URL to serve
multiple representations.
Streaming bodies
For payloads that are not naturally a single Go value – large file
downloads, log streams, raw binary uploads – runtime.ByteStreamConsumer
and runtime.ByteStreamProducer give you io.Reader / io.Writer
access without the runtime decoding into a typed model.
api := untyped.NewAPI(doc).WithJSONDefaults()

// ByteStreamProducer is registered by WithJSONDefaults under
// runtime.DefaultMime ("application/octet-stream"), but be explicit
// when more than one stream-producing MIME is in the picture:
api.RegisterProducer(runtime.DefaultMime, runtime.ByteStreamProducer())

api.RegisterOperation("get", "/backups/{id}", runtime.OperationHandlerFunc(
	func(_ any) (any, error) {
		f, err := os.Open("/var/backups/2026-05-10.tar")
		if err != nil {
			return nil, err
		}
		// The Producer copies whatever io.Reader you return into the
		// response writer. Returning *os.File is fine; close it from
		// a Responder if you need ownership semantics.
		return middleware.ResponderFunc(func(w http.ResponseWriter, p runtime.Producer) {
			defer f.Close()
			w.Header().Set("Content-Type", runtime.DefaultMime)
			_ = p.Produce(w, f)
		}), nil
	},
))
Produce accepts an io.Reader (yes, despite the name): the
default ByteStreamProducer copies bytes through. For typed bodies
the runtime would marshal first; here you stay in raw-byte territory
end to end.
ClosesStream is the option to use when the consumer should
Close() the underlying reader after consumption. The default is not
to close – useful when you want to inspect the same body twice or
when the caller manages the lifetime explicitly.
The bound parameter is an io.ReadCloser; stream straight to disk:
For multipart uploads with file parts and form fields, the shape
differs – see client multipart (queued).
Choosing between ByteStream and a typed Consumer
Use ByteStreamConsumer / Producer when:
the payload is genuinely opaque bytes (downloads, uploads of
binary blobs, logs)
the size could exceed RAM – buffered codecs would OOM
you want to forward the body to another service without
re-encoding
Use a typed Consumer/Producer (JSON, XML, …, custom codec)
when the payload is a structured value the operation handler needs
to inspect.
The two are not mutually exclusive – a single API can route some
operations to streams and others to typed payloads via
operation-level consumes / produces.
Per-payload Content-Type override
The client normally derives the request Content-Type from the
operation’s consumes list. Two cases need an override:
a stream payload (io.Reader / io.ReadCloser set via
SetBodyParam) whose actual format isn’t what consumes defaults to
an individual file part inside a multipart upload that has its
own per-part Content-Type (rather than http.DetectContentType-sniffed)
When the runtime picks up a body or file value that satisfies this
interface and ContentType() returns a non-empty string, that
value wins. An empty return is treated as “no opinion” and the
runtime falls back to its default selection.
The full algorithm – the order of precedence and how it interacts
with consumes and the negotiator – is in
tutorials / media-type selection.
Stream payloads – naming the wire format
Use this when you’re sending a binary blob whose precise format you
know, and you want the recipient (or a proxy) to see the right
header instead of application/octet-stream:
type imagePayload struct {
	body io.Reader
	mime string
}

func (p imagePayload) Read(b []byte) (int, error) { return p.body.Read(b) }

// ContentTyper – wins over the operation's `consumes` default.
func (p imagePayload) ContentType() string { return p.mime }

func uploadAvatar(rt runtime.ClientTransport, avatar string) error {
	f, _ := os.Open(avatar)
	defer f.Close()

	op := &runtime.ClientOperation{
		ID:          "UploadAvatar",
		Method:      "PUT",
		PathPattern: "/users/me/avatar",
		Params: putAvatarBody(imagePayload{
			body: f,
			mime: "image/png", // ← will land on the wire as Content-Type
		}),
		Reader: putAvatarReader{},
	}
	_, err := rt.Submit(op)
	return err
}
If imagePayload did not implement ContentType(), the runtime
would use whichever entry in op.ConsumesMediaTypes it picked
(typically application/octet-stream).
Multipart file parts – per-part Content-Type
In a multipart request, individual file values are normally typed
via http.DetectContentType (sniffed from the first 512 bytes).
Implementing ContentTyper on the file value bypasses that:
// Wiring (illustrative – Params is built by the generated client):
f, _ := os.Open("manifest.json")
part := taggedFile{File: f, mime: "application/vnd.acme.manifest+json"}
// op.Params.SetFileParam("manifest", part) – part header carries
// "Content-Type: application/vnd.acme.manifest+json"
Without ContentType() the multipart writer would sniff the bytes
and likely write text/plain or application/json β both wrong if
your downstream pipeline keys on the vendor type.
Server-side equivalent?
There is none – server responses pick a Producer from the
Accept-negotiated produces entry, and the producer writes the
response. If you need to influence the response Content-Type
beyond what produces allows, use a custom middleware.Responder
that sets the header explicitly before delegating to the producer.
Caveats
ContentTyper is client-side only for body and multipart-file
values. It is not consulted on response payloads.
Implementing it on a value that is not one of those two
(a regular struct passed as a typed body) has no effect β the
operation’s consumes entry wins.
An empty ContentType() return is “no opinion”, not “force empty
header”. The runtime falls back to its default.
Negotiation in plain net/http
The
server-middleware module ships content
negotiation as a standalone, dependency-free package. You can drop
it into any net/http application – no spec, no analyzer, no
go-openapi/runtime import.
Install
go get github.com/go-openapi/runtime/server-middleware
The full module pulls only the standard library at runtime
(testify is _test.go-only).
Pick a response Content-Type
const mediaTypeXML = "application/xml"

// Pet is the demo resource served by the negotiation handler.
type Pet struct {
	XMLName xml.Name `json:"-" xml:"pet"`
	Name    string   `json:"name" xml:"name"`
}

func pickContentType() {
	pet := Pet{Name: "Lassie"}
	offers := []string{"application/json", mediaTypeXML}

	http.HandleFunc("/pet", func(w http.ResponseWriter, r *http.Request) {
		chosen := negotiate.ContentType(r, offers, "application/json")
		w.Header().Set("Content-Type", chosen)
		switch chosen {
		case mediaTypeXML:
			_ = xml.NewEncoder(w).Encode(pet)
		default:
			_ = json.NewEncoder(w).Encode(pet)
		}
	})

	srv := &http.Server{
		Addr:              ":8080",
		ReadHeaderTimeout: readHeaderTimeout,
	}
	log.Fatal(srv.ListenAndServe())
}
ContentType returns the most-acceptable offer per the request’s
Accept header (q-values, specificity, position-as-tiebreaker). If
no offer is acceptable, the third argument (the default offer) is
returned.
Passing "" as the default turns “no offer is acceptable” into an
empty-string return – let your handler decide whether to send the
unencoded body or a 406.
Exercise
# JSON by preference
curl -i -H 'Accept: application/json' http://127.0.0.1:8080/pet

# XML preferred, JSON acceptable
curl -i -H 'Accept: application/xml;q=0.9, application/json;q=0.5' \
  http://127.0.0.1:8080/pet

# Both rejected – falls back to the default offer (application/json here)
curl -i -H 'Accept: text/html' http://127.0.0.1:8080/pet
MIME-parameter behaviour
As of v0.30 the negotiator honours MIME parameters by default – an
Accept of text/plain;charset=utf-8 does not match an offer of
text/plain;charset=ascii. Pre-v0.30 the parameters were stripped
before matching. Opt out per call to restore the old behaviour:
The same module ships
docui β stdlib-only handlers for
Swagger UI / RapiDoc / Redoc. Combining the two gives you a small
spec-served, doc-UI-equipped HTTP server with no OpenAPI runtime
dependency at all. See docui standalone
(queued) once we write that example.
Custom middleware
The runtime pipeline (Router → Security → Bind → Validate →
OperationHandler → Responder) lives behind a single http.Handler.
Standard ecosystem middleware – compression, logging, rate-limiting,
tracing – composes around that handler the usual way. Order matters:
transport-level concerns (TLS termination, auth gating, rate limits)
typically wrap whatever middleware needs to see the final response
bytes (compression, logging), which in turn wraps the runtime
pipeline.
The pages below cover specific compositions worth pinning down.
Adding transparent HTTP response compression (gzip, brotli, …) to
a runtime server by wrapping the http.Handler returned by
middleware.Serve with the CAFxX httpcompression adapter.
Subsections of Custom middleware
Compression
This example shows how to add transparent HTTP response compression
(gzip, brotli, …) to a go-openapi/runtime server by wrapping the
http.Handler returned by middleware.Serve with a standard
ecosystem compression middleware.
The runtime itself does not ship compression. Composition with an
external middleware is the recommended approach; this example uses
github.com/CAFxX/httpcompression,
which covers gzip + brotli + zstd + deflate with sensible defaults
(content-type allowlist, minimum-size threshold, Vary / ETag /
Content-Length handling).
The wiring
The runtime hands you an http.Handler. Wrap it with the
compression adapter and mount the result on the mux:
compress, err := httpcompression.DefaultAdapter()
if err != nil {
	log.Fatalf("compression adapter: %v", err)
}

// Wrap the go-openapi handler. The order matters:
//   - the compressor must be OUTSIDE the api pipeline so it sees
//     the final response bytes;
//   - any TLS / auth / rate-limiting middleware typically wraps
//     the compressor (i.e. the compressor sits between application
//     code and transport-level middleware).
mux := http.NewServeMux()
mux.Handle("/", compress(apiHandler))
DefaultAdapter() enables gzip + brotli with sensible defaults.
Use Adapter(...) for explicit codec, threshold, and content-type
control (e.g. httpcompression.GzipCompressionLevel(6),
httpcompression.MinSize(512),
httpcompression.ContentTypes([]string{"application/json"}, false)).
The compressed response carries Content-Encoding: gzip (or br),
Vary: Accept-Encoding, and a transformed Content-Length. The
go-openapi/runtime pipeline is unchanged – the compressor sits
outside the API handler and operates on the final response bytes.
Layering
The order of middlewares around the api handler matters:
The compressor must wrap the api handler so it sees the complete
response body before transport. Transport-level concerns (TLS
termination, auth gating, rate limiting) typically wrap the
compressor in turn.
Client-side
net/http’s default transport auto-decodes gzip responses, but
not br / zstd / deflate. Clients that need broader decoding
can wrap their http.RoundTripper with a decoder;
github.com/klauspost/compress
provides primitives suitable for that purpose. The
go-openapi/runtime client (client.Runtime) accepts a custom
transport via its configuration, so the same pattern applies.
Tutorials
The pages in this section are reference-quality explanations, not
recipes. They live alongside the Usage pages but go
deeper – when a 415 surprises you, or a quiet connection starts
returning context deadline exceeded, this is where the explanation
lives.
Subsections of Tutorials
FAQ
Answers to common questions collected from GitHub issues.
Why is request.ContentLength zero when I send a body?
A streaming body (e.g. from bytes.NewReader) is sent with chunked transfer encoding.
The runtime cannot know the content length of an arbitrary stream unless you explicitly
set it on the request. If you need ContentLength populated, set it yourself before
submitting.
How do I read the error response body from an APIError?
The client’s Submit() closes the response body after reading. To access error details,
define your error responses (including a default response) in the Swagger spec with a
schema. The generated client will then deserialize the error body into a typed struct
that you can access via type assertion:
if apiErr, ok := err.(*mypackage.GetThingDefault); ok {
	// apiErr.Payload contains the deserialized error body
}
Without a response schema in the spec, the body is discarded and only the status code
is available in the runtime.APIError.
The same approach works for any non-standard MIME type such as application/pdf
(use runtime.ByteStreamConsumer()), application/hal+json, or
application/vnd.error+json (use runtime.JSONConsumer()).
Can I run authentication on requests that don’t match a route?
No. Authentication is determined dynamically per route from the OpenAPI spec
(each operation declares its own security requirements). The middleware pipeline
authenticates after routing, so unmatched requests are never authenticated.
How do I share context values across middlewares when using an external router?
The go-openapi router creates a new request context during route resolution.
Context values set after routing (e.g. during auth) are not visible to middlewares
that run before the router in the chain.
The recommended pattern is to use a pointer-based shared struct:
type sharedCtx struct {
	Principal any // add fields as needed
}

// In your outermost middleware, before the router:
sc := &sharedCtx{}
ctx := context.WithValue(r.Context(), sharedCtxKey, sc)
next.ServeHTTP(w, r.WithContext(ctx))
// After ServeHTTP returns, sc is populated by inner middlewares.

// In an inner middleware or auth handler:
sc := r.Context().Value(sharedCtxKey).(*sharedCtx)
sc.Principal = principal // visible to the outer middleware
Because the struct is shared by pointer, mutations are visible regardless of
which request copy carries the context.
Can I use this library to validate requests/responses without code generation?
Yes. Use the routing and validation middleware from the middleware package with
an untyped API. Load your spec with loads.Spec(), then wire up
middleware.NewRouter() to get request validation against the spec without
needing go-swagger generated code. See the middleware/untyped package for
examples.
How do I configure Swagger UI to show multiple specs?
SwaggerUIOpts supports the urls parameter for listing multiple spec files in
the Swagger UI explore bar. Configure it instead of the single url parameter.
How go-openapi/runtime parses, matches, and negotiates HTTP media types,
on both the server and client sides. The reference for the rules behind a
415, a 406, or a 400 you see in production.
Scope: Content-Type and Accept headers, both inbound and outbound.
Accept-Encoding is mentioned briefly. Charset, language, and version
tags are treated as opaque parameters under the rules below.
No consumer registered for an otherwise-allowed Content-Type →
500 Internal Server Error (a server-side configuration error).
The shared model – mediatype.MediaType
Both sides use the same parser and value type:
import "github.com/go-openapi/runtime/server-middleware/mediatype"

mt, err := mediatype.Parse("application/json;charset=utf-8;q=0.8")
// mt.Type    = "application"
// mt.Subtype = "json"
// mt.Params  = {"charset": "utf-8"} // parameter keys lowercased
// mt.Q       = 0.8                  // q is extracted, not stored in Params
Casing
Type, Subtype, parameter keys – lowercased on parse.
Parameter values – preserved verbatim.
Comparisons of parameter values are case-insensitive
(charset=UTF-8 matches charset=utf-8, the convention for charset, version, etc.).
Wildcards
*/* and type/* are accepted on either side of a comparison.
*/subtype is invalid per RFC 7231 §5.3.2 and Parse rejects it.
Malformed input
Every Parse failure wraps the sentinel mediatype.ErrMalformed,
so callers can distinguish “client sent garbage” from “client sent
something well-formed that nothing here accepts”:
_, err := mediatype.Parse(headerValue)
if errors.Is(err, mediatype.ErrMalformed) {
	// 400 Bad Request territory
}
The matching rule
MediaType.Matches(other) is asymmetric. The receiver is the bound
(an allowed entry on the server side, or a candidate offer when matching
against an Accept entry); the argument is the constraint (the actual
incoming request, or the Accept entry being satisfied).
The rule:
Bare type/subtype must agree (with wildcards on either side).
If the receiver carries no parameters, any constraint is accepted
regardless of its parameters.
Otherwise every (key, value) pair on the constraint must be present
on the receiver, with case-insensitive value comparison. The receiver
may carry additional parameters that the constraint does not list.
q-values are not considered by Matches – they are the negotiator’s
concern, handled inside Set.BestMatch.
The same direction is used in both call sites:

Call                 Bound (receiver)         Constraint (argument)
Inbound validation   each entry in consumes   the request’s Content-Type
Accept negotiation   each candidate offer     each Accept entry
The asymmetry is intrinsic to the semantics (“loose if the bound has no
params, otherwise the constraint must be a subset”), not to which side is
the server.
Beyond strict matching – alias and suffix tolerances
The bare Matches rule above is strict RFC 7231: type, subtype, and the
parameter subset. Two extensions sit on top of it, both surfaced through
the graded result of MediaType.Match:
MatchExact – strict RFC 7231 match
(application/json vs application/json).
MatchAlias – strict fails, but both sides resolve to the same canonical
form via the package-internal alias table
(application/x-yaml vs application/yaml).
MatchSuffix – strict and alias both fail, but both sides resolve to the
same base after folding the RFC 6839 structured-syntax suffix
(application/vnd.api+json vs application/json).
MatchNone – none of the above.
Set.BestMatch, MatchFirst, and mediatype.Lookup rank candidates by
this tier in addition to q-value and specificity – when two offers fit a
constraint at different tiers, the stronger tier wins regardless of
offer order. Exact beats alias, alias beats suffix.
Alias bridge – always on
RFC 9512 §2.1 enumerates three deprecated alias names for the
application/yaml registration:
Alias                Canonical
application/x-yaml   application/yaml
text/yaml            application/yaml
text/x-yaml          application/yaml
A request, offer, or codec registration in any of these forms matches a
counterpart in any of the others. The bridge is wire-format equivalence
backed by an explicit IANA registration-template field – no opt-in
needed and no way to disable it.
Structured-syntax suffix tolerance – opt-in
+json, +xml, and +yaml are the RFC 6839 structured-syntax suffixes
the runtime recognises. Their wire format is the underlying base
(+json is JSON), but their semantics carry application-specific
structure on top (application/problem+json is JSON-on-the-wire with
the RFC 7807 problem-details document shape). Tolerating these as
equivalent to the base format is a contract loosening, so the runtime
defaults to strict and surfaces the leniency through an explicit
opt-in.
All three feed the same mediatype.AllowSuffix() option through
Set.BestMatch, MatchFirst, and mediatype.Lookup. With the flag on,
a spec declaring consumes: [application/json] end-to-end tolerates
request bodies sent with Content-Type: application/vnd.api+json (and
likewise for +xml / +yaml). With the flag off – the default – such
a request is rejected with 415, exactly as before.
The opt-in is intended for situations where the user does not control
both sides of the wire:
a server that wants to accept application/problem+json errors from
upstream services declared as application/json;
a client that needs to consume application/problem+json responses
from servers whose spec only declares application/json in produces.
If both sides are under your control, prefer to align the spec:
list application/vnd.api+json (or whichever variant applies)
explicitly in consumes / produces. The opt-in is leeway for the
common real-world mismatch, not a substitute for a faithful spec.
Tier interactions worth pinning
Parameters still bind at every tier. A constraint of
application/yaml; charset=utf-8 does not match an offer of
application/yaml; charset=ascii even with subtypes equal – the
parameter-subset rule from Matches applies regardless of which tier
resolved the subtype. Suffix tolerance does not loosen the param
rule.
Exact registrations always win. If application/vnd.api+json is
explicitly in consumes (or registered as a producer), routing and
codec lookup never fall through to the suffix tier for that mime –
even with WithMatchSuffix(true).
Map-side suffix folding is intentionally absent. A registration
at application/vnd.api+json does not receive a query of
application/json even with the opt-in. The inverse case (“only the
vendor consumer is registered, plain-base query arrives”) is not a
scenario the runtime tries to cover.
Server side – inbound Content-Type validation
Flow when a request arrives with a body:
runtime.HasBody(r)               ← early-out for bodyless requests
      │
runtime.ContentType(r.Header)    ← 400 here if the header is malformed
      │
validateContentType(consumes, ct)
  ├─ malformed actual  → 400 errors.ParseError (defensive)
  ├─ no entry matches  → 415 errors.InvalidContentType
  └─ match             → continue to consumer dispatch
      │
route.Consumers[ct]              ← 500 if no codec registered
validateContentType is a thin wrapper around
mediatype.MatchFirst.
It short-circuits on the first allowed entry that accepts the actual –
not the most specific match. For ranked matching use Set.BestMatch.
What “missing Content-Type” does
When the request body is non-empty but the header is missing,
runtime.ContentType substitutes the package-level default
(runtime.DefaultMime = application/octet-stream). The validator
then matches that default against the operation’s consumes. So a
request with a body and no Content-Type typically yields 415
unless the operation lists application/octet-stream.
Parameter honouring (since v0.30)
Before v0.30, parameters were stripped on both sides before matching:
Content-Type: text/plain;charset=ascii would pass against
consumes: [text/plain;charset=utf-8]. Since v0.30 this is rejected
(charset values disagree). The fix landed with PR #426 (issue #136).
Server side – outbound Accept negotiation
negotiate.ContentType(r, offers, defaultOffer, opts...)
reads the request’s Accept header(s), parses each entry,
ranks the offers, and returns the winning offer (a string from the
offers slice). If nothing matches, defaultOffer is returned.
Ranking
Per RFC 7231 §5.3.2, in order:
Highest q-value (q=0 excludes an offer entirely).
Highest specificity of the matched Accept entry
(type/subtype;params > type/subtype > type/* > */*).
Earliest position in the offers slice.
Multiple Accept headers
Per RFC 7230 §3.2.2, multiple Accept headers are equivalent to a single
comma-joined value. The negotiator joins before parsing, so all entries
contribute to the decision regardless of how the client batched them.
Parameter honouring and the opt-out
Same v0.30 change as inbound validation. An Accept entry of
text/plain;charset=utf-8 matches an offer of bare text/plain (the
offer carries no constraint), but not text/plain;charset=ascii.
To restore the looser pre-v0.30 behaviour for one operation:
The opt-out exists for applications whose producers and Accept clients
use mismatched charset or version params that they treat as
informational.
Codec dispatch is keyed by bare type
The negotiator returns the verbatim offer (parameters preserved) and the
runtime sets Content-Type from it. Codec dispatch is a separate step:
the runtime looks up the producer in route.Producers, which is a
map[string]Producer keyed by the bare type/subtype (no params).
You will see calls to normalizeOffer(format) and
normalizeOffers(...) in the middleware and the router doing exactly
this stripping – they are about map lookup, not about negotiation.
The practical consequence: you cannot register two different producers
for the same bare type that differ only by parameters
(text/plain;charset=utf-8 vs text/plain;charset=ascii). They would
collide on the bare-type key. The negotiator can still choose
between two such offers (parameters are honoured during matching), but
the codec invoked is the single one registered under the bare key.
If you need parameter-specific encoding, do it inside one producer and
inspect the negotiated Content-Type from the response writer.
Client side – outbound Content-Type
Selection runs in two stages. Stage 1 picks a candidate from the
operation’s consumes list before the payload is known; Stage 2 runs
inside buildHTTP after the request writer has populated the payload,
and may upgrade Stage 1’s choice when the payload is a stream.
Stage 1 – the picker
If multipart/form-data is one of the entries, prefer it (it streams
and preserves per-file Content-Type). Resolves issue #286.
Otherwise the first non-empty entry that is either a structural
mime (multipart/form-data, application/x-www-form-urlencoded)
or has a producer registered in Runtime.Producers. This skips
spec entries the client cannot serialise β useful when the spec
lists a vendor mime first and a registered alternative second.
Closes part of issues #32 and #386.
If nothing in the list is registered, the first non-empty entry is
returned anyway so the gate at the call site emits its
none of producers: … diagnostic.
Falls back to Runtime.DefaultMediaType (application/json by
default) only when the list is empty (or all empty strings).
Stage 1 cannot see the payload – the request writer hasn’t run yet –
so its choice is “best effort given only the spec and the registered
producers.”
Stage 2 – setStreamContentType
Source: client/request.go. Runs inside buildHTTP after the writer
has populated r.payload. For stream payloads (io.Reader,
io.ReadCloser) only – the producer is bypassed in this branch, so
the wire header is the only place where the body’s actual MIME type
is asserted.
Three checks, in priority order:
Explicit SetHeaderParam("Content-Type", …). The historical
header escape hatch wins over every derivation. If the writer set
Content-Type during WriteToRequest, the runtime keeps it as-is.
This was not the original purpose of SetHeaderParam, but it has
become the natural way to say “send THIS exact header”, and we
honour it. Caveat: the user is then responsible for matching their
declared header to their actual body bytes.
Payload-declared content type. If r.payload implements the
exported runtime.ContentTyper
interface and returns a non-empty value, that value wins. The
value declares its own nature β useful for line-delimited formats,
custom MIME types, or any case where the spec offers no matching
entry. The same interface is also consulted on each part of a
multipart file upload.
Octet-stream upgrade. When neither of the above applies, and
application/octet-stream is in the operation’s consumes list
AND a producer is registered for it, the wire header is upgraded
from the picker’s choice to octet-stream – a safer “raw bytes”
claim than a structural mime like JSON.
If none of the three checks fire, the picker’s mediaType from
Stage 1 is used as the terminal fallback.
Non-stream paths are deliberately not honoured
SetHeaderParam("Content-Type", …) and runtime.ContentTyper are
honoured only for stream payloads. Non-stream paths have
structural constraints that conflict with arbitrary user-supplied
content types:
struct / []byte payloads – the producer is dispatched off
mediaType. Honouring an arbitrary user header here would mean
either swapping the producer (complex) or sending a body that
doesn’t match the declared header (still a lie).
Multipart bodies – the runtime owns the Content-Type header
because of the boundary parameter requirement.
URL-encoded forms – the body is form-encoded; lying about the
type would break parsing on the server.
Users with these payload shapes who need a custom content type
should adjust the operation’s consumes list (so the picker selects
the right entry) or register a producer under the desired MIME.
Wire Content-Type matrix
Payload   SetHeader Content-Type   declares ContentType()   octet-stream offered + registered   Wire Content-Type
stream    set                      –                        –       the SetHeader value
stream    unset                    yes, non-empty           –       declared value
stream    unset                    no / empty               yes     application/octet-stream
stream    unset                    no / empty               no      picker’s choice (best-effort; may misrepresent body)
struct    (ignored)                –                        –       picker’s choice (producer runs)
[]byte    (ignored)                –                        –       picker’s choice (producer runs; e.g. the JSON producer base64-encodes)
Issue #385 /
#33 – The codec
set is hardcoded; it is not derived from the spec. Apps that don’t
declare an exotic consumes/produces carry codecs they will never
use. Tracked as Track A.2 in the modularization roadmap.
[]byte payloads. A []byte flows through the picker’s chosen
producer. The JSON producer base64-encodes it as a JSON string. If
you want raw bytes on the wire, wrap as bytes.NewReader([]byte{…})
– it then takes the stream path and the Stage-2 octet-stream
upgrade applies.
What changed in v0.30 (client-side outbound)
Four behaviour deltas vs. v0.29. Three are confined to stream
payloads (io.Reader, io.ReadCloser); the fourth touches the
Stage-1 picker for any payload type.
The first three surface only when there is at least one stream payload
involved; existing client code that uses generated parameter types
with struct/[]byte payloads is unaffected by those.
Delta: body payload’s ContentType()
  Pre-v0.30 (master): not consulted; the picker’s mediaType is sent
  v0.30: when the payload satisfies runtime.ContentTyper, its non-empty return value becomes the wire Content-Type

Delta: Stage-2 octet-stream upgrade
  Pre-v0.30 (master): absent; the picker’s choice is the only signal
  v0.30: when the payload is a stream and lacks an explicit declaration, application/octet-stream from the operation’s consumes list is used in preference to a structural mime like application/json

Delta: SetHeaderParam("Content-Type", X)
  Pre-v0.30 (master): silently overwritten by buildHTTP
  v0.30: honoured at top priority; the user’s explicit assertion wins

Delta: Stage-1 producer-capability filter
  Pre-v0.30 (master): the picker returns the first non-empty entry; if no producer is registered for it, the gate at the call site errors
  v0.30: the picker skips entries with no registered producer (and no structural status) and tries the next one; it only errors when nothing in consumes is registered
Each delta is verified by a row in the behavioural harness at
client/content_negotiation_test.go.
The rows that fail when the harness runs against the v0.29 baseline
are exactly the rows that exercise these deltas; there are no
incidental behaviour changes outside this set. The structural paths
(form, multipart, file uploads) and the multipart-vs-urlencoded
preference fix from #286 are preserved verbatim.
Migration notes
No action needed for callers using struct-typed parameters
generated by go-swagger. The wire Content-Type is unchanged.
Streams that need a specific MIME type can implement
runtime.ContentTyper
on the payload value, or add application/octet-stream to the
operation’s consumes, or fall back to setting the header
explicitly via the params writer.
Callers that relied on SetHeaderParam("Content-Type", β¦) and
found it didn’t work (it never did, on body requests) can now
rely on it as a documented escape hatch for stream payloads.
Client side β inbound responses
There is no Accept negotiation step at decode time. The client sent
its Accept header on the request and is now reading whatever the
server chose to return; the response's Content-Type header is the
single input the codec dispatcher consults.
Pipeline
    response.Header["Content-Type"]
            │
            ▼
    resolveConsumer(ct)          ← client/runtime.go
            │
            ▼  picks a runtime.Consumer
    operation.Reader             ← codegen-emitted; switches on status code,
            │                      hands the body to the picked consumer,
            ▼                      decodes into the typed response struct
    typed response value or error
The codegen-emitted operation Reader is the piece most users
never see. It’s a generated function per operation that:
Reads the HTTP status code and selects the matching response
definition from the spec.
Calls runtime.ContentType(response.Header) to extract the bare
mime.
Invokes the runtime to resolve a consumer for that mime
(resolveConsumer).
Decodes the body into the response definition’s Go type via
consumer.Consume(body, target).
If you are writing a custom client without codegen, you implement
this function yourself.
resolveConsumer β picking a consumer
resolveConsumer(ct string) in client/runtime.go is the single
codec-lookup site on the client. It runs:
Parse ct (rejects malformed values with a "parse content type: …"
error, surfaced as a client-side error, not as a server
response).
mediatype.Lookup(r.Consumers, ct, r.matchOpts()...): runs the
four always-on tiers (raw key, parsed canonical, alias query-side,
alias map-side) plus the opt-in suffix tier when
Runtime.MatchSuffix is set. See “Beyond strict matching” above.
On lookup miss, fall back to r.Consumers["*/*"] if a wildcard
consumer is registered.
On full miss, return "no consumer: %q"; the operation Reader
propagates this as the operation's error.
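Under simplified rules, the lookup order above can be sketched as follows (the real mediatype.Lookup adds the canonicalization, alias and suffix tiers; the map of names here is purely illustrative):

```go
package main

import (
	"fmt"
	"mime"
)

// resolve sketches the client's lookup order: parse, exact key,
// then the "*/*" wildcard, then a "no consumer" error.
func resolve(consumers map[string]string, ct string) (string, error) {
	base, _, err := mime.ParseMediaType(ct)
	if err != nil {
		return "", fmt.Errorf("parse content type: %w", err)
	}
	if c, ok := consumers[base]; ok {
		return c, nil
	}
	if c, ok := consumers["*/*"]; ok {
		return c, nil
	}
	return "", fmt.Errorf("no consumer: %q", ct)
}

func main() {
	reg := map[string]string{"application/json": "json", "*/*": "bytestream"}
	c, _ := resolve(reg, "application/json; charset=utf-8")
	fmt.Println(c) // json
	c, _ = resolve(reg, "application/problem+json")
	fmt.Println(c) // bytestream (wildcard fallback)
}
```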
Where Runtime.MatchSuffix lands
Setting rt.MatchSuffix = true flips the inbound decode path to
tolerate RFC 6839 suffix media types: a response with
Content-Type: application/problem+json finds the JSON consumer
registered at application/json and is decoded into whatever Go type
the response definition declares. The wildcard "*/*" fallback runs
unchanged after the suffix tier.
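The suffix tier's folding step can be illustrated with a small string transformation (an illustration only; the real tier lives inside mediatype.Lookup and respects the full RFC 6839 grammar):

```go
package main

import (
	"fmt"
	"strings"
)

// foldSuffix sketches the RFC 6839 tolerance that MatchSuffix enables:
// application/problem+json is retried as application/json.
func foldSuffix(ct string) (string, bool) {
	slash := strings.IndexByte(ct, '/')
	plus := strings.LastIndexByte(ct, '+')
	if slash < 0 || plus <= slash {
		return ct, false // no structured-syntax suffix to fold
	}
	return ct[:slash+1] + ct[plus+1:], true
}

func main() {
	folded, ok := foldSuffix("application/problem+json")
	fmt.Println(folded, ok) // application/json true
}
```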
Symmetric to the server-side Context.SetMatchSuffix(true): the
opt-in is independent on each side and exists for exactly the same
reason: real servers (or real clients) that don’t strictly abide by
the spec’s produces / consumes declarations.
Alias bridge β also active here
The always-on alias bridge applies on this path too. A client that
registers the YAML consumer at the legacy application/x-yaml key
(or, for that matter, leaves the default-map flip in place at
application/yaml) handles a server response with
Content-Type: text/yaml correctly, because mediatype.Lookup
canonicalizes both keys to application/yaml and finds the consumer
regardless of which form was registered.
Failure modes worth knowing
Malformed Content-Type (e.g. trailing garbage, unterminated
quoted string): resolveConsumer returns an error sourced from
mime.ParseMediaType, prefixed with parse content type:. The
operation Reader surfaces this as the operation's error; no
decode is attempted.
No consumer, no wildcard registered: "no consumer: %q" with
the offending Content-Type. Most commonly hit when the server
returns an undeclared error mime (application/problem+json is the
canonical example) and Runtime.MatchSuffix is off and "*/*" is
not registered.
Silent wildcard fallback: if Consumers["*/*"] is registered
(the default-map registers runtime.ByteStreamConsumer there), any
unrecognised Content-Type decodes through that consumer. For a
typed response struct, this usually fails inside the consumer’s own
unmarshal with a less specific error than the no-consumer case.
Worth knowing if the runtime appears to “silently succeed at
decoding garbage.”
Accept-Encoding
negotiate.ContentEncoding(r, offers)
implements Accept-Encoding negotiation against a list of offered
encoding tokens (gzip, deflate, …). Encoding tokens have no
parameters, so the v0.30 parameter-honouring change does not apply.
The runtime itself does not transparently encode response bodies; this
helper is for handlers that want to make the choice explicitly.
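A much-simplified sketch of what an Accept-Encoding negotiator does (the real negotiate.ContentEncoding also honours q-values and the identity rules; pickEncoding here is an illustrative name, not the library's API):

```go
package main

import (
	"fmt"
	"strings"
)

// pickEncoding scans the Accept-Encoding header left to right and
// returns the first client token that matches an offered encoding.
func pickEncoding(acceptEncoding string, offers []string) string {
	for _, part := range strings.Split(acceptEncoding, ",") {
		token := strings.TrimSpace(strings.SplitN(part, ";", 2)[0])
		for _, offer := range offers {
			if strings.EqualFold(token, offer) {
				return offer
			}
		}
	}
	return "" // no acceptable encoding offered
}

func main() {
	fmt.Println(pickEncoding("br;q=1.0, gzip;q=0.8", []string{"gzip", "deflate"})) // gzip
}
```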
Common gotchas
“My matching test broke after upgrading to v0.30.”
Likely the parameter-honouring change. If your clients' Accept
headers and your produces entries use mismatched charset/version
params that you treat as informational, opt out with
negotiate.WithIgnoreParameters(true)
(per call) or Context.SetIgnoreParameters(true) (server-wide).
“My server rejects application/vnd.api+json (or application/problem+json) with 415.”
The default match is strict RFC 7231: a vendor +json mime is not
an application/json mime. Two routes forward: (1) list the vendor
mime explicitly in the operation’s consumes and register a codec
under that key (the spec-faithful path); or (2) enable
Context.SetMatchSuffix(true) server-wide to fold +json / +xml /
+yaml to the underlying base codec at lookup time (the leeway path,
for situations where the client is not under your control). See
the “Beyond strict matching” section above.
“My client request returns 415 even though the API lists my type in consumes.”
Check the wire Content-Type against your server’s consumes matching
rules. The client sends the picker’s choice (with Stage-2 upgrades for
streams), so a stray space, missing charset, or trailing ; in the
spec entry will be sent through and rejected by a strict server. If
the payload is a stream, consider implementing ContentType() string
on it to declare the type explicitly.
“My stream payload’s wire Content-Type is wrong.”
Four cases in priority order: set the header explicitly via
SetHeaderParam("Content-Type", …) in your params writer; implement
runtime.ContentTyper (ContentType() string) on the payload to
declare an explicit type; add application/octet-stream to the
operation’s consumes list to trigger the Stage-2 upgrade; or list
the desired mime first in consumes so the picker chooses it.
“My server returns 400 for a missing Content-Type on a body request.”
It shouldn't: missing headers fall through to application/octet-stream
via runtime.DefaultMime and that produces 415, not 400. A 400 means
the header is present and unparseable. Check for stray characters
(unmatched parens, wildcards in parameter names, etc.).
“How do I get the parsed Content-Type value in my handler?”
Use runtime.ContentType(r.Header)
or the cached value at middleware.MatchedRouteFrom(r).Consumes.
Reference
Server matching primitive: github.com/go-openapi/runtime/server-middleware/mediatype
Server negotiator: github.com/go-openapi/runtime/server-middleware/negotiate
Codec lookup helper: mediatype.Lookup[T] β used by both server (middleware/context.go, middleware/validation.go) and client (client/runtime.go)
Alias and suffix tolerances: mediatype.Match, mediatype.MatchKind, mediatype.AllowSuffix; opt-in surfaces negotiate.WithMatchSuffix, middleware.Context.SetMatchSuffix, client.Runtime.MatchSuffix
Server validation: middleware/validation.go (validateContentType)
How go-openapi/runtime reuses TCP connections, what the kernel and the
HTTP transport actually do for you, and where it goes wrong when there
is a NAT gateway, proxy, or firewall between your client and the server.
Concrete: this is the reference for “I get context deadline exceeded
after a quiet period”. Issue #336 is the canonical example.
Scope: client-side Runtime. Server-side keep-alive (http.Server’s
own timers) is summarised briefly at the end, with pointers into the
Go stdlib docs.
TL;DR
If your client lives behind a NAT gateway, a load balancer, or a firewall
with an idle conntrack timeout (AWS NAT: 350 seconds; many corporate
firewalls: a few minutes), and you see context deadline exceeded on
requests that follow a quiet period:
Check what your Runtime.Transport is. If you let client.New
pick the default (http.DefaultTransport), you already get
IdleConnTimeout = 90s and Dialer.KeepAlive = 30s. Those defeat
most NAT timeouts.
If you replaced the Transport (for TLS config, a proxy, etc.),
you almost certainly lost those defaults. Reinstate them.
On Go 1.23+, set an explicit
net.Dialer.KeepAliveConfig:
the bare KeepAlive field only sets the probe interval, not the
idle delay before probing starts. On Linux the kernel default for
the idle delay is often 7200 seconds (two hours), so probes
never fire before a 350s NAT timeout drops your conntrack.
Do not reach for
Runtime.EnableConnectionReuse.
The name is misleading: it does not control TCP keepalive or NAT
timeouts. See “the misnomer” below.
A recipe at the bottom of this document covers the cloud / NAT case.
Two distinct things named “keep-alive”
The word “keep-alive” is used for two unrelated mechanisms operating at
different layers. The runtime, the stdlib, and the OS all speak about
“keep-alive” without always disambiguating, which is the root of most
confusion.
HTTP keep-alive (application layer)
Connection: keep-alive is an HTTP/1.1 default. It means the same TCP
connection serves multiple HTTP request/response pairs. The client
sends request 1, reads response 1, sends request 2 on the same socket,
reads response 2, and so on, until either side closes.
In Go, http.Transport keeps a pool of idle connections per host. After
a response body is fully read and closed, the connection goes back to
the pool. The next request to the same host may pick a connection from
the pool instead of dialling a new one. Skipping the dial saves a TCP
handshake plus, for HTTPS, a TLS handshake, typically tens to hundreds
of milliseconds per request.
TCP keepalive (kernel / socket layer)
SO_KEEPALIVE is a socket option asking the kernel to send periodic
empty ACK packets on an otherwise-idle TCP connection. The peer
acknowledges them. Two consequences:
Dead-peer detection. If the peer disappears (machine rebooted,
network partitioned), the kernel sees the missing ACKs and tears
the connection down. Without keepalive, a half-open connection can
linger indefinitely.
Conntrack / NAT keep-alive. A NAT gateway or stateful firewall
maintains a connection tracking (conntrack) entry per TCP flow
passing through it. The entry is dropped after some idle period:
AWS NAT uses 350 seconds, many enterprise firewalls use 60s to 15min.
Once the entry is dropped, packets arriving for that flow are
either silently discarded or rejected with a RST that may not
reach the original sender. Periodic TCP keepalive packets count
as live traffic, so the NAT keeps the entry fresh.
The first three of the four mechanisms below are HTTP keep-alive
concerns; the fourth is the kernel/TCP one.
| Knob | Layer | Default | What it controls |
|------|-------|---------|------------------|
| http.Transport.DisableKeepAlives | HTTP | false (keep-alive on) | Whether a TCP conn serves more than one HTTP request |
| http.Transport.MaxIdleConns / MaxIdleConnsPerHost | HTTP | 100 / 2 | Idle-pool sizing |
| http.Transport.IdleConnTimeout | HTTP | 90s | How long an idle conn stays in the pool before close |
| net.Dialer.KeepAlive (+ KeepAliveConfig on Go 1.23+) | TCP | 30s | Whether and how the kernel sends keepalive probes on dialled connections |
If you only remember one thing: HTTP-layer settings decide whether the
runtime reuses a connection. TCP-layer settings decide whether a
through-the-network proxy still believes the connection exists. A
mismatch produces issue #336.
What Go does for you, by default
http.DefaultTransport is the transport
client.New
sets on every fresh Runtime. Its defaults, as of recent Go:
Read the way most cloud-deployed Go services need it:
IdleConnTimeout = 90s is less than the AWS NAT 350s timeout, so an
idle pooled connection is closed by Go before NAT drops it.
Dialer.KeepAlive = 30s enables TCP keepalive probes every 30s, so
active connections survive long NAT timeouts even when the
application isn’t sending data.
For typical cases, these defaults are correct. You only need to think
about this if you replaced the Transport, or if your environment has an
unusual idle timeout.
How the runtime wires this
Runtime.Transport is the http.RoundTripper used for every outbound
request. Three things to know:
The default is http.DefaultTransport, with the values above.
Replacing rt.Transport = ... with a custom transport
completely overrides the defaults: you inherit nothing unless
you copy what http.DefaultTransport sets.
Runtime.SetDebug(true) does not affect keep-alive at all β it
only logs requests/responses.
The misnomer β Runtime.EnableConnectionReuse
Runtime.EnableConnectionReuse() is the method most users find when
searching for “keep-alive” or “connection reuse” in this codebase. The
name suggests it controls whether connections are pooled and reused. It
does not.
What it actually does: wraps Runtime.Transport in a RoundTripper
that, after every response, drains any unread bytes from the response
body before Close. The reason: Go’s http.Transport will only
return a connection to the idle pool if the response body was fully
read. If your handler stops reading early (for example, you only need
the HTTP status and skip the body), the connection is not reusable, and
the next request will pay the cost of a new dial + handshake.
So EnableConnectionReuse is a narrow fix for one specific pattern: code
that doesn’t fully read response bodies. It has no effect on:
TCP keepalive packets;
whether the connection survives a NAT idle timeout;
the size of the idle pool;
the idle timeout in the pool;
any other connection-lifecycle concern.
If you ended up here following the issue #336 trail: this method will
not help you. A future runtime release will either rename this method
to something narrow and honest, or fold the body-draining behaviour
into a default-on path so users no longer have to know about it.
The NAT idle-timeout failure mode
This is the scenario in issue #336. Walk through it once and the symptom
becomes recognisable:
The client makes a request. Go dials a fresh TCP connection through
the NAT gateway. NAT creates a conntrack entry. Request completes;
the connection goes into Go’s idle pool.
The application is quiet for more than 350 seconds.
Go’s idle pool has not yet evicted the connection (if you increased
IdleConnTimeout past 350s, or if you have a custom transport that
doesn’t set it). Or the conn is “active” because something is
waiting on it, just not sending data β long polling, server-sent
events, slow streaming response.
NAT drops the conntrack entry. No notification to either side.
The application makes its next request. Go picks the still-pooled
connection. The TCP stack believes it is fine; it sends.
Packets disappear at the NAT. The server never sees the request,
the client never sees a response. From the application’s view, the
request hangs.
Eventually the request’s context deadline fires:
context deadline exceeded.
The same shape applies to any stateful network appliance between you and
the server: load balancers, corporate firewalls, IPSec tunnels.
Solutions
Rely on the defaults (preferred)
If you can: use http.DefaultTransport, do not replace
rt.Transport. IdleConnTimeout=90s and Dialer.KeepAlive=30s
together cover the common NAT and firewall idle timeouts. No further
configuration needed.
Custom Transport β reinstate the defaults
When you build a custom transport (for TLSClientConfig, an HTTP proxy
URL, a MaxIdleConnsPerHost change, etc.), start from the
http.DefaultTransport values, then override only what you need:
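The safest way to do this is Transport.Clone (Go 1.13+), which copies every DefaultTransport field before you override; the TLS and pool settings here are hypothetical examples of deployment-specific overrides:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// cloudTransport starts from http.DefaultTransport and overrides only
// what this (hypothetical) deployment needs. Clone preserves the
// IdleConnTimeout=90s and the keepalive-enabled dialler that a bare
// &http.Transport{...} literal would silently drop.
func cloudTransport() *http.Transport {
	tr := http.DefaultTransport.(*http.Transport).Clone()
	tr.TLSClientConfig = &tls.Config{MinVersion: tls.VersionTLS12}
	tr.MaxIdleConnsPerHost = 16
	return tr
}

func main() {
	tr := cloudTransport()
	fmt.Println(tr.IdleConnTimeout) // 1m30s: inherited, not lost
}
```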
The single most common bug is omitting the Dialer. A literal of the
form &http.Transport{TLSClientConfig: ...} with no DialContext
uses Go’s net default dialler, which has no keepalive at all.
Explicit KeepAliveConfig (Go 1.23+)
The bare net.Dialer.KeepAlive field sets the probe interval. On Linux,
the kernel does not start sending probes until a separate idle delay
elapses, and that idle delay defaults to 7200 seconds at the
tcp_keepalive_time sysctl. With AWS NAT’s 350s timeout, the probes
never start in time.
Go 1.23 introduced net.Dialer.KeepAliveConfig, which lets you set the
idle delay explicitly so the kernel does not depend on tcp_keepalive_time:
    DialContext: (&net.Dialer{
        Timeout: 30 * time.Second,
        KeepAliveConfig: net.KeepAliveConfig{
            Enable:   true,
            Idle:     60 * time.Second, // wait 60s of idleness, then start probing
            Interval: 30 * time.Second, // send a probe every 30s
            Count:    4,                // drop the conn after 4 missed probes
        },
    }).DialContext,
With these numbers, after 60 seconds of silence the kernel starts
sending probes, well before the 350s NAT timeout: the conntrack stays
fresh, the application sees no surprises.
Other levers
http.Transport.IdleConnTimeout set to less than the NAT timeout
forces Go to close idle connections before NAT can drop them. The
next request then dials fresh.
http.Transport.DisableKeepAlives = true opts out of HTTP keep-alive
entirely β every request gets a fresh TCP connection. Simple and
correct, but trades a handshake cost on every request. Reasonable
for low-volume clients; pathological for high-volume ones.
Diagnosing keep-alive problems
When you suspect a keep-alive issue:
Confirm the symptom shape. “Context deadline exceeded” after a
quiet period is the fingerprint of a dropped conntrack. If the
failures happen under load, it’s almost certainly something else.
Check the Transport. Print or log rt.Transport early in your
application; if it is *http.Transport, inspect IdleConnTimeout
and the dialler’s KeepAlive / KeepAliveConfig. Many subtle bugs
vanish at this step.
Use httptrace. The stdlib’s
net/http/httptrace package
surfaces the connection lifecycle β GotConn, PutIdleConn,
ConnectStart, TLSHandshakeStart, etc. When you see GotConn
with Reused: true immediately followed by a hang, you have caught
a stale pooled connection. (Future runtime versions may surface
this via a built-in helper; see the roadmap.)
On Linux, inspect kernel state. ss -t -o shows the keepalive
timer for each active socket; cat /proc/sys/net/ipv4/tcp_keepalive_*
shows the kernel defaults; conntrack -L (where available) shows
the NAT side.
tcpdump on the client. Look for outbound packets with no
inbound response after the symptom appears. Confirms the NAT-drop
hypothesis.
Server-side, briefly
A server’s keep-alive behaviour is governed by
http.Server, not by anything in
the runtime middleware:
Server.IdleTimeout β how long a kept-alive connection waits for the
next request before the server closes it.
Server.ReadHeaderTimeout, ReadTimeout, WriteTimeout β bound the
time spent on individual phases; expiry closes the connection.
The runtime’s server middleware does not override these. If your server
sits behind a NAT or load balancer with an idle timeout, set
Server.IdleTimeout to a value below that timeout so the server
proactively closes idle connections; clients on Go will simply dial
again on their next request without surfacing an error.
Recipe β Runtime for cloud / NAT environments
The construction below is the conservative starting point for a client
deployed in AWS, GCP, or behind any stateful network appliance with an
idle timeout. Adjust the timing constants if you have measurements; do
not adjust them on intuition alone.
    package main

    import (
        "net"
        "net/http"
        "time"

        "github.com/go-openapi/runtime/client"
    )

    func newClient(host, basePath string) *client.Runtime {
        rt := client.New(host, basePath, []string{"https"})
        rt.Transport = &http.Transport{
            Proxy: http.ProxyFromEnvironment,
            DialContext: (&net.Dialer{
                Timeout: 30 * time.Second,
                // Go 1.23+: explicit idle delay; bare KeepAlive=30s is
                // not enough on Linux because the kernel idle default
                // (tcp_keepalive_time) is often 7200s.
                KeepAliveConfig: net.KeepAliveConfig{
                    Enable:   true,
                    Idle:     60 * time.Second,
                    Interval: 30 * time.Second,
                    Count:    4,
                },
            }).DialContext,
            ForceAttemptHTTP2:     true,
            MaxIdleConns:          100,
            IdleConnTimeout:       60 * time.Second, // < AWS NAT's 350s
            TLSHandshakeTimeout:   10 * time.Second,
            ExpectContinueTimeout: 1 * time.Second,
        }
        return rt
    }
On Go versions before 1.23, replace the KeepAliveConfig block with
KeepAlive: 30 * time.Second and rely on the IdleConnTimeout to evict
pooled connections before NAT does. The kernel keepalive probes may
or may not fire in time depending on tcp_keepalive_time, but at least
your idle pool is self-policing.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.