When this service receives a request, it tries to authenticate the request's sender, and informs the requestor of that attempt's outcome.
The determination of the request's sender works by checking three things (order TBD):

- JWT ID token from the `Authorization: Bearer XY` header
- service account credentials from TBD (HTTP basic/digest auth? some header?)
- API client credentials from TBD (HTTP basic/digest auth? some header?)
  - First pass of this is the client sending `Client-Id` and `Client-Encoding` headers, where the encoding is the client-side encoding of the client id, to be checked server-side using the shared secret (e.g. `curl -v -H "Client-ID: SomeClient" -H "Client-Encoding: correctlyencodedclientid" http://localhost:8080/authenticate`); see the sketch after this list.
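The encoding scheme itself is still TBD; as a placeholder, here's a minimal sketch of the server-side check, assuming a hex-encoded HMAC-SHA256 of the client id under the shared secret (function and parameter names are made up):

```go
package authn

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
)

// validClientEncoding reports whether encoding is the expected client-side
// encoding of clientID under the shared secret. The concrete scheme is TBD;
// this sketch assumes a hex-encoded HMAC-SHA256.
func validClientEncoding(clientID, encoding string, secret []byte) bool {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(clientID))
	expected := hex.EncodeToString(mac.Sum(nil))
	// hmac.Equal compares in constant time, avoiding a timing side channel.
	return hmac.Equal([]byte(encoding), []byte(expected))
}
```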
A success response includes headers indicating which sender was determined:
```
X-Requestor: user:alice
X-Requestor: service:inspec/dc-west-1
X-Requestor: client:alice/profile-uploader-ci
```
Note that the exact naming scheme is TBD as it is what identifies users with our authz-service.
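For illustration, here's a sketch of how the service could answer such a request; the `authenticate` helper is a hypothetical stand-in for the real sender determination (JWT, service account, or client credentials):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// authenticate is a hypothetical stand-in for the real sender
// determination; it only sketches the API client case.
func authenticate(r *http.Request) (requestor string, ok bool) {
	if id := r.Header.Get("Client-Id"); id != "" {
		// left out: verify Client-Encoding using the shared secret
		return "client:" + id, true
	}
	return "", false
}

func main() {
	http.HandleFunc("/authenticate", func(w http.ResponseWriter, r *http.Request) {
		requestor, ok := authenticate(r)
		if !ok {
			// the nginx example below checks for this FORBIDDEN case
			http.Error(w, "authentication failed", http.StatusForbidden)
			return
		}
		// tell the caller which sender was determined
		w.Header().Set("X-Requestor", requestor)
		fmt.Fprintln(w, "OK")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```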
What follows are explanations of why things are done the way they are.
One way this could be used is via nginx's `access_by_lua_*` functionality: a Lua block would be put into its configuration that is called on each request, determines if the request is "allowed" (in this case, only authentication matters), and passes it on:
```nginx
location / {
    access_by_lua '
        ngx.req.read_body()
        -- /authn is an (internal) location proxying to the authn-service
        local res = ngx.location.capture("/authn")
        if res.status == ngx.HTTP_OK then
            -- left out: process response body
            ngx.log(ngx.CRIT, res.body)
            -- take the header from the response, put it into the request
            -- that is forwarded upstream
            ngx.req.set_header("x-authn-response", res.header["x-requestor"])
            return
        end
        if res.status == ngx.HTTP_FORBIDDEN then
            ngx.exit(res.status)
        end
        ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
    ';
    # proxy_pass/fastcgi_pass/postgres_pass/...
}
```
It's conceivable that we don't end up using nginx, so this service might become a proxy itself.
⏩ It is a proxy now, too!
While looking for ways to reconcile our initial plan (see the Request flow issue) with the planned deployment model of Automate 2.0, which puts an emphasis on a fairly limited, low-complexity frontend service (Træfik), the question came up whether we might just do the proxying ourselves.
Since it was fairly easy to get up and running in the service, and it results in a deployment with low complexity (no nginx, no upstreams, no location lines with regexp matches, no Lua handlers...), it was decided that it's at least worth trying. From the perspective of authn-service's code, it doesn't matter much, so we keep both interfaces alive for now -- with the intention to kill the one we don't end up using later on.
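In that proxying mode, the equivalent of the Lua block above could look roughly like this -- a sketch using Go's net/http/httputil, with an illustrative upstream address and the same hypothetical `authenticate` helper as above:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// illustrative upstream; in reality this would come from configuration
	upstream, err := url.Parse("http://localhost:10121")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		requestor, ok := authenticate(r) // sender determination as above
		if !ok {
			http.Error(w, "authentication failed", http.StatusForbidden)
			return
		}
		// same contract as with nginx: downstream services see one
		// special header identifying the requestor
		r.Header.Set("X-Requestor", requestor)
		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", handler))
}

// authenticate is the same hypothetical stand-in as above.
func authenticate(r *http.Request) (string, bool) {
	if id := r.Header.Get("Client-Id"); id != "" {
		return "client:" + id, true
	}
	return "", false
}
```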
There are two reasons for having this service in the first place.
For one thing, it's meant as a simplification for the backend services -- to verify the JWT ID token, you need to synchronize the OIDC provider's public key set, and besides that, there are a few things that can go wrong. Having one place that does the ID token verification (correctly) seems preferable.
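For a sense of what's involved, here's roughly what that verification looks like with the coreos/go-oidc package (issuer URL, client ID, and token are placeholders); the provider handle takes care of fetching and caching the OIDC provider's public key set:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/coreos/go-oidc"
)

func main() {
	ctx := context.Background()

	// placeholder issuer URL for the Dex deployment
	provider, err := oidc.NewProvider(ctx, "https://dex.example.com")
	if err != nil {
		log.Fatal(err) // e.g. discovery endpoint unreachable
	}
	// placeholder OIDC client ID
	verifier := provider.Verifier(&oidc.Config{ClientID: "automate-api"})

	// the raw token would come from the Authorization: Bearer header
	authz := "Bearer eyJ..." // placeholder
	rawIDToken := strings.TrimPrefix(authz, "Bearer ")

	// Verify checks the signature (against the synchronized key set),
	// issuer, audience, and expiry -- the things that can go wrong
	idToken, err := verifier.Verify(ctx, rawIDToken)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("authenticated subject:", idToken.Subject)
}
```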
The bigger reason, though, is supporting non-human clients and service accounts.
Neither of these is necessarily tied to a human user, and going through Dex's login process would be cumbersome.
An alternative to the approach taken here would have been to make Dex aware of non-human clients, e.g. by adding them as local users and adding a client-credentials flow feature.
However, that's not what that flow is made for (its "clients" are supposed to be OIDC clients, which is different from our use case, I believe). Another alternative would be to have the authz-service take care of the same business.
All things considered, this approach seems preferable because we are decoupled from both the OIDC provider and the authz-service used: we could switch either one without having to re-implement all the ways we'd like to be able to authenticate requests. However, it also means that every downstream service we use (and authz-service may be among them) will have to be able to consume our one special header.