Towards a standard for bearer token URLs

In XSS doesn’t have to be Game Over, and earlier in Can you ever (safely) include credentials in a URL?, I raised the possibility of standardising a new URL scheme that allows a bearer token to be safely encoded into a URL. This makes it more convenient to use lots of very fine-grained tokens rather than one token/cookie that grants access to everything, which improves security. It also makes it much easier to securely share access to individual resources, improving usability. In this post I’ll outline what such a new URL scheme would look like and the security advantages it provides over existing web authentication mechanisms. As browser vendors restrict the use of cookies, I believe there is a need for a secure replacement, and that bearer URLs are a good candidate.

The bearer URL scheme

The basic idea is that instead of using normal https URLs for accessing web resources, you’d instead use a new bearer URL scheme that looks something like this:

bearer://fe9CBsDahU_e9w;UserOnly@api.somewhere.example/some/path?query=yes

In most respects this works exactly like a https URL, except when you follow the link your browser (or REST client) automatically adds an Authorization Bearer header with the token from the URL in it:

GET /some/path?query=yes HTTP/1.1
Host: api.somewhere.example
Authorization: Bearer fe9CBsDahU_e9w
...

That’s pretty much it. The same happens for other HTTP methods like POST, PUT, DELETE, and so on. (I’ll discuss the UserOnly bit later on). In the rest of the post I expand on how this works and why it is awesome.
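To make the mechanics concrete, here is a minimal sketch of what a client might do with a bearer URL. It assumes the syntax shown above (token and optional attributes in the userinfo slot before the `@`); `parse_bearer_url` is a hypothetical helper, since the scheme is not standardised:

```python
from urllib.parse import urlsplit

def parse_bearer_url(url):
    """Split a bearer URL into its token, attributes, and the https URL to fetch.

    Assumes the proposed syntax: bearer://<token>[;attr[;attr...]]@host/path?query
    """
    parts = urlsplit(url)
    if parts.scheme != "bearer":
        raise ValueError("not a bearer URL")
    userinfo, _, host = parts.netloc.rpartition("@")
    token, *attributes = userinfo.split(";")
    # The request itself is always made over HTTPS; plain HTTP is forbidden.
    https_url = parts._replace(scheme="https", netloc=host).geturl()
    return token, attributes, https_url

token, attrs, url = parse_bearer_url(
    "bearer://fe9CBsDahU_e9w;UserOnly@api.somewhere.example/some/path?query=yes")
# The client adds the token as an Authorization header on this one request only.
headers = {"Authorization": f"Bearer {token}"}
```

Note that the token is consumed at request time and never stored: the client derives the header, issues the request, and forgets the credential.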

How is this different to Basic auth?

As I described in the previous blog posts, browsers used to support a syntax for including a username and password in a URL (http://user:pass@foo.example/...) that would then be sent as an Authorization header to authenticate the request. This was fraught with all kinds of problems, so browser vendors removed the syntax. The current proposal avoids the problems of Basic auth URLs in the following ways:

  • Unlike Basic auth, a Bearer token Authorization header only authenticates a single request and creates no ongoing authentication state on either the client or the server. In particular, clients would be forbidden from remembering the token and reusing it on other requests (even to the same origin and path). This contrasts with Basic auth credentials which are remembered by the client and sent on subsequent requests, even to unrelated pages.
  • A username and password grants access to a user’s entire account, whereas a bearer token can be more finely scoped. The intention of bearer URLs is that each token is unique to the particular URL it is attached to, granting only access to that one particular resource. (Exactly what level of access it grants, how long it lasts, or what other conditions are attached to access are up to the application. For example, an application might only allow access if the request is accompanied by a traditional session cookie).
  • To prevent phishing attacks, clients would be discouraged from displaying the token part of the URL in a UI where it might be confused for the authority portion.
  • Bearer URLs only support HTTPS, never plain HTTP. A client is forbidden from attempting to connect to the URL over an insecure connection, reducing the risk of the token being exposed.
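As a sketch of the fine-grained scoping in the second bullet, a server might bind each token to exactly one resource and a set of allowed methods. The token table and policy below are entirely hypothetical (the proposal leaves scoping up to the application); a real implementation should also compare tokens in constant time:

```python
# Hypothetical server-side token table: each token grants access to exactly
# one resource, with the allowed methods chosen at token-creation time.
TOKENS = {
    "fe9CBsDahU_e9w": {"path": "/some/path", "methods": {"GET"}},
}

def authorize(token, method, path):
    """Allow the request only if this token grants this method on this exact path."""
    grant = TOKENS.get(token)
    return grant is not None and method in grant["methods"] and path == grant["path"]
```

A stolen token is then worth far less than a session cookie: it can only do the one thing it was minted for.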

So Bearer URLs are better than Basic auth, but that’s not hard. In the next few sections I’ll sketch out some important security properties of the proposal.

Protection against accidental leakage

Most of the ways to encode a token into a normal HTTPS URL attempt to find places that are least likely to leak accidentally. For example, encoding the token into the fragment is better than putting it in the path component because the fragment is not sent to the server and so won’t show up in server access logs. It’s also not included in Referer headers or the document.referrer field. But if the server performs a redirect to another site (say, to log in with your Google account) then the fragment may be sent along with it, leaking your token to that other site.

These leaks happen because none of these components are really meant for holding credentials, so browsers don’t take any particular care to keep them secret. Web applications also often use these components for their own data, making it awkward to mix with credentials. The one part of a HTTP URL that was intended for holding secrets has been deprecated and removed from most browsers.

A new URL scheme can overcome these problems by providing a component of the URL that is dedicated to storing credentials but free from the problems that plagued earlier schemes. Browsers and other clients can then provide assurances that this part won’t be leaked, and offer quite strong security guarantees, as discussed in the next sections.

Protection against theft

Credentials don’t just leak accidentally, but are also deliberately targeted and stolen. Cross-site scripting (XSS) attacks are often used to steal authentication tokens. Cookies provide a level of protection against theft (“exfiltration”) in the form of the HttpOnly attribute, which prevents the cookie from being accessible to JavaScript at all. This works because the browser itself submits cookies on behalf of the application, so JavaScript doesn’t need to access them. But this is not an option for other types of tokens (such as OAuth access tokens) because the browser doesn’t know how to submit them: JavaScript must have access to these tokens otherwise they are useless.

With a standard bearer URL scheme, the browser will also know how to submit the token when a link is clicked or a form submitted, so the same protections can be applied. When parsing HTML and building the DOM representation, the browser can extract any bearer URLs that it finds and store the tokens away in internal storage that is not accessible to JavaScript. If JavaScript attempts to access the token portion of the URL, then the browser will return an empty string. When the link is clicked (or form submitted) then the browser looks up the real token and adds it to the request.

Support for using bearer URLs from JavaScript could in principle also work in a secure way, if the browser DOM spec was enhanced to pass around URL objects rather than strings. For example, accessing the href attribute on a HTMLAnchorElement could return a URL object, which can then be passed to the fetch API as an object. Such a URL object could then safely encapsulate the bearer token without making it accessible to scripts. Sadly, these JavaScript APIs are all defined in terms of strings currently, but there’s no reason why in principle the spec can’t be extended to use stronger encapsulation. Alternatively, the DOM can return a string form that replaces the real token with an opaque identifier, such as an index into an internal token table. When the URL is used in the fetch API the browser looks up the real token and adds it to the request.
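The opaque-identifier indirection described above can be sketched as a small token vault. This is an illustrative model of the browser-internal mechanism, not an API that exists anywhere; `TokenVault`, `redact`, and `resolve` are all hypothetical names:

```python
import secrets

class TokenVault:
    """Model of the browser-internal indirection: real tokens live in a private
    table, and scripts only ever see an opaque handle."""

    def __init__(self):
        self._tokens = {}  # opaque handle -> real bearer token (never script-visible)

    def redact(self, token):
        """Swap the real token for an opaque identifier when building the DOM."""
        handle = secrets.token_urlsafe(8)
        self._tokens[handle] = token
        return handle

    def resolve(self, handle):
        """Look up the real token; called only when the browser issues the request."""
        return self._tokens[handle]

vault = TokenVault()
handle = vault.redact("fe9CBsDahU_e9w")  # what JavaScript sees in href/fetch URLs
```

JavaScript can pass the handle around freely (into fetch, history, and so on), but exfiltrating it gains the attacker nothing, because the handle is meaningless outside this browser’s vault.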

These mechanisms would ensure that bearer tokens are protected from exfiltration. Unlike cookies, bearer URLs are also immune to CSRF attacks because they are only sent on a single request. And as discussed in XSS doesn’t have to be game over, bearer URLs are also more robust against XSS proxying attacks: the attacker can only perform actions that they can find specific URLs for.

The UserOnly attribute

We can further harden bearer URLs against XSS by adding attributes to the URL that restrict how it can be used, similar to cookie attributes like HttpOnly or Secure. The bearer URL syntax includes an optional attributes section to accommodate these (with essentially the same syntax as Set-Cookie attributes). The only attribute envisioned at present is UserOnly, which instructs clients to allow the URL to be used only by requests initiated from a user gesture, such as a click or form submission. Although far from perfect, such an attribute would make it harder for an XSS attack to proxy arbitrary requests through your browser in the background. Combined with the least-authority nature of bearer URLs, this could significantly reduce the impact of XSS in a way that is natural and easy to explain to web developers.
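The client-side gate this implies is simple. The sketch below assumes the browser tracks whether a user gesture is currently active (as browsers already do for, e.g., popup blocking); `may_dispatch` is a hypothetical name for that check:

```python
def may_dispatch(attributes, user_gesture_active):
    """Hypothetical client-side check: a UserOnly bearer URL may only be used
    by a request initiated from a user gesture (click, form submit, ...)."""
    if "UserOnly" in attributes:
        return user_gesture_active
    return True
```

A background fetch fired by injected script would fail this check, while the same URL clicked by the user would go through.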

Protection against Spectre

We can go further and make bearer URLs robust against even quite advanced attacks like Spectre. HttpOnly cookies can be protected from Spectre by keeping them out of the rendering process entirely. A Spectre attack from JavaScript that completely compromises the memory address space of the rendering process cannot gain access to the cookie because it is never present in that address space.

Similarly, the token from a bearer URL could also be kept out of the renderer address space. This would involve a more complex re-architecture of the browser to parse (or pre-process) the HTML before it reaches the renderer process, but I believe this could be done and would be worth it. As I described earlier, the real bearer tokens can be held in a private data structure that is not accessible to JavaScript: to protect against Spectre we can move this data structure to another process.

This protection is much better if bearer URLs are represented as (unforgeable) opaque URL objects within the DOM rather than strings, but that’s a whole other conversation. Even if you can protect credentials from Spectre, you still need to protect the actual data—which is the important bit, after all. Other mitigations will still be needed.

Update: APIs that return URLs as strings in a format like JSON (or JSON-LD) would be more problematic to protect in this way. One potential approach to this would be to encourage the use of Link headers instead, perhaps with a JavaScript API to provide safe access to them.

Protection against public GitHub commits

A major reason for API keys and other tokens being leaked is through being committed to public Git repositories or other accessible locations. GitHub automatically revokes its own API tokens if it finds them in public repositories through an automated scanning process, and allows other credential providers to sign up. By standardising a URL scheme for bearer tokens, GitHub and other hosting providers would be able to easily discover accidentally exposed credentials for any service. If we then provide an easy standard protocol for revoking those tokens then we can solve this major security gap in a standard way.

A sketch of such a protocol is as follows: given a compromised URL like bearer://<token>@host:port/... the provider (GitHub) makes a POST request to https://host:port/.well-known/bearer-revoke, passing the token in the Authorization header. If the server supports the protocol then it returns a 200 response and arranges for the token to be revoked as soon as possible. Otherwise it returns a 404 or other error, and GitHub tries some other means to inform the user that the token needs to be revoked.
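On the scanner’s side, deriving the revocation call from a leaked bearer URL is mechanical. A minimal sketch, assuming the well-known path proposed above (`revocation_request` is a hypothetical helper, and no standard defines this endpoint today):

```python
from urllib.parse import urlsplit

def revocation_request(bearer_url):
    """Build the revocation call for a leaked bearer URL:
    POST https://host[:port]/.well-known/bearer-revoke with the token
    in the Authorization header, per the protocol sketched above."""
    parts = urlsplit(bearer_url)
    userinfo, _, host = parts.netloc.rpartition("@")
    token = userinfo.split(";")[0]  # drop any attributes such as UserOnly
    endpoint = f"https://{host}/.well-known/bearer-revoke"
    return endpoint, {"Authorization": f"Bearer {token}"}

endpoint, headers = revocation_request(
    "bearer://fe9CBsDahU_e9w;UserOnly@api.somewhere.example/some/path")
```

The scanner never needs service-specific knowledge: the URL itself says where to send the revocation.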

In the case where the token is an OAuth access token, the URL will refer to a resource server (RS) rather than the authorization server (AS) that issued the token. In this case, the RS would forward the revocation request on to the AS.

Update: I’ve since found RFC 8959 that provides a limited solution to the problem of identifying bearer tokens in public repositories through a “secret-token” URI scheme. It’s much more limited in scope than the current proposal though.

Summary

URLs that encode limited-scope bearer tokens are a powerful way to implement fine-grained security in a web application. They offer many benefits in terms of reducing ambient authority in the browser and encouraging the principle of least authority. By standardising a URL scheme for such capability URLs, we can significantly increase the safety of using them. I believe this approach offers significantly better security characteristics than cookies or manual use of OAuth tokens (or JWTs) from JavaScript, and could offer a solution to many thorny issues in current web security, such as the impact of Spectre and the ever-present threat of XSS. It does all of this without needing ever more complex policy-based headers, while remaining very compatible with the fundamental architecture of the web.

I’d be really interested to hear feedback on this design, particularly from browser security teams. This all seems workable from my point of view, but I recognise that I know very little about the constraints that modern browser implementations operate under.

Update: In response to feedback, I’ve written a follow-up article on How do you use a bearer URL? Please see that article if you’ve been left scratching your head on how you’d actually use bearer URLs within an application.

Acknowledgements

I’m not the first person to think up a URL scheme for capability URLs. In particular, the design of this URL scheme benefited from discussion with Christopher Lemmer Webber and Ariadne Conill about their proposal for a similar bearcap URL scheme. Their scheme is more general than mine in supporting protocols other than HTTPS; I have deliberately limited the scope of this proposal.

Update: Ariadne suggested renaming the bearer scheme to bearer+https to make it clearer that it is specialised to https, and pointed out svn+ssh as a prior example. That seems a sensible suggestion to me.

Author: Neil Madden

Founder of Illuminated Security, providing application security and cryptography training courses. Previously Security Architect at ForgeRock. Experienced software engineer with a PhD in computer science. Interested in application security, applied cryptography, logic programming and intelligent agents.
