In “Towards a standard for bearer token URLs”, I described a URL scheme that can be safely used to incorporate a bearer token (such as an OAuth access token) into a URL. That blog post concentrated on the technical details of how that would work and the security properties of the scheme. But as Tim Dierks commented on Twitter, it’s not necessarily obvious to people how you’d actually use this in practice. Who creates these URLs? How are they used and shared? In this follow-up post I’ll attempt to answer that question with a few examples of how bearer URLs could be used in practice.

Password reset links
A classic use-case for URLs with an authorization token embedded in them is the password reset link that gets sent to your email address when you ask to reset your password on some online service. This is a simple way of confirming that the person requesting the reset at least has access to your email inbox. (I won’t go into the security pros and cons of this, beyond noting that it is a very widespread pattern.) Typically such a link looks something like
https://foo.example/password_reset?token=k9cpfoivh2yxQg
With widespread support for bearer URLs this would simply become
bearer://k9cpfoivh2yxQg@foo.example/password_reset
This is already adding security benefits as the token will be sent in an Authorization header when the link is clicked, rather than in an easily-leaked query parameter. Nothing else about the scenario has to change: the server still generates tokens exactly as it currently does and processes them the same way.
The same applies to other scenarios that already use unguessable URLs for (partial) access control, such as Google Drive link sharing or Dropbox chooser/saver expiring file links. In a world where bearer URLs were widely implemented (or a good polyfill were available), these services could simply move to using them instead of encoding authorization tokens directly into https URLs, and benefit from the security advantages without really changing the fundamental architecture at all.
OAuth
Many people already use OAuth to control access to APIs and other resources on the web, so it makes a good example of how bearer URLs would work in practice when access is not already URL-based. With OAuth, a client (such as a JavaScript single page app) obtains consent from a user to access their data and gets an access token that it can then use to call APIs to retrieve that data. In the most basic integration of bearer URLs, the Authorization Server (AS) would return a bearer URL instead of (or as well as) a “bare” access token, something like the following:
{
  "token_type": "bearer_url",
  "access_token": "fe9CBsDahU_e9w",
  "bearer_url": "bearer://fe9CBsDahU_e9w@api.somewhere.example/users/me",
  "scope": "view_profile",
  "expires_in": 3600
}
The client can then simply issue a GET request to that URL and its bearer-URL-aware HTTP client library will extract the access token and send it as an Authorization: Bearer header automatically. If web browsers adopted the bearer URL scheme then using OAuth to make API calls becomes as simple as just fetch(bearerUrl).then(...) (or your fancy-pants async/await version).
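Until browsers do adopt the scheme, a small wrapper can act as the polyfill. Here is a minimal sketch (all names are illustrative, nothing here is a standardised API) that translates a bearer URL into the equivalent https request plus Authorization header, then delegates to fetch:

```javascript
// Sketch of a tiny bearer-URL polyfill. Pure translation is split out
// so it can be reused (and tested) independently of the network call.
function translateBearerUrl(url, options = {}) {
  const u = new URL(url);
  if (u.protocol !== 'bearer:') return { url, options }; // pass through untouched
  const token = decodeURIComponent(u.username); // userinfo carries the token
  // Assumes plain-object headers for brevity; a real polyfill would
  // also handle Headers instances and arrays.
  const headers = { ...(options.headers || {}), Authorization: `Bearer ${token}` };
  return {
    url: `https://${u.host}${u.pathname}${u.search}`,
    options: { ...options, headers },
  };
}

function bearerFetch(bearerUrl, options) {
  const { url, options: opts } = translateBearerUrl(bearerUrl, options);
  return fetch(url, opts);
}
```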
That’s the most basic use-case. Hopefully it’s clear that use of bearer URLs here doesn’t really change how things like OAuth operate at a fundamental level. This is a conscious design principle of the bearer URL scheme: adopting it shouldn’t require an enormous shift in architecture or complete rewrite of all the things. I’m not advocating that we boil the oceans and radically reinvent all web security overnight.
Multiple APIs = multiple URLs
In many cases, an access token authorizes access to multiple different APIs. For example, Google Cloud docs list hundreds of APIs that a client can be authorized to access, each with one or more associated OAuth scopes. In principle you can get a single access token that authorizes access to every single one of these APIs. Although this is unlikely in the case of Google Cloud Platform, it is certainly the case that developers often prefer to obtain a single access token that grants access to multiple APIs to avoid having to juggle multiple tokens and keep track of which token is meant for which API. To put it mildly, this is not great from a security point of view.
For example, it is quite common to request OpenID Connect scopes in addition to other API scopes in a single request, allowing the client to authenticate the user and gain access to APIs in a single call. But this means that every API they call with that access token can turn around and use it to access the user’s profile information from the OIDC UserInfo endpoint. Maybe this is intended, but it’s more often accidental. You can solve this problem by getting multiple finer-scoped access tokens, for example using a background refresh token flow (you request every scope in the authorization call and then use the refresh token to get individual access tokens for subsets of that scope). My colleagues at ForgeRock created a nice JavaScript library to do just this and handle the necessary juggling of access tokens for each API endpoint.
With bearer URLs, the token encoded into the URL is only intended to be used to access the one particular endpoint identified by the URL. So, if access is granted to multiple APIs then the AS would have to return multiple bearer URLs. On the face of it, this sounds like a nightmare for a developer: they’d be forced to juggle all these URLs for different APIs. But if you think about it, developers already have to juggle different URLs for different APIs because separate APIs almost always have different URLs anyway. There is also often a very clear relationship between OAuth scopes and API URLs. For example, if you look again at the Google scope list, you can see that most of the scopes are URLs (or at least URL-like), and each scope grants access to a small handful of related API URLs—often just one URL. It would be quite natural for the AS to return a bearer URL for each scope, granting access to that particular API:
{
  "token_type": "bearer_url",
  "expires_in": 3600,
  "bearer_urls": {
    "photos": "bearer://XeZu9vU9uV9A5Q@photoslibrary.googleapis.com/v1/albums",
    "contacts": "bearer://j7GN-vJyI9vy8Q@people.googleapis.com/v1/people/me/connections",
    ...
  }
}
From a developer’s point of view, this seems more convenient: not only do I get finer-grained access tokens (better for security), but the AS has actually told me where these APIs are, so I don’t need to go trawling through voluminous API documentation. I just have to follow the links. (You can see an idea similar to this in GNAP’s support for returning multiple access tokens).
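From the client's side, the "juggling" collapses to a map lookup. A minimal sketch, assuming a token response shaped like the (hypothetical) example above:

```javascript
// Sketch: with one bearer URL per scope, "which token goes with which
// API" stops being a question — the client indexes the map by scope.
// The response shape here is hypothetical, mirroring the example above.
const tokenResponse = {
  token_type: 'bearer_url',
  expires_in: 3600,
  bearer_urls: {
    photos: 'bearer://XeZu9vU9uV9A5Q@photoslibrary.googleapis.com/v1/albums',
    contacts: 'bearer://j7GN-vJyI9vy8Q@people.googleapis.com/v1/people/me/connections',
  },
};

function apiUrlFor(scope, response = tokenResponse) {
  const url = response.bearer_urls[scope];
  if (!url) throw new Error(`no access granted for scope: ${scope}`);
  return url; // pass straight to a bearer-URL-aware fetch
}
```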
Communicating access through hyperlinks
You may be thinking, what if the access token grants access to a large collection of URLs? Good question! For example, suppose Dropbox wanted to provide bearer URL access to individual files in the user’s storage. Surely the AS can’t return hundreds of URLs in the response, one for each file? This sounds really messy.
But if you think about it, web developers already have a solution for this problem. If I want to access a user’s files I will first access a top-level URL that lists all the files, say https://api.dropboxapi.com/2/files/list_folder and then that will return details of the individual files and URLs to access those files (or enough information to be able to construct such URLs). The REST aficionados have a (terrible) name for this: HATEOAS, which is really just the idea that navigating an API should be driven by links rather than knowledge of API endpoints “baked-in” to every client. This has obvious benefits in terms of loose coupling between clients and servers.
In a world where the authority to access a resource comes from a bearer URL, then Dropbox would return bearer URLs as the links to access individual files. So the client would get a single bearer URL back from the AS that provides access to the list_folder endpoint. The response from that endpoint would then contain further bearer URLs to access individual files. I go into more detail about this in chapter 9 of my book, with worked code examples (a semi-shameless plug but I did spend a lot of time thinking through those examples).
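The client-side navigation loop can be sketched in a few lines. Here `bearerFetch` stands in for any bearer-URL-aware HTTP client, and the response shape (an `entries` array with `link` fields) is invented for illustration, not Dropbox's actual API:

```javascript
// Sketch of link-driven (HATEOAS-style) navigation: the client starts
// from one bearer URL obtained from the AS and only ever follows links
// found in responses — no baked-in endpoint knowledge, no token juggling.
async function downloadAllFiles(listFolderUrl, bearerFetch) {
  const listing = await bearerFetch(listFolderUrl).then(r => r.json());
  const files = [];
  for (const entry of listing.entries) {
    // entry.link is itself a bearer URL minted by the server
    files.push(await bearerFetch(entry.link).then(r => r.text()));
  }
  return files;
}
```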
Where do the tokens come from?
So how does the API construct these new bearer URLs? Won’t it need to keep calling back into the AS to get new access tokens? Will the user be involved every time?
In short, no. In the simplest implementation of this approach a single access token continues to provide access to all files in a particular folder. When creating the links for access to each individual file, the server simply copies the same access token from the request into the bearer URL for the file. For example, the request:
GET /v1/files/list HTTP/1.1
Authorization: Bearer F28VKjjh8NbY3w
...
Results in a response that looks something like this:
<ul>
  <li><a href="bearer://F28VKjjh8NbY3w@api.example/v1/files/abc">/abc</a></li>
  <li><a href="bearer://F28VKjjh8NbY3w@api.example/v1/files/def">/def</a></li>
  ...
</ul>
The same token is used to authorize access to every file, which is how most OAuth deployments and session cookies work today.
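The server-side logic really is just string copying. A minimal sketch (host and paths illustrative):

```javascript
// Sketch of the simplest implementation: copy the token from the
// incoming Authorization header into each generated link, unchanged.
function renderFileLinks(authorizationHeader, fileNames) {
  const token = authorizationHeader.replace(/^Bearer\s+/i, '');
  return fileNames
    .map(name => `  <li><a href="bearer://${token}@api.example/v1/files/${name}">/${name}</a></li>`)
    .join('\n');
}
```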
If you’ve done web development for a while, you may recognise this as a similar pattern to the URL rewriting done by frameworks like Java Servlets (JSESSIONID) or PHP (PHPSESSID) to keep track of a user session without cookies. These approaches are discouraged these days because of the risk of session fixation. Session fixation attacks don’t apply to the intended use of bearer URLs because the token in the URL doesn’t establish (or alter) any session state on the server. Anything the attacker could gain access to by getting a user to click on a link they could just as easily access themselves through the same link. But it does highlight that if a token is used in a bearer URL then the access granted by that token should never increase after the token is created.
A more sophisticated implementation would generate fresh tokens for each of the links created in response to a request. Each new token would provide access to just that one resource (file). This is more secure because if any one bearer URL leaks or is stolen then the access gained by the attacker is more limited, whereas in the previous implementation an attacker could extract the token and craft different URLs using it. Creating new tokens can be expensive, but you can use techniques such as Macaroons to derive more restricted tokens from the request token, as I previously described. For example, if the original token provides read-only access to every file in a particular folder then the server can derive tokens for each individual file by appending a caveat restricting access to that one file, as is the case in these two example URLs:
bearer://AgEAAgq95Q1piHqQbfivAAAGIFH5JdHAnA4RaixMr3O5ae804ifUh3AjnS8d3NvrbiOQ@api.example/v1/files
bearer://AgEAAgq95Q1piHqQbfivAAIIZmlsZT1mb28AAAYgC37K5UqxCwgTGg8mLgihTdoKmczYvKxVi46CMyXlOH4@api.example/v1/files/foo
The second URL was efficiently derived from the first without needing access to any shared secret key or database access. Although by no means essential to the use of bearer URLs, Macaroons do fit remarkably well with this approach.
Summary
Hopefully this blog has clarified how bearer URLs could be incorporated into existing or new web applications. In many cases only very minimal changes would be needed, with raw bearer tokens replaced with bearer URLs to gain the security benefits described in the last blog post. In some cases, such as password reset links, they could be adopted with hardly any changes at all.
New frameworks could be developed that offer more comprehensive integration of bearer URLs along the lines I’ve described above. Such frameworks would be closer to implementing the capability-security discipline of pioneering frameworks like Waterken, and I believe this would be a positive development for the web. But I’ve refrained from using that terminology in these two articles, because I think bearer URLs have many positive security and usability advantages even if you completely reject the arguments for capability security, or think it is too radical a change from existing security architectures. My aim is to show that bearer URLs are flexible and can complement and enhance mainstream approaches to application security too.