How do you use a bearer URL?

In “Towards a standard for bearer token URLs”, I described a URL scheme that can be safely used to incorporate a bearer token (such as an OAuth access token) into a URL. That blog post concentrated on the technical details of how that would work and the security properties of the scheme. But as Tim Dierks commented on Twitter, it’s not necessarily obvious how you’d actually use this in practice. Who creates these URLs? How are they used and shared? In this follow-up post I’ll attempt to answer those questions with a few examples.

The bearer URL scheme
Continue reading “How do you use a bearer URL?”

Towards a standard for bearer token URLs

In XSS doesn’t have to be Game Over, and earlier when discussing Can you ever (safely) include credentials in a URL?, I raised the possibility of standardising a new URL scheme that safely allows encoding a bearer token into a URL. This makes it more convenient to use lots of very fine-grained tokens rather than one token/cookie that grants access to everything, which improves security. It also makes it much easier to securely share access to individual resources, improving usability. In this post I’ll outline what such a new URL scheme would look like and the security advantages it provides over existing web authentication mechanisms. As browser vendors restrict the use of cookies, I believe there is a need for a secure replacement, and that bearer URLs are a good candidate.

Continue reading “Towards a standard for bearer token URLs”

When a KEM is not enough

In my previous post, I described the KEM/DEM paradigm for hybrid encryption. The key encapsulation mechanism is given the recipient’s public key and outputs a fresh AES key and an encapsulation of that key that the recipient can decapsulate to recover the AES key. In this post I want to talk about several ways that the KEM interface falls short and what to do about it:

  • As I’ve discussed before, the standard definition of public key encryption lacks any authentication of the sender, and the KEM-DEM paradigm is no exception. You often want to have some idea of where a message came from before you process it, so how can we add sender authentication?
  • If you want to send the same message to multiple recipients, a natural approach would be to encrypt the message once with a fresh AES key and then encrypt the AES key for each recipient. With the KEM approach though we’ll end up with a separate AES key for each recipient. How can we send the same message to multiple recipients without encrypting the whole thing separately for each one?
  • Finally, the definition of public key encryption used in the KEM/DEM paradigm doesn’t provide forward secrecy. If an attacker ever compromises the recipient’s long-term private key, they can decrypt every message ever sent to that recipient. Can we prevent this?

In this article I’ll tackle the first two issues and show how the KEM/DEM abstractions can be adjusted to cope with each. In a follow-up post I’ll then show how to tackle forward secrecy, along with replay attacks and other issues. Warning: this post is longer and has more technical details than the previous post. It’s really meant for people who already have some experience with cryptographic algorithms.
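
To make that interface concrete, here’s a minimal sketch of a Diffie-Hellman-based KEM in JavaScript, using Node’s built-in crypto module. It illustrates the shape of the API only, and is not production code: a real KEM would use a proper key-derivation function rather than a bare hash.

const crypto = require('crypto');

// encapsulate: given the recipient's public key, output a fresh AES
// key and an encapsulation of it (here: an ephemeral public key).
function encapsulate(recipientPublicKey) {
    const ephemeral = crypto.generateKeyPairSync('x25519');
    const shared = crypto.diffieHellman({
        privateKey: ephemeral.privateKey,
        publicKey: recipientPublicKey
    });
    // Hash the shared secret down to a 256-bit AES key.
    const aesKey = crypto.createHash('sha256').update(shared).digest();
    return { aesKey, encapsulation: ephemeral.publicKey };
}

// decapsulate: the recipient recovers the same AES key using their
// long-term private key.
function decapsulate(recipientPrivateKey, encapsulation) {
    const shared = crypto.diffieHellman({
        privateKey: recipientPrivateKey,
        publicKey: encapsulation
    });
    return crypto.createHash('sha256').update(shared).digest();
}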

Continue reading “When a KEM is not enough”

XSS doesn’t have to be game over

A message I’m very used to seeing – but does XSS have to mean game over for web security?

There’s a persistent belief among web security people that cross-site scripting (XSS) is a “game over” event for defence: there is no effective way to recover if an attacker can inject code into your site. Brian Campbell refers to this as “XSS Nihilism”, which is a great description. But is this bleak assessment actually true? For the most part yes, but in this post I want to talk about a faint glimmer on the horizon that might just be a ray of sunshine after all.

Continue reading “XSS doesn’t have to be game over”

API Security in Action is published!

I wasn’t expecting it so quickly, so it caught me a little off guard, but API Security in Action is now finally published. PDF copies are available now, with printed copies shipping by the end of the month. Kindle/ePub versions take a little longer, but should be out in a few weeks’ time.

My own print copies will take a few weeks to ship to the UK, and I can’t wait to finally hold it in my hands. That’s a brighter ending to 2020.

At some point I’ll try and collect some thoughts about the process of writing it and my feelings about the finished product. But tonight I’ll settle for a glass (or two) of a nice red. Cheers!

Least privilege with less effort: Macaroon access tokens in AM 7.0

Part 1: Lightweight PoP

One of the major changes between OAuth 1 and OAuth 2 was the decision to drop the requirement that requests had to be signed using a secret key associated with each access token. This signing process was notoriously hard to get right and dropping it from OAuth 2 enormously simplified the life of client and resource server developers. In OAuth 2, access tokens are pure bearer tokens, a bit like passwords: if you know the value of the token you can use it, without needing any other credentials. This is very easy for developers, but is definitely less secure. The rise of single-page apps (SPAs) and other browser-based clients leaves access tokens more exposed to theft or accidental leakage compared to traditional server-side clients.

Drinking the PoP

Signed requests in OAuth 1 are an example of proof-of-possession (PoP) tokens rather than bearer tokens. If bearer tokens are like cash, PoP tokens are like Chip-and-PIN. Even if you steal an OAuth 1 token you can’t use it without the associated secret key used to sign requests. The only problem was it turned out that even if you did have the secret key you often couldn’t use the token either, because it was just too hard to get request signing to work reliably. Some brave souls periodically try and revive this idea.

To get around the problems with request signing, various alternative schemes have been proposed. Brian Campbell gave a nice overview of the history of OAuth PoP schemes at Identiverse recently, called The Burden of Proof. It’s a good talk and worth watching. The latest such scheme is DPoP, which tries to keep things as simple as possible: rather than signing the whole request, you just sign enough basic details of the request (the URI and HTTP method) to be able to stop the most likely attacks and rely on TLS to protect the content of the request.
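
To give a flavour of what’s involved, here’s a rough sketch of a client creating a DPoP proof. It assumes the open-source jose JavaScript library; the claim names (htm, htu, jti) come from the DPoP draft.

import { SignJWT, generateKeyPair, exportJWK } from 'jose';
import { randomUUID } from 'crypto';

// The extra client burden: generate and securely store a key pair.
const { publicKey, privateKey } = await generateKeyPair('ES256');

// Each request needs a fresh proof: a JWT signing just the basic
// details of the request (HTTP method and URI).
const proof = await new SignJWT({
        htm: 'POST',
        htu: 'https://api.example.com/v1/kittens'
    })
    .setProtectedHeader({
        alg: 'ES256',
        typ: 'dpop+jwt',
        jwk: await exportJWK(publicKey)  // public key travels with the proof
    })
    .setIssuedAt()
    .setJti(randomUUID())  // unique ID so proofs can't be replayed
    .sign(privateKey);

// The proof is then sent alongside the access token:
//   DPoP: <proof>
//   Authorization: DPoP <access token>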

DPoP is good, and for some use-cases I think it’s really good, but it still has some downsides:

  • Although the signature process itself is simplified, the client still has to generate and manage this secret key securely, which adds complexity that many client developers will avoid unless they have to.
  • The client, authorization server, and resource servers all have to know about DPoP. It’s quite hard to incrementally deploy.
  • It relies on fresh public key signatures on every request. Public key crypto is slow and expensive, so you generally want to do it as little as possible. I could imagine a rollout of DPoP requiring a significant increase in front-line servers to handle the extra load.
  • DPoP is based on JWS, which supports a wide range of signature algorithms, with no guidance as to which one to use. This means the resource servers have to potentially support them all (including some really expensive ones like ES512), or risk rejecting some clients. Clients will also have to guess which algorithms the server supports when they generate the private key. In practice this means that DPoP will only be used in closed deployments, or else everyone will gravitate to a lowest common denominator option like 2048-bit RSA with PKCS#1 v1.5 padding.
  • Public key signature algorithms are fragile. The point of a signature is that it can be verified by anybody, but in the case of an OAuth request the DPoP token only needs to be verified by exactly one party: the server that you’re sending the request to. There are much simpler and safer ways to achieve request authentication in this case.

Taking the biscuit

OK, so if DPoP is still too complex, what would a better solution look like? The risk that PoP solutions are trying to prevent is an access token being stolen from one request and used to authorize completely different requests that the client didn’t intend. A different way to solve this problem would be to have individual access tokens for every request that just authorizes that one specific request and no others. For example, it could have an audience restriction limiting it to only be used for requests to one specific server, and a single scope that authorizes just the specific type of API call that is being made. The expiry time could be limited to just a few seconds in the future, and so on. In current OAuth implementations it’s pretty hard to restrict an access token to exactly one request, but you can get pretty close.

Of course, this all sounds like a nightmare. The client can’t keep going back to the user to authorize access tokens for every little request! 

It turns out there is a way to get lots of little access tokens for individual requests. It’s easy for developers, relatively simple to deploy, and cheap (much cheaper than digital signatures). It’s also quite delicious.

Chocolate coated macaroons

Enter macaroons

Apart from being a tasty treat, macaroons are also a new token format invented by the boffins at Google. At first glance, a macaroon is an authenticated token a bit like a JWT using the HS256 algorithm. But the additional yummy goodness that macaroons add is the ability to add caveats. A caveat is a restriction on how the token can be used. For example, you might limit the scope of a token, or reduce its expiry time. They always reduce, never expand, the authority of the token. The really cool thing is that adding a caveat gives you a new token, leaving the original token unchanged. It’s also really cheap: a few microseconds typically. This means you can have a single powerful access token and then derive multiple more restricted access tokens from it. This lets us have our cake and eat it (sorry, my metaphors are all over the place today): you can get a single token approved by the user but then derive individual access tokens for every single API call you make, with just the perfect amount of privilege for that one call.
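
Under the hood the construction is beautifully simple. Here’s a simplified sketch of the idea from the macaroons paper (the field names are illustrative): the token’s authentication tag is an HMAC chain, so appending a caveat just re-MACs the previous tag.

const { createHmac } = require('crypto');

// Simplified sketch of the macaroons construction: the tag is an
// HMAC chain, so anyone holding a token can append a caveat, but
// removing one would require the root key held by the issuer.
function addCaveat(macaroon, caveat) {
    return {
        identifier: macaroon.identifier,
        caveats: macaroon.caveats.concat(caveat),
        tag: createHmac('sha256', macaroon.tag).update(caveat).digest()
    };
}

The issuer verifies a macaroon by recomputing the chain from its root key and checking that every caveat is satisfied.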

A picture of macaroon access tokens, showing how new tokens can be derived with reduced scope, expiry, or audience.

This gives us the benefits of having lots of tiny individual access tokens for each request, but we can generate these tokens on the fly from the original token. This is what the original macaroons paper refers to as contextual caveats: specific details of the context in which an API call is taking place that we add to a token to restrict its use. That way, if the token does get stolen or misused, the scope of what you can do with it is severely limited by the caveats attached to it. The genuine client still has the original access token though, so they can still do whatever they like within the scope of the original grant.

If we restrict the tokens enough then they become pretty hard to misuse. We get many of the benefits of a PoP approach, but with much less complexity. Where you might have had code like the following to call an API:

fetch('https://api.example.com/v1/kittens', {
    method: 'POST',
    headers: {
        'Authorization': 'Bearer ' + token
    },
    body: JSON.stringify(kittenDetails)
});

The new macaroon-enhanced version becomes something as simple as the following with an appropriate library:

fetch('https://api.example.com/v1/kittens', {
    method: 'POST',
    headers: {
        // Derive a single-use token, restricted to just this call:
        'Authorization': 'Bearer ' + token.restrict({
            aud: 'api.example.com',              // only this API host
            exp: nowPlus5Seconds(),              // expires in a few seconds
            scope: 'upload_cute_kitten_picture'  // just this one operation
        })
    },
    body: JSON.stringify(kittenDetails)
});
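
The restrict() helper above isn’t a standard API. Here’s a sketch of how you might implement one, assuming the open-source macaroons.js package; the caveat syntax shown is made up for illustration, so check what your authorization server expects.

const { MacaroonsBuilder } = require('macaroons.js');

// Hypothetical helper: derive a restricted copy of a serialized
// macaroon token by appending first-party caveats. The original
// token is left untouched.
function restrict(serializedToken, caveats) {
    let builder = MacaroonsBuilder.modify(
        MacaroonsBuilder.deserialize(serializedToken));
    for (const [name, value] of Object.entries(caveats)) {
        builder = builder.add_first_party_caveat(name + ' = ' + value);
    }
    return builder.getMacaroon().serialize();
}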

Macaroon OAuth tokens in ForgeRock AM 7.0

At ForgeRock we love OAuth and we also love macaroons. We think they make a great combination. OAuth benefits from the simple but flexible approach to least privilege that macaroons enable, and macaroons benefit from the framework and tooling that exists around OAuth. I strongly believe that macaroons provide about 90% of the benefit of PoP for about 20% of the effort.

That’s why we’ve included support for issuing OAuth access and refresh tokens as macaroons in the upcoming 7.0 release of ForgeRock Access Management. So how does it work? Well, mostly you just go into the OAuth2 Provider Settings and turn it on:

Image of toggle button to turn on "Use Macaroon Access and Refresh Tokens"

Now when you complete an OAuth flow you are issued a macaroon access token and an optional refresh token. They look just like normal access tokens, but are a little bit longer:

Screenshot of Postman showing macaroon access and refresh tokens

You can call the standard token introspection endpoint to see the details of the token, just like you could with a normal access token:

Screenshot showing JSON response from the OAuth token introspection endpoint, showing the scope and other token details.

You can then add a bunch of caveats, just like in the example earlier, to reduce the scope, expiry time, and audience, and then introspect the token again to see the difference:

The updated JSON response from the introspection endpoint, showing the reduced scope, expiry, and audience due to the caveats added to the token.

Repeating the token introspection request a few seconds later, the restricted access token has expired:

Screenshot showing the active: false response

But the original access token is still valid. In fact, all the details of the original token in the database (AM’s Core Token Service) are exactly as they were when the token was first issued.
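
For resource servers, nothing changes: they introspect whatever token they receive, restricted or not. Here’s a sketch of a standard RFC 7662 introspection call; the endpoint URL and client credentials are illustrative.

// RFC 7662 token introspection; restrictedToken is the
// caveat-restricted macaroon from earlier.
const response = await fetch('https://openam.example.com/oauth2/introspect', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Authorization': 'Basic ' + btoa('resource-server:secret')
    },
    body: 'token=' + encodeURIComponent(restrictedToken)
});
const details = await response.json();
// details.active becomes false once the caveat-reduced expiry passes,
// even though the original token in the CTS remains valid.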

Everything is handled transparently through the token introspection endpoint, making it really easy to deploy this incrementally:

  1. Turn on macaroon access tokens at the Authorization Server. So long as your resource servers are using token introspection, everything carries on as normal; clients and resource servers need no changes.
  2. Your clients start adding caveats to tokens using a macaroon library. Resource servers carry on introspecting the tokens, but now the introspection responses take the caveats into account.
  3. There is no step 3.

That’s a little taster of macaroon access tokens in ForgeRock AM 7.0 and why I think they will be a game changer. Least privilege with less effort. In part 2, I’ll show you some more exotic things you can do with macaroon access tokens using third-party caveats.

Convergence in web and API security

Every now and then technologies that initially appear to be distinct end up converging on a common approach from opposite directions. I believe that something like that is happening right now in approaches to web and API authentication around the use of tokens and cookies.

Continue reading “Convergence in web and API security”

Can you ever (safely) include credentials in a URL?

Update: an updated version of the ideas in this blog post appears in chapter 9 of my book. You may also like my proposal: Towards a standard for bearer token URLs.

URLs are a cornerstone of the web, and are the basic means by which content and resources are shared and disseminated. People copy and paste URLs into Slack or WhatsApp to share interesting links. Google crawls the web, discovering and indexing such links. But what happens when the page you want to link to is not public and requires credentials in order to view or interact with it? Suddenly a URL is no longer sufficient, unless the recipient happens to already have credentials. Sometimes they do, and everything is fine, but often they do not. If we really do want to give them access, the problem becomes how to securely pass along some credentials with the URL so that they can access the page we have linked.

A commonly desired approach to this problem is to encode the credentials into the URL itself. While convenient, this solution is fraught with danger and frequently results in credentials being exposed in insecure contexts. In this article, we’ll look at various ways to accomplish this and the ways that things can go wrong, and conclude with a set of guidelines for cases where this can be made secure, and can even improve security overall.

Continue reading “Can you ever (safely) include credentials in a URL?”

Public key authenticated encryption and why you want it (Part II)

In Part I, I made the argument that even when using public key cryptography you almost always want authenticated encryption. In this second part, we’ll look at how you can actually achieve public key authenticated encryption (PKAE) from commonly available building blocks. We will concentrate only on approaches that do not require an interactive protocol. (Updated 12th January 2019 to add a description of a NIST-approved key-agreement mode that achieves PKAE).

Continue reading “Public key authenticated encryption and why you want it (Part II)”

Key-driven cryptographic agility

One of the criticisms of the JOSE/JWT standards is that they give an attacker too much flexibility, by allowing them to specify how a message should be processed. In particular, the standard “alg” header tells the recipient which cryptographic algorithm was used to sign or encrypt the contents. Letting the attacker choose the algorithm can lead to disastrous security vulnerabilities.

Continue reading “Key-driven cryptographic agility”