OAuth 2.0 & OIDC
Hardening OAuth Implementations Against Redirect and Scope Exploits
Technical deep dive into preventing common vulnerabilities such as open redirects, scope escalation, and cross-site request forgery in OAuth flows.
Securing the Redirection Flow and Preventing Token Theft
The redirection URI is the most critical link in the OAuth 2.0 chain because it dictates where sensitive credentials travel after a user grants consent. If an attacker can manipulate this destination, they can effectively hijack the entire authorization process to steal codes or tokens. This is not just a theoretical concern but a frequent point of failure in modern distributed systems where multiple subdomains and callback endpoints coexist.
An open redirect vulnerability occurs when an authorization server accepts a destination URI that is not strictly validated against a pre-registered list. Attackers exploit this by crafting a login link that points to the legitimate authorization server the victim already trusts but carries a tampered redirect parameter. After the user logs in, the server blindly sends the authorization code to a malicious endpoint controlled by the attacker.
```javascript
// Define a strict whitelist of allowed callback URLs
const allowedRedirects = [
  "https://app.example.com/auth/callback",
  "https://api.example.com/v1/login/callback"
];

function validateRedirectUri(inputUri) {
  // Always use exact string matching to prevent path traversal or subdomain attacks
  if (!allowedRedirects.includes(inputUri)) {
    throw new Error("Invalid redirect URI detected. Potential hijacking attempt.");
  }

  // Ensure the URI uses HTTPS to protect the code in transit
  if (!inputUri.startsWith("https://")) {
    throw new Error("Insecure callback protocol. HTTPS is mandatory.");
  }

  return true;
}
```
The primary defense against these attacks is an exact-match whitelist on the authorization server. Developers should never allow wildcards or partial matches in redirect URI registration. Even a small error, such as allowing a wildcard for subdomains, can enable an attacker to host a malicious script on a less secure subdomain and capture credentials intended for the main application.
The Danger of Wildcards and Subdomain Takeovers
Wildcards in redirect URIs are often used for developer convenience, but they create a massive security hole. If an application allows any subdomain of example.com, an attacker only needs to find one vulnerable subdomain or perform a subdomain takeover to intercept traffic. Once they control a valid path within the allowed pattern, the authorization server will treat their malicious callback as trusted.
Strict validation must happen on the server side at the moment the authorization request is received. The server compares the provided URI character by character against the stored configuration. If there is even a single character difference, such as a trailing slash or a different port, the request must be rejected to ensure the code never leaves the trusted perimeter.
Handling URI Fragments and Query Parameters
Attackers sometimes attempt to append fragments or extra query parameters to a valid redirect URI to smuggle data. While fragments are not sent to the server, they can be read by malicious JavaScript on the client side if the redirect lands on a page with an existing Cross-Site Scripting vulnerability. This is why the callback page itself must be a clean, dedicated endpoint with minimal logic.
A dedicated callback endpoint should only be responsible for receiving the code and passing it to the backend. It should not perform any complex navigation or state changes based on additional URL parameters. By keeping the callback logic isolated, you reduce the surface area available for attackers to exploit during the sensitive moment when the authorization code is present in the browser address bar.
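As an illustration, such a callback can be reduced to a single plain function so the logic stays framework-agnostic. This is a minimal sketch; `forwardToBackend` is a hypothetical stand-in for whatever mechanism hands the code to your backend exchange:

```javascript
// Minimal callback logic: read the authorization code from the query string
// and hand it straight to the backend. Extra query parameters and fragments
// are deliberately ignored, so there is nothing here for an attacker to steer.
function handleCallback(callbackUrl, forwardToBackend) {
  const code = new URL(callbackUrl).searchParams.get("code");
  if (!code) {
    throw new Error("Missing authorization code in callback");
  }
  return forwardToBackend(code);
}
```

Because the handler does nothing except extract one parameter and delegate, there is no navigation logic or state change for injected parameters to exploit.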
Mitigating Cross-Site Request Forgery with State and Nonce
Cross-Site Request Forgery in OAuth happens when an attacker initiates an authorization flow and tricks a victim into completing it. This results in the attacker account being linked to the victim session, or vice versa, which can lead to severe account takeover scenarios. To prevent this, we must ensure that the person who started the request is the same person who finishes it.
The state parameter acts as a unique, cryptographically secure correlation ID that binds the initial request to the final callback. Before redirecting the user to the authorization server, the client generates a random value and stores it in a secure session cookie. This value is then passed to the server, which returns it unchanged once the user provides consent.
- Generate a state value with at least 128 bits of entropy using a secure random number generator.
- Store the state value in a session cookie that is marked as HttpOnly and Secure to prevent client-side access.
- Compare the state parameter returned in the callback to the value stored in the session cookie before processing the code.
If the state values do not match or if the state parameter is missing from the callback, the application must immediately abort the process. This check ensures that the authorization response was actually triggered by the user currently interacting with the application. Without this validation, your application is vulnerable to session fixation attacks where an attacker controls the resulting authenticated state.
State vs. Nonce in OpenID Connect
While the state parameter is used for general CSRF protection in OAuth, OpenID Connect introduces the nonce parameter specifically for the ID Token. The state parameter protects the flow itself, but the nonce binds the resulting ID Token to the client session. This prevents replay attacks where an old token could be resubmitted to start a new session.
The nonce is included in the initial request and is then embedded inside the signed ID Token payload by the identity provider. When the client receives the ID Token, it must verify that the nonce inside the token matches the one generated at the start of the flow. This provides an additional layer of cryptographic proof that the token was generated for this specific login attempt.
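A minimal sketch of that check follows. Signature verification is assumed to be handled first by a proper JOSE library; `decodeJwtPayload` below only splits and decodes the token, it does not verify anything:

```javascript
// Decode the payload segment of a JWT (header.payload.signature) WITHOUT
// verifying it -- signature verification must already have succeeded.
function decodeJwtPayload(idToken) {
  const payloadSegment = idToken.split(".")[1];
  return JSON.parse(Buffer.from(payloadSegment, "base64url").toString("utf8"));
}

// Confirm the nonce claim matches the value generated at the start of the flow
function verifyNonce(idTokenPayload, expectedNonce) {
  if (!idTokenPayload.nonce || idTokenPayload.nonce !== expectedNonce) {
    throw new Error("ID Token nonce mismatch: possible replay attack");
  }
  return true;
}
```

If the nonce is absent or differs, the token must be rejected even when its signature is valid, because a validly signed token from an earlier session is exactly what a replay attack presents.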
Advanced Interception Defense with PKCE
The Proof Key for Code Exchange, or PKCE, was originally designed for mobile and single-page applications that could not securely store a client secret. However, it is now recommended for all client types as a defense-in-depth measure against authorization code injection. PKCE ensures that only the application that initiated the authorization request can exchange the code for an access token.
In a standard flow, a code is sent through the browser and can be intercepted by malicious apps or browser extensions. PKCE mitigates this by requiring the client to create a dynamic secret, called the code verifier, for every request. The client sends a hashed version of this secret, called the code challenge, at the beginning of the flow, but keeps the raw verifier hidden until the token exchange step.
PKCE transforms the authorization code from a standalone credential into a transaction-specific key that is mathematically bound to the original requester. It effectively eliminates the risk of code interception by unauthorized intermediaries.
```javascript
// Helper (example implementation): base64url-encode an ArrayBuffer
// using the URL-safe alphabet with padding stripped
function base64UrlEncode(buffer) {
  return btoa(String.fromCharCode(...new Uint8Array(buffer)))
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

// Helper (example implementation): generate a URL-safe random string
// using the Web Crypto API available in browsers and modern Node
function generateRandomString(length) {
  const bytes = crypto.getRandomValues(new Uint8Array(length));
  return base64UrlEncode(bytes.buffer).slice(0, length);
}

// 1. Generate a high-entropy random string (code verifier)
const codeVerifier = generateRandomString(64);

// 2. Hash the verifier using SHA-256 to create the code challenge
const hashedVerifier = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(codeVerifier));
const codeChallenge = base64UrlEncode(hashedVerifier);

// 3. Include the challenge and method in the authorization request
const authUrl = `https://auth.example.com/authorize?` +
  `response_type=code&` +
  `client_id=MY_CLIENT_ID&` +
  `code_challenge=${codeChallenge}&` +
  `code_challenge_method=S256`;

// 4. Later, send the raw verifier to the token endpoint to prove ownership
// POST /token grant_type=authorization_code&code=XYZ&code_verifier=THE_RAW_STRING
```
Protecting Against Code Injection
By using PKCE, the authorization server can verify the identity of the requester during the token exchange without needing a static client secret. When the client requests a token, it provides the raw code verifier. The server hashes this verifier using the same algorithm specified earlier and checks if it matches the challenge stored during the authorization step.
This mechanism prevents an attacker from using a stolen authorization code because they would not have the corresponding code verifier secret. Since the verifier never travels through the browser during the first step, it remains safe from common interception vectors like operating system logs or malicious browser plugins.
Scope Integrity and the Principle of Least Privilege
Scopes are the mechanism used to define the boundaries of what a client can do on behalf of a user. A common security pitfall is requesting overly broad scopes, such as administrative access when only read access is required. This increases the damage an attacker can do if they successfully compromise an access token.
Scope escalation occurs when a client attempts to request a higher level of access than it was originally granted. The authorization server must strictly enforce the allowed scopes for each client ID and ensure that a user cannot inadvertently grant more permissions than the application actually needs. Developers should implement a policy of least privilege, requesting only the specific scopes required for the current task.
- Define granular scopes rather than broad ones to limit the impact of token leakage.
- Verify that the scopes requested by the client match the scopes pre-authorized during registration.
- Always inspect the scopes of the incoming token on the resource server before allowing access to protected data.
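As a sketch of the second point above, the authorization server can compare every requested scope against the client's registered policy before showing a consent screen. The client ID and scope names here are purely illustrative:

```javascript
// Registered scope policy per client (illustrative data)
const registeredScopes = {
  "client-abc": ["profile:read", "orders:read"]
};

// Accept the request only if every requested scope was pre-authorized
// at registration; unknown clients are rejected outright.
function validateRequestedScopes(clientId, requestedScopes) {
  const allowed = registeredScopes[clientId] || [];
  return requestedScopes.every((scope) => allowed.includes(scope));
}
```

Rejecting over-broad requests at this point means a compromised or misbehaving client cannot even present an escalated consent prompt to the user.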
It is not enough for the authorization server to check scopes; the resource server must also validate them. Every API endpoint should verify that the access token provided contains the exact scope required to perform the requested action. This second layer of validation ensures that even if a token is obtained through a secondary vulnerability, it cannot be used to access unauthorized resources.
Dynamic Scope Validation on the Resource Server
Resource servers should treat the access token as an opaque string or a structured object that contains permissions data. When an API receives a request, it must decode or introspect the token to find the scopes array. Logic must then be applied to confirm that the requested operation, such as deleting a record, is covered by the scopes present in that specific token.
If a token is found to have insufficient scopes, the resource server must return a 403 Forbidden error. This error should clearly indicate that the problem is a lack of permission, which allows the client application to re-trigger the authorization flow with the correct scopes if necessary. Consistent enforcement across all microservices is essential to maintain a robust security perimeter.
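A minimal sketch of such an enforcement check follows; the operation names and scope strings are illustrative, and a real deployment would hang this off each route handler or a shared middleware:

```javascript
// Map each protected operation to the scope it requires (illustrative)
const requiredScopeFor = {
  "GET /records": "records:read",
  "DELETE /records": "records:delete"
};

// Decide the HTTP status for a given operation and token scope list:
// 200 when the required scope is present, 403 when it is missing.
function authorize(operation, tokenScopes) {
  const required = requiredScopeFor[operation];
  if (!required) return 404; // unknown operation
  return tokenScopes.includes(required) ? 200 : 403;
}
```

Because the mapping is data rather than scattered conditionals, the same table can be shared across microservices to keep enforcement consistent.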
