Fizzgun - A simple and effective HTTP fuzzer¶
Fizzgun (an anagram of Fuzzing) is an HTTP(S) fuzzer that automates negative testing for your API or front-end service by creating mutants of real requests captured from your API tests, Selenium tests, or your manual exploratory browsing sessions. It's very effective and simple to set up.
In a nutshell, it works by accepting valid request samples from a given source; for each request, it generates a set of mutated versions and sends them to the target server. Server responses are then evaluated against expectations, and results are logged when those expectations are not satisfied.
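The evaluation step can be pictured as a simple classifier over the status code of each response to a mutated request. The function below is a minimal illustrative sketch of that idea only, not Fizzgun's actual expectation engine, which is configurable and inspects more than the status code:

```python
def evaluate_response(status_code: int) -> str:
    """Classify the server's response to a deliberately broken request.

    Illustrative sketch only; Fizzgun's real expectations are richer.
    """
    if 500 <= status_code < 600:
        # Bad input should never crash the server
        return "bug"
    if 400 <= status_code < 500:
        # The server correctly rejected the malformed request
        return "ok"
    # A mutated request that succeeds may indicate missing validation
    return "suspicious"
```

Only the "bug" and "suspicious" outcomes would end up in the report; correctly rejected mutants are the expected case.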
Fizzgun's request mutation rules are managed by a set of entities called Bubbles. For instance, there is a bubble that mutates a JSON request into a set of variations by removing one property or value from the original at a time. Fizzgun includes a set of built-in bubbles, but you can also create your own.
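To make the property-removal idea concrete, here is a conceptual sketch in plain Python (Fizzgun's actual bubble API differs; this only illustrates the mutation strategy):

```python
def remove_one_property(payload):
    """Yield one mutant per top-level key, each missing exactly that key.

    Conceptual sketch of a property-removal bubble; not Fizzgun's API.
    """
    for key in payload:
        yield {k: v for k, v in payload.items() if k != key}

original = {"name": "Mr. Bubbles", "team_id": 456, "role": "QE"}
mutants = list(remove_one_property(original))
# One mutant per field, e.g. {"team_id": 456, "role": "QE"} (no "name")
```

A single three-field request thus already produces three mutants; a bubble that also blanks values, swaps types, etc. multiplies that quickly.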
Installation¶
Fizzgun requires Python 3.5 or newer.
```
pip install -U fizzgun
```
Basic Usage¶
Fizzgun's main source of requests is an interception HTTP(S) proxy provided by the great mitmproxy library (which is installed alongside Fizzgun).
After executing `fizzgun run`, a local proxy server is started (by default it binds to `0.0.0.0:8888`).
Then, the most common use cases will fall under one of these scenarios:
- Execute your existing integration tests, but instead of:

  ```
  ./the-command-i-use-to-run-my-tests
  ```

  do:

  ```
  HTTP_PROXY=http://my-session:any-pwd@localhost:8888 ./the-command-i-use-to-run-my-tests
  ```

  Refer to the using SUT tests troubleshooting page if you need further help on this use case.
- Configure your browser to use the proxy server (`127.0.0.1:8888`); when prompted for credentials, enter anything, e.g. `my-session`/`password`. Then navigate your web app under test (see the browser setup troubleshooting page if you need help with this).
For the examples above, Fizzgun will log all the bugs found to a `reports/my-session.txt` file.
Read through the docs to learn more, or read the next section if you are still not convinced.
I don't buy it... Why is Fizzgun simple and effective?¶
Let's try to explain this with an example:
The scenario¶
Imagine you have a very simple new service that handles user information. One of the REST endpoints, for updating a specific user, looks like this:
```
PUT /api/users/123

{
    "name": "Mr. Bubbles",
    "team_id": 456,
    "role": "QE"
}
```
You have written some API/integration tests for that, for example one basic happy-flow test which creates a new user, updates its information, and reads it back to verify it was properly updated. Also, since you are a very quality-conscious developer and you've got a few minutes left before the sprint comes to an end, you feel inspired and decide to write a few more tests to cover those not-so-happy flows:
- Trying to perform an update on a user id that does not exist should result in a `404` response.
- Since the `name` field is mandatory, a request not containing it should result in a `400` response.
You submit your pull request, lock your screen, and hurry off to the bar.
The problem¶
Although a couple of negative cases were written, there are many other cases that were not covered:
- What if `name` is present but empty?
- What if `name` is present but of a different type (integer, list, array, null)?
- What if `name` is present but has a really long value that doesn't fit in the database table column?
- What if `name` contains null bytes or invalid UTF-8 sequences?
- What if all of the above is also applied to the other fields?
- What if we send a `POST` instead of a `PUT`?
- What if we send an invalid `Content-Type` header?
- What if we don't send a `Content-Type` header at all?
- What if we send malformed JSON?
- What if we send data with a weird encoding?
- What if, what if, what if...
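Many of these "what ifs" are mechanical value substitutions. As a rough illustration, a handful of them for a single field could be enumerated like this (a hypothetical helper for this discussion, not Fizzgun code):

```python
def field_variants(payload, field):
    """Enumerate a few classic 'what if' mutations of one field.

    Hypothetical illustration; Fizzgun's bubbles implement their
    own mutation strategies.
    """
    variants = [
        {**payload, field: ""},             # present but empty
        {**payload, field: 123},            # wrong type: integer
        {**payload, field: ["x"]},          # wrong type: list
        {**payload, field: None},           # wrong type: null
        {**payload, field: "A" * 100_000},  # longer than any sane column
        {**payload, field: "Mr.\x00Bub"},   # embedded null byte
    ]
    # Also try dropping the field entirely
    variants.append({k: v for k, v in payload.items() if k != field})
    return variants

variants = field_variants({"name": "Mr. Bubbles", "team_id": 456}, "name")
```

Seven variants for one field of one endpoint; apply the same to every field, header, and method and the combinatorial explosion becomes obvious.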
We want our APIs to be robust (especially if they are consumed by external clients). Security attacks usually exploit missing or incomplete validations on input data. But even without any security holes, we want bad input to be treated as such (i.e. with `400` status code responses) and not result in internal server errors, corrupt our data, trigger alarms, and mess with our SLAs and the sleep of our on-call people.
Alternative 1: Write all those tests¶
All those What if?s don't seem too hard to automate, but that list can keep growing and growing. Not only that, but the proposed example is very simple, while in practice you will find payloads with more fields, nested structures, etc. And this is not the only API endpoint our service provides.
That's right: negative test cases grow exponentially, bringing development and maintenance hell with them, and test execution time increases too. Writing all those tests is not a realistic alternative.
Alternative 2: Write a single test that generates wrong random input.¶
We could write a very generic algorithm that generates a totally random request, then run that in a loop and wait for failures.
- Hmmm! I get 401 for all of them, I'll just add a bit of logic to add an Auth header.
- Damn! Invalid content type for all of them, I'll just add another bit of logic to make them at least json.
- Ouch! I get 404s for all of them, I'll add a few more bits of logic to create the required resources first.
Looks like this algorithm is no longer generic.
Alternative 3: Use your API specification to generate (not too random) invalid input¶
The previous alternative will fail because the generated requests will be so ugly that not even the network will dare to carry them. In order to be effective we will need to make our invalid requests as valid as possible, so they can infiltrate, pass through any validations, and hit the SUT very deep and hard.
If you have a good API spec (e.g. Swagger/RAML, or even a WSDL if your service is SOAP), you could use that to automatically generate requests that are valid according to the spec, then give them a few punches here and there just to deform them a bit, and see if they get through and hit some bug in your SUT. This looks much more promising!
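As a sketch of that idea (the toy spec, generators, and helper names here are invented for illustration; real spec-driven tools work from actual Swagger/RAML documents):

```python
import random
import string

# Toy "spec": field name -> type, distilled from an API definition
SPEC = {"name": "string", "team_id": "integer", "role": "string"}

GENERATORS = {
    "string": lambda: "".join(random.choice(string.ascii_letters) for _ in range(8)),
    "integer": lambda: random.randint(1, 10_000),
}

def valid_sample(spec):
    """Build a payload that is syntactically valid per the toy spec."""
    return {field: GENERATORS[ftype]() for field, ftype in spec.items()}

def deform(payload):
    """Give the valid payload 'a few punches': drop one random field."""
    victim = random.choice(list(payload))
    return {k: v for k, v in payload.items() if k != victim}

mutant = deform(valid_sample(SPEC))
```

The generated requests now look plausible on the wire, which is exactly why this approach seems promising at first.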
Unfortunately, this is destined to fail too. The reason is that API specs only define the syntax and structure of your API, not the semantics or context it needs. The spec will tell you that the `user_id` in the update request is a 32-bit unsigned integer and that an Authorization header with a signed Bearer token is required, but it won't tell you that the `user_id` must exist beforehand (or how to create one) or what private key to use to sign the auth token.
If the API spec doesn't provide you with all this, you will have to write that setup code yourself, and you will end up in the same position as with the previous alternative.
Alternative 4: Fizzgun!¶
You already have happy-path (or not-so-happy-path) test cases that are able to generate all the required preconditions, handle the authz/authn needed, and produce semantically and syntactically correct payloads. All you have to do is run those tests through Fizzgun's proxy and let Fizzgun do the rest. Even if you don't have tests, but your web service has a front-end, just configure your browser to use Fizzgun as a proxy, then navigate, click around, and fill in forms to keep feeding Fizzgun with samples.