YesNo: Better HTTP Testing
YesNo is a new library for Node.js that simplifies how we write tests asserting the actual behavior of our HTTP requests. YesNo intercepts the requests your app makes, then either forwards each request to its original destination or responds with a user-defined mock.
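Concretely, a test opts into one of those two behaviors up front. Here's a minimal sketch of that choice; the yesno-http import name is our assumption, and the fixture filename is just an example.

const yesno = require('yesno-http') // import name is our assumption

// Forward intercepted requests to their real destination, recording them as they pass
yesno.spy()

// ...or answer every intercepted request from previously saved mocks
yesno.mock(await yesno.load({ filename: './my-mocks.json' }))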
Many Node.js apps include some sort of API integration. Especially when working within a microservices architecture, the entire behavior of our app may be dependent on the correctness of our HTTP requests. So it's critical that our tests capture this behavior.
The problem is that HTTP requests are exactly the kind of thing we normally need to mock out in our unit tests, since they depend on an external service. But once we're mocking these requests it becomes difficult to guarantee our tests reflect the real behavior of the app. What happens when our mocks become stale? Or what if we build our mock HTTP response according to incorrect documentation, then later discover the real response has a completely different shape? When you're juggling several different APIs in an evolving ecosystem these issues occur regularly, and they're often the source of real bugs.
We've tried to address this problem in YesNo by reimagining the best features of several existing HTTP testing libraries, incorporating the lessons we've learned from maintaining complicated test suites that use them.
YesNo makes it easy to generate fixtures that have a strong guarantee of reflecting real requests. You can use it in integration tests against a live service or in your offline unit tests. Moreover, it includes a few utility methods to access and manipulate intercepted requests without additional boilerplate.
Features
Spy on live requests
yesno.spy()

await myApi.updateUser(1, 'invalid-token')

// Select the intercepted POST request and assert its response code
expect(yesno.matching(/user\/1/).response())
  .to.have.property('statusCode', 401)
Mock requests
yesno.mock(await yesno.load({ filename: './my-mocks.json' }))

const users = await myApi.getUsers() // Responses are mocked
Edit and record requests
const recording = await yesno.recording({ filename: './update-user-sanitized.json' })

// Auth requests with sensitive data...
const token = await auth.getToken()
await myApi.updateUser(1, token)

// Redact auth data so that our credentials don't
// end up in source control!
yesno.matching(/auth/).redact('response.body.token')
yesno.matching(/user\/1/).redact('request.headers.authorization')

await recording.complete()
Testing Philosophy
YesNo is built to support a simple testing approach that plays well with a TDD mindset, which we can divide into three steps: Validate, Persist, Mock. We first validate our test against a live API, then persist the intercepted requests, and finally mock our test with the new fixtures. By the end we have a unit test whose mocked behavior closely resembles the unmocked behavior, using a workflow we can repeat whenever we need to refresh our fixtures.
Let's look at an example. We'll use YesNo's convenient recording method to write a test that can spy, record or mock requests according to an environment variable we set at runtime. This way the same test can support each step of our workflow, with our assertions remaining valid throughout.
// Begin a recording. Load mocks if in "mock" mode, otherwise spy.
const recording = await yesno.recording({ filename: './get-users.json' })

// Make our HTTP requests
await myApi.getUsers()

// Run assertions
expect(yesno.intercepted()).to.have.lengthOf(1)
expect(yesno.matching(/users/).response()).to.have.property('statusCode', 200)

// Persist intercepted requests if in "record" mode, otherwise no-op
await recording.complete()
Our first step is to validate the real HTTP behavior of our test, so we run our tests unmocked against live services.
YESNO_RECORDING_MODE=spy npm test
If our assertions pass, we know we received the expected request & response format. If not, we'll need to identify and fix our errors, then repeat this step.
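When the assertions fail in spy mode, the quickest diagnostic is usually to print what actually came back over the wire. Here's a small sketch using the matching helper shown earlier; we're assuming the serialized response exposes body alongside statusCode, as the redact paths above suggest.

// Log the live response so we can compare it against our expectations
const { statusCode, body } = yesno.matching(/users/).response()
console.log('GET /users responded with', statusCode, body)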
Now that we know the test behaves correctly against live services, we're ready to generate fixtures so that we can run our tests offline. To persist these requests to disk we simply run the test again in record mode.
YESNO_RECORDING_MODE=record npm test
Depending on the test it may be helpful to look at the generated JSON file. Sometimes you'll notice values that you ought to be asserting on, or you'll find sensitive credentials which should be redacted from the fixtures.
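For example, if the recorded /users response turns out to carry a payload worth checking, we can tighten the test and scrub credentials before recording.complete() persists anything. This is only a sketch: the body.users field and the authorization header path are hypothetical.

// Assert on a value we spotted in the recorded fixture (field name is hypothetical)
expect(yesno.matching(/users/).response().body)
  .to.have.property('users').that.is.an('array')

// Strip credentials so they never reach ./get-users.json
yesno.matching(/users/).redact('request.headers.authorization')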
With the fixtures saved to disk we can run our test with mocks, so that all our intercepted requests resolve with mocked responses.
YESNO_RECORDING_MODE=mock npm test
Once the test passes we can commit both the test and the generated fixtures to version control; committing the fixtures is what lets us keep running our tests in mock mode going forward. Whenever our application or an external API changes, we'll repeat these steps to update the fixtures.
Following this methodology we're able to validate API behavior with the same code as our unit tests. This means we can use our tests to drive discovery and development, where otherwise we might have to write scripts or one-off curl requests to independently validate APIs. This is why I find the approach so conducive to TDD: it encourages us to write our tests first.
Remember that while this is our preferred workflow for writing tests, you're free to use whatever approach you'd like. You can always choose to manually define your mocks or skip mocking altogether.
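If you do write mocks by hand, yesno.mock() accepts the same serialized request/response records that yesno.load() reads from disk, so they can be constructed inline. The exact field names below are our best guess at that shape, so treat this as a sketch rather than a reference.

yesno.mock([
  {
    request: {
      method: 'GET',
      protocol: 'https',
      host: 'api.example.com',
      port: 443,
      path: '/users'
    },
    response: {
      statusCode: 200,
      body: { users: [] }
    }
  }
])

const users = await myApi.getUsers() // served from the mock above, no network traffic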
Challenges of existing approaches
As previously discussed, there are already lots of libraries available to help you test HTTP requests in Node. Here are some of the challenges we've encountered using them that YesNo tries to address.
1. They assert the behavior of an HTTP library, not the HTTP request.
YesNo intercepts HTTP requests at a low level, so our tests aren't tied to any particular client library (see the sketch after this list).
2. They require hand crafting fixtures.
We want to avoid writing fixtures by hand whenever possible. It's time consuming and unreliable.
3. They force us to write test-specific configurations.
Another approach we've encountered is to stand up a local test server that responds to requests. However, this requires us to modify our app configuration to point toward the test server, still necessitates manual mocking, and generally adds overhead.
4. They're difficult to manipulate.
A few libraries do provide some sort of "record" functionality to save mocks to disk from generated HTTP requests. But those libraries lack a syntax for editing the mocks or selecting results, which hinders maintainability.
5. They're a pain.
Perhaps this point just summarizes all the drawbacks above: the existing libraries could all be easier to use!
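Before moving on, here's the sketch promised under point 1: because YesNo hooks in below the client library, two different HTTP clients are caught by the same spy. The axios and node-fetch imports are just example clients, and example.com stands in for a real service.

const axios = require('axios')
const fetch = require('node-fetch')

yesno.spy()

// Two different clients, both caught by the same interception point
await axios.get('http://example.com/')
await fetch('http://example.com/')

expect(yesno.intercepted()).to.have.lengthOf(2)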
Give it a shot
If you're still reading at this point then we likely share a passion for robust testing strategies. Go check out the README for YesNo and let us know what you think!