I've come to appreciate Flask's ease of use more and more. For example, when developing clients that connect to some HTTP/RESTful backend, there's always a decision to be made on how to test the client. Some options are:
- Test against PROD. Avoid, unless you really want to be that guy.
- Prepare a copy of production and test against that. This is a pretty good option, but it can be quite resource- and maintenance-intensive. And it's only available if you can actually copy the service -- which might not be the case if it's e.g. provided by a third party.
- Mock your connector -- refactor your connection code into the thinnest wrapper possible and swap it for a mock during testing. I've used this approach but am not very fond of it. You have to adapt your source to use it, there's no good code reuse if you have to implement a new client or client version, and differences between original and mock -- stemming from the fact that the former is connected via a network and the latter runs in-process -- creep in too easily.
- Mock the backend -- provide a lightweight mock implementation of the backend and test against this.
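To make the contrast concrete, the connector-mock approach from the third option might look something like this (a sketch using `unittest.mock`; `FooConnector`, `getfoo` and `important_calculation` are placeholders for this example):

```python
from unittest import mock

# a hypothetical connector: the thin wrapper we'd have to swap out
class FooConnector(object):
    def getfoo(self, bar):
        raise NotImplementedError("talks to the network in real life")

def important_calculation(connector):
    # code under test: uses the connector, doesn't care where data comes from
    return connector.getfoo(3) + 1

# in a test, the wrapper is replaced wholesale by an in-process mock
fake = mock.Mock(spec=FooConnector)
fake.getfoo.return_value = 6
assert important_calculation(fake) == 7
```

Note how the fake lives in the same process as the test -- none of the network behaviour (timeouts, serialization, status codes) is exercised.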
The last option, mocking the backend, is the approach I find myself taking more often lately. Benefits:
- no change to the SUT apart from connection settings
- more fidelity to the behaviour of the underlying network protocol
Flask makes it very easy to create mocks of HTTP-based services. The canonical "hello world" application with Flask is a mere 7 lines:
```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

if __name__ == '__main__':
    app.run()
```
Let's look at an example. Say I want to connect to an HTTP service; for simplicity I'll only use GET and a single endpoint, foo, which takes one parameter, bar:
```
GET /1.1/foo/{bar}

{"important_datastructure": bar*2}
```
So how can we create a mock for this? My basic plan is to implement the mock as a Flask application and start it in a separate process. The test runner will then connect to the mock daemon via HTTP, just as it would to the real service.
I put the mock in a separate module "mockservice.py":
```python
import json
import flask
import multiprocessing

# this is our flask app
mockservice = flask.Flask('mockservice')

# we want to mock one endpoint -- this is it
@mockservice.route('/1.1/foo/<int:bar>', methods=['GET'])
def getfoo(bar):
    return json.dumps({'important_datastructure': bar * 2})

# I'll wrap the flask app in a small class which controls one process
class _MockProcess(object):
    mockprocess = None

    def startmock(self):
        # create a process, point it at the "run()" method of the flask app
        self.mockprocess = multiprocessing.Process(target=mockservice.run)
        # make the process exit with the parent process
        self.mockprocess.daemon = True
        # ...and let's go! this starts the process in the background
        self.mockprocess.start()

    def stopmock(self):
        # tell the background process to stop
        self.mockprocess.terminate()

mockprocess = _MockProcess()
```
The client connector could look something like this:
```python
import requests

class FooConnector(object):
    def __init__(self, url):
        self.url = url

    def getfoo(self, bar):
        resp = requests.get("%s/foo/%s" % (self.url, bar))
        data = resp.json()
        return int(data["important_datastructure"])
```
Let's have a test, shall we? The test module imports the mock and starts it as a module fixture, which means the Flask mock is reused for all tests in the module. The teardown then stops it again.
```python
import mockservice
import fooconnector

def setup_module():
    # start the background flask app
    mockservice.mockprocess.startmock()

def teardown_module():
    # ...and stop it
    mockservice.mockprocess.stopmock()

def mktestcon():
    # flask by default uses port 5000
    return fooconnector.FooConnector("http://localhost:5000/1.1")

def test_foo():
    con = mktestcon()
    result = con.getfoo(3)
    assert result == 6
```
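One caveat with this fixture: `Process.start()` returns as soon as the child is spawned, not when Flask is actually listening, so a very fast first test can race the server startup. A small readiness poll in `setup_module` closes that gap (a sketch; `wait_for_port` is a hypothetical helper, and the host and port match the Flask defaults):

```python
import socket
import time

def wait_for_port(host, port, timeout=5.0):
    # poll until a TCP connect succeeds, i.e. the mock server is accepting
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=0.2):
                return
        except OSError:
            time.sleep(0.05)
    raise RuntimeError("mock server on %s:%s did not come up" % (host, port))

# in setup_module, after startmock():
#     wait_for_port("localhost", 5000)
```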
Nice and simple. The only way the test invocation of our connector differs from a "real" invocation is the base URL it connects to.
Obviously this needs more specification and more tests. Do we want to handle arbitrarily large data? How should we deal with error cases? The beautiful thing about this setup is that we can quite easily simulate all kinds of error conditions, down to slow servers, protocol violations, etc.
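For illustration, such failure modes could be bolted onto the mock as extra routes (a sketch; the `slowfoo`, `brokenfoo` and `garbledfoo` endpoints are made up for this example, not part of the real service):

```python
import json
import time
import flask

mockservice = flask.Flask('mockservice')

# simulate a slow backend -- exercises the client's timeout handling
@mockservice.route('/1.1/slowfoo/<int:bar>', methods=['GET'])
def slowfoo(bar):
    time.sleep(2)
    return json.dumps({'important_datastructure': bar * 2})

# simulate a server-side failure
@mockservice.route('/1.1/brokenfoo/<int:bar>', methods=['GET'])
def brokenfoo(bar):
    return 'kaboom', 500

# simulate a protocol violation: a 200 response that isn't valid JSON
@mockservice.route('/1.1/garbledfoo/<int:bar>', methods=['GET'])
def garbledfoo(bar):
    return 'this is not json'
```

The client tests then assert that the connector times out, raises, or reports the bad payload as appropriate.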