==========
Unit tests
==========

.. highlight:: console

Django comes with a test suite of its own, in the ``tests`` directory of the
code base. It's our policy to make sure all tests pass at all times.

We appreciate any and all contributions to the test suite!

The Django tests all use the testing infrastructure that ships with Django for
testing applications. See :doc:`/topics/testing/overview` for an explanation of
how to write new tests.

.. _running-unit-tests:

Running the unit tests
======================

Quickstart
----------

First, `fork Django on GitHub <https://github.com/django/django/fork>`__.

Second, create and activate a virtual environment. If you're not familiar with
how to do that, read our :doc:`contributing tutorial </intro/contributing>`.

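If you need a reminder, on Linux or macOS a virtual environment can be created
and activated roughly like this (the directory name is only an illustration)::

    $ python3 -m venv ~/.virtualenvs/djangodev
    $ source ~/.virtualenvs/djangodev/bin/activate
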
Next, clone your fork, install some requirements, and run the tests::

    $ git clone git@github.com:YourGitHubName/django.git django-repo
    $ cd django-repo/tests
    $ pip install -e ..
    $ pip install -r requirements/py3.txt
    $ ./runtests.py

Installing the requirements will likely require some operating system packages
that your computer doesn't have installed. You can usually figure out which
package to install by doing a Web search for the last line or so of the error
message. Try adding your operating system to the search query if needed.

If you have trouble installing the requirements, you can skip that step. See
:ref:`running-unit-tests-dependencies` for details on installing the optional
test dependencies. If you don't have an optional dependency installed, the
tests that require it will be skipped.

Running the tests requires a Django settings module that defines the databases
to use. To make it easy to get started, Django provides and uses a sample
settings module that uses the SQLite database. See
:ref:`running-unit-tests-settings` to learn how to use a different settings
module to run the tests with a different database.

.. admonition:: Windows users

    We recommend something like `Git Bash <https://msysgit.github.io/>`_ to run
    the tests using the above approach.

Having problems? See :ref:`troubleshooting-unit-tests` for some common issues.

Running tests using ``tox``
---------------------------

`Tox <https://tox.readthedocs.io/>`_ is a tool for running tests in different
virtual environments. Django includes a basic ``tox.ini`` that automates some
checks that our build server performs on pull requests. To run the unit tests
and other checks (such as :ref:`import sorting <coding-style-imports>`, the
:ref:`documentation spelling checker <documentation-spelling-check>`, and
:ref:`code formatting <coding-style-python>`), install and run the ``tox``
command from any place in the Django source tree::

    $ pip install tox
    $ tox

By default, ``tox`` runs the test suite with the bundled test settings file for
SQLite, ``flake8``, ``isort``, and the documentation spelling checker. In
addition to the system dependencies noted elsewhere in this documentation,
the command ``python3`` must be on your path and linked to the appropriate
version of Python. A list of default environments can be seen as follows::

    $ tox -l
    py3
    flake8
    docs
    isort

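You can also run a single one of these environments with tox's ``-e`` option.
For example, to run only the import sorting check (a usage sketch; see the tox
documentation for other invocation styles)::

    $ tox -e isort
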
Testing other Python versions and database backends
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In addition to the default environments, ``tox`` supports running unit tests
for other versions of Python and other database backends. Since Django's test
suite doesn't bundle a settings file for database backends other than SQLite,
however, you must :ref:`create and provide your own test settings
<running-unit-tests-settings>`. For example, to run the tests on Python 3.5
using PostgreSQL::

    $ tox -e py35-postgres -- --settings=my_postgres_settings

This command sets up a Python 3.5 virtual environment, installs Django's
test suite dependencies (including those for PostgreSQL), and calls
``runtests.py`` with the supplied arguments (in this case,
``--settings=my_postgres_settings``).

The remainder of this documentation shows commands for running tests without
``tox``; however, any option passed to ``runtests.py`` can also be passed to
``tox`` by prefixing the argument list with ``--``, as above.

Tox also respects the ``DJANGO_SETTINGS_MODULE`` environment variable, if set.
For example, the following is equivalent to the command above::

    $ DJANGO_SETTINGS_MODULE=my_postgres_settings tox -e py35-postgres

Running the JavaScript tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Django includes a set of :ref:`JavaScript unit tests <javascript-tests>` for
functions in certain contrib apps. The JavaScript tests aren't run by default
using ``tox`` because they require ``Node.js`` to be installed and aren't
necessary for the majority of patches. To run the JavaScript tests using
``tox``::

    $ tox -e javascript

This command runs ``npm install`` to ensure test requirements are up to
date and then runs ``npm test``.

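If you'd rather not go through ``tox``, those are the same two commands you can
run by hand from the root of your Django checkout (a rough sketch of what the
``tox`` environment does for you)::

    $ npm install
    $ npm test
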
.. _running-unit-tests-settings:

Using another ``settings`` module
---------------------------------

The included settings module (``tests/test_sqlite.py``) allows you to run the
test suite using SQLite. If you want to run the tests using a different
database, you'll need to define your own settings file. Some tests, such as
those for ``contrib.postgres``, are specific to a particular database backend
and will be skipped if run with a different backend.

To run the tests with different settings, ensure that the module is on your
``PYTHONPATH`` and pass the module with ``--settings``.

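For example, assuming your settings live in a hypothetical
``my_postgres_settings.py`` somewhere outside the source tree, both
requirements can be met in a single invocation (adjust the path to wherever
the module actually lives)::

    $ PYTHONPATH=/path/to/settings/dir ./runtests.py --settings=my_postgres_settings
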
The :setting:`DATABASES` setting in any test settings module needs to define
two databases:

* A ``default`` database. This database should use the backend that
  you want to use for primary testing.

* A database with the alias ``other``. The ``other`` database is used to test
  that queries can be directed to different databases. This database should use
  the same backend as the ``default``, and it must have a different name.

If you're using a backend that isn't SQLite, you will need to provide other
details for each database:

* The :setting:`USER` option needs to specify an existing user account
  for the database. That user needs permission to execute ``CREATE DATABASE``
  so that the test database can be created.

* The :setting:`PASSWORD` option needs to provide the password for
  the :setting:`USER` that has been specified.

Test databases get their names by prepending ``test_`` to the value of the
:setting:`NAME` settings for the databases defined in :setting:`DATABASES`.
These test databases are deleted when the tests are finished.

You will also need to ensure that your database uses UTF-8 as the default
character set. If your database server doesn't use UTF-8 as a default charset,
you will need to include a value for :setting:`CHARSET <TEST_CHARSET>` in the
test settings dictionary for the applicable database.

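Putting these requirements together, a minimal test settings module for
PostgreSQL might look roughly like the sketch below. The database names, user,
and password are placeholders for your own setup, and the bundled
``tests/test_sqlite.py`` is worth comparing against for anything not shown
here.

.. code-block:: python

    # my_postgres_settings.py -- an illustrative sketch, not an official file.
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'django',
            'USER': 'djangotester',  # needs permission to CREATE DATABASE
            'PASSWORD': 'secret',
            'HOST': 'localhost',
        },
        'other': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': 'django_other',  # same backend, different name
            'USER': 'djangotester',
            'PASSWORD': 'secret',
            'HOST': 'localhost',
        },
    }
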
.. _runtests-specifying-labels:

Running only some of the tests
------------------------------

Django's entire test suite takes a while to run, and running every single test
could be redundant if, say, you just added a test to Django that you want to
run quickly without running everything else. You can run a subset of the unit
tests by appending the names of the test modules to ``runtests.py`` on the
command line.

For example, if you'd like to run tests only for generic relations and
internationalization, type::

    $ ./runtests.py --settings=path.to.settings generic_relations i18n

How do you find out the names of individual tests? Look in ``tests/`` — each
directory name there is the name of a test.

If you just want to run a particular class of tests, you can specify a list of
paths to individual test classes. For example, to run the ``TranslationTests``
of the ``i18n`` module, type::

    $ ./runtests.py --settings=path.to.settings i18n.tests.TranslationTests

Going beyond that, you can specify an individual test method like this::

    $ ./runtests.py --settings=path.to.settings i18n.tests.TranslationTests.test_lazy_objects

Running the Selenium tests
--------------------------

Some tests require Selenium and a Web browser. To run these tests, you must
install the selenium_ package and run the tests with the
``--selenium=<BROWSERS>`` option. For example, if you have Firefox and Google
Chrome installed::

    $ ./runtests.py --selenium=firefox,chrome

See the `selenium.webdriver`_ package for the list of available browsers.

Specifying ``--selenium`` automatically sets ``--tags=selenium`` to run only
the tests that require selenium.

.. _selenium.webdriver: https://github.com/SeleniumHQ/selenium/tree/master/py/selenium/webdriver

.. _running-unit-tests-dependencies:

Running all the tests
---------------------

If you want to run the full suite of tests, you'll need to install a number of
dependencies:

* argon2-cffi_ 16.1.0+
* bcrypt_
* docutils_
* geoip2_
* jinja2_ 2.7+
* numpy_
* Pillow_
* PyYAML_
* pytz_ (required)
* setuptools_
* memcached_, plus a :ref:`supported Python binding <memcached>`
* gettext_ (:ref:`gettext_on_windows`)
* selenium_
* sqlparse_

You can find these dependencies in `pip requirements files`_ inside the
``tests/requirements`` directory of the Django source tree and install them
like so::

    $ pip install -r tests/requirements/py3.txt

If you encounter an error during the installation, your system might be missing
a dependency for one or more of the Python packages. Consult the failing
package's documentation or search the Web with the error message that you
encounter.

You can also install the database adapter(s) of your choice using
``oracle.txt``, ``mysql.txt``, or ``postgres.txt``.

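For example, to add the PostgreSQL adapter on top of the base requirements::

    $ pip install -r tests/requirements/postgres.txt
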
If you want to test the memcached cache backend, you'll also need to define
a :setting:`CACHES` setting that points at your memcached instance.

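A minimal sketch of such a setting, assuming memcached is listening on its
default local port and that you're using the ``python-memcached`` binding
(swap in ``PyLibMCCache`` if you installed ``pylibmc`` instead):

.. code-block:: python

    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': '127.0.0.1:11211',
        },
    }
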
To run the GeoDjango tests, you will need to :doc:`set up a spatial database
and install the Geospatial libraries </ref/contrib/gis/install/index>`.

Each of these dependencies is optional. If you're missing any of them, the
associated tests will be skipped.

.. _argon2-cffi: https://pypi.org/project/argon2_cffi/
.. _bcrypt: https://pypi.org/project/bcrypt/
.. _docutils: https://pypi.org/project/docutils/
.. _geoip2: https://pypi.org/project/geoip2/
.. _jinja2: https://pypi.org/project/jinja2/
.. _numpy: https://pypi.org/project/numpy/
.. _Pillow: https://pypi.org/project/Pillow/
.. _PyYAML: https://pyyaml.org/wiki/PyYAML
.. _pytz: https://pypi.org/project/pytz/
.. _setuptools: https://pypi.org/project/setuptools/
.. _memcached: https://memcached.org/
.. _gettext: https://www.gnu.org/software/gettext/manual/gettext.html
.. _selenium: https://pypi.org/project/selenium/
.. _sqlparse: https://pypi.org/project/sqlparse/
.. _pip requirements files: https://pip.pypa.io/en/latest/user_guide/#requirements-files

Code coverage
-------------

Contributors are encouraged to run coverage on the test suite to identify areas
that need additional tests. Installation and use of the coverage tool are
described in :ref:`testing code coverage <topics-testing-code-coverage>`.

Coverage should be run in a single process to obtain accurate statistics. To
run coverage on the Django test suite using the standard test settings::

    $ coverage run ./runtests.py --settings=test_sqlite --parallel=1

After running coverage, generate the HTML report by running::

    $ coverage html

When running coverage for the Django tests, the included ``.coveragerc``
settings file defines ``coverage_html`` as the output directory for the report
and also excludes several directories not relevant to the results
(test code or external code included in Django).

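If you only want a quick summary in the terminal rather than the HTML report,
coverage's standard ``report`` command works as well::

    $ coverage report
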
.. _contrib-apps:

Contrib apps
============

Tests for contrib apps can be found in the ``tests/`` directory, typically
under ``<app_name>_tests``. For example, tests for ``contrib.auth`` are located
in ``tests/auth_tests``.

.. _troubleshooting-unit-tests:

Troubleshooting
===============

Many test failures with ``UnicodeEncodeError``
----------------------------------------------

If the ``locales`` package is not installed, some tests will fail with a
``UnicodeEncodeError``.

You can resolve this on Debian-based systems, for example, by running::

    $ apt-get install locales
    $ dpkg-reconfigure locales

You can resolve this on macOS by configuring your shell's locale::

    $ export LANG="en_US.UTF-8"
    $ export LC_ALL="en_US.UTF-8"

Run the ``locale`` command to confirm the change. Optionally, add those export
commands to your shell's startup file (e.g. ``~/.bashrc`` for Bash) to avoid
having to retype them.

Tests that only fail in combination
-----------------------------------

If a test passes when run in isolation but fails as part of the whole suite,
we have some tools to help analyze the problem.

The ``--bisect`` option of ``runtests.py`` will run the failing test while
halving the test set it is run together with on each iteration, often making
it possible to identify a small number of tests that may be related to the
failure.

For example, suppose that the failing test that works on its own is
``ModelTest.test_eq``, then using::

    $ ./runtests.py --bisect basic.tests.ModelTest.test_eq

will try to determine a test that interferes with the given one. First, the
test is run with the first half of the test suite. If a failure occurs, the
first half of the test suite is split in two groups and each group is then run
with the specified test. If there is no failure with the first half of the test
suite, the second half of the test suite is run with the specified test and
split appropriately as described earlier. The process repeats until the set of
failing tests is minimized.

The ``--pair`` option runs the given test alongside every other test from the
suite, letting you check if another test has side-effects that cause the
failure. So::

    $ ./runtests.py --pair basic.tests.ModelTest.test_eq

will pair ``test_eq`` with every test label.

With both ``--bisect`` and ``--pair``, if you already suspect which cases
might be responsible for the failure, you may limit tests to be cross-analyzed
by :ref:`specifying further test labels <runtests-specifying-labels>` after
the first one::

    $ ./runtests.py --pair basic.tests.ModelTest.test_eq queries transactions

You can also try running any set of tests in reverse using the ``--reverse``
option in order to verify that executing tests in a different order does not
cause any trouble::

    $ ./runtests.py basic --reverse

Seeing the SQL queries run during a test
----------------------------------------

If you wish to examine the SQL being run in failing tests, you can turn on
:ref:`SQL logging <django-db-logger>` using the ``--debug-sql`` option. If you
combine this with ``--verbosity=2``, all SQL queries will be output::

    $ ./runtests.py basic --debug-sql

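For example, to see the queries from all tests rather than only the failing
ones, add the verbosity flag mentioned above::

    $ ./runtests.py basic --debug-sql --verbosity=2
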
Seeing the full traceback of a test failure
-------------------------------------------

By default tests are run in parallel with one process per core. When the tests
are run in parallel, however, you'll only see a truncated traceback for any
test failures. You can adjust this behavior with the ``--parallel`` option::

    $ ./runtests.py basic --parallel=1

You can also use the ``DJANGO_TEST_PROCESSES`` environment variable for this
purpose.

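For example, the following should behave like the ``--parallel=1`` invocation
above::

    $ DJANGO_TEST_PROCESSES=1 ./runtests.py basic
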
Tips for writing tests
----------------------

.. highlight:: python

Isolating model registration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To avoid polluting the global :attr:`~django.apps.apps` registry and prevent
unnecessary table creation, models defined in a test method should be bound to
a temporary ``Apps`` instance::

    from django.apps.registry import Apps
    from django.db import models
    from django.test import SimpleTestCase

    class TestModelDefinition(SimpleTestCase):
        def test_model_definition(self):
            test_apps = Apps(['app_label'])

            class TestModel(models.Model):
                class Meta:
                    apps = test_apps
            ...

.. function:: django.test.utils.isolate_apps(*app_labels, attr_name=None, kwarg_name=None)

Since this pattern involves a lot of boilerplate, Django provides the
:func:`~django.test.utils.isolate_apps` decorator. It's used like this::

    from django.db import models
    from django.test import SimpleTestCase
    from django.test.utils import isolate_apps

    class TestModelDefinition(SimpleTestCase):
        @isolate_apps('app_label')
        def test_model_definition(self):
            class TestModel(models.Model):
                pass
            ...

.. admonition:: Setting ``app_label``

    Models defined in a test method with no explicit
    :attr:`~django.db.models.Options.app_label` are automatically assigned the
    label of the app in which their test class is located.

    In order to make sure the models defined within the context of
    :func:`~django.test.utils.isolate_apps` instances are correctly
    installed, you should pass the set of targeted ``app_label`` as arguments:

    .. snippet::
        :filename: tests/app_label/tests.py

        from django.db import models
        from django.test import SimpleTestCase
        from django.test.utils import isolate_apps

        class TestModelDefinition(SimpleTestCase):
            @isolate_apps('app_label', 'other_app_label')
            def test_model_definition(self):
                # This model automatically receives app_label='app_label'
                class TestModel(models.Model):
                    pass

                class OtherAppModel(models.Model):
                    class Meta:
                        app_label = 'other_app_label'
                ...

The decorator can also be applied to classes::

    from django.db import models
    from django.test import SimpleTestCase
    from django.test.utils import isolate_apps

    @isolate_apps('app_label')
    class TestModelDefinition(SimpleTestCase):
        def test_model_definition(self):
            class TestModel(models.Model):
                pass
            ...

The temporary ``Apps`` instance used to isolate model registration can be
retrieved as an attribute when used as a class decorator by using the
``attr_name`` parameter::

    from django.db import models
    from django.test import SimpleTestCase
    from django.test.utils import isolate_apps

    @isolate_apps('app_label', attr_name='apps')
    class TestModelDefinition(SimpleTestCase):
        def test_model_definition(self):
            class TestModel(models.Model):
                pass

            self.assertIs(self.apps.get_model('app_label', 'TestModel'), TestModel)

Or as an argument on the test method when used as a method decorator by using
the ``kwarg_name`` parameter::

    from django.db import models
    from django.test import SimpleTestCase
    from django.test.utils import isolate_apps

    class TestModelDefinition(SimpleTestCase):
        @isolate_apps('app_label', kwarg_name='apps')
        def test_model_definition(self, apps):
            class TestModel(models.Model):
                pass

            self.assertIs(apps.get_model('app_label', 'TestModel'), TestModel)
|