                                                        (-*- text -*-)

Subversion Commandline Client:  Test Suite
==========================================

The cmdline client test suite doesn't use the C-level testing
framework, but is structured similarly.  Instead of testing library
APIs, it drives the client just like a user would, examining the
output and the on-disk results (i.e., the working copy) carefully as
it goes.  In other words, this is "black box" testing of the
command-line client.  It has no access to code internals; it never
looks inside the .svn/ directory; it only performs actions that a
human user would do.

These tests require Python 2.0 or later.

[ For more general information on Subversion's testing system,
  please read the README in subversion/tests/. ]


How To Run The Tests
====================

To run a test script over ra_local, just invoke it from this
directory.  Invoke the script with no arguments to run all the tests
in that script:

   $ ./basic_tests.py

Invoke with one numeric argument to run that particular test:

   $ ./basic_tests.py 7

And invoke with the "list" argument to show a list of all tests in
that script:

   $ ./basic_tests.py list

Running a script over ra_dav is basically the same, but you have to
set up httpd 2.0 first (on the same machine, since the tests create
repositories on the fly), and pass a URL argument to the test
scripts.

Assuming you have httpd 2.0 installed in /usr/local/apache2, just add
two Location directives to /usr/local/apache2/conf/httpd.conf, with
paths adjusted appropriately:

   <Location /repositories>
      DAV svn
      SVNParentPath /home/jrandom/projects/svn/subversion/tests/clients/cmdline/repositories
   </Location>

   <Location /local_tmp/repos>
      DAV svn
      SVNPath /home/jrandom/projects/svn/subversion/tests/clients/cmdline/local_tmp/repos
   </Location>

Httpd should be running on port 80.  You may also need to ensure that
it's running as you, so it has read/write access to the repositories
that are probably living in your Subversion working copy.  To do
this, set the User and Group directives in httpd.conf, something like
this:

   User jrandom
   Group users

Now you can run a test script over ra_dav:

   $ ./basic_tests.py --url http://localhost
   $ ./basic_tests.py 3 --url http://localhost

If you run httpd on a port other than 80, you can specify the port in
the URL: "http://localhost:15835", for example.

To run all tests over ra_dav, pass BASE_URL when running 'make check'
from the top of the build dir:

   $ make check BASE_URL=http://localhost


Directory Contents
==================

   *.py                  The tests themselves.

   svntest/              Python package, provides test suite framework

           /main.py:     Global vars, utility routines; exports
                         run_tests(), the main test routine.

           /tree.py:     Infrastructure for SVNTreeNode class.
                           - tree constructors, tree comparison routines.
                           - routines to parse subcommand output into
                             specific kinds of trees.
                           - routines to parse a working copy and
                             entries files into specific kinds of trees.

           /wc.py:       Functions for interacting with a working
                         copy, and converting to/from trees.

           /actions.py:  Main API for driving subversion client and
                         using trees to verify results.

           /entry.py:    Parse an `entries' file (### not used yet)


What the Python Tests are Doing
===============================

I.  Theory

  A.  Types of Verification

The point of this test system is that it's *automated*: that is, each
test can algorithmically verify the results and indicate "PASS" or
"FAIL".

We've identified two broad classes of verification:

   1.  Verifying svn subcommand output.

       Most important subcommands (co, up, ci, im, st) print results
       to stdout as a list of paths.  Even though the paths may be
       printed out in an unpredictable order, we still want to make
       sure this list is exactly the *set* of lines we expect to get.

   2.  Verifying the working copy itself.

       Every time a subcommand could potentially change something on
       disk, we need to inspect the working copy.  Specifically, this
       means we need to make sure the working copy has exactly the
       tree-structure we expect, and each file has exactly the
       contents and properties we expect.

II.  Practice:  Trees

Sam TH proposed and began work on a solution whereby all important,
inspectable information is parsed into a general, in-memory tree
representation.  By comparing actual vs. expected tree structures, we
get automated verification.

  A.  Tree node structure

Each "tree node" in a tree has these fields (a sketch in Python
follows this list):

   - name:        the name of the node
   - children:    list of child nodes (if the node is a dir)
   - contents:    textual contents (if the node is a file)
   - properties:  a hash to hold subversion props
   - atts:        a hash of meta-information about tree nodes themselves
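To make the structure concrete, here is a minimal Python sketch of
such a node.  The real SVNTreeNode class lives in svntest/tree.py;
the constructor signature, the add_child() helper, and the sample
values below are illustrative assumptions, not the actual API, so
read tree.py for the real interface:

   # A minimal sketch of the node structure described above.  This is
   # NOT the real class from svntest/tree.py; the constructor and the
   # add_child() helper are assumptions for illustration only.

   class SVNTreeNode:
     def __init__(self, name, children=None, contents=None,
                  properties=None, atts=None):
       self.name = name                    # name of the node
       self.children = children            # list of child nodes (dirs)
       self.contents = contents            # textual contents (files)
       self.properties = properties or {}  # subversion props
       self.atts = atts or {}              # meta-info about the node

     def add_child(self, node):
       # Hypothetical helper: attach a child node, making this node
       # a directory if it wasn't one already.
       if self.children is None:
         self.children = []
       self.children.append(node)

   # An updated file inside a working copy root might then be
   # represented as:
   root = SVNTreeNode('wc_dir')
   root.add_child(SVNTreeNode('iota',
                              contents="This is the file 'iota'.\n",
                              atts={'status': 'U '}))

Note how the 'status' attribute carries a two-character value, as
described for 'svn co/up' output in the next section.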
  B.  Parsing subcommand output into a tree

Special parsers examine lines printed by subcommands, and convert
them into a tree of tree-nodes.  The 'contents' and 'properties'
fields are empty; but depending on the subcommand, specific
attributes in the 'atts' field are set in tree-nodes:

   - svn co/up:   a 'status' attribute is set to a two-character
                  value from the set (A, D, G, U, C, _, ' ')

   - svn status:  a 'status' attribute (as above), plus 'wc_rev' and
                  'repos_rev' attributes to hold the wc and repos
                  revision numbers.

   - svn ci/im:   a 'verb' attribute is set to one of (Adding,
                  Sending, Deleting)

  C.  Parsing a working copy into a tree

We also have a routine that walks a regular working copy and returns
a tree representing disk contents and props.  In this case the 'atts'
hash in each node is empty, but the 'contents' and 'properties'
fields are filled in.


How to Write New Tests
======================

If you'd like to write a new Python test, first decide which file it
might fit into; test scripts each contain collections of tests
grouped by rough categories.  (Is it testing a new subcommand?  A new
enhancement?  A tricky use-case?  A regression?)

Next, read the long documentation comment at the top of
svntest/tree.py.  It will explain the general API that most tests
use.

Finally, try copying-and-pasting a simple test and then editing from
there; a hypothetical skeleton appears below.  Don't forget to add
your test to the 'test_list' variable at the bottom of the file.
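The following skeleton is pieced together only from the conventions
this README describes (run_tests() exported by svntest/main.py, and
the 'test_list' variable at the bottom of each script).  The
no-argument function signature, the return-value convention, and the
leading None placeholder are assumptions; crib from a real test in
basic_tests.py rather than trusting this sketch:

   #!/usr/bin/env python

   import svntest.main   # loads the framework described above

   def my_new_test():
     "short description, shown by the 'list' argument"

     # Drive the client as a user would, then verify the results by
     # comparing expected vs. actual trees (see svntest/actions.py
     # for the driving and verifying routines).
     # ...

     return 0   # assumed convention: 0 for PASS, nonzero for FAIL

   # Register the test.  We assume the list starts with a None
   # placeholder so that tests are numbered from 1.
   test_list = [ None,
                 my_new_test,
               ]

   if __name__ == '__main__':
     svntest.main.run_tests(test_list)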