How cvs2svn.py Works
====================

A cvs2svn run consists of 4 passes.  Every pass but the last saves its
data to a file on disk, so that a) we don't hold huge amounts of state
in memory, and b) the conversion process is resumable.  The final pass
makes the actual Subversion commits.

Pass 1:
=======

The goal of this pass is to get a summary of all the revisions for
each file written out to 'cvs2svn-data.revs'; at the end of this
stage, revisions are grouped by RCS file, not by logical commit.

We walk over the repository, processing each RCS file with
rcsparse.parse(), using cvs2svn's CollectData class, which is a
subclass of rcsparse.Sink(), the parser's callback class.

For each RCS file, the first thing the parser encounters is the
administrative header, including the head revision, the principal
branch, symbolic names, RCS comments, etc.  The main thing that
happens here is that CollectData.define_tag() is invoked on each
symbolic name and its attached revision, so all the tags and branches
of this file get collected.

Next, the parser hits the revision summary section.  That's the part
of the RCS file that looks like this:

   1.6
   date 2002.06.12.04.54.12;  author captnmark;  state Exp;
   branches
        1.6.2.1;
   next 1.5;

   1.5
   date 2002.05.28.18.02.11;  author captnmark;  state Exp;
   branches;
   next 1.4;

   [...]

For each revision summary, CollectData.define_revision() is invoked,
recording that revision's metadata in the self.rev_data[] tree.

After finishing the revision summaries, the parser invokes
CollectData.tree_completed(), which loops over the revisions in
self.rev_data, looking for instances where a higher revision was
committed "before" a lower one (rare, but it can happen when there
was clock skew on the repository machine).
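That out-of-order check can be sketched roughly as follows.  This is a
hypothetical simplification -- the real self.rev_data entries hold
more fields, and branch revisions complicate the ordering -- but it
shows the idea of bumping a skewed timestamp to just after its
predecessor's while remembering the original:

```python
# Hypothetical sketch of the clock-skew check in tree_completed().
# Each entry maps a revision number to [timestamp, author, original_timestamp].
rev_data = {
    "1.5": [1022608931, "captnmark", None],
    "1.6": [1022608900, "captnmark", None],  # earlier than 1.5: clock skew
}

def resync_out_of_order(rev_data):
    """Adjust any higher revision whose timestamp precedes its
    predecessor's, saving the original timestamp so a resync record
    can be written out later."""
    resynced = []
    # Sort revision numbers numerically, so e.g. "1.10" follows "1.9".
    order = sorted(rev_data, key=lambda r: [int(p) for p in r.split(".")])
    for prev, cur in zip(order, order[1:]):
        if rev_data[cur][0] <= rev_data[prev][0]:
            rev_data[cur][2] = rev_data[cur][0]       # save original time
            rev_data[cur][0] = rev_data[prev][0] + 1  # just after the lower rev
            resynced.append(cur)
    return resynced

print(resync_out_of_order(rev_data))  # ['1.6']
```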
If it finds any, it "resyncs" the timestamp of the higher rev to be
just after that of the lower rev, but saves the original timestamp in
self.rev_data[blah][3], so we can later write out a record to the
resync file indicating that an adjustment was made (this makes it
possible to catch the other parts of this commit and resync them
similarly; more details below).

Next, the parser encounters the *real* revision data, which has the
log messages and file contents.  For each revision, it invokes
CollectData.set_revision_info(), which writes a new line to
cvs2svn-data.revs, like this:

   3dc32955 5afe9b4ba41843d8eb52ae7db47a43eaa9573254 C 1.2 * 0 0 foo/bar,v

The fields are:

   1. a fixed-width timestamp
   2. a digest of the log message + author
   3. the type of change ("C"hange or "D"elete)
   4. the revision number
   5. the branch name, or "*" if none
   6. the number of tags (followed by the tag names, space-delimited)
   7. the number of branches (followed by the branch names,
      space-delimited)
   8. the path of the RCS file in the repository

(Of course, in the above example, fields 6 and 7 are "0", so they
have no additional data.)

Also, for resync'd revisions, a line like this is written out to
'cvs2svn-data.resync':

   3d6c1329 18a215a05abea1c6c155dcc7283b88ae7ce23502 3d6c1328

The fields are:

   NEW_TIMESTAMP  DIGEST  OLD_TIMESTAMP

(The resync file will be explained later.)

That's it -- the RCS file is done.

When every RCS file is done, Pass 1 is complete, and:

   - cvs2svn-data.revs contains a summary of every RCS file's
     revisions.  All the revisions for a given RCS file are grouped
     together, but note that the groups are in no particular order.
     In other words, you can't yet identify the commits just by
     looking at these lines; a multi-file commit will be scattered
     all over the place.

   - cvs2svn-data.resync contains a small amount of resync data, in
     no particular order.

Pass 2:
=======

This is where the resync file is used.
The goal of this pass is to convert cvs2svn-data.revs to a new file,
'cvs2svn-data.c-revs' ("clean revs").  It's the same as the original
file, except for some resync'd timestamps.

First, read the whole resync file into a hash table that maps each
author+log digest to a list of lists.  Each sublist represents one of
the timestamp adjustments from Pass 1, and looks like this:

   [old_time_lower, old_time_upper, new_time]

The reason to map each digest to a list of sublists, instead of to
one list, is that sometimes you'll get the same digest for unrelated
commits (for example, when the same author commits many times with an
empty log message, or with a log message that just says "Doc
tweaks.").  So each digest may need to "fan out" to cover multiple
commits, but without accidentally unifying those commits.

Now we loop over cvs2svn-data.revs, writing each line out to
'cvs2svn-data.c-revs'.  Most lines are written out unchanged, but
those whose digest matches some resync entry, and which appear to be
part of the same commit as one of the sublists in that entry, get
tweaked.  The tweak is to adjust the line's commit time to the
new_time, taken from the resync hash, which records the adjustment
described in Pass 1.

The way we decide whether a given line needs to be tweaked is to loop
over all the sublists, checking whether this commit's original time
falls within the old<-->new time range of the current sublist.  If it
does, we tweak the line before writing it out, and then conditionally
widen the sublist's range to account for the timestamp we just
adjusted (since it could be an outlier).

Note that this could, in theory, result in separate commits being
accidentally unified, since we might gradually widen the two sides of
the range until they are eventually more than COMMIT_THRESHOLD
seconds apart.
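A rough sketch of that matching loop, for one line of the .revs file
(this is a hypothetical simplification -- the real code parses the
full eight-field line format, and the exact range-widening rule here
is one plausible interpretation of "conditionally adjust"):

```python
# Hypothetical sketch of the Pass 2 timestamp tweak.  resync maps each
# author+log digest to sublists of [old_time_lower, old_time_upper, new_time].
resync = {
    "18a215a0": [[1022608930, 1022608940, 1022608929]],
}

def tweak(timestamp, digest, resync):
    """Return the (possibly resync'd) timestamp for one .revs line."""
    for entry in resync.get(digest, []):
        lower, upper, new_time = entry
        if lower <= timestamp <= upper:
            # Widen the range so other outlying parts of this commit
            # still fall inside it on later lines.
            entry[0] = min(lower, new_time)
            entry[1] = max(upper, new_time)
            return new_time
    return timestamp  # no match: the line is written out unchanged

print(tweak(1022608935, "18a215a0", resync))  # matched: 1022608929
print(tweak(1022608935, "deadbeef", resync))  # no entry: 1022608935
```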
However, this is really a case of CVS not recording enough
information to disambiguate the commits; we'd know we had a time
range exceeding COMMIT_THRESHOLD, but we wouldn't necessarily know
where to divide it up.  We could try some clever heuristic, but for
now it's not important -- after all, we're talking about commits that
weren't important enough to have a distinctive log message anyway, so
does it really matter if a couple of them accidentally get unified?
Probably not.

Pass 3:
=======

This is where we deduce the changesets, that is, the grouping of file
changes into single commits.

It's very simple -- run 'sort' on cvs2svn-data.c-revs, producing
'cvs2svn-data.s-revs'.  Because of the way the data is laid out, this
groups together commits with the same digest (that is, the same
author and log message).  Poof!  We now have the CVS changes grouped
by logical commit.

Pass 4:
=======

This is where stuff actually gets committed to Subversion.  The code
is pretty self-explanatory, and is probably where most flux is about
to happen (particularly with regard to tags and branches), so I won't
bother to describe it here.  But if you understand how passes 1-3
work, you should have no trouble understanding the code of pass 4.

-*- -*- -*- -*- -*- -*- -*- -*- -*- -*- -*- -*- -*- -*- -*- -*- -*- -*-

Some older notes and ideas about cvs2svn.  Not deleted, because they
may contain suggestions for future improvements in design.

-----------------------------------------------------------------------
An email from John Gardiner Myers about some considerations for the
tool.

------
From: John Gardiner Myers
Subject: Thoughts on CVS to SVN conversion
To: gstein@lyra.org
Date: Sun, 15 Apr 2001 17:47:10 -0700

Some things you may want to consider for a CVS to SVN conversion
utility:

If converting a CVS repository to SVN takes days, it would be good
for the conversion utility to keep its progress state on disk.
If the conversion fails halfway through due to a network outage or
power failure, that would allow the conversion to be resumed where it
left off instead of having to start over from an empty SVN
repository.

It is a short step from there to allowing periodic updates of a
read-only SVN repository from a read/write CVS repository.  This
allows the more relaxed conversion procedure:

1) Create an SVN repository writable only by the conversion tool.
2) Update the SVN repository from the CVS repository.
3) Announce the time of CVS to SVN cutover.
4) Repeat step (2) as needed.
5) Disable commits to the CVS repository, making it read-only.
6) Repeat step (2).
7) Enable commits to the SVN repository.
8) Wait for developers to move their workspaces to SVN.
9) Decommission the CVS repository.

You may forward this message or parts of it as you see fit.
------

-----------------------------------------------------------------------
Further design thoughts from Greg Stein

* timestamp the beginning of the process.  ignore any commits that
  occur after that timestamp; otherwise, you could miss portions of a
  commit (e.g. scan A; commit occurs to A and B; scan B; create SVN
  revision for items in B; we missed A)

* the above timestamp can also be used for John's "grab any updates
  that were missed in the previous pass."

* for each file processed, watch out for simultaneous commits.  this
  may cause a problem during the reading/scanning/parsing of the
  file, or the parse may succeed but produce garbled results.

  this could be fixed with a CVS lock, but I'd prefer read-only
  access.

  algorithm: get the mtime before opening the file.  if an error
  occurs during reading, and the mtime has changed, then restart the
  file.  if the read is successful, but the mtime has changed, then
  restart the file.

* use a separate log to track unique branches and non-branched forks
  of revision history (Q: is it possible to create, say, 1.4.1.3
  without a "real" branch?).
  this log can then be used to create a /branches/ directory in the
  SVN repository.

  Note: we want to determine some way to coalesce branches across
  files.  It can't be based on name, though, since the same branch
  name could be used in multiple places, yet represent semantically
  different branches.  Given files R, S, and T with branch B, we can
  tie those files' branch B into a "semantic group" whenever we see
  commit groups on a branch touching multiple files.  Files that have
  a (named) branch but no commits on it are simply ignored.

  For each "semantic group" of a branch, we'd create a branch based
  on their common ancestor, then make the changes on the children as
  necessary.  For single-file commits to a branch, we could use
  heuristics (pathname analysis) to add these to a group (and log
  what we did), or we could put them in a "reject" kind of file for a
  human to tell us what to do (the human would edit a config file of
  some kind to instruct the converter).

* if we have access to CVSROOT/history, then we could process tags
  properly.  otherwise, we can only use heuristics or configuration
  info to group up tags (branches can use commits; there are no
  commits associated with tags)

* ideally, we store every bit of data from the ,v files to enable a
  complete restoration of the CVS repository.  this could be done by
  storing properties with CVS revision numbers and such (i.e. all
  metadata not already embodied by SVN would go into properties)

* how do we track the "states"?  I presume "dead" simply means
  deleting the entry from SVN.  what are the other legal states, and
  do we need to do anything with them?

* where do we put the "description"?  how about locks, access list,
  keyword flags, etc.?

* note that using something like the SourceForge repository will be
  an ideal test case.
  people *move* their repositories there, which means that all kinds
  of stuff can be found in those repositories, from wherever people
  used to run them, and under whatever development policies may have
  been used.

  For example: I found a "permissions 644;" line in one of the
  projects in the "gnuplot" repository.  Most RCS releases issue
  warnings about that (although they properly handle/skip such
  lines).
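The lock-free mtime check described in the notes above could be
sketched like this.  This is a hypothetical helper, not part of
cvs2svn; parse_rcs_file stands in for whatever parser is actually
used:

```python
import os

def parse_with_mtime_check(path, parse_rcs_file, max_retries=3):
    """Parse a file without locking it, per the algorithm above: if
    the file's mtime changes during the read (a commit happened while
    we were parsing), discard the result and restart the file."""
    for _ in range(max_retries):
        before = os.stat(path).st_mtime
        try:
            result = parse_rcs_file(path)
        except Exception:
            if os.stat(path).st_mtime != before:
                continue  # file changed mid-read; restart it
            raise         # a genuine parse error
        if os.stat(path).st_mtime == before:
            return result  # clean read: mtime unchanged
        # parse "succeeded" but the mtime changed; restart anyway
    raise RuntimeError("file kept changing: %s" % path)
```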