2008-11-04 10:50 PM
We're in a situation where we need to migrate our existing Subversion (SVN) storage onto our NetApp storage, for the purposes of storage consolidation, flexibility in increasing capacity, and using NetApp Snapshot technology for backup. Our Subversion server is running on the Solaris 10 (x86) operating system with Subversion 1.4.6.
Your inputs and solutions (storage and backup) for Subversion on NetApp would be highly appreciated.
Message was edited by: Antoni Mutia
2008-11-04 11:56 PM
Should I store my repository / working copy on a NFS server?
If you are using a repository with the Berkeley DB back end (default for repositories
created with Subversion 1.0 and 1.1, not the default thereafter), we recommend not
storing the repository on a remote filesystem (for example, NFS). While Berkeley DB
databases and log files can be stored on remote filesystems, the Berkeley DB shared
region files cannot be stored on a remote filesystem, so the repository may be safely
accessed by only a single filesystem client, and not all Subversion functionality will
be available to even that one client.
If you are using the FSFS repository back end, then storing the repository on a modern
NFS server (i.e., one that supports locking) should be fine.
Working copies can be stored on NFS (one common scenario is when your home directory is on an NFS server). On Linux NFS servers, due to the volume of renames Subversion uses internally when checking out files, some users have reported that 'subtree checking' should be disabled (it's enabled by default). Please see the NFS Howto Server Guide and exports(5) for more information on how to disable subtree checking.
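For example, a Linux /etc/exports entry with subtree checking disabled might look like this (the path and client range are placeholders):

```
# /etc/exports -- export holding Subversion working copies
/export/home  192.168.1.0/24(rw,sync,no_subtree_check)
```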
We've had at least one report of working copies getting wedged after being accessed
via SMB. The server in question was running a rather old version of Samba (2.2.7a).
The problem didn't recur with a newer Samba (3.0.6).
2008-11-04 11:58 PM
Here are some good points on backing up and restoring FSFS repositories in Subversion:
* Standard backup software
An FSFS repository can be backed up with standard backup software.
Since old revision files don't change, incremental backups with
standard backup software are efficient. (See "Note: Backups" for caveats.)
(BDB repositories can be backed up using "svnadmin hotcopy" and can be
backed up incrementally using "svnadmin dump". FSFS just makes it easier.)
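Because committed revision files in FSFS never change, an incremental pass only needs to copy files the backup does not already have. A minimal Python sketch of that idea (paths and function name are hypothetical; assumes the flat, unsharded db/revs layout that Subversion 1.4 uses):

```python
import os
import shutil

def incremental_backup(repo, backup):
    """Copy revision files the backup lacks; files already present are
    skipped, because committed revision files never change in FSFS."""
    for sub in ("revs", "revprops"):
        src_dir = os.path.join(repo, "db", sub)
        dst_dir = os.path.join(backup, "db", sub)
        os.makedirs(dst_dir, exist_ok=True)
        for name in os.listdir(src_dir):
            dst = os.path.join(dst_dir, name)
            if not os.path.exists(dst):
                shutil.copy2(os.path.join(src_dir, name), dst)
```

Running this repeatedly only ever transfers new revisions, which is exactly why standard incremental backup tools handle FSFS well.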
* Can split up repository across multiple spools
If an FSFS repository is outgrowing the filesystem it lives on, you
can symlink old revisions off to another filesystem.
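On disk that amounts to moving the old revision file and leaving a symlink at its original path. A hypothetical sketch (function name and layout assumptions are mine; assumes the flat db/revs layout of Subversion 1.4):

```python
import os
import shutil

def spool_old_revision(repo, rev, spool_dir):
    """Move one old revision file to another filesystem and symlink it
    back, so Subversion still finds it at its original path."""
    src = os.path.join(repo, "db", "revs", str(rev))
    dst = os.path.join(spool_dir, str(rev))
    shutil.move(src, dst)   # works across filesystems (copy + unlink)
    os.symlink(dst, src)
```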
If a process terminates abnormally during a read operation, it should
leave behind no traces in the repository, since read operations do not
modify the repository in any way.
If a process terminates abnormally during a commit operation, it will
leave behind a stale transaction, which will not interfere with
operation and which can be removed with a normal recursive delete.
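Under FSFS a transaction is just a directory under db/transactions, so the cleanup really is a recursive delete. A hedged sketch ("svnadmin rmtxns" is the supported tool; this mirrors what it does on disk, and must only run when no commit is in progress):

```python
import os
import shutil

def remove_stale_transactions(repo):
    """Recursively delete leftover transaction directories. Run this
    only when no commit is in progress, or live work will be lost."""
    txn_root = os.path.join(repo, "db", "transactions")
    for name in os.listdir(txn_root):
        shutil.rmtree(os.path.join(txn_root, name))
```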
If a process terminates abnormally during the final phase of a commit
operation, it may be holding the write lock. The way locking is
currently implemented, a dead process should not be able to hold a
lock, but over a remote filesystem that guarantee may not apply.
Also, in the future, FSFS may have optional support for
NFSv2-compatible locking which would allow for the possibility of
stale locks. In either case, the write-lock file can simply be
removed to unblock commits, and read operations will remain unaffected.
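Since the FSFS write lock is itself just a file (db/write-lock), unblocking commits after a crash is a one-file removal. A hypothetical helper (only safe once you have confirmed no live process still holds the lock):

```python
import os

def clear_stale_write_lock(repo):
    """Remove a stale FSFS write-lock file to unblock commits.
    Verify first that no live process is holding the lock."""
    lock = os.path.join(repo, "db", "write-lock")
    if os.path.exists(lock):
        os.remove(lock)
```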
Locking is currently implemented using the apr_file_lock() function,
which on Unix uses fcntl() locking, and on Windows uses LockFile().
Modern remote filesystem implementations should support these
operations, but may not do so perfectly, and NFSv2 servers may not
support them at all.
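The Unix side of this can be seen with Python's fcntl module, which wraps the same fcntl() primitive that apr_file_lock() uses (a throwaway demo, not Subversion code):

```python
import fcntl
import os
import tempfile

# Take and release an exclusive fcntl() advisory lock, the primitive
# FSFS relies on (via apr_file_lock) on Unix.
path = os.path.join(tempfile.mkdtemp(), "write-lock")
with open(path, "w") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)   # exclusive lock; blocks if contended
    # ... critical section: e.g. the final phase of a commit ...
    fcntl.lockf(f, fcntl.LOCK_UN)   # release
```

An important property of fcntl() locks is that the kernel drops them when the holding process dies, which is why a dead process should not be able to wedge the repository on a local filesystem.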
It is possible to do exclusive locking under basic NFSv2 using a
complicated dance involving link(). It's possible that FSFS will
evolve to allow NFSv2-compatible locking, or perhaps just basic O_EXCL
locking, as a repository configuration option.
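The O_EXCL variant is the simplest of the schemes mentioned: acquiring the lock is an exclusive create of the lock file. A sketch (function names are made up; note the trade-off the text describes, that a file left by a dead process becomes a stale lock needing manual removal):

```python
import os

def try_acquire(lockfile):
    """O_EXCL-style lock: succeeds only if we created the file.
    A file surviving from a dead process shows up as a stale lock."""
    try:
        fd = os.open(lockfile, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release(lockfile):
    os.remove(lockfile)
```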
Naively copying an FSFS repository while a commit is taking place
could result in an easily-repaired inconsistency in the backed-up
repository. The backed-up "current" file could wind up referring to a
new revision which wasn't copied, or which was only partially
populated when it was copied.
The "svnadmin hotcopy" command avoids this problem by copying the
"current" file before copying the revision files. But a backup using
the hotcopy command isn't as efficient as a straight incremental
backup. FSFS may evolve so that "svnadmin recover" (currently a
no-op) knows how to recover from the inconsistency which might result
from a naive backup.
Naively copying an FSFS repository might also copy in-progress
transactions, which would become stale and take up extra room until
manually removed. "svnadmin hotcopy" does not copy in-progress
transactions from an FSFS repository, although that might need to
change if Subversion starts making use of long-lived transactions.
So, if you are using standard backup tools to make backups of an FSFS
repository, configure the software to copy the "current" file before
the numbered revision files, if possible, and configure it not to copy
the "transactions" directory. If you can't do those things, use
"svnadmin hotcopy", or be prepared to cope with the very occasional
need for manual repair of the repository upon restoring it from backup.
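Put together, a copy that follows the advice above takes "current" first and never touches "transactions". A hedged Python sketch of that ordering (hypothetical paths; assumes the 1.4-era flat layout):

```python
import os
import shutil

def current_first_backup(repo, backup):
    """Copy db/current before the revision files, and skip
    db/transactions entirely, per the ordering advice above."""
    os.makedirs(os.path.join(backup, "db"), exist_ok=True)
    shutil.copy2(os.path.join(repo, "db", "current"),
                 os.path.join(backup, "db", "current"))
    for sub in ("revs", "revprops"):
        shutil.copytree(os.path.join(repo, "db", sub),
                        os.path.join(backup, "db", sub))
    # db/transactions is deliberately not copied
```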
2008-11-26 04:06 AM
I'm wondering whether NetApp perhaps doesn't have any solutions for SVN backup and restore. Can we or can we not leverage NetApp's Snapshot technology for SVN?
2008-11-27 01:29 AM
Just create a snapshot of the volume holding the FSFS repository and back up the snapshots using standard backup software, or use SnapVault. You have already done good research on the effects of backing up the repository; what applies to standard backup software should also apply to snapshots. Snapshots are simply more efficient, and they make you immune to some of the pitfalls of using ordinary backup software to back up the repository. The same goes for recovery.
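For example, the manual version of this with the 7-mode Data ONTAP CLI would be roughly the following (volume and snapshot names are placeholders; check the syntax for your ONTAP version):

```
filer> snap create svnvol svn_nightly    # point-in-time snapshot of the volume
filer> snap list svnvol                  # verify it exists
```

Because a snapshot captures the volume at a single point in time, it also sidesteps the ordering problem of the "current" file being copied before the revision files during a slow file-by-file copy.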