Commit 54d3a6da authored by bescoto

Mainly updates for python 2.3


git-svn-id: http://svn.savannah.nongnu.org/svn/rdiff-backup@373 2b77aa54-bcbc-44c9-a7ec-4f6cf2b41109
parent 778721f2
@@ -16,7 +16,8 @@ Patch by Jeffrey Marshall fixes socket/fifo recognition on Mac OS X
Patch by Jeffrey Marshall fixes --calculate-average mode, which seems
to have broken recently.
rdiff-backup should now work with python 2.3.
rdiff-backup should now work with python 2.3. Thanks to Arkadiusz
Miskiewicz for bug reports.
New in v0.13.0 (2003/07/22)
@@ -26,6 +26,7 @@ bandwidth usage, as in rsync's --bwlimit option?</a></li>
<li><a href="#leak">How much memory should rdiff-backup use? Is there a
memory leak?</a></li>
<li><a href="#dir_not_empty">I use NFS and keep getting some error that includes "OSError: [Errno 39] Directory not empty"</a></li>
</ol>
<h3>Questions and Answers</h3>
@@ -174,13 +175,10 @@ Next time you back up, you run
so that /usr/local is no longer copied to /backup/usr/local.
However, old information about /usr/local is still present in
/backup/rdiff-backup-data/increments/usr/local. You could wait for
this information to expire and then run rdiff-backup with the
--remove-older-than option, or you could remove the increments
manually by typing:
<pre>rm -rf /backup/rdiff-backup-data/increments/usr/local
rm /backup/rdiff-backup-data/increments/usr/local.*.dir</pre>
/backup/rdiff-backup-data/increments/usr/local. You can try to
manually remove this old information, but it's safer to let it be
removed by rdiff-backup when you run it with the --remove-older-than
option.
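<p> For example, assuming you back up to /backup as above and settle on
a one-year retention period (the interval here is just an illustration;
any of rdiff-backup's time formats, such as 2W or 30D, works), something
like the following should clear out the stale increments along with
everything else older than the cutoff:
<pre>rdiff-backup --remove-older-than 1Y /backup</pre>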
</li>
<P>
@@ -235,11 +233,10 @@ the total number of files. Initial mirrorings will usually be
bandwidth or disk bound, and will take much longer than subsequent
updates.
<P>To give two arbitrary data points, when I back up my personal HD
locally (about 9GB, 600000 files, maybe 50 MB turnover, 1.1Ghz athlon)
rdiff-backup takes about 35 minutes and is usually CPU bound. Another
user reports an rdiff-backup session takes about 3 hours (80GB, ~1mil
files, 2GB turnover) to back up remotely Tru64 -> linux.
<P>To give one arbitrary data point, when I back up my personal HD
locally (about 36GB, 530000 files, maybe 500 MB turnover, athlon 2000,
7200 IDE disks, version 0.12.2) rdiff-backup takes about 15 minutes
and is usually CPU bound.
</li>
<p>
@@ -348,7 +345,7 @@ rdiff-backup to exceed it for significant periods.</li>
<p>
Another option is to limit bandwidth at a lower (and perhaps more
appropriate) level. Adam Lazur mentions <a
href="http://lartc.org/wondershaper/">The Wonder Shaper</a>.
href="http://lartc.org/wondershaper/">The Wonder Shaper</a>.</p>
</li>
<a name="leak">
@@ -366,5 +363,27 @@ leaks lots of memory.</strong> Version 0.9.5.1 should not leak and is
available from the rdiff-backup homepage.
</li>
<a name="dir_not_empty">
<li><strong>I use NFS and keep getting some error that includes "OSError: [Errno 39] Directory not empty"</strong>
<P>Several users have reported seeing errors that contain lines like
this:
<pre>
File "/usr/lib/python2.2/site-packages/rdiff_backup/rpath.py",
line 661, in rmdir
OSError: [Errno 39] Directory not empty:
'/nfs/backup/redfish/win/Program Files/Common Files/GMT/Banners/11132'
Exception exceptions.TypeError: "'NoneType' object is not callable"
in &lt;bound method GzipFile.__del__ of
</pre>
<p> All of these users were backing up onto NFS (Network File System).
I think this is probably a bug in NFS, although tell me if you know
how to make rdiff-backup more NFS-friendly. To avoid this problem,
run rdiff-backup locally on both ends instead of over NFS. This
should be faster anyway.
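<p> In other words, rather than writing into an NFS mount like
/nfs/backup, let rdiff-backup contact the file server over SSH so that
each side only touches local disks. The host and paths below are
placeholders; adjust them for your setup:
<pre>rdiff-backup /some/local/dir user@backupserver::/backup/some/dir</pre>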
</li>
</ol>
write test case for --calculate-statistics
Fix for Python 2.3
Change for librsync 0.9.6 when it comes out
@@ -954,12 +954,16 @@ class RPath(RORPath):
        This can be useful for directories.

        """
        if not fp:
            fp = self.open("rb")
            os.fsync(fp.fileno())
            assert not fp.close()
        if not fp: self.conn.rpath.RPath.fsync_local(self)
        else: os.fsync(fp.fileno())

    def fsync_local(self):
        """fsync current file, run locally"""
        assert self.conn is Globals.local_connection
        fd = os.open(self.path, os.O_RDONLY)
        os.fsync(fd)
        os.close(fd)

    def fsync_with_dir(self, fp = None):
        """fsync self and directory self is under"""
        self.fsync(fp)
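
The new fsync_local boils down to syncing a file given only its path:
open a read-only descriptor, fsync it, close it. A minimal standalone
sketch of that sequence (the helper name and the temporary file are
only for illustration and are not part of rdiff-backup):

import os
import tempfile

def fsync_path(path):
    """Flush a file to disk given only its path, mirroring the
    os.open/os.fsync/os.close sequence used by fsync_local."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)
    finally:
        os.close(fd)

# usage: write a temporary file, then make sure it has reached the disk
fd, name = tempfile.mkstemp()
os.write(fd, b"example data")
os.close(fd)
fsync_path(name)
os.remove(name)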