back up the content of a file even if the content changes without changing mtime #1937

Open
opened 2013-03-27 18:31:05 +00:00 by zooko · 3 comments

From [//pipermail/tahoe-dev/2008-September/000809.html].

If an application writes to a file twice in quick succession, then the operating system may give that file the same `mtime` value both times. `mtime` granularity varies between OSes and filesystems, and is often coarser than you would wish:

¹ http://www.infosec.jmu.edu/documents/jmu-infosec-tr-2009-002.pdf

² http://msdn.microsoft.com/en-us/library/windows/desktop/ms724290%28v=vs.85%29.aspx

  • Linux/ext3 - 1 sec [¹]
  • Linux/ext4 - 1 nanosec [¹]; actually 1 millisec (observed by my experiment just now on linux 3.2, ext4)
  • FreeBSD/UFS - 1 sec [¹]
  • Mac - 1 sec [¹]
  • Windows/FAT - 2 sec, no timezone; when DST changes it is off by one hour until the next reboot [¹]
  • Windows/NTFS - 100 nanosec [¹]; possibly actually 1.6 microsec [²]?
  • Windows/* - `mtime` isn't necessarily updated until the filehandle is closed [¹, ²]

Note that FAT is the standard filesystem for removable media (isn't it?), so it is actually very common.

Now the problem is: what happens if

  1. an application writes some data, `D1`, into a file, and the timestamp gets updated to `T1`, and then

  2. `tahoe backup` reads `D1`, and then

  3. the app writes some new data, `D2`, and the timestamp doesn't get updated because steps 2 and 3 happened within the filesystem's granularity?

What happens is that `tahoe backup` has saved `D1`, but from then on it will never save `D2`, because it falsely believes it has already done so: the file's timestamp is still `T1`. If this were to happen in practice, the effect for the user would be that when they go to read the file from Tahoe-LAFS, they find the previous version of its contents — `D1` — and not the most recent version — `D2`. This unfortunate user would probably have no way to figure out what happened, and would justly blame Tahoe-LAFS for being unreliable.
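To make the race concrete, here is a minimal demonstration sketch (hypothetical demo code, not part of tahoe) of the three steps above. On a filesystem with coarse `mtime` granularity, such as FAT's 2 seconds, both `stat` calls return the same timestamp, so any tool that trusts mtimes concludes the file is unchanged:

```python
# Hypothetical demonstration of the race; not tahoe code.
import os

path = "demo.txt"

with open(path, "w") as f:        # step 1: the app writes D1
    f.write("D1")
t1 = os.stat(path).st_mtime       # the backup tool records T1

with open(path) as f:             # step 2: tahoe backup reads D1
    backed_up = f.read()

with open(path, "w") as f:        # step 3: the app writes D2 within the
    f.write("D2")                 # filesystem's timestamp granularity
t2 = os.stat(path).st_mtime

# On a coarse-granularity filesystem t2 == t1, so an mtime-comparing
# backup tool will never notice D2.
print(t1 == t2)
```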

The same problem can happen if the timestamp of a file gets reset to an earlier value, such as with the `touch -t` unix command, or by the system clock getting moved. (The system clock getting moved happens surprisingly often in the wild.)

A user can avoid this problem by passing `--ignore-timestamps` to `tahoe backup`, which will cause that run of `tahoe backup` to reupload every file. That is very expensive in terms of time, disk, and CPU usage (even if the files get deduplicated by the servers).

zooko added the code, normal, defect, 1.9.2 labels 2013-03-27 18:31:05 +00:00
zooko added this to the undecided milestone 2013-03-27 18:31:05 +00:00
Author

Here's a proposed solution which avoids the failure of preservation due to the race condition. This solution does not address the problem due to timestamps getting reset, e.g. by `touch -t` or by the system clock getting moved.

Let `G` be the local filesystem's worst-case Granularity in seconds times some fudge factor, such as `2`. So if the filesystem is FAT, let `G=4`; if the filesystem is ext4, let `G=0.002`; if the filesystem is NTFS, let `G=0.004`; else let `G=2`.

When `tahoe backup` examines a file, if the file's current `mtime` is within `G` seconds of the current time, then don't read its contents at that time, but instead delay for `G` seconds and then try again.
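A minimal sketch of that check, assuming a hypothetical helper that `tahoe backup` would call before reading each file (this helper does not exist in tahoe today):

```python
import os
import time

def wait_until_mtime_is_stale(path, g=2.0):
    """Delay until the file's mtime is at least `g` seconds in the past,
    so that any write racing with our read would have to bump the mtime."""
    while time.time() - os.stat(path).st_mtime < g:
        time.sleep(g)  # delay for G seconds, then examine the file again
```

Note that this sketch assumes the local clock and the filesystem's mtime clock roughly agree; a skewed or moved clock (the case this proposal explicitly doesn't address) could make a freshly written file look stale, or vice versa.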

daira commented 2013-03-28 04:15:32 +00:00
Owner

If we use the approach of [comment:91280](/tahoe-lafs/trac-2024-07-25/issues/1937#issuecomment-91280), then I suggest using a fixed G = 4s instead of trying to guess what the timestamp granularity is. Also, after the file has been uploaded we should check the `mtime` again, in case it was modified while we were reading it.

Short of making a shadow copy on filesystems that support it, it's not possible, using POSIX APIs, to get a completely consistent snapshot of a filesystem that is being modified.
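A sketch combining both suggestions, assuming a fixed `G = 4` and a hypothetical `upload` callable that reads and stores the file's contents (names are illustrative, not tahoe's actual API):

```python
import os
import time

G = 4.0  # fixed guess, instead of detecting the filesystem's granularity

def backup_one_file(path, upload):
    """Upload `path`, retrying whenever the mtime shows the file may
    have changed out from under us."""
    while True:
        # Don't read a file whose mtime is too fresh: a same-mtime
        # rewrite could still be in flight.
        while time.time() - os.stat(path).st_mtime < G:
            time.sleep(G)
        before = os.stat(path).st_mtime
        upload(path)                       # read + upload the contents
        if os.stat(path).st_mtime == before:
            return                         # mtime stable across the read
        # mtime moved while we were reading; upload the newer contents
```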

Author

Replying to [daira](/tahoe-lafs/trac-2024-07-25/issues/1937#issuecomment-91281):

> If we use the approach of [comment:91280](/tahoe-lafs/trac-2024-07-25/issues/1937#issuecomment-91280), then I suggest using a fixed G = 4s instead of trying to guess what the timestamp granularity is.

+1

> Also, after the file has been uploaded we should check the mtime again, in case it was modified while we were reading it.
>
> Short of making a shadow copy on filesystems that support it, it's not possible to get a completely consistent snapshot of a filesystem that is being modified, using POSIX APIs.

Hm, I think this is a separate issue. The problem that *this* ticket seeks to address is that different-contents-same-`mtime` can lead to data loss. The issue you raise in this comment is, I think, #427.
