dtfs Design Issues -- Syncs and Checkpoints


Latest Modification: May 5th, 1998


Syncing And Checkpoints

Checkpoint blocks don't have to be updated every time a checkpoint is written.

Conclusion: Implementation planned.

Checkpoint blocks are just here to speed up mounting of a file system, since the latest checkpoint could also be found by sequentially scanning the log. So it does no harm when a checkpoint block refers to an older checkpoint: we can find the latest checkpoint by following the partial segments in the log from that older checkpoint onwards. (The cleaner must be aware of this and must not clean segments that are younger than the checkpoint referenced in the checkpoint block.)

However, to speed up mounting of the file system, the checkpoint blocks should always be updated to refer directly to the latest checkpoint before unmounting.
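The recovery walk described above can be sketched as follows. This is a toy in-memory model, not dtfs' actual on-disk layout; the names `PartialSegment`, `seqno`, and `find_latest_checkpoint` are illustrative. Each partial segment records where the next one in the log lives, so even a stale checkpoint block still leads to the newest state:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PartialSegment:
    seqno: int           # monotonically increasing write sequence number
    next: Optional[int]  # index of the next partial segment; None = head of log

def find_latest_checkpoint(log, start):
    """Follow the chain of partial segments from a possibly stale
    checkpoint to the newest one -- this is why an out-of-date
    checkpoint block is harmless (merely slower to mount)."""
    cur = start
    while log[cur].next is not None:
        cur = log[cur].next
    return cur

# The checkpoint block still references segment 0, but 3 is the newest:
log = [PartialSegment(10, 1), PartialSegment(11, 2),
       PartialSegment(12, 3), PartialSegment(13, None)]
assert find_latest_checkpoint(log, 0) == 3
```

Updating the checkpoint block at unmount time simply makes `start` equal to the latest segment, so the loop terminates immediately on the next mount.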

Preserving the sequence of writes

Conclusion: Not a design issue, a discussion result.

dtfs will not guarantee that the sequence of write operations to two files that are written concurrently is preserved (I think no Unix file system does that...). However, dtfs can guarantee that the order of sync operations on files is preserved.

Adding syncs

Conclusion: Not a design issue, a discussion result.

Since dtfs (like any file system) may decide to write blocks of a file out to the device at any time, this behaviour can be viewed as the file system adding syncs at will ("hidden syncs"). So the file system has the right to sync any file at any time.

Directory operations log

Conclusion: No decision so far, favoring segment batching.

Currently, dtfs' on-disk data structures support a directory operations log. However, the BSD LFS implementation shows that an LFS can get along without dirop logs. Should dirop logs actually be used, or should we do segment batching as it is done in BSD LFS?
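The batching alternative can be illustrated with a toy model (purely a sketch; neither dtfs' nor BSD LFS' real data structures, and `batch_dirop` and its capacity are invented for illustration). The point of segment batching is that all blocks dirtied by one directory operation, e.g. both directories touched by a rename, land in the same partial segment, so after a crash the log contains either the whole operation or none of it:

```python
def batch_dirop(pending, blocks, segment_capacity=8):
    """Append one directory operation's dirty blocks to the pending
    partial segment. If they would not fit, flush the pending segment
    first -- a dirop is never split across two segments."""
    flushed_segments = []
    if len(pending) + len(blocks) > segment_capacity:
        flushed_segments.append(pending[:])  # write out as one atomic unit
        pending.clear()
    pending.extend(blocks)
    return flushed_segments

pending = []
flushed = batch_dirop(pending, ["dirA", "dirB"])  # a rename dirties 2 blocks
flushed += batch_dirop(pending, ["dirC"] * 7)     # too big to share a segment
assert flushed == [["dirA", "dirB"]]  # first dirop flushed intact
assert pending == ["dirC"] * 7        # second dirop kept together
```

A dirop log records the operation explicitly and replays it during recovery; batching instead makes the ordinary segment write atomic enough that no replay record is needed, which is what lets BSD LFS get along without dirop logs.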


Christian Czezakte, email: e9025461@student.tuwien.ac.at