Commit 5302a5c8 authored by Darrick J. Wong

xfs: only clear log incompat flags at clean unmount

While reviewing the online fsck patchset, someone spied the
xfs_swapext_can_use_without_log_assistance function and wondered why we
go through this inverted-bitmask dance to avoid setting the
XFS_SB_FEAT_INCOMPAT_LOG_SWAPEXT feature.

(The same principles apply to the logged extended attribute update
feature bit in the since-merged LARP series.)

The reason for this dance is that xfs_add_incompat_log_feature is an
expensive operation -- it forces the log, pushes the AIL, and then if
nobody's beaten us to it, sets the feature bit and issues a synchronous
write of the primary superblock.  That could be a one-time cost
amortized over the life of the filesystem, but the log quiesce and cover
operations call xfs_clear_incompat_log_features to remove feature bits
opportunistically.  On a moderately loaded filesystem this leads to us
cycling those bits on and off over and over, which hurts performance.

Why do we clear the log incompat bits?  Back in ~2020 I think Dave and I
had a conversation on IRC[2] about what the log incompat bits represent.
IIRC in that conversation we decided that the log incompat bits protect
unrecovered log items so that old kernels won't try to recover them and
barf.  Since a clean log has no protected log items, we could clear the
bits at cover/quiesce time.

As Dave Chinner pointed out in the thread, clearing log incompat bits at
unmount time also helps golden root disk image generator setups, since the
generator may run a newer kernel than the systems that will consume the
image.  If that newer kernel leaves log incompat bits set in the golden
image, a provisioning host running an older kernel cannot mount the
filesystem, even though the log is clean and recovery is unnecessary to
mount the filesystem.

Given that it's expensive to set log incompat bits, we really only want
to do that once per bit per mount.  Therefore, I propose that we only
clear log incompat bits as part of writing a clean unmount record.  Do
this by adding an operational state flag to the xfs mount that guards
whether or not the feature bit clearing can actually take place.

This eliminates the l_incompat_users rwsem that we used to protect a log
cleaning operation from clearing a feature bit that a frontend thread was
trying to set -- that lock added yet another way to fail with respect to
locking.  For the swapext series I had to shard it into multiple locks just
to work around the lockdep complaints, and that's fugly.

Link: https://lore.kernel.org/linux-xfs/20240131230043.GA6180@frogsfrogsfrogs/
Signed-off-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
parent 98a778b4
@@ -4047,9 +4047,6 @@ series.
 | one ``struct rw_semaphore`` for each feature.                            |
 | The log cleaning code tries to take this rwsem in exclusive mode to      |
 | clear the bit; if the lock attempt fails, the feature bit remains set.   |
-| Filesystem code signals its intention to use a log incompat feature in a |
-| transaction by calling ``xlog_use_incompat_feat``, which takes the rwsem |
-| in shared mode.                                                          |
 | The code supporting a log incompat feature should create wrapper         |
 | functions to obtain the log feature and call                             |
 | ``xfs_add_incompat_log_feature`` to set the feature bits in the primary  |
...
@@ -1448,7 +1448,7 @@ xfs_log_work_queue(
  * Clear the log incompat flags if we have the opportunity.
  *
  * This only happens if we're about to log the second dummy transaction as part
- * of covering the log and we can get the log incompat feature usage lock.
+ * of covering the log.
  */
 static inline void
 xlog_clear_incompat(
@@ -1463,11 +1463,7 @@ xlog_clear_incompat(
	if (log->l_covered_state != XLOG_STATE_COVER_DONE2)
		return;
 
-	if (!down_write_trylock(&log->l_incompat_users))
-		return;
	xfs_clear_incompat_log_features(mp);
-	up_write(&log->l_incompat_users);
 }
 
 /*
@@ -1585,8 +1581,6 @@ xlog_alloc_log(
	}
	log->l_sectBBsize = 1 << log2_size;
 
-	init_rwsem(&log->l_incompat_users);
-
	xlog_get_iclog_buffer_size(mp, log);
	spin_lock_init(&log->l_icloglock);
@@ -3871,23 +3865,3 @@ xfs_log_check_lsn(
	return valid;
 }
-
-/*
- * Notify the log that we're about to start using a feature that is protected
- * by a log incompat feature flag. This will prevent log covering from
- * clearing those flags.
- */
-void
-xlog_use_incompat_feat(
-	struct xlog		*log)
-{
-	down_read(&log->l_incompat_users);
-}
-
-/* Notify the log that we've finished using log incompat features. */
-void
-xlog_drop_incompat_feat(
-	struct xlog		*log)
-{
-	up_read(&log->l_incompat_users);
-}
@@ -159,8 +159,6 @@ bool xfs_log_check_lsn(struct xfs_mount *, xfs_lsn_t);
 xfs_lsn_t xlog_grant_push_threshold(struct xlog *log, int need_bytes);
 bool xlog_force_shutdown(struct xlog *log, uint32_t shutdown_flags);
 
-void xlog_use_incompat_feat(struct xlog *log);
-void xlog_drop_incompat_feat(struct xlog *log);
 int xfs_attr_use_log_assist(struct xfs_mount *mp);
 
 #endif	/* __XFS_LOG_H__ */
@@ -450,9 +450,6 @@ struct xlog {
	xfs_lsn_t		l_recovery_lsn;
 
	uint32_t		l_iclog_roundoff;/* padding roundoff */
-
-	/* Users of log incompat features should take a read lock. */
-	struct rw_semaphore	l_incompat_users;
 };
 
 /*
...
@@ -3496,21 +3496,6 @@ xlog_recover_finish(
	 */
	xfs_log_force(log->l_mp, XFS_LOG_SYNC);
 
-	/*
-	 * Now that we've recovered the log and all the intents, we can clear
-	 * the log incompat feature bits in the superblock because there's no
-	 * longer anything to protect. We rely on the AIL push to write out the
-	 * updated superblock after everything else.
-	 */
-	if (xfs_clear_incompat_log_features(log->l_mp)) {
-		error = xfs_sync_sb(log->l_mp, false);
-		if (error < 0) {
-			xfs_alert(log->l_mp,
-	"Failed to clear log incompat features on recovery");
-			goto out_error;
-		}
-	}
-
	xlog_recover_process_iunlinks(log);
 
	/*
...
@@ -1095,6 +1095,11 @@ xfs_unmountfs(
			"Freespace may not be correct on next mount.");
	xfs_unmount_check(mp);
 
+	/*
+	 * Indicate that it's ok to clear log incompat bits before cleaning
+	 * the log and writing the unmount record.
+	 */
+	xfs_set_done_with_log_incompat(mp);
	xfs_log_unmount(mp);
	xfs_da_unmount(mp);
	xfs_uuid_unmount(mp);
@@ -1364,7 +1369,8 @@ xfs_clear_incompat_log_features(
	if (!xfs_has_crc(mp) ||
	    !xfs_sb_has_incompat_log_feature(&mp->m_sb,
				XFS_SB_FEAT_INCOMPAT_LOG_ALL) ||
-	    xfs_is_shutdown(mp))
+	    xfs_is_shutdown(mp) ||
+	    !xfs_is_done_with_log_incompat(mp))
		return false;
 
	/*
...
@@ -412,6 +412,8 @@ __XFS_HAS_FEAT(nouuid, NOUUID)
 #define XFS_OPSTATE_WARNED_LARP		9
 /* Mount time quotacheck is running */
 #define XFS_OPSTATE_QUOTACHECK_RUNNING	10
+/* Do we want to clear log incompat flags? */
+#define XFS_OPSTATE_UNSET_LOG_INCOMPAT	11
 
 #define __XFS_IS_OPSTATE(name, NAME) \
 static inline bool xfs_is_ ## name (struct xfs_mount *mp) \
@@ -439,6 +441,7 @@ __XFS_IS_OPSTATE(quotacheck_running, QUOTACHECK_RUNNING)
 #else
 # define xfs_is_quotacheck_running(mp)	(false)
 #endif
+__XFS_IS_OPSTATE(done_with_log_incompat, UNSET_LOG_INCOMPAT)
 
 static inline bool
 xfs_should_warn(struct xfs_mount *mp, long nr)
@@ -457,7 +460,8 @@ xfs_should_warn(struct xfs_mount *mp, long nr)
	{ (1UL << XFS_OPSTATE_WARNED_SCRUB),		"wscrub" }, \
	{ (1UL << XFS_OPSTATE_WARNED_SHRINK),		"wshrink" }, \
	{ (1UL << XFS_OPSTATE_WARNED_LARP),		"wlarp" }, \
-	{ (1UL << XFS_OPSTATE_QUOTACHECK_RUNNING),	"quotacheck" }
+	{ (1UL << XFS_OPSTATE_QUOTACHECK_RUNNING),	"quotacheck" }, \
+	{ (1UL << XFS_OPSTATE_UNSET_LOG_INCOMPAT),	"unset_log_incompat" }
 
 /*
  * Max and min values for mount-option defined I/O
...
@@ -22,10 +22,7 @@
 /*
  * Get permission to use log-assisted atomic exchange of file extents.
- *
- * Callers must not be running any transactions or hold any inode locks, and
- * they must release the permission by calling xlog_drop_incompat_feat
- * when they're done.
+ * Callers must not be running any transactions or hold any ILOCKs.
  */
 static inline int
 xfs_attr_grab_log_assist(
@@ -33,16 +30,7 @@ xfs_attr_grab_log_assist(
 {
	int error = 0;
 
-	/*
-	 * Protect ourselves from an idle log clearing the logged xattrs log
-	 * incompat feature bit.
-	 */
-	xlog_use_incompat_feat(mp->m_log);
-
-	/*
-	 * If log-assisted xattrs are already enabled, the caller can use the
-	 * log assisted swap functions with the log-incompat reference we got.
-	 */
+	/* xattr update log intent items are already enabled */
	if (xfs_sb_version_haslogxattrs(&mp->m_sb))
		return 0;
@@ -52,31 +40,19 @@ xfs_attr_grab_log_assist(
	 * a V5 filesystem for the superblock field, but we'll require rmap
	 * or reflink to avoid having to deal with really old kernels.
	 */
-	if (!xfs_has_reflink(mp) && !xfs_has_rmapbt(mp)) {
-		error = -EOPNOTSUPP;
-		goto drop_incompat;
-	}
+	if (!xfs_has_reflink(mp) && !xfs_has_rmapbt(mp))
+		return -EOPNOTSUPP;
 
	/* Enable log-assisted xattrs. */
	error = xfs_add_incompat_log_feature(mp,
			XFS_SB_FEAT_INCOMPAT_LOG_XATTRS);
	if (error)
-		goto drop_incompat;
+		return error;
 
	xfs_warn_mount(mp, XFS_OPSTATE_WARNED_LARP,
 "EXPERIMENTAL logged extended attributes feature in use. Use at your own risk!");
 
	return 0;
-
-drop_incompat:
-	xlog_drop_incompat_feat(mp->m_log);
-	return error;
-}
-
-static inline void
-xfs_attr_rele_log_assist(
-	struct xfs_mount	*mp)
-{
-	xlog_drop_incompat_feat(mp->m_log);
 }
 
 static inline bool
@@ -100,7 +76,6 @@ xfs_attr_change(
	struct xfs_da_args	*args)
 {
	struct xfs_mount	*mp = args->dp->i_mount;
-	bool			use_logging = false;
	int			error;
 
	ASSERT(!(args->op_flags & XFS_DA_OP_LOGGED));
@@ -111,14 +86,9 @@ xfs_attr_change(
			return error;
 
		args->op_flags |= XFS_DA_OP_LOGGED;
-		use_logging = true;
	}
 
-	error = xfs_attr_set(args);
-	if (use_logging)
-		xfs_attr_rele_log_assist(mp);
-	return error;
+	return xfs_attr_set(args);
 }
...