author     Marco Elver <elver@google.com>        2019-11-26 15:04:05 +0100
committer  Thomas Gleixner <tglx@linutronix.de>  2020-06-11 08:03:24 +0200
commit     765dcd209947e7b3666c08fb109ab8b879f7a471 (patch)
tree       0dbe7fe72d9bd74804abfb90453138f8a6e997c1 /scripts/atomic/fallbacks/add_negative
parent     b29482fde649c72441d5478a4ea2c52c56d97a5e (diff)
asm-generic/atomic: Use __always_inline for fallback wrappers
Use __always_inline for atomic fallback wrappers. When building for size
(CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to
inline even relatively small static inline functions that are assumed to
be inlinable, such as atomic ops. This can cause problems, for example in
UACCESS regions.
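For context, __always_inline expands to the compiler's always_inline
attribute, which overrides the -Os inlining heuristics entirely. A minimal,
userspace-compilable sketch of the mechanism (the macro mirrors the kernel's
definition in include/linux/compiler_types.h; add_one() is a hypothetical
stand-in for a small wrapper):

  /* Mirrors the kernel's __always_inline definition: the attribute
   * forces inlining even when the -Os heuristics would decline. */
  #define __always_inline inline __attribute__((__always_inline__))

  /* Hypothetical stand-in for a fallback wrapper: with plain
   * "static inline", -Os may emit it out of line and call it;
   * with __always_inline the compiler has no such discretion. */
  static __always_inline int add_one(int x)
  {
  	return x + 1;
  }

  int main(void)
  {
  	return add_one(41) == 42 ? 0 : 1;
  }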
While the fallback wrappers aren't pure wrappers, they are trivial
nonetheless, and the function they wrap should determine the final
inlining policy.
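For example, the add_negative template changed in the diff below,
instantiated with ${atomic}=atomic and ${int}=int, expands to the following
wrapper in the generated fallback header:

  static __always_inline bool
  atomic_add_negative(int i, atomic_t *v)
  {
  	return atomic_add_return(i, v) < 0;
  }

With __always_inline, whether the operation ultimately ends up inline is
decided by atomic_add_return() itself, not by the trivial wrapper around it.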
For x86 tinyconfig we observe:
- vmlinux baseline: 1315988 bytes
- vmlinux with patch: 1315928 bytes (-60)
[ tglx: Cherry-picked from KCSAN ]
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Diffstat (limited to 'scripts/atomic/fallbacks/add_negative')
-rwxr-xr-x  scripts/atomic/fallbacks/add_negative  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/atomic/fallbacks/add_negative b/scripts/atomic/fallbacks/add_negative
index e6f4815637de..03cc2e07fac5 100755
--- a/scripts/atomic/fallbacks/add_negative
+++ b/scripts/atomic/fallbacks/add_negative
@@ -8,7 +8,7 @@ cat <<EOF
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static inline bool
+static __always_inline bool
 ${atomic}_add_negative(${int} i, ${atomic}_t *v)
 {
 	return ${atomic}_add_return(i, v) < 0;
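For reference, the same template also produces the 64-bit variant. A sketch
of the expected expansion, assuming the usual ${atomic}=atomic64 and
${int}=s64 instantiation used by the atomic fallback generation scripts:

  static __always_inline bool
  atomic64_add_negative(s64 i, atomic64_t *v)
  {
  	return atomic64_add_return(i, v) < 0;
  }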