author	Krzesimir Nowak <krzesimir@kinvolk.io>	2019-05-08 18:08:58 +0200
committer	Daniel Borkmann <daniel@iogearbox.net>	2019-05-13 02:05:50 +0200
commit	e2f7fc0ac6957cabff4cecf6c721979b571af208 (patch)
tree	caaf45b1359a0a04f475b0b434c8a34a1d1d81e7 /kernel
parent	d7c4b3980c18e81c0470f5df6d96d832f446d26f (diff)
bpf: fix undefined behavior in narrow load handling
Commit 31fd85816dbe ("bpf: permits narrower load from bpf program context fields") made the verifier add AND instructions to clear the unwanted bits with a mask when doing a narrow load. The mask is computed with (1 << size * 8) - 1, where "size" is the size of the narrow load. When doing a 4 byte load of an 8 byte field, the verifier shifts the literal 1 by 32 places to the left. This overflows a signed integer, which is undefined behavior. Typically, the computed mask was zero, so the result of the narrow load ended up being zero too.

Cast the literal to unsigned long long to avoid the overflow. Note that a narrow load of a 4 byte field does not have this undefined behavior, because the load size can only be 1 or 2 bytes, so shifting 1 by 8 or 16 places cannot overflow. Reading 4 bytes would not be a narrow load of a 4 byte field.

Fixes: 31fd85816dbe ("bpf: permits narrower load from bpf program context fields")
Reviewed-by: Alban Crequy <alban@kinvolk.io>
Reviewed-by: Iago López Galeiras <iago@kinvolk.io>
Signed-off-by: Krzesimir Nowak <krzesimir@kinvolk.io>
Cc: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
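[Editor's note: the following is an illustrative userspace sketch of the mask arithmetic described in the commit message, not part of the commit itself; the variable names are made up.]

#include <stdio.h>

int main(void)
{
	int size = 4;	/* narrow 4 byte load of an 8 byte context field */

	/* Old expression: the literal 1 is a 32-bit int, so shifting it left
	 * by size * 8 = 32 places overflows a signed integer, which is
	 * undefined behavior; compilers commonly produce 0 here, which then
	 * zeroes the narrow load:
	 *
	 *	unsigned int bad_mask = (1 << size * 8) - 1;
	 */

	/* Fixed expression: 1ULL is unsigned long long (at least 64 bits),
	 * so the shift by 32 is well defined and the mask is 0xffffffff. */
	unsigned long long mask = (1ULL << size * 8) - 1;

	printf("mask for a %d byte narrow load: %#llx\n", size, mask);
	return 0;
}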
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/bpf/verifier.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7b05e8938d5c..95f9354495ad 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -7599,7 +7599,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
insn->dst_reg,
shift);
insn_buf[cnt++] = BPF_ALU64_IMM(BPF_AND, insn->dst_reg,
- (1 << size * 8) - 1);
+ (1ULL << size * 8) - 1);
}
}
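[Editor's note: as a follow-up illustration of what the emitted AND is for, here is another hedged userspace sketch, not the verifier code; the field value is made up. The mask keeps only the low "size" bytes of the value read from the wider context field.]

#include <stdio.h>

int main(void)
{
	unsigned long long ctx_field = 0x1122334455667788ULL;	/* 8 byte ctx field */
	int size = 4;						/* 4 byte narrow load */

	/* The BPF_ALU64_IMM(BPF_AND, ...) emitted by the verifier uses this
	 * mask to clear the unwanted upper bits after the narrow load. */
	unsigned long long mask = (1ULL << size * 8) - 1;

	printf("narrow load result: %#llx\n", ctx_field & mask);	/* 0x55667788 */
	return 0;
}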