author     Ben Pfaff <blp@nicira.com>  2014-07-10 16:48:16 -0700
committer  Ben Pfaff <blp@nicira.com>  2014-07-10 16:48:16 -0700
commit     0791315e4dbf82e293a759086ee17c84b12b1d6e
tree       5fba4741ad9584314ab32d93e8487c4c9a561156 /lib/netlink-socket.c
parent     7f8e264600ec442f7cccc357bf299172419d6da9
netlink-socket: Work around kernel Netlink dump thread races.
The Linux kernel Netlink implementation has two races that cause problems
for processes that attempt to dump a table in a multithreaded manner.
The first race is in the structure of the kernel netlink_recvmsg() function.
This function pulls a message from the socket queue and, if there is none,
reports EAGAIN:
    skb = skb_recv_datagram(sk, flags, noblock, &err);
    if (skb == NULL)
        goto out;
Only if a message is successfully read from the socket queue does the
function, toward the end, try to queue up a new message to be dumped:
    if (nlk->cb && atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf / 2) {
        ret = netlink_dump(sk);
        if (ret) {
            sk->sk_err = ret;
            sk->sk_error_report(sk);
        }
    }
This means that if thread A reads a message from a dump and thread B attempts
to read one before A queues up the next, then B will get EAGAIN. Following
EAGAIN, B therefore needs to wait until A returns to userspace before it
tries to read the socket again. nl_dump_next() already does this, using
'dump->status_seq' (although, to my knowledge, the need for it has never
been clearly explained).
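To make that concrete, here is a rough sketch of the wait-and-retry pattern.
It is a simplification, not the actual nl_dump_next() code: it assumes OVS's
seq API (seq_read(), seq_wait(), seq_change() from lib/seq.h), poll_block()
from lib/poll-loop.h, and a made-up helper name:

    /* Sketch only -- not the real nl_dump_next().  Retries a recv that
     * fails with EAGAIN because of the kernel race described above. */
    static int
    dump_recv_retry(struct nl_dump *dump, struct ofpbuf *buffer)
    {
        for (;;) {
            /* Snapshot status_seq before the recv so that a wakeup between
             * the recv and seq_wait() below cannot be lost. */
            uint64_t seq = seq_read(dump->status_seq);

            int retval = nl_sock_recv__(dump->sock, buffer, false);
            if (retval != EAGAIN) {
                /* We made progress (or hit a real error).  The kernel has
                 * had its chance to queue the next message, so wake any
                 * threads blocked in seq_wait() to let them retry. */
                seq_change(dump->status_seq);
                return retval;
            }

            /* EAGAIN: another thread took the last queued message but has
             * not yet returned to userspace, so the kernel has not queued
             * the next one.  Wait for that thread to bump status_seq, then
             * try again. */
            seq_wait(dump->status_seq, seq);
            poll_block();
        }
    }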
The second race is more serious. Suppose thread X and thread Y both
simultaneously attempt to queue up a new message to be dumped, using the
call to netlink_dump() quoted above. netlink_dump() begins with:
    mutex_lock(nlk->cb_mutex);
    cb = nlk->cb;
    if (cb == NULL) {
        err = -EINVAL;
        goto errout_skb;
    }
Suppose that X gets cb_mutex first and finds that the dump is complete. It
will therefore, toward the end of netlink_dump(), clear nlk->cb to NULL to
indicate that no dump is in progress and release the mutex:
    nlk->cb = NULL;
    mutex_unlock(nlk->cb_mutex);
When Y grabs cb_mutex afterward, it will see that nlk->cb is NULL and
return -EINVAL as quoted above. netlink_recvmsg() stuffs -EINVAL in sk_err,
but that error is not reported immediately; instead, it is saved for the
next read from the socket. Since Open vSwitch maintains a pool of Netlink
sockets, that next failure can crop up pretty much anywhere. One of the
worst places for it to surface is in the execution of a later transaction
(e.g. in nl_sock_transact_multiple__()), because userspace treats Netlink
transactions as idempotent and will re-execute them when socket errors
occur. For a transaction that sends a packet, this causes packet
duplication, which we actually observed in practice. (ENOBUFS should
actually cause transactions to be re-executed in many cases, but EINVAL
should not; this is a separate bug in the userspace netlink code.)
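The workaround, shown in full in the diff below, is to serialize the recvs
on each dump socket with a per-dump mutex, so that two threads can never be
inside the kernel's netlink_dump() for the same socket at the same time.
The heart of it is just:

    /* One mutex per dump serializes the recv (and therefore the in-kernel
     * call to netlink_dump()) on that dump's socket. */
    ovs_mutex_lock(&dump->mutex);
    retval = nl_sock_recv__(dump->sock, buffer, false);
    ovs_mutex_unlock(&dump->mutex);

Only the recv itself needs to be serialized: threads still parse the replies
they receive in parallel.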
VMware-BZ: #1283188
Reported-and-tested-by: Alex Wang <alexw@nicira.com>
Signed-off-by: Ben Pfaff <blp@nicira.com>
Acked-by: Alex Wang <alexw@nicira.com>
Diffstat (limited to 'lib/netlink-socket.c')

 lib/netlink-socket.c | 10 ++++++++++
 1 file changed, 10 insertions(+), 0 deletions(-)
diff --git a/lib/netlink-socket.c b/lib/netlink-socket.c
index 42ba7e1fc..8fd393340 100644
--- a/lib/netlink-socket.c
+++ b/lib/netlink-socket.c
@@ -714,6 +714,7 @@ nl_dump_start(struct nl_dump *dump, int protocol, const struct ofpbuf *request)
     atomic_init(&dump->status, status << 1);
     dump->nl_seq = nl_msg_nlmsghdr(request)->nlmsg_seq;
     dump->status_seq = seq_create();
+    ovs_mutex_init(&dump->mutex);
 }
 
 /* Attempts to retrieve another reply from 'dump' into 'buffer'.  'dump' must
@@ -756,7 +757,15 @@ nl_dump_next(struct nl_dump *dump, struct ofpbuf *reply, struct ofpbuf *buffer)
         return false;
     }
 
+    /* Take the mutex here to avoid an in-kernel race.  If two threads try
+     * to read from a Netlink dump socket at once, then the socket error
+     * can be set to EINVAL, which will be encountered on the next recv on
+     * that socket, which could be anywhere due to the way that we pool
+     * Netlink sockets.  Serializing the recv calls avoids the issue. */
+    ovs_mutex_lock(&dump->mutex);
     retval = nl_sock_recv__(dump->sock, buffer, false);
+    ovs_mutex_unlock(&dump->mutex);
+
     if (retval) {
         ofpbuf_clear(buffer);
         if (retval == EAGAIN) {
@@ -835,6 +844,7 @@ nl_dump_done(struct nl_dump *dump)
     }
     nl_pool_release(dump->sock);
     seq_destroy(dump->status_seq);
+    ovs_mutex_destroy(&dump->mutex);
     return status >> 1;
 }
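From a caller's point of view nothing changes; the new mutex is managed
entirely inside nl_dump_start(), nl_dump_next(), and nl_dump_done(). A
hypothetical dump loop, based only on the signatures visible in the hunk
headers above (the protocol choice, buffer size, and 'request' setup are
illustrative, not prescribed by this patch):

    /* Hypothetical caller sketch; 'request' holds an already-built
     * Netlink dump request. */
    struct nl_dump dump;
    struct ofpbuf buf, reply;
    int error;

    nl_dump_start(&dump, NETLINK_GENERIC, &request);
    ofpbuf_init(&buf, 4096);                /* Buffer for reply batches. */
    while (nl_dump_next(&dump, &reply, &buf)) {
        /* ...process one reply message... */
    }
    error = nl_dump_done(&dump);            /* 0 on success, else errno. */
    ofpbuf_uninit(&buf);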