perf mmap: Be consistent when checking for an unmapped ring buffer

The previous patch is insufficient to cure the reported 'perf trace'
segfault: it covers only the perf_mmap__read_done() case, which merely
moves the segfault to perf_mmap__read_init(). Fix it by doing the same
refcount check there.

Cc: Adrian Hunter <>
Cc: Arnaldo Carvalho de Melo <>
Cc: David Ahern <>
Cc: Jiri Olsa <>
Cc: Kan Liang <>
Cc: Namhyung Kim <>
Cc: Wang Nan <>
Fixes: 8872481b ("perf mmap: Introduce perf_mmap__read_init()")
Signed-off-by: Arnaldo Carvalho de Melo <>
@@ -234,7 +234,7 @@ static int overwrite_rb_find_range(void *buf, int mask, u64 *start, u64 *end)
 /*
  * Report the start and end of the available data in ringbuffer
  */
-int perf_mmap__read_init(struct perf_mmap *md)
+static int __perf_mmap__read_init(struct perf_mmap *md)
 {
 	u64 head = perf_mmap__read_head(md);
 	u64 old = md->prev;
@@ -268,6 +268,17 @@ int perf_mmap__read_init(struct perf_mmap *md)
 	return 0;
 }
 
+int perf_mmap__read_init(struct perf_mmap *map)
+{
+	/*
+	 * Check if event was unmapped due to a POLLHUP/POLLERR.
+	 */
+	if (!refcount_read(&map->refcnt))
+		return -ENOENT;
+
+	return __perf_mmap__read_init(map);
+}
+
 int perf_mmap__push(struct perf_mmap *md, void *to,
 		    int push(void *to, void *buf, size_t size))