From 814b8797f9863abc2877acf87f6be0f140d00139 Mon Sep 17 00:00:00 2001
From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
Date: Wed, 27 Feb 2019 14:35:50 +0100
Subject: [PATCH] cpuidle: menu: Avoid overflows when computing variance

The variance computation in get_typical_interval() may overflow if
the square of the value of diff exceeds the maximum that can be
represented by the int64_t data type, which basically is the case
when diff is of the order of UINT_MAX.
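
For illustration, here is a minimal userspace sketch of the overflow
(a standalone harness written for this changelog, not kernel code;
the variable names mirror the ones used in get_typical_interval()):

	#include <inttypes.h>
	#include <limits.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int avg = 0;		/* assume an average of 0 */
		unsigned int value = UINT_MAX;	/* one "outlying" data point */
		int64_t diff = (int64_t)value - avg;

		/*
		 * diff * diff is about 1.8e19, which exceeds the int64_t
		 * maximum of about 9.2e18, so this signed multiplication
		 * overflows (undefined behavior in C).
		 */
		printf("diff^2 = %" PRId64 "\n", diff * diff);
		return 0;
	}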

However, data points that far in the future don't matter for idle
state selection anyway, so change the initial threshold value in
get_typical_interval() to INT_MAX, which will cause more "outlying"
data points to be discarded without affecting the selection result.
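
For scale (simple arithmetic, not from the original changelog): with
the new threshold every retained data point is at most INT_MAX, i.e.
2^31 - 1, so the magnitude of diff is bounded by the same value, and
its square is at most about 4.6e18, comfortably below the int64_t
maximum of about 9.2e18.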

Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/cpuidle/governors/menu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 61316fc51548a..5951604e7d5c6 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -186,7 +186,7 @@ static unsigned int get_typical_interval(struct menu_device *data,
 	unsigned int min, max, thresh, avg;
 	uint64_t sum, variance;
 
-	thresh = UINT_MAX; /* Discard outliers above this value */
+	thresh = INT_MAX; /* Discard outliers above this value */
 
 again:
 
-- 
GitLab