-rw-r--r--  Documentation/ABI/testing/sysfs-devices-power |  27
-rw-r--r--  Documentation/power/pm_qos_interface.txt      |  59
-rw-r--r--  drivers/base/power/qos.c                      | 144
-rw-r--r--  drivers/base/power/sysfs.c                    |  65
-rw-r--r--  include/linux/pm.h                            |   1
-rw-r--r--  include/linux/pm_qos.h                        |  12
-rw-r--r--  kernel/power/qos.c                            |  13
7 files changed, 277 insertions(+), 44 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-devices-power b/Documentation/ABI/testing/sysfs-devices-power
index efe449bdf811..7dbf96b724ed 100644
--- a/Documentation/ABI/testing/sysfs-devices-power
+++ b/Documentation/ABI/testing/sysfs-devices-power
@@ -187,7 +187,7 @@ Description:
 		Not all drivers support this attribute. If it isn't supported,
 		attempts to read or write it will yield I/O errors.
 
-What:		/sys/devices/.../power/pm_qos_latency_us
+What:		/sys/devices/.../power/pm_qos_resume_latency_us
 Date:		March 2012
 Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
 Description:
@@ -205,6 +205,31 @@ Description:
 		This attribute has no effect on system-wide suspend/resume and
 		hibernation.
 
+What:		/sys/devices/.../power/pm_qos_latency_tolerance_us
+Date:		January 2014
+Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
+Description:
+		The /sys/devices/.../power/pm_qos_latency_tolerance_us attribute
+		contains the PM QoS active state latency tolerance limit for the
+		given device in microseconds. That is the maximum memory access
+		latency the device can suffer without any visible adverse
+		effects on user space functionality. If that value is the
+		string "any", the latency does not matter to user space at all,
+		but hardware should not be allowed to set the latency tolerance
+		for the device automatically.
+
+		Reading "auto" from this file means that the maximum memory
+		access latency for the device may be determined automatically
+		by the hardware as needed. Writing "auto" to it allows the
+		hardware to be switched to this mode if there are no other
+		latency tolerance requirements from the kernel side.
+
+		This attribute is only present if the feature controlled by it
+		is supported by the hardware.
+
+		This attribute has no effect on runtime suspend and resume of
+		devices and on system-wide suspend/resume and hibernation.
+
 What:		/sys/devices/.../power/pm_qos_no_power_off
 Date:		September 2012
 Contact:	Rafael J. Wysocki <rjw@rjwysocki.net>
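
A minimal user-space sketch of driving the new attribute (illustrative, not part
of this patch; the device path is hypothetical and the file only exists when the
device declares latency tolerance support):

	/* Query and update pm_qos_latency_tolerance_us for one device. */
	#include <stdio.h>

	#define ATTR "/sys/devices/foo/power/pm_qos_latency_tolerance_us"

	int main(void)
	{
		char buf[32];
		FILE *f = fopen(ATTR, "r+");

		if (!f)
			return 1;	/* attribute absent: feature not supported */

		if (fgets(buf, sizeof(buf), f))
			printf("current: %s", buf);	/* "auto", "any" or a number (us) */

		rewind(f);		/* reposition before switching from reading to writing */
		fputs("100\n", f);	/* request a 100 us tolerance; "auto" restores HW control */
		fclose(f);
		return 0;
	}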
diff --git a/Documentation/power/pm_qos_interface.txt b/Documentation/power/pm_qos_interface.txt
index 22cb8f51182a..ed743bbad87c 100644
--- a/Documentation/power/pm_qos_interface.txt
+++ b/Documentation/power/pm_qos_interface.txt
@@ -88,17 +88,19 @@ node.
 
 2. PM QoS per-device latency and flags framework
 
-For each device, there are two lists of PM QoS requests. One is maintained
-along with the aggregated target of resume latency value and the other is for
-PM QoS flags. Values are updated in response to changes of the request list.
+For each device, there are three lists of PM QoS requests. Two of them are
+maintained along with the aggregated targets of resume latency and active
+state latency tolerance (in microseconds) and the third one is for PM QoS flags.
+Values are updated in response to changes of the request list.
 
-Target resume latency value is simply the minimum of the request values held in
-the parameter list elements. The PM QoS flags aggregate value is a gather
-(bitwise OR) of all list elements' values. Two device PM QoS flags are defined
-currently: PM_QOS_FLAG_NO_POWER_OFF and PM_QOS_FLAG_REMOTE_WAKEUP.
+The target values of resume latency and active state latency tolerance are
+simply the minimum of the request values held in the parameter list elements.
+The PM QoS flags aggregate value is a gather (bitwise OR) of all list elements'
+values. Two device PM QoS flags are defined currently: PM_QOS_FLAG_NO_POWER_OFF
+and PM_QOS_FLAG_REMOTE_WAKEUP.
 
-Note: the aggregated target value is implemented in such a way that reading the
-aggregated value does not require any locking mechanism.
+Note: The aggregated target values are implemented in such a way that reading
+the aggregated value does not require any locking mechanism.
 
 
 From kernel mode the use of this interface is the following:
@@ -177,3 +179,42 @@ The callback is called when the aggregated value for any device is changed
 int dev_pm_qos_remove_global_notifier(notifier):
 Removes the notification callback function from the global notification tree
 of the framework.
+
+
+Active state latency tolerance
+
+This device PM QoS type is used to support systems in which hardware may switch
+to energy-saving operation modes on the fly. In those systems, if the operation
+mode chosen by the hardware attempts to save energy in an overly aggressive way,
+it may cause excess latencies to be visible to software, causing it to miss
+certain protocol requirements or target frame or sample rates etc.
+
+If there is a latency tolerance control mechanism for a given device available
+to software, the .set_latency_tolerance callback in that device's dev_pm_info
+structure should be populated. The routine pointed to by it should implement
+whatever is necessary to transfer the effective requirement value to the
+hardware.
+
+Whenever the effective latency tolerance changes for the device, its
+.set_latency_tolerance() callback will be executed and the effective value will
+be passed to it. If that value is negative, which means that the list of
+latency tolerance requirements for the device is empty, the callback is expected
+to switch the underlying hardware latency tolerance control mechanism to an
+autonomous mode if available. If that value is PM_QOS_LATENCY_ANY, in turn, and
+the hardware supports a special "no requirement" setting, the callback is
+expected to use it. That allows software to prevent the hardware from
+automatically updating the device's latency tolerance in response to its power
+state changes (e.g. during transitions from D3cold to D0), which generally may
+be done in the autonomous latency tolerance control mode.
+
+If .set_latency_tolerance() is present for the device, the
+pm_qos_latency_tolerance_us sysfs attribute will be present in the device's
+power directory. User space can then use that attribute to specify its latency
+tolerance requirement for the device, if any. Writing "any" to it means "no
+requirement, but do not let the hardware control latency tolerance" and writing
+"auto" to it allows the hardware to be switched to the autonomous mode if there
+are no other requirements from the kernel side in the device's list.
+
+Kernel code can use the functions described above along with the
+DEV_PM_QOS_LATENCY_TOLERANCE device PM QoS type to add, remove and update
+latency tolerance requirements for devices.
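
To make the interface above concrete, here is a hedged sketch of both sides of
it; the foo_* names and hardware helpers are made up for illustration, while
dev->power.set_latency_tolerance, PM_QOS_LATENCY_ANY, DEV_PM_QOS_LATENCY_TOLERANCE
and dev_pm_qos_add_request() are the pieces introduced or used by this patch:

	#include <linux/device.h>
	#include <linux/pm_qos.h>

	/* Hypothetical hardware helpers; a real driver would program the device here. */
	static void foo_hw_enable_autonomous_ltr(struct device *dev) { }
	static void foo_hw_set_no_requirement(struct device *dev) { }
	static void foo_hw_set_ltr_us(struct device *dev, s32 us) { }

	/* Driver side: forward the effective latency tolerance to the hardware. */
	static void foo_set_latency_tolerance(struct device *dev, s32 value)
	{
		if (value < 0)
			foo_hw_enable_autonomous_ltr(dev);	/* empty request list */
		else if (value == PM_QOS_LATENCY_ANY)
			foo_hw_set_no_requirement(dev);		/* "no requirement", no HW control */
		else
			foo_hw_set_ltr_us(dev, value);		/* explicit limit in microseconds */
	}

	/*
	 * Called by the code that creates the device, before it is registered:
	 * with the hook in place, dpm_sysfs_add() will also expose the
	 * pm_qos_latency_tolerance_us attribute in the device's power directory.
	 */
	static void foo_setup_latency_tolerance(struct device *dev)
	{
		dev->power.set_latency_tolerance = foo_set_latency_tolerance;
	}

	/* Kernel-side consumer: add a 50 us latency tolerance requirement. */
	static struct dev_pm_qos_request foo_ltr_req;

	static int foo_add_requirement(struct device *dev)
	{
		return dev_pm_qos_add_request(dev, &foo_ltr_req,
					      DEV_PM_QOS_LATENCY_TOLERANCE, 50);
	}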
diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index c754e55f9dcb..84756f7f09d9 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -151,6 +151,14 @@ static int apply_constraint(struct dev_pm_qos_request *req,
 							req);
 		}
 		break;
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
+		ret = pm_qos_update_target(&qos->latency_tolerance,
+					   &req->data.pnode, action, value);
+		if (ret) {
+			value = pm_qos_read_value(&qos->latency_tolerance);
+			req->dev->power.set_latency_tolerance(req->dev, value);
+		}
+		break;
 	case DEV_PM_QOS_FLAGS:
 		ret = pm_qos_update_flags(&qos->flags, &req->data.flr,
 					  action, value);
@@ -194,6 +202,13 @@ static int dev_pm_qos_constraints_allocate(struct device *dev)
 	c->type = PM_QOS_MIN;
 	c->notifiers = n;
 
+	c = &qos->latency_tolerance;
+	plist_head_init(&c->list);
+	c->target_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE;
+	c->default_value = PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE;
+	c->no_constraint_value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
+	c->type = PM_QOS_MIN;
+
 	INIT_LIST_HEAD(&qos->flags.list);
 
 	spin_lock_irq(&dev->power.lock);
@@ -247,6 +262,11 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 		apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
 		memset(req, 0, sizeof(*req));
 	}
+	c = &qos->latency_tolerance;
+	plist_for_each_entry_safe(req, tmp, &c->list, data.pnode) {
+		apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
+		memset(req, 0, sizeof(*req));
+	}
 	f = &qos->flags;
 	list_for_each_entry_safe(req, tmp, &f->list, data.flr.node) {
 		apply_constraint(req, PM_QOS_REMOVE_REQ, PM_QOS_DEFAULT_VALUE);
@@ -266,6 +286,40 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 	mutex_unlock(&dev_pm_qos_sysfs_mtx);
 }
 
+static bool dev_pm_qos_invalid_request(struct device *dev,
+				       struct dev_pm_qos_request *req)
+{
+	return !req || (req->type == DEV_PM_QOS_LATENCY_TOLERANCE
+			&& !dev->power.set_latency_tolerance);
+}
+
+static int __dev_pm_qos_add_request(struct device *dev,
+				    struct dev_pm_qos_request *req,
+				    enum dev_pm_qos_req_type type, s32 value)
+{
+	int ret = 0;
+
+	if (!dev || dev_pm_qos_invalid_request(dev, req))
+		return -EINVAL;
+
+	if (WARN(dev_pm_qos_request_active(req),
+		 "%s() called for already added request\n", __func__))
+		return -EINVAL;
+
+	if (IS_ERR(dev->power.qos))
+		ret = -ENODEV;
+	else if (!dev->power.qos)
+		ret = dev_pm_qos_constraints_allocate(dev);
+
+	trace_dev_pm_qos_add_request(dev_name(dev), type, value);
+	if (!ret) {
+		req->dev = dev;
+		req->type = type;
+		ret = apply_constraint(req, PM_QOS_ADD_REQ, value);
+	}
+	return ret;
+}
+
 /**
  * dev_pm_qos_add_request - inserts new qos request into the list
  * @dev: target device for the constraint
@@ -291,31 +345,11 @@ void dev_pm_qos_constraints_destroy(struct device *dev)
 int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req,
 			   enum dev_pm_qos_req_type type, s32 value)
 {
-	int ret = 0;
-
-	if (!dev || !req) /*guard against callers passing in null */
-		return -EINVAL;
-
-	if (WARN(dev_pm_qos_request_active(req),
-		 "%s() called for already added request\n", __func__))
-		return -EINVAL;
+	int ret;
 
 	mutex_lock(&dev_pm_qos_mtx);
-
-	if (IS_ERR(dev->power.qos))
-		ret = -ENODEV;
-	else if (!dev->power.qos)
-		ret = dev_pm_qos_constraints_allocate(dev);
-
-	trace_dev_pm_qos_add_request(dev_name(dev), type, value);
-	if (!ret) {
-		req->dev = dev;
-		req->type = type;
-		ret = apply_constraint(req, PM_QOS_ADD_REQ, value);
-	}
-
+	ret = __dev_pm_qos_add_request(dev, req, type, value);
 	mutex_unlock(&dev_pm_qos_mtx);
-
 	return ret;
 }
 EXPORT_SYMBOL_GPL(dev_pm_qos_add_request);
@@ -343,6 +377,7 @@ static int __dev_pm_qos_update_request(struct dev_pm_qos_request *req,
 
 	switch(req->type) {
 	case DEV_PM_QOS_RESUME_LATENCY:
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
 		curr_value = req->data.pnode.prio;
 		break;
 	case DEV_PM_QOS_FLAGS:
@@ -563,6 +598,10 @@ static void __dev_pm_qos_drop_user_request(struct device *dev,
 		req = dev->power.qos->resume_latency_req;
 		dev->power.qos->resume_latency_req = NULL;
 		break;
+	case DEV_PM_QOS_LATENCY_TOLERANCE:
+		req = dev->power.qos->latency_tolerance_req;
+		dev->power.qos->latency_tolerance_req = NULL;
+		break;
 	case DEV_PM_QOS_FLAGS:
 		req = dev->power.qos->flags_req;
 		dev->power.qos->flags_req = NULL;
@@ -768,6 +807,67 @@ int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set)
 	pm_runtime_put(dev);
 	return ret;
 }
+
+/**
+ * dev_pm_qos_get_user_latency_tolerance - Get user space latency tolerance.
+ * @dev: Device to obtain the user space latency tolerance for.
+ */
+s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
+{
+	s32 ret;
+
+	mutex_lock(&dev_pm_qos_mtx);
+	ret = IS_ERR_OR_NULL(dev->power.qos)
+		|| !dev->power.qos->latency_tolerance_req ?
+		PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT :
+		dev->power.qos->latency_tolerance_req->data.pnode.prio;
+	mutex_unlock(&dev_pm_qos_mtx);
+	return ret;
+}
+
+/**
+ * dev_pm_qos_update_user_latency_tolerance - Update user space latency tolerance.
+ * @dev: Device to update the user space latency tolerance for.
+ * @val: New user space latency tolerance for @dev (negative values disable).
+ */
+int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
+{
+	int ret;
+
+	mutex_lock(&dev_pm_qos_mtx);
+
+	if (IS_ERR_OR_NULL(dev->power.qos)
+	    || !dev->power.qos->latency_tolerance_req) {
+		struct dev_pm_qos_request *req;
+
+		if (val < 0) {
+			ret = -EINVAL;
+			goto out;
+		}
+		req = kzalloc(sizeof(*req), GFP_KERNEL);
+		if (!req) {
+			ret = -ENOMEM;
+			goto out;
+		}
+		ret = __dev_pm_qos_add_request(dev, req, DEV_PM_QOS_LATENCY_TOLERANCE, val);
+		if (ret < 0) {
+			kfree(req);
+			goto out;
+		}
+		dev->power.qos->latency_tolerance_req = req;
+	} else {
+		if (val < 0) {
+			__dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY_TOLERANCE);
+			ret = 0;
+		} else {
+			ret = __dev_pm_qos_update_request(dev->power.qos->latency_tolerance_req, val);
+		}
+	}
+
+ out:
+	mutex_unlock(&dev_pm_qos_mtx);
+	return ret;
+}
 #else /* !CONFIG_PM_RUNTIME */
 static void __dev_pm_qos_hide_latency_limit(struct device *dev) {}
 static void __dev_pm_qos_hide_flags(struct device *dev) {}
diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
index 4e24955aac8a..95b181d1ca6d 100644
--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -246,6 +246,40 @@ static ssize_t pm_qos_resume_latency_store(struct device *dev,
 static DEVICE_ATTR(pm_qos_resume_latency_us, 0644,
 		   pm_qos_resume_latency_show, pm_qos_resume_latency_store);
 
+static ssize_t pm_qos_latency_tolerance_show(struct device *dev,
+					     struct device_attribute *attr,
+					     char *buf)
+{
+	s32 value = dev_pm_qos_get_user_latency_tolerance(dev);
+
+	if (value < 0)
+		return sprintf(buf, "auto\n");
+	else if (value == PM_QOS_LATENCY_ANY)
+		return sprintf(buf, "any\n");
+
+	return sprintf(buf, "%d\n", value);
+}
+
+static ssize_t pm_qos_latency_tolerance_store(struct device *dev,
+					      struct device_attribute *attr,
+					      const char *buf, size_t n)
+{
+	s32 value;
+	int ret;
+
+	if (kstrtos32(buf, 0, &value)) {
+		if (!strcmp(buf, "auto") || !strcmp(buf, "auto\n"))
+			value = PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT;
+		else if (!strcmp(buf, "any") || !strcmp(buf, "any\n"))
+			value = PM_QOS_LATENCY_ANY;
+	}
+	ret = dev_pm_qos_update_user_latency_tolerance(dev, value);
+	return ret < 0 ? ret : n;
+}
+
+static DEVICE_ATTR(pm_qos_latency_tolerance_us, 0644,
+		   pm_qos_latency_tolerance_show, pm_qos_latency_tolerance_store);
+
 static ssize_t pm_qos_no_power_off_show(struct device *dev,
 					struct device_attribute *attr,
 					char *buf)
@@ -631,6 +665,17 @@ static struct attribute_group pm_qos_resume_latency_attr_group = {
 	.attrs	= pm_qos_resume_latency_attrs,
 };
 
+static struct attribute *pm_qos_latency_tolerance_attrs[] = {
+#ifdef CONFIG_PM_RUNTIME
+	&dev_attr_pm_qos_latency_tolerance_us.attr,
+#endif /* CONFIG_PM_RUNTIME */
+	NULL,
+};
+static struct attribute_group pm_qos_latency_tolerance_attr_group = {
+	.name	= power_group_name,
+	.attrs	= pm_qos_latency_tolerance_attrs,
+};
+
 static struct attribute *pm_qos_flags_attrs[] = {
 #ifdef CONFIG_PM_RUNTIME
 	&dev_attr_pm_qos_no_power_off.attr,
@@ -656,18 +701,23 @@ int dpm_sysfs_add(struct device *dev)
 		if (rc)
 			goto err_out;
 	}
-
 	if (device_can_wakeup(dev)) {
 		rc = sysfs_merge_group(&dev->kobj, &pm_wakeup_attr_group);
-		if (rc) {
-			if (pm_runtime_callbacks_present(dev))
-				sysfs_unmerge_group(&dev->kobj,
-						    &pm_runtime_attr_group);
-			goto err_out;
-		}
+		if (rc)
+			goto err_runtime;
+	}
+	if (dev->power.set_latency_tolerance) {
+		rc = sysfs_merge_group(&dev->kobj,
+				       &pm_qos_latency_tolerance_attr_group);
+		if (rc)
+			goto err_wakeup;
 	}
 	return 0;
 
+ err_wakeup:
+	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
+ err_runtime:
+	sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group);
  err_out:
 	sysfs_remove_group(&dev->kobj, &pm_attr_group);
 	return rc;
@@ -710,6 +760,7 @@ void rpm_sysfs_remove(struct device *dev)
 
 void dpm_sysfs_remove(struct device *dev)
 {
+	sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
 	dev_pm_qos_constraints_destroy(dev);
 	rpm_sysfs_remove(dev);
 	sysfs_unmerge_group(&dev->kobj, &pm_wakeup_attr_group);
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 8c6583a53a06..db2be5f3e030 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -582,6 +582,7 @@ struct dev_pm_info {
 	unsigned long		accounting_timestamp;
 #endif
 	struct pm_subsys_data	*subsys_data;  /* Owned by the subsystem. */
+	void (*set_latency_tolerance)(struct device *, s32);
 	struct dev_pm_qos	*qos;
 };
 
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 2d8ce50877d8..0b476019be55 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -33,6 +33,9 @@ enum pm_qos_flags_status {
 #define PM_QOS_NETWORK_LAT_DEFAULT_VALUE	(2000 * USEC_PER_SEC)
 #define PM_QOS_NETWORK_THROUGHPUT_DEFAULT_VALUE	0
 #define PM_QOS_RESUME_LATENCY_DEFAULT_VALUE	0
+#define PM_QOS_LATENCY_TOLERANCE_DEFAULT_VALUE	0
+#define PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT	(-1)
+#define PM_QOS_LATENCY_ANY			((s32)(~(__u32)0 >> 1))
 
 #define PM_QOS_FLAG_NO_POWER_OFF	(1 << 0)
 #define PM_QOS_FLAG_REMOTE_WAKEUP	(1 << 1)
@@ -50,6 +53,7 @@ struct pm_qos_flags_request {
 
 enum dev_pm_qos_req_type {
 	DEV_PM_QOS_RESUME_LATENCY = 1,
+	DEV_PM_QOS_LATENCY_TOLERANCE,
 	DEV_PM_QOS_FLAGS,
 };
 
@@ -89,8 +93,10 @@ struct pm_qos_flags {
 
 struct dev_pm_qos {
 	struct pm_qos_constraints resume_latency;
+	struct pm_qos_constraints latency_tolerance;
 	struct pm_qos_flags flags;
 	struct dev_pm_qos_request *resume_latency_req;
+	struct dev_pm_qos_request *latency_tolerance_req;
 	struct dev_pm_qos_request *flags_req;
 };
 
@@ -196,6 +202,8 @@ void dev_pm_qos_hide_latency_limit(struct device *dev);
 int dev_pm_qos_expose_flags(struct device *dev, s32 value);
 void dev_pm_qos_hide_flags(struct device *dev);
 int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set);
+s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev);
+int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val);
 
 static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev)
 {
@@ -215,6 +223,10 @@ static inline int dev_pm_qos_expose_flags(struct device *dev, s32 value)
 static inline void dev_pm_qos_hide_flags(struct device *dev) {}
 static inline int dev_pm_qos_update_flags(struct device *dev, s32 m, bool set)
 			{ return 0; }
+static inline s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
+			{ return PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; }
+static inline int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
+			{ return 0; }
 
 static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) { return 0; }
 static inline s32 dev_pm_qos_requested_flags(struct device *dev) { return 0; }
diff --git a/kernel/power/qos.c b/kernel/power/qos.c
index e23ae38e647f..884b77058864 100644
--- a/kernel/power/qos.c
+++ b/kernel/power/qos.c
@@ -173,6 +173,7 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 {
 	unsigned long flags;
 	int prev_value, curr_value, new_value;
+	int ret;
 
 	spin_lock_irqsave(&pm_qos_lock, flags);
 	prev_value = pm_qos_get_value(c);
@@ -208,13 +209,15 @@ int pm_qos_update_target(struct pm_qos_constraints *c, struct plist_node *node,
 
 	trace_pm_qos_update_target(action, prev_value, curr_value);
 	if (prev_value != curr_value) {
-		blocking_notifier_call_chain(c->notifiers,
-					     (unsigned long)curr_value,
-					     NULL);
-		return 1;
+		ret = 1;
+		if (c->notifiers)
+			blocking_notifier_call_chain(c->notifiers,
+						     (unsigned long)curr_value,
+						     NULL);
 	} else {
-		return 0;
+		ret = 0;
 	}
+	return ret;
 }
 
 /**
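
The pm_qos_update_target() change above makes the notifier chain optional, so a
constraint set created without one (as the new latency_tolerance list is in
dev_pm_qos_constraints_allocate(), which leaves c->notifiers NULL) can be updated
safely; the caller then reacts to the return value instead. A minimal sketch of
that caller pattern, mirroring apply_constraint() in drivers/base/power/qos.c
above (foo_apply_tolerance is a hypothetical wrapper, not part of this patch):

	#include <linux/pm_qos.h>

	static void foo_apply_tolerance(struct dev_pm_qos_request *req,
					enum pm_qos_req_action action, s32 value)
	{
		struct dev_pm_qos *qos = req->dev->power.qos;

		/* Returns 1 only when the aggregated (minimum) value changed. */
		if (pm_qos_update_target(&qos->latency_tolerance,
					 &req->data.pnode, action, value)) {
			s32 effective = pm_qos_read_value(&qos->latency_tolerance);

			/* Forward the new aggregate to the device's hardware hook. */
			req->dev->power.set_latency_tolerance(req->dev, effective);
		}
	}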