path: root/include/linux
Age  Commit message  Author
2021-05-13  tty: return void from tty_unregister_ldisc  (Jiri Slaby)
Now that no one checks the return value of tty_unregister_ldisc, make the function return 'void'. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Link: https://lore.kernel.org/r/20210505091928.22010-20-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-05-13  tty: drop tty_ldisc_ops::refcount  (Jiri Slaby)
The refcount is checked only in tty_unregister_ldisc and EBUSY is returned if it is nonzero. But none of the tty_unregister_ldisc callers act on this (or any other) returned error. So remove tty_ldisc_ops::refcount completely and make tty_unregister_ldisc return 'void' in the next patches. That means we assume tty_unregister_ldisc is not called while the ldisc might be in use. That relies on try_module_get in get_ldops and module_put in put_ldops. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Link: https://lore.kernel.org/r/20210505091928.22010-18-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-05-13  tty: make tty_ldisc_ops a param in tty_unregister_ldisc  (Jiri Slaby)
Make tty_unregister_ldisc symmetric to tty_register_ldisc by accepting struct tty_ldisc_ops as a parameter instead of ldisc number. This avoids checking of the ldisc number bounds in tty_unregister_ldisc. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: William Hubbs <w.d.hubbs@gmail.com> Cc: Chris Brannon <chris@the-brannons.com> Cc: Kirk Reiser <kirk@reisers.ca> Cc: Samuel Thibault <samuel.thibault@ens-lyon.org> Cc: Marcel Holtmann <marcel@holtmann.org> Cc: Johan Hedberg <johan.hedberg@gmail.com> Cc: Luiz Augusto von Dentz <luiz.dentz@gmail.com> Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Wolfgang Grandegger <wg@grandegger.com> Cc: Marc Kleine-Budde <mkl@pengutronix.de> Cc: Andreas Koensgen <ajk@comnets.uni-bremen.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Rodolfo Giometti <giometti@enneenne.com> Cc: Peter Ujfalusi <peter.ujfalusi@gmail.com> Cc: Liam Girdwood <lgirdwood@gmail.com> Cc: Mark Brown <broonie@kernel.org> Cc: Jaroslav Kysela <perex@perex.cz> Cc: Takashi Iwai <tiwai@suse.com> Link: https://lore.kernel.org/r/20210505091928.22010-17-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-05-13  tty: set tty_ldisc_ops::num statically  (Jiri Slaby)
There is no reason to pass the ldisc number to tty_register_ldisc separately. Just set it in the already defined tty_ldisc_ops in all the ldiscs. This simplifies tty_register_ldisc a bit too (no need to set the num member there). Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: William Hubbs <w.d.hubbs@gmail.com> Cc: Chris Brannon <chris@the-brannons.com> Cc: Kirk Reiser <kirk@reisers.ca> Cc: Samuel Thibault <samuel.thibault@ens-lyon.org> Cc: Marcel Holtmann <marcel@holtmann.org> Cc: Johan Hedberg <johan.hedberg@gmail.com> Cc: Luiz Augusto von Dentz <luiz.dentz@gmail.com> Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Wolfgang Grandegger <wg@grandegger.com> Cc: Marc Kleine-Budde <mkl@pengutronix.de> Cc: Andreas Koensgen <ajk@comnets.uni-bremen.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Rodolfo Giometti <giometti@enneenne.com> Cc: Peter Ujfalusi <peter.ujfalusi@gmail.com> Cc: Liam Girdwood <lgirdwood@gmail.com> Cc: Mark Brown <broonie@kernel.org> Cc: Jaroslav Kysela <perex@perex.cz> Cc: Takashi Iwai <tiwai@suse.com> Link: https://lore.kernel.org/r/20210505091928.22010-15-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
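The net effect of this ldisc series on a line-discipline driver looks roughly like the sketch below. This is an illustration, not the patch itself: the ops fields, the N_MOUSE number and the module boilerplate are placeholders, and the exact point in the series where tty_register_ldisc() loses its separate number argument is not shown in this log.

	#include <linux/module.h>
	#include <linux/tty.h>
	#include <linux/tty_ldisc.h>

	static struct tty_ldisc_ops example_ldisc_ops = {
		.owner = THIS_MODULE,
		.num   = N_MOUSE,	/* ldisc number now set statically in the ops */
		.name  = "example_ldisc",
		/* .open, .close, .receive_buf, ... */
	};

	static int __init example_ldisc_init(void)
	{
		/* end state of the series: registration takes only the ops */
		return tty_register_ldisc(&example_ldisc_ops);
	}

	static void __exit example_ldisc_exit(void)
	{
		/* symmetric to registration: takes the ops, returns void */
		tty_unregister_ldisc(&example_ldisc_ops);
	}

	module_init(example_ldisc_init);
	module_exit(example_ldisc_exit);
	MODULE_LICENSE("GPL");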
2021-05-13  tty: cumulate and document tty_struct::ctrl* members  (Jiri Slaby)
Group the ctrl members under a single struct called ctrl. The new struct contains 'pgrp', 'session', 'pktstatus', and 'packet'. 'pktstatus' and 'packet' used to be bits in a bitfield. The struct also contains the lock protecting them, so that they share the same cache line. Note that commit c545b66c6922b (tty: Serialize tcflow() with other tty flow control changes) added padding to the original bitfield. It was meant to make the bitfield occupy a whole 64b word to avoid interfering stores on Alpha (cannot we evaporate this arch with its weird implications for C code yet?). But it doesn't work as expected, as the padding (tty_struct::ctrl_unused) is aligned to an 8B boundary too and occupies some bytes from the next word. So make it reliable by: 1) setting __aligned on the struct -- that aligns the start, and 2) making 'unsigned long unused[0]' the last member of the struct -- that pads the end. Add a kerneldoc comment for these grouped members. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Cc: netdev@vger.kernel.org Link: https://lore.kernel.org/r/20210505091928.22010-14-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-05-13  tty: cumulate and document tty_struct::flow* members  (Jiri Slaby)
Group the flow flags under a single struct called flow. The new struct contains 'stopped' and 'tco_stopped' bools which used to be bits in a bitfield. The struct also contains the lock protecting them, to potentially share the same cache line. Note that commit c545b66c6922b (tty: Serialize tcflow() with other tty flow control changes) added padding to the original bitfield. It was meant to make the bitfield occupy a whole 64b word to avoid interfering stores on Alpha (cannot we evaporate this arch with its weird implications for C code yet?). But it doesn't work as expected, as the padding (tty_struct::unused) is aligned to an 8B boundary too and occupies some bytes from the next word. So make it reliable by: 1) setting __aligned on the struct -- that aligns the start, and 2) making 'unsigned long unused[0]' the last member of the struct -- that pads the end. This is also the perfect time to start the documentation of tty_struct where all this lives. So we start by documenting what these bools are actually for, and why we do all the alignment dances. Only the up-to-date information from Theodore's comment made it into this new kerneldoc comment. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Ulf Hansson <ulf.hansson@linaro.org> Cc: Heiko Carstens <hca@linux.ibm.com> Cc: Vasily Gorbik <gor@linux.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Cc: Shawn Guo <shawnguo@kernel.org> Cc: Sascha Hauer <s.hauer@pengutronix.de> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: "Maciej W. Rozycki" <macro@orcam.me.uk> Link: https://lore.kernel.org/r/20210505091928.22010-13-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
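A minimal sketch of the resulting grouping for the flow members (the ctrl group in the previous entry follows the same pattern); the member names come from the commit message, everything else is illustrative:

	struct tty_struct {
		/* ... */
		/*
		 * Flow-control state: 'stopped' and 'tco_stopped' used to be bits
		 * in a bitfield; they now sit next to the lock that protects them.
		 */
		struct {
			spinlock_t lock;
			bool stopped;
			bool tco_stopped;
			unsigned long unused[0];	/* pads the end of the group */
		} __aligned(sizeof(unsigned long)) flow;	/* __aligned fixes the start */
		/* ... */
	};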
2021-05-13  tty: make fp of tty_ldisc_ops::receive_buf{,2} const  (Jiri Slaby)
Char pointer (cp) passed to tty_ldisc_ops::receive_buf{,2} is const. There is no reason for flag pointer (fp) not to be too. So switch it in the definition and all uses. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Cc: William Hubbs <w.d.hubbs@gmail.com> Cc: Chris Brannon <chris@the-brannons.com> Cc: Kirk Reiser <kirk@reisers.ca> Cc: Samuel Thibault <samuel.thibault@ens-lyon.org> Cc: Marcel Holtmann <marcel@holtmann.org> Cc: Johan Hedberg <johan.hedberg@gmail.com> Cc: Luiz Augusto von Dentz <luiz.dentz@gmail.com> Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: "David S. Miller" <davem@davemloft.net> Cc: Jakub Kicinski <kuba@kernel.org> Cc: Wolfgang Grandegger <wg@grandegger.com> Cc: Marc Kleine-Budde <mkl@pengutronix.de> Cc: Andreas Koensgen <ajk@comnets.uni-bremen.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Liam Girdwood <lgirdwood@gmail.com> Cc: Mark Brown <broonie@kernel.org> Cc: Jaroslav Kysela <perex@perex.cz> Cc: Takashi Iwai <tiwai@suse.com> Cc: Peter Ujfalusi <peter.ujfalusi@gmail.com> Link: https://lore.kernel.org/r/20210505091928.22010-12-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
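After this change the two callbacks in struct tty_ldisc_ops read roughly as follows (a sketch of the prototypes, not the full ops definition):

	/* both the data pointer (cp) and the flag pointer (fp) are now const */
	void	(*receive_buf)(struct tty_struct *tty, const unsigned char *cp,
			       const char *fp, int count);
	int	(*receive_buf2)(struct tty_struct *tty, const unsigned char *cp,
				const char *fp, int count);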
2021-05-13  tty: remove broken r3964 line discipline  (Jiri Slaby)
No one stepped up in the two years since it was marked as BROKEN by commit c7084edc3f6d (tty: mark Siemens R3964 line discipline as BROKEN). Remove the line discipline for good. Three remarks: * we also remove the uapi header (as no one is able to use that interface anyway) * we do *not* remove the N_R3964 constant definition from tty.h, so it remains reserved. * the in_interrupt() check is now removed from vt's con_put_char. No one else calls tty_operations::put_char from interrupt context. Signed-off-by: Jiri Slaby <jslaby@suse.cz> Link: https://lore.kernel.org/r/20210505091928.22010-2-jslaby@suse.cz Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-05-13  firmware: replace HOTPLUG with UEVENT in FW_ACTION defines  (Shawn Guo)
With commit 312c004d36ce ("[PATCH] driver core: replace "hotplug" by "uevent"") in the tree for over a decade, update the names of the FW_ACTION defines to follow the same semantics and reflect what the defines are really meant for, i.e. whether or not to generate a user space event. Acked-by: Lee Jones <lee.jones@linaro.org> Signed-off-by: Shawn Guo <shawn.guo@linaro.org> Link: https://lore.kernel.org/r/20210425020024.28057-1-shawn.guo@linaro.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-05-13  usb: host: move EH SINGLE_STEP_SET_FEATURE implementation to core  (Peter Chen)
It is needed for the USB Certification test for Embedded Host 2.0; the details are in CH6.4.1.1 of the On-The-Go and Embedded Host Supplement to the USB Revision 2.0 Specification. Since other USB 2.0 capable hosts like XHCI also need it, move it to the HCD core. Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Peter Chen <peter.chen@nxp.com> Signed-off-by: Li Jun <jun.li@nxp.com> Link: https://lore.kernel.org/r/1620452039-11694-1-git-send-email-jun.li@nxp.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2021-05-13  tick/nohz: Kick only _queued_ task whose tick dependency is updated  (Marcelo Tosatti)
When the tick dependency of a task is updated, we want it to acknowledge the new state and restart the tick if needed. If the task is not running, we don't need to kick it because it will observe the new dependency upon scheduling in. But if the task is running, we may need to send an IPI to it so that it gets notified. Unfortunately we don't have the means to check if a task is running in a race-free way. Checking p->on_cpu in a synchronized way against p->tick_dep_mask would imply adding a full barrier between prepare_task_switch() and tick_nohz_task_switch(), which we want to avoid in this fast-path. Therefore we blindly fire an IPI to the task's CPU. Meanwhile we can check if the task is queued on the CPU rq because p->on_rq is always set to TASK_ON_RQ_QUEUED _before_ schedule() and its full barrier that precedes tick_nohz_task_switch(). And if the task is queued on a nohz_full CPU, it also has a fair chance of being running, as the isolation constraints prescribe running single tasks on full dynticks CPUs. So use this as a trick to check if we can spare an IPI toward a non-running task. NOTE: For the ordering to be correct, it is assumed that we never deactivate a task while it is running, the only exception being the task deactivating itself while scheduling out. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20210512232924.150322-9-frederic@kernel.org
2021-05-13  tick/nohz: Change signal tick dependency to wake up CPUs of member tasks  (Marcelo Tosatti)
Rather than waking up all nohz_full CPUs on the system, only wake up the target CPUs of member threads of the signal. Reduces interruptions to nohz_full CPUs. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Peter Zijlstra <peterz@infradead.org> Link: https://lore.kernel.org/r/20210512232924.150322-8-frederic@kernel.org
2021-05-13  tick/nohz: Evaluate the CPU expression after the static key  (Peter Zijlstra)
When tick_nohz_full_cpu() is called with smp_processor_id(), the latter is unconditionally evaluated whether the static key is on or off. It is not necessary in the off-case though, so make sure the cpu expression is executed at the last moment. Illustrate with the following test function:

	int tick_nohz_test(void)
	{
		return tick_nohz_full_cpu(smp_processor_id());
	}

The resulting code before was:

	mov    %gs:0x7eea92d1(%rip),%eax	# smp_processor_id() fetch
	nopl   0x0(%rax,%rax,1)
	xor    %eax,%eax
	retq
	cmpb   $0x0,0x29d393a(%rip)		# <tick_nohz_full_running>
	je     tick_nohz_test+0x29		# jump to below eax clear
	mov    %eax,%eax
	bt     %rax,0x29d3936(%rip)		# <tick_nohz_full_mask>
	setb   %al
	movzbl %al,%eax
	retq
	xor    %eax,%eax
	retq

Now it becomes:

	nopl   0x0(%rax,%rax,1)
	xor    %eax,%eax
	retq
	cmpb   $0x0,0x29d3871(%rip)		# <tick_nohz_full_running>
	je     tick_nohz_test+0x29		# jump to below eax clear
	mov    %gs:0x7eea91f0(%rip),%eax	# smp_processor_id() fetch, after static key
	mov    %eax,%eax
	bt     %rax,0x29d3866(%rip)		# <tick_nohz_full_mask>
	setb   %al
	movzbl %al,%eax
	retq
	xor    %eax,%eax
	retq

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210512232924.150322-2-frederic@kernel.org
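On the C side this boils down to deferring the evaluation of the cpu argument until after the static-key check, for instance by making the helper a macro; a sketch under that assumption:

	/*
	 * (_cpu) -- e.g. smp_processor_id() -- is only evaluated when the
	 * static-key-guarded tick_nohz_full_enabled() check passes.
	 */
	#define tick_nohz_full_cpu(_cpu) ({					\
		bool __ret = false;						\
		if (tick_nohz_full_enabled())					\
			__ret = cpumask_test_cpu((_cpu), tick_nohz_full_mask);	\
		__ret;								\
	})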
2021-05-12  libnvdimm: Remove duplicate struct declaration  (Wan Jiabing)
struct device is already declared at line 133. The second declaration is unnecessary; remove it. Signed-off-by: Wan Jiabing <wanjiabing@vivo.com> Link: https://lore.kernel.org/r/20210419112725.42145-1-wanjiabing@vivo.com Signed-off-by: Dan Williams <dan.j.williams@intel.com>
2021-05-12  sched: Make nr_iowait_cpu() return 32-bit value  (Alexey Dobriyan)
Runqueue ->nr_iowait counters are 32-bit anyway. Propagate 32-bitness into other code, but don't try too hard. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210422200228.1423391-3-adobriyan@gmail.com
2021-05-12  sched: Make nr_iowait() return 32-bit value  (Alexey Dobriyan)
Creating 2**32 tasks to wait in D-state is impossible and wasteful. Return "unsigned int" and save on REX prefixes. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210422200228.1423391-2-adobriyan@gmail.com
2021-05-12  sched: Make nr_running() return 32-bit value  (Alexey Dobriyan)
Creating 2**32 tasks is impossible due to futex pid limits and wasteful anyway. Nobody has done it. Bring nr_running() into 32-bit world to save on REX prefixes. Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210422200228.1423391-1-adobriyan@gmail.com
2021-05-12  Compiler Attributes: Add continue in comment  (Wei Ming Chen)
Add a "continue;" statement to the switch/case block, as recommended by the kernel documentation on implicit switch/case fall-through [1]. [1] https://www.kernel.org/doc/html/latest/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by: Wei Ming Chen <jj251510319013@gmail.com> Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
2021-05-12  locking: Fix comment typos  (Ingo Molnar)
A few snuck through. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-05-12  sched: Fix leftover comment typos  (Ingo Molnar)
A few more snuck in. Also capitalize 'CPU' while at it. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-05-12  blkdev.h: remove unused codes blk_account_rq  (Lin Feng)
The last users of blk_account_rq went away with commit a1ce35fa49852db ("block: remove dead elevator code"). Now that it has no callers, it can be safely removed. Signed-off-by: Lin Feng <linf@wangsu.com> Link: https://lore.kernel.org/r/20210512100124.173769-1-linf@wangsu.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2021-05-12  jump_label: Free jump_entry::key bit1 for build use  (Peter Zijlstra)
Have jump_label_init() set jump_entry::key bit1 to either 0 or 1 unconditionally. This makes it available for build-time games. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210506194157.906893264@infradead.org
2021-05-12  jump_label, x86: Introduce jump_entry_size()  (Peter Zijlstra)
This allows architectures to have variable sized jumps. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210506194157.786777050@infradead.org
2021-05-12  sched: prctl() core-scheduling interface  (Chris Hyser)
This patch provides support for setting and copying core scheduling 'task cookies' between threads (PID), processes (TGID), and process groups (PGID). The value of core scheduling isn't that tasks don't share a core, 'nosmt' can do that. The value lies in exploiting all the sharing opportunities that exist to recover possible lost performance and that requires a degree of flexibility in the API. From a security perspective (and there are others), the thread, process and process group distinction is an existing hierarchical categorization of tasks that reflects many of the security concerns about 'data sharing'. For example, protecting against cache-snooping by a thread that can just read the memory directly isn't all that useful. With this in mind, subcommands to CREATE/SHARE (TO/FROM) provide a mechanism to create and share cookies. CREATE/SHARE_TO specify a target pid with enum pidtype used to specify the scope of the targeted tasks. For example, PIDTYPE_TGID will share the cookie with the process and all of its threads as typically desired in a security scenario. API: prctl(PR_SCHED_CORE, PR_SCHED_CORE_GET, tgtpid, pidtype, &cookie) prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, tgtpid, pidtype, NULL) prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_TO, tgtpid, pidtype, NULL) prctl(PR_SCHED_CORE, PR_SCHED_CORE_SHARE_FROM, srcpid, pidtype, NULL) where 'tgtpid/srcpid == 0' implies the current process and pidtype is kernel enum pid_type {PIDTYPE_PID, PIDTYPE_TGID, PIDTYPE_PGID, ...}. For return values, EINVAL and ENOMEM are what they say. ESRCH means the tgtpid/srcpid was not found. EPERM indicates lack of PTRACE permission access to tgtpid/srcpid. ENODEV indicates your machine lacks SMT. [peterz: complete rewrite] Signed-off-by: Chris Hyser <chris.hyser@oracle.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Don Hiatt <dhiatt@digitalocean.com> Tested-by: Hongyu Ning <hongyu.ning@linux.intel.com> Tested-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20210422123309.039845339@infradead.org
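For illustration, a userspace caller could look like the sketch below. The PR_SCHED_CORE* values mirror the uapi additions described above and the scope values mirror the kernel's enum pid_type, but both should be checked against the prctl.h shipped with your kernel.

	#include <stdio.h>
	#include <sys/prctl.h>

	#ifndef PR_SCHED_CORE
	#define PR_SCHED_CORE			62
	#define PR_SCHED_CORE_GET		0
	#define PR_SCHED_CORE_CREATE		1
	#define PR_SCHED_CORE_SHARE_TO		2
	#define PR_SCHED_CORE_SHARE_FROM	3
	#endif

	/* scope values follow the kernel's enum pid_type */
	#define SCOPE_PID	0	/* single thread */
	#define SCOPE_TGID	1	/* whole process */
	#define SCOPE_PGID	2	/* process group */

	int main(void)
	{
		unsigned long cookie = 0;

		/* create a new core-scheduling cookie for this whole process */
		if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_CREATE, 0, SCOPE_TGID, 0)) {
			perror("PR_SCHED_CORE_CREATE");	/* e.g. ENODEV without SMT */
			return 1;
		}

		/* read the cookie back for the calling thread */
		if (prctl(PR_SCHED_CORE, PR_SCHED_CORE_GET, 0, SCOPE_PID, &cookie)) {
			perror("PR_SCHED_CORE_GET");
			return 1;
		}
		printf("core scheduling cookie: %#lx\n", cookie);
		return 0;
	}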
2021-05-12  sched: Inherit task cookie on fork()  (Peter Zijlstra)
Note that sched_core_fork() is called from under tasklist_lock, and not from sched_fork() earlier. This avoids a few races later. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Don Hiatt <dhiatt@digitalocean.com> Tested-by: Hongyu Ning <hongyu.ning@linux.intel.com> Tested-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20210422123308.980003687@infradead.org
2021-05-12  sched: Trivial core scheduling cookie management  (Peter Zijlstra)
In order to not have to use pid_struct, create a new, smaller, structure to manage task cookies for core scheduling. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Don Hiatt <dhiatt@digitalocean.com> Tested-by: Hongyu Ning <hongyu.ning@linux.intel.com> Tested-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20210422123308.919768100@infradead.org
2021-05-12  sched: Trivial forced-newidle balancer  (Peter Zijlstra)
When a sibling is forced-idle to match the core-cookie; search for matching tasks to fill the core. rcu_read_unlock() can incur an infrequent deadlock in sched_core_balance(). Fix this by using the RCU-sched flavor instead. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Don Hiatt <dhiatt@digitalocean.com> Tested-by: Hongyu Ning <hongyu.ning@linux.intel.com> Tested-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20210422123308.800048269@infradead.org
2021-05-12  sched: Basic tracking of matching tasks  (Peter Zijlstra)
Introduce task_struct::core_cookie as an opaque identifier for core scheduling. When enabled, core scheduling will only allow matching tasks to be on the core; where idle matches everything. When task_struct::core_cookie is set (and core scheduling is enabled) these tasks are indexed in a second RB-tree, first on cookie value then on scheduling function, such that matching task selection always finds the most eligible match. NOTE: *shudder* at the overhead... NOTE: *sigh*, a 3rd copy of the scheduling function; the alternative is per class tracking of cookies and that just duplicates a lot of stuff for no raisin (the 2nd copy lives in the rt-mutex PI code). [Joel: folded fixes] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Don Hiatt <dhiatt@digitalocean.com> Tested-by: Hongyu Ning <hongyu.ning@linux.intel.com> Tested-by: Vincent Guittot <vincent.guittot@linaro.org> Link: https://lkml.kernel.org/r/20210422123308.496975854@infradead.org
2021-05-12  delayacct: Add sysctl to enable at runtime  (Peter Zijlstra)
Just like sched_schedstats, allow runtime enabling (and disabling) of delayacct. This is useful if one forgot to add the delayacct boot time option. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/YJkhebGJAywaZowX@hirez.programming.kicks-ass.net
2021-05-12  delayacct: Default disabled  (Peter Zijlstra)
Assuming this stuff isn't actually used much; disable it by default and avoid allocating and tracking the task_delay_info structure. taskstats is changed to still report the regular sched and sched_info and only skip the missing task_delay_info fields instead of not reporting anything. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Link: https://lkml.kernel.org/r/20210505111525.308018373@infradead.org
2021-05-12  delayacct: Add static_branch in scheduler hooks  (Peter Zijlstra)
Cheaper when delayacct is disabled. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Balbir Singh <bsingharora@gmail.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Link: https://lkml.kernel.org/r/20210505111525.248028369@infradead.org
2021-05-12  sched: Simplify sched_info_on()  (Peter Zijlstra)
The situation around sched_info is somewhat complicated, it is used by sched_stats and delayacct and, indirectly, kvm. If SCHEDSTATS=Y (but disabled by default) sched_info_on() is unconditionally true -- this is the case for all distro kernel configs I checked. If for some reason SCHEDSTATS=N, but TASK_DELAY_ACCT=Y, then sched_info_on() can return false when delayacct is disabled, presumably because there would be no other users left; except kvm is. Instead of complicating matters further by accurately accounting sched_stat and kvm state, simply unconditionally enable when SCHED_INFO=Y, matching the common distro case. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Link: https://lkml.kernel.org/r/20210505111525.121458839@infradead.org
2021-05-11  usb: class: cdc-wdm: WWAN framework integration  (Loic Poulain)
The WWAN framework provides a unified way to handle WWAN/modems and its control port(s). It has initially been introduced to support MHI/PCI modems, offering the same control protocols as the USB variants such as MBIM, QMI, AT... The WWAN framework exposes these control protocols as character devices, similarly to cdc-wdm, but in a bus agnostic fashion. This change adds registration of the USB modem cdc-wdm control endpoints to the WWAN framework as standard control ports (wwanXpY...). Exposing cdc-wdm through the WWAN framework normally maintains backward compatibility, e.g.: $ qmicli --device-open-qmi -d /dev/wwan0p1QMI --dms-get-ids instead of $ qmicli --device-open-qmi -d /dev/cdc-wdm0 --dms-get-ids However, some tools may rely on the cdc-wdm driver/device name for device detection. It is then safer to keep the 'legacy' cdc-wdm character device to prevent any breakage. This is handled in this change by API mutual exclusion: only one access method can be used at a time, either the cdc-wdm chardev or the WWAN API. Note that unknown channel types (other than MBIM, AT or QMI) are not registered to the WWAN framework. Signed-off-by: Loic Poulain <loic.poulain@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-11  net: wwan: Add unknown port type  (Loic Poulain)
Some devices may have ports with unknown type/protocol which need to be tagged (though not supported by WWAN core). This will be the case for cdc-wdm based drivers. Signed-off-by: Loic Poulain <loic.poulain@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2021-05-11  kyber: fix out of bounds access when preempted  (Omar Sandoval)
__blk_mq_sched_bio_merge() gets the ctx and hctx for the current CPU and passes the hctx to ->bio_merge(). kyber_bio_merge() then gets the ctx for the current CPU again and uses that to get the corresponding Kyber context in the passed hctx. However, the thread may be preempted between the two calls to blk_mq_get_ctx(), and the ctx returned the second time may no longer correspond to the passed hctx. This "works" accidentally most of the time, but it can cause us to read garbage if the second ctx came from an hctx with more ctx's than the first one (i.e., if ctx->index_hw[hctx->type] > hctx->nr_ctx). This manifested as this UBSAN array index out of bounds error reported by Jakub:

	UBSAN: array-index-out-of-bounds in ../kernel/locking/qspinlock.c:130:9
	index 13106 is out of range for type 'long unsigned int [128]'
	Call Trace:
	 dump_stack+0xa4/0xe5
	 ubsan_epilogue+0x5/0x40
	 __ubsan_handle_out_of_bounds.cold.13+0x2a/0x34
	 queued_spin_lock_slowpath+0x476/0x480
	 do_raw_spin_lock+0x1c2/0x1d0
	 kyber_bio_merge+0x112/0x180
	 blk_mq_submit_bio+0x1f5/0x1100
	 submit_bio_noacct+0x7b0/0x870
	 submit_bio+0xc2/0x3a0
	 btrfs_map_bio+0x4f0/0x9d0
	 btrfs_submit_data_bio+0x24e/0x310
	 submit_one_bio+0x7f/0xb0
	 submit_extent_page+0xc4/0x440
	 __extent_writepage_io+0x2b8/0x5e0
	 __extent_writepage+0x28d/0x6e0
	 extent_write_cache_pages+0x4d7/0x7a0
	 extent_writepages+0xa2/0x110
	 do_writepages+0x8f/0x180
	 __writeback_single_inode+0x99/0x7f0
	 writeback_sb_inodes+0x34e/0x790
	 __writeback_inodes_wb+0x9e/0x120
	 wb_writeback+0x4d2/0x660
	 wb_workfn+0x64d/0xa10
	 process_one_work+0x53a/0xa80
	 worker_thread+0x69/0x5b0
	 kthread+0x20b/0x240
	 ret_from_fork+0x1f/0x30

Only Kyber uses the hctx, so fix it by passing the request_queue to ->bio_merge() instead. BFQ and mq-deadline just use that, and Kyber can map the queues itself to avoid the mismatch.

Fixes: a6088845c2bf ("block: kyber: make kyber more friendly with merging")
Reported-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Link: https://lore.kernel.org/r/c7598605401a48d5cfeadebb678abd10af22b83f.1620691329.git.osandov@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
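The interface change at the heart of the fix is small; a sketch of the elevator hook after the patch (member shown in isolation):

	struct elevator_mq_ops {
		/* ... */
		/*
		 * Previously: bool (*bio_merge)(struct blk_mq_hw_ctx *, struct bio *,
		 *                               unsigned int);
		 * The scheduler now receives the queue and derives ctx/hctx itself,
		 * so a preemption between blk_mq_get_ctx() calls can no longer pair
		 * a ctx with the wrong hctx.
		 */
		bool (*bio_merge)(struct request_queue *q, struct bio *bio,
				  unsigned int nr_segs);
		/* ... */
	};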
2021-05-11  soundwire: add missing kernel-doc description  (Pierre-Louis Bossart)
For some reason we never added a description for the clk_stop callback. Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Reviewed-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com> Reviewed-by: Rander Wang <rander.wang@intel.com> Signed-off-by: Bard Liao <yung-chuan.liao@linux.intel.com> Link: https://lore.kernel.org/r/20210511030048.25622-3-yung-chuan.liao@linux.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-05-11  soundwire: bus: only use CLOCK_STOP_MODE0 and fix confusions  (Pierre-Louis Bossart)
Existing devices and implementations only support the required CLOCK_STOP_MODE0. All the code related to CLOCK_STOP_MODE1 has not been tested and is highly questionable, with a clear confusion between CLOCK_STOP_MODE1 and the simple clock stop state machine. This patch removes all usages of CLOCK_STOP_MODE1 - which has no impact on any solution - and fixes the use of the simple clock stop state machine. The resulting code should be a lot more symmetrical and easier to maintain. Note that CLOCK_STOP_MODE1 is not supported in the SoundWire Device Class specification so it's rather unlikely that we need to re-add this mode later. Signed-off-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> Reviewed-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com> Reviewed-by: Rander Wang <rander.wang@intel.com> Signed-off-by: Bard Liao <yung-chuan.liao@linux.intel.com> Link: https://lore.kernel.org/r/20210511030048.25622-2-yung-chuan.liao@linux.intel.com Signed-off-by: Vinod Koul <vkoul@kernel.org>
2021-05-11  spi: Switch to signed types for *_native_cs SPI controller fields  (Andy Shevchenko)
While fixing undefined behaviour, commit f60d7270c8a3 ("spi: Avoid undefined behaviour when counting unused native CSs") missed the case when all CSs are GPIOs and thus unused_native_cs will be evaluated to -1 in unsigned representation. This will falsely trigger a condition in spi_get_gpio_descs(). Switch to signed types for the *_native_cs SPI controller fields to fix the above. Fixes: f60d7270c8a3 ("spi: Avoid undefined behaviour when counting unused native CSs") Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20210510131242.49455-1-andriy.shevchenko@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-05-11  spi: pxa2xx: Introduce special type for Merrifield SPIs  (Andy Shevchenko)
Intel Merrifield SPI is actually closer to PXA3xx. It has an extended FIFO (32 bytes) and additional registers to get or set FIFO thresholds. Introduce a new type for Intel Merrifield SPI host controllers and handle the bigger FIFO size. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20210510124134.24638-15-andriy.shevchenko@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-05-11  spi: pxa2xx: Use pxa_ssp_enable()/pxa_ssp_disable() in the driver  (Andy Shevchenko)
There are a few places that repeat the logic of pxa_ssp_enable() and pxa_ssp_disable(). Use the helpers instead of the open-coded variants. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Link: https://lore.kernel.org/r/20210510124134.24638-10-andriy.shevchenko@linux.intel.com Signed-off-by: Mark Brown <broonie@kernel.org>
2021-05-11  stack: Replace "o" output with "r" input constraint  (Nick Desaulniers)
"o" isn't a common asm() constraint to use; it triggers an assertion in assert-enabled builds of LLVM that it's not recognized when targeting aarch64 (though it appears to fall back to "m"). It's fixed in LLVM 13 now, but there isn't really a good reason to use "o" in particular here. To avoid causing build issues for those using assert-enabled builds of earlier LLVM versions, the constraint needs changing. Instead, if the point is to retain the __builtin_alloca(), make ptr appear to "escape" via being an input to an empty inline asm block. This is preferable anyways, since otherwise this looks like a dead store. While the use of "r" was considered in https://lore.kernel.org/lkml/202104011447.2E7F543@keescook/ it was only tested as an output (which looks like a dead store, and wasn't sufficient). Use "r" as an input constraint instead, which behaves correctly across compilers and architectures. Fixes: 39218ff4c625 ("stack: Optionally randomize kernel stack offset each syscall") Signed-off-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Kees Cook <keescook@chromium.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Nathan Chancellor <nathan@kernel.org> Reviewed-by: Nathan Chancellor <nathan@kernel.org> Link: https://reviews.llvm.org/D100412 Link: https://bugs.llvm.org/show_bug.cgi?id=49956 Link: https://lore.kernel.org/r/20210419231741.4084415-1-keescook@chromium.org
2021-05-10  selinux: delete selinux_xfrm_policy_lookup() useless argument  (Zhongjun Tan)
selinux_xfrm_policy_lookup() is a hook of security_xfrm_policy_lookup(). The dir argument is useless in security_xfrm_policy_lookup(), so remove the dir argument from selinux_xfrm_policy_lookup() and security_xfrm_policy_lookup(). Signed-off-by: Zhongjun Tan <tanzhongjun@yulong.com> [PM: reformat the subject line] Signed-off-by: Paul Moore <paul@paul-moore.com>
2021-05-10  cgroup: inline cgroup_task_freeze()  (Roman Gushchin)
After the introduction of the cgroup.kill there is only one call site of cgroup_task_freeze() left: cgroup_exit(). cgroup_task_freeze() is currently taking rcu_read_lock() to read task's cgroup flags, but because it's always called with css_set_lock locked, the rcu protection is excessive. Simplify the code by inlining cgroup_task_freeze(). v2: fix build Signed-off-by: Roman Gushchin <guro@fb.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Signed-off-by: Tejun Heo <tj@kernel.org>
2021-05-10  rcu: Reject RCU_LOCKDEP_WARN() false positives  (Paul E. McKenney)
If another lockdep report runs concurrently with an RCU lockdep report from RCU_LOCKDEP_WARN(), the following sequence of events can occur: 1. debug_lockdep_rcu_enabled() sees that lockdep is enabled when called from (say) synchronize_rcu(). 2. Lockdep is disabled by a concurrent lockdep report. 3. debug_lockdep_rcu_enabled() evaluates its lockdep-expression argument, for example, lock_is_held(&rcu_bh_lock_map). 4. Because lockdep is now disabled, lock_is_held() plays it safe and returns the constant 1. 5. But in this case, the constant 1 is not safe, because invoking synchronize_rcu() under rcu_read_lock_bh() is disallowed. 6. debug_lockdep_rcu_enabled() wrongly invokes lockdep_rcu_suspicious(), resulting in a false-positive splat. This commit therefore changes RCU_LOCKDEP_WARN() to check debug_lockdep_rcu_enabled() after checking the lockdep expression, so that any "safe" returns from lock_is_held() are rejected by debug_lockdep_rcu_enabled(). This requires memory ordering, which is supplied by READ_ONCE(debug_locks). The resulting volatile accesses prevent the compiler from reordering and the fact that only one variable is being accessed prevents the underlying hardware from reordering. The combination works for IA64, which can reorder reads to the same location, but this is defeated by the volatile accesses, which compile to load instructions that provide ordering. Reported-by: syzbot+dde0cc33951735441301@syzkaller.appspotmail.com Reported-by: Matthew Wilcox <willy@infradead.org> Reported-by: syzbot+88e4f02896967fe1ab0d@syzkaller.appspotmail.com Reported-by: Thomas Gleixner <tglx@linutronix.de> Suggested-by: Boqun Feng <boqun.feng@gmail.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
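The reordering amounts to swapping the first two operands of the &&; a sketch of the resulting macro:

	/*
	 * Evaluate the lockdep expression (c) first; only when it reports a
	 * problem do we consult debug_lockdep_rcu_enabled(), whose
	 * READ_ONCE(debug_locks) access rejects the "safe" constant answers
	 * lock_is_held() returns once lockdep has been disabled by a
	 * concurrent report.
	 */
	#define RCU_LOCKDEP_WARN(c, s)						\
		do {								\
			static bool __section(".data.unlikely") __warned;	\
			if ((c) && debug_lockdep_rcu_enabled() && !__warned) {	\
				__warned = true;				\
				lockdep_rcu_suspicious(__FILE__, __LINE__, s);	\
			}							\
		} while (0)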
2021-05-10  rcu: Remove the unused rcu_irq_exit_preempt() function  (Paul E. McKenney)
Commit 9ee01e0f69a9 ("x86/entry: Clean up idtentry_enter/exit() leftovers") left the rcu_irq_exit_preempt() in place in order to avoid conflicts with the -rcu tree. Now that this change has long since hit mainline, this commit removes the no-longer-used rcu_irq_exit_preempt() function. Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-10  bpf: verifier: Allocate idmap scratch in verifier env  (Lorenz Bauer)
func_states_equal makes a very short lived allocation for idmap, probably because it's too large to fit on the stack. However the function is called quite often, leading to a lot of alloc / free churn. Replace the temporary allocation with dedicated scratch space in struct bpf_verifier_env. Signed-off-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Edward Cree <ecree.xilinx@gmail.com> Link: https://lore.kernel.org/bpf/20210429134656.122225-4-lmb@cloudflare.com
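Conceptually, the scratch array moves from a per-call allocation into the long-lived verifier state; a sketch, with the array bound chosen only for illustration:

	struct bpf_id_pair {
		u32 old;
		u32 cur;
	};

	#define BPF_ID_MAP_SIZE 64	/* illustrative bound on live register ids */

	struct bpf_verifier_env {
		/* ... existing members ... */
		/* reused by states_equal()/check_ids() instead of a kmalloc per call */
		struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
	};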
2021-05-10  srcu: Initialize SRCU after timers  (Frederic Weisbecker)
Once srcu_init() is called, the SRCU core will make use of delayed workqueues, which rely on timers. However init_timers() is called several steps after rcu_init(). This means that a call_srcu() after rcu_init() but before init_timers() would find itself within a dangerously uninitialized timer core. This commit therefore creates a separate call to srcu_init() after init_timers() completes, which ensures that we stay in early SRCU mode until timers are safe(r). Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Cc: Uladzislau Rezki <urezki@gmail.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Neeraj Upadhyay <neeraju@codeaurora.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Joel Fernandes <joel@joelfernandes.org> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-10  srcu: Unconditionally embed struct lockdep_map  (Frederic Weisbecker)
Since struct lockdep_map has zero size when CONFIG_DEBUG_LOCK_ALLOC=n, this commit removes the #ifdef from the srcu_struct structure's ->dep_map. This change will simplify further manipulations of this field. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Cc: Uladzislau Rezki <urezki@gmail.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Neeraj Upadhyay <neeraju@codeaurora.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Joel Fernandes <joel@joelfernandes.org> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-10  timer: Revert "timer: Add timer_curr_running()"  (Frederic Weisbecker)
This reverts commit dcd42591ebb8a25895b551a5297ea9c24414ba54. The only user was RCU/nocb. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Joel Fernandes <joel@joelfernandes.org> Cc: Neeraj Upadhyay <neeraju@codeaurora.org> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
2021-05-10  ptp: ptp_clock: make scaled_ppm_to_ppb static inline  (Radu Pirea (NXP OSS))
Make scaled_ppm_to_ppb static inline to be able to build drivers that use this function even with PTP_1588_CLOCK disabled. Signed-off-by: Radu Pirea (NXP OSS) <radu-nicolae.pirea@oss.nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
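For reference, the helper is a small fixed-point conversion, so making it static inline in the header is essentially free; roughly:

	/*
	 * scaled_ppm carries parts-per-million with a 16-bit binary fraction:
	 * ppb = scaled_ppm * 1000 / 2^16 = scaled_ppm * 125 / 2^13.
	 */
	static inline long scaled_ppm_to_ppb(long ppm)
	{
		s64 ppb = 1 + ppm;	/* round */

		ppb *= 125;
		ppb >>= 13;
		return (long)ppb;
	}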