diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
index ffc064c1ec681c..49311f3da6f24b 100644
--- a/Documentation/admin-guide/hw-vuln/index.rst
+++ b/Documentation/admin-guide/hw-vuln/index.rst
@@ -9,5 +9,6 @@ are configurable at compile, boot or run time.
 .. toctree::
    :maxdepth: 1
 
+   spectre
    l1tf
    mds
diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
new file mode 100644
index 00000000000000..25f3b253219859
--- /dev/null
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -0,0 +1,697 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Spectre Side Channels
+=====================
+
+Spectre is a class of side channel attacks that exploit branch prediction
+and speculative execution on modern CPUs to read memory, possibly
+bypassing access controls. Speculative execution side channel exploits
+do not modify memory but attempt to infer privileged data in memory.
+
+This document covers Spectre variant 1 and Spectre variant 2.
+
+Affected processors
+-------------------
+
+Speculative execution side channel methods affect a wide range of modern
+high performance processors, since most modern high speed processors
+use branch prediction and speculative execution.
+
+The following CPUs are vulnerable:
+
+   - Intel Core, Atom, Pentium, and Xeon processors
+
+   - AMD Phenom, EPYC, and Zen processors
+
+   - IBM POWER and zSeries processors
+
+   - Higher end ARM processors
+
+   - Apple CPUs
+
+   - Higher end MIPS CPUs
+
+   - Likely most other high performance CPUs. Contact your CPU vendor
+     for details.
+
+Whether a processor is affected or not can be read out from the Spectre
+vulnerability files in sysfs. See :ref:`spectre_sys_info`.
+
+Related CVEs
+------------
+
+The following CVE entries describe Spectre variants:
+
+   =============  =======================  =================
+   CVE-2017-5753  Bounds check bypass      Spectre variant 1
+   CVE-2017-5715  Branch target injection  Spectre variant 2
+   =============  =======================  =================
+
+Problem
+-------
+
+CPUs use speculative operations to improve performance. That may leave
+traces of memory accesses or computations in the processor's caches,
+buffers, and branch predictors. Malicious software may be able to
+influence the speculative execution paths, and then use the side effects
+of the speculative execution in the CPUs' caches and buffers to infer
+privileged data touched during the speculative execution.
+
+Spectre variant 1 attacks take advantage of speculative execution of
+conditional branches, while Spectre variant 2 attacks use speculative
+execution of indirect branches to leak privileged memory.
+See :ref:`[1] <spec_ref1>` :ref:`[5] <spec_ref5>` :ref:`[7] <spec_ref7>`
+:ref:`[10] <spec_ref10>` :ref:`[11] <spec_ref11>`.
+
+Spectre variant 1 (Bounds Check Bypass)
+---------------------------------------
+
+The bounds check bypass attack :ref:`[2] <spec_ref2>` takes advantage
+of speculative execution that bypasses conditional branch instructions
+used for memory access bounds checks (e.g. checking if the index of an
+array results in memory access within a valid range). This results in
+memory accesses to invalid memory (with an out-of-bound index) that are
+done speculatively before validation checks resolve. Such speculative
+memory accesses can leave side effects, creating side channels which
+leak information to the attacker.
+
+There are some extensions of Spectre variant 1 attacks for reading data
+over the network, see :ref:`[12] <spec_ref12>`. However, such attacks
+are difficult, low bandwidth, fragile, and are considered low risk.
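+
+As an illustration only (this is the canonical pattern from the Spectre
+paper, not code taken from the kernel; "array1", "array1_size" and
+"array2" are hypothetical names), a variant 1 gadget is a bounds check
+followed by a dependent memory access::
+
+   static unsigned char array1[16];
+   static unsigned char array2[256 * 4096];
+
+   unsigned char victim_function(unsigned long x, unsigned long array1_size)
+   {
+           unsigned char y = 0;
+
+           if (x < array1_size)                    /* may be predicted taken */
+                   y = array2[array1[x] * 4096];   /* speculative OOB read */
+           return y;
+   }
+
+If the branch mispredicts for an attacker-chosen "x", array1[x] is read
+out of bounds and the dependent load from array2 leaves a value-dependent
+footprint in the cache that can be recovered by timing later accesses.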
+
+Spectre variant 2 (Branch Target Injection)
+-------------------------------------------
+
+The branch target injection attack takes advantage of speculative
+execution of indirect branches :ref:`[3] <spec_ref3>`. The indirect
+branch predictors inside the processor used to guess the target of
+indirect branches can be influenced by an attacker, causing gadget code
+to be speculatively executed, thus exposing sensitive data touched by
+the victim. The side effects left in the CPU's caches during speculative
+execution can be measured to infer data values.
+
+.. _poison_btb:
+
+In Spectre variant 2 attacks, the attacker can steer speculative indirect
+branches in the victim to gadget code by poisoning the branch target
+buffer of a CPU used for predicting indirect branch addresses. Such
+poisoning could be done by indirect branching into existing code,
+with the address offset of the indirect branch under the attacker's
+control. Since the branch prediction on impacted hardware does not
+fully disambiguate branch address and uses the offset for prediction,
+this could cause privileged code's indirect branch to jump to gadget
+code with the same offset.
+
+The most useful gadgets take an attacker-controlled input parameter (such
+as a register value) so that the memory read can be controlled. Gadgets
+without input parameters might be possible, but the attacker would have
+very little control over what memory can be read, reducing the risk of
+the attack revealing useful data.
+
+One other variant 2 attack vector is for the attacker to poison the
+return stack buffer (RSB) :ref:`[13] <spec_ref13>` to cause speculative
+subroutine return instruction execution to go to a gadget. An attacker's
+imbalanced subroutine call instructions might "poison" entries in the
+return stack buffer which are later consumed by a victim's subroutine
+return instructions. This attack can be mitigated by flushing the return
+stack buffer on context switch, or virtual machine (VM) exit.
+
+On systems with simultaneous multi-threading (SMT), attacks are possible
+from the sibling thread, as level 1 cache and branch target buffer
+(BTB) may be shared between hardware threads in a CPU core. A malicious
+program running on the sibling thread may influence its peer's BTB to
+steer its indirect branch speculations to gadget code, and measure the
+speculative execution's side effects left in level 1 cache to infer the
+victim's data.
+
+Attack scenarios
+----------------
+
+The following list of attack scenarios has been anticipated, but may
+not cover all possible attack vectors.
+
+1. A user process attacking the kernel
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+   The attacker passes a parameter to the kernel via a register or
+   via a known address in memory during a syscall. Such a parameter may
+   be used later by the kernel as an index to an array or to derive
+   a pointer for a Spectre variant 1 attack. The index or pointer
+   is invalid, but bounds checks are bypassed in the code branch taken
+   for speculative execution. This could cause privileged memory to be
+   accessed and leaked.
+
+   For kernel code where data pointers have been identified as
+   potentially attacker-influenced, new "nospec" accessor macros are
+   used to prevent speculative loading of data.
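+
+   As a sketch of that pattern (the "table" array and "lookup()" helper
+   are made-up names, not actual kernel code), a user-supplied index is
+   clamped with array_index_nospec() so that it stays in bounds even on
+   a mispredicted speculative path::
+
+      #include <linux/kernel.h>
+      #include <linux/nospec.h>
+
+      static const unsigned long table[16];
+
+      static unsigned long lookup(unsigned int cmd)
+      {
+              if (cmd >= ARRAY_SIZE(table))
+                      return 0;
+              /* Clamp cmd to [0, ARRAY_SIZE(table)) even under speculation. */
+              cmd = array_index_nospec(cmd, ARRAY_SIZE(table));
+              return table[cmd];
+      }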
+
+   A Spectre variant 2 attacker can :ref:`poison <poison_btb>` the
+   branch target buffer (BTB) before issuing a syscall to launch an
+   attack. After entering the kernel, the kernel could use the poisoned
+   branch target buffer on an indirect jump and jump to gadget code in
+   speculative execution.
+
+   If an attacker tries to control the memory addresses leaked during
+   speculative execution, they would also need to pass a parameter to
+   the gadget, either through a register or a known address in memory.
+   After the gadget has executed, they can measure the side effect.
+
+   The kernel can protect itself against consuming poisoned branch
+   target buffer entries by using return trampolines (also known as
+   "retpoline") :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` for all
+   indirect branches. Return trampolines trap speculative execution paths
+   to prevent jumping to gadget code during speculative execution.
+   x86 CPUs with Enhanced Indirect Branch Restricted Speculation
+   (Enhanced IBRS) available in hardware should use the feature to
+   mitigate Spectre variant 2 instead of retpoline. Enhanced IBRS is
+   more efficient than retpoline.
+
+   There may be gadget code in firmware which could be exploited with
+   a Spectre variant 2 attack by a rogue user process. To mitigate such
+   attacks on x86, the Indirect Branch Restricted Speculation (IBRS)
+   feature is turned on before the kernel invokes any firmware code.
+
+2. A user process attacking another user process
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+   A malicious user process can try to attack another user process,
+   either via a context switch on the same hardware thread, or from the
+   sibling hyperthread sharing a physical processor core on a
+   simultaneous multi-threading (SMT) system.
+
+   Spectre variant 1 attacks generally require passing parameters
+   between the processes, which needs a data passing relationship, such
+   as remote procedure calls (RPC). Those parameters are used in gadget
+   code to derive invalid data pointers accessing privileged memory in
+   the attacked process.
+
+   Spectre variant 2 attacks can be launched from a rogue process by
+   :ref:`poisoning <poison_btb>` the branch target buffer. This can
+   influence the indirect branch targets for a victim process that
+   either runs later on the same hardware thread, or runs concurrently
+   on a sibling hardware thread sharing the same physical core.
+
+   A user process can protect itself against Spectre variant 2 attacks
+   by using the prctl() syscall to disable indirect branch speculation
+   for itself. An administrator can also cordon off an unsafe process
+   from polluting the branch target buffer by disabling the process's
+   indirect branch speculation. This comes with a performance cost
+   from not using indirect branch speculation and clearing the branch
+   target buffer. When SMT is enabled on x86, for a process that has
+   indirect branch speculation disabled, Single Threaded Indirect Branch
+   Predictors (STIBP) :ref:`[4] <spec_ref4>` are turned on to prevent
+   the sibling thread from controlling the branch target buffer. In
+   addition, the Indirect Branch Prediction Barrier (IBPB) is issued to
+   clear the branch target buffer when context switching to and from
+   such a process.
+
+   On x86, the return stack buffer is stuffed on context switch.
+   This prevents the branch target buffer from being used for branch
+   prediction when the return stack buffer underflows while switching to
+   a deeper call stack. Any poisoned entries in the return stack buffer
+   left by the previous process will also be cleared.
+
+   User programs should use address space randomization to make attacks
+   more difficult (set /proc/sys/kernel/randomize_va_space = 1 or 2).
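+
+   For example, a program can disable indirect branch speculation for
+   itself with the PR_SET_SPECULATION_CTRL prctl (a minimal userspace
+   sketch; a non-zero return means the kernel or CPU does not offer
+   this control)::
+
+      #include <stdio.h>
+      #include <sys/prctl.h>
+      #include <linux/prctl.h>
+
+      int main(void)
+      {
+              /* Opt this task out of indirect branch speculation. */
+              if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
+                        PR_SPEC_DISABLE, 0, 0))
+                      perror("PR_SET_SPECULATION_CTRL");
+              return 0;
+      }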
+
+3. A virtualized guest attacking the host
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+   The attack mechanism is similar to how user processes attack the
+   kernel. The kernel is entered via hyper-calls or other virtualization
+   exit paths.
+
+   For Spectre variant 1 attacks, rogue guests can pass parameters
+   (e.g. in registers) via hyper-calls to derive invalid pointers to
+   speculate into privileged memory after entering the kernel. For places
+   where such kernel code has been identified, nospec accessor macros
+   are used to stop speculative memory access.
+
+   For Spectre variant 2 attacks, rogue guests can :ref:`poison
+   <poison_btb>` the branch target buffer or return stack buffer, causing
+   the kernel to jump to gadget code in the speculative execution paths.
+
+   To mitigate variant 2, the host kernel can use return trampolines
+   for indirect branches to bypass the poisoned branch target buffer,
+   and flush the return stack buffer on VM exit. This prevents rogue
+   guests from affecting indirect branching in the host kernel.
+
+   To protect host processes from rogue guests, host processes can have
+   indirect branch speculation disabled via prctl(). The branch target
+   buffer is cleared before context switching to such processes.
+
+4. A virtualized guest attacking another guest
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+   A rogue guest may attack another guest to get data accessible by the
+   other guest.
+
+   Spectre variant 1 attacks are possible if parameters can be passed
+   between guests. This may be done via mechanisms such as shared memory
+   or message passing. Such parameters could be used to derive data
+   pointers to privileged data in the guest. The privileged data could
+   be accessed by gadget code in the victim's speculation paths.
+
+   Spectre variant 2 attacks can be launched from a rogue guest by
+   :ref:`poisoning <poison_btb>` the branch target buffer or the return
+   stack buffer. Such poisoned entries could be used to influence
+   speculative execution paths in the victim guest.
+
+   The Linux kernel mitigates attacks on other guests running in the
+   same CPU hardware thread by flushing the return stack buffer on VM
+   exit, and clearing the branch target buffer before switching to a
+   new guest.
+
+   If SMT is used, Spectre variant 2 attacks from an untrusted guest
+   in the sibling hyperthread can be mitigated by the administrator,
+   by turning off the unsafe guest's indirect branch speculation via
+   prctl(). A guest can also protect itself by turning on microcode
+   based mitigations (such as IBPB or STIBP on x86) within the guest.
+
+.. _spectre_sys_info:
+
+Spectre system information
+--------------------------
+
+The Linux kernel provides a sysfs interface to enumerate the current
+mitigation status of the system for Spectre: whether the system is
+vulnerable, and which mitigations are active.
+
+The sysfs file showing Spectre variant 1 mitigation status is:
+
+   /sys/devices/system/cpu/vulnerabilities/spectre_v1
+
+The possible values in this file are:
+
+  =========================================  ==============================
+  'Mitigation: __user pointer sanitization'  Protection in the kernel on a
+                                             case by case basis with
+                                             explicit pointer sanitization.
+  =========================================  ==============================
+
+However, the protections are put in place on a case by case basis,
+and there is no guarantee that all possible attack vectors for Spectre
+variant 1 are covered.
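+
+The file can be read like any other sysfs attribute; for instance (a
+trivial sketch, with minimal error handling)::
+
+   #include <stdio.h>
+
+   int main(void)
+   {
+           char line[256];
+           FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v1", "r");
+
+           if (f && fgets(line, sizeof(line), f))
+                   printf("spectre_v1: %s", line);
+           if (f)
+                   fclose(f);
+           return 0;
+   }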
+
+The spectre_v2 kernel file reports if the kernel has been compiled with
+retpoline mitigation or if the CPU has hardware mitigation, and if the
+CPU has support for additional process-specific mitigation.
+
+This file also reports CPU features enabled by microcode to mitigate
+attacks between user processes:
+
+1. Indirect Branch Prediction Barrier (IBPB) to add additional
+   isolation between processes of different users.
+2. Single Thread Indirect Branch Predictors (STIBP) to add additional
+   isolation between CPU threads running on the same core.
+
+These CPU features may impact performance when used and can be enabled
+per process on a case-by-case basis.
+
+The sysfs file showing Spectre variant 2 mitigation status is:
+
+   /sys/devices/system/cpu/vulnerabilities/spectre_v2
+
+The possible values in this file are:
+
+  - Kernel status:
+
+  ====================================  =================================
+  'Not affected'                        The processor is not vulnerable
+  'Vulnerable'                          Vulnerable, no mitigation
+  'Mitigation: Full generic retpoline'  Software-focused mitigation
+  'Mitigation: Full AMD retpoline'      AMD-specific software mitigation
+  'Mitigation: Enhanced IBRS'           Hardware-focused mitigation
+  ====================================  =================================
+
+  - Firmware status: Show if Indirect Branch Restricted Speculation (IBRS)
+    is used to protect against Spectre variant 2 attacks when calling
+    firmware (x86 only).
+
+  ==========  =============================================================
+  'IBRS_FW'   Protection against user program attacks when calling firmware
+  ==========  =============================================================
+
+  - Indirect branch prediction barrier (IBPB) status for protection between
+    processes of different users. This feature can be controlled through
+    prctl() per process, or through kernel command line options. This is
+    an x86-only feature. For more details see below.
+
+  ===================  ========================================================
+  'IBPB: disabled'     IBPB unused
+  'IBPB: always-on'    Use IBPB on all tasks
+  'IBPB: conditional'  Use IBPB on SECCOMP or indirect branch restricted tasks
+  ===================  ========================================================
+
+  - Single threaded indirect branch prediction (STIBP) status for protection
+    between different hyper threads. This feature can be controlled through
+    prctl() per process, or through kernel command line options. This is
+    an x86-only feature. For more details see below.
+
+  ====================  ========================================================
+  'STIBP: disabled'     STIBP unused
+  'STIBP: forced'       Use STIBP on all tasks
+  'STIBP: conditional'  Use STIBP on SECCOMP or indirect branch restricted tasks
+  ====================  ========================================================
+
+  - Return stack buffer (RSB) protection status:
+
+  =============  ===========================================
+  'RSB filling'  Protection of RSB on context switch enabled
+  =============  ===========================================
+
+Full mitigation might require a microcode update from the CPU
+vendor. When the necessary microcode is not available, the kernel will
+report the system as vulnerable.
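+
+A task can likewise query its own indirect branch speculation state with
+the PR_GET_SPECULATION_CTRL prctl (a sketch; the return value is a bitmask
+of PR_SPEC_* flags, or negative if the control is unsupported)::
+
+   #include <stdio.h>
+   #include <sys/prctl.h>
+   #include <linux/prctl.h>
+
+   int main(void)
+   {
+           int state = prctl(PR_GET_SPECULATION_CTRL,
+                             PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);
+
+           if (state < 0)
+                   perror("PR_GET_SPECULATION_CTRL");
+           else if (state & PR_SPEC_DISABLE)
+                   puts("indirect branch speculation: disabled");
+           else
+                   puts("indirect branch speculation: enabled");
+           return 0;
+   }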
+
+Turning on mitigation for Spectre variant 1 and Spectre variant 2
+-----------------------------------------------------------------
+
+1. Kernel mitigation
+^^^^^^^^^^^^^^^^^^^^
+
+   For Spectre variant 1, vulnerable kernel code (as determined
+   by code audit or scanning tools) is annotated on a case by case
+   basis to use nospec accessor macros for bounds clipping :ref:`[2]
+   <spec_ref2>` to avoid any usable disclosure gadgets. However, it may
+   not cover all attack vectors for Spectre variant 1.
+
+   For Spectre variant 2 mitigation, the compiler turns indirect calls or
+   jumps in the kernel into equivalent return trampolines (retpolines)
+   :ref:`[3] <spec_ref3>` :ref:`[9] <spec_ref9>` to go to the target
+   addresses. Speculative execution paths under retpolines are trapped
+   in an infinite loop to prevent any speculative execution jumping to
+   a gadget.
+
+   To turn on retpoline mitigation on a vulnerable CPU, the kernel
+   needs to be compiled with a gcc compiler that supports the
+   -mindirect-branch=thunk-extern -mindirect-branch-register options.
+   If the kernel is compiled with a Clang compiler, the compiler needs
+   to support the -mretpoline-external-thunk option. The kernel config
+   CONFIG_RETPOLINE needs to be turned on, and the CPU needs to run with
+   the latest updated microcode.
+
+   On Intel Skylake-era systems the mitigation covers most, but not all,
+   cases. See :ref:`[3] <spec_ref3>` for more details.
+
+   On CPUs with hardware mitigation for Spectre variant 2 (e.g. Enhanced
+   IBRS on x86), retpoline is automatically disabled at run time.
+
+   The retpoline mitigation is turned on by default on vulnerable
+   CPUs. It can be forced on or off by the administrator
+   via the kernel command line and sysfs control files. See
+   :ref:`spectre_mitigation_control_command_line`.
+
+   On x86, indirect branch restricted speculation is turned on by default
+   before invoking any firmware code to prevent Spectre variant 2 exploits
+   using the firmware.
+
+   Using kernel address space randomization (e.g. CONFIG_RANDOMIZE_BASE=y
+   and CONFIG_SLAB_FREELIST_RANDOM=y in the kernel configuration) makes
+   attacks on the kernel generally more difficult.
+
+2. User program mitigation
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+   User programs can mitigate Spectre variant 1 using LFENCE or "bounds
+   clipping". For more details see :ref:`[2] <spec_ref2>`.
+
+   For Spectre variant 2 mitigation, individual user programs
+   can be compiled with return trampolines for indirect branches.
+   This protects them from consuming poisoned entries in the branch
+   target buffer left by malicious software. Alternatively, the
+   programs can disable their indirect branch speculation via prctl()
+   (see :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
+   On x86, this will turn on STIBP to guard against attacks from the
+   sibling thread when the user program is running, and use IBPB to
+   flush the branch target buffer when switching to/from the program.
+
+   Restricting indirect branch speculation on a user program will
+   also prevent the program from launching a variant 2 attack
+   on x86. All sandboxed SECCOMP programs have indirect branch
+   speculation restricted by default. Administrators can change
+   that behavior via the kernel command line and sysfs control files.
+   See :ref:`spectre_mitigation_control_command_line`.
+
+   Programs that disable their indirect branch speculation will have
+   more overhead and run slower.
+
+   User programs should use address space randomization
+   (/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more
+   difficult.
+
+3. VM mitigation
+^^^^^^^^^^^^^^^^
+
+   Within the kernel, Spectre variant 1 attacks from rogue guests are
+   mitigated on a case by case basis in VM exit paths.
+   Vulnerable code uses nospec accessor macros for "bounds clipping",
+   to avoid any usable disclosure gadgets. However, this may not cover
+   all variant 1 attack vectors.
+
+   For Spectre variant 2 attacks from rogue guests to the kernel, the
+   Linux kernel uses retpoline or Enhanced IBRS to prevent consumption of
+   poisoned entries in the branch target buffer left by rogue guests. It
+   also flushes the return stack buffer on every VM exit, both to prevent
+   rogue guests from leaving poisoned entries there and to avoid the
+   return stack buffer underflows that would let a poisoned branch
+   target buffer be used for return prediction.
+
+   To mitigate guest-to-guest attacks in the same CPU hardware thread,
+   the branch target buffer is sanitized by flushing before switching
+   to a new guest on a CPU.
+
+   The above mitigations are turned on by default on vulnerable CPUs.
+
+   To mitigate guest-to-guest attacks from a sibling thread when SMT is
+   in use, an untrusted guest running in the sibling thread can have
+   its indirect branch speculation disabled by the administrator via
+   prctl().
+
+   The kernel also allows guests to use any microcode based mitigation
+   they choose to use (such as IBPB or STIBP on x86) to protect
+   themselves.
+
+.. _spectre_mitigation_control_command_line:
+
+Mitigation control on the kernel command line
+---------------------------------------------
+
+Spectre variant 2 mitigation can be disabled or force enabled at the
+kernel command line.
+
+	nospectre_v2
+
+		[X86] Disable all mitigations for the Spectre variant 2
+		(indirect branch prediction) vulnerability. System may
+		allow data leaks with this option, which is equivalent
+		to spectre_v2=off.
+
+	spectre_v2=
+
+		[X86] Control mitigation of Spectre variant 2
+		(indirect branch speculation) vulnerability.
+		The default operation protects the kernel from
+		user space attacks.
+
+		on
+			unconditionally enable, implies
+			spectre_v2_user=on
+		off
+			unconditionally disable, implies
+			spectre_v2_user=off
+		auto
+			kernel detects whether your CPU model is
+			vulnerable
+
+		Selecting 'on' will, and 'auto' may, choose a
+		mitigation method at run time according to the
+		CPU, the available microcode, the setting of the
+		CONFIG_RETPOLINE configuration option, and the
+		compiler with which the kernel was built.
+
+		Selecting 'on' will also enable the mitigation
+		against user space to user space task attacks.
+
+		Selecting 'off' will disable both the kernel and
+		the user space protections.
+
+		Specific mitigations can also be selected manually:
+
+		retpoline
+			replace indirect branches
+		retpoline,generic
+			Google's original retpoline
+		retpoline,amd
+			AMD-specific minimal thunk
+
+		Not specifying this option is equivalent to
+		spectre_v2=auto.
+
+For user space mitigation:
+
+	spectre_v2_user=
+
+		[X86] Control mitigation of Spectre variant 2
+		(indirect branch speculation) vulnerability between
+		user space tasks
+
+		on
+			Unconditionally enable mitigations. Is
+			enforced by spectre_v2=on
+
+		off
+			Unconditionally disable mitigations. Is
+			enforced by spectre_v2=off
+
+		prctl
+			Indirect branch speculation is enabled,
+			but mitigation can be enabled via prctl
+			per thread. The mitigation control state
+			is inherited on fork.
+
+		prctl,ibpb
+			Like "prctl" above, but only STIBP is
+			controlled per thread. IBPB is issued
+			always when switching between different user
+			space processes.
+
+		seccomp
+			Same as "prctl" above, but all seccomp
+			threads will enable the mitigation unless
+			they explicitly opt out.
+
+		seccomp,ibpb
+			Like "seccomp" above, but only STIBP is
+			controlled per thread. IBPB is issued
+			always when switching between different
+			user space processes.
+
+		auto
+			Kernel selects the mitigation depending on
+			the available CPU features and vulnerability.
+
+		Default mitigation:
+		If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl"
+
+		Not specifying this option is equivalent to
+		spectre_v2_user=auto.
+
+		In general the kernel by default selects
+		reasonable mitigations for the current CPU. To
+		disable Spectre variant 2 mitigations, boot with
+		spectre_v2=off. Spectre variant 1 mitigations
+		cannot be disabled.
+
+Mitigation selection guide
+--------------------------
+
+1. Trusted userspace
+^^^^^^^^^^^^^^^^^^^^
+
+   If all userspace applications are from trusted sources and do not
+   execute externally supplied untrusted code, then the mitigations can
+   be disabled.
+
+2. Protect sensitive programs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+   For security-sensitive programs that have secrets (e.g. crypto
+   keys), protection against Spectre variant 2 can be put in place by
+   disabling indirect branch speculation when the program is running
+   (see :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
+
+3. Sandbox untrusted programs
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+   Untrusted programs that could be a source of attacks can be cordoned
+   off by disabling their indirect branch speculation when they are run
+   (see :ref:`Documentation/userspace-api/spec_ctrl.rst <set_spec_ctrl>`).
+   This prevents untrusted programs from polluting the branch target
+   buffer. All programs running in SECCOMP sandboxes have indirect
+   branch speculation restricted by default. This behavior can be
+   changed via the kernel command line and sysfs control files. See
+   :ref:`spectre_mitigation_control_command_line`.
+
+4. High security mode
+^^^^^^^^^^^^^^^^^^^^^
+
+   All Spectre variant 2 mitigations can be forced on
+   at boot time for all programs (see the "on" option in
+   :ref:`spectre_mitigation_control_command_line`). This will add
+   overhead as indirect branch speculations for all programs will be
+   restricted.
+
+   On x86, the branch target buffer will be flushed with IBPB when
+   switching to a new program. STIBP is left on all the time to protect
+   programs against variant 2 attacks originating from programs running
+   on sibling threads.
+
+   Alternatively, STIBP can be used only when running programs
+   whose indirect branch speculation is explicitly disabled,
+   while IBPB is still used all the time when switching to a new
+   program to clear the branch target buffer (see the "ibpb" option in
+   :ref:`spectre_mitigation_control_command_line`). This "ibpb" option
+   has less performance cost than the "on" option, which leaves STIBP
+   on all the time.
+
+References on Spectre
+---------------------
+
+Intel white papers:
+
+.. _spec_ref1:
+
+[1] `Intel analysis of speculative execution side channels `_.
+
+.. _spec_ref2:
+
+[2] `Bounds check bypass `_.
+
+.. _spec_ref3:
+
+[3] `Deep dive: Retpoline: A branch target injection mitigation `_.
+
+.. _spec_ref4:
+
+[4] `Deep Dive: Single Thread Indirect Branch Predictors `_.
+
+AMD white papers:
+
+.. _spec_ref5:
+
+[5] `AMD64 technology indirect branch control extension `_.
+
+.. _spec_ref6:
+
+[6] `Software techniques for managing speculation on AMD processors `_.
+
+ARM white papers:
+
+.. _spec_ref7:
+
+[7] `Cache speculation side-channels `_.
+
+.. _spec_ref8:
+
+[8] `Cache speculation issues update `_.
+
+Google white paper:
+
+.. 
_spec_ref9: + +[9] `Retpoline: a software construct for preventing branch-target-injection `_. + +MIPS white paper: + +.. _spec_ref10: + +[10] `MIPS: response on speculative execution and side channel vulnerabilities `_. + +Academic papers: + +.. _spec_ref11: + +[11] `Spectre Attacks: Exploiting Speculative Execution `_. + +.. _spec_ref12: + +[12] `NetSpectre: Read Arbitrary Memory over Network `_. + +.. _spec_ref13: + +[13] `Spectre Returns! Speculation Attacks using the Return Stack Buffer `_. diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index 138f6664b2e29f..0082d1e56999ec 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -5102,12 +5102,6 @@ emulate [default] Vsyscalls turn into traps and are emulated reasonably safely. - native Vsyscalls are native syscall instructions. - This is a little bit faster than trapping - and makes a few dynamic recompilers work - better than they would in emulation mode. - It also makes exploits much easier to write. - none Vsyscalls don't work at all. This makes them quite hard to use for exploits but might break your system. diff --git a/Documentation/userspace-api/spec_ctrl.rst b/Documentation/userspace-api/spec_ctrl.rst index 1129c7550a480c..7ddd8f667459b5 100644 --- a/Documentation/userspace-api/spec_ctrl.rst +++ b/Documentation/userspace-api/spec_ctrl.rst @@ -49,6 +49,8 @@ If PR_SPEC_PRCTL is set, then the per-task control of the mitigation is available. If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation misfeature will fail. +.. _set_spec_ctrl: + PR_SET_SPECULATION_CTRL ----------------------- diff --git a/Makefile b/Makefile index 3e4868a6498b22..d6c65b678d2123 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ # SPDX-License-Identifier: GPL-2.0 VERSION = 5 PATCHLEVEL = 2 -SUBLEVEL = 0 +SUBLEVEL = 2 EXTRAVERSION = NAME = Bobtail Squid diff --git a/arch/arc/kernel/unwind.c b/arch/arc/kernel/unwind.c index 182ce67dfe10db..c2663fce7f6c8c 100644 --- a/arch/arc/kernel/unwind.c +++ b/arch/arc/kernel/unwind.c @@ -181,11 +181,6 @@ static void *__init unw_hdr_alloc_early(unsigned long sz) return memblock_alloc_from(sz, sizeof(unsigned int), MAX_DMA_ADDRESS); } -static void *unw_hdr_alloc(unsigned long sz) -{ - return kmalloc(sz, GFP_KERNEL); -} - static void init_unwind_table(struct unwind_table *table, const char *name, const void *core_start, unsigned long core_size, const void *init_start, unsigned long init_size, @@ -366,6 +361,10 @@ static void init_unwind_hdr(struct unwind_table *table, } #ifdef CONFIG_MODULES +static void *unw_hdr_alloc(unsigned long sz) +{ + return kmalloc(sz, GFP_KERNEL); +} static struct unwind_table *last_table; diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile index dab2914fa293cd..e84eb6b335b83a 100644 --- a/arch/arm/boot/dts/Makefile +++ b/arch/arm/boot/dts/Makefile @@ -1118,7 +1118,8 @@ dtb-$(CONFIG_MACH_SUN9I) += \ sun9i-a80-optimus.dtb \ sun9i-a80-cubieboard4.dtb dtb-$(CONFIG_MACH_SUNIV) += \ - suniv-f1c100s-licheepi-nano.dtb + suniv-f1c100s-licheepi-nano.dtb \ + suniv-f1c100s-licheepi-nano-with-lcd.dtb dtb-$(CONFIG_ARCH_TANGO) += \ tango4-vantage-1172.dtb dtb-$(CONFIG_ARCH_TEGRA_2x_SOC) += \ diff --git a/arch/arm/boot/dts/suniv-f1c100s-licheepi-nano-with-lcd.dts b/arch/arm/boot/dts/suniv-f1c100s-licheepi-nano-with-lcd.dts new file mode 100644 index 00000000000000..9da55cb99274b8 --- /dev/null +++ b/arch/arm/boot/dts/suniv-f1c100s-licheepi-nano-with-lcd.dts @@ 
-0,0 +1,47 @@ +/* + * Copyright (C) 2018 Icenowy Zheng + * + * SPDX-License-Identifier: (GPL-2.0+ OR X11) + */ + +#include "suniv-f1c100s-licheepi-nano.dts" + +/ { + panel: panel { + compatible = "urt,umsh-8596md-t", "simple-panel"; + #address-cells = <1>; + #size-cells = <0>; + + port@0 { + reg = <0>; + #address-cells = <1>; + #size-cells = <0>; + + panel_input: endpoint@0 { + reg = <0>; + remote-endpoint = <&tcon0_out_lcd>; + }; + }; + }; +}; + +&be0 { + status = "okay"; +}; + +&de { + status = "okay"; +}; + +&tcon0 { + pinctrl-names = "default"; + pinctrl-0 = <&lcd_rgb666_pins>; + status = "okay"; +}; + +&tcon0_out { + tcon0_out_lcd: endpoint@0 { + reg = <0>; + remote-endpoint = <&panel_input>; + }; +}; diff --git a/arch/arm/boot/dts/suniv-f1c100s-licheepi-nano.dts b/arch/arm/boot/dts/suniv-f1c100s-licheepi-nano.dts index a1154e6c7cb5d2..1f5e3a664b110b 100644 --- a/arch/arm/boot/dts/suniv-f1c100s-licheepi-nano.dts +++ b/arch/arm/boot/dts/suniv-f1c100s-licheepi-nano.dts @@ -6,17 +6,52 @@ /dts-v1/; #include "suniv-f1c100s.dtsi" +#include + / { model = "Lichee Pi Nano"; compatible = "licheepi,licheepi-nano", "allwinner,suniv-f1c100s"; aliases { serial0 = &uart0; + spi0 = &spi0; }; chosen { stdout-path = "serial0:115200n8"; }; + + reg_vcc3v3: vcc3v3 { + compatible = "regulator-fixed"; + regulator-name = "vcc3v3"; + regulator-min-microvolt = <3300000>; + regulator-max-microvolt = <3300000>; + }; +}; + +&mmc0 { + vmmc-supply = <®_vcc3v3>; + bus-width = <4>; + broken-cd; + status = "okay"; +}; + +&otg_sram { + status = "okay"; +}; + +&spi0 { + pinctrl-names = "default"; + pinctrl-0 = <&spi0_pins_a>; + status = "okay"; + + flash@0 { + #address-cells = <1>; + #size-cells = <1>; + compatible = "winbond,w25q128", "jedec,spi-nor"; + reg = <0>; + spi-max-frequency = <40000000>; + }; }; &uart0 { @@ -24,3 +59,13 @@ pinctrl-0 = <&uart0_pe_pins>; status = "okay"; }; + +&usb_otg { + dr_mode = "otg"; + status = "okay"; +}; + +&usbphy { + usb0_id_det-gpio = <&pio 4 2 GPIO_ACTIVE_HIGH>; /* PE2 */ + status = "okay"; +}; diff --git a/arch/arm/boot/dts/suniv-f1c100s.dtsi b/arch/arm/boot/dts/suniv-f1c100s.dtsi index 6100d3b75f613b..867b440080d19d 100644 --- a/arch/arm/boot/dts/suniv-f1c100s.dtsi +++ b/arch/arm/boot/dts/suniv-f1c100s.dtsi @@ -4,6 +4,9 @@ * Copyright 2018 Mesih Kilinc */ +#include +#include + / { #address-cells = <1>; #size-cells = <1>; @@ -32,6 +35,12 @@ }; }; + de: display-engine { + compatible = "allwinner,suniv-f1c100s-display-engine"; + allwinner,pipelines = <&fe0>; + status = "disabled"; + }; + soc { compatible = "simple-bus"; #address-cells = <1>; @@ -62,6 +71,110 @@ }; }; + spi0: spi@1c05000 { + compatible = "allwinner,suniv-f1c100s-spi", + "allwinner,sun8i-h3-spi"; + reg = <0x01c05000 0x1000>; + interrupts = <10>; + clocks = <&ccu CLK_BUS_SPI0>, <&ccu CLK_BUS_SPI0>; + clock-names = "ahb", "mod"; + resets = <&ccu RST_BUS_SPI0>; + status = "disabled"; + #address-cells = <1>; + #size-cells = <0>; + }; + + spi1: spi@1c06000 { + compatible = "allwinner,suniv-f1c100s-spi", + "allwinner,sun8i-h3-spi"; + reg = <0x01c06000 0x1000>; + interrupts = <11>; + clocks = <&ccu CLK_BUS_SPI1>, <&ccu CLK_BUS_SPI1>; + clock-names = "ahb", "mod"; + resets = <&ccu RST_BUS_SPI1>; + status = "disabled"; + #address-cells = <1>; + #size-cells = <0>; + }; + + tcon0: lcd-controller@1c0c000 { + compatible = "allwinner,suniv-f1c100s-tcon"; + reg = <0x01c0c000 0x1000>; + interrupts = <29>; + clocks = <&ccu CLK_BUS_LCD>, + <&ccu CLK_TCON>; + clock-names = "ahb", + "tcon-ch0"; + clock-output-names = "tcon-pixel-clock"; + 
resets = <&ccu RST_BUS_LCD>; + reset-names = "lcd"; + status = "disabled"; + + ports { + #address-cells = <1>; + #size-cells = <0>; + + tcon0_in: port@0 { + #address-cells = <1>; + #size-cells = <0>; + reg = <0>; + + tcon0_in_be0: endpoint@0 { + reg = <0>; + remote-endpoint = <&be0_out_tcon0>; + }; + }; + + tcon0_out: port@1 { + #address-cells = <1>; + #size-cells = <0>; + reg = <1>; + }; + }; + }; + + mmc0: mmc@1c0f000 { + compatible = "allwinner,suniv-f1c100s-mmc", + "allwinner,sun7i-a20-mmc"; + reg = <0x01c0f000 0x1000>; + clocks = <&ccu CLK_BUS_MMC0>, + <&ccu CLK_MMC0>, + <&ccu CLK_MMC0_OUTPUT>, + <&ccu CLK_MMC0_SAMPLE>; + clock-names = "ahb", + "mmc", + "output", + "sample"; + resets = <&ccu RST_BUS_MMC0>; + reset-names = "ahb"; + interrupts = <23>; + pinctrl-names = "default"; + pinctrl-0 = <&mmc0_pins>; + status = "disabled"; + #address-cells = <1>; + #size-cells = <0>; + }; + + mmc1: mmc@1c10000 { + compatible = "allwinner,suniv-f1c100s-mmc", + "allwinner,sun7i-a20-mmc"; + reg = <0x01c10000 0x1000>; + clocks = <&ccu CLK_BUS_MMC1>, + <&ccu CLK_MMC1>, + <&ccu CLK_MMC1_OUTPUT>, + <&ccu CLK_MMC1_SAMPLE>; + clock-names = "ahb", + "mmc", + "output", + "sample"; + resets = <&ccu RST_BUS_MMC1>; + reset-names = "ahb"; + interrupts = <24>; + status = "disabled"; + #address-cells = <1>; + #size-cells = <0>; + }; + ccu: clock@1c20000 { compatible = "allwinner,suniv-f1c100s-ccu"; reg = <0x01c20000 0x400>; @@ -89,10 +202,29 @@ #interrupt-cells = <3>; #gpio-cells = <3>; + spi0_pins_a: spi0-pins-pc { + pins = "PC0", "PC1", "PC2", "PC3"; + function = "spi0"; + }; + + lcd_rgb666_pins: lcd-rgb666-pins { + pins = "PD0", "PD1", "PD2", "PD3", "PD4", + "PD5", "PD6", "PD7", "PD8", "PD9", + "PD10", "PD11", "PD12", "PD13", "PD14", + "PD15", "PD16", "PD17", "PD18", "PD19", + "PD20", "PD21"; + function = "lcd"; + }; + uart0_pe_pins: uart0-pe-pins { pins = "PE0", "PE1"; function = "uart0"; }; + + mmc0_pins: mmc0-pins { + pins = "PF0", "PF1", "PF2", "PF3", "PF4", "PF5"; + function = "mmc0"; + }; }; timer@1c20c00 { @@ -140,5 +272,101 @@ resets = <&ccu 26>; status = "disabled"; }; + + usb_otg: usb@1c13000 { + compatible = "allwinner,suniv-f1c100s-musb"; + reg = <0x01c13000 0x0400>; + clocks = <&ccu CLK_BUS_OTG>; + resets = <&ccu RST_BUS_OTG>; + interrupts = <26>; + interrupt-names = "mc"; + phys = <&usbphy 0>; + phy-names = "usb"; + extcon = <&usbphy 0>; + allwinner,sram = <&otg_sram 1>; + status = "disabled"; + }; + + usbphy: phy@1c13400 { + compatible = "allwinner,suniv-f1c100s-usb-phy"; + reg = <0x01c13400 0x10>; + reg-names = "phy_ctrl"; + clocks = <&ccu CLK_USB_PHY0>; + clock-names = "usb0_phy"; + resets = <&ccu RST_USB_PHY0>; + reset-names = "usb0_reset"; + #phy-cells = <1>; + status = "disabled"; + }; + + fe0: display-frontend@1e00000 { + compatible = "allwinner,suniv-f1c100s-display-frontend"; + reg = <0x01e00000 0x20000>; + interrupts = <30>; + clocks = <&ccu CLK_BUS_DE_FE>, <&ccu CLK_DE_FE>, + <&ccu CLK_DRAM_DE_FE>; + clock-names = "ahb", "mod", + "ram"; + resets = <&ccu RST_BUS_DE_FE>; + status = "disabled"; + + ports { + #address-cells = <1>; + #size-cells = <0>; + + fe0_out: port@1 { + #address-cells = <1>; + #size-cells = <0>; + reg = <1>; + + fe0_out_be0: endpoint@0 { + reg = <0>; + remote-endpoint = <&be0_in_fe0>; + }; + }; + }; + }; + + be0: display-backend@1e60000 { + compatible = "allwinner,suniv-f1c100s-display-backend"; + reg = <0x01e60000 0x10000>; + reg-names = "be"; + interrupts = <31>; + clocks = <&ccu CLK_BUS_DE_BE>, <&ccu CLK_DE_BE>, + <&ccu CLK_DRAM_DE_BE>; + clock-names = "ahb", 
"mod", + "ram"; + resets = <&ccu RST_BUS_DE_BE>; + reset-names = "be"; + assigned-clocks = <&ccu CLK_DE_BE>; + assigned-clock-rates = <300000000>; + + ports { + #address-cells = <1>; + #size-cells = <0>; + + be0_in: port@0 { + #address-cells = <1>; + #size-cells = <0>; + reg = <0>; + + be0_in_fe0: endpoint@0 { + reg = <0>; + remote-endpoint = <&fe0_out_be0>; + }; + }; + + be0_out: port@1 { + #address-cells = <1>; + #size-cells = <0>; + reg = <1>; + + be0_out_tcon0: endpoint@0 { + reg = <0>; + remote-endpoint = <&tcon0_in_be0>; + }; + }; + }; + }; }; }; diff --git a/arch/s390/include/asm/facility.h b/arch/s390/include/asm/facility.h index e78cda94456bcd..68c476b20b57e2 100644 --- a/arch/s390/include/asm/facility.h +++ b/arch/s390/include/asm/facility.h @@ -59,6 +59,18 @@ static inline int test_facility(unsigned long nr) return __test_facility(nr, &S390_lowcore.stfle_fac_list); } +static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size) +{ + register unsigned long reg0 asm("0") = size - 1; + + asm volatile( + ".insn s,0xb2b00000,0(%1)" /* stfle */ + : "+d" (reg0) + : "a" (stfle_fac_list) + : "memory", "cc"); + return reg0; +} + /** * stfle - Store facility list extended * @stfle_fac_list: array where facility list can be stored @@ -75,13 +87,8 @@ static inline void __stfle(u64 *stfle_fac_list, int size) memcpy(stfle_fac_list, &S390_lowcore.stfl_fac_list, 4); if (S390_lowcore.stfl_fac_list & 0x01000000) { /* More facility bits available with stfle */ - register unsigned long reg0 asm("0") = size - 1; - - asm volatile(".insn s,0xb2b00000,0(%1)" /* stfle */ - : "+d" (reg0) - : "a" (stfle_fac_list) - : "memory", "cc"); - nr = (reg0 + 1) * 8; /* # bytes stored by stfle */ + nr = __stfle_asm(stfle_fac_list, size); + nr = min_t(unsigned long, (nr + 1) * 8, size * 8); } memset((char *) stfle_fac_list + nr, 0, size * 8 - nr); } diff --git a/arch/s390/include/asm/sclp.h b/arch/s390/include/asm/sclp.h index f577c5f6031adb..c563f8368b19b1 100644 --- a/arch/s390/include/asm/sclp.h +++ b/arch/s390/include/asm/sclp.h @@ -80,7 +80,6 @@ struct sclp_info { unsigned char has_gisaf : 1; unsigned char has_diag318 : 1; unsigned char has_sipl : 1; - unsigned char has_sipl_g2 : 1; unsigned char has_dirq : 1; unsigned int ibc; unsigned int mtid; diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c index d836af3ccc3820..2c0a515428d618 100644 --- a/arch/s390/kernel/ipl.c +++ b/arch/s390/kernel/ipl.c @@ -286,12 +286,7 @@ static struct kobj_attribute sys_ipl_secure_attr = static ssize_t ipl_has_secure_show(struct kobject *kobj, struct kobj_attribute *attr, char *page) { - if (MACHINE_IS_LPAR) - return sprintf(page, "%i\n", !!sclp.has_sipl); - else if (MACHINE_IS_VM) - return sprintf(page, "%i\n", !!sclp.has_sipl_g2); - else - return sprintf(page, "%i\n", 0); + return sprintf(page, "%i\n", !!sclp.has_sipl); } static struct kobj_attribute sys_ipl_has_secure_attr = diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index 7b23431be5cb6a..f49e116692710b 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -1104,6 +1104,30 @@ ENTRY(irq_entries_start) .endr END(irq_entries_start) +#ifdef CONFIG_X86_LOCAL_APIC + .align 8 +ENTRY(spurious_entries_start) + vector=FIRST_SYSTEM_VECTOR + .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR) + pushl $(~vector+0x80) /* Note: always in signed byte range */ + vector=vector+1 + jmp common_spurious + .align 8 + .endr +END(spurious_entries_start) + +common_spurious: + ASM_CLAC + addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */ + 
SAVE_ALL switch_stacks=1 + ENCODE_FRAME_POINTER + TRACE_IRQS_OFF + movl %esp, %eax + call smp_spurious_interrupt + jmp ret_from_intr +ENDPROC(common_spurious) +#endif + /* * the CPU automatically disables interrupts when executing an IRQ vector, * so IRQ-flags tracing has to follow that: diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 11aa3b2afa4d8e..8dbca86c249b8e 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -375,6 +375,18 @@ ENTRY(irq_entries_start) .endr END(irq_entries_start) + .align 8 +ENTRY(spurious_entries_start) + vector=FIRST_SYSTEM_VECTOR + .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR) + UNWIND_HINT_IRET_REGS + pushq $(~vector+0x80) /* Note: always in signed byte range */ + jmp common_spurious + .align 8 + vector=vector+1 + .endr +END(spurious_entries_start) + .macro DEBUG_ENTRY_ASSERT_IRQS_OFF #ifdef CONFIG_DEBUG_ENTRY pushq %rax @@ -571,10 +583,20 @@ _ASM_NOKPROBE(interrupt_entry) /* Interrupt entry/exit. */ - /* - * The interrupt stubs push (~vector+0x80) onto the stack and - * then jump to common_interrupt. - */ +/* + * The interrupt stubs push (~vector+0x80) onto the stack and + * then jump to common_spurious/interrupt. + */ +common_spurious: + addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */ + call interrupt_entry + UNWIND_HINT_REGS indirect=1 + call smp_spurious_interrupt /* rdi points to pt_regs */ + jmp ret_from_intr +END(common_spurious) +_ASM_NOKPROBE(common_spurious) + +/* common_interrupt is a hotpath. Align it */ .p2align CONFIG_X86_L1_CACHE_SHIFT common_interrupt: addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */ diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h index 32e666e1231e77..cbd97e22d2f319 100644 --- a/arch/x86/include/asm/hw_irq.h +++ b/arch/x86/include/asm/hw_irq.h @@ -150,8 +150,11 @@ extern char irq_entries_start[]; #define trace_irq_entries_start irq_entries_start #endif +extern char spurious_entries_start[]; + #define VECTOR_UNUSED NULL -#define VECTOR_RETRIGGERED ((void *)~0UL) +#define VECTOR_SHUTDOWN ((void *)~0UL) +#define VECTOR_RETRIGGERED ((void *)~1UL) typedef struct irq_desc* vector_irq_t[NR_VECTORS]; DECLARE_PER_CPU(vector_irq_t, vector_irq); diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c index 85be316665b4a3..16c21ed97cb23c 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -2041,21 +2041,32 @@ __visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs) entering_irq(); trace_spurious_apic_entry(vector); + inc_irq_stat(irq_spurious_count); + + /* + * If this is a spurious interrupt then do not acknowledge + */ + if (vector == SPURIOUS_APIC_VECTOR) { + /* See SDM vol 3 */ + pr_info("Spurious APIC interrupt (vector 0xFF) on CPU#%d, should never happen.\n", + smp_processor_id()); + goto out; + } + /* - * Check if this really is a spurious interrupt and ACK it - * if it is a vectored one. Just in case... - * Spurious interrupts should not be ACKed. + * If it is a vectored one, verify it's set in the ISR. If set, + * acknowledge it. */ v = apic_read(APIC_ISR + ((vector & ~0x1f) >> 1)); - if (v & (1 << (vector & 0x1f))) + if (v & (1 << (vector & 0x1f))) { + pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. 
Acked\n",
+			vector, smp_processor_id());
 		ack_APIC_irq();
-
-	inc_irq_stat(irq_spurious_count);
-
-	/* see sw-dev-man vol 3, chapter 7.4.13.5 */
-	pr_info("spurious APIC interrupt through vector %02x on CPU#%d, "
-		"should never happen.\n", vector, smp_processor_id());
-
+	} else {
+		pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. Not pending!\n",
+			vector, smp_processor_id());
+	}
+out:
 	trace_spurious_apic_exit(vector);
 	exiting_irq();
 }
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 53aa234a6803f2..c9fec0657eea2e 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1893,6 +1893,50 @@ static int ioapic_set_affinity(struct irq_data *irq_data,
 	return ret;
 }
 
+/*
+ * Interrupt shutdown masks the ioapic pin, but the interrupt might already
+ * be in flight, but not yet serviced by the target CPU. That means
+ * __synchronize_hardirq() would return and claim that everything is calmed
+ * down. So free_irq() would proceed and deactivate the interrupt and free
+ * resources.
+ *
+ * Once the target CPU comes around to service it, it will find a cleared
+ * vector and complain. While the spurious interrupt is harmless, the full
+ * release of resources might prevent the interrupt from being acknowledged
+ * which keeps the hardware in a weird state.
+ *
+ * Verify that the corresponding Remote-IRR bits are clear.
+ */
+static int ioapic_irq_get_chip_state(struct irq_data *irqd,
+				     enum irqchip_irq_state which,
+				     bool *state)
+{
+	struct mp_chip_data *mcd = irqd->chip_data;
+	struct IO_APIC_route_entry rentry;
+	struct irq_pin_list *p;
+
+	if (which != IRQCHIP_STATE_ACTIVE)
+		return -EINVAL;
+
+	*state = false;
+	raw_spin_lock(&ioapic_lock);
+	for_each_irq_pin(p, mcd->irq_2_pin) {
+		rentry = __ioapic_read_entry(p->apic, p->pin);
+		/*
+		 * The remote IRR is only valid in level trigger mode. Its
+		 * meaning is undefined for edge triggered interrupts and
+		 * irrelevant because the IO-APIC treats them as fire and
+		 * forget.
+		 */
+		if (rentry.irr && rentry.trigger) {
+			*state = true;
+			break;
+		}
+	}
+	raw_spin_unlock(&ioapic_lock);
+	return 0;
+}
+
 static struct irq_chip ioapic_chip __read_mostly = {
 	.name			= "IO-APIC",
 	.irq_startup		= startup_ioapic_irq,
@@ -1902,6 +1946,7 @@ static struct irq_chip ioapic_chip __read_mostly = {
 	.irq_eoi		= ioapic_ack_level,
 	.irq_set_affinity	= ioapic_set_affinity,
 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+	.irq_get_irqchip_state	= ioapic_irq_get_chip_state,
 	.flags			= IRQCHIP_SKIP_SET_WAKE,
 };
@@ -1914,6 +1959,7 @@ static struct irq_chip ioapic_ir_chip __read_mostly = {
 	.irq_eoi		= ioapic_ir_ack_level,
 	.irq_set_affinity	= ioapic_set_affinity,
 	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+	.irq_get_irqchip_state	= ioapic_irq_get_chip_state,
 	.flags			= IRQCHIP_SKIP_SET_WAKE,
 };
diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index e7cb78aed644b2..fdacb864c3dd4f 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -340,7 +340,7 @@ static void clear_irq_vector(struct irq_data *irqd)
 	trace_vector_clear(irqd->irq, vector, apicd->cpu, apicd->prev_vector,
 			   apicd->prev_cpu);
 
-	per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_UNUSED;
+	per_cpu(vector_irq, apicd->cpu)[vector] = VECTOR_SHUTDOWN;
 	irq_matrix_free(vector_matrix, apicd->cpu, vector, managed);
 	apicd->vector = 0;
 
@@ -349,7 +349,7 @@ static void clear_irq_vector(struct irq_data *irqd)
 	if (!vector)
 		return;
 
-	per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_UNUSED;
+	per_cpu(vector_irq, apicd->prev_cpu)[vector] = VECTOR_SHUTDOWN;
 	irq_matrix_free(vector_matrix, apicd->prev_cpu, vector, managed);
 	apicd->prev_vector = 0;
 	apicd->move_in_progress = 0;
diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
index d2482bbbe3d089..87ef69a72c52ef 100644
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -319,7 +319,8 @@ void __init idt_setup_apic_and_irq_gates(void)
 #ifdef CONFIG_X86_LOCAL_APIC
 	for_each_clear_bit_from(i, system_vectors, NR_VECTORS) {
 		set_bit(i, system_vectors);
-		set_intr_gate(i, spurious_interrupt);
+		entry = spurious_entries_start + 8 * (i - FIRST_SYSTEM_VECTOR);
+		set_intr_gate(i, entry);
 	}
 #endif
 }
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index 9b68b5b00ac91c..cc496eb7a8d218 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -247,7 +247,7 @@ __visible unsigned int __irq_entry do_IRQ(struct pt_regs *regs)
 	if (!handle_irq(desc, regs)) {
 		ack_APIC_irq();
 
-		if (desc != VECTOR_RETRIGGERED) {
+		if (desc != VECTOR_RETRIGGERED && desc != VECTOR_SHUTDOWN) {
 			pr_emerg_ratelimited("%s: %d.%d No irq handler for vector\n",
 					     __func__, smp_processor_id(),
 					     vector);
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index a166c960bc9e39..e9d0bc3a5e8834 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include <linux/nospec.h>
 #include
 #include
@@ -643,9 +644,12 @@ static unsigned long ptrace_get_debugreg(struct task_struct *tsk, int n)
 {
 	struct thread_struct *thread = &tsk->thread;
 	unsigned long val = 0;
+	int index = n;
 
 	if (n < HBP_NUM) {
-		struct perf_event *bp = thread->ptrace_bps[n];
+		struct perf_event *bp;
+
+		index = array_index_nospec(index, HBP_NUM);
+		bp = thread->ptrace_bps[index];
 
 		if (bp)
 			val = bp->hw.info.address;
diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
index a5b802a1221272..71d3fef1edc92e 100644
--- a/arch/x86/kernel/tls.c
+++ b/arch/x86/kernel/tls.c
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include <linux/nospec.h>
 #include
 #include
@@ -220,6 +221,7 @@ int
do_get_thread_area(struct task_struct *p, int idx, struct user_desc __user *u_info) { struct user_desc info; + int index; if (idx == -1 && get_user(idx, &u_info->entry_number)) return -EFAULT; @@ -227,8 +229,11 @@ int do_get_thread_area(struct task_struct *p, int idx, if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX) return -EINVAL; - fill_user_desc(&info, idx, - &p->thread.tls_array[idx - GDT_ENTRY_TLS_MIN]); + index = idx - GDT_ENTRY_TLS_MIN; + index = array_index_nospec(index, + GDT_ENTRY_TLS_MAX - GDT_ENTRY_TLS_MIN + 1); + + fill_user_desc(&info, idx, &p->thread.tls_array[index]); if (copy_to_user(u_info, &info, sizeof(info))) return -EFAULT; diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S index 0850b51493458f..4d1517022a147b 100644 --- a/arch/x86/kernel/vmlinux.lds.S +++ b/arch/x86/kernel/vmlinux.lds.S @@ -141,10 +141,10 @@ SECTIONS *(.text.__x86.indirect_thunk) __indirect_thunk_end = .; #endif - } :text = 0x9090 - /* End of text section */ - _etext = .; + /* End of text section */ + _etext = .; + } :text = 0x9090 NOTES :text :note diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c index f9269ae6da9c77..e5db3856b194ca 100644 --- a/block/bfq-iosched.c +++ b/block/bfq-iosched.c @@ -4584,6 +4584,7 @@ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync) unsigned long flags; spin_lock_irqsave(&bfqd->lock, flags); + bfqq->bic = NULL; bfq_exit_bfqq(bfqd, bfqq); bic_set_bfqq(bic, NULL, is_sync); spin_unlock_irqrestore(&bfqd->lock, flags); diff --git a/block/bio.c b/block/bio.c index ce797d73bb436a..67bba12d273bcb 100644 --- a/block/bio.c +++ b/block/bio.c @@ -731,7 +731,7 @@ static int __bio_add_pc_page(struct request_queue *q, struct bio *bio, } } - if (bio_full(bio)) + if (bio_full(bio, len)) return 0; if (bio->bi_phys_segments >= queue_max_segments(q)) @@ -807,7 +807,7 @@ void __bio_add_page(struct bio *bio, struct page *page, struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt]; WARN_ON_ONCE(bio_flagged(bio, BIO_CLONED)); - WARN_ON_ONCE(bio_full(bio)); + WARN_ON_ONCE(bio_full(bio, len)); bv->bv_page = page; bv->bv_offset = off; @@ -834,7 +834,7 @@ int bio_add_page(struct bio *bio, struct page *page, bool same_page = false; if (!__bio_try_merge_page(bio, page, len, offset, &same_page)) { - if (bio_full(bio)) + if (bio_full(bio, len)) return 0; __bio_add_page(bio, page, len, offset); } @@ -922,7 +922,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) if (same_page) put_page(page); } else { - if (WARN_ON_ONCE(bio_full(bio))) + if (WARN_ON_ONCE(bio_full(bio, len))) return -EINVAL; __bio_add_page(bio, page, len, offset); } @@ -966,7 +966,7 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) ret = __bio_iov_bvec_add_pages(bio, iter); else ret = __bio_iov_iter_get_pages(bio, iter); - } while (!ret && iov_iter_count(iter) && !bio_full(bio)); + } while (!ret && iov_iter_count(iter) && !bio_full(bio, 0)); if (iov_iter_bvec_no_ref(iter)) bio_set_flag(bio, BIO_NO_PAGE_REF); diff --git a/crypto/lrw.c b/crypto/lrw.c index 58009cf63a6e1f..be829f6afc8e5b 100644 --- a/crypto/lrw.c +++ b/crypto/lrw.c @@ -384,7 +384,7 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb) inst->alg.base.cra_priority = alg->base.cra_priority; inst->alg.base.cra_blocksize = LRW_BLOCK_SIZE; inst->alg.base.cra_alignmask = alg->base.cra_alignmask | - (__alignof__(__be32) - 1); + (__alignof__(be128) - 1); inst->alg.ivsize = LRW_BLOCK_SIZE; inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg) + diff --git 
a/drivers/android/binder.c b/drivers/android/binder.c index bc26b5511f0a9f..38a59a630cd4cb 100644 --- a/drivers/android/binder.c +++ b/drivers/android/binder.c @@ -2059,10 +2059,9 @@ static size_t binder_get_object(struct binder_proc *proc, read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset); if (offset > buffer->data_size || read_size < sizeof(*hdr) || - !IS_ALIGNED(offset, sizeof(u32))) + binder_alloc_copy_from_buffer(&proc->alloc, object, buffer, + offset, read_size)) return 0; - binder_alloc_copy_from_buffer(&proc->alloc, object, buffer, - offset, read_size); /* Ok, now see if we read a complete object. */ hdr = &object->hdr; @@ -2131,8 +2130,10 @@ static struct binder_buffer_object *binder_validate_ptr( return NULL; buffer_offset = start_offset + sizeof(binder_size_t) * index; - binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, - b, buffer_offset, sizeof(object_offset)); + if (binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, + b, buffer_offset, + sizeof(object_offset))) + return NULL; object_size = binder_get_object(proc, b, object_offset, object); if (!object_size || object->hdr.type != BINDER_TYPE_PTR) return NULL; @@ -2212,10 +2213,12 @@ static bool binder_validate_fixup(struct binder_proc *proc, return false; last_min_offset = last_bbo->parent_offset + sizeof(uintptr_t); buffer_offset = objects_start_offset + - sizeof(binder_size_t) * last_bbo->parent, - binder_alloc_copy_from_buffer(&proc->alloc, &last_obj_offset, - b, buffer_offset, - sizeof(last_obj_offset)); + sizeof(binder_size_t) * last_bbo->parent; + if (binder_alloc_copy_from_buffer(&proc->alloc, + &last_obj_offset, + b, buffer_offset, + sizeof(last_obj_offset))) + return false; } return (fixup_offset >= last_min_offset); } @@ -2301,15 +2304,15 @@ static void binder_transaction_buffer_release(struct binder_proc *proc, for (buffer_offset = off_start_offset; buffer_offset < off_end_offset; buffer_offset += sizeof(binder_size_t)) { struct binder_object_header *hdr; - size_t object_size; + size_t object_size = 0; struct binder_object object; binder_size_t object_offset; - binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, - buffer, buffer_offset, - sizeof(object_offset)); - object_size = binder_get_object(proc, buffer, - object_offset, &object); + if (!binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, + buffer, buffer_offset, + sizeof(object_offset))) + object_size = binder_get_object(proc, buffer, + object_offset, &object); if (object_size == 0) { pr_err("transaction release %d bad object at offset %lld, size %zd\n", debug_id, (u64)object_offset, buffer->data_size); @@ -2432,15 +2435,16 @@ static void binder_transaction_buffer_release(struct binder_proc *proc, for (fd_index = 0; fd_index < fda->num_fds; fd_index++) { u32 fd; + int err; binder_size_t offset = fda_offset + fd_index * sizeof(fd); - binder_alloc_copy_from_buffer(&proc->alloc, - &fd, - buffer, - offset, - sizeof(fd)); - binder_deferred_fd_close(fd); + err = binder_alloc_copy_from_buffer( + &proc->alloc, &fd, buffer, + offset, sizeof(fd)); + WARN_ON(err); + if (!err) + binder_deferred_fd_close(fd); } } break; default: @@ -2683,11 +2687,12 @@ static int binder_translate_fd_array(struct binder_fd_array_object *fda, int ret; binder_size_t offset = fda_offset + fdi * sizeof(fd); - binder_alloc_copy_from_buffer(&target_proc->alloc, - &fd, t->buffer, - offset, sizeof(fd)); - ret = binder_translate_fd(fd, offset, t, thread, - in_reply_to); + ret = binder_alloc_copy_from_buffer(&target_proc->alloc, + &fd, 
t->buffer, + offset, sizeof(fd)); + if (!ret) + ret = binder_translate_fd(fd, offset, t, thread, + in_reply_to); if (ret < 0) return ret; } @@ -2740,8 +2745,12 @@ static int binder_fixup_parent(struct binder_transaction *t, } buffer_offset = bp->parent_offset + (uintptr_t)parent->buffer - (uintptr_t)b->user_data; - binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset, - &bp->buffer, sizeof(bp->buffer)); + if (binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset, + &bp->buffer, sizeof(bp->buffer))) { + binder_user_error("%d:%d got transaction with invalid parent offset\n", + proc->pid, thread->pid); + return -EINVAL; + } return 0; } @@ -3160,15 +3169,20 @@ static void binder_transaction(struct binder_proc *proc, goto err_binder_alloc_buf_failed; } if (secctx) { + int err; size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) + ALIGN(tr->offsets_size, sizeof(void *)) + ALIGN(extra_buffers_size, sizeof(void *)) - ALIGN(secctx_sz, sizeof(u64)); t->security_ctx = (uintptr_t)t->buffer->user_data + buf_offset; - binder_alloc_copy_to_buffer(&target_proc->alloc, - t->buffer, buf_offset, - secctx, secctx_sz); + err = binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, buf_offset, + secctx, secctx_sz); + if (err) { + t->security_ctx = 0; + WARN_ON(1); + } security_release_secctx(secctx, secctx_sz); secctx = NULL; } @@ -3234,11 +3248,16 @@ static void binder_transaction(struct binder_proc *proc, struct binder_object object; binder_size_t object_offset; - binder_alloc_copy_from_buffer(&target_proc->alloc, - &object_offset, - t->buffer, - buffer_offset, - sizeof(object_offset)); + if (binder_alloc_copy_from_buffer(&target_proc->alloc, + &object_offset, + t->buffer, + buffer_offset, + sizeof(object_offset))) { + return_error = BR_FAILED_REPLY; + return_error_param = -EINVAL; + return_error_line = __LINE__; + goto err_bad_offset; + } object_size = binder_get_object(target_proc, t->buffer, object_offset, &object); if (object_size == 0 || object_offset < off_min) { @@ -3262,15 +3281,17 @@ static void binder_transaction(struct binder_proc *proc, fp = to_flat_binder_object(hdr); ret = binder_translate_binder(fp, t, thread); - if (ret < 0) { + + if (ret < 0 || + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, + object_offset, + fp, sizeof(*fp))) { return_error = BR_FAILED_REPLY; return_error_param = ret; return_error_line = __LINE__; goto err_translate_failed; } - binder_alloc_copy_to_buffer(&target_proc->alloc, - t->buffer, object_offset, - fp, sizeof(*fp)); } break; case BINDER_TYPE_HANDLE: case BINDER_TYPE_WEAK_HANDLE: { @@ -3278,15 +3299,16 @@ static void binder_transaction(struct binder_proc *proc, fp = to_flat_binder_object(hdr); ret = binder_translate_handle(fp, t, thread); - if (ret < 0) { + if (ret < 0 || + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, + object_offset, + fp, sizeof(*fp))) { return_error = BR_FAILED_REPLY; return_error_param = ret; return_error_line = __LINE__; goto err_translate_failed; } - binder_alloc_copy_to_buffer(&target_proc->alloc, - t->buffer, object_offset, - fp, sizeof(*fp)); } break; case BINDER_TYPE_FD: { @@ -3296,16 +3318,17 @@ static void binder_transaction(struct binder_proc *proc, int ret = binder_translate_fd(fp->fd, fd_offset, t, thread, in_reply_to); - if (ret < 0) { + fp->pad_binder = 0; + if (ret < 0 || + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, + object_offset, + fp, sizeof(*fp))) { return_error = BR_FAILED_REPLY; return_error_param = ret; return_error_line = __LINE__; goto 
err_translate_failed; } - fp->pad_binder = 0; - binder_alloc_copy_to_buffer(&target_proc->alloc, - t->buffer, object_offset, - fp, sizeof(*fp)); } break; case BINDER_TYPE_FDA: { struct binder_object ptr_object; @@ -3393,15 +3416,16 @@ static void binder_transaction(struct binder_proc *proc, num_valid, last_fixup_obj_off, last_fixup_min_off); - if (ret < 0) { + if (ret < 0 || + binder_alloc_copy_to_buffer(&target_proc->alloc, + t->buffer, + object_offset, + bp, sizeof(*bp))) { return_error = BR_FAILED_REPLY; return_error_param = ret; return_error_line = __LINE__; goto err_translate_failed; } - binder_alloc_copy_to_buffer(&target_proc->alloc, - t->buffer, object_offset, - bp, sizeof(*bp)); last_fixup_obj_off = object_offset; last_fixup_min_off = 0; } break; @@ -4140,20 +4164,27 @@ static int binder_apply_fd_fixups(struct binder_proc *proc, trace_binder_transaction_fd_recv(t, fd, fixup->offset); fd_install(fd, fixup->file); fixup->file = NULL; - binder_alloc_copy_to_buffer(&proc->alloc, t->buffer, - fixup->offset, &fd, - sizeof(u32)); + if (binder_alloc_copy_to_buffer(&proc->alloc, t->buffer, + fixup->offset, &fd, + sizeof(u32))) { + ret = -EINVAL; + break; + } } list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) { if (fixup->file) { fput(fixup->file); } else if (ret) { u32 fd; - - binder_alloc_copy_from_buffer(&proc->alloc, &fd, - t->buffer, fixup->offset, - sizeof(fd)); - binder_deferred_fd_close(fd); + int err; + + err = binder_alloc_copy_from_buffer(&proc->alloc, &fd, + t->buffer, + fixup->offset, + sizeof(fd)); + WARN_ON(err); + if (!err) + binder_deferred_fd_close(fd); } list_del(&fixup->fixup_entry); kfree(fixup); @@ -4268,6 +4299,8 @@ static int binder_thread_read(struct binder_proc *proc, case BINDER_WORK_TRANSACTION_COMPLETE: { binder_inner_proc_unlock(proc); cmd = BR_TRANSACTION_COMPLETE; + kfree(w); + binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE); if (put_user(cmd, (uint32_t __user *)ptr)) return -EFAULT; ptr += sizeof(uint32_t); @@ -4276,8 +4309,6 @@ static int binder_thread_read(struct binder_proc *proc, binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE, "%d:%d BR_TRANSACTION_COMPLETE\n", proc->pid, thread->pid); - kfree(w); - binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE); } break; case BINDER_WORK_NODE: { struct binder_node *node = container_of(w, struct binder_node, work); diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c index ce5603c2291c61..6d79a1b0d44630 100644 --- a/drivers/android/binder_alloc.c +++ b/drivers/android/binder_alloc.c @@ -1119,15 +1119,16 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc, return 0; } -static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc, - bool to_buffer, - struct binder_buffer *buffer, - binder_size_t buffer_offset, - void *ptr, - size_t bytes) +static int binder_alloc_do_buffer_copy(struct binder_alloc *alloc, + bool to_buffer, + struct binder_buffer *buffer, + binder_size_t buffer_offset, + void *ptr, + size_t bytes) { /* All copies must be 32-bit aligned and 32-bit size */ - BUG_ON(!check_buffer(alloc, buffer, buffer_offset, bytes)); + if (!check_buffer(alloc, buffer, buffer_offset, bytes)) + return -EINVAL; while (bytes) { unsigned long size; @@ -1155,25 +1156,26 @@ static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc, ptr = ptr + size; buffer_offset += size; } + return 0; } -void binder_alloc_copy_to_buffer(struct binder_alloc *alloc, - struct binder_buffer *buffer, - binder_size_t buffer_offset, - void *src, - size_t bytes) +int 
binder_alloc_copy_to_buffer(struct binder_alloc *alloc, + struct binder_buffer *buffer, + binder_size_t buffer_offset, + void *src, + size_t bytes) { - binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset, - src, bytes); + return binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset, + src, bytes); } -void binder_alloc_copy_from_buffer(struct binder_alloc *alloc, - void *dest, - struct binder_buffer *buffer, - binder_size_t buffer_offset, - size_t bytes) +int binder_alloc_copy_from_buffer(struct binder_alloc *alloc, + void *dest, + struct binder_buffer *buffer, + binder_size_t buffer_offset, + size_t bytes) { - binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset, - dest, bytes); + return binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset, + dest, bytes); } diff --git a/drivers/android/binder_alloc.h b/drivers/android/binder_alloc.h index 71bfa95f8e09bb..db9c1b984695db 100644 --- a/drivers/android/binder_alloc.h +++ b/drivers/android/binder_alloc.h @@ -159,17 +159,17 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc, const void __user *from, size_t bytes); -void binder_alloc_copy_to_buffer(struct binder_alloc *alloc, - struct binder_buffer *buffer, - binder_size_t buffer_offset, - void *src, - size_t bytes); - -void binder_alloc_copy_from_buffer(struct binder_alloc *alloc, - void *dest, - struct binder_buffer *buffer, - binder_size_t buffer_offset, - size_t bytes); +int binder_alloc_copy_to_buffer(struct binder_alloc *alloc, + struct binder_buffer *buffer, + binder_size_t buffer_offset, + void *src, + size_t bytes); + +int binder_alloc_copy_from_buffer(struct binder_alloc *alloc, + void *dest, + struct binder_buffer *buffer, + binder_size_t buffer_offset, + size_t bytes); #endif /* _LINUX_BINDER_ALLOC_H */ diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c index a7359535caf5df..b444f89a204164 100644 --- a/drivers/base/cacheinfo.c +++ b/drivers/base/cacheinfo.c @@ -655,7 +655,8 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu) static int __init cacheinfo_sysfs_init(void) { - return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "base/cacheinfo:online", + return cpuhp_setup_state(CPUHP_AP_BASE_CACHEINFO_ONLINE, + "base/cacheinfo:online", cacheinfo_cpu_online, cacheinfo_cpu_pre_down); } device_initcall(cacheinfo_sysfs_init); diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c index f962488546b638..103b5d37fa8673 100644 --- a/drivers/base/firmware_loader/fallback.c +++ b/drivers/base/firmware_loader/fallback.c @@ -659,7 +659,7 @@ static bool fw_run_sysfs_fallback(enum fw_opt opt_flags) /* Also permit LSMs and IMA to fail firmware sysfs fallback */ ret = security_kernel_load_data(LOADING_FIRMWARE); if (ret < 0) - return ret; + return false; return fw_force_sysfs_fallback(opt_flags); } diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c index 90325e1749fb62..d47ad10a35fe33 100644 --- a/drivers/char/tpm/tpm-chip.c +++ b/drivers/char/tpm/tpm-chip.c @@ -289,15 +289,15 @@ static int tpm_class_shutdown(struct device *dev) { struct tpm_chip *chip = container_of(dev, struct tpm_chip, dev); + down_write(&chip->ops_sem); if (chip->flags & TPM_CHIP_FLAG_TPM2) { - down_write(&chip->ops_sem); if (!tpm_chip_start(chip)) { tpm2_shutdown(chip, TPM2_SU_CLEAR); tpm_chip_stop(chip); } - chip->ops = NULL; - up_write(&chip->ops_sem); } + chip->ops = NULL; + up_write(&chip->ops_sem); return 0; } diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c index 
85dcf2654d1102..faacbe1ffa1a90 100644 --- a/drivers/char/tpm/tpm1-cmd.c +++ b/drivers/char/tpm/tpm1-cmd.c @@ -510,7 +510,7 @@ struct tpm1_get_random_out { * * Return: * * number of bytes read - * * -errno or a TPM return code otherwise + * * -errno (positive TPM return codes are masked to -EIO) */ int tpm1_get_random(struct tpm_chip *chip, u8 *dest, size_t max) { @@ -531,8 +531,11 @@ int tpm1_get_random(struct tpm_chip *chip, u8 *dest, size_t max) rc = tpm_transmit_cmd(chip, &buf, sizeof(out->rng_data_len), "attempting get random"); - if (rc) + if (rc) { + if (rc > 0) + rc = -EIO; goto out; + } out = (struct tpm1_get_random_out *)&buf.data[TPM_HEADER_SIZE]; diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c index 4de49924cfc46b..d103545e40550b 100644 --- a/drivers/char/tpm/tpm2-cmd.c +++ b/drivers/char/tpm/tpm2-cmd.c @@ -297,7 +297,7 @@ struct tpm2_get_random_out { * * Return: * size of the buffer on success, - * -errno otherwise + * -errno otherwise (positive TPM return codes are masked to -EIO) */ int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max) { @@ -324,8 +324,11 @@ int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max) offsetof(struct tpm2_get_random_out, buffer), "attempting get random"); - if (err) + if (err) { + if (err > 0) + err = -EIO; goto out; + } out = (struct tpm2_get_random_out *) &buf.data[TPM_HEADER_SIZE]; diff --git a/drivers/crypto/nx/nx-842-powernv.c b/drivers/crypto/nx/nx-842-powernv.c index 4acbc47973e9d2..e78ff5c65ed6ca 100644 --- a/drivers/crypto/nx/nx-842-powernv.c +++ b/drivers/crypto/nx/nx-842-powernv.c @@ -27,8 +27,6 @@ MODULE_ALIAS_CRYPTO("842-nx"); #define WORKMEM_ALIGN (CRB_ALIGN) #define CSB_WAIT_MAX (5000) /* ms */ #define VAS_RETRIES (10) -/* # of requests allowed per RxFIFO at a time. 0 for unlimited */ -#define MAX_CREDITS_PER_RXFIFO (1024) struct nx842_workmem { /* Below fields must be properly aligned */ @@ -812,7 +810,11 @@ static int __init vas_cfg_coproc_info(struct device_node *dn, int chip_id, rxattr.lnotify_lpid = lpid; rxattr.lnotify_pid = pid; rxattr.lnotify_tid = tid; - rxattr.wcreds_max = MAX_CREDITS_PER_RXFIFO; + /* + * Maximum RX window credits can not be more than #CRBs in + * RxFIFO. Otherwise, can get checkstop if RxFIFO overruns. 
+ */ + rxattr.wcreds_max = fifo_size / CRB_SIZE; /* * Open a VAS receice window which is used to configure RxFIFO diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c index fbc7bf9d73807a..8c57c5af0930e0 100644 --- a/drivers/crypto/talitos.c +++ b/drivers/crypto/talitos.c @@ -321,6 +321,21 @@ int talitos_submit(struct device *dev, int ch, struct talitos_desc *desc, } EXPORT_SYMBOL(talitos_submit); +static __be32 get_request_hdr(struct talitos_request *request, bool is_sec1) +{ + struct talitos_edesc *edesc; + + if (!is_sec1) + return request->desc->hdr; + + if (!request->desc->next_desc) + return request->desc->hdr1; + + edesc = container_of(request->desc, struct talitos_edesc, desc); + + return ((struct talitos_desc *)(edesc->buf + edesc->dma_len))->hdr1; +} + /* * process what was done, notify callback of error if not */ @@ -342,12 +357,7 @@ static void flush_channel(struct device *dev, int ch, int error, int reset_ch) /* descriptors with their done bits set don't get the error */ rmb(); - if (!is_sec1) - hdr = request->desc->hdr; - else if (request->desc->next_desc) - hdr = (request->desc + 1)->hdr1; - else - hdr = request->desc->hdr1; + hdr = get_request_hdr(request, is_sec1); if ((hdr & DESC_HDR_DONE) == DESC_HDR_DONE) status = 0; @@ -477,8 +487,14 @@ static u32 current_desc_hdr(struct device *dev, int ch) } } - if (priv->chan[ch].fifo[iter].desc->next_desc == cur_desc) - return (priv->chan[ch].fifo[iter].desc + 1)->hdr; + if (priv->chan[ch].fifo[iter].desc->next_desc == cur_desc) { + struct talitos_edesc *edesc; + + edesc = container_of(priv->chan[ch].fifo[iter].desc, + struct talitos_edesc, desc); + return ((struct talitos_desc *) + (edesc->buf + edesc->dma_len))->hdr; + } return priv->chan[ch].fifo[iter].desc->hdr; } @@ -948,36 +964,6 @@ static int aead_des3_setkey(struct crypto_aead *authenc, goto out; } -/* - * talitos_edesc - s/w-extended descriptor - * @src_nents: number of segments in input scatterlist - * @dst_nents: number of segments in output scatterlist - * @icv_ool: whether ICV is out-of-line - * @iv_dma: dma address of iv for checking continuity and link table - * @dma_len: length of dma mapped link_tbl space - * @dma_link_tbl: bus physical address of link_tbl/buf - * @desc: h/w descriptor - * @link_tbl: input and output h/w link tables (if {src,dst}_nents > 1) (SEC2) - * @buf: input and output buffeur (if {src,dst}_nents > 1) (SEC1) - * - * if decrypting (with authcheck), or either one of src_nents or dst_nents - * is greater than 1, an integrity check value is concatenated to the end - * of link_tbl data - */ -struct talitos_edesc { - int src_nents; - int dst_nents; - bool icv_ool; - dma_addr_t iv_dma; - int dma_len; - dma_addr_t dma_link_tbl; - struct talitos_desc desc; - union { - struct talitos_ptr link_tbl[0]; - u8 buf[0]; - }; -}; - static void talitos_sg_unmap(struct device *dev, struct talitos_edesc *edesc, struct scatterlist *src, @@ -1466,15 +1452,11 @@ static struct talitos_edesc *talitos_edesc_alloc(struct device *dev, edesc->dst_nents = dst_nents; edesc->iv_dma = iv_dma; edesc->dma_len = dma_len; - if (dma_len) { - void *addr = &edesc->link_tbl[0]; - - if (is_sec1 && !dst) - addr += sizeof(struct talitos_desc); - edesc->dma_link_tbl = dma_map_single(dev, addr, + if (dma_len) + edesc->dma_link_tbl = dma_map_single(dev, &edesc->link_tbl[0], edesc->dma_len, DMA_BIDIRECTIONAL); - } + return edesc; } @@ -1759,14 +1741,16 @@ static void common_nonsnoop_hash_unmap(struct device *dev, struct talitos_private *priv = dev_get_drvdata(dev); bool is_sec1 = 
has_ftr_sec1(priv); struct talitos_desc *desc = &edesc->desc; - struct talitos_desc *desc2 = desc + 1; + struct talitos_desc *desc2 = (struct talitos_desc *) + (edesc->buf + edesc->dma_len); unmap_single_talitos_ptr(dev, &edesc->desc.ptr[5], DMA_FROM_DEVICE); if (desc->next_desc && desc->ptr[5].ptr != desc2->ptr[5].ptr) unmap_single_talitos_ptr(dev, &desc2->ptr[5], DMA_FROM_DEVICE); - talitos_sg_unmap(dev, edesc, req_ctx->psrc, NULL, 0, 0); + if (req_ctx->psrc) + talitos_sg_unmap(dev, edesc, req_ctx->psrc, NULL, 0, 0); /* When using hashctx-in, must unmap it. */ if (from_talitos_ptr_len(&edesc->desc.ptr[1], is_sec1)) @@ -1833,7 +1817,6 @@ static void talitos_handle_buggy_hash(struct talitos_ctx *ctx, static int common_nonsnoop_hash(struct talitos_edesc *edesc, struct ahash_request *areq, unsigned int length, - unsigned int offset, void (*callback) (struct device *dev, struct talitos_desc *desc, void *context, int error)) @@ -1872,9 +1855,7 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc, sg_count = edesc->src_nents ?: 1; if (is_sec1 && sg_count > 1) - sg_pcopy_to_buffer(req_ctx->psrc, sg_count, - edesc->buf + sizeof(struct talitos_desc), - length, req_ctx->nbuf); + sg_copy_to_buffer(req_ctx->psrc, sg_count, edesc->buf, length); else if (length) sg_count = dma_map_sg(dev, req_ctx->psrc, sg_count, DMA_TO_DEVICE); @@ -1887,7 +1868,7 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc, DMA_TO_DEVICE); } else { sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc, - &desc->ptr[3], sg_count, offset, 0); + &desc->ptr[3], sg_count, 0, 0); if (sg_count > 1) sync_needed = true; } @@ -1911,7 +1892,8 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc, talitos_handle_buggy_hash(ctx, edesc, &desc->ptr[3]); if (is_sec1 && req_ctx->nbuf && length) { - struct talitos_desc *desc2 = desc + 1; + struct talitos_desc *desc2 = (struct talitos_desc *) + (edesc->buf + edesc->dma_len); dma_addr_t next_desc; memset(desc2, 0, sizeof(*desc2)); @@ -1932,7 +1914,7 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc, DMA_TO_DEVICE); copy_talitos_ptr(&desc2->ptr[2], &desc->ptr[2], is_sec1); sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc, - &desc2->ptr[3], sg_count, offset, 0); + &desc2->ptr[3], sg_count, 0, 0); if (sg_count > 1) sync_needed = true; copy_talitos_ptr(&desc2->ptr[5], &desc->ptr[5], is_sec1); @@ -2043,7 +2025,6 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes) struct device *dev = ctx->dev; struct talitos_private *priv = dev_get_drvdata(dev); bool is_sec1 = has_ftr_sec1(priv); - int offset = 0; u8 *ctx_buf = req_ctx->buf[req_ctx->buf_idx]; if (!req_ctx->last && (nbytes + req_ctx->nbuf <= blocksize)) { @@ -2083,6 +2064,8 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes) sg_chain(req_ctx->bufsl, 2, areq->src); req_ctx->psrc = req_ctx->bufsl; } else if (is_sec1 && req_ctx->nbuf && req_ctx->nbuf < blocksize) { + int offset; + if (nbytes_to_hash > blocksize) offset = blocksize - req_ctx->nbuf; else @@ -2095,7 +2078,8 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes) sg_copy_to_buffer(areq->src, nents, ctx_buf + req_ctx->nbuf, offset); req_ctx->nbuf += offset; - req_ctx->psrc = areq->src; + req_ctx->psrc = scatterwalk_ffwd(req_ctx->bufsl, areq->src, + offset); } else req_ctx->psrc = areq->src; @@ -2135,8 +2119,7 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes) if (ctx->keylen && (req_ctx->first || req_ctx->last)) 
edesc->desc.hdr |= DESC_HDR_MODE0_MDEU_HMAC; - return common_nonsnoop_hash(edesc, areq, nbytes_to_hash, offset, - ahash_done); + return common_nonsnoop_hash(edesc, areq, nbytes_to_hash, ahash_done); } static int ahash_update(struct ahash_request *areq) @@ -2339,7 +2322,7 @@ static struct talitos_alg_template driver_algs[] = { .base = { .cra_name = "authenc(hmac(sha1),cbc(aes))", .cra_driver_name = "authenc-hmac-sha1-" - "cbc-aes-talitos", + "cbc-aes-talitos-hsna", .cra_blocksize = AES_BLOCK_SIZE, .cra_flags = CRYPTO_ALG_ASYNC, }, @@ -2384,7 +2367,7 @@ static struct talitos_alg_template driver_algs[] = { .cra_name = "authenc(hmac(sha1)," "cbc(des3_ede))", .cra_driver_name = "authenc-hmac-sha1-" - "cbc-3des-talitos", + "cbc-3des-talitos-hsna", .cra_blocksize = DES3_EDE_BLOCK_SIZE, .cra_flags = CRYPTO_ALG_ASYNC, }, @@ -2427,7 +2410,7 @@ static struct talitos_alg_template driver_algs[] = { .base = { .cra_name = "authenc(hmac(sha224),cbc(aes))", .cra_driver_name = "authenc-hmac-sha224-" - "cbc-aes-talitos", + "cbc-aes-talitos-hsna", .cra_blocksize = AES_BLOCK_SIZE, .cra_flags = CRYPTO_ALG_ASYNC, }, @@ -2472,7 +2455,7 @@ static struct talitos_alg_template driver_algs[] = { .cra_name = "authenc(hmac(sha224)," "cbc(des3_ede))", .cra_driver_name = "authenc-hmac-sha224-" - "cbc-3des-talitos", + "cbc-3des-talitos-hsna", .cra_blocksize = DES3_EDE_BLOCK_SIZE, .cra_flags = CRYPTO_ALG_ASYNC, }, @@ -2515,7 +2498,7 @@ static struct talitos_alg_template driver_algs[] = { .base = { .cra_name = "authenc(hmac(sha256),cbc(aes))", .cra_driver_name = "authenc-hmac-sha256-" - "cbc-aes-talitos", + "cbc-aes-talitos-hsna", .cra_blocksize = AES_BLOCK_SIZE, .cra_flags = CRYPTO_ALG_ASYNC, }, @@ -2560,7 +2543,7 @@ static struct talitos_alg_template driver_algs[] = { .cra_name = "authenc(hmac(sha256)," "cbc(des3_ede))", .cra_driver_name = "authenc-hmac-sha256-" - "cbc-3des-talitos", + "cbc-3des-talitos-hsna", .cra_blocksize = DES3_EDE_BLOCK_SIZE, .cra_flags = CRYPTO_ALG_ASYNC, }, @@ -2689,7 +2672,7 @@ static struct talitos_alg_template driver_algs[] = { .base = { .cra_name = "authenc(hmac(md5),cbc(aes))", .cra_driver_name = "authenc-hmac-md5-" - "cbc-aes-talitos", + "cbc-aes-talitos-hsna", .cra_blocksize = AES_BLOCK_SIZE, .cra_flags = CRYPTO_ALG_ASYNC, }, @@ -2732,7 +2715,7 @@ static struct talitos_alg_template driver_algs[] = { .base = { .cra_name = "authenc(hmac(md5),cbc(des3_ede))", .cra_driver_name = "authenc-hmac-md5-" - "cbc-3des-talitos", + "cbc-3des-talitos-hsna", .cra_blocksize = DES3_EDE_BLOCK_SIZE, .cra_flags = CRYPTO_ALG_ASYNC, }, diff --git a/drivers/crypto/talitos.h b/drivers/crypto/talitos.h index a65a63e0d6c10e..979f6a61e545f1 100644 --- a/drivers/crypto/talitos.h +++ b/drivers/crypto/talitos.h @@ -65,6 +65,36 @@ struct talitos_desc { #define TALITOS_DESC_SIZE (sizeof(struct talitos_desc) - sizeof(__be32)) +/* + * talitos_edesc - s/w-extended descriptor + * @src_nents: number of segments in input scatterlist + * @dst_nents: number of segments in output scatterlist + * @icv_ool: whether ICV is out-of-line + * @iv_dma: dma address of iv for checking continuity and link table + * @dma_len: length of dma mapped link_tbl space + * @dma_link_tbl: bus physical address of link_tbl/buf + * @desc: h/w descriptor + * @link_tbl: input and output h/w link tables (if {src,dst}_nents > 1) (SEC2) + * @buf: input and output buffeur (if {src,dst}_nents > 1) (SEC1) + * + * if decrypting (with authcheck), or either one of src_nents or dst_nents + * is greater than 1, an integrity check value is concatenated to the end + * of 
link_tbl data + */ +struct talitos_edesc { + int src_nents; + int dst_nents; + bool icv_ool; + dma_addr_t iv_dma; + int dma_len; + dma_addr_t dma_link_tbl; + struct talitos_desc desc; + union { + struct talitos_ptr link_tbl[0]; + u8 buf[0]; + }; +}; + /** * talitos_request - descriptor submission request * @desc: descriptor pointer (kernel virtual) diff --git a/drivers/gpu/drm/sun4i/sun4i_backend.c b/drivers/gpu/drm/sun4i/sun4i_backend.c index 78d8c3afe82533..d31d4e424eaae7 100644 --- a/drivers/gpu/drm/sun4i/sun4i_backend.c +++ b/drivers/gpu/drm/sun4i/sun4i_backend.c @@ -972,6 +972,9 @@ static int sun4i_backend_remove(struct platform_device *pdev) return 0; } +static const struct sun4i_backend_quirks suniv_backend_quirks = { +}; + static const struct sun4i_backend_quirks sun4i_backend_quirks = { .needs_output_muxing = true, }; @@ -995,6 +998,10 @@ static const struct sun4i_backend_quirks sun9i_backend_quirks = { }; static const struct of_device_id sun4i_backend_of_table[] = { + { + .compatible = "allwinner,suniv-f1c100s-display-backend", + .data = &suniv_backend_quirks, + }, { .compatible = "allwinner,sun4i-a10-display-backend", .data = &sun4i_backend_quirks, diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c index 3ff73998d841f0..59f6d5d765c039 100644 --- a/drivers/gpu/drm/sun4i/sun4i_drv.c +++ b/drivers/gpu/drm/sun4i/sun4i_drv.c @@ -164,7 +164,8 @@ static bool sun4i_drv_node_is_connector(struct device_node *node) static bool sun4i_drv_node_is_frontend(struct device_node *node) { - return of_device_is_compatible(node, "allwinner,sun4i-a10-display-frontend") || + return of_device_is_compatible(node, "allwinner,suniv-f1c100s-display-frontend") || + of_device_is_compatible(node, "allwinner,sun4i-a10-display-frontend") || of_device_is_compatible(node, "allwinner,sun5i-a13-display-frontend") || of_device_is_compatible(node, "allwinner,sun6i-a31-display-frontend") || of_device_is_compatible(node, "allwinner,sun7i-a20-display-frontend") || @@ -404,6 +405,7 @@ static int sun4i_drv_remove(struct platform_device *pdev) } static const struct of_device_id sun4i_drv_of_table[] = { + { .compatible = "allwinner,suniv-f1c100s-display-engine" }, { .compatible = "allwinner,sun4i-a10-display-engine" }, { .compatible = "allwinner,sun5i-a10s-display-engine" }, { .compatible = "allwinner,sun5i-a13-display-engine" }, diff --git a/drivers/gpu/drm/sun4i/sun4i_rgb.c b/drivers/gpu/drm/sun4i/sun4i_rgb.c index a901ec689b620a..386d7fbffba212 100644 --- a/drivers/gpu/drm/sun4i/sun4i_rgb.c +++ b/drivers/gpu/drm/sun4i/sun4i_rgb.c @@ -116,6 +116,9 @@ static enum drm_mode_status sun4i_rgb_mode_valid(struct drm_encoder *crtc, if (!rgb->bridge) goto out; + if(connector->connector_type == DRM_MODE_CONNECTOR_Unknown) + goto out; + tcon->dclk_min_div = 6; tcon->dclk_max_div = 127; rounded_rate = clk_round_rate(tcon->dclk, rate); diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c index 64c43ee6bd92ff..0168283ebacb59 100644 --- a/drivers/gpu/drm/sun4i/sun4i_tcon.c +++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c @@ -1419,6 +1419,14 @@ static int sun8i_r40_tcon_tv_set_mux(struct sun4i_tcon *tcon, return 0; } +static const struct sun4i_tcon_quirks suniv_f1c100s_quirks = { + /* + * The F1C100s SoC has a second channel in TCON, but the clock input of + * it is not documented. 
+ */ + /* .has_channel_1 = true, */ +}; + static const struct sun4i_tcon_quirks sun4i_a10_quirks = { .has_channel_0 = true, .has_channel_1 = true, @@ -1487,6 +1495,7 @@ static const struct sun4i_tcon_quirks sun9i_a80_tcon_tv_quirks = { /* sun4i_drv uses this list to check if a device node is a TCON */ const struct of_device_id sun4i_tcon_of_table[] = { + { .compatible = "allwinner,suniv-f1c100s-tcon", .data = &suniv_f1c100s_quirks }, { .compatible = "allwinner,sun4i-a10-tcon", .data = &sun4i_a10_quirks }, { .compatible = "allwinner,sun5i-a13-tcon", .data = &sun5i_a13_quirks }, { .compatible = "allwinner,sun6i-a31-tcon", .data = &sun6i_a31_quirks }, diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h index b032d3899fa35c..bfc584ada4ebf1 100644 --- a/drivers/hid/hid-ids.h +++ b/drivers/hid/hid-ids.h @@ -1241,6 +1241,7 @@ #define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05 #define USB_DEVICE_ID_PRIMAX_REZEL 0x4e72 #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F 0x4d0f +#define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65 0x4d65 #define USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22 0x4e22 diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c index 671a285724f9d8..1549c7a2f04c2e 100644 --- a/drivers/hid/hid-quirks.c +++ b/drivers/hid/hid-quirks.c @@ -130,6 +130,7 @@ static const struct hid_device_id hid_quirks[] = { { HID_USB_DEVICE(USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE), HID_QUIRK_ALWAYS_POLL }, { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_MOUSE_4D22), HID_QUIRK_ALWAYS_POLL }, { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D0F), HID_QUIRK_ALWAYS_POLL }, + { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4D65), HID_QUIRK_ALWAYS_POLL }, { HID_USB_DEVICE(USB_VENDOR_ID_PRIMAX, USB_DEVICE_ID_PRIMAX_PIXART_MOUSE_4E22), HID_QUIRK_ALWAYS_POLL }, { HID_USB_DEVICE(USB_VENDOR_ID_PRODIGE, USB_DEVICE_ID_PRODIGE_CORDLESS), HID_QUIRK_NOGET }, { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001), HID_QUIRK_NOGET }, diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c index 4ee4c80a435439..543cc3d36e1d57 100644 --- a/drivers/hwtracing/coresight/coresight-etb10.c +++ b/drivers/hwtracing/coresight/coresight-etb10.c @@ -373,12 +373,10 @@ static void *etb_alloc_buffer(struct coresight_device *csdev, struct perf_event *event, void **pages, int nr_pages, bool overwrite) { - int node, cpu = event->cpu; + int node; struct cs_buffers *buf; - if (cpu == -1) - cpu = smp_processor_id(); - node = cpu_to_node(cpu); + node = (event->cpu == -1) ? 
NUMA_NO_NODE : cpu_to_node(event->cpu); buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node); if (!buf) diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c index 16b0c0e1e43a07..ad6e16c96263ab 100644 --- a/drivers/hwtracing/coresight/coresight-funnel.c +++ b/drivers/hwtracing/coresight/coresight-funnel.c @@ -241,6 +241,7 @@ static int funnel_probe(struct device *dev, struct resource *res) } pm_runtime_put(dev); + ret = 0; out_disable_clk: if (ret && !IS_ERR_OR_NULL(drvdata->atclk)) diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c index 2527b5d3b65e96..8de109de171f14 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-etf.c +++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c @@ -378,12 +378,10 @@ static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, struct perf_event *event, void **pages, int nr_pages, bool overwrite) { - int node, cpu = event->cpu; + int node; struct cs_buffers *buf; - if (cpu == -1) - cpu = smp_processor_id(); - node = cpu_to_node(cpu); + node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu); /* Allocate memory structure for interaction with Perf */ buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node); diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c index df6e4b0b84e931..9f293b9dce8ca5 100644 --- a/drivers/hwtracing/coresight/coresight-tmc-etr.c +++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c @@ -1178,14 +1178,11 @@ static struct etr_buf * alloc_etr_buf(struct tmc_drvdata *drvdata, struct perf_event *event, int nr_pages, void **pages, bool snapshot) { - int node, cpu = event->cpu; + int node; struct etr_buf *etr_buf; unsigned long size; - if (cpu == -1) - cpu = smp_processor_id(); - node = cpu_to_node(cpu); - + node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu); /* * Try to match the perf ring buffer size if it is larger * than the size requested via sysfs. @@ -1317,13 +1314,11 @@ static struct etr_perf_buffer * tmc_etr_setup_perf_buf(struct tmc_drvdata *drvdata, struct perf_event *event, int nr_pages, void **pages, bool snapshot) { - int node, cpu = event->cpu; + int node; struct etr_buf *etr_buf; struct etr_perf_buffer *etr_perf; - if (cpu == -1) - cpu = smp_processor_id(); - node = cpu_to_node(cpu); + node = (event->cpu == -1) ? 
NUMA_NO_NODE : cpu_to_node(event->cpu); etr_perf = kzalloc_node(sizeof(*etr_perf), GFP_KERNEL, node); if (!etr_perf) diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c index 2327ec18b40cc1..1f7ce5186dfcec 100644 --- a/drivers/iio/adc/stm32-adc-core.c +++ b/drivers/iio/adc/stm32-adc-core.c @@ -87,6 +87,7 @@ struct stm32_adc_priv_cfg { * @domain: irq domain reference * @aclk: clock reference for the analog circuitry * @bclk: bus clock common for all ADCs, depends on part used + * @vdda: vdda analog supply reference * @vref: regulator reference * @cfg: compatible configuration data * @common: common data for all ADC instances @@ -97,6 +98,7 @@ struct stm32_adc_priv { struct irq_domain *domain; struct clk *aclk; struct clk *bclk; + struct regulator *vdda; struct regulator *vref; const struct stm32_adc_priv_cfg *cfg; struct stm32_adc_common common; @@ -394,10 +396,16 @@ static int stm32_adc_core_hw_start(struct device *dev) struct stm32_adc_priv *priv = to_stm32_adc_priv(common); int ret; + ret = regulator_enable(priv->vdda); + if (ret < 0) { + dev_err(dev, "vdda enable failed %d\n", ret); + return ret; + } + ret = regulator_enable(priv->vref); if (ret < 0) { dev_err(dev, "vref enable failed\n"); - return ret; + goto err_vdda_disable; } if (priv->bclk) { @@ -425,6 +433,8 @@ static int stm32_adc_core_hw_start(struct device *dev) clk_disable_unprepare(priv->bclk); err_regulator_disable: regulator_disable(priv->vref); +err_vdda_disable: + regulator_disable(priv->vdda); return ret; } @@ -441,6 +451,7 @@ static void stm32_adc_core_hw_stop(struct device *dev) if (priv->bclk) clk_disable_unprepare(priv->bclk); regulator_disable(priv->vref); + regulator_disable(priv->vdda); } static int stm32_adc_probe(struct platform_device *pdev) @@ -468,6 +479,14 @@ static int stm32_adc_probe(struct platform_device *pdev) return PTR_ERR(priv->common.base); priv->common.phys_base = res->start; + priv->vdda = devm_regulator_get(&pdev->dev, "vdda"); + if (IS_ERR(priv->vdda)) { + ret = PTR_ERR(priv->vdda); + if (ret != -EPROBE_DEFER) + dev_err(&pdev->dev, "vdda get failed, %d\n", ret); + return ret; + } + priv->vref = devm_regulator_get(&pdev->dev, "vref"); if (IS_ERR(priv->vref)) { ret = PTR_ERR(priv->vref); diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c index b8ec301025b7c7..1080c0c498154a 100644 --- a/drivers/input/mouse/synaptics.c +++ b/drivers/input/mouse/synaptics.c @@ -173,6 +173,7 @@ static const char * const smbus_pnp_ids[] = { "LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */ "LEN0073", /* X1 Carbon G5 (Elantech) */ "LEN0092", /* X1 Carbon 6 */ + "LEN0093", /* T480 */ "LEN0096", /* X280 */ "LEN0097", /* X280 -> ALPS trackpoint */ "LEN200f", /* T450s */ diff --git a/drivers/media/dvb-frontends/stv0297.c b/drivers/media/dvb-frontends/stv0297.c index dac396c95a59d4..6d5962d5697ac9 100644 --- a/drivers/media/dvb-frontends/stv0297.c +++ b/drivers/media/dvb-frontends/stv0297.c @@ -682,7 +682,7 @@ static const struct dvb_frontend_ops stv0297_ops = { .delsys = { SYS_DVBC_ANNEX_A }, .info = { .name = "ST STV0297 DVB-C", - .frequency_min_hz = 470 * MHz, + .frequency_min_hz = 47 * MHz, .frequency_max_hz = 862 * MHz, .frequency_stepsize_hz = 62500, .symbol_rate_min = 870000, diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile index 951c984de61ae9..fb10eafe9bde74 100644 --- a/drivers/misc/lkdtm/Makefile +++ b/drivers/misc/lkdtm/Makefile @@ -15,8 +15,7 @@ KCOV_INSTRUMENT_rodata.o := n OBJCOPYFLAGS := OBJCOPYFLAGS_rodata_objcopy.o 
:= \ - --set-section-flags .text=alloc,readonly \ - --rename-section .text=.rodata + --rename-section .text=.rodata,alloc,readonly,load targets += rodata.o rodata_objcopy.o $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE $(call if_changed,objcopy) diff --git a/drivers/misc/vmw_vmci/vmci_context.c b/drivers/misc/vmw_vmci/vmci_context.c index 300ed69fe2c746..16695366ec926d 100644 --- a/drivers/misc/vmw_vmci/vmci_context.c +++ b/drivers/misc/vmw_vmci/vmci_context.c @@ -21,6 +21,9 @@ #include "vmci_driver.h" #include "vmci_event.h" +/* Use a wide upper bound for the maximum contexts. */ +#define VMCI_MAX_CONTEXTS 2000 + /* * List of current VMCI contexts. Contexts can be added by * vmci_ctx_create() and removed via vmci_ctx_destroy(). @@ -117,19 +120,22 @@ struct vmci_ctx *vmci_ctx_create(u32 cid, u32 priv_flags, /* Initialize host-specific VMCI context. */ init_waitqueue_head(&context->host_context.wait_queue); - context->queue_pair_array = vmci_handle_arr_create(0); + context->queue_pair_array = + vmci_handle_arr_create(0, VMCI_MAX_GUEST_QP_COUNT); if (!context->queue_pair_array) { error = -ENOMEM; goto err_free_ctx; } - context->doorbell_array = vmci_handle_arr_create(0); + context->doorbell_array = + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT); if (!context->doorbell_array) { error = -ENOMEM; goto err_free_qp_array; } - context->pending_doorbell_array = vmci_handle_arr_create(0); + context->pending_doorbell_array = + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT); if (!context->pending_doorbell_array) { error = -ENOMEM; goto err_free_db_array; @@ -204,7 +210,7 @@ static int ctx_fire_notification(u32 context_id, u32 priv_flags) * We create an array to hold the subscribers we find when * scanning through all contexts. */ - subscriber_array = vmci_handle_arr_create(0); + subscriber_array = vmci_handle_arr_create(0, VMCI_MAX_CONTEXTS); if (subscriber_array == NULL) return VMCI_ERROR_NO_MEM; @@ -623,20 +629,26 @@ int vmci_ctx_add_notification(u32 context_id, u32 remote_cid) spin_lock(&context->lock); - list_for_each_entry(n, &context->notifier_list, node) { - if (vmci_handle_is_equal(n->handle, notifier->handle)) { - exists = true; - break; + if (context->n_notifiers < VMCI_MAX_CONTEXTS) { + list_for_each_entry(n, &context->notifier_list, node) { + if (vmci_handle_is_equal(n->handle, notifier->handle)) { + exists = true; + break; + } } - } - if (exists) { - kfree(notifier); - result = VMCI_ERROR_ALREADY_EXISTS; + if (exists) { + kfree(notifier); + result = VMCI_ERROR_ALREADY_EXISTS; + } else { + list_add_tail_rcu(¬ifier->node, + &context->notifier_list); + context->n_notifiers++; + result = VMCI_SUCCESS; + } } else { - list_add_tail_rcu(¬ifier->node, &context->notifier_list); - context->n_notifiers++; - result = VMCI_SUCCESS; + kfree(notifier); + result = VMCI_ERROR_NO_MEM; } spin_unlock(&context->lock); @@ -721,8 +733,7 @@ static int vmci_ctx_get_chkpt_doorbells(struct vmci_ctx *context, u32 *buf_size, void **pbuf) { struct dbell_cpt_state *dbells; - size_t n_doorbells; - int i; + u32 i, n_doorbells; n_doorbells = vmci_handle_arr_get_size(context->doorbell_array); if (n_doorbells > 0) { @@ -860,7 +871,8 @@ int vmci_ctx_rcv_notifications_get(u32 context_id, spin_lock(&context->lock); *db_handle_array = context->pending_doorbell_array; - context->pending_doorbell_array = vmci_handle_arr_create(0); + context->pending_doorbell_array = + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT); if (!context->pending_doorbell_array) { context->pending_doorbell_array = 
*db_handle_array; *db_handle_array = NULL; @@ -942,12 +954,11 @@ int vmci_ctx_dbell_create(u32 context_id, struct vmci_handle handle) return VMCI_ERROR_NOT_FOUND; spin_lock(&context->lock); - if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) { - vmci_handle_arr_append_entry(&context->doorbell_array, handle); - result = VMCI_SUCCESS; - } else { + if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) + result = vmci_handle_arr_append_entry(&context->doorbell_array, + handle); + else result = VMCI_ERROR_DUPLICATE_ENTRY; - } spin_unlock(&context->lock); vmci_ctx_put(context); @@ -1083,15 +1094,16 @@ int vmci_ctx_notify_dbell(u32 src_cid, if (!vmci_handle_arr_has_entry( dst_context->pending_doorbell_array, handle)) { - vmci_handle_arr_append_entry( + result = vmci_handle_arr_append_entry( &dst_context->pending_doorbell_array, handle); - - ctx_signal_notify(dst_context); - wake_up(&dst_context->host_context.wait_queue); - + if (result == VMCI_SUCCESS) { + ctx_signal_notify(dst_context); + wake_up(&dst_context->host_context.wait_queue); + } + } else { + result = VMCI_SUCCESS; } - result = VMCI_SUCCESS; } spin_unlock(&dst_context->lock); } @@ -1118,13 +1130,11 @@ int vmci_ctx_qp_create(struct vmci_ctx *context, struct vmci_handle handle) if (context == NULL || vmci_handle_is_invalid(handle)) return VMCI_ERROR_INVALID_ARGS; - if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) { - vmci_handle_arr_append_entry(&context->queue_pair_array, - handle); - result = VMCI_SUCCESS; - } else { + if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) + result = vmci_handle_arr_append_entry( + &context->queue_pair_array, handle); + else result = VMCI_ERROR_DUPLICATE_ENTRY; - } return result; } diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.c b/drivers/misc/vmw_vmci/vmci_handle_array.c index c527388f5d7b16..de7fee7ead1bcc 100644 --- a/drivers/misc/vmw_vmci/vmci_handle_array.c +++ b/drivers/misc/vmw_vmci/vmci_handle_array.c @@ -8,24 +8,29 @@ #include #include "vmci_handle_array.h" -static size_t handle_arr_calc_size(size_t capacity) +static size_t handle_arr_calc_size(u32 capacity) { - return sizeof(struct vmci_handle_arr) + + return VMCI_HANDLE_ARRAY_HEADER_SIZE + capacity * sizeof(struct vmci_handle); } -struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity) +struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity) { struct vmci_handle_arr *array; + if (max_capacity == 0 || capacity > max_capacity) + return NULL; + if (capacity == 0) - capacity = VMCI_HANDLE_ARRAY_DEFAULT_SIZE; + capacity = min((u32)VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY, + max_capacity); array = kmalloc(handle_arr_calc_size(capacity), GFP_ATOMIC); if (!array) return NULL; array->capacity = capacity; + array->max_capacity = max_capacity; array->size = 0; return array; @@ -36,27 +41,34 @@ void vmci_handle_arr_destroy(struct vmci_handle_arr *array) kfree(array); } -void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, - struct vmci_handle handle) +int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, + struct vmci_handle handle) { struct vmci_handle_arr *array = *array_ptr; if (unlikely(array->size >= array->capacity)) { /* reallocate. 
*/ struct vmci_handle_arr *new_array; - size_t new_capacity = array->capacity * VMCI_ARR_CAP_MULT; - size_t new_size = handle_arr_calc_size(new_capacity); + u32 capacity_bump = min(array->max_capacity - array->capacity, + array->capacity); + size_t new_size = handle_arr_calc_size(array->capacity + + capacity_bump); + + if (array->size >= array->max_capacity) + return VMCI_ERROR_NO_MEM; new_array = krealloc(array, new_size, GFP_ATOMIC); if (!new_array) - return; + return VMCI_ERROR_NO_MEM; - new_array->capacity = new_capacity; + new_array->capacity += capacity_bump; *array_ptr = array = new_array; } array->entries[array->size] = handle; array->size++; + + return VMCI_SUCCESS; } /* @@ -66,7 +78,7 @@ struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array, struct vmci_handle entry_handle) { struct vmci_handle handle = VMCI_INVALID_HANDLE; - size_t i; + u32 i; for (i = 0; i < array->size; i++) { if (vmci_handle_is_equal(array->entries[i], entry_handle)) { @@ -101,7 +113,7 @@ struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array) * Handle at given index, VMCI_INVALID_HANDLE if invalid index. */ struct vmci_handle -vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index) +vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index) { if (unlikely(index >= array->size)) return VMCI_INVALID_HANDLE; @@ -112,7 +124,7 @@ vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index) bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array, struct vmci_handle entry_handle) { - size_t i; + u32 i; for (i = 0; i < array->size; i++) if (vmci_handle_is_equal(array->entries[i], entry_handle)) diff --git a/drivers/misc/vmw_vmci/vmci_handle_array.h b/drivers/misc/vmw_vmci/vmci_handle_array.h index bd1559a548e9a6..96193f85be5bba 100644 --- a/drivers/misc/vmw_vmci/vmci_handle_array.h +++ b/drivers/misc/vmw_vmci/vmci_handle_array.h @@ -9,32 +9,41 @@ #define _VMCI_HANDLE_ARRAY_H_ #include +#include #include -#define VMCI_HANDLE_ARRAY_DEFAULT_SIZE 4 -#define VMCI_ARR_CAP_MULT 2 /* Array capacity multiplier */ - struct vmci_handle_arr { - size_t capacity; - size_t size; + u32 capacity; + u32 max_capacity; + u32 size; + u32 pad; struct vmci_handle entries[]; }; -struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity); +#define VMCI_HANDLE_ARRAY_HEADER_SIZE \ + offsetof(struct vmci_handle_arr, entries) +/* Select a default capacity that results in a 64 byte sized array */ +#define VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY 6 +/* Make sure that the max array size can be expressed by a u32 */ +#define VMCI_HANDLE_ARRAY_MAX_CAPACITY \ + ((U32_MAX - VMCI_HANDLE_ARRAY_HEADER_SIZE - 1) / \ + sizeof(struct vmci_handle)) + +struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity); void vmci_handle_arr_destroy(struct vmci_handle_arr *array); -void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, - struct vmci_handle handle); +int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, + struct vmci_handle handle); struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array, struct vmci_handle entry_handle); struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array); struct vmci_handle -vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index); +vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index); bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array, struct vmci_handle entry_handle); struct 
vmci_handle *vmci_handle_arr_get_handles(struct vmci_handle_arr *array); -static inline size_t vmci_handle_arr_get_size( +static inline u32 vmci_handle_arr_get_size( const struct vmci_handle_arr *array) { return array->size; diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c index 0e09bede42a2bd..b081a1ef68598f 100644 --- a/drivers/net/ethernet/intel/e1000e/netdev.c +++ b/drivers/net/ethernet/intel/e1000e/netdev.c @@ -4208,7 +4208,7 @@ void e1000e_up(struct e1000_adapter *adapter) e1000_configure_msix(adapter); e1000_irq_enable(adapter); - netif_start_queue(adapter->netdev); + /* Tx queue started by watchdog timer when link is up */ e1000e_trigger_lsc(adapter); } @@ -4606,6 +4606,7 @@ int e1000e_open(struct net_device *netdev) pm_runtime_get_sync(&pdev->dev); netif_carrier_off(netdev); + netif_stop_queue(netdev); /* allocate transmit descriptors */ err = e1000e_setup_tx_resources(adapter->tx_ring); @@ -4666,7 +4667,6 @@ int e1000e_open(struct net_device *netdev) e1000_irq_enable(adapter); adapter->tx_hang_recheck = false; - netif_start_queue(netdev); hw->mac.get_link_status = true; pm_runtime_put(&pdev->dev); @@ -5288,6 +5288,7 @@ static void e1000_watchdog_task(struct work_struct *work) if (phy->ops.cfg_on_link_up) phy->ops.cfg_on_link_up(hw); + netif_wake_queue(netdev); netif_carrier_on(netdev); if (!test_bit(__E1000_DOWN, &adapter->state)) @@ -5301,6 +5302,7 @@ static void e1000_watchdog_task(struct work_struct *work) /* Link status message must follow this format */ pr_info("%s NIC Link is Down\n", adapter->netdev->name); netif_carrier_off(netdev); + netif_stop_queue(netdev); if (!test_bit(__E1000_DOWN, &adapter->state)) mod_timer(&adapter->phy_info_timer, round_jiffies(jiffies + 2 * HZ)); @@ -5308,13 +5310,8 @@ static void e1000_watchdog_task(struct work_struct *work) /* 8000ES2LAN requires a Rx packet buffer work-around * on link down event; reset the controller to flush * the Rx packet buffer. - * - * If the link is lost the controller stops DMA, but - * if there is queued Tx work it cannot be done. So - * reset the controller to flush the Tx packet buffers. */ - if ((adapter->flags & FLAG_RX_NEEDS_RESTART) || - e1000_desc_unused(tx_ring) + 1 < tx_ring->count) + if (adapter->flags & FLAG_RX_NEEDS_RESTART) adapter->flags |= FLAG_RESTART_NOW; else pm_schedule_suspend(netdev->dev.parent, @@ -5337,6 +5334,14 @@ static void e1000_watchdog_task(struct work_struct *work) adapter->gotc_old = adapter->stats.gotc; spin_unlock(&adapter->stats64_lock); + /* If the link is lost the controller stops DMA, but + * if there is queued Tx work it cannot be done. So + * reset the controller to flush the Tx packet buffers. + */ + if (!netif_carrier_ok(netdev) && + (e1000_desc_unused(tx_ring) + 1 < tx_ring->count)) + adapter->flags |= FLAG_RESTART_NOW; + /* If reset is necessary, do it outside of interrupt context. 
*/ if (adapter->flags & FLAG_RESTART_NOW) { schedule_work(&adapter->reset_task); diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c index e7c3f3b8457dfd..99f1897a775dc1 100644 --- a/drivers/net/wireless/ath/carl9170/usb.c +++ b/drivers/net/wireless/ath/carl9170/usb.c @@ -128,6 +128,8 @@ static const struct usb_device_id carl9170_usb_ids[] = { }; MODULE_DEVICE_TABLE(usb, carl9170_usb_ids); +static struct usb_driver carl9170_driver; + static void carl9170_usb_submit_data_urb(struct ar9170 *ar) { struct urb *urb; @@ -966,32 +968,28 @@ static int carl9170_usb_init_device(struct ar9170 *ar) static void carl9170_usb_firmware_failed(struct ar9170 *ar) { - struct device *parent = ar->udev->dev.parent; - struct usb_device *udev; - - /* - * Store a copy of the usb_device pointer locally. - * This is because device_release_driver initiates - * carl9170_usb_disconnect, which in turn frees our - * driver context (ar). + /* Store a copies of the usb_interface and usb_device pointer locally. + * This is because release_driver initiates carl9170_usb_disconnect, + * which in turn frees our driver context (ar). */ - udev = ar->udev; + struct usb_interface *intf = ar->intf; + struct usb_device *udev = ar->udev; complete(&ar->fw_load_wait); + /* at this point 'ar' could be already freed. Don't use it anymore */ + ar = NULL; /* unbind anything failed */ - if (parent) - device_lock(parent); - - device_release_driver(&udev->dev); - if (parent) - device_unlock(parent); + usb_lock_device(udev); + usb_driver_release_interface(&carl9170_driver, intf); + usb_unlock_device(udev); - usb_put_dev(udev); + usb_put_intf(intf); } static void carl9170_usb_firmware_finish(struct ar9170 *ar) { + struct usb_interface *intf = ar->intf; int err; err = carl9170_parse_firmware(ar); @@ -1009,7 +1007,7 @@ static void carl9170_usb_firmware_finish(struct ar9170 *ar) goto err_unrx; complete(&ar->fw_load_wait); - usb_put_dev(ar->udev); + usb_put_intf(intf); return; err_unrx: @@ -1052,7 +1050,6 @@ static int carl9170_usb_probe(struct usb_interface *intf, return PTR_ERR(ar); udev = interface_to_usbdev(intf); - usb_get_dev(udev); ar->udev = udev; ar->intf = intf; ar->features = id->driver_info; @@ -1094,15 +1091,14 @@ static int carl9170_usb_probe(struct usb_interface *intf, atomic_set(&ar->rx_anch_urbs, 0); atomic_set(&ar->rx_pool_urbs, 0); - usb_get_dev(ar->udev); + usb_get_intf(intf); carl9170_set_state(ar, CARL9170_STOPPED); err = request_firmware_nowait(THIS_MODULE, 1, CARL9170FW_NAME, &ar->udev->dev, GFP_KERNEL, ar, carl9170_usb_firmware_step2); if (err) { - usb_put_dev(udev); - usb_put_dev(udev); + usb_put_intf(intf); carl9170_free(ar); } return err; @@ -1131,7 +1127,6 @@ static void carl9170_usb_disconnect(struct usb_interface *intf) carl9170_release_firmware(ar); carl9170_free(ar); - usb_put_dev(udev); } #ifdef CONFIG_PM diff --git a/drivers/net/wireless/intersil/p54/p54usb.c b/drivers/net/wireless/intersil/p54/p54usb.c index f937815f0f2c99..b94764c88750aa 100644 --- a/drivers/net/wireless/intersil/p54/p54usb.c +++ b/drivers/net/wireless/intersil/p54/p54usb.c @@ -30,6 +30,8 @@ MODULE_ALIAS("prism54usb"); MODULE_FIRMWARE("isl3886usb"); MODULE_FIRMWARE("isl3887usb"); +static struct usb_driver p54u_driver; + /* * Note: * @@ -918,9 +920,9 @@ static void p54u_load_firmware_cb(const struct firmware *firmware, { struct p54u_priv *priv = context; struct usb_device *udev = priv->udev; + struct usb_interface *intf = priv->intf; int err; - complete(&priv->fw_wait_load); if (firmware) { priv->fw = 
firmware; err = p54u_start_ops(priv); @@ -929,26 +931,22 @@ static void p54u_load_firmware_cb(const struct firmware *firmware, dev_err(&udev->dev, "Firmware not found.\n"); } - if (err) { - struct device *parent = priv->udev->dev.parent; - - dev_err(&udev->dev, "failed to initialize device (%d)\n", err); - - if (parent) - device_lock(parent); + complete(&priv->fw_wait_load); + /* + * At this point p54u_disconnect may have already freed + * the "priv" context. Do not use it anymore! + */ + priv = NULL; - device_release_driver(&udev->dev); - /* - * At this point p54u_disconnect has already freed - * the "priv" context. Do not use it anymore! - */ - priv = NULL; + if (err) { + dev_err(&intf->dev, "failed to initialize device (%d)\n", err); - if (parent) - device_unlock(parent); + usb_lock_device(udev); + usb_driver_release_interface(&p54u_driver, intf); + usb_unlock_device(udev); } - usb_put_dev(udev); + usb_put_intf(intf); } static int p54u_load_firmware(struct ieee80211_hw *dev, @@ -969,14 +967,14 @@ static int p54u_load_firmware(struct ieee80211_hw *dev, dev_info(&priv->udev->dev, "Loading firmware file %s\n", p54u_fwlist[i].fw); - usb_get_dev(udev); + usb_get_intf(intf); err = request_firmware_nowait(THIS_MODULE, 1, p54u_fwlist[i].fw, device, GFP_KERNEL, priv, p54u_load_firmware_cb); if (err) { dev_err(&priv->udev->dev, "(p54usb) cannot load firmware %s " "(%d)!\n", p54u_fwlist[i].fw, err); - usb_put_dev(udev); + usb_put_intf(intf); } return err; @@ -1008,8 +1006,6 @@ static int p54u_probe(struct usb_interface *intf, skb_queue_head_init(&priv->rx_queue); init_usb_anchor(&priv->submitted); - usb_get_dev(udev); - /* really lazy and simple way of figuring out if we're a 3887 */ /* TODO: should just stick the identification in the device table */ i = intf->altsetting->desc.bNumEndpoints; @@ -1050,10 +1046,8 @@ static int p54u_probe(struct usb_interface *intf, priv->upload_fw = p54u_upload_firmware_net2280; } err = p54u_load_firmware(dev, intf); - if (err) { - usb_put_dev(udev); + if (err) p54_free_common(dev); - } return err; } @@ -1069,7 +1063,6 @@ static void p54u_disconnect(struct usb_interface *intf) wait_for_completion(&priv->fw_wait_load); p54_unregister_common(dev); - usb_put_dev(interface_to_usbdev(intf)); release_firmware(priv->fw); p54_free_common(dev); } diff --git a/drivers/net/wireless/intersil/p54/txrx.c b/drivers/net/wireless/intersil/p54/txrx.c index ff9acd1563f4ad..5892898f885356 100644 --- a/drivers/net/wireless/intersil/p54/txrx.c +++ b/drivers/net/wireless/intersil/p54/txrx.c @@ -139,7 +139,10 @@ static int p54_assign_address(struct p54_common *priv, struct sk_buff *skb) unlikely(GET_HW_QUEUE(skb) == P54_QUEUE_BEACON)) priv->beacon_req_id = data->req_id; - __skb_queue_after(&priv->tx_queue, target_skb, skb); + if (target_skb) + __skb_queue_after(&priv->tx_queue, target_skb, skb); + else + __skb_queue_head(&priv->tx_queue, skb); spin_unlock_irqrestore(&priv->tx_queue.lock, flags); return 0; } diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h index b73f99dc5a72e0..1fb76d2f5d3fdf 100644 --- a/drivers/net/wireless/marvell/mwifiex/fw.h +++ b/drivers/net/wireless/marvell/mwifiex/fw.h @@ -1759,9 +1759,10 @@ struct mwifiex_ie_types_wmm_queue_status { struct ieee_types_vendor_header { u8 element_id; u8 len; - u8 oui[4]; /* 0~2: oui, 3: oui_type */ - u8 oui_subtype; - u8 version; + struct { + u8 oui[3]; + u8 oui_type; + } __packed oui; } __packed; struct ieee_types_wmm_parameter { @@ -1775,6 +1776,9 @@ struct ieee_types_wmm_parameter 
{ * Version [1] */ struct ieee_types_vendor_header vend_hdr; + u8 oui_subtype; + u8 version; + u8 qos_info_bitmap; u8 reserved; struct ieee_types_wmm_ac_parameters ac_params[IEEE80211_NUM_ACS]; @@ -1792,6 +1796,8 @@ struct ieee_types_wmm_info { * Version [1] */ struct ieee_types_vendor_header vend_hdr; + u8 oui_subtype; + u8 version; u8 qos_info_bitmap; } __packed; diff --git a/drivers/net/wireless/marvell/mwifiex/scan.c b/drivers/net/wireless/marvell/mwifiex/scan.c index c269a0de941379..e2786ab612cadc 100644 --- a/drivers/net/wireless/marvell/mwifiex/scan.c +++ b/drivers/net/wireless/marvell/mwifiex/scan.c @@ -1361,21 +1361,25 @@ int mwifiex_update_bss_desc_with_ie(struct mwifiex_adapter *adapter, break; case WLAN_EID_VENDOR_SPECIFIC: - if (element_len + 2 < sizeof(vendor_ie->vend_hdr)) - return -EINVAL; - vendor_ie = (struct ieee_types_vendor_specific *) current_ptr; - if (!memcmp - (vendor_ie->vend_hdr.oui, wpa_oui, - sizeof(wpa_oui))) { + /* 802.11 requires at least 3-byte OUI. */ + if (element_len < sizeof(vendor_ie->vend_hdr.oui.oui)) + return -EINVAL; + + /* Not long enough for a match? Skip it. */ + if (element_len < sizeof(wpa_oui)) + break; + + if (!memcmp(&vendor_ie->vend_hdr.oui, wpa_oui, + sizeof(wpa_oui))) { bss_entry->bcn_wpa_ie = (struct ieee_types_vendor_specific *) current_ptr; bss_entry->wpa_offset = (u16) (current_ptr - bss_entry->beacon_buf); - } else if (!memcmp(vendor_ie->vend_hdr.oui, wmm_oui, + } else if (!memcmp(&vendor_ie->vend_hdr.oui, wmm_oui, sizeof(wmm_oui))) { if (total_ie_len == sizeof(struct ieee_types_wmm_parameter) || diff --git a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c index ebc0e41e5d3b53..74e50566db1f27 100644 --- a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c +++ b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c @@ -1351,7 +1351,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr, /* Test to see if it is a WPA IE, if not, then * it is a gen IE */ - if (!memcmp(pvendor_ie->oui, wpa_oui, + if (!memcmp(&pvendor_ie->oui, wpa_oui, sizeof(wpa_oui))) { /* IE is a WPA/WPA2 IE so call set_wpa function */ @@ -1361,7 +1361,7 @@ mwifiex_set_gen_ie_helper(struct mwifiex_private *priv, u8 *ie_data_ptr, goto next_ie; } - if (!memcmp(pvendor_ie->oui, wps_oui, + if (!memcmp(&pvendor_ie->oui, wps_oui, sizeof(wps_oui))) { /* Test to see if it is a WPS IE, * if so, enable wps session flag diff --git a/drivers/net/wireless/marvell/mwifiex/wmm.c b/drivers/net/wireless/marvell/mwifiex/wmm.c index 407b9932ca4d7a..64916ba15df5d3 100644 --- a/drivers/net/wireless/marvell/mwifiex/wmm.c +++ b/drivers/net/wireless/marvell/mwifiex/wmm.c @@ -240,7 +240,7 @@ mwifiex_wmm_setup_queue_priorities(struct mwifiex_private *priv, mwifiex_dbg(priv->adapter, INFO, "info: WMM Parameter IE: version=%d,\t" "qos_info Parameter Set Count=%d, Reserved=%#x\n", - wmm_ie->vend_hdr.version, wmm_ie->qos_info_bitmap & + wmm_ie->version, wmm_ie->qos_info_bitmap & IEEE80211_WMM_IE_AP_QOSINFO_PARAM_SET_CNT_MASK, wmm_ie->reserved); diff --git a/drivers/phy/allwinner/phy-sun4i-usb.c b/drivers/phy/allwinner/phy-sun4i-usb.c index 85692738224873..5a09a22129563f 100644 --- a/drivers/phy/allwinner/phy-sun4i-usb.c +++ b/drivers/phy/allwinner/phy-sun4i-usb.c @@ -98,6 +98,7 @@ #define POLL_TIME msecs_to_jiffies(250) enum sun4i_usb_phy_type { + suniv_f1c100s_phy, sun4i_a10_phy, sun6i_a31_phy, sun8i_a33_phy, @@ -859,6 +860,14 @@ static int sun4i_usb_phy_probe(struct platform_device *pdev) return 0; } +static const struct 
sun4i_usb_phy_cfg suniv_f1c100s_cfg = { + .num_phys = 1, + .type = suniv_f1c100s_phy, + .disc_thresh = 3, + .phyctl_offset = REG_PHYCTL_A10, + .dedicated_clocks = true, +}; + static const struct sun4i_usb_phy_cfg sun4i_a10_cfg = { .num_phys = 3, .type = sun4i_a10_phy, @@ -973,6 +982,10 @@ static const struct sun4i_usb_phy_cfg sun50i_h6_cfg = { }; static const struct of_device_id sun4i_usb_phy_of_match[] = { + { + .compatible = "allwinner,suniv-f1c100s-usb-phy", + .data = &suniv_f1c100s_cfg + }, { .compatible = "allwinner,sun4i-a10-usb-phy", .data = &sun4i_a10_cfg }, { .compatible = "allwinner,sun5i-a13-usb-phy", .data = &sun5i_a13_cfg }, { .compatible = "allwinner,sun6i-a31-usb-phy", .data = &sun6i_a31_cfg }, diff --git a/drivers/s390/char/sclp_early.c b/drivers/s390/char/sclp_early.c index 6c90aa725f2376..e71992a3c55f68 100644 --- a/drivers/s390/char/sclp_early.c +++ b/drivers/s390/char/sclp_early.c @@ -41,7 +41,6 @@ static void __init sclp_early_facilities_detect(struct read_info_sccb *sccb) sclp.has_hvs = !!(sccb->fac119 & 0x80); sclp.has_kss = !!(sccb->fac98 & 0x01); sclp.has_sipl = !!(sccb->cbl & 0x02); - sclp.has_sipl_g2 = !!(sccb->cbl & 0x04); if (sccb->fac85 & 0x02) S390_lowcore.machine_flags |= MACHINE_FLAG_ESOP; if (sccb->fac91 & 0x40) diff --git a/drivers/s390/cio/qdio_setup.c b/drivers/s390/cio/qdio_setup.c index 99d7d2566a3a8b..d4101cecdc8db3 100644 --- a/drivers/s390/cio/qdio_setup.c +++ b/drivers/s390/cio/qdio_setup.c @@ -150,6 +150,7 @@ static int __qdio_allocate_qs(struct qdio_q **irq_ptr_qs, int nr_queues) return -ENOMEM; } irq_ptr_qs[i] = q; + INIT_LIST_HEAD(&q->entry); } return 0; } @@ -178,6 +179,7 @@ static void setup_queues_misc(struct qdio_q *q, struct qdio_irq *irq_ptr, q->mask = 1 << (31 - i); q->nr = i; q->handler = handler; + INIT_LIST_HEAD(&q->entry); } static void setup_storage_lists(struct qdio_q *q, struct qdio_irq *irq_ptr, diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c index 28d59ac2204cc1..d9763bbecbf9bd 100644 --- a/drivers/s390/cio/qdio_thinint.c +++ b/drivers/s390/cio/qdio_thinint.c @@ -79,7 +79,6 @@ void tiqdio_add_input_queues(struct qdio_irq *irq_ptr) mutex_lock(&tiq_list_lock); list_add_rcu(&irq_ptr->input_qs[0]->entry, &tiq_list); mutex_unlock(&tiq_list_lock); - xchg(irq_ptr->dsci, 1 << 7); } void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr) @@ -87,14 +86,14 @@ void tiqdio_remove_input_queues(struct qdio_irq *irq_ptr) struct qdio_q *q; q = irq_ptr->input_qs[0]; - /* if establish triggered an error */ - if (!q || !q->entry.prev || !q->entry.next) + if (!q) return; mutex_lock(&tiq_list_lock); list_del_rcu(&q->entry); mutex_unlock(&tiq_list_lock); synchronize_rcu(); + INIT_LIST_HEAD(&q->entry); } static inline int has_multiple_inq_on_dsci(struct qdio_irq *irq_ptr) diff --git a/drivers/staging/comedi/drivers/amplc_pci230.c b/drivers/staging/comedi/drivers/amplc_pci230.c index 65f60c2b702a6c..f7e673121864bd 100644 --- a/drivers/staging/comedi/drivers/amplc_pci230.c +++ b/drivers/staging/comedi/drivers/amplc_pci230.c @@ -2330,7 +2330,8 @@ static irqreturn_t pci230_interrupt(int irq, void *d) devpriv->intr_running = false; spin_unlock_irqrestore(&devpriv->isr_spinlock, irqflags); - comedi_handle_events(dev, s_ao); + if (s_ao) + comedi_handle_events(dev, s_ao); comedi_handle_events(dev, s_ai); return IRQ_HANDLED; diff --git a/drivers/staging/comedi/drivers/dt282x.c b/drivers/staging/comedi/drivers/dt282x.c index 3be927f1d3a926..e15e33ed94ae3c 100644 --- a/drivers/staging/comedi/drivers/dt282x.c +++ 
b/drivers/staging/comedi/drivers/dt282x.c @@ -557,7 +557,8 @@ static irqreturn_t dt282x_interrupt(int irq, void *d) } #endif comedi_handle_events(dev, s); - comedi_handle_events(dev, s_ao); + if (s_ao) + comedi_handle_events(dev, s_ao); return IRQ_RETVAL(handled); } diff --git a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c index e3c3e427309ab0..f73edaf6ce875f 100644 --- a/drivers/staging/fsl-dpaa2/ethsw/ethsw.c +++ b/drivers/staging/fsl-dpaa2/ethsw/ethsw.c @@ -1086,6 +1086,7 @@ static int port_switchdev_event(struct notifier_block *unused, dev_hold(dev); break; default: + kfree(switchdev_work); return NOTIFY_DONE; } diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c index 03d919a9455213..93763d40e3a1f4 100644 --- a/drivers/staging/mt7621-pci/pci-mt7621.c +++ b/drivers/staging/mt7621-pci/pci-mt7621.c @@ -40,7 +40,7 @@ /* MediaTek specific configuration registers */ #define PCIE_FTS_NUM 0x70c #define PCIE_FTS_NUM_MASK GENMASK(15, 8) -#define PCIE_FTS_NUM_L0(x) ((x) & 0xff << 8) +#define PCIE_FTS_NUM_L0(x) (((x) & 0xff) << 8) /* rt_sysc_membase relative registers */ #define RALINK_PCIE_CLK_GEN 0x7c diff --git a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c index a7230c0c7b23e3..8f5a8ac1b010db 100644 --- a/drivers/staging/rtl8712/rtl871x_ioctl_linux.c +++ b/drivers/staging/rtl8712/rtl871x_ioctl_linux.c @@ -124,10 +124,91 @@ static inline void handle_group_key(struct ieee_param *param, } } -static noinline_for_stack char *translate_scan(struct _adapter *padapter, - struct iw_request_info *info, - struct wlan_network *pnetwork, - char *start, char *stop) +static noinline_for_stack char *translate_scan_wpa(struct iw_request_info *info, + struct wlan_network *pnetwork, + struct iw_event *iwe, + char *start, char *stop) +{ + /* parsing WPA/WPA2 IE */ + u8 buf[MAX_WPA_IE_LEN]; + u8 wpa_ie[255], rsn_ie[255]; + u16 wpa_len = 0, rsn_len = 0; + int n, i; + + r8712_get_sec_ie(pnetwork->network.IEs, + pnetwork->network.IELength, rsn_ie, &rsn_len, + wpa_ie, &wpa_len); + if (wpa_len > 0) { + memset(buf, 0, MAX_WPA_IE_LEN); + n = sprintf(buf, "wpa_ie="); + for (i = 0; i < wpa_len; i++) { + n += snprintf(buf + n, MAX_WPA_IE_LEN - n, + "%02x", wpa_ie[i]); + if (n >= MAX_WPA_IE_LEN) + break; + } + memset(iwe, 0, sizeof(*iwe)); + iwe->cmd = IWEVCUSTOM; + iwe->u.data.length = (u16)strlen(buf); + start = iwe_stream_add_point(info, start, stop, + iwe, buf); + memset(iwe, 0, sizeof(*iwe)); + iwe->cmd = IWEVGENIE; + iwe->u.data.length = (u16)wpa_len; + start = iwe_stream_add_point(info, start, stop, + iwe, wpa_ie); + } + if (rsn_len > 0) { + memset(buf, 0, MAX_WPA_IE_LEN); + n = sprintf(buf, "rsn_ie="); + for (i = 0; i < rsn_len; i++) { + n += snprintf(buf + n, MAX_WPA_IE_LEN - n, + "%02x", rsn_ie[i]); + if (n >= MAX_WPA_IE_LEN) + break; + } + memset(iwe, 0, sizeof(*iwe)); + iwe->cmd = IWEVCUSTOM; + iwe->u.data.length = strlen(buf); + start = iwe_stream_add_point(info, start, stop, + iwe, buf); + memset(iwe, 0, sizeof(*iwe)); + iwe->cmd = IWEVGENIE; + iwe->u.data.length = rsn_len; + start = iwe_stream_add_point(info, start, stop, iwe, + rsn_ie); + } + + return start; +} + +static noinline_for_stack char *translate_scan_wps(struct iw_request_info *info, + struct wlan_network *pnetwork, + struct iw_event *iwe, + char *start, char *stop) +{ + /* parsing WPS IE */ + u8 wps_ie[512]; + uint wps_ielen; + + if (r8712_get_wps_ie(pnetwork->network.IEs, + pnetwork->network.IELength, + wps_ie, 
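/*
 * An aside on the PCIE_FTS_NUM_L0 fix above, with worked values: '<<'
 * binds tighter than '&', so the old macro expanded to (x) & (0xff << 8)
 * and masked x with 0xff00 instead of shifting the low byte into place:
 *
 *	old: PCIE_FTS_NUM_L0(0x3f) == (0x3f & 0xff00)      == 0x0000
 *	new: PCIE_FTS_NUM_L0(0x3f) == ((0x3f & 0xff) << 8) == 0x3f00
 *
 * Hence the extra parentheses around ((x) & 0xff).
 */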
&wps_ielen)) { + if (wps_ielen > 2) { + iwe->cmd = IWEVGENIE; + iwe->u.data.length = (u16)wps_ielen; + start = iwe_stream_add_point(info, start, stop, + iwe, wps_ie); + } + } + + return start; +} + +static char *translate_scan(struct _adapter *padapter, + struct iw_request_info *info, + struct wlan_network *pnetwork, + char *start, char *stop) { struct iw_event iwe; struct ieee80211_ht_cap *pht_capie; @@ -240,73 +321,11 @@ static noinline_for_stack char *translate_scan(struct _adapter *padapter, /* Check if we added any event */ if ((current_val - start) > iwe_stream_lcp_len(info)) start = current_val; - /* parsing WPA/WPA2 IE */ - { - u8 buf[MAX_WPA_IE_LEN]; - u8 wpa_ie[255], rsn_ie[255]; - u16 wpa_len = 0, rsn_len = 0; - int n; - - r8712_get_sec_ie(pnetwork->network.IEs, - pnetwork->network.IELength, rsn_ie, &rsn_len, - wpa_ie, &wpa_len); - if (wpa_len > 0) { - memset(buf, 0, MAX_WPA_IE_LEN); - n = sprintf(buf, "wpa_ie="); - for (i = 0; i < wpa_len; i++) { - n += snprintf(buf + n, MAX_WPA_IE_LEN - n, - "%02x", wpa_ie[i]); - if (n >= MAX_WPA_IE_LEN) - break; - } - memset(&iwe, 0, sizeof(iwe)); - iwe.cmd = IWEVCUSTOM; - iwe.u.data.length = (u16)strlen(buf); - start = iwe_stream_add_point(info, start, stop, - &iwe, buf); - memset(&iwe, 0, sizeof(iwe)); - iwe.cmd = IWEVGENIE; - iwe.u.data.length = (u16)wpa_len; - start = iwe_stream_add_point(info, start, stop, - &iwe, wpa_ie); - } - if (rsn_len > 0) { - memset(buf, 0, MAX_WPA_IE_LEN); - n = sprintf(buf, "rsn_ie="); - for (i = 0; i < rsn_len; i++) { - n += snprintf(buf + n, MAX_WPA_IE_LEN - n, - "%02x", rsn_ie[i]); - if (n >= MAX_WPA_IE_LEN) - break; - } - memset(&iwe, 0, sizeof(iwe)); - iwe.cmd = IWEVCUSTOM; - iwe.u.data.length = strlen(buf); - start = iwe_stream_add_point(info, start, stop, - &iwe, buf); - memset(&iwe, 0, sizeof(iwe)); - iwe.cmd = IWEVGENIE; - iwe.u.data.length = rsn_len; - start = iwe_stream_add_point(info, start, stop, &iwe, - rsn_ie); - } - } - { /* parsing WPS IE */ - u8 wps_ie[512]; - uint wps_ielen; + start = translate_scan_wpa(info, pnetwork, &iwe, start, stop); + + start = translate_scan_wps(info, pnetwork, &iwe, start, stop); - if (r8712_get_wps_ie(pnetwork->network.IEs, - pnetwork->network.IELength, - wps_ie, &wps_ielen)) { - if (wps_ielen > 2) { - iwe.cmd = IWEVGENIE; - iwe.u.data.length = (u16)wps_ielen; - start = iwe_stream_add_point(info, start, stop, - &iwe, wps_ie); - } - } - } /* Add quality statistics */ iwe.cmd = IWEVQUAL; rssi = r8712_signal_scale_mapping(pnetwork->network.Rssi); diff --git a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c index 68f08dc18da97a..5e9187edeef4ec 100644 --- a/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c +++ b/drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c @@ -336,16 +336,13 @@ static void buffer_cb(struct vchiq_mmal_instance *instance, return; } else if (length == 0) { /* stream ended */ - if (buf) { - /* this should only ever happen if the port is - * disabled and there are buffers still queued + if (dev->capture.frame_count) { + /* empty buffer whilst capturing - expected to be an + * EOS, so grab another frame */ - vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR); - pr_debug("Empty buffer"); - } else if (dev->capture.frame_count) { - /* grab another frame */ if (is_capturing(dev)) { - pr_debug("Grab another frame"); + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, + "Grab another frame"); vchiq_mmal_port_parameter_set( instance, dev->capture.camera_port, @@ 
-353,8 +350,14 @@ static void buffer_cb(struct vchiq_mmal_instance *instance, &dev->capture.frame_count, sizeof(dev->capture.frame_count)); } + if (vchiq_mmal_submit_buffer(instance, port, buf)) + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, + "Failed to return EOS buffer"); } else { - /* signal frame completion */ + /* stopping streaming. + * return buffer, and signal frame completion + */ + vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR); complete(&dev->capture.frame_cmplt); } } else { @@ -576,6 +579,7 @@ static void stop_streaming(struct vb2_queue *vq) int ret; unsigned long timeout; struct bm2835_mmal_dev *dev = vb2_get_drv_priv(vq); + struct vchiq_mmal_port *port = dev->capture.port; v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "%s: dev:%p\n", __func__, dev); @@ -599,12 +603,6 @@ static void stop_streaming(struct vb2_queue *vq) &dev->capture.frame_count, sizeof(dev->capture.frame_count)); - /* wait for last frame to complete */ - timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, HZ); - if (timeout == 0) - v4l2_err(&dev->v4l2_dev, - "timed out waiting for frame completion\n"); - v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, "disabling connection\n"); @@ -619,6 +617,21 @@ static void stop_streaming(struct vb2_queue *vq) ret); } + /* wait for all buffers to be returned */ + while (atomic_read(&port->buffers_with_vpu)) { + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, + "%s: Waiting for buffers to be returned - %d outstanding\n", + __func__, atomic_read(&port->buffers_with_vpu)); + timeout = wait_for_completion_timeout(&dev->capture.frame_cmplt, + HZ); + if (timeout == 0) { + v4l2_err(&dev->v4l2_dev, "%s: Timeout waiting for buffers to be returned - %d outstanding\n", + __func__, + atomic_read(&port->buffers_with_vpu)); + break; + } + } + if (disable_camera(dev) < 0) v4l2_err(&dev->v4l2_dev, "Failed to disable camera\n"); } diff --git a/drivers/staging/vc04_services/bcm2835-camera/controls.c b/drivers/staging/vc04_services/bcm2835-camera/controls.c index dade79738a2947..12ac3ef61fe67f 100644 --- a/drivers/staging/vc04_services/bcm2835-camera/controls.c +++ b/drivers/staging/vc04_services/bcm2835-camera/controls.c @@ -603,15 +603,28 @@ static int ctrl_set_bitrate(struct bm2835_mmal_dev *dev, struct v4l2_ctrl *ctrl, const struct bm2835_mmal_v4l2_ctrl *mmal_ctrl) { + int ret; struct vchiq_mmal_port *encoder_out; dev->capture.encode_bitrate = ctrl->val; encoder_out = &dev->component[MMAL_COMPONENT_VIDEO_ENCODE]->output[0]; - return vchiq_mmal_port_parameter_set(dev->instance, encoder_out, - mmal_ctrl->mmal_id, &ctrl->val, - sizeof(ctrl->val)); + ret = vchiq_mmal_port_parameter_set(dev->instance, encoder_out, + mmal_ctrl->mmal_id, &ctrl->val, + sizeof(ctrl->val)); + + v4l2_dbg(1, bcm2835_v4l2_debug, &dev->v4l2_dev, + "%s: After: mmal_ctrl:%p ctrl id:0x%x ctrl val:%d ret %d(%d)\n", + __func__, mmal_ctrl, ctrl->id, ctrl->val, ret, + (ret == 0 ? 0 : -EINVAL)); + + /* + * Older firmware versions (pre July 2019) have a bug in handling + * MMAL_PARAMETER_VIDEO_BIT_RATE that result in the call + * returning -MMAL_MSG_STATUS_EINVAL. So ignore errors from this call. 
+ */ + return 0; } static int ctrl_set_bitrate_mode(struct bm2835_mmal_dev *dev, diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c index 16af735af5c359..29761f6c3b5533 100644 --- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c +++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.c @@ -161,7 +161,8 @@ struct vchiq_mmal_instance { void *bulk_scratch; struct idr context_map; - spinlock_t context_map_lock; + /* protect accesses to context_map */ + struct mutex context_map_lock; /* component to use next */ int component_idx; @@ -184,10 +185,10 @@ get_msg_context(struct vchiq_mmal_instance *instance) * that when we service the VCHI reply, we can look up what * message is being replied to. */ - spin_lock(&instance->context_map_lock); + mutex_lock(&instance->context_map_lock); handle = idr_alloc(&instance->context_map, msg_context, 0, 0, GFP_KERNEL); - spin_unlock(&instance->context_map_lock); + mutex_unlock(&instance->context_map_lock); if (handle < 0) { kfree(msg_context); @@ -211,9 +212,9 @@ release_msg_context(struct mmal_msg_context *msg_context) { struct vchiq_mmal_instance *instance = msg_context->instance; - spin_lock(&instance->context_map_lock); + mutex_lock(&instance->context_map_lock); idr_remove(&instance->context_map, msg_context->handle); - spin_unlock(&instance->context_map_lock); + mutex_unlock(&instance->context_map_lock); kfree(msg_context); } @@ -239,6 +240,8 @@ static void buffer_work_cb(struct work_struct *work) struct mmal_msg_context *msg_context = container_of(work, struct mmal_msg_context, u.bulk.work); + atomic_dec(&msg_context->u.bulk.port->buffers_with_vpu); + msg_context->u.bulk.port->buffer_cb(msg_context->u.bulk.instance, msg_context->u.bulk.port, msg_context->u.bulk.status, @@ -287,8 +290,6 @@ static int bulk_receive(struct vchiq_mmal_instance *instance, /* store length */ msg_context->u.bulk.buffer_used = rd_len; - msg_context->u.bulk.mmal_flags = - msg->u.buffer_from_host.buffer_header.flags; msg_context->u.bulk.dts = msg->u.buffer_from_host.buffer_header.dts; msg_context->u.bulk.pts = msg->u.buffer_from_host.buffer_header.pts; @@ -379,6 +380,8 @@ buffer_from_host(struct vchiq_mmal_instance *instance, /* initialise work structure ready to schedule callback */ INIT_WORK(&msg_context->u.bulk.work, buffer_work_cb); + atomic_inc(&port->buffers_with_vpu); + /* prep the buffer from host message */ memset(&m, 0xbc, sizeof(m)); /* just to make debug clearer */ @@ -447,6 +450,9 @@ static void buffer_to_host_cb(struct vchiq_mmal_instance *instance, return; } + msg_context->u.bulk.mmal_flags = + msg->u.buffer_from_host.buffer_header.flags; + if (msg->h.status != MMAL_MSG_STATUS_SUCCESS) { /* message reception had an error */ pr_warn("error %d in reply\n", msg->h.status); @@ -1323,16 +1329,6 @@ static int port_enable(struct vchiq_mmal_instance *instance, if (port->enabled) return 0; - /* ensure there are enough buffers queued to cover the buffer headers */ - if (port->buffer_cb) { - hdr_count = 0; - list_for_each(buf_head, &port->buffers) { - hdr_count++; - } - if (hdr_count < port->current_buffer.num) - return -ENOSPC; - } - ret = port_action_port(instance, port, MMAL_MSG_PORT_ACTION_TYPE_ENABLE); if (ret) @@ -1849,7 +1845,7 @@ int vchiq_mmal_init(struct vchiq_mmal_instance **out_instance) instance->bulk_scratch = vmalloc(PAGE_SIZE); - spin_lock_init(&instance->context_map_lock); + mutex_init(&instance->context_map_lock); idr_init_base(&instance->context_map, 1); 
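/*
 * Why the context-map lock above became a mutex, sketched with the names
 * used in this file: idr_alloc(..., GFP_KERNEL) may sleep to reclaim
 * memory, which is forbidden under a spinlock. Keeping the spinlock would
 * have required the atomic preload idiom instead:
 *
 *	idr_preload(GFP_KERNEL);		// may sleep, outside the lock
 *	spin_lock(&instance->context_map_lock);
 *	handle = idr_alloc(&instance->context_map, ctx, 0, 0, GFP_NOWAIT);
 *	spin_unlock(&instance->context_map_lock);
 *	idr_preload_end();
 *
 * Since the callers of get_msg_context() and release_msg_context() may
 * sleep anyway, switching the lock type is the simpler fix.
 */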
params.callback_param = instance; diff --git a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h index 22b839ecd5f024..b0ee1716525b47 100644 --- a/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h +++ b/drivers/staging/vc04_services/bcm2835-camera/mmal-vchiq.h @@ -71,6 +71,9 @@ struct vchiq_mmal_port { struct list_head buffers; /* lock to serialise adding and removing buffers from list */ spinlock_t slock; + + /* Count of buffers the VPU has yet to return */ + atomic_t buffers_with_vpu; /* callback on buffer completion */ vchiq_mmal_buffer_cb buffer_cb; /* callback context */ diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c index c557c995372478..aa20fcaefa9da8 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_2835_arm.c @@ -523,7 +523,7 @@ create_pagelist(char __user *buf, size_t count, unsigned short type) (g_cache_line_size - 1)))) { char *fragments; - if (down_killable(&g_free_fragments_sema)) { + if (down_interruptible(&g_free_fragments_sema) != 0) { cleanup_pagelistinfo(pagelistinfo); return NULL; } diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c index ab7d6a0ce94cac..62d8f599e76549 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c @@ -532,7 +532,8 @@ add_completion(VCHIQ_INSTANCE_T instance, VCHIQ_REASON_T reason, vchiq_log_trace(vchiq_arm_log_level, "%s - completion queue full", __func__); DEBUG_COUNT(COMPLETION_QUEUE_FULL_COUNT); - if (wait_for_completion_killable(&instance->remove_event)) { + if (wait_for_completion_interruptible( + &instance->remove_event)) { vchiq_log_info(vchiq_arm_log_level, "service_callback interrupted"); return VCHIQ_RETRY; @@ -643,7 +644,7 @@ service_callback(VCHIQ_REASON_T reason, struct vchiq_header *header, } DEBUG_TRACE(SERVICE_CALLBACK_LINE); - if (wait_for_completion_killable( + if (wait_for_completion_interruptible( &user_service->remove_event) != 0) { vchiq_log_info(vchiq_arm_log_level, @@ -978,7 +979,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) has been closed until the client library calls the CLOSE_DELIVERED ioctl, signalling close_event. 
*/ if (user_service->close_pending && - wait_for_completion_killable( + wait_for_completion_interruptible( &user_service->close_event)) status = VCHIQ_RETRY; break; @@ -1154,7 +1155,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) DEBUG_TRACE(AWAIT_COMPLETION_LINE); mutex_unlock(&instance->completion_mutex); - rc = wait_for_completion_killable( + rc = wait_for_completion_interruptible( &instance->insert_event); mutex_lock(&instance->completion_mutex); if (rc != 0) { @@ -1324,7 +1325,7 @@ vchiq_ioctl(struct file *file, unsigned int cmd, unsigned long arg) do { spin_unlock(&msg_queue_spinlock); DEBUG_TRACE(DEQUEUE_MESSAGE_LINE); - if (wait_for_completion_killable( + if (wait_for_completion_interruptible( &user_service->insert_event)) { vchiq_log_info(vchiq_arm_log_level, "DEQUEUE_MESSAGE interrupted"); @@ -2328,7 +2329,7 @@ vchiq_keepalive_thread_func(void *v) while (1) { long rc = 0, uc = 0; - if (wait_for_completion_killable(&arm_state->ka_evt) + if (wait_for_completion_interruptible(&arm_state->ka_evt) != 0) { vchiq_log_error(vchiq_susp_log_level, "%s interrupted", __func__); @@ -2579,7 +2580,7 @@ block_resume(struct vchiq_arm_state *arm_state) write_unlock_bh(&arm_state->susp_res_lock); vchiq_log_info(vchiq_susp_log_level, "%s wait for previously " "blocked clients", __func__); - if (wait_for_completion_killable_timeout( + if (wait_for_completion_interruptible_timeout( &arm_state->blocked_blocker, timeout_val) <= 0) { vchiq_log_error(vchiq_susp_log_level, "%s wait for " @@ -2605,7 +2606,7 @@ block_resume(struct vchiq_arm_state *arm_state) write_unlock_bh(&arm_state->susp_res_lock); vchiq_log_info(vchiq_susp_log_level, "%s wait for resume", __func__); - if (wait_for_completion_killable_timeout( + if (wait_for_completion_interruptible_timeout( &arm_state->vc_resume_complete, timeout_val) <= 0) { vchiq_log_error(vchiq_susp_log_level, "%s wait for " @@ -2812,7 +2813,7 @@ vchiq_arm_force_suspend(struct vchiq_state *state) do { write_unlock_bh(&arm_state->susp_res_lock); - rc = wait_for_completion_killable_timeout( + rc = wait_for_completion_interruptible_timeout( &arm_state->vc_suspend_complete, msecs_to_jiffies(FORCE_SUSPEND_TIMEOUT_MS)); @@ -2908,7 +2909,7 @@ vchiq_arm_allow_resume(struct vchiq_state *state) write_unlock_bh(&arm_state->susp_res_lock); if (resume) { - if (wait_for_completion_killable( + if (wait_for_completion_interruptible( &arm_state->vc_resume_complete) < 0) { vchiq_log_error(vchiq_susp_log_level, "%s interrupted", __func__); diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c index 0c387b6473a5e5..44bfa890e0e532 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_core.c @@ -395,13 +395,21 @@ remote_event_create(wait_queue_head_t *wq, struct remote_event *event) init_waitqueue_head(wq); } +/* + * All the event waiting routines in VCHIQ used a custom semaphore + * implementation that filtered most signals. This achieved a behaviour similar + * to the "killable" family of functions. While cleaning up this code all the + * routines were switched to the "interruptible" family of functions, as the + * former was deemed unjustified and the use of "killable" put all of + * VCHIQ's threads in D state.
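+ *
+ * As a concrete illustration of the difference (a sketch, not code from
+ * this change): both wait flavours return -ERESTARTSYS when interrupted,
+ * but a task in wait_for_completion_killable() sleeps in TASK_KILLABLE,
+ * which ps(1) reports as D state just like plain uninterruptible sleep,
+ * while wait_for_completion_interruptible() sleeps in TASK_INTERRUPTIBLE
+ * (S state):
+ *
+ *	if (wait_for_completion_interruptible(&done))	// any signal wakes
+ *		return VCHIQ_RETRY;
+ *	if (wait_for_completion_killable(&done))	// fatal signals only
+ *		return VCHIQ_RETRY;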
+ */ static inline int remote_event_wait(wait_queue_head_t *wq, struct remote_event *event) { if (!event->fired) { event->armed = 1; dsb(sy); - if (wait_event_killable(*wq, event->fired)) { + if (wait_event_interruptible(*wq, event->fired)) { event->armed = 0; return 0; } @@ -560,7 +568,7 @@ reserve_space(struct vchiq_state *state, size_t space, int is_blocking) remote_event_signal(&state->remote->trigger); if (!is_blocking || - (wait_for_completion_killable( + (wait_for_completion_interruptible( &state->slot_available_event))) return NULL; /* No space available */ } @@ -830,7 +838,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service, spin_unlock("a_spinlock); mutex_unlock(&state->slot_mutex); - if (wait_for_completion_killable( + if (wait_for_completion_interruptible( &state->data_quota_event)) return VCHIQ_RETRY; @@ -861,7 +869,7 @@ queue_message(struct vchiq_state *state, struct vchiq_service *service, service_quota->slot_use_count); VCHIQ_SERVICE_STATS_INC(service, quota_stalls); mutex_unlock(&state->slot_mutex); - if (wait_for_completion_killable( + if (wait_for_completion_interruptible( &service_quota->quota_event)) return VCHIQ_RETRY; if (service->closing) @@ -1710,7 +1718,8 @@ parse_rx_slots(struct vchiq_state *state) &service->bulk_rx : &service->bulk_tx; DEBUG_TRACE(PARSE_LINE); - if (mutex_lock_killable(&service->bulk_mutex)) { + if (mutex_lock_killable( + &service->bulk_mutex) != 0) { DEBUG_TRACE(PARSE_LINE); goto bail_not_ready; } @@ -2428,7 +2437,7 @@ vchiq_open_service_internal(struct vchiq_service *service, int client_id) QMFLAGS_IS_BLOCKING); if (status == VCHIQ_SUCCESS) { /* Wait for the ACK/NAK */ - if (wait_for_completion_killable(&service->remove_event)) { + if (wait_for_completion_interruptible(&service->remove_event)) { status = VCHIQ_RETRY; vchiq_release_service_internal(service); } else if ((service->srvstate != VCHIQ_SRVSTATE_OPEN) && @@ -2795,7 +2804,7 @@ vchiq_connect_internal(struct vchiq_state *state, VCHIQ_INSTANCE_T instance) } if (state->conn_state == VCHIQ_CONNSTATE_CONNECTING) { - if (wait_for_completion_killable(&state->connect)) + if (wait_for_completion_interruptible(&state->connect)) return VCHIQ_RETRY; vchiq_set_conn_state(state, VCHIQ_CONNSTATE_CONNECTED); @@ -2894,7 +2903,7 @@ vchiq_close_service(VCHIQ_SERVICE_HANDLE_T handle) } while (1) { - if (wait_for_completion_killable(&service->remove_event)) { + if (wait_for_completion_interruptible(&service->remove_event)) { status = VCHIQ_RETRY; break; } @@ -2955,7 +2964,7 @@ vchiq_remove_service(VCHIQ_SERVICE_HANDLE_T handle) request_poll(service->state, service, VCHIQ_POLL_REMOVE); } while (1) { - if (wait_for_completion_killable(&service->remove_event)) { + if (wait_for_completion_interruptible(&service->remove_event)) { status = VCHIQ_RETRY; break; } @@ -3038,7 +3047,7 @@ VCHIQ_STATUS_T vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, VCHIQ_SERVICE_STATS_INC(service, bulk_stalls); do { mutex_unlock(&service->bulk_mutex); - if (wait_for_completion_killable( + if (wait_for_completion_interruptible( &service->bulk_remove_event)) { status = VCHIQ_RETRY; goto error_exit; @@ -3115,7 +3124,7 @@ VCHIQ_STATUS_T vchiq_bulk_transfer(VCHIQ_SERVICE_HANDLE_T handle, if (bulk_waiter) { bulk_waiter->bulk = bulk; - if (wait_for_completion_killable(&bulk_waiter->event)) + if (wait_for_completion_interruptible(&bulk_waiter->event)) status = VCHIQ_RETRY; else if (bulk_waiter->actual == VCHIQ_BULK_ACTUAL_ABORTED) status = VCHIQ_ERROR; diff --git 
a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c index 6c519d8e48cbc8..8ee85c5e6f77e9 100644 --- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c +++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_util.c @@ -50,7 +50,7 @@ void vchiu_queue_push(struct vchiu_queue *queue, struct vchiq_header *header) return; while (queue->write == queue->read + queue->size) { - if (wait_for_completion_killable(&queue->pop)) + if (wait_for_completion_interruptible(&queue->pop)) flush_signals(current); } @@ -63,7 +63,7 @@ void vchiu_queue_push(struct vchiu_queue *queue, struct vchiq_header *header) struct vchiq_header *vchiu_queue_peek(struct vchiu_queue *queue) { while (queue->write == queue->read) { - if (wait_for_completion_killable(&queue->push)) + if (wait_for_completion_interruptible(&queue->push)) flush_signals(current); } @@ -77,7 +77,7 @@ struct vchiq_header *vchiu_queue_pop(struct vchiu_queue *queue) struct vchiq_header *header; while (queue->write == queue->read) { - if (wait_for_completion_killable(&queue->push)) + if (wait_for_completion_interruptible(&queue->push)) flush_signals(current); } diff --git a/drivers/staging/wilc1000/wilc_netdev.c b/drivers/staging/wilc1000/wilc_netdev.c index ba78c08a17f120..5338d7d2b248de 100644 --- a/drivers/staging/wilc1000/wilc_netdev.c +++ b/drivers/staging/wilc1000/wilc_netdev.c @@ -530,17 +530,17 @@ static int wilc_wlan_initialize(struct net_device *dev, struct wilc_vif *vif) goto fail_locks; } - if (wl->gpio_irq && init_irq(dev)) { - ret = -EIO; - goto fail_locks; - } - ret = wlan_initialize_threads(dev); if (ret < 0) { ret = -EIO; goto fail_wilc_wlan; } + if (wl->gpio_irq && init_irq(dev)) { + ret = -EIO; + goto fail_threads; + } + if (!wl->dev_irq_num && wl->hif_func->enable_interrupt && wl->hif_func->enable_interrupt(wl)) { @@ -596,7 +596,7 @@ static int wilc_wlan_initialize(struct net_device *dev, struct wilc_vif *vif) fail_irq_init: if (wl->dev_irq_num) deinit_irq(dev); - +fail_threads: wlan_deinitialize_threads(dev); fail_wilc_wlan: wilc_wlan_cleanup(dev); diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c index d2f3310abe543a..682300713be470 100644 --- a/drivers/tty/serial/8250/8250_port.c +++ b/drivers/tty/serial/8250/8250_port.c @@ -1869,8 +1869,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) status = serial_port_in(port, UART_LSR); - if (status & (UART_LSR_DR | UART_LSR_BI) && - iir & UART_IIR_RDI) { + if (status & (UART_LSR_DR | UART_LSR_BI)) { if (!up->dma || handle_rx_dma(up, iir)) status = serial8250_rx_chars(up, status); } diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c index 8b499d643461e9..8e41d70fd29805 100644 --- a/drivers/usb/dwc2/core.c +++ b/drivers/usb/dwc2/core.c @@ -531,7 +531,7 @@ int dwc2_core_reset(struct dwc2_hsotg *hsotg, bool skip_wait) } /* Wait for AHB master IDLE state */ - if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 50)) { + if (dwc2_hsotg_wait_bit_set(hsotg, GRSTCTL, GRSTCTL_AHBIDLE, 10000)) { dev_warn(hsotg->dev, "%s: HANG! 
AHB Idle timeout GRSTCTL GRSTCTL_AHBIDLE\n", __func__); return -EBUSY; diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c index 47be961f1bf3ff..c7ed90084d1a54 100644 --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -997,7 +997,6 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) * earlier */ gadget = epfile->ffs->gadget; - io_data->use_sg = gadget->sg_supported && data_len > PAGE_SIZE; spin_lock_irq(&epfile->ffs->eps_lock); /* In the meantime, endpoint got disabled or changed. */ @@ -1012,6 +1011,8 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data) */ if (io_data->read) data_len = usb_ep_align_maybe(gadget, ep->ep, data_len); + + io_data->use_sg = gadget->sg_supported && data_len > PAGE_SIZE; spin_unlock_irq(&epfile->ffs->eps_lock); data = ffs_alloc_buffer(io_data, data_len); diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c index 737bd77a575daa..2929bb47a618ab 100644 --- a/drivers/usb/gadget/function/u_ether.c +++ b/drivers/usb/gadget/function/u_ether.c @@ -186,11 +186,12 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags) out = dev->port_usb->out_ep; else out = NULL; - spin_unlock_irqrestore(&dev->lock, flags); if (!out) + { + spin_unlock_irqrestore(&dev->lock, flags); return -ENOTCONN; - + } /* Padding up to RX_EXTRA handles minor disagreements with host. * Normally we use the USB "terminate on short read" convention; @@ -214,6 +215,7 @@ rx_submit(struct eth_dev *dev, struct usb_request *req, gfp_t gfp_flags) if (dev->port_usb->is_fixed) size = max_t(size_t, size, dev->port_usb->fixed_out_len); + spin_unlock_irqrestore(&dev->lock, flags); skb = __netdev_alloc_skb(dev->net, size + NET_IP_ALIGN, gfp_flags); if (skb == NULL) { diff --git a/drivers/usb/musb/sunxi.c b/drivers/usb/musb/sunxi.c index 832a41f9ee7d35..41ac2a33f11811 100644 --- a/drivers/usb/musb/sunxi.c +++ b/drivers/usb/musb/sunxi.c @@ -714,14 +714,17 @@ static int sunxi_musb_probe(struct platform_device *pdev) INIT_WORK(&glue->work, sunxi_musb_work); glue->host_nb.notifier_call = sunxi_musb_host_notifier; - if (of_device_is_compatible(np, "allwinner,sun4i-a10-musb")) + if (of_device_is_compatible(np, "allwinner,sun4i-a10-musb") || + of_device_is_compatible(np, "allwinner,suniv-f1c100s-musb")) { set_bit(SUNXI_MUSB_FL_HAS_SRAM, &glue->flags); + } if (of_device_is_compatible(np, "allwinner,sun6i-a31-musb")) set_bit(SUNXI_MUSB_FL_HAS_RESET, &glue->flags); if (of_device_is_compatible(np, "allwinner,sun8i-a33-musb") || - of_device_is_compatible(np, "allwinner,sun8i-h3-musb")) { + of_device_is_compatible(np, "allwinner,sun8i-h3-musb") || + of_device_is_compatible(np, "allwinner,suniv-f1c100s-musb")) { set_bit(SUNXI_MUSB_FL_HAS_RESET, &glue->flags); set_bit(SUNXI_MUSB_FL_NO_CONFIGDATA, &glue->flags); } @@ -812,6 +815,7 @@ static int sunxi_musb_remove(struct platform_device *pdev) } static const struct of_device_id sunxi_musb_match[] = { + { .compatible = "allwinner,suniv-f1c100s-musb", }, { .compatible = "allwinner,sun4i-a10-musb", }, { .compatible = "allwinner,sun6i-a31-musb", }, { .compatible = "allwinner,sun8i-a33-musb", }, diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c index 39fa2fc1b8b767..6036cbae8c78d2 100644 --- a/drivers/usb/renesas_usbhs/fifo.c +++ b/drivers/usb/renesas_usbhs/fifo.c @@ -802,9 +802,8 @@ static int __usbhsf_dma_map_ctrl(struct usbhs_pkt *pkt, int map) } static void 
usbhsf_dma_complete(void *arg); -static void xfer_work(struct work_struct *work) +static void usbhsf_dma_xfer_preparing(struct usbhs_pkt *pkt) { - struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work); struct usbhs_pipe *pipe = pkt->pipe; struct usbhs_fifo *fifo; struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe); @@ -812,12 +811,10 @@ static void xfer_work(struct work_struct *work) struct dma_chan *chan; struct device *dev = usbhs_priv_to_dev(priv); enum dma_transfer_direction dir; - unsigned long flags; - usbhs_lock(priv, flags); fifo = usbhs_pipe_to_fifo(pipe); if (!fifo) - goto xfer_work_end; + return; chan = usbhsf_dma_chan_get(fifo, pkt); dir = usbhs_pipe_is_dir_in(pipe) ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV; @@ -826,7 +823,7 @@ static void xfer_work(struct work_struct *work) pkt->trans, dir, DMA_PREP_INTERRUPT | DMA_CTRL_ACK); if (!desc) - goto xfer_work_end; + return; desc->callback = usbhsf_dma_complete; desc->callback_param = pipe; @@ -834,7 +831,7 @@ static void xfer_work(struct work_struct *work) pkt->cookie = dmaengine_submit(desc); if (pkt->cookie < 0) { dev_err(dev, "Failed to submit dma descriptor\n"); - goto xfer_work_end; + return; } dev_dbg(dev, " %s %d (%d/ %d)\n", @@ -845,8 +842,17 @@ static void xfer_work(struct work_struct *work) dma_async_issue_pending(chan); usbhsf_dma_start(pipe, fifo); usbhs_pipe_enable(pipe); +} + +static void xfer_work(struct work_struct *work) +{ + struct usbhs_pkt *pkt = container_of(work, struct usbhs_pkt, work); + struct usbhs_pipe *pipe = pkt->pipe; + struct usbhs_priv *priv = usbhs_pipe_to_priv(pipe); + unsigned long flags; -xfer_work_end: + usbhs_lock(priv, flags); + usbhsf_dma_xfer_preparing(pkt); usbhs_unlock(priv, flags); } @@ -899,8 +905,13 @@ static int usbhsf_dma_prepare_push(struct usbhs_pkt *pkt, int *is_done) pkt->trans = len; usbhsf_tx_irq_ctrl(pipe, 0); - INIT_WORK(&pkt->work, xfer_work); - schedule_work(&pkt->work); + /* FIXME: Workaround for usb dmac so that the driver can be used in atomic context */ + if (usbhs_get_dparam(priv, has_usb_dmac)) { + usbhsf_dma_xfer_preparing(pkt); + } else { + INIT_WORK(&pkt->work, xfer_work); + schedule_work(&pkt->work); + } return 0; @@ -1006,8 +1017,7 @@ static int usbhsf_dma_prepare_pop_with_usb_dmac(struct usbhs_pkt *pkt, pkt->trans = pkt->length; - INIT_WORK(&pkt->work, xfer_work); - schedule_work(&pkt->work); + usbhsf_dma_xfer_preparing(pkt); return 0; diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c index 1d8461ae2c3403..23669a584bae4a 100644 --- a/drivers/usb/serial/ftdi_sio.c +++ b/drivers/usb/serial/ftdi_sio.c @@ -1029,6 +1029,7 @@ static const struct usb_device_id id_table_combined[] = { { USB_DEVICE(AIRBUS_DS_VID, AIRBUS_DS_P8GR) }, /* EZPrototypes devices */ { USB_DEVICE(EZPROTOTYPES_VID, HJELMSLUND_USB485_ISO_PID) }, + { USB_DEVICE_INTERFACE_NUMBER(UNJO_VID, UNJO_ISODEBUG_V1_PID, 1) }, { } /* Terminating entry */ }; diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h index 5755f0df002589..f12d806220b4a9 100644 --- a/drivers/usb/serial/ftdi_sio_ids.h +++ b/drivers/usb/serial/ftdi_sio_ids.h @@ -1543,3 +1543,9 @@ #define CHETCO_SEASMART_DISPLAY_PID 0xA5AD /* SeaSmart NMEA2000 Display */ #define CHETCO_SEASMART_LITE_PID 0xA5AE /* SeaSmart Lite USB Adapter */ #define CHETCO_SEASMART_ANALOG_PID 0xA5AF /* SeaSmart Analog Adapter */ + +/* + * Unjo AB + */ +#define UNJO_VID 0x22B7 +#define UNJO_ISODEBUG_V1_PID 0x150D diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c index a0aaf06353596c..c1582fbd115032 100644 ---
a/drivers/usb/serial/option.c +++ b/drivers/usb/serial/option.c @@ -1343,6 +1343,7 @@ static const struct usb_device_id option_ids[] = { .driver_info = RSVD(4) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0414, 0xff, 0xff, 0xff) }, { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0417, 0xff, 0xff, 0xff) }, + { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0601, 0xff) }, /* GosunCn ZTE WeLink ME3630 (RNDIS mode) */ { USB_DEVICE_INTERFACE_CLASS(ZTE_VENDOR_ID, 0x0602, 0xff) }, /* GosunCn ZTE WeLink ME3630 (MBIM mode) */ { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1008, 0xff, 0xff, 0xff), .driver_info = RSVD(4) }, diff --git a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c index c674abe3cf99dc..a38d1409f15b7c 100644 --- a/drivers/usb/typec/tps6598x.c +++ b/drivers/usb/typec/tps6598x.c @@ -41,7 +41,7 @@ #define TPS_STATUS_VCONN(s) (!!((s) & BIT(7))) /* TPS_REG_SYSTEM_CONF bits */ -#define TPS_SYSCONF_PORTINFO(c) ((c) & 3) +#define TPS_SYSCONF_PORTINFO(c) ((c) & 7) enum { TPS_PORTINFO_SINK, @@ -127,7 +127,7 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len) } static int tps6598x_block_write(struct tps6598x *tps, u8 reg, - void *val, size_t len) + const void *val, size_t len) { u8 data[TPS_MAX_LEN + 1]; @@ -173,7 +173,7 @@ static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val) static inline int tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val) { - return tps6598x_block_write(tps, reg, &val, sizeof(u32)); + return tps6598x_block_write(tps, reg, val, 4); } static int tps6598x_read_partner_identity(struct tps6598x *tps) diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c index d536889ac31bfe..4941fe8471ceff 100644 --- a/fs/crypto/policy.c +++ b/fs/crypto/policy.c @@ -81,6 +81,8 @@ int fscrypt_ioctl_set_policy(struct file *filp, const void __user *arg) if (ret == -ENODATA) { if (!S_ISDIR(inode->i_mode)) ret = -ENOTDIR; + else if (IS_DEADDIR(inode)) + ret = -ENOENT; else if (!inode->i_sb->s_cop->empty_dir(inode)) ret = -ENOTEMPTY; else diff --git a/fs/iomap.c b/fs/iomap.c index 12654c2e78f8ec..da961fca318052 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -333,7 +333,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, if (iop) atomic_inc(&iop->read_count); - if (!ctx->bio || !is_contig || bio_full(ctx->bio)) { + if (!ctx->bio || !is_contig || bio_full(ctx->bio, plen)) { gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL); int nr_vecs = (length + PAGE_SIZE - 1) >> PAGE_SHIFT; diff --git a/fs/udf/inode.c b/fs/udf/inode.c index e7276932e433c9..9bb18311a22fc1 100644 --- a/fs/udf/inode.c +++ b/fs/udf/inode.c @@ -470,13 +470,15 @@ static struct buffer_head *udf_getblk(struct inode *inode, udf_pblk_t block, return NULL; } -/* Extend the file by 'blocks' blocks, return the number of extents added */ +/* Extend the file with new blocks totaling 'new_block_bytes', + * return the number of extents added + */ static int udf_do_extend_file(struct inode *inode, struct extent_position *last_pos, struct kernel_long_ad *last_ext, - sector_t blocks) + loff_t new_block_bytes) { - sector_t add; + uint32_t add; int count = 0, fake = !(last_ext->extLength & UDF_EXTENT_LENGTH_MASK); struct super_block *sb = inode->i_sb; struct kernel_lb_addr prealloc_loc = {}; @@ -486,7 +488,7 @@ static int udf_do_extend_file(struct inode *inode, /* The previous extent is fake and we should not extend by anything * - there's nothing to do... 
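/*
 * An aside on the tps6598x_write_4cc() fix above: 'val' is already the
 * pointer to the four command characters, so passing '&val' sent four
 * bytes of the pointer variable itself instead of the string:
 *
 *	const char *val = "SSPS";
 *	tps6598x_block_write(tps, reg, val, 4);		// "SSPS" on the wire
 *	tps6598x_block_write(tps, reg, &val, 4);	// low bytes of the pointer
 *
 * Spelling the length as 4 instead of sizeof(u32) is equivalent on the
 * wire but makes the "exactly four characters" intent explicit.
 */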
*/ - if (!blocks && fake) + if (!new_block_bytes && fake) return 0; iinfo = UDF_I(inode); @@ -517,13 +519,12 @@ static int udf_do_extend_file(struct inode *inode, /* Can we merge with the previous extent? */ if ((last_ext->extLength & UDF_EXTENT_FLAG_MASK) == EXT_NOT_RECORDED_NOT_ALLOCATED) { - add = ((1 << 30) - sb->s_blocksize - - (last_ext->extLength & UDF_EXTENT_LENGTH_MASK)) >> - sb->s_blocksize_bits; - if (add > blocks) - add = blocks; - blocks -= add; - last_ext->extLength += add << sb->s_blocksize_bits; + add = (1 << 30) - sb->s_blocksize - + (last_ext->extLength & UDF_EXTENT_LENGTH_MASK); + if (add > new_block_bytes) + add = new_block_bytes; + new_block_bytes -= add; + last_ext->extLength += add; } if (fake) { @@ -544,28 +545,27 @@ static int udf_do_extend_file(struct inode *inode, } /* Managed to do everything necessary? */ - if (!blocks) + if (!new_block_bytes) goto out; /* All further extents will be NOT_RECORDED_NOT_ALLOCATED */ last_ext->extLocation.logicalBlockNum = 0; last_ext->extLocation.partitionReferenceNum = 0; - add = (1 << (30-sb->s_blocksize_bits)) - 1; - last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | - (add << sb->s_blocksize_bits); + add = (1 << 30) - sb->s_blocksize; + last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | add; /* Create enough extents to cover the whole hole */ - while (blocks > add) { - blocks -= add; + while (new_block_bytes > add) { + new_block_bytes -= add; err = udf_add_aext(inode, last_pos, &last_ext->extLocation, last_ext->extLength, 1); if (err) return err; count++; } - if (blocks) { + if (new_block_bytes) { last_ext->extLength = EXT_NOT_RECORDED_NOT_ALLOCATED | - (blocks << sb->s_blocksize_bits); + new_block_bytes; err = udf_add_aext(inode, last_pos, &last_ext->extLocation, last_ext->extLength, 1); if (err) @@ -596,6 +596,24 @@ static int udf_do_extend_file(struct inode *inode, return count; } +/* Extend the final block of the file to final_block_len bytes */ +static void udf_do_extend_final_block(struct inode *inode, + struct extent_position *last_pos, + struct kernel_long_ad *last_ext, + uint32_t final_block_len) +{ + struct super_block *sb = inode->i_sb; + uint32_t added_bytes; + + added_bytes = final_block_len - + (last_ext->extLength & (sb->s_blocksize - 1)); + last_ext->extLength += added_bytes; + UDF_I(inode)->i_lenExtents += added_bytes; + + udf_write_aext(inode, last_pos, &last_ext->extLocation, + last_ext->extLength, 1); +} + static int udf_extend_file(struct inode *inode, loff_t newsize) { @@ -605,10 +623,12 @@ static int udf_extend_file(struct inode *inode, loff_t newsize) int8_t etype; struct super_block *sb = inode->i_sb; sector_t first_block = newsize >> sb->s_blocksize_bits, offset; + unsigned long partial_final_block; int adsize; struct udf_inode_info *iinfo = UDF_I(inode); struct kernel_long_ad extent; - int err; + int err = 0; + int within_final_block; if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT) adsize = sizeof(struct short_ad); @@ -618,18 +638,8 @@ static int udf_extend_file(struct inode *inode, loff_t newsize) BUG(); etype = inode_bmap(inode, first_block, &epos, &eloc, &elen, &offset); + within_final_block = (etype != -1); - /* File has extent covering the new size (could happen when extending - * inside a block)? */ - if (etype != -1) - return 0; - if (newsize & (sb->s_blocksize - 1)) - offset++; - /* Extended file just to the boundary of the last file block? 
*/ - if (offset == 0) - return 0; - - /* Truncate is extending the file by 'offset' blocks */ if ((!epos.bh && epos.offset == udf_file_entry_alloc_offset(inode)) || (epos.bh && epos.offset == sizeof(struct allocExtDesc))) { /* File has no extents at all or has empty last @@ -643,7 +653,22 @@ static int udf_extend_file(struct inode *inode, loff_t newsize) &extent.extLength, 0); extent.extLength |= etype << 30; } - err = udf_do_extend_file(inode, &epos, &extent, offset); + + partial_final_block = newsize & (sb->s_blocksize - 1); + + /* File has extent covering the new size (could happen when extending + * inside a block)? + */ + if (within_final_block) { + /* Extending file within the last file block */ + udf_do_extend_final_block(inode, &epos, &extent, + partial_final_block); + } else { + loff_t add = ((loff_t)offset << sb->s_blocksize_bits) | + partial_final_block; + err = udf_do_extend_file(inode, &epos, &extent, add); + } + if (err < 0) goto out; err = 0; @@ -745,6 +770,7 @@ static sector_t inode_getblk(struct inode *inode, sector_t block, /* Are we beyond EOF? */ if (etype == -1) { int ret; + loff_t hole_len; isBeyondEOF = true; if (count) { if (c) @@ -760,7 +786,8 @@ static sector_t inode_getblk(struct inode *inode, sector_t block, startnum = (offset > 0); } /* Create extents for the hole between EOF and offset */ - ret = udf_do_extend_file(inode, &prev_epos, laarr, offset); + hole_len = (loff_t)offset << inode->i_blkbits; + ret = udf_do_extend_file(inode, &prev_epos, laarr, hole_len); if (ret < 0) { *err = ret; newblock = 0; diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 8da5e6637771c6..11f703d4a60568 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -782,7 +782,7 @@ xfs_add_to_ioend( atomic_inc(&iop->write_count); if (!merged) { - if (bio_full(wpc->ioend->io_bio)) + if (bio_full(wpc->ioend->io_bio, len)) xfs_chain_bio(wpc->ioend, wbc, bdev, sector); bio_add_page(wpc->ioend->io_bio, page, len, poff); } diff --git a/include/linux/bio.h b/include/linux/bio.h index f87abaa898f003..e36b8fc1b1c333 100644 --- a/include/linux/bio.h +++ b/include/linux/bio.h @@ -102,9 +102,23 @@ static inline void *bio_data(struct bio *bio) return NULL; } -static inline bool bio_full(struct bio *bio) +/** + * bio_full - check if the bio is full + * @bio: bio to check + * @len: length of one segment to be added + * + * Return true if @bio is full and one segment with @len bytes can't be + * added to the bio, otherwise return false + */ +static inline bool bio_full(struct bio *bio, unsigned len) { - return bio->bi_vcnt >= bio->bi_max_vecs; + if (bio->bi_vcnt >= bio->bi_max_vecs) + return true; + + if (bio->bi_iter.bi_size > UINT_MAX - len) + return true; + + return false; } static inline bool bio_next_segment(const struct bio *bio, diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h index 5c6062206760a4..52ec0d9fa1f761 100644 --- a/include/linux/cpuhotplug.h +++ b/include/linux/cpuhotplug.h @@ -176,6 +176,7 @@ enum cpuhp_state { CPUHP_AP_WATCHDOG_ONLINE, CPUHP_AP_WORKQUEUE_ONLINE, CPUHP_AP_RCUTREE_ONLINE, + CPUHP_AP_BASE_CACHEINFO_ONLINE, CPUHP_AP_ONLINE_DYN, CPUHP_AP_ONLINE_DYN_END = CPUHP_AP_ONLINE_DYN + 30, CPUHP_AP_X86_HPET_ONLINE, diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h index 77ac9c7b94830b..762f793e92f6da 100644 --- a/include/linux/vmw_vmci_defs.h +++ b/include/linux/vmw_vmci_defs.h @@ -62,9 +62,18 @@ enum { /* * A single VMCI device has an upper limit of 128MB on the amount of - * memory that can be used for queue pairs. 
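/*
 * Worked numbers for the VMCI limits defined below, assuming 4 KiB pages
 * (both values scale with PAGE_SIZE):
 *
 *	VMCI_MAX_GUEST_QP_COUNT       = 128 MiB / 4 KiB / 2 = 16384
 *	VMCI_MAX_GUEST_DOORBELL_COUNT = PAGE_SIZE           = 4096
 *
 * i.e. a guest is capped at 16384 queue pairs (each needs at least two
 * pages) and 4096 doorbells (one per byte of the doorbell bitmap page).
 */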
+ * memory that can be used for queue pairs. Since each queue pair + * consists of at least two pages, the memory limit also dictates the + * number of queue pairs a guest can create. */ #define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024) +#define VMCI_MAX_GUEST_QP_COUNT (VMCI_MAX_GUEST_QP_MEMORY / PAGE_SIZE / 2) + +/* + * There can be at most PAGE_SIZE doorbells since there is one doorbell + * per byte in the doorbell bitmap page. + */ +#define VMCI_MAX_GUEST_DOORBELL_COUNT PAGE_SIZE /* * Queues with pre-mapped data pages must be small, so that we don't pin diff --git a/include/uapi/linux/nilfs2_ondisk.h b/include/uapi/linux/nilfs2_ondisk.h index a7e66ab11d1d65..c23f91ae5fe8b1 100644 --- a/include/uapi/linux/nilfs2_ondisk.h +++ b/include/uapi/linux/nilfs2_ondisk.h @@ -29,7 +29,7 @@ #include #include - +#include #define NILFS_INODE_BMAP_SIZE 7 @@ -533,19 +533,19 @@ enum { static inline void \ nilfs_checkpoint_set_##name(struct nilfs_checkpoint *cp) \ { \ - cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) | \ - (1UL << NILFS_CHECKPOINT_##flag)); \ + cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) | \ + (1UL << NILFS_CHECKPOINT_##flag)); \ } \ static inline void \ nilfs_checkpoint_clear_##name(struct nilfs_checkpoint *cp) \ { \ - cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) & \ + cp->cp_flags = __cpu_to_le32(__le32_to_cpu(cp->cp_flags) & \ ~(1UL << NILFS_CHECKPOINT_##flag)); \ } \ static inline int \ nilfs_checkpoint_##name(const struct nilfs_checkpoint *cp) \ { \ - return !!(le32_to_cpu(cp->cp_flags) & \ + return !!(__le32_to_cpu(cp->cp_flags) & \ (1UL << NILFS_CHECKPOINT_##flag)); \ } @@ -595,20 +595,20 @@ enum { static inline void \ nilfs_segment_usage_set_##name(struct nilfs_segment_usage *su) \ { \ - su->su_flags = cpu_to_le32(le32_to_cpu(su->su_flags) | \ + su->su_flags = __cpu_to_le32(__le32_to_cpu(su->su_flags) | \ (1UL << NILFS_SEGMENT_USAGE_##flag));\ } \ static inline void \ nilfs_segment_usage_clear_##name(struct nilfs_segment_usage *su) \ { \ su->su_flags = \ - cpu_to_le32(le32_to_cpu(su->su_flags) & \ + __cpu_to_le32(__le32_to_cpu(su->su_flags) & \ ~(1UL << NILFS_SEGMENT_USAGE_##flag)); \ } \ static inline int \ nilfs_segment_usage_##name(const struct nilfs_segment_usage *su) \ { \ - return !!(le32_to_cpu(su->su_flags) & \ + return !!(__le32_to_cpu(su->su_flags) & \ (1UL << NILFS_SEGMENT_USAGE_##flag)); \ } @@ -619,15 +619,15 @@ NILFS_SEGMENT_USAGE_FNS(ERROR, error) static inline void nilfs_segment_usage_set_clean(struct nilfs_segment_usage *su) { - su->su_lastmod = cpu_to_le64(0); - su->su_nblocks = cpu_to_le32(0); - su->su_flags = cpu_to_le32(0); + su->su_lastmod = __cpu_to_le64(0); + su->su_nblocks = __cpu_to_le32(0); + su->su_flags = __cpu_to_le32(0); } static inline int nilfs_segment_usage_clean(const struct nilfs_segment_usage *su) { - return !le32_to_cpu(su->su_flags); + return !__le32_to_cpu(su->su_flags); } /** diff --git a/include/uapi/linux/usb/audio.h b/include/uapi/linux/usb/audio.h index ddc5396800aa42..76b7c3f6cd0d7b 100644 --- a/include/uapi/linux/usb/audio.h +++ b/include/uapi/linux/usb/audio.h @@ -450,6 +450,43 @@ static inline __u8 *uac_processing_unit_specific(struct uac_processing_unit_desc } } +/* + * Extension Unit (XU) has almost compatible layout with Processing Unit, but + * on UAC2, it has a different bmControls size (bControlSize); it's 1 byte for + * XU while 2 bytes for PU. The last iExtension field is a one-byte index as + * well as iProcessing field of PU. 
+ */ +static inline __u8 uac_extension_unit_bControlSize(struct uac_processing_unit_descriptor *desc, + int protocol) +{ + switch (protocol) { + case UAC_VERSION_1: + return desc->baSourceID[desc->bNrInPins + 4]; + case UAC_VERSION_2: + return 1; /* in UAC2, this value is constant */ + case UAC_VERSION_3: + return 4; /* in UAC3, this value is constant */ + default: + return 1; + } +} + +static inline __u8 uac_extension_unit_iExtension(struct uac_processing_unit_descriptor *desc, + int protocol) +{ + __u8 control_size = uac_extension_unit_bControlSize(desc, protocol); + + switch (protocol) { + case UAC_VERSION_1: + case UAC_VERSION_2: + default: + return *(uac_processing_unit_bmControls(desc, protocol) + + control_size); + case UAC_VERSION_3: + return 0; /* UAC3 does not have this field */ + } +} + /* 4.5.2 Class-Specific AS Interface Descriptor */ struct uac1_as_header_descriptor { __u8 bLength; /* in bytes: 7 */ diff --git a/kernel/irq/autoprobe.c b/kernel/irq/autoprobe.c index 16cbf6beb27684..ae60cae24e9aa3 100644 --- a/kernel/irq/autoprobe.c +++ b/kernel/irq/autoprobe.c @@ -90,7 +90,7 @@ unsigned long probe_irq_on(void) /* It triggered already - consider it spurious. */ if (!(desc->istate & IRQS_WAITING)) { desc->istate &= ~IRQS_AUTODETECT; - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); } else if (i < 32) mask |= 1 << i; @@ -127,7 +127,7 @@ unsigned int probe_irq_mask(unsigned long val) mask |= 1 << i; desc->istate &= ~IRQS_AUTODETECT; - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); } raw_spin_unlock_irq(&desc->lock); } @@ -169,7 +169,7 @@ int probe_irq_off(unsigned long val) nr_of_irqs++; } desc->istate &= ~IRQS_AUTODETECT; - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); } raw_spin_unlock_irq(&desc->lock); } diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c index 29d6c7d070b4fd..3ff4a126088538 100644 --- a/kernel/irq/chip.c +++ b/kernel/irq/chip.c @@ -314,6 +314,12 @@ void irq_shutdown(struct irq_desc *desc) } irq_state_clr_started(desc); } +} + + +void irq_shutdown_and_deactivate(struct irq_desc *desc) +{ + irq_shutdown(desc); /* * This must be called even if the interrupt was never started up, * because the activation can happen before the interrupt is diff --git a/kernel/irq/cpuhotplug.c b/kernel/irq/cpuhotplug.c index 5b1072e394b26d..6c7ca2e983a595 100644 --- a/kernel/irq/cpuhotplug.c +++ b/kernel/irq/cpuhotplug.c @@ -116,7 +116,7 @@ static bool migrate_one_irq(struct irq_desc *desc) */ if (irqd_affinity_is_managed(d)) { irqd_set_managed_shutdown(d); - irq_shutdown(desc); + irq_shutdown_and_deactivate(desc); return false; } affinity = cpu_online_mask; diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h index 70c3053bc1f69a..3a948f41ab0056 100644 --- a/kernel/irq/internals.h +++ b/kernel/irq/internals.h @@ -82,6 +82,7 @@ extern int irq_activate_and_startup(struct irq_desc *desc, bool resend); extern int irq_startup(struct irq_desc *desc, bool resend, bool force); extern void irq_shutdown(struct irq_desc *desc); +extern void irq_shutdown_and_deactivate(struct irq_desc *desc); extern void irq_enable(struct irq_desc *desc); extern void irq_disable(struct irq_desc *desc); extern void irq_percpu_enable(struct irq_desc *desc, unsigned int cpu); @@ -96,6 +97,10 @@ static inline void irq_mark_irq(unsigned int irq) { } extern void irq_mark_irq(unsigned int irq); #endif +extern int __irq_get_irqchip_state(struct irq_data *data, + enum irqchip_irq_state which, + bool *state); + extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr); 
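/*
 * Usage sketch for __irq_get_irqchip_state() declared above (its
 * definition appears in manage.c further down): it walks the irq_data
 * chain towards the root domain until it finds a chip that provides
 * ->irq_get_irqchip_state():
 *
 *	bool in_flight = false;
 *
 *	if (!__irq_get_irqchip_state(irqd, IRQCHIP_STATE_ACTIVE, &in_flight))
 *		;	// query supported: in_flight is now valid
 *
 * When no chip in the chain implements the callback, the helper returns
 * -EINVAL and leaves the result untouched, which lets callers treat the
 * interrupt as "not in flight" by default.
 */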
irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags); diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index 78f3ddeb7fe44a..e8f7f179bf77e6 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include #include @@ -34,8 +35,9 @@ static int __init setup_forced_irqthreads(char *arg) early_param("threadirqs", setup_forced_irqthreads); #endif -static void __synchronize_hardirq(struct irq_desc *desc) +static void __synchronize_hardirq(struct irq_desc *desc, bool sync_chip) { + struct irq_data *irqd = irq_desc_get_irq_data(desc); bool inprogress; do { @@ -51,6 +53,20 @@ static void __synchronize_hardirq(struct irq_desc *desc) /* Ok, that indicated we're done: double-check carefully. */ raw_spin_lock_irqsave(&desc->lock, flags); inprogress = irqd_irq_inprogress(&desc->irq_data); + + /* + * If requested and supported, check at the chip whether it + * is in flight at the hardware level, i.e. already pending + * in a CPU and waiting for service and acknowledge. + */ + if (!inprogress && sync_chip) { + /* + * Ignore the return code. inprogress is only updated + * when the chip supports it. + */ + __irq_get_irqchip_state(irqd, IRQCHIP_STATE_ACTIVE, + &inprogress); + } raw_spin_unlock_irqrestore(&desc->lock, flags); /* Oops, that failed? */ @@ -73,13 +89,18 @@ static void __synchronize_hardirq(struct irq_desc *desc) * Returns: false if a threaded handler is active. * * This function may be called - with care - from IRQ context. + * + * It does not check whether there is an interrupt in flight at the + * hardware level, but not serviced yet, as this might deadlock when + * called with interrupts disabled and the target CPU of the interrupt + * is the current CPU. */ bool synchronize_hardirq(unsigned int irq) { struct irq_desc *desc = irq_to_desc(irq); if (desc) { - __synchronize_hardirq(desc); + __synchronize_hardirq(desc, false); return !atomic_read(&desc->threads_active); } @@ -95,14 +116,19 @@ EXPORT_SYMBOL(synchronize_hardirq); * to complete before returning. If you use this function while * holding a resource the IRQ handler may need you will deadlock. * - * This function may be called - with care - from IRQ context. + * Can only be called from preemptible code as it might sleep when + * an interrupt thread is associated to @irq. + * + * It optionally makes sure (when the irq chip supports that method) + * that the interrupt is not pending in any CPU and waiting for + * service. */ void synchronize_irq(unsigned int irq) { struct irq_desc *desc = irq_to_desc(irq); if (desc) { - __synchronize_hardirq(desc); + __synchronize_hardirq(desc, true); /* * We made sure that no hardirq handler is * running. Now verify that no threaded handlers are @@ -1699,6 +1725,7 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id) /* If this was the last handler, shut down the IRQ line: */ if (!desc->action) { irq_settings_clr_disable_unlazy(desc); + /* Only shutdown. Deactivate after synchronize_hardirq() */ irq_shutdown(desc); } @@ -1727,8 +1754,12 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id) unregister_handler_proc(irq, action); - /* Make sure it's not being used on another CPU: */ - synchronize_hardirq(irq); + /* + * Make sure it's not being used on another CPU and if the chip + * supports it also make sure that there is no (not yet serviced) + * interrupt in flight at the hardware level. 
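+ *
+ * The resulting teardown order in __free_irq(), condensed as a sketch:
+ *
+ *	irq_shutdown(desc);			// mask, but stay activated
+ *	__synchronize_hardirq(desc, true);	// wait out in-flight IRQs
+ *	irq_domain_deactivate_irq(&desc->irq_data); // only now deactivate
+ *	irq_release_resources(desc);
+ *
+ * Deactivating before the synchronization could tear down domain state
+ * while an interrupt is still pending at the hardware level.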
@@ -1699,6 +1725,7 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id)
 	/* If this was the last handler, shut down the IRQ line: */
 	if (!desc->action) {
 		irq_settings_clr_disable_unlazy(desc);
+		/* Only shutdown. Deactivate after synchronize_hardirq() */
 		irq_shutdown(desc);
 	}
 
@@ -1727,8 +1754,12 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id)
 
 	unregister_handler_proc(irq, action);
 
-	/* Make sure it's not being used on another CPU: */
-	synchronize_hardirq(irq);
+	/*
+	 * Make sure it's not being used on another CPU and if the chip
+	 * supports it also make sure that there is no (not yet serviced)
+	 * interrupt in flight at the hardware level.
+	 */
+	__synchronize_hardirq(desc, true);
 
 #ifdef CONFIG_DEBUG_SHIRQ
 	/*
@@ -1768,6 +1799,14 @@ static struct irqaction *__free_irq(struct irq_desc *desc, void *dev_id)
 	 * require it to deallocate resources over the slow bus.
 	 */
 	chip_bus_lock(desc);
+	/*
+	 * There is no interrupt on the fly anymore. Deactivate it
+	 * completely.
+	 */
+	raw_spin_lock_irqsave(&desc->lock, flags);
+	irq_domain_deactivate_irq(&desc->irq_data);
+	raw_spin_unlock_irqrestore(&desc->lock, flags);
+
 	irq_release_resources(desc);
 	chip_bus_sync_unlock(desc);
 	irq_remove_timings(desc);
@@ -1855,7 +1894,7 @@ static const void *__cleanup_nmi(unsigned int irq, struct irq_desc *desc)
 	}
 
 	irq_settings_clr_disable_unlazy(desc);
-	irq_shutdown(desc);
+	irq_shutdown_and_deactivate(desc);
 
 	irq_release_resources(desc);
 
@@ -2578,6 +2617,28 @@ void teardown_percpu_nmi(unsigned int irq)
 	irq_put_desc_unlock(desc, flags);
 }
 
+int __irq_get_irqchip_state(struct irq_data *data, enum irqchip_irq_state which,
+			    bool *state)
+{
+	struct irq_chip *chip;
+	int err = -EINVAL;
+
+	do {
+		chip = irq_data_get_irq_chip(data);
+		if (chip->irq_get_irqchip_state)
+			break;
+#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
+		data = data->parent_data;
+#else
+		data = NULL;
+#endif
+	} while (data);
+
+	if (data)
+		err = chip->irq_get_irqchip_state(data, which, state);
+	return err;
+}
+
 /**
 *	irq_get_irqchip_state - returns the irqchip state of a interrupt.
 *	@irq: Interrupt line that is forwarded to a VM
@@ -2596,7 +2657,6 @@ int irq_get_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
 {
 	struct irq_desc *desc;
 	struct irq_data *data;
-	struct irq_chip *chip;
 	unsigned long flags;
 	int err = -EINVAL;
 
@@ -2606,19 +2666,7 @@ int irq_get_irqchip_state(unsigned int irq, enum irqchip_irq_state which,
 
 	data = irq_desc_get_irq_data(desc);
 
-	do {
-		chip = irq_data_get_irq_chip(data);
-		if (chip->irq_get_irqchip_state)
-			break;
-#ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
-		data = data->parent_data;
-#else
-		data = NULL;
-#endif
-	} while (data);
-
-	if (data)
-		err = chip->irq_get_irqchip_state(data, which, state);
+	err = __irq_get_irqchip_state(data, which, state);
 
 	irq_put_desc_busunlock(desc, flags);
 	return err;
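The factored-out __irq_get_irqchip_state() walks irq_data->parent_data until some level of the irq domain hierarchy implements irq_get_irqchip_state(). The same parent-walk pattern, reduced to a standalone sketch with hypothetical node and callback names:

    #include <stddef.h>
    #include <stdio.h>

    struct node {
            int (*get_state)(struct node *n, int *state); /* may be NULL */
            struct node *parent;
    };

    /* Walk towards the root until a level implements the callback. */
    static int get_state(struct node *n, int *state)
    {
            while (n && !n->get_state)
                    n = n->parent;
            return n ? n->get_state(n, state) : -1; /* -1 stands in for -EINVAL */
    }

    static int root_get_state(struct node *n, int *state)
    {
            (void)n;
            *state = 1; /* pretend the root controller reports "active" */
            return 0;
    }

    int main(void)
    {
            struct node root = { root_get_state, NULL };
            struct node leaf = { NULL, &root };
            int state = 0;

            if (!get_state(&leaf, &state))
                    printf("state=%d\n", state); /* resolved at the root */
            return 0;
    }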
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 6f3a35949cdda7..f24a757f8239b0 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -3255,6 +3255,7 @@ static void alc256_init(struct hda_codec *codec)
 	alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
 	alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
 	alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
+	alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
 }
 
 static void alc256_shutup(struct hda_codec *codec)
@@ -7825,7 +7826,6 @@ static int patch_alc269(struct hda_codec *codec)
 		spec->shutup = alc256_shutup;
 		spec->init_hook = alc256_init;
 		spec->gen.mixer_nid = 0; /* ALC256 does not have any loopback mixer path */
-		alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
 		break;
 	case 0x10ec0257:
 		spec->codec_variant = ALC269_TYPE_ALC257;
diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
index c703f8534b0735..7498b5191b68e4 100644
--- a/sound/usb/mixer.c
+++ b/sound/usb/mixer.c
@@ -2303,7 +2303,7 @@ static struct procunit_info extunits[] = {
 */
 static int build_audio_procunit(struct mixer_build *state, int unitid,
 				void *raw_desc, struct procunit_info *list,
-				char *name)
+				bool extension_unit)
 {
 	struct uac_processing_unit_descriptor *desc = raw_desc;
 	int num_ins;
@@ -2320,6 +2320,8 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
 	static struct procunit_info default_info = {
 		0, NULL, default_value_info
 	};
+	const char *name = extension_unit ?
+		"Extension Unit" : "Processing Unit";
 
 	if (desc->bLength < 13) {
 		usb_audio_err(state->chip, "invalid %s descriptor (id %d)\n", name, unitid);
@@ -2433,7 +2435,10 @@ static int build_audio_procunit(struct mixer_build *state, int unitid,
 		} else if (info->name) {
 			strlcpy(kctl->id.name, info->name, sizeof(kctl->id.name));
 		} else {
-			nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol);
+			if (extension_unit)
+				nameid = uac_extension_unit_iExtension(desc, state->mixer->protocol);
+			else
+				nameid = uac_processing_unit_iProcessing(desc, state->mixer->protocol);
 			len = 0;
 			if (nameid)
 				len = snd_usb_copy_string_desc(state->chip,
@@ -2466,10 +2471,10 @@ static int parse_audio_processing_unit(struct mixer_build *state, int unitid,
 	case UAC_VERSION_2:
 	default:
 		return build_audio_procunit(state, unitid, raw_desc,
-					    procunits, "Processing Unit");
+					    procunits, false);
 	case UAC_VERSION_3:
 		return build_audio_procunit(state, unitid, raw_desc,
-					    uac3_procunits, "Processing Unit");
+					    uac3_procunits, false);
 	}
 }
 
@@ -2480,8 +2485,7 @@ static int parse_audio_extension_unit(struct mixer_build *state, int unitid,
 	 * Note that we parse extension units with processing unit descriptors.
 	 * That's ok as the layout is the same.
 	 */
-	return build_audio_procunit(state, unitid, raw_desc,
-				    extunits, "Extension Unit");
+	return build_audio_procunit(state, unitid, raw_desc, extunits, true);
 }
 
 /*
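The mixer changes rely on the descriptor layout encoded by the new uac_extension_unit_*() helpers: in a UAC1 extension unit, bControlSize sits at offset bNrInPins + 4 from the start of baSourceID[], and iExtension immediately follows the bmControls bytes. A standalone sketch of that offset arithmetic on a raw descriptor; the byte values below are invented purely for illustration:

    /* Offsets inside a UAC1 extension unit descriptor, counted from the
     * start of baSourceID[] exactly as the kernel helpers above do. */
    #include <stdio.h>

    typedef unsigned char u8;

    static u8 uac1_ext_iExtension(const u8 *baSourceID, u8 bNrInPins)
    {
            u8 control_size = baSourceID[bNrInPins + 4];      /* bControlSize */
            const u8 *bmControls = &baSourceID[bNrInPins + 5];

            return bmControls[control_size];                  /* iExtension */
    }

    int main(void)
    {
            /* one input pin (source ID 5), stereo, bControlSize = 1,
             * iExtension string index = 9 */
            u8 bytes[] = { 5, 2, 0x03, 0x00, 0, 1, 0x00, 9 };

            printf("iExtension index = %d\n", uac1_ext_iExtension(bytes, 1));
            return 0;
    }

UAC2 and UAC3 fix bControlSize to 1 and 4 respectively, and UAC3 drops the iExtension field entirely, which is why the helpers special-case the protocol version.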
 	perf record -e intel_pt//u ls
-	perf script
+	perf script --itrace=ibxwpe
 
 An interesting field that is not printed by default is 'flags' which can be
 displayed as follows:
 
-	perf script -Fcomm,tid,pid,time,cpu,event,trace,ip,sym,dso,addr,symoff,flags
+	perf script --itrace=ibxwpe -F+flags
 
 The flags are "bcrosyiABEx" which stand for branch, call, return, conditional,
 system, asynchronous, interrupt, transaction abort, trace begin, trace end, and
@@ -713,7 +713,7 @@ Having no option is the same as
 
 which, in turn, is the same as
 
-	--itrace=ibxwpe
+	--itrace=cepwx
 
 The letters are:
diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index 66e82bd0683e2a..cfdbf65f1e0209 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -1001,7 +1001,8 @@ int itrace_parse_synth_opts(const struct option *opt, const char *str,
 	}
 
 	if (!str) {
-		itrace_synth_opts__set_default(synth_opts, false);
+		itrace_synth_opts__set_default(synth_opts,
+					       synth_opts->default_no_sample);
 		return 0;
 	}
 
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index 847ae51a524bf1..fb0aa661644b59 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -3602,6 +3602,7 @@ int perf_event__synthesize_features(struct perf_tool *tool,
 		return -ENOMEM;
 
 	ff.size = sz - sz_hdr;
+	ff.ph = &session->header;
 
 	for_each_set_bit(feat, header->adds_features, HEADER_FEAT_BITS) {
 		if (!feat_ops[feat].synthesize) {
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index d6f1b2a03f9b45..f7dd4657535d00 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -2579,7 +2579,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
 	} else {
 		itrace_synth_opts__set_default(&pt->synth_opts,
 				session->itrace_synth_opts->default_no_sample);
-		if (use_browser != -1) {
+		if (!session->itrace_synth_opts->default_no_sample &&
+		    !session->itrace_synth_opts->inject) {
 			pt->synth_opts.branches = false;
 			pt->synth_opts.callchain = true;
 		}
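The auxtrace and intel-pt hunks make the decoder honor default_no_sample: an empty --itrace string now inherits that flag instead of hard-coding false, and the branches-to-callchain rewrite is skipped when samples are suppressed or the trace is being injected. A reduced model of that decision, using simplified stand-in types rather than the real perf structures:

    #include <stdbool.h>
    #include <stdio.h>

    struct synth_opts {
            bool default_no_sample; /* decode but do not synthesize samples */
            bool inject;            /* perf-inject style pass-through */
            bool branches;
            bool callchain;
    };

    /* Simplified stand-in for itrace_synth_opts__set_default(). */
    static void set_default(struct synth_opts *o, bool no_sample)
    {
            o->branches = !no_sample;
            o->callchain = false;
    }

    static void setup(struct synth_opts *o)
    {
            set_default(o, o->default_no_sample);
            /* Only rewrite the defaults for ordinary reporting. */
            if (!o->default_no_sample && !o->inject) {
                    o->branches = false;
                    o->callchain = true;
            }
    }

    int main(void)
    {
            struct synth_opts o = { .default_no_sample = false, .inject = true };

            setup(&o);
            /* injection keeps branches=1, callchain=0 */
            printf("branches=%d callchain=%d\n", o.branches, o.callchain);
            return 0;
    }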
pe->pmu : "cpu"; - - /* - * uncore alias may be from different PMU - * with common prefix - */ - if (pmu_is_uncore(name) && - !strncmp(pname, name, strlen(pname))) - goto new_alias; + /* + * uncore alias may be from different PMU + * with common prefix + */ + if (pmu_is_uncore(name) && + !strncmp(pname, name, strlen(pname))) + goto new_alias; - if (strcmp(pname, name)) - continue; - } + if (strcmp(pname, name)) + continue; new_alias: /* need type casts to override 'const' */ diff --git a/tools/perf/util/thread-stack.c b/tools/perf/util/thread-stack.c index 4ba9e866b076ce..60c9d955c4d711 100644 --- a/tools/perf/util/thread-stack.c +++ b/tools/perf/util/thread-stack.c @@ -616,6 +616,23 @@ static int thread_stack__bottom(struct thread_stack *ts, true, false); } +static int thread_stack__pop_ks(struct thread *thread, struct thread_stack *ts, + struct perf_sample *sample, u64 ref) +{ + u64 tm = sample->time; + int err; + + /* Return to userspace, so pop all kernel addresses */ + while (thread_stack__in_kernel(ts)) { + err = thread_stack__call_return(thread, ts, --ts->cnt, + tm, ref, true); + if (err) + return err; + } + + return 0; +} + static int thread_stack__no_call_return(struct thread *thread, struct thread_stack *ts, struct perf_sample *sample, @@ -896,7 +913,18 @@ int thread_stack__process(struct thread *thread, struct comm *comm, ts->rstate = X86_RETPOLINE_DETECTED; } else if (sample->flags & PERF_IP_FLAG_RETURN) { - if (!sample->ip || !sample->addr) + if (!sample->addr) { + u32 return_from_kernel = PERF_IP_FLAG_SYSCALLRET | + PERF_IP_FLAG_INTERRUPT; + + if (!(sample->flags & return_from_kernel)) + return 0; + + /* Pop kernel stack */ + return thread_stack__pop_ks(thread, ts, sample, ref); + } + + if (!sample->ip) return 0; /* x86 retpoline 'return' doesn't match the stack */