Merge 5.7-rc5 into android-mainline
Linux 5.7-rc5

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I9424bf0b2cc798d1a40e7d19bd09d2898fa1b148
@@ -61,8 +61,8 @@ The ``ice`` driver reports the following versions
    - running
    - ICE OS Default Package
    - The name of the DDP package that is active in the device. The DDP
      package is loaded by the driver during initialization. Each varation
      of DDP package shall have a unique name.
      package is loaded by the driver during initialization. Each
      variation of the DDP package has a unique name.
  * - ``fw.app``
    - running
    - 1.3.1.0
@@ -28,3 +28,5 @@ KVM
   arm/index
   devices/index
   running-nested-guests
Documentation/virt/kvm/running-nested-guests.rst | 276 (new file)
@@ -0,0 +1,276 @@
==============================
Running nested guests with KVM
==============================

A nested guest is a guest that runs inside another guest (either layer
can be KVM-based or a different hypervisor). The straightforward
example is a KVM guest that in turn runs on a KVM guest (the rest of
this document is built on this example)::

               .----------------.  .----------------.
               |                |  |                |
               |       L2       |  |       L2       |
               | (Nested Guest) |  | (Nested Guest) |
               |                |  |                |
               |----------------'--'----------------|
               |                                    |
               |       L1 (Guest Hypervisor)        |
               |           KVM (/dev/kvm)           |
               |                                    |
      .------------------------------------------------------.
      |                 L0 (Host Hypervisor)                  |
      |                    KVM (/dev/kvm)                     |
      |------------------------------------------------------|
      |      Hardware (with virtualization extensions)       |
      '------------------------------------------------------'

Terminology:

- L0 – level-0; the bare metal host, running KVM

- L1 – level-1 guest; a VM running on L0; also called the "guest
  hypervisor", as it itself is capable of running KVM.

- L2 – level-2 guest; a VM running on L1, this is the "nested guest"

.. note:: The above diagram is modelled after the x86 architecture;
          s390x, ppc64 and other architectures are likely to have
          a different design for nesting.

          For example, s390x always has an LPAR (LogicalPARtition)
          hypervisor running on bare metal, adding another layer and
          resulting in at least four levels in a nested setup — L0 (bare
          metal, running the LPAR hypervisor), L1 (host hypervisor), L2
          (guest hypervisor), L3 (nested guest).

          This document will stick with the three-level terminology (L0,
          L1, and L2) for all architectures, and will largely focus on
          x86.

Use Cases
---------

There are several scenarios where nested KVM can be useful, to name a
few:

- As a developer, you want to test your software on different operating
  systems (OSes). Instead of renting multiple VMs from a cloud
  provider, nested KVM lets you rent one large enough "guest
  hypervisor" (level-1 guest). This in turn allows you to create
  multiple nested guests (level-2 guests), running different OSes, on
  which you can develop and test your software.

- Live migration of "guest hypervisors" and their nested guests, for
  load balancing, disaster recovery, etc.

- VM image creation tools (e.g. ``virt-install``) often run
  their own VM, and users expect these to work inside a VM.

- Some OSes use virtualization internally for security (e.g. to let
  applications run safely in isolation).

Enabling "nested" (x86)
-----------------------

From Linux kernel v4.19 onwards, the ``nested`` KVM parameter is enabled
by default for Intel and AMD. (Though your Linux distribution might
override this default.)

In case you are running a Linux kernel older than v4.19, to enable
nesting, set the ``nested`` KVM module parameter to ``Y`` or ``1``. To
persist this setting across reboots, you can add it in a config file, as
shown below:

1. On the bare metal host (L0), list the kernel modules and ensure that
   the KVM modules are loaded::

      $ lsmod | grep -i kvm
      kvm_intel             133627  0
      kvm                   435079  1 kvm_intel

2. Show information for the ``kvm_intel`` module::

      $ modinfo kvm_intel | grep -i nested
      parm:           nested:bool

3. For the nested KVM configuration to persist across reboots, place the
   below in ``/etc/modprobe.d/kvm_intel.conf`` (create the file if it
   doesn't exist)::

      $ cat /etc/modprobe.d/kvm_intel.conf
      options kvm-intel nested=y

4. Unload and re-load the KVM Intel module::

      $ sudo rmmod kvm-intel
      $ sudo modprobe kvm-intel

5. Verify that the ``nested`` parameter for KVM is enabled::

      $ cat /sys/module/kvm_intel/parameters/nested
      Y

For AMD hosts, the process is the same as above, except that the module
name is ``kvm-amd``.

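For example, a rough sketch of the same sequence on an AMD host (the
config file name under ``/etc/modprobe.d/`` is your own choice, and the
exact ``modinfo`` output varies between kernel versions)::

   $ modinfo kvm_amd | grep -i nested
   parm:           nested:int

   $ cat /etc/modprobe.d/kvm_amd.conf
   options kvm-amd nested=1

   $ sudo rmmod kvm-amd && sudo modprobe kvm-amd
   $ cat /sys/module/kvm_amd/parameters/nested
   1
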
Additional nested-related kernel parameters (x86)
-------------------------------------------------

If your hardware is sufficiently advanced (Intel Haswell processor or
higher, which has newer hardware virt extensions), the following
additional features will also be enabled by default: "Shadow VMCS
(Virtual Machine Control Structure)" and APIC virtualization on your
bare metal host (L0). Parameters for Intel hosts::

   $ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs
   Y

   $ cat /sys/module/kvm_intel/parameters/enable_apicv
   Y

   $ cat /sys/module/kvm_intel/parameters/ept
   Y

.. note:: If you suspect your L2 (i.e. nested guest) is running slower,
          ensure the above are enabled (particularly
          ``enable_shadow_vmcs`` and ``ept``).

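To check all of the above parameters in one go, a quick shell loop can
help (a sketch; Intel shown, ``kvm_amd`` exposes different parameter
names, and the output values are just an example)::

   $ for p in nested enable_shadow_vmcs enable_apicv ept; do \
         printf '%-20s: ' "$p"; \
         cat /sys/module/kvm_intel/parameters/$p; \
     done
   nested              : Y
   enable_shadow_vmcs  : Y
   enable_apicv        : Y
   ept                 : Y
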
Starting a nested guest (x86)
-----------------------------

Once your bare metal host (L0) is configured for nesting, you should be
able to start an L1 guest with::

   $ qemu-kvm -cpu host [...]

The above will pass through the host CPU's capabilities as-is to the
guest; or for better live migration compatibility, use a named CPU
model supported by QEMU, e.g.::

   $ qemu-kvm -cpu Haswell-noTSX-IBRS,vmx=on

then the guest hypervisor will subsequently be capable of running a
nested guest with accelerated KVM.

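To confirm from inside the L1 guest that the virtualization extension
was actually passed through, a quick check is (a sketch; look for
``svm`` instead of ``vmx`` on AMD, and ``/dev/kvm`` only appears once
the KVM modules are loaded in L1; the counts and timestamp are just
examples)::

   [L1] $ grep -c -w -E 'vmx|svm' /proc/cpuinfo
   4
   [L1] $ ls -l /dev/kvm
   crw-rw-rw- 1 root kvm 10, 232 May 10 12:00 /dev/kvm
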
Enabling "nested" (s390x)
-------------------------

1. On the host hypervisor (L0), enable the ``nested`` parameter on
   s390x::

      $ rmmod kvm
      $ modprobe kvm nested=1

   .. note:: On s390x, the kernel parameter ``hpage`` is mutually
             exclusive with the ``nested`` parameter — i.e. to be able
             to enable ``nested``, the ``hpage`` parameter *must* be
             disabled.

2. The guest hypervisor (L1) must be provided with the ``sie`` CPU
   feature — with QEMU, this can be done by using "host passthrough"
   (via the command-line ``-cpu host``).

3. Now the KVM module can be loaded in the L1 (guest hypervisor)::

      $ modprobe kvm

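A quick way to confirm that the ``sie`` facility is visible inside the
L1 guest is to look at ``/proc/cpuinfo`` there (a sketch; the exact
feature list shown varies by machine)::

   [L1] $ grep -o -w sie /proc/cpuinfo
   sie
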
Live migration with nested KVM
------------------------------

Migrating an L1 guest, with a *live* nested guest in it, to another
bare metal host, works as of Linux kernel 5.3 and QEMU 4.2.0 for
Intel x86 systems, and even on older versions for s390x.

On AMD systems, once an L1 guest has started an L2 guest, the L1 guest
should no longer be migrated or saved (refer to QEMU documentation on
"savevm"/"loadvm") until the L2 guest shuts down. Attempting to migrate
or save-and-load an L1 guest while an L2 guest is running will result in
undefined behavior. You might see a ``kernel BUG!`` entry in ``dmesg``, a
kernel 'oops', or an outright kernel panic. Such a migrated or loaded L1
guest can no longer be considered stable or secure, and must be restarted.
Migrating an L1 guest merely configured to support nesting, while not
actually running L2 guests, is expected to function normally even on AMD
systems but may fail once guests are started.

Migrating an L2 guest is always expected to succeed, so all the following
scenarios should work even on AMD systems:

- Migrating a nested guest (L2) to another L1 guest on the *same* bare
  metal host.

- Migrating a nested guest (L2) to another L1 guest on a *different*
  bare metal host.

- Migrating a nested guest (L2) to a bare metal host.

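As a concrete (purely illustrative) example, a live libvirt migration of
a guest at either level could look like the following; the domain name
``l1-guest`` and the destination host are hypothetical::

   $ virsh migrate --live --persistent l1-guest \
         qemu+ssh://dest-host.example.com/system
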
Reporting bugs from nested setups
---------------------------------

Debugging "nested" problems can involve sifting through log files across
L0, L1 and L2; this can result in tedious back-and-forth between the bug
reporter and the bug fixer.

- Mention that you are in a "nested" setup. If you are running any kind
  of "nesting" at all, say so. Unfortunately, this needs to be called
  out because when reporting bugs, people tend to forget to even
  *mention* that they're using nested virtualization.

- Ensure you are actually running KVM on KVM. Sometimes people do not
  have KVM enabled for their guest hypervisor (L1), which results in
  them running with pure emulation (what QEMU calls "TCG") while they
  think they're running nested KVM. This confuses "nested virt" (which
  could also mean QEMU on KVM) with "nested KVM" (KVM on KVM). A quick
  way to check this is sketched below.

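One way to tell the two apart is to ask QEMU itself, from the L1 guest,
whether hardware acceleration is in use (a sketch; the monitor prompt
shown is approximate)::

   [L1] (qemu) info kvm
   kvm support: enabled

If this reports ``disabled``, the L2 guest is running under TCG, not
nested KVM.
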
Information to collect (generic)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following is not an exhaustive list, but a very good starting point
(a small collection script is sketched after the list):

- Kernel, libvirt, and QEMU version from L0

- Kernel, libvirt and QEMU version from L1

- QEMU command-line of L1 -- when using libvirt, you'll find it here:
  ``/var/log/libvirt/qemu/instance.log``

- QEMU command-line of L2 -- as above, when using libvirt, get the
  complete libvirt-generated QEMU command-line

- ``cat /proc/cpuinfo`` from L0

- ``cat /proc/cpuinfo`` from L1

- ``lscpu`` from L0

- ``lscpu`` from L1

- Full ``dmesg`` output from L0

- Full ``dmesg`` output from L1

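A small helper along these lines can save a round trip (a rough sketch;
run it on both L0 and L1, adjust the QEMU binary name for your
distribution, and expect some commands to need root)::

   $ for cmd in "uname -r" "cat /proc/cpuinfo" "lscpu" "dmesg" \
                "qemu-system-x86_64 --version" "virsh --version"; do \
         echo "== $cmd =="; $cmd; \
     done > nested-bug-report-$(hostname).txt 2>&1
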
x86-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Both the below commands, ``x86info`` and ``dmidecode``, should be
available on most Linux distributions with the same name:

- Output of: ``x86info -a`` from L0

- Output of: ``x86info -a`` from L1

- Output of: ``dmidecode`` from L0

- Output of: ``dmidecode`` from L1

s390x-specific info to collect
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Along with the earlier mentioned generic details, the below is
also recommended:

- ``/proc/sysinfo`` from L1; this will also include the info from L0
MAINTAINERS | 16
@@ -3936,11 +3936,9 @@ F: arch/powerpc/platforms/cell/
|
||||
CEPH COMMON CODE (LIBCEPH)
|
||||
M: Ilya Dryomov <idryomov@gmail.com>
|
||||
M: Jeff Layton <jlayton@kernel.org>
|
||||
M: Sage Weil <sage@redhat.com>
|
||||
L: ceph-devel@vger.kernel.org
|
||||
S: Supported
|
||||
W: http://ceph.com/
|
||||
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
|
||||
T: git git://github.com/ceph/ceph-client.git
|
||||
F: include/linux/ceph/
|
||||
F: include/linux/crush/
|
||||
@@ -3948,12 +3946,10 @@ F: net/ceph/
|
||||
|
||||
CEPH DISTRIBUTED FILE SYSTEM CLIENT (CEPH)
|
||||
M: Jeff Layton <jlayton@kernel.org>
|
||||
M: Sage Weil <sage@redhat.com>
|
||||
M: Ilya Dryomov <idryomov@gmail.com>
|
||||
L: ceph-devel@vger.kernel.org
|
||||
S: Supported
|
||||
W: http://ceph.com/
|
||||
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
|
||||
T: git git://github.com/ceph/ceph-client.git
|
||||
F: Documentation/filesystems/ceph.rst
|
||||
F: fs/ceph/
|
||||
@@ -5935,9 +5931,9 @@ F: lib/dynamic_debug.c
|
||||
DYNAMIC INTERRUPT MODERATION
|
||||
M: Tal Gilboa <talgi@mellanox.com>
|
||||
S: Maintained
|
||||
F: Documentation/networking/net_dim.rst
|
||||
F: include/linux/dim.h
|
||||
F: lib/dim/
|
||||
F: Documentation/networking/net_dim.rst
|
||||
|
||||
DZ DECSTATION DZ11 SERIAL DRIVER
|
||||
M: "Maciej W. Rozycki" <macro@linux-mips.org>
|
||||
@@ -7119,9 +7115,10 @@ F: include/uapi/asm-generic/
|
||||
|
||||
GENERIC PHY FRAMEWORK
|
||||
M: Kishon Vijay Abraham I <kishon@ti.com>
|
||||
M: Vinod Koul <vkoul@kernel.org>
|
||||
L: linux-kernel@vger.kernel.org
|
||||
S: Supported
|
||||
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kishon/linux-phy.git
|
||||
T: git git://git.kernel.org/pub/scm/linux/kernel/git/phy/linux-phy.git
|
||||
F: Documentation/devicetree/bindings/phy/
|
||||
F: drivers/phy/
|
||||
F: include/linux/phy/
|
||||
@@ -7746,11 +7743,6 @@ L: platform-driver-x86@vger.kernel.org
|
||||
S: Orphan
|
||||
F: drivers/platform/x86/tc1100-wmi.c
|
||||
|
||||
HP100: Driver for HP 10/100 Mbit/s Voice Grade Network Adapter Series
|
||||
M: Jaroslav Kysela <perex@perex.cz>
|
||||
S: Obsolete
|
||||
F: drivers/staging/hp/hp100.*
|
||||
|
||||
HPET: High Precision Event Timers driver
|
||||
M: Clemens Ladisch <clemens@ladisch.de>
|
||||
S: Maintained
|
||||
@@ -14108,12 +14100,10 @@ F: drivers/media/radio/radio-tea5777.c
|
||||
|
||||
RADOS BLOCK DEVICE (RBD)
|
||||
M: Ilya Dryomov <idryomov@gmail.com>
|
||||
M: Sage Weil <sage@redhat.com>
|
||||
R: Dongsheng Yang <dongsheng.yang@easystack.cn>
|
||||
L: ceph-devel@vger.kernel.org
|
||||
S: Supported
|
||||
W: http://ceph.com/
|
||||
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
|
||||
T: git git://github.com/ceph/ceph-client.git
|
||||
F: Documentation/ABI/testing/sysfs-bus-rbd
|
||||
F: drivers/block/rbd.c
|
||||
|
||||
Makefile | 17
@@ -2,7 +2,7 @@
|
||||
VERSION = 5
|
||||
PATCHLEVEL = 7
|
||||
SUBLEVEL = 0
|
||||
EXTRAVERSION = -rc4
|
||||
EXTRAVERSION = -rc5
|
||||
NAME = Kleptomaniac Octopus
|
||||
|
||||
# *DOCUMENTATION*
|
||||
@@ -743,10 +743,6 @@ else ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
|
||||
KBUILD_CFLAGS += -Os
|
||||
endif
|
||||
|
||||
ifdef CONFIG_CC_DISABLE_WARN_MAYBE_UNINITIALIZED
|
||||
KBUILD_CFLAGS += -Wno-maybe-uninitialized
|
||||
endif
|
||||
|
||||
# Tell gcc to never replace conditional load with a non-conditional one
|
||||
KBUILD_CFLAGS += $(call cc-option,--param=allow-store-data-races=0)
|
||||
KBUILD_CFLAGS += $(call cc-option,-fno-allow-store-data-races)
|
||||
@@ -945,6 +941,17 @@ KBUILD_CFLAGS += -Wno-pointer-sign
|
||||
# disable stringop warnings in gcc 8+
|
||||
KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation)
|
||||
|
||||
# We'll want to enable this eventually, but it's not going away for 5.7 at least
|
||||
KBUILD_CFLAGS += $(call cc-disable-warning, zero-length-bounds)
|
||||
KBUILD_CFLAGS += $(call cc-disable-warning, array-bounds)
|
||||
KBUILD_CFLAGS += $(call cc-disable-warning, stringop-overflow)
|
||||
|
||||
# Another good warning that we'll want to enable eventually
|
||||
KBUILD_CFLAGS += $(call cc-disable-warning, restrict)
|
||||
|
||||
# Enabled with W=2, disabled by default as noisy
|
||||
KBUILD_CFLAGS += $(call cc-disable-warning, maybe-uninitialized)
|
||||
|
||||
# disable invalid "can't wrap" optimizations for signed / pointers
|
||||
KBUILD_CFLAGS += $(call cc-option,-fno-strict-overflow)
|
||||
|
||||
|
||||
@@ -91,9 +91,17 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
|
||||
return;
|
||||
}
|
||||
|
||||
kernel_neon_begin();
|
||||
chacha_doneon(state, dst, src, bytes, nrounds);
|
||||
kernel_neon_end();
|
||||
do {
|
||||
unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
|
||||
|
||||
kernel_neon_begin();
|
||||
chacha_doneon(state, dst, src, todo, nrounds);
|
||||
kernel_neon_end();
|
||||
|
||||
bytes -= todo;
|
||||
src += todo;
|
||||
dst += todo;
|
||||
} while (bytes);
|
||||
}
|
||||
EXPORT_SYMBOL(chacha_crypt_arch);
|
||||
|
||||
|
||||
@@ -30,7 +30,7 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
|
||||
return crypto_nhpoly1305_update(desc, src, srclen);
|
||||
|
||||
do {
|
||||
unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
|
||||
unsigned int n = min_t(unsigned int, srclen, SZ_4K);
|
||||
|
||||
kernel_neon_begin();
|
||||
crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
|
||||
|
||||
@@ -160,13 +160,20 @@ void poly1305_update_arch(struct poly1305_desc_ctx *dctx, const u8 *src,
|
||||
unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);
|
||||
|
||||
if (static_branch_likely(&have_neon) && do_neon) {
|
||||
kernel_neon_begin();
|
||||
poly1305_blocks_neon(&dctx->h, src, len, 1);
|
||||
kernel_neon_end();
|
||||
do {
|
||||
unsigned int todo = min_t(unsigned int, len, SZ_4K);
|
||||
|
||||
kernel_neon_begin();
|
||||
poly1305_blocks_neon(&dctx->h, src, todo, 1);
|
||||
kernel_neon_end();
|
||||
|
||||
len -= todo;
|
||||
src += todo;
|
||||
} while (len);
|
||||
} else {
|
||||
poly1305_blocks_arm(&dctx->h, src, len, 1);
|
||||
src += len;
|
||||
}
|
||||
src += len;
|
||||
nbytes %= POLY1305_BLOCK_SIZE;
|
||||
}
|
||||
|
||||
|
||||
@@ -165,8 +165,13 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
|
||||
preempt_enable();
|
||||
#endif
|
||||
|
||||
if (!ret)
|
||||
*oval = oldval;
|
||||
/*
|
||||
* Store unconditionally. If ret != 0 the extra store is the least
|
||||
* of the worries but GCC cannot figure out that __futex_atomic_op()
|
||||
* is either setting ret to -EFAULT or storing the old value in
|
||||
* oldval which results in a uninitialized warning at the call site.
|
||||
*/
|
||||
*oval = oldval;
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -87,9 +87,17 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
|
||||
!crypto_simd_usable())
|
||||
return chacha_crypt_generic(state, dst, src, bytes, nrounds);
|
||||
|
||||
kernel_neon_begin();
|
||||
chacha_doneon(state, dst, src, bytes, nrounds);
|
||||
kernel_neon_end();
|
||||
do {
|
||||
unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
|
||||
|
||||
kernel_neon_begin();
|
||||
chacha_doneon(state, dst, src, todo, nrounds);
|
||||
kernel_neon_end();
|
||||
|
||||
bytes -= todo;
|
||||
src += todo;
|
||||
dst += todo;
|
||||
} while (bytes);
|
||||
}
|
||||
EXPORT_SYMBOL(chacha_crypt_arch);
|
||||
|
||||
|
||||
@@ -30,7 +30,7 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
|
||||
return crypto_nhpoly1305_update(desc, src, srclen);
|
||||
|
||||
do {
|
||||
unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
|
||||
unsigned int n = min_t(unsigned int, srclen, SZ_4K);
|
||||
|
||||
kernel_neon_begin();
|
||||
crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
|
||||
|
||||
@@ -143,13 +143,20 @@ void poly1305_update_arch(struct poly1305_desc_ctx *dctx, const u8 *src,
|
||||
unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);
|
||||
|
||||
if (static_branch_likely(&have_neon) && crypto_simd_usable()) {
|
||||
kernel_neon_begin();
|
||||
poly1305_blocks_neon(&dctx->h, src, len, 1);
|
||||
kernel_neon_end();
|
||||
do {
|
||||
unsigned int todo = min_t(unsigned int, len, SZ_4K);
|
||||
|
||||
kernel_neon_begin();
|
||||
poly1305_blocks_neon(&dctx->h, src, todo, 1);
|
||||
kernel_neon_end();
|
||||
|
||||
len -= todo;
|
||||
src += todo;
|
||||
} while (len);
|
||||
} else {
|
||||
poly1305_blocks(&dctx->h, src, len, 1);
|
||||
src += len;
|
||||
}
|
||||
src += len;
|
||||
nbytes %= POLY1305_BLOCK_SIZE;
|
||||
}
|
||||
|
||||
|
||||
@@ -200,6 +200,13 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
|
||||
}
|
||||
|
||||
memcpy((u32 *)regs + off, valp, KVM_REG_SIZE(reg->id));
|
||||
|
||||
if (*vcpu_cpsr(vcpu) & PSR_MODE32_BIT) {
|
||||
int i;
|
||||
|
||||
for (i = 0; i < 16; i++)
|
||||
*vcpu_reg32(vcpu, i) = (u32)*vcpu_reg32(vcpu, i);
|
||||
}
|
||||
out:
|
||||
return err;
|
||||
}
|
||||
|
||||
@@ -18,6 +18,7 @@
|
||||
|
||||
#define CPU_GP_REG_OFFSET(x) (CPU_GP_REGS + x)
|
||||
#define CPU_XREG_OFFSET(x) CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
|
||||
#define CPU_SP_EL0_OFFSET (CPU_XREG_OFFSET(30) + 8)
|
||||
|
||||
.text
|
||||
.pushsection .hyp.text, "ax"
|
||||
@@ -47,6 +48,16 @@
|
||||
ldp x29, lr, [\ctxt, #CPU_XREG_OFFSET(29)]
|
||||
.endm
|
||||
|
||||
.macro save_sp_el0 ctxt, tmp
|
||||
mrs \tmp, sp_el0
|
||||
str \tmp, [\ctxt, #CPU_SP_EL0_OFFSET]
|
||||
.endm
|
||||
|
||||
.macro restore_sp_el0 ctxt, tmp
|
||||
ldr \tmp, [\ctxt, #CPU_SP_EL0_OFFSET]
|
||||
msr sp_el0, \tmp
|
||||
.endm
|
||||
|
||||
/*
|
||||
* u64 __guest_enter(struct kvm_vcpu *vcpu,
|
||||
* struct kvm_cpu_context *host_ctxt);
|
||||
@@ -60,6 +71,9 @@ SYM_FUNC_START(__guest_enter)
|
||||
// Store the host regs
|
||||
save_callee_saved_regs x1
|
||||
|
||||
// Save the host's sp_el0
|
||||
save_sp_el0 x1, x2
|
||||
|
||||
// Now the host state is stored if we have a pending RAS SError it must
|
||||
// affect the host. If any asynchronous exception is pending we defer
|
||||
// the guest entry. The DSB isn't necessary before v8.2 as any SError
|
||||
@@ -83,6 +97,9 @@ alternative_else_nop_endif
|
||||
// when this feature is enabled for kernel code.
|
||||
ptrauth_switch_to_guest x29, x0, x1, x2
|
||||
|
||||
// Restore the guest's sp_el0
|
||||
restore_sp_el0 x29, x0
|
||||
|
||||
// Restore guest regs x0-x17
|
||||
ldp x0, x1, [x29, #CPU_XREG_OFFSET(0)]
|
||||
ldp x2, x3, [x29, #CPU_XREG_OFFSET(2)]
|
||||
@@ -130,6 +147,9 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
|
||||
// Store the guest regs x18-x29, lr
|
||||
save_callee_saved_regs x1
|
||||
|
||||
// Store the guest's sp_el0
|
||||
save_sp_el0 x1, x2
|
||||
|
||||
get_host_ctxt x2, x3
|
||||
|
||||
// Macro ptrauth_switch_to_guest format:
|
||||
@@ -139,6 +159,9 @@ SYM_INNER_LABEL(__guest_exit, SYM_L_GLOBAL)
|
||||
// when this feature is enabled for kernel code.
|
||||
ptrauth_switch_to_host x1, x2, x3, x4, x5
|
||||
|
||||
// Restore the hosts's sp_el0
|
||||
restore_sp_el0 x2, x3
|
||||
|
||||
// Now restore the host regs
|
||||
restore_callee_saved_regs x2
|
||||
|
||||
|
||||
@@ -198,7 +198,6 @@ SYM_CODE_END(__hyp_panic)
|
||||
.macro invalid_vector label, target = __hyp_panic
|
||||
.align 2
|
||||
SYM_CODE_START(\label)
|
||||
\label:
|
||||
b \target
|
||||
SYM_CODE_END(\label)
|
||||
.endm
|
||||
|
||||
@@ -15,8 +15,9 @@
|
||||
/*
|
||||
* Non-VHE: Both host and guest must save everything.
|
||||
*
|
||||
* VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and pstate,
|
||||
* which are handled as part of the el2 return state) on every switch.
|
||||
* VHE: Host and guest must save mdscr_el1 and sp_el0 (and the PC and
|
||||
* pstate, which are handled as part of the el2 return state) on every
|
||||
* switch (sp_el0 is being dealt with in the assembly code).
|
||||
* tpidr_el0 and tpidrro_el0 only need to be switched when going
|
||||
* to host userspace or a different VCPU. EL1 registers only need to be
|
||||
* switched when potentially going to run a different VCPU. The latter two
|
||||
@@ -26,12 +27,6 @@
|
||||
static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
ctxt->sys_regs[MDSCR_EL1] = read_sysreg(mdscr_el1);
|
||||
|
||||
/*
|
||||
* The host arm64 Linux uses sp_el0 to point to 'current' and it must
|
||||
* therefore be saved/restored on every entry/exit to/from the guest.
|
||||
*/
|
||||
ctxt->gp_regs.regs.sp = read_sysreg(sp_el0);
|
||||
}
|
||||
|
||||
static void __hyp_text __sysreg_save_user_state(struct kvm_cpu_context *ctxt)
|
||||
@@ -99,12 +94,6 @@ NOKPROBE_SYMBOL(sysreg_save_guest_state_vhe);
|
||||
static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctxt)
|
||||
{
|
||||
write_sysreg(ctxt->sys_regs[MDSCR_EL1], mdscr_el1);
|
||||
|
||||
/*
|
||||
* The host arm64 Linux uses sp_el0 to point to 'current' and it must
|
||||
* therefore be saved/restored on every entry/exit to/from the guest.
|
||||
*/
|
||||
write_sysreg(ctxt->gp_regs.regs.sp, sp_el0);
|
||||
}
|
||||
|
||||
static void __hyp_text __sysreg_restore_user_state(struct kvm_cpu_context *ctxt)
|
||||
|
||||
@@ -230,6 +230,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
|
||||
ptep = (pte_t *)pudp;
|
||||
} else if (sz == (CONT_PTE_SIZE)) {
|
||||
pmdp = pmd_alloc(mm, pudp, addr);
|
||||
if (!pmdp)
|
||||
return NULL;
|
||||
|
||||
WARN_ON(addr & (sz - 1));
|
||||
/*
|
||||
|
||||
@@ -521,6 +521,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
|
||||
case KVM_CAP_IOEVENTFD:
|
||||
case KVM_CAP_DEVICE_CTRL:
|
||||
case KVM_CAP_IMMEDIATE_EXIT:
|
||||
case KVM_CAP_SET_GUEST_DEBUG:
|
||||
r = 1;
|
||||
break;
|
||||
case KVM_CAP_PPC_GUEST_DEBUG_SSTEP:
|
||||
|
||||
@@ -51,13 +51,10 @@
|
||||
#define CAUSE_IRQ_FLAG (_AC(1, UL) << (__riscv_xlen - 1))
|
||||
|
||||
/* Interrupt causes (minus the high bit) */
|
||||
#define IRQ_U_SOFT 0
|
||||
#define IRQ_S_SOFT 1
|
||||
#define IRQ_M_SOFT 3
|
||||
#define IRQ_U_TIMER 4
|
||||
#define IRQ_S_TIMER 5
|
||||
#define IRQ_M_TIMER 7
|
||||
#define IRQ_U_EXT 8
|
||||
#define IRQ_S_EXT 9
|
||||
#define IRQ_M_EXT 11
|
||||
|
||||
|
||||
@@ -8,6 +8,7 @@
|
||||
#ifndef _ASM_RISCV_HWCAP_H
|
||||
#define _ASM_RISCV_HWCAP_H
|
||||
|
||||
#include <linux/bits.h>
|
||||
#include <uapi/asm/hwcap.h>
|
||||
|
||||
#ifndef __ASSEMBLY__
|
||||
@@ -22,6 +23,27 @@ enum {
|
||||
};
|
||||
|
||||
extern unsigned long elf_hwcap;
|
||||
|
||||
#define RISCV_ISA_EXT_a ('a' - 'a')
|
||||
#define RISCV_ISA_EXT_c ('c' - 'a')
|
||||
#define RISCV_ISA_EXT_d ('d' - 'a')
|
||||
#define RISCV_ISA_EXT_f ('f' - 'a')
|
||||
#define RISCV_ISA_EXT_h ('h' - 'a')
|
||||
#define RISCV_ISA_EXT_i ('i' - 'a')
|
||||
#define RISCV_ISA_EXT_m ('m' - 'a')
|
||||
#define RISCV_ISA_EXT_s ('s' - 'a')
|
||||
#define RISCV_ISA_EXT_u ('u' - 'a')
|
||||
|
||||
#define RISCV_ISA_EXT_MAX 64
|
||||
|
||||
unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap);
|
||||
|
||||
#define riscv_isa_extension_mask(ext) BIT_MASK(RISCV_ISA_EXT_##ext)
|
||||
|
||||
bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit);
|
||||
#define riscv_isa_extension_available(isa_bitmap, ext) \
|
||||
__riscv_isa_extension_available(isa_bitmap, RISCV_ISA_EXT_##ext)
|
||||
|
||||
#endif
|
||||
|
||||
#endif /* _ASM_RISCV_HWCAP_H */
|
||||
|
||||
@@ -22,14 +22,6 @@ static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
|
||||
static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
|
||||
#endif
|
||||
|
||||
#ifdef CONFIG_STRICT_KERNEL_RWX
|
||||
void set_kernel_text_ro(void);
|
||||
void set_kernel_text_rw(void);
|
||||
#else
|
||||
static inline void set_kernel_text_ro(void) { }
|
||||
static inline void set_kernel_text_rw(void) { }
|
||||
#endif
|
||||
|
||||
int set_direct_map_invalid_noflush(struct page *page);
|
||||
int set_direct_map_default_noflush(struct page *page);
|
||||
|
||||
|
||||
@@ -15,8 +15,8 @@
|
||||
|
||||
const struct cpu_operations *cpu_ops[NR_CPUS] __ro_after_init;
|
||||
|
||||
void *__cpu_up_stack_pointer[NR_CPUS];
|
||||
void *__cpu_up_task_pointer[NR_CPUS];
|
||||
void *__cpu_up_stack_pointer[NR_CPUS] __section(.data);
|
||||
void *__cpu_up_task_pointer[NR_CPUS] __section(.data);
|
||||
|
||||
extern const struct cpu_operations cpu_ops_sbi;
|
||||
extern const struct cpu_operations cpu_ops_spinwait;
|
||||
|
||||
@@ -6,6 +6,7 @@
|
||||
* Copyright (C) 2017 SiFive
|
||||
*/
|
||||
|
||||
#include <linux/bitmap.h>
|
||||
#include <linux/of.h>
|
||||
#include <asm/processor.h>
|
||||
#include <asm/hwcap.h>
|
||||
@@ -13,15 +14,57 @@
|
||||
#include <asm/switch_to.h>
|
||||
|
||||
unsigned long elf_hwcap __read_mostly;
|
||||
|
||||
/* Host ISA bitmap */
|
||||
static DECLARE_BITMAP(riscv_isa, RISCV_ISA_EXT_MAX) __read_mostly;
|
||||
|
||||
#ifdef CONFIG_FPU
|
||||
bool has_fpu __read_mostly;
|
||||
#endif
|
||||
|
||||
/**
|
||||
* riscv_isa_extension_base() - Get base extension word
|
||||
*
|
||||
* @isa_bitmap: ISA bitmap to use
|
||||
* Return: base extension word as unsigned long value
|
||||
*
|
||||
* NOTE: If isa_bitmap is NULL then Host ISA bitmap will be used.
|
||||
*/
|
||||
unsigned long riscv_isa_extension_base(const unsigned long *isa_bitmap)
|
||||
{
|
||||
if (!isa_bitmap)
|
||||
return riscv_isa[0];
|
||||
return isa_bitmap[0];
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(riscv_isa_extension_base);
|
||||
|
||||
/**
|
||||
* __riscv_isa_extension_available() - Check whether given extension
|
||||
* is available or not
|
||||
*
|
||||
* @isa_bitmap: ISA bitmap to use
|
||||
* @bit: bit position of the desired extension
|
||||
* Return: true or false
|
||||
*
|
||||
* NOTE: If isa_bitmap is NULL then Host ISA bitmap will be used.
|
||||
*/
|
||||
bool __riscv_isa_extension_available(const unsigned long *isa_bitmap, int bit)
|
||||
{
|
||||
const unsigned long *bmap = (isa_bitmap) ? isa_bitmap : riscv_isa;
|
||||
|
||||
if (bit >= RISCV_ISA_EXT_MAX)
|
||||
return false;
|
||||
|
||||
return test_bit(bit, bmap) ? true : false;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(__riscv_isa_extension_available);
|
||||
|
||||
void riscv_fill_hwcap(void)
|
||||
{
|
||||
struct device_node *node;
|
||||
const char *isa;
|
||||
size_t i;
|
||||
char print_str[BITS_PER_LONG + 1];
|
||||
size_t i, j, isa_len;
|
||||
static unsigned long isa2hwcap[256] = {0};
|
||||
|
||||
isa2hwcap['i'] = isa2hwcap['I'] = COMPAT_HWCAP_ISA_I;
|
||||
@@ -33,8 +76,11 @@ void riscv_fill_hwcap(void)
|
||||
|
||||
elf_hwcap = 0;
|
||||
|
||||
bitmap_zero(riscv_isa, RISCV_ISA_EXT_MAX);
|
||||
|
||||
for_each_of_cpu_node(node) {
|
||||
unsigned long this_hwcap = 0;
|
||||
unsigned long this_isa = 0;
|
||||
|
||||
if (riscv_of_processor_hartid(node) < 0)
|
||||
continue;
|
||||
@@ -44,8 +90,24 @@ void riscv_fill_hwcap(void)
|
||||
continue;
|
||||
}
|
||||
|
||||
for (i = 0; i < strlen(isa); ++i)
|
||||
i = 0;
|
||||
isa_len = strlen(isa);
|
||||
#if IS_ENABLED(CONFIG_32BIT)
|
||||
if (!strncmp(isa, "rv32", 4))
|
||||
i += 4;
|
||||
#elif IS_ENABLED(CONFIG_64BIT)
|
||||
if (!strncmp(isa, "rv64", 4))
|
||||
i += 4;
|
||||
#endif
|
||||
for (; i < isa_len; ++i) {
|
||||
this_hwcap |= isa2hwcap[(unsigned char)(isa[i])];
|
||||
/*
|
||||
* TODO: X, Y and Z extension parsing for Host ISA
|
||||
* bitmap will be added in-future.
|
||||
*/
|
||||
if ('a' <= isa[i] && isa[i] < 'x')
|
||||
this_isa |= (1UL << (isa[i] - 'a'));
|
||||
}
|
||||
|
||||
/*
|
||||
* All "okay" hart should have same isa. Set HWCAP based on
|
||||
@@ -56,6 +118,11 @@ void riscv_fill_hwcap(void)
|
||||
elf_hwcap &= this_hwcap;
|
||||
else
|
||||
elf_hwcap = this_hwcap;
|
||||
|
||||
if (riscv_isa[0])
|
||||
riscv_isa[0] &= this_isa;
|
||||
else
|
||||
riscv_isa[0] = this_isa;
|
||||
}
|
||||
|
||||
/* We don't support systems with F but without D, so mask those out
|
||||
@@ -65,7 +132,17 @@ void riscv_fill_hwcap(void)
|
||||
elf_hwcap &= ~COMPAT_HWCAP_ISA_F;
|
||||
}
|
||||
|
||||
pr_info("elf_hwcap is 0x%lx\n", elf_hwcap);
|
||||
memset(print_str, 0, sizeof(print_str));
|
||||
for (i = 0, j = 0; i < BITS_PER_LONG; i++)
|
||||
if (riscv_isa[0] & BIT_MASK(i))
|
||||
print_str[j++] = (char)('a' + i);
|
||||
pr_info("riscv: ISA extensions %s\n", print_str);
|
||||
|
||||
memset(print_str, 0, sizeof(print_str));
|
||||
for (i = 0, j = 0; i < BITS_PER_LONG; i++)
|
||||
if (elf_hwcap & BIT_MASK(i))
|
||||
print_str[j++] = (char)('a' + i);
|
||||
pr_info("riscv: ELF capabilities %s\n", print_str);
|
||||
|
||||
#ifdef CONFIG_FPU
|
||||
if (elf_hwcap & (COMPAT_HWCAP_ISA_F | COMPAT_HWCAP_ISA_D))
|
||||
|
||||
@@ -10,6 +10,7 @@
|
||||
|
||||
#include <linux/cpu.h>
|
||||
#include <linux/interrupt.h>
|
||||
#include <linux/module.h>
|
||||
#include <linux/profile.h>
|
||||
#include <linux/smp.h>
|
||||
#include <linux/sched.h>
|
||||
@@ -63,6 +64,7 @@ void riscv_cpuid_to_hartid_mask(const struct cpumask *in, struct cpumask *out)
|
||||
for_each_cpu(cpu, in)
|
||||
cpumask_set_cpu(cpuid_to_hartid_map(cpu), out);
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(riscv_cpuid_to_hartid_mask);
|
||||
|
||||
bool arch_match_cpu_phys_id(int cpu, u64 phys_id)
|
||||
{
|
||||
|
||||
@@ -12,7 +12,7 @@ vdso-syms += getcpu
|
||||
vdso-syms += flush_icache
|
||||
|
||||
# Files to link into the vdso
|
||||
obj-vdso = $(patsubst %, %.o, $(vdso-syms))
|
||||
obj-vdso = $(patsubst %, %.o, $(vdso-syms)) note.o
|
||||
|
||||
# Build rules
|
||||
targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-dummy.o
|
||||
|
||||
arch/riscv/kernel/vdso/note.S | 12 (new file)
@@ -0,0 +1,12 @@
|
||||
/* SPDX-License-Identifier: GPL-2.0-or-later */
|
||||
/*
|
||||
* This supplies .note.* sections to go into the PT_NOTE inside the vDSO text.
|
||||
* Here we can supply some information useful to userland.
|
||||
*/
|
||||
|
||||
#include <linux/elfnote.h>
|
||||
#include <linux/version.h>
|
||||
|
||||
ELFNOTE_START(Linux, 0, "a")
|
||||
.long LINUX_VERSION_CODE
|
||||
ELFNOTE_END
|
||||
@@ -150,7 +150,8 @@ void __init setup_bootmem(void)
|
||||
memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
|
||||
|
||||
set_max_mapnr(PFN_DOWN(mem_size));
|
||||
max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
|
||||
max_pfn = PFN_DOWN(memblock_end_of_DRAM());
|
||||
max_low_pfn = max_pfn;
|
||||
|
||||
#ifdef CONFIG_BLK_DEV_INITRD
|
||||
setup_initrd();
|
||||
@@ -501,22 +502,6 @@ static inline void setup_vm_final(void)
|
||||
#endif /* CONFIG_MMU */
|
||||
|
||||
#ifdef CONFIG_STRICT_KERNEL_RWX
|
||||
void set_kernel_text_rw(void)
|
||||
{
|
||||
unsigned long text_start = (unsigned long)_text;
|
||||
unsigned long text_end = (unsigned long)_etext;
|
||||
|
||||
set_memory_rw(text_start, (text_end - text_start) >> PAGE_SHIFT);
|
||||
}
|
||||
|
||||
void set_kernel_text_ro(void)
|
||||
{
|
||||
unsigned long text_start = (unsigned long)_text;
|
||||
unsigned long text_end = (unsigned long)_etext;
|
||||
|
||||
set_memory_ro(text_start, (text_end - text_start) >> PAGE_SHIFT);
|
||||
}
|
||||
|
||||
void mark_rodata_ro(void)
|
||||
{
|
||||
unsigned long text_start = (unsigned long)_text;
|
||||
|
||||
@@ -545,6 +545,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
|
||||
case KVM_CAP_S390_AIS:
|
||||
case KVM_CAP_S390_AIS_MIGRATION:
|
||||
case KVM_CAP_S390_VCPU_RESETS:
|
||||
case KVM_CAP_SET_GUEST_DEBUG:
|
||||
r = 1;
|
||||
break;
|
||||
case KVM_CAP_S390_HPAGE_1M:
|
||||
|
||||
@@ -626,10 +626,12 @@ static int handle_pqap(struct kvm_vcpu *vcpu)
|
||||
* available for the guest are AQIC and TAPQ with the t bit set
|
||||
* since we do not set IC.3 (FIII) we currently will only intercept
|
||||
* the AQIC function code.
|
||||
* Note: running nested under z/VM can result in intercepts for other
|
||||
* function codes, e.g. PQAP(QCI). We do not support this and bail out.
|
||||
*/
|
||||
reg0 = vcpu->run->s.regs.gprs[0];
|
||||
fc = (reg0 >> 24) & 0xff;
|
||||
if (WARN_ON_ONCE(fc != 0x03))
|
||||
if (fc != 0x03)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
/* PQAP instruction is allowed for guest kernel only */
|
||||
|
||||
@@ -32,16 +32,16 @@ void blake2s_compress_arch(struct blake2s_state *state,
|
||||
const u32 inc)
|
||||
{
|
||||
/* SIMD disables preemption, so relax after processing each page. */
|
||||
BUILD_BUG_ON(PAGE_SIZE / BLAKE2S_BLOCK_SIZE < 8);
|
||||
BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8);
|
||||
|
||||
if (!static_branch_likely(&blake2s_use_ssse3) || !crypto_simd_usable()) {
|
||||
blake2s_compress_generic(state, block, nblocks, inc);
|
||||
return;
|
||||
}
|
||||
|
||||
for (;;) {
|
||||
do {
|
||||
const size_t blocks = min_t(size_t, nblocks,
|
||||
PAGE_SIZE / BLAKE2S_BLOCK_SIZE);
|
||||
SZ_4K / BLAKE2S_BLOCK_SIZE);
|
||||
|
||||
kernel_fpu_begin();
|
||||
if (IS_ENABLED(CONFIG_AS_AVX512) &&
|
||||
@@ -52,10 +52,8 @@ void blake2s_compress_arch(struct blake2s_state *state,
|
||||
kernel_fpu_end();
|
||||
|
||||
nblocks -= blocks;
|
||||
if (!nblocks)
|
||||
break;
|
||||
block += blocks * BLAKE2S_BLOCK_SIZE;
|
||||
}
|
||||
} while (nblocks);
|
||||
}
|
||||
EXPORT_SYMBOL(blake2s_compress_arch);
|
||||
|
||||
|
||||
@@ -153,9 +153,17 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
|
||||
bytes <= CHACHA_BLOCK_SIZE)
|
||||
return chacha_crypt_generic(state, dst, src, bytes, nrounds);
|
||||
|
||||
kernel_fpu_begin();
|
||||
chacha_dosimd(state, dst, src, bytes, nrounds);
|
||||
kernel_fpu_end();
|
||||
do {
|
||||
unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
|
||||
|
||||
kernel_fpu_begin();
|
||||
chacha_dosimd(state, dst, src, todo, nrounds);
|
||||
kernel_fpu_end();
|
||||
|
||||
bytes -= todo;
|
||||
src += todo;
|
||||
dst += todo;
|
||||
} while (bytes);
|
||||
}
|
||||
EXPORT_SYMBOL(chacha_crypt_arch);
|
||||
|
||||
|
||||
@@ -29,7 +29,7 @@ static int nhpoly1305_avx2_update(struct shash_desc *desc,
|
||||
return crypto_nhpoly1305_update(desc, src, srclen);
|
||||
|
||||
do {
|
||||
unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
|
||||
unsigned int n = min_t(unsigned int, srclen, SZ_4K);
|
||||
|
||||
kernel_fpu_begin();
|
||||
crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2);
|
||||
|
||||
@@ -29,7 +29,7 @@ static int nhpoly1305_sse2_update(struct shash_desc *desc,
|
||||
return crypto_nhpoly1305_update(desc, src, srclen);
|
||||
|
||||
do {
|
||||
unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
|
||||
unsigned int n = min_t(unsigned int, srclen, SZ_4K);
|
||||
|
||||
kernel_fpu_begin();
|
||||
crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2);
|
||||
|
||||
@@ -91,8 +91,8 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
|
||||
struct poly1305_arch_internal *state = ctx;
|
||||
|
||||
/* SIMD disables preemption, so relax after processing each page. */
|
||||
BUILD_BUG_ON(PAGE_SIZE < POLY1305_BLOCK_SIZE ||
|
||||
PAGE_SIZE % POLY1305_BLOCK_SIZE);
|
||||
BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
|
||||
SZ_4K % POLY1305_BLOCK_SIZE);
|
||||
|
||||
if (!static_branch_likely(&poly1305_use_avx) ||
|
||||
(len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) ||
|
||||
@@ -102,8 +102,8 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
|
||||
return;
|
||||
}
|
||||
|
||||
for (;;) {
|
||||
const size_t bytes = min_t(size_t, len, PAGE_SIZE);
|
||||
do {
|
||||
const size_t bytes = min_t(size_t, len, SZ_4K);
|
||||
|
||||
kernel_fpu_begin();
|
||||
if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&poly1305_use_avx512))
|
||||
@@ -113,11 +113,10 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
|
||||
else
|
||||
poly1305_blocks_avx(ctx, inp, bytes, padbit);
|
||||
kernel_fpu_end();
|
||||
|
||||
len -= bytes;
|
||||
if (!len)
|
||||
break;
|
||||
inp += bytes;
|
||||
}
|
||||
} while (len);
|
||||
}
|
||||
|
||||
static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
|
||||
|
||||
@@ -98,13 +98,6 @@ For 32-bit we have the following conventions - kernel is built with
|
||||
#define SIZEOF_PTREGS 21*8
|
||||
|
||||
.macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax save_ret=0
|
||||
/*
|
||||
* Push registers and sanitize registers of values that a
|
||||
* speculation attack might otherwise want to exploit. The
|
||||
* lower registers are likely clobbered well before they
|
||||
* could be put to use in a speculative execution gadget.
|
||||
* Interleave XOR with PUSH for better uop scheduling:
|
||||
*/
|
||||
.if \save_ret
|
||||
pushq %rsi /* pt_regs->si */
|
||||
movq 8(%rsp), %rsi /* temporarily store the return address in %rsi */
|
||||
@@ -114,34 +107,43 @@ For 32-bit we have the following conventions - kernel is built with
|
||||
pushq %rsi /* pt_regs->si */
|
||||
.endif
|
||||
pushq \rdx /* pt_regs->dx */
|
||||
xorl %edx, %edx /* nospec dx */
|
||||
pushq %rcx /* pt_regs->cx */
|
||||
xorl %ecx, %ecx /* nospec cx */
|
||||
pushq \rax /* pt_regs->ax */
|
||||
pushq %r8 /* pt_regs->r8 */
|
||||
xorl %r8d, %r8d /* nospec r8 */
|
||||
pushq %r9 /* pt_regs->r9 */
|
||||
xorl %r9d, %r9d /* nospec r9 */
|
||||
pushq %r10 /* pt_regs->r10 */
|
||||
xorl %r10d, %r10d /* nospec r10 */
|
||||
pushq %r11 /* pt_regs->r11 */
|
||||
xorl %r11d, %r11d /* nospec r11*/
|
||||
pushq %rbx /* pt_regs->rbx */
|
||||
xorl %ebx, %ebx /* nospec rbx*/
|
||||
pushq %rbp /* pt_regs->rbp */
|
||||
xorl %ebp, %ebp /* nospec rbp*/
|
||||
pushq %r12 /* pt_regs->r12 */
|
||||
xorl %r12d, %r12d /* nospec r12*/
|
||||
pushq %r13 /* pt_regs->r13 */
|
||||
xorl %r13d, %r13d /* nospec r13*/
|
||||
pushq %r14 /* pt_regs->r14 */
|
||||
xorl %r14d, %r14d /* nospec r14*/
|
||||
pushq %r15 /* pt_regs->r15 */
|
||||
xorl %r15d, %r15d /* nospec r15*/
|
||||
UNWIND_HINT_REGS
|
||||
|
||||
.if \save_ret
|
||||
pushq %rsi /* return address on top of stack */
|
||||
.endif
|
||||
|
||||
/*
|
||||
* Sanitize registers of values that a speculation attack might
|
||||
* otherwise want to exploit. The lower registers are likely clobbered
|
||||
* well before they could be put to use in a speculative execution
|
||||
* gadget.
|
||||
*/
|
||||
xorl %edx, %edx /* nospec dx */
|
||||
xorl %ecx, %ecx /* nospec cx */
|
||||
xorl %r8d, %r8d /* nospec r8 */
|
||||
xorl %r9d, %r9d /* nospec r9 */
|
||||
xorl %r10d, %r10d /* nospec r10 */
|
||||
xorl %r11d, %r11d /* nospec r11 */
|
||||
xorl %ebx, %ebx /* nospec rbx */
|
||||
xorl %ebp, %ebp /* nospec rbp */
|
||||
xorl %r12d, %r12d /* nospec r12 */
|
||||
xorl %r13d, %r13d /* nospec r13 */
|
||||
xorl %r14d, %r14d /* nospec r14 */
|
||||
xorl %r15d, %r15d /* nospec r15 */
|
||||
|
||||
.endm
|
||||
|
||||
.macro POP_REGS pop_rdi=1 skip_r11rcx=0
|
||||
|
||||
@@ -249,7 +249,6 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
|
||||
*/
|
||||
syscall_return_via_sysret:
|
||||
/* rcx and r11 are already restored (see code above) */
|
||||
UNWIND_HINT_EMPTY
|
||||
POP_REGS pop_rdi=0 skip_r11rcx=1
|
||||
|
||||
/*
|
||||
@@ -258,6 +257,7 @@ syscall_return_via_sysret:
|
||||
*/
|
||||
movq %rsp, %rdi
|
||||
movq PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp
|
||||
UNWIND_HINT_EMPTY
|
||||
|
||||
pushq RSP-RDI(%rdi) /* RSP */
|
||||
pushq (%rdi) /* RDI */
|
||||
@@ -279,8 +279,7 @@ SYM_CODE_END(entry_SYSCALL_64)
|
||||
* %rdi: prev task
|
||||
* %rsi: next task
|
||||
*/
|
||||
SYM_CODE_START(__switch_to_asm)
|
||||
UNWIND_HINT_FUNC
|
||||
SYM_FUNC_START(__switch_to_asm)
|
||||
/*
|
||||
* Save callee-saved registers
|
||||
* This must match the order in inactive_task_frame
|
||||
@@ -321,7 +320,7 @@ SYM_CODE_START(__switch_to_asm)
|
||||
popq %rbp
|
||||
|
||||
jmp __switch_to
|
||||
SYM_CODE_END(__switch_to_asm)
|
||||
SYM_FUNC_END(__switch_to_asm)
|
||||
|
||||
/*
|
||||
* A newly forked process directly context switches into this address.
|
||||
@@ -512,7 +511,7 @@ SYM_CODE_END(spurious_entries_start)
|
||||
* +----------------------------------------------------+
|
||||
*/
|
||||
SYM_CODE_START(interrupt_entry)
|
||||
UNWIND_HINT_FUNC
|
||||
UNWIND_HINT_IRET_REGS offset=16
|
||||
ASM_CLAC
|
||||
cld
|
||||
|
||||
@@ -544,9 +543,9 @@ SYM_CODE_START(interrupt_entry)
|
||||
pushq 5*8(%rdi) /* regs->eflags */
|
||||
pushq 4*8(%rdi) /* regs->cs */
|
||||
pushq 3*8(%rdi) /* regs->ip */
|
||||
UNWIND_HINT_IRET_REGS
|
||||
pushq 2*8(%rdi) /* regs->orig_ax */
|
||||
pushq 8(%rdi) /* return address */
|
||||
UNWIND_HINT_FUNC
|
||||
|
||||
movq (%rdi), %rdi
|
||||
jmp 2f
|
||||
@@ -637,6 +636,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
|
||||
*/
|
||||
movq %rsp, %rdi
|
||||
movq PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp
|
||||
UNWIND_HINT_EMPTY
|
||||
|
||||
/* Copy the IRET frame to the trampoline stack. */
|
||||
pushq 6*8(%rdi) /* SS */
|
||||
@@ -1739,7 +1739,7 @@ SYM_CODE_START(rewind_stack_do_exit)
|
||||
|
||||
movq PER_CPU_VAR(cpu_current_top_of_stack), %rax
|
||||
leaq -PTREGS_SIZE(%rax), %rsp
|
||||
UNWIND_HINT_FUNC sp_offset=PTREGS_SIZE
|
||||
UNWIND_HINT_REGS
|
||||
|
||||
call do_exit
|
||||
SYM_CODE_END(rewind_stack_do_exit)
|
||||
|
||||
@@ -61,11 +61,12 @@ static inline bool arch_syscall_match_sym_name(const char *sym, const char *name
|
||||
{
|
||||
/*
|
||||
* Compare the symbol name with the system call name. Skip the
|
||||
* "__x64_sys", "__ia32_sys" or simple "sys" prefix.
|
||||
* "__x64_sys", "__ia32_sys", "__do_sys" or simple "sys" prefix.
|
||||
*/
|
||||
return !strcmp(sym + 3, name + 3) ||
|
||||
(!strncmp(sym, "__x64_", 6) && !strcmp(sym + 9, name + 3)) ||
|
||||
(!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3));
|
||||
(!strncmp(sym, "__ia32_", 7) && !strcmp(sym + 10, name + 3)) ||
|
||||
(!strncmp(sym, "__do_sys", 8) && !strcmp(sym + 8, name + 3));
|
||||
}
|
||||
|
||||
#ifndef COMPILE_OFFSETS
|
||||
|
||||
@@ -1663,8 +1663,8 @@ void kvm_set_msi_irq(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
|
||||
static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
|
||||
{
|
||||
/* We can only post Fixed and LowPrio IRQs */
|
||||
return (irq->delivery_mode == dest_Fixed ||
|
||||
irq->delivery_mode == dest_LowestPrio);
|
||||
return (irq->delivery_mode == APIC_DM_FIXED ||
|
||||
irq->delivery_mode == APIC_DM_LOWEST);
|
||||
}
|
||||
|
||||
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
|
||||
|
||||
@@ -19,7 +19,7 @@ struct unwind_state {
|
||||
#if defined(CONFIG_UNWINDER_ORC)
|
||||
bool signal, full_regs;
|
||||
unsigned long sp, bp, ip;
|
||||
struct pt_regs *regs;
|
||||
struct pt_regs *regs, *prev_regs;
|
||||
#elif defined(CONFIG_UNWINDER_FRAME_POINTER)
|
||||
bool got_irq;
|
||||
unsigned long *bp, *orig_sp, ip;
|
||||
|
||||
@@ -352,8 +352,6 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
|
||||
* According to Intel, MFENCE can do the serialization here.
|
||||
*/
|
||||
asm volatile("mfence" : : : "memory");
|
||||
|
||||
printk_once(KERN_DEBUG "TSC deadline timer enabled\n");
|
||||
return;
|
||||
}
|
||||
|
||||
@@ -546,7 +544,7 @@ static struct clock_event_device lapic_clockevent = {
|
||||
};
|
||||
static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
|
||||
|
||||
static u32 hsx_deadline_rev(void)
|
||||
static __init u32 hsx_deadline_rev(void)
|
||||
{
|
||||
switch (boot_cpu_data.x86_stepping) {
|
||||
case 0x02: return 0x3a; /* EP */
|
||||
@@ -556,7 +554,7 @@ static u32 hsx_deadline_rev(void)
|
||||
return ~0U;
|
||||
}
|
||||
|
||||
static u32 bdx_deadline_rev(void)
|
||||
static __init u32 bdx_deadline_rev(void)
|
||||
{
|
||||
switch (boot_cpu_data.x86_stepping) {
|
||||
case 0x02: return 0x00000011;
|
||||
@@ -568,7 +566,7 @@ static u32 bdx_deadline_rev(void)
|
||||
return ~0U;
|
||||
}
|
||||
|
||||
static u32 skx_deadline_rev(void)
|
||||
static __init u32 skx_deadline_rev(void)
|
||||
{
|
||||
switch (boot_cpu_data.x86_stepping) {
|
||||
case 0x03: return 0x01000136;
|
||||
@@ -581,7 +579,7 @@ static u32 skx_deadline_rev(void)
|
||||
return ~0U;
|
||||
}
|
||||
|
||||
static const struct x86_cpu_id deadline_match[] = {
|
||||
static const struct x86_cpu_id deadline_match[] __initconst = {
|
||||
X86_MATCH_INTEL_FAM6_MODEL( HASWELL_X, &hsx_deadline_rev),
|
||||
X86_MATCH_INTEL_FAM6_MODEL( BROADWELL_X, 0x0b000020),
|
||||
X86_MATCH_INTEL_FAM6_MODEL( BROADWELL_D, &bdx_deadline_rev),
|
||||
@@ -603,18 +601,19 @@ static const struct x86_cpu_id deadline_match[] = {
|
||||
{},
|
||||
};
|
||||
|
||||
static void apic_check_deadline_errata(void)
|
||||
static __init bool apic_validate_deadline_timer(void)
|
||||
{
|
||||
const struct x86_cpu_id *m;
|
||||
u32 rev;
|
||||
|
||||
if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER) ||
|
||||
boot_cpu_has(X86_FEATURE_HYPERVISOR))
|
||||
return;
|
||||
if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER))
|
||||
return false;
|
||||
if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
|
||||
return true;
|
||||
|
||||
m = x86_match_cpu(deadline_match);
|
||||
if (!m)
|
||||
return;
|
||||
return true;
|
||||
|
||||
/*
|
||||
* Function pointers will have the MSB set due to address layout,
|
||||
@@ -626,11 +625,12 @@ static void apic_check_deadline_errata(void)
|
||||
rev = (u32)m->driver_data;
|
||||
|
||||
if (boot_cpu_data.microcode >= rev)
|
||||
return;
|
||||
return true;
|
||||
|
||||
setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
|
||||
pr_err(FW_BUG "TSC_DEADLINE disabled due to Errata; "
|
||||
"please update microcode to version: 0x%x (or later)\n", rev);
|
||||
return false;
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -2092,7 +2092,8 @@ void __init init_apic_mappings(void)
|
||||
{
|
||||
unsigned int new_apicid;
|
||||
|
||||
apic_check_deadline_errata();
|
||||
if (apic_validate_deadline_timer())
|
||||
pr_debug("TSC deadline timer available\n");
|
||||
|
||||
if (x2apic_mode) {
|
||||
boot_cpu_physical_apicid = read_apic_id();
|
||||
|
||||
@@ -183,7 +183,8 @@ recursion_check:
|
||||
*/
|
||||
if (visit_mask) {
|
||||
if (*visit_mask & (1UL << info->type)) {
|
||||
printk_deferred_once(KERN_WARNING "WARNING: stack recursion on stack type %d\n", info->type);
|
||||
if (task == current)
|
||||
printk_deferred_once(KERN_WARNING "WARNING: stack recursion on stack type %d\n", info->type);
|
||||
goto unknown;
|
||||
}
|
||||
*visit_mask |= 1UL << info->type;
|
||||
|
||||
@@ -344,6 +344,9 @@ bad_address:
|
||||
if (IS_ENABLED(CONFIG_X86_32))
|
||||
goto the_end;
|
||||
|
||||
if (state->task != current)
|
||||
goto the_end;
|
||||
|
||||
if (state->regs) {
|
||||
printk_deferred_once(KERN_WARNING
|
||||
"WARNING: kernel stack regs at %p in %s:%d has bad 'bp' value %p\n",
|
||||
|
||||
@@ -8,19 +8,21 @@
|
||||
#include <asm/orc_lookup.h>
|
||||
|
||||
#define orc_warn(fmt, ...) \
|
||||
printk_deferred_once(KERN_WARNING pr_fmt("WARNING: " fmt), ##__VA_ARGS__)
|
||||
printk_deferred_once(KERN_WARNING "WARNING: " fmt, ##__VA_ARGS__)
|
||||
|
||||
#define orc_warn_current(args...) \
|
||||
({ \
|
||||
if (state->task == current) \
|
||||
orc_warn(args); \
|
||||
})
|
||||
|
||||
extern int __start_orc_unwind_ip[];
|
||||
extern int __stop_orc_unwind_ip[];
|
||||
extern struct orc_entry __start_orc_unwind[];
|
||||
extern struct orc_entry __stop_orc_unwind[];
|
||||
|
||||
static DEFINE_MUTEX(sort_mutex);
|
||||
int *cur_orc_ip_table = __start_orc_unwind_ip;
|
||||
struct orc_entry *cur_orc_table = __start_orc_unwind;
|
||||
|
||||
unsigned int lookup_num_blocks;
|
||||
bool orc_init;
|
||||
static bool orc_init __ro_after_init;
|
||||
static unsigned int lookup_num_blocks __ro_after_init;
|
||||
|
||||
static inline unsigned long orc_ip(const int *ip)
|
||||
{
|
||||
@@ -142,9 +144,6 @@ static struct orc_entry *orc_find(unsigned long ip)
|
||||
{
|
||||
static struct orc_entry *orc;
|
||||
|
||||
if (!orc_init)
|
||||
return NULL;
|
||||
|
||||
if (ip == 0)
|
||||
return &null_orc_entry;
|
||||
|
||||
@@ -189,6 +188,10 @@ static struct orc_entry *orc_find(unsigned long ip)
|
||||
|
||||
#ifdef CONFIG_MODULES
|
||||
|
||||
static DEFINE_MUTEX(sort_mutex);
|
||||
static int *cur_orc_ip_table = __start_orc_unwind_ip;
|
||||
static struct orc_entry *cur_orc_table = __start_orc_unwind;
|
||||
|
||||
static void orc_sort_swap(void *_a, void *_b, int size)
|
||||
{
|
||||
struct orc_entry *orc_a, *orc_b;
|
||||
@@ -381,9 +384,38 @@ static bool deref_stack_iret_regs(struct unwind_state *state, unsigned long addr
|
||||
return true;
|
||||
}
|
||||
|
||||
/*
|
||||
* If state->regs is non-NULL, and points to a full pt_regs, just get the reg
|
||||
* value from state->regs.
|
||||
*
|
||||
* Otherwise, if state->regs just points to IRET regs, and the previous frame
|
||||
* had full regs, it's safe to get the value from the previous regs. This can
|
||||
* happen when early/late IRQ entry code gets interrupted by an NMI.
|
||||
*/
|
||||
static bool get_reg(struct unwind_state *state, unsigned int reg_off,
|
||||
unsigned long *val)
|
||||
{
|
||||
unsigned int reg = reg_off/8;
|
||||
|
||||
if (!state->regs)
|
||||
return false;
|
||||
|
||||
if (state->full_regs) {
|
||||
*val = ((unsigned long *)state->regs)[reg];
|
||||
return true;
|
||||
}
|
||||
|
||||
if (state->prev_regs) {
|
||||
*val = ((unsigned long *)state->prev_regs)[reg];
|
||||
return true;
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
bool unwind_next_frame(struct unwind_state *state)
|
||||
{
|
||||
unsigned long ip_p, sp, orig_ip = state->ip, prev_sp = state->sp;
|
||||
unsigned long ip_p, sp, tmp, orig_ip = state->ip, prev_sp = state->sp;
|
||||
enum stack_type prev_type = state->stack_info.type;
|
||||
struct orc_entry *orc;
|
||||
bool indirect = false;
|
||||
@@ -445,43 +477,39 @@ bool unwind_next_frame(struct unwind_state *state)
|
||||
break;
|
||||
|
||||
case ORC_REG_R10:
|
||||
if (!state->regs || !state->full_regs) {
|
||||
orc_warn("missing regs for base reg R10 at ip %pB\n",
|
||||
(void *)state->ip);
|
||||
if (!get_reg(state, offsetof(struct pt_regs, r10), &sp)) {
|
||||
orc_warn_current("missing R10 value at %pB\n",
|
||||
(void *)state->ip);
|
||||
goto err;
|
||||
}
|
||||
sp = state->regs->r10;
|
||||
break;
|
||||
|
||||
case ORC_REG_R13:
|
||||
if (!state->regs || !state->full_regs) {
|
||||
orc_warn("missing regs for base reg R13 at ip %pB\n",
|
||||
(void *)state->ip);
|
||||
if (!get_reg(state, offsetof(struct pt_regs, r13), &sp)) {
|
||||
orc_warn_current("missing R13 value at %pB\n",
|
||||
(void *)state->ip);
|
||||
goto err;
|
||||
}
|
||||
sp = state->regs->r13;
|
||||
break;
|
||||
|
||||
case ORC_REG_DI:
|
||||
if (!state->regs || !state->full_regs) {
|
||||
orc_warn("missing regs for base reg DI at ip %pB\n",
|
||||
(void *)state->ip);
|
||||
if (!get_reg(state, offsetof(struct pt_regs, di), &sp)) {
|
||||
orc_warn_current("missing RDI value at %pB\n",
|
||||
(void *)state->ip);
|
||||
goto err;
|
||||
}
|
||||
sp = state->regs->di;
|
||||
break;
|
||||
|
||||
case ORC_REG_DX:
|
||||
if (!state->regs || !state->full_regs) {
|
||||
orc_warn("missing regs for base reg DX at ip %pB\n",
|
||||
(void *)state->ip);
|
||||
if (!get_reg(state, offsetof(struct pt_regs, dx), &sp)) {
|
||||
orc_warn_current("missing DX value at %pB\n",
|
||||
(void *)state->ip);
|
||||
goto err;
|
||||
}
|
||||
sp = state->regs->dx;
|
||||
break;
|
||||
|
||||
default:
|
||||
orc_warn("unknown SP base reg %d for ip %pB\n",
|
||||
orc_warn("unknown SP base reg %d at %pB\n",
|
||||
orc->sp_reg, (void *)state->ip);
|
||||
goto err;
|
||||
}
|
||||
@@ -504,44 +532,48 @@ bool unwind_next_frame(struct unwind_state *state)

state->sp = sp;
state->regs = NULL;
state->prev_regs = NULL;
state->signal = false;
break;

case ORC_TYPE_REGS:
if (!deref_stack_regs(state, sp, &state->ip, &state->sp)) {
orc_warn("can't dereference registers at %p for ip %pB\n",
(void *)sp, (void *)orig_ip);
orc_warn_current("can't access registers at %pB\n",
(void *)orig_ip);
goto err;
}

state->regs = (struct pt_regs *)sp;
state->prev_regs = NULL;
state->full_regs = true;
state->signal = true;
break;

case ORC_TYPE_REGS_IRET:
if (!deref_stack_iret_regs(state, sp, &state->ip, &state->sp)) {
orc_warn("can't dereference iret registers at %p for ip %pB\n",
(void *)sp, (void *)orig_ip);
orc_warn_current("can't access iret registers at %pB\n",
(void *)orig_ip);
goto err;
}

if (state->full_regs)
state->prev_regs = state->regs;
state->regs = (void *)sp - IRET_FRAME_OFFSET;
state->full_regs = false;
state->signal = true;
break;

default:
orc_warn("unknown .orc_unwind entry type %d for ip %pB\n",
orc_warn("unknown .orc_unwind entry type %d at %pB\n",
orc->type, (void *)orig_ip);
break;
goto err;
}

/* Find BP: */
switch (orc->bp_reg) {
case ORC_REG_UNDEFINED:
if (state->regs && state->full_regs)
state->bp = state->regs->bp;
if (get_reg(state, offsetof(struct pt_regs, bp), &tmp))
state->bp = tmp;
break;

case ORC_REG_PREV_SP:
@@ -564,8 +596,8 @@ bool unwind_next_frame(struct unwind_state *state)
if (state->stack_info.type == prev_type &&
on_stack(&state->stack_info, (void *)state->sp, sizeof(long)) &&
state->sp <= prev_sp) {
orc_warn("stack going in the wrong direction? ip=%pB\n",
(void *)orig_ip);
orc_warn_current("stack going in the wrong direction? at %pB\n",
(void *)orig_ip);
goto err;
}

@@ -585,6 +617,9 @@ EXPORT_SYMBOL_GPL(unwind_next_frame);
void __unwind_start(struct unwind_state *state, struct task_struct *task,
struct pt_regs *regs, unsigned long *first_frame)
{
if (!orc_init)
goto done;

memset(state, 0, sizeof(*state));
state->task = task;

@@ -651,7 +686,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
/* Otherwise, skip ahead to the user-specified starting frame: */
while (!unwind_done(state) &&
(!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
state->sp <= (unsigned long)first_frame))
state->sp < (unsigned long)first_frame))
unwind_next_frame(state);

return;

@@ -225,12 +225,12 @@ static int ioapic_set_irq(struct kvm_ioapic *ioapic, unsigned int irq,
}

/*
* AMD SVM AVIC accelerate EOI write and do not trap,
* in-kernel IOAPIC will not be able to receive the EOI.
* In this case, we do lazy update of the pending EOI when
* trying to set IOAPIC irq.
* AMD SVM AVIC accelerate EOI write iff the interrupt is edge
* triggered, in which case the in-kernel IOAPIC will not be able
* to receive the EOI. In this case, we do a lazy update of the
* pending EOI when trying to set IOAPIC irq.
*/
if (kvm_apicv_activated(ioapic->kvm))
if (edge && kvm_apicv_activated(ioapic->kvm))
ioapic_lazy_update_eoi(ioapic, irq);

/*

@@ -345,7 +345,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
return NULL;

/* Pin the user virtual address. */
npinned = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
npinned = get_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
if (npinned != npages) {
pr_err("SEV: Failure locking %lu pages.\n", npages);
goto err;

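A minimal sketch of the pinning pattern in the hunk above, assuming a hypothetical wrapper around the real get_user_pages_fast() and release_pages() calls; only the conditional FOLL_WRITE flag mirrors the change itself:

/* Sketch: pin a user buffer, requesting write access only when needed. */
static struct page **pin_user_buf(unsigned long uaddr, unsigned long npages,
				  bool write)
{
	struct page **pages;
	int npinned;

	pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	npinned = get_user_pages_fast(uaddr, npages,
				      write ? FOLL_WRITE : 0, pages);
	if (npinned != npages) {
		if (npinned > 0)
			release_pages(pages, npinned);	/* drop partial pins */
		kvfree(pages);
		return NULL;
	}
	return pages;
}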
@@ -1752,6 +1752,8 @@ static int db_interception(struct vcpu_svm *svm)
if (svm->vcpu.guest_debug &
(KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) {
kvm_run->exit_reason = KVM_EXIT_DEBUG;
kvm_run->debug.arch.dr6 = svm->vmcb->save.dr6;
kvm_run->debug.arch.dr7 = svm->vmcb->save.dr7;
kvm_run->debug.arch.pc =
svm->vmcb->save.cs.base + svm->vmcb->save.rip;
kvm_run->debug.arch.exception = DB_VECTOR;

@@ -5165,7 +5165,7 @@ static int handle_invept(struct kvm_vcpu *vcpu)
*/
break;
default:
BUG_ON(1);
BUG();
break;
}


@@ -82,6 +82,9 @@ SYM_FUNC_START(vmx_vmexit)
/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE

/* Clear RFLAGS.CF and RFLAGS.ZF to preserve VM-Exit, i.e. !VM-Fail. */
or $1, %_ASM_AX

pop %_ASM_AX
.Lvmexit_skip_rsb:
#endif

@@ -926,19 +926,6 @@ EXPORT_SYMBOL_GPL(kvm_set_xcr);
__reserved_bits; \
})

static u64 kvm_host_cr4_reserved_bits(struct cpuinfo_x86 *c)
{
u64 reserved_bits = __cr4_reserved_bits(cpu_has, c);

if (kvm_cpu_cap_has(X86_FEATURE_LA57))
reserved_bits &= ~X86_CR4_LA57;

if (kvm_cpu_cap_has(X86_FEATURE_UMIP))
reserved_bits &= ~X86_CR4_UMIP;

return reserved_bits;
}

static int kvm_valid_cr4(struct kvm_vcpu *vcpu, unsigned long cr4)
{
if (cr4 & cr4_reserved_bits)
@@ -3385,6 +3372,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
case KVM_CAP_GET_MSR_FEATURES:
case KVM_CAP_MSR_PLATFORM_INFO:
case KVM_CAP_EXCEPTION_PAYLOAD:
case KVM_CAP_SET_GUEST_DEBUG:
r = 1;
break;
case KVM_CAP_SYNC_REGS:
@@ -9675,7 +9663,9 @@ int kvm_arch_hardware_setup(void *opaque)
if (!kvm_cpu_cap_has(X86_FEATURE_XSAVES))
supported_xss = 0;

cr4_reserved_bits = kvm_host_cr4_reserved_bits(&boot_cpu_data);
#define __kvm_cpu_cap_has(UNUSED_, f) kvm_cpu_cap_has(f)
cr4_reserved_bits = __cr4_reserved_bits(__kvm_cpu_cap_has, UNUSED_);
#undef __kvm_cpu_cap_has

if (kvm_has_tsc_control) {
/*
@@ -9707,7 +9697,8 @@ int kvm_arch_check_processor_compat(void *opaque)

WARN_ON(!irqs_disabled());

if (kvm_host_cr4_reserved_bits(c) != cr4_reserved_bits)
if (__cr4_reserved_bits(cpu_has, c) !=
__cr4_reserved_bits(cpu_has, &boot_cpu_data))
return -EIO;

return ops->check_processor_compatibility();

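A hedged sketch of the adapter-macro trick in the hunk above (the RESERVED_BITS macro here is a simplified stand-in, not the real __cr4_reserved_bits): the checker macro expects a two-argument has(ctx, feature) form, and the one-argument kvm_cpu_cap_has() is adapted to it with a throwaway macro whose first argument is never evaluated.

/* Simplified stand-in for a checker macro parameterized by a predicate. */
#define RESERVED_BITS(has, ctx)				\
({							\
	u64 __bits = X86_CR4_LA57;			\
	if (has(ctx, X86_FEATURE_LA57))			\
		__bits &= ~X86_CR4_LA57;		\
	__bits;						\
})

static u64 example_host_reserved_bits(struct cpuinfo_x86 *c)
{
	return RESERVED_BITS(cpu_has, c);		/* boot-CPU view */
}

static u64 example_kvm_reserved_bits(void)
{
#define __example_cap_has(UNUSED_, f) kvm_cpu_cap_has(f)
	return RESERVED_BITS(__example_cap_has, UNUSED_); /* KVM caps view */
#undef __example_cap_has
}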
@@ -43,7 +43,8 @@ struct cpa_data {
unsigned long pfn;
unsigned int flags;
unsigned int force_split : 1,
force_static_prot : 1;
force_static_prot : 1,
force_flush_all : 1;
struct page **pages;
};

@@ -355,10 +356,10 @@ static void cpa_flush(struct cpa_data *data, int cache)
return;
}

if (cpa->numpages <= tlb_single_page_flush_ceiling)
on_each_cpu(__cpa_flush_tlb, cpa, 1);
else
if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling)
flush_tlb_all();
else
on_each_cpu(__cpa_flush_tlb, cpa, 1);

if (!cache)
return;
@@ -1598,6 +1599,8 @@ static int cpa_process_alias(struct cpa_data *cpa)
alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
alias_cpa.curpage = 0;

cpa->force_flush_all = 1;

ret = __change_page_attr_set_clr(&alias_cpa, 0);
if (ret)
return ret;
@@ -1618,6 +1621,7 @@ static int cpa_process_alias(struct cpa_data *cpa)
alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
alias_cpa.curpage = 0;

cpa->force_flush_all = 1;
/*
* The high mapping range is imprecise, so ignore the
* return value.

@@ -123,6 +123,7 @@
#include <linux/ioprio.h>
#include <linux/sbitmap.h>
#include <linux/delay.h>
#include <linux/backing-dev.h>

#include "blk.h"
#include "blk-mq.h"
@@ -4976,8 +4977,9 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
switch (ioprio_class) {
default:
dev_err(bfqq->bfqd->queue->backing_dev_info->dev,
"bfq: bad prio class %d\n", ioprio_class);
pr_err("bdi %s: bfq: bad prio class %d\n",
bdi_dev_name(bfqq->bfqd->queue->backing_dev_info),
ioprio_class);
/* fall through */
case IOPRIO_CLASS_NONE:
/*

@@ -496,7 +496,7 @@ const char *blkg_dev_name(struct blkcg_gq *blkg)
{
/* some drivers (floppy) instantiate a queue w/o disk registered */
if (blkg->q->backing_dev_info->dev)
return dev_name(blkg->q->backing_dev_info->dev);
return bdi_dev_name(blkg->q->backing_dev_info);
return NULL;
}


@@ -466,7 +466,7 @@ struct ioc_gq {
|
||||
*/
|
||||
atomic64_t vtime;
|
||||
atomic64_t done_vtime;
|
||||
atomic64_t abs_vdebt;
|
||||
u64 abs_vdebt;
|
||||
u64 last_vtime;
|
||||
|
||||
/*
|
||||
@@ -1142,7 +1142,7 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, struct ioc_now *now)
|
||||
struct iocg_wake_ctx ctx = { .iocg = iocg };
|
||||
u64 margin_ns = (u64)(ioc->period_us *
|
||||
WAITQ_TIMER_MARGIN_PCT / 100) * NSEC_PER_USEC;
|
||||
u64 abs_vdebt, vdebt, vshortage, expires, oexpires;
|
||||
u64 vdebt, vshortage, expires, oexpires;
|
||||
s64 vbudget;
|
||||
u32 hw_inuse;
|
||||
|
||||
@@ -1152,18 +1152,15 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, struct ioc_now *now)
|
||||
vbudget = now->vnow - atomic64_read(&iocg->vtime);
|
||||
|
||||
/* pay off debt */
|
||||
abs_vdebt = atomic64_read(&iocg->abs_vdebt);
|
||||
vdebt = abs_cost_to_cost(abs_vdebt, hw_inuse);
|
||||
vdebt = abs_cost_to_cost(iocg->abs_vdebt, hw_inuse);
|
||||
if (vdebt && vbudget > 0) {
|
||||
u64 delta = min_t(u64, vbudget, vdebt);
|
||||
u64 abs_delta = min(cost_to_abs_cost(delta, hw_inuse),
|
||||
abs_vdebt);
|
||||
iocg->abs_vdebt);
|
||||
|
||||
atomic64_add(delta, &iocg->vtime);
|
||||
atomic64_add(delta, &iocg->done_vtime);
|
||||
atomic64_sub(abs_delta, &iocg->abs_vdebt);
|
||||
if (WARN_ON_ONCE(atomic64_read(&iocg->abs_vdebt) < 0))
|
||||
atomic64_set(&iocg->abs_vdebt, 0);
|
||||
iocg->abs_vdebt -= abs_delta;
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -1219,12 +1216,18 @@ static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now, u64 cost)
|
||||
u64 expires, oexpires;
|
||||
u32 hw_inuse;
|
||||
|
||||
lockdep_assert_held(&iocg->waitq.lock);
|
||||
|
||||
/* debt-adjust vtime */
|
||||
current_hweight(iocg, NULL, &hw_inuse);
|
||||
vtime += abs_cost_to_cost(atomic64_read(&iocg->abs_vdebt), hw_inuse);
|
||||
vtime += abs_cost_to_cost(iocg->abs_vdebt, hw_inuse);
|
||||
|
||||
/* clear or maintain depending on the overage */
|
||||
if (time_before_eq64(vtime, now->vnow)) {
|
||||
/*
|
||||
* Clear or maintain depending on the overage. Non-zero vdebt is what
|
||||
* guarantees that @iocg is online and future iocg_kick_delay() will
|
||||
* clear use_delay. Don't leave it on when there's no vdebt.
|
||||
*/
|
||||
if (!iocg->abs_vdebt || time_before_eq64(vtime, now->vnow)) {
|
||||
blkcg_clear_delay(blkg);
|
||||
return false;
|
||||
}
|
||||
@@ -1258,9 +1261,12 @@ static enum hrtimer_restart iocg_delay_timer_fn(struct hrtimer *timer)
|
||||
{
|
||||
struct ioc_gq *iocg = container_of(timer, struct ioc_gq, delay_timer);
|
||||
struct ioc_now now;
|
||||
unsigned long flags;
|
||||
|
||||
spin_lock_irqsave(&iocg->waitq.lock, flags);
|
||||
ioc_now(iocg->ioc, &now);
|
||||
iocg_kick_delay(iocg, &now, 0);
|
||||
spin_unlock_irqrestore(&iocg->waitq.lock, flags);
|
||||
|
||||
return HRTIMER_NORESTART;
|
||||
}
|
||||
@@ -1368,14 +1374,13 @@ static void ioc_timer_fn(struct timer_list *timer)
|
||||
* should have woken up in the last period and expire idle iocgs.
|
||||
*/
|
||||
list_for_each_entry_safe(iocg, tiocg, &ioc->active_iocgs, active_list) {
|
||||
if (!waitqueue_active(&iocg->waitq) &&
|
||||
!atomic64_read(&iocg->abs_vdebt) && !iocg_is_idle(iocg))
|
||||
if (!waitqueue_active(&iocg->waitq) && iocg->abs_vdebt &&
|
||||
!iocg_is_idle(iocg))
|
||||
continue;
|
||||
|
||||
spin_lock(&iocg->waitq.lock);
|
||||
|
||||
if (waitqueue_active(&iocg->waitq) ||
|
||||
atomic64_read(&iocg->abs_vdebt)) {
|
||||
if (waitqueue_active(&iocg->waitq) || iocg->abs_vdebt) {
|
||||
/* might be oversleeping vtime / hweight changes, kick */
|
||||
iocg_kick_waitq(iocg, &now);
|
||||
iocg_kick_delay(iocg, &now, 0);
|
||||
@@ -1718,28 +1723,49 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
|
||||
* tests are racy but the races aren't systemic - we only miss once
|
||||
* in a while which is fine.
|
||||
*/
|
||||
if (!waitqueue_active(&iocg->waitq) &&
|
||||
!atomic64_read(&iocg->abs_vdebt) &&
|
||||
if (!waitqueue_active(&iocg->waitq) && !iocg->abs_vdebt &&
|
||||
time_before_eq64(vtime + cost, now.vnow)) {
|
||||
iocg_commit_bio(iocg, bio, cost);
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* We're over budget. If @bio has to be issued regardless,
|
||||
* remember the abs_cost instead of advancing vtime.
|
||||
* iocg_kick_waitq() will pay off the debt before waking more IOs.
|
||||
* We activated above but w/o any synchronization. Deactivation is
|
||||
* synchronized with waitq.lock and we won't get deactivated as long
|
||||
* as we're waiting or has debt, so we're good if we're activated
|
||||
* here. In the unlikely case that we aren't, just issue the IO.
|
||||
*/
|
||||
spin_lock_irq(&iocg->waitq.lock);
|
||||
|
||||
if (unlikely(list_empty(&iocg->active_list))) {
|
||||
spin_unlock_irq(&iocg->waitq.lock);
|
||||
iocg_commit_bio(iocg, bio, cost);
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* We're over budget. If @bio has to be issued regardless, remember
|
||||
* the abs_cost instead of advancing vtime. iocg_kick_waitq() will pay
|
||||
* off the debt before waking more IOs.
|
||||
*
|
||||
* This way, the debt is continuously paid off each period with the
|
||||
* actual budget available to the cgroup. If we just wound vtime,
|
||||
* we would incorrectly use the current hw_inuse for the entire
|
||||
* amount which, for example, can lead to the cgroup staying
|
||||
* blocked for a long time even with substantially raised hw_inuse.
|
||||
* actual budget available to the cgroup. If we just wound vtime, we
|
||||
* would incorrectly use the current hw_inuse for the entire amount
|
||||
* which, for example, can lead to the cgroup staying blocked for a
|
||||
* long time even with substantially raised hw_inuse.
|
||||
*
|
||||
* An iocg with vdebt should stay online so that the timer can keep
|
||||
* deducting its vdebt and [de]activate use_delay mechanism
|
||||
* accordingly. We don't want to race against the timer trying to
|
||||
* clear them and leave @iocg inactive w/ dangling use_delay heavily
|
||||
* penalizing the cgroup and its descendants.
|
||||
*/
|
||||
if (bio_issue_as_root_blkg(bio) || fatal_signal_pending(current)) {
|
||||
atomic64_add(abs_cost, &iocg->abs_vdebt);
|
||||
iocg->abs_vdebt += abs_cost;
|
||||
if (iocg_kick_delay(iocg, &now, cost))
|
||||
blkcg_schedule_throttle(rqos->q,
|
||||
(bio->bi_opf & REQ_SWAP) == REQ_SWAP);
|
||||
spin_unlock_irq(&iocg->waitq.lock);
|
||||
return;
|
||||
}
|
||||
|
||||
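A rough sketch of the debt bookkeeping the comments above describe (the structure and helper names here are simplified; only abs_cost_to_cost()/cost_to_abs_cost() are taken from the surrounding code): an over-budget group records the absolute cost as debt under its lock, and later periods convert whatever vtime budget has accrued back into debt repayment.

/* Illustrative only: simplified per-group debt accounting. */
struct example_iocg {
	spinlock_t lock;
	u64 vtime;		/* consumed budget, in vtime units */
	u64 abs_vdebt;		/* deferred cost, in absolute units */
};

static void example_add_debt(struct example_iocg *iocg, u64 abs_cost)
{
	spin_lock_irq(&iocg->lock);
	iocg->abs_vdebt += abs_cost;	/* issue the bio now, pay later */
	spin_unlock_irq(&iocg->lock);
}

static void example_pay_debt(struct example_iocg *iocg, u64 vbudget,
			     u32 hw_inuse)
{
	u64 vdebt = abs_cost_to_cost(iocg->abs_vdebt, hw_inuse);
	u64 delta = min(vbudget, vdebt);

	iocg->vtime += delta;
	iocg->abs_vdebt -= min(cost_to_abs_cost(delta, hw_inuse),
			       iocg->abs_vdebt);
}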
@@ -1756,20 +1782,6 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
|
||||
* All waiters are on iocg->waitq and the wait states are
|
||||
* synchronized using waitq.lock.
|
||||
*/
|
||||
spin_lock_irq(&iocg->waitq.lock);
|
||||
|
||||
/*
|
||||
* We activated above but w/o any synchronization. Deactivation is
|
||||
* synchronized with waitq.lock and we won't get deactivated as
|
||||
* long as we're waiting, so we're good if we're activated here.
|
||||
* In the unlikely case that we are deactivated, just issue the IO.
|
||||
*/
|
||||
if (unlikely(list_empty(&iocg->active_list))) {
|
||||
spin_unlock_irq(&iocg->waitq.lock);
|
||||
iocg_commit_bio(iocg, bio, cost);
|
||||
return;
|
||||
}
|
||||
|
||||
init_waitqueue_func_entry(&wait.wait, iocg_wake_fn);
|
||||
wait.wait.private = current;
|
||||
wait.bio = bio;
|
||||
@@ -1801,6 +1813,7 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq,
|
||||
struct ioc_now now;
|
||||
u32 hw_inuse;
|
||||
u64 abs_cost, cost;
|
||||
unsigned long flags;
|
||||
|
||||
/* bypass if disabled or for root cgroup */
|
||||
if (!ioc->enabled || !iocg->level)
|
||||
@@ -1820,15 +1833,28 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq,
|
||||
iocg->cursor = bio_end;
|
||||
|
||||
/*
|
||||
* Charge if there's enough vtime budget and the existing request
|
||||
* has cost assigned. Otherwise, account it as debt. See debt
|
||||
* handling in ioc_rqos_throttle() for details.
|
||||
* Charge if there's enough vtime budget and the existing request has
|
||||
* cost assigned.
|
||||
*/
|
||||
if (rq->bio && rq->bio->bi_iocost_cost &&
|
||||
time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow))
|
||||
time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow)) {
|
||||
iocg_commit_bio(iocg, bio, cost);
|
||||
else
|
||||
atomic64_add(abs_cost, &iocg->abs_vdebt);
|
||||
return;
|
||||
}
|
||||
|
||||
/*
|
||||
* Otherwise, account it as debt if @iocg is online, which it should
|
||||
* be for the vast majority of cases. See debt handling in
|
||||
* ioc_rqos_throttle() for details.
|
||||
*/
|
||||
spin_lock_irqsave(&iocg->waitq.lock, flags);
|
||||
if (likely(!list_empty(&iocg->active_list))) {
|
||||
iocg->abs_vdebt += abs_cost;
|
||||
iocg_kick_delay(iocg, &now, cost);
|
||||
} else {
|
||||
iocg_commit_bio(iocg, bio, cost);
|
||||
}
|
||||
spin_unlock_irqrestore(&iocg->waitq.lock, flags);
|
||||
}
|
||||
|
||||
static void ioc_rqos_done_bio(struct rq_qos *rqos, struct bio *bio)
|
||||
@@ -1998,7 +2024,6 @@ static void ioc_pd_init(struct blkg_policy_data *pd)
|
||||
iocg->ioc = ioc;
|
||||
atomic64_set(&iocg->vtime, now.vnow);
|
||||
atomic64_set(&iocg->done_vtime, now.vnow);
|
||||
atomic64_set(&iocg->abs_vdebt, 0);
|
||||
atomic64_set(&iocg->active_period, atomic64_read(&ioc->cur_period));
|
||||
INIT_LIST_HEAD(&iocg->active_list);
|
||||
iocg->hweight_active = HWEIGHT_WHOLE;
|
||||
|
||||
@@ -287,7 +287,7 @@ static void exit_tfm(struct crypto_skcipher *tfm)
|
||||
crypto_free_skcipher(ctx->child);
|
||||
}
|
||||
|
||||
static void free(struct skcipher_instance *inst)
|
||||
static void free_inst(struct skcipher_instance *inst)
|
||||
{
|
||||
crypto_drop_skcipher(skcipher_instance_ctx(inst));
|
||||
kfree(inst);
|
||||
@@ -400,12 +400,12 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
|
||||
inst->alg.encrypt = encrypt;
|
||||
inst->alg.decrypt = decrypt;
|
||||
|
||||
inst->free = free;
|
||||
inst->free = free_inst;
|
||||
|
||||
err = skcipher_register_instance(tmpl, inst);
|
||||
if (err) {
|
||||
err_free_inst:
|
||||
free(inst);
|
||||
free_inst(inst);
|
||||
}
|
||||
return err;
|
||||
}
|
||||
|
||||
@@ -322,7 +322,7 @@ static void exit_tfm(struct crypto_skcipher *tfm)
|
||||
crypto_free_cipher(ctx->tweak);
|
||||
}
|
||||
|
||||
static void free(struct skcipher_instance *inst)
|
||||
static void free_inst(struct skcipher_instance *inst)
|
||||
{
|
||||
crypto_drop_skcipher(skcipher_instance_ctx(inst));
|
||||
kfree(inst);
|
||||
@@ -434,12 +434,12 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
|
||||
inst->alg.encrypt = encrypt;
|
||||
inst->alg.decrypt = decrypt;
|
||||
|
||||
inst->free = free;
|
||||
inst->free = free_inst;
|
||||
|
||||
err = skcipher_register_instance(tmpl, inst);
|
||||
if (err) {
|
||||
err_free_inst:
|
||||
free(inst);
|
||||
free_inst(inst);
|
||||
}
|
||||
return err;
|
||||
}
|
||||
|
||||
@@ -645,6 +645,7 @@ static void amba_device_initialize(struct amba_device *dev, const char *name)
|
||||
dev->dev.release = amba_device_release;
|
||||
dev->dev.bus = &amba_bustype;
|
||||
dev->dev.dma_mask = &dev->dev.coherent_dma_mask;
|
||||
dev->dev.dma_parms = &dev->dma_parms;
|
||||
dev->res.name = dev_name(&dev->dev);
|
||||
}
|
||||
|
||||
|
||||
@@ -256,7 +256,8 @@ static int try_to_bring_up_master(struct master *master,
|
||||
ret = master->ops->bind(master->dev);
|
||||
if (ret < 0) {
|
||||
devres_release_group(master->dev, NULL);
|
||||
dev_info(master->dev, "master bind failed: %d\n", ret);
|
||||
if (ret != -EPROBE_DEFER)
|
||||
dev_info(master->dev, "master bind failed: %d\n", ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -611,8 +612,9 @@ static int component_bind(struct component *component, struct master *master,
|
||||
devres_release_group(component->dev, NULL);
|
||||
devres_release_group(master->dev, NULL);
|
||||
|
||||
dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
|
||||
dev_name(component->dev), component->ops, ret);
|
||||
if (ret != -EPROBE_DEFER)
|
||||
dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
|
||||
dev_name(component->dev), component->ops, ret);
|
||||
}
|
||||
|
||||
return ret;
|
||||
|
||||
@@ -2370,6 +2370,11 @@ u32 fw_devlink_get_flags(void)
|
||||
return fw_devlink_flags;
|
||||
}
|
||||
|
||||
static bool fw_devlink_is_permissive(void)
|
||||
{
|
||||
return fw_devlink_flags == DL_FLAG_SYNC_STATE_ONLY;
|
||||
}
|
||||
|
||||
/**
|
||||
* device_add - add device to device hierarchy.
|
||||
* @dev: device.
|
||||
@@ -2524,7 +2529,7 @@ int device_add(struct device *dev)
|
||||
if (fw_devlink_flags && is_fwnode_dev &&
|
||||
fwnode_has_op(dev->fwnode, add_links)) {
|
||||
fw_ret = fwnode_call_int_op(dev->fwnode, add_links, dev);
|
||||
if (fw_ret == -ENODEV)
|
||||
if (fw_ret == -ENODEV && !fw_devlink_is_permissive())
|
||||
device_link_wait_for_mandatory_supplier(dev);
|
||||
else if (fw_ret)
|
||||
device_link_wait_for_optional_supplier(dev);
|
||||
|
||||
@@ -224,17 +224,9 @@ static int deferred_devs_show(struct seq_file *s, void *data)
|
||||
}
|
||||
DEFINE_SHOW_ATTRIBUTE(deferred_devs);
|
||||
|
||||
#ifdef CONFIG_MODULES
|
||||
/*
|
||||
* In the case of modules, set the default probe timeout to
|
||||
* 30 seconds to give userland some time to load needed modules
|
||||
*/
|
||||
int driver_deferred_probe_timeout = 30;
|
||||
#else
|
||||
/* In the case of !modules, no probe timeout needed */
|
||||
int driver_deferred_probe_timeout = -1;
|
||||
#endif
|
||||
int driver_deferred_probe_timeout;
|
||||
EXPORT_SYMBOL_GPL(driver_deferred_probe_timeout);
|
||||
static DECLARE_WAIT_QUEUE_HEAD(probe_timeout_waitqueue);
|
||||
|
||||
static int __init deferred_probe_timeout_setup(char *str)
|
||||
{
|
||||
@@ -266,8 +258,8 @@ int driver_deferred_probe_check_state(struct device *dev)
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
if (!driver_deferred_probe_timeout) {
|
||||
dev_WARN(dev, "deferred probe timeout, ignoring dependency");
|
||||
if (!driver_deferred_probe_timeout && initcalls_done) {
|
||||
dev_warn(dev, "deferred probe timeout, ignoring dependency");
|
||||
return -ETIMEDOUT;
|
||||
}
|
||||
|
||||
@@ -284,6 +276,7 @@ static void deferred_probe_timeout_work_func(struct work_struct *work)
|
||||
|
||||
list_for_each_entry_safe(private, p, &deferred_probe_pending_list, deferred_probe)
|
||||
dev_info(private->device, "deferred probe pending");
|
||||
wake_up(&probe_timeout_waitqueue);
|
||||
}
|
||||
static DECLARE_DELAYED_WORK(deferred_probe_timeout_work, deferred_probe_timeout_work_func);
|
||||
|
||||
@@ -658,6 +651,9 @@ int driver_probe_done(void)
|
||||
*/
|
||||
void wait_for_device_probe(void)
|
||||
{
|
||||
/* wait for probe timeout */
|
||||
wait_event(probe_timeout_waitqueue, !driver_deferred_probe_timeout);
|
||||
|
||||
/* wait for the deferred probe workqueue to finish */
|
||||
flush_work(&deferred_probe_work);
|
||||
|
||||
|
||||
@@ -380,6 +380,8 @@ struct platform_object {
|
||||
*/
|
||||
static void setup_pdev_dma_masks(struct platform_device *pdev)
|
||||
{
|
||||
pdev->dev.dma_parms = &pdev->dma_parms;
|
||||
|
||||
if (!pdev->dev.coherent_dma_mask)
|
||||
pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
|
||||
if (!pdev->dev.dma_mask) {
|
||||
|
||||
@@ -33,6 +33,15 @@ struct virtio_blk_vq {
|
||||
} ____cacheline_aligned_in_smp;
|
||||
|
||||
struct virtio_blk {
|
||||
/*
|
||||
* This mutex must be held by anything that may run after
|
||||
* virtblk_remove() sets vblk->vdev to NULL.
|
||||
*
|
||||
* blk-mq, virtqueue processing, and sysfs attribute code paths are
|
||||
* shut down before vblk->vdev is set to NULL and therefore do not need
|
||||
* to hold this mutex.
|
||||
*/
|
||||
struct mutex vdev_mutex;
|
||||
struct virtio_device *vdev;
|
||||
|
||||
/* The disk structure for the kernel. */
|
||||
@@ -44,6 +53,13 @@ struct virtio_blk {
|
||||
/* Process context for config space updates */
|
||||
struct work_struct config_work;
|
||||
|
||||
/*
|
||||
* Tracks references from block_device_operations open/release and
|
||||
* virtio_driver probe/remove so this object can be freed once no
|
||||
* longer in use.
|
||||
*/
|
||||
refcount_t refs;
|
||||
|
||||
/* What host tells us, plus 2 for header & tailer. */
|
||||
unsigned int sg_elems;
|
||||
|
||||
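A hedged sketch of the lifetime scheme the struct comments above describe (the function names are illustrative; the refs, vdev and vdev_mutex fields come from the struct itself): open/release only take a reference while vdev is still non-NULL under vdev_mutex, and the final put frees the object.

/* Illustrative only: reference counting gated by the vdev_mutex check. */
static void example_vblk_get(struct virtio_blk *vblk)
{
	refcount_inc(&vblk->refs);
}

static void example_vblk_put(struct virtio_blk *vblk)
{
	if (refcount_dec_and_test(&vblk->refs)) {
		mutex_destroy(&vblk->vdev_mutex);
		kfree(vblk);
	}
}

static int example_vblk_open(struct virtio_blk *vblk)
{
	int ret = 0;

	mutex_lock(&vblk->vdev_mutex);
	if (vblk->vdev)
		example_vblk_get(vblk);		/* device still present */
	else
		ret = -ENXIO;			/* raced with remove */
	mutex_unlock(&vblk->vdev_mutex);
	return ret;
}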
@@ -297,10 +313,55 @@ out:
|
||||
return err;
|
||||
}
|
||||
|
||||
static void virtblk_get(struct virtio_blk *vblk)
|
||||
{
|
||||
refcount_inc(&vblk->refs);
|
||||
}
|
||||
|
||||
static void virtblk_put(struct virtio_blk *vblk)
|
||||
{
|
||||
if (refcount_dec_and_test(&vblk->refs)) {
|
||||
ida_simple_remove(&vd_index_ida, vblk->index);
|
||||
mutex_destroy(&vblk->vdev_mutex);
|
||||
kfree(vblk);
|
||||
}
|
||||
}
|
||||
|
||||
static int virtblk_open(struct block_device *bd, fmode_t mode)
|
||||
{
|
||||
struct virtio_blk *vblk = bd->bd_disk->private_data;
|
||||
int ret = 0;
|
||||
|
||||
mutex_lock(&vblk->vdev_mutex);
|
||||
|
||||
if (vblk->vdev)
|
||||
virtblk_get(vblk);
|
||||
else
|
||||
ret = -ENXIO;
|
||||
|
||||
mutex_unlock(&vblk->vdev_mutex);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void virtblk_release(struct gendisk *disk, fmode_t mode)
|
||||
{
|
||||
struct virtio_blk *vblk = disk->private_data;
|
||||
|
||||
virtblk_put(vblk);
|
||||
}
|
||||
|
||||
/* We provide getgeo only to please some old bootloader/partitioning tools */
|
||||
static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
|
||||
{
|
||||
struct virtio_blk *vblk = bd->bd_disk->private_data;
|
||||
int ret = 0;
|
||||
|
||||
mutex_lock(&vblk->vdev_mutex);
|
||||
|
||||
if (!vblk->vdev) {
|
||||
ret = -ENXIO;
|
||||
goto out;
|
||||
}
|
||||
|
||||
/* see if the host passed in geometry config */
|
||||
if (virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_GEOMETRY)) {
|
||||
@@ -316,11 +377,15 @@ static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
|
||||
geo->sectors = 1 << 5;
|
||||
geo->cylinders = get_capacity(bd->bd_disk) >> 11;
|
||||
}
|
||||
return 0;
|
||||
out:
|
||||
mutex_unlock(&vblk->vdev_mutex);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static const struct block_device_operations virtblk_fops = {
|
||||
.owner = THIS_MODULE,
|
||||
.open = virtblk_open,
|
||||
.release = virtblk_release,
|
||||
.getgeo = virtblk_getgeo,
|
||||
};
|
||||
|
||||
@@ -657,6 +722,10 @@ static int virtblk_probe(struct virtio_device *vdev)
|
||||
goto out_free_index;
|
||||
}
|
||||
|
||||
/* This reference is dropped in virtblk_remove(). */
|
||||
refcount_set(&vblk->refs, 1);
|
||||
mutex_init(&vblk->vdev_mutex);
|
||||
|
||||
vblk->vdev = vdev;
|
||||
vblk->sg_elems = sg_elems;
|
||||
|
||||
@@ -822,8 +891,6 @@ out:
|
||||
static void virtblk_remove(struct virtio_device *vdev)
|
||||
{
|
||||
struct virtio_blk *vblk = vdev->priv;
|
||||
int index = vblk->index;
|
||||
int refc;
|
||||
|
||||
/* Make sure no work handler is accessing the device. */
|
||||
flush_work(&vblk->config_work);
|
||||
@@ -833,18 +900,21 @@ static void virtblk_remove(struct virtio_device *vdev)
|
||||
|
||||
blk_mq_free_tag_set(&vblk->tag_set);
|
||||
|
||||
mutex_lock(&vblk->vdev_mutex);
|
||||
|
||||
/* Stop all the virtqueues. */
|
||||
vdev->config->reset(vdev);
|
||||
|
||||
refc = kref_read(&disk_to_dev(vblk->disk)->kobj.kref);
|
||||
/* Virtqueues are stopped, nothing can use vblk->vdev anymore. */
|
||||
vblk->vdev = NULL;
|
||||
|
||||
put_disk(vblk->disk);
|
||||
vdev->config->del_vqs(vdev);
|
||||
kfree(vblk->vqs);
|
||||
kfree(vblk);
|
||||
|
||||
/* Only free device id if we don't have any users */
|
||||
if (refc == 1)
|
||||
ida_simple_remove(&vd_index_ida, index);
|
||||
mutex_unlock(&vblk->vdev_mutex);
|
||||
|
||||
virtblk_put(vblk);
|
||||
}
|
||||
|
||||
#ifdef CONFIG_PM_SLEEP
|
||||
|
||||
@@ -812,10 +812,9 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
|
||||
if (!mhi_cntrl)
|
||||
return -EINVAL;
|
||||
|
||||
if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
|
||||
return -EINVAL;
|
||||
|
||||
if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
|
||||
if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put ||
|
||||
!mhi_cntrl->status_cb || !mhi_cntrl->read_reg ||
|
||||
!mhi_cntrl->write_reg)
|
||||
return -EINVAL;
|
||||
|
||||
ret = parse_config(mhi_cntrl, config);
|
||||
|
||||
@@ -11,9 +11,6 @@
|
||||
|
||||
extern struct bus_type mhi_bus_type;
|
||||
|
||||
/* MHI MMIO register mapping */
|
||||
#define PCI_INVALID_READ(val) (val == U32_MAX)
|
||||
|
||||
#define MHIREGLEN (0x0)
|
||||
#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
|
||||
#define MHIREGLEN_MHIREGLEN_SHIFT (0)
|
||||
|
||||
@@ -18,16 +18,7 @@
|
||||
int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
|
||||
void __iomem *base, u32 offset, u32 *out)
|
||||
{
|
||||
u32 tmp = readl(base + offset);
|
||||
|
||||
/* If there is any unexpected value, query the link status */
|
||||
if (PCI_INVALID_READ(tmp) &&
|
||||
mhi_cntrl->link_status(mhi_cntrl))
|
||||
return -EIO;
|
||||
|
||||
*out = tmp;
|
||||
|
||||
return 0;
|
||||
return mhi_cntrl->read_reg(mhi_cntrl, base + offset, out);
|
||||
}
|
||||
|
||||
int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
|
||||
@@ -49,7 +40,7 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
|
||||
void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
|
||||
u32 offset, u32 val)
|
||||
{
|
||||
writel(val, base + offset);
|
||||
mhi_cntrl->write_reg(mhi_cntrl, base + offset, val);
|
||||
}
|
||||
|
||||
void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
|
||||
@@ -294,7 +285,7 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
|
||||
!(mhi_chan->ee_mask & BIT(mhi_cntrl->ee)))
|
||||
continue;
|
||||
mhi_dev = mhi_alloc_device(mhi_cntrl);
|
||||
if (!mhi_dev)
|
||||
if (IS_ERR(mhi_dev))
|
||||
return;
|
||||
|
||||
mhi_dev->dev_type = MHI_DEVICE_XFER;
|
||||
@@ -336,7 +327,8 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
|
||||
|
||||
/* Channel name is same for both UL and DL */
|
||||
mhi_dev->chan_name = mhi_chan->name;
|
||||
dev_set_name(&mhi_dev->dev, "%04x_%s", mhi_chan->chan,
|
||||
dev_set_name(&mhi_dev->dev, "%s_%s",
|
||||
dev_name(mhi_cntrl->cntrl_dev),
|
||||
mhi_dev->chan_name);
|
||||
|
||||
/* Init wakeup source if available */
|
||||
|
||||
@@ -902,7 +902,11 @@ int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
|
||||
MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
|
||||
msecs_to_jiffies(mhi_cntrl->timeout_ms));
|
||||
|
||||
return (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -EIO;
|
||||
ret = (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -ETIMEDOUT;
|
||||
if (ret)
|
||||
mhi_power_down(mhi_cntrl, false);
|
||||
|
||||
return ret;
|
||||
}
|
||||
EXPORT_SYMBOL(mhi_sync_power_up);
|
||||
|
||||
|
||||
@@ -673,41 +673,14 @@ int chcr_ktls_cpl_set_tcb_rpl(struct adapter *adap, unsigned char *input)
|
||||
return 0;
|
||||
}
|
||||
|
||||
/*
|
||||
* chcr_write_cpl_set_tcb_ulp: update tcb values.
|
||||
* TCB is responsible to create tcp headers, so all the related values
|
||||
* should be correctly updated.
|
||||
* @tx_info - driver specific tls info.
|
||||
* @q - tx queue on which packet is going out.
|
||||
* @tid - TCB identifier.
|
||||
* @pos - current index where should we start writing.
|
||||
* @word - TCB word.
|
||||
* @mask - TCB word related mask.
|
||||
* @val - TCB word related value.
|
||||
* @reply - set 1 if looking for TP response.
|
||||
* return - next position to write.
|
||||
*/
|
||||
static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
|
||||
struct sge_eth_txq *q, u32 tid,
|
||||
void *pos, u16 word, u64 mask,
|
||||
static void *__chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
|
||||
u32 tid, void *pos, u16 word, u64 mask,
|
||||
u64 val, u32 reply)
|
||||
{
|
||||
struct cpl_set_tcb_field_core *cpl;
|
||||
struct ulptx_idata *idata;
|
||||
struct ulp_txpkt *txpkt;
|
||||
void *save_pos = NULL;
|
||||
u8 buf[48] = {0};
|
||||
int left;
|
||||
|
||||
left = (void *)q->q.stat - pos;
|
||||
if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) {
|
||||
if (!left) {
|
||||
pos = q->q.desc;
|
||||
} else {
|
||||
save_pos = pos;
|
||||
pos = buf;
|
||||
}
|
||||
}
|
||||
/* ULP_TXPKT */
|
||||
txpkt = pos;
|
||||
txpkt->cmd_dest = htonl(ULPTX_CMD_V(ULP_TX_PKT) | ULP_TXPKT_DEST_V(0));
|
||||
@@ -732,18 +705,54 @@ static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
|
||||
idata = (struct ulptx_idata *)(cpl + 1);
|
||||
idata->cmd_more = htonl(ULPTX_CMD_V(ULP_TX_SC_NOOP));
|
||||
idata->len = htonl(0);
|
||||
pos = idata + 1;
|
||||
|
||||
if (save_pos) {
|
||||
pos = chcr_copy_to_txd(buf, &q->q, save_pos,
|
||||
CHCR_SET_TCB_FIELD_LEN);
|
||||
} else {
|
||||
/* check again if we are at the end of the queue */
|
||||
if (left == CHCR_SET_TCB_FIELD_LEN)
|
||||
return pos;
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
* chcr_write_cpl_set_tcb_ulp: update tcb values.
|
||||
* TCB is responsible to create tcp headers, so all the related values
|
||||
* should be correctly updated.
|
||||
* @tx_info - driver specific tls info.
|
||||
* @q - tx queue on which packet is going out.
|
||||
* @tid - TCB identifier.
|
||||
* @pos - current index where should we start writing.
|
||||
* @word - TCB word.
|
||||
* @mask - TCB word related mask.
|
||||
* @val - TCB word related value.
|
||||
* @reply - set 1 if looking for TP response.
|
||||
* return - next position to write.
|
||||
*/
|
||||
static void *chcr_write_cpl_set_tcb_ulp(struct chcr_ktls_info *tx_info,
|
||||
struct sge_eth_txq *q, u32 tid,
|
||||
void *pos, u16 word, u64 mask,
|
||||
u64 val, u32 reply)
|
||||
{
|
||||
int left = (void *)q->q.stat - pos;
|
||||
|
||||
if (unlikely(left < CHCR_SET_TCB_FIELD_LEN)) {
|
||||
if (!left) {
|
||||
pos = q->q.desc;
|
||||
else
|
||||
pos = idata + 1;
|
||||
} else {
|
||||
u8 buf[48] = {0};
|
||||
|
||||
__chcr_write_cpl_set_tcb_ulp(tx_info, tid, buf, word,
|
||||
mask, val, reply);
|
||||
|
||||
return chcr_copy_to_txd(buf, &q->q, pos,
|
||||
CHCR_SET_TCB_FIELD_LEN);
|
||||
}
|
||||
}
|
||||
|
||||
pos = __chcr_write_cpl_set_tcb_ulp(tx_info, tid, pos, word,
|
||||
mask, val, reply);
|
||||
|
||||
/* check again if we are at the end of the queue */
|
||||
if (left == CHCR_SET_TCB_FIELD_LEN)
|
||||
pos = q->q.desc;
|
||||
|
||||
return pos;
|
||||
}
|
||||
|
||||
|
||||
@@ -16,7 +16,7 @@
|
||||
int efi_tpm_final_log_size;
|
||||
EXPORT_SYMBOL(efi_tpm_final_log_size);
|
||||
|
||||
static int tpm2_calc_event_log_size(void *data, int count, void *size_info)
|
||||
static int __init tpm2_calc_event_log_size(void *data, int count, void *size_info)
|
||||
{
|
||||
struct tcg_pcr_event2_head *header;
|
||||
int event_size, size = 0;
|
||||
|
||||
@@ -3372,15 +3372,12 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
|
||||
}
|
||||
}
|
||||
|
||||
amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
|
||||
amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
|
||||
|
||||
amdgpu_amdkfd_suspend(adev, !fbcon);
|
||||
|
||||
amdgpu_ras_suspend(adev);
|
||||
|
||||
r = amdgpu_device_ip_suspend_phase1(adev);
|
||||
|
||||
amdgpu_amdkfd_suspend(adev, !fbcon);
|
||||
|
||||
/* evict vram memory */
|
||||
amdgpu_bo_evict_vram(adev);
|
||||
|
||||
|
||||
@@ -2008,17 +2008,22 @@ void amdgpu_dm_update_connector_after_detect(
|
||||
dc_sink_retain(aconnector->dc_sink);
|
||||
if (sink->dc_edid.length == 0) {
|
||||
aconnector->edid = NULL;
|
||||
drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);
|
||||
if (aconnector->dc_link->aux_mode) {
|
||||
drm_dp_cec_unset_edid(
|
||||
&aconnector->dm_dp_aux.aux);
|
||||
}
|
||||
} else {
|
||||
aconnector->edid =
|
||||
(struct edid *) sink->dc_edid.raw_edid;
|
||||
|
||||
(struct edid *)sink->dc_edid.raw_edid;
|
||||
|
||||
drm_connector_update_edid_property(connector,
|
||||
aconnector->edid);
|
||||
drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
|
||||
aconnector->edid);
|
||||
aconnector->edid);
|
||||
|
||||
if (aconnector->dc_link->aux_mode)
|
||||
drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
|
||||
aconnector->edid);
|
||||
}
|
||||
|
||||
amdgpu_dm_update_freesync_caps(connector, aconnector->edid);
|
||||
update_connector_ext_caps(aconnector);
|
||||
} else {
|
||||
|
||||
@@ -834,11 +834,10 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
|
||||
static void wait_for_no_pipes_pending(struct dc *dc, struct dc_state *context)
|
||||
{
|
||||
int i;
|
||||
int count = 0;
|
||||
struct pipe_ctx *pipe;
|
||||
PERF_TRACE();
|
||||
for (i = 0; i < MAX_PIPES; i++) {
|
||||
pipe = &context->res_ctx.pipe_ctx[i];
|
||||
int count = 0;
|
||||
struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
|
||||
|
||||
if (!pipe->plane_state)
|
||||
continue;
|
||||
|
||||
@@ -3068,25 +3068,32 @@ validate_out:
|
||||
return out;
|
||||
}
|
||||
|
||||
|
||||
bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
|
||||
bool fast_validate)
|
||||
/*
|
||||
* This must be noinline to ensure anything that deals with FP registers
|
||||
* is contained within this call; previously our compiling with hard-float
|
||||
* would result in fp instructions being emitted outside of the boundaries
|
||||
* of the DC_FP_START/END macros, which makes sense as the compiler has no
|
||||
* idea about what is wrapped and what is not
|
||||
*
|
||||
* This is largely just a workaround to avoid breakage introduced with 5.6,
|
||||
* ideally all fp-using code should be moved into its own file, only that
|
||||
* should be compiled with hard-float, and all code exported from there
|
||||
* should be strictly wrapped with DC_FP_START/END
|
||||
*/
|
||||
static noinline bool dcn20_validate_bandwidth_fp(struct dc *dc,
|
||||
struct dc_state *context, bool fast_validate)
|
||||
{
|
||||
bool voltage_supported = false;
|
||||
bool full_pstate_supported = false;
|
||||
bool dummy_pstate_supported = false;
|
||||
double p_state_latency_us;
|
||||
|
||||
DC_FP_START();
|
||||
p_state_latency_us = context->bw_ctx.dml.soc.dram_clock_change_latency_us;
|
||||
context->bw_ctx.dml.soc.disable_dram_clock_change_vactive_support =
|
||||
dc->debug.disable_dram_clock_change_vactive_support;
|
||||
|
||||
if (fast_validate) {
|
||||
voltage_supported = dcn20_validate_bandwidth_internal(dc, context, true);
|
||||
|
||||
DC_FP_END();
|
||||
return voltage_supported;
|
||||
return dcn20_validate_bandwidth_internal(dc, context, true);
|
||||
}
|
||||
|
||||
// Best case, we support full UCLK switch latency
|
||||
@@ -3115,7 +3122,15 @@ bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
|
||||
|
||||
restore_dml_state:
|
||||
context->bw_ctx.dml.soc.dram_clock_change_latency_us = p_state_latency_us;
|
||||
return voltage_supported;
|
||||
}
|
||||
|
||||
bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
|
||||
bool fast_validate)
|
||||
{
|
||||
bool voltage_supported = false;
|
||||
DC_FP_START();
|
||||
voltage_supported = dcn20_validate_bandwidth_fp(dc, context, fast_validate);
|
||||
DC_FP_END();
|
||||
return voltage_supported;
|
||||
}
|
||||
|
||||
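A minimal sketch of the wrapping pattern the comment above motivates (function names are placeholders; DC_FP_START/DC_FP_END are the macros used by the code): every instruction that may touch FP registers is confined to one noinline helper so the compiler cannot hoist it outside the protected region.

/* Illustrative only: keep all FP work inside the noinline helper. */
static noinline bool example_validate_fp(struct dc *dc,
					 struct dc_state *context,
					 bool fast_validate)
{
	/* ...code that may use float/double lives only in here... */
	return true;
}

bool example_validate(struct dc *dc, struct dc_state *context,
		      bool fast_validate)
{
	bool ok;

	DC_FP_START();		/* enable FP usage in kernel context */
	ok = example_validate_fp(dc, context, fast_validate);
	DC_FP_END();		/* restore/disable FP state */
	return ok;
}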
@@ -1200,7 +1200,7 @@ static void dml_rq_dlg_get_dlg_params(
|
||||
min_hratio_fact_l = 1.0;
|
||||
min_hratio_fact_c = 1.0;
|
||||
|
||||
if (htaps_l <= 1)
|
||||
if (hratio_l <= 1)
|
||||
min_hratio_fact_l = 2.0;
|
||||
else if (htaps_l <= 6) {
|
||||
if ((hratio_l * 2.0) > 4.0)
|
||||
@@ -1216,7 +1216,7 @@ static void dml_rq_dlg_get_dlg_params(
|
||||
|
||||
hscale_pixel_rate_l = min_hratio_fact_l * dppclk_freq_in_mhz;
|
||||
|
||||
if (htaps_c <= 1)
|
||||
if (hratio_c <= 1)
|
||||
min_hratio_fact_c = 2.0;
|
||||
else if (htaps_c <= 6) {
|
||||
if ((hratio_c * 2.0) > 4.0)
|
||||
@@ -1522,8 +1522,8 @@ static void dml_rq_dlg_get_dlg_params(
|
||||
|
||||
disp_dlg_regs->refcyc_per_vm_group_vblank = get_refcyc_per_vm_group_vblank(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz;
|
||||
disp_dlg_regs->refcyc_per_vm_group_flip = get_refcyc_per_vm_group_flip(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz;
|
||||
disp_dlg_regs->refcyc_per_vm_req_vblank = get_refcyc_per_vm_req_vblank(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz;
|
||||
disp_dlg_regs->refcyc_per_vm_req_flip = get_refcyc_per_vm_req_flip(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz;
|
||||
disp_dlg_regs->refcyc_per_vm_req_vblank = get_refcyc_per_vm_req_vblank(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz * dml_pow(2, 10);
|
||||
disp_dlg_regs->refcyc_per_vm_req_flip = get_refcyc_per_vm_req_flip(mode_lib, e2e_pipe_param, num_pipes, pipe_idx) * refclk_freq_in_mhz * dml_pow(2, 10);
|
||||
|
||||
// Clamp to max for now
|
||||
if (disp_dlg_regs->refcyc_per_vm_group_vblank >= (unsigned int)dml_pow(2, 23))
|
||||
|
||||
@@ -108,7 +108,7 @@
|
||||
#define ASSERT(expr) ASSERT_CRITICAL(expr)
|
||||
|
||||
#else
|
||||
#define ASSERT(expr) WARN_ON(!(expr))
|
||||
#define ASSERT(expr) WARN_ON_ONCE(!(expr))
|
||||
#endif
|
||||
|
||||
#define BREAK_TO_DEBUGGER() ASSERT(0)
|
||||
|
||||
@@ -241,8 +241,12 @@ static int drm_hdcp_request_srm(struct drm_device *drm_dev,
|
||||
|
||||
ret = request_firmware_direct(&fw, (const char *)fw_name,
|
||||
drm_dev->dev);
|
||||
if (ret < 0)
|
||||
if (ret < 0) {
|
||||
*revoked_ksv_cnt = 0;
|
||||
*revoked_ksv_list = NULL;
|
||||
ret = 0;
|
||||
goto exit;
|
||||
}
|
||||
|
||||
if (fw->size && fw->data)
|
||||
ret = drm_hdcp_srm_update(fw->data, fw->size, revoked_ksv_list,
|
||||
@@ -287,6 +291,8 @@ int drm_hdcp_check_ksvs_revoked(struct drm_device *drm_dev, u8 *ksvs,
|
||||
|
||||
ret = drm_hdcp_request_srm(drm_dev, &revoked_ksv_list,
|
||||
&revoked_ksv_cnt);
|
||||
if (ret)
|
||||
return ret;
|
||||
|
||||
/* revoked_ksv_cnt will be zero when above function failed */
|
||||
for (i = 0; i < revoked_ksv_cnt; i++)
|
||||
|
||||
@@ -843,6 +843,7 @@ static const struct of_device_id ingenic_drm_of_match[] = {
|
||||
{ .compatible = "ingenic,jz4770-lcd", .data = &jz4770_soc_info },
|
||||
{ /* sentinel */ },
|
||||
};
|
||||
MODULE_DEVICE_TABLE(of, ingenic_drm_of_match);
|
||||
|
||||
static struct platform_driver ingenic_drm_driver = {
|
||||
.driver = {
|
||||
|
||||
@@ -717,7 +717,7 @@ static void sun6i_dsi_encoder_enable(struct drm_encoder *encoder)
|
||||
struct drm_display_mode *mode = &encoder->crtc->state->adjusted_mode;
|
||||
struct sun6i_dsi *dsi = encoder_to_sun6i_dsi(encoder);
|
||||
struct mipi_dsi_device *device = dsi->device;
|
||||
union phy_configure_opts opts = { 0 };
|
||||
union phy_configure_opts opts = { };
|
||||
struct phy_configure_opts_mipi_dphy *cfg = &opts.mipi_dphy;
|
||||
u16 delay;
|
||||
int err;
|
||||
|
||||
@@ -221,6 +221,7 @@ struct virtio_gpu_fpriv {
|
||||
/* virtio_ioctl.c */
|
||||
#define DRM_VIRTIO_NUM_IOCTLS 10
|
||||
extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
|
||||
void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
|
||||
|
||||
/* virtio_kms.c */
|
||||
int virtio_gpu_init(struct drm_device *dev);
|
||||
|
||||
@@ -39,6 +39,9 @@ int virtio_gpu_gem_create(struct drm_file *file,
|
||||
int ret;
|
||||
u32 handle;
|
||||
|
||||
if (vgdev->has_virgl_3d)
|
||||
virtio_gpu_create_context(dev, file);
|
||||
|
||||
ret = virtio_gpu_object_create(vgdev, params, &obj, NULL);
|
||||
if (ret < 0)
|
||||
return ret;
|
||||
|
||||
@@ -34,8 +34,7 @@
|
||||
|
||||
#include "virtgpu_drv.h"
|
||||
|
||||
static void virtio_gpu_create_context(struct drm_device *dev,
|
||||
struct drm_file *file)
|
||||
void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file)
|
||||
{
|
||||
struct virtio_gpu_device *vgdev = dev->dev_private;
|
||||
struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
|
||||
|
||||
@@ -1166,6 +1166,7 @@ config HID_ALPS
|
||||
config HID_MCP2221
|
||||
tristate "Microchip MCP2221 HID USB-to-I2C/SMbus host support"
|
||||
depends on USB_HID && I2C
|
||||
depends on GPIOLIB
|
||||
---help---
|
||||
Provides I2C and SMBUS host adapter functionality over USB-HID
|
||||
through MCP2221 device.
|
||||
|
||||
@@ -802,6 +802,7 @@ static int alps_probe(struct hid_device *hdev, const struct hid_device_id *id)
|
||||
break;
|
||||
case HID_DEVICE_ID_ALPS_U1_DUAL:
|
||||
case HID_DEVICE_ID_ALPS_U1:
|
||||
case HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY:
|
||||
data->dev_type = U1;
|
||||
break;
|
||||
default:
|
||||
|
||||
@@ -79,10 +79,10 @@
|
||||
#define HID_DEVICE_ID_ALPS_U1_DUAL_PTP 0x121F
|
||||
#define HID_DEVICE_ID_ALPS_U1_DUAL_3BTN_PTP 0x1220
|
||||
#define HID_DEVICE_ID_ALPS_U1 0x1215
|
||||
#define HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY 0x121E
|
||||
#define HID_DEVICE_ID_ALPS_T4_BTNLESS 0x120C
|
||||
#define HID_DEVICE_ID_ALPS_1222 0x1222
|
||||
|
||||
|
||||
#define USB_VENDOR_ID_AMI 0x046b
|
||||
#define USB_DEVICE_ID_AMI_VIRT_KEYBOARD_AND_MOUSE 0xff10
|
||||
|
||||
@@ -385,6 +385,7 @@
|
||||
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7349 0x7349
|
||||
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_73F7 0x73f7
|
||||
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001 0xa001
|
||||
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002 0xc002
|
||||
|
||||
#define USB_VENDOR_ID_ELAN 0x04f3
|
||||
#define USB_DEVICE_ID_TOSHIBA_CLICK_L9W 0x0401
|
||||
@@ -759,6 +760,7 @@
|
||||
#define USB_DEVICE_ID_LOGITECH_RUMBLEPAD2 0xc218
|
||||
#define USB_DEVICE_ID_LOGITECH_RUMBLEPAD2_2 0xc219
|
||||
#define USB_DEVICE_ID_LOGITECH_G15_LCD 0xc222
|
||||
#define USB_DEVICE_ID_LOGITECH_G11 0xc225
|
||||
#define USB_DEVICE_ID_LOGITECH_G15_V2_LCD 0xc227
|
||||
#define USB_DEVICE_ID_LOGITECH_G510 0xc22d
|
||||
#define USB_DEVICE_ID_LOGITECH_G510_USB_AUDIO 0xc22e
|
||||
@@ -1100,6 +1102,9 @@
|
||||
#define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300
|
||||
#define USB_DEVICE_ID_SYMBOL_SCANNER_3 0x1200
|
||||
|
||||
#define I2C_VENDOR_ID_SYNAPTICS 0x06cb
|
||||
#define I2C_PRODUCT_ID_SYNAPTICS_SYNA2393 0x7a13
|
||||
|
||||
#define USB_VENDOR_ID_SYNAPTICS 0x06cb
|
||||
#define USB_DEVICE_ID_SYNAPTICS_TP 0x0001
|
||||
#define USB_DEVICE_ID_SYNAPTICS_INT_TP 0x0002
|
||||
@@ -1114,6 +1119,7 @@
|
||||
#define USB_DEVICE_ID_SYNAPTICS_LTS2 0x1d10
|
||||
#define USB_DEVICE_ID_SYNAPTICS_HD 0x0ac3
|
||||
#define USB_DEVICE_ID_SYNAPTICS_QUAD_HD 0x1ac3
|
||||
#define USB_DEVICE_ID_SYNAPTICS_DELL_K12A 0x2819
|
||||
#define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012 0x2968
|
||||
#define USB_DEVICE_ID_SYNAPTICS_TP_V103 0x5710
|
||||
#define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5 0x81a7
|
||||
|
||||
@@ -872,6 +872,10 @@ error_hw_stop:
|
||||
}
|
||||
|
||||
static const struct hid_device_id lg_g15_devices[] = {
|
||||
/* The G11 is a G15 without the LCD, treat it as a G15 */
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
|
||||
USB_DEVICE_ID_LOGITECH_G11),
|
||||
.driver_data = LG_G15 },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
|
||||
USB_DEVICE_ID_LOGITECH_G15_LCD),
|
||||
.driver_data = LG_G15 },
|
||||
|
||||
@@ -1922,6 +1922,9 @@ static const struct hid_device_id mt_devices[] = {
|
||||
{ .driver_data = MT_CLS_EGALAX_SERIAL,
|
||||
MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
|
||||
USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001) },
|
||||
{ .driver_data = MT_CLS_EGALAX,
|
||||
MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
|
||||
USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002) },
|
||||
|
||||
/* Elitegroup panel */
|
||||
{ .driver_data = MT_CLS_SERIAL,
|
||||
|
||||
@@ -163,6 +163,7 @@ static const struct hid_device_id hid_quirks[] = {
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_LTS2), HID_QUIRK_NO_INIT_REPORTS },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_QUAD_HD), HID_QUIRK_NO_INIT_REPORTS },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_TP_V103), HID_QUIRK_NO_INIT_REPORTS },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DELL_K12A), HID_QUIRK_NO_INIT_REPORTS },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_TOPMAX, USB_DEVICE_ID_TOPMAX_COBRAPAD), HID_QUIRK_BADPAD },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS), HID_QUIRK_MULTI_INPUT },
|
||||
{ HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882), HID_QUIRK_NOGET },
|
||||
|
||||
@@ -177,6 +177,8 @@ static const struct i2c_hid_quirks {
|
||||
I2C_HID_QUIRK_BOGUS_IRQ },
|
||||
{ USB_VENDOR_ID_ALPS_JP, HID_ANY_ID,
|
||||
I2C_HID_QUIRK_RESET_ON_RESUME },
|
||||
{ I2C_VENDOR_ID_SYNAPTICS, I2C_PRODUCT_ID_SYNAPTICS_SYNA2393,
|
||||
I2C_HID_QUIRK_RESET_ON_RESUME },
|
||||
{ USB_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720,
|
||||
I2C_HID_QUIRK_BAD_INPUT_SIZE },
|
||||
{ 0, 0 }
|
||||
|
||||
@@ -682,16 +682,21 @@ static int usbhid_open(struct hid_device *hid)
|
||||
struct usbhid_device *usbhid = hid->driver_data;
|
||||
int res;
|
||||
|
||||
mutex_lock(&usbhid->mutex);
|
||||
|
||||
set_bit(HID_OPENED, &usbhid->iofl);
|
||||
|
||||
if (hid->quirks & HID_QUIRK_ALWAYS_POLL)
|
||||
return 0;
|
||||
if (hid->quirks & HID_QUIRK_ALWAYS_POLL) {
|
||||
res = 0;
|
||||
goto Done;
|
||||
}
|
||||
|
||||
res = usb_autopm_get_interface(usbhid->intf);
|
||||
/* the device must be awake to reliably request remote wakeup */
|
||||
if (res < 0) {
|
||||
clear_bit(HID_OPENED, &usbhid->iofl);
|
||||
return -EIO;
|
||||
res = -EIO;
|
||||
goto Done;
|
||||
}
|
||||
|
||||
usbhid->intf->needs_remote_wakeup = 1;
|
||||
@@ -725,6 +730,9 @@ static int usbhid_open(struct hid_device *hid)
|
||||
msleep(50);
|
||||
|
||||
clear_bit(HID_RESUME_RUNNING, &usbhid->iofl);
|
||||
|
||||
Done:
|
||||
mutex_unlock(&usbhid->mutex);
|
||||
return res;
|
||||
}
|
||||
|
||||
@@ -732,6 +740,8 @@ static void usbhid_close(struct hid_device *hid)
|
||||
{
|
||||
struct usbhid_device *usbhid = hid->driver_data;
|
||||
|
||||
mutex_lock(&usbhid->mutex);
|
||||
|
||||
/*
|
||||
* Make sure we don't restart data acquisition due to
|
||||
* a resumption we no longer care about by avoiding racing
|
||||
@@ -743,12 +753,13 @@ static void usbhid_close(struct hid_device *hid)
|
||||
clear_bit(HID_IN_POLLING, &usbhid->iofl);
|
||||
spin_unlock_irq(&usbhid->lock);
|
||||
|
||||
if (hid->quirks & HID_QUIRK_ALWAYS_POLL)
|
||||
return;
|
||||
if (!(hid->quirks & HID_QUIRK_ALWAYS_POLL)) {
|
||||
hid_cancel_delayed_stuff(usbhid);
|
||||
usb_kill_urb(usbhid->urbin);
|
||||
usbhid->intf->needs_remote_wakeup = 0;
|
||||
}
|
||||
|
||||
hid_cancel_delayed_stuff(usbhid);
|
||||
usb_kill_urb(usbhid->urbin);
|
||||
usbhid->intf->needs_remote_wakeup = 0;
|
||||
mutex_unlock(&usbhid->mutex);
|
||||
}
|
||||
|
||||
/*
|
||||
@@ -1057,6 +1068,8 @@ static int usbhid_start(struct hid_device *hid)
|
||||
unsigned int n, insize = 0;
|
||||
int ret;
|
||||
|
||||
mutex_lock(&usbhid->mutex);
|
||||
|
||||
clear_bit(HID_DISCONNECTED, &usbhid->iofl);
|
||||
|
||||
usbhid->bufsize = HID_MIN_BUFFER_SIZE;
|
||||
@@ -1177,6 +1190,8 @@ static int usbhid_start(struct hid_device *hid)
|
||||
usbhid_set_leds(hid);
|
||||
device_set_wakeup_enable(&dev->dev, 1);
|
||||
}
|
||||
|
||||
mutex_unlock(&usbhid->mutex);
|
||||
return 0;
|
||||
|
||||
fail:
|
||||
@@ -1187,6 +1202,7 @@ fail:
|
||||
usbhid->urbout = NULL;
|
||||
usbhid->urbctrl = NULL;
|
||||
hid_free_buffers(dev, hid);
|
||||
mutex_unlock(&usbhid->mutex);
|
||||
return ret;
|
||||
}
|
||||
|
||||
@@ -1202,6 +1218,8 @@ static void usbhid_stop(struct hid_device *hid)
|
||||
usbhid->intf->needs_remote_wakeup = 0;
|
||||
}
|
||||
|
||||
mutex_lock(&usbhid->mutex);
|
||||
|
||||
clear_bit(HID_STARTED, &usbhid->iofl);
|
||||
spin_lock_irq(&usbhid->lock); /* Sync with error and led handlers */
|
||||
set_bit(HID_DISCONNECTED, &usbhid->iofl);
|
||||
@@ -1222,6 +1240,8 @@ static void usbhid_stop(struct hid_device *hid)
|
||||
usbhid->urbout = NULL;
|
||||
|
||||
hid_free_buffers(hid_to_usb_dev(hid), hid);
|
||||
|
||||
mutex_unlock(&usbhid->mutex);
|
||||
}
|
||||
|
||||
static int usbhid_power(struct hid_device *hid, int lvl)
|
||||
@@ -1382,6 +1402,7 @@ static int usbhid_probe(struct usb_interface *intf, const struct usb_device_id *
|
||||
INIT_WORK(&usbhid->reset_work, hid_reset);
|
||||
timer_setup(&usbhid->io_retry, hid_retry_timeout, 0);
|
||||
spin_lock_init(&usbhid->lock);
|
||||
mutex_init(&usbhid->mutex);
|
||||
|
||||
ret = hid_add_device(hid);
|
||||
if (ret) {
|
||||
|
||||
@@ -80,6 +80,7 @@ struct usbhid_device {
|
||||
dma_addr_t outbuf_dma; /* Output buffer dma */
|
||||
unsigned long last_out; /* record of last output for timeouts */
|
||||
|
||||
struct mutex mutex; /* start/stop/open/close */
|
||||
spinlock_t lock; /* fifo spinlock */
|
||||
unsigned long iofl; /* I/O flags (CTRL_RUNNING, OUT_RUNNING) */
|
||||
struct timer_list io_retry; /* Retry timer */
|
||||
|
||||
@@ -319,9 +319,11 @@ static void wacom_feature_mapping(struct hid_device *hdev,
|
||||
data[0] = field->report->id;
|
||||
ret = wacom_get_report(hdev, HID_FEATURE_REPORT,
|
||||
data, n, WAC_CMD_RETRIES);
|
||||
if (ret == n) {
|
||||
if (ret == n && features->type == HID_GENERIC) {
|
||||
ret = hid_report_raw_event(hdev,
|
||||
HID_FEATURE_REPORT, data, n, 0);
|
||||
} else if (ret == 2 && features->type != HID_GENERIC) {
|
||||
features->touch_max = data[1];
|
||||
} else {
|
||||
features->touch_max = 16;
|
||||
hid_warn(hdev, "wacom_feature_mapping: "
|
||||
|
||||
@@ -1427,11 +1427,13 @@ static void wacom_intuos_pro2_bt_pad(struct wacom_wac *wacom)
|
||||
{
|
||||
struct input_dev *pad_input = wacom->pad_input;
|
||||
unsigned char *data = wacom->data;
|
||||
int nbuttons = wacom->features.numbered_buttons;
|
||||
|
||||
int buttons = data[282] | ((data[281] & 0x40) << 2);
|
||||
int expresskeys = data[282];
|
||||
int center = (data[281] & 0x40) >> 6;
|
||||
int ring = data[285] & 0x7F;
|
bool ringstatus = data[285] & 0x80;
bool prox = buttons || ringstatus;
bool prox = expresskeys || center || ringstatus;

/* Fix touchring data: userspace expects 0 at left and increasing clockwise */
ring = 71 - ring;
@@ -1439,7 +1441,8 @@ static void wacom_intuos_pro2_bt_pad(struct wacom_wac *wacom)
if (ring > 71)
ring -= 72;

wacom_report_numbered_buttons(pad_input, 9, buttons);
wacom_report_numbered_buttons(pad_input, nbuttons,
expresskeys | (center << (nbuttons - 1)));

input_report_abs(pad_input, ABS_WHEEL, ringstatus ? ring : 0);

@@ -2637,9 +2640,25 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
case HID_DG_TIPSWITCH:
hid_data->last_slot_field = equivalent_usage;
break;
case HID_DG_CONTACTCOUNT:
hid_data->cc_report = report->id;
hid_data->cc_index = i;
hid_data->cc_value_index = j;
break;
}
}
}

if (hid_data->cc_report != 0 &&
hid_data->cc_index >= 0) {
struct hid_field *field = report->field[hid_data->cc_index];
int value = field->value[hid_data->cc_value_index];
if (value)
hid_data->num_expected = value;
}
else {
hid_data->num_expected = wacom_wac->features.touch_max;
}
}

static void wacom_wac_finger_report(struct hid_device *hdev,
@@ -2649,7 +2668,6 @@ static void wacom_wac_finger_report(struct hid_device *hdev,
struct wacom_wac *wacom_wac = &wacom->wacom_wac;
struct input_dev *input = wacom_wac->touch_input;
unsigned touch_max = wacom_wac->features.touch_max;
struct hid_data *hid_data = &wacom_wac->hid_data;

/* If more packets of data are expected, give us a chance to
* process them rather than immediately syncing a partial
@@ -2663,7 +2681,6 @@ static void wacom_wac_finger_report(struct hid_device *hdev,

input_sync(input);
wacom_wac->hid_data.num_received = 0;
hid_data->num_expected = 0;

/* keep touch state for pen event */
wacom_wac->shared->touch_down = wacom_wac_finger_count_touches(wacom_wac);
@@ -2738,73 +2755,12 @@ static void wacom_report_events(struct hid_device *hdev,
}
}

static void wacom_set_num_expected(struct hid_device *hdev,
struct hid_report *report,
int collection_index,
struct hid_field *field,
int field_index)
{
struct wacom *wacom = hid_get_drvdata(hdev);
struct wacom_wac *wacom_wac = &wacom->wacom_wac;
struct hid_data *hid_data = &wacom_wac->hid_data;
unsigned int original_collection_level =
hdev->collection[collection_index].level;
bool end_collection = false;
int i;

if (hid_data->num_expected)
return;

// find the contact count value for this segment
for (i = field_index; i < report->maxfield && !end_collection; i++) {
struct hid_field *field = report->field[i];
unsigned int field_level =
hdev->collection[field->usage[0].collection_index].level;
unsigned int j;

if (field_level != original_collection_level)
continue;

for (j = 0; j < field->maxusage; j++) {
struct hid_usage *usage = &field->usage[j];

if (usage->collection_index != collection_index) {
end_collection = true;
break;
}
if (wacom_equivalent_usage(usage->hid) == HID_DG_CONTACTCOUNT) {
hid_data->cc_report = report->id;
hid_data->cc_index = i;
hid_data->cc_value_index = j;

if (hid_data->cc_report != 0 &&
hid_data->cc_index >= 0) {

struct hid_field *field =
report->field[hid_data->cc_index];
int value =
field->value[hid_data->cc_value_index];

if (value)
hid_data->num_expected = value;
}
}
}
}

if (hid_data->cc_report == 0 || hid_data->cc_index < 0)
hid_data->num_expected = wacom_wac->features.touch_max;
}

static int wacom_wac_collection(struct hid_device *hdev, struct hid_report *report,
int collection_index, struct hid_field *field,
int field_index)
{
struct wacom *wacom = hid_get_drvdata(hdev);

if (WACOM_FINGER_FIELD(field))
wacom_set_num_expected(hdev, report, collection_index, field,
field_index);
wacom_report_events(hdev, report, collection_index, field_index);

/*
@@ -78,7 +78,7 @@ static struct qcom_icc_node *sdm845_osm_l3_nodes[] = {
[SLAVE_OSM_L3] = &sdm845_osm_l3,
};

const static struct qcom_icc_desc sdm845_icc_osm_l3 = {
static const struct qcom_icc_desc sdm845_icc_osm_l3 = {
.nodes = sdm845_osm_l3_nodes,
.num_nodes = ARRAY_SIZE(sdm845_osm_l3_nodes),
};
@@ -91,7 +91,7 @@ static struct qcom_icc_node *sc7180_osm_l3_nodes[] = {
[SLAVE_OSM_L3] = &sc7180_osm_l3,
};

const static struct qcom_icc_desc sc7180_icc_osm_l3 = {
static const struct qcom_icc_desc sc7180_icc_osm_l3 = {
.nodes = sc7180_osm_l3_nodes,
.num_nodes = ARRAY_SIZE(sc7180_osm_l3_nodes),
};

@@ -192,7 +192,7 @@ static struct qcom_icc_node *aggre1_noc_nodes[] = {
[SLAVE_ANOC_PCIE_A1NOC_SNOC] = &qns_pcie_a1noc_snoc,
};

const static struct qcom_icc_desc sdm845_aggre1_noc = {
static const struct qcom_icc_desc sdm845_aggre1_noc = {
.nodes = aggre1_noc_nodes,
.num_nodes = ARRAY_SIZE(aggre1_noc_nodes),
.bcms = aggre1_noc_bcms,
@@ -220,7 +220,7 @@ static struct qcom_icc_node *aggre2_noc_nodes[] = {
[SLAVE_SERVICE_A2NOC] = &srvc_aggre2_noc,
};

const static struct qcom_icc_desc sdm845_aggre2_noc = {
static const struct qcom_icc_desc sdm845_aggre2_noc = {
.nodes = aggre2_noc_nodes,
.num_nodes = ARRAY_SIZE(aggre2_noc_nodes),
.bcms = aggre2_noc_bcms,
@@ -281,7 +281,7 @@ static struct qcom_icc_node *config_noc_nodes[] = {
[SLAVE_SERVICE_CNOC] = &srvc_cnoc,
};

const static struct qcom_icc_desc sdm845_config_noc = {
static const struct qcom_icc_desc sdm845_config_noc = {
.nodes = config_noc_nodes,
.num_nodes = ARRAY_SIZE(config_noc_nodes),
.bcms = config_noc_bcms,
@@ -297,7 +297,7 @@ static struct qcom_icc_node *dc_noc_nodes[] = {
[SLAVE_MEM_NOC_CFG] = &qhs_memnoc,
};

const static struct qcom_icc_desc sdm845_dc_noc = {
static const struct qcom_icc_desc sdm845_dc_noc = {
.nodes = dc_noc_nodes,
.num_nodes = ARRAY_SIZE(dc_noc_nodes),
.bcms = dc_noc_bcms,
@@ -315,7 +315,7 @@ static struct qcom_icc_node *gladiator_noc_nodes[] = {
[SLAVE_SERVICE_GNOC] = &srvc_gnoc,
};

const static struct qcom_icc_desc sdm845_gladiator_noc = {
static const struct qcom_icc_desc sdm845_gladiator_noc = {
.nodes = gladiator_noc_nodes,
.num_nodes = ARRAY_SIZE(gladiator_noc_nodes),
.bcms = gladiator_noc_bcms,
@@ -350,7 +350,7 @@ static struct qcom_icc_node *mem_noc_nodes[] = {
[SLAVE_EBI1] = &ebi,
};

const static struct qcom_icc_desc sdm845_mem_noc = {
static const struct qcom_icc_desc sdm845_mem_noc = {
.nodes = mem_noc_nodes,
.num_nodes = ARRAY_SIZE(mem_noc_nodes),
.bcms = mem_noc_bcms,
@@ -384,7 +384,7 @@ static struct qcom_icc_node *mmss_noc_nodes[] = {
[SLAVE_CAMNOC_UNCOMP] = &qns_camnoc_uncomp,
};

const static struct qcom_icc_desc sdm845_mmss_noc = {
static const struct qcom_icc_desc sdm845_mmss_noc = {
.nodes = mmss_noc_nodes,
.num_nodes = ARRAY_SIZE(mmss_noc_nodes),
.bcms = mmss_noc_bcms,
@@ -430,7 +430,7 @@ static struct qcom_icc_node *system_noc_nodes[] = {
[SLAVE_TCU] = &xs_sys_tcu_cfg,
};

const static struct qcom_icc_desc sdm845_system_noc = {
static const struct qcom_icc_desc sdm845_system_noc = {
.nodes = system_noc_nodes,
.num_nodes = ARRAY_SIZE(system_noc_nodes),
.bcms = system_noc_bcms,

@@ -101,6 +101,8 @@ struct kmem_cache *amd_iommu_irq_cache;
static void update_domain(struct protection_domain *domain);
static int protection_domain_init(struct protection_domain *domain);
static void detach_device(struct device *dev);
static void update_and_flush_device_table(struct protection_domain *domain,
struct domain_pgtable *pgtable);

/****************************************************************************
*
@@ -151,6 +153,26 @@ static struct protection_domain *to_pdomain(struct iommu_domain *dom)
return container_of(dom, struct protection_domain, domain);
}

static void amd_iommu_domain_get_pgtable(struct protection_domain *domain,
struct domain_pgtable *pgtable)
{
u64 pt_root = atomic64_read(&domain->pt_root);

pgtable->root = (u64 *)(pt_root & PAGE_MASK);
pgtable->mode = pt_root & 7; /* lowest 3 bits encode pgtable mode */
}

static u64 amd_iommu_domain_encode_pgtable(u64 *root, int mode)
{
u64 pt_root;

/* lowest 3 bits encode pgtable mode */
pt_root = mode & 7;
pt_root |= (u64)root;

return pt_root;
}

static struct iommu_dev_data *alloc_dev_data(u16 devid)
{
struct iommu_dev_data *dev_data;
@@ -1397,13 +1419,18 @@ static struct page *free_sub_pt(unsigned long root, int mode,

static void free_pagetable(struct protection_domain *domain)
{
unsigned long root = (unsigned long)domain->pt_root;
struct domain_pgtable pgtable;
struct page *freelist = NULL;
unsigned long root;

BUG_ON(domain->mode < PAGE_MODE_NONE ||
domain->mode > PAGE_MODE_6_LEVEL);
amd_iommu_domain_get_pgtable(domain, &pgtable);
atomic64_set(&domain->pt_root, 0);

freelist = free_sub_pt(root, domain->mode, freelist);
BUG_ON(pgtable.mode < PAGE_MODE_NONE ||
pgtable.mode > PAGE_MODE_6_LEVEL);

root = (unsigned long)pgtable.root;
freelist = free_sub_pt(root, pgtable.mode, freelist);

free_page_list(freelist);
}
@@ -1417,24 +1444,39 @@ static bool increase_address_space(struct protection_domain *domain,
unsigned long address,
gfp_t gfp)
{
struct domain_pgtable pgtable;
unsigned long flags;
bool ret = false;
u64 *pte;
bool ret = true;
u64 *pte, root;

spin_lock_irqsave(&domain->lock, flags);

if (address <= PM_LEVEL_SIZE(domain->mode) ||
WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
amd_iommu_domain_get_pgtable(domain, &pgtable);

if (address <= PM_LEVEL_SIZE(pgtable.mode))
goto out;

ret = false;
if (WARN_ON_ONCE(pgtable.mode == PAGE_MODE_6_LEVEL))
goto out;

pte = (void *)get_zeroed_page(gfp);
if (!pte)
goto out;

*pte = PM_LEVEL_PDE(domain->mode,
iommu_virt_to_phys(domain->pt_root));
domain->pt_root = pte;
domain->mode += 1;
*pte = PM_LEVEL_PDE(pgtable.mode, iommu_virt_to_phys(pgtable.root));

pgtable.root = pte;
pgtable.mode += 1;
update_and_flush_device_table(domain, &pgtable);
domain_flush_complete(domain);

/*
* Device Table needs to be updated and flushed before the new root can
* be published.
*/
root = amd_iommu_domain_encode_pgtable(pte, pgtable.mode);
atomic64_set(&domain->pt_root, root);

ret = true;

@@ -1451,16 +1493,29 @@ static u64 *alloc_pte(struct protection_domain *domain,
gfp_t gfp,
bool *updated)
{
struct domain_pgtable pgtable;
int level, end_lvl;
u64 *pte, *page;

BUG_ON(!is_power_of_2(page_size));

while (address > PM_LEVEL_SIZE(domain->mode))
*updated = increase_address_space(domain, address, gfp) || *updated;
amd_iommu_domain_get_pgtable(domain, &pgtable);

level = domain->mode - 1;
pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)];
while (address > PM_LEVEL_SIZE(pgtable.mode)) {
/*
* Return an error if there is no memory to update the
* page-table.
*/
if (!increase_address_space(domain, address, gfp))
return NULL;

/* Read new values to check if update was successful */
amd_iommu_domain_get_pgtable(domain, &pgtable);
}

level = pgtable.mode - 1;
pte = &pgtable.root[PM_LEVEL_INDEX(level, address)];
address = PAGE_SIZE_ALIGN(address, page_size);
end_lvl = PAGE_SIZE_LEVEL(page_size);

@@ -1536,16 +1591,19 @@ static u64 *fetch_pte(struct protection_domain *domain,
unsigned long address,
unsigned long *page_size)
{
struct domain_pgtable pgtable;
int level;
u64 *pte;

*page_size = 0;

if (address > PM_LEVEL_SIZE(domain->mode))
amd_iommu_domain_get_pgtable(domain, &pgtable);

if (address > PM_LEVEL_SIZE(pgtable.mode))
return NULL;

level = domain->mode - 1;
pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)];
level = pgtable.mode - 1;
pte = &pgtable.root[PM_LEVEL_INDEX(level, address)];
*page_size = PTE_LEVEL_PAGE_SIZE(level);

while (level > 0) {
@@ -1660,7 +1718,13 @@ out:
unsigned long flags;

spin_lock_irqsave(&dom->lock, flags);
update_domain(dom);
/*
* Flush domain TLB(s) and wait for completion. Any Device-Table
* Updates and flushing already happened in
* increase_address_space().
*/
domain_flush_tlb_pde(dom);
domain_flush_complete(dom);
spin_unlock_irqrestore(&dom->lock, flags);
}

@@ -1806,6 +1870,7 @@ static void dma_ops_domain_free(struct protection_domain *domain)
static struct protection_domain *dma_ops_domain_alloc(void)
{
struct protection_domain *domain;
u64 *pt_root, root;

domain = kzalloc(sizeof(struct protection_domain), GFP_KERNEL);
if (!domain)
@@ -1814,12 +1879,14 @@ static struct protection_domain *dma_ops_domain_alloc(void)
if (protection_domain_init(domain))
goto free_domain;

domain->mode = PAGE_MODE_3_LEVEL;
domain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
domain->flags = PD_DMA_OPS_MASK;
if (!domain->pt_root)
pt_root = (void *)get_zeroed_page(GFP_KERNEL);
if (!pt_root)
goto free_domain;

root = amd_iommu_domain_encode_pgtable(pt_root, PAGE_MODE_3_LEVEL);
atomic64_set(&domain->pt_root, root);
domain->flags = PD_DMA_OPS_MASK;

if (iommu_get_dma_cookie(&domain->domain) == -ENOMEM)
goto free_domain;

@@ -1841,16 +1908,17 @@ static bool dma_ops_domain(struct protection_domain *domain)
}

static void set_dte_entry(u16 devid, struct protection_domain *domain,
struct domain_pgtable *pgtable,
bool ats, bool ppr)
{
u64 pte_root = 0;
u64 flags = 0;
u32 old_domid;

if (domain->mode != PAGE_MODE_NONE)
pte_root = iommu_virt_to_phys(domain->pt_root);
if (pgtable->mode != PAGE_MODE_NONE)
pte_root = iommu_virt_to_phys(pgtable->root);

pte_root |= (domain->mode & DEV_ENTRY_MODE_MASK)
pte_root |= (pgtable->mode & DEV_ENTRY_MODE_MASK)
<< DEV_ENTRY_MODE_SHIFT;
pte_root |= DTE_FLAG_IR | DTE_FLAG_IW | DTE_FLAG_V | DTE_FLAG_TV;

@@ -1923,6 +1991,7 @@ static void clear_dte_entry(u16 devid)
static void do_attach(struct iommu_dev_data *dev_data,
struct protection_domain *domain)
{
struct domain_pgtable pgtable;
struct amd_iommu *iommu;
bool ats;

@@ -1938,7 +2007,9 @@ static void do_attach(struct iommu_dev_data *dev_data,
domain->dev_cnt += 1;

/* Update device table */
set_dte_entry(dev_data->devid, domain, ats, dev_data->iommu_v2);
amd_iommu_domain_get_pgtable(domain, &pgtable);
set_dte_entry(dev_data->devid, domain, &pgtable,
ats, dev_data->iommu_v2);
clone_aliases(dev_data->pdev);

device_flush_dte(dev_data);
@@ -2249,23 +2320,36 @@ static int amd_iommu_domain_get_attr(struct iommu_domain *domain,
*
*****************************************************************************/

static void update_device_table(struct protection_domain *domain)
static void update_device_table(struct protection_domain *domain,
struct domain_pgtable *pgtable)
{
struct iommu_dev_data *dev_data;

list_for_each_entry(dev_data, &domain->dev_list, list) {
set_dte_entry(dev_data->devid, domain, dev_data->ats.enabled,
dev_data->iommu_v2);
set_dte_entry(dev_data->devid, domain, pgtable,
dev_data->ats.enabled, dev_data->iommu_v2);
clone_aliases(dev_data->pdev);
}
}

static void update_and_flush_device_table(struct protection_domain *domain,
struct domain_pgtable *pgtable)
{
update_device_table(domain, pgtable);
domain_flush_devices(domain);
}

static void update_domain(struct protection_domain *domain)
{
update_device_table(domain);
struct domain_pgtable pgtable;

domain_flush_devices(domain);
/* Update device table */
amd_iommu_domain_get_pgtable(domain, &pgtable);
update_and_flush_device_table(domain, &pgtable);

/* Flush domain TLB(s) and wait for completion */
domain_flush_tlb_pde(domain);
domain_flush_complete(domain);
}

int __init amd_iommu_init_api(void)
@@ -2375,6 +2459,7 @@ out_err:
static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
{
struct protection_domain *pdomain;
u64 *pt_root, root;

switch (type) {
case IOMMU_DOMAIN_UNMANAGED:
@@ -2382,13 +2467,15 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
if (!pdomain)
return NULL;

pdomain->mode = PAGE_MODE_3_LEVEL;
pdomain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
if (!pdomain->pt_root) {
pt_root = (void *)get_zeroed_page(GFP_KERNEL);
if (!pt_root) {
protection_domain_free(pdomain);
return NULL;
}

root = amd_iommu_domain_encode_pgtable(pt_root, PAGE_MODE_3_LEVEL);
atomic64_set(&pdomain->pt_root, root);

pdomain->domain.geometry.aperture_start = 0;
pdomain->domain.geometry.aperture_end = ~0ULL;
pdomain->domain.geometry.force_aperture = true;
@@ -2406,7 +2493,7 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
if (!pdomain)
return NULL;

pdomain->mode = PAGE_MODE_NONE;
atomic64_set(&pdomain->pt_root, PAGE_MODE_NONE);
break;
default:
return NULL;
@@ -2418,6 +2505,7 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
static void amd_iommu_domain_free(struct iommu_domain *dom)
{
struct protection_domain *domain;
struct domain_pgtable pgtable;

domain = to_pdomain(dom);

@@ -2435,7 +2523,9 @@ static void amd_iommu_domain_free(struct iommu_domain *dom)
dma_ops_domain_free(domain);
break;
default:
if (domain->mode != PAGE_MODE_NONE)
amd_iommu_domain_get_pgtable(domain, &pgtable);

if (pgtable.mode != PAGE_MODE_NONE)
free_pagetable(domain);

if (domain->flags & PD_IOMMUV2_MASK)
@@ -2518,10 +2608,12 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
gfp_t gfp)
{
struct protection_domain *domain = to_pdomain(dom);
struct domain_pgtable pgtable;
int prot = 0;
int ret;

if (domain->mode == PAGE_MODE_NONE)
amd_iommu_domain_get_pgtable(domain, &pgtable);
if (pgtable.mode == PAGE_MODE_NONE)
return -EINVAL;

if (iommu_prot & IOMMU_READ)
@@ -2541,8 +2633,10 @@ static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
struct iommu_iotlb_gather *gather)
{
struct protection_domain *domain = to_pdomain(dom);
struct domain_pgtable pgtable;

if (domain->mode == PAGE_MODE_NONE)
amd_iommu_domain_get_pgtable(domain, &pgtable);
if (pgtable.mode == PAGE_MODE_NONE)
return 0;

return iommu_unmap_page(domain, iova, page_size);
@@ -2553,9 +2647,11 @@ static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
{
struct protection_domain *domain = to_pdomain(dom);
unsigned long offset_mask, pte_pgsize;
struct domain_pgtable pgtable;
u64 *pte, __pte;

if (domain->mode == PAGE_MODE_NONE)
amd_iommu_domain_get_pgtable(domain, &pgtable);
if (pgtable.mode == PAGE_MODE_NONE)
return iova;

pte = fetch_pte(domain, iova, &pte_pgsize);
@@ -2708,16 +2804,26 @@ EXPORT_SYMBOL(amd_iommu_unregister_ppr_notifier);
void amd_iommu_domain_direct_map(struct iommu_domain *dom)
{
struct protection_domain *domain = to_pdomain(dom);
struct domain_pgtable pgtable;
unsigned long flags;
u64 pt_root;

spin_lock_irqsave(&domain->lock, flags);

/* First save pgtable configuration*/
amd_iommu_domain_get_pgtable(domain, &pgtable);

/* Update data structure */
domain->mode = PAGE_MODE_NONE;
pt_root = amd_iommu_domain_encode_pgtable(NULL, PAGE_MODE_NONE);
atomic64_set(&domain->pt_root, pt_root);

/* Make changes visible to IOMMUs */
update_domain(domain);

/* Restore old pgtable in domain->ptroot to free page-table */
pt_root = amd_iommu_domain_encode_pgtable(pgtable.root, pgtable.mode);
atomic64_set(&domain->pt_root, pt_root);

/* Page-table is not visible to IOMMU anymore, so free it */
free_pagetable(domain);

@@ -2908,9 +3014,11 @@ static u64 *__get_gcr3_pte(u64 *root, int level, int pasid, bool alloc)
static int __set_gcr3(struct protection_domain *domain, int pasid,
unsigned long cr3)
{
struct domain_pgtable pgtable;
u64 *pte;

if (domain->mode != PAGE_MODE_NONE)
amd_iommu_domain_get_pgtable(domain, &pgtable);
if (pgtable.mode != PAGE_MODE_NONE)
return -EINVAL;

pte = __get_gcr3_pte(domain->gcr3_tbl, domain->glx, pasid, true);
@@ -2924,9 +3032,11 @@ static int __set_gcr3(struct protection_domain *domain, int pasid,

static int __clear_gcr3(struct protection_domain *domain, int pasid)
{
struct domain_pgtable pgtable;
u64 *pte;

if (domain->mode != PAGE_MODE_NONE)
amd_iommu_domain_get_pgtable(domain, &pgtable);
if (pgtable.mode != PAGE_MODE_NONE)
return -EINVAL;

pte = __get_gcr3_pte(domain->gcr3_tbl, domain->glx, pasid, false);

@@ -468,8 +468,7 @@ struct protection_domain {
iommu core code */
spinlock_t lock; /* mostly used to lock the page table*/
u16 id; /* the domain id written to the device table */
int mode; /* paging mode (0-6 levels) */
u64 *pt_root; /* page table root pointer */
atomic64_t pt_root; /* pgtable root and pgtable mode */
int glx; /* Number of levels for GCR3 table */
u64 *gcr3_tbl; /* Guest CR3 table */
unsigned long flags; /* flags to find out type of domain */
@@ -477,6 +476,12 @@ struct protection_domain {
unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */
};

/* For decocded pt_root */
struct domain_pgtable {
int mode;
u64 *root;
};

/*
* Structure where we save information about one hardware AMD IOMMU in the
* system.

@@ -453,7 +453,7 @@ static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
if (!region)
return -ENOMEM;

list_add(&vdev->resv_regions, &region->list);
list_add(&region->list, &vdev->resv_regions);
return 0;
}

@@ -1465,6 +1465,13 @@ static const struct mei_cfg mei_me_pch12_cfg = {
MEI_CFG_DMA_128,
};

/* LBG with quirk for SPS Firmware exclusion */
static const struct mei_cfg mei_me_pch12_sps_cfg = {
MEI_CFG_PCH8_HFS,
MEI_CFG_FW_VER_SUPP,
MEI_CFG_FW_SPS,
};

/* Tiger Lake and newer devices */
static const struct mei_cfg mei_me_pch15_cfg = {
MEI_CFG_PCH8_HFS,
@@ -1487,6 +1494,7 @@ static const struct mei_cfg *const mei_cfg_list[] = {
[MEI_ME_PCH8_CFG] = &mei_me_pch8_cfg,
[MEI_ME_PCH8_SPS_CFG] = &mei_me_pch8_sps_cfg,
[MEI_ME_PCH12_CFG] = &mei_me_pch12_cfg,
[MEI_ME_PCH12_SPS_CFG] = &mei_me_pch12_sps_cfg,
[MEI_ME_PCH15_CFG] = &mei_me_pch15_cfg,
};

@@ -80,6 +80,9 @@ struct mei_me_hw {
* servers platforms with quirk for
* SPS firmware exclusion.
* @MEI_ME_PCH12_CFG: Platform Controller Hub Gen12 and newer
* @MEI_ME_PCH12_SPS_CFG: Platform Controller Hub Gen12 and newer
* servers platforms with quirk for
* SPS firmware exclusion.
* @MEI_ME_PCH15_CFG: Platform Controller Hub Gen15 and newer
* @MEI_ME_NUM_CFG: Upper Sentinel.
*/
@@ -93,6 +96,7 @@ enum mei_cfg_idx {
MEI_ME_PCH8_CFG,
MEI_ME_PCH8_SPS_CFG,
MEI_ME_PCH12_CFG,
MEI_ME_PCH12_SPS_CFG,
MEI_ME_PCH15_CFG,
MEI_ME_NUM_CFG,
};

@@ -70,7 +70,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_2, MEI_ME_PCH8_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H, MEI_ME_PCH8_SPS_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H_2, MEI_ME_PCH8_SPS_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_LBG, MEI_ME_PCH12_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_LBG, MEI_ME_PCH12_SPS_CFG)},

{MEI_PCI_DEVICE(MEI_DEV_ID_BXT_M, MEI_ME_PCH8_CFG)},
{MEI_PCI_DEVICE(MEI_DEV_ID_APL_I, MEI_ME_PCH8_CFG)},

@@ -1483,7 +1483,7 @@ static void __exit most_exit(void)
ida_destroy(&mdev_id);
}

module_init(most_init);
subsys_initcall(most_init);
module_exit(most_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Christian Gromm <christian.gromm@microchip.com>");

@@ -24,8 +24,8 @@ config NET_DSA_MV88E6XXX_PTP
bool "PTP support for Marvell 88E6xxx"
default n
depends on NET_DSA_MV88E6XXX_GLOBAL2
depends on PTP_1588_CLOCK
imply NETWORK_PHY_TIMESTAMPING
imply PTP_1588_CLOCK
help
Say Y to enable PTP hardware timestamping on Marvell 88E6xxx switch
chips that support it.

Some files were not shown because too many files have changed in this diff