path: root/arch/x86/kernel/cpu/vmware.c
Commit message (Author, Date; Files changed, Lines -removed/+added)
* sched/clock/x86: Mark sched_clock() noinstr (Peter Zijlstra, 2023-01-31; 1 file, -1/+1)
  In order to use sched_clock() from noinstr code, mark it and all of its implementations noinstr. The whole pvclock thing (used by KVM/Xen) is a bit of a pain, since it calls out to watchdogs, so create a pvclock_clocksource_read_nowd() variant that doesn't do that and can be noinstr.
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20230126151323.702003578@infradead.org
* x86/vmware: Use BIT() macro for shifting (Shreenidhi Shedi, 2022-06-22; 1 file, -2/+2)
  VMWARE_CMD_VCPU_RESERVED is bit 31, and shifting it as an int would mean undefined behavior; the kernel is built with -fno-strict-overflow, which makes the value wrap around using two's complement. Use the BIT() macro to improve readability and avoid any potential overflow confusion, because it uses an unsigned long.
  [ bp: Clarify commit message. ] Signed-off-by: Shreenidhi Shedi <sshedi@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Srivatsa S. Bhat (VMware) <srivatsa@csail.mit.edu> Link: https://lore.kernel.org/r/20220601101820.535031-1-sshedi@vmware.com
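  For illustration, a minimal sketch of the pattern this change describes; the constant name comes from the commit message, while the surrounding usage in vmware.c is assumed:

      #include <linux/bits.h>
      #include <linux/types.h>

      #define VMWARE_CMD_VCPU_RESERVED  31  /* bit index in the reply word (assumed usage) */

      static bool vcpu_reserved(u32 eax)
      {
          /*
           * (1 << 31) shifts a signed int into its sign bit; BIT(31)
           * expands to an unsigned long, so the mask is well defined
           * without leaning on -fno-strict-overflow semantics.
           */
          return eax & BIT(VMWARE_CMD_VCPU_RESERVED);
      }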
* Merge tag 'x86_vmware_for_v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2021-04-26; 1 file, -0/+2)
  Pull x86 vmware guest update from Borislav Petkov: "Have vmware guests skip the refined TSC calibration when the TSC frequency has been retrieved from the hypervisor"
  * tag 'x86_vmware_for_v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/vmware: Avoid TSC recalibration when frequency is known
| * x86/vmware: Avoid TSC recalibration when frequency is known (Alexey Makhalov, 2021-03-28; 1 file, -0/+2)
    When the TSC frequency is known because it is retrieved from the hypervisor, skip TSC refined calibration by setting X86_FEATURE_TSC_KNOWN_FREQ.
    Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lore.kernel.org/r/20210105004752.131069-1-amakhalov@vmware.com
* | x86/paravirt: Switch time pvops functions to use static_call() (Juergen Gross, 2021-03-11; 1 file, -2/+3)
  The time pvops functions are the only ones left which might be used in 32-bit mode and which return a 64-bit value. Switch them to use the static_call() mechanism instead of pvops, as this allows considerable simplification of the pvops implementation.
  Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20210311142319.4723-5-jgross@suse.com
* KVM: SVM: Add GHCB accessor functions for retrieving fields (Tom Lendacky, 2020-12-14; 1 file, -6/+6)
  Update the GHCB accessor functions to add functions for retrieving GHCB fields by name. Update existing code to use the new accessor functions.
  Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Message-Id: <664172c53a5fb4959914e1a45d88e805649af0ad.1607620209.git.thomas.lendacky@amd.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
* x86/vmware: Add VMware-specific handling for VMMCALL under SEV-ES (Doug Covelli, 2020-09-09; 1 file, -5/+45)
  Add VMware-specific handling for #VC faults caused by VMMCALL instructions.
  Signed-off-by: Doug Covelli <dcovelli@vmware.com> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> [ jroedel@suse.de: - Adapt to different paravirt interface ] Co-developed-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Joerg Roedel <jroedel@suse.de> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200907131613.12703-65-joro@8bytes.org
* x86/vmware: Use bool type for vmw_sched_clock (Alexey Makhalov, 2020-03-24; 1 file, -2/+2)
  To be aligned with other bool variables.
  Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-6-amakhalov@vmware.com
* x86/vmware: Enable steal time accounting (Alexey Makhalov, 2020-03-24; 1 file, -1/+12)
  Set paravirt_steal_rq_enabled if the steal clock is present. paravirt_steal_rq_enabled is used in sched/core.c to adjust task progress by offsetting stolen time. Use the 'no-steal-acc' off switch (sharing the same name with KVM) to disable steal time accounting.
  Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-5-amakhalov@vmware.com
* x86/vmware: Add steal time clock support for VMware guests (Alexey Makhalov, 2020-03-24; 1 file, -0/+197)
  Steal time is the amount of CPU time needed by a guest virtual machine that is not provided by the host. Steal time occurs when the host allocates this CPU time elsewhere, for example, to another guest. Steal time can be enabled by adding the VM configuration option stealclock.enable = "TRUE". It is supported by VMs that run hardware version 13 or newer. Introduce the VMware steal time infrastructure. The high level code (such as enabling, disabling and hot-plug routines) was derived from KVM.
  [ Tomer: use READ_ONCE macros and 32bit guests support. ] [ bp: Massage. ] Co-developed-by: Tomer Zeltzer <tomerr90@gmail.com> Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Tomer Zeltzer <tomerr90@gmail.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-4-amakhalov@vmware.com
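  As a rough sketch of the "READ_ONCE macros and 32bit guests support" mentioned above: reading a 64-bit steal-time counter that the hypervisor updates behind the guest's back. The structure layout and function name here are assumptions for illustration, not the in-tree code:

      #include <linux/compiler.h>
      #include <linux/types.h>
      #include <asm/barrier.h>

      struct vmware_steal_time {
          union {
              u64 clock;          /* stolen time, in nanoseconds */
              struct {
                  u32 clock_low;
                  u32 clock_high;
              };
          };
      };

      static u64 steal_clock_read(struct vmware_steal_time *steal)
      {
          u32 hi, lo, hi2;

          if (IS_ENABLED(CONFIG_64BIT))
              return READ_ONCE(steal->clock);

          /* 32-bit: re-read until the high word is stable across the low read. */
          do {
              hi = READ_ONCE(steal->clock_high);
              smp_rmb();
              lo = READ_ONCE(steal->clock_low);
              smp_rmb();
              hi2 = READ_ONCE(steal->clock_high);
          } while (hi2 != hi);

          return ((u64)hi << 32) | lo;
      }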
* x86/vmware: Remove vmware_sched_clock_setup() (Alexey Makhalov, 2020-03-24; 1 file, -5/+10)
  Move the cyc2ns setup logic to a separate function. This separation allows the cyc2ns mult/shift pair to be used not only for sched_clock but also for other clocks such as steal_clock.
  Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-3-amakhalov@vmware.com
* x86/vmware: Make vmware_select_hypercall() __init (Alexey Makhalov, 2020-03-24; 1 file, -1/+1)
  vmware_select_hypercall() is used only by the __init functions, and should be annotated with __init as well.
  Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Thomas Hellstrom <thellstrom@vmware.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200323195707.31242-2-amakhalov@vmware.com
* x86/cpu/vmware: Use the full form of INL in VMWARE_PORT (Sami Tolvanen, 2019-10-08; 1 file, -1/+1)
  LLVM's assembler doesn't accept the short form INL instruction:

      inl (%%dx)

  but instead insists that the output register be explicitly specified:

      <inline asm>:1:7: error: invalid operand for instruction
      inl (%dx)
          ^
      LLVM ERROR: Error parsing inline asm

  Use the full form of the instruction to fix the build.
  Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Nick Desaulniers <ndesaulniers@google.com> Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Thomas Hellstrom <thellstrom@vmware.com> Cc: clang-built-linux@googlegroups.com Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: virtualization@lists.linux-foundation.org Cc: "VMware, Inc." <pv-drivers@vmware.com> Cc: x86-ml <x86@kernel.org> Link: https://github.com/ClangBuiltLinux/linux/issues/734 Link: https://lkml.kernel.org/r/20191007192129.104336-1-samitolvanen@google.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
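  The difference boils down to the assembler template; a stripped-down sketch (the real VMWARE_PORT macro passes several more registers in and out of the backdoor call):

      /*
       * Short form, accepted by GNU as but rejected by LLVM's integrated
       * assembler, since the destination register %eax is left implicit:
       *
       *     asm volatile("inl (%%dx)" : "=a" (eax) : "d" (port));
       *
       * Full form, accepted by both assemblers:
       */
      static u32 backdoor_inl(u16 port)
      {
          u32 eax;

          asm volatile("inl (%%dx), %%eax" : "=a" (eax) : "d" (port));
          return eax;
      }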
* x86/vmware: Add a header file for hypercall definitions (Thomas Hellstrom, 2019-08-28; 1 file, -1/+5)
  The new header is intended to be used by drivers using the backdoor. Follow the KVM example, using alternatives self-patching to choose between vmcall, vmmcall and io instructions. Also define two new CPU feature flags to indicate hypervisor support for the vmcall and vmmcall instructions. The new X86_FEATURE_VMW_VMMCALL flag is needed because using X86_FEATURE_VMMCALL might break QEMU/KVM setups using the vmmouse driver. They rely on X86_FEATURE_VMMCALL on AMD to get the kvm_hypercall() right. But they do not yet implement vmmcall for the VMware hypercall used by the vmmouse driver.
  [ bp: reflow hypercall %edx usage explanation comment. ] Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Doug Covelli <dcovelli@vmware.com> Cc: Aaron Lewis <aaronlewis@google.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: linux-graphics-maintainer@vmware.com Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> Cc: Nicolas Ferre <nicolas.ferre@microchip.com> Cc: Robert Hoo <robert.hu@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: virtualization@lists.linux-foundation.org Cc: <pv-drivers@vmware.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190828080353.12658-3-thomas_os@shipmail.org
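  A sketch of what such an alternatives-patched hypercall template can look like; the port constant and feature-flag names follow the commit message, and the exact macro in the in-tree header may differ:

      #include <linux/stringify.h>
      #include <asm/alternative.h>

      #define VMWARE_HYPERVISOR_PORT  0x5658

      /*
       * The compile-time default is the legacy I/O-port backdoor; at patch
       * time it is replaced by VMCALL or VMMCALL when the corresponding
       * feature flag was set during platform detection.
       */
      #define VMWARE_HYPERCALL                                                \
          ALTERNATIVE_2("movw $" __stringify(VMWARE_HYPERVISOR_PORT) ", %%dx; " \
                        "inl (%%dx), %%eax",                                  \
                        "vmcall", X86_FEATURE_VMCALL,                         \
                        "vmmcall", X86_FEATURE_VMW_VMMCALL)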
* x86/vmware: Update platform detection code for VMCALL/VMMCALL hypercalls (Thomas Hellstrom, 2019-08-28; 1 file, -17/+71)
  VMware has historically used an INL instruction for this, but recent hardware versions support using VMCALL/VMMCALL instead, so use this method if supported at platform detection time. Explicitly code separate macro versions, since the alternatives self-patching has not been performed at platform detection time. Also put tighter constraints on the assembly input parameters.
  Co-developed-by: Doug Covelli <dcovelli@vmware.com> Signed-off-by: Doug Covelli <dcovelli@vmware.com> Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com> Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Doug Covelli <dcovelli@vmware.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: linux-graphics-maintainer@vmware.com Cc: Thomas Gleixner <tglx@linutronix.de> Cc: virtualization@lists.linux-foundation.org Cc: <pv-drivers@vmware.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190828080353.12658-2-thomas_os@shipmail.org
* x86/apic: Rename 'lapic_timer_frequency' to 'lapic_timer_period' (Daniel Drake, 2019-05-09; 1 file, -1/+1)
  This variable is a period unit (number of clock cycles per jiffy), not a frequency (which is number of cycles per second). Give it a more appropriate name.
  Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Daniel Drake <drake@endlessm.com> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Andy Lutomirski <luto@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: len.brown@intel.com Cc: linux@endlessm.com Cc: rafael.j.wysocki@intel.com Link: http://lkml.kernel.org/r/20190509055417.13152-2-drake@endlessm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/cpu/vmware: Do not trace vmware_sched_clock() (Steven Rostedt (VMware), 2018-11-09; 1 file, -1/+1)
  When running function tracing on a Linux guest running on VMware Workstation, the guest would crash. This is due to tracing of the sched_clock internal call of the VMware vmware_sched_clock(), which causes an infinite recursion within the tracing code (clock calls must not be traced). Make vmware_sched_clock() not traced by ftrace.
  Fixes: 80e9a4f21fd7c ("x86/vmware: Add paravirt sched clock") Reported-by: GwanYeong Kim <gy741.kim@gmail.com> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org> Signed-off-by: Borislav Petkov <bp@suse.de> CC: Alok Kataria <akataria@vmware.com> CC: GwanYeong Kim <gy741.kim@gmail.com> CC: "H. Peter Anvin" <hpa@zytor.com> CC: Ingo Molnar <mingo@kernel.org> Cc: stable@vger.kernel.org CC: Thomas Gleixner <tglx@linutronix.de> CC: virtualization@lists.linux-foundation.org CC: x86-ml <x86@kernel.org> Link: http://lkml.kernel.org/r/20181109152207.4d3e7d70@gandalf.local.home
* x86/paravirt: Use a single ops structure (Juergen Gross, 2018-09-03; 1 file, -2/+2)
  Instead of using six globally visible paravirt ops structures, combine them in a single structure, keeping the original structures as sub-structures. This avoids the need to assemble struct paravirt_patch_template at runtime on the stack each time apply_paravirt() is being called (i.e. when loading a module).
  [ tglx: Made the struct and the initializer tabular for readability sake ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: xen-devel@lists.xenproject.org Cc: virtualization@lists.linux-foundation.org Cc: akataria@vmware.com Cc: rusty@rustcorp.com.au Cc: boris.ostrovsky@oracle.com Cc: hpa@zytor.com Link: https://lkml.kernel.org/r/20180828074026.820-9-jgross@suse.com
* x86/virt: Add enum for hypervisors to replace x86_hyper (Juergen Gross, 2017-11-10; 1 file, -2/+2)
  The x86_hyper pointer is only used for checking whether a virtual device is supporting the hypervisor the system is running on. Use an enum for that purpose instead and drop the x86_hyper pointer.
  Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Xavier Deguillard <xdeguillard@vmware.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: akataria@vmware.com Cc: arnd@arndb.de Cc: boris.ostrovsky@oracle.com Cc: devel@linuxdriverproject.org Cc: dmitry.torokhov@gmail.com Cc: gregkh@linuxfoundation.org Cc: haiyangz@microsoft.com Cc: kvm@vger.kernel.org Cc: kys@microsoft.com Cc: linux-graphics-maintainer@vmware.com Cc: linux-input@vger.kernel.org Cc: moltmann@vmware.com Cc: pbonzini@redhat.com Cc: pv-drivers@vmware.com Cc: rkrcmar@redhat.com Cc: sthemmin@microsoft.com Cc: virtualization@lists.linux-foundation.org Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/20171109132739.23465-3-jgross@suse.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/virt, x86/platform: Merge 'struct x86_hyper' into 'struct x86_platform' and 'struct x86_init' (Juergen Gross, 2017-11-10; 1 file, -2/+2)
  Instead of x86_hyper being either NULL on bare metal or a pointer to a struct hypervisor_x86 when the kernel runs as a guest, merge the struct into x86_platform and x86_init. This will remove the need for wrappers making it hard to find out what is being called. With dummy functions added for all callbacks, testing for a NULL function pointer can be removed, too.
  Suggested-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Juergen Gross <jgross@suse.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: akataria@vmware.com Cc: boris.ostrovsky@oracle.com Cc: devel@linuxdriverproject.org Cc: haiyangz@microsoft.com Cc: kvm@vger.kernel.org Cc: kys@microsoft.com Cc: pbonzini@redhat.com Cc: rkrcmar@redhat.com Cc: rusty@rustcorp.com.au Cc: sthemmin@microsoft.com Cc: virtualization@lists.linux-foundation.org Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/20171109132739.23465-2-jgross@suse.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* vmware: set cpu capabilities during platform initialization (Juergen Gross, 2017-05-02; 1 file, -19/+20)
  There is no need to set the same capabilities for each cpu individually. This can be done for all cpus in platform initialization.
  Cc: Alok Kataria <akataria@vmware.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: x86@kernel.org Cc: virtualization@lists.linux-foundation.org Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Acked-by: Alok Kataria <akataria@vmware.com> Signed-off-by: Juergen Gross <jgross@suse.com>
* x86/vmware: Remove duplicate inclusion of asm/timer.h (Masanari Iida, 2017-03-01; 1 file, -1/+0)
  Signed-off-by: Masanari Iida <standby24x7@gmail.com> Cc: akataria@vmware.com Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/20170227122922.26230-1-standby24x7@gmail.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* Merge branch 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip (Linus Torvalds, 2016-12-12; 1 file, -16/+70)
  Pull x86 platform updates from Ingo Molnar: "Two changes:
   - implement various VMware guest OS improvements/fixes (Alexey Makhalov)
   - unexport a spurious export from the intel-mid platform driver (Lukas Wunner)"
  * 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/vmware: Add paravirt sched clock
    x86/vmware: Add basic paravirt ops support
    x86/vmware: Use tsc_khz value for calibrate_cpu()
    x86/platform/intel-mid: Unexport intel_mid_pci_set_power_state()
    x86/vmware: Read tsc_khz only once at boot time
| * x86/vmware: Add paravirt sched clock (Alexey Makhalov, 2016-10-30; 1 file, -0/+42)
    The default sched_clock() implementation is native_sched_clock(). It contains code to handle non-constant-frequency TSCs, which creates overhead for systems with constant frequency TSCs. The VMware hypervisor guarantees a constant frequency TSC, so native_sched_clock() is not required and is slower than a dedicated function which operates with one-time calculated conversion factors. Calculate the conversion factors at boot time from the tsc frequency and install an optimized sched_clock() function via paravirt ops. The paravirtualized clock can be disabled on the kernel command line with the new 'no-vmw-sched-clock' option.
    Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Acked-by: Alok N Kataria <akataria@vmware.com> Cc: linux-doc@vger.kernel.org Cc: pv-drivers@vmware.com Cc: corbet@lwn.net Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/20161028075432.90579-4-amakhalov@vmware.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
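    The conversion-factor approach can be sketched as follows; the helper names and the exact offset handling are assumptions, the driver keeps its own cyc2ns state:

        #include <linux/clocksource.h>  /* clocks_calc_mult_shift() */
        #include <linux/math64.h>       /* mul_u64_u32_shr() */
        #include <asm/tsc.h>            /* rdtsc() */

        static u32 vmw_mult, vmw_shift;
        static u64 vmw_offset;

        /* One-time setup from the hypervisor-reported TSC frequency (kHz). */
        static void __init vmw_sched_clock_setup(unsigned long tsc_khz)
        {
            /* mult/shift convert a kHz counter into nanoseconds. */
            clocks_calc_mult_shift(&vmw_mult, &vmw_shift, tsc_khz, NSEC_PER_MSEC, 0);
            /* Start the clock near zero at setup time. */
            vmw_offset = mul_u64_u32_shr(rdtsc(), vmw_mult, vmw_shift);
        }

        /* Constant-TSC fast path: one multiply and shift per call. */
        static unsigned long long vmw_sched_clock(void)
        {
            return mul_u64_u32_shr(rdtsc(), vmw_mult, vmw_shift) - vmw_offset;
        }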
| * x86/vmware: Add basic paravirt ops support (Alexey Makhalov, 2016-10-30; 1 file, -0/+12)
    Add basic paravirt support:
    1. Set pv_info.name to "VMware hypervisor" to get the proper boot log message "Booting paravirtualized kernel on VMware hypervisor" instead of "... on bare hardware".
    2. Set pv_cpu_ops.io_delay() to an empty function, paravirt_noop(), to avoid VM-exits on I/O delays, because they are not required.
    Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Acked-by: Alok N Kataria <akataria@vmware.com> Cc: linux-doc@vger.kernel.org Cc: pv-drivers@vmware.com Cc: corbet@lwn.net Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/20161028075432.90579-3-amakhalov@vmware.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * x86/vmware: Use tsc_khz value for calibrate_cpu() (Alexey Makhalov, 2016-10-30; 1 file, -0/+1)
    Commit aa297292d708 ("x86/tsc: Enumerate SKL cpu_khz and tsc_khz via CPUID") separated the calibration mechanisms for cpu_khz and tsc_khz. Since the VMware hypervisor provides a constant frequency TSC to the guest, this change can lead to divergence between the tsc and the cpu frequency after vMotion, which might confuse the user. Solve this by overriding the x86 platform cpu calibration callback with the VMware-specific tsc calibration function.
    Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Acked-by: Alok N Kataria <akataria@vmware.com> Cc: linux-doc@vger.kernel.org Cc: pv-drivers@vmware.com Cc: corbet@lwn.net Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/20161028075432.90579-2-amakhalov@vmware.com Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
| * x86/vmware: Read tsc_khz only once at boot time (Alexey Makhalov, 2016-10-21; 1 file, -19/+18)
    Refactor the vmware platform setup code to query the hypervisor for the tsc frequency only once during boot. Since the VMware hypervisor guarantees a constant TSC, calibrate_tsc now uses the saved value.
    Signed-off-by: Alexey Makhalov <amakhalov@vmware.com> Acked-by: Alok N Kataria <akataria@vmware.com> Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/20161020050211.GA25304@amakhalov-virtual-machine Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* | x86/vmware: Skip timer_irq_works() check on VMware (Renat Valiullin, 2016-10-19; 1 file, -0/+5)
  The timer_irq_works() boot check may sometimes fail in a VM, when the host is overcommitted or when the guest is running nested. Since the intended check is unnecessary on VMware's virtual hardware, bypass it.
  Signed-off-by: Renat Valiullin <rvaliullin@vmware.com> Acked-by: Alok N Kataria <akataria@vmware.com> Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/20161013184539.GA11497@rvaliullin-vm Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86/vmware: Skip lapic calibration on VMware (Renat Valiullin, 2016-10-05; 1 file, -2/+10)
  In a virtualized environment the APIC timer calibration can go wrong when the host is overcommitted or the guest is running nested. This results in the APIC timers operating at an incorrect frequency. Since VMware supports a mechanism to retrieve the local APIC frequency we can ask the hypervisor for it and skip the APIC calibration loop.
  Signed-off-by: Renat Valiullin <rvaliullin@vmware.com> Acked-by: Alok N Kataria <akataria@vmware.com> Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/20161004201148.GA1421@uu64vm Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
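  Conceptually this boils down to something like the sketch below; the function name is hypothetical, and the VMWARE_CMD helper and register layout are assumed from the driver rather than quoted from it:

      static void __init vmware_apic_freq_setup(void)
      {
          u32 eax, ebx, ecx, edx;

          /* GETHZ reports the TSC frequency in ebx:eax and, per this change,
           * the local APIC timer frequency in ecx (assumed register layout). */
          VMWARE_CMD(GETHZ, eax, ebx, ecx, edx);
          if (ebx == UINT_MAX)
              return;     /* command not supported by this hypervisor */
      #ifdef CONFIG_X86_LOCAL_APIC
          /* Skip the APIC calibration loop: the timer frequency is known. */
          lapic_timer_frequency = ecx / HZ;
      #endif
      }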
* x86/kernel: Audit and remove any unnecessary uses of module.h (Paul Gortmaker, 2016-07-14; 1 file, -1/+2)
  Historically a lot of these existed because we did not have a distinction between what was modular code and what was providing support to modules via EXPORT_SYMBOL and friends. That changed when we forked out support for the latter into the export.h file. This means we should be able to reduce the usage of module.h in code that is obj-y in a Makefile or bool in Kconfig. The advantage in doing so is that module.h itself sources about 15 other headers, adding significantly to what we feed cpp, and it can obscure what headers we are effectively using. Since module.h was the source for init.h (for __init) and for export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance for the presence of either and replace as needed. Build testing revealed some implicit header usage that was fixed up accordingly. Note that some bool/obj-y instances remain since module.h is the header for some exception table entry stuff, and for things like __init_or_module (code that is tossed when MODULES=n).
  Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20160714001901.31603-4-paul.gortmaker@windriver.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/cpufeature: Remove cpu_has_hypervisor (Borislav Petkov, 2016-03-31; 1 file, -1/+1)
  Use boot_cpu_has() instead.
  Tested-by: David Kershner <david.kershner@unisys.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: sparmaintainer@unisys.com Cc: virtualization@lists.linux-foundation.org Link: http://lkml.kernel.org/r/1459266123-21878-3-git-send-email-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86/cpu: Convert printk(KERN_<LEVEL> ...) to pr_<level>(...) (Chen Yucong, 2016-02-03; 1 file, -3/+2)
  - Use the more current logging style pr_<level>(...) instead of the old printk(KERN_<LEVEL> ...).
  - Convert pr_warning() to pr_warn().
  Signed-off-by: Chen Yucong <slaoub@gmail.com> Cc: Borislav Petkov <bp@alien8.de> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1454384702-21707-1-git-send-email-slaoub@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86: Correctly detect hypervisor (Jason Wang, 2013-08-05; 1 file, -4/+4)
  We try to handle the hypervisor compatibility mode by detecting the hypervisor through a specific order. This is not robust, since hypervisors may implement each other's features. This patch tries to handle this situation by always choosing the last one in the CPUID leaves. This is done by letting .detect() return a priority instead of true/false and just re-using the CPUID leaf where the signature was found as the priority (or 1 if it was found by DMI). Then we can just pick the hypervisor with the highest priority. Other, more sophisticated detection methods could also be implemented on top.
  Suggested by H. Peter Anvin and Paolo Bonzini. Acked-by: K. Y. Srinivasan <kys@microsoft.com> Cc: Haiyang Zhang <haiyangz@microsoft.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Doug Covelli <dcovelli@vmware.com> Cc: Borislav Petkov <bp@suse.de> Cc: Dan Hecht <dhecht@vmware.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Gleb Natapov <gleb@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Link: http://lkml.kernel.org/r/1374742475-2485-4-git-send-email-jasowang@redhat.com Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
* x86: delete __cpuinit usage from all x86 files (Paul Gortmaker, 2013-07-14; 1 file, -1/+1)
  The __cpuinit type of throwaway sections might have made sense some time ago when RAM was more constrained, but now the savings do not offset the cost and complications. For example, the fix in commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time") is a good example of the nasty type of bugs that can be created with improper use of the various __init prefixes. After a discussion on LKML[1] it was decided that cpuinit should go the way of devinit and be phased out. Once all the users are gone, we can then finally remove the macros themselves from linux/init.h. Note that some harmless section mismatch warnings may result, since notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c) are flagged as __cpuinit -- so if we remove the __cpuinit from arch specific callers, we will also get section mismatch warnings. As an intermediate step, we intend to turn the linux/init.h cpuinit content into no-ops as early as possible, since that will get rid of these warnings. In any case, they are temporary and harmless. This removes all the arch/x86 uses of the __cpuinit macros from all C files. x86 only had the one __CPUINIT used in assembly files, and it wasn't paired off with a .previous or a __FINIT, so we can delete it directly w/o any corresponding additional change there.
  [1] https://lkml.org/lkml/2013/5/20/589 Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: x86@kernel.org Acked-by: Ingo Molnar <mingo@kernel.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
* x86/apic: Allow x2apic without IR on VMware platform (Alok N Kataria, 2013-01-24; 1 file, -0/+13)
  This patch updates the x2apic initialization code to allow x2apic on the VMware platform even without interrupt remapping support. The hypervisor_x2apic_available hook was added to the x2apic initialization code and used by KVM and Xen before this. I have also cleaned up that code to export this hook through the hypervisor_x86 structure. Compile tested for KVM and Xen configs; this patch doesn't have any functional effect on those two platforms. On the VMware platform, verified that x2apic is used in physical mode on products that support this.
  Signed-off-by: Alok N Kataria <akataria@vmware.com> Reviewed-by: Doug Covelli <dcovelli@vmware.com> Reviewed-by: Dan Hecht <dhecht@vmware.com> Acked-by: H. Peter Anvin <hpa@zytor.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Avi Kivity <avi@redhat.com> Link: http://lkml.kernel.org/r/1358466282.423.60.camel@akataria-dtop.eng.vmware.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
* x86: Fix common misspellings (Lucas De Marchi, 2011-03-18; 1 file, -1/+1)
  They were generated by 'codespell' and then manually reviewed.
  Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi> Cc: trivial@kernel.org LKML-Reference: <1300389856-1099-3-git-send-email-lucas.demarchi@profusion.mobi> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86, vmware: Preset lpj values when on VMware. (Alok Kataria, 2010-08-02; 1 file, -1/+8)
  When running on VMware's platform, we have seen situations where the APs try to calibrate the lpj values and fail to get good calibration runs because of timing issues. As a result, delays don't work correctly on all cpus. The solution is to set the preset_lpj value based on the current tsc frequency value. This is similar to what KVM does as well.
  Signed-off-by: Alok N Kataria <akataria@vmware.com> LKML-Reference: <1280790637.14933.29.camel@ank32.eng.vmware.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
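  A sketch of the idea, assuming the hypervisor-reported TSC frequency (in kHz) is already known; preset_lpj is the kernel's hook for pre-seeding loops_per_jiffy, and the function name here is hypothetical:

      #include <linux/delay.h>    /* preset_lpj */
      #include <asm/div64.h>      /* do_div() */

      /* Derive loops-per-jiffy from the reported TSC frequency so that APs
       * do not need a timing-sensitive delay-calibration run of their own. */
      static void __init vmware_preset_lpj(unsigned long tsc_khz)
      {
          u64 lpj = (u64)tsc_khz * 1000;  /* cycles per second */

          do_div(lpj, HZ);                /* cycles per jiffy */
          preset_lpj = lpj;
      }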
* x86, hypervisor: Export the x86_hyper* symbols (H. Peter Anvin, 2010-05-09; 1 file, -1/+1)
  Export x86_hyper and the related specific structures, allowing for hypervisor identification by modules.
  Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: Greg KH <greg@kroah.com> Cc: Hank Janssen <hjanssen@microsoft.com> Cc: Alok Kataria <akataria@vmware.com> Cc: Ky Srinivasan <ksrinivasan@novell.com> Cc: Dmitry Torokhov <dtor@vmware.com> LKML-Reference: <4BE49778.6060800@zytor.com>
* Merge commit 'v2.6.34-rc6' into x86/cpu (H. Peter Anvin, 2010-05-08; 1 file, -0/+2)
| * VMware Balloon driver (Dmitry Torokhov, 2010-04-24; 1 file, -0/+2)
    This is a standalone version of the VMware Balloon driver. Ballooning is a technique that allows the hypervisor to dynamically limit the amount of memory available to the guest (with guest cooperation). In the overcommit scenario, when the hypervisor detects that it needs to shuffle some memory, it instructs the driver to allocate a certain number of pages, and the underlying memory gets returned to the hypervisor. Later the hypervisor may return memory to the guest by reattaching memory to the pageframes and instructing the driver to "deflate" the balloon. We are submitting a standalone driver because the KVM maintainer (Avi Kivity) expressed the opinion (rightly) that our transport does not fit well into the virtqueue paradigm and thus it does not make much sense to integrate with virtio. There were also some concerns whether the current ballooning technique is the right thing. If there appears a better framework to achieve this we are prepared to evaluate and switch to using it, but in the meantime we'd like to get this driver upstream. We want to get the driver accepted in distributions so that users do not have to deal with an out-of-tree module, and many distributions have an "upstream first" requirement. The driver has been shipping for a number of years and users running on the VMware platform will have it installed as part of VMware Tools even if it will not come from a distribution, thus there should not be additional risk in pulling the driver into mainline. The driver will only activate if the host is VMware, so everyone else should not be affected at all.
    Signed-off-by: Dmitry Torokhov <dtor@vmware.com> Cc: Avi Kivity <avi@redhat.com> Cc: Jeremy Fitzhardinge <jeremy@goop.org> Cc: Ingo Molnar <mingo@elte.hu> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* | x86: Clean up the hypervisor layer (H. Peter Anvin, 2010-05-07; 1 file, -16/+20)
  Clean up the hypervisor layer and the hypervisor drivers, using an ops structure instead of an enumeration with if statements. The identity of the hypervisor, if needed, can be tested by testing the pointer value in x86_hyper. The MS-HyperV private state is moved into a normal global variable (it's per-system state, not per-CPU state). Being a normal bss variable, it will be left at all zero on non-HyperV platforms, and so can generally be tested for HyperV-specific features without additional qualification.
  Signed-off-by: H. Peter Anvin <hpa@zytor.com> Acked-by: Greg KH <greg@kroah.com> Cc: Hank Janssen <hjanssen@microsoft.com> Cc: Alok Kataria <akataria@vmware.com> Cc: Ky Srinivasan <ksrinivasan@novell.com> LKML-Reference: <4BE49778.6060800@zytor.com>
* x86: Print the hypervisor returned tsc_khz during boot (Alok Kataria, 2009-09-20; 1 file, -0/+6)
  On an AMD-64 system the processor frequency that is printed during system boot may be different than the tsc frequency that was returned by the hypervisor, due to the value returned from calibrate_cpu. For debugging timekeeping or other related issues it might be better to get the tsc_khz value returned by the hypervisor. The patch below now prints the tsc frequency that the VMware hypervisor returned.
  Signed-off-by: Alok N Kataria <akataria@vmware.com> LKML-Reference: <1252095219.12518.13.camel@ank32.eng.vmware.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: Move tsc_calibration to x86_init_ops (Thomas Gleixner, 2009-08-31; 1 file, -9/+12)
  TSC calibration is modified by the vmware hypervisor and paravirt by separate means. Moorestown wants to add its own calibration routine as well. So make calibrate_tsc a proper x86_init_ops function and override it by paravirt or by the early setup of the vmware hypervisor.
  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
* x86/cpu: Clean up various files a bit (Alan Cox, 2009-07-11; 1 file, -9/+9)
  No code changes except printk levels (although some of the K6 mtrr code might be clearer if there were a few, as would splitting out some of the intel cache code).
  Signed-off-by: Alan Cox <alan@linux.intel.com> LKML-Reference: <new-submission> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: vmware - fix sparse warnings (Hannes Eder, 2008-11-23; 1 file, -0/+1)
  Impact: fix sparse build warning
  Fix the following sparse warnings:
    arch/x86/kernel/cpu/vmware.c:69:5: warning: symbol 'vmware_platform' was not declared. Should it be static?
    arch/x86/kernel/cpu/vmware.c:89:15: warning: symbol 'vmware_get_tsc_khz' was not declared. Should it be static?
    arch/x86/kernel/cpu/vmware.c:107:16: warning: symbol 'vmware_set_feature_bits' was not declared. Should it be static?
  Signed-off-by: Hannes Eder <hannes@hanneseder.net> Cc: "Alok N Kataria" <akataria@vmware.com> Cc: "Dan Hecht" <dhecht@vmware.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
* x86: vmware: look for DMI string in the product serial key (Alok Kataria, 2008-11-04; 1 file, -1/+6)
  Impact: should permit VMware detection on older platforms where the vendor is changed. Could theoretically cause a regression if some weird serial number scheme contains the string "VMware" by pure chance. Seems unlikely, especially with the mixed case. In some user-configured cases, VMware may choose not to put a VMware-specific DMI string, but the product serial key is always there and is VMware-specific. Add an interface to check the serial key when checking for VMware in the DMI information.
  Signed-off-by: Alok N Kataria <akataria@vmware.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
* x86: VMware: Fix vmware_get_tsc code (Alok Kataria, 2008-11-03; 1 file, -2/+2)
  Impact: fix possible failure to calibrate the TSC on VMware near 4 GHz. The current version of the code to get the tsc frequency from the VMware hypervisor will be broken on a processor with frequency (4G-1) Hz, because on such processors eax will have UINT_MAX and that would be legitimate. We instead check that EBX did change to decide if we were able to read the frequency from the hypervisor.
  Signed-off-by: Alok N Kataria <akataria@vmware.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
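  The distinction can be sketched like this, under the assumption that the 64-bit frequency comes back split across ebx:eax and that an unsupported command leaves ebx at UINT_MAX (function name hypothetical; VMWARE_PORT as defined in the driver):

      static u64 vmware_read_tsc_hz(void)
      {
          u32 eax, ebx, ecx, edx;

          VMWARE_PORT(GETHZ, eax, ebx, ecx, edx);

          /* Near 4 GHz, eax == UINT_MAX is a legitimate value, so testing it
           * is ambiguous; an unsupported command leaves ebx at UINT_MAX. */
          if (ebx == UINT_MAX)
              return 0;

          return eax | ((u64)ebx << 32);
      }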
* x86: Add a synthetic TSC_RELIABLE feature bit. (Alok Kataria, 2008-11-01; 1 file, -0/+18)
  Impact: changes timebase calibration on VMware. Use the synthetic TSC_RELIABLE bit to work around virtualization anomalies. Virtual TSCs can be kept nearly in sync, but because the virtual TSC offset is set by software, it's not perfect. So, the TSC synchronization test can fail. Even then the TSC can be used as a clocksource since the VMware platform exports a reliable TSC to the guest for timekeeping purposes. Use this bit to check if we need to skip the TSC sync checks. Along with this, also set the CONSTANT_TSC bit when on VMware, since we still want to use the TSC as a clocksource on VMs running over hardware which has unsynchronized TSCs (Opterons), since the hypervisor will take care of providing a consistent TSC to the guest.
  Signed-off-by: Alok N Kataria <akataria@vmware.com> Signed-off-by: Dan Hecht <dhecht@vmware.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
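  In code terms this amounts to setting two synthetic capability bits per CPU and letting the TSC sync code test the boot CPU's copy; a sketch, with the consumer-side helper name being hypothetical:

      #include <asm/processor.h>
      #include <asm/cpufeature.h>

      static void vmware_set_cpu_features(struct cpuinfo_x86 *c)
      {
          set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
          set_cpu_cap(c, X86_FEATURE_TSC_RELIABLE);
      }

      /* Consumer side, in the TSC sync checks (illustrative helper): */
      static bool tsc_sync_check_needed(void)
      {
          return !boot_cpu_has(X86_FEATURE_TSC_RELIABLE);
      }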
* x86: Hypervisor detection and get tsc_freq from hypervisor (Alok Kataria, 2008-11-01; 1 file, -0/+88)
  Impact: changes timebase calibration on VMware.
  v3->v2: abstract the hypervisor detection and feature (tsc_freq) request behind a hypervisor.c file.
  v2->v1: add a x86_hyper_vendor field to the cpuinfo_x86 structure. This avoids multiple calls to the hypervisor detection function.
  This patch adds a function to detect if we are running under VMware. The current way to check if we are on VMware is the following:
  # check if the "hypervisor present bit" is set; if so, read the 0x40000000 cpuid leaf and check for the "VMwareVMware" signature.
  # if the above fails, check the DMI vendor name for a "VMware" string; if we find one, we query the VMware hypervisor port to check if we are under VMware.
  The DMI + "VMware hypervisor port check" is needed for older VMware products, which don't implement the hypervisor signature cpuid leaf. Also note that since we are checking for the DMI signature, the hypervisor port should never be accessed on native hardware.
  This patch also adds a hypervisor_get_tsc_freq function; instead of calibrating the frequency, which can be error prone in a virtualized environment, we ask the hypervisor for it. We get the frequency from the hypervisor by accessing the hypervisor port if we are running on VMware. Other hypervisors too can add code to the generic routine to get the frequency on their platform.
  Signed-off-by: Alok N Kataria <akataria@vmware.com> Signed-off-by: Dan Hecht <dhecht@vmware.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com>
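  A compact sketch of the CPUID half of that detection sequence (leaf number and signature per the commit message; the function name is hypothetical and the DMI-plus-backdoor fallback is omitted):

      #include <linux/string.h>
      #include <asm/processor.h>  /* cpuid() */

      #define CPUID_VMWARE_INFO_LEAF  0x40000000

      static bool cpuid_says_vmware(void)
      {
          unsigned int eax, vendor[3];

          /* Leaf 0x40000000 returns the hypervisor vendor signature in
           * ebx/ecx/edx when the "hypervisor present" CPUID bit is set. */
          cpuid(CPUID_VMWARE_INFO_LEAF, &eax,
                &vendor[0], &vendor[1], &vendor[2]);

          return memcmp(vendor, "VMwareVMware", 12) == 0;
      }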