path: root/arch/powerpc/kvm/44x_tlb.c
* KVM: PPC: fix compilation of "dump tlbs" debug function
  Hollis Blanchard, 2010-10-24 (1 file changed, +1/-0)

  Missing local variable.

  Signed-off-by: Hollis Blanchard <hollis_blanchard@mentor.com>
  Signed-off-by: Alexander Graf <agraf@suse.de>
* KVM: PPC: Convert MSR to shared page
  Alexander Graf, 2010-10-24 (1 file changed, +4/-4)

  One of the most obvious registers to share with the guest directly is the
  MSR. The MSR contains the "interrupts enabled" flag, which the guest has to
  toggle in critical sections. So in order to reduce the overhead of enabling
  and disabling interrupts, let's put the MSR into the shared page.

  Keep in mind that even though the guest can fully read its contents,
  writing to it doesn't always update all state; there are a few safe fields
  that don't require hypervisor interaction. See the documentation for the
  list of MSR bits that are safe to set from inside the guest.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
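A minimal userspace model of the shared-page idea sketched here, with hypothetical names (struct kvm_vcpu_arch_shared; the MSR_EE bit position is illustrative): host and guest map the same page, and the guest toggles the interrupt-enable bit in that page without trapping.

    #include <stdint.h>
    #include <stdio.h>

    #define MSR_EE (1UL << 15)   /* "external interrupts enabled" bit; position illustrative */

    /* Hypothetical layout of the page shared between host and guest. */
    struct kvm_vcpu_arch_shared {
        uint64_t msr;
    };

    /* Guest-side critical section: flip the bit in shared memory; no
     * hypercall or trap is needed for this "safe" MSR field. */
    static void guest_irq_disable(struct kvm_vcpu_arch_shared *shared)
    {
        shared->msr &= ~MSR_EE;
    }

    static void guest_irq_enable(struct kvm_vcpu_arch_shared *shared)
    {
        shared->msr |= MSR_EE;
    }

    int main(void)
    {
        struct kvm_vcpu_arch_shared shared = { .msr = MSR_EE };

        guest_irq_disable(&shared);
        printf("in critical section:    EE=%d\n", !!(shared.msr & MSR_EE));
        guest_irq_enable(&shared);
        printf("after critical section: EE=%d\n", !!(shared.msr & MSR_EE));
        return 0;
    }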
* KVM: Use u64 for frame data types
  Joerg Roedel, 2010-08-02 (1 file changed, +2/-1)

  For 32-bit machines where the physical address width is larger than the
  virtual address width, the frame number types in KVM may overflow. Fix this
  by changing them to u64.

  [sfr: fix build on 32-bit ppc]

  Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
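A short demonstration of the overflow class being fixed, assuming 32-bit frame arithmetic: converting a frame number back to a physical address above 4 GiB truncates unless the computation is widened to u64.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    int main(void)
    {
        /* A page frame just above the 4 GiB boundary, reachable when the
         * physical address width exceeds the 32-bit virtual address width. */
        uint32_t pfn = 0x100001;

        uint32_t bad  = pfn << PAGE_SHIFT;            /* wraps to 0x00001000 */
        uint64_t good = (uint64_t)pfn << PAGE_SHIFT;  /* 0x100001000, correct */

        printf("32-bit result: 0x%08" PRIx32 "\n", bad);
        printf("64-bit result: 0x%09" PRIx64 "\n", good);
        return 0;
    }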
* KVM: PPC: Keep index within boundaries in kvmppc_44x_emul_tlbwe()
  Roel Kluin, 2010-05-13 (1 file changed, +1/-1)

  An index of KVM44x_GUEST_TLB_SIZE is already one too large.

  Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
  Acked-by: Hollis Blanchard <hollis@penguinppc.org>
  Acked-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
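The shape of the fix, as a sketch (the TLB size value is illustrative): valid guest TLB indices run from 0 to KVM44x_GUEST_TLB_SIZE - 1, so a `<=` comparison lets exactly one out-of-bounds index through.

    #include <stdio.h>

    #define KVM44x_GUEST_TLB_SIZE 64   /* illustrative value */

    /* Valid indices are 0 .. SIZE-1; SIZE itself must be rejected. */
    static int gtlb_index_valid(unsigned int index)
    {
        return index < KVM44x_GUEST_TLB_SIZE;   /* "<", not the buggy "<=" */
    }

    int main(void)
    {
        printf("index 63: %s\n", gtlb_index_valid(63) ? "ok" : "rejected");
        printf("index 64: %s\n", gtlb_index_valid(64) ? "ok" : "rejected");
        return 0;
    }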
* KVM: PPC: Add helpers for CR, XER
  Alexander Graf, 2010-03-01 (1 file changed, +4/-2)

  We now have helpers for the GPRs, so let's also add some for CR and XER.
  Having them in the PACA simplifies the code a lot, as we don't need to care
  about where to store the condition codes or about overflowing any integers.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: PPC: Use accessor functions for GPR access
  Alexander Graf, 2010-03-01 (1 file changed, +7/-7)

  All code in PPC KVM currently accesses GPRs in the vcpu struct directly.
  While there's nothing wrong with that wrt the current way GPRs are stored
  and loaded, it doesn't suffice for the PACA acceleration that will follow
  in this patchset.

  So let's just create little wrapper inline functions that we call whenever
  a GPR needs to be read from or written to. The compiled code shouldn't
  really change at all for now.

  Signed-off-by: Alexander Graf <agraf@suse.de>
  Signed-off-by: Avi Kivity <avi@redhat.com>
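A sketch of the wrapper approach against a reduced vcpu struct (the real accessors operate on the full kvm_vcpu): once every access funnels through one inline function, the backing storage can move without touching any call site.

    #include <stdint.h>
    #include <stdio.h>

    /* Reduced model of the vcpu; the real struct holds far more state. */
    struct kvm_vcpu {
        uint32_t gpr[32];
    };

    /* Thin wrappers: every GPR read or write goes through these, so the
     * storage behind them can later change (e.g. move into the PACA). */
    static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, uint32_t val)
    {
        vcpu->gpr[num] = val;
    }

    static inline uint32_t kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
    {
        return vcpu->gpr[num];
    }

    int main(void)
    {
        struct kvm_vcpu vcpu = { 0 };

        kvmppc_set_gpr(&vcpu, 3, 0xdeadbeef);   /* r3: PPC return-value register */
        printf("r3 = 0x%08x\n", kvmppc_get_gpr(&vcpu, 3));
        return 0;
    }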
* KVM: powerpc: convert marker probes to event trace
  Marcelo Tosatti, 2009-09-10 (1 file changed, +6/-5)

  [avi: make it build]
  [avi: fold trace-arch.h into trace.h]

  CC: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: Add kvmppc_mmu_dtlb/itlb_miss for booke
  Hollis Blanchard, 2009-03-24 (1 file changed, +8/-0)

  When an ITLB or DTLB miss happens, E500 needs to update some MMU registers,
  so that the auto-load mechanism can work when a TLB entry is written.

  Signed-off-by: Liu Yu <yu.liu@freescale.com>
  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: rename 44x MMU functions used in booke.c
  Hollis Blanchard, 2009-03-24 (1 file changed, +2/-2)

  e500 will provide its own implementation of these.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: turn tlb_xlate() into a per-core hook (and give it a better name)
  Hollis Blanchard, 2009-03-24 (1 file changed, +10/-0)

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: change kvmppc_mmu_map() parameters
  Hollis Blanchard, 2009-03-24 (1 file changed, +7/-8)

  Passing just the TLB index will ease an e500 implementation.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: cosmetic changes to mmu hook names
  Hollis Blanchard, 2009-03-24 (1 file changed, +1/-1)

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: Implement in-kernel exit timing statistics
  Hollis Blanchard, 2008-12-31 (1 file changed, +3/-0)

  Existing KVM statistics are either plain counters (kvm_stat) reported for
  KVM generally, or trace-based approaches like kvm_trace. For KVM on powerpc
  we needed to track the timings of the different exit types. While this
  could be achieved by parsing data created with a kvm_trace extension, that
  adds too much overhead (at least on embedded PowerPC), slowing down the
  workloads we wanted to measure.

  Therefore this patch adds an in-kernel exit timing statistic to the powerpc
  KVM code. The statistic is available per VM and per vcpu under the KVM
  debugfs directory. As it still adds some (low) overhead, it can be enabled
  via a .config entry and is off by default.

  Since this patch touched all powerpc kvm_stat code anyway, that code is now
  merged and simplified together with the exit timing statistic code (which
  still works with exit timing disabled in .config).

  Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
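A userspace sketch of such an accumulator, with hypothetical exit types and field names: one record per exit type keeps count, sum, min, and max, which is enough to report per-type averages through a debugfs-style interface.

    #include <stdint.h>
    #include <stdio.h>

    enum exit_type { EXIT_DTLB_MISS, EXIT_ITLB_MISS, EXIT_EXT_INTR, EXIT_MAX };

    struct exit_timing {
        uint64_t count, sum, min, max;   /* durations, e.g. timebase cycles */
    };

    static struct exit_timing timings[EXIT_MAX];

    /* Called once per exit with the measured duration. */
    static void account_exit(enum exit_type type, uint64_t cycles)
    {
        struct exit_timing *t = &timings[type];

        if (t->count == 0 || cycles < t->min)
            t->min = cycles;
        if (cycles > t->max)
            t->max = cycles;
        t->sum += cycles;
        t->count++;
    }

    int main(void)
    {
        account_exit(EXIT_DTLB_MISS, 120);
        account_exit(EXIT_DTLB_MISS, 80);

        struct exit_timing *t = &timings[EXIT_DTLB_MISS];
        printf("dtlb_miss: count=%llu avg=%llu min=%llu max=%llu\n",
               (unsigned long long)t->count,
               (unsigned long long)(t->sum / t->count),
               (unsigned long long)t->min,
               (unsigned long long)t->max);
        return 0;
    }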
* KVM: ppc: save and restore guest mappings on context switch
  Hollis Blanchard, 2008-12-31 (1 file changed, +58/-0)

  Store shadow TLB entries in memory, but only use them on host context
  switch (instead of on every guest entry). This improves performance for
  most workloads on 440 by reducing the guest TLB miss rate.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: directly insert shadow mappings into the hardware TLB
  Hollis Blanchard, 2008-12-31 (1 file changed, +125/-131)

  Formerly, we maintained a per-vcpu shadow TLB and on every entry to the
  guest would load this array into the hardware TLB. This consumed 1280
  bytes of memory (64 entries of 16 bytes plus a struct page pointer each),
  and also required some assembly to loop over the array on every entry.

  Instead of saving a copy in memory, we can just store shadow mappings
  directly into the hardware TLB, accepting that the host kernel will
  clobber these as part of the normal 440 TLB round robin. When we do that
  we need less than half the memory, and we have decreased the exit handling
  time for all guest exits, at the cost of an increased number of TLB misses
  because the host overwrites some guest entries.

  These savings will be increased on processors with larger TLBs or which
  implement intelligent flush instructions like tlbivax (which will avoid
  the need to walk arrays in software).

  In addition to that and to the code simplification, we have a greater
  chance of leaving other host userspace mappings in the TLB, instead of
  forcing all subsequent tasks to re-fault all their mappings.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: support large host pages
  Hollis Blanchard, 2008-12-31 (1 file changed, +55/-16)

  KVM on 440 has always been able to handle large guest mappings with 4K
  host pages -- we must, since the guest kernel uses 256MB mappings. This
  patch makes KVM work when the host has large pages too (tested with 64K).

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: fix userspace mapping invalidation on context switch
  Hollis Blanchard, 2008-12-31 (1 file changed, +17/-14)

  We used to defer invalidating userspace TLB entries until jumping out of
  the kernel. This was causing MMU weirdness most easily triggered by using
  a pipe in the guest, e.g. "dmesg | tail". I believe the problem was that
  after the guest kernel changed the PID (part of context switch), the old
  process's mappings were still present, and so copy_to_user() on the
  "return to new process" path ended up using stale mappings.

  Testing with large pages (64K) exposed the problem, probably because with
  4K pages, pressure on the TLB faulted all process A's mappings out before
  the guest kernel could insert any for process B.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: use prefetchable mappings for guest memory
  Hollis Blanchard, 2008-12-31 (1 file changed, +7/-2)

  Bare metal Linux on 440 can "overmap" RAM in the kernel linear map, so
  that it can use large (256MB) mappings even if memory isn't a multiple of
  256MB. To prevent the hardware prefetcher from loading from an invalid
  physical address through that mapping, it's marked Guarded.

  However, KVM must ensure that all guest mappings are backed by real
  physical RAM (since a deliberate access through a guarded mapping could
  still cause a machine check). Accordingly, we don't need to make our
  mappings guarded, so let's allow prefetching as the designers intended.

  Curiously this patch didn't affect performance at all on the quick test I
  tried, but it's clearly the right thing to do anyways and may improve
  other workloads.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: use MMUCR accessor to obtain TID
  Hollis Blanchard, 2008-12-31 (1 file changed, +1/-1)

  We have an accessor; might as well use it.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: create struct kvm_vcpu_44x and introduce container_of() accessor
  Hollis Blanchard, 2008-12-31 (1 file changed, +24/-13)

  This patch doesn't yet move all 44x-specific data into the new structure,
  but is the first step down that path. In the future we may also want to
  create a struct kvm_vcpu_booke.

  Based on patch from Liu Yu <yu.liu@freescale.com>.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: refactor instruction emulation into generic and core-specific pieces
  Hollis Blanchard, 2008-12-31 (1 file changed, +2/-2)

  Cores provide 3 emulation hooks, implemented for example in the new
  4xx_emulate.c:

    kvmppc_core_emulate_op
    kvmppc_core_emulate_mtspr
    kvmppc_core_emulate_mfspr

  Strictly speaking the last two aren't necessary, but provide for more
  informative error reporting ("unknown SPR").

  Long term I'd like to have instruction decoding autogenerated from tables
  of opcodes, and that way we could aggregate universal, Book E, and
  core-specific instructions more easily and without redundant switch
  statements.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: Rename "struct tlbe" to "struct kvmppc_44x_tlbe"
  Hollis Blanchard, 2008-12-31 (1 file changed, +12/-10)

  This will ease ports to other cores. Also remove unused "struct kvm_tlb"
  while we're at it.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: Move 440-specific TLB code into 44x_tlb.c
  Hollis Blanchard, 2008-12-31 (1 file changed, +136/-2)

  This will make it easier to provide implementations for other cores.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: stop leaking host memory on VM exit
  Hollis Blanchard, 2008-11-25 (1 file changed, +8/-0)

  When the VM exits, we must call put_page() for every page referenced in
  the shadow TLB. Without this patch, we usually leak 30-50 host pages
  (120 - 200 KiB with 4 KiB pages). The maximum number of pages leaked is
  the size of our shadow TLB, 64 pages.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
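A userspace model of the fix's shape (names illustrative; the real code calls put_page() on struct page pointers): on VM teardown, walk the shadow TLB and drop the reference held for every still-mapped entry.

    #include <stdio.h>
    #include <stdlib.h>

    #define SHADOW_TLB_SIZE 64

    struct page_ref { int refcount; };

    static void put_page(struct page_ref *p)
    {
        if (--p->refcount == 0)
            free(p);
    }

    struct shadow_tlbe {
        struct page_ref *page;   /* NULL when the entry is unused */
    };

    /* Without this loop at VM exit, every mapped entry leaks its page. */
    static void shadow_tlb_release(struct shadow_tlbe *tlb, int n)
    {
        for (int i = 0; i < n; i++) {
            if (tlb[i].page) {
                put_page(tlb[i].page);
                tlb[i].page = NULL;
            }
        }
    }

    int main(void)
    {
        struct shadow_tlbe tlb[SHADOW_TLB_SIZE] = { 0 };

        tlb[0].page = malloc(sizeof(struct page_ref));
        tlb[0].page->refcount = 1;

        shadow_tlb_release(tlb, SHADOW_TLB_SIZE);
        printf("entry 0 released: %s\n", tlb[0].page == NULL ? "yes" : "no");
        return 0;
    }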
* KVM: switch to get_user_pages_fast
  Marcelo Tosatti, 2008-10-15 (1 file changed, +0/-2)

  Convert gfn_to_pfn to use get_user_pages_fast, which can do lockless
  pagetable lookups on x86. Kernel compilation on a 4-way guest is 3.7%
  faster on VMX.

  Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
  Signed-off-by: Avi Kivity <avi@redhat.com>
* KVM: ppc: kvmppc_44x_shadow_release() does not require mmap_sem to be locked
  Hollis Blanchard, 2008-10-15 (1 file changed, +1/-7)

  And it gets in the way of get_user_pages_fast().

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: powerpc: Map guest userspace with TID=0 mappings
  Hollis Blanchard, 2008-10-15 (1 file changed, +23/-16)

  When we use TID=N userspace mappings, we must ensure that kernel mappings
  have been destroyed when entering userspace. Using TID=1/TID=0 for
  kernel/user mappings and running userspace with PID=0 means that userspace
  can't access the kernel mappings, but the kernel can directly access
  userspace.

  The net is that we don't need to flush the TLB on privilege switches, but
  we do on guest context switches (which are far more infrequent). Guest
  boot time performance improvement: about 30%.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
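A sketch of the matching rule this scheme relies on, under the assumption of 440-style TLB semantics (TID 0 acts as a wildcard matching any PID): running guest userspace with PID=0 hides the TID=1 kernel entries, while the TID=0 user entries stay visible in both modes.

    #include <stdio.h>

    /* Assumed 440-style match: TID 0 is a wildcard; otherwise TID must
     * equal the current PID. */
    static int tlbe_matches(unsigned int tid, unsigned int pid)
    {
        return tid == 0 || tid == pid;
    }

    int main(void)
    {
        unsigned int kernel_tid = 1, user_tid = 0;

        /* Guest kernel runs with PID=1: both mapping kinds are visible. */
        printf("kernel mode: kernel map hit=%d, user map hit=%d\n",
               tlbe_matches(kernel_tid, 1), tlbe_matches(user_tid, 1));

        /* Guest userspace runs with PID=0: kernel mappings do not match. */
        printf("user mode:   kernel map hit=%d, user map hit=%d\n",
               tlbe_matches(kernel_tid, 0), tlbe_matches(user_tid, 0));
        return 0;
    }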
* KVM: ppc: Write only modified shadow entries into the TLB on exit
  Hollis Blanchard, 2008-10-15 (1 file changed, +8/-1)

  Track which TLB entries need to be written, instead of overwriting
  everything below the high water mark. Typically only a single guest TLB
  entry will be modified in a single exit. Guest boot time performance
  improvement: about 15%.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
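One plausible way to track this, sketched with a 64-bit bitmap (names hypothetical): set a bit when a shadow entry changes, then on exit write back only the set bits instead of everything below a high water mark.

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t tlbe_modified;   /* one bit per shadow TLB entry */

    static void mark_tlbe_modified(int i)
    {
        tlbe_modified |= 1ULL << i;
    }

    static void write_modified_tlbes(void)
    {
        while (tlbe_modified) {
            int i = __builtin_ctzll(tlbe_modified);   /* lowest set bit */
            printf("writing shadow entry %d to hardware\n", i);
            tlbe_modified &= tlbe_modified - 1;       /* clear that bit */
        }
    }

    int main(void)
    {
        mark_tlbe_modified(3);   /* typically a single entry per exit */
        write_modified_tlbes();
        return 0;
    }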
* KVM: ppc: adds trace points for ppc tlb activity
  Jerone Young, 2008-10-15 (1 file changed, +14/-1)

  This patch adds trace points to track powerpc TLB activity using the
  KVM_TRACE infrastructure.

  Signed-off-by: Jerone Young <jyoung5@us.ibm.com>
  Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: ppc: fix invalidation of large guest pages
  Hollis Blanchard, 2008-07-27 (1 file changed, +3/-2)

  When the guest invalidates a large TLB mapping, more than one
  corresponding shadow TLB mapping may need to be invalidated. Use eaddr
  and eend to find these shadow TLB mappings.

  Signed-off-by: Liu Yu <yu.liu@freescale.com>
  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
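A sketch of the range test (struct and sizes illustrative; eaddr/eend follow the commit's naming): every shadow mapping whose interval overlaps [eaddr, eend) is dropped, not just the one matching the start address.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct shadow_map {
        uint32_t eaddr;   /* effective start address */
        uint32_t size;
        int valid;
    };

    static void invalidate_range(struct shadow_map *maps, int n,
                                 uint32_t eaddr, uint32_t eend)
    {
        for (int i = 0; i < n; i++) {
            uint32_t start = maps[i].eaddr;
            uint32_t end = start + maps[i].size;

            /* Standard interval-overlap test. */
            if (maps[i].valid && start < eend && end > eaddr) {
                maps[i].valid = 0;
                printf("invalidated shadow map at 0x%08" PRIx32 "\n", start);
            }
        }
    }

    int main(void)
    {
        /* One large guest mapping torn down; two of three shadows fall in it. */
        struct shadow_map maps[] = {
            { 0x10000000, 0x1000, 1 },
            { 0x10080000, 0x1000, 1 },
            { 0x30000000, 0x1000, 1 },   /* outside the range, survives */
        };

        invalidate_range(maps, 3, 0x10000000, 0x20000000);
        return 0;
    }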
* KVM: ppc: Report bad GFNs
  Hollis Blanchard, 2008-06-06 (1 file changed, +1/-1)

  This code shouldn't be hit anyways, but when it is, it's useful to have a
  little more information about the failure.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: ppc: Use a read lock around MMU operations, and release it on error
  Hollis Blanchard, 2008-06-06 (1 file changed, +3/-2)

  gfn_to_page() and kvm_release_page_clean() are called from other contexts
  with mmap_sem locked only for reading.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
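A userspace model of the locking rule, using a pthread rwlock in place of mmap_sem (names illustrative): the page lookup runs under the read lock, and the error path, the part this commit fixed, releases it too.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;

    static int map_guest_page(int lookup_fails)
    {
        pthread_rwlock_rdlock(&mmap_sem);

        if (lookup_fails) {                      /* e.g. the lookup gave a bad page */
            pthread_rwlock_unlock(&mmap_sem);    /* the release the fix added */
            return -1;
        }

        /* ...install the shadow mapping under the read lock... */
        pthread_rwlock_unlock(&mmap_sem);
        return 0;
    }

    int main(void)
    {
        printf("ok path:    %d\n", map_guest_page(0));
        printf("error path: %d\n", map_guest_page(1));
        return 0;
    }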
* KVM: ppc: Remove unmatched kunmap() call
  Hollis Blanchard, 2008-06-06 (1 file changed, +0/-2)

  We're not calling kmap() now, so we shouldn't call kunmap() either. This
  has no practical effect in the non-highmem case, which is why it hasn't
  caused more obvious problems.

  Pointed out by Anthony Liguori.

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
* KVM: ppc: PowerPC 440 KVM implementation
  Hollis Blanchard, 2008-04-27 (1 file changed, +224/-0)

  This functionality is definitely experimental, but is capable of running
  unmodified PowerPC 440 Linux kernels as guests on a PowerPC 440 host.
  (Only tested with 440EP "Bamboo" guests so far, but with appropriate
  userspace support other SoC/board combinations should work.)

  See Documentation/powerpc/kvm_440.txt for technical details.

  [stephen: build fix]

  Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
  Acked-by: Paul Mackerras <paulus@samba.org>
  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Avi Kivity <avi@qumranet.com>
This functionality is definitely experimental, but is capable of running unmodified PowerPC 440 Linux kernels as guests on a PowerPC 440 host. (Only tested with 440EP "Bamboo" guests so far, but with appropriate userspace support other SoC/board combinations should work.) See Documentation/powerpc/kvm_440.txt for technical details. [stephen: build fix] Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Avi Kivity <avi@qumranet.com>