path: root/arch/sh/mm
Commit log for arch/sh/mm, most recent first. Each entry shows the subject (Author, Date; files changed, lines -/+).
* sh: Don't default enable PMB support. (Paul Mundt, 2010-01-04; 1 file, -1/+0)
  This has the adverse effect of converting many 29-bit configs to 32-bit mode, whereas this is a change that needs to be made manually for each platform. Turn it off by default in order to cut down on spurious bug reports.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Disable PMB for SH4AL-DSP CPUs. (Paul Mundt, 2010-01-04; 1 file, -3/+3)
  While the PMB is available on SH-4A parts, SH4AL-DSP parts exclude it altogether. As such, explicitly disable PMB support for these parts. If this changes in the future for newer subtypes, this will have to be made more fine-grained.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Ensure all PG_dcache_dirty pages are written back. (Markus Pietrek, 2009-12-24; 1 file, -6/+2)
  With some of the cache rework an address aliasing optimization was added, but this managed to fail on certain mappings, resulting in pages with PG_dcache_dirty set never writing back their dcache lines. This patch reverts to the earlier behaviour of simply always writing back when the dirty bit is set.
  Signed-off-by: Markus Pietrek <Markus.Pietrek@emtrion.de>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
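  The restored rule is simple: if PG_dcache_dirty is set, write the kernel-side lines back unconditionally rather than trying to prove the alias harmless. A rough kernel-style sketch of that logic (illustrative, not the literal diff; __flush_wback_region() is the sh tree's writeback primitive):

```c
/* Illustrative sketch of the restored flush behaviour, not the literal patch. */
static void flush_dirty_page_sketch(struct page *page)
{
	/*
	 * The reverted optimization skipped this writeback when the user
	 * and kernel mappings appeared to share a cache colour, but the
	 * test misfired on some mappings and dirty lines were never
	 * written back. Restored rule: dirty bit set => always write back.
	 */
	if (test_bit(PG_dcache_dirty, &page->flags)) {
		void *addr = page_address(page);	/* kernel alias */
		__flush_wback_region(addr, PAGE_SIZE);	/* write dcache lines back */
		clear_bit(PG_dcache_dirty, &page->flags);
	}
}
```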
* Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 (Paul Mundt, 2009-12-15; 1 file, -1/+2)
* fix broken aliasing checks for MAP_FIXED on sparc32, mips, arm and sh (Al Viro, 2009-12-11; 1 file, -1/+2)
  We want addr - (pgoff << PAGE_SHIFT) consistently coloured...
  Acked-by: Paul Mundt <lethal@linux-sh.org>
  Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
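  The invariant is purely arithmetic: for a shared mapping, addr and the file offset must land on the same cache colour, i.e. addr - (pgoff << PAGE_SHIFT) must be a multiple of the alias stride. A standalone illustration (the 0x3fff mask is a placeholder; the real shm_align_mask depends on the cache geometry):

```c
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT     12
#define SHM_ALIGN_MASK 0x3fffUL	/* placeholder; real value is per-CPU cache geometry */

/* The MAP_FIXED aliasing check: addr and (pgoff << PAGE_SHIFT) must share
 * a cache colour, i.e. their difference is a multiple of the alias stride. */
static bool colour_consistent(unsigned long addr, unsigned long pgoff)
{
	return ((addr - (pgoff << PAGE_SHIFT)) & SHM_ALIGN_MASK) == 0;
}

int main(void)
{
	printf("%d\n", colour_consistent(0x20004000UL, 1));	/* 0: colours differ */
	printf("%d\n", colour_consistent(0x20001000UL, 1));	/* 1: same colour */
	return 0;
}
```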
* sh: wire up vmallocinfo support in ioremap() implementations. (Paul Mundt, 2009-12-14; 2 files, -8/+8)
  This wires up the caller information for the ioremap VMA, which allows for more helpful caller tracking via /proc/vmallocinfo. Follows the x86 and powerpc changes of the same nature.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: NUMA lmb fixes (Magnus Damm, 2009-12-09; 1 file, -2/+11)
  This patch updates the NUMA version of setup_memory() with UMA code changes and also modifies the last argument to lmb_alloc_base() to use an address instead of pfn.
  Signed-off-by: Magnus Damm <damm@opensource.se>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: fix size calculation for NUMA node 0 (Magnus Damm, 2009-12-09; 1 file, -1/+1)
  Fix the NUMA size calculation for node 0. Do the same as the UMA version of setup_memory() and use address instead of pfn when calculating the size.
  Signed-off-by: Magnus Damm <damm@opensource.se>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Can't compare physical and virtual addresses for aliases (Matt Fleming, 2009-12-09; 1 file, -2/+1)
  It does not make sense to compare virtual and physical addresses for aliasing; only virtual addresses can be compared for aliases.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Drop associative writes for SH-4 cache flushes. (Matt Fleming, 2009-12-04; 1 file, -2/+2)
  When flushing/invalidating the icache/dcache via the memory-mapped IC/OC address arrays, the associative bit should only be used in conjunction with virtual addresses. However, we currently flush cache lines based on physical address, so stop using the associative bit. It is a better strategy to use non-associative writes (and physical tags) for flushing the caches anyway, because flushing by virtual address (as with the A-bit set) requires a valid TLB entry for that virtual address. If one does not exist in the TLB, no exception is generated and the flush is silently ignored. This is also future-proofing for SH-4A parts which are gradually phasing out associative writes to the cache array due to the aforementioned case of certain flushes silently turning into nops.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
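  To make the mechanics concrete: a write to the address array with the A-bit set asks the hardware to look the line up by its (virtual) tag, which needs a live TLB entry, while a write with the A-bit clear indexes a specific line directly. A heavily simplified sketch of the non-associative form (constants and encodings are illustrative; the real layouts are in the CPU manuals):

```c
/* Simplified sketch of a direct (non-associative) icache line invalidate.
 * The address encodings here are illustrative, not the exact SH-4 layout. */
#define CACHE_IC_ADDRESS_ARRAY	0xf0000000UL	/* memory-mapped icache array */
#define A_BIT			0x8UL		/* the associative bit this patch stops using */

static inline void icache_inv_line_direct(unsigned long v, unsigned long entry_mask)
{
	/*
	 * A-bit clear: pick the line by index and clear its valid bit.
	 * No TLB lookup is involved, so unlike an associative (A_BIT set)
	 * write, the flush cannot be silently dropped when no TLB entry
	 * exists for the virtual address.
	 */
	unsigned long addr = CACHE_IC_ADDRESS_ARRAY | (v & entry_mask);
	*(volatile unsigned long *)addr = 0;
}
```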
* sh: Partial revert of copy/clear_user_highpage() optimizations. (Paul Mundt, 2009-12-04; 1 file, -53/+13)
  These still require more testing, so revert them for now. We keep the off-by-1 in the fixmap colouring and drop the rest.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Improve performance of SH4 versions of copy/clear_user_highpage (Stuart Menefy, 2009-11-24; 1 file, -13/+53)
  The previous implementation of clear_user_highpage and copy_user_highpage checked to see if there was a D-cache aliasing issue between the user and kernel mappings of a page, but if there was they always did a flush with writeback on the dirtied kernel alias. However, as we now have the ability to map a page into kernel space with the same cache colour as the user mapping, there is no need to write back this data. Currently we also invalidate the kernel alias as a precaution; however, I'm not sure if this is actually required. Also correct the definition of FIX_CMAP_END so that the mappings created by kmap_coherent() are actually at the correct colour.
  Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
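  The reasoning hinges on colours: if the kernel-side destination shares the user mapping's cache colour, the copy lands in exactly the lines the user will read, so there is no differently-coloured kernel alias to write back. A rough sketch of that shape (kmap_coherent()/kunmap_coherent() are the sh tree's colour-matched kmap helpers; the surrounding logic is illustrative):

```c
/* Illustrative shape of the optimized copy path, not the exact kernel code. */
if (boot_cpu_data.dcache.n_aliases && page_mapcount(page)) {
	/* Colour-matched kernel mapping: writes hit the same cache lines
	 * the user mapping indexes, so no writeback is required. */
	void *vto = kmap_coherent(page, vaddr);
	memcpy(vto, vfrom, PAGE_SIZE);
	kunmap_coherent(vto);
} else {
	/* No alias hazard: copy through the ordinary kernel mapping. */
	memcpy(page_address(page), vfrom, PAGE_SIZE);
}
```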
* sh64: Fix up reworked cache op build. (Paul Mundt, 2009-11-12; 2 files, -2/+6)
  This gets the build fixed up for the sh64 cache enabled case. Disabling still needs further abstraction for independent I/D-cache disabling.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Enable PMB support for all SH-4A CPUs. (Paul Mundt, 2009-11-11; 1 file, -5/+3)
  Presently the PMB options were limited to a number of CPUs they were tested with, but it is generally available on all SH-4A CPUs, so just drop the subtype conditionals.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* Merge branch 'sh/stable-updates' (Paul Mundt, 2009-11-09; 1 file, -1/+4)
* sh: Account for cache aliases in flush_icache_range() (Matt Fleming, 2009-11-09; 1 file, -1/+4)
  The icache may also contain aliases so we must account for them just like we do when manipulating the dcache. We usually get away with aliases in the icache because the instructions that are read from memory are read-only, i.e. they never change. However, the place where this bites us is when the code has been modified.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Make sure indexes are positive (Roel Kluin, 2009-11-04; 1 file, -1/+1)
  The indexes are signed; make sure they are not negative when we read array elements.
  Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
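  The bug class is easy to show in isolation: a signed index that can go negative reads before the start of the array. A minimal standalone illustration of the guarded pattern (hypothetical table, not the patched code):

```c
#include <stdio.h>

static const int table[4] = { 10, 20, 30, 40 };	/* hypothetical array */

static int lookup(int idx)
{
	/* Reject negative (and oversized) indexes before dereferencing:
	 * table[-1] would read out of bounds. */
	if (idx < 0 || idx >= 4)
		return -1;
	return table[idx];
}

int main(void)
{
	printf("%d %d\n", lookup(2), lookup(-1));	/* prints: 30 -1 */
	return 0;
}
```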
* sh: Do not apply virt_to_phys() to a physical address (Matt Fleming, 2009-10-30; 1 file, -2/+1)
  The variable 'phys' already contains the physical address to flush. It is not a virtual address and should not be passed to virt_to_phys().
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* Merge branch 'sh/stable-updates' (Paul Mundt, 2009-10-27; 1 file, -1/+1)
* sh: Fix hugetlbfs dependencies for SH-3 && MMU configurations. (Paul Mundt, 2009-10-27; 1 file, -1/+1)
  The hugetlb dependencies presently depend on SUPERH && MMU while the hugetlb page size definitions depend on CPU_SH4 or CPU_SH5. This unfortunately allows SH-3 + MMU configurations to enable hugetlbfs without a corresponding HPAGE_SHIFT definition, resulting in the build blowing up. As SH-3 doesn't support variable page sizes, we tighten up the dependencies a bit to prevent hugetlbfs from being enabled. These days we also have a shiny new SYS_SUPPORTS_HUGETLBFS, so switch to using that rather than adding to the list of corner cases in fs/Kconfig.
  Reported-by: Kristoffer Ericson <kristoffer.ericson@gmail.com>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Bump up dma_ops initialization far earlier in the boot process. (Paul Mundt, 2009-10-27; 2 files, -2/+11)
  Presently this was tacked on to the dma debug init bits from fs_initcall(), which is far too late for devices setting up their own per-device coherent areas. Throw this in at the beginning of mem_init(), as per the x86 iommu allocation.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh64: cache flush symbol exports. (Paul Mundt, 2009-10-27; 1 file, -0/+6)
  These were previously hidden in sh_ksyms_32, despite also being needed for sh64 now that the cache.c code is shared.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Add dma-mapping support for dma_alloc/free_coherent() overrides. (Paul Mundt, 2009-10-26; 1 file, -17/+5)
  This moves the current dma_alloc/free_coherent() calls to a generic variant and plugs them in for the nommu default. Other variants can override the defaults in the dma mapping ops directly.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Convert to asm-generic/dma-mapping-common.h (Paul Mundt, 2009-10-20; 1 file, -0/+6)
  This converts the old DMA mapping support to the new generic dma-mapping-common.h abstraction.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Support SCHED_MC for SH-X3 multi-cores. (Paul Mundt, 2009-10-16; 1 file, -0/+9)
  This enables SCHED_MC support for SH-X3 multi-cores. Presently this is just a simple wrapper around the possible map, but this allows for tying in support for some of the more exotic NUMA clusters where we can actually do something with the topology.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
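  Going by the description, the topology hook behind this is presumably a thin wrapper; a hedged sketch of what "a simple wrapper around the possible map" could look like (the hook name follows the usual SCHED_MC convention, and the body is an assumption from the text, not the actual diff):

```c
/* Hedged sketch: report every possible CPU as a core sibling for now,
 * leaving room to encode real SH-X3/NUMA topology later. */
const struct cpumask *cpu_coregroup_mask(int cpu)
{
	return cpu_possible_mask;
}
```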
* Merge branch 'sh/stable-updates' (Paul Mundt, 2009-10-16; 2 files, -14/+22)
  Conflicts: arch/sh/mm/cache-sh4.c
* sh: disabled cache handling fix. (Magnus Damm, 2009-10-16; 1 file, -0/+10)
  Add code to handle the cache disabled case. Fixes breakage introduced by 37443ef3f0406e855e169c87ae3f4ffb4b6ff635 ("sh: Migrate SH-4 cacheflush ops to function pointers."). Without this patch, configuring caches off with CONFIG_CACHE_OFF=y makes kfr2r09 and migo-r lock up in fbdev deferred io or early user space.
  Signed-off-by: Magnus Damm <damm@opensource.se>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Fix up single page flushing to use PAGE_SIZE. (Valentin Sitdikov, 2009-10-16; 1 file, -12/+10)
  Presently the SH-4 cache flushing code uses flush_cache_4096() for most of the real flushing work, which breaks down to a fixed 4096 unroll and increment. Not only is this sub-optimal for larger page sizes, it has also uncovered a bug in sh4_flush_dcache_page() when large page sizes are used and we have no cache aliases -- resulting in only a part of the page's D-cache lines being written back.
  Signed-off-by: Valentin Sitdikov <valentin.sitdikov@siemens.com>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
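  With, say, 64k pages and a fixed 4096-byte flush primitive, a full-page flush has to loop over sixteen 4k chunks; the bug was that only part of the page was covered in the no-alias case. A standalone sketch of the corrected iteration (sizes illustrative):

```c
#include <stdio.h>

#define PAGE_SIZE	(64 * 1024)	/* illustrative large page size */
#define FLUSH_CHUNK	4096		/* the fixed unroll in flush_cache_4096() */

static void flush_chunk(unsigned long addr)
{
	printf("flush %#lx..%#lx\n", addr, addr + FLUSH_CHUNK - 1);
}

/* Flush the whole page FLUSH_CHUNK bytes at a time: the fix is bounding
 * the loop by PAGE_SIZE instead of a single fixed 4096-byte pass. */
static void flush_page_sketch(unsigned long page_addr)
{
	unsigned long off;

	for (off = 0; off < PAGE_SIZE; off += FLUSH_CHUNK)
		flush_chunk(page_addr + off);
}

int main(void)
{
	flush_page_sketch(0x10000UL);	/* emits 16 chunk flushes */
	return 0;
}
```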
* Merge branch 'sh/stable-updates' (Paul Mundt, 2009-10-13; 1 file, -1/+1)
* sh: force dcache flush if dcache_dirty bit set. (Paul Mundt, 2009-10-13; 1 file, -1/+1)
  This too follows the ARM change, given that the issue at hand applies to all platforms that implement lazy D-cache writeback. This fixes up the case when a page mapping disappears between the flush_dcache_page() call (when PG_dcache_dirty is set for the page) and the update_mmu_cache() call -- such as in the case of swap cache being freed early. This kills off the mapping test in update_mmu_cache() and switches to simply testing for PG_dcache_dirty.
  Reported-by: Nitin Gupta <ngupta@vflare.org>
  Reported-by: Hugh Dickins <hugh.dickins@tiscali.co.uk>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
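  The change reduces to which condition update_mmu_cache() tests. A kernel-style sketch of the resulting shape (names follow the sh tree; the body is illustrative rather than the exact diff):

```c
/* Sketch: key off PG_dcache_dirty itself rather than page_mapping(),
 * which can already be gone (e.g. swap cache freed early) by the time
 * update_mmu_cache() runs. */
if (pfn_valid(pfn)) {
	struct page *page = pfn_to_page(pfn);

	if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
		__flush_wback_region(page_address(page), PAGE_SIZE);
}
```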
* sh: Fold fixed-PMB support into dynamic PMB support (Matt Fleming, 2009-10-10; 3 files, -52/+61)
  The initialisation process differs for CONFIG_PMB and for CONFIG_PMB_FIXED. For CONFIG_PMB_FIXED we need to register the PMB entries that were allocated by the bootloader.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Fix the offset from P1SEG/P2SEG where we map RAM (Matt Fleming, 2009-10-10; 1 file, -6/+7)
  We need to map the gap between 0x00000000 and __MEMORY_START in the PMB, as well as RAM. With this change my 7785LCR board can switch to 32bit MMU mode at runtime.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Remap physical memory into P1 and P2 in pmb_init() (Matt Fleming, 2009-10-10; 2 files, -40/+16)
  Eventually we'll have complete control over what physical memory gets mapped where and we can probably do other interesting things. For now though, when the MMU is in 32-bit mode, we map physical memory into the P1 and P2 virtual address ranges with the same semantics as they have in 29-bit mode.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Get rid of the kmem cache code (Matt Fleming, 2009-10-10; 1 file, -55/+26)
  Unfortunately, at the point during boot when we want to be setting up the PMB entries, the kmem subsystem hasn't been initialised. We now match pmb_map slots with pmb_entry_list slots. When we find an empty slot in pmb_map, we set the bit, thereby acquiring the corresponding pmb_entry_list entry. There is a benefit in using this static array of struct pmb_entry's; we don't need to acquire any locks in order to traverse the list of struct pmb_entry's.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
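  Because everything here is static, the scheme is easy to model outside the kernel: a bitmap records which slots of a fixed array are live, and acquiring a slot is just finding and setting a clear bit (the real code would do this atomically, e.g. with test_and_set_bit). A standalone model (the entry struct and helper names are illustrative; the hardware PMB has 16 entries):

```c
#include <stdio.h>
#include <string.h>

#define NR_PMB_ENTRIES 16	/* the hardware PMB has 16 slots */

struct pmb_entry {
	unsigned long vpn, ppn, flags;	/* illustrative fields */
};

static unsigned long pmb_map;	/* one bit per slot */
static struct pmb_entry pmb_entry_list[NR_PMB_ENTRIES];

/* Find a clear bit in pmb_map, set it, and hand back the matching
 * pmb_entry_list slot -- the kmem-cache-free scheme from this patch. */
static struct pmb_entry *pmb_alloc_slot(void)
{
	int i;

	for (i = 0; i < NR_PMB_ENTRIES; i++) {
		if (!(pmb_map & (1UL << i))) {
			pmb_map |= 1UL << i;
			return &pmb_entry_list[i];
		}
	}
	return NULL;	/* PMB full: fail at allocation time */
}

static void pmb_free_slot(struct pmb_entry *e)
{
	pmb_map &= ~(1UL << (e - pmb_entry_list));
	memset(e, 0, sizeof(*e));
}

int main(void)
{
	struct pmb_entry *e = pmb_alloc_slot();

	printf("got slot %ld\n", (long)(e - pmb_entry_list));	/* got slot 0 */
	pmb_free_slot(e);
	return 0;
}
```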
* sh: Make most PMB functions static (Matt Fleming, 2009-10-10; 1 file, -9/+8)
  There's no need to export the internal PMB functions for allocating, freeing and modifying PMB entries, etc. This way we can restrict the interface for PMB. Also remove the static from pmb_init() so that we have more freedom in setting up the initial PMB entries and turning on MMU 32bit mode.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: CONFIG_PMB doesn't mean the MMU is in 32bit mode (Matt Fleming, 2009-10-10; 1 file, -2/+0)
  CONFIG_PMB will eventually allow the MMU to be switched between 29-bit and 32-bit mode dynamically at runtime.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Prepare for dynamic PMB support (Matt Fleming, 2009-10-10; 2 files, -3/+11)
  To allow the MMU to be switched between 29bit and 32bit mode at runtime, some constants need to be swapped for functions that return a runtime value.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Obliterate the P1 area macros (Matt Fleming, 2009-10-10; 2 files, -2/+2)
  Replace the use of PHYSADDR() with __pa(). PHYSADDR() is based on the idea that all addresses in P1SEG are untranslated, so we can access an address's physical page as an offset from P1SEG. This doesn't work for CONFIG_PMB/CONFIG_PMB_FIXED because pages in P1SEG and P2SEG are used for PMB mappings and so can be translated to any physical address. Likewise, replace a P1SEGADDR() use with virt_to_phys().
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
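  The distinction is arithmetic: PHYSADDR() merely masks off the segment bits, which is only correct while P1/P2 are fixed offsets of physical memory, whereas __pa()/virt_to_phys() subtract the kernel's actual virtual base. A standalone illustration using the classic 29-bit constants (values for demonstration only):

```c
#include <stdio.h>

#define P1SEG		0x80000000UL	/* classic 29-bit sh: cached identity segment */
#define PAGE_OFFSET	0x80000000UL

/* The removed macro: assumes P1/P2 identity mapping. */
#define PHYSADDR(a)	((unsigned long)(a) & 0x1fffffffUL)
/* __pa(): valid for any address in the kernel's linear map. */
#define __pa(a)		((unsigned long)(a) - PAGE_OFFSET)

int main(void)
{
	unsigned long v = P1SEG + 0x1000;

	/* Identical in legacy 29-bit mode -- but once the PMB can remap
	 * P1/P2 arbitrarily, PHYSADDR()'s masking silently returns the
	 * wrong physical address while __pa() remains well defined. */
	printf("%#lx %#lx\n", PHYSADDR(v), __pa(v));	/* 0x1000 0x1000 */
	return 0;
}
```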
* sh: Allocate PMB entry slot earlier (Matt Fleming, 2009-10-10; 1 file, -41/+39)
  Simplify set_pmb_entry() by removing the possibility of not finding a free slot in the PMB. Instead we now allocate a slot in pmb_alloc() so that if there are no free slots we fail at allocation time, rather than in set_pmb_entry().
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* Merge branch 'sh/cachetlb' (Paul Mundt, 2009-10-10; 3 files, -422/+84)
* sh: Factor in cpu id for selection of cache colour fixmap. (Paul Mundt, 2009-09-09; 1 file, -1/+3)
  In the SMP VIPT case the page copy/clear ops still perform colouring; care needs to be taken that CPUs don't end up stepping on each other, so we give them a bit of room to work with. At the same time, we reduce the worst-case colouring given that these pages are always consumed.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Fix up redundant cache flushing for PAGE_SIZE > 4k. (Paul Mundt, 2009-09-09; 1 file, -1/+1)
  If PAGE_SIZE is presently over 4k, we do a lot of extra flushing given that we purge the cache 4k at a time. Make it explicitly 4k per iteration, rather than iterating for PAGE_SIZE before looping over again.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Rework sh4_flush_cache_page() for coherent kmap mapping. (Paul Mundt, 2009-09-09; 1 file, -27/+48)
  This builds on top of the MIPS r4k code that does roughly the same thing. This permits the use of kmap_coherent() for mapped pages with dirty dcache lines and falls back on kmap_atomic() otherwise. This also fixes up a problem with the alias check and defers to shm_align_mask directly.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Kill off segment-based d-cache flushing on SH-4. (Paul Mundt, 2009-09-09; 1 file, -271/+20)
  This kills off the unrolled segment based flushers on SH-4 and switches over to a generic unrolled approach derived from the writethrough segment flusher.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Kill off broken PHYSADDR() usage in sh4_flush_dcache_page(). (Paul Mundt, 2009-09-09; 1 file, -2/+2)
  PHYSADDR() runs into issues in 32-bit mode when we do not have the legacy P1/P2 areas mapped; as such, we need to use page_to_phys() directly, which also happens to do the right thing in legacy 29-bit mode.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: sh4_flush_cache_mm() optimizations. (Paul Mundt, 2009-09-09; 2 files, -120/+10)
  The i-cache flush in the case of VM_EXEC was added way back when as a sanity measure, and in practice we only care about evicting aliases from the d-cache. As a result, it's possible to drop the i-cache flush completely here. After careful profiling it's also come up that all of the work associated with hunting down aliases and doing ranged flushing ends up generating more overhead than simply blasting away the entire dcache, particularly if there are many mm's that need to be iterated over. As a result of that, just move back to flush_dcache_all() in these cases, which restores the old behaviour, and vastly simplifies the path. Additionally, on platforms without aliases at all, this can simply be nopped out. Presently we have the alias check in the SH-4 specific version, but this is true for all of the platforms, so move the check up to a generic location. This cuts down quite a bit on superfluous cacheop IPIs.
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
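  The final point, hoisting the alias check, is a one-liner in spirit; a hedged sketch of the idea (placement and naming assumed from the description, not the actual diff):

```c
/* Sketch: on parts with no dcache aliases there is nothing to evict,
 * so bail out before doing any work (or issuing cacheop IPIs) at all. */
if (boot_cpu_data.dcache.n_aliases == 0)
	return;

flush_dcache_all();	/* simpler and cheaper than ranged alias hunting */
```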
* sh: Don't allocate smaller sized mappings on every iteration (Matt Fleming, 2009-10-09; 1 file, -0/+7)
  Currently, we've got the less than ideal situation where if we need to allocate a 256MB mapping we'll allocate four entries like so:
    entry 1: 128MB
    entry 2:  64MB
    entry 3:  16MB
    entry 4:  16MB
  This is because as we execute the loop in pmb_remap() we will progressively try mapping the remaining address space with smaller and smaller sizes. This isn't good because the size we use on one iteration may be the perfect size to use on the next iteration, for instance when the initial size is divisible by one of the PMB mapping sizes. With this patch, we now only need two entries in the PMB to map 256MB of address space:
    entry 1: 128MB
    entry 2: 128MB
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
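  The loop behind this is a greedy fit over the PMB's fixed entry sizes, and the fix is not stepping down to a smaller size while the current one still fits. A standalone model of the corrected behaviour (the entry sizes follow the PMB's 512/128/64/16MB set; everything else is illustrative):

```c
#include <stdio.h>

/* PMB entry sizes, largest first. */
static const unsigned long pmb_sizes[] = {
	512UL << 20, 128UL << 20, 64UL << 20, 16UL << 20,
};

/* Greedy fit: stay on a size while it still fits, rather than stepping
 * down one size per iteration as the old pmb_remap() loop did.
 * Assumes size is a multiple of the smallest entry (16MB). */
static void pmb_map_sketch(unsigned long size)
{
	int i = 0;

	while (size) {
		while (pmb_sizes[i] > size)
			i++;	/* current size too big: step down */
		printf("entry: %3lu MB\n", pmb_sizes[i] >> 20);
		size -= pmb_sizes[i];
	}
}

int main(void)
{
	pmb_map_sketch(256UL << 20);	/* two 128MB entries, not 128+64+16+16 */
	return 0;
}
```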
* sh: Try PMB mapping based on physical address, not mapping size (Matt Fleming, 2009-10-09; 1 file, -1/+1)
  We should favour PMB mappings when the physical address cannot be reached with 29 bits.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Plug PMB alloc memory leak (Matt Fleming, 2009-10-09; 1 file, -6/+24)
  If we fail to allocate a PMB entry in pmb_remap() we must remember to clear and free any PMB entries that we may have previously allocated, e.g. if we were allocating a multiple entry mapping.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>
* sh: Sprinkle __uses_jump_to_uncached (Matt Fleming, 2009-10-09; 2 files, -3/+3)
  Fix some callers of jump_to_uncached() and back_to_cached() that were not annotated with __uses_jump_to_uncached.
  Signed-off-by: Matt Fleming <matt@console-pimps.org>
  Signed-off-by: Paul Mundt <lethal@linux-sh.org>