| author | Stephen Rothwell <sfr@canb.auug.org.au> | 2019-06-06 16:11:00 +1000 |
|---|---|---|
| committer | Stephen Rothwell <sfr@canb.auug.org.au> | 2019-06-06 16:11:00 +1000 |
| commit | 16b83fe2f6b33e229ad7231987c18d494f91423f (patch) | |
| tree | c453f28c921db84906794de086758e50806f611f | |
| parent | 804f89c3cb24aeee02aec4acc9201ba159aa52ed (diff) | |
| parent | 88b28c8504ba11bf59c6a460a7f9ca3498530ceb (diff) | |
| download | linux-next-16b83fe2f6b33e229ad7231987c18d494f91423f.tar.gz | |
Merge branch 'akpm-current/current'
272 files changed, 4684 insertions, 2593 deletions
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index a5c845338d6d..110c3d34df71 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -615,8 +615,8 @@ on an IO device and is an example of this type.
 Protections
 -----------

-A cgroup is protected to be allocated upto the configured amount of
-the resource if the usages of all its ancestors are under their
+A cgroup is protected upto the configured amount of the resource
+as long as the usages of all its ancestors are under their
 protected levels. Protections can be hard guarantees or best effort
 soft boundaries. Protections can also be over-committed in which case
 only upto the amount available to the parent is protected among
@@ -1062,7 +1062,10 @@ PAGE_SIZE multiple when read back.
	is within its effective min boundary, the cgroup's memory
	won't be reclaimed under any conditions. If there is no
	unprotected reclaimable memory available, OOM killer
-	is invoked.
+	is invoked. Above the effective min boundary (or
+	effective low boundary if it is higher), pages are reclaimed
+	proportionally to the overage, reducing reclaim pressure for
+	smaller overages.

	Effective min boundary is limited by memory.min values of
	all ancestor cgroups. If there is memory.min overcommitment
@@ -1084,7 +1087,10 @@ PAGE_SIZE multiple when read back.
	Best-effort memory protection. If the memory usage of a
	cgroup is within its effective low boundary, the cgroup's
	memory won't be reclaimed unless memory can be reclaimed
-	from unprotected cgroups.
+	from unprotected cgroups. Above the effective low boundary (or
+	effective min boundary if it is higher), pages are reclaimed
+	proportionally to the overage, reducing reclaim pressure for
+	smaller overages.

	Effective low boundary is limited by memory.low values of
	all ancestor cgroups. If there is memory.low overcommitment
@@ -1146,6 +1152,11 @@ PAGE_SIZE multiple when read back.
	otherwise, a value change in this file generates a file
	modified event.

+	Note that all fields in this file are hierarchical and the
+	file modified event can be generated due to an event down the
+	hierarchy. For for the local events at the cgroup level see
+	memory.events.local.
+
	  low
		The number of times the cgroup is reclaimed due to
		high memory pressure even though its usage is under
@@ -1185,6 +1196,11 @@ PAGE_SIZE multiple when read back.
		The number of processes belonging to this cgroup
		killed by any kind of OOM killer.

+  memory.events.local
+	Similar to memory.events but the fields in the file are local
+	to the cgroup i.e. not hierarchical. The file modified event
+	generated on this file reflects only the local events.
+
   memory.stat
	A read-only flat-keyed file which exists on non-root cgroups.

@@ -2341,8 +2357,10 @@ system performance due to overreclaim, to the point where the feature
 becomes self-defeating.

 The memory.low boundary on the other hand is a top-down allocated
-reserve. A cgroup enjoys reclaim protection when it's within its low,
-which makes delegation of subtrees possible.
+reserve. A cgroup enjoys reclaim protection when it's within its
+effective low, which makes delegation of subtrees possible. It also
+enjoys having reclaim pressure proportional to its overage when
+above its effective low.

 The original high boundary, the hard limit, is defined as a strict
 limit that can not budge, even if the OOM killer has to be called.
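The proportional-reclaim wording added above can be illustrated with a small user-space sketch. This is only a model of the documented semantics, not the kernel's reclaim code; the function and variable names below are made up for illustration and the scaling formula is a simplification.

```c
/*
 * Toy model of the documented behaviour: below the effective
 * protection (memory.min / memory.low) nothing is reclaimed; above
 * it, reclaim pressure scales with the overage instead of jumping
 * straight to full pressure.  Illustrative only -- not kernel code.
 */
#include <stdio.h>

/* assumes a 64-bit unsigned long, so the multiplication cannot overflow here */
static unsigned long scaled_scan(unsigned long scan_target,
				 unsigned long usage,
				 unsigned long protection)
{
	if (usage <= protection)
		return 0;	/* within the protected boundary: skip reclaim */
	/* scale the scan target by the unprotected fraction of current usage */
	return scan_target * (usage - protection) / usage;
}

int main(void)
{
	unsigned long gib = 1UL << 30;

	/* usage 12 GiB vs. memory.low 10 GiB: small overage, light pressure */
	printf("small overage: %lu\n", scaled_scan(600, 12 * gib, 10 * gib));
	/* usage 40 GiB vs. memory.low 10 GiB: large overage, near full pressure */
	printf("large overage: %lu\n", scaled_scan(600, 40 * gib, 10 * gib));
	return 0;
}
```

The in-kernel formula differs in detail, but the shape is the same: the closer usage sits to the protected amount, the smaller the fraction of the normal scan target that gets applied.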
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt index f9d8b2eadee5..f3dca37faeaa 100644 --- a/Documentation/admin-guide/kernel-parameters.txt +++ b/Documentation/admin-guide/kernel-parameters.txt @@ -805,12 +805,10 @@ tracking down these problems. debug_pagealloc= - [KNL] When CONFIG_DEBUG_PAGEALLOC is set, this - parameter enables the feature at boot time. In - default, it is disabled. We can avoid allocating huge - chunk of memory for debug pagealloc if we don't enable - it at boot time and the system will work mostly same - with the kernel built without CONFIG_DEBUG_PAGEALLOC. + [KNL] When CONFIG_DEBUG_PAGEALLOC is set, this parameter + enables the feature at boot time. By default, it is + disabled and the system will work mostly the same as a + kernel built without CONFIG_DEBUG_PAGEALLOC. on: enable the feature debugpat [X86] Enable PAT debugging @@ -2886,6 +2884,17 @@ /sys/module/printk/parameters/console_suspend) to turn on/off it dynamically. + novmcoredd [KNL,KDUMP] + Disable device dump. Device dump allows drivers to + append dump data to vmcore so you can collect driver + specified debug info. Drivers can append the data + without any limit and this data is stored in memory, + so this may cause significant memory stress. Disabling + device dump can help save memory but the driver debug + data will be no longer available. This parameter + is only available when CONFIG_PROC_VMCORE_DEVICE_DUMP + is set. + noaliencache [MM, NUMA, SLAB] Disables the allocation of alien caches in the slab allocator. Saves per-node memory, but will impact performance. diff --git a/Documentation/core-api/genalloc.rst b/Documentation/core-api/genalloc.rst index 6b38a39fab24..a534cc7ebd05 100644 --- a/Documentation/core-api/genalloc.rst +++ b/Documentation/core-api/genalloc.rst @@ -129,7 +129,7 @@ writing of special-purpose memory allocators in the future. :functions: gen_pool_for_each_chunk .. kernel-doc:: lib/genalloc.c - :functions: addr_in_gen_pool + :functions: gen_pool_has_addr .. kernel-doc:: lib/genalloc.c :functions: gen_pool_avail diff --git a/Documentation/devicetree/bindings/usb/s3c2410-usb.txt b/Documentation/devicetree/bindings/usb/s3c2410-usb.txt index e45b38ce2986..26c85afd0b53 100644 --- a/Documentation/devicetree/bindings/usb/s3c2410-usb.txt +++ b/Documentation/devicetree/bindings/usb/s3c2410-usb.txt @@ -4,7 +4,7 @@ OHCI Required properties: - compatible: should be "samsung,s3c2410-ohci" for USB host controller - - reg: address and lenght of the controller memory mapped region + - reg: address and length of the controller memory mapped region - interrupts: interrupt number for the USB OHCI controller - clocks: Should reference the bus and host clocks - clock-names: Should contain two strings diff --git a/Documentation/filesystems/coda.txt b/Documentation/filesystems/coda.txt index 61311356025d..545262c167c3 100644 --- a/Documentation/filesystems/coda.txt +++ b/Documentation/filesystems/coda.txt @@ -481,7 +481,10 @@ kernel support. - + struct coda_timespec { + int64_t tv_sec; /* seconds */ + long tv_nsec; /* nanoseconds */ + }; struct coda_vattr { enum coda_vtype va_type; /* vnode type (for create) */ @@ -493,9 +496,9 @@ kernel support. 
long va_fileid; /* file id */ u_quad_t va_size; /* file size in bytes */ long va_blocksize; /* blocksize preferred for i/o */ - struct timespec va_atime; /* time of last access */ - struct timespec va_mtime; /* time of last modification */ - struct timespec va_ctime; /* time file changed */ + struct coda_timespec va_atime; /* time of last access */ + struct coda_timespec va_mtime; /* time of last modification */ + struct coda_timespec va_ctime; /* time file changed */ u_long va_gen; /* generation number of file */ u_long va_flags; /* flags defined for file */ dev_t va_rdev; /* device special file represents */ diff --git a/arch/alpha/include/asm/pgalloc.h b/arch/alpha/include/asm/pgalloc.h index 02f9f91bb4f0..71ded3b7d82d 100644 --- a/arch/alpha/include/asm/pgalloc.h +++ b/arch/alpha/include/asm/pgalloc.h @@ -5,6 +5,8 @@ #include <linux/mm.h> #include <linux/mmzone.h> +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + /* * Allocate and free page tables. The xxx_kernel() versions are * used to allocate a kernel page table - this turns on ASN bits @@ -41,7 +43,7 @@ pgd_free(struct mm_struct *mm, pgd_t *pgd) static inline pmd_t * pmd_alloc_one(struct mm_struct *mm, unsigned long address) { - pmd_t *ret = (pmd_t *)__get_free_page(GFP_KERNEL|__GFP_ZERO); + pmd_t *ret = (pmd_t *)__get_free_page(GFP_PGTABLE_USER); return ret; } @@ -51,42 +53,6 @@ pmd_free(struct mm_struct *mm, pmd_t *pmd) free_page((unsigned long)pmd); } -static inline pte_t * -pte_alloc_one_kernel(struct mm_struct *mm) -{ - pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_ZERO); - return pte; -} - -static inline void -pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - free_page((unsigned long)pte); -} - -static inline pgtable_t -pte_alloc_one(struct mm_struct *mm) -{ - pte_t *pte = pte_alloc_one_kernel(mm); - struct page *page; - - if (!pte) - return NULL; - page = virt_to_page(pte); - if (!pgtable_page_ctor(page)) { - __free_page(page); - return NULL; - } - return page; -} - -static inline void -pte_free(struct mm_struct *mm, pgtable_t page) -{ - pgtable_page_dtor(page); - __free_page(page); -} - #define check_pgt_cache() do { } while (0) #endif /* _ALPHA_PGALLOC_H */ diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 495e8f12179b..6bfdc78ee627 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1589,9 +1589,6 @@ config ARCH_SPARSEMEM_ENABLE config ARCH_SPARSEMEM_DEFAULT def_bool ARCH_SPARSEMEM_ENABLE -config ARCH_SELECT_MEMORY_MODEL - def_bool ARCH_SPARSEMEM_ENABLE - config HAVE_ARCH_PFN_VALID def_bool ARCH_HAS_HOLES_MEMORYMODEL || !SPARSEMEM diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h index 17ab72f0cc4e..13c5a9d2224b 100644 --- a/arch/arm/include/asm/pgalloc.h +++ b/arch/arm/include/asm/pgalloc.h @@ -57,8 +57,6 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd) extern pgd_t *pgd_alloc(struct mm_struct *mm); extern void pgd_free(struct mm_struct *mm, pgd_t *pgd); -#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO) - static inline void clean_pte_table(pte_t *pte) { clean_dcache_area(pte + PTE_HWTABLE_PTRS, PTE_HWTABLE_SIZE); @@ -80,54 +78,41 @@ static inline void clean_pte_table(pte_t *pte) * | h/w pt 1 | * +------------+ */ + +#define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL +#define __HAVE_ARCH_PTE_ALLOC_ONE +#include <asm-generic/pgalloc.h> + static inline pte_t * pte_alloc_one_kernel(struct mm_struct *mm) { - pte_t *pte; + pte_t *pte = __pte_alloc_one_kernel(mm); - pte = (pte_t *)__get_free_page(PGALLOC_GFP); if (pte) clean_pte_table(pte); 
return pte; } +#ifdef CONFIG_HIGHPTE +#define PGTABLE_HIGHMEM __GFP_HIGHMEM +#else +#define PGTABLE_HIGHMEM 0 +#endif + static inline pgtable_t pte_alloc_one(struct mm_struct *mm) { struct page *pte; -#ifdef CONFIG_HIGHPTE - pte = alloc_pages(PGALLOC_GFP | __GFP_HIGHMEM, 0); -#else - pte = alloc_pages(PGALLOC_GFP, 0); -#endif + pte = __pte_alloc_one(mm, GFP_PGTABLE_USER | PGTABLE_HIGHMEM); if (!pte) return NULL; if (!PageHighMem(pte)) clean_pte_table(page_address(pte)); - if (!pgtable_page_ctor(pte)) { - __free_page(pte); - return NULL; - } return pte; } -/* - * Free one PTE table. - */ -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - if (pte) - free_page((unsigned long)pte); -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t pte) -{ - pgtable_page_dtor(pte); - __free_page(pte); -} - static inline void __pmd_populate(pmd_t *pmdp, phys_addr_t pte, pmdval_t prot) { diff --git a/arch/arm/kernel/efi.c b/arch/arm/kernel/efi.c index 9f43ba012d10..b1f142a01f2f 100644 --- a/arch/arm/kernel/efi.c +++ b/arch/arm/kernel/efi.c @@ -11,8 +11,7 @@ #include <asm/mach/map.h> #include <asm/mmu_context.h> -static int __init set_permissions(pte_t *ptep, pgtable_t token, - unsigned long addr, void *data) +static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data) { efi_memory_desc_t *md = data; pte_t pte = *ptep; diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c index 0a75058c11f3..6c34418c8020 100644 --- a/arch/arm/mm/dma-mapping.c +++ b/arch/arm/mm/dma-mapping.c @@ -496,8 +496,7 @@ void __init dma_contiguous_remap(void) } } -static int __dma_update_pte(pte_t *pte, pgtable_t token, unsigned long addr, - void *data) +static int __dma_update_pte(pte_t *pte, unsigned long addr, void *data) { struct page *page = virt_to_page(addr); pgprot_t prot = *(pgprot_t *)data; @@ -565,7 +564,7 @@ static void *__alloc_from_pool(size_t size, struct page **ret_page) static bool __in_atomic_pool(void *start, size_t size) { - return addr_in_gen_pool(atomic_pool, (unsigned long)start, size); + return gen_pool_has_addr(atomic_pool, (unsigned long)start, size); } static int __free_from_pool(void *start, size_t size) diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index f3ce34113f89..e8e0382efe59 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -732,7 +732,7 @@ static void __init *early_alloc(unsigned long sz) static void *__init late_alloc(unsigned long sz) { - void *ptr = (void *)__get_free_pages(PGALLOC_GFP, get_order(sz)); + void *ptr = (void *)__get_free_pages(GFP_PGTABLE_KERNEL, get_order(sz)); if (!ptr || !pgtable_page_ctor(virt_to_page(ptr))) BUG(); diff --git a/arch/arm/mm/pageattr.c b/arch/arm/mm/pageattr.c index 1403cb4a0c3d..c8b500940e1f 100644 --- a/arch/arm/mm/pageattr.c +++ b/arch/arm/mm/pageattr.c @@ -22,8 +22,7 @@ struct page_change_data { pgprot_t clear_mask; }; -static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr, - void *data) +static int change_page_range(pte_t *ptep, unsigned long addr, void *data) { struct page_change_data *cdata = data; pte_t pte = *ptep; diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index cf5f1dafcf74..c2afcea9b19b 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -24,6 +24,7 @@ config ARM64 select ARCH_HAS_KCOV select ARCH_HAS_KEEPINITRD select ARCH_HAS_MEMBARRIER_SYNC_CORE + select ARCH_HAS_PTE_DEVMAP select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_SETUP_DMA_OPS select ARCH_HAS_SET_MEMORY diff --git a/arch/arm64/include/asm/pgalloc.h 
b/arch/arm64/include/asm/pgalloc.h index dabba4b2c61f..07be4299010b 100644 --- a/arch/arm64/include/asm/pgalloc.h +++ b/arch/arm64/include/asm/pgalloc.h @@ -24,18 +24,23 @@ #include <asm/cacheflush.h> #include <asm/tlbflush.h> +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + #define check_pgt_cache() do { } while (0) -#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO) #define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t)) #if CONFIG_PGTABLE_LEVELS > 2 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr) { + gfp_t gfp = GFP_PGTABLE_USER; struct page *page; - page = alloc_page(PGALLOC_GFP); + if (mm == &init_mm) + gfp = GFP_PGTABLE_KERNEL; + + page = alloc_page(gfp); if (!page) return NULL; if (!pgtable_pmd_page_ctor(page)) { @@ -72,7 +77,7 @@ static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot) static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr) { - return (pud_t *)__get_free_page(PGALLOC_GFP); + return (pud_t *)__get_free_page(GFP_PGTABLE_USER); } static inline void pud_free(struct mm_struct *mm, pud_t *pudp) @@ -100,42 +105,6 @@ static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t pudp, pgdval_t prot) extern pgd_t *pgd_alloc(struct mm_struct *mm); extern void pgd_free(struct mm_struct *mm, pgd_t *pgdp); -static inline pte_t * -pte_alloc_one_kernel(struct mm_struct *mm) -{ - return (pte_t *)__get_free_page(PGALLOC_GFP); -} - -static inline pgtable_t -pte_alloc_one(struct mm_struct *mm) -{ - struct page *pte; - - pte = alloc_pages(PGALLOC_GFP, 0); - if (!pte) - return NULL; - if (!pgtable_page_ctor(pte)) { - __free_page(pte); - return NULL; - } - return pte; -} - -/* - * Free a PTE table. - */ -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *ptep) -{ - if (ptep) - free_page((unsigned long)ptep); -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t pte) -{ - pgtable_page_dtor(pte); - __free_page(pte); -} - static inline void __pmd_populate(pmd_t *pmdp, phys_addr_t ptep, pmdval_t prot) { diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h index 38c7148e2023..979a4165f29e 100644 --- a/arch/arm64/include/asm/pgtable-prot.h +++ b/arch/arm64/include/asm/pgtable-prot.h @@ -27,6 +27,7 @@ #define PTE_WRITE (PTE_DBM) /* same as DBM (51) */ #define PTE_DIRTY (_AT(pteval_t, 1) << 55) #define PTE_SPECIAL (_AT(pteval_t, 1) << 56) +#define PTE_DEVMAP (_AT(pteval_t, 1) << 57) #define PTE_PROT_NONE (_AT(pteval_t, 1) << 58) /* only when !PTE_VALID */ #ifndef __ASSEMBLY__ diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h index 6478dd121228..8c8462fd61ed 100644 --- a/arch/arm64/include/asm/pgtable.h +++ b/arch/arm64/include/asm/pgtable.h @@ -90,6 +90,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]; #define pte_write(pte) (!!(pte_val(pte) & PTE_WRITE)) #define pte_user_exec(pte) (!(pte_val(pte) & PTE_UXN)) #define pte_cont(pte) (!!(pte_val(pte) & PTE_CONT)) +#define pte_devmap(pte) (!!(pte_val(pte) & PTE_DEVMAP)) #define pte_cont_addr_end(addr, end) \ ({ unsigned long __boundary = ((addr) + CONT_PTE_SIZE) & CONT_PTE_MASK; \ @@ -217,6 +218,11 @@ static inline pmd_t pmd_mkcont(pmd_t pmd) return __pmd(pmd_val(pmd) | PMD_SECT_CONT); } +static inline pte_t pte_mkdevmap(pte_t pte) +{ + return set_pte_bit(pte, __pgprot(PTE_DEVMAP)); +} + static inline void set_pte(pte_t *ptep, pte_t pte) { WRITE_ONCE(*ptep, pte); @@ -386,6 +392,11 @@ static inline int pmd_protnone(pmd_t pmd) #define pmd_mkhuge(pmd) 
(__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT)) +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +#define pmd_devmap(pmd) pte_devmap(pmd_pte(pmd)) +#endif +#define pmd_mkdevmap(pmd) pte_pmd(pte_mkdevmap(pmd_pte(pmd))) + #define __pmd_to_phys(pmd) __pte_to_phys(pmd_pte(pmd)) #define __phys_to_pmd_val(phys) __phys_to_pte_val(phys) #define pmd_pfn(pmd) ((__pmd_to_phys(pmd) & PMD_MASK) >> PAGE_SHIFT) @@ -671,6 +682,16 @@ static inline int pmdp_set_access_flags(struct vm_area_struct *vma, { return ptep_set_access_flags(vma, address, (pte_t *)pmdp, pmd_pte(entry), dirty); } + +static inline int pud_devmap(pud_t pud) +{ + return 0; +} + +static inline int pgd_devmap(pgd_t pgd) +{ + return 0; +} #endif /* diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c index 4f9acb5fbe97..230cff073a08 100644 --- a/arch/arm64/kernel/efi.c +++ b/arch/arm64/kernel/efi.c @@ -86,8 +86,7 @@ int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md) return 0; } -static int __init set_permissions(pte_t *ptep, pgtable_t token, - unsigned long addr, void *data) +static int __init set_permissions(pte_t *ptep, unsigned long addr, void *data) { efi_memory_desc_t *md = data; pte_t pte = READ_ONCE(*ptep); diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 69e65b6585e6..403479fe8fac 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -373,7 +373,7 @@ static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys, static phys_addr_t __pgd_pgtable_alloc(int shift) { - void *ptr = (void *)__get_free_page(PGALLOC_GFP); + void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL); BUG_ON(!ptr); /* Ensure the zeroed page is visible to the page table walker */ @@ -1080,4 +1080,21 @@ int arch_add_memory(int nid, u64 start, u64 size, return __add_pages(nid, start >> PAGE_SHIFT, size >> PAGE_SHIFT, restrictions); } +void arch_remove_memory(int nid, u64 start, u64 size, + struct vmem_altmap *altmap) +{ + unsigned long start_pfn = start >> PAGE_SHIFT; + unsigned long nr_pages = size >> PAGE_SHIFT; + struct zone *zone; + + /* + * FIXME: Cleanup page tables (also in arch_add_memory() in case + * adding fails). Until then, this function should only be used + * during memory hotplug (adding memory), not for memory + * unplug. ARCH_ENABLE_MEMORY_HOTREMOVE must not be + * unlocked yet. 
+ */ + zone = page_zone(pfn_to_page(start_pfn)); + __remove_pages(zone, start_pfn, nr_pages, altmap); +} #endif diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c index 6cd645edcf35..0be077628b21 100644 --- a/arch/arm64/mm/pageattr.c +++ b/arch/arm64/mm/pageattr.c @@ -27,8 +27,7 @@ struct page_change_data { bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED); -static int change_page_range(pte_t *ptep, pgtable_t token, unsigned long addr, - void *data) +static int change_page_range(pte_t *ptep, unsigned long addr, void *data) { struct page_change_data *cdata = data; pte_t pte = READ_ONCE(*ptep); diff --git a/arch/arm64/mm/pgd.c b/arch/arm64/mm/pgd.c index 289f9113a27a..769516cb6677 100644 --- a/arch/arm64/mm/pgd.c +++ b/arch/arm64/mm/pgd.c @@ -30,10 +30,15 @@ static struct kmem_cache *pgd_cache __ro_after_init; pgd_t *pgd_alloc(struct mm_struct *mm) { + gfp_t gfp = GFP_PGTABLE_USER; + + if (unlikely(mm == &init_mm)) + gfp = GFP_PGTABLE_KERNEL; + if (PGD_SIZE == PAGE_SIZE) - return (pgd_t *)__get_free_page(PGALLOC_GFP); + return (pgd_t *)__get_free_page(gfp); else - return kmem_cache_alloc(pgd_cache, PGALLOC_GFP); + return kmem_cache_alloc(pgd_cache, gfp); } void pgd_free(struct mm_struct *mm, pgd_t *pgd) diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h index d213bb47b717..98c5716708d6 100644 --- a/arch/csky/include/asm/pgalloc.h +++ b/arch/csky/include/asm/pgalloc.h @@ -8,6 +8,9 @@ #include <linux/mm.h> #include <linux/sched.h> +#define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) { @@ -39,33 +42,6 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) return pte; } -static inline struct page *pte_alloc_one(struct mm_struct *mm) -{ - struct page *pte; - - pte = alloc_pages(GFP_KERNEL | __GFP_ZERO, 0); - if (!pte) - return NULL; - - if (!pgtable_page_ctor(pte)) { - __free_page(pte); - return NULL; - } - - return pte; -} - -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - free_pages((unsigned long)pte, PTE_ORDER); -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t pte) -{ - pgtable_page_dtor(pte); - __free_pages(pte, PTE_ORDER); -} - static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) { free_pages((unsigned long)pgd, PGD_ORDER); diff --git a/arch/hexagon/include/asm/syscall.h b/arch/hexagon/include/asm/syscall.h index dab26a71f577..0f28a6a39c3a 100644 --- a/arch/hexagon/include/asm/syscall.h +++ b/arch/hexagon/include/asm/syscall.h @@ -22,6 +22,8 @@ #define _ASM_HEXAGON_SYSCALL_H #include <uapi/linux/audit.h> +#include <linux/err.h> +#include <asm/ptrace.h> typedef long (*syscall_fn)(unsigned long, unsigned long, unsigned long, unsigned long, @@ -44,6 +46,18 @@ static inline void syscall_get_arguments(struct task_struct *task, memcpy(args, &(®s->r00)[0], 6 * sizeof(args[0])); } +static inline long syscall_get_error(struct task_struct *task, + struct pt_regs *regs) +{ + return IS_ERR_VALUE(regs->r00) ? 
regs->r00 : 0; +} + +static inline long syscall_get_return_value(struct task_struct *task, + struct pt_regs *regs) +{ + return regs->r00; +} + static inline int syscall_get_arch(struct task_struct *task) { return AUDIT_ARCH_HEXAGON; diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c index d28e29103bdb..aae75fd7b810 100644 --- a/arch/ia64/mm/init.c +++ b/arch/ia64/mm/init.c @@ -681,7 +681,6 @@ int arch_add_memory(int nid, u64 start, u64 size, return ret; } -#ifdef CONFIG_MEMORY_HOTREMOVE void arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) { @@ -693,4 +692,3 @@ void arch_remove_memory(int nid, u64 start, u64 size, __remove_pages(zone, start_pfn, nr_pages, altmap); } #endif -#endif diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h index 1456c5eecbd9..1a8ddbd0d23c 100644 --- a/arch/m68k/include/asm/sun3_pgalloc.h +++ b/arch/m68k/include/asm/sun3_pgalloc.h @@ -13,55 +13,18 @@ #include <asm/tlb.h> +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + extern const char bad_pmd_string[]; #define pmd_alloc_one(mm,address) ({ BUG(); ((pmd_t *)2); }) - -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - free_page((unsigned long) pte); -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t page) -{ - pgtable_page_dtor(page); - __free_page(page); -} - #define __pte_free_tlb(tlb,pte,addr) \ do { \ pgtable_page_dtor(pte); \ tlb_remove_page((tlb), pte); \ } while (0) -static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) -{ - unsigned long page = __get_free_page(GFP_KERNEL); - - if (!page) - return NULL; - - memset((void *)page, 0, PAGE_SIZE); - return (pte_t *) (page); -} - -static inline pgtable_t pte_alloc_one(struct mm_struct *mm) -{ - struct page *page = alloc_pages(GFP_KERNEL, 0); - - if (page == NULL) - return NULL; - - clear_highpage(page); - if (!pgtable_page_ctor(page)) { - __free_page(page); - return NULL; - } - return page; - -} - static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) { pmd_val(*pmd) = __pa((unsigned long)pte); diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h index 27808d9461f4..aa16b85ddffc 100644 --- a/arch/mips/include/asm/pgalloc.h +++ b/arch/mips/include/asm/pgalloc.h @@ -13,6 +13,8 @@ #include <linux/mm.h> #include <linux/sched.h> +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) { @@ -50,37 +52,6 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) free_pages((unsigned long)pgd, PGD_ORDER); } -static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) -{ - return (pte_t *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, PTE_ORDER); -} - -static inline struct page *pte_alloc_one(struct mm_struct *mm) -{ - struct page *pte; - - pte = alloc_pages(GFP_KERNEL, PTE_ORDER); - if (!pte) - return NULL; - clear_highpage(pte); - if (!pgtable_page_ctor(pte)) { - __free_page(pte); - return NULL; - } - return pte; -} - -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - free_pages((unsigned long)pte, PTE_ORDER); -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t pte) -{ - pgtable_page_dtor(pte); - __free_pages(pte, PTE_ORDER); -} - #define __pte_free_tlb(tlb,pte,address) \ do { \ pgtable_page_dtor(pte); \ diff --git a/arch/mips/include/asm/syscall.h b/arch/mips/include/asm/syscall.h index acf80ae0a430..83bb439597d8 100644 --- 
a/arch/mips/include/asm/syscall.h +++ b/arch/mips/include/asm/syscall.h @@ -89,6 +89,12 @@ static inline unsigned long mips_get_syscall_arg(unsigned long *arg, unreachable(); } +static inline long syscall_get_error(struct task_struct *task, + struct pt_regs *regs) +{ + return regs->regs[7] ? -regs->regs[2] : 0; +} + static inline long syscall_get_return_value(struct task_struct *task, struct pt_regs *regs) { diff --git a/arch/nds32/include/asm/pgalloc.h b/arch/nds32/include/asm/pgalloc.h index 3cbc749c79aa..e78b43d8389f 100644 --- a/arch/nds32/include/asm/pgalloc.h +++ b/arch/nds32/include/asm/pgalloc.h @@ -9,6 +9,9 @@ #include <asm/tlbflush.h> #include <asm/proc-fns.h> +#define __HAVE_ARCH_PTE_ALLOC_ONE +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + /* * Since we have only two-level page tables, these are trivial */ @@ -22,22 +25,11 @@ extern void pgd_free(struct mm_struct *mm, pgd_t * pgd); #define check_pgt_cache() do { } while (0) -static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) -{ - pte_t *pte; - - pte = - (pte_t *) __get_free_page(GFP_KERNEL | __GFP_RETRY_MAYFAIL | - __GFP_ZERO); - - return pte; -} - static inline pgtable_t pte_alloc_one(struct mm_struct *mm) { pgtable_t pte; - pte = alloc_pages(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_ZERO, 0); + pte = __pte_alloc_one(mm, GFP_PGTABLE_USER); if (pte) cpu_dcache_wb_page((unsigned long)page_address(pte)); @@ -45,21 +37,6 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm) } /* - * Free one PTE table. - */ -static inline void pte_free_kernel(struct mm_struct *mm, pte_t * pte) -{ - if (pte) { - free_page((unsigned long)pte); - } -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t pte) -{ - __free_page(pte); -} - -/* * Populate the pmdp entry with a pointer to the pte. This pmd is part * of the mm address space. * diff --git a/arch/nds32/include/asm/syscall.h b/arch/nds32/include/asm/syscall.h index 899b2fb4b52f..7b5180d78e20 100644 --- a/arch/nds32/include/asm/syscall.h +++ b/arch/nds32/include/asm/syscall.h @@ -26,7 +26,8 @@ struct pt_regs; * * It's only valid to call this when @task is known to be blocked. */ -int syscall_get_nr(struct task_struct *task, struct pt_regs *regs) +static inline int +syscall_get_nr(struct task_struct *task, struct pt_regs *regs) { return regs->syscallno; } @@ -47,7 +48,8 @@ int syscall_get_nr(struct task_struct *task, struct pt_regs *regs) * system call instruction. This may not be the same as what the * register state looked like at system call entry tracing. */ -void syscall_rollback(struct task_struct *task, struct pt_regs *regs) +static inline void +syscall_rollback(struct task_struct *task, struct pt_regs *regs) { regs->uregs[0] = regs->orig_r0; } @@ -62,7 +64,8 @@ void syscall_rollback(struct task_struct *task, struct pt_regs *regs) * It's only valid to call this when @task is stopped for tracing on exit * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT. */ -long syscall_get_error(struct task_struct *task, struct pt_regs *regs) +static inline long +syscall_get_error(struct task_struct *task, struct pt_regs *regs) { unsigned long error = regs->uregs[0]; return IS_ERR_VALUE(error) ? error : 0; @@ -79,7 +82,8 @@ long syscall_get_error(struct task_struct *task, struct pt_regs *regs) * It's only valid to call this when @task is stopped for tracing on exit * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT. 
*/ -long syscall_get_return_value(struct task_struct *task, struct pt_regs *regs) +static inline long +syscall_get_return_value(struct task_struct *task, struct pt_regs *regs) { return regs->uregs[0]; } @@ -99,8 +103,9 @@ long syscall_get_return_value(struct task_struct *task, struct pt_regs *regs) * It's only valid to call this when @task is stopped for tracing on exit * from a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT. */ -void syscall_set_return_value(struct task_struct *task, struct pt_regs *regs, - int error, long val) +static inline void +syscall_set_return_value(struct task_struct *task, struct pt_regs *regs, + int error, long val) { regs->uregs[0] = (long)error ? error : val; } @@ -118,8 +123,9 @@ void syscall_set_return_value(struct task_struct *task, struct pt_regs *regs, * entry to a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT. */ #define SYSCALL_MAX_ARGS 6 -void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs, - unsigned long *args) +static inline void +syscall_get_arguments(struct task_struct *task, struct pt_regs *regs, + unsigned long *args) { args[0] = regs->orig_r0; args++; @@ -138,8 +144,9 @@ void syscall_get_arguments(struct task_struct *task, struct pt_regs *regs, * It's only valid to call this when @task is stopped for tracing on * entry to a system call, due to %TIF_SYSCALL_TRACE or %TIF_SYSCALL_AUDIT. */ -void syscall_set_arguments(struct task_struct *task, struct pt_regs *regs, - const unsigned long *args) +static inline void +syscall_set_arguments(struct task_struct *task, struct pt_regs *regs, + const unsigned long *args) { regs->orig_r0 = args[0]; args++; diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h index 3a149ead1207..4bc8cf72067e 100644 --- a/arch/nios2/include/asm/pgalloc.h +++ b/arch/nios2/include/asm/pgalloc.h @@ -12,6 +12,8 @@ #include <linux/mm.h> +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) { @@ -37,41 +39,6 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) free_pages((unsigned long)pgd, PGD_ORDER); } -static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) -{ - pte_t *pte; - - pte = (pte_t *) __get_free_pages(GFP_KERNEL|__GFP_ZERO, PTE_ORDER); - - return pte; -} - -static inline pgtable_t pte_alloc_one(struct mm_struct *mm) -{ - struct page *pte; - - pte = alloc_pages(GFP_KERNEL, PTE_ORDER); - if (pte) { - if (!pgtable_page_ctor(pte)) { - __free_page(pte); - return NULL; - } - clear_highpage(pte); - } - return pte; -} - -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - free_pages((unsigned long)pte, PTE_ORDER); -} - -static inline void pte_free(struct mm_struct *mm, struct page *pte) -{ - pgtable_page_dtor(pte); - __free_pages(pte, PTE_ORDER); -} - #define __pte_free_tlb(tlb, pte, addr) \ do { \ pgtable_page_dtor(pte); \ diff --git a/arch/parisc/include/asm/pgalloc.h b/arch/parisc/include/asm/pgalloc.h index ea75cc966dae..4f2059a50fae 100644 --- a/arch/parisc/include/asm/pgalloc.h +++ b/arch/parisc/include/asm/pgalloc.h @@ -10,6 +10,8 @@ #include <asm/cache.h> +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + /* Allocate the top level pgd (page directory) * * Here (for 64 bit kernels) we implement a Hybrid L2/L3 scheme: we @@ -122,37 +124,6 @@ pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) pmd_populate_kernel(mm, pmd, page_address(pte_page)) #define pmd_pgtable(pmd) 
pmd_page(pmd) -static inline pgtable_t -pte_alloc_one(struct mm_struct *mm) -{ - struct page *page = alloc_page(GFP_KERNEL|__GFP_ZERO); - if (!page) - return NULL; - if (!pgtable_page_ctor(page)) { - __free_page(page); - return NULL; - } - return page; -} - -static inline pte_t * -pte_alloc_one_kernel(struct mm_struct *mm) -{ - pte_t *pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_ZERO); - return pte; -} - -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - free_page((unsigned long)pte); -} - -static inline void pte_free(struct mm_struct *mm, struct page *pte) -{ - pgtable_page_dtor(pte); - pte_free_kernel(mm, page_address(pte)); -} - #define check_pgt_cache() do { } while (0) #endif diff --git a/arch/parisc/include/asm/syscall.h b/arch/parisc/include/asm/syscall.h index 80757e43cf2c..00b127a5e09b 100644 --- a/arch/parisc/include/asm/syscall.h +++ b/arch/parisc/include/asm/syscall.h @@ -29,6 +29,13 @@ static inline void syscall_get_arguments(struct task_struct *tsk, args[0] = regs->gr[26]; } +static inline long syscall_get_error(struct task_struct *task, + struct pt_regs *regs) +{ + unsigned long error = regs->gr[28]; + return IS_ERR_VALUE(error) ? error : 0; +} + static inline long syscall_get_return_value(struct task_struct *task, struct pt_regs *regs) { diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig index 8c1c636308c8..1120ff8ac715 100644 --- a/arch/powerpc/Kconfig +++ b/arch/powerpc/Kconfig @@ -128,6 +128,7 @@ config PPC select ARCH_HAS_MMIOWB if PPC64 select ARCH_HAS_PHYS_TO_DMA select ARCH_HAS_PMEM_API if PPC64 + select ARCH_HAS_PTE_DEVMAP if PPC_BOOK3S_64 select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_MEMBARRIER_CALLBACKS select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE && PPC64 @@ -135,7 +136,6 @@ config PPC select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST select ARCH_HAS_UACCESS_FLUSHCACHE if PPC64 select ARCH_HAS_UBSAN_SANITIZE_ALL - select ARCH_HAS_ZONE_DEVICE if PPC_BOOK3S_64 select ARCH_HAVE_NMI_SAFE_CMPXCHG select ARCH_KEEP_MEMBLOCK select ARCH_MIGHT_HAVE_PC_PARPORT diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h index 7dede2e34b70..c6c2bdfb369b 100644 --- a/arch/powerpc/include/asm/book3s/64/pgtable.h +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h @@ -90,7 +90,6 @@ #define _PAGE_SOFT_DIRTY _RPAGE_SW3 /* software: software dirty tracking */ #define _PAGE_SPECIAL _RPAGE_SW2 /* software: special page */ #define _PAGE_DEVMAP _RPAGE_SW1 /* software: ZONE_DEVICE page */ -#define __HAVE_ARCH_PTE_DEVMAP /* * Drivers request for cache inhibited pte mapping using _PAGE_NO_CACHE diff --git a/arch/powerpc/include/asm/syscall.h b/arch/powerpc/include/asm/syscall.h index 81abcf6a737b..38d62acfdce7 100644 --- a/arch/powerpc/include/asm/syscall.h +++ b/arch/powerpc/include/asm/syscall.h @@ -35,6 +35,16 @@ static inline void syscall_rollback(struct task_struct *task, regs->gpr[3] = regs->orig_gpr3; } +static inline long syscall_get_error(struct task_struct *task, + struct pt_regs *regs) +{ + /* + * If the system call failed, + * regs->gpr[3] contains a positive ERRORCODE. + */ + return (regs->ccr & 0x10000000UL) ? 
-regs->gpr[3] : 0; +} + static inline long syscall_get_return_value(struct task_struct *task, struct pt_regs *regs) { diff --git a/arch/powerpc/include/uapi/asm/mman.h b/arch/powerpc/include/uapi/asm/mman.h index 65065ce32814..c0c737215b00 100644 --- a/arch/powerpc/include/uapi/asm/mman.h +++ b/arch/powerpc/include/uapi/asm/mman.h @@ -21,15 +21,11 @@ #define MAP_DENYWRITE 0x0800 /* ETXTBSY */ #define MAP_EXECUTABLE 0x1000 /* mark it as an executable */ + #define MCL_CURRENT 0x2000 /* lock all currently mapped pages */ #define MCL_FUTURE 0x4000 /* lock all additions to address space */ #define MCL_ONFAULT 0x8000 /* lock all pages that are faulted in */ -#define MAP_POPULATE 0x8000 /* populate (prefault) pagetables */ -#define MAP_NONBLOCK 0x10000 /* do not block on IO */ -#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */ -#define MAP_HUGETLB 0x40000 /* create a huge page mapping */ - /* Override any generic PKEY permission defines */ #define PKEY_DISABLE_EXECUTE 0x4 #undef PKEY_ACCESS_MASK diff --git a/arch/powerpc/kvm/book3s_64_vio.c b/arch/powerpc/kvm/book3s_64_vio.c index 66270e07449a..768b645c7edf 100644 --- a/arch/powerpc/kvm/book3s_64_vio.c +++ b/arch/powerpc/kvm/book3s_64_vio.c @@ -30,6 +30,7 @@ #include <linux/anon_inodes.h> #include <linux/iommu.h> #include <linux/file.h> +#include <linux/mm.h> #include <asm/kvm_ppc.h> #include <asm/kvm_book3s.h> @@ -56,43 +57,6 @@ static unsigned long kvmppc_stt_pages(unsigned long tce_pages) return tce_pages + ALIGN(stt_bytes, PAGE_SIZE) / PAGE_SIZE; } -static long kvmppc_account_memlimit(unsigned long stt_pages, bool inc) -{ - long ret = 0; - - if (!current || !current->mm) - return ret; /* process exited */ - - down_write(¤t->mm->mmap_sem); - - if (inc) { - unsigned long locked, lock_limit; - - locked = current->mm->locked_vm + stt_pages; - lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT; - if (locked > lock_limit && !capable(CAP_IPC_LOCK)) - ret = -ENOMEM; - else - current->mm->locked_vm += stt_pages; - } else { - if (WARN_ON_ONCE(stt_pages > current->mm->locked_vm)) - stt_pages = current->mm->locked_vm; - - current->mm->locked_vm -= stt_pages; - } - - pr_debug("[%d] RLIMIT_MEMLOCK KVM %c%ld %ld/%ld%s\n", current->pid, - inc ? '+' : '-', - stt_pages << PAGE_SHIFT, - current->mm->locked_vm << PAGE_SHIFT, - rlimit(RLIMIT_MEMLOCK), - ret ? 
" - exceeded" : ""); - - up_write(¤t->mm->mmap_sem); - - return ret; -} - static void kvm_spapr_tce_iommu_table_free(struct rcu_head *head) { struct kvmppc_spapr_tce_iommu_table *stit = container_of(head, @@ -302,7 +266,7 @@ static int kvm_spapr_tce_release(struct inode *inode, struct file *filp) kvm_put_kvm(stt->kvm); - kvmppc_account_memlimit( + account_locked_vm(current->mm, kvmppc_stt_pages(kvmppc_tce_pages(stt->size)), false); call_rcu(&stt->rcu, release_spapr_tce_table); @@ -327,7 +291,7 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm, return -EINVAL; npages = kvmppc_tce_pages(size); - ret = kvmppc_account_memlimit(kvmppc_stt_pages(npages), true); + ret = account_locked_vm(current->mm, kvmppc_stt_pages(npages), true); if (ret) return ret; @@ -373,7 +337,7 @@ long kvm_vm_ioctl_create_spapr_tce(struct kvm *kvm, kfree(stt); fail_acct: - kvmppc_account_memlimit(kvmppc_stt_pages(npages), false); + account_locked_vm(current->mm, kvmppc_stt_pages(npages), false); return ret; } diff --git a/arch/powerpc/kvm/book3s_xics.c b/arch/powerpc/kvm/book3s_xics.c index f27ee57ab46e..7ea9874ee935 100644 --- a/arch/powerpc/kvm/book3s_xics.c +++ b/arch/powerpc/kvm/book3s_xics.c @@ -830,7 +830,7 @@ static noinline int kvmppc_h_eoi(struct kvm_vcpu *vcpu, unsigned long xirr) * * Note: If EOI is incorrectly used by SW to lower the CPPR * value (ie more favored), we do not check for rejection of - * a pending interrupt, this is a SW error and PAPR sepcifies + * a pending interrupt, this is a SW error and PAPR specifies * that we don't have to deal with it. * * The sending of an EOI to the ICS is handled after the diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c index 90ee3a89722c..b056cae3388b 100644 --- a/arch/powerpc/mm/book3s64/iommu_api.c +++ b/arch/powerpc/mm/book3s64/iommu_api.c @@ -14,6 +14,7 @@ #include <linux/hugetlb.h> #include <linux/swap.h> #include <linux/sizes.h> +#include <linux/mm.h> #include <asm/mmu_context.h> #include <asm/pte-walk.h> #include <linux/mm_inline.h> @@ -46,40 +47,6 @@ struct mm_iommu_table_group_mem_t { u64 dev_hpa; /* Device memory base address */ }; -static long mm_iommu_adjust_locked_vm(struct mm_struct *mm, - unsigned long npages, bool incr) -{ - long ret = 0, locked, lock_limit; - - if (!npages) - return 0; - - down_write(&mm->mmap_sem); - - if (incr) { - locked = mm->locked_vm + npages; - lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT; - if (locked > lock_limit && !capable(CAP_IPC_LOCK)) - ret = -ENOMEM; - else - mm->locked_vm += npages; - } else { - if (WARN_ON_ONCE(npages > mm->locked_vm)) - npages = mm->locked_vm; - mm->locked_vm -= npages; - } - - pr_debug("[%d] RLIMIT_MEMLOCK HASH64 %c%ld %ld/%ld\n", - current ? current->pid : 0, - incr ? 
'+' : '-', - npages << PAGE_SHIFT, - mm->locked_vm << PAGE_SHIFT, - rlimit(RLIMIT_MEMLOCK)); - up_write(&mm->mmap_sem); - - return ret; -} - bool mm_iommu_preregistered(struct mm_struct *mm) { return !list_empty(&mm->context.iommu_group_mem_list); @@ -96,7 +63,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua, unsigned long entry, chunk; if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) { - ret = mm_iommu_adjust_locked_vm(mm, entries, true); + ret = account_locked_vm(mm, entries, true); if (ret) return ret; @@ -211,7 +178,7 @@ free_exit: kfree(mem); unlock_exit: - mm_iommu_adjust_locked_vm(mm, locked_entries, false); + account_locked_vm(mm, locked_entries, false); return ret; } @@ -311,7 +278,7 @@ long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem) unlock_exit: mutex_unlock(&mem_list_mutex); - mm_iommu_adjust_locked_vm(mm, unlock_entries, false); + account_locked_vm(mm, unlock_entries, false); return ret; } diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c index cba29131bccc..aa127243cd38 100644 --- a/arch/powerpc/mm/mem.c +++ b/arch/powerpc/mm/mem.c @@ -125,7 +125,6 @@ int __ref arch_add_memory(int nid, u64 start, u64 size, return __add_pages(nid, start_pfn, nr_pages, restrictions); } -#ifdef CONFIG_MEMORY_HOTREMOVE void __ref arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) { @@ -159,7 +158,6 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size, pr_warn("Hash collision while resizing HPT\n"); } #endif -#endif /* CONFIG_MEMORY_HOTPLUG */ #ifndef CONFIG_NEED_MULTIPLE_NODES void __init mem_topology_setup(void) diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index 94043cf83c90..48f28bbec161 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -18,6 +18,8 @@ #include <linux/mm.h> #include <asm/tlb.h> +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) { @@ -82,33 +84,6 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) #endif /* __PAGETABLE_PMD_FOLDED */ -static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) -{ - return (pte_t *)__get_free_page( - GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_ZERO); -} - -static inline struct page *pte_alloc_one(struct mm_struct *mm) -{ - struct page *pte; - - pte = alloc_page(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_ZERO); - if (likely(pte != NULL)) - pgtable_page_ctor(pte); - return pte; -} - -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - free_page((unsigned long)pte); -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t pte) -{ - pgtable_page_dtor(pte); - __free_page(pte); -} - #define __pte_free_tlb(tlb, pte, buf) \ do { \ pgtable_page_dtor(pte); \ diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig index e2e154051b07..067713f7a377 100644 --- a/arch/s390/Kconfig +++ b/arch/s390/Kconfig @@ -661,9 +661,6 @@ config ARCH_SPARSEMEM_ENABLE config ARCH_SPARSEMEM_DEFAULT def_bool y -config ARCH_SELECT_MEMORY_MODEL - def_bool y - config ARCH_ENABLE_MEMORY_HOTPLUG def_bool y if SPARSEMEM diff --git a/arch/s390/appldata/appldata_base.c b/arch/s390/appldata/appldata_base.c index e4b58240ec53..aa738cad1338 100644 --- a/arch/s390/appldata/appldata_base.c +++ b/arch/s390/appldata/appldata_base.c @@ -220,15 +220,13 @@ appldata_timer_handler(struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { int timer_active = 
appldata_timer_active; - int zero = 0; - int one = 1; int rc; struct ctl_table ctl_entry = { .procname = ctl->procname, .data = &timer_active, .maxlen = sizeof(int), - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }; rc = proc_douintvec_minmax(&ctl_entry, write, buffer, lenp, ppos); @@ -255,13 +253,12 @@ appldata_interval_handler(struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos) { int interval = appldata_interval; - int one = 1; int rc; struct ctl_table ctl_entry = { .procname = ctl->procname, .data = &interval, .maxlen = sizeof(int), - .extra1 = &one, + .extra1 = SYSCTL_ONE, }; rc = proc_dointvec_minmax(&ctl_entry, write, buffer, lenp, ppos); @@ -289,13 +286,11 @@ appldata_generic_handler(struct ctl_table *ctl, int write, struct list_head *lh; int rc, found; int active; - int zero = 0; - int one = 1; struct ctl_table ctl_entry = { .data = &active, .maxlen = sizeof(int), - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }; found = 0; diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c index 8964a3f60aad..2db6fb405a9a 100644 --- a/arch/s390/kernel/topology.c +++ b/arch/s390/kernel/topology.c @@ -587,15 +587,13 @@ static int topology_ctl_handler(struct ctl_table *ctl, int write, { int enabled = topology_is_enabled(); int new_mode; - int zero = 0; - int one = 1; int rc; struct ctl_table ctl_entry = { .procname = ctl->procname, .data = &enabled, .maxlen = sizeof(int), - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }; rc = proc_douintvec_minmax(&ctl_entry, write, buffer, lenp, ppos); diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c index 14d1eae9fe43..ffb81fe95c77 100644 --- a/arch/s390/mm/init.c +++ b/arch/s390/mm/init.c @@ -226,6 +226,9 @@ int arch_add_memory(int nid, u64 start, u64 size, unsigned long size_pages = PFN_DOWN(size); int rc; + if (WARN_ON_ONCE(restrictions->altmap)) + return -EINVAL; + rc = vmem_add_mapping(start, size); if (rc) return rc; @@ -236,16 +239,15 @@ int arch_add_memory(int nid, u64 start, u64 size, return rc; } -#ifdef CONFIG_MEMORY_HOTREMOVE void arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) { - /* - * There is no hardware or firmware interface which could trigger a - * hot memory remove on s390. So there is nothing that needs to be - * implemented. 
- */ - BUG(); + unsigned long start_pfn = start >> PAGE_SHIFT; + unsigned long nr_pages = size >> PAGE_SHIFT; + struct zone *zone; + + zone = page_zone(pfn_to_page(start_pfn)); + __remove_pages(zone, start_pfn, nr_pages, altmap); + vmem_remove_mapping(start, size); } -#endif #endif /* CONFIG_MEMORY_HOTPLUG */ diff --git a/arch/sh/configs/hp6xx_defconfig b/arch/sh/configs/hp6xx_defconfig index 4dcf7f552582..91d43e2bffea 100644 --- a/arch/sh/configs/hp6xx_defconfig +++ b/arch/sh/configs/hp6xx_defconfig @@ -40,7 +40,6 @@ CONFIG_FB=y CONFIG_FIRMWARE_EDID=y CONFIG_FB_HIT=y CONFIG_FB_SH_MOBILE_LCDC=y -CONFIG_BACKLIGHT_LCD_SUPPORT=y CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_FONTS=y CONFIG_FONT_PEARL_8x8=y diff --git a/arch/sh/configs/sdk7786_defconfig b/arch/sh/configs/sdk7786_defconfig index 5209889765ad..49a29338789b 100644 --- a/arch/sh/configs/sdk7786_defconfig +++ b/arch/sh/configs/sdk7786_defconfig @@ -191,7 +191,6 @@ CONFIG_CONFIGFS_FS=y CONFIG_JFFS2_FS=m CONFIG_JFFS2_FS_XATTR=y CONFIG_UBIFS_FS=m -CONFIG_LOGFS=m CONFIG_CRAMFS=m CONFIG_SQUASHFS=m CONFIG_ROMFS_FS=m diff --git a/arch/sh/configs/sh2007_defconfig b/arch/sh/configs/sh2007_defconfig index a1cf6447dbb1..cbd6742eb423 100644 --- a/arch/sh/configs/sh2007_defconfig +++ b/arch/sh/configs/sh2007_defconfig @@ -85,7 +85,6 @@ CONFIG_WATCHDOG=y CONFIG_SH_WDT=y CONFIG_SSB=y CONFIG_FB=y -CONFIG_BACKLIGHT_LCD_SUPPORT=y # CONFIG_LCD_CLASS_DEVICE is not set CONFIG_FRAMEBUFFER_CONSOLE=y CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c index 13c6a6bb5fd9..dfdbaa50946e 100644 --- a/arch/sh/mm/init.c +++ b/arch/sh/mm/init.c @@ -429,7 +429,6 @@ int memory_add_physaddr_to_nid(u64 addr) EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid); #endif -#ifdef CONFIG_MEMORY_HOTREMOVE void arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) { @@ -440,5 +439,4 @@ void arch_remove_memory(int nid, u64 start, u64 size, zone = page_zone(pfn_to_page(start_pfn)); __remove_pages(zone, start_pfn, nr_pages, altmap); } -#endif #endif /* CONFIG_MEMORY_HOTPLUG */ diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig index 26ab6f5bbaaf..8fc95c7809ef 100644 --- a/arch/sparc/Kconfig +++ b/arch/sparc/Kconfig @@ -300,9 +300,6 @@ config NODES_SPAN_OTHER_NODES def_bool y depends on NEED_MULTIPLE_NODES -config ARCH_SELECT_MEMORY_MODEL - def_bool y if SPARC64 - config ARCH_SPARSEMEM_ENABLE def_bool y if SPARC64 select SPARSEMEM_VMEMMAP_ENABLE diff --git a/arch/sparc/include/uapi/asm/mman.h b/arch/sparc/include/uapi/asm/mman.h index f6f99ec65bb3..cec9f4109687 100644 --- a/arch/sparc/include/uapi/asm/mman.h +++ b/arch/sparc/include/uapi/asm/mman.h @@ -22,10 +22,4 @@ #define MCL_FUTURE 0x4000 /* lock all additions to address space */ #define MCL_ONFAULT 0x8000 /* lock all pages that are faulted in */ -#define MAP_POPULATE 0x8000 /* populate (prefault) pagetables */ -#define MAP_NONBLOCK 0x10000 /* do not block on IO */ -#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */ -#define MAP_HUGETLB 0x40000 /* create a huge page mapping */ - - #endif /* _UAPI__SPARC_MMAN_H__ */ diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h index 99eb5682792a..d7b282e9c4d5 100644 --- a/arch/um/include/asm/pgalloc.h +++ b/arch/um/include/asm/pgalloc.h @@ -10,6 +10,8 @@ #include <linux/mm.h> +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + #define pmd_populate_kernel(mm, pmd, pte) \ set_pmd(pmd, __pmd(_PAGE_TABLE + (unsigned long) __pa(pte))) @@ -25,20 +27,6 @@ 
extern pgd_t *pgd_alloc(struct mm_struct *); extern void pgd_free(struct mm_struct *mm, pgd_t *pgd); -extern pte_t *pte_alloc_one_kernel(struct mm_struct *); -extern pgtable_t pte_alloc_one(struct mm_struct *); - -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - free_page((unsigned long) pte); -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t pte) -{ - pgtable_page_dtor(pte); - __free_page(pte); -} - #define __pte_free_tlb(tlb,pte, address) \ do { \ pgtable_page_dtor(pte); \ diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c index a9c9a94c096f..de58e976b9bc 100644 --- a/arch/um/kernel/mem.c +++ b/arch/um/kernel/mem.c @@ -208,28 +208,6 @@ void pgd_free(struct mm_struct *mm, pgd_t *pgd) free_page((unsigned long) pgd); } -pte_t *pte_alloc_one_kernel(struct mm_struct *mm) -{ - pte_t *pte; - - pte = (pte_t *)__get_free_page(GFP_KERNEL|__GFP_ZERO); - return pte; -} - -pgtable_t pte_alloc_one(struct mm_struct *mm) -{ - struct page *pte; - - pte = alloc_page(GFP_KERNEL|__GFP_ZERO); - if (!pte) - return NULL; - if (!pgtable_page_ctor(pte)) { - __free_page(pte); - return NULL; - } - return pte; -} - #ifdef CONFIG_3_LEVEL_PGTABLES pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) { diff --git a/arch/unicore32/include/asm/pgalloc.h b/arch/unicore32/include/asm/pgalloc.h index 7cceabecf4e3..dd09af6581e5 100644 --- a/arch/unicore32/include/asm/pgalloc.h +++ b/arch/unicore32/include/asm/pgalloc.h @@ -17,6 +17,10 @@ #include <asm/cacheflush.h> #include <asm/tlbflush.h> +#define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL +#define __HAVE_ARCH_PTE_ALLOC_ONE +#include <asm-generic/pgalloc.h> + #define check_pgt_cache() do { } while (0) #define _PAGE_USER_TABLE (PMD_TYPE_TABLE | PMD_PRESENT) @@ -28,17 +32,14 @@ extern void free_pgd_slow(struct mm_struct *mm, pgd_t *pgd); #define pgd_alloc(mm) get_pgd_slow(mm) #define pgd_free(mm, pgd) free_pgd_slow(mm, pgd) -#define PGALLOC_GFP (GFP_KERNEL | __GFP_ZERO) - /* * Allocate one PTE table. */ static inline pte_t * pte_alloc_one_kernel(struct mm_struct *mm) { - pte_t *pte; + pte_t *pte = __pte_alloc_one_kernel(mm); - pte = (pte_t *)__get_free_page(PGALLOC_GFP); if (pte) clean_dcache_area(pte, PTRS_PER_PTE * sizeof(pte_t)); @@ -50,35 +51,14 @@ pte_alloc_one(struct mm_struct *mm) { struct page *pte; - pte = alloc_pages(PGALLOC_GFP, 0); + pte = __pte_alloc_one(mm, GFP_PGTABLE_USER); if (!pte) return NULL; - if (!PageHighMem(pte)) { - void *page = page_address(pte); - clean_dcache_area(page, PTRS_PER_PTE * sizeof(pte_t)); - } - if (!pgtable_page_ctor(pte)) { - __free_page(pte); - } - + if (!PageHighMem(pte)) + clean_pte_table(page_address(pte)); return pte; } -/* - * Free one PTE table. - */ -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - if (pte) - free_page((unsigned long)pte); -} - -static inline void pte_free(struct mm_struct *mm, pgtable_t pte) -{ - pgtable_page_dtor(pte); - __free_page(pte); -} - static inline void __pmd_populate(pmd_t *pmdp, unsigned long pmdval) { set_pmd(pmdp, __pmd(pmdval)); diff --git a/arch/unicore32/include/mach/regs-gpio.h b/arch/unicore32/include/mach/regs-gpio.h index 806350e1ccb6..5fc701ee33e3 100644 --- a/arch/unicore32/include/mach/regs-gpio.h +++ b/arch/unicore32/include/mach/regs-gpio.h @@ -32,7 +32,7 @@ */ #define GPIO_GEDR (PKUNITY_GPIO_BASE + 0x0018) /* - * Sepcial Voltage Detect Reg GPIO_GPIR. + * Special Voltage Detect Reg GPIO_GPIR. 
*/ #define GPIO_GPIR (PKUNITY_GPIO_BASE + 0x0020) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index cb1c073b3c7e..23de3b9da480 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -69,6 +69,7 @@ config X86 select ARCH_HAS_KCOV if X86_64 select ARCH_HAS_MEMBARRIER_SYNC_CORE select ARCH_HAS_PMEM_API if X86_64 + select ARCH_HAS_PTE_DEVMAP if X86_64 select ARCH_HAS_PTE_SPECIAL select ARCH_HAS_REFCOUNT select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64 @@ -79,7 +80,6 @@ config X86 select ARCH_HAS_STRICT_MODULE_RWX select ARCH_HAS_SYNC_CORE_BEFORE_USERMODE select ARCH_HAS_UBSAN_SANITIZE_ALL - select ARCH_HAS_ZONE_DEVICE if X86_64 select ARCH_HAVE_NMI_SAFE_CMPXCHG select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI select ARCH_MIGHT_HAVE_PC_PARPORT diff --git a/arch/x86/entry/vdso/vdso32-setup.c b/arch/x86/entry/vdso/vdso32-setup.c index 42d4c89f990e..240626e7f55a 100644 --- a/arch/x86/entry/vdso/vdso32-setup.c +++ b/arch/x86/entry/vdso/vdso32-setup.c @@ -65,9 +65,6 @@ subsys_initcall(sysenter_setup); /* Register vsyscall32 into the ABI table */ #include <linux/sysctl.h> -static const int zero; -static const int one = 1; - static struct ctl_table abi_table2[] = { { .procname = "vsyscall32", @@ -75,8 +72,8 @@ static struct ctl_table abi_table2[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (int *)&zero, - .extra2 = (int *)&one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, {} }; diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h index a281e61ec60c..29aa7859bdee 100644 --- a/arch/x86/include/asm/pgalloc.h +++ b/arch/x86/include/asm/pgalloc.h @@ -6,6 +6,9 @@ #include <linux/mm.h> /* for struct page */ #include <linux/pagemap.h> +#define __HAVE_ARCH_PTE_ALLOC_ONE +#include <asm-generic/pgalloc.h> /* for pte_{alloc,free}_one */ + static inline int __paravirt_pgd_alloc(struct mm_struct *mm) { return 0; } #ifdef CONFIG_PARAVIRT_XXL @@ -47,24 +50,8 @@ extern gfp_t __userpte_alloc_gfp; extern pgd_t *pgd_alloc(struct mm_struct *); extern void pgd_free(struct mm_struct *mm, pgd_t *pgd); -extern pte_t *pte_alloc_one_kernel(struct mm_struct *); extern pgtable_t pte_alloc_one(struct mm_struct *); -/* Should really implement gc for free page table pages. This could be - done with a reference count in struct page. 
*/ - -static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) -{ - BUG_ON((unsigned long)pte & (PAGE_SIZE-1)); - free_page((unsigned long)pte); -} - -static inline void pte_free(struct mm_struct *mm, struct page *pte) -{ - pgtable_page_dtor(pte); - __free_page(pte); -} - extern void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte); static inline void __pte_free_tlb(struct mmu_gather *tlb, struct page *pte, diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h index 5e0509b41986..0bc530c4eb13 100644 --- a/arch/x86/include/asm/pgtable.h +++ b/arch/x86/include/asm/pgtable.h @@ -271,7 +271,7 @@ static inline int has_transparent_hugepage(void) return boot_cpu_has(X86_FEATURE_PSE); } -#ifdef __HAVE_ARCH_PTE_DEVMAP +#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP static inline int pmd_devmap(pmd_t pmd) { return !!(pmd_val(pmd) & _PAGE_DEVMAP); @@ -732,7 +732,7 @@ static inline int pte_present(pte_t a) return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE); } -#ifdef __HAVE_ARCH_PTE_DEVMAP +#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP static inline int pte_devmap(pte_t a) { return (pte_flags(a) & _PAGE_DEVMAP) == _PAGE_DEVMAP; diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index d6ff0bbdb394..b5e49e6bac63 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -103,7 +103,6 @@ #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE) #define _PAGE_NX (_AT(pteval_t, 1) << _PAGE_BIT_NX) #define _PAGE_DEVMAP (_AT(u64, 1) << _PAGE_BIT_DEVMAP) -#define __HAVE_ARCH_PTE_DEVMAP #else #define _PAGE_NX (_AT(pteval_t, 0)) #define _PAGE_DEVMAP (_AT(pteval_t, 0)) diff --git a/arch/x86/kernel/itmt.c b/arch/x86/kernel/itmt.c index d177940aa090..12dc30924bf4 100644 --- a/arch/x86/kernel/itmt.c +++ b/arch/x86/kernel/itmt.c @@ -69,8 +69,6 @@ static int sched_itmt_update_handler(struct ctl_table *table, int write, return ret; } -static unsigned int zero; -static unsigned int one = 1; static struct ctl_table itmt_kern_table[] = { { .procname = "sched_itmt_enabled", @@ -78,8 +76,8 @@ static struct ctl_table itmt_kern_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = sched_itmt_update_handler, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, {} }; diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c index f265a4316179..4068abb9427f 100644 --- a/arch/x86/mm/init_32.c +++ b/arch/x86/mm/init_32.c @@ -860,7 +860,6 @@ int arch_add_memory(int nid, u64 start, u64 size, return __add_pages(nid, start_pfn, nr_pages, restrictions); } -#ifdef CONFIG_MEMORY_HOTREMOVE void arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap) { @@ -872,7 +871,6 @@ void arch_remove_memory(int nid, u64 start, u64 size, __remove_pages(zone, start_pfn, nr_pages, altmap); } #endif -#endif int kernel_set_to_readonly __read_mostly; diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 693aaf28d5fe..8335ac6e1112 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1196,7 +1196,6 @@ void __ref vmemmap_free(unsigned long start, unsigned long end, remove_pagetable(start, end, false, altmap); } -#ifdef CONFIG_MEMORY_HOTREMOVE static void __meminit kernel_physical_mapping_remove(unsigned long start, unsigned long end) { @@ -1221,7 +1220,6 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size, __remove_pages(zone, start_pfn, nr_pages, altmap); kernel_physical_mapping_remove(start, start + size); } -#endif #endif /* CONFIG_MEMORY_HOTPLUG */ 
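The pte_alloc_one() conversions above (um, unicore32, x86) all follow the same shape: the architecture opts out of the generic allocators it needs to customise via __HAVE_ARCH_PTE_ALLOC_ONE*, includes <asm-generic/pgalloc.h>, and then wraps __pte_alloc_one_kernel()/__pte_alloc_one() with whatever arch-specific work remains. A minimal sketch of that pattern, not taken from any single architecture; clean_dcache_area() stands in for a port's cache-maintenance helper and is assumed to exist:

        #define __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL
        #define __HAVE_ARCH_PTE_ALLOC_ONE
        #include <asm-generic/pgalloc.h>

        static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm)
        {
                /* Generic helper: zeroed page allocated with GFP_PGTABLE_KERNEL. */
                pte_t *pte = __pte_alloc_one_kernel(mm);

                if (pte)
                        clean_dcache_area(pte, PTRS_PER_PTE * sizeof(pte_t));
                return pte;
        }

        static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
        {
                /* Generic helper: runs pgtable_page_ctor() and uses the
                 * memcg-accounted GFP_PGTABLE_USER mask. */
                struct page *pte = __pte_alloc_one(mm, GFP_PGTABLE_USER);

                if (pte && !PageHighMem(pte))
                        clean_dcache_area(page_address(pte), PTRS_PER_PTE * sizeof(pte_t));
                return pte;
        }

pte_free_kernel() and pte_free() then come from the generic header unchanged, which is why their open-coded copies are deleted in the hunks above.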
static struct kcore_list kcore_vsyscall; diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c index e6dad600614c..588876ef452a 100644 --- a/arch/x86/mm/numa.c +++ b/arch/x86/mm/numa.c @@ -213,8 +213,6 @@ static void __init alloc_node_data(int nid) node_data[nid] = nd; memset(NODE_DATA(nid), 0, sizeof(pg_data_t)); - - node_set_online(nid); } /** @@ -567,7 +565,7 @@ static int __init numa_register_memblks(struct numa_meminfo *mi) return -EINVAL; /* Finally register nodes. */ - for_each_node_mask(nid, node_possible_map) { + for_each_node_mask(nid, numa_nodes_parsed) { u64 start = PFN_PHYS(max_pfn); u64 end = 0; @@ -578,9 +576,6 @@ static int __init numa_register_memblks(struct numa_meminfo *mi) end = max(mi->blk[i].end, end); } - if (start >= end) - continue; - /* * Don't confuse VM with a node that doesn't have the * minimum amount of memory: @@ -589,6 +584,8 @@ static int __init numa_register_memblks(struct numa_meminfo *mi) continue; alloc_node_data(nid); + if (end) + node_set_online(nid); } /* Dump memblock with node info and return. */ @@ -718,21 +715,6 @@ void __init x86_numa_init(void) numa_init(dummy_numa_init); } -static void __init init_memory_less_node(int nid) -{ - unsigned long zones_size[MAX_NR_ZONES] = {0}; - unsigned long zholes_size[MAX_NR_ZONES] = {0}; - - /* Allocate and initialize node data. Memory-less node is now online.*/ - alloc_node_data(nid); - free_area_init_node(nid, zones_size, 0, zholes_size); - - /* - * All zonelists will be built later in start_kernel() after per cpu - * areas are initialized. - */ -} - /* * Setup early cpu_to_node. * @@ -760,9 +742,6 @@ void __init init_cpu_to_node(void) if (node == NUMA_NO_NODE) continue; - if (!node_online(node)) - init_memory_less_node(node); - numa_set_node(cpu, node); } } diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 1f67b1e15bf6..44816ff6411f 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -13,33 +13,17 @@ phys_addr_t physical_mask __ro_after_init = (1ULL << __PHYSICAL_MASK_SHIFT) - 1; EXPORT_SYMBOL(physical_mask); #endif -#define PGALLOC_GFP (GFP_KERNEL_ACCOUNT | __GFP_ZERO) - #ifdef CONFIG_HIGHPTE -#define PGALLOC_USER_GFP __GFP_HIGHMEM +#define PGTABLE_HIGHMEM __GFP_HIGHMEM #else -#define PGALLOC_USER_GFP 0 +#define PGTABLE_HIGHMEM 0 #endif -gfp_t __userpte_alloc_gfp = PGALLOC_GFP | PGALLOC_USER_GFP; - -pte_t *pte_alloc_one_kernel(struct mm_struct *mm) -{ - return (pte_t *)__get_free_page(PGALLOC_GFP & ~__GFP_ACCOUNT); -} +gfp_t __userpte_alloc_gfp = GFP_PGTABLE_USER | PGTABLE_HIGHMEM; pgtable_t pte_alloc_one(struct mm_struct *mm) { - struct page *pte; - - pte = alloc_pages(__userpte_alloc_gfp, 0); - if (!pte) - return NULL; - if (!pgtable_page_ctor(pte)) { - __free_page(pte); - return NULL; - } - return pte; + return __pte_alloc_one(mm, __userpte_alloc_gfp); } static int __init setup_userpte(char *arg) @@ -235,7 +219,7 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) { int i; bool failed = false; - gfp_t gfp = PGALLOC_GFP; + gfp_t gfp = GFP_PGTABLE_USER; if (mm == &init_mm) gfp &= ~__GFP_ACCOUNT; @@ -399,14 +383,14 @@ static inline pgd_t *_pgd_alloc(void) * We allocate one page for pgd. */ if (!SHARED_KERNEL_PMD) - return (pgd_t *)__get_free_pages(PGALLOC_GFP, + return (pgd_t *)__get_free_pages(GFP_PGTABLE_USER, PGD_ALLOCATION_ORDER); /* * Now PAE kernel is not running as a Xen domain. We can allocate * a 32-byte slab for pgd to save memory space. 
*/ - return kmem_cache_alloc(pgd_cache, PGALLOC_GFP); + return kmem_cache_alloc(pgd_cache, GFP_PGTABLE_USER); } static inline void _pgd_free(pgd_t *pgd) @@ -424,7 +408,8 @@ void __init pgd_cache_init(void) static inline pgd_t *_pgd_alloc(void) { - return (pgd_t *)__get_free_pages(PGALLOC_GFP, PGD_ALLOCATION_ORDER); + return (pgd_t *)__get_free_pages(GFP_PGTABLE_USER, + PGD_ALLOCATION_ORDER); } static inline void _pgd_free(pgd_t *pgd) diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c index beb44e22afdf..f6e5eeecfc69 100644 --- a/arch/x86/xen/mmu_pv.c +++ b/arch/x86/xen/mmu_pv.c @@ -2700,8 +2700,7 @@ struct remap_data { struct mmu_update *mmu_update; }; -static int remap_area_pfn_pte_fn(pte_t *ptep, pgtable_t token, - unsigned long addr, void *data) +static int remap_area_pfn_pte_fn(pte_t *ptep, unsigned long addr, void *data) { struct remap_data *rmd = data; pte_t pte = pte_mkspecial(mfn_pte(*rmd->pfn, rmd->prot)); diff --git a/crypto/zstd.c b/crypto/zstd.c index f1e4c70c9d24..244845c93618 100644 --- a/crypto/zstd.c +++ b/crypto/zstd.c @@ -24,6 +24,8 @@ struct zstd_ctx { void *dwksp; }; +static ZSTD_parameters params; + static ZSTD_parameters zstd_params(void) { return ZSTD_getParams(ZSTD_DEF_LEVEL, 0, 0); @@ -32,8 +34,10 @@ static ZSTD_parameters zstd_params(void) static int zstd_comp_init(struct zstd_ctx *ctx) { int ret = 0; - const ZSTD_parameters params = zstd_params(); - const size_t wksp_size = ZSTD_CCtxWorkspaceBound(params.cParams); + size_t wksp_size; + + params = zstd_params(); + wksp_size = ZSTD_CCtxWorkspaceBound(params.cParams); ctx->cwksp = vzalloc(wksp_size); if (!ctx->cwksp) { @@ -152,9 +156,8 @@ static int __zstd_compress(const u8 *src, unsigned int slen, { size_t out_len; struct zstd_ctx *zctx = ctx; - const ZSTD_parameters params = zstd_params(); - out_len = ZSTD_compressCCtx(zctx->cctx, dst, *dlen, src, slen, params); + out_len = ZSTD_compressCCtx(zctx->cctx, dst, *dlen, src, slen, ¶ms); if (ZSTD_isError(out_len)) return -EINVAL; *dlen = out_len; diff --git a/drivers/base/devres.c b/drivers/base/devres.c index e038e2b3b7ea..0bbb328bd17f 100644 --- a/drivers/base/devres.c +++ b/drivers/base/devres.c @@ -755,10 +755,32 @@ void devm_remove_action(struct device *dev, void (*action)(void *), void *data) WARN_ON(devres_destroy(dev, devm_action_release, devm_action_match, &devres)); - } EXPORT_SYMBOL_GPL(devm_remove_action); +/** + * devm_release_action() - release previously added custom action + * @dev: Device that owns the action + * @action: Function implementing the action + * @data: Pointer to data passed to @action implementation + * + * Releases and removes instance of @action previously added by + * devm_add_action(). Both action and data should match one of the + * existing entries. 
+ */ +void devm_release_action(struct device *dev, void (*action)(void *), void *data) +{ + struct action_devres devres = { + .data = data, + .action = action, + }; + + WARN_ON(devres_release(dev, devm_action_release, devm_action_match, + &devres)); + +} +EXPORT_SYMBOL_GPL(devm_release_action); + /* * Managed kmalloc/kfree */ diff --git a/drivers/base/firmware_loader/fallback_table.c b/drivers/base/firmware_loader/fallback_table.c index 776dd69cf5be..58d4a1263480 100644 --- a/drivers/base/firmware_loader/fallback_table.c +++ b/drivers/base/firmware_loader/fallback_table.c @@ -16,9 +16,6 @@ * firmware fallback configuration table */ -static unsigned int zero; -static unsigned int one = 1; - struct firmware_fallback_config fw_fallback_config = { .force_sysfs_fallback = IS_ENABLED(CONFIG_FW_LOADER_USER_HELPER_FALLBACK), .loading_timeout = 60, @@ -33,8 +30,8 @@ struct ctl_table firmware_config_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_douintvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "ignore_sysfs_fallback", @@ -42,8 +39,8 @@ struct ctl_table firmware_config_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_douintvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { } }; diff --git a/drivers/base/memory.c b/drivers/base/memory.c index f180427e48f4..826dd76f662e 100644 --- a/drivers/base/memory.c +++ b/drivers/base/memory.c @@ -39,6 +39,11 @@ static inline int base_memory_block_id(int section_nr) return section_nr / sections_per_block; } +static inline int pfn_to_block_id(unsigned long pfn) +{ + return base_memory_block_id(pfn_to_section_nr(pfn)); +} + static int memory_subsys_online(struct device *dev); static int memory_subsys_offline(struct device *dev); @@ -582,10 +587,9 @@ int __weak arch_get_memory_phys_device(unsigned long start_pfn) * A reference for the returned object is held and the reference for the * hinted object is released. */ -struct memory_block *find_memory_block_hinted(struct mem_section *section, - struct memory_block *hint) +static struct memory_block *find_memory_block_by_id(int block_id, + struct memory_block *hint) { - int block_id = base_memory_block_id(__section_nr(section)); struct device *hintdev = hint ? &hint->dev : NULL; struct device *dev; @@ -597,6 +601,14 @@ struct memory_block *find_memory_block_hinted(struct mem_section *section, return to_memory_block(dev); } +struct memory_block *find_memory_block_hinted(struct mem_section *section, + struct memory_block *hint) +{ + int block_id = base_memory_block_id(__section_nr(section)); + + return find_memory_block_by_id(block_id, hint); +} + /* * For now, we have a linear search to go find the appropriate * memory_block corresponding to a particular phys_index. 
If @@ -651,21 +663,23 @@ int register_memory(struct memory_block *memory) return ret; } -static int init_memory_block(struct memory_block **memory, - struct mem_section *section, unsigned long state) +static int init_memory_block(struct memory_block **memory, int block_id, + unsigned long state) { struct memory_block *mem; unsigned long start_pfn; - int scn_nr; int ret = 0; + mem = find_memory_block_by_id(block_id, NULL); + if (mem) { + put_device(&mem->dev); + return -EEXIST; + } mem = kzalloc(sizeof(*mem), GFP_KERNEL); if (!mem) return -ENOMEM; - scn_nr = __section_nr(section); - mem->start_section_nr = - base_memory_block_id(scn_nr) * sections_per_block; + mem->start_section_nr = block_id * sections_per_block; mem->end_section_nr = mem->start_section_nr + sections_per_block - 1; mem->state = state; start_pfn = section_nr_to_pfn(mem->start_section_nr); @@ -680,94 +694,98 @@ static int init_memory_block(struct memory_block **memory, static int add_memory_block(int base_section_nr) { struct memory_block *mem; - int i, ret, section_count = 0, section_nr; + int i, ret, section_count = 0; for (i = base_section_nr; i < base_section_nr + sections_per_block; - i++) { - if (!present_section_nr(i)) - continue; - if (section_count == 0) - section_nr = i; - section_count++; - } + i++) + if (present_section_nr(i)) + section_count++; if (section_count == 0) return 0; - ret = init_memory_block(&mem, __nr_to_section(section_nr), MEM_ONLINE); + ret = init_memory_block(&mem, base_memory_block_id(base_section_nr), + MEM_ONLINE); if (ret) return ret; mem->section_count = section_count; return 0; } +static void unregister_memory(struct memory_block *memory) +{ + if (WARN_ON_ONCE(memory->dev.bus != &memory_subsys)) + return; + + /* drop the ref. we got via find_memory_block() */ + put_device(&memory->dev); + device_unregister(&memory->dev); +} + /* - * need an interface for the VM to add new memory regions, - * but without onlining it. + * Create memory block devices for the given memory area. Start and size + * have to be aligned to memory block granularity. Memory block devices + * will be initialized as offline. */ -int hotplug_memory_register(int nid, struct mem_section *section) +int create_memory_block_devices(unsigned long start, unsigned long size) { - int ret = 0; + const int start_block_id = pfn_to_block_id(PFN_DOWN(start)); + int end_block_id = pfn_to_block_id(PFN_DOWN(start + size)); struct memory_block *mem; + unsigned long block_id; + int ret = 0; - mutex_lock(&mem_sysfs_mutex); + if (WARN_ON_ONCE(!IS_ALIGNED(start, memory_block_size_bytes()) || + !IS_ALIGNED(size, memory_block_size_bytes()))) + return -EINVAL; - mem = find_memory_block(section); - if (mem) { - mem->section_count++; - put_device(&mem->dev); - } else { - ret = init_memory_block(&mem, section, MEM_OFFLINE); + mutex_lock(&mem_sysfs_mutex); + for (block_id = start_block_id; block_id != end_block_id; block_id++) { + ret = init_memory_block(&mem, block_id, MEM_OFFLINE); if (ret) - goto out; - mem->section_count++; + break; + mem->section_count = sections_per_block; + } + if (ret) { + end_block_id = block_id; + for (block_id = start_block_id; block_id != end_block_id; + block_id++) { + mem = find_memory_block_by_id(block_id, NULL); + mem->section_count = 0; + unregister_memory(mem); + } } - -out: mutex_unlock(&mem_sysfs_mutex); return ret; } -#ifdef CONFIG_MEMORY_HOTREMOVE -static void -unregister_memory(struct memory_block *memory) -{ - BUG_ON(memory->dev.bus != &memory_subsys); - - /* drop the ref. 
we got via find_memory_block() */ - put_device(&memory->dev); - device_unregister(&memory->dev); -} - -void unregister_memory_section(struct mem_section *section) +/* + * Remove memory block devices for the given memory area. Start and size + * have to be aligned to memory block granularity. Memory block devices + * have to be offline. + */ +void remove_memory_block_devices(unsigned long start, unsigned long size) { + const int start_block_id = pfn_to_block_id(PFN_DOWN(start)); + const int end_block_id = pfn_to_block_id(PFN_DOWN(start + size)); struct memory_block *mem; + int block_id; - if (WARN_ON_ONCE(!present_section(section))) + if (WARN_ON_ONCE(!IS_ALIGNED(start, memory_block_size_bytes()) || + !IS_ALIGNED(size, memory_block_size_bytes()))) return; mutex_lock(&mem_sysfs_mutex); - - /* - * Some users of the memory hotplug do not want/need memblock to - * track all sections. Skip over those. - */ - mem = find_memory_block(section); - if (!mem) - goto out_unlock; - - unregister_mem_sect_under_nodes(mem, __section_nr(section)); - - mem->section_count--; - if (mem->section_count == 0) + for (block_id = start_block_id; block_id != end_block_id; block_id++) { + mem = find_memory_block_by_id(block_id, NULL); + if (WARN_ON_ONCE(!mem)) + continue; + mem->section_count = 0; + unregister_memory_block_under_nodes(mem); unregister_memory(mem); - else - put_device(&mem->dev); - -out_unlock: + } mutex_unlock(&mem_sysfs_mutex); } -#endif /* CONFIG_MEMORY_HOTREMOVE */ /* return true if the memory block is offlined, otherwise, return false */ bool is_memblock_offlined(struct memory_block *mem) diff --git a/drivers/base/node.c b/drivers/base/node.c index 8598fcbd2a17..9be88fd05147 100644 --- a/drivers/base/node.c +++ b/drivers/base/node.c @@ -801,23 +801,18 @@ int register_mem_sect_under_node(struct memory_block *mem_blk, void *arg) return 0; } -/* unregister memory section under all nodes that it spans */ -int unregister_mem_sect_under_nodes(struct memory_block *mem_blk, - unsigned long phys_index) +/* + * Unregister memory block device under all nodes that it spans. + * Has to be called with mem_sysfs_mutex held (due to unlinked_nodes). 
+ */ +void unregister_memory_block_under_nodes(struct memory_block *mem_blk) { - NODEMASK_ALLOC(nodemask_t, unlinked_nodes, GFP_KERNEL); unsigned long pfn, sect_start_pfn, sect_end_pfn; + static nodemask_t unlinked_nodes; - if (!mem_blk) { - NODEMASK_FREE(unlinked_nodes); - return -EFAULT; - } - if (!unlinked_nodes) - return -ENOMEM; - nodes_clear(*unlinked_nodes); - - sect_start_pfn = section_nr_to_pfn(phys_index); - sect_end_pfn = sect_start_pfn + PAGES_PER_SECTION - 1; + nodes_clear(unlinked_nodes); + sect_start_pfn = section_nr_to_pfn(mem_blk->start_section_nr); + sect_end_pfn = section_nr_to_pfn(mem_blk->end_section_nr); for (pfn = sect_start_pfn; pfn <= sect_end_pfn; pfn++) { int nid; @@ -826,15 +821,13 @@ int unregister_mem_sect_under_nodes(struct memory_block *mem_blk, continue; if (!node_online(nid)) continue; - if (node_test_and_set(nid, *unlinked_nodes)) + if (node_test_and_set(nid, unlinked_nodes)) continue; sysfs_remove_link(&node_devices[nid]->dev.kobj, kobject_name(&mem_blk->dev.kobj)); sysfs_remove_link(&mem_blk->dev.kobj, kobject_name(&node_devices[nid]->dev.kobj)); } - NODEMASK_FREE(unlinked_nodes); - return 0; } int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn) diff --git a/drivers/clk/renesas/r8a77470-cpg-mssr.c b/drivers/clk/renesas/r8a77470-cpg-mssr.c index ab0fb10b6bf0..d81ae65f0d18 100644 --- a/drivers/clk/renesas/r8a77470-cpg-mssr.c +++ b/drivers/clk/renesas/r8a77470-cpg-mssr.c @@ -175,7 +175,7 @@ static const unsigned int r8a77470_crit_mod_clks[] __initconst = { *--------------------------------------------------- * 0 0 20 x80 x78 x50 * 0 1 26 x60 x60 x56 - * 1 0 Prohibitted setting + * 1 0 Prohibited setting * 1 1 30 x52 x52 x50 * * *1 : Table 7.4 indicates VCO output (PLL0 = VCO) diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h index a45612148ca0..999aaf3a29b3 100644 --- a/drivers/dax/dax-private.h +++ b/drivers/dax/dax-private.h @@ -53,6 +53,7 @@ struct dax_region { * @pgmap - pgmap for memmap setup / lifetime (driver owned) * @ref: pgmap reference count (driver owned) * @cmp: @ref final put completion (driver owned) + * @dax_mem_res: physical address range of hotadded DAX memory */ struct dev_dax { struct dax_region *region; @@ -62,6 +63,7 @@ struct dev_dax { struct dev_pagemap pgmap; struct percpu_ref ref; struct completion cmp; + struct resource *dax_kmem_res; }; static inline struct dev_dax *to_dev_dax(struct device *dev) diff --git a/drivers/dax/device.c b/drivers/dax/device.c index 996d68ff992a..8465d12fecba 100644 --- a/drivers/dax/device.c +++ b/drivers/dax/device.c @@ -27,9 +27,8 @@ static void dev_dax_percpu_release(struct percpu_ref *ref) complete(&dev_dax->cmp); } -static void dev_dax_percpu_exit(void *data) +static void dev_dax_percpu_exit(struct percpu_ref *ref) { - struct percpu_ref *ref = data; struct dev_dax *dev_dax = ref_to_dev_dax(ref); dev_dbg(&dev_dax->dev, "%s\n", __func__); @@ -466,18 +465,12 @@ int dev_dax_probe(struct device *dev) if (rc) return rc; - rc = devm_add_action_or_reset(dev, dev_dax_percpu_exit, &dev_dax->ref); - if (rc) - return rc; - dev_dax->pgmap.ref = &dev_dax->ref; dev_dax->pgmap.kill = dev_dax_percpu_kill; + dev_dax->pgmap.cleanup = dev_dax_percpu_exit; addr = devm_memremap_pages(dev, &dev_dax->pgmap); - if (IS_ERR(addr)) { - devm_remove_action(dev, dev_dax_percpu_exit, &dev_dax->ref); - percpu_ref_exit(&dev_dax->ref); + if (IS_ERR(addr)) return PTR_ERR(addr); - } inode = dax_inode(dax_dev); cdev = inode->i_cdev; diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c index 
a02318c6d28a..3d0a7e702c94 100644 --- a/drivers/dax/kmem.c +++ b/drivers/dax/kmem.c @@ -66,23 +66,59 @@ int dev_dax_kmem_probe(struct device *dev) new_res->name = dev_name(dev); rc = add_memory(numa_node, new_res->start, resource_size(new_res)); - if (rc) + if (rc) { + release_resource(new_res); + kfree(new_res); return rc; + } + dev_dax->dax_kmem_res = new_res; return 0; } +#ifdef CONFIG_MEMORY_HOTREMOVE +static int dev_dax_kmem_remove(struct device *dev) +{ + struct dev_dax *dev_dax = to_dev_dax(dev); + struct resource *res = dev_dax->dax_kmem_res; + resource_size_t kmem_start = res->start; + resource_size_t kmem_size = resource_size(res); + int rc; + + /* + * We have one shot for removing memory, if some memory blocks were not + * offline prior to calling this function remove_memory() will fail, and + * there is no way to hotremove this memory until reboot because device + * unbind will succeed even if we return failure. + */ + rc = remove_memory(dev_dax->target_node, kmem_start, kmem_size); + if (rc) { + dev_err(dev, + "DAX region %pR cannot be hotremoved until the next reboot\n", + res); + return rc; + } + + /* Release and free dax resources */ + release_resource(res); + kfree(res); + dev_dax->dax_kmem_res = NULL; + + return 0; +} +#else static int dev_dax_kmem_remove(struct device *dev) { /* - * Purposely leak the request_mem_region() for the device-dax - * range and return '0' to ->remove() attempts. The removal of - * the device from the driver always succeeds, but the region - * is permanently pinned as reserved by the unreleased + * Without hotremove purposely leak the request_mem_region() for the + * device-dax range and return '0' to ->remove() attempts. The removal + * of the device from the driver always succeeds, but the region is + * permanently pinned as reserved by the unreleased * request_mem_region(). */ return 0; } +#endif /* CONFIG_MEMORY_HOTREMOVE */ static struct dax_device_driver device_dax_kmem_driver = { .drv = { diff --git a/drivers/fpga/dfl-afu-dma-region.c b/drivers/fpga/dfl-afu-dma-region.c index dcd80b088c7b..62f924489db5 100644 --- a/drivers/fpga/dfl-afu-dma-region.c +++ b/drivers/fpga/dfl-afu-dma-region.c @@ -12,6 +12,7 @@ #include <linux/dma-mapping.h> #include <linux/sched/signal.h> #include <linux/uaccess.h> +#include <linux/mm.h> #include "dfl-afu.h" @@ -32,52 +33,6 @@ void afu_dma_region_init(struct dfl_feature_platform_data *pdata) } /** - * afu_dma_adjust_locked_vm - adjust locked memory - * @dev: port device - * @npages: number of pages - * @incr: increase or decrease locked memory - * - * Increase or decrease the locked memory size with npages input. - * - * Return 0 on success. - * Return -ENOMEM if locked memory size is over the limit and no CAP_IPC_LOCK. - */ -static int afu_dma_adjust_locked_vm(struct device *dev, long npages, bool incr) -{ - unsigned long locked, lock_limit; - int ret = 0; - - /* the task is exiting. */ - if (!current->mm) - return 0; - - down_write(¤t->mm->mmap_sem); - - if (incr) { - locked = current->mm->locked_vm + npages; - lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT; - - if (locked > lock_limit && !capable(CAP_IPC_LOCK)) - ret = -ENOMEM; - else - current->mm->locked_vm += npages; - } else { - if (WARN_ON_ONCE(npages > current->mm->locked_vm)) - npages = current->mm->locked_vm; - current->mm->locked_vm -= npages; - } - - dev_dbg(dev, "[%d] RLIMIT_MEMLOCK %c%ld %ld/%ld%s\n", current->pid, - incr ? '+' : '-', npages << PAGE_SHIFT, - current->mm->locked_vm << PAGE_SHIFT, rlimit(RLIMIT_MEMLOCK), - ret ? 
"- exceeded" : ""); - - up_write(¤t->mm->mmap_sem); - - return ret; -} - -/** * afu_dma_pin_pages - pin pages of given dma memory region * @pdata: feature device platform data * @region: dma memory region to be pinned @@ -92,7 +47,7 @@ static int afu_dma_pin_pages(struct dfl_feature_platform_data *pdata, struct device *dev = &pdata->dev->dev; int ret, pinned; - ret = afu_dma_adjust_locked_vm(dev, npages, true); + ret = account_locked_vm(current->mm, npages, true); if (ret) return ret; @@ -121,7 +76,7 @@ put_pages: free_pages: kfree(region->pages); unlock_vm: - afu_dma_adjust_locked_vm(dev, npages, false); + account_locked_vm(current->mm, npages, false); return ret; } @@ -141,7 +96,7 @@ static void afu_dma_unpin_pages(struct dfl_feature_platform_data *pdata, put_all_pages(region->pages, npages); kfree(region->pages); - afu_dma_adjust_locked_vm(dev, npages, false); + account_locked_vm(current->mm, npages, false); dev_dbg(dev, "%ld pages unpinned\n", npages); } diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c index e4935dd1fd37..c23bb29e6d3e 100644 --- a/drivers/gpu/drm/i915/i915_mm.c +++ b/drivers/gpu/drm/i915/i915_mm.c @@ -35,8 +35,7 @@ struct remap_pfn { pgprot_t prot; }; -static int remap_pfn(pte_t *pte, pgtable_t token, - unsigned long addr, void *data) +static int remap_pfn(pte_t *pte, unsigned long addr, void *data) { struct remap_pfn *r = data; diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c index 2e33a9b4eae7..442903175665 100644 --- a/drivers/gpu/drm/i915/i915_perf.c +++ b/drivers/gpu/drm/i915/i915_perf.c @@ -274,8 +274,6 @@ #define POLL_PERIOD (NSEC_PER_SEC / POLL_FREQUENCY) /* for sysctl proc_dointvec_minmax of dev.i915.perf_stream_paranoid */ -static int zero; -static int one = 1; static u32 i915_perf_stream_paranoid = true; /* The maximum exponent the hardware accepts is 63 (essentially it selects one @@ -3365,8 +3363,8 @@ static struct ctl_table oa_table[] = { .maxlen = sizeof(i915_perf_stream_paranoid), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "oa_max_sample_rate", @@ -3374,7 +3372,7 @@ static struct ctl_table oa_table[] = { .maxlen = sizeof(i915_oa_max_sample_rate), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &oa_sample_rate_hard_limit, }, {} diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c index 1cb9408b0d40..698977f33670 100644 --- a/drivers/hv/vmbus_drv.c +++ b/drivers/hv/vmbus_drv.c @@ -1219,8 +1219,6 @@ static struct kmsg_dumper hv_kmsg_dumper = { }; static struct ctl_table_header *hv_ctl_table_hdr; -static int zero; -static int one = 1; /* * sysctl option to allow the user to control whether kmsg data should be @@ -1233,8 +1231,8 @@ static struct ctl_table hv_ctl_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE }, {} }; diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c index 8a1428d4f138..df9429e3fd3a 100644 --- a/drivers/misc/lkdtm/core.c +++ b/drivers/misc/lkdtm/core.c @@ -120,6 +120,9 @@ static const struct crashtype crashtypes[] = { CRASHTYPE(READ_AFTER_FREE), CRASHTYPE(WRITE_BUDDY_AFTER_FREE), CRASHTYPE(READ_BUDDY_AFTER_FREE), + CRASHTYPE(SLAB_FREE_DOUBLE), + CRASHTYPE(SLAB_FREE_CROSS), + CRASHTYPE(SLAB_FREE_PAGE), CRASHTYPE(SOFTLOCKUP), CRASHTYPE(HARDLOCKUP), CRASHTYPE(SPINLOCKUP), 
@@ -426,6 +429,7 @@ static int __init lkdtm_module_init(void) lkdtm_bugs_init(&recur_count); lkdtm_perms_init(); lkdtm_usercopy_init(); + lkdtm_heap_init(); /* Register debugfs interface */ lkdtm_debugfs_root = debugfs_create_dir("provoke-crash", NULL); @@ -472,6 +476,7 @@ static void __exit lkdtm_module_exit(void) debugfs_remove_recursive(lkdtm_debugfs_root); /* Handle test-specific clean-up. */ + lkdtm_heap_exit(); lkdtm_usercopy_exit(); if (lkdtm_kprobe != NULL) diff --git a/drivers/misc/lkdtm/heap.c b/drivers/misc/lkdtm/heap.c index 65026d7de130..3c5cec85edce 100644 --- a/drivers/misc/lkdtm/heap.c +++ b/drivers/misc/lkdtm/heap.c @@ -7,6 +7,10 @@ #include <linux/slab.h> #include <linux/sched.h> +static struct kmem_cache *double_free_cache; +static struct kmem_cache *a_cache; +static struct kmem_cache *b_cache; + /* * This tries to stay within the next largest power-of-2 kmalloc cache * to avoid actually overwriting anything important if it's not detected @@ -146,3 +150,71 @@ void lkdtm_READ_BUDDY_AFTER_FREE(void) kfree(val); } + +void lkdtm_SLAB_FREE_DOUBLE(void) +{ + int *val; + + val = kmem_cache_alloc(double_free_cache, GFP_KERNEL); + if (!val) { + pr_info("Unable to allocate double_free_cache memory.\n"); + return; + } + + /* Just make sure we got real memory. */ + *val = 0x12345678; + pr_info("Attempting double slab free ...\n"); + kmem_cache_free(double_free_cache, val); + kmem_cache_free(double_free_cache, val); +} + +void lkdtm_SLAB_FREE_CROSS(void) +{ + int *val; + + val = kmem_cache_alloc(a_cache, GFP_KERNEL); + if (!val) { + pr_info("Unable to allocate a_cache memory.\n"); + return; + } + + /* Just make sure we got real memory. */ + *val = 0x12345679; + pr_info("Attempting cross-cache slab free ...\n"); + kmem_cache_free(b_cache, val); +} + +void lkdtm_SLAB_FREE_PAGE(void) +{ + unsigned long p = __get_free_page(GFP_KERNEL); + + pr_info("Attempting non-Slab slab free ...\n"); + kmem_cache_free(NULL, (void *)p); + free_page(p); +} + +/* + * We have constructors to keep the caches distinctly separated without + * needing to boot with "slab_nomerge". 
+ */ +static void ctor_double_free(void *region) +{ } +static void ctor_a(void *region) +{ } +static void ctor_b(void *region) +{ } + +void __init lkdtm_heap_init(void) +{ + double_free_cache = kmem_cache_create("lkdtm-heap-double_free", + 64, 0, 0, ctor_double_free); + a_cache = kmem_cache_create("lkdtm-heap-a", 64, 0, 0, ctor_a); + b_cache = kmem_cache_create("lkdtm-heap-b", 64, 0, 0, ctor_b); +} + +void __exit lkdtm_heap_exit(void) +{ + kmem_cache_destroy(double_free_cache); + kmem_cache_destroy(a_cache); + kmem_cache_destroy(b_cache); +} diff --git a/drivers/misc/lkdtm/lkdtm.h b/drivers/misc/lkdtm/lkdtm.h index 23dc565b4307..c5ae0b37587d 100644 --- a/drivers/misc/lkdtm/lkdtm.h +++ b/drivers/misc/lkdtm/lkdtm.h @@ -28,11 +28,16 @@ void lkdtm_STACK_GUARD_PAGE_LEADING(void); void lkdtm_STACK_GUARD_PAGE_TRAILING(void); /* lkdtm_heap.c */ +void __init lkdtm_heap_init(void); +void __exit lkdtm_heap_exit(void); void lkdtm_OVERWRITE_ALLOCATION(void); void lkdtm_WRITE_AFTER_FREE(void); void lkdtm_READ_AFTER_FREE(void); void lkdtm_WRITE_BUDDY_AFTER_FREE(void); void lkdtm_READ_BUDDY_AFTER_FREE(void); +void lkdtm_SLAB_FREE_DOUBLE(void); +void lkdtm_SLAB_FREE_CROSS(void); +void lkdtm_SLAB_FREE_PAGE(void); /* lkdtm_perms.c */ void __init lkdtm_perms_init(void); diff --git a/drivers/misc/sram-exec.c b/drivers/misc/sram-exec.c index 426ad912b441..d054e2842a5f 100644 --- a/drivers/misc/sram-exec.c +++ b/drivers/misc/sram-exec.c @@ -96,7 +96,7 @@ void *sram_exec_copy(struct gen_pool *pool, void *dst, void *src, if (!part) return NULL; - if (!addr_in_gen_pool(pool, (unsigned long)dst, size)) + if (!gen_pool_has_addr(pool, (unsigned long)dst, size)) return NULL; base = (unsigned long)part->base; diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c index 6b89f7eab26c..e0f5e6202a27 100644 --- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c +++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c @@ -53,7 +53,7 @@ int mt76x02u_skb_dma_info(struct sk_buff *skb, int port, u32 flags) pad = round_up(skb->len, 4) + 4 - skb->len; /* First packet of a A-MSDU burst keeps track of the whole burst - * length, need to update lenght of it and the last packet. + * length, need to update length of it and the last packet. 
*/ skb_walk_frags(skb, iter) { last = iter; diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h index 81caa3782ec0..3bdda1c98339 100644 --- a/drivers/net/wireless/realtek/rtlwifi/wifi.h +++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h @@ -598,7 +598,7 @@ enum ht_channel_width { HT_CHANNEL_WIDTH_MAX, }; -/* Ref: 802.11i sepc D10.0 7.3.2.25.1 +/* Ref: 802.11i spec D10.0 7.3.2.25.1 * Cipher Suites Encryption Algorithms */ enum rt_enc_alg { diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c index d9d845077b8b..c4f5a808b9da 100644 --- a/drivers/nvdimm/pmem.c +++ b/drivers/nvdimm/pmem.c @@ -311,11 +311,19 @@ static const struct attribute_group *pmem_attribute_groups[] = { NULL, }; -static void pmem_release_queue(void *q) +static void __pmem_release_queue(struct percpu_ref *ref) { + struct request_queue *q; + + q = container_of(ref, typeof(*q), q_usage_counter); blk_cleanup_queue(q); } +static void pmem_release_queue(void *ref) +{ + __pmem_release_queue(ref); +} + static void pmem_freeze_queue(struct percpu_ref *ref) { struct request_queue *q; @@ -407,12 +415,10 @@ static int pmem_attach_disk(struct device *dev, if (!q) return -ENOMEM; - if (devm_add_action_or_reset(dev, pmem_release_queue, q)) - return -ENOMEM; - pmem->pfn_flags = PFN_DEV; pmem->pgmap.ref = &q->q_usage_counter; pmem->pgmap.kill = pmem_freeze_queue; + pmem->pgmap.cleanup = __pmem_release_queue; if (is_nd_pfn(dev)) { if (setup_pagemap_fsdax(dev, &pmem->pgmap)) return -ENOMEM; @@ -433,6 +439,9 @@ static int pmem_attach_disk(struct device *dev, pmem->pfn_flags |= PFN_MAP; memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res)); } else { + if (devm_add_action_or_reset(dev, pmem_release_queue, + &q->q_usage_counter)) + return -ENOMEM; addr = devm_memremap(dev, pmem->phys_addr, pmem->size, ARCH_MEMREMAP_PMEM); memcpy(&bb_res, &nsio->res, sizeof(bb_res)); diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c index 742928d0053e..a98126ad9c3a 100644 --- a/drivers/pci/p2pdma.c +++ b/drivers/pci/p2pdma.c @@ -20,12 +20,16 @@ #include <linux/seq_buf.h> struct pci_p2pdma { - struct percpu_ref devmap_ref; - struct completion devmap_ref_done; struct gen_pool *pool; bool p2pmem_published; }; +struct p2pdma_pagemap { + struct dev_pagemap pgmap; + struct percpu_ref ref; + struct completion ref_done; +}; + static ssize_t size_show(struct device *dev, struct device_attribute *attr, char *buf) { @@ -74,41 +78,45 @@ static const struct attribute_group p2pmem_group = { .name = "p2pmem", }; +static struct p2pdma_pagemap *to_p2p_pgmap(struct percpu_ref *ref) +{ + return container_of(ref, struct p2pdma_pagemap, ref); +} + static void pci_p2pdma_percpu_release(struct percpu_ref *ref) { - struct pci_p2pdma *p2p = - container_of(ref, struct pci_p2pdma, devmap_ref); + struct p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(ref); - complete_all(&p2p->devmap_ref_done); + complete(&p2p_pgmap->ref_done); } static void pci_p2pdma_percpu_kill(struct percpu_ref *ref) { - /* - * pci_p2pdma_add_resource() may be called multiple times - * by a driver and may register the percpu_kill devm action multiple - * times. We only want the first action to actually kill the - * percpu_ref. 
- */ - if (percpu_ref_is_dying(ref)) - return; - percpu_ref_kill(ref); } +static void pci_p2pdma_percpu_cleanup(struct percpu_ref *ref) +{ + struct p2pdma_pagemap *p2p_pgmap = to_p2p_pgmap(ref); + + wait_for_completion(&p2p_pgmap->ref_done); + percpu_ref_exit(&p2p_pgmap->ref); +} + static void pci_p2pdma_release(void *data) { struct pci_dev *pdev = data; + struct pci_p2pdma *p2pdma = pdev->p2pdma; - if (!pdev->p2pdma) + if (!p2pdma) return; - wait_for_completion(&pdev->p2pdma->devmap_ref_done); - percpu_ref_exit(&pdev->p2pdma->devmap_ref); + /* Flush and disable pci_alloc_p2p_mem() */ + pdev->p2pdma = NULL; + synchronize_rcu(); - gen_pool_destroy(pdev->p2pdma->pool); + gen_pool_destroy(p2pdma->pool); sysfs_remove_group(&pdev->dev.kobj, &p2pmem_group); - pdev->p2pdma = NULL; } static int pci_p2pdma_setup(struct pci_dev *pdev) @@ -124,12 +132,6 @@ static int pci_p2pdma_setup(struct pci_dev *pdev) if (!p2p->pool) goto out; - init_completion(&p2p->devmap_ref_done); - error = percpu_ref_init(&p2p->devmap_ref, - pci_p2pdma_percpu_release, 0, GFP_KERNEL); - if (error) - goto out_pool_destroy; - error = devm_add_action_or_reset(&pdev->dev, pci_p2pdma_release, pdev); if (error) goto out_pool_destroy; @@ -163,6 +165,7 @@ out: int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, u64 offset) { + struct p2pdma_pagemap *p2p_pgmap; struct dev_pagemap *pgmap; void *addr; int error; @@ -185,18 +188,27 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, return error; } - pgmap = devm_kzalloc(&pdev->dev, sizeof(*pgmap), GFP_KERNEL); - if (!pgmap) + p2p_pgmap = devm_kzalloc(&pdev->dev, sizeof(*p2p_pgmap), GFP_KERNEL); + if (!p2p_pgmap) return -ENOMEM; + init_completion(&p2p_pgmap->ref_done); + error = percpu_ref_init(&p2p_pgmap->ref, + pci_p2pdma_percpu_release, 0, GFP_KERNEL); + if (error) + goto pgmap_free; + + pgmap = &p2p_pgmap->pgmap; + pgmap->res.start = pci_resource_start(pdev, bar) + offset; pgmap->res.end = pgmap->res.start + size - 1; pgmap->res.flags = pci_resource_flags(pdev, bar); - pgmap->ref = &pdev->p2pdma->devmap_ref; + pgmap->ref = &p2p_pgmap->ref; pgmap->type = MEMORY_DEVICE_PCI_P2PDMA; pgmap->pci_p2pdma_bus_offset = pci_bus_address(pdev, bar) - pci_resource_start(pdev, bar); pgmap->kill = pci_p2pdma_percpu_kill; + pgmap->cleanup = pci_p2pdma_percpu_cleanup; addr = devm_memremap_pages(&pdev->dev, pgmap); if (IS_ERR(addr)) { @@ -204,19 +216,22 @@ int pci_p2pdma_add_resource(struct pci_dev *pdev, int bar, size_t size, goto pgmap_free; } - error = gen_pool_add_virt(pdev->p2pdma->pool, (unsigned long)addr, + error = gen_pool_add_owner(pdev->p2pdma->pool, (unsigned long)addr, pci_bus_address(pdev, bar) + offset, - resource_size(&pgmap->res), dev_to_node(&pdev->dev)); + resource_size(&pgmap->res), dev_to_node(&pdev->dev), + &p2p_pgmap->ref); if (error) - goto pgmap_free; + goto pages_free; pci_info(pdev, "added peer-to-peer DMA memory %pR\n", &pgmap->res); return 0; +pages_free: + devm_memunmap_pages(&pdev->dev, pgmap); pgmap_free: - devm_kfree(&pdev->dev, pgmap); + devm_kfree(&pdev->dev, p2p_pgmap); return error; } EXPORT_SYMBOL_GPL(pci_p2pdma_add_resource); @@ -585,19 +600,30 @@ EXPORT_SYMBOL_GPL(pci_p2pmem_find_many); */ void *pci_alloc_p2pmem(struct pci_dev *pdev, size_t size) { - void *ret; + void *ret = NULL; + struct percpu_ref *ref; + /* + * Pairs with synchronize_rcu() in pci_p2pdma_release() to + * ensure pdev->p2pdma is non-NULL for the duration of the + * read-lock. 
+ */ + rcu_read_lock(); if (unlikely(!pdev->p2pdma)) - return NULL; - - if (unlikely(!percpu_ref_tryget_live(&pdev->p2pdma->devmap_ref))) - return NULL; - - ret = (void *)gen_pool_alloc(pdev->p2pdma->pool, size); + goto out; - if (unlikely(!ret)) - percpu_ref_put(&pdev->p2pdma->devmap_ref); + ret = (void *)gen_pool_alloc_owner(pdev->p2pdma->pool, size, + (void **) &ref); + if (!ret) + goto out; + if (unlikely(!percpu_ref_tryget_live(ref))) { + gen_pool_free(pdev->p2pdma->pool, (unsigned long) ret, size); + ret = NULL; + goto out; + } +out: + rcu_read_unlock(); return ret; } EXPORT_SYMBOL_GPL(pci_alloc_p2pmem); @@ -610,8 +636,11 @@ EXPORT_SYMBOL_GPL(pci_alloc_p2pmem); */ void pci_free_p2pmem(struct pci_dev *pdev, void *addr, size_t size) { - gen_pool_free(pdev->p2pdma->pool, (uintptr_t)addr, size); - percpu_ref_put(&pdev->p2pdma->devmap_ref); + struct percpu_ref *ref; + + gen_pool_free_owner(pdev->p2pdma->pool, (uintptr_t)addr, size, + (void **) &ref); + percpu_ref_put(ref); } EXPORT_SYMBOL_GPL(pci_free_p2pmem); diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c index ce7a90e68042..8155f59ece38 100644 --- a/drivers/rapidio/devices/rio_mport_cdev.c +++ b/drivers/rapidio/devices/rio_mport_cdev.c @@ -1686,6 +1686,7 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv, if (copy_from_user(&dev_info, arg, sizeof(dev_info))) return -EFAULT; + dev_info.name[sizeof(dev_info.name) - 1] = '\0'; rmcd_debug(RDEV, "name:%s ct:0x%x did:0x%x hc:0x%x", dev_info.name, dev_info.comptag, dev_info.destid, dev_info.hopcount); @@ -1817,6 +1818,7 @@ static int rio_mport_del_riodev(struct mport_cdev_priv *priv, void __user *arg) if (copy_from_user(&dev_info, arg, sizeof(dev_info))) return -EFAULT; + dev_info.name[sizeof(dev_info.name) - 1] = '\0'; mport = priv->md->mport; diff --git a/drivers/s390/char/sclp_async.c b/drivers/s390/char/sclp_async.c index e69b12a40636..24a540de979d 100644 --- a/drivers/s390/char/sclp_async.c +++ b/drivers/s390/char/sclp_async.c @@ -64,9 +64,6 @@ static struct notifier_block call_home_panic_nb = { .priority = INT_MAX, }; -static int zero; -static int one = 1; - static struct ctl_table callhome_table[] = { { .procname = "callhome", @@ -74,8 +71,8 @@ static struct ctl_table callhome_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, {} }; diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c index 6d6b14295734..faf43b1d3dbe 100644 --- a/drivers/scsi/lpfc/lpfc_init.c +++ b/drivers/scsi/lpfc/lpfc_init.c @@ -2965,7 +2965,7 @@ lpfc_stop_hba_timers(struct lpfc_hba *phba) del_timer_sync(&phba->fcp_poll_timer); break; case LPFC_PCI_DEV_OC: - /* Stop any OneConnect device sepcific driver timers */ + /* Stop any OneConnect device specific driver timers */ lpfc_sli4_stop_fcf_redisc_wait_timer(phba); break; default: diff --git a/drivers/staging/rtl8723bs/hal/rtl8723b_phycfg.c b/drivers/staging/rtl8723bs/hal/rtl8723b_phycfg.c index 4f2ad54af398..6da7f8e7bdae 100644 --- a/drivers/staging/rtl8723bs/hal/rtl8723b_phycfg.c +++ b/drivers/staging/rtl8723bs/hal/rtl8723b_phycfg.c @@ -45,7 +45,7 @@ static u32 phy_CalculateBitShift(u32 BitMask) /** * Function: PHY_QueryBBReg * -* OverView: Read "sepcific bits" from BB register +* OverView: Read "specific bits" from BB register * * Input: * struct adapter * Adapter, diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c index e38f104db174..c882b05241b3 
100644 --- a/drivers/tty/tty_ldisc.c +++ b/drivers/tty/tty_ldisc.c @@ -855,8 +855,6 @@ void tty_ldisc_deinit(struct tty_struct *tty) tty->ldisc = NULL; } -static int zero; -static int one = 1; static struct ctl_table tty_table[] = { { .procname = "ldisc_autoload", @@ -864,8 +862,8 @@ static struct ctl_table tty_table[] = { .maxlen = sizeof(tty_ldisc_autoload), .mode = 0644, .proc_handler = proc_dointvec, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { } }; diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c index 40ddc0c5f677..d06e8e291924 100644 --- a/drivers/vfio/vfio_iommu_spapr_tce.c +++ b/drivers/vfio/vfio_iommu_spapr_tce.c @@ -22,6 +22,7 @@ #include <linux/vmalloc.h> #include <linux/sched/mm.h> #include <linux/sched/signal.h> +#include <linux/mm.h> #include <asm/iommu.h> #include <asm/tce.h> @@ -34,51 +35,6 @@ static void tce_iommu_detach_group(void *iommu_data, struct iommu_group *iommu_group); -static long try_increment_locked_vm(struct mm_struct *mm, long npages) -{ - long ret = 0, locked, lock_limit; - - if (WARN_ON_ONCE(!mm)) - return -EPERM; - - if (!npages) - return 0; - - down_write(&mm->mmap_sem); - locked = mm->locked_vm + npages; - lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT; - if (locked > lock_limit && !capable(CAP_IPC_LOCK)) - ret = -ENOMEM; - else - mm->locked_vm += npages; - - pr_debug("[%d] RLIMIT_MEMLOCK +%ld %ld/%ld%s\n", current->pid, - npages << PAGE_SHIFT, - mm->locked_vm << PAGE_SHIFT, - rlimit(RLIMIT_MEMLOCK), - ret ? " - exceeded" : ""); - - up_write(&mm->mmap_sem); - - return ret; -} - -static void decrement_locked_vm(struct mm_struct *mm, long npages) -{ - if (!mm || !npages) - return; - - down_write(&mm->mmap_sem); - if (WARN_ON_ONCE(npages > mm->locked_vm)) - npages = mm->locked_vm; - mm->locked_vm -= npages; - pr_debug("[%d] RLIMIT_MEMLOCK -%ld %ld/%ld\n", current->pid, - npages << PAGE_SHIFT, - mm->locked_vm << PAGE_SHIFT, - rlimit(RLIMIT_MEMLOCK)); - up_write(&mm->mmap_sem); -} - /* * VFIO IOMMU fd for SPAPR_TCE IOMMU implementation * @@ -336,7 +292,7 @@ static int tce_iommu_enable(struct tce_container *container) return ret; locked = table_group->tce32_size >> PAGE_SHIFT; - ret = try_increment_locked_vm(container->mm, locked); + ret = account_locked_vm(container->mm, locked, true); if (ret) return ret; @@ -355,7 +311,7 @@ static void tce_iommu_disable(struct tce_container *container) container->enabled = false; BUG_ON(!container->mm); - decrement_locked_vm(container->mm, container->locked_pages); + account_locked_vm(container->mm, container->locked_pages, false); } static void *tce_iommu_open(unsigned long arg) @@ -659,7 +615,7 @@ static long tce_iommu_create_table(struct tce_container *container, if (!table_size) return -EINVAL; - ret = try_increment_locked_vm(container->mm, table_size >> PAGE_SHIFT); + ret = account_locked_vm(container->mm, table_size >> PAGE_SHIFT, true); if (ret) return ret; @@ -678,7 +634,7 @@ static void tce_iommu_free_table(struct tce_container *container, unsigned long pages = tbl->it_allocated_size >> PAGE_SHIFT; iommu_tce_table_put(tbl); - decrement_locked_vm(container->mm, pages); + account_locked_vm(container->mm, pages, false); } static long tce_iommu_create_window(struct tce_container *container, diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c index 3ddc375e7063..bf449ace1676 100644 --- a/drivers/vfio/vfio_iommu_type1.c +++ b/drivers/vfio/vfio_iommu_type1.c @@ -275,21 +275,8 @@ static int 
vfio_lock_acct(struct vfio_dma *dma, long npage, bool async) ret = down_write_killable(&mm->mmap_sem); if (!ret) { - if (npage > 0) { - if (!dma->lock_cap) { - unsigned long limit; - - limit = task_rlimit(dma->task, - RLIMIT_MEMLOCK) >> PAGE_SHIFT; - - if (mm->locked_vm + npage > limit) - ret = -ENOMEM; - } - } - - if (!ret) - mm->locked_vm += npage; - + ret = __account_locked_vm(mm, abs(npage), npage > 0, dma->task, + dma->lock_cap); up_write(&mm->mmap_sem); } diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c index d37dd5bb7a8f..37a36c6b9f93 100644 --- a/drivers/xen/balloon.c +++ b/drivers/xen/balloon.c @@ -77,9 +77,6 @@ static int xen_hotplug_unpopulated; #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG -static int zero; -static int one = 1; - static struct ctl_table balloon_table[] = { { .procname = "hotplug_unpopulated", @@ -87,8 +84,8 @@ static struct ctl_table balloon_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { } }; diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c index 469dfbd6cf90..4c339c7e66e5 100644 --- a/drivers/xen/gntdev.c +++ b/drivers/xen/gntdev.c @@ -264,8 +264,7 @@ void gntdev_put_map(struct gntdev_priv *priv, struct gntdev_grant_map *map) /* ------------------------------------------------------------------ */ -static int find_grant_ptes(pte_t *pte, pgtable_t token, - unsigned long addr, void *data) +static int find_grant_ptes(pte_t *pte, unsigned long addr, void *data) { struct gntdev_grant_map *map = data; unsigned int pgnr = (addr - map->vma->vm_start) >> PAGE_SHIFT; @@ -292,8 +291,7 @@ static int find_grant_ptes(pte_t *pte, pgtable_t token, } #ifdef CONFIG_X86 -static int set_grant_ptes_as_special(pte_t *pte, pgtable_t token, - unsigned long addr, void *data) +static int set_grant_ptes_as_special(pte_t *pte, unsigned long addr, void *data) { set_pte_at(current->mm, addr, pte, pte_mkspecial(*pte)); return 0; diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c index 1ff38d8036e9..2f5ce7230a43 100644 --- a/drivers/xen/privcmd.c +++ b/drivers/xen/privcmd.c @@ -731,8 +731,7 @@ struct remap_pfn { unsigned long i; }; -static int remap_pfn_fn(pte_t *ptep, pgtable_t token, unsigned long addr, - void *data) +static int remap_pfn_fn(pte_t *ptep, unsigned long addr, void *data) { struct remap_pfn *r = data; struct page *page = r->pages[r->i]; @@ -966,8 +965,7 @@ static int privcmd_mmap(struct file *file, struct vm_area_struct *vma) * on a per pfn/pte basis. Mapping calls that fail with ENOENT * can be then retried until success. */ -static int is_mapped_fn(pte_t *pte, struct page *pmd_page, - unsigned long addr, void *data) +static int is_mapped_fn(pte_t *pte, unsigned long addr, void *data) { return pte_none(*pte) ? 
0 : -EBUSY; } diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c index e7df65d32c91..ba883a80b3c0 100644 --- a/drivers/xen/xlate_mmu.c +++ b/drivers/xen/xlate_mmu.c @@ -93,8 +93,7 @@ static void setup_hparams(unsigned long gfn, void *data) info->fgfn++; } -static int remap_pte_fn(pte_t *ptep, pgtable_t token, unsigned long addr, - void *data) +static int remap_pte_fn(pte_t *ptep, unsigned long addr, void *data) { struct remap_data *info = data; struct page *page = info->pages[info->index++]; diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c index bc57ae9e2963..cce9ace651a2 100644 --- a/fs/9p/vfs_addr.c +++ b/fs/9p/vfs_addr.c @@ -35,8 +35,9 @@ * @page: structure to page * */ -static int v9fs_fid_readpage(struct p9_fid *fid, struct page *page) +static int v9fs_fid_readpage(void *data, struct page *page) { + struct p9_fid *fid = data; struct inode *inode = page->mapping->host; struct bio_vec bvec = {.bv_page = page, .bv_len = PAGE_SIZE}; struct iov_iter to; @@ -107,7 +108,8 @@ static int v9fs_vfs_readpages(struct file *filp, struct address_space *mapping, if (ret == 0) return ret; - ret = read_cache_pages(mapping, pages, (void *)v9fs_vfs_readpage, filp); + ret = read_cache_pages(mapping, pages, v9fs_fid_readpage, + filp->private_data); p9_debug(P9_DEBUG_VFS, " = %d\n", ret); return ret; } @@ -425,7 +425,7 @@ static int aio_migratepage(struct address_space *mapping, struct page *new, BUG_ON(PageWriteback(old)); get_page(new); - rc = migrate_page_move_mapping(mapping, new, old, mode, 1); + rc = migrate_page_move_mapping(mapping, new, old, 1); if (rc != MIGRATEPAGE_SUCCESS) { put_page(new); goto out_unlock; @@ -2096,6 +2096,7 @@ SYSCALL_DEFINE6(io_pgetevents, struct __aio_sigset ksig = { NULL, }; sigset_t ksigmask, sigsaved; struct timespec64 ts; + bool interrupted; int ret; if (timeout && unlikely(get_timespec64(&ts, timeout))) @@ -2109,8 +2110,10 @@ SYSCALL_DEFINE6(io_pgetevents, return ret; ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? &ts : NULL); - restore_user_sigmask(ksig.sigmask, &sigsaved); - if (signal_pending(current) && !ret) + + interrupted = signal_pending(current); + restore_user_sigmask(ksig.sigmask, &sigsaved, interrupted); + if (interrupted && !ret) ret = -ERESTARTNOHAND; return ret; @@ -2129,6 +2132,7 @@ SYSCALL_DEFINE6(io_pgetevents_time32, struct __aio_sigset ksig = { NULL, }; sigset_t ksigmask, sigsaved; struct timespec64 ts; + bool interrupted; int ret; if (timeout && unlikely(get_old_timespec32(&ts, timeout))) @@ -2143,8 +2147,10 @@ SYSCALL_DEFINE6(io_pgetevents_time32, return ret; ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? &ts : NULL); - restore_user_sigmask(ksig.sigmask, &sigsaved); - if (signal_pending(current) && !ret) + + interrupted = signal_pending(current); + restore_user_sigmask(ksig.sigmask, &sigsaved, interrupted); + if (interrupted && !ret) ret = -ERESTARTNOHAND; return ret; @@ -2194,6 +2200,7 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents, struct __compat_aio_sigset ksig = { NULL, }; sigset_t ksigmask, sigsaved; struct timespec64 t; + bool interrupted; int ret; if (timeout && get_old_timespec32(&t, timeout)) @@ -2207,8 +2214,10 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents, return ret; ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? 
&t : NULL); - restore_user_sigmask(ksig.sigmask, &sigsaved); - if (signal_pending(current) && !ret) + + interrupted = signal_pending(current); + restore_user_sigmask(ksig.sigmask, &sigsaved, interrupted); + if (interrupted && !ret) ret = -ERESTARTNOHAND; return ret; @@ -2227,6 +2236,7 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents_time64, struct __compat_aio_sigset ksig = { NULL, }; sigset_t ksigmask, sigsaved; struct timespec64 t; + bool interrupted; int ret; if (timeout && get_timespec64(&t, timeout)) @@ -2240,8 +2250,10 @@ COMPAT_SYSCALL_DEFINE6(io_pgetevents_time64, return ret; ret = do_io_getevents(ctx_id, min_nr, nr, events, timeout ? &t : NULL); - restore_user_sigmask(ksig.sigmask, &sigsaved); - if (signal_pending(current) && !ret) + + interrupted = signal_pending(current); + restore_user_sigmask(ksig.sigmask, &sigsaved, interrupted); + if (interrupted && !ret) ret = -ERESTARTNOHAND; return ret; diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c index 8264b468f283..d4e11b2e04f6 100644 --- a/fs/binfmt_elf.c +++ b/fs/binfmt_elf.c @@ -1127,7 +1127,6 @@ out_free_interp: load_addr, interp_load_addr); if (retval < 0) goto out; - /* N.B. passed_fileno might not be initialized? */ current->mm->end_code = end_code; current->mm->start_code = start_code; current->mm->start_data = start_data; diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c index 82a48e830018..7562d6aefbe4 100644 --- a/fs/binfmt_flat.c +++ b/fs/binfmt_flat.c @@ -416,7 +416,6 @@ static int load_flat_file(struct linux_binprm *bprm, u32 text_len, data_len, bss_len, stack_len, full_data, flags; unsigned long len, memp, memp_size, extra, rlim; u32 __user *reloc, *rp; - struct inode *inode; int i, rev, relocs; loff_t fpos; unsigned long start_code, end_code; @@ -424,7 +423,6 @@ static int load_flat_file(struct linux_binprm *bprm, int ret; hdr = ((struct flat_hdr *) bprm->buf); /* exec-header */ - inode = file_inode(bprm->file); text_len = ntohl(hdr->data_start); data_len = ntohl(hdr->data_end) - ntohl(hdr->data_start); @@ -856,9 +854,14 @@ err: static int load_flat_shared_library(int id, struct lib_info *libs) { + /* + * This is a fake bprm struct; only the members "buf", "file" and + * "filename" are actually used. + */ struct linux_binprm bprm; int res; char buf[16]; + loff_t pos = 0; memset(&bprm, 0, sizeof(bprm)); @@ -872,25 +875,11 @@ static int load_flat_shared_library(int id, struct lib_info *libs) if (IS_ERR(bprm.file)) return res; - bprm.cred = prepare_exec_creds(); - res = -ENOMEM; - if (!bprm.cred) - goto out; - - /* We don't really care about recalculating credentials at this point - * as we're past the point of no return and are dealing with shared - * libraries. 
- */ - bprm.called_set_creds = 1; + res = kernel_read(bprm.file, bprm.buf, BINPRM_BUF_SIZE, &pos); - res = prepare_binprm(&bprm); - - if (!res) + if (res >= 0) res = load_flat_file(&bprm, libs, id, NULL); - abort_creds(bprm.cred); - -out: allow_write_access(bprm.file); fput(bprm.file); diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c index 3837ca180d52..6c0044255e69 100644 --- a/fs/btrfs/zstd.c +++ b/fs/btrfs/zstd.c @@ -27,15 +27,17 @@ /* 307s to avoid pathologically clashing with transaction commit */ #define ZSTD_BTRFS_RECLAIM_JIFFIES (307 * HZ) -static ZSTD_parameters zstd_get_btrfs_parameters(unsigned int level, +static ZSTD_parameters *zstd_get_btrfs_parameters(unsigned int level, size_t src_len) { - ZSTD_parameters params = ZSTD_getParams(level, src_len, 0); + static ZSTD_parameters params; + + params = ZSTD_getParams(level, src_len, 0); if (params.cParams.windowLog > ZSTD_BTRFS_MAX_WINDOWLOG) params.cParams.windowLog = ZSTD_BTRFS_MAX_WINDOWLOG; WARN_ON(src_len > ZSTD_BTRFS_MAX_INPUT); - return params; + return ¶ms; } struct workspace { @@ -155,11 +157,12 @@ static void zstd_calc_ws_mem_sizes(void) unsigned int level; for (level = 1; level <= ZSTD_BTRFS_MAX_LEVEL; level++) { - ZSTD_parameters params = - zstd_get_btrfs_parameters(level, ZSTD_BTRFS_MAX_INPUT); - size_t level_size = - max_t(size_t, - ZSTD_CStreamWorkspaceBound(params.cParams), + ZSTD_parameters *params; + size_t level_size; + + params = zstd_get_btrfs_parameters(level, ZSTD_BTRFS_MAX_INPUT); + level_size = max_t(size_t, + ZSTD_CStreamWorkspaceBound(params->cParams), ZSTD_DStreamWorkspaceBound(ZSTD_BTRFS_MAX_INPUT)); max_size = max_t(size_t, max_size, level_size); @@ -385,8 +388,9 @@ static int zstd_compress_pages(struct list_head *ws, unsigned long len = *total_out; const unsigned long nr_dest_pages = *out_pages; unsigned long max_out = nr_dest_pages * PAGE_SIZE; - ZSTD_parameters params = zstd_get_btrfs_parameters(workspace->req_level, - len); + ZSTD_parameters *params; + + params = zstd_get_btrfs_parameters(workspace->req_level, len); *out_pages = 0; *total_out = 0; diff --git a/fs/buffer.c b/fs/buffer.c index e450c55f6434..f85b6e12da5d 100644 --- a/fs/buffer.c +++ b/fs/buffer.c @@ -956,10 +956,20 @@ grow_dev_page(struct block_device *bdev, sector_t block, end_block = init_page_buffers(page, bdev, (sector_t)index << sizebits, size); +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + current->getblk_executed |= 0x01; +#endif goto done; } - if (!try_to_free_buffers(page)) + if (!try_to_free_buffers(page)) { +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + current->getblk_executed |= 0x02; +#endif goto failed; + } +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + current->getblk_executed |= 0x04; +#endif } /* @@ -979,6 +989,9 @@ grow_dev_page(struct block_device *bdev, sector_t block, spin_unlock(&inode->i_mapping->private_lock); done: ret = (block < end_block) ? 
1 : -ENXIO; +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + current->getblk_executed |= 0x08; +#endif failed: unlock_page(page); put_page(page); @@ -1034,6 +1047,12 @@ __getblk_slow(struct block_device *bdev, sector_t block, return NULL; } +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + current->getblk_stamp = jiffies; + current->getblk_executed = 0; + current->getblk_bh_count = 0; + current->getblk_bh_state = 0; +#endif for (;;) { struct buffer_head *bh; int ret; @@ -1045,6 +1064,24 @@ __getblk_slow(struct block_device *bdev, sector_t block, ret = grow_buffers(bdev, block, size, gfp); if (ret < 0) return NULL; + +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + if (!time_after(jiffies, current->getblk_stamp + 3 * HZ)) + continue; + printk(KERN_ERR "%s(%u): getblk(): executed=%x bh_count=%d bh_state=%lx bdev_super_blocksize=%ld size=%u bdev_super_blocksize_bits=%d bdev_inode_blkbits=%d\n", + current->comm, current->pid, current->getblk_executed, + current->getblk_bh_count, current->getblk_bh_state, + IS_ERR_OR_NULL(bdev->bd_super) ? -1L : + bdev->bd_super->s_blocksize, size, + IS_ERR_OR_NULL(bdev->bd_super) ? -1 : + bdev->bd_super->s_blocksize_bits, + IS_ERR_OR_NULL(bdev->bd_inode) ? -1 : + bdev->bd_inode->i_blkbits); + current->getblk_executed = 0; + current->getblk_bh_count = 0; + current->getblk_bh_state = 0; + current->getblk_stamp = jiffies; +#endif } } @@ -3227,6 +3264,11 @@ EXPORT_SYMBOL(sync_dirty_buffer); */ static inline int buffer_busy(struct buffer_head *bh) { +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + current->getblk_executed |= 0x80; + current->getblk_bh_count = atomic_read(&bh->b_count); + current->getblk_bh_state = bh->b_state; +#endif return atomic_read(&bh->b_count) | (bh->b_state & ((1 << BH_Dirty) | (1 << BH_Lock))); } @@ -3265,11 +3307,18 @@ int try_to_free_buffers(struct page *page) int ret = 0; BUG_ON(!PageLocked(page)); - if (PageWriteback(page)) + if (PageWriteback(page)) { +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + current->getblk_executed |= 0x10; +#endif return 0; + } if (mapping == NULL) { /* can this still happen? */ ret = drop_buffers(page, &buffers_to_free); +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + current->getblk_executed |= 0x20; +#endif goto out; } @@ -3293,6 +3342,9 @@ int try_to_free_buffers(struct page *page) if (ret) cancel_dirty_page(page); spin_unlock(&mapping->private_lock); +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + current->getblk_executed |= 0x40; +#endif out: if (buffers_to_free) { struct buffer_head *bh = buffers_to_free; diff --git a/fs/coda/Makefile b/fs/coda/Makefile index 1ce66819da2a..78befb8369c9 100644 --- a/fs/coda/Makefile +++ b/fs/coda/Makefile @@ -6,7 +6,8 @@ obj-$(CONFIG_CODA_FS) += coda.o coda-objs := psdev.o cache.o cnode.o inode.o dir.o file.o upcall.o \ - coda_linux.o symlink.o pioctl.o sysctl.o + coda_linux.o symlink.o pioctl.o +coda-$(CONFIG_SYSCTL) += sysctl.o # If you want debugging output, please uncomment the following line. 
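The fs/coda/Makefile hunk above, together with the coda_int.h change further down in this series, uses the usual kbuild pattern for letting a feature compile out cleanly: the object file is added to the module only when the Kconfig symbol is set, and the header supplies static inline no-op stubs for the disabled case so call sites never need their own #ifdef CONFIG_SYSCTL. A minimal sketch of that pattern follows; the "foo" names are hypothetical and only illustrate the idiom, they are not part of this patch.

/* Makefile fragment (hypothetical "foo" filesystem):
 *   obj-$(CONFIG_FOO_FS) += foo.o
 *   foo-objs             := inode.o dir.o
 *   foo-$(CONFIG_SYSCTL) += sysctl.o
 */

/* foo_int.h: declare the real functions when sysctl support is built in,
 * otherwise provide empty inline stubs so callers stay unconditional.
 */
#ifdef CONFIG_SYSCTL
void foo_sysctl_init(void);
void foo_sysctl_clean(void);
#else
static inline void foo_sysctl_init(void)
{
}
static inline void foo_sysctl_clean(void)
{
}
#endif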
diff --git a/fs/coda/cache.c b/fs/coda/cache.c index 201fc08a8b4f..3b8c4513118f 100644 --- a/fs/coda/cache.c +++ b/fs/coda/cache.c @@ -21,7 +21,7 @@ #include <linux/spinlock.h> #include <linux/coda.h> -#include <linux/coda_psdev.h> +#include "coda_psdev.h" #include "coda_linux.h" #include "coda_cache.h" diff --git a/fs/coda/cnode.c b/fs/coda/cnode.c index 845b5a66952a..06855f6c7902 100644 --- a/fs/coda/cnode.c +++ b/fs/coda/cnode.c @@ -8,8 +8,8 @@ #include <linux/time.h> #include <linux/coda.h> -#include <linux/coda_psdev.h> #include <linux/pagemap.h> +#include "coda_psdev.h" #include "coda_linux.h" static inline int coda_fideq(struct CodaFid *fid1, struct CodaFid *fid2) @@ -137,11 +137,6 @@ struct inode *coda_fid_to_inode(struct CodaFid *fid, struct super_block *sb) struct inode *inode; unsigned long hash = coda_f2i(fid); - if ( !sb ) { - pr_warn("%s: no sb!\n", __func__); - return NULL; - } - inode = ilookup5(sb, hash, coda_test_inode, fid); if ( !inode ) return NULL; @@ -153,6 +148,16 @@ struct inode *coda_fid_to_inode(struct CodaFid *fid, struct super_block *sb) return inode; } +struct coda_file_info *coda_ftoc(struct file *file) +{ + struct coda_file_info *cfi = file->private_data; + + BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); + + return cfi; + +} + /* the CONTROL inode is made without asking attributes from Venus */ struct inode *coda_cnode_makectl(struct super_block *sb) { diff --git a/fs/coda/coda_fs_i.h b/fs/coda/coda_fs_i.h index d702ba1a2bf9..c99d574d1c43 100644 --- a/fs/coda/coda_fs_i.h +++ b/fs/coda/coda_fs_i.h @@ -42,8 +42,6 @@ struct coda_file_info { unsigned int cfi_mapcount; /* nr of times this file is mapped */ }; -#define CODA_FTOC(file) ((struct coda_file_info *)((file)->private_data)) - /* flags */ #define C_VATTR 0x1 /* Validity of vattr in inode */ #define C_FLUSH 0x2 /* used after a flush */ @@ -54,6 +52,7 @@ struct inode *coda_cnode_make(struct CodaFid *, struct super_block *); struct inode *coda_iget(struct super_block *sb, struct CodaFid *fid, struct coda_vattr *attr); struct inode *coda_cnode_makectl(struct super_block *sb); struct inode *coda_fid_to_inode(struct CodaFid *fid, struct super_block *sb); +struct coda_file_info *coda_ftoc(struct file *file); void coda_replace_fid(struct inode *, struct CodaFid *, struct CodaFid *); #endif diff --git a/fs/coda/coda_int.h b/fs/coda/coda_int.h index bb0b3e0ed6c2..f82b59c9dd28 100644 --- a/fs/coda/coda_int.h +++ b/fs/coda/coda_int.h @@ -13,9 +13,19 @@ extern int coda_fake_statfs; void coda_destroy_inodecache(void); int __init coda_init_inodecache(void); int coda_fsync(struct file *coda_file, loff_t start, loff_t end, int datasync); + +#ifdef CONFIG_SYSCTL void coda_sysctl_init(void); void coda_sysctl_clean(void); +#else +static inline void coda_sysctl_init(void) +{ +} +static inline void coda_sysctl_clean(void) +{ +} +#endif #endif /* _CODA_INT_ */ diff --git a/fs/coda/coda_linux.c b/fs/coda/coda_linux.c index f3d543dd9a98..2e1a5a192074 100644 --- a/fs/coda/coda_linux.c +++ b/fs/coda/coda_linux.c @@ -18,7 +18,7 @@ #include <linux/string.h> #include <linux/coda.h> -#include <linux/coda_psdev.h> +#include "coda_psdev.h" #include "coda_linux.h" /* initialize the debugging variables */ @@ -66,6 +66,25 @@ unsigned short coda_flags_to_cflags(unsigned short flags) return coda_flags; } +static struct timespec64 coda_to_timespec64(struct coda_timespec ts) +{ + struct timespec64 ts64 = { + .tv_sec = ts.tv_sec, + .tv_nsec = ts.tv_nsec, + }; + + return ts64; +} + +static struct coda_timespec timespec64_to_coda(struct timespec64 
ts64) +{ + struct coda_timespec ts = { + .tv_sec = ts64.tv_sec, + .tv_nsec = ts64.tv_nsec, + }; + + return ts; +} /* utility functions below */ void coda_vattr_to_iattr(struct inode *inode, struct coda_vattr *attr) @@ -105,11 +124,11 @@ void coda_vattr_to_iattr(struct inode *inode, struct coda_vattr *attr) if (attr->va_size != -1) inode->i_blocks = (attr->va_size + 511) >> 9; if (attr->va_atime.tv_sec != -1) - inode->i_atime = timespec_to_timespec64(attr->va_atime); + inode->i_atime = coda_to_timespec64(attr->va_atime); if (attr->va_mtime.tv_sec != -1) - inode->i_mtime = timespec_to_timespec64(attr->va_mtime); + inode->i_mtime = coda_to_timespec64(attr->va_mtime); if (attr->va_ctime.tv_sec != -1) - inode->i_ctime = timespec_to_timespec64(attr->va_ctime); + inode->i_ctime = coda_to_timespec64(attr->va_ctime); } @@ -130,12 +149,12 @@ void coda_iattr_to_vattr(struct iattr *iattr, struct coda_vattr *vattr) vattr->va_uid = (vuid_t) -1; vattr->va_gid = (vgid_t) -1; vattr->va_size = (off_t) -1; - vattr->va_atime.tv_sec = (time_t) -1; - vattr->va_atime.tv_nsec = (time_t) -1; - vattr->va_mtime.tv_sec = (time_t) -1; - vattr->va_mtime.tv_nsec = (time_t) -1; - vattr->va_ctime.tv_sec = (time_t) -1; - vattr->va_ctime.tv_nsec = (time_t) -1; + vattr->va_atime.tv_sec = (int64_t) -1; + vattr->va_atime.tv_nsec = (long) -1; + vattr->va_mtime.tv_sec = (int64_t) -1; + vattr->va_mtime.tv_nsec = (long) -1; + vattr->va_ctime.tv_sec = (int64_t) -1; + vattr->va_ctime.tv_nsec = (long) -1; vattr->va_type = C_VNON; vattr->va_fileid = -1; vattr->va_gen = -1; @@ -175,13 +194,13 @@ void coda_iattr_to_vattr(struct iattr *iattr, struct coda_vattr *vattr) vattr->va_size = iattr->ia_size; } if ( valid & ATTR_ATIME ) { - vattr->va_atime = timespec64_to_timespec(iattr->ia_atime); + vattr->va_atime = timespec64_to_coda(iattr->ia_atime); } if ( valid & ATTR_MTIME ) { - vattr->va_mtime = timespec64_to_timespec(iattr->ia_mtime); + vattr->va_mtime = timespec64_to_coda(iattr->ia_mtime); } if ( valid & ATTR_CTIME ) { - vattr->va_ctime = timespec64_to_timespec(iattr->ia_ctime); + vattr->va_ctime = timespec64_to_coda(iattr->ia_ctime); } } diff --git a/fs/coda/coda_linux.h b/fs/coda/coda_linux.h index 126155cadfa9..d5ebd36fb2cc 100644 --- a/fs/coda/coda_linux.h +++ b/fs/coda/coda_linux.h @@ -59,22 +59,6 @@ void coda_vattr_to_iattr(struct inode *, struct coda_vattr *); void coda_iattr_to_vattr(struct iattr *, struct coda_vattr *); unsigned short coda_flags_to_cflags(unsigned short); -/* sysctl.h */ -void coda_sysctl_init(void); -void coda_sysctl_clean(void); - -#define CODA_ALLOC(ptr, cast, size) do { \ - if (size < PAGE_SIZE) \ - ptr = kzalloc((unsigned long) size, GFP_KERNEL); \ - else \ - ptr = (cast)vzalloc((unsigned long) size); \ - if (!ptr) \ - pr_warn("kernel malloc returns 0 at %s:%d\n", __FILE__, __LINE__); \ -} while (0) - - -#define CODA_FREE(ptr, size) kvfree((ptr)) - /* inode to cnode access functions */ static inline struct coda_inode_info *ITOC(struct inode *inode) diff --git a/include/linux/coda_psdev.h b/fs/coda/coda_psdev.h index 15170954aa2b..801423cbbdfc 100644 --- a/include/linux/coda_psdev.h +++ b/fs/coda/coda_psdev.h @@ -3,11 +3,31 @@ #define __CODA_PSDEV_H #include <linux/backing-dev.h> +#include <linux/magic.h> #include <linux/mutex.h> -#include <uapi/linux/coda_psdev.h> + +#define CODA_PSDEV_MAJOR 67 +#define MAX_CODADEVS 5 /* how many do we allow */ struct kstatfs; +/* messages between coda filesystem in kernel and Venus */ +struct upc_req { + struct list_head uc_chain; + caddr_t uc_data; + u_short uc_flags; + 
u_short uc_inSize; /* Size is at most 5000 bytes */ + u_short uc_outSize; + u_short uc_opcode; /* copied from data to save lookup */ + int uc_unique; + wait_queue_head_t uc_sleep; /* process' wait queue */ +}; + +#define CODA_REQ_ASYNC 0x1 +#define CODA_REQ_READ 0x2 +#define CODA_REQ_WRITE 0x4 +#define CODA_REQ_ABORT 0x8 + /* communication pending/processing queues */ struct venus_comm { u_long vc_seq; @@ -19,7 +39,6 @@ struct venus_comm { struct mutex vc_mutex; }; - static inline struct venus_comm *coda_vcp(struct super_block *sb) { return (struct venus_comm *)((sb)->s_fs_info); @@ -30,37 +49,38 @@ int venus_rootfid(struct super_block *sb, struct CodaFid *fidp); int venus_getattr(struct super_block *sb, struct CodaFid *fid, struct coda_vattr *attr); int venus_setattr(struct super_block *, struct CodaFid *, struct coda_vattr *); -int venus_lookup(struct super_block *sb, struct CodaFid *fid, - const char *name, int length, int *type, +int venus_lookup(struct super_block *sb, struct CodaFid *fid, + const char *name, int length, int *type, struct CodaFid *resfid); int venus_close(struct super_block *sb, struct CodaFid *fid, int flags, kuid_t uid); int venus_open(struct super_block *sb, struct CodaFid *fid, int flags, struct file **f); -int venus_mkdir(struct super_block *sb, struct CodaFid *dirfid, - const char *name, int length, +int venus_mkdir(struct super_block *sb, struct CodaFid *dirfid, + const char *name, int length, struct CodaFid *newfid, struct coda_vattr *attrs); -int venus_create(struct super_block *sb, struct CodaFid *dirfid, +int venus_create(struct super_block *sb, struct CodaFid *dirfid, const char *name, int length, int excl, int mode, - struct CodaFid *newfid, struct coda_vattr *attrs) ; -int venus_rmdir(struct super_block *sb, struct CodaFid *dirfid, + struct CodaFid *newfid, struct coda_vattr *attrs); +int venus_rmdir(struct super_block *sb, struct CodaFid *dirfid, const char *name, int length); -int venus_remove(struct super_block *sb, struct CodaFid *dirfid, +int venus_remove(struct super_block *sb, struct CodaFid *dirfid, const char *name, int length); -int venus_readlink(struct super_block *sb, struct CodaFid *fid, +int venus_readlink(struct super_block *sb, struct CodaFid *fid, char *buffer, int *length); -int venus_rename(struct super_block *, struct CodaFid *new_fid, - struct CodaFid *old_fid, size_t old_length, - size_t new_length, const char *old_name, +int venus_rename(struct super_block *sb, struct CodaFid *new_fid, + struct CodaFid *old_fid, size_t old_length, + size_t new_length, const char *old_name, const char *new_name); -int venus_link(struct super_block *sb, struct CodaFid *fid, +int venus_link(struct super_block *sb, struct CodaFid *fid, struct CodaFid *dirfid, const char *name, int len ); int venus_symlink(struct super_block *sb, struct CodaFid *fid, const char *name, int len, const char *symname, int symlen); int venus_access(struct super_block *sb, struct CodaFid *fid, int mask); int venus_pioctl(struct super_block *sb, struct CodaFid *fid, unsigned int cmd, struct PioctlData *data); -int coda_downcall(struct venus_comm *vcp, int opcode, union outputArgs *out); +int coda_downcall(struct venus_comm *vcp, int opcode, union outputArgs *out, + size_t nbytes); int venus_fsync(struct super_block *sb, struct CodaFid *fid); int venus_statfs(struct dentry *dentry, struct kstatfs *sfs); diff --git a/fs/coda/dir.c b/fs/coda/dir.c index 00876ddadb43..ca40c2556ba6 100644 --- a/fs/coda/dir.c +++ b/fs/coda/dir.c @@ -23,7 +23,7 @@ #include <linux/uaccess.h> #include 
<linux/coda.h> -#include <linux/coda_psdev.h> +#include "coda_psdev.h" #include "coda_linux.h" #include "coda_cache.h" @@ -47,8 +47,8 @@ static struct dentry *coda_lookup(struct inode *dir, struct dentry *entry, unsig int type = 0; if (length > CODA_MAXNAMLEN) { - pr_err("name too long: lookup, %s (%*s)\n", - coda_i2s(dir), (int)length, name); + pr_err("name too long: lookup, %s %zu\n", + coda_i2s(dir), length); return ERR_PTR(-ENAMETOOLONG); } @@ -356,8 +356,7 @@ static int coda_venus_readdir(struct file *coda_file, struct dir_context *ctx) ino_t ino; int ret; - cfi = CODA_FTOC(coda_file); - BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); + cfi = coda_ftoc(coda_file); host_file = cfi->cfi_container; cii = ITOC(file_inode(coda_file)); @@ -426,8 +425,7 @@ static int coda_readdir(struct file *coda_file, struct dir_context *ctx) struct file *host_file; int ret; - cfi = CODA_FTOC(coda_file); - BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); + cfi = coda_ftoc(coda_file); host_file = cfi->cfi_container; if (host_file->f_op->iterate || host_file->f_op->iterate_shared) { diff --git a/fs/coda/file.c b/fs/coda/file.c index 1cbc1f2298ee..0dbd13ab72e3 100644 --- a/fs/coda/file.c +++ b/fs/coda/file.c @@ -22,18 +22,22 @@ #include <linux/uaccess.h> #include <linux/coda.h> -#include <linux/coda_psdev.h> - +#include "coda_psdev.h" #include "coda_linux.h" #include "coda_int.h" +struct coda_vm_ops { + atomic_t refcnt; + struct file *coda_file; + const struct vm_operations_struct *host_vm_ops; + struct vm_operations_struct vm_ops; +}; + static ssize_t coda_file_read_iter(struct kiocb *iocb, struct iov_iter *to) { struct file *coda_file = iocb->ki_filp; - struct coda_file_info *cfi = CODA_FTOC(coda_file); - - BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); + struct coda_file_info *cfi = coda_ftoc(coda_file); return vfs_iter_read(cfi->cfi_container, to, &iocb->ki_pos, 0); } @@ -43,12 +47,10 @@ coda_file_write_iter(struct kiocb *iocb, struct iov_iter *to) { struct file *coda_file = iocb->ki_filp; struct inode *coda_inode = file_inode(coda_file); - struct coda_file_info *cfi = CODA_FTOC(coda_file); + struct coda_file_info *cfi = coda_ftoc(coda_file); struct file *host_file; ssize_t ret; - BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); - host_file = cfi->cfi_container; file_start_write(host_file); inode_lock(coda_inode); @@ -61,6 +63,34 @@ coda_file_write_iter(struct kiocb *iocb, struct iov_iter *to) return ret; } +static void +coda_vm_open(struct vm_area_struct *vma) +{ + struct coda_vm_ops *cvm_ops = + container_of(vma->vm_ops, struct coda_vm_ops, vm_ops); + + atomic_inc(&cvm_ops->refcnt); + + if (cvm_ops->host_vm_ops && cvm_ops->host_vm_ops->open) + cvm_ops->host_vm_ops->open(vma); +} + +static void +coda_vm_close(struct vm_area_struct *vma) +{ + struct coda_vm_ops *cvm_ops = + container_of(vma->vm_ops, struct coda_vm_ops, vm_ops); + + if (cvm_ops->host_vm_ops && cvm_ops->host_vm_ops->close) + cvm_ops->host_vm_ops->close(vma); + + if (atomic_dec_and_test(&cvm_ops->refcnt)) { + vma->vm_ops = cvm_ops->host_vm_ops; + fput(cvm_ops->coda_file); + kfree(cvm_ops); + } +} + static int coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) { @@ -68,14 +98,22 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) struct coda_inode_info *cii; struct file *host_file; struct inode *coda_inode, *host_inode; + struct coda_vm_ops *cvm_ops; + int ret; - cfi = CODA_FTOC(coda_file); - BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); + cfi = coda_ftoc(coda_file); host_file = cfi->cfi_container; if 
(!host_file->f_op->mmap) return -ENODEV; + if (WARN_ON(coda_file != vma->vm_file)) + return -EIO; + + cvm_ops = kmalloc(sizeof(struct coda_vm_ops), GFP_KERNEL); + if (!cvm_ops) + return -ENOMEM; + coda_inode = file_inode(coda_file); host_inode = file_inode(host_file); @@ -89,6 +127,7 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) * the container file on us! */ else if (coda_inode->i_mapping != host_inode->i_mapping) { spin_unlock(&cii->c_lock); + kfree(cvm_ops); return -EBUSY; } @@ -97,7 +136,29 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) cfi->cfi_mapcount++; spin_unlock(&cii->c_lock); - return call_mmap(host_file, vma); + vma->vm_file = get_file(host_file); + ret = call_mmap(vma->vm_file, vma); + + if (ret) { + /* if call_mmap fails, our caller will put coda_file so we + * should drop the reference to the host_file that we got. + */ + fput(host_file); + kfree(cvm_ops); + } else { + /* here we add redirects for the open/close vm_operations */ + cvm_ops->host_vm_ops = vma->vm_ops; + if (vma->vm_ops) + cvm_ops->vm_ops = *vma->vm_ops; + + cvm_ops->vm_ops.open = coda_vm_open; + cvm_ops->vm_ops.close = coda_vm_close; + cvm_ops->coda_file = coda_file; + atomic_set(&cvm_ops->refcnt, 1); + + vma->vm_ops = &cvm_ops->vm_ops; + } + return ret; } int coda_open(struct inode *coda_inode, struct file *coda_file) @@ -142,8 +203,7 @@ int coda_release(struct inode *coda_inode, struct file *coda_file) struct inode *host_inode; int err; - cfi = CODA_FTOC(coda_file); - BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); + cfi = coda_ftoc(coda_file); err = venus_close(coda_inode->i_sb, coda_i2f(coda_inode), coda_flags, coda_file->f_cred->fsuid); @@ -185,8 +245,7 @@ int coda_fsync(struct file *coda_file, loff_t start, loff_t end, int datasync) return err; inode_lock(coda_inode); - cfi = CODA_FTOC(coda_file); - BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); + cfi = coda_ftoc(coda_file); host_file = cfi->cfi_container; err = vfs_fsync(host_file, datasync); @@ -207,4 +266,3 @@ const struct file_operations coda_file_operations = { .fsync = coda_fsync, .splice_read = generic_file_splice_read, }; - diff --git a/fs/coda/inode.c b/fs/coda/inode.c index 23f6ebd08e80..321f56e487cb 100644 --- a/fs/coda/inode.c +++ b/fs/coda/inode.c @@ -27,7 +27,7 @@ #include <linux/vmalloc.h> #include <linux/coda.h> -#include <linux/coda_psdev.h> +#include "coda_psdev.h" #include "coda_linux.h" #include "coda_cache.h" @@ -236,6 +236,7 @@ static void coda_put_super(struct super_block *sb) vcp->vc_sb = NULL; sb->s_fs_info = NULL; mutex_unlock(&vcp->vc_mutex); + mutex_destroy(&vcp->vc_mutex); pr_info("Bye bye.\n"); } diff --git a/fs/coda/pioctl.c b/fs/coda/pioctl.c index e0c17b7dccce..644d48c12ce8 100644 --- a/fs/coda/pioctl.c +++ b/fs/coda/pioctl.c @@ -20,8 +20,7 @@ #include <linux/uaccess.h> #include <linux/coda.h> -#include <linux/coda_psdev.h> - +#include "coda_psdev.h" #include "coda_linux.h" /* pioctl ops */ diff --git a/fs/coda/psdev.c b/fs/coda/psdev.c index 0ceef32e6fae..ebfbbea9fa48 100644 --- a/fs/coda/psdev.c +++ b/fs/coda/psdev.c @@ -38,8 +38,7 @@ #include <linux/uaccess.h> #include <linux/coda.h> -#include <linux/coda_psdev.h> - +#include "coda_psdev.h" #include "coda_linux.h" #include "coda_int.h" @@ -100,8 +99,12 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf, ssize_t retval = 0, count = 0; int error; + /* make sure there is enough to copy out the (opcode, unique) values */ + if (nbytes < (2 * sizeof(u_int32_t))) + return -EINVAL; + /* Peek at the opcode, 
uniquefier */ - if (copy_from_user(&hdr, buf, 2 * sizeof(u_long))) + if (copy_from_user(&hdr, buf, 2 * sizeof(u_int32_t))) return -EFAULT; if (DOWNCALL(hdr.opcode)) { @@ -119,17 +122,21 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf, hdr.opcode, hdr.unique); nbytes = size; } - CODA_ALLOC(dcbuf, union outputArgs *, nbytes); + dcbuf = kvmalloc(nbytes, GFP_KERNEL); + if (!dcbuf) { + retval = -ENOMEM; + goto out; + } if (copy_from_user(dcbuf, buf, nbytes)) { - CODA_FREE(dcbuf, nbytes); + kvfree(dcbuf); retval = -EFAULT; goto out; } /* what downcall errors does Venus handle ? */ - error = coda_downcall(vcp, hdr.opcode, dcbuf); + error = coda_downcall(vcp, hdr.opcode, dcbuf, nbytes); - CODA_FREE(dcbuf, nbytes); + kvfree(dcbuf); if (error) { pr_warn("%s: coda_downcall error: %d\n", __func__, error); @@ -182,8 +189,11 @@ static ssize_t coda_psdev_write(struct file *file, const char __user *buf, if (req->uc_opcode == CODA_OPEN_BY_FD) { struct coda_open_by_fd_out *outp = (struct coda_open_by_fd_out *)req->uc_data; - if (!outp->oh.result) + if (!outp->oh.result) { outp->fh = fget(outp->fd); + if (!outp->fh) + return -EBADF; + } } wake_up(&req->uc_sleep); @@ -252,7 +262,7 @@ static ssize_t coda_psdev_read(struct file * file, char __user * buf, goto out; } - CODA_FREE(req->uc_data, sizeof(struct coda_in_hdr)); + kvfree(req->uc_data); kfree(req); out: mutex_unlock(&vcp->vc_mutex); @@ -314,7 +324,7 @@ static int coda_psdev_release(struct inode * inode, struct file * file) /* Async requests need to be freed here */ if (req->uc_flags & CODA_REQ_ASYNC) { - CODA_FREE(req->uc_data, sizeof(struct coda_in_hdr)); + kvfree(req->uc_data); kfree(req); continue; } @@ -347,13 +357,13 @@ static const struct file_operations coda_psdev_fops = { .llseek = noop_llseek, }; -static int init_coda_psdev(void) +static int __init init_coda_psdev(void) { int i, err = 0; if (register_chrdev(CODA_PSDEV_MAJOR, "coda", &coda_psdev_fops)) { pr_err("%s: unable to get major %d\n", __func__, CODA_PSDEV_MAJOR); - return -EIO; + return -EIO; } coda_psdev_class = class_create(THIS_MODULE, "coda"); if (IS_ERR(coda_psdev_class)) { @@ -378,7 +388,7 @@ MODULE_AUTHOR("Jan Harkes, Peter J. 
Braam"); MODULE_DESCRIPTION("Coda Distributed File System VFS interface"); MODULE_ALIAS_CHARDEV_MAJOR(CODA_PSDEV_MAJOR); MODULE_LICENSE("GPL"); -MODULE_VERSION("6.6"); +MODULE_VERSION("6.11"); static int __init init_coda(void) { diff --git a/fs/coda/symlink.c b/fs/coda/symlink.c index 202297d156df..8907d0508198 100644 --- a/fs/coda/symlink.c +++ b/fs/coda/symlink.c @@ -17,8 +17,7 @@ #include <linux/pagemap.h> #include <linux/coda.h> -#include <linux/coda_psdev.h> - +#include "coda_psdev.h" #include "coda_linux.h" static int coda_symlink_filler(struct file *file, struct page *page) diff --git a/fs/coda/sysctl.c b/fs/coda/sysctl.c index 0301d45000a8..fda3b702b1c5 100644 --- a/fs/coda/sysctl.c +++ b/fs/coda/sysctl.c @@ -12,7 +12,6 @@ #include "coda_int.h" -#ifdef CONFIG_SYSCTL static struct ctl_table_header *fs_table_header; static struct ctl_table coda_table[] = { @@ -62,13 +61,3 @@ void coda_sysctl_clean(void) fs_table_header = NULL; } } - -#else -void coda_sysctl_init(void) -{ -} - -void coda_sysctl_clean(void) -{ -} -#endif diff --git a/fs/coda/upcall.c b/fs/coda/upcall.c index 1175a1722411..15c0e4fdb0e3 100644 --- a/fs/coda/upcall.c +++ b/fs/coda/upcall.c @@ -33,7 +33,7 @@ #include <linux/vfs.h> #include <linux/coda.h> -#include <linux/coda_psdev.h> +#include "coda_psdev.h" #include "coda_linux.h" #include "coda_cache.h" @@ -46,7 +46,7 @@ static void *alloc_upcall(int opcode, int size) { union inputArgs *inp; - CODA_ALLOC(inp, union inputArgs *, size); + inp = kvzalloc(size, GFP_KERNEL); if (!inp) return ERR_PTR(-ENOMEM); @@ -85,7 +85,7 @@ int venus_rootfid(struct super_block *sb, struct CodaFid *fidp) if (!error) *fidp = outp->coda_root.VFid; - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -104,7 +104,7 @@ int venus_getattr(struct super_block *sb, struct CodaFid *fid, if (!error) *attr = outp->coda_getattr.attr; - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -123,7 +123,7 @@ int venus_setattr(struct super_block *sb, struct CodaFid *fid, error = coda_upcall(coda_vcp(sb), insize, &outsize, inp); - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -153,7 +153,7 @@ int venus_lookup(struct super_block *sb, struct CodaFid *fid, *type = outp->coda_lookup.vtype; } - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -173,7 +173,7 @@ int venus_close(struct super_block *sb, struct CodaFid *fid, int flags, error = coda_upcall(coda_vcp(sb), insize, &outsize, inp); - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -194,7 +194,7 @@ int venus_open(struct super_block *sb, struct CodaFid *fid, if (!error) *fh = outp->coda_open_by_fd.fh; - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -224,7 +224,7 @@ int venus_mkdir(struct super_block *sb, struct CodaFid *dirfid, *newfid = outp->coda_mkdir.VFid; } - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -262,7 +262,7 @@ int venus_rename(struct super_block *sb, struct CodaFid *old_fid, error = coda_upcall(coda_vcp(sb), insize, &outsize, inp); - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -295,7 +295,7 @@ int venus_create(struct super_block *sb, struct CodaFid *dirfid, *newfid = outp->coda_create.VFid; } - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -318,7 +318,7 @@ int venus_rmdir(struct super_block *sb, struct CodaFid *dirfid, error = coda_upcall(coda_vcp(sb), insize, &outsize, inp); - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -340,7 +340,7 @@ int venus_remove(struct super_block *sb, struct CodaFid *dirfid, error = coda_upcall(coda_vcp(sb), 
insize, &outsize, inp); - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -370,7 +370,7 @@ int venus_readlink(struct super_block *sb, struct CodaFid *fid, *(buffer + retlen) = '\0'; } - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -398,7 +398,7 @@ int venus_link(struct super_block *sb, struct CodaFid *fid, error = coda_upcall(coda_vcp(sb), insize, &outsize, inp); - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -433,7 +433,7 @@ int venus_symlink(struct super_block *sb, struct CodaFid *fid, error = coda_upcall(coda_vcp(sb), insize, &outsize, inp); - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -449,7 +449,7 @@ int venus_fsync(struct super_block *sb, struct CodaFid *fid) inp->coda_fsync.VFid = *fid; error = coda_upcall(coda_vcp(sb), insize, &outsize, inp); - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -467,7 +467,7 @@ int venus_access(struct super_block *sb, struct CodaFid *fid, int mask) error = coda_upcall(coda_vcp(sb), insize, &outsize, inp); - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -543,7 +543,7 @@ int venus_pioctl(struct super_block *sb, struct CodaFid *fid, } exit: - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -553,7 +553,7 @@ int venus_statfs(struct dentry *dentry, struct kstatfs *sfs) union outputArgs *outp; int insize, outsize, error; - insize = max_t(unsigned int, INSIZE(statfs), OUTSIZE(statfs)); + insize = SIZE(statfs); UPARG(CODA_STATFS); error = coda_upcall(coda_vcp(dentry->d_sb), insize, &outsize, inp); @@ -565,7 +565,7 @@ int venus_statfs(struct dentry *dentry, struct kstatfs *sfs) sfs->f_ffree = outp->coda_statfs.stat.f_ffree; } - CODA_FREE(inp, insize); + kvfree(inp); return error; } @@ -743,7 +743,7 @@ static int coda_upcall(struct venus_comm *vcp, sig_req = kmalloc(sizeof(struct upc_req), GFP_KERNEL); if (!sig_req) goto exit; - CODA_ALLOC((sig_req->uc_data), char *, sizeof(struct coda_in_hdr)); + sig_req->uc_data = kvzalloc(sizeof(struct coda_in_hdr), GFP_KERNEL); if (!sig_req->uc_data) { kfree(sig_req); goto exit; @@ -804,12 +804,44 @@ exit: * * CODA_REPLACE -- replace one CodaFid with another throughout the name cache */ -int coda_downcall(struct venus_comm *vcp, int opcode, union outputArgs *out) +int coda_downcall(struct venus_comm *vcp, int opcode, union outputArgs *out, + size_t nbytes) { struct inode *inode = NULL; struct CodaFid *fid = NULL, *newfid; struct super_block *sb; + /* + * Make sure we have received enough data from the cache + * manager to populate the necessary fields in the buffer + */ + switch (opcode) { + case CODA_PURGEUSER: + if (nbytes < sizeof(struct coda_purgeuser_out)) + return -EINVAL; + break; + + case CODA_ZAPDIR: + if (nbytes < sizeof(struct coda_zapdir_out)) + return -EINVAL; + break; + + case CODA_ZAPFILE: + if (nbytes < sizeof(struct coda_zapfile_out)) + return -EINVAL; + break; + + case CODA_PURGEFID: + if (nbytes < sizeof(struct coda_purgefid_out)) + return -EINVAL; + break; + + case CODA_REPLACE: + if (nbytes < sizeof(struct coda_replace_out)) + return -EINVAL; + break; + } + /* Handle invalidation requests. 
*/ mutex_lock(&vcp->vc_mutex); sb = vcp->vc_sb; diff --git a/fs/coredump.c b/fs/coredump.c index e42e17e55bfd..b1ea7dfbd149 100644 --- a/fs/coredump.c +++ b/fs/coredump.c @@ -7,6 +7,7 @@ #include <linux/stat.h> #include <linux/fcntl.h> #include <linux/swap.h> +#include <linux/ctype.h> #include <linux/string.h> #include <linux/init.h> #include <linux/pagemap.h> @@ -187,11 +188,13 @@ put_exe_file: * name into corename, which must have space for at least * CORENAME_MAX_SIZE bytes plus one byte for the zero terminator. */ -static int format_corename(struct core_name *cn, struct coredump_params *cprm) +static int format_corename(struct core_name *cn, struct coredump_params *cprm, + size_t **argv, int *argc) { const struct cred *cred = current_cred(); const char *pat_ptr = core_pattern; int ispipe = (*pat_ptr == '|'); + bool was_space = false; int pid_in_pattern = 0; int err = 0; @@ -201,12 +204,35 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm) return -ENOMEM; cn->corename[0] = '\0'; - if (ispipe) + if (ispipe) { + int argvs = sizeof(core_pattern) / 2; + (*argv) = kmalloc_array(argvs, sizeof(**argv), GFP_KERNEL); + if (!(*argv)) + return -ENOMEM; + (*argv)[(*argc)++] = 0; ++pat_ptr; + } /* Repeat as long as we have more pattern to process and more output space */ while (*pat_ptr) { + /* + * Split on spaces before doing template expansion so that + * %e and %E don't get split if they have spaces in them + */ + if (ispipe) { + if (isspace(*pat_ptr)) { + was_space = true; + pat_ptr++; + continue; + } else if (was_space) { + was_space = false; + err = cn_printf(cn, "%c", '\0'); + if (err) + return err; + (*argv)[(*argc)++] = cn->used; + } + } if (*pat_ptr != '%') { err = cn_printf(cn, "%c", *pat_ptr++); } else { @@ -546,6 +572,8 @@ void do_coredump(const kernel_siginfo_t *siginfo) struct cred *cred; int retval = 0; int ispipe; + size_t *argv = NULL; + int argc = 0; struct files_struct *displaced; /* require nonrelative corefile path and be extra careful */ bool need_suid_safe = false; @@ -592,9 +620,10 @@ void do_coredump(const kernel_siginfo_t *siginfo) old_cred = override_creds(cred); - ispipe = format_corename(&cn, &cprm); + ispipe = format_corename(&cn, &cprm, &argv, &argc); if (ispipe) { + int argi; int dump_count; char **helper_argv; struct subprocess_info *sub_info; @@ -637,12 +666,16 @@ void do_coredump(const kernel_siginfo_t *siginfo) goto fail_dropcount; } - helper_argv = argv_split(GFP_KERNEL, cn.corename, NULL); + helper_argv = kmalloc_array(argc + 1, sizeof(*helper_argv), + GFP_KERNEL); if (!helper_argv) { printk(KERN_WARNING "%s failed to allocate memory\n", __func__); goto fail_dropcount; } + for (argi = 0; argi < argc; argi++) + helper_argv[argi] = cn.corename + argv[argi]; + helper_argv[argi] = NULL; retval = -ENOMEM; sub_info = call_usermodehelper_setup(helper_argv[0], @@ -652,7 +685,7 @@ void do_coredump(const kernel_siginfo_t *siginfo) retval = call_usermodehelper_exec(sub_info, UMH_WAIT_EXEC); - argv_free(helper_argv); + kfree(helper_argv); if (retval) { printk(KERN_INFO "Core dump to |%s pipe failed\n", cn.corename); @@ -766,6 +799,7 @@ fail_dropcount: if (ispipe) atomic_dec(&core_dump_count); fail_unlock: + kfree(argv); kfree(cn.corename); coredump_finish(mm, core_dumped); revert_creds(old_cred); diff --git a/fs/eventpoll.c b/fs/eventpoll.c index c6f513100cc9..38d2e8a29f0d 100644 --- a/fs/eventpoll.c +++ b/fs/eventpoll.c @@ -291,7 +291,6 @@ static LIST_HEAD(tfile_check_list); #include <linux/sysctl.h> -static long zero; static long long_max = 
LONG_MAX; struct ctl_table epoll_table[] = { @@ -301,7 +300,7 @@ struct ctl_table epoll_table[] = { .maxlen = sizeof(max_user_watches), .mode = 0644, .proc_handler = proc_doulongvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &long_max, }, { } @@ -2325,7 +2324,7 @@ SYSCALL_DEFINE6(epoll_pwait, int, epfd, struct epoll_event __user *, events, error = do_epoll_wait(epfd, events, maxevents, timeout); - restore_user_sigmask(sigmask, &sigsaved); + restore_user_sigmask(sigmask, &sigsaved, error == -EINTR); return error; } @@ -2350,7 +2349,7 @@ COMPAT_SYSCALL_DEFINE6(epoll_pwait, int, epfd, err = do_epoll_wait(epfd, events, maxevents, timeout); - restore_user_sigmask(sigmask, &sigsaved); + restore_user_sigmask(sigmask, &sigsaved, err == -EINTR); return err; } diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c index 052dba97d814..e1c078a7ed04 100644 --- a/fs/f2fs/data.c +++ b/fs/f2fs/data.c @@ -2915,7 +2915,7 @@ int f2fs_migrate_page(struct address_space *mapping, /* one extra reference was held for atomic_write page */ extra_count = atomic_written ? 1 : 0; rc = migrate_page_move_mapping(mapping, newpage, - page, mode, extra_count); + page, extra_count); if (rc != MIGRATEPAGE_SUCCESS) { if (atomic_written) mutex_unlock(&fi->inmem_lock); diff --git a/fs/hfsplus/xattr.c b/fs/hfsplus/xattr.c index d5403b4004c9..bb0b27d88e50 100644 --- a/fs/hfsplus/xattr.c +++ b/fs/hfsplus/xattr.c @@ -407,7 +407,7 @@ static int copy_name(char *buffer, const char *xattr_name, int name_len) int offset = 0; if (!is_known_namespace(xattr_name)) { - strncpy(buffer, XATTR_MAC_OSX_PREFIX, XATTR_MAC_OSX_PREFIX_LEN); + memcpy(buffer, XATTR_MAC_OSX_PREFIX, XATTR_MAC_OSX_PREFIX_LEN); offset += XATTR_MAC_OSX_PREFIX_LEN; len += XATTR_MAC_OSX_PREFIX_LEN; } diff --git a/fs/io_uring.c b/fs/io_uring.c index 1fc7ec573019..6f76b31093e0 100644 --- a/fs/io_uring.c +++ b/fs/io_uring.c @@ -2419,11 +2419,12 @@ static int io_cqring_wait(struct io_ring_ctx *ctx, int min_events, } ret = wait_event_interruptible(ctx->wait, io_cqring_events(ring) >= min_events); - if (ret == -ERESTARTSYS) - ret = -EINTR; if (sig) - restore_user_sigmask(sig, &sigsaved); + restore_user_sigmask(sig, &sigsaved, ret == -ERESTARTSYS); + + if (ret == -ERESTARTSYS) + ret = -EINTR; return READ_ONCE(ring->r.head) == READ_ONCE(ring->r.tail) ? 
ret : 0; } diff --git a/fs/iomap.c b/fs/iomap.c index 23ef63fd1669..52b0a6ee4f83 100644 --- a/fs/iomap.c +++ b/fs/iomap.c @@ -562,7 +562,7 @@ iomap_migrate_page(struct address_space *mapping, struct page *newpage, { int ret; - ret = migrate_page_move_mapping(mapping, newpage, page, mode, 0); + ret = migrate_page_move_mapping(mapping, newpage, page, 0); if (ret != MIGRATEPAGE_SUCCESS) return ret; diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c index 7d8654a1472e..f8fb89b10227 100644 --- a/fs/jffs2/file.c +++ b/fs/jffs2/file.c @@ -109,9 +109,9 @@ static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg) return ret; } -int jffs2_do_readpage_unlock(struct inode *inode, struct page *pg) +int jffs2_do_readpage_unlock(void *data, struct page *pg) { - int ret = jffs2_do_readpage_nolock(inode, pg); + int ret = jffs2_do_readpage_nolock(data, pg); unlock_page(pg); return ret; } diff --git a/fs/jffs2/fs.c b/fs/jffs2/fs.c index 112d85849db1..8a20ddd25f2d 100644 --- a/fs/jffs2/fs.c +++ b/fs/jffs2/fs.c @@ -687,7 +687,7 @@ unsigned char *jffs2_gc_fetch_page(struct jffs2_sb_info *c, struct page *pg; pg = read_cache_page(inode->i_mapping, offset >> PAGE_SHIFT, - (void *)jffs2_do_readpage_unlock, inode); + jffs2_do_readpage_unlock, inode); if (IS_ERR(pg)) return (void *)pg; diff --git a/fs/jffs2/os-linux.h b/fs/jffs2/os-linux.h index a2dbbb3f4c74..bd3d5f0ddc34 100644 --- a/fs/jffs2/os-linux.h +++ b/fs/jffs2/os-linux.h @@ -155,7 +155,7 @@ extern const struct file_operations jffs2_file_operations; extern const struct inode_operations jffs2_file_inode_operations; extern const struct address_space_operations jffs2_file_address_operations; int jffs2_fsync(struct file *, loff_t, loff_t, int); -int jffs2_do_readpage_unlock (struct inode *inode, struct page *pg); +int jffs2_do_readpage_unlock(void *data, struct page *pg); /* ioctl.c */ long jffs2_ioctl(struct file *, unsigned int, unsigned long); diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c index e6fde1a5c072..4b26aa7e4d78 100644 --- a/fs/notify/fanotify/fanotify.c +++ b/fs/notify/fanotify/fanotify.c @@ -288,10 +288,13 @@ struct fanotify_event *fanotify_alloc_event(struct fsnotify_group *group, /* * For queues with unlimited length lost events are not expected and * can possibly have security implications. Avoid losing events when - * memory is short. + * memory is short. For the limited size queues, avoid OOM killer in the + * target monitoring memcg as it may have security repercussion. */ if (group->max_events == UINT_MAX) gfp |= __GFP_NOFAIL; + else + gfp |= __GFP_RETRY_MAYFAIL; /* Whoever is interested in the event, pays for the allocation. */ memalloc_use_memcg(group->memcg); diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c index 2fda08b2b885..d510223d302c 100644 --- a/fs/notify/inotify/inotify_fsnotify.c +++ b/fs/notify/inotify/inotify_fsnotify.c @@ -90,9 +90,13 @@ int inotify_handle_event(struct fsnotify_group *group, i_mark = container_of(inode_mark, struct inotify_inode_mark, fsn_mark); - /* Whoever is interested in the event, pays for the allocation. */ + /* + * Whoever is interested in the event, pays for the allocation. Do not + * trigger OOM killer in the target monitoring memcg as it may have + * security repercussion. 
+ */ memalloc_use_memcg(group->memcg); - event = kmalloc(alloc_len, GFP_KERNEL_ACCOUNT); + event = kmalloc(alloc_len, GFP_KERNEL_ACCOUNT | __GFP_RETRY_MAYFAIL); memalloc_unuse_memcg(); if (unlikely(!event)) { diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c index cce8de32779f..0b815178126e 100644 --- a/fs/notify/inotify/inotify_user.c +++ b/fs/notify/inotify/inotify_user.c @@ -45,8 +45,6 @@ struct kmem_cache *inotify_inode_mark_cachep __read_mostly; #include <linux/sysctl.h> -static int zero; - struct ctl_table inotify_table[] = { { .procname = "max_user_instances", @@ -54,7 +52,7 @@ struct ctl_table inotify_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "max_user_watches", @@ -62,7 +60,7 @@ struct ctl_table inotify_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "max_queued_events", @@ -70,7 +68,7 @@ struct ctl_table inotify_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero + .extra1 = SYSCTL_ZERO }, { } }; diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c index d1348fc4ca6d..2443224d0c42 100644 --- a/fs/ocfs2/alloc.c +++ b/fs/ocfs2/alloc.c @@ -5993,6 +5993,7 @@ int __ocfs2_flush_truncate_log(struct ocfs2_super *osb) struct buffer_head *data_alloc_bh = NULL; struct ocfs2_dinode *di; struct ocfs2_truncate_log *tl; + struct ocfs2_journal *journal = osb->journal; BUG_ON(inode_trylock(tl_inode)); @@ -6013,6 +6014,20 @@ int __ocfs2_flush_truncate_log(struct ocfs2_super *osb) goto out; } + /* Appending truncate log(TA) and and flushing truncate log(TF) are + * two separated transactions. They can be both committed but not + * checkpointed. If crash occurs then, both two transaction will be + * replayed with several already released to global bitmap clusters. + * Then truncate log will be replayed resulting in cluster double free. + */ + jbd2_journal_lock_updates(journal->j_journal); + status = jbd2_journal_flush(journal->j_journal); + jbd2_journal_unlock_updates(journal->j_journal); + if (status < 0) { + mlog_errno(status); + goto out; + } + data_alloc_inode = ocfs2_get_system_file_inode(osb, GLOBAL_BITMAP_SYSTEM_INODE, OCFS2_INVALID_SLOT); diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c index a4c905d6b575..3e0a93e799ea 100644 --- a/fs/ocfs2/aops.c +++ b/fs/ocfs2/aops.c @@ -2139,13 +2139,30 @@ static int ocfs2_dio_wr_get_block(struct inode *inode, sector_t iblock, struct ocfs2_dio_write_ctxt *dwc = NULL; struct buffer_head *di_bh = NULL; u64 p_blkno; - loff_t pos = iblock << inode->i_sb->s_blocksize_bits; + unsigned int i_blkbits = inode->i_sb->s_blocksize_bits; + loff_t pos = iblock << i_blkbits; + sector_t endblk = (i_size_read(inode) - 1) >> i_blkbits; unsigned len, total_len = bh_result->b_size; int ret = 0, first_get_block = 0; len = osb->s_clustersize - (pos & (osb->s_clustersize - 1)); len = min(total_len, len); + /* + * bh_result->b_size is count in get_more_blocks according to write + * "pos" and "end", we need map twice to return different buffer state: + * 1. area in file size, not set NEW; + * 2. area out file size, set NEW. 
+ * + * iblock endblk + * |--------|---------|---------|--------- + * |<-------area in file------->| + */ + + if ((iblock <= endblk) && + ((iblock + ((len - 1) >> i_blkbits)) > endblk)) + len = (endblk - iblock + 1) << i_blkbits; + mlog(0, "get block of %lu at %llu:%u req %u\n", inode->i_ino, pos, len, total_len); @@ -2229,6 +2246,9 @@ static int ocfs2_dio_wr_get_block(struct inode *inode, sector_t iblock, if (desc->c_needs_zero) set_buffer_new(bh_result); + if (iblock > endblk) + set_buffer_new(bh_result); + /* May sleep in end_io. It should not happen in a irq context. So defer * it to dio work queue. */ set_buffer_defer_completion(bh_result); diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c index 7a3a096856a8..7f74fcc6d7d9 100644 --- a/fs/ocfs2/cluster/heartbeat.c +++ b/fs/ocfs2/cluster/heartbeat.c @@ -1184,7 +1184,7 @@ bail: if (atomic_read(®->hr_steady_iterations) != 0) { if (atomic_dec_and_test(®->hr_unsteady_iterations)) { printk(KERN_NOTICE "o2hb: Unable to stabilize " - "heartbeart on region %s (%s)\n", + "heartbeat on region %s (%s)\n", config_item_name(®->hr_item), reg->hr_dev_name); atomic_set(®->hr_steady_iterations, 0); diff --git a/fs/ocfs2/cluster/quorum.c b/fs/ocfs2/cluster/quorum.c index 3d5d4b2b1356..5c424a099280 100644 --- a/fs/ocfs2/cluster/quorum.c +++ b/fs/ocfs2/cluster/quorum.c @@ -76,7 +76,7 @@ static void o2quo_fence_self(void) }; } -/* Indicate that a timeout occurred on a hearbeat region write. The +/* Indicate that a timeout occurred on a heartbeat region write. The * other nodes in the cluster may consider us dead at that time so we * want to "fence" ourselves so that we don't scribble on the disk * after they think they've recovered us. This can't solve all diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c index c599463d0694..c07c9aac537a 100644 --- a/fs/ocfs2/cluster/tcp.c +++ b/fs/ocfs2/cluster/tcp.c @@ -1762,7 +1762,7 @@ static void o2net_hb_node_up_cb(struct o2nm_node *node, int node_num, (msecs_to_jiffies(o2net_reconnect_delay()) + 1); if (node_num != o2nm_this_node()) { - /* believe it or not, accept and node hearbeating testing + /* believe it or not, accept and node heartbeating testing * can succeed for this node before we got here.. so * only use set_nn_state to clear the persistent error * if that hasn't already happened */ diff --git a/fs/ocfs2/dcache.c b/fs/ocfs2/dcache.c index 2d016937fdda..42a61eecdacd 100644 --- a/fs/ocfs2/dcache.c +++ b/fs/ocfs2/dcache.c @@ -296,6 +296,18 @@ int ocfs2_dentry_attach_lock(struct dentry *dentry, out_attach: spin_lock(&dentry_attach_lock); + if (unlikely(dentry->d_fsdata && !alias)) { + /* d_fsdata is set by a racing thread which is doing + * the same thing as this thread is doing. Leave the racing + * thread going ahead and we return here. + */ + spin_unlock(&dentry_attach_lock); + iput(dl->dl_inode); + ocfs2_lock_res_free(&dl->dl_lockres); + kfree(dl); + return 0; + } + dentry->d_fsdata = dl; dl->dl_count++; spin_unlock(&dentry_attach_lock); diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c index 810f841494ef..74b768ca1cd8 100644 --- a/fs/ocfs2/dlm/dlmmaster.c +++ b/fs/ocfs2/dlm/dlmmaster.c @@ -2161,7 +2161,7 @@ put: * think that $RECOVERY is currently mastered by a dead node. If so, * we wait a short time to allow that node to get notified by its own * heartbeat stack, then check again. 
All $RECOVERY lock resources - * mastered by dead nodes are purged when the hearbeat callback is + * mastered by dead nodes are purged when the heartbeat callback is * fired, so we can know for sure that it is safe to continue once * the node returns a live node or no node. */ static int dlm_pre_master_reco_lockres(struct dlm_ctxt *dlm, diff --git a/fs/ocfs2/dlm/dlmunlock.c b/fs/ocfs2/dlm/dlmunlock.c index e78657742bd8..3883633e82eb 100644 --- a/fs/ocfs2/dlm/dlmunlock.c +++ b/fs/ocfs2/dlm/dlmunlock.c @@ -90,7 +90,8 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm, enum dlm_status status; int actions = 0; int in_use; - u8 owner; + u8 owner; + int recovery_wait = 0; mlog(0, "master_node = %d, valblk = %d\n", master_node, flags & LKM_VALBLK); @@ -193,9 +194,12 @@ static enum dlm_status dlmunlock_common(struct dlm_ctxt *dlm, } if (flags & LKM_CANCEL) lock->cancel_pending = 0; - else - lock->unlock_pending = 0; - + else { + if (!lock->unlock_pending) + recovery_wait = 1; + else + lock->unlock_pending = 0; + } } /* get an extra ref on lock. if we are just switching @@ -229,6 +233,17 @@ leave: spin_unlock(&res->spinlock); wake_up(&res->wq); + if (recovery_wait) { + spin_lock(&res->spinlock); + /* Unlock request will directly succeed after owner dies, + * and the lock is already removed from grant list. We have to + * wait for RECOVERING done or we miss the chance to purge it + * since the removement is much faster than RECOVERING proc. + */ + __dlm_wait_on_lockres_flags(res, DLM_LOCK_RES_RECOVERING); + spin_unlock(&res->spinlock); + } + /* let the caller's final dlm_lock_put handle the actual kfree */ if (actions & DLM_UNLOCK_FREE_LOCK) { /* this should always be coupled with list removal */ diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c index b5fc5d3c7525..9317d04fb03b 100644 --- a/fs/ocfs2/dlmglue.c +++ b/fs/ocfs2/dlmglue.c @@ -434,7 +434,7 @@ static void ocfs2_update_lock_stats(struct ocfs2_lock_res *res, int level, struct ocfs2_mask_waiter *mw, int ret) { u32 usec; - ktime_t kt; + ktime_t last, kt; struct ocfs2_lock_stats *stats; if (level == LKM_PRMODE) @@ -444,7 +444,8 @@ static void ocfs2_update_lock_stats(struct ocfs2_lock_res *res, int level, else return; - kt = ktime_sub(ktime_get(), mw->mw_lock_start); + last = ktime_get(); + kt = ktime_sub(last, mw->mw_lock_start); usec = ktime_to_us(kt); stats->ls_gets++; @@ -460,6 +461,8 @@ static void ocfs2_update_lock_stats(struct ocfs2_lock_res *res, int level, if (ret) stats->ls_fail++; + + stats->ls_last = ktime_to_timespec(last).tv_sec; } static inline void ocfs2_track_lock_refresh(struct ocfs2_lock_res *lockres) @@ -2989,6 +2992,8 @@ struct ocfs2_dlm_debug *ocfs2_new_dlm_debug(void) kref_init(&dlm_debug->d_refcnt); INIT_LIST_HEAD(&dlm_debug->d_lockres_tracking); dlm_debug->d_locking_state = NULL; + dlm_debug->d_locking_filter = NULL; + dlm_debug->d_filter_secs = 0; out: return dlm_debug; } @@ -3079,17 +3084,41 @@ static void *ocfs2_dlm_seq_next(struct seq_file *m, void *v, loff_t *pos) * - Lock stats printed * New in version 3 * - Max time in lock stats is in usecs (instead of nsecs) + * New in version 4 + * - Add last pr/ex unlock times in secs */ -#define OCFS2_DLM_DEBUG_STR_VERSION 3 +#define OCFS2_DLM_DEBUG_STR_VERSION 4 static int ocfs2_dlm_seq_show(struct seq_file *m, void *v) { int i; char *lvb; + u32 now, last = 0; struct ocfs2_lock_res *lockres = v; + struct ocfs2_dlm_debug *dlm_debug = + ((struct ocfs2_dlm_seq_priv *)m->private)->p_dlm_debug; if (!lockres) return -EINVAL; + if (dlm_debug->d_filter_secs) { + now 
= ktime_to_timespec(ktime_get()).tv_sec; +#ifdef CONFIG_OCFS2_FS_STATS + if (lockres->l_lock_prmode.ls_last > + lockres->l_lock_exmode.ls_last) + last = lockres->l_lock_prmode.ls_last; + else + last = lockres->l_lock_exmode.ls_last; +#endif + /* + * Use d_filter_secs field to filter lock resources dump, + * the default d_filter_secs(0) value filters nothing, + * otherwise, only dump the last N seconds active lock + * resources. + */ + if ((now - last) > dlm_debug->d_filter_secs) + return 0; + } + seq_printf(m, "0x%x\t", OCFS2_DLM_DEBUG_STR_VERSION); if (lockres->l_type == OCFS2_LOCK_TYPE_DENTRY) @@ -3131,6 +3160,8 @@ static int ocfs2_dlm_seq_show(struct seq_file *m, void *v) # define lock_max_prmode(_l) ((_l)->l_lock_prmode.ls_max) # define lock_max_exmode(_l) ((_l)->l_lock_exmode.ls_max) # define lock_refresh(_l) ((_l)->l_lock_refresh) +# define lock_last_prmode(_l) ((_l)->l_lock_prmode.ls_last) +# define lock_last_exmode(_l) ((_l)->l_lock_exmode.ls_last) #else # define lock_num_prmode(_l) (0) # define lock_num_exmode(_l) (0) @@ -3141,6 +3172,8 @@ static int ocfs2_dlm_seq_show(struct seq_file *m, void *v) # define lock_max_prmode(_l) (0) # define lock_max_exmode(_l) (0) # define lock_refresh(_l) (0) +# define lock_last_prmode(_l) (0) +# define lock_last_exmode(_l) (0) #endif /* The following seq_print was added in version 2 of this output */ seq_printf(m, "%u\t" @@ -3151,6 +3184,8 @@ static int ocfs2_dlm_seq_show(struct seq_file *m, void *v) "%llu\t" "%u\t" "%u\t" + "%u\t" + "%u\t" "%u\t", lock_num_prmode(lockres), lock_num_exmode(lockres), @@ -3160,7 +3195,9 @@ static int ocfs2_dlm_seq_show(struct seq_file *m, void *v) lock_total_exmode(lockres), lock_max_prmode(lockres), lock_max_exmode(lockres), - lock_refresh(lockres)); + lock_refresh(lockres), + lock_last_prmode(lockres), + lock_last_exmode(lockres)); /* End the line */ seq_printf(m, "\n"); @@ -3231,6 +3268,17 @@ static int ocfs2_dlm_init_debug(struct ocfs2_super *osb) goto out; } + dlm_debug->d_locking_filter = debugfs_create_u32("locking_filter", + 0600, + osb->osb_debug_root, + &dlm_debug->d_filter_secs); + if (!dlm_debug->d_locking_filter) { + ret = -EINVAL; + mlog(ML_ERROR, + "Unable to create locking filter debugfs file.\n"); + goto out; + } + ocfs2_get_dlm_debug(dlm_debug); out: return ret; @@ -3242,6 +3290,7 @@ static void ocfs2_dlm_shutdown_debug(struct ocfs2_super *osb) if (dlm_debug) { debugfs_remove(dlm_debug->d_locking_state); + debugfs_remove(dlm_debug->d_locking_filter); ocfs2_put_dlm_debug(dlm_debug); } } diff --git a/fs/ocfs2/ocfs2.h b/fs/ocfs2/ocfs2.h index a4647a646f07..9b2a72ff6f7c 100644 --- a/fs/ocfs2/ocfs2.h +++ b/fs/ocfs2/ocfs2.h @@ -150,6 +150,7 @@ struct ocfs2_lock_stats { /* Storing max wait in usecs saves 24 bytes per inode */ u32 ls_max; /* Max wait in USEC */ + u32 ls_last; /* Last unlock time in SEC */ }; #endif @@ -222,6 +223,8 @@ struct ocfs2_orphan_scan { struct ocfs2_dlm_debug { struct kref d_refcnt; struct dentry *d_locking_state; + struct dentry *d_locking_filter; + u32 d_filter_secs; struct list_head d_lockres_tracking; }; diff --git a/fs/proc/Kconfig b/fs/proc/Kconfig index 62ee41b4bbd0..b74ea844abd5 100644 --- a/fs/proc/Kconfig +++ b/fs/proc/Kconfig @@ -58,7 +58,8 @@ config PROC_VMCORE_DEVICE_DUMP snapshot. If you say Y here, the collected device dumps will be added - as ELF notes to /proc/vmcore. + as ELF notes to /proc/vmcore. You can still disable device + dump using the kernel command line option 'novmcoredd'. 
config PROC_SYSCTL bool "Sysctl support (/proc/sys)" if EXPERT diff --git a/fs/proc/array.c b/fs/proc/array.c index 84908556ea58..46dcb6f0eccf 100644 --- a/fs/proc/array.c +++ b/fs/proc/array.c @@ -462,7 +462,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns, * a program is not able to use ptrace(2) in that case. It is * safe because the task has stopped executing permanently. */ - if (permitted && (task->flags & PF_DUMPCORE)) { + if (permitted && (task->flags & (PF_EXITING|PF_DUMPCORE))) { if (try_get_task_stack(task)) { eip = KSTK_EIP(task); esp = KSTK_ESP(task); diff --git a/fs/proc/inode.c b/fs/proc/inode.c index 5f8d215b3fd0..dbe43a50caf2 100644 --- a/fs/proc/inode.c +++ b/fs/proc/inode.c @@ -200,7 +200,8 @@ static loff_t proc_reg_llseek(struct file *file, loff_t offset, int whence) struct proc_dir_entry *pde = PDE(file_inode(file)); loff_t rv = -EINVAL; if (use_pde(pde)) { - loff_t (*llseek)(struct file *, loff_t, int); + typeof_member(struct file_operations, llseek) llseek; + llseek = pde->proc_fops->llseek; if (!llseek) llseek = default_llseek; @@ -212,10 +213,11 @@ static loff_t proc_reg_llseek(struct file *file, loff_t offset, int whence) static ssize_t proc_reg_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) { - ssize_t (*read)(struct file *, char __user *, size_t, loff_t *); struct proc_dir_entry *pde = PDE(file_inode(file)); ssize_t rv = -EIO; if (use_pde(pde)) { + typeof_member(struct file_operations, read) read; + read = pde->proc_fops->read; if (read) rv = read(file, buf, count, ppos); @@ -226,10 +228,11 @@ static ssize_t proc_reg_read(struct file *file, char __user *buf, size_t count, static ssize_t proc_reg_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos) { - ssize_t (*write)(struct file *, const char __user *, size_t, loff_t *); struct proc_dir_entry *pde = PDE(file_inode(file)); ssize_t rv = -EIO; if (use_pde(pde)) { + typeof_member(struct file_operations, write) write; + write = pde->proc_fops->write; if (write) rv = write(file, buf, count, ppos); @@ -242,8 +245,9 @@ static __poll_t proc_reg_poll(struct file *file, struct poll_table_struct *pts) { struct proc_dir_entry *pde = PDE(file_inode(file)); __poll_t rv = DEFAULT_POLLMASK; - __poll_t (*poll)(struct file *, struct poll_table_struct *); if (use_pde(pde)) { + typeof_member(struct file_operations, poll) poll; + poll = pde->proc_fops->poll; if (poll) rv = poll(file, pts); @@ -256,8 +260,9 @@ static long proc_reg_unlocked_ioctl(struct file *file, unsigned int cmd, unsigne { struct proc_dir_entry *pde = PDE(file_inode(file)); long rv = -ENOTTY; - long (*ioctl)(struct file *, unsigned int, unsigned long); if (use_pde(pde)) { + typeof_member(struct file_operations, unlocked_ioctl) ioctl; + ioctl = pde->proc_fops->unlocked_ioctl; if (ioctl) rv = ioctl(file, cmd, arg); @@ -271,8 +276,9 @@ static long proc_reg_compat_ioctl(struct file *file, unsigned int cmd, unsigned { struct proc_dir_entry *pde = PDE(file_inode(file)); long rv = -ENOTTY; - long (*compat_ioctl)(struct file *, unsigned int, unsigned long); if (use_pde(pde)) { + typeof_member(struct file_operations, compat_ioctl) compat_ioctl; + compat_ioctl = pde->proc_fops->compat_ioctl; if (compat_ioctl) rv = compat_ioctl(file, cmd, arg); @@ -286,8 +292,9 @@ static int proc_reg_mmap(struct file *file, struct vm_area_struct *vma) { struct proc_dir_entry *pde = PDE(file_inode(file)); int rv = -EIO; - int (*mmap)(struct file *, struct vm_area_struct *); if (use_pde(pde)) { + typeof_member(struct 
file_operations, mmap) mmap; + mmap = pde->proc_fops->mmap; if (mmap) rv = mmap(file, vma); @@ -305,7 +312,7 @@ proc_reg_get_unmapped_area(struct file *file, unsigned long orig_addr, unsigned long rv = -EIO; if (use_pde(pde)) { - typeof(proc_reg_get_unmapped_area) *get_area; + typeof_member(struct file_operations, get_unmapped_area) get_area; get_area = pde->proc_fops->get_unmapped_area; #ifdef CONFIG_MMU @@ -326,8 +333,8 @@ static int proc_reg_open(struct inode *inode, struct file *file) { struct proc_dir_entry *pde = PDE(inode); int rv = 0; - int (*open)(struct inode *, struct file *); - int (*release)(struct inode *, struct file *); + typeof_member(struct file_operations, open) open; + typeof_member(struct file_operations, release) release; struct pde_opener *pdeo; /* diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c index c74570736b24..2b1b60e2e9cb 100644 --- a/fs/proc/proc_sysctl.c +++ b/fs/proc/proc_sysctl.c @@ -22,6 +22,10 @@ static const struct inode_operations proc_sys_inode_operations; static const struct file_operations proc_sys_dir_file_operations; static const struct inode_operations proc_sys_dir_operations; +/* shared constants to be used in various sysctls */ +const int sysctl_vals[] = { 0, 1, INT_MAX }; +EXPORT_SYMBOL(sysctl_vals); + /* Support for permanently empty directories */ struct ctl_table sysctl_mount_point[] = { diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c index 7bb96fdd38ad..a531d548d685 100644 --- a/fs/proc/vmcore.c +++ b/fs/proc/vmcore.c @@ -21,6 +21,7 @@ #include <linux/init.h> #include <linux/crash_dump.h> #include <linux/list.h> +#include <linux/moduleparam.h> #include <linux/mutex.h> #include <linux/vmalloc.h> #include <linux/pagemap.h> @@ -54,6 +55,9 @@ static struct proc_dir_entry *proc_vmcore; /* Device Dump list and mutex to synchronize access to list */ static LIST_HEAD(vmcoredd_list); static DEFINE_MUTEX(vmcoredd_mutex); + +static bool vmcoredd_disabled; +core_param(novmcoredd, vmcoredd_disabled, bool, 0); #endif /* CONFIG_PROC_VMCORE_DEVICE_DUMP */ /* Device Dump Size */ @@ -1452,6 +1456,11 @@ int vmcore_add_device_dump(struct vmcoredd_data *data) size_t data_size; int ret; + if (vmcoredd_disabled) { + pr_err_once("Device dump is disabled\n"); + return -EINVAL; + } + if (!data || !strlen(data->dump_name) || !data->vmcoredd_callback || !data->size) return -EINVAL; diff --git a/fs/ramfs/inode.c b/fs/ramfs/inode.c index 11201b2d06b9..a26ae0b83f8f 100644 --- a/fs/ramfs/inode.c +++ b/fs/ramfs/inode.c @@ -145,6 +145,17 @@ static int ramfs_symlink(struct inode * dir, struct dentry *dentry, const char * return error; } +static int ramfs_tmpfile(struct inode *dir, struct dentry *dentry, umode_t mode) +{ + struct inode *inode; + + inode = ramfs_get_inode(dir->i_sb, dir, mode, 0); + if (!inode) + return -ENOSPC; + d_tmpfile(dentry, inode); + return 0; +} + static const struct inode_operations ramfs_dir_inode_operations = { .create = ramfs_create, .lookup = simple_lookup, @@ -155,6 +166,7 @@ static const struct inode_operations ramfs_dir_inode_operations = { .rmdir = simple_rmdir, .mknod = ramfs_mknod, .rename = simple_rename, + .tmpfile = ramfs_tmpfile, }; /* diff --git a/fs/select.c b/fs/select.c index 6cbc9ff56ba0..a4d8f6e8b63c 100644 --- a/fs/select.c +++ b/fs/select.c @@ -758,10 +758,9 @@ static long do_pselect(int n, fd_set __user *inp, fd_set __user *outp, return ret; ret = core_sys_select(n, inp, outp, exp, to); + restore_user_sigmask(sigmask, &sigsaved, ret == -ERESTARTNOHAND); ret = poll_select_copy_remaining(&end_time, tsp, type, ret); - 
restore_user_sigmask(sigmask, &sigsaved); - return ret; } @@ -1106,8 +1105,7 @@ SYSCALL_DEFINE5(ppoll, struct pollfd __user *, ufds, unsigned int, nfds, ret = do_sys_poll(ufds, nfds, to); - restore_user_sigmask(sigmask, &sigsaved); - + restore_user_sigmask(sigmask, &sigsaved, ret == -EINTR); /* We can restart this syscall, usually */ if (ret == -EINTR) ret = -ERESTARTNOHAND; @@ -1142,8 +1140,7 @@ SYSCALL_DEFINE5(ppoll_time32, struct pollfd __user *, ufds, unsigned int, nfds, ret = do_sys_poll(ufds, nfds, to); - restore_user_sigmask(sigmask, &sigsaved); - + restore_user_sigmask(sigmask, &sigsaved, ret == -EINTR); /* We can restart this syscall, usually */ if (ret == -EINTR) ret = -ERESTARTNOHAND; @@ -1350,10 +1347,9 @@ static long do_compat_pselect(int n, compat_ulong_t __user *inp, return ret; ret = compat_core_sys_select(n, inp, outp, exp, to); + restore_user_sigmask(sigmask, &sigsaved, ret == -ERESTARTNOHAND); ret = poll_select_copy_remaining(&end_time, tsp, type, ret); - restore_user_sigmask(sigmask, &sigsaved); - return ret; } @@ -1425,8 +1421,7 @@ COMPAT_SYSCALL_DEFINE5(ppoll_time32, struct pollfd __user *, ufds, ret = do_sys_poll(ufds, nfds, to); - restore_user_sigmask(sigmask, &sigsaved); - + restore_user_sigmask(sigmask, &sigsaved, ret == -EINTR); /* We can restart this syscall, usually */ if (ret == -EINTR) ret = -ERESTARTNOHAND; @@ -1461,8 +1456,7 @@ COMPAT_SYSCALL_DEFINE5(ppoll_time64, struct pollfd __user *, ufds, ret = do_sys_poll(ufds, nfds, to); - restore_user_sigmask(sigmask, &sigsaved); - + restore_user_sigmask(sigmask, &sigsaved, ret == -EINTR); /* We can restart this syscall, usually */ if (ret == -EINTR) ret = -ERESTARTNOHAND; diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c index 512e7d9c60cd..cb4c07e65874 100644 --- a/fs/ubifs/file.c +++ b/fs/ubifs/file.c @@ -1482,7 +1482,7 @@ static int ubifs_migrate_page(struct address_space *mapping, { int rc; - rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0); + rc = migrate_page_move_mapping(mapping, newpage, page, 0); if (rc != MIGRATEPAGE_SUCCESS) return rc; diff --git a/fs/ufs/super.c b/fs/ufs/super.c index 3d247c0d92aa..4ed0dca52ec8 100644 --- a/fs/ufs/super.c +++ b/fs/ufs/super.c @@ -1407,11 +1407,9 @@ static int ufs_statfs(struct dentry *dentry, struct kstatfs *buf) struct super_block *sb = dentry->d_sb; struct ufs_sb_private_info *uspi= UFS_SB(sb)->s_uspi; unsigned flags = UFS_SB(sb)->s_flags; - struct ufs_super_block_third *usb3; u64 id = huge_encode_dev(sb->s_bdev->bd_dev); mutex_lock(&UFS_SB(sb)->s_lock); - usb3 = ubh_get_usb_third(uspi); if ((flags & UFS_TYPE_MASK) == UFS_TYPE_UFS2) buf->f_type = UFS2_MAGIC; diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h index 948714c1535a..8476175c07e7 100644 --- a/include/asm-generic/pgalloc.h +++ b/include/asm-generic/pgalloc.h @@ -1,13 +1,112 @@ /* SPDX-License-Identifier: GPL-2.0 */ #ifndef __ASM_GENERIC_PGALLOC_H #define __ASM_GENERIC_PGALLOC_H -/* - * an empty file is enough for a nommu architecture - */ + #ifdef CONFIG_MMU -#error need to implement an architecture specific asm/pgalloc.h + +#define GFP_PGTABLE_KERNEL (GFP_KERNEL | __GFP_ZERO) +#define GFP_PGTABLE_USER (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT) + +/** + * __pte_alloc_one_kernel - allocate a page for PTE-level kernel page table + * @mm: the mm_struct of the current context + * + * This function is intended for architectures that need + * anything beyond simple page allocation. 
+ * + * Return: pointer to the allocated memory or %NULL on error + */ +static inline pte_t *__pte_alloc_one_kernel(struct mm_struct *mm) +{ + return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL); +} + +#ifndef __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL +/** + * pte_alloc_one_kernel - allocate a page for PTE-level kernel page table + * @mm: the mm_struct of the current context + * + * Return: pointer to the allocated memory or %NULL on error + */ +static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) +{ + return __pte_alloc_one_kernel(mm); +} +#endif + +/** + * pte_free_kernel - free PTE-level kernel page table page + * @mm: the mm_struct of the current context + * @pte: pointer to the memory containing the page table + */ +static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) +{ + free_page((unsigned long)pte); +} + +/** + * __pte_alloc_one - allocate a page for PTE-level user page table + * @mm: the mm_struct of the current context + * @gfp: GFP flags to use for the allocation + * + * Allocates a page and runs the pgtable_page_ctor(). + * + * This function is intended for architectures that need + * anything beyond simple page allocation or must have custom GFP flags. + * + * Return: `struct page` initialized as page table or %NULL on error + */ +static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp) +{ + struct page *pte; + + pte = alloc_page(gfp); + if (!pte) + return NULL; + if (!pgtable_page_ctor(pte)) { + __free_page(pte); + return NULL; + } + + return pte; +} + +#ifndef __HAVE_ARCH_PTE_ALLOC_ONE +/** + * pte_alloc_one - allocate a page for PTE-level user page table + * @mm: the mm_struct of the current context + * + * Allocates a page and runs the pgtable_page_ctor(). + * + * Return: `struct page` initialized as page table or %NULL on error + */ +static inline pgtable_t pte_alloc_one(struct mm_struct *mm) +{ + return __pte_alloc_one(mm, GFP_PGTABLE_USER); +} #endif +/* + * Should really implement gc for free page table pages. This could be + * done with a reference count in struct page. + */ + +/** + * pte_free - free PTE-level user page table page + * @mm: the mm_struct of the current context + * @pte_page: the `struct page` representing the page table + */ +static inline void pte_free(struct mm_struct *mm, struct page *pte_page) +{ + pgtable_page_dtor(pte_page); + __free_page(pte_page); +} + +#else /* CONFIG_MMU */ + +/* This is enough for a nommu architecture */ #define check_pgt_cache() do { } while (0) +#endif /* CONFIG_MMU */ + #endif /* __ASM_GENERIC_PGALLOC_H */ diff --git a/include/linux/coda.h b/include/linux/coda.h index d30209b9cef8..0ca0c83fdb1c 100644 --- a/include/linux/coda.h +++ b/include/linux/coda.h @@ -58,8 +58,7 @@ Mellon the rights to redistribute these changes without encumbrance. 
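/*
 * Editor's note, not part of the patch: per the kerneldoc above,
 * __pte_alloc_one_kernel() and __pte_alloc_one() exist for architectures
 * that need non-default GFP flags. A hypothetical arch header could then
 * supply its own pte_alloc_one() along these lines (the __GFP_HIGHMEM use
 * is purely illustrative):
 */
#define __HAVE_ARCH_PTE_ALLOC_ONE
static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
{
	return __pte_alloc_one(mm, GFP_PGTABLE_USER | __GFP_HIGHMEM);
}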
#ifndef _CODA_HEADER_ #define _CODA_HEADER_ -#if defined(__linux__) typedef unsigned long long u_quad_t; -#endif + #include <uapi/linux/coda.h> #endif diff --git a/include/linux/device.h b/include/linux/device.h index 5489a759e1c5..e0649f6adf2e 100644 --- a/include/linux/device.h +++ b/include/linux/device.h @@ -713,6 +713,7 @@ void __iomem *devm_of_iomap(struct device *dev, /* allows to add/remove a custom action to devres stack */ int devm_add_action(struct device *dev, void (*action)(void *), void *data); void devm_remove_action(struct device *dev, void (*action)(void *), void *data); +void devm_release_action(struct device *dev, void (*action)(void *), void *data); static inline int devm_add_action_or_reset(struct device *dev, void (*action)(void *), void *data) diff --git a/include/linux/dmar.h b/include/linux/dmar.h index f8af1d770520..eb634912f475 100644 --- a/include/linux/dmar.h +++ b/include/linux/dmar.h @@ -104,12 +104,14 @@ static inline bool dmar_rcu_check(void) #define dmar_rcu_dereference(p) rcu_dereference_check((p), dmar_rcu_check()) -#define for_each_dev_scope(a, c, p, d) \ - for ((p) = 0; ((d) = (p) < (c) ? dmar_rcu_dereference((a)[(p)].dev) : \ - NULL, (p) < (c)); (p)++) - -#define for_each_active_dev_scope(a, c, p, d) \ - for_each_dev_scope((a), (c), (p), (d)) if (!(d)) { continue; } else +#define for_each_dev_scope(devs, cnt, i, tmp) \ + for ((i) = 0; ((tmp) = (i) < (cnt) ? \ + dmar_rcu_dereference((devs)[(i)].dev) : NULL, (i) < (cnt)); \ + (i)++) + +#define for_each_active_dev_scope(devs, cnt, i, tmp) \ + for_each_dev_scope((devs), (cnt), (i), (tmp)) \ + if (!(tmp)) { continue; } else extern int dmar_table_init(void); extern int dmar_dev_scope_init(void); diff --git a/include/linux/genalloc.h b/include/linux/genalloc.h index 6c62eeca754f..40d1ae042a77 100644 --- a/include/linux/genalloc.h +++ b/include/linux/genalloc.h @@ -75,6 +75,7 @@ struct gen_pool_chunk { struct list_head next_chunk; /* next chunk in pool */ atomic_long_t avail; phys_addr_t phys_addr; /* physical starting address of memory chunk */ + void *owner; /* private data to retrieve at alloc time */ unsigned long start_addr; /* start address of memory chunk */ unsigned long end_addr; /* end address of memory chunk (inclusive) */ unsigned long bits[0]; /* bitmap for allocating memory chunk */ @@ -96,8 +97,15 @@ struct genpool_data_fixed { extern struct gen_pool *gen_pool_create(int, int); extern phys_addr_t gen_pool_virt_to_phys(struct gen_pool *pool, unsigned long); -extern int gen_pool_add_virt(struct gen_pool *, unsigned long, phys_addr_t, - size_t, int); +extern int gen_pool_add_owner(struct gen_pool *, unsigned long, phys_addr_t, + size_t, int, void *); + +static inline int gen_pool_add_virt(struct gen_pool *pool, unsigned long addr, + phys_addr_t phys, size_t size, int nid) +{ + return gen_pool_add_owner(pool, addr, phys, size, nid, NULL); +} + /** * gen_pool_add - add a new chunk of special memory to the pool * @pool: pool to add new memory chunk to @@ -116,13 +124,48 @@ static inline int gen_pool_add(struct gen_pool *pool, unsigned long addr, return gen_pool_add_virt(pool, addr, -1, size, nid); } extern void gen_pool_destroy(struct gen_pool *); -extern unsigned long gen_pool_alloc(struct gen_pool *, size_t); -extern unsigned long gen_pool_alloc_algo(struct gen_pool *, size_t, - genpool_algo_t algo, void *data); +unsigned long gen_pool_alloc_algo_owner(struct gen_pool *pool, size_t size, + genpool_algo_t algo, void *data, void **owner); + +static inline unsigned long gen_pool_alloc_owner(struct 
gen_pool *pool, + size_t size, void **owner) +{ + return gen_pool_alloc_algo_owner(pool, size, pool->algo, pool->data, + owner); +} + +static inline unsigned long gen_pool_alloc_algo(struct gen_pool *pool, + size_t size, genpool_algo_t algo, void *data) +{ + return gen_pool_alloc_algo_owner(pool, size, algo, data, NULL); +} + +/** + * gen_pool_alloc - allocate special memory from the pool + * @pool: pool to allocate from + * @size: number of bytes to allocate from the pool + * + * Allocate the requested number of bytes from the specified pool. + * Uses the pool allocation function (with first-fit algorithm by default). + * Can not be used in NMI handler on architectures without + * NMI-safe cmpxchg implementation. + */ +static inline unsigned long gen_pool_alloc(struct gen_pool *pool, size_t size) +{ + return gen_pool_alloc_algo(pool, size, pool->algo, pool->data); +} + extern void *gen_pool_dma_alloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); void *gen_pool_dma_zalloc(struct gen_pool *pool, size_t size, dma_addr_t *dma); -extern void gen_pool_free(struct gen_pool *, unsigned long, size_t); +extern void gen_pool_free_owner(struct gen_pool *pool, unsigned long addr, + size_t size, void **owner); +static inline void gen_pool_free(struct gen_pool *pool, unsigned long addr, + size_t size) +{ + gen_pool_free_owner(pool, addr, size, NULL); +} + extern void gen_pool_for_each_chunk(struct gen_pool *, void (*)(struct gen_pool *, struct gen_pool_chunk *, void *), void *); extern size_t gen_pool_avail(struct gen_pool *); @@ -157,7 +200,7 @@ extern struct gen_pool *devm_gen_pool_create(struct device *dev, int min_alloc_order, int nid, const char *name); extern struct gen_pool *gen_pool_get(struct device *dev, const char *name); -bool addr_in_gen_pool(struct gen_pool *pool, unsigned long start, +extern bool gen_pool_has_addr(struct gen_pool *pool, unsigned long start, size_t size); #ifdef CONFIG_OF diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index edf476c8cfb9..f895a79c6f5c 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -608,22 +608,92 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, #else /* CONFIG_HUGETLB_PAGE */ struct hstate {}; -#define alloc_huge_page(v, a, r) NULL -#define alloc_huge_page_node(h, nid) NULL -#define alloc_huge_page_nodemask(h, preferred_nid, nmask) NULL -#define alloc_huge_page_vma(h, vma, address) NULL -#define alloc_bootmem_huge_page(h) NULL -#define hstate_file(f) NULL -#define hstate_sizelog(s) NULL -#define hstate_vma(v) NULL -#define hstate_inode(i) NULL -#define page_hstate(page) NULL -#define huge_page_size(h) PAGE_SIZE -#define huge_page_mask(h) PAGE_MASK -#define vma_kernel_pagesize(v) PAGE_SIZE -#define vma_mmu_pagesize(v) PAGE_SIZE -#define huge_page_order(h) 0 -#define huge_page_shift(h) PAGE_SHIFT + +static inline struct page *alloc_huge_page(struct vm_area_struct *vma, + unsigned long addr, + int avoid_reserve) +{ + return NULL; +} + +static inline struct page *alloc_huge_page_node(struct hstate *h, int nid) +{ + return NULL; +} + +static inline struct page * +alloc_huge_page_nodemask(struct hstate *h, int preferred_nid, nodemask_t *nmask) +{ + return NULL; +} + +static inline struct page *alloc_huge_page_vma(struct hstate *h, + struct vm_area_struct *vma, + unsigned long address) +{ + return NULL; +} + +static inline int __alloc_bootmem_huge_page(struct hstate *h) +{ + return 0; +} + +static inline struct hstate *hstate_file(struct file *f) +{ + return NULL; +} + +static inline 
struct hstate *hstate_sizelog(int page_size_log) +{ + return NULL; +} + +static inline struct hstate *hstate_vma(struct vm_area_struct *vma) +{ + return NULL; +} + +static inline struct hstate *hstate_inode(struct inode *i) +{ + return NULL; +} + +static inline struct hstate *page_hstate(struct page *page) +{ + return NULL; +} + +static inline unsigned long huge_page_size(struct hstate *h) +{ + return PAGE_SIZE; +} + +static inline unsigned long huge_page_mask(struct hstate *h) +{ + return PAGE_MASK; +} + +static inline unsigned long vma_kernel_pagesize(struct vm_area_struct *vma) +{ + return PAGE_SIZE; +} + +static inline unsigned long vma_mmu_pagesize(struct vm_area_struct *vma) +{ + return PAGE_SIZE; +} + +static inline unsigned int huge_page_order(struct hstate *h) +{ + return 0; +} + +static inline unsigned int huge_page_shift(struct hstate *h) +{ + return PAGE_SHIFT; +} + static inline bool hstate_is_gigantic(struct hstate *h) { return false; diff --git a/include/linux/kernel.h b/include/linux/kernel.h index 74b1ee9027f5..aba715bae9ce 100644 --- a/include/linux/kernel.h +++ b/include/linux/kernel.h @@ -88,6 +88,8 @@ */ #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f)) +#define typeof_member(T, m) typeof(((T*)0)->m) + #define DIV_ROUND_UP __KERNEL_DIV_ROUND_UP #define DIV_ROUND_DOWN_ULL(ll, d) \ diff --git a/include/linux/lz4.h b/include/linux/lz4.h index 394e3d9213b8..b16e15b9587a 100644 --- a/include/linux/lz4.h +++ b/include/linux/lz4.h @@ -278,7 +278,7 @@ int LZ4_decompress_fast(const char *source, char *dest, int originalSize); * @compressedSize: is the precise full size of the compressed block * @maxDecompressedSize: is the size of 'dest' buffer * - * Decompresses data fom 'source' into 'dest'. + * Decompresses data from 'source' into 'dest'. * If the source stream is detected malformed, the function will * stop decoding and return a negative result. * This function is protected against buffer overflow exploits, @@ -522,7 +522,7 @@ int LZ4_setStreamDecode(LZ4_streamDecode_t *LZ4_streamDecode, const char *dictionary, int dictSize); /** - * LZ4_decompress_fast_continue() - Decompress blocks in streaming mode + * LZ4_decompress_safe_continue() - Decompress blocks in streaming mode * @LZ4_streamDecode: the 'LZ4_streamDecode_t' structure * @source: source address of the compressed data * @dest: output buffer address of the uncompressed data @@ -530,7 +530,7 @@ int LZ4_setStreamDecode(LZ4_streamDecode_t *LZ4_streamDecode, * @compressedSize: is the precise full size of the compressed block * @maxDecompressedSize: is the size of 'dest' buffer * - * These decoding function allows decompression of multiple blocks + * This decoding function allows decompression of multiple blocks * in "streaming" mode. * Previously decoded blocks *must* remain available at the memory position * where they were decoded (up to 64 KB) @@ -569,7 +569,7 @@ int LZ4_decompress_safe_continue(LZ4_streamDecode_t *LZ4_streamDecode, * which must be already allocated with 'originalSize' bytes * @originalSize: is the original and therefore uncompressed size * - * These decoding function allows decompression of multiple blocks + * This decoding function allows decompression of multiple blocks * in "streaming" mode. 
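/*
 * Editor's note, not part of the patch: the gen_pool *_owner helpers added
 * to include/linux/genalloc.h earlier in this diff let callers tag each
 * chunk with a private cookie and retrieve it at alloc/free time. A rough
 * usage sketch (the pool, region cookie and sizes are hypothetical):
 */
	void *owner;
	unsigned long addr;

	/* register backing memory, remembering which region it belongs to */
	gen_pool_add_owner(pool, chunk_start, chunk_phys, chunk_size,
			   NUMA_NO_NODE, region);

	/* the allocation reports the cookie of the chunk it was carved from */
	addr = gen_pool_alloc_owner(pool, SZ_4K, &owner);

	/* freeing can hand the cookie back as well */
	gen_pool_free_owner(pool, addr, SZ_4K, &owner);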
* Previously decoded blocks *must* remain available at the memory position * where they were decoded (up to 64 KB) @@ -610,10 +610,10 @@ int LZ4_decompress_fast_continue(LZ4_streamDecode_t *LZ4_streamDecode, * @dictStart: pointer to the start of the dictionary in memory * @dictSize: size of dictionary * - * These decoding function works the same as + * This decoding function works the same as * a combination of LZ4_setStreamDecode() followed by * LZ4_decompress_safe_continue() - * It is stand-alone, and don'tn eed a LZ4_streamDecode_t structure. + * It is stand-alone, and doesn't need an LZ4_streamDecode_t structure. * * Return: number of bytes decompressed into destination buffer * (necessarily <= maxDecompressedSize) @@ -633,10 +633,10 @@ int LZ4_decompress_safe_usingDict(const char *source, char *dest, * @dictStart: pointer to the start of the dictionary in memory * @dictSize: size of dictionary * - * These decoding function works the same as + * This decoding function works the same as * a combination of LZ4_setStreamDecode() followed by - * LZ4_decompress_safe_continue() - * It is stand-alone, and don'tn eed a LZ4_streamDecode_t structure. + * LZ4_decompress_fast_continue() + * It is stand-alone, and doesn't need an LZ4_streamDecode_t structure. * * Return: number of bytes decompressed into destination buffer * (necessarily <= maxDecompressedSize) diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index edf9e8f32d70..3ca57bacfdd2 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -117,9 +117,12 @@ struct memcg_shrinker_map { struct mem_cgroup_per_node { struct lruvec lruvec; + /* Legacy local VM stats */ + struct lruvec_stat __percpu *lruvec_stat_local; + + /* Subtree VM stats (batched updates) */ struct lruvec_stat __percpu *lruvec_stat_cpu; atomic_long_t lruvec_stat[NR_VM_NODE_STAT_ITEMS]; - atomic_long_t lruvec_stat_local[NR_VM_NODE_STAT_ITEMS]; unsigned long lru_zone_size[MAX_NR_ZONES][NR_LRU_LISTS]; @@ -230,8 +233,9 @@ struct mem_cgroup { /* OOM-Killer disable */ int oom_kill_disable; - /* memory.events */ + /* memory.events and memory.events.local */ struct cgroup_file events_file; + struct cgroup_file events_local_file; /* handle for "memory.swap.events" */ struct cgroup_file swap_events_file; @@ -265,18 +269,20 @@ struct mem_cgroup { atomic_t moving_account; struct task_struct *move_lock_task; - /* memory.stat */ + /* Legacy local VM stats and events */ + struct memcg_vmstats_percpu __percpu *vmstats_local; + + /* Subtree VM stats and events (batched updates) */ struct memcg_vmstats_percpu __percpu *vmstats_percpu; MEMCG_PADDING(_pad2_); atomic_long_t vmstats[MEMCG_NR_STAT]; - atomic_long_t vmstats_local[MEMCG_NR_STAT]; - atomic_long_t vmevents[NR_VM_EVENT_ITEMS]; - atomic_long_t vmevents_local[NR_VM_EVENT_ITEMS]; + /* memory.events */ atomic_long_t memory_events[MEMCG_NR_MEMORY_EVENTS]; + atomic_long_t memory_events_local[MEMCG_NR_MEMORY_EVENTS]; unsigned long socket_pressure; @@ -329,6 +335,19 @@ static inline bool mem_cgroup_disabled(void) return !cgroup_subsys_enabled(memory_cgrp_subsys); } +static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg, + bool in_low_reclaim) +{ + if (mem_cgroup_disabled()) + return 0; + + if (in_low_reclaim) + return READ_ONCE(memcg->memory.emin); + + return max(READ_ONCE(memcg->memory.emin), + READ_ONCE(memcg->memory.elow)); +} + enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root, struct mem_cgroup *memcg); @@ -511,6 +530,8 @@ void 
mem_cgroup_handle_over_high(void); unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg); +unsigned long mem_cgroup_size(struct mem_cgroup *memcg); + void mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct *p); @@ -567,7 +588,11 @@ static inline unsigned long memcg_page_state(struct mem_cgroup *memcg, int idx) static inline unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx) { - long x = atomic_long_read(&memcg->vmstats_local[idx]); + long x = 0; + int cpu; + + for_each_possible_cpu(cpu) + x += per_cpu(memcg->vmstats_local->stat[idx], cpu); #ifdef CONFIG_SMP if (x < 0) x = 0; @@ -641,13 +666,15 @@ static inline unsigned long lruvec_page_state_local(struct lruvec *lruvec, enum node_stat_item idx) { struct mem_cgroup_per_node *pn; - long x; + long x = 0; + int cpu; if (mem_cgroup_disabled()) return node_page_state(lruvec_pgdat(lruvec), idx); pn = container_of(lruvec, struct mem_cgroup_per_node, lruvec); - x = atomic_long_read(&pn->lruvec_stat_local[idx]); + for_each_possible_cpu(cpu) + x += per_cpu(pn->lruvec_stat_local->count[idx], cpu); #ifdef CONFIG_SMP if (x < 0) x = 0; @@ -737,6 +764,9 @@ static inline void count_memcg_event_mm(struct mm_struct *mm, static inline void memcg_memory_event(struct mem_cgroup *memcg, enum memcg_memory_event event) { + atomic_long_inc(&memcg->memory_events_local[event]); + cgroup_file_notify(&memcg->events_local_file); + do { atomic_long_inc(&memcg->memory_events[event]); cgroup_file_notify(&memcg->events_file); @@ -793,6 +823,12 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm, { } +static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg, + bool in_low_reclaim) +{ + return 0; +} + static inline enum mem_cgroup_protection mem_cgroup_protected( struct mem_cgroup *root, struct mem_cgroup *memcg) { @@ -938,6 +974,11 @@ static inline unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg) return 0; } +static inline unsigned long mem_cgroup_size(struct mem_cgroup *memcg) +{ + return 0; +} + static inline void mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct *p) { diff --git a/include/linux/memory.h b/include/linux/memory.h index e1dc1bb2b787..f26a5417ec5d 100644 --- a/include/linux/memory.h +++ b/include/linux/memory.h @@ -111,10 +111,8 @@ extern int register_memory_notifier(struct notifier_block *nb); extern void unregister_memory_notifier(struct notifier_block *nb); extern int register_memory_isolate_notifier(struct notifier_block *nb); extern void unregister_memory_isolate_notifier(struct notifier_block *nb); -int hotplug_memory_register(int nid, struct mem_section *section); -#ifdef CONFIG_MEMORY_HOTREMOVE -extern void unregister_memory_section(struct mem_section *); -#endif +int create_memory_block_devices(unsigned long start, unsigned long size); +void remove_memory_block_devices(unsigned long start, unsigned long size); extern int memory_dev_init(void); extern int memory_notify(unsigned long val, void *v); extern int memory_isolate_notify(unsigned long val, void *v); diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h index ae892eef8b82..79e0add6a597 100644 --- a/include/linux/memory_hotplug.h +++ b/include/linux/memory_hotplug.h @@ -123,20 +123,10 @@ static inline bool movable_node_is_enabled(void) return movable_node_enabled; } -#ifdef CONFIG_MEMORY_HOTREMOVE extern void arch_remove_memory(int nid, u64 start, u64 size, struct vmem_altmap *altmap); extern void __remove_pages(struct zone *zone, unsigned long start_pfn, 
unsigned long nr_pages, struct vmem_altmap *altmap); -#endif /* CONFIG_MEMORY_HOTREMOVE */ - -/* - * Do we want sysfs memblock files created. This will allow userspace to online - * and offline memory explicitly. Lack of this bit means that the caller has to - * call move_pfn_range_to_zone to finish the initialization. - */ - -#define MHP_MEMBLOCK_API (1<<0) /* reasonably generic interface to expand the physical pages */ extern int __add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages, @@ -324,7 +314,7 @@ static inline void pgdat_resize_init(struct pglist_data *pgdat) {} extern bool is_mem_section_removable(unsigned long pfn, unsigned long nr_pages); extern void try_offline_node(int nid); extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages); -extern void remove_memory(int nid, u64 start, u64 size); +extern int remove_memory(int nid, u64 start, u64 size); extern void __remove_memory(int nid, u64 start, u64 size); #else @@ -341,7 +331,11 @@ static inline int offline_pages(unsigned long start_pfn, unsigned long nr_pages) return -EINVAL; } -static inline void remove_memory(int nid, u64 start, u64 size) {} +static inline int remove_memory(int nid, u64 start, u64 size) +{ + return -EBUSY; +} + static inline void __remove_memory(int nid, u64 start, u64 size) {} #endif /* CONFIG_MEMORY_HOTREMOVE */ @@ -356,7 +350,7 @@ extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn, extern bool is_memblock_offlined(struct memory_block *mem); extern int sparse_add_one_section(int nid, unsigned long start_pfn, struct vmem_altmap *altmap); -extern void sparse_remove_one_section(struct zone *zone, struct mem_section *ms, +extern void sparse_remove_one_section(struct mem_section *ms, unsigned long map_offset, struct vmem_altmap *altmap); extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map, unsigned long pnum); diff --git a/include/linux/memremap.h b/include/linux/memremap.h index f0628660d541..1732dea030b2 100644 --- a/include/linux/memremap.h +++ b/include/linux/memremap.h @@ -81,6 +81,7 @@ typedef void (*dev_page_free_t)(struct page *page, void *data); * @res: physical address range covered by @ref * @ref: reference count that pins the devm_memremap_pages() mapping * @kill: callback to transition @ref to the dead state + * @cleanup: callback to wait for @ref to be idle and reap it * @dev: host device of the mapping for debug * @data: private data pointer for page_free() * @type: memory type: see MEMORY_* in memory_hotplug.h @@ -92,6 +93,7 @@ struct dev_pagemap { struct resource res; struct percpu_ref *ref; void (*kill)(struct percpu_ref *ref); + void (*cleanup)(struct percpu_ref *ref); struct device *dev; void *data; enum memory_type type; @@ -100,6 +102,7 @@ struct dev_pagemap { #ifdef CONFIG_ZONE_DEVICE void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap); +void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap); struct dev_pagemap *get_dev_pagemap(unsigned long pfn, struct dev_pagemap *pgmap); @@ -118,6 +121,11 @@ static inline void *devm_memremap_pages(struct device *dev, return ERR_PTR(-ENXIO); } +static inline void devm_memunmap_pages(struct device *dev, + struct dev_pagemap *pgmap) +{ +} + static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn, struct dev_pagemap *pgmap) { diff --git a/include/linux/migrate.h b/include/linux/migrate.h index e13d9bf2f9a5..7f04754c7f2b 100644 --- a/include/linux/migrate.h +++ b/include/linux/migrate.h @@ -77,8 +77,7 @@ extern void 
migrate_page_copy(struct page *newpage, struct page *page); extern int migrate_huge_page_move_mapping(struct address_space *mapping, struct page *newpage, struct page *page); extern int migrate_page_move_mapping(struct address_space *mapping, - struct page *newpage, struct page *page, enum migrate_mode mode, - int extra_count); + struct page *newpage, struct page *page, int extra_count); #else static inline void putback_movable_pages(struct list_head *l) {} diff --git a/include/linux/mm.h b/include/linux/mm.h index 0e8834ac32b7..2da796ed2193 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -536,7 +536,7 @@ static inline void vma_set_anonymous(struct vm_area_struct *vma) struct mmu_gather; struct inode; -#if !defined(__HAVE_ARCH_PTE_DEVMAP) || !defined(CONFIG_TRANSPARENT_HUGEPAGE) +#if !defined(CONFIG_ARCH_HAS_PTE_DEVMAP) || !defined(CONFIG_TRANSPARENT_HUGEPAGE) static inline int pmd_devmap(pmd_t pmd) { return 0; @@ -942,32 +942,6 @@ static inline bool put_devmap_managed_page(struct page *page) } return false; } - -static inline bool is_device_private_page(const struct page *page) -{ - return is_zone_device_page(page) && - page->pgmap->type == MEMORY_DEVICE_PRIVATE; -} - -static inline bool is_device_public_page(const struct page *page) -{ - return is_zone_device_page(page) && - page->pgmap->type == MEMORY_DEVICE_PUBLIC; -} - -#ifdef CONFIG_PCI_P2PDMA -static inline bool is_pci_p2pdma_page(const struct page *page) -{ - return is_zone_device_page(page) && - page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA; -} -#else /* CONFIG_PCI_P2PDMA */ -static inline bool is_pci_p2pdma_page(const struct page *page) -{ - return false; -} -#endif /* CONFIG_PCI_P2PDMA */ - #else /* CONFIG_DEV_PAGEMAP_OPS */ static inline void dev_pagemap_get_ops(void) { @@ -981,22 +955,31 @@ static inline bool put_devmap_managed_page(struct page *page) { return false; } +#endif /* CONFIG_DEV_PAGEMAP_OPS */ static inline bool is_device_private_page(const struct page *page) { - return false; + return IS_ENABLED(CONFIG_DEV_PAGEMAP_OPS) && + IS_ENABLED(CONFIG_DEVICE_PRIVATE) && + is_zone_device_page(page) && + page->pgmap->type == MEMORY_DEVICE_PRIVATE; } static inline bool is_device_public_page(const struct page *page) { - return false; + return IS_ENABLED(CONFIG_DEV_PAGEMAP_OPS) && + IS_ENABLED(CONFIG_DEVICE_PUBLIC) && + is_zone_device_page(page) && + page->pgmap->type == MEMORY_DEVICE_PUBLIC; } static inline bool is_pci_p2pdma_page(const struct page *page) { - return false; + return IS_ENABLED(CONFIG_DEV_PAGEMAP_OPS) && + IS_ENABLED(CONFIG_PCI_P2PDMA) && + is_zone_device_page(page) && + page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA; } -#endif /* CONFIG_DEV_PAGEMAP_OPS */ /* 127: arbitrary random number, small enough to assemble well */ #define page_ref_zero_or_close_to_overflow(page) \ @@ -1564,6 +1547,10 @@ long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages, int get_user_pages_fast(unsigned long start, int nr_pages, unsigned int gup_flags, struct page **pages); +int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc); +int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc, + struct task_struct *task, bool bypass_rlim); + /* Container for pinned pfns / pages */ struct frame_vector { unsigned int nr_allocated; /* Number of frames we have space for */ @@ -1771,7 +1758,7 @@ static inline void sync_mm_rss(struct mm_struct *mm) } #endif -#ifndef __HAVE_ARCH_PTE_DEVMAP +#ifndef CONFIG_ARCH_HAS_PTE_DEVMAP static inline int pte_devmap(pte_t pte) { return 0; @@ 
-2670,8 +2657,7 @@ static inline int vm_fault_to_errno(vm_fault_t vm_fault, int foll_flags) return 0; } -typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr, - void *data); +typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data); extern int apply_to_page_range(struct mm_struct *mm, unsigned long address, unsigned long size, pte_fn_t fn, void *data); @@ -2685,11 +2671,18 @@ static inline void kernel_poison_pages(struct page *page, int numpages, int enable) { } #endif -extern bool _debug_pagealloc_enabled; +#ifdef CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT +DECLARE_STATIC_KEY_TRUE(_debug_pagealloc_enabled); +#else +DECLARE_STATIC_KEY_FALSE(_debug_pagealloc_enabled); +#endif static inline bool debug_pagealloc_enabled(void) { - return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) && _debug_pagealloc_enabled; + if (!IS_ENABLED(CONFIG_DEBUG_PAGEALLOC)) + return false; + + return static_branch_unlikely(&_debug_pagealloc_enabled); } #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_ARCH_HAS_SET_DIRECT_MAP) @@ -2839,11 +2832,9 @@ extern long copy_huge_page_from_user(struct page *dst_page, bool allow_pagefault); #endif /* CONFIG_TRANSPARENT_HUGEPAGE || CONFIG_HUGETLBFS */ -extern struct page_ext_operations debug_guardpage_ops; - #ifdef CONFIG_DEBUG_PAGEALLOC extern unsigned int _debug_guardpage_minorder; -extern bool _debug_guardpage_enabled; +DECLARE_STATIC_KEY_FALSE(_debug_guardpage_enabled); static inline unsigned int debug_guardpage_minorder(void) { @@ -2852,21 +2843,15 @@ static inline unsigned int debug_guardpage_minorder(void) static inline bool debug_guardpage_enabled(void) { - return _debug_guardpage_enabled; + return static_branch_unlikely(&_debug_guardpage_enabled); } static inline bool page_is_guard(struct page *page) { - struct page_ext *page_ext; - if (!debug_guardpage_enabled()) return false; - page_ext = lookup_page_ext(page); - if (unlikely(!page_ext)) - return false; - - return test_bit(PAGE_EXT_DEBUG_GUARD, &page_ext->flags); + return PageGuard(page); } #else static inline unsigned int debug_guardpage_minorder(void) { return 0; } diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 8ec38b11b361..1d1093474c1a 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -329,7 +329,9 @@ struct vm_area_struct { struct file * vm_file; /* File we map to (can be NULL). 
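/*
 * Editor's note, not part of the patch: with the pte_fn_t change above, the
 * callback passed to apply_to_page_range() no longer receives the pgtable_t
 * token. A hypothetical callback under the new signature (name and purpose
 * are illustrative only):
 */
static int count_present_ptes(pte_t *pte, unsigned long addr, void *data)
{
	unsigned long *count = data;

	if (pte_present(*pte))
		(*count)++;
	return 0;
}

/* used as: apply_to_page_range(mm, start, size, count_present_ptes, &count); */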
*/ void * vm_private_data; /* was vm_pte (shared mem) */ +#ifdef CONFIG_SWAP atomic_long_t swap_readahead_info; +#endif #ifndef CONFIG_MMU struct vm_region *vm_region; /* NOMMU mapping region */ #endif diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 70394cabaf4e..427b79c39b3c 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -1134,6 +1134,7 @@ static inline unsigned long early_pfn_to_nid(unsigned long pfn) * PFN_SECTION_SHIFT pfn to/from section number */ #define PA_SECTION_SHIFT (SECTION_SIZE_BITS) +#define PA_SECTION_SIZE (1UL << PA_SECTION_SHIFT) #define PFN_SECTION_SHIFT (SECTION_SIZE_BITS - PAGE_SHIFT) #define NR_MEM_SECTIONS (1UL << SECTIONS_SHIFT) diff --git a/include/linux/node.h b/include/linux/node.h index 1a557c589ecb..548c226966a2 100644 --- a/include/linux/node.h +++ b/include/linux/node.h @@ -139,8 +139,7 @@ extern int register_cpu_under_node(unsigned int cpu, unsigned int nid); extern int unregister_cpu_under_node(unsigned int cpu, unsigned int nid); extern int register_mem_sect_under_node(struct memory_block *mem_blk, void *arg); -extern int unregister_mem_sect_under_nodes(struct memory_block *mem_blk, - unsigned long phys_index); +extern void unregister_memory_block_under_nodes(struct memory_block *mem_blk); extern int register_memory_node_under_compute_node(unsigned int mem_nid, unsigned int cpu_nid, @@ -176,10 +175,8 @@ static inline int register_mem_sect_under_node(struct memory_block *mem_blk, { return 0; } -static inline int unregister_mem_sect_under_nodes(struct memory_block *mem_blk, - unsigned long phys_index) +static inline void unregister_memory_block_under_nodes(struct memory_block *mem_blk) { - return 0; } static inline void register_hugetlbfs_with_node(node_registration_func_t reg, diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 9f8712a4b1a5..b848517da64c 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -703,6 +703,7 @@ PAGEFLAG_FALSE(DoubleMap) #define PG_offline 0x00000100 #define PG_kmemcg 0x00000200 #define PG_table 0x00000400 +#define PG_guard 0x00000800 #define PageType(page, flag) \ ((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE) @@ -754,6 +755,11 @@ PAGE_TYPE_OPS(Kmemcg, kmemcg) */ PAGE_TYPE_OPS(Table, table) +/* + * Marks guardpages used with debug_pagealloc. 
+ */ +PAGE_TYPE_OPS(Guard, guard) + extern bool is_free_buddy_page(struct page *page); __PAGEFLAG(Isolated, isolated, PF_ANY); diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h index f84f167ec04c..09592951725c 100644 --- a/include/linux/page_ext.h +++ b/include/linux/page_ext.h @@ -17,7 +17,6 @@ struct page_ext_operations { #ifdef CONFIG_PAGE_EXTENSION enum page_ext_flags { - PAGE_EXT_DEBUG_GUARD, PAGE_EXT_OWNER, #if defined(CONFIG_IDLE_PAGE_TRACKING) && !defined(CONFIG_64BIT) PAGE_EXT_YOUNG, diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 9ec3544baee2..6dd7ec95c778 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -396,8 +396,7 @@ extern int read_cache_pages(struct address_space *mapping, static inline struct page *read_mapping_page(struct address_space *mapping, pgoff_t index, void *data) { - filler_t *filler = (filler_t *)mapping->a_ops->readpage; - return read_cache_page(mapping, index, filler, data); + return read_cache_page(mapping, index, NULL, data); } /* diff --git a/include/linux/pfn_t.h b/include/linux/pfn_t.h index 7bb77850c65a..2d9148221e9a 100644 --- a/include/linux/pfn_t.h +++ b/include/linux/pfn_t.h @@ -66,13 +66,6 @@ static inline phys_addr_t pfn_t_to_phys(pfn_t pfn) return PFN_PHYS(pfn_t_to_pfn(pfn)); } -static inline void *pfn_t_to_virt(pfn_t pfn) -{ - if (pfn_t_has_page(pfn)) - return __va(pfn_t_to_phys(pfn)); - return NULL; -} - static inline pfn_t page_to_pfn_t(struct page *page) { return pfn_to_pfn_t(page_to_pfn(page)); @@ -104,7 +97,7 @@ static inline pud_t pfn_t_pud(pfn_t pfn, pgprot_t pgprot) #endif #endif -#ifdef __HAVE_ARCH_PTE_DEVMAP +#ifdef CONFIG_ARCH_HAS_PTE_DEVMAP static inline bool pfn_t_devmap(pfn_t pfn) { const u64 flags = PFN_DEV|PFN_MAP; @@ -122,7 +115,7 @@ pmd_t pmd_mkdevmap(pmd_t pmd); defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) pud_t pud_mkdevmap(pud_t pud); #endif -#endif /* __HAVE_ARCH_PTE_DEVMAP */ +#endif /* CONFIG_ARCH_HAS_PTE_DEVMAP */ #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL static inline bool pfn_t_special(pfn_t pfn) diff --git a/include/linux/pid.h b/include/linux/pid.h index 8aa47056adf0..a261712ac3fe 100644 --- a/include/linux/pid.h +++ b/include/linux/pid.h @@ -4,6 +4,7 @@ #include <linux/rculist.h> #include <linux/wait.h> +#include <linux/refcount.h> enum pid_type { @@ -57,7 +58,7 @@ struct upid { struct pid { - atomic_t count; + refcount_t count; unsigned int level; /* lists of tasks that use this pid */ struct hlist_head tasks[PIDTYPE_MAX]; @@ -75,7 +76,7 @@ extern int pidfd_create(struct pid *pid); static inline struct pid *get_pid(struct pid *pid) { if (pid) - atomic_inc(&pid->count); + refcount_inc(&pid->count); return pid; } diff --git a/include/linux/poison.h b/include/linux/poison.h index d6d980a681c7..df34330b4e34 100644 --- a/include/linux/poison.h +++ b/include/linux/poison.h @@ -21,7 +21,7 @@ * non-initialized list entries. 
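/*
 * Editor's note, not part of the patch: PAGE_TYPE_OPS(Guard, guard) above
 * generates PageGuard(), __SetPageGuard() and __ClearPageGuard(), which is
 * what allows page_is_guard() (converted earlier in this diff) to become a
 * plain PageGuard() test instead of a page_ext lookup. Illustrative use in
 * the page allocator's guard-page paths:
 */
	__SetPageGuard(page);	/* when carving out a debug_pagealloc guard page */
	__ClearPageGuard(page);	/* when the guard page is handed back */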
*/ #define LIST_POISON1 ((void *) 0x100 + POISON_POINTER_DELTA) -#define LIST_POISON2 ((void *) 0x200 + POISON_POINTER_DELTA) +#define LIST_POISON2 ((void *) 0x122 + POISON_POINTER_DELTA) /********** include/linux/timer.h **********/ /* diff --git a/include/linux/sched.h b/include/linux/sched.h index 1b2590a8d038..e1cb3c564be9 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -850,6 +850,7 @@ struct task_struct { #ifdef CONFIG_DETECT_HUNG_TASK unsigned long last_switch_count; unsigned long last_switch_time; + unsigned long killed_time; #endif /* Filesystem information: */ struct fs_struct *fs; @@ -1201,6 +1202,13 @@ struct task_struct { unsigned long prev_lowest_stack; #endif +#ifdef CONFIG_DEBUG_AID_FOR_SYZBOT + unsigned long getblk_stamp; + unsigned int getblk_executed; + unsigned int getblk_bh_count; + unsigned long getblk_bh_state; +#endif + /* * New fields for task_struct should be added above here, so that * they are included in the randomized portion of task_struct. diff --git a/include/linux/sched/signal.h b/include/linux/sched/signal.h index 532458698bde..01add55a609b 100644 --- a/include/linux/sched/signal.h +++ b/include/linux/sched/signal.h @@ -15,10 +15,10 @@ */ struct sighand_struct { - refcount_t count; - struct k_sigaction action[_NSIG]; spinlock_t siglock; + refcount_t count; wait_queue_head_t signalfd_wqh; + struct k_sigaction action[_NSIG]; }; /* diff --git a/include/linux/signal.h b/include/linux/signal.h index 9702016734b1..78c2bb376954 100644 --- a/include/linux/signal.h +++ b/include/linux/signal.h @@ -276,7 +276,7 @@ extern int sigprocmask(int, sigset_t *, sigset_t *); extern int set_user_sigmask(const sigset_t __user *usigmask, sigset_t *set, sigset_t *oldset, size_t sigsetsize); extern void restore_user_sigmask(const void __user *usigmask, - sigset_t *sigsaved); + sigset_t *sigsaved, bool interrupted); extern void set_current_blocked(sigset_t *); extern void __set_current_blocked(const sigset_t *); extern int show_unhandled_signals; diff --git a/include/linux/swap.h b/include/linux/swap.h index 4bfb5c4ac108..de2c67a33b7e 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -148,7 +148,7 @@ struct zone; * We always assume that blocks are of size PAGE_SIZE. */ struct swap_extent { - struct list_head list; + struct rb_node rb_node; pgoff_t start_page; pgoff_t nr_pages; sector_t start_block; @@ -175,8 +175,9 @@ enum { SWP_PAGE_DISCARD = (1 << 10), /* freed swap page-cluster discards */ SWP_STABLE_WRITES = (1 << 11), /* no overwrite PG_writeback pages */ SWP_SYNCHRONOUS_IO = (1 << 12), /* synchronous IO is efficient */ + SWP_VALID = (1 << 13), /* swap is valid to be operated on? */ /* add others here before... 
*/ - SWP_SCANNING = (1 << 13), /* refcount in scan_swap_map */ + SWP_SCANNING = (1 << 14), /* refcount in scan_swap_map */ }; #define SWAP_CLUSTER_MAX 32UL @@ -247,8 +248,7 @@ struct swap_info_struct { unsigned int cluster_next; /* likely index for next allocation */ unsigned int cluster_nr; /* countdown to next cluster search */ struct percpu_cluster __percpu *percpu_cluster; /* per cpu's swap location */ - struct swap_extent *curr_swap_extent; - struct swap_extent first_swap_extent; + struct rb_root swap_extent_root;/* root of the swap extent rbtree */ struct block_device *bdev; /* swap device or bdev of swap file */ struct file *swap_file; /* seldom referenced */ unsigned int old_block_size; /* seldom referenced */ @@ -460,7 +460,7 @@ extern unsigned int count_swap_pages(int, int); extern sector_t map_swap_page(struct page *, struct block_device **); extern sector_t swapdev_block(int, pgoff_t); extern int page_swapcount(struct page *); -extern int __swap_count(struct swap_info_struct *si, swp_entry_t entry); +extern int __swap_count(swp_entry_t entry); extern int __swp_swapcount(swp_entry_t entry); extern int swp_swapcount(swp_entry_t entry); extern struct swap_info_struct *page_swap_info(struct page *); @@ -470,6 +470,12 @@ extern int try_to_free_swap(struct page *); struct backing_dev_info; extern int init_swap_address_space(unsigned int type, unsigned long nr_pages); extern void exit_swap_address_space(unsigned int type); +extern struct swap_info_struct *get_swap_device(swp_entry_t entry); + +static inline void put_swap_device(struct swap_info_struct *si) +{ + rcu_read_unlock(); +} #else /* CONFIG_SWAP */ @@ -576,7 +582,7 @@ static inline int page_swapcount(struct page *page) return 0; } -static inline int __swap_count(struct swap_info_struct *si, swp_entry_t entry) +static inline int __swap_count(swp_entry_t entry) { return 0; } diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h index b769ecfcc3bd..490f78565a02 100644 --- a/include/linux/sysctl.h +++ b/include/linux/sysctl.h @@ -37,6 +37,13 @@ struct ctl_table_root; struct ctl_table_header; struct ctl_dir; +/* Keep the same order as in fs/proc/proc_sysctl.c */ +#define SYSCTL_ZERO ((void *)&sysctl_vals[0]) +#define SYSCTL_ONE ((void *)&sysctl_vals[1]) +#define SYSCTL_INT_MAX ((void *)&sysctl_vals[2]) + +extern const int sysctl_vals[]; + typedef int proc_handler (struct ctl_table *ctl, int write, void __user *buffer, size_t *lenp, loff_t *ppos); diff --git a/include/linux/tracehook.h b/include/linux/tracehook.h index 09d678433fc0..180fcf8ce63b 100644 --- a/include/linux/tracehook.h +++ b/include/linux/tracehook.h @@ -54,13 +54,15 @@ struct linux_binprm; /* * ptrace report for syscall entry and exit looks identical. */ -static inline int ptrace_report_syscall(struct pt_regs *regs) +static inline int ptrace_report_syscall(struct pt_regs *regs, + unsigned long message) { int ptrace = current->ptrace; if (!(ptrace & PT_PTRACED)) return 0; + current->ptrace_message = message; ptrace_notify(SIGTRAP | ((ptrace & PT_TRACESYSGOOD) ? 
0x80 : 0)); /* @@ -73,6 +75,7 @@ static inline int ptrace_report_syscall(struct pt_regs *regs) current->exit_code = 0; } + current->ptrace_message = 0; return fatal_signal_pending(current); } @@ -98,7 +101,7 @@ static inline int ptrace_report_syscall(struct pt_regs *regs) static inline __must_check int tracehook_report_syscall_entry( struct pt_regs *regs) { - return ptrace_report_syscall(regs); + return ptrace_report_syscall(regs, PTRACE_EVENTMSG_SYSCALL_ENTRY); } /** @@ -123,7 +126,7 @@ static inline void tracehook_report_syscall_exit(struct pt_regs *regs, int step) if (step) user_single_step_report(regs); else - ptrace_report_syscall(regs); + ptrace_report_syscall(regs, PTRACE_EVENTMSG_SYSCALL_EXIT); } /** diff --git a/include/linux/zstd.h b/include/linux/zstd.h index 249575e2485f..a1e34838a861 100644 --- a/include/linux/zstd.h +++ b/include/linux/zstd.h @@ -164,9 +164,9 @@ typedef struct { * The default value is all fields set to 0. */ typedef struct { - unsigned int contentSizeFlag; - unsigned int checksumFlag; - unsigned int noDictIDFlag; + unsigned char contentSizeFlag; + unsigned char checksumFlag; + unsigned char noDictIDFlag; } ZSTD_frameParameters; /** @@ -254,7 +254,7 @@ ZSTD_CCtx *ZSTD_initCCtx(void *workspace, size_t workspaceSize); * ZSTD_isError(). */ size_t ZSTD_compressCCtx(ZSTD_CCtx *ctx, void *dst, size_t dstCapacity, - const void *src, size_t srcSize, ZSTD_parameters params); + const void *src, size_t srcSize, const ZSTD_parameters *params); /** * ZSTD_DCtxWorkspaceBound() - amount of memory needed to initialize a ZSTD_DCtx @@ -324,7 +324,7 @@ size_t ZSTD_decompressDCtx(ZSTD_DCtx *ctx, void *dst, size_t dstCapacity, */ size_t ZSTD_compress_usingDict(ZSTD_CCtx *ctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize, const void *dict, size_t dictSize, - ZSTD_parameters params); + const ZSTD_parameters *params); /** * ZSTD_decompress_usingDict() - decompress src into dst using a dictionary @@ -381,7 +381,7 @@ typedef struct ZSTD_CDict_s ZSTD_CDict; * Return: The digested dictionary emplaced into workspace. */ ZSTD_CDict *ZSTD_initCDict(const void *dictBuffer, size_t dictSize, - ZSTD_parameters params, void *workspace, size_t workspaceSize); + const ZSTD_parameters *params, void *workspace, size_t workspaceSize); /** * ZSTD_compress_usingCDict() - compress src into dst using a ZSTD_CDict @@ -552,7 +552,7 @@ typedef struct ZSTD_CStream_s ZSTD_CStream; * * Return: The zstd streaming compression context. 
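/*
 * Editor's note, not part of the patch: with the prototype changes above,
 * ZSTD_parameters is now passed by const pointer instead of by value. A
 * hypothetical caller (workspace handling elided; assumes ZSTD_getParams()
 * still returns the structure by value) would now do:
 */
	ZSTD_parameters params = ZSTD_getParams(3, src_len, 0);
	ZSTD_CStream *cstream;

	cstream = ZSTD_initCStream(&params, src_len, wksp, wksp_len);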
*/ -ZSTD_CStream *ZSTD_initCStream(ZSTD_parameters params, +ZSTD_CStream *ZSTD_initCStream(const ZSTD_parameters *params, unsigned long long pledgedSrcSize, void *workspace, size_t workspaceSize); @@ -1006,7 +1006,7 @@ size_t ZSTD_compressBegin(ZSTD_CCtx *cctx, int compressionLevel); size_t ZSTD_compressBegin_usingDict(ZSTD_CCtx *cctx, const void *dict, size_t dictSize, int compressionLevel); size_t ZSTD_compressBegin_advanced(ZSTD_CCtx *cctx, const void *dict, - size_t dictSize, ZSTD_parameters params, + size_t dictSize, const ZSTD_parameters *params, unsigned long long pledgedSrcSize); size_t ZSTD_copyCCtx(ZSTD_CCtx *cctx, const ZSTD_CCtx *preparedCCtx, unsigned long long pledgedSrcSize); diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h index a5ab2973e8dc..c37e2280e6dd 100644 --- a/include/trace/events/vmscan.h +++ b/include/trace/events/vmscan.h @@ -127,18 +127,43 @@ DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_direct_reclaim_b ); #ifdef CONFIG_MEMCG -DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_memcg_reclaim_begin, +DECLARE_EVENT_CLASS(mm_vmscan_memcg_reclaim_begin_template, - TP_PROTO(int order, gfp_t gfp_flags), + TP_PROTO(unsigned int cgroup_ino, int order, gfp_t gfp_flags), - TP_ARGS(order, gfp_flags) + TP_ARGS(cgroup_ino, order, gfp_flags), + + TP_STRUCT__entry( + __field(unsigned int, cgroup_ino) + __field(int, order) + __field(gfp_t, gfp_flags) + ), + + TP_fast_assign( + __entry->cgroup_ino = cgroup_ino; + __entry->order = order; + __entry->gfp_flags = gfp_flags; + ), + + TP_printk("cgroup_ino=%u order=%d gfp_flags=%s", + __entry->cgroup_ino, __entry->order, + show_gfp_flags(__entry->gfp_flags)) ); -DEFINE_EVENT(mm_vmscan_direct_reclaim_begin_template, mm_vmscan_memcg_softlimit_reclaim_begin, +DEFINE_EVENT(mm_vmscan_memcg_reclaim_begin_template, + mm_vmscan_memcg_reclaim_begin, - TP_PROTO(int order, gfp_t gfp_flags), + TP_PROTO(unsigned int cgroup_ino, int order, gfp_t gfp_flags), - TP_ARGS(order, gfp_flags) + TP_ARGS(cgroup_ino, order, gfp_flags) +); + +DEFINE_EVENT(mm_vmscan_memcg_reclaim_begin_template, + mm_vmscan_memcg_softlimit_reclaim_begin, + + TP_PROTO(unsigned int cgroup_ino, int order, gfp_t gfp_flags), + + TP_ARGS(cgroup_ino, order, gfp_flags) ); #endif /* CONFIG_MEMCG */ @@ -167,18 +192,40 @@ DEFINE_EVENT(mm_vmscan_direct_reclaim_end_template, mm_vmscan_direct_reclaim_end ); #ifdef CONFIG_MEMCG -DEFINE_EVENT(mm_vmscan_direct_reclaim_end_template, mm_vmscan_memcg_reclaim_end, +DECLARE_EVENT_CLASS(mm_vmscan_memcg_reclaim_end_template, - TP_PROTO(unsigned long nr_reclaimed), + TP_PROTO(unsigned int cgroup_ino, unsigned long nr_reclaimed), - TP_ARGS(nr_reclaimed) + TP_ARGS(cgroup_ino, nr_reclaimed), + + TP_STRUCT__entry( + __field(unsigned int, cgroup_ino) + __field(unsigned long, nr_reclaimed) + ), + + TP_fast_assign( + __entry->cgroup_ino = cgroup_ino; + __entry->nr_reclaimed = nr_reclaimed; + ), + + TP_printk("cgroup_ino=%u nr_reclaimed=%lu", + __entry->cgroup_ino, __entry->nr_reclaimed) ); -DEFINE_EVENT(mm_vmscan_direct_reclaim_end_template, mm_vmscan_memcg_softlimit_reclaim_end, +DEFINE_EVENT(mm_vmscan_memcg_reclaim_end_template, + mm_vmscan_memcg_reclaim_end, - TP_PROTO(unsigned long nr_reclaimed), + TP_PROTO(unsigned int cgroup_ino, unsigned long nr_reclaimed), - TP_ARGS(nr_reclaimed) + TP_ARGS(cgroup_ino, nr_reclaimed) +); + +DEFINE_EVENT(mm_vmscan_memcg_reclaim_end_template, + mm_vmscan_memcg_softlimit_reclaim_end, + + TP_PROTO(unsigned int cgroup_ino, unsigned long nr_reclaimed), + + 
TP_ARGS(cgroup_ino, nr_reclaimed) ); #endif /* CONFIG_MEMCG */ diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h index abd238d0f7a4..ef4623f03156 100644 --- a/include/uapi/asm-generic/mman-common.h +++ b/include/uapi/asm-generic/mman-common.h @@ -25,7 +25,12 @@ # define MAP_UNINITIALIZED 0x0 /* Don't support this flag */ #endif -/* 0x0100 - 0x80000 flags are defined in asm-generic/mman.h */ +/* 0x0100 - 0x4000 flags are defined in asm-generic/mman.h */ +#define MAP_POPULATE 0x008000 /* populate (prefault) pagetables */ +#define MAP_NONBLOCK 0x010000 /* do not block on IO */ +#define MAP_STACK 0x020000 /* give out an address that is best suited for process/thread stacks */ +#define MAP_HUGETLB 0x040000 /* create a huge page mapping */ +#define MAP_SYNC 0x080000 /* perform synchronous page faults for the mapping */ #define MAP_FIXED_NOREPLACE 0x100000 /* MAP_FIXED which doesn't unmap underlying mapping */ /* diff --git a/include/uapi/asm-generic/mman.h b/include/uapi/asm-generic/mman.h index 653687d9771b..57e8195d0b53 100644 --- a/include/uapi/asm-generic/mman.h +++ b/include/uapi/asm-generic/mman.h @@ -9,13 +9,11 @@ #define MAP_EXECUTABLE 0x1000 /* mark it as an executable */ #define MAP_LOCKED 0x2000 /* pages are locked */ #define MAP_NORESERVE 0x4000 /* don't check for reservations */ -#define MAP_POPULATE 0x8000 /* populate (prefault) pagetables */ -#define MAP_NONBLOCK 0x10000 /* do not block on IO */ -#define MAP_STACK 0x20000 /* give out an address that is best suited for process/thread stacks */ -#define MAP_HUGETLB 0x40000 /* create a huge page mapping */ -#define MAP_SYNC 0x80000 /* perform synchronous page faults for the mapping */ -/* Bits [26:31] are reserved, see mman-common.h for MAP_HUGETLB usage */ +/* + * Bits [26:31] are reserved, see asm-generic/hugetlb_encode.h + * for MAP_HUGETLB usage + */ #define MCL_CURRENT 1 /* lock all current mappings */ #define MCL_FUTURE 2 /* lock all future mappings */ diff --git a/include/uapi/linux/byteorder/big_endian.h b/include/uapi/linux/byteorder/big_endian.h index 2199adc6a6c2..3e58b717fb5a 100644 --- a/include/uapi/linux/byteorder/big_endian.h +++ b/include/uapi/linux/byteorder/big_endian.h @@ -2,6 +2,10 @@ #ifndef _UAPI_LINUX_BYTEORDER_BIG_ENDIAN_H #define _UAPI_LINUX_BYTEORDER_BIG_ENDIAN_H +#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ != __ORDER_BIG_ENDIAN__ +#error "Unsupported endianness, check your toolchain" +#endif + #ifndef __BIG_ENDIAN #define __BIG_ENDIAN 4321 #endif diff --git a/include/uapi/linux/byteorder/little_endian.h b/include/uapi/linux/byteorder/little_endian.h index 601c904fd5cd..c9ee440716a5 100644 --- a/include/uapi/linux/byteorder/little_endian.h +++ b/include/uapi/linux/byteorder/little_endian.h @@ -2,6 +2,10 @@ #ifndef _UAPI_LINUX_BYTEORDER_LITTLE_ENDIAN_H #define _UAPI_LINUX_BYTEORDER_LITTLE_ENDIAN_H +#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ != __ORDER_LITTLE_ENDIAN__ +#error "Unsupported endianness, check your toolchain" +#endif + #ifndef __LITTLE_ENDIAN #define __LITTLE_ENDIAN 1234 #endif diff --git a/include/uapi/linux/coda.h b/include/uapi/linux/coda.h index 695fade33c64..5dba636b6e11 100644 --- a/include/uapi/linux/coda.h +++ b/include/uapi/linux/coda.h @@ -86,10 +86,6 @@ typedef unsigned long long u_quad_t; #define inline -struct timespec { - long ts_sec; - long ts_nsec; -}; #else /* DJGPP but not KERNEL */ #include <sys/time.h> typedef unsigned long long u_quad_t; @@ -110,13 +106,6 @@ typedef unsigned long long u_quad_t; #define cdev_t dev_t #endif 
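/*
 * Editor's note, not part of the patch: the MAP_POPULATE/MAP_NONBLOCK/
 * MAP_STACK/MAP_HUGETLB/MAP_SYNC definitions moved into mman-common.h above
 * keep their numeric values, so existing userspace such as the illustrative
 * call below is unaffected:
 */
	addr = mmap(NULL, 2UL << 20, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);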
-#ifdef __CYGWIN32__ -struct timespec { - time_t tv_sec; /* seconds */ - long tv_nsec; /* nanoseconds */ -}; -#endif - #ifndef __BIT_TYPES_DEFINED__ #define __BIT_TYPES_DEFINED__ typedef signed char int8_t; @@ -211,6 +200,11 @@ struct CodaFid { */ enum coda_vtype { C_VNON, C_VREG, C_VDIR, C_VBLK, C_VCHR, C_VLNK, C_VSOCK, C_VFIFO, C_VBAD }; +struct coda_timespec { + int64_t tv_sec; /* seconds */ + long tv_nsec; /* nanoseconds */ +}; + struct coda_vattr { long va_type; /* vnode type (for create) */ u_short va_mode; /* files access mode and type */ @@ -220,9 +214,9 @@ struct coda_vattr { long va_fileid; /* file id */ u_quad_t va_size; /* file size in bytes */ long va_blocksize; /* blocksize preferred for i/o */ - struct timespec va_atime; /* time of last access */ - struct timespec va_mtime; /* time of last modification */ - struct timespec va_ctime; /* time file changed */ + struct coda_timespec va_atime; /* time of last access */ + struct coda_timespec va_mtime; /* time of last modification */ + struct coda_timespec va_ctime; /* time file changed */ u_long va_gen; /* generation number of file */ u_long va_flags; /* flags defined for file */ cdev_t va_rdev; /* device special file represents */ @@ -287,7 +281,8 @@ struct coda_statfs { #define CIOC_KERNEL_VERSION _IOWR('c', 10, size_t) -#define CODA_KERNEL_VERSION 3 /* 128-bit file identifiers */ +// CODA_KERNEL_VERSION 3 /* 128-bit file identifiers */ +#define CODA_KERNEL_VERSION 4 /* 64-bit timespec */ /* * Venus <-> Coda RPC arguments @@ -295,8 +290,8 @@ struct coda_statfs { struct coda_in_hdr { u_int32_t opcode; u_int32_t unique; /* Keep multiple outstanding msgs distinct */ - pid_t pid; - pid_t pgid; + __kernel_pid_t pid; + __kernel_pid_t pgid; vuid_t uid; }; diff --git a/include/uapi/linux/coda_psdev.h b/include/uapi/linux/coda_psdev.h deleted file mode 100644 index aa6623efd2dd..000000000000 --- a/include/uapi/linux/coda_psdev.h +++ /dev/null @@ -1,28 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ -#ifndef _UAPI__CODA_PSDEV_H -#define _UAPI__CODA_PSDEV_H - -#include <linux/magic.h> - -#define CODA_PSDEV_MAJOR 67 -#define MAX_CODADEVS 5 /* how many do we allow */ - - -/* messages between coda filesystem in kernel and Venus */ -struct upc_req { - struct list_head uc_chain; - caddr_t uc_data; - u_short uc_flags; - u_short uc_inSize; /* Size is at most 5000 bytes */ - u_short uc_outSize; - u_short uc_opcode; /* copied from data to save lookup */ - int uc_unique; - wait_queue_head_t uc_sleep; /* process' wait queue */ -}; - -#define CODA_REQ_ASYNC 0x1 -#define CODA_REQ_READ 0x2 -#define CODA_REQ_WRITE 0x4 -#define CODA_REQ_ABORT 0x8 - -#endif /* _UAPI__CODA_PSDEV_H */ diff --git a/include/uapi/linux/ptrace.h b/include/uapi/linux/ptrace.h index d5a1b8a492b9..a71b6e3b03eb 100644 --- a/include/uapi/linux/ptrace.h +++ b/include/uapi/linux/ptrace.h @@ -73,6 +73,41 @@ struct seccomp_metadata { __u64 flags; /* Output: filter's flags */ }; +#define PTRACE_GET_SYSCALL_INFO 0x420e +#define PTRACE_SYSCALL_INFO_NONE 0 +#define PTRACE_SYSCALL_INFO_ENTRY 1 +#define PTRACE_SYSCALL_INFO_EXIT 2 +#define PTRACE_SYSCALL_INFO_SECCOMP 3 + +struct ptrace_syscall_info { + __u8 op; /* PTRACE_SYSCALL_INFO_* */ + __u32 arch __attribute__((__aligned__(sizeof(__u32)))); + __u64 instruction_pointer; + __u64 stack_pointer; + union { + struct { + __u64 nr; + __u64 args[6]; + } entry; + struct { + __s64 rval; + __u8 is_error; + } exit; + struct { + __u64 nr; + __u64 args[6]; + __u32 ret_data; + } seccomp; + }; +}; + +/* + * These values are stored 
in task->ptrace_message + * by tracehook_report_syscall_* to describe the current syscall-stop. + */ +#define PTRACE_EVENTMSG_SYSCALL_ENTRY 1 +#define PTRACE_EVENTMSG_SYSCALL_EXIT 2 + /* Read signals from a shared (process wide) queue */ #define PTRACE_PEEKSIGINFO_SHARED (1 << 0) diff --git a/init/Kconfig b/init/Kconfig index 0e2344389501..2f7f52c5efb8 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1749,7 +1749,7 @@ config SLAB_FREELIST_HARDENED help Many kernel heap attacks try to target slab cache metadata and other infrastructure. This options makes minor performance - sacrifies to harden the kernel slab allocator against common + sacrifices to harden the kernel slab allocator against common freelist exploit methods. config SHUFFLE_PAGE_ALLOCATOR @@ -1781,7 +1781,7 @@ config SLUB_CPU_PARTIAL depends on SLUB && SMP bool "SLUB per cpu partial cache" help - Per cpu partial caches accellerate objects allocation and freeing + Per cpu partial caches accelerate objects allocation and freeing that is local to a processor at the price of more indeterminism in the latency of the free. On overflow these caches will be cleared which requires the taking of locks that may cause latency spikes. diff --git a/ipc/ipc_sysctl.c b/ipc/ipc_sysctl.c index bfaae457810c..d2002d49e8db 100644 --- a/ipc/ipc_sysctl.c +++ b/ipc/ipc_sysctl.c @@ -117,9 +117,6 @@ static int proc_ipc_sem_dointvec(struct ctl_table *table, int write, #define proc_ipc_sem_dointvec NULL #endif -static int zero; -static int one = 1; -static int int_max = INT_MAX; int ipc_mni = IPCMNI; int ipc_mni_shift = IPCMNI_SHIFT; int ipc_min_cycle = RADIX_TREE_MAP_SIZE; @@ -145,7 +142,7 @@ static struct ctl_table ipc_kern_table[] = { .maxlen = sizeof(init_ipc_ns.shm_ctlmni), .mode = 0644, .proc_handler = proc_ipc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &ipc_mni, }, { @@ -154,8 +151,8 @@ static struct ctl_table ipc_kern_table[] = { .maxlen = sizeof(init_ipc_ns.shm_rmid_forced), .mode = 0644, .proc_handler = proc_ipc_dointvec_minmax_orphans, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "msgmax", @@ -163,8 +160,8 @@ static struct ctl_table ipc_kern_table[] = { .maxlen = sizeof(init_ipc_ns.msg_ctlmax), .mode = 0644, .proc_handler = proc_ipc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &int_max, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, }, { .procname = "msgmni", @@ -172,7 +169,7 @@ static struct ctl_table ipc_kern_table[] = { .maxlen = sizeof(init_ipc_ns.msg_ctlmni), .mode = 0644, .proc_handler = proc_ipc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &ipc_mni, }, { @@ -181,8 +178,8 @@ static struct ctl_table ipc_kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_ipc_auto_msgmni, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "msgmnb", @@ -190,8 +187,8 @@ static struct ctl_table ipc_kern_table[] = { .maxlen = sizeof(init_ipc_ns.msg_ctlmnb), .mode = 0644, .proc_handler = proc_ipc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &int_max, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, }, { .procname = "sem", @@ -207,8 +204,8 @@ static struct ctl_table ipc_kern_table[] = { .maxlen = sizeof(init_ipc_ns.ids[IPC_SEM_IDS].next_id), .mode = 0644, .proc_handler = proc_ipc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &int_max, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, }, { .procname = "msg_next_id", @@ -216,8 +213,8 @@ static struct ctl_table 
ipc_kern_table[] = { .maxlen = sizeof(init_ipc_ns.ids[IPC_MSG_IDS].next_id), .mode = 0644, .proc_handler = proc_ipc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &int_max, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, }, { .procname = "shm_next_id", @@ -225,8 +222,8 @@ static struct ctl_table ipc_kern_table[] = { .maxlen = sizeof(init_ipc_ns.ids[IPC_SHM_IDS].next_id), .mode = 0644, .proc_handler = proc_ipc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &int_max, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, }, #endif {} diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c index 7a723194ecbe..2b750f13bc8f 100644 --- a/kernel/dma/remap.c +++ b/kernel/dma/remap.c @@ -158,7 +158,7 @@ out: bool dma_in_atomic_pool(void *start, size_t size) { - return addr_in_gen_pool(atomic_pool, (unsigned long)start, size); + return gen_pool_has_addr(atomic_pool, (unsigned long)start, size); } void *dma_alloc_from_pool(size_t size, struct page **ret_page, gfp_t flags) diff --git a/kernel/hung_task.c b/kernel/hung_task.c index 14a625c16cb3..69f54848af79 100644 --- a/kernel/hung_task.c +++ b/kernel/hung_task.c @@ -142,6 +142,47 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout) touch_nmi_watchdog(); } +static void check_killed_task(struct task_struct *t, unsigned long timeout) +{ + unsigned long stamp = t->killed_time; + + /* + * Ensure the task is not frozen. + * Also, skip vfork and any other user process that freezer should skip. + */ + if (unlikely(t->flags & (PF_FROZEN | PF_FREEZER_SKIP))) + return; + /* + * Skip threads which are already inside do_exit(), for exit_mm() etc. + * might take many seconds. + */ + if (t->flags & PF_EXITING) + return; + if (!stamp) { + stamp = jiffies; + if (!stamp) + stamp++; + t->killed_time = stamp; + return; + } + if (time_is_after_jiffies(stamp + timeout * HZ)) + return; + trace_sched_process_hang(t); + if (sysctl_hung_task_panic) { + console_verbose(); + hung_task_call_panic = true; + } + /* + * This thread failed to terminate for more than + * sysctl_hung_task_timeout_secs seconds, complain: + */ + pr_err("INFO: task %s:%d can't die for more than %ld seconds.\n", + t->comm, t->pid, (jiffies - stamp) / HZ); + sched_show_task(t); + hung_task_show_lock = true; + touch_nmi_watchdog(); +} + /* * To avoid extending the RCU grace period for an unbounded amount of time, * periodically exit the critical section and enter a new one. @@ -193,6 +234,9 @@ static void check_hung_uninterruptible_tasks(unsigned long timeout) goto unlock; last_break = jiffies; } + /* Check threads which are about to terminate. 
*/ + if (unlikely(fatal_signal_pending(t))) + check_killed_task(t, timeout); /* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */ if (t->state == TASK_UNINTERRUPTIBLE) check_hung_task(t, timeout); diff --git a/kernel/memremap.c b/kernel/memremap.c index 1490e63f69a9..57980ed4e571 100644 --- a/kernel/memremap.c +++ b/kernel/memremap.c @@ -14,8 +14,6 @@ #include <linux/hmm.h> static DEFINE_XARRAY(pgmap_array); -#define SECTION_MASK ~((1UL << PA_SECTION_SHIFT) - 1) -#define SECTION_SIZE (1UL << PA_SECTION_SHIFT) #if IS_ENABLED(CONFIG_DEVICE_PRIVATE) vm_fault_t device_private_entry_fault(struct vm_area_struct *vma, @@ -95,10 +93,11 @@ static void devm_memremap_pages_release(void *data) pgmap->kill(pgmap->ref); for_each_device_pfn(pfn, pgmap) put_page(pfn_to_page(pfn)); + pgmap->cleanup(pgmap->ref); /* pages are dead and unused, undo the arch mapping */ - align_start = res->start & ~(SECTION_SIZE - 1); - align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE) + align_start = res->start & ~(PA_SECTION_SIZE - 1); + align_size = ALIGN(res->start + resource_size(res), PA_SECTION_SIZE) - align_start; nid = page_to_nid(pfn_to_page(align_start >> PAGE_SHIFT)); @@ -133,8 +132,8 @@ static void devm_memremap_pages_release(void *data) * 2/ The altmap field may optionally be initialized, in which case altmap_valid * must be set to true * - * 3/ pgmap->ref must be 'live' on entry and will be killed at - * devm_memremap_pages_release() time, or if this routine fails. + * 3/ pgmap->ref must be 'live' on entry and will be killed and reaped + * at devm_memremap_pages_release() time, or if this routine fails. * * 4/ res is expected to be a host memory range that could feasibly be * treated as a "System RAM" range, i.e. not a device mmio range, but @@ -156,11 +155,13 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap) pgprot_t pgprot = PAGE_KERNEL; int error, nid, is_ram; - if (!pgmap->ref || !pgmap->kill) + if (!pgmap->ref || !pgmap->kill || !pgmap->cleanup) { + WARN(1, "Missing reference count teardown definition\n"); return ERR_PTR(-EINVAL); + } - align_start = res->start & ~(SECTION_SIZE - 1); - align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE) + align_start = res->start & ~(PA_SECTION_SIZE - 1); + align_size = ALIGN(res->start + resource_size(res), PA_SECTION_SIZE) - align_start; align_end = align_start + align_size - 1; @@ -168,14 +169,16 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap) if (conflict_pgmap) { dev_WARN(dev, "Conflicting mapping in same section\n"); put_dev_pagemap(conflict_pgmap); - return ERR_PTR(-ENOMEM); + error = -ENOMEM; + goto err_array; } conflict_pgmap = get_dev_pagemap(PHYS_PFN(align_end), NULL); if (conflict_pgmap) { dev_WARN(dev, "Conflicting mapping in same section\n"); put_dev_pagemap(conflict_pgmap); - return ERR_PTR(-ENOMEM); + error = -ENOMEM; + goto err_array; } is_ram = region_intersects(align_start, align_size, @@ -267,10 +270,18 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap) pgmap_array_delete(res); err_array: pgmap->kill(pgmap->ref); + pgmap->cleanup(pgmap->ref); + return ERR_PTR(error); } EXPORT_SYMBOL_GPL(devm_memremap_pages); +void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap) +{ + devm_release_action(dev, devm_memremap_pages_release, pgmap); +} +EXPORT_SYMBOL_GPL(devm_memunmap_pages); + unsigned long vmem_altmap_offset(struct vmem_altmap *altmap) { /* number of pfns from base where pfn_to_page() is valid */ diff --git a/kernel/pid.c 
b/kernel/pid.c index 39181ccca846..09ed3242408a 100644 --- a/kernel/pid.c +++ b/kernel/pid.c @@ -37,13 +37,13 @@ #include <linux/init_task.h> #include <linux/syscalls.h> #include <linux/proc_ns.h> -#include <linux/proc_fs.h> +#include <linux/refcount.h> #include <linux/sched/signal.h> #include <linux/sched/task.h> #include <linux/idr.h> struct pid init_struct_pid = { - .count = ATOMIC_INIT(1), + .count = REFCOUNT_INIT(1), .tasks = { { .first = NULL }, { .first = NULL }, @@ -107,8 +107,8 @@ void put_pid(struct pid *pid) return; ns = pid->numbers[pid->level].ns; - if ((atomic_read(&pid->count) == 1) || - atomic_dec_and_test(&pid->count)) { + if ((refcount_read(&pid->count) == 1) || + refcount_dec_and_test(&pid->count)) { kmem_cache_free(ns->pid_cachep, pid); put_pid_ns(ns); } @@ -211,7 +211,7 @@ struct pid *alloc_pid(struct pid_namespace *ns) } get_pid_ns(ns); - atomic_set(&pid->count, 1); + refcount_set(&pid->count, 1); for (type = 0; type < PIDTYPE_MAX; ++type) INIT_HLIST_HEAD(&pid->tasks[type]); diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c index 6d726cef241c..a6a79f85c81a 100644 --- a/kernel/pid_namespace.c +++ b/kernel/pid_namespace.c @@ -291,14 +291,13 @@ static int pid_ns_ctl_handler(struct ctl_table *table, int write, } extern int pid_max; -static int zero = 0; static struct ctl_table pid_ns_ctl_table[] = { { .procname = "ns_last_pid", .maxlen = sizeof(int), .mode = 0666, /* permissions are checked in the handler */ .proc_handler = pid_ns_ctl_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &pid_max, }, { } diff --git a/kernel/ptrace.c b/kernel/ptrace.c index ab14654b2436..3b325c8ebf26 100644 --- a/kernel/ptrace.c +++ b/kernel/ptrace.c @@ -32,6 +32,8 @@ #include <linux/compat.h> #include <linux/sched/signal.h> +#include <asm/syscall.h> /* for syscall_get_* */ + /* * Access another process' address space via ptrace. * Source/target buffer must be kernel space, @@ -883,7 +885,100 @@ static int ptrace_regset(struct task_struct *task, int req, unsigned int type, * to ensure no machine forgets it. */ EXPORT_SYMBOL_GPL(task_user_regset_view); -#endif + +static unsigned long +ptrace_get_syscall_info_entry(struct task_struct *child, struct pt_regs *regs, + struct ptrace_syscall_info *info) +{ + unsigned long args[ARRAY_SIZE(info->entry.args)]; + int i; + + info->op = PTRACE_SYSCALL_INFO_ENTRY; + info->entry.nr = syscall_get_nr(child, regs); + syscall_get_arguments(child, regs, args); + for (i = 0; i < ARRAY_SIZE(args); i++) + info->entry.args[i] = args[i]; + + /* args is the last field in struct ptrace_syscall_info.entry */ + return offsetofend(struct ptrace_syscall_info, entry.args); +} + +static unsigned long +ptrace_get_syscall_info_seccomp(struct task_struct *child, struct pt_regs *regs, + struct ptrace_syscall_info *info) +{ + /* + * As struct ptrace_syscall_info.entry is currently a subset + * of struct ptrace_syscall_info.seccomp, it makes sense to + * initialize that subset using ptrace_get_syscall_info_entry(). + * This can be reconsidered in the future if these structures + * diverge significantly enough. 
+ */ + ptrace_get_syscall_info_entry(child, regs, info); + info->op = PTRACE_SYSCALL_INFO_SECCOMP; + info->seccomp.ret_data = child->ptrace_message; + + /* ret_data is the last field in struct ptrace_syscall_info.seccomp */ + return offsetofend(struct ptrace_syscall_info, seccomp.ret_data); +} + +static unsigned long +ptrace_get_syscall_info_exit(struct task_struct *child, struct pt_regs *regs, + struct ptrace_syscall_info *info) +{ + info->op = PTRACE_SYSCALL_INFO_EXIT; + info->exit.rval = syscall_get_error(child, regs); + info->exit.is_error = !!info->exit.rval; + if (!info->exit.is_error) + info->exit.rval = syscall_get_return_value(child, regs); + + /* is_error is the last field in struct ptrace_syscall_info.exit */ + return offsetofend(struct ptrace_syscall_info, exit.is_error); +} + +static int +ptrace_get_syscall_info(struct task_struct *child, unsigned long user_size, + void __user *datavp) +{ + struct pt_regs *regs = task_pt_regs(child); + struct ptrace_syscall_info info = { + .op = PTRACE_SYSCALL_INFO_NONE, + .arch = syscall_get_arch(child), + .instruction_pointer = instruction_pointer(regs), + .stack_pointer = user_stack_pointer(regs), + }; + unsigned long actual_size = offsetof(struct ptrace_syscall_info, entry); + unsigned long write_size; + + /* + * This does not need lock_task_sighand() to access + * child->last_siginfo because ptrace_freeze_traced() + * called earlier by ptrace_check_attach() ensures that + * the tracee cannot go away and clear its last_siginfo. + */ + switch (child->last_siginfo ? child->last_siginfo->si_code : 0) { + case SIGTRAP | 0x80: + switch (child->ptrace_message) { + case PTRACE_EVENTMSG_SYSCALL_ENTRY: + actual_size = ptrace_get_syscall_info_entry(child, regs, + &info); + break; + case PTRACE_EVENTMSG_SYSCALL_EXIT: + actual_size = ptrace_get_syscall_info_exit(child, regs, + &info); + break; + } + break; + case SIGTRAP | (PTRACE_EVENT_SECCOMP << 8): + actual_size = ptrace_get_syscall_info_seccomp(child, regs, + &info); + break; + } + + write_size = min(actual_size, user_size); + return copy_to_user(datavp, &info, write_size) ? -EFAULT : actual_size; +} +#endif /* CONFIG_HAVE_ARCH_TRACEHOOK */ int ptrace_request(struct task_struct *child, long request, unsigned long addr, unsigned long data) @@ -1100,6 +1195,10 @@ int ptrace_request(struct task_struct *child, long request, ret = __put_user(kiov.iov_len, &uiov->iov_len); break; } + + case PTRACE_GET_SYSCALL_INFO: + ret = ptrace_get_syscall_info(child, addr, datavp); + break; #endif case PTRACE_SECCOMP_GET_FILTER: diff --git a/kernel/signal.c b/kernel/signal.c index 00d239fea7b3..dabe100d2091 100644 --- a/kernel/signal.c +++ b/kernel/signal.c @@ -3000,7 +3000,8 @@ EXPORT_SYMBOL(set_compat_user_sigmask); * This is useful for syscalls such as ppoll, pselect, io_pgetevents and * epoll_pwait where a new sigmask is passed in from userland for the syscalls. */ -void restore_user_sigmask(const void __user *usigmask, sigset_t *sigsaved) +void restore_user_sigmask(const void __user *usigmask, sigset_t *sigsaved, + bool interrupted) { if (!usigmask) @@ -3010,7 +3011,7 @@ void restore_user_sigmask(const void __user *usigmask, sigset_t *sigsaved) * Restoring sigmask here can lead to delivering signals that the above * syscalls are intended to block because of the sigmask passed in. 
*/ - if (signal_pending(current)) { + if (interrupted) { current->saved_sigmask = *sigsaved; set_restore_sigmask(); return; diff --git a/kernel/sysctl.c b/kernel/sysctl.c index 7d1008be6173..e775e1018d72 100644 --- a/kernel/sysctl.c +++ b/kernel/sysctl.c @@ -125,9 +125,6 @@ static int sixty = 60; #endif static int __maybe_unused neg_one = -1; - -static int zero; -static int __maybe_unused one = 1; static int __maybe_unused two = 2; static int __maybe_unused four = 4; static unsigned long zero_ul; @@ -188,17 +185,17 @@ extern int no_unaligned_warning; * enum sysctl_writes_mode - supported sysctl write modes * * @SYSCTL_WRITES_LEGACY: each write syscall must fully contain the sysctl value - * to be written, and multiple writes on the same sysctl file descriptor - * will rewrite the sysctl value, regardless of file position. No warning - * is issued when the initial position is not 0. + * to be written, and multiple writes on the same sysctl file descriptor + * will rewrite the sysctl value, regardless of file position. No warning + * is issued when the initial position is not 0. * @SYSCTL_WRITES_WARN: same as above but warn when the initial file position is - * not 0. + * not 0. * @SYSCTL_WRITES_STRICT: writes to numeric sysctl entries must always be at - * file position 0 and the value must be fully contained in the buffer - * sent to the write syscall. If dealing with strings respect the file - * position, but restrict this to the max length of the buffer, anything - * passed the max lenght will be ignored. Multiple writes will append - * to the buffer. + * file position 0 and the value must be fully contained in the buffer + * sent to the write syscall. If dealing with strings respect the file + * position, but restrict this to the max length of the buffer, anything + * passed the max length will be ignored. Multiple writes will append + * to the buffer. * * These write modes control how current file position affects the behavior of * updating sysctl values through the proc interface on each write. 
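The kernel/sysctl.c hunks that follow (like the ipc_sysctl.c, pid_namespace.c and ucount.c ones elsewhere in this merge) drop each file's private static int zero / one / int_max bounds in favour of the shared SYSCTL_ZERO, SYSCTL_ONE and SYSCTL_INT_MAX pointers. A minimal sketch of the resulting table-entry pattern, assuming the shared constants come from include/linux/sysctl.h as used throughout this series; the knob name, variable and table below are illustrative only and not part of the patch:

	#include <linux/sysctl.h>

	/* Hypothetical boolean sysctl bounded to [0, 1] with the shared constants. */
	static int example_enabled;

	static struct ctl_table example_table[] = {
		{
			.procname	= "example_enabled",	/* illustrative name */
			.data		= &example_enabled,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= SYSCTL_ZERO,		/* replaces a per-file &zero */
			.extra2		= SYSCTL_ONE,		/* replaces a per-file &one */
		},
		{ }
	};
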
@@ -390,8 +387,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = sysctl_schedstats, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif /* CONFIG_SCHEDSTATS */ #endif /* CONFIG_SMP */ @@ -423,7 +420,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, }, { .procname = "numa_balancing", @@ -431,8 +428,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = sysctl_numa_balancing, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif /* CONFIG_NUMA_BALANCING */ #endif /* CONFIG_SCHED_DEBUG */ @@ -464,8 +461,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif #ifdef CONFIG_CFS_BANDWIDTH @@ -475,7 +472,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, }, #endif #if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) @@ -485,8 +482,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = sched_energy_aware_handler, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif #ifdef CONFIG_PROVE_LOCKING @@ -551,7 +548,7 @@ static struct ctl_table kern_table[] = { .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = &neg_one, - .extra2 = &one, + .extra2 = SYSCTL_ONE, }, #endif #ifdef CONFIG_LATENCYTOP @@ -685,8 +682,8 @@ static struct ctl_table kern_table[] = { .mode = 0644, /* only handle a transition from default "0" to "1" */ .proc_handler = proc_dointvec_minmax, - .extra1 = &one, - .extra2 = &one, + .extra1 = SYSCTL_ONE, + .extra2 = SYSCTL_ONE, }, #endif #ifdef CONFIG_MODULES @@ -704,8 +701,8 @@ static struct ctl_table kern_table[] = { .mode = 0644, /* only handle a transition from default "0" to "1" */ .proc_handler = proc_dointvec_minmax, - .extra1 = &one, - .extra2 = &one, + .extra1 = SYSCTL_ONE, + .extra2 = SYSCTL_ONE, }, #endif #ifdef CONFIG_UEVENT_HELPER @@ -864,7 +861,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &ten_thousand, }, { @@ -880,8 +877,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax_sysadmin, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "kptr_restrict", @@ -889,7 +886,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax_sysadmin, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, }, #endif @@ -914,8 +911,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_watchdog, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "watchdog_thresh", @@ -923,7 +920,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_watchdog_thresh, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &sixty, }, { 
@@ -932,8 +929,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = NMI_WATCHDOG_SYSCTL_PERM, .proc_handler = proc_nmi_watchdog, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "watchdog_cpumask", @@ -949,8 +946,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_soft_watchdog, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "softlockup_panic", @@ -958,8 +955,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #ifdef CONFIG_SMP { @@ -968,8 +965,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif /* CONFIG_SMP */ #endif @@ -980,8 +977,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #ifdef CONFIG_SMP { @@ -990,8 +987,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif /* CONFIG_SMP */ #endif @@ -1104,8 +1101,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "hung_task_check_count", @@ -1113,7 +1110,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "hung_task_timeout_secs", @@ -1190,7 +1187,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(sysctl_perf_event_sample_rate), .mode = 0644, .proc_handler = perf_proc_update_handler, - .extra1 = &one, + .extra1 = SYSCTL_ONE, }, { .procname = "perf_cpu_time_max_percent", @@ -1198,7 +1195,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(sysctl_perf_cpu_time_max_percent), .mode = 0644, .proc_handler = perf_cpu_time_max_percent_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &one_hundred, }, { @@ -1207,7 +1204,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(sysctl_perf_event_max_stack), .mode = 0644, .proc_handler = perf_event_max_stack_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &six_hundred_forty_kb, }, { @@ -1216,7 +1213,7 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(sysctl_perf_event_max_contexts_per_stack), .mode = 0644, .proc_handler = perf_event_max_stack_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &one_thousand, }, #endif @@ -1226,8 +1223,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON) { @@ -1236,8 +1233,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = timer_migration_handler, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + 
.extra2 = SYSCTL_ONE, }, #endif #ifdef CONFIG_BPF_SYSCALL @@ -1248,8 +1245,8 @@ static struct ctl_table kern_table[] = { .mode = 0644, /* only handle a transition from default "0" to "1" */ .proc_handler = proc_dointvec_minmax, - .extra1 = &one, - .extra2 = &one, + .extra1 = SYSCTL_ONE, + .extra2 = SYSCTL_ONE, }, { .procname = "bpf_stats_enabled", @@ -1257,8 +1254,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(sysctl_bpf_stats_enabled), .mode = 0644, .proc_handler = proc_dointvec_minmax_bpf_stats, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif #if defined(CONFIG_TREE_RCU) || defined(CONFIG_PREEMPT_RCU) @@ -1268,8 +1265,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(sysctl_panic_on_rcu_stall), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif #ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE @@ -1279,8 +1276,8 @@ static struct ctl_table kern_table[] = { .maxlen = sizeof(int), .mode = 0600, .proc_handler = stack_erasing_sysctl, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif { } @@ -1293,7 +1290,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_overcommit_memory), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, }, { @@ -1302,7 +1299,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_panic_on_oom), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, }, { @@ -1339,7 +1336,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "dirty_background_ratio", @@ -1347,7 +1344,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(dirty_background_ratio), .mode = 0644, .proc_handler = dirty_background_ratio_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &one_hundred, }, { @@ -1364,7 +1361,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(vm_dirty_ratio), .mode = 0644, .proc_handler = dirty_ratio_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &one_hundred, }, { @@ -1388,7 +1385,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(dirty_expire_interval), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "dirtytime_expire_seconds", @@ -1396,7 +1393,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(dirtytime_expire_interval), .mode = 0644, .proc_handler = dirtytime_interval_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "swappiness", @@ -1404,7 +1401,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(vm_swappiness), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &one_hundred, }, #ifdef CONFIG_HUGETLB_PAGE @@ -1429,8 +1426,8 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = sysctl_vm_numa_stat_handler, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif { @@ -1461,7 +1458,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = drop_caches_sysctl_handler, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &four, }, #ifdef CONFIG_COMPACTION @@ -1487,8 +1484,8 @@ static 
struct ctl_table vm_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif /* CONFIG_COMPACTION */ @@ -1498,7 +1495,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(min_free_kbytes), .mode = 0644, .proc_handler = min_free_kbytes_sysctl_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "watermark_boost_factor", @@ -1506,7 +1503,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(watermark_boost_factor), .mode = 0644, .proc_handler = watermark_boost_factor_sysctl_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "watermark_scale_factor", @@ -1514,7 +1511,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(watermark_scale_factor), .mode = 0644, .proc_handler = watermark_scale_factor_sysctl_handler, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &one_thousand, }, { @@ -1523,7 +1520,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(percpu_pagelist_fraction), .mode = 0644, .proc_handler = percpu_pagelist_fraction_sysctl_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, #ifdef CONFIG_MMU { @@ -1532,7 +1529,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_max_map_count), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, #else { @@ -1541,7 +1538,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_nr_trim_pages), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, #endif { @@ -1557,7 +1554,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(block_dump), .mode = 0644, .proc_handler = proc_dointvec, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "vfs_cache_pressure", @@ -1565,7 +1562,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_vfs_cache_pressure), .mode = 0644, .proc_handler = proc_dointvec, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, #ifdef HAVE_ARCH_PICK_MMAP_LAYOUT { @@ -1574,7 +1571,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_legacy_va_layout), .mode = 0644, .proc_handler = proc_dointvec, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, #endif #ifdef CONFIG_NUMA @@ -1584,7 +1581,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(node_reclaim_mode), .mode = 0644, .proc_handler = proc_dointvec, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "min_unmapped_ratio", @@ -1592,7 +1589,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_min_unmapped_ratio), .mode = 0644, .proc_handler = sysctl_min_unmapped_ratio_sysctl_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &one_hundred, }, { @@ -1601,7 +1598,7 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_min_slab_ratio), .mode = 0644, .proc_handler = sysctl_min_slab_ratio_sysctl_handler, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &one_hundred, }, #endif @@ -1652,7 +1649,7 @@ static struct ctl_table vm_table[] = { #endif .mode = 0644, .proc_handler = proc_dointvec, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, #endif #ifdef CONFIG_HIGHMEM @@ -1662,8 +1659,8 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(vm_highmem_is_dirtyable), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif #ifdef CONFIG_MEMORY_FAILURE @@ -1673,8 +1670,8 @@ static struct ctl_table vm_table[] 
= { .maxlen = sizeof(sysctl_memory_failure_early_kill), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "memory_failure_recovery", @@ -1682,8 +1679,8 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_memory_failure_recovery), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif { @@ -1729,8 +1726,8 @@ static struct ctl_table vm_table[] = { .maxlen = sizeof(sysctl_unprivileged_userfaultfd), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif { } @@ -1866,8 +1863,8 @@ static struct ctl_table fs_table[] = { .maxlen = sizeof(int), .mode = 0600, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "protected_hardlinks", @@ -1875,8 +1872,8 @@ static struct ctl_table fs_table[] = { .maxlen = sizeof(int), .mode = 0600, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "protected_fifos", @@ -1884,7 +1881,7 @@ static struct ctl_table fs_table[] = { .maxlen = sizeof(int), .mode = 0600, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, }, { @@ -1893,7 +1890,7 @@ static struct ctl_table fs_table[] = { .maxlen = sizeof(int), .mode = 0600, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, }, { @@ -1902,7 +1899,7 @@ static struct ctl_table fs_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax_coredump, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, }, #if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE) @@ -1939,7 +1936,7 @@ static struct ctl_table fs_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, }, { } }; @@ -1961,8 +1958,8 @@ static struct ctl_table debug_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_kprobes_optimization_handler, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif { } diff --git a/kernel/ucount.c b/kernel/ucount.c index f48d1b6376a4..c56a44539ca6 100644 --- a/kernel/ucount.c +++ b/kernel/ucount.c @@ -57,16 +57,14 @@ static struct ctl_table_root set_root = { .permissions = set_permissions, }; -static int zero = 0; -static int int_max = INT_MAX; #define UCOUNT_ENTRY(name) \ { \ .procname = name, \ .maxlen = sizeof(int), \ .mode = 0644, \ .proc_handler = proc_dointvec_minmax, \ - .extra1 = &zero, \ - .extra2 = &int_max, \ + .extra1 = SYSCTL_ZERO, \ + .extra2 = SYSCTL_INT_MAX, \ } static struct ctl_table user_table[] = { UCOUNT_ENTRY("max_user_namespaces"), diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index cbdfae379896..d08f5848958e 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -2040,6 +2040,14 @@ config TEST_STACKINIT If unsure, say N. +config TEST_MEMINIT + tristate "Test heap/page initialization" + help + Test if the kernel is zero-initializing heap and page allocations. + This can be useful to test init_on_alloc and init_on_free features. + + If unsure, say N. 
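The TEST_MEMINIT entry above gates lib/test_meminit.c, which is added further down in this diff; its pass/fail criterion is simply that freshly allocated heap and page memory reads back as zeroes once the init_on_alloc/init_on_free behaviour named in the help text is active. A compressed sketch of that check, using a plain GFP_KERNEL kmalloc() as the test itself does (the function name here is illustrative only):

	#include <linux/init.h>
	#include <linux/slab.h>

	/* Sketch: a fresh allocation should contain no non-zero bytes. */
	static int __init zeroing_smoke_check(void)
	{
		unsigned char *buf = kmalloc(512, GFP_KERNEL);
		size_t i, dirty = 0;

		if (!buf)
			return -ENOMEM;
		for (i = 0; i < 512; i++)
			if (buf[i])
				dirty++;
		kfree(buf);
		return dirty ? -EINVAL : 0;	/* non-zero bytes mean missing initialization */
	}
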
+ endif # RUNTIME_TESTING_MENU config MEMTEST @@ -2108,6 +2116,12 @@ config IO_STRICT_DEVMEM If in doubt, say Y. +config DEBUG_AID_FOR_SYZBOT + bool "Additional debug code for syzbot" + default n + help + This option is intended for testing by syzbot. + source "arch/$(SRCARCH)/Kconfig.debug" endmenu # Kernel hacking diff --git a/lib/Makefile b/lib/Makefile index fb7697031a79..05980c802500 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -91,6 +91,7 @@ obj-$(CONFIG_TEST_DEBUG_VIRTUAL) += test_debug_virtual.o obj-$(CONFIG_TEST_MEMCAT_P) += test_memcat_p.o obj-$(CONFIG_TEST_OBJAGG) += test_objagg.o obj-$(CONFIG_TEST_STACKINIT) += test_stackinit.o +obj-$(CONFIG_TEST_MEMINIT) += test_meminit.o obj-$(CONFIG_TEST_LIVEPATCH) += livepatch/ diff --git a/lib/debugobjects.c b/lib/debugobjects.c index 55437fd5128b..1546a3c00709 100644 --- a/lib/debugobjects.c +++ b/lib/debugobjects.c @@ -375,6 +375,7 @@ __debug_object_init(void *addr, struct debug_obj_descr *descr, int onstack) struct debug_bucket *db; struct debug_obj *obj; unsigned long flags; + bool check_object_on_stack = false; fill_pool(); @@ -391,7 +392,7 @@ __debug_object_init(void *addr, struct debug_obj_descr *descr, int onstack) debug_objects_oom(); return; } - debug_object_is_on_stack(addr, onstack); + check_object_on_stack = true; } switch (obj->state) { @@ -402,20 +403,24 @@ __debug_object_init(void *addr, struct debug_obj_descr *descr, int onstack) break; case ODEBUG_STATE_ACTIVE: - debug_print_object(obj, "init"); state = obj->state; raw_spin_unlock_irqrestore(&db->lock, flags); + debug_print_object(obj, "init"); debug_object_fixup(descr->fixup_init, addr, state); return; case ODEBUG_STATE_DESTROYED: + raw_spin_unlock_irqrestore(&db->lock, flags); debug_print_object(obj, "init"); - break; + return; default: break; } raw_spin_unlock_irqrestore(&db->lock, flags); + if (check_object_on_stack) + debug_object_is_on_stack(addr, onstack); + } /** @@ -473,33 +478,34 @@ int debug_object_activate(void *addr, struct debug_obj_descr *descr) obj = lookup_object(addr, db); if (obj) { + ret = 0; switch (obj->state) { case ODEBUG_STATE_INIT: case ODEBUG_STATE_INACTIVE: obj->state = ODEBUG_STATE_ACTIVE; - ret = 0; break; case ODEBUG_STATE_ACTIVE: - debug_print_object(obj, "activate"); state = obj->state; raw_spin_unlock_irqrestore(&db->lock, flags); + debug_print_object(obj, "activate"); ret = debug_object_fixup(descr->fixup_activate, addr, state); return ret ? 0 : -EINVAL; case ODEBUG_STATE_DESTROYED: - debug_print_object(obj, "activate"); ret = -EINVAL; break; default: - ret = 0; break; } raw_spin_unlock_irqrestore(&db->lock, flags); + if (ret) + debug_print_object(obj, "activate"); return ret; } raw_spin_unlock_irqrestore(&db->lock, flags); + /* * We are here when a static object is activated. 
We * let the type specific code confirm whether this is @@ -548,24 +554,30 @@ void debug_object_deactivate(void *addr, struct debug_obj_descr *descr) if (!obj->astate) obj->state = ODEBUG_STATE_INACTIVE; else - debug_print_object(obj, "deactivate"); + goto out_unlock_print; break; case ODEBUG_STATE_DESTROYED: - debug_print_object(obj, "deactivate"); - break; + goto out_unlock_print; + default: break; } - } else { + } + + raw_spin_unlock_irqrestore(&db->lock, flags); + if (!obj) { struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr }; debug_print_object(&o, "deactivate"); } + return; +out_unlock_print: raw_spin_unlock_irqrestore(&db->lock, flags); + debug_print_object(obj, "deactivate"); } EXPORT_SYMBOL_GPL(debug_object_deactivate); @@ -599,15 +611,16 @@ void debug_object_destroy(void *addr, struct debug_obj_descr *descr) obj->state = ODEBUG_STATE_DESTROYED; break; case ODEBUG_STATE_ACTIVE: - debug_print_object(obj, "destroy"); state = obj->state; raw_spin_unlock_irqrestore(&db->lock, flags); + debug_print_object(obj, "destroy"); debug_object_fixup(descr->fixup_destroy, addr, state); return; case ODEBUG_STATE_DESTROYED: + raw_spin_unlock_irqrestore(&db->lock, flags); debug_print_object(obj, "destroy"); - break; + return; default: break; } @@ -641,9 +654,9 @@ void debug_object_free(void *addr, struct debug_obj_descr *descr) switch (obj->state) { case ODEBUG_STATE_ACTIVE: - debug_print_object(obj, "free"); state = obj->state; raw_spin_unlock_irqrestore(&db->lock, flags); + debug_print_object(obj, "free"); debug_object_fixup(descr->fixup_free, addr, state); return; default: @@ -731,22 +744,27 @@ debug_object_active_state(void *addr, struct debug_obj_descr *descr, if (obj->astate == expect) obj->astate = next; else - debug_print_object(obj, "active_state"); + goto out_unlock_print; break; default: - debug_print_object(obj, "active_state"); - break; + goto out_unlock_print; } - } else { + } + + raw_spin_unlock_irqrestore(&db->lock, flags); + if (!obj) { struct debug_obj o = { .object = addr, .state = ODEBUG_STATE_NOTAVAILABLE, .descr = descr }; debug_print_object(&o, "active_state"); } + return; +out_unlock_print: raw_spin_unlock_irqrestore(&db->lock, flags); + debug_print_object(obj, "active_state"); } EXPORT_SYMBOL_GPL(debug_object_active_state); @@ -782,10 +800,10 @@ repeat: switch (obj->state) { case ODEBUG_STATE_ACTIVE: - debug_print_object(obj, "free"); descr = obj->descr; state = obj->state; raw_spin_unlock_irqrestore(&db->lock, flags); + debug_print_object(obj, "free"); debug_object_fixup(descr->fixup_free, (void *) oaddr, state); goto repeat; @@ -978,19 +996,24 @@ check_results(void *addr, enum debug_obj_state state, int fixups, int warnings) struct debug_obj *obj; unsigned long flags; int res = -EINVAL; + enum debug_obj_state obj_state; db = get_bucket((unsigned long) addr); raw_spin_lock_irqsave(&db->lock, flags); obj = lookup_object(addr, db); + obj_state = obj ? 
obj->state : state; + + raw_spin_unlock_irqrestore(&db->lock, flags); + if (!obj && state != ODEBUG_STATE_NONE) { WARN(1, KERN_ERR "ODEBUG: selftest object not found\n"); goto out; } - if (obj && obj->state != state) { + if (obj_state != state) { WARN(1, KERN_ERR "ODEBUG: selftest wrong state: %d != %d\n", - obj->state, state); + obj_state, state); goto out; } if (fixups != debug_objects_fixups) { @@ -1005,7 +1028,6 @@ check_results(void *addr, enum debug_obj_state state, int fixups, int warnings) } res = 0; out: - raw_spin_unlock_irqrestore(&db->lock, flags); if (res) debug_objects_enabled = 0; return res; diff --git a/lib/genalloc.c b/lib/genalloc.c index 5db43476a19b..1ed516716846 100644 --- a/lib/genalloc.c +++ b/lib/genalloc.c @@ -168,20 +168,21 @@ struct gen_pool *gen_pool_create(int min_alloc_order, int nid) EXPORT_SYMBOL(gen_pool_create); /** - * gen_pool_add_virt - add a new chunk of special memory to the pool + * gen_pool_add_owner- add a new chunk of special memory to the pool * @pool: pool to add new memory chunk to * @virt: virtual starting address of memory chunk to add to pool * @phys: physical starting address of memory chunk to add to pool * @size: size in bytes of the memory chunk to add to pool * @nid: node id of the node the chunk structure and bitmap should be * allocated on, or -1 + * @owner: private data the publisher would like to recall at alloc time * * Add a new chunk of special memory to the specified pool. * * Returns 0 on success or a -ve errno on failure. */ -int gen_pool_add_virt(struct gen_pool *pool, unsigned long virt, phys_addr_t phys, - size_t size, int nid) +int gen_pool_add_owner(struct gen_pool *pool, unsigned long virt, phys_addr_t phys, + size_t size, int nid, void *owner) { struct gen_pool_chunk *chunk; int nbits = size >> pool->min_alloc_order; @@ -195,6 +196,7 @@ int gen_pool_add_virt(struct gen_pool *pool, unsigned long virt, phys_addr_t phy chunk->phys_addr = phys; chunk->start_addr = virt; chunk->end_addr = virt + size - 1; + chunk->owner = owner; atomic_long_set(&chunk->avail, size); spin_lock(&pool->lock); @@ -203,7 +205,7 @@ int gen_pool_add_virt(struct gen_pool *pool, unsigned long virt, phys_addr_t phy return 0; } -EXPORT_SYMBOL(gen_pool_add_virt); +EXPORT_SYMBOL(gen_pool_add_owner); /** * gen_pool_virt_to_phys - return the physical address of memory @@ -260,35 +262,20 @@ void gen_pool_destroy(struct gen_pool *pool) EXPORT_SYMBOL(gen_pool_destroy); /** - * gen_pool_alloc - allocate special memory from the pool - * @pool: pool to allocate from - * @size: number of bytes to allocate from the pool - * - * Allocate the requested number of bytes from the specified pool. - * Uses the pool allocation function (with first-fit algorithm by default). - * Can not be used in NMI handler on architectures without - * NMI-safe cmpxchg implementation. - */ -unsigned long gen_pool_alloc(struct gen_pool *pool, size_t size) -{ - return gen_pool_alloc_algo(pool, size, pool->algo, pool->data); -} -EXPORT_SYMBOL(gen_pool_alloc); - -/** - * gen_pool_alloc_algo - allocate special memory from the pool + * gen_pool_alloc_algo_owner - allocate special memory from the pool * @pool: pool to allocate from * @size: number of bytes to allocate from the pool * @algo: algorithm passed from caller * @data: data passed to algorithm + * @owner: optionally retrieve the chunk owner * * Allocate the requested number of bytes from the specified pool. * Uses the pool allocation function (with first-fit algorithm by default). 
* Can not be used in NMI handler on architectures without * NMI-safe cmpxchg implementation. */ -unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size, - genpool_algo_t algo, void *data) +unsigned long gen_pool_alloc_algo_owner(struct gen_pool *pool, size_t size, + genpool_algo_t algo, void *data, void **owner) { struct gen_pool_chunk *chunk; unsigned long addr = 0; @@ -299,6 +286,9 @@ unsigned long gen_pool_alloc_algo(struct gen_pool *pool, size_t size, BUG_ON(in_nmi()); #endif + if (owner) + *owner = NULL; + if (size == 0) return 0; @@ -326,12 +316,14 @@ retry: addr = chunk->start_addr + ((unsigned long)start_bit << order); size = nbits << order; atomic_long_sub(size, &chunk->avail); + if (owner) + *owner = chunk->owner; break; } rcu_read_unlock(); return addr; } -EXPORT_SYMBOL(gen_pool_alloc_algo); +EXPORT_SYMBOL(gen_pool_alloc_algo_owner); /** * gen_pool_dma_alloc - allocate special memory from the pool for DMA usage @@ -394,12 +386,14 @@ EXPORT_SYMBOL(gen_pool_dma_zalloc); * @pool: pool to free to * @addr: starting address of memory to free back to pool * @size: size in bytes of memory to free + * @owner: private data stashed at gen_pool_add() time * * Free previously allocated special memory back to the specified * pool. Can not be used in NMI handler on architectures without * NMI-safe cmpxchg implementation. */ -void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) +void gen_pool_free_owner(struct gen_pool *pool, unsigned long addr, size_t size, + void **owner) { struct gen_pool_chunk *chunk; int order = pool->min_alloc_order; @@ -409,6 +403,9 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) BUG_ON(in_nmi()); #endif + if (owner) + *owner = NULL; + nbits = (size + (1UL << order) - 1) >> order; rcu_read_lock(); list_for_each_entry_rcu(chunk, &pool->chunks, next_chunk) { @@ -419,6 +416,8 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) BUG_ON(remain); size = nbits << order; atomic_long_add(size, &chunk->avail); + if (owner) + *owner = chunk->owner; rcu_read_unlock(); return; } @@ -426,7 +425,7 @@ void gen_pool_free(struct gen_pool *pool, unsigned long addr, size_t size) rcu_read_unlock(); BUG(); } -EXPORT_SYMBOL(gen_pool_free); +EXPORT_SYMBOL(gen_pool_free_owner); /** * gen_pool_for_each_chunk - call func for every chunk of generic memory pool @@ -451,7 +450,7 @@ void gen_pool_for_each_chunk(struct gen_pool *pool, EXPORT_SYMBOL(gen_pool_for_each_chunk); /** - * addr_in_gen_pool - checks if an address falls within the range of a pool + * gen_pool_has_addr - checks if an address falls within the range of a pool * @pool: the generic memory pool * @start: start address * @size: size of the region @@ -459,7 +458,7 @@ EXPORT_SYMBOL(gen_pool_for_each_chunk); * Check if the range of addresses falls within the specified pool. Returns * true if the entire range is contained in the pool and false otherwise. 
*/ -bool addr_in_gen_pool(struct gen_pool *pool, unsigned long start, +bool gen_pool_has_addr(struct gen_pool *pool, unsigned long start, size_t size) { bool found = false; @@ -478,6 +477,7 @@ bool addr_in_gen_pool(struct gen_pool *pool, unsigned long start, rcu_read_unlock(); return found; } +EXPORT_SYMBOL(gen_pool_has_addr); /** * gen_pool_avail - get available free space of the pool diff --git a/lib/math/rational.c b/lib/math/rational.c index ba7443677c90..31fb27db2deb 100644 --- a/lib/math/rational.c +++ b/lib/math/rational.c @@ -3,6 +3,7 @@ * rational fractions * * Copyright (C) 2009 emlix GmbH, Oskar Schirmer <oskar@scara.com> + * Copyright (C) 2019 Trent Piepho <tpiepho@gmail.com> * * helper functions when coping with rational numbers */ @@ -10,6 +11,7 @@ #include <linux/rational.h> #include <linux/compiler.h> #include <linux/export.h> +#include <linux/kernel.h> /* * calculate best rational approximation for a given fraction @@ -33,30 +35,65 @@ void rational_best_approximation( unsigned long max_numerator, unsigned long max_denominator, unsigned long *best_numerator, unsigned long *best_denominator) { - unsigned long n, d, n0, d0, n1, d1; + /* n/d is the starting rational, which is continually + * decreased each iteration using the Euclidean algorithm. + * + * dp is the value of d from the prior iteration. + * + * n2/d2, n1/d1, and n0/d0 are our successively more accurate + * approximations of the rational. They are, respectively, + * the current, previous, and two prior iterations of it. + * + * a is current term of the continued fraction. + */ + unsigned long n, d, n0, d0, n1, d1, n2, d2; n = given_numerator; d = given_denominator; n0 = d1 = 0; n1 = d0 = 1; + for (;;) { - unsigned long t, a; - if ((n1 > max_numerator) || (d1 > max_denominator)) { - n1 = n0; - d1 = d0; - break; - } + unsigned long dp, a; + if (d == 0) break; - t = d; + /* Find next term in continued fraction, 'a', via + * Euclidean algorithm. + */ + dp = d; a = n / d; d = n % d; - n = t; - t = n0 + a * n1; + n = dp; + + /* Calculate the current rational approximation (aka + * convergent), n2/d2, using the term just found and + * the two prior approximations. + */ + n2 = n0 + a * n1; + d2 = d0 + a * d1; + + /* If the current convergent exceeds the maxes, then + * return either the previous convergent or the + * largest semi-convergent, the final term of which is + * found below as 't'. + */ + if ((n2 > max_numerator) || (d2 > max_denominator)) { + unsigned long t = min((max_numerator - n0) / n1, + (max_denominator - d0) / d1); + + /* This tests if the semi-convergent is closer + * than the previous convergent. + */ + if (2u * t > a || (2u * t == a && d0 * dp > d1 * d)) { + n1 = n0 + t * n1; + d1 = d0 + t * d1; + } + break; + } n0 = n1; - n1 = t; - t = d0 + a * d1; + n1 = n2; d0 = d1; - d1 = t; + d1 = d2; } *best_numerator = n1; *best_denominator = d1; diff --git a/lib/string.c b/lib/string.c index 6016eb3ac73d..461fb620f85f 100644 --- a/lib/string.c +++ b/lib/string.c @@ -400,6 +400,9 @@ EXPORT_SYMBOL(strncmp); * strchr - Find the first occurrence of a character in a string * @s: The string to be searched * @c: The character to search for + * + * Note that the %NUL-terminator is considered part of the string, and can + * be searched for. 
*/ char *strchr(const char *s, int c) { @@ -453,12 +456,18 @@ EXPORT_SYMBOL(strrchr); * @s: The string to be searched * @count: The number of characters to be searched * @c: The character to search for + * + * Note that the %NUL-terminator is considered part of the string, and can + * be searched for. */ char *strnchr(const char *s, size_t count, int c) { - for (; count-- && *s != '\0'; ++s) + while (count--) { if (*s == (char)c) return (char *)s; + if (*s++ == '\0') + break; + } return NULL; } EXPORT_SYMBOL(strnchr); diff --git a/lib/test_meminit.c b/lib/test_meminit.c new file mode 100644 index 000000000000..ed7efec1387b --- /dev/null +++ b/lib/test_meminit.c @@ -0,0 +1,362 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test cases for SL[AOU]B/page initialization at alloc/free time. + */ +#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt + +#include <linux/init.h> +#include <linux/kernel.h> +#include <linux/mm.h> +#include <linux/module.h> +#include <linux/slab.h> +#include <linux/string.h> +#include <linux/vmalloc.h> + +#define GARBAGE_INT (0x09A7BA9E) +#define GARBAGE_BYTE (0x9E) + +#define REPORT_FAILURES_IN_FN() \ + do { \ + if (failures) \ + pr_info("%s failed %d out of %d times\n", \ + __func__, failures, num_tests); \ + else \ + pr_info("all %d tests in %s passed\n", \ + num_tests, __func__); \ + } while (0) + +/* Calculate the number of uninitialized bytes in the buffer. */ +static int __init count_nonzero_bytes(void *ptr, size_t size) +{ + int i, ret = 0; + unsigned char *p = (unsigned char *)ptr; + + for (i = 0; i < size; i++) + if (p[i]) + ret++; + return ret; +} + +/* Fill a buffer with garbage, skipping |skip| first bytes. */ +static void __init fill_with_garbage_skip(void *ptr, size_t size, size_t skip) +{ + unsigned int *p = (unsigned int *)ptr; + int i = 0; + + if (skip) { + WARN_ON(skip > size); + p += skip; + } + while (size >= sizeof(*p)) { + p[i] = GARBAGE_INT; + i++; + size -= sizeof(*p); + } + if (size) + memset(&p[i], GARBAGE_BYTE, size); +} + +static void __init fill_with_garbage(void *ptr, size_t size) +{ + fill_with_garbage_skip(ptr, size, 0); +} + +static int __init do_alloc_pages_order(int order, int *total_failures) +{ + struct page *page; + void *buf; + size_t size = PAGE_SIZE << order; + + page = alloc_pages(GFP_KERNEL, order); + buf = page_address(page); + fill_with_garbage(buf, size); + __free_pages(page, order); + + page = alloc_pages(GFP_KERNEL, order); + buf = page_address(page); + if (count_nonzero_bytes(buf, size)) + (*total_failures)++; + fill_with_garbage(buf, size); + __free_pages(page, order); + return 1; +} + +/* Test the page allocator by calling alloc_pages with different orders. */ +static int __init test_pages(int *total_failures) +{ + int failures = 0, num_tests = 0; + int i; + + for (i = 0; i < 10; i++) + num_tests += do_alloc_pages_order(i, &failures); + + REPORT_FAILURES_IN_FN(); + *total_failures += failures; + return num_tests; +} + +/* Test kmalloc() with given parameters. */ +static int __init do_kmalloc_size(size_t size, int *total_failures) +{ + void *buf; + + buf = kmalloc(size, GFP_KERNEL); + fill_with_garbage(buf, size); + kfree(buf); + + buf = kmalloc(size, GFP_KERNEL); + if (count_nonzero_bytes(buf, size)) + (*total_failures)++; + fill_with_garbage(buf, size); + kfree(buf); + return 1; +} + +/* Test vmalloc() with given parameters. 
*/ +static int __init do_vmalloc_size(size_t size, int *total_failures) +{ + void *buf; + + buf = vmalloc(size); + fill_with_garbage(buf, size); + vfree(buf); + + buf = vmalloc(size); + if (count_nonzero_bytes(buf, size)) + (*total_failures)++; + fill_with_garbage(buf, size); + vfree(buf); + return 1; +} + +/* Test kmalloc()/vmalloc() by allocating objects of different sizes. */ +static int __init test_kvmalloc(int *total_failures) +{ + int failures = 0, num_tests = 0; + int i, size; + + for (i = 0; i < 20; i++) { + size = 1 << i; + num_tests += do_kmalloc_size(size, &failures); + num_tests += do_vmalloc_size(size, &failures); + } + + REPORT_FAILURES_IN_FN(); + *total_failures += failures; + return num_tests; +} + +#define CTOR_BYTES (sizeof(unsigned int)) +#define CTOR_PATTERN (0x41414141) +/* Initialize the first 4 bytes of the object. */ +static void test_ctor(void *obj) +{ + *(unsigned int *)obj = CTOR_PATTERN; +} + +/* + * Check the invariants for the buffer allocated from a slab cache. + * If the cache has a test constructor, the first 4 bytes of the object must + * always remain equal to CTOR_PATTERN. + * If the cache isn't an RCU-typesafe one, or if the allocation is done with + * __GFP_ZERO, then the object contents must be zeroed after allocation. + * If the cache is an RCU-typesafe one, the object contents must never be + * zeroed after the first use. This is checked by memcmp() in + * do_kmem_cache_size(). + */ +static bool __init check_buf(void *buf, int size, bool want_ctor, + bool want_rcu, bool want_zero) +{ + int bytes; + bool fail = false; + + bytes = count_nonzero_bytes(buf, size); + WARN_ON(want_ctor && want_zero); + if (want_zero) + return bytes; + if (want_ctor) { + if (*(unsigned int *)buf != CTOR_PATTERN) + fail = 1; + } else { + if (bytes) + fail = !want_rcu; + } + return fail; +} + +/* + * Test kmem_cache with given parameters: + * want_ctor - use a constructor; + * want_rcu - use SLAB_TYPESAFE_BY_RCU; + * want_zero - use __GFP_ZERO. + */ +static int __init do_kmem_cache_size(size_t size, bool want_ctor, + bool want_rcu, bool want_zero, + int *total_failures) +{ + struct kmem_cache *c; + int iter; + bool fail = false; + gfp_t alloc_mask = GFP_KERNEL | (want_zero ? __GFP_ZERO : 0); + void *buf, *buf_copy; + + c = kmem_cache_create("test_cache", size, 1, + want_rcu ? SLAB_TYPESAFE_BY_RCU : 0, + want_ctor ? test_ctor : NULL); + for (iter = 0; iter < 10; iter++) { + buf = kmem_cache_alloc(c, alloc_mask); + /* Check that buf is zeroed, if it must be. */ + fail = check_buf(buf, size, want_ctor, want_rcu, want_zero); + fill_with_garbage_skip(buf, size, want_ctor ? CTOR_BYTES : 0); + /* + * If this is an RCU cache, use a critical section to ensure we + * can touch objects after they're freed. + */ + if (want_rcu) { + rcu_read_lock(); + /* + * Copy the buffer to check that it's not wiped on + * free(). + */ + buf_copy = kmalloc(size, GFP_KERNEL); + if (buf_copy) + memcpy(buf_copy, buf, size); + } + kmem_cache_free(c, buf); + if (want_rcu) { + /* + * Check that |buf| is intact after kmem_cache_free(). + * |want_zero| is false, because we wrote garbage to + * the buffer already. + */ + fail |= check_buf(buf, size, want_ctor, want_rcu, + false); + if (buf_copy) { + fail |= (bool)memcmp(buf, buf_copy, size); + kfree(buf_copy); + } + rcu_read_unlock(); + } + } + kmem_cache_destroy(c); + + *total_failures += fail; + return 1; +} + +/* + * Check that the data written to an RCU-allocated object survives + * reallocation. 
+ */ +static int __init do_kmem_cache_rcu_persistent(int size, int *total_failures) +{ + struct kmem_cache *c; + void *buf, *buf_contents, *saved_ptr; + void **used_objects; + int i, iter, maxiter = 1024; + bool fail = false; + + c = kmem_cache_create("test_cache", size, size, SLAB_TYPESAFE_BY_RCU, + NULL); + buf = kmem_cache_alloc(c, GFP_KERNEL); + saved_ptr = buf; + fill_with_garbage(buf, size); + buf_contents = kmalloc(size, GFP_KERNEL); + if (!buf_contents) + goto out; + used_objects = kmalloc_array(maxiter, sizeof(void *), GFP_KERNEL); + if (!used_objects) { + kfree(buf_contents); + goto out; + } + memcpy(buf_contents, buf, size); + kmem_cache_free(c, buf); + /* + * Run for a fixed number of iterations. If we never hit saved_ptr, + * assume the test passes. + */ + for (iter = 0; iter < maxiter; iter++) { + buf = kmem_cache_alloc(c, GFP_KERNEL); + used_objects[iter] = buf; + if (buf == saved_ptr) { + fail = memcmp(buf_contents, buf, size); + for (i = 0; i <= iter; i++) + kmem_cache_free(c, used_objects[i]); + goto free_out; + } + } + +free_out: + kmem_cache_destroy(c); + kfree(buf_contents); + kfree(used_objects); +out: + *total_failures += fail; + return 1; +} + +/* + * Test kmem_cache allocation by creating caches of different sizes, with and + * without constructors, with and without SLAB_TYPESAFE_BY_RCU. + */ +static int __init test_kmemcache(int *total_failures) +{ + int failures = 0, num_tests = 0; + int i, flags, size; + bool ctor, rcu, zero; + + for (i = 0; i < 10; i++) { + size = 8 << i; + for (flags = 0; flags < 8; flags++) { + ctor = flags & 1; + rcu = flags & 2; + zero = flags & 4; + if (ctor & zero) + continue; + num_tests += do_kmem_cache_size(size, ctor, rcu, zero, + &failures); + } + } + REPORT_FAILURES_IN_FN(); + *total_failures += failures; + return num_tests; +} + +/* Test the behavior of SLAB_TYPESAFE_BY_RCU caches of different sizes. */ +static int __init test_rcu_persistent(int *total_failures) +{ + int failures = 0, num_tests = 0; + int i, size; + + for (i = 0; i < 10; i++) { + size = 8 << i; + num_tests += do_kmem_cache_rcu_persistent(size, &failures); + } + REPORT_FAILURES_IN_FN(); + *total_failures += failures; + return num_tests; +} + +/* + * Run the tests. Each test function returns the number of executed tests and + * updates |failures| with the number of failed tests. + */ +static int __init test_meminit_init(void) +{ + int failures = 0, num_tests = 0; + + num_tests += test_pages(&failures); + num_tests += test_kvmalloc(&failures); + num_tests += test_kmemcache(&failures); + num_tests += test_rcu_persistent(&failures); + + if (failures == 0) + pr_info("all %d tests passed!\n", num_tests); + else + pr_info("failures: %d out of %d\n", failures, num_tests); + + return failures ? -EINVAL : 0; +} +module_init(test_meminit_init); + +MODULE_LICENSE("GPL"); diff --git a/lib/test_overflow.c b/lib/test_overflow.c index fc680562d8b6..7a4b6f6c5473 100644 --- a/lib/test_overflow.c +++ b/lib/test_overflow.c @@ -486,16 +486,17 @@ static int __init test_overflow_shift(void) * Deal with the various forms of allocator arguments. See comments above * the DEFINE_TEST_ALLOC() instances for mapping of the "bits". 
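Before leaving test_meminit.c above: with the iteration counts as written, test_pages(), test_kvmalloc(), test_kmemcache() and test_rcu_persistent() run 10, 40, 60 and 10 cases respectively. On a kernel where the corresponding alloc/free initialization is actually active, a successful run should log something along these lines; treat this as an illustrative transcript only, since the pass/fail split depends on configuration:

	test_meminit: all 10 tests in test_pages passed
	test_meminit: all 40 tests in test_kvmalloc passed
	test_meminit: all 60 tests in test_kmemcache passed
	test_meminit: all 10 tests in test_rcu_persistent passed
	test_meminit: all 120 tests passed!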
*/ -#define alloc010(alloc, arg, sz) alloc(sz, GFP_KERNEL) -#define alloc011(alloc, arg, sz) alloc(sz, GFP_KERNEL, NUMA_NO_NODE) +#define alloc_GFP (GFP_KERNEL | __GFP_NOWARN) +#define alloc010(alloc, arg, sz) alloc(sz, alloc_GFP) +#define alloc011(alloc, arg, sz) alloc(sz, alloc_GFP, NUMA_NO_NODE) #define alloc000(alloc, arg, sz) alloc(sz) #define alloc001(alloc, arg, sz) alloc(sz, NUMA_NO_NODE) -#define alloc110(alloc, arg, sz) alloc(arg, sz, GFP_KERNEL) +#define alloc110(alloc, arg, sz) alloc(arg, sz, alloc_GFP) #define free0(free, arg, ptr) free(ptr) #define free1(free, arg, ptr) free(arg, ptr) -/* Wrap around to 8K */ -#define TEST_SIZE (9 << PAGE_SHIFT) +/* Wrap around to 16K */ +#define TEST_SIZE (5 * 4096) #define DEFINE_TEST_ALLOC(func, free_func, want_arg, want_gfp, want_node)\ static int __init test_ ## func (void *arg) \ diff --git a/lib/test_string.c b/lib/test_string.c index bf8def01ed20..7b31f4a505bf 100644 --- a/lib/test_string.c +++ b/lib/test_string.c @@ -36,7 +36,7 @@ static __init int memset16_selftest(void) fail: kfree(p); if (i < 256) - return (i << 24) | (j << 16) | k; + return (i << 24) | (j << 16) | k | 0x8000; return 0; } @@ -72,7 +72,7 @@ static __init int memset32_selftest(void) fail: kfree(p); if (i < 256) - return (i << 24) | (j << 16) | k; + return (i << 24) | (j << 16) | k | 0x8000; return 0; } @@ -108,7 +108,74 @@ static __init int memset64_selftest(void) fail: kfree(p); if (i < 256) - return (i << 24) | (j << 16) | k; + return (i << 24) | (j << 16) | k | 0x8000; + return 0; +} + +static __init int strchr_selftest(void) +{ + const char *test_string = "abcdefghijkl"; + const char *empty_string = ""; + char *result; + int i; + + for (i = 0; i < strlen(test_string) + 1; i++) { + result = strchr(test_string, test_string[i]); + if (result - test_string != i) + return i + 'a'; + } + + result = strchr(empty_string, '\0'); + if (result != empty_string) + return 0x101; + + result = strchr(empty_string, 'a'); + if (result) + return 0x102; + + result = strchr(test_string, 'z'); + if (result) + return 0x103; + + return 0; +} + +static __init int strnchr_selftest(void) +{ + const char *test_string = "abcdefghijkl"; + const char *empty_string = ""; + char *result; + int i, j; + + for (i = 0; i < strlen(test_string) + 1; i++) { + for (j = 0; j < strlen(test_string) + 2; j++) { + result = strnchr(test_string, j, test_string[i]); + if (j <= i) { + if (!result) + continue; + return ((i + 'a') << 8) | j; + } + if (result - test_string != i) + return ((i + 'a') << 8) | j; + } + } + + result = strnchr(empty_string, 0, '\0'); + if (result) + return 0x10001; + + result = strnchr(empty_string, 1, '\0'); + if (result != empty_string) + return 0x10002; + + result = strnchr(empty_string, 1, 'a'); + if (result) + return 0x10003; + + result = strnchr(NULL, 0, '\0'); + if (result) + return 0x10004; + return 0; } @@ -131,6 +198,16 @@ static __init int string_selftest_init(void) if (subtest) goto fail; + test = 4; + subtest = strchr_selftest(); + if (subtest) + goto fail; + + test = 5; + subtest = strnchr_selftest(); + if (subtest) + goto fail; + pr_info("String selftests succeeded\n"); return 0; fail: diff --git a/lib/zstd/compress.c b/lib/zstd/compress.c index 5e0b67003e55..7e0cea239753 100644 --- a/lib/zstd/compress.c +++ b/lib/zstd/compress.c @@ -206,18 +206,21 @@ ZSTD_compressionParameters ZSTD_adjustCParams(ZSTD_compressionParameters cPar, u return cPar; } -static U32 ZSTD_equivalentParams(ZSTD_parameters param1, ZSTD_parameters param2) +static U32 ZSTD_equivalentParams(const 
ZSTD_parameters *param1, const ZSTD_parameters *param2) { - return (param1.cParams.hashLog == param2.cParams.hashLog) & (param1.cParams.chainLog == param2.cParams.chainLog) & - (param1.cParams.strategy == param2.cParams.strategy) & ((param1.cParams.searchLength == 3) == (param2.cParams.searchLength == 3)); + return (param1->cParams.hashLog == param2->cParams.hashLog) && + (param1->cParams.chainLog == param2->cParams.chainLog) && + (param1->cParams.strategy == param2->cParams.strategy) && + (param1->cParams.searchLength == 3) && + (param1->cParams.searchLength == param2->cParams.searchLength); } /*! ZSTD_continueCCtx() : reuse CCtx without reset (note : requires no dictionary) */ -static size_t ZSTD_continueCCtx(ZSTD_CCtx *cctx, ZSTD_parameters params, U64 frameContentSize) +static size_t ZSTD_continueCCtx(ZSTD_CCtx *cctx, const ZSTD_parameters *params, U64 frameContentSize) { U32 const end = (U32)(cctx->nextSrc - cctx->base); - cctx->params = params; + cctx->params = *params; cctx->frameContentSize = frameContentSize; cctx->lowLimit = end; cctx->dictLimit = end; @@ -239,23 +242,23 @@ typedef enum { ZSTDcrp_continue, ZSTDcrp_noMemset, ZSTDcrp_fullReset } ZSTD_comp /*! ZSTD_resetCCtx_advanced() : note : `params` must be validated */ -static size_t ZSTD_resetCCtx_advanced(ZSTD_CCtx *zc, ZSTD_parameters params, U64 frameContentSize, ZSTD_compResetPolicy_e const crp) +static size_t ZSTD_resetCCtx_advanced(ZSTD_CCtx *zc, const ZSTD_parameters *params, U64 frameContentSize, ZSTD_compResetPolicy_e const crp) { if (crp == ZSTDcrp_continue) - if (ZSTD_equivalentParams(params, zc->params)) { + if (ZSTD_equivalentParams(params, &zc->params)) { zc->flagStaticTables = 0; zc->flagStaticHufTable = HUF_repeat_none; return ZSTD_continueCCtx(zc, params, frameContentSize); } { - size_t const blockSize = MIN(ZSTD_BLOCKSIZE_ABSOLUTEMAX, (size_t)1 << params.cParams.windowLog); - U32 const divider = (params.cParams.searchLength == 3) ? 3 : 4; + size_t const blockSize = MIN(ZSTD_BLOCKSIZE_ABSOLUTEMAX, (size_t)1 << params->cParams.windowLog); + U32 const divider = (params->cParams.searchLength == 3) ? 3 : 4; size_t const maxNbSeq = blockSize / divider; size_t const tokenSpace = blockSize + 11 * maxNbSeq; - size_t const chainSize = (params.cParams.strategy == ZSTD_fast) ? 0 : (1 << params.cParams.chainLog); - size_t const hSize = ((size_t)1) << params.cParams.hashLog; - U32 const hashLog3 = (params.cParams.searchLength > 3) ? 0 : MIN(ZSTD_HASHLOG3_MAX, params.cParams.windowLog); + size_t const chainSize = (params->cParams.strategy == ZSTD_fast) ? 0 : (1 << params->cParams.chainLog); + size_t const hSize = ((size_t)1) << params->cParams.hashLog; + U32 const hashLog3 = (params->cParams.searchLength > 3) ? 0 : MIN(ZSTD_HASHLOG3_MAX, params->cParams.windowLog); size_t const h3Size = ((size_t)1) << hashLog3; size_t const tableSpace = (chainSize + hSize + h3Size) * sizeof(U32); void *ptr; @@ -265,7 +268,7 @@ static size_t ZSTD_resetCCtx_advanced(ZSTD_CCtx *zc, ZSTD_parameters params, U64 size_t const optSpace = ((MaxML + 1) + (MaxLL + 1) + (MaxOff + 1) + (1 << Litbits)) * sizeof(U32) + (ZSTD_OPT_NUM + 1) * (sizeof(ZSTD_match_t) + sizeof(ZSTD_optimal_t)); size_t const neededSpace = tableSpace + (256 * sizeof(U32)) /* huffTable */ + tokenSpace + - (((params.cParams.strategy == ZSTD_btopt) || (params.cParams.strategy == ZSTD_btopt2)) ? optSpace : 0); + (((params->cParams.strategy == ZSTD_btopt) || (params->cParams.strategy == ZSTD_btopt2)) ? 
optSpace : 0); if (zc->workSpaceSize < neededSpace) { ZSTD_free(zc->workSpace, zc->customMem); zc->workSpace = ZSTD_malloc(neededSpace, zc->customMem); @@ -294,7 +297,7 @@ static size_t ZSTD_resetCCtx_advanced(ZSTD_CCtx *zc, ZSTD_parameters params, U64 zc->dictBase = NULL; zc->dictLimit = 0; zc->lowLimit = 0; - zc->params = params; + zc->params = *params; zc->blockSize = blockSize; zc->frameContentSize = frameContentSize; { @@ -303,7 +306,7 @@ static size_t ZSTD_resetCCtx_advanced(ZSTD_CCtx *zc, ZSTD_parameters params, U64 zc->rep[i] = repStartValue[i]; } - if ((params.cParams.strategy == ZSTD_btopt) || (params.cParams.strategy == ZSTD_btopt2)) { + if ((params->cParams.strategy == ZSTD_btopt) || (params->cParams.strategy == ZSTD_btopt2)) { zc->seqStore.litFreq = (U32 *)ptr; zc->seqStore.litLengthFreq = zc->seqStore.litFreq + (1 << Litbits); zc->seqStore.matchLengthFreq = zc->seqStore.litLengthFreq + (MaxLL + 1); @@ -354,7 +357,7 @@ size_t ZSTD_copyCCtx(ZSTD_CCtx *dstCCtx, const ZSTD_CCtx *srcCCtx, unsigned long { ZSTD_parameters params = srcCCtx->params; params.fParams.contentSizeFlag = (pledgedSrcSize > 0); - ZSTD_resetCCtx_advanced(dstCCtx, params, pledgedSrcSize, ZSTDcrp_noMemset); + ZSTD_resetCCtx_advanced(dstCCtx, ¶ms, pledgedSrcSize, ZSTDcrp_noMemset); } /* copy tables */ @@ -2428,16 +2431,16 @@ static size_t ZSTD_compress_generic(ZSTD_CCtx *cctx, void *dst, size_t dstCapaci return op - ostart; } -static size_t ZSTD_writeFrameHeader(void *dst, size_t dstCapacity, ZSTD_parameters params, U64 pledgedSrcSize, U32 dictID) +static size_t ZSTD_writeFrameHeader(void *dst, size_t dstCapacity, ZSTD_parameters *params, U64 pledgedSrcSize, U32 dictID) { BYTE *const op = (BYTE *)dst; U32 const dictIDSizeCode = (dictID > 0) + (dictID >= 256) + (dictID >= 65536); /* 0-3 */ - U32 const checksumFlag = params.fParams.checksumFlag > 0; - U32 const windowSize = 1U << params.cParams.windowLog; - U32 const singleSegment = params.fParams.contentSizeFlag && (windowSize >= pledgedSrcSize); - BYTE const windowLogByte = (BYTE)((params.cParams.windowLog - ZSTD_WINDOWLOG_ABSOLUTEMIN) << 3); + BYTE const checksumFlag = params->fParams.checksumFlag > 0; + U32 const windowSize = 1U << params->cParams.windowLog; + BYTE const singleSegment = params->fParams.contentSizeFlag && (windowSize >= pledgedSrcSize); + BYTE const windowLogByte = (BYTE)((params->cParams.windowLog - ZSTD_WINDOWLOG_ABSOLUTEMIN) << 3); U32 const fcsCode = - params.fParams.contentSizeFlag ? (pledgedSrcSize >= 256) + (pledgedSrcSize >= 65536 + 256) + (pledgedSrcSize >= 0xFFFFFFFFU) : 0; /* 0-3 */ + params->fParams.contentSizeFlag ? (pledgedSrcSize >= 256) + (pledgedSrcSize >= 65536 + 256) + (pledgedSrcSize >= 0xFFFFFFFFU) : 0; /* 0-3 */ BYTE const frameHeaderDecriptionByte = (BYTE)(dictIDSizeCode + (checksumFlag << 2) + (singleSegment << 5) + (fcsCode << 6)); size_t pos; @@ -2496,7 +2499,7 @@ static size_t ZSTD_compressContinue_internal(ZSTD_CCtx *cctx, void *dst, size_t return ERROR(stage_wrong); /* missing init (ZSTD_compressBegin) */ if (frame && (cctx->stage == ZSTDcs_init)) { - fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, cctx->params, cctx->frameContentSize, cctx->dictID); + fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, &cctx->params, cctx->frameContentSize, cctx->dictID); if (ZSTD_isError(fhSize)) return fhSize; dstCapacity -= fhSize; @@ -2735,7 +2738,7 @@ static size_t ZSTD_compress_insertDictionary(ZSTD_CCtx *cctx, const void *dict, /*! 
ZSTD_compressBegin_internal() : * @return : 0, or an error code */ -static size_t ZSTD_compressBegin_internal(ZSTD_CCtx *cctx, const void *dict, size_t dictSize, ZSTD_parameters params, U64 pledgedSrcSize) +static size_t ZSTD_compressBegin_internal(ZSTD_CCtx *cctx, const void *dict, size_t dictSize, const ZSTD_parameters *params, U64 pledgedSrcSize) { ZSTD_compResetPolicy_e const crp = dictSize ? ZSTDcrp_fullReset : ZSTDcrp_continue; CHECK_F(ZSTD_resetCCtx_advanced(cctx, params, pledgedSrcSize, crp)); @@ -2744,17 +2747,17 @@ static size_t ZSTD_compressBegin_internal(ZSTD_CCtx *cctx, const void *dict, siz /*! ZSTD_compressBegin_advanced() : * @return : 0, or an error code */ -size_t ZSTD_compressBegin_advanced(ZSTD_CCtx *cctx, const void *dict, size_t dictSize, ZSTD_parameters params, unsigned long long pledgedSrcSize) +size_t ZSTD_compressBegin_advanced(ZSTD_CCtx *cctx, const void *dict, size_t dictSize, const ZSTD_parameters *params, unsigned long long pledgedSrcSize) { /* compression parameters verification and optimization */ - CHECK_F(ZSTD_checkCParams(params.cParams)); + CHECK_F(ZSTD_checkCParams(params->cParams)); return ZSTD_compressBegin_internal(cctx, dict, dictSize, params, pledgedSrcSize); } size_t ZSTD_compressBegin_usingDict(ZSTD_CCtx *cctx, const void *dict, size_t dictSize, int compressionLevel) { ZSTD_parameters const params = ZSTD_getParams(compressionLevel, 0, dictSize); - return ZSTD_compressBegin_internal(cctx, dict, dictSize, params, 0); + return ZSTD_compressBegin_internal(cctx, dict, dictSize, ¶ms, 0); } size_t ZSTD_compressBegin(ZSTD_CCtx *cctx, int compressionLevel) { return ZSTD_compressBegin_usingDict(cctx, NULL, 0, compressionLevel); } @@ -2773,7 +2776,7 @@ static size_t ZSTD_writeEpilogue(ZSTD_CCtx *cctx, void *dst, size_t dstCapacity) /* special case : empty frame */ if (cctx->stage == ZSTDcs_init) { - fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, cctx->params, 0, 0); + fhSize = ZSTD_writeFrameHeader(dst, dstCapacity, &cctx->params, 0, 0); if (ZSTD_isError(fhSize)) return fhSize; dstCapacity -= fhSize; @@ -2816,19 +2819,19 @@ size_t ZSTD_compressEnd(ZSTD_CCtx *cctx, void *dst, size_t dstCapacity, const vo } static size_t ZSTD_compress_internal(ZSTD_CCtx *cctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize, const void *dict, size_t dictSize, - ZSTD_parameters params) + const ZSTD_parameters *params) { CHECK_F(ZSTD_compressBegin_internal(cctx, dict, dictSize, params, srcSize)); return ZSTD_compressEnd(cctx, dst, dstCapacity, src, srcSize); } size_t ZSTD_compress_usingDict(ZSTD_CCtx *ctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize, const void *dict, size_t dictSize, - ZSTD_parameters params) + const ZSTD_parameters *params) { return ZSTD_compress_internal(ctx, dst, dstCapacity, src, srcSize, dict, dictSize, params); } -size_t ZSTD_compressCCtx(ZSTD_CCtx *ctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize, ZSTD_parameters params) +size_t ZSTD_compressCCtx(ZSTD_CCtx *ctx, void *dst, size_t dstCapacity, const void *src, size_t srcSize, const ZSTD_parameters *params) { return ZSTD_compress_internal(ctx, dst, dstCapacity, src, srcSize, NULL, 0, params); } @@ -2844,7 +2847,7 @@ struct ZSTD_CDict_s { size_t ZSTD_CDictWorkspaceBound(ZSTD_compressionParameters cParams) { return ZSTD_CCtxWorkspaceBound(cParams) + ZSTD_ALIGN(sizeof(ZSTD_CDict)); } -static ZSTD_CDict *ZSTD_createCDict_advanced(const void *dictBuffer, size_t dictSize, unsigned byReference, ZSTD_parameters params, ZSTD_customMem customMem) +static 
ZSTD_CDict *ZSTD_createCDict_advanced(const void *dictBuffer, size_t dictSize, unsigned byReference, const ZSTD_parameters *params, ZSTD_customMem customMem) { if (!customMem.customAlloc || !customMem.customFree) return NULL; @@ -2890,7 +2893,7 @@ static ZSTD_CDict *ZSTD_createCDict_advanced(const void *dictBuffer, size_t dict } } -ZSTD_CDict *ZSTD_initCDict(const void *dict, size_t dictSize, ZSTD_parameters params, void *workspace, size_t workspaceSize) +ZSTD_CDict *ZSTD_initCDict(const void *dict, size_t dictSize, const ZSTD_parameters *params, void *workspace, size_t workspaceSize) { ZSTD_customMem const stackMem = ZSTD_initStack(workspace, workspaceSize); return ZSTD_createCDict_advanced(dict, dictSize, 1, params, stackMem); @@ -2918,7 +2921,7 @@ size_t ZSTD_compressBegin_usingCDict(ZSTD_CCtx *cctx, const ZSTD_CDict *cdict, u else { ZSTD_parameters params = cdict->refContext->params; params.fParams.contentSizeFlag = (pledgedSrcSize > 0); - CHECK_F(ZSTD_compressBegin_advanced(cctx, NULL, 0, params, pledgedSrcSize)); + CHECK_F(ZSTD_compressBegin_advanced(cctx, NULL, 0, ¶ms, pledgedSrcSize)); } return 0; } @@ -3031,7 +3034,7 @@ static size_t ZSTD_resetCStream_internal(ZSTD_CStream *zcs, unsigned long long p if (zcs->cdict) CHECK_F(ZSTD_compressBegin_usingCDict(zcs->cctx, zcs->cdict, pledgedSrcSize)) else - CHECK_F(ZSTD_compressBegin_advanced(zcs->cctx, NULL, 0, zcs->params, pledgedSrcSize)); + CHECK_F(ZSTD_compressBegin_advanced(zcs->cctx, NULL, 0, &zcs->params, pledgedSrcSize)); zcs->inToCompress = 0; zcs->inBuffPos = 0; @@ -3052,11 +3055,11 @@ size_t ZSTD_resetCStream(ZSTD_CStream *zcs, unsigned long long pledgedSrcSize) return ZSTD_resetCStream_internal(zcs, pledgedSrcSize); } -static size_t ZSTD_initCStream_advanced(ZSTD_CStream *zcs, const void *dict, size_t dictSize, ZSTD_parameters params, unsigned long long pledgedSrcSize) +static size_t ZSTD_initCStream_advanced(ZSTD_CStream *zcs, const void *dict, size_t dictSize, const ZSTD_parameters *params, unsigned long long pledgedSrcSize) { /* allocate buffers */ { - size_t const neededInBuffSize = (size_t)1 << params.cParams.windowLog; + size_t const neededInBuffSize = (size_t)1 << params->cParams.windowLog; if (zcs->inBuffSize < neededInBuffSize) { zcs->inBuffSize = neededInBuffSize; ZSTD_free(zcs->inBuff, zcs->customMem); @@ -3083,13 +3086,13 @@ static size_t ZSTD_initCStream_advanced(ZSTD_CStream *zcs, const void *dict, siz } else zcs->cdict = NULL; - zcs->checksum = params.fParams.checksumFlag > 0; - zcs->params = params; + zcs->checksum = params->fParams.checksumFlag > 0; + zcs->params = *params; return ZSTD_resetCStream_internal(zcs, pledgedSrcSize); } -ZSTD_CStream *ZSTD_initCStream(ZSTD_parameters params, unsigned long long pledgedSrcSize, void *workspace, size_t workspaceSize) +ZSTD_CStream *ZSTD_initCStream(const ZSTD_parameters *params, unsigned long long pledgedSrcSize, void *workspace, size_t workspaceSize) { ZSTD_customMem const stackMem = ZSTD_initStack(workspace, workspaceSize); ZSTD_CStream *const zcs = ZSTD_createCStream_advanced(stackMem); @@ -3105,7 +3108,7 @@ ZSTD_CStream *ZSTD_initCStream(ZSTD_parameters params, unsigned long long pledge ZSTD_CStream *ZSTD_initCStream_usingCDict(const ZSTD_CDict *cdict, unsigned long long pledgedSrcSize, void *workspace, size_t workspaceSize) { ZSTD_parameters const params = ZSTD_getParamsFromCDict(cdict); - ZSTD_CStream *const zcs = ZSTD_initCStream(params, pledgedSrcSize, workspace, workspaceSize); + ZSTD_CStream *const zcs = ZSTD_initCStream(¶ms, pledgedSrcSize, workspace, 
workspaceSize); if (zcs) { zcs->cdict = cdict; if (ZSTD_isError(ZSTD_resetCStream_internal(zcs, pledgedSrcSize))) { diff --git a/lib/zstd/decompress.c b/lib/zstd/decompress.c index 269ee9a796c1..782826484e55 100644 --- a/lib/zstd/decompress.c +++ b/lib/zstd/decompress.c @@ -233,7 +233,7 @@ size_t ZSTD_getFrameParams(ZSTD_frameParams *fparamsPtr, const void *src, size_t BYTE const fhdByte = ip[4]; size_t pos = 5; U32 const dictIDSizeCode = fhdByte & 3; - U32 const checksumFlag = (fhdByte >> 2) & 1; + BYTE const checksumFlag = (fhdByte >> 2) & 1; U32 const singleSegment = (fhdByte >> 5) & 1; U32 const fcsID = fhdByte >> 6; U32 const windowSizeMax = 1U << ZSTD_WINDOWLOG_MAX; diff --git a/lib/zstd/huf_compress.c b/lib/zstd/huf_compress.c index e727812d12aa..22031247ff9f 100644 --- a/lib/zstd/huf_compress.c +++ b/lib/zstd/huf_compress.c @@ -382,8 +382,8 @@ static U32 HUF_setMaxHeight(nodeElt *huffNode, U32 lastNonNull, U32 maxNbBits) } typedef struct { - U32 base; - U32 curr; + U16 base; + U16 curr; } rankPos; static void HUF_sort(nodeElt *huffNode, const U32 *count, U32 maxSymbolValue) diff --git a/mm/Kconfig b/mm/Kconfig index f0c76ba47695..c73a0aad6283 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -648,8 +648,7 @@ config IDLE_PAGE_TRACKING See Documentation/admin-guide/mm/idle_page_tracking.rst for more details. -# arch_add_memory() comprehends device memory -config ARCH_HAS_ZONE_DEVICE +config ARCH_HAS_PTE_DEVMAP bool config ZONE_DEVICE @@ -657,7 +656,7 @@ config ZONE_DEVICE depends on MEMORY_HOTPLUG depends on MEMORY_HOTREMOVE depends on SPARSEMEM_VMEMMAP - depends on ARCH_HAS_ZONE_DEVICE + depends on ARCH_HAS_PTE_DEVMAP select XARRAY_MULTI help diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug index fa6d79281368..82b6a20898bd 100644 --- a/mm/Kconfig.debug +++ b/mm/Kconfig.debug @@ -12,19 +12,23 @@ config DEBUG_PAGEALLOC bool "Debug page memory allocations" depends on DEBUG_KERNEL depends on !HIBERNATION || ARCH_SUPPORTS_DEBUG_PAGEALLOC && !PPC && !SPARC - select PAGE_EXTENSION select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC ---help--- Unmap pages from the kernel linear mapping after free_pages(). Depending on runtime enablement, this results in a small or large slowdown, but helps to find certain types of memory corruption. + Also, the state of page tracking structures is checked more often as + pages are being allocated and freed, as unexpected state changes + often happen for same reasons as memory corruption (e.g. double free, + use-after-free). + For architectures which don't enable ARCH_SUPPORTS_DEBUG_PAGEALLOC, fill the pages with poison patterns after free_pages() and verify - the patterns before alloc_pages(). Additionally, - this option cannot be enabled in combination with hibernation as - that would result in incorrect warnings of memory corruption after - a resume because free pages are not saved to the suspend image. + the patterns before alloc_pages(). Additionally, this option cannot + be enabled in combination with hibernation as that would result in + incorrect warnings of memory corruption after a resume because free + pages are not saved to the suspend image. By default this option will have a small overhead, e.g. 
by not allowing the kernel mapping to be backed by large pages on some diff --git a/mm/failslab.c b/mm/failslab.c index ec5aad211c5b..f92fed91ac23 100644 --- a/mm/failslab.c +++ b/mm/failslab.c @@ -23,7 +23,8 @@ bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags) if (gfpflags & __GFP_NOFAIL) return false; - if (failslab.ignore_gfp_reclaim && (gfpflags & __GFP_RECLAIM)) + if (failslab.ignore_gfp_reclaim && + (gfpflags & __GFP_DIRECT_RECLAIM)) return false; if (failslab.cache_filter && !(s->flags & SLAB_FAILSLAB)) diff --git a/mm/filemap.c b/mm/filemap.c index df2006ba0cfa..b3ed0a065918 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2457,10 +2457,8 @@ static struct file *do_async_mmap_readahead(struct vm_fault *vmf, * * vma->vm_mm->mmap_sem must be held on entry. * - * If our return value has VM_FAULT_RETRY set, it's because - * lock_page_or_retry() returned 0. - * The mmap_sem has usually been released in this case. - * See __lock_page_or_retry() for the exception. + * If our return value has VM_FAULT_RETRY set, it's because the mmap_sem + * may be dropped before doing I/O or by lock_page_maybe_drop_mmap(). * * If our return value does not have VM_FAULT_RETRY set, the mmap_sem * has not been released. @@ -2773,7 +2771,11 @@ repeat: } filler: - err = filler(data, page); + if (filler) + err = filler(data, page); + else + err = mapping->a_ops->readpage(data, page); + if (err < 0) { put_page(page); return ERR_PTR(err); @@ -2863,7 +2865,8 @@ struct page *read_cache_page(struct address_space *mapping, int (*filler)(void *, struct page *), void *data) { - return do_read_cache_page(mapping, index, filler, data, mapping_gfp_mask(mapping)); + return do_read_cache_page(mapping, index, filler, data, + mapping_gfp_mask(mapping)); } EXPORT_SYMBOL(read_cache_page); @@ -2884,9 +2887,7 @@ struct page *read_cache_page_gfp(struct address_space *mapping, pgoff_t index, gfp_t gfp) { - filler_t *filler = (filler_t *)mapping->a_ops->readpage; - - return do_read_cache_page(mapping, index, filler, NULL, gfp); + return do_read_cache_page(mapping, index, NULL, NULL, gfp); } EXPORT_SYMBOL(read_cache_page_gfp); @@ -515,7 +515,7 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma, * an error pointer if there is a mapping to something not represented * by a page descriptor (see also vm_normal_page()). 
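In the filemap.c hunk above, do_read_cache_page() now accepts a NULL filler and falls back to mapping->a_ops->readpage() itself, which lets read_cache_page_gfp() drop its old cast of ->readpage to filler_t. Callers of the exported helpers are unchanged; a sketch of typical use (mapping and index are whatever the caller already holds):

	page = read_cache_page_gfp(mapping, index, GFP_NOFS);
	if (IS_ERR(page))
		return PTR_ERR(page);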
*/ -struct page *follow_page_mask(struct vm_area_struct *vma, +static struct page *follow_page_mask(struct vm_area_struct *vma, unsigned long address, unsigned int flags, struct follow_page_context *ctx) { @@ -1801,7 +1801,7 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end, } #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */ -#if defined(__HAVE_ARCH_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE) +#if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE) static int __gup_device_huge(unsigned long pfn, unsigned long addr, unsigned long end, struct page **pages, int *nr) { @@ -25,8 +25,6 @@ #include <linux/mmu_notifier.h> #include <linux/memory_hotplug.h> -#define PA_SECTION_SIZE (1UL << PA_SECTION_SHIFT) - #if IS_ENABLED(CONFIG_HMM_MIRROR) static const struct mmu_notifier_ops hmm_mmu_notifier_ops; @@ -330,7 +328,7 @@ struct hmm_vma_walk { static int hmm_vma_do_fault(struct mm_walk *walk, unsigned long addr, bool write_fault, uint64_t *pfn) { - unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_REMOTE; + unsigned int flags = FAULT_FLAG_REMOTE; struct hmm_vma_walk *hmm_vma_walk = walk->private; struct hmm_range *range = hmm_vma_walk->range; struct vm_area_struct *vma = walk->vma; @@ -550,7 +548,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, static inline uint64_t pte_to_hmm_pfn_flags(struct hmm_range *range, pte_t pte) { - if (pte_none(pte) || !pte_present(pte)) + if (pte_none(pte) || !pte_present(pte) || pte_protnone(pte)) return 0; return pte_write(pte) ? range->flags[HMM_PFN_VALID] | range->flags[HMM_PFN_WRITE] : @@ -1354,9 +1352,8 @@ static void hmm_devmem_ref_release(struct percpu_ref *ref) complete(&devmem->completion); } -static void hmm_devmem_ref_exit(void *data) +static void hmm_devmem_ref_exit(struct percpu_ref *ref) { - struct percpu_ref *ref = data; struct hmm_devmem *devmem; devmem = container_of(ref, struct hmm_devmem, ref); @@ -1433,10 +1430,6 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, if (ret) return ERR_PTR(ret); - ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit, &devmem->ref); - if (ret) - return ERR_PTR(ret); - size = ALIGN(size, PA_SECTION_SIZE); addr = min((unsigned long)iomem_resource.end, (1UL << MAX_PHYSMEM_BITS) - 1); @@ -1475,6 +1468,7 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, devmem->pagemap.ref = &devmem->ref; devmem->pagemap.data = devmem; devmem->pagemap.kill = hmm_devmem_ref_kill; + devmem->pagemap.cleanup = hmm_devmem_ref_exit; result = devm_memremap_pages(devmem->device, &devmem->pagemap); if (IS_ERR(result)) @@ -1512,11 +1506,6 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, if (ret) return ERR_PTR(ret); - ret = devm_add_action_or_reset(device, hmm_devmem_ref_exit, - &devmem->ref); - if (ret) - return ERR_PTR(ret); - devmem->pfn_first = devmem->resource->start >> PAGE_SHIFT; devmem->pfn_last = devmem->pfn_first + (resource_size(devmem->resource) >> PAGE_SHIFT); @@ -1529,6 +1518,7 @@ struct hmm_devmem *hmm_devmem_add_resource(const struct hmm_devmem_ops *ops, devmem->pagemap.ref = &devmem->ref; devmem->pagemap.data = devmem; devmem->pagemap.kill = hmm_devmem_ref_kill; + devmem->pagemap.cleanup = hmm_devmem_ref_exit; result = devm_memremap_pages(devmem->device, &devmem->pagemap); if (IS_ERR(result)) diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h index 3ce956efa0cb..1979db4763e2 100644 --- a/mm/kasan/kasan.h +++ b/mm/kasan/kasan.h @@ -43,6 +43,11 @@ #define KASAN_ALLOCA_REDZONE_SIZE 32 +/* + * Stack 
frame marker (compiler ABI). + */ +#define KASAN_CURRENT_STACK_FRAME_MAGIC 0x41B58AB3 + /* Don't break randconfig/all*config builds */ #ifndef KASAN_ABI_VERSION #define KASAN_ABI_VERSION 1 diff --git a/mm/kasan/report.c b/mm/kasan/report.c index 03a443579386..0e5f965f1882 100644 --- a/mm/kasan/report.c +++ b/mm/kasan/report.c @@ -28,6 +28,7 @@ #include <linux/types.h> #include <linux/kasan.h> #include <linux/module.h> +#include <linux/sched/task_stack.h> #include <asm/sections.h> @@ -181,6 +182,168 @@ static inline bool init_task_stack_addr(const void *addr) sizeof(init_thread_union.stack)); } +static bool __must_check tokenize_frame_descr(const char **frame_descr, + char *token, size_t max_tok_len, + unsigned long *value) +{ + const char *sep = strchr(*frame_descr, ' '); + + if (sep == NULL) + sep = *frame_descr + strlen(*frame_descr); + + if (token != NULL) { + const size_t tok_len = sep - *frame_descr; + + if (tok_len + 1 > max_tok_len) { + pr_err("KASAN internal error: frame description too long: %s\n", + *frame_descr); + return false; + } + + /* Copy token (+ 1 byte for '\0'). */ + strlcpy(token, *frame_descr, tok_len + 1); + } + + /* Advance frame_descr past separator. */ + *frame_descr = sep + 1; + + if (value != NULL && kstrtoul(token, 10, value)) { + pr_err("KASAN internal error: not a valid number: %s\n", token); + return false; + } + + return true; +} + +static void print_decoded_frame_descr(const char *frame_descr) +{ + /* + * We need to parse the following string: + * "n alloc_1 alloc_2 ... alloc_n" + * where alloc_i looks like + * "offset size len name" + * or "offset size len name:line". + */ + + char token[64]; + unsigned long num_objects; + + if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), + &num_objects)) + return; + + pr_err("\n"); + pr_err("this frame has %lu %s:\n", num_objects, + num_objects == 1 ? "object" : "objects"); + + while (num_objects--) { + unsigned long offset; + unsigned long size; + + /* access offset */ + if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), + &offset)) + return; + /* access size */ + if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), + &size)) + return; + /* name length (unused) */ + if (!tokenize_frame_descr(&frame_descr, NULL, 0, NULL)) + return; + /* object name */ + if (!tokenize_frame_descr(&frame_descr, token, sizeof(token), + NULL)) + return; + + /* Strip line number; without filename it's not very helpful. */ + strreplace(token, ':', '\0'); + + /* Finally, print object information. */ + pr_err(" [%lu, %lu) '%s'", offset, offset + size, token); + } +} + +static bool __must_check get_address_stack_frame_info(const void *addr, + unsigned long *offset, + const char **frame_descr, + const void **frame_pc) +{ + unsigned long aligned_addr; + unsigned long mem_ptr; + const u8 *shadow_bottom; + const u8 *shadow_ptr; + const unsigned long *frame; + + BUILD_BUG_ON(IS_ENABLED(CONFIG_STACK_GROWSUP)); + + /* + * NOTE: We currently only support printing frame information for + * accesses to the task's own stack. 
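The decoding in print_decoded_frame_descr() above assumes the frame layout that get_address_stack_frame_info() goes on to validate: a magic word, a pointer to the compiler-generated description string, and the frame's saved pc. A sketch of that implied layout plus a hypothetical descriptor (the struct name, field names and example string are invented for illustration):

	/* Implied by the frame[] accesses in get_address_stack_frame_info(). */
	struct kasan_frame_header {
		unsigned long magic;	/* KASAN_CURRENT_STACK_FRAME_MAGIC, 0x41B58AB3 */
		const char *descr;	/* e.g. "1 32 8 6 buffer:154" (hypothetical) */
		const void *pc;		/* frame's function, symbolized with %pS */
	};

A descriptor of "1 32 8 6 buffer:154" would be reported as a single object spanning [32, 40) named 'buffer': the third token (the name length) is skipped, and the ':154' line suffix is stripped because the file name is not available.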
+ */ + if (!object_is_on_stack(addr)) + return false; + + aligned_addr = round_down((unsigned long)addr, sizeof(long)); + mem_ptr = round_down(aligned_addr, KASAN_SHADOW_SCALE_SIZE); + shadow_ptr = kasan_mem_to_shadow((void *)aligned_addr); + shadow_bottom = kasan_mem_to_shadow(end_of_stack(current)); + + while (shadow_ptr >= shadow_bottom && *shadow_ptr != KASAN_STACK_LEFT) { + shadow_ptr--; + mem_ptr -= KASAN_SHADOW_SCALE_SIZE; + } + + while (shadow_ptr >= shadow_bottom && *shadow_ptr == KASAN_STACK_LEFT) { + shadow_ptr--; + mem_ptr -= KASAN_SHADOW_SCALE_SIZE; + } + + if (shadow_ptr < shadow_bottom) + return false; + + frame = (const unsigned long *)(mem_ptr + KASAN_SHADOW_SCALE_SIZE); + if (frame[0] != KASAN_CURRENT_STACK_FRAME_MAGIC) { + pr_err("KASAN internal error: frame info validation failed; invalid marker: %lu\n", + frame[0]); + return false; + } + + *offset = (unsigned long)addr - (unsigned long)frame; + *frame_descr = (const char *)frame[1]; + *frame_pc = (void *)frame[2]; + + return true; +} + +static void print_address_stack_frame(const void *addr) +{ + unsigned long offset; + const char *frame_descr; + const void *frame_pc; + + if (IS_ENABLED(CONFIG_KASAN_SW_TAGS)) + return; + + if (!get_address_stack_frame_info(addr, &offset, &frame_descr, + &frame_pc)) + return; + + /* + * get_address_stack_frame_info only returns true if the given addr is + * on the current task's stack. + */ + pr_err("\n"); + pr_err("addr %px is located in stack of task %s/%d at offset %lu in frame:\n", + addr, current->comm, task_pid_nr(current), offset); + pr_err(" %pS\n", frame_pc); + + if (!frame_descr) + return; + + print_decoded_frame_descr(frame_descr); +} + static void print_address_description(void *addr) { struct page *page = addr_to_page(addr); @@ -204,6 +367,8 @@ static void print_address_description(void *addr) pr_err("The buggy address belongs to the page:\n"); dump_page(page, "kasan: bad access detected"); } + + print_address_stack_frame(addr); } static bool row_is_guilty(const void *row, const void *guilty) diff --git a/mm/kmemleak.c b/mm/kmemleak.c index 9857446021af..34647395f094 100644 --- a/mm/kmemleak.c +++ b/mm/kmemleak.c @@ -588,7 +588,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size, if (in_irq()) { object->pid = 0; strncpy(object->comm, "hardirq", sizeof(object->comm)); - } else if (in_softirq()) { + } else if (in_serving_softirq()) { object->pid = 0; strncpy(object->comm, "softirq", sizeof(object->comm)); } else { diff --git a/mm/list_lru.c b/mm/list_lru.c index e4709fdaa8e6..927d85be32f6 100644 --- a/mm/list_lru.c +++ b/mm/list_lru.c @@ -354,7 +354,7 @@ static int __memcg_init_list_lru_node(struct list_lru_memcg *memcg_lrus, } return 0; fail: - __memcg_destroy_list_lru_node(memcg_lrus, begin, i - 1); + __memcg_destroy_list_lru_node(memcg_lrus, begin, i); return -ENOMEM; } diff --git a/mm/memcontrol.c b/mm/memcontrol.c index ca0bc6e6be13..fd340985a561 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -57,6 +57,7 @@ #include <linux/lockdep.h> #include <linux/file.h> #include <linux/tracehook.h> +#include <linux/seq_buf.h> #include "internal.h" #include <net/sock.h> #include <net/ip.h> @@ -691,11 +692,12 @@ void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val) if (mem_cgroup_disabled()) return; + __this_cpu_add(memcg->vmstats_local->stat[idx], val); + x = val + __this_cpu_read(memcg->vmstats_percpu->stat[idx]); if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) { struct mem_cgroup *mi; - atomic_long_add(x, &memcg->vmstats_local[idx]); for 
(mi = memcg; mi; mi = parent_mem_cgroup(mi)) atomic_long_add(x, &mi->vmstats[idx]); x = 0; @@ -745,11 +747,12 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx, __mod_memcg_state(memcg, idx, val); /* Update lruvec */ + __this_cpu_add(pn->lruvec_stat_local->count[idx], val); + x = val + __this_cpu_read(pn->lruvec_stat_cpu->count[idx]); if (unlikely(abs(x) > MEMCG_CHARGE_BATCH)) { struct mem_cgroup_per_node *pi; - atomic_long_add(x, &pn->lruvec_stat_local[idx]); for (pi = pn; pi; pi = parent_nodeinfo(pi, pgdat->node_id)) atomic_long_add(x, &pi->lruvec_stat[idx]); x = 0; @@ -771,11 +774,12 @@ void __count_memcg_events(struct mem_cgroup *memcg, enum vm_event_item idx, if (mem_cgroup_disabled()) return; + __this_cpu_add(memcg->vmstats_local->events[idx], count); + x = count + __this_cpu_read(memcg->vmstats_percpu->events[idx]); if (unlikely(x > MEMCG_CHARGE_BATCH)) { struct mem_cgroup *mi; - atomic_long_add(x, &memcg->vmevents_local[idx]); for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) atomic_long_add(x, &mi->vmevents[idx]); x = 0; @@ -790,7 +794,12 @@ static unsigned long memcg_events(struct mem_cgroup *memcg, int event) static unsigned long memcg_events_local(struct mem_cgroup *memcg, int event) { - return atomic_long_read(&memcg->vmevents_local[event]); + long x = 0; + int cpu; + + for_each_possible_cpu(cpu) + x += per_cpu(memcg->vmstats_local->events[event], cpu); + return x; } static void mem_cgroup_charge_statistics(struct mem_cgroup *memcg, @@ -1348,27 +1357,114 @@ static bool mem_cgroup_wait_acct_move(struct mem_cgroup *memcg) return false; } -static const unsigned int memcg1_stats[] = { - MEMCG_CACHE, - MEMCG_RSS, - MEMCG_RSS_HUGE, - NR_SHMEM, - NR_FILE_MAPPED, - NR_FILE_DIRTY, - NR_WRITEBACK, - MEMCG_SWAP, -}; +static char *memory_stat_format(struct mem_cgroup *memcg) +{ + struct seq_buf s; + int i; -static const char *const memcg1_stat_names[] = { - "cache", - "rss", - "rss_huge", - "shmem", - "mapped_file", - "dirty", - "writeback", - "swap", -}; + seq_buf_init(&s, kvmalloc(PAGE_SIZE, GFP_KERNEL), PAGE_SIZE); + if (!s.buffer) + return NULL; + + /* + * Provide statistics on the state of the memory subsystem as + * well as cumulative event counters that show past behavior. 
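The buffer assembled here uses the same flat keyed format that the cgroup-v2 memory.stat file exposes via memory_stat_show() further down. With invented numbers, the start of the output looks like this (values are hypothetical; the keys follow the seq_buf_printf() calls below):

	anon 10485760
	file 134217728
	kernel_stack 98304
	slab 4710400
	sock 0
	shmem 0
	file_mapped 3145728
	file_dirty 524288
	file_writeback 0
	anon_thp 2097152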
+ * + * This list is ordered following a combination of these gradients: + * 1) generic big picture -> specifics and details + * 2) reflecting userspace activity -> reflecting kernel heuristics + * + * Current memory state: + */ + + seq_buf_printf(&s, "anon %llu\n", + (u64)memcg_page_state(memcg, MEMCG_RSS) * + PAGE_SIZE); + seq_buf_printf(&s, "file %llu\n", + (u64)memcg_page_state(memcg, MEMCG_CACHE) * + PAGE_SIZE); + seq_buf_printf(&s, "kernel_stack %llu\n", + (u64)memcg_page_state(memcg, MEMCG_KERNEL_STACK_KB) * + 1024); + seq_buf_printf(&s, "slab %llu\n", + (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) + + memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE)) * + PAGE_SIZE); + seq_buf_printf(&s, "sock %llu\n", + (u64)memcg_page_state(memcg, MEMCG_SOCK) * + PAGE_SIZE); + + seq_buf_printf(&s, "shmem %llu\n", + (u64)memcg_page_state(memcg, NR_SHMEM) * + PAGE_SIZE); + seq_buf_printf(&s, "file_mapped %llu\n", + (u64)memcg_page_state(memcg, NR_FILE_MAPPED) * + PAGE_SIZE); + seq_buf_printf(&s, "file_dirty %llu\n", + (u64)memcg_page_state(memcg, NR_FILE_DIRTY) * + PAGE_SIZE); + seq_buf_printf(&s, "file_writeback %llu\n", + (u64)memcg_page_state(memcg, NR_WRITEBACK) * + PAGE_SIZE); + + /* + * TODO: We should eventually replace our own MEMCG_RSS_HUGE counter + * with the NR_ANON_THP vm counter, but right now it's a pain in the + * arse because it requires migrating the work out of rmap to a place + * where the page->mem_cgroup is set up and stable. + */ + seq_buf_printf(&s, "anon_thp %llu\n", + (u64)memcg_page_state(memcg, MEMCG_RSS_HUGE) * + PAGE_SIZE); + + for (i = 0; i < NR_LRU_LISTS; i++) + seq_buf_printf(&s, "%s %llu\n", mem_cgroup_lru_names[i], + (u64)memcg_page_state(memcg, NR_LRU_BASE + i) * + PAGE_SIZE); + + seq_buf_printf(&s, "slab_reclaimable %llu\n", + (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) * + PAGE_SIZE); + seq_buf_printf(&s, "slab_unreclaimable %llu\n", + (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE) * + PAGE_SIZE); + + /* Accumulated memory events */ + + seq_buf_printf(&s, "pgfault %lu\n", memcg_events(memcg, PGFAULT)); + seq_buf_printf(&s, "pgmajfault %lu\n", memcg_events(memcg, PGMAJFAULT)); + + seq_buf_printf(&s, "workingset_refault %lu\n", + memcg_page_state(memcg, WORKINGSET_REFAULT)); + seq_buf_printf(&s, "workingset_activate %lu\n", + memcg_page_state(memcg, WORKINGSET_ACTIVATE)); + seq_buf_printf(&s, "workingset_nodereclaim %lu\n", + memcg_page_state(memcg, WORKINGSET_NODERECLAIM)); + + seq_buf_printf(&s, "pgrefill %lu\n", memcg_events(memcg, PGREFILL)); + seq_buf_printf(&s, "pgscan %lu\n", + memcg_events(memcg, PGSCAN_KSWAPD) + + memcg_events(memcg, PGSCAN_DIRECT)); + seq_buf_printf(&s, "pgsteal %lu\n", + memcg_events(memcg, PGSTEAL_KSWAPD) + + memcg_events(memcg, PGSTEAL_DIRECT)); + seq_buf_printf(&s, "pgactivate %lu\n", memcg_events(memcg, PGACTIVATE)); + seq_buf_printf(&s, "pgdeactivate %lu\n", memcg_events(memcg, PGDEACTIVATE)); + seq_buf_printf(&s, "pglazyfree %lu\n", memcg_events(memcg, PGLAZYFREE)); + seq_buf_printf(&s, "pglazyfreed %lu\n", memcg_events(memcg, PGLAZYFREED)); + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + seq_buf_printf(&s, "thp_fault_alloc %lu\n", + memcg_events(memcg, THP_FAULT_ALLOC)); + seq_buf_printf(&s, "thp_collapse_alloc %lu\n", + memcg_events(memcg, THP_COLLAPSE_ALLOC)); +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ + + /* The above should easily fit into one page */ + WARN_ON_ONCE(seq_buf_has_overflowed(&s)); + + return s.buffer; +} #define K(x) ((x) << (PAGE_SHIFT-10)) /** @@ -1403,39 +1499,32 @@ void 
mem_cgroup_print_oom_context(struct mem_cgroup *memcg, struct task_struct * */ void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg) { - struct mem_cgroup *iter; - unsigned int i; + char *buf; pr_info("memory: usage %llukB, limit %llukB, failcnt %lu\n", K((u64)page_counter_read(&memcg->memory)), K((u64)memcg->memory.max), memcg->memory.failcnt); - pr_info("memory+swap: usage %llukB, limit %llukB, failcnt %lu\n", - K((u64)page_counter_read(&memcg->memsw)), - K((u64)memcg->memsw.max), memcg->memsw.failcnt); - pr_info("kmem: usage %llukB, limit %llukB, failcnt %lu\n", - K((u64)page_counter_read(&memcg->kmem)), - K((u64)memcg->kmem.max), memcg->kmem.failcnt); - - for_each_mem_cgroup_tree(iter, memcg) { - pr_info("Memory cgroup stats for "); - pr_cont_cgroup_path(iter->css.cgroup); - pr_cont(":"); - - for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) { - if (memcg1_stats[i] == MEMCG_SWAP && !do_swap_account) - continue; - pr_cont(" %s:%luKB", memcg1_stat_names[i], - K(memcg_page_state_local(iter, - memcg1_stats[i]))); - } - - for (i = 0; i < NR_LRU_LISTS; i++) - pr_cont(" %s:%luKB", mem_cgroup_lru_names[i], - K(memcg_page_state_local(iter, - NR_LRU_BASE + i))); - - pr_cont("\n"); + if (cgroup_subsys_on_dfl(memory_cgrp_subsys)) + pr_info("swap: usage %llukB, limit %llukB, failcnt %lu\n", + K((u64)page_counter_read(&memcg->swap)), + K((u64)memcg->swap.max), memcg->swap.failcnt); + else { + pr_info("memory+swap: usage %llukB, limit %llukB, failcnt %lu\n", + K((u64)page_counter_read(&memcg->memsw)), + K((u64)memcg->memsw.max), memcg->memsw.failcnt); + pr_info("kmem: usage %llukB, limit %llukB, failcnt %lu\n", + K((u64)page_counter_read(&memcg->kmem)), + K((u64)memcg->kmem.max), memcg->kmem.failcnt); } + + pr_info("Memory cgroup stats for "); + pr_cont_cgroup_path(memcg->css.cgroup); + pr_cont(":"); + buf = memory_stat_format(memcg); + if (!buf) + return; + pr_info("%s", buf); + kvfree(buf); } /* @@ -1458,6 +1547,11 @@ unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg) return max; } +unsigned long mem_cgroup_size(struct mem_cgroup *memcg) +{ + return page_counter_read(&memcg->memory); +} + static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask, int order) { @@ -2191,11 +2285,9 @@ static int memcg_hotplug_cpu_dead(unsigned int cpu) long x; x = this_cpu_xchg(memcg->vmstats_percpu->stat[i], 0); - if (x) { - atomic_long_add(x, &memcg->vmstats_local[i]); + if (x) for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) atomic_long_add(x, &memcg->vmstats[i]); - } if (i >= NR_VM_NODE_STAT_ITEMS) continue; @@ -2205,12 +2297,10 @@ static int memcg_hotplug_cpu_dead(unsigned int cpu) pn = mem_cgroup_nodeinfo(memcg, nid); x = this_cpu_xchg(pn->lruvec_stat_cpu->count[i], 0); - if (x) { - atomic_long_add(x, &pn->lruvec_stat_local[i]); + if (x) do { atomic_long_add(x, &pn->lruvec_stat[i]); } while ((pn = parent_nodeinfo(pn, nid))); - } } } @@ -2218,11 +2308,9 @@ static int memcg_hotplug_cpu_dead(unsigned int cpu) long x; x = this_cpu_xchg(memcg->vmstats_percpu->events[i], 0); - if (x) { - atomic_long_add(x, &memcg->vmevents_local[i]); + if (x) for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) atomic_long_add(x, &memcg->vmevents[i]); - } } } @@ -2277,7 +2365,6 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask, unsigned long nr_reclaimed; bool may_swap = true; bool drained = false; - bool oomed = false; enum oom_status oom_status; if (mem_cgroup_is_root(memcg)) @@ -2364,7 +2451,7 @@ retry: if (nr_retries--) goto retry; - if (gfp_mask & __GFP_RETRY_MAYFAIL && oomed) + if 
(gfp_mask & __GFP_RETRY_MAYFAIL) goto nomem; if (gfp_mask & __GFP_NOFAIL) @@ -2383,7 +2470,6 @@ retry: switch (oom_status) { case OOM_SUCCESS: nr_retries = MEM_CGROUP_RECLAIM_RETRIES; - oomed = true; goto retry; case OOM_FAILED: goto force; @@ -3470,6 +3556,28 @@ static int memcg_numa_stat_show(struct seq_file *m, void *v) } #endif /* CONFIG_NUMA */ +static const unsigned int memcg1_stats[] = { + MEMCG_CACHE, + MEMCG_RSS, + MEMCG_RSS_HUGE, + NR_SHMEM, + NR_FILE_MAPPED, + NR_FILE_DIRTY, + NR_WRITEBACK, + MEMCG_SWAP, +}; + +static const char *const memcg1_stat_names[] = { + "cache", + "rss", + "rss_huge", + "shmem", + "mapped_file", + "dirty", + "writeback", + "swap", +}; + /* Universal VM events cgroup1 shows, original sort order */ static const unsigned int memcg1_events[] = { PGPGIN, @@ -4483,8 +4591,15 @@ static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node) if (!pn) return 1; + pn->lruvec_stat_local = alloc_percpu(struct lruvec_stat); + if (!pn->lruvec_stat_local) { + kfree(pn); + return 1; + } + pn->lruvec_stat_cpu = alloc_percpu(struct lruvec_stat); if (!pn->lruvec_stat_cpu) { + free_percpu(pn->lruvec_stat_local); kfree(pn); return 1; } @@ -4506,6 +4621,7 @@ static void free_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node) return; free_percpu(pn->lruvec_stat_cpu); + free_percpu(pn->lruvec_stat_local); kfree(pn); } @@ -4516,6 +4632,7 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg) for_each_node(node) free_mem_cgroup_per_node_info(memcg, node); free_percpu(memcg->vmstats_percpu); + free_percpu(memcg->vmstats_local); kfree(memcg); } @@ -4544,6 +4661,10 @@ static struct mem_cgroup *mem_cgroup_alloc(void) if (memcg->id.id < 0) goto fail; + memcg->vmstats_local = alloc_percpu(struct memcg_vmstats_percpu); + if (!memcg->vmstats_local) + goto fail; + memcg->vmstats_percpu = alloc_percpu(struct memcg_vmstats_percpu); if (!memcg->vmstats_percpu) goto fail; @@ -5610,112 +5731,42 @@ static ssize_t memory_max_write(struct kernfs_open_file *of, return nbytes; } +static void __memory_events_show(struct seq_file *m, atomic_long_t *events) +{ + seq_printf(m, "low %lu\n", atomic_long_read(&events[MEMCG_LOW])); + seq_printf(m, "high %lu\n", atomic_long_read(&events[MEMCG_HIGH])); + seq_printf(m, "max %lu\n", atomic_long_read(&events[MEMCG_MAX])); + seq_printf(m, "oom %lu\n", atomic_long_read(&events[MEMCG_OOM])); + seq_printf(m, "oom_kill %lu\n", + atomic_long_read(&events[MEMCG_OOM_KILL])); +} + static int memory_events_show(struct seq_file *m, void *v) { struct mem_cgroup *memcg = mem_cgroup_from_seq(m); - seq_printf(m, "low %lu\n", - atomic_long_read(&memcg->memory_events[MEMCG_LOW])); - seq_printf(m, "high %lu\n", - atomic_long_read(&memcg->memory_events[MEMCG_HIGH])); - seq_printf(m, "max %lu\n", - atomic_long_read(&memcg->memory_events[MEMCG_MAX])); - seq_printf(m, "oom %lu\n", - atomic_long_read(&memcg->memory_events[MEMCG_OOM])); - seq_printf(m, "oom_kill %lu\n", - atomic_long_read(&memcg->memory_events[MEMCG_OOM_KILL])); - + __memory_events_show(m, memcg->memory_events); return 0; } -static int memory_stat_show(struct seq_file *m, void *v) +static int memory_events_local_show(struct seq_file *m, void *v) { struct mem_cgroup *memcg = mem_cgroup_from_seq(m); - int i; - - /* - * Provide statistics on the state of the memory subsystem as - * well as cumulative event counters that show past behavior. 
- * - * This list is ordered following a combination of these gradients: - * 1) generic big picture -> specifics and details - * 2) reflecting userspace activity -> reflecting kernel heuristics - * - * Current memory state: - */ - - seq_printf(m, "anon %llu\n", - (u64)memcg_page_state(memcg, MEMCG_RSS) * PAGE_SIZE); - seq_printf(m, "file %llu\n", - (u64)memcg_page_state(memcg, MEMCG_CACHE) * PAGE_SIZE); - seq_printf(m, "kernel_stack %llu\n", - (u64)memcg_page_state(memcg, MEMCG_KERNEL_STACK_KB) * 1024); - seq_printf(m, "slab %llu\n", - (u64)(memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) + - memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE)) * - PAGE_SIZE); - seq_printf(m, "sock %llu\n", - (u64)memcg_page_state(memcg, MEMCG_SOCK) * PAGE_SIZE); - - seq_printf(m, "shmem %llu\n", - (u64)memcg_page_state(memcg, NR_SHMEM) * PAGE_SIZE); - seq_printf(m, "file_mapped %llu\n", - (u64)memcg_page_state(memcg, NR_FILE_MAPPED) * PAGE_SIZE); - seq_printf(m, "file_dirty %llu\n", - (u64)memcg_page_state(memcg, NR_FILE_DIRTY) * PAGE_SIZE); - seq_printf(m, "file_writeback %llu\n", - (u64)memcg_page_state(memcg, NR_WRITEBACK) * PAGE_SIZE); - - /* - * TODO: We should eventually replace our own MEMCG_RSS_HUGE counter - * with the NR_ANON_THP vm counter, but right now it's a pain in the - * arse because it requires migrating the work out of rmap to a place - * where the page->mem_cgroup is set up and stable. - */ - seq_printf(m, "anon_thp %llu\n", - (u64)memcg_page_state(memcg, MEMCG_RSS_HUGE) * PAGE_SIZE); - for (i = 0; i < NR_LRU_LISTS; i++) - seq_printf(m, "%s %llu\n", mem_cgroup_lru_names[i], - (u64)memcg_page_state(memcg, NR_LRU_BASE + i) * - PAGE_SIZE); - - seq_printf(m, "slab_reclaimable %llu\n", - (u64)memcg_page_state(memcg, NR_SLAB_RECLAIMABLE) * - PAGE_SIZE); - seq_printf(m, "slab_unreclaimable %llu\n", - (u64)memcg_page_state(memcg, NR_SLAB_UNRECLAIMABLE) * - PAGE_SIZE); - - /* Accumulated memory events */ - - seq_printf(m, "pgfault %lu\n", memcg_events(memcg, PGFAULT)); - seq_printf(m, "pgmajfault %lu\n", memcg_events(memcg, PGMAJFAULT)); - - seq_printf(m, "workingset_refault %lu\n", - memcg_page_state(memcg, WORKINGSET_REFAULT)); - seq_printf(m, "workingset_activate %lu\n", - memcg_page_state(memcg, WORKINGSET_ACTIVATE)); - seq_printf(m, "workingset_nodereclaim %lu\n", - memcg_page_state(memcg, WORKINGSET_NODERECLAIM)); - - seq_printf(m, "pgrefill %lu\n", memcg_events(memcg, PGREFILL)); - seq_printf(m, "pgscan %lu\n", memcg_events(memcg, PGSCAN_KSWAPD) + - memcg_events(memcg, PGSCAN_DIRECT)); - seq_printf(m, "pgsteal %lu\n", memcg_events(memcg, PGSTEAL_KSWAPD) + - memcg_events(memcg, PGSTEAL_DIRECT)); - seq_printf(m, "pgactivate %lu\n", memcg_events(memcg, PGACTIVATE)); - seq_printf(m, "pgdeactivate %lu\n", memcg_events(memcg, PGDEACTIVATE)); - seq_printf(m, "pglazyfree %lu\n", memcg_events(memcg, PGLAZYFREE)); - seq_printf(m, "pglazyfreed %lu\n", memcg_events(memcg, PGLAZYFREED)); + __memory_events_show(m, memcg->memory_events_local); + return 0; +} -#ifdef CONFIG_TRANSPARENT_HUGEPAGE - seq_printf(m, "thp_fault_alloc %lu\n", - memcg_events(memcg, THP_FAULT_ALLOC)); - seq_printf(m, "thp_collapse_alloc %lu\n", - memcg_events(memcg, THP_COLLAPSE_ALLOC)); -#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ +static int memory_stat_show(struct seq_file *m, void *v) +{ + struct mem_cgroup *memcg = mem_cgroup_from_seq(m); + char *buf; + buf = memory_stat_format(memcg); + if (!buf) + return -ENOMEM; + seq_puts(m, buf); + kvfree(buf); return 0; } @@ -5787,6 +5838,12 @@ static struct cftype memory_files[] = { .seq_show = 
memory_events_show, }, { + .name = "events.local", + .flags = CFTYPE_NOT_ON_ROOT, + .file_offset = offsetof(struct mem_cgroup, events_local_file), + .seq_show = memory_events_local_show, + }, + { .name = "stat", .flags = CFTYPE_NOT_ON_ROOT, .seq_show = memory_stat_show, diff --git a/mm/memory-failure.c b/mm/memory-failure.c index bc749265a8f3..b9cc36a284f9 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -216,7 +216,7 @@ static int kill_proc(struct to_kill *tk, unsigned long pfn, int flags) short addr_lsb = tk->size_shift; int ret; - pr_err("Memory failure: %#lx: Killing %s:%d due to hardware memory corruption\n", + pr_err("Memory failure: %#lx: Sending SIGBUS to %s:%d due to hardware memory corruption\n", pfn, t->comm, t->pid); if ((flags & MF_ACTION_REQUIRED) && t->mm == current->mm) { diff --git a/mm/memory.c b/mm/memory.c index ddf20bd0c317..39a82217520d 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1475,8 +1475,6 @@ static int insert_page(struct vm_area_struct *vma, unsigned long addr, set_pte_at(mm, addr, pte, mk_pte(page, prot)); retval = 0; - pte_unmap_unlock(pte, ptl); - return retval; out_unlock: pte_unmap_unlock(pte, ptl); out: @@ -1547,7 +1545,7 @@ static int __vm_map_pages(struct vm_area_struct *vma, struct page **pages, int ret, i; /* Fail if the user requested offset is beyond the end of the object */ - if (offset > num) + if (offset >= num) return -ENXIO; /* Fail if the user requested size exceeds available object size */ @@ -2038,7 +2036,6 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd, { pte_t *pte; int err; - pgtable_t token; spinlock_t *uninitialized_var(ptl); pte = (mm == &init_mm) ? @@ -2051,10 +2048,8 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd, arch_enter_lazy_mmu_mode(); - token = pmd_pgtable(*pmd); - do { - err = fn(pte++, token, addr, data); + err = fn(pte++, addr, data); if (err) break; } while (addr += PAGE_SIZE, addr != end); @@ -2807,7 +2802,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) struct swap_info_struct *si = swp_swap_info(entry); if (si->flags & SWP_SYNCHRONOUS_IO && - __swap_count(si, entry) == 1) { + __swap_count(entry) == 1) { /* skip swapcache */ page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vmf->address); @@ -2917,7 +2912,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) flush_icache_page(vma, page); if (pte_swp_soft_dirty(vmf->orig_pte)) pte = pte_mksoft_dirty(pte); - set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte); vmf->orig_pte = pte; @@ -2931,6 +2925,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf) mem_cgroup_commit_charge(page, memcg, true, false); activate_page(page); } + set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte); swap_free(entry); if (mem_cgroup_swap_full(page) || diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index e096c987d261..a88c5f334e5a 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ -251,7 +251,7 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat) #endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */ static int __meminit __add_section(int nid, unsigned long phys_start_pfn, - struct vmem_altmap *altmap, bool want_memblock) + struct vmem_altmap *altmap) { int ret; @@ -259,13 +259,7 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn, return -EEXIST; ret = sparse_add_one_section(nid, phys_start_pfn, altmap); - if (ret < 0) - return ret; - - if (!want_memblock) - return 0; - - return hotplug_memory_register(nid, 
__pfn_to_section(phys_start_pfn)); + return ret < 0 ? ret : 0; } /* @@ -300,8 +294,7 @@ int __ref __add_pages(int nid, unsigned long phys_start_pfn, } for (i = start_sec; i <= end_sec; i++) { - err = __add_section(nid, section_nr_to_pfn(i), altmap, - restrictions->flags & MHP_MEMBLOCK_API); + err = __add_section(nid, section_nr_to_pfn(i), altmap); /* * EEXIST is finally dealt with by ioresource collision @@ -318,7 +311,6 @@ out: return err; } -#ifdef CONFIG_MEMORY_HOTREMOVE /* find the smallest valid pfn in the range [start_pfn, end_pfn) */ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone, unsigned long start_pfn, @@ -528,13 +520,11 @@ static void __remove_section(struct zone *zone, struct mem_section *ms, if (WARN_ON_ONCE(!valid_section(ms))) return; - unregister_memory_section(ms); - scn_nr = __section_nr(ms); start_pfn = section_nr_to_pfn((unsigned long)scn_nr); __remove_zone(zone, start_pfn); - sparse_remove_one_section(zone, ms, map_offset, altmap); + sparse_remove_one_section(ms, map_offset, altmap); } /** @@ -582,7 +572,6 @@ void __remove_pages(struct zone *zone, unsigned long phys_start_pfn, set_zone_contiguous(zone); } -#endif /* CONFIG_MEMORY_HOTREMOVE */ int set_online_page_callback(online_page_callback_t callback) { @@ -1051,16 +1040,11 @@ int try_online_node(int nid) static int check_hotplug_memory_range(u64 start, u64 size) { - unsigned long block_sz = memory_block_size_bytes(); - u64 block_nr_pages = block_sz >> PAGE_SHIFT; - u64 nr_pages = size >> PAGE_SHIFT; - u64 start_pfn = PFN_DOWN(start); - /* memory range must be block size aligned */ - if (!nr_pages || !IS_ALIGNED(start_pfn, block_nr_pages) || - !IS_ALIGNED(nr_pages, block_nr_pages)) { + if (!size || !IS_ALIGNED(start, memory_block_size_bytes()) || + !IS_ALIGNED(size, memory_block_size_bytes())) { pr_err("Block size [%#lx] unaligned hotplug range: start %#llx, size %#llx", - block_sz, start, size); + memory_block_size_bytes(), start, size); return -EINVAL; } @@ -1080,9 +1064,7 @@ static int online_memory_block(struct memory_block *mem, void *arg) */ int __ref add_memory_resource(int nid, struct resource *res) { - struct mhp_restrictions restrictions = { - .flags = MHP_MEMBLOCK_API, - }; + struct mhp_restrictions restrictions = {}; u64 start, size; bool new_node = false; int ret; @@ -1114,6 +1096,13 @@ int __ref add_memory_resource(int nid, struct resource *res) if (ret < 0) goto error; + /* create memory block devices after memory was added */ + ret = create_memory_block_devices(start, size); + if (ret) { + arch_remove_memory(nid, start, size, NULL); + goto error; + } + if (new_node) { /* If sysfs file of new node can't be created, cpu on the node * can't be hot-added. There is no rollback way now. @@ -1736,9 +1725,10 @@ static int check_memblock_offlined_cb(struct memory_block *mem, void *arg) endpa = PFN_PHYS(section_nr_to_pfn(mem->end_section_nr + 1))-1; pr_warn("removing memory fails, because memory [%pa-%pa] is onlined\n", &beginpa, &endpa); - } - return ret; + return -EBUSY; + } + return 0; } static int check_cpu_on_node(pg_data_t *pgdat) @@ -1821,19 +1811,9 @@ static void __release_memory_resource(resource_size_t start, } } -/** - * remove_memory - * @nid: the node ID - * @start: physical address of the region to remove - * @size: size of the region to remove - * - * NOTE: The caller must call lock_device_hotplug() to serialize hotplug - * and online/offline operations before this call, as required by - * try_offline_node(). 
- */ -void __ref __remove_memory(int nid, u64 start, u64 size) +static int __ref try_remove_memory(int nid, u64 start, u64 size) { - int ret; + int rc = 0; BUG_ON(check_hotplug_memory_range(start, size)); @@ -1841,32 +1821,66 @@ void __ref __remove_memory(int nid, u64 start, u64 size) /* * All memory blocks must be offlined before removing memory. Check - * whether all memory blocks in question are offline and trigger a BUG() + * whether all memory blocks in question are offline and return error * if this is not the case. */ - ret = walk_memory_range(PFN_DOWN(start), PFN_UP(start + size - 1), NULL, - check_memblock_offlined_cb); - if (ret) - BUG(); + rc = walk_memory_range(PFN_DOWN(start), PFN_UP(start + size - 1), NULL, + check_memblock_offlined_cb); + if (rc) + goto done; /* remove memmap entry */ firmware_map_remove(start, start + size, "System RAM"); memblock_free(start, size); memblock_remove(start, size); + /* remove memory block devices before removing memory */ + remove_memory_block_devices(start, size); + arch_remove_memory(nid, start, size, NULL); __release_memory_resource(start, size); try_offline_node(nid); +done: mem_hotplug_done(); + return rc; } -void remove_memory(int nid, u64 start, u64 size) +/** + * remove_memory + * @nid: the node ID + * @start: physical address of the region to remove + * @size: size of the region to remove + * + * NOTE: The caller must call lock_device_hotplug() to serialize hotplug + * and online/offline operations before this call, as required by + * try_offline_node(). + */ +void __remove_memory(int nid, u64 start, u64 size) { + + /* + * trigger BUG() if some memory is not offlined prior to calling this + * function + */ + if (try_remove_memory(nid, start, size)) + BUG(); +} + +/* + * Remove memory if every memory block is offline, otherwise return -EBUSY if + * some memory is not offline + */ +int remove_memory(int nid, u64 start, u64 size) +{ + int rc; + lock_device_hotplug(); - __remove_memory(nid, start, size); + rc = try_remove_memory(nid, start, size); unlock_device_hotplug(); + + return rc; } EXPORT_SYMBOL_GPL(remove_memory); #endif /* CONFIG_MEMORY_HOTREMOVE */ diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 01600d80ae01..fdcb73536319 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -306,7 +306,7 @@ static void mpol_rebind_nodemask(struct mempolicy *pol, const nodemask_t *nodes) else { nodes_remap(tmp, pol->v.nodes,pol->w.cpuset_mems_allowed, *nodes); - pol->w.cpuset_mems_allowed = tmp; + pol->w.cpuset_mems_allowed = *nodes; } if (nodes_empty(tmp)) diff --git a/mm/migrate.c b/mm/migrate.c index f2ecc2855a12..572b4bc85d76 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -397,8 +397,7 @@ static int expected_page_refs(struct address_space *mapping, struct page *page) * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
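
The memory_hotplug.c hunk above funnels both removal paths through try_remove_memory(): __remove_memory() keeps the old fatal behaviour, while the exported remove_memory() now hands -EBUSY back to the caller when a block in the range is still online. A small stand-alone model of that wrapper split (the names below are illustrative, not kernel APIs):

#include <assert.h>
#include <errno.h>
#include <stdbool.h>

static int try_remove(bool all_offline)	/* stands in for try_remove_memory() */
{
	return all_offline ? 0 : -EBUSY;
}

static void remove_or_die(bool all_offline)	/* models __remove_memory() */
{
	assert(try_remove(all_offline) == 0);	/* BUG() if something is still online */
}

static int remove_checked(bool all_offline)	/* models remove_memory() */
{
	return try_remove(all_offline);		/* caller decides how to handle -EBUSY */
}
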
*/ int migrate_page_move_mapping(struct address_space *mapping, - struct page *newpage, struct page *page, enum migrate_mode mode, - int extra_count) + struct page *newpage, struct page *page, int extra_count) { XA_STATE(xas, &mapping->i_pages, page_index(page)); struct zone *oldzone, *newzone; @@ -684,7 +683,7 @@ int migrate_page(struct address_space *mapping, BUG_ON(PageWriteback(page)); /* Writeback must be complete */ - rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0); + rc = migrate_page_move_mapping(mapping, newpage, page, 0); if (rc != MIGRATEPAGE_SUCCESS) return rc; @@ -783,7 +782,7 @@ recheck_buffers: } } - rc = migrate_page_move_mapping(mapping, newpage, page, mode, 0); + rc = migrate_page_move_mapping(mapping, newpage, page, 0); if (rc != MIGRATEPAGE_SUCCESS) goto unlock_buffers; diff --git a/mm/mlock.c b/mm/mlock.c index 080f3b36415b..a90099da4fb4 100644 --- a/mm/mlock.c +++ b/mm/mlock.c @@ -636,11 +636,11 @@ static int apply_vma_lock_flags(unsigned long start, size_t len, * is also counted. * Return value: previously mlocked page counts */ -static int count_mm_mlocked_page_nr(struct mm_struct *mm, +static unsigned long count_mm_mlocked_page_nr(struct mm_struct *mm, unsigned long start, size_t len) { struct vm_area_struct *vma; - int count = 0; + unsigned long count = 0; if (mm == NULL) mm = current->mm; @@ -797,7 +797,8 @@ SYSCALL_DEFINE1(mlockall, int, flags) unsigned long lock_limit; int ret; - if (!flags || (flags & ~(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT))) + if (!flags || (flags & ~(MCL_CURRENT | MCL_FUTURE | MCL_ONFAULT)) || + flags == MCL_ONFAULT) return -EINVAL; if (!can_do_mlock()) diff --git a/mm/mmap.c b/mm/mmap.c index 7e8c3e8ae75f..dedae10cb6e2 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1866,6 +1866,22 @@ unacct_error: return error; } +static inline unsigned long gap_start_offset(struct vm_unmapped_area_info *info, + unsigned long addr) +{ + /* get gap_start offset to adjust gap address to the + * desired alignment + */ + return (info->align_offset - addr) & info->align_mask; +} + +static inline unsigned long gap_end_offset(struct vm_unmapped_area_info *info, + unsigned long addr) +{ + /* get gap_end offset to adjust gap address to the desired alignment */ + return (addr - info->align_offset) & info->align_mask; +} + unsigned long unmapped_area(struct vm_unmapped_area_info *info) { /* @@ -1880,10 +1896,7 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info) struct vm_area_struct *vma; unsigned long length, low_limit, high_limit, gap_start, gap_end; - /* Adjust search length to account for worst case alignment overhead */ - length = info->length + info->align_mask; - if (length < info->length) - return -ENOMEM; + length = info->length; /* Adjust search limits by the desired length */ if (info->high_limit < length) @@ -1915,6 +1928,7 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info) } gap_start = vma->vm_prev ? 
vm_end_gap(vma->vm_prev) : 0; + gap_start += gap_start_offset(info, gap_start); check_current: /* Check if current node has a suitable gap */ if (gap_start > high_limit) @@ -1943,6 +1957,7 @@ check_current: struct vm_area_struct, vm_rb); if (prev == vma->vm_rb.rb_left) { gap_start = vm_end_gap(vma->vm_prev); + gap_start += gap_start_offset(info, gap_start); gap_end = vm_start_gap(vma); goto check_current; } @@ -1952,17 +1967,17 @@ check_current: check_highest: /* Check highest gap, which does not precede any rbtree node */ gap_start = mm->highest_vm_end; + gap_start += gap_start_offset(info, gap_start); gap_end = ULONG_MAX; /* Only for VM_BUG_ON below */ if (gap_start > high_limit) return -ENOMEM; found: /* We found a suitable gap. Clip it with the original low_limit. */ - if (gap_start < info->low_limit) + if (gap_start < info->low_limit) { gap_start = info->low_limit; - - /* Adjust gap address to the desired alignment */ - gap_start += (info->align_offset - gap_start) & info->align_mask; + gap_start += gap_start_offset(info, gap_start); + } VM_BUG_ON(gap_start + info->length > info->high_limit); VM_BUG_ON(gap_start + info->length > gap_end); @@ -1975,16 +1990,14 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info) struct vm_area_struct *vma; unsigned long length, low_limit, high_limit, gap_start, gap_end; - /* Adjust search length to account for worst case alignment overhead */ - length = info->length + info->align_mask; - if (length < info->length) - return -ENOMEM; + length = info->length; /* * Adjust search limits by the desired length. * See implementation comment at top of unmapped_area(). */ gap_end = info->high_limit; + gap_end -= gap_end_offset(info, gap_end); if (gap_end < length) return -ENOMEM; high_limit = gap_end - length; @@ -2021,6 +2034,7 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info) check_current: /* Check if current node has a suitable gap */ gap_end = vm_start_gap(vma); + gap_end -= gap_end_offset(info, gap_end); if (gap_end < low_limit) return -ENOMEM; if (gap_start <= high_limit && @@ -2055,13 +2069,14 @@ check_current: found: /* We found a suitable gap. Clip it with the original high_limit. */ - if (gap_end > info->high_limit) + if (gap_end > info->high_limit) { gap_end = info->high_limit; + gap_end -= gap_end_offset(info, gap_end); + } found_highest: /* Compute highest gap address at the desired alignment */ gap_end -= info->length; - gap_end -= (gap_end - info->align_offset) & info->align_mask; VM_BUG_ON(gap_end < info->low_limit); VM_BUG_ON(gap_end < gap_start); diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index 99740e1dd273..8c943a6e1696 100644 --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@ -245,14 +245,28 @@ void tlb_finish_mmu(struct mmu_gather *tlb, { /* * If there are parallel threads are doing PTE changes on same range - * under non-exclusive lock(e.g., mmap_sem read-side) but defer TLB - * flush by batching, a thread has stable TLB entry can fail to flush - * the TLB by observing pte_none|!pte_dirty, for example so flush TLB - * forcefully if we detect parallel PTE batching threads. + * under non-exclusive lock (e.g., mmap_sem read-side) but defer TLB + * flush by batching, one thread may end up seeing inconsistent PTEs + * and result in having stale TLB entries. So flush TLB forcefully + * if we detect parallel PTE batching threads. + * + * However, some syscalls, e.g. munmap(), may free page tables, this + * needs force flush everything in the given range. 
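
In the mm/mmap.c changes above, unmapped_area() and unmapped_area_topdown() stop padding the search length by align_mask and instead nudge each candidate gap with gap_start_offset()/gap_end_offset(). The arithmetic is modular rounding towards the requested alignment; a self-contained demonstration of the same expression (the kernel helpers take a struct vm_unmapped_area_info, this sketch passes align_offset and align_mask directly):

#include <assert.h>

/* distance from addr up to the next address congruent to offset mod (mask + 1) */
static unsigned long gap_start_adjust(unsigned long offset, unsigned long mask,
				      unsigned long addr)
{
	return (offset - addr) & mask;
}

int main(void)
{
	unsigned long mask = (1UL << 21) - 1;	/* e.g. 2 MiB alignment */
	unsigned long addr = 0x1234000;

	addr += gap_start_adjust(0, mask, addr);
	assert((addr & mask) == 0);		/* the gap start is now aligned */
	return 0;
}
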
Otherwise this + * may result in having stale TLB entries for some architectures, + * e.g. aarch64, that could specify flush what level TLB. */ if (mm_tlb_flush_nested(tlb->mm)) { + /* + * The aarch64 yields better performance with fullmm by + * avoiding multiple CPUs spamming TLBI messages at the + * same time. + * + * On x86 non-fullmm doesn't yield significant difference + * against fullmm. + */ + tlb->fullmm = 1; __tlb_reset_range(tlb); - __tlb_adjust_range(tlb, start, end - start); + tlb->freed_tables = 1; } tlb_flush_mmu(tlb); diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index ee36068077b6..30158d467b76 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -276,7 +276,7 @@ static int do_mmu_notifier_register(struct mmu_notifier *mn, * thanks to mm_take_all_locks(). */ spin_lock(&mm->mmu_notifier_mm->lock); - hlist_add_head(&mn->hlist, &mm->mmu_notifier_mm->list); + hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list); spin_unlock(&mm->mmu_notifier_mm->lock); mm_drop_all_locks(mm); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index d66bc8abe0af..3018360c5cd0 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -50,7 +50,6 @@ #include <linux/backing-dev.h> #include <linux/fault-inject.h> #include <linux/page-isolation.h> -#include <linux/page_ext.h> #include <linux/debugobjects.h> #include <linux/kmemleak.h> #include <linux/compaction.h> @@ -646,30 +645,29 @@ void prep_compound_page(struct page *page, unsigned int order) #ifdef CONFIG_DEBUG_PAGEALLOC unsigned int _debug_guardpage_minorder; -bool _debug_pagealloc_enabled __read_mostly - = IS_ENABLED(CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT); + +#ifdef CONFIG_DEBUG_PAGEALLOC_ENABLE_DEFAULT +DEFINE_STATIC_KEY_TRUE(_debug_pagealloc_enabled); +#else +DEFINE_STATIC_KEY_FALSE(_debug_pagealloc_enabled); +#endif EXPORT_SYMBOL(_debug_pagealloc_enabled); -bool _debug_guardpage_enabled __read_mostly; + +DEFINE_STATIC_KEY_FALSE(_debug_guardpage_enabled); static int __init early_debug_pagealloc(char *buf) { - if (!buf) - return -EINVAL; - return kstrtobool(buf, &_debug_pagealloc_enabled); -} -early_param("debug_pagealloc", early_debug_pagealloc); + bool enable = false; -static bool need_debug_guardpage(void) -{ - /* If we don't use debug_pagealloc, we don't need guard page */ - if (!debug_pagealloc_enabled()) - return false; + if (kstrtobool(buf, &enable)) + return -EINVAL; - if (!debug_guardpage_minorder()) - return false; + if (enable) + static_branch_enable(&_debug_pagealloc_enabled); - return true; + return 0; } +early_param("debug_pagealloc", early_debug_pagealloc); static void init_debug_guardpage(void) { @@ -679,14 +677,9 @@ static void init_debug_guardpage(void) if (!debug_guardpage_minorder()) return; - _debug_guardpage_enabled = true; + static_branch_enable(&_debug_guardpage_enabled); } -struct page_ext_operations debug_guardpage_ops = { - .need = need_debug_guardpage, - .init = init_debug_guardpage, -}; - static int __init debug_guardpage_minorder_setup(char *buf) { unsigned long res; @@ -704,20 +697,13 @@ early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup); static inline bool set_page_guard(struct zone *zone, struct page *page, unsigned int order, int migratetype) { - struct page_ext *page_ext; - if (!debug_guardpage_enabled()) return false; if (order >= debug_guardpage_minorder()) return false; - page_ext = lookup_page_ext(page); - if (unlikely(!page_ext)) - return false; - - __set_bit(PAGE_EXT_DEBUG_GUARD, &page_ext->flags); - + __SetPageGuard(page); INIT_LIST_HEAD(&page->lru); set_page_private(page, 
order); /* Guard pages are not available for any usage */ @@ -729,23 +715,16 @@ static inline bool set_page_guard(struct zone *zone, struct page *page, static inline void clear_page_guard(struct zone *zone, struct page *page, unsigned int order, int migratetype) { - struct page_ext *page_ext; - if (!debug_guardpage_enabled()) return; - page_ext = lookup_page_ext(page); - if (unlikely(!page_ext)) - return; - - __clear_bit(PAGE_EXT_DEBUG_GUARD, &page_ext->flags); + __ClearPageGuard(page); set_page_private(page, 0); if (!is_migrate_isolate(migratetype)) __mod_zone_freepage_state(zone, (1 << order), migratetype); } #else -struct page_ext_operations debug_guardpage_ops; static inline bool set_page_guard(struct zone *zone, struct page *page, unsigned int order, int migratetype) { return false; } static inline void clear_page_guard(struct zone *zone, struct page *page, @@ -1151,19 +1130,36 @@ static __always_inline bool free_pages_prepare(struct page *page, } #ifdef CONFIG_DEBUG_VM -static inline bool free_pcp_prepare(struct page *page) +/* + * With DEBUG_VM enabled, order-0 pages are checked immediately when being freed + * to pcp lists. With debug_pagealloc also enabled, they are also rechecked when + * moved from pcp lists to free lists. + */ +static bool free_pcp_prepare(struct page *page) { return free_pages_prepare(page, 0, true); } -static inline bool bulkfree_pcp_prepare(struct page *page) +static bool bulkfree_pcp_prepare(struct page *page) { - return false; + if (debug_pagealloc_enabled()) + return free_pages_check(page); + else + return false; } #else +/* + * With DEBUG_VM disabled, order-0 pages being freed are checked only when + * moving from pcp lists to free list in order to reduce overhead. With + * debug_pagealloc enabled, they are checked also immediately when being freed + * to the pcp lists. + */ static bool free_pcp_prepare(struct page *page) { - return free_pages_prepare(page, 0, false); + if (debug_pagealloc_enabled()) + return free_pages_prepare(page, 0, true); + else + return free_pages_prepare(page, 0, false); } static bool bulkfree_pcp_prepare(struct page *page) @@ -1903,6 +1899,10 @@ void __init page_alloc_init_late(void) for_each_populated_zone(zone) set_zone_contiguous(zone); + +#ifdef CONFIG_DEBUG_PAGEALLOC + init_debug_guardpage(); +#endif } #ifdef CONFIG_CMA @@ -2025,23 +2025,39 @@ static inline bool free_pages_prezeroed(void) } #ifdef CONFIG_DEBUG_VM -static bool check_pcp_refill(struct page *page) +/* + * With DEBUG_VM enabled, order-0 pages are checked for expected state when + * being allocated from pcp lists. With debug_pagealloc also enabled, they are + * also checked when pcp lists are refilled from the free lists. + */ +static inline bool check_pcp_refill(struct page *page) { - return false; + if (debug_pagealloc_enabled()) + return check_new_page(page); + else + return false; } -static bool check_new_pcp(struct page *page) +static inline bool check_new_pcp(struct page *page) { return check_new_page(page); } #else -static bool check_pcp_refill(struct page *page) +/* + * With DEBUG_VM disabled, free order-0 pages are checked for expected state + * when pcp lists are being refilled from the free lists. With debug_pagealloc + * enabled, they are also checked when being allocated from the pcp lists. 
+ */ +static inline bool check_pcp_refill(struct page *page) { return check_new_page(page); } -static bool check_new_pcp(struct page *page) +static inline bool check_new_pcp(struct page *page) { - return false; + if (debug_pagealloc_enabled()) + return check_new_page(page); + else + return false; } #endif /* CONFIG_DEBUG_VM */ @@ -5535,6 +5551,8 @@ static void build_zonelists(pg_data_t *pgdat) int node, load, nr_nodes = 0; nodemask_t used_mask; int local_node, prev_node; + struct zone *zone; + struct zoneref *z; /* NUMA-aware ordering of nodes */ local_node = pgdat->node_id; @@ -5560,6 +5578,11 @@ static void build_zonelists(pg_data_t *pgdat) build_zonelists_in_node_order(pgdat, node_order, nr_nodes); build_thisnode_zonelists(pgdat); + + pr_info("node[%d] zonelist: ", pgdat->node_id); + for_each_zone_zonelist(zone, z, &pgdat->node_zonelists[ZONELIST_FALLBACK], MAX_NR_ZONES-1) + pr_cont("%d:%s ", zone_to_nid(zone), zone->name); + pr_cont("\n"); } #ifdef CONFIG_HAVE_MEMORYLESS_NODES @@ -5662,10 +5685,11 @@ static void __build_all_zonelists(void *data) if (self && !node_online(self->node_id)) { build_zonelists(self); } else { - for_each_online_node(nid) { + for_each_node(nid) { pg_data_t *pgdat = NODE_DATA(nid); - build_zonelists(pgdat); + if (pgdat) + build_zonelists(pgdat); } #ifdef CONFIG_HAVE_MEMORYLESS_NODES @@ -6948,10 +6972,8 @@ static unsigned long __init find_min_pfn_for_node(int nid) for_each_mem_pfn_range(i, nid, &start_pfn, NULL, NULL) min_pfn = min(min_pfn, start_pfn); - if (min_pfn == ULONG_MAX) { - pr_warn("Could not find start_pfn for node %d\n", nid); + if (min_pfn == ULONG_MAX) return 0; - } return min_pfn; } @@ -7295,8 +7317,12 @@ void __init free_area_init_nodes(unsigned long *max_zone_pfn) mminit_verify_pageflags_layout(); setup_nr_node_ids(); zero_resv_unavail(); - for_each_online_node(nid) { + for_each_node(nid) { pg_data_t *pgdat = NODE_DATA(nid); + + if (!pgdat) + continue; + free_area_init_node(nid, NULL, find_min_pfn_for_node(nid), NULL); diff --git a/mm/page_ext.c b/mm/page_ext.c index d8f1aca4ad43..5f5769c7db3b 100644 --- a/mm/page_ext.c +++ b/mm/page_ext.c @@ -59,9 +59,6 @@ */ static struct page_ext_operations *page_ext_ops[] = { -#ifdef CONFIG_DEBUG_PAGEALLOC - &debug_guardpage_ops, -#endif #ifdef CONFIG_PAGE_OWNER &page_owner_ops, #endif diff --git a/mm/page_io.c b/mm/page_io.c index 2e8019d0e048..24ad2d43c11d 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -164,7 +164,7 @@ int generic_swapfile_activate(struct swap_info_struct *sis, blocks_per_page = PAGE_SIZE >> blkbits; /* - * Map all the blocks into the extent list. This code doesn't try + * Map all the blocks into the extent tree. This code doesn't try * to be very smart. 
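
To condense the four page_alloc comments above: an order-0 page is now sanity-checked on exactly one boundary by default (the per-cpu list boundary with CONFIG_DEBUG_VM, the pcp/free-list boundary without it), and enabling debug_pagealloc turns on the other check as well, on both the free and the allocation paths. A tiny model of that decision table (illustrative only):

#include <stdbool.h>

struct order0_checks {
	bool on_pcp_boundary;	/* page enters/leaves a per-cpu list */
	bool on_buddy_boundary;	/* page moves between pcp and free lists */
};

static struct order0_checks where_checked(bool debug_vm, bool debug_pagealloc)
{
	struct order0_checks c;

	/* the cheap default runs the check on exactly one boundary ... */
	c.on_pcp_boundary = debug_vm;
	c.on_buddy_boundary = !debug_vm;
	/* ... and debug_pagealloc adds the other one too */
	if (debug_pagealloc)
		c.on_pcp_boundary = c.on_buddy_boundary = true;
	return c;
}
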
*/ probe_block = 0; diff --git a/mm/slab.c b/mm/slab.c index f7117ad9b3a3..9e3eee5568b6 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -371,12 +371,6 @@ static void **dbg_userword(struct kmem_cache *cachep, void *objp) static int slab_max_order = SLAB_MAX_ORDER_LO; static bool slab_max_order_set __initdata; -static inline struct kmem_cache *virt_to_cache(const void *obj) -{ - struct page *page = virt_to_head_page(obj); - return page->slab_cache; -} - static inline void *index_to_obj(struct kmem_cache *cache, struct page *page, unsigned int idx) { @@ -3715,6 +3709,8 @@ void kmem_cache_free_bulk(struct kmem_cache *orig_s, size_t size, void **p) s = virt_to_cache(objp); else s = cache_from_obj(orig_s, objp); + if (!s) + continue; debug_check_no_locks_freed(objp, s->object_size); if (!(s->flags & SLAB_DEBUG_OBJECTS)) @@ -3749,6 +3745,8 @@ void kfree(const void *objp) local_irq_save(flags); kfree_debugcheck(objp); c = virt_to_cache(objp); + if (!c) + return; debug_check_no_locks_freed(objp, c->object_size); debug_check_no_obj_freed(objp, c->object_size); @@ -4219,13 +4217,15 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page, */ size_t ksize(const void *objp) { + struct kmem_cache *c; size_t size; BUG_ON(!objp); if (unlikely(objp == ZERO_SIZE_PTR)) return 0; - size = virt_to_cache(objp)->object_size; + c = virt_to_cache(objp); + size = c ? c->object_size : 0; /* We assume that ksize callers could use the whole allocated area, * so we need to unpoison this area. */ diff --git a/mm/slab.h b/mm/slab.h index 43ac818b8592..739099af6cbb 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -310,7 +310,7 @@ static inline bool is_root_cache(struct kmem_cache *s) static inline bool slab_equal_or_root(struct kmem_cache *s, struct kmem_cache *p) { - return true; + return s == p; } static inline const char *cache_name(struct kmem_cache *s) @@ -350,10 +350,20 @@ static inline void memcg_link_cache(struct kmem_cache *s) #endif /* CONFIG_MEMCG_KMEM */ +static inline struct kmem_cache *virt_to_cache(const void *obj) +{ + struct page *page; + + page = virt_to_head_page(obj); + if (WARN_ONCE(!PageSlab(page), "%s: Object is not a Slab page!\n", + __func__)) + return NULL; + return page->slab_cache; +} + static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x) { struct kmem_cache *cachep; - struct page *page; /* * When kmemcg is not being used, both assignments should return the @@ -363,18 +373,15 @@ static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x) * will also be a constant. */ if (!memcg_kmem_enabled() && + !IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) && !unlikely(s->flags & SLAB_CONSISTENCY_CHECKS)) return s; - page = virt_to_head_page(x); - cachep = page->slab_cache; - if (slab_equal_or_root(cachep, s)) - return cachep; - - pr_err("%s: Wrong slab cache. %s but object is from %s\n", - __func__, s->name, cachep->name); - WARN_ON_ONCE(1); - return s; + cachep = virt_to_cache(x); + WARN_ONCE(cachep && !slab_equal_or_root(cachep, s), + "%s: Wrong slab cache. 
%s but object is from %s\n", + __func__, s->name, cachep->name); + return cachep; } static inline size_t slab_ksize(const struct kmem_cache *s) diff --git a/mm/slub.c b/mm/slub.c index cd04dbd2b5d0..1802c87799ff 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -1313,9 +1313,7 @@ slab_flags_t kmem_cache_flags(unsigned int object_size, char *end, *glob; size_t cmplen; - end = strchr(iter, ','); - if (!end) - end = iter + strlen(iter); + end = strchrnul(iter, ','); glob = strnchr(iter, end - iter, '*'); if (glob) diff --git a/mm/sparse.c b/mm/sparse.c index fd13166949b5..1552c855d62a 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -604,7 +604,6 @@ static void __kfree_section_memmap(struct page *memmap, vmemmap_free(start, end, altmap); } -#ifdef CONFIG_MEMORY_HOTREMOVE static void free_map_bootmem(struct page *memmap) { unsigned long start = (unsigned long)memmap; @@ -612,7 +611,6 @@ static void free_map_bootmem(struct page *memmap) vmemmap_free(start, end, NULL); } -#endif /* CONFIG_MEMORY_HOTREMOVE */ #else static struct page *__kmalloc_section_memmap(void) { @@ -651,7 +649,6 @@ static void __kfree_section_memmap(struct page *memmap, get_order(sizeof(struct page) * PAGES_PER_SECTION)); } -#ifdef CONFIG_MEMORY_HOTREMOVE static void free_map_bootmem(struct page *memmap) { unsigned long maps_section_nr, removing_section_nr, i; @@ -681,7 +678,6 @@ static void free_map_bootmem(struct page *memmap) put_page_bootmem(page); } } -#endif /* CONFIG_MEMORY_HOTREMOVE */ #endif /* CONFIG_SPARSEMEM_VMEMMAP */ /** @@ -746,7 +742,6 @@ out: return ret; } -#ifdef CONFIG_MEMORY_HOTREMOVE #ifdef CONFIG_MEMORY_FAILURE static void clear_hwpoisoned_pages(struct page *memmap, int nr_pages) { @@ -805,8 +800,8 @@ static void free_section_usemap(struct page *memmap, unsigned long *usemap, free_map_bootmem(memmap); } -void sparse_remove_one_section(struct zone *zone, struct mem_section *ms, - unsigned long map_offset, struct vmem_altmap *altmap) +void sparse_remove_one_section(struct mem_section *ms, unsigned long map_offset, + struct vmem_altmap *altmap) { struct page *memmap = NULL; unsigned long *usemap = NULL; @@ -823,5 +818,4 @@ void sparse_remove_one_section(struct zone *zone, struct mem_section *ms, PAGES_PER_SECTION - map_offset); free_section_usemap(memmap, usemap, altmap); } -#endif /* CONFIG_MEMORY_HOTREMOVE */ #endif /* CONFIG_MEMORY_HOTPLUG */ diff --git a/mm/swap.c b/mm/swap.c index 7ede3eddc12a..6d153ce4cb8c 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -740,15 +740,20 @@ void release_pages(struct page **pages, int nr) if (is_huge_zero_page(page)) continue; - /* Device public page can not be huge page */ - if (is_device_public_page(page)) { + if (is_zone_device_page(page)) { if (locked_pgdat) { spin_unlock_irqrestore(&locked_pgdat->lru_lock, flags); locked_pgdat = NULL; } - put_devmap_managed_page(page); - continue; + /* + * Not all zone-device-pages require special + * processing. 
Those pages return 'false' from + * put_devmap_managed_page() expecting a call to + * put_page_testzero() + */ + if (put_devmap_managed_page(page)) + continue; } page = compound_head(page); diff --git a/mm/swap_state.c b/mm/swap_state.c index eb714165afd2..62da25b7f2ed 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -73,23 +73,24 @@ unsigned long total_swapcache_pages(void) unsigned int i, j, nr; unsigned long ret = 0; struct address_space *spaces; + struct swap_info_struct *si; - rcu_read_lock(); for (i = 0; i < MAX_SWAPFILES; i++) { - /* - * The corresponding entries in nr_swapper_spaces and - * swapper_spaces will be reused only after at least - * one grace period. So it is impossible for them - * belongs to different usage. - */ - nr = nr_swapper_spaces[i]; - spaces = rcu_dereference(swapper_spaces[i]); - if (!nr || !spaces) + swp_entry_t entry = swp_entry(i, 1); + + /* Avoid get_swap_device() to warn for bad swap entry */ + if (!swp_swap_info(entry)) + continue; + /* Prevent swapoff to free swapper_spaces */ + si = get_swap_device(entry); + if (!si) continue; + nr = nr_swapper_spaces[i]; + spaces = swapper_spaces[i]; for (j = 0; j < nr; j++) ret += spaces[j].nrpages; + put_swap_device(si); } - rcu_read_unlock(); return ret; } @@ -310,8 +311,13 @@ struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma, unsigned long addr) { struct page *page; + struct swap_info_struct *si; + si = get_swap_device(entry); + if (!si) + return NULL; page = find_get_page(swap_address_space(entry), swp_offset(entry)); + put_swap_device(si); INC_CACHE_INFO(find_total); if (page) { @@ -354,8 +360,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, struct vm_area_struct *vma, unsigned long addr, bool *new_page_allocated) { - struct page *found_page, *new_page = NULL; - struct address_space *swapper_space = swap_address_space(entry); + struct page *found_page = NULL, *new_page = NULL; + struct swap_info_struct *si; int err; *new_page_allocated = false; @@ -365,7 +371,12 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, * called after lookup_swap_cache() failed, re-calling * that would confuse statistics. 
*/ - found_page = find_get_page(swapper_space, swp_offset(entry)); + si = get_swap_device(entry); + if (!si) + break; + found_page = find_get_page(swap_address_space(entry), + swp_offset(entry)); + put_swap_device(si); if (found_page) break; @@ -601,20 +612,16 @@ int init_swap_address_space(unsigned int type, unsigned long nr_pages) mapping_set_no_writeback_tags(space); } nr_swapper_spaces[type] = nr; - rcu_assign_pointer(swapper_spaces[type], spaces); + swapper_spaces[type] = spaces; return 0; } void exit_swap_address_space(unsigned int type) { - struct address_space *spaces; - - spaces = swapper_spaces[type]; + kvfree(swapper_spaces[type]); nr_swapper_spaces[type] = 0; - rcu_assign_pointer(swapper_spaces[type], NULL); - synchronize_rcu(); - kvfree(spaces); + swapper_spaces[type] = NULL; } static inline void swap_ra_clamp_pfn(struct vm_area_struct *vma, diff --git a/mm/swapfile.c b/mm/swapfile.c index 596ac98051c5..c56aef7547d1 100644 --- a/mm/swapfile.c +++ b/mm/swapfile.c @@ -152,6 +152,18 @@ static int __try_to_reclaim_swap(struct swap_info_struct *si, return ret; } +static inline struct swap_extent *first_se(struct swap_info_struct *sis) +{ + struct rb_node *rb = rb_first(&sis->swap_extent_root); + return rb_entry(rb, struct swap_extent, rb_node); +} + +static inline struct swap_extent *next_se(struct swap_extent *se) +{ + struct rb_node *rb = rb_next(&se->rb_node); + return rb ? rb_entry(rb, struct swap_extent, rb_node) : NULL; +} + /* * swapon tell device that all the old swap contents can be discarded, * to allow the swap device to optimize its wear-levelling. @@ -164,7 +176,7 @@ static int discard_swap(struct swap_info_struct *si) int err = 0; /* Do not discard the swap header page! */ - se = &si->first_swap_extent; + se = first_se(si); start_block = (se->start_block + 1) << (PAGE_SHIFT - 9); nr_blocks = ((sector_t)se->nr_pages - 1) << (PAGE_SHIFT - 9); if (nr_blocks) { @@ -175,7 +187,7 @@ static int discard_swap(struct swap_info_struct *si) cond_resched(); } - list_for_each_entry(se, &si->first_swap_extent.list, list) { + for (se = next_se(se); se; se = next_se(se)) { start_block = se->start_block << (PAGE_SHIFT - 9); nr_blocks = (sector_t)se->nr_pages << (PAGE_SHIFT - 9); @@ -189,6 +201,26 @@ static int discard_swap(struct swap_info_struct *si) return err; /* That will often be -EOPNOTSUPP */ } +static struct swap_extent * +offset_to_swap_extent(struct swap_info_struct *sis, unsigned long offset) +{ + struct swap_extent *se; + struct rb_node *rb; + + rb = sis->swap_extent_root.rb_node; + while (rb) { + se = rb_entry(rb, struct swap_extent, rb_node); + if (offset < se->start_page) + rb = rb->rb_left; + else if (offset >= se->start_page + se->nr_pages) + rb = rb->rb_right; + else + return se; + } + /* It *must* be present */ + BUG(); +} + /* * swap allocation tell device that a cluster of swap can now be discarded, * to allow the swap device to optimize its wear-levelling. 
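
The new offset_to_swap_extent() above replaces the old cached-cursor list walk with a lookup in an rbtree keyed by start_page. The lookup itself is ordinary interval search; a stand-alone model using binary search over a sorted extent array shows the same rule, including the block computation used by map_swap_entry():

#include <stddef.h>

struct extent { unsigned long start_page, nr_pages, start_block; };

/* extents are sorted by start_page and cover the device without overlap */
static const struct extent *find_extent(const struct extent *ext, size_t n,
					unsigned long offset)
{
	size_t lo = 0, hi = n;

	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;

		if (offset < ext[mid].start_page)
			hi = mid;
		else if (offset >= ext[mid].start_page + ext[mid].nr_pages)
			lo = mid + 1;
		else
			return &ext[mid];	/* offset falls inside this extent */
	}
	return NULL;	/* the kernel version treats this as a BUG() */
}

static unsigned long offset_to_block(const struct extent *e, unsigned long offset)
{
	return e->start_block + (offset - e->start_page);
}
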
@@ -196,32 +228,25 @@ static int discard_swap(struct swap_info_struct *si) static void discard_swap_cluster(struct swap_info_struct *si, pgoff_t start_page, pgoff_t nr_pages) { - struct swap_extent *se = si->curr_swap_extent; - int found_extent = 0; + struct swap_extent *se = offset_to_swap_extent(si, start_page); while (nr_pages) { - if (se->start_page <= start_page && - start_page < se->start_page + se->nr_pages) { - pgoff_t offset = start_page - se->start_page; - sector_t start_block = se->start_block + offset; - sector_t nr_blocks = se->nr_pages - offset; - - if (nr_blocks > nr_pages) - nr_blocks = nr_pages; - start_page += nr_blocks; - nr_pages -= nr_blocks; - - if (!found_extent++) - si->curr_swap_extent = se; - - start_block <<= PAGE_SHIFT - 9; - nr_blocks <<= PAGE_SHIFT - 9; - if (blkdev_issue_discard(si->bdev, start_block, - nr_blocks, GFP_NOIO, 0)) - break; - } + pgoff_t offset = start_page - se->start_page; + sector_t start_block = se->start_block + offset; + sector_t nr_blocks = se->nr_pages - offset; + + if (nr_blocks > nr_pages) + nr_blocks = nr_pages; + start_page += nr_blocks; + nr_pages -= nr_blocks; + + start_block <<= PAGE_SHIFT - 9; + nr_blocks <<= PAGE_SHIFT - 9; + if (blkdev_issue_discard(si->bdev, start_block, + nr_blocks, GFP_NOIO, 0)) + break; - se = list_next_entry(se, list); + se = next_se(se); } } @@ -1079,12 +1104,11 @@ fail: static struct swap_info_struct *__swap_info_get(swp_entry_t entry) { struct swap_info_struct *p; - unsigned long offset, type; + unsigned long offset; if (!entry.val) goto out; - type = swp_type(entry); - p = swap_type_to_swap_info(type); + p = swp_swap_info(entry); if (!p) goto bad_nofile; if (!(p->flags & SWP_USED)) @@ -1187,6 +1211,69 @@ static unsigned char __swap_entry_free_locked(struct swap_info_struct *p, return usage; } +/* + * Check whether swap entry is valid in the swap device. If so, + * return pointer to swap_info_struct, and keep the swap entry valid + * via preventing the swap device from being swapoff, until + * put_swap_device() is called. Otherwise return NULL. + * + * The entirety of the RCU read critical section must come before the + * return from or after the call to synchronize_rcu() in + * enable_swap_info() or swapoff(). So if "si->flags & SWP_VALID" is + * true, the si->map, si->cluster_info, etc. must be valid in the + * critical section. + * + * Notice that swapoff or swapoff+swapon can still happen before the + * rcu_read_lock() in get_swap_device() or after the rcu_read_unlock() + * in put_swap_device() if there isn't any other way to prevent + * swapoff, such as page lock, page table lock, etc. The caller must + * be prepared for that. For example, the following situation is + * possible. + * + * CPU1 CPU2 + * do_swap_page() + * ... swapoff+swapon + * __read_swap_cache_async() + * swapcache_prepare() + * __swap_duplicate() + * // check swap_map + * // verify PTE not changed + * + * In __swap_duplicate(), the swap_map need to be checked before + * changing partly because the specified swap entry may be for another + * swap device which has been swapoff. And in do_swap_page(), after + * the page is read from the swap device, the PTE is verified not + * changed with the page table locked to check whether the swap device + * has been swapoff or swapoff+swapon. 
+ */ +struct swap_info_struct *get_swap_device(swp_entry_t entry) +{ + struct swap_info_struct *si; + unsigned long offset; + + if (!entry.val) + goto out; + si = swp_swap_info(entry); + if (!si) + goto bad_nofile; + + rcu_read_lock(); + if (!(si->flags & SWP_VALID)) + goto unlock_out; + offset = swp_offset(entry); + if (offset >= si->max) + goto unlock_out; + + return si; +bad_nofile: + pr_err("%s: %s%08lx\n", __func__, Bad_file, entry.val); +out: + return NULL; +unlock_out: + rcu_read_unlock(); + return NULL; +} + static unsigned char __swap_entry_free(struct swap_info_struct *p, swp_entry_t entry, unsigned char usage) { @@ -1358,11 +1445,18 @@ int page_swapcount(struct page *page) return count; } -int __swap_count(struct swap_info_struct *si, swp_entry_t entry) +int __swap_count(swp_entry_t entry) { + struct swap_info_struct *si; pgoff_t offset = swp_offset(entry); + int count = 0; - return swap_count(si->swap_map[offset]); + si = get_swap_device(entry); + if (si) { + count = swap_count(si->swap_map[offset]); + put_swap_device(si); + } + return count; } static int swap_swapcount(struct swap_info_struct *si, swp_entry_t entry) @@ -1387,9 +1481,11 @@ int __swp_swapcount(swp_entry_t entry) int count = 0; struct swap_info_struct *si; - si = __swap_info_get(entry); - if (si) + si = get_swap_device(entry); + if (si) { count = swap_swapcount(si, entry); + put_swap_device(si); + } return count; } @@ -1684,7 +1780,7 @@ int swap_type_of(dev_t device, sector_t offset, struct block_device **bdev_p) return type; } if (bdev == sis->bdev) { - struct swap_extent *se = &sis->first_swap_extent; + struct swap_extent *se = first_se(sis); if (se->start_block == offset) { if (bdev_p) @@ -1784,8 +1880,6 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd, dec_mm_counter(vma->vm_mm, MM_SWAPENTS); inc_mm_counter(vma->vm_mm, MM_ANONPAGES); get_page(page); - set_pte_at(vma->vm_mm, addr, pte, - pte_mkold(mk_pte(page, vma->vm_page_prot))); if (page == swapcache) { page_add_anon_rmap(page, vma, addr, false); mem_cgroup_commit_charge(page, memcg, true, false); @@ -1794,6 +1888,8 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd, mem_cgroup_commit_charge(page, memcg, false, false); lru_cache_add_active_or_unevictable(page, vma); } + set_pte_at(vma->vm_mm, addr, pte, + pte_mkold(mk_pte(page, vma->vm_page_prot))); swap_free(entry); /* * Move the page to the active list so it is not @@ -2161,7 +2257,6 @@ static void drain_mmlist(void) static sector_t map_swap_entry(swp_entry_t entry, struct block_device **bdev) { struct swap_info_struct *sis; - struct swap_extent *start_se; struct swap_extent *se; pgoff_t offset; @@ -2169,18 +2264,8 @@ static sector_t map_swap_entry(swp_entry_t entry, struct block_device **bdev) *bdev = sis->bdev; offset = swp_offset(entry); - start_se = sis->curr_swap_extent; - se = start_se; - - for ( ; ; ) { - if (se->start_page <= offset && - offset < (se->start_page + se->nr_pages)) { - return se->start_block + (offset - se->start_page); - } - se = list_next_entry(se, list); - sis->curr_swap_extent = se; - BUG_ON(se == start_se); /* It *must* be present */ - } + se = offset_to_swap_extent(sis, offset); + return se->start_block + (offset - se->start_page); } /* @@ -2198,12 +2283,11 @@ sector_t map_swap_page(struct page *page, struct block_device **bdev) */ static void destroy_swap_extents(struct swap_info_struct *sis) { - while (!list_empty(&sis->first_swap_extent.list)) { - struct swap_extent *se; + while (!RB_EMPTY_ROOT(&sis->swap_extent_root)) { + struct rb_node *rb = 
sis->swap_extent_root.rb_node; + struct swap_extent *se = rb_entry(rb, struct swap_extent, rb_node); - se = list_first_entry(&sis->first_swap_extent.list, - struct swap_extent, list); - list_del(&se->list); + rb_erase(rb, &sis->swap_extent_root); kfree(se); } @@ -2219,7 +2303,7 @@ static void destroy_swap_extents(struct swap_info_struct *sis) /* * Add a block range (and the corresponding page range) into this swapdev's - * extent list. The extent list is kept sorted in page order. + * extent tree. * * This function rather assumes that it is called in ascending page order. */ @@ -2227,20 +2311,21 @@ int add_swap_extent(struct swap_info_struct *sis, unsigned long start_page, unsigned long nr_pages, sector_t start_block) { + struct rb_node **link = &sis->swap_extent_root.rb_node, *parent = NULL; struct swap_extent *se; struct swap_extent *new_se; - struct list_head *lh; - - if (start_page == 0) { - se = &sis->first_swap_extent; - sis->curr_swap_extent = se; - se->start_page = 0; - se->nr_pages = nr_pages; - se->start_block = start_block; - return 1; - } else { - lh = sis->first_swap_extent.list.prev; /* Highest extent */ - se = list_entry(lh, struct swap_extent, list); + + /* + * place the new node at the right most since the + * function is called in ascending page order. + */ + while (*link) { + parent = *link; + link = &parent->rb_right; + } + + if (parent) { + se = rb_entry(parent, struct swap_extent, rb_node); BUG_ON(se->start_page + se->nr_pages != start_page); if (se->start_block + se->nr_pages == start_block) { /* Merge it */ @@ -2249,9 +2334,7 @@ add_swap_extent(struct swap_info_struct *sis, unsigned long start_page, } } - /* - * No merge. Insert a new extent, preserving ordering. - */ + /* No merge, insert a new extent. */ new_se = kmalloc(sizeof(*se), GFP_KERNEL); if (new_se == NULL) return -ENOMEM; @@ -2259,7 +2342,8 @@ add_swap_extent(struct swap_info_struct *sis, unsigned long start_page, new_se->nr_pages = nr_pages; new_se->start_block = start_block; - list_add_tail(&new_se->list, &sis->first_swap_extent.list); + rb_link_node(&new_se->rb_node, parent, link); + rb_insert_color(&new_se->rb_node, &sis->swap_extent_root); return 1; } EXPORT_SYMBOL_GPL(add_swap_extent); @@ -2335,9 +2419,9 @@ static int swap_node(struct swap_info_struct *p) return bdev ? bdev->bd_disk->node_id : NUMA_NO_NODE; } -static void _enable_swap_info(struct swap_info_struct *p, int prio, - unsigned char *swap_map, - struct swap_cluster_info *cluster_info) +static void setup_swap_info(struct swap_info_struct *p, int prio, + unsigned char *swap_map, + struct swap_cluster_info *cluster_info) { int i; @@ -2362,7 +2446,11 @@ static void _enable_swap_info(struct swap_info_struct *p, int prio, } p->swap_map = swap_map; p->cluster_info = cluster_info; - p->flags |= SWP_WRITEOK; +} + +static void _enable_swap_info(struct swap_info_struct *p) +{ + p->flags |= SWP_WRITEOK | SWP_VALID; atomic_long_add(p->pages, &nr_swap_pages); total_swap_pages += p->pages; @@ -2389,7 +2477,17 @@ static void enable_swap_info(struct swap_info_struct *p, int prio, frontswap_init(p->type, frontswap_map); spin_lock(&swap_lock); spin_lock(&p->lock); - _enable_swap_info(p, prio, swap_map, cluster_info); + setup_swap_info(p, prio, swap_map, cluster_info); + spin_unlock(&p->lock); + spin_unlock(&swap_lock); + /* + * Guarantee swap_map, cluster_info, etc. 
fields are valid + * between get/put_swap_device() if SWP_VALID bit is set + */ + synchronize_rcu(); + spin_lock(&swap_lock); + spin_lock(&p->lock); + _enable_swap_info(p); spin_unlock(&p->lock); spin_unlock(&swap_lock); } @@ -2398,7 +2496,8 @@ static void reinsert_swap_info(struct swap_info_struct *p) { spin_lock(&swap_lock); spin_lock(&p->lock); - _enable_swap_info(p, p->prio, p->swap_map, p->cluster_info); + setup_swap_info(p, p->prio, p->swap_map, p->cluster_info); + _enable_swap_info(p); spin_unlock(&p->lock); spin_unlock(&swap_lock); } @@ -2501,6 +2600,17 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile) reenable_swap_slots_cache_unlock(); + spin_lock(&swap_lock); + spin_lock(&p->lock); + p->flags &= ~SWP_VALID; /* mark swap device as invalid */ + spin_unlock(&p->lock); + spin_unlock(&swap_lock); + /* + * wait for swap operations protected by get/put_swap_device() + * to complete + */ + synchronize_rcu(); + flush_work(&p->discard_work); destroy_swap_extents(p); @@ -2749,7 +2859,7 @@ static struct swap_info_struct *alloc_swap_info(void) * would be relying on p->type to remain valid. */ } - INIT_LIST_HEAD(&p->first_swap_extent.list); + p->swap_extent_root = RB_ROOT; plist_node_init(&p->list, 0); for_each_node(i) plist_node_init(&p->avail_lists[i], 0); @@ -3265,17 +3375,11 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage) unsigned char has_cache; int err = -EINVAL; - if (non_swap_entry(entry)) - goto out; - - p = swp_swap_info(entry); + p = get_swap_device(entry); if (!p) - goto bad_file; - - offset = swp_offset(entry); - if (unlikely(offset >= p->max)) goto out; + offset = swp_offset(entry); ci = lock_cluster_or_swap_info(p, offset); count = p->swap_map[offset]; @@ -3321,11 +3425,9 @@ static int __swap_duplicate(swp_entry_t entry, unsigned char usage) unlock_out: unlock_cluster_or_swap_info(p, ci); out: + if (p) + put_swap_device(p); return err; - -bad_file: - pr_err("swap_dup: %s%08lx\n", Bad_file, entry.val); - goto out; } /* @@ -3417,6 +3519,7 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask) struct page *list_page; pgoff_t offset; unsigned char count; + int ret = 0; /* * When debugging, it's easier to use __GFP_ZERO here; but it's better @@ -3424,15 +3527,15 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask) */ page = alloc_page(gfp_mask | __GFP_HIGHMEM); - si = swap_info_get(entry); + si = get_swap_device(entry); if (!si) { /* * An acceptable race has occurred since the failing - * __swap_duplicate(): the swap entry has been freed, - * perhaps even the whole swap_map cleared for swapoff. 
+ * __swap_duplicate(): the swap device may be swapoff */ goto outer; } + spin_lock(&si->lock); offset = swp_offset(entry); @@ -3450,9 +3553,8 @@ int add_swap_count_continuation(swp_entry_t entry, gfp_t gfp_mask) } if (!page) { - unlock_cluster(ci); - spin_unlock(&si->lock); - return -ENOMEM; + ret = -ENOMEM; + goto out; } /* @@ -3504,10 +3606,11 @@ out_unlock_cont: out: unlock_cluster(ci); spin_unlock(&si->lock); + put_swap_device(si); outer: if (page) __free_page(page); - return 0; + return ret; } /* diff --git a/mm/util.c b/mm/util.c index 9834c4ab7d8e..021648a8a3a3 100644 --- a/mm/util.c +++ b/mm/util.c @@ -7,6 +7,7 @@ #include <linux/err.h> #include <linux/sched.h> #include <linux/sched/mm.h> +#include <linux/sched/signal.h> #include <linux/sched/task_stack.h> #include <linux/security.h> #include <linux/swap.h> @@ -347,6 +348,80 @@ int __weak get_user_pages_fast(unsigned long start, } EXPORT_SYMBOL_GPL(get_user_pages_fast); +/** + * __account_locked_vm - account locked pages to an mm's locked_vm + * @mm: mm to account against + * @pages: number of pages to account + * @inc: %true if @pages should be considered positive, %false if not + * @task: task used to check RLIMIT_MEMLOCK + * @bypass_rlim: %true if checking RLIMIT_MEMLOCK should be skipped + * + * Assumes @task and @mm are valid (i.e. at least one reference on each), and + * that mmap_sem is held as writer. + * + * Return: + * * 0 on success + * * -ENOMEM if RLIMIT_MEMLOCK would be exceeded. + */ +int __account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc, + struct task_struct *task, bool bypass_rlim) +{ + unsigned long locked_vm, limit; + int ret = 0; + + lockdep_assert_held_exclusive(&mm->mmap_sem); + + locked_vm = mm->locked_vm; + if (inc) { + if (!bypass_rlim) { + limit = task_rlimit(task, RLIMIT_MEMLOCK) >> PAGE_SHIFT; + if (locked_vm + pages > limit) + ret = -ENOMEM; + } + if (!ret) + mm->locked_vm = locked_vm + pages; + } else { + WARN_ON_ONCE(pages > locked_vm); + mm->locked_vm = locked_vm - pages; + } + + pr_debug("%s: [%d] caller %ps %c%lu %lu/%lu%s\n", __func__, task->pid, + (void *)_RET_IP_, (inc) ? '+' : '-', pages << PAGE_SHIFT, + locked_vm << PAGE_SHIFT, task_rlimit(task, RLIMIT_MEMLOCK), + ret ? " - exceeded" : ""); + + return ret; +} +EXPORT_SYMBOL_GPL(__account_locked_vm); + +/** + * account_locked_vm - account locked pages to an mm's locked_vm + * @mm: mm to account against, may be NULL + * @pages: number of pages to account + * @inc: %true if @pages should be considered positive, %false if not + * + * Assumes a non-NULL @mm is valid (i.e. at least one reference on it). + * + * Return: + * * 0 on success, or if mm is NULL + * * -ENOMEM if RLIMIT_MEMLOCK would be exceeded. 
+ */ +int account_locked_vm(struct mm_struct *mm, unsigned long pages, bool inc) +{ + int ret; + + if (pages == 0 || !mm) + return 0; + + down_write(&mm->mmap_sem); + ret = __account_locked_vm(mm, pages, inc, current, + capable(CAP_IPC_LOCK)); + up_write(&mm->mmap_sem); + + return ret; +} +EXPORT_SYMBOL_GPL(account_locked_vm); + unsigned long vm_mmap_pgoff(struct file *file, unsigned long addr, unsigned long len, unsigned long prot, unsigned long flag, unsigned long pgoff) diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 4c9e150e5ad3..bed6a065f73a 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -3007,7 +3007,7 @@ void __weak vmalloc_sync_all(void) } -static int f(pte_t *pte, pgtable_t table, unsigned long addr, void *data) +static int f(pte_t *pte, unsigned long addr, void *data) { pte_t ***p = data; diff --git a/mm/vmscan.c b/mm/vmscan.c index 7acd0afdfc2a..84dcb651d05c 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1118,6 +1118,7 @@ static unsigned long shrink_page_list(struct list_head *page_list, int may_enter_fs; enum page_references references = PAGEREF_RECLAIM_CLEAN; bool dirty, writeback; + unsigned int nr_pages; cond_resched(); @@ -1129,7 +1130,10 @@ static unsigned long shrink_page_list(struct list_head *page_list, VM_BUG_ON_PAGE(PageActive(page), page); - sc->nr_scanned++; + nr_pages = 1 << compound_order(page); + + /* Account the number of base pages even though THP */ + sc->nr_scanned += nr_pages; if (unlikely(!page_evictable(page))) goto activate_locked; @@ -1137,11 +1141,6 @@ static unsigned long shrink_page_list(struct list_head *page_list, if (!sc->may_unmap && page_mapped(page)) goto keep_locked; - /* Double the slab pressure for mapped and swapcache pages */ - if ((page_mapped(page) || PageSwapCache(page)) && - !(PageAnon(page) && !PageSwapBacked(page))) - sc->nr_scanned++; - may_enter_fs = (sc->gfp_mask & __GFP_FS) || (PageSwapCache(page) && (sc->gfp_mask & __GFP_IO)); @@ -1255,7 +1254,7 @@ static unsigned long shrink_page_list(struct list_head *page_list, case PAGEREF_ACTIVATE: goto activate_locked; case PAGEREF_KEEP: - stat->nr_ref_keep++; + stat->nr_ref_keep += nr_pages; goto keep_locked; case PAGEREF_RECLAIM: case PAGEREF_RECLAIM_CLEAN: @@ -1287,7 +1286,7 @@ static unsigned long shrink_page_list(struct list_head *page_list, } if (!add_to_swap(page)) { if (!PageTransHuge(page)) - goto activate_locked; + goto activate_locked_split; /* Fallback to swap normal pages */ if (split_huge_page_to_list(page, page_list)) @@ -1296,7 +1295,7 @@ static unsigned long shrink_page_list(struct list_head *page_list, count_vm_event(THP_SWPOUT_FALLBACK); #endif if (!add_to_swap(page)) - goto activate_locked; + goto activate_locked_split; } may_enter_fs = 1; @@ -1311,6 +1310,18 @@ static unsigned long shrink_page_list(struct list_head *page_list, } /* + * THP may get split above, need minus tail pages and update + * nr_pages to avoid accounting tail pages twice. + * + * The tail pages that are added into swap cache successfully + * reach here. + */ + if ((nr_pages > 1) && !PageTransHuge(page)) { + sc->nr_scanned -= (nr_pages - 1); + nr_pages = 1; + } + + /* * The page is mapped into the page tables of one or more * processes. Try to unmap it here. 
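
Looking back at the mm/util.c hunk above, __account_locked_vm()/account_locked_vm() centralize the RLIMIT_MEMLOCK bookkeeping that callers used to open-code. A reduced user-space model of the core check (the names and the global counter are illustrative, not kernel state):

#include <errno.h>

static unsigned long locked_vm;	/* models mm->locked_vm, in pages */

static int account_locked(unsigned long pages, int inc,
			  unsigned long limit_pages, int bypass_rlim)
{
	if (inc) {
		if (!bypass_rlim && locked_vm + pages > limit_pages)
			return -ENOMEM;		/* would exceed RLIMIT_MEMLOCK */
		locked_vm += pages;
	} else {
		locked_vm -= pages;	/* the kernel WARNs if this would underflow */
	}
	return 0;
}
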
*/ @@ -1320,7 +1331,7 @@ static unsigned long shrink_page_list(struct list_head *page_list, if (unlikely(PageTransHuge(page))) flags |= TTU_SPLIT_HUGE_PMD; if (!try_to_unmap(page, flags)) { - stat->nr_unmap_fail++; + stat->nr_unmap_fail += nr_pages; goto activate_locked; } } @@ -1447,7 +1458,11 @@ static unsigned long shrink_page_list(struct list_head *page_list, unlock_page(page); free_it: - nr_reclaimed++; + /* + * THP may get swapped out in a whole, need account + * all base pages. + */ + nr_reclaimed += nr_pages; /* * Is there need to periodically free_page_list? It would @@ -1460,6 +1475,15 @@ free_it: list_add(&page->lru, &free_pages); continue; +activate_locked_split: + /* + * The tail pages that are failed to add into swap cache + * reach here. Fixup nr_scanned and nr_pages. + */ + if (nr_pages > 1) { + sc->nr_scanned -= (nr_pages - 1); + nr_pages = 1; + } activate_locked: /* Not a candidate for swapping, so reclaim swap space. */ if (PageSwapCache(page) && (mem_cgroup_swap_full(page) || @@ -1469,8 +1493,7 @@ activate_locked: if (!PageMlocked(page)) { int type = page_is_file_cache(page); SetPageActive(page); - pgactivate++; - stat->nr_activate[type] += hpage_nr_pages(page); + stat->nr_activate[type] += nr_pages; count_memcg_page_event(page, PGACTIVATE); } keep_locked: @@ -1480,6 +1503,8 @@ keep: VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page); } + pgactivate = stat->nr_activate[0] + stat->nr_activate[1]; + mem_cgroup_uncharge_list(&free_pages); try_to_unmap_flush(); free_unref_page_list(&free_pages); @@ -1505,7 +1530,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone, list_for_each_entry_safe(page, next, page_list, lru) { if (page_is_file_cache(page) && !PageDirty(page) && - !__PageMovable(page)) { + !__PageMovable(page) && !PageUnevictable(page)) { ClearPageActive(page); list_move(&page->lru, &clean_pages); } @@ -1651,10 +1676,9 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan, LIST_HEAD(pages_skipped); isolate_mode_t mode = (sc->may_unmap ? 0 : ISOLATE_UNMAPPED); + total_scan = 0; scan = 0; - for (total_scan = 0; - scan < nr_to_scan && nr_taken < nr_to_scan && !list_empty(src); - total_scan++) { + while (scan < nr_to_scan && !list_empty(src)) { struct page *page; page = lru_to_page(src); @@ -1662,9 +1686,12 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan, VM_BUG_ON_PAGE(!PageLRU(page), page); + nr_pages = 1 << compound_order(page); + total_scan += nr_pages; + if (page_zonenum(page) > sc->reclaim_idx) { list_move(&page->lru, &pages_skipped); - nr_skipped[page_zonenum(page)]++; + nr_skipped[page_zonenum(page)] += nr_pages; continue; } @@ -1673,11 +1700,14 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan, * return with no isolated pages if the LRU mostly contains * ineligible pages. This causes the VM to not reclaim any * pages, triggering a premature OOM. + * + * Account all tail pages of THP. This would not cause + * premature OOM since __isolate_lru_page() returns -EBUSY + * only when the page is being freed somewhere else. 
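
The shrink_page_list()/isolate_lru_pages() changes above switch the scanned and reclaimed counters to base pages, so a THP is accounted as 512 pages up front and the count is fixed up if the page gets split (or its tail pages fail to enter the swap cache), to avoid counting the tails twice when they come back through the list. A rough model of that bookkeeping, assuming 4 KiB base pages:

static unsigned long nr_scanned;

static void account_scan(unsigned int compound_order, int got_split)
{
	unsigned int nr_pages = 1u << compound_order;	/* 512 for a 2 MiB THP */

	nr_scanned += nr_pages;
	if (nr_pages > 1 && got_split) {
		/* tail pages will be scanned individually later on */
		nr_scanned -= nr_pages - 1;
	}
}
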
*/ - scan++; + scan += nr_pages; switch (__isolate_lru_page(page, mode)) { case 0: - nr_pages = hpage_nr_pages(page); nr_taken += nr_pages; nr_zone_taken[page_zonenum(page)] += nr_pages; list_move(&page->lru, dst); @@ -1953,8 +1983,8 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec, if (global_reclaim(sc)) __count_vm_events(item, nr_reclaimed); __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed); - reclaim_stat->recent_rotated[0] = stat.nr_activate[0]; - reclaim_stat->recent_rotated[1] = stat.nr_activate[1]; + reclaim_stat->recent_rotated[0] += stat.nr_activate[0]; + reclaim_stat->recent_rotated[1] += stat.nr_activate[1]; move_pages_to_lru(lruvec, &page_list); @@ -2199,8 +2229,7 @@ enum scan_balance { * nr[2] = file inactive pages to scan; nr[3] = file active pages to scan */ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg, - struct scan_control *sc, unsigned long *nr, - unsigned long *lru_pages) + struct scan_control *sc, unsigned long *nr) { int swappiness = mem_cgroup_swappiness(memcg); struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat; @@ -2351,20 +2380,72 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg, fraction[1] = fp; denominator = ap + fp + 1; out: - *lru_pages = 0; for_each_evictable_lru(lru) { int file = is_file_lru(lru); - unsigned long size; + unsigned long lruvec_size; unsigned long scan; + unsigned long protection; + + lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx); + protection = mem_cgroup_protection(memcg, + sc->memcg_low_reclaim); + + if (protection) { + /* + * Scale a cgroup's reclaim pressure by proportioning + * its current usage to its memory.low or memory.min + * setting. + * + * This is important, as otherwise scanning aggression + * becomes extremely binary -- from nothing as we + * approach the memory protection threshold, to totally + * nominal as we exceed it. This results in requiring + * setting extremely liberal protection thresholds. It + * also means we simply get no protection at all if we + * set it too low, which is not ideal. + * + * If there is any protection in place, we reduce scan + * pressure by how much of the total memory used is + * within protection thresholds. + * + * There is one special case: in the first reclaim pass, + * we skip over all groups that are within their low + * protection. If that fails to reclaim enough pages to + * satisfy the reclaim goal, we come back and override + * the best-effort low protection. However, we still + * ideally want to honor how well-behaved groups are in + * that case instead of simply punishing them all + * equally. As such, we reclaim them based on how much + * memory they are using, reducing the scan pressure + * again by how much of the total memory used is under + * hard protection. + */ + unsigned long cgroup_size = mem_cgroup_size(memcg); + + /* Avoid TOCTOU with earlier protection check */ + cgroup_size = max(cgroup_size, protection); + + scan = lruvec_size - lruvec_size * protection / + cgroup_size; + + /* + * Minimally target SWAP_CLUSTER_MAX pages to keep + * reclaim moving forwards, avoiding decremeting + * sc->priority further than desirable. + */ + scan = max(scan, SWAP_CLUSTER_MAX); + } else { + scan = lruvec_size; + } + + scan >>= sc->priority; - size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx); - scan = size >> sc->priority; /* * If the cgroup's already been deleted, make sure to * scrape out the remaining cache. 
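
The protection-aware branch of get_scan_count() above scales scan pressure by how much of the cgroup's usage lies above its memory.min/memory.low protection: scan = lruvec_size - lruvec_size * protection / max(usage, protection), floored at SWAP_CLUSTER_MAX and then shifted by the reclaim priority. A worked stand-alone version (protection and usage only need consistent units): with usage 600, protection 500 and a 100000-page lruvec, the pre-priority target is 100000 - 100000*500/600, roughly 16667 pages.

#define SWAP_CLUSTER_MAX 32UL

static unsigned long scaled_scan(unsigned long lruvec_size,
				 unsigned long protection,
				 unsigned long cgroup_size,
				 int priority)
{
	unsigned long scan;

	if (protection) {
		if (cgroup_size < protection)	/* mirror the TOCTOU clamp */
			cgroup_size = protection;
		scan = lruvec_size - lruvec_size * protection / cgroup_size;
		if (scan < SWAP_CLUSTER_MAX)	/* keep reclaim moving forward */
			scan = SWAP_CLUSTER_MAX;
	} else {
		scan = lruvec_size;
	}
	return scan >> priority;
}
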
*/ if (!scan && !mem_cgroup_online(memcg)) - scan = min(size, SWAP_CLUSTER_MAX); + scan = min(lruvec_size, SWAP_CLUSTER_MAX); switch (scan_balance) { case SCAN_EQUAL: @@ -2384,7 +2465,7 @@ out: case SCAN_ANON: /* Scan one type exclusively */ if ((scan_balance == SCAN_FILE) != file) { - size = 0; + lruvec_size = 0; scan = 0; } break; @@ -2393,7 +2474,6 @@ out: BUG(); } - *lru_pages += size; nr[lru] = scan; } } @@ -2402,7 +2482,7 @@ out: * This is a basic per-node page freer. Used by both kswapd and direct reclaim. */ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memcg, - struct scan_control *sc, unsigned long *lru_pages) + struct scan_control *sc) { struct lruvec *lruvec = mem_cgroup_lruvec(pgdat, memcg); unsigned long nr[NR_LRU_LISTS]; @@ -2414,7 +2494,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc struct blk_plug plug; bool scan_adjusted; - get_scan_count(lruvec, memcg, sc, nr, lru_pages); + get_scan_count(lruvec, memcg, sc, nr); /* Record the original scan target for proportional adjustments later */ memcpy(targets, nr, sizeof(nr)); @@ -2619,7 +2699,6 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc) .pgdat = pgdat, .priority = sc->priority, }; - unsigned long node_lru_pages = 0; struct mem_cgroup *memcg; memset(&sc->nr, 0, sizeof(sc->nr)); @@ -2629,7 +2708,6 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc) memcg = mem_cgroup_iter(root, NULL, &reclaim); do { - unsigned long lru_pages; unsigned long reclaimed; unsigned long scanned; @@ -2654,13 +2732,19 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc) memcg_memory_event(memcg, MEMCG_LOW); break; case MEMCG_PROT_NONE: + /* + * All protection thresholds breached. We may + * still choose to vary the scan pressure + * applied based on by how much the cgroup in + * question has exceeded its protection + * thresholds (see get_scan_count). + */ break; } reclaimed = sc->nr_reclaimed; scanned = sc->nr_scanned; - shrink_node_memcg(pgdat, memcg, sc, &lru_pages); - node_lru_pages += lru_pages; + shrink_node_memcg(pgdat, memcg, sc); if (sc->may_shrinkslab) { shrink_slab(sc->gfp_mask, pgdat->node_id, @@ -3186,13 +3270,14 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg, .may_swap = !noswap, .may_shrinkslab = 1, }; - unsigned long lru_pages; sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) | (GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK); - trace_mm_vmscan_memcg_softlimit_reclaim_begin(sc.order, - sc.gfp_mask); + trace_mm_vmscan_memcg_softlimit_reclaim_begin( + cgroup_ino(memcg->css.cgroup), + sc.order, + sc.gfp_mask); /* * NOTE: Although we can get the priority field, using it @@ -3201,9 +3286,11 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg, * will pick up pages from other mem cgroup's as well. We hack * the priority and make it zero. 
*/ - shrink_node_memcg(pgdat, memcg, &sc, &lru_pages); + shrink_node_memcg(pgdat, memcg, &sc); - trace_mm_vmscan_memcg_softlimit_reclaim_end(sc.nr_reclaimed); + trace_mm_vmscan_memcg_softlimit_reclaim_end( + cgroup_ino(memcg->css.cgroup), + sc.nr_reclaimed); *nr_scanned = sc.nr_scanned; return sc.nr_reclaimed; @@ -3241,7 +3328,9 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, zonelist = &NODE_DATA(nid)->node_zonelists[ZONELIST_FALLBACK]; - trace_mm_vmscan_memcg_reclaim_begin(0, sc.gfp_mask); + trace_mm_vmscan_memcg_reclaim_begin( + cgroup_ino(memcg->css.cgroup), + 0, sc.gfp_mask); psi_memstall_enter(&pflags); noreclaim_flag = memalloc_noreclaim_save(); @@ -3251,7 +3340,9 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, memalloc_noreclaim_restore(noreclaim_flag); psi_memstall_leave(&pflags); - trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed); + trace_mm_vmscan_memcg_reclaim_end( + cgroup_ino(memcg->css.cgroup), + nr_reclaimed); return nr_reclaimed; } diff --git a/mm/z3fold.c b/mm/z3fold.c index 985732c8b025..deb982a62694 100644 --- a/mm/z3fold.c +++ b/mm/z3fold.c @@ -89,6 +89,7 @@ struct z3fold_buddy_slots { */ unsigned long slot[BUDDY_MASK + 1]; unsigned long pool; /* back link + flags */ + rwlock_t lock; }; #define HANDLE_FLAG_MASK (0x03) @@ -121,6 +122,7 @@ struct z3fold_header { unsigned short start_middle; unsigned short first_num:2; unsigned short mapped_count:2; + unsigned short foreign_handles:2; }; /** @@ -175,6 +177,14 @@ enum z3fold_page_flags { PAGE_CLAIMED, /* by either reclaim or free */ }; +/* + * handle flags, go under HANDLE_FLAG_MASK + */ +enum z3fold_handle_flags { + HANDLES_ORPHANED = 0, +}; + + /***************** * Helpers *****************/ @@ -199,6 +209,7 @@ static inline struct z3fold_buddy_slots *alloc_slots(struct z3fold_pool *pool, if (slots) { memset(slots->slot, 0, sizeof(slots->slot)); slots->pool = (unsigned long)pool; + rwlock_init(&slots->lock); } return slots; @@ -214,33 +225,6 @@ static inline struct z3fold_buddy_slots *handle_to_slots(unsigned long handle) return (struct z3fold_buddy_slots *)(handle & ~(SLOTS_ALIGN - 1)); } -static inline void free_handle(unsigned long handle) -{ - struct z3fold_buddy_slots *slots; - int i; - bool is_free; - - if (handle & (1 << PAGE_HEADLESS)) - return; - - WARN_ON(*(unsigned long *)handle == 0); - *(unsigned long *)handle = 0; - slots = handle_to_slots(handle); - is_free = true; - for (i = 0; i <= BUDDY_MASK; i++) { - if (slots->slot[i]) { - is_free = false; - break; - } - } - - if (is_free) { - struct z3fold_pool *pool = slots_to_pool(slots); - - kmem_cache_free(pool->c_handle, slots); - } -} - static struct dentry *z3fold_do_mount(struct file_system_type *fs_type, int flags, const char *dev_name, void *data) { @@ -320,6 +304,7 @@ static struct z3fold_header *init_z3fold_page(struct page *page, zhdr->start_middle = 0; zhdr->cpu = -1; zhdr->slots = slots; + zhdr->foreign_handles = 0; INIT_LIST_HEAD(&zhdr->buddy); INIT_WORK(&zhdr->work, compact_page_work); return zhdr; @@ -361,6 +346,55 @@ static inline int __idx(struct z3fold_header *zhdr, enum buddy bud) return (bud + zhdr->first_num) & BUDDY_MASK; } +static inline struct z3fold_header *__get_z3fold_header(unsigned long handle, + bool lock) +{ + struct z3fold_buddy_slots *slots; + struct z3fold_header *zhdr; + + if (!(handle & (1 << PAGE_HEADLESS))) { + slots = handle_to_slots(handle); + do { + unsigned long addr; + + read_lock(&slots->lock); + addr = *(unsigned long *)handle; + zhdr = (struct z3fold_header *)(addr & 
PAGE_MASK); + if (lock && z3fold_page_trylock(zhdr)) { + read_unlock(&slots->lock); + break; + } + read_unlock(&slots->lock); + cpu_relax(); + } while (lock); + } else { + zhdr = (struct z3fold_header *)(handle & PAGE_MASK); + } + + return zhdr; +} + + +/* Returns the z3fold page where a given handle is stored */ +static struct z3fold_header *handle_to_z3fold_header(unsigned long h) +{ + return __get_z3fold_header(h, false); +} + +/* return locked z3fold page if it's not headless */ +static struct z3fold_header *get_z3fold_header(unsigned long h) +{ + return __get_z3fold_header(h, true); +} + +static void put_z3fold_header(struct z3fold_header *zhdr) +{ + struct page *page = virt_to_page(zhdr); + + if (!test_bit(PAGE_HEADLESS, &page->private)) + z3fold_page_unlock(zhdr); +} + /* * Encodes the handle of a particular buddy within a z3fold page * Pool lock should be held as this function accesses first_num @@ -385,22 +419,56 @@ static unsigned long encode_handle(struct z3fold_header *zhdr, enum buddy bud) h |= (zhdr->last_chunks << BUDDY_SHIFT); slots = zhdr->slots; + write_lock(&slots->lock); slots->slot[idx] = h; + write_unlock(&slots->lock); return (unsigned long)&slots->slot[idx]; } -/* Returns the z3fold page where a given handle is stored */ -static inline struct z3fold_header *handle_to_z3fold_header(unsigned long h) +static inline void free_handle(unsigned long handle) { - unsigned long addr = h; + struct z3fold_buddy_slots *slots; + struct z3fold_header *zhdr; + int i; + bool is_free = true; + + if (handle & (1 << PAGE_HEADLESS)) + return; + + if (WARN_ON(*(unsigned long *)handle == 0)) + return; + + zhdr = handle_to_z3fold_header(handle); + slots = handle_to_slots(handle); + write_lock(&slots->lock); + *(unsigned long *)handle = 0; + write_unlock(&slots->lock); + if (zhdr->slots == slots) + return; /* simple case, nothing else to do */ - if (!(addr & (1 << PAGE_HEADLESS))) - addr = *(unsigned long *)h; + /* we are freeing a foreign handle if we are here */ + zhdr->foreign_handles--; + read_lock(&slots->lock); + for (i = 0; i <= BUDDY_MASK; i++) { + if (slots->slot[i]) { + is_free = false; + break; + } + } + read_unlock(&slots->lock); + + if (is_free && test_and_clear_bit(HANDLES_ORPHANED, &slots->pool)) { + struct z3fold_pool *pool = slots_to_pool(slots); - return (struct z3fold_header *)(addr & PAGE_MASK); + kmem_cache_free(pool->c_handle, slots); + } } -/* only for LAST bud, returns zero otherwise */ +/* Return the number of chunks for the object associated with handle. + * + * Should be called with page lock taken, otherwise we would have needed to + * read addr under slots lock + */ static unsigned short handle_to_chunks(unsigned long handle) { unsigned long addr = *(unsigned long *)handle; @@ -412,15 +480,16 @@ static unsigned short handle_to_chunks(unsigned long handle) * (handle & BUDDY_MASK) < zhdr->first_num is possible in encode_handle * but that doesn't matter. because the masking will result in the * correct buddy number. 
+ * + * NB: should be called with page lock taken */ -static enum buddy handle_to_buddy(unsigned long handle) +static enum buddy handle_to_buddy(unsigned long handle, + struct z3fold_header *zhdr) { - struct z3fold_header *zhdr; unsigned long addr; WARN_ON(handle & (1 << PAGE_HEADLESS)); addr = *(unsigned long *)handle; - zhdr = (struct z3fold_header *)(addr & PAGE_MASK); return (addr - zhdr->first_num) & BUDDY_MASK; } @@ -433,6 +502,8 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked) { struct page *page = virt_to_page(zhdr); struct z3fold_pool *pool = zhdr_to_pool(zhdr); + bool is_free = true; + int i; WARN_ON(!list_empty(&zhdr->buddy)); set_bit(PAGE_STALE, &page->private); @@ -441,8 +512,25 @@ static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked) if (!list_empty(&page->lru)) list_del_init(&page->lru); spin_unlock(&pool->lock); + + /* If there are no foreign handles, free the handles array */ + read_lock(&zhdr->slots->lock); + for (i = 0; i <= BUDDY_MASK; i++) { + if (zhdr->slots->slot[i]) { + is_free = false; + break; + } + } + read_unlock(&zhdr->slots->lock); + + if (is_free) + kmem_cache_free(pool->c_handle, zhdr->slots); + else + set_bit(HANDLES_ORPHANED, &zhdr->slots->pool); + if (locked) z3fold_page_unlock(zhdr); + spin_lock(&pool->stale_lock); list_add(&zhdr->buddy, &pool->stale); queue_work(pool->release_wq, &pool->work); @@ -470,6 +558,7 @@ static void release_z3fold_page_locked_list(struct kref *ref) struct z3fold_header *zhdr = container_of(ref, struct z3fold_header, refcount); struct z3fold_pool *pool = zhdr_to_pool(zhdr); + spin_lock(&pool->lock); list_del_init(&zhdr->buddy); spin_unlock(&pool->lock); @@ -541,106 +630,6 @@ static inline void add_to_unbuddied(struct z3fold_pool *pool, } } -static inline void *mchunk_memmove(struct z3fold_header *zhdr, - unsigned short dst_chunk) -{ - void *beg = zhdr; - return memmove(beg + (dst_chunk << CHUNK_SHIFT), - beg + (zhdr->start_middle << CHUNK_SHIFT), - zhdr->middle_chunks << CHUNK_SHIFT); -} - -#define BIG_CHUNK_GAP 3 -/* Has to be called with lock held */ -static int z3fold_compact_page(struct z3fold_header *zhdr) -{ - struct page *page = virt_to_page(zhdr); - - if (test_bit(MIDDLE_CHUNK_MAPPED, &page->private)) - return 0; /* can't move middle chunk, it's used */ - - if (unlikely(PageIsolated(page))) - return 0; - - if (zhdr->middle_chunks == 0) - return 0; /* nothing to compact */ - - if (zhdr->first_chunks == 0 && zhdr->last_chunks == 0) { - /* move to the beginning */ - mchunk_memmove(zhdr, ZHDR_CHUNKS); - zhdr->first_chunks = zhdr->middle_chunks; - zhdr->middle_chunks = 0; - zhdr->start_middle = 0; - zhdr->first_num++; - return 1; - } - - /* - * moving data is expensive, so let's only do that if - * there's substantial gain (at least BIG_CHUNK_GAP chunks) - */ - if (zhdr->first_chunks != 0 && zhdr->last_chunks == 0 && - zhdr->start_middle - (zhdr->first_chunks + ZHDR_CHUNKS) >= - BIG_CHUNK_GAP) { - mchunk_memmove(zhdr, zhdr->first_chunks + ZHDR_CHUNKS); - zhdr->start_middle = zhdr->first_chunks + ZHDR_CHUNKS; - return 1; - } else if (zhdr->last_chunks != 0 && zhdr->first_chunks == 0 && - TOTAL_CHUNKS - (zhdr->last_chunks + zhdr->start_middle - + zhdr->middle_chunks) >= - BIG_CHUNK_GAP) { - unsigned short new_start = TOTAL_CHUNKS - zhdr->last_chunks - - zhdr->middle_chunks; - mchunk_memmove(zhdr, new_start); - zhdr->start_middle = new_start; - return 1; - } - - return 0; -} - -static void do_compact_page(struct z3fold_header *zhdr, bool locked) -{ - struct z3fold_pool *pool = 
zhdr_to_pool(zhdr); - struct page *page; - - page = virt_to_page(zhdr); - if (locked) - WARN_ON(z3fold_page_trylock(zhdr)); - else - z3fold_page_lock(zhdr); - if (WARN_ON(!test_and_clear_bit(NEEDS_COMPACTING, &page->private))) { - z3fold_page_unlock(zhdr); - return; - } - spin_lock(&pool->lock); - list_del_init(&zhdr->buddy); - spin_unlock(&pool->lock); - - if (kref_put(&zhdr->refcount, release_z3fold_page_locked)) { - atomic64_dec(&pool->pages_nr); - return; - } - - if (unlikely(PageIsolated(page) || - test_bit(PAGE_STALE, &page->private))) { - z3fold_page_unlock(zhdr); - return; - } - - z3fold_compact_page(zhdr); - add_to_unbuddied(pool, zhdr); - z3fold_page_unlock(zhdr); -} - -static void compact_page_work(struct work_struct *w) -{ - struct z3fold_header *zhdr = container_of(w, struct z3fold_header, - work); - - do_compact_page(zhdr, false); -} - /* returns _locked_ z3fold page header or NULL */ static inline struct z3fold_header *__z3fold_alloc(struct z3fold_pool *pool, size_t size, bool can_sleep) @@ -739,6 +728,219 @@ lookup: return zhdr; } +static inline void *mchunk_memmove(struct z3fold_header *zhdr, + unsigned short dst_chunk) +{ + void *beg = zhdr; + return memmove(beg + (dst_chunk << CHUNK_SHIFT), + beg + (zhdr->start_middle << CHUNK_SHIFT), + zhdr->middle_chunks << CHUNK_SHIFT); +} + +static inline bool buddy_single(struct z3fold_header *zhdr) +{ + return !((zhdr->first_chunks && zhdr->middle_chunks) || + (zhdr->first_chunks && zhdr->last_chunks) || + (zhdr->middle_chunks && zhdr->last_chunks)); +} + +static struct z3fold_header *compact_single_buddy(struct z3fold_header *zhdr) +{ + struct z3fold_pool *pool = zhdr_to_pool(zhdr); + void *p = zhdr; + unsigned long old_handle = 0; + size_t sz = 0; + struct z3fold_header *new_zhdr = NULL; + int first_idx = __idx(zhdr, FIRST); + int middle_idx = __idx(zhdr, MIDDLE); + int last_idx = __idx(zhdr, LAST); + + /* + * No need to protect slots here -- all the slots are "local" and + * the page lock is already taken + */ + if (zhdr->first_chunks && zhdr->slots->slot[first_idx]) { + p += ZHDR_SIZE_ALIGNED; + sz = zhdr->first_chunks << CHUNK_SHIFT; + old_handle = (unsigned long)&zhdr->slots->slot[first_idx]; + } else if (zhdr->middle_chunks && zhdr->slots->slot[middle_idx]) { + p += zhdr->start_middle << CHUNK_SHIFT; + sz = zhdr->middle_chunks << CHUNK_SHIFT; + old_handle = (unsigned long)&zhdr->slots->slot[middle_idx]; + } else if (zhdr->last_chunks && zhdr->slots->slot[last_idx]) { + p += PAGE_SIZE - (zhdr->last_chunks << CHUNK_SHIFT); + sz = zhdr->last_chunks << CHUNK_SHIFT; + old_handle = (unsigned long)&zhdr->slots->slot[last_idx]; + } + + if (sz > 0) { + enum buddy new_bud = HEADLESS; + short chunks = size_to_chunks(sz); + void *q; + + new_zhdr = __z3fold_alloc(pool, sz, false); + if (!new_zhdr) + return NULL; + + if (WARN_ON(new_zhdr == zhdr)) + goto out_fail; + + if (new_zhdr->first_chunks == 0) { + if (new_zhdr->middle_chunks != 0 && + chunks >= new_zhdr->start_middle) { + new_bud = LAST; + } else { + new_bud = FIRST; + } + } else if (new_zhdr->last_chunks == 0) { + new_bud = LAST; + } else if (new_zhdr->middle_chunks == 0) { + new_bud = MIDDLE; + } + q = new_zhdr; + switch (new_bud) { + case FIRST: + new_zhdr->first_chunks = chunks; + q += ZHDR_SIZE_ALIGNED; + break; + case MIDDLE: + new_zhdr->middle_chunks = chunks; + new_zhdr->start_middle = + new_zhdr->first_chunks + ZHDR_CHUNKS; + q += new_zhdr->start_middle << CHUNK_SHIFT; + break; + case LAST: + new_zhdr->last_chunks = chunks; + q += PAGE_SIZE - (new_zhdr->last_chunks << 
CHUNK_SHIFT); + break; + default: + goto out_fail; + } + new_zhdr->foreign_handles++; + memcpy(q, p, sz); + write_lock(&zhdr->slots->lock); + *(unsigned long *)old_handle = (unsigned long)new_zhdr + + __idx(new_zhdr, new_bud); + if (new_bud == LAST) + *(unsigned long *)old_handle |= + (new_zhdr->last_chunks << BUDDY_SHIFT); + write_unlock(&zhdr->slots->lock); + add_to_unbuddied(pool, new_zhdr); + z3fold_page_unlock(new_zhdr); + } + + return new_zhdr; + +out_fail: + if (new_zhdr) { + if (kref_put(&new_zhdr->refcount, release_z3fold_page_locked)) + atomic64_dec(&pool->pages_nr); + else { + add_to_unbuddied(pool, new_zhdr); + z3fold_page_unlock(new_zhdr); + } + } + return NULL; + +} + +#define BIG_CHUNK_GAP 3 +/* Has to be called with lock held */ +static int z3fold_compact_page(struct z3fold_header *zhdr) +{ + struct page *page = virt_to_page(zhdr); + + if (test_bit(MIDDLE_CHUNK_MAPPED, &page->private)) + return 0; /* can't move middle chunk, it's used */ + + if (zhdr->middle_chunks == 0) + return 0; /* nothing to compact */ + + if (zhdr->first_chunks == 0 && zhdr->last_chunks == 0) { + /* move to the beginning */ + mchunk_memmove(zhdr, ZHDR_CHUNKS); + zhdr->first_chunks = zhdr->middle_chunks; + zhdr->middle_chunks = 0; + zhdr->start_middle = 0; + zhdr->first_num++; + return 1; + } + + /* + * moving data is expensive, so let's only do that if + * there's substantial gain (at least BIG_CHUNK_GAP chunks) + */ + if (zhdr->first_chunks != 0 && zhdr->last_chunks == 0 && + zhdr->start_middle - (zhdr->first_chunks + ZHDR_CHUNKS) >= + BIG_CHUNK_GAP) { + mchunk_memmove(zhdr, zhdr->first_chunks + ZHDR_CHUNKS); + zhdr->start_middle = zhdr->first_chunks + ZHDR_CHUNKS; + return 1; + } else if (zhdr->last_chunks != 0 && zhdr->first_chunks == 0 && + TOTAL_CHUNKS - (zhdr->last_chunks + zhdr->start_middle + + zhdr->middle_chunks) >= + BIG_CHUNK_GAP) { + unsigned short new_start = TOTAL_CHUNKS - zhdr->last_chunks - + zhdr->middle_chunks; + mchunk_memmove(zhdr, new_start); + zhdr->start_middle = new_start; + return 1; + } + + return 0; +} + +static void do_compact_page(struct z3fold_header *zhdr, bool locked) +{ + struct z3fold_pool *pool = zhdr_to_pool(zhdr); + struct page *page; + + page = virt_to_page(zhdr); + if (locked) + WARN_ON(z3fold_page_trylock(zhdr)); + else + z3fold_page_lock(zhdr); + if (WARN_ON(!test_and_clear_bit(NEEDS_COMPACTING, &page->private))) { + z3fold_page_unlock(zhdr); + return; + } + spin_lock(&pool->lock); + list_del_init(&zhdr->buddy); + spin_unlock(&pool->lock); + + if (kref_put(&zhdr->refcount, release_z3fold_page_locked)) { + atomic64_dec(&pool->pages_nr); + return; + } + + if (unlikely(PageIsolated(page) || + test_bit(PAGE_STALE, &page->private))) { + z3fold_page_unlock(zhdr); + return; + } + + if (!zhdr->foreign_handles && buddy_single(zhdr) && + compact_single_buddy(zhdr)) { + if (kref_put(&zhdr->refcount, release_z3fold_page_locked)) + atomic64_dec(&pool->pages_nr); + else + z3fold_page_unlock(zhdr); + return; + } + + z3fold_compact_page(zhdr); + add_to_unbuddied(pool, zhdr); + z3fold_page_unlock(zhdr); +} + +static void compact_page_work(struct work_struct *w) +{ + struct z3fold_header *zhdr = container_of(w, struct z3fold_header, + work); + + do_compact_page(zhdr, false); +} + /* * API Functions */ @@ -970,9 +1172,9 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle) struct page *page; enum buddy bud; - zhdr = handle_to_z3fold_header(handle); - page = virt_to_page(zhdr); + zhdr = get_z3fold_header(handle); + page = virt_to_page(zhdr); if 
(test_bit(PAGE_HEADLESS, &page->private)) { /* if a headless page is under reclaim, just leave. * NB: we use test_and_set_bit for a reason: if the bit @@ -983,6 +1185,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle) spin_lock(&pool->lock); list_del(&page->lru); spin_unlock(&pool->lock); + put_z3fold_header(zhdr); free_z3fold_page(page, true); atomic64_dec(&pool->pages_nr); } @@ -990,8 +1193,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle) } /* Non-headless case */ - z3fold_page_lock(zhdr); - bud = handle_to_buddy(handle); + bud = handle_to_buddy(handle, zhdr); switch (bud) { case FIRST: @@ -1006,7 +1208,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle) default: pr_err("%s: unknown bud %d\n", __func__, bud); WARN_ON(1); - z3fold_page_unlock(zhdr); + put_z3fold_header(zhdr); return; } @@ -1021,7 +1223,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle) } if (unlikely(PageIsolated(page)) || test_and_set_bit(NEEDS_COMPACTING, &page->private)) { - z3fold_page_unlock(zhdr); + put_z3fold_header(zhdr); return; } if (zhdr->cpu < 0 || !cpu_online(zhdr->cpu)) { @@ -1035,7 +1237,7 @@ static void z3fold_free(struct z3fold_pool *pool, unsigned long handle) } kref_get(&zhdr->refcount); queue_work_on(zhdr->cpu, pool->compact_wq, &zhdr->work); - z3fold_page_unlock(zhdr); + put_z3fold_header(zhdr); } /** @@ -1217,15 +1419,14 @@ static void *z3fold_map(struct z3fold_pool *pool, unsigned long handle) void *addr; enum buddy buddy; - zhdr = handle_to_z3fold_header(handle); + zhdr = get_z3fold_header(handle); addr = zhdr; page = virt_to_page(zhdr); if (test_bit(PAGE_HEADLESS, &page->private)) goto out; - z3fold_page_lock(zhdr); - buddy = handle_to_buddy(handle); + buddy = handle_to_buddy(handle, zhdr); switch (buddy) { case FIRST: addr += ZHDR_SIZE_ALIGNED; @@ -1246,8 +1447,8 @@ static void *z3fold_map(struct z3fold_pool *pool, unsigned long handle) if (addr) zhdr->mapped_count++; - z3fold_page_unlock(zhdr); out: + put_z3fold_header(zhdr); return addr; } @@ -1262,18 +1463,17 @@ static void z3fold_unmap(struct z3fold_pool *pool, unsigned long handle) struct page *page; enum buddy buddy; - zhdr = handle_to_z3fold_header(handle); + zhdr = get_z3fold_header(handle); page = virt_to_page(zhdr); if (test_bit(PAGE_HEADLESS, &page->private)) return; - z3fold_page_lock(zhdr); - buddy = handle_to_buddy(handle); + buddy = handle_to_buddy(handle, zhdr); if (buddy == MIDDLE) clear_bit(MIDDLE_CHUNK_MAPPED, &page->private); zhdr->mapped_count--; - z3fold_page_unlock(zhdr); + put_z3fold_header(zhdr); } /** @@ -1304,19 +1504,21 @@ static bool z3fold_page_isolate(struct page *page, isolate_mode_t mode) test_bit(PAGE_STALE, &page->private)) goto out; + if (zhdr->mapped_count != 0 || zhdr->foreign_handles != 0) + goto out; + pool = zhdr_to_pool(zhdr); + spin_lock(&pool->lock); + if (!list_empty(&zhdr->buddy)) + list_del_init(&zhdr->buddy); + if (!list_empty(&page->lru)) + list_del_init(&page->lru); + spin_unlock(&pool->lock); + + kref_get(&zhdr->refcount); + z3fold_page_unlock(zhdr); + return true; - if (zhdr->mapped_count == 0) { - kref_get(&zhdr->refcount); - if (!list_empty(&zhdr->buddy)) - list_del_init(&zhdr->buddy); - spin_lock(&pool->lock); - if (!list_empty(&page->lru)) - list_del(&page->lru); - spin_unlock(&pool->lock); - z3fold_page_unlock(zhdr); - return true; - } out: z3fold_page_unlock(zhdr); return false; @@ -1342,7 +1544,7 @@ static int z3fold_page_migrate(struct address_space *mapping, struct page *newpa 
unlock_page(page); return -EAGAIN; } - if (zhdr->mapped_count != 0) { + if (zhdr->mapped_count != 0 || zhdr->foreign_handles != 0) { z3fold_page_unlock(zhdr); unlock_page(page); return -EBUSY; diff --git a/net/core/neighbour.c b/net/core/neighbour.c index a4a6cf808792..bec7b091ba44 100644 --- a/net/core/neighbour.c +++ b/net/core/neighbour.c @@ -3367,8 +3367,6 @@ void neigh_app_ns(struct neighbour *n) EXPORT_SYMBOL(neigh_app_ns); #ifdef CONFIG_SYSCTL -static int zero; -static int int_max = INT_MAX; static int unres_qlen_max = INT_MAX / SKB_TRUESIZE(ETH_FRAME_LEN); static int proc_unres_qlen(struct ctl_table *ctl, int write, @@ -3377,7 +3375,7 @@ static int proc_unres_qlen(struct ctl_table *ctl, int write, int size, ret; struct ctl_table tmp = *ctl; - tmp.extra1 = &zero; + tmp.extra1 = SYSCTL_ZERO; tmp.extra2 = &unres_qlen_max; tmp.data = &size; @@ -3442,8 +3440,8 @@ static int neigh_proc_dointvec_zero_intmax(struct ctl_table *ctl, int write, struct ctl_table tmp = *ctl; int ret; - tmp.extra1 = &zero; - tmp.extra2 = &int_max; + tmp.extra1 = SYSCTL_ZERO; + tmp.extra2 = SYSCTL_INT_MAX; ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos); neigh_proc_update(ctl, write); @@ -3588,24 +3586,24 @@ static struct neigh_sysctl_table { .procname = "gc_thresh1", .maxlen = sizeof(int), .mode = 0644, - .extra1 = &zero, - .extra2 = &int_max, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, .proc_handler = proc_dointvec_minmax, }, [NEIGH_VAR_GC_THRESH2] = { .procname = "gc_thresh2", .maxlen = sizeof(int), .mode = 0644, - .extra1 = &zero, - .extra2 = &int_max, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, .proc_handler = proc_dointvec_minmax, }, [NEIGH_VAR_GC_THRESH3] = { .procname = "gc_thresh3", .maxlen = sizeof(int), .mode = 0644, - .extra1 = &zero, - .extra2 = &int_max, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, .proc_handler = proc_dointvec_minmax, }, {}, diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c index 1a2685694abd..8b886564b1b4 100644 --- a/net/core/sysctl_net_core.c +++ b/net/core/sysctl_net_core.c @@ -22,8 +22,6 @@ #include <net/busy_poll.h> #include <net/pkt_sched.h> -static int zero = 0; -static int one = 1; static int two __maybe_unused = 2; static int min_sndbuf = SOCK_MIN_SNDBUF; static int min_rcvbuf = SOCK_MIN_RCVBUF; @@ -390,10 +388,10 @@ static struct ctl_table net_core_table[] = { .mode = 0644, .proc_handler = proc_dointvec_minmax_bpf_enable, # ifdef CONFIG_BPF_JIT_ALWAYS_ON - .extra1 = &one, - .extra2 = &one, + .extra1 = SYSCTL_ONE, + .extra2 = SYSCTL_ONE, # else - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, # endif }, @@ -404,7 +402,7 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(int), .mode = 0600, .proc_handler = proc_dointvec_minmax_bpf_restricted, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, }, { @@ -413,8 +411,8 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(int), .mode = 0600, .proc_handler = proc_dointvec_minmax_bpf_restricted, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, # endif { @@ -461,8 +459,8 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE }, #ifdef CONFIG_RPS { @@ -493,7 +491,7 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { 
.procname = "busy_read", @@ -501,7 +499,7 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, #endif #ifdef CONFIG_NET_SCHED @@ -533,7 +531,7 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &max_skb_frags, }, { @@ -542,7 +540,7 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "fb_tunnels_only_for_init_net", @@ -550,8 +548,8 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "devconf_inherit_init_net", @@ -559,7 +557,7 @@ static struct ctl_table net_core_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, }, { } @@ -571,7 +569,7 @@ static struct ctl_table netns_core_table[] = { .data = &init_net.core.sysctl_somaxconn, .maxlen = sizeof(int), .mode = 0644, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .proc_handler = proc_dointvec_minmax }, { } diff --git a/net/dccp/sysctl.c b/net/dccp/sysctl.c index 53731e45403c..4d6147934179 100644 --- a/net/dccp/sysctl.c +++ b/net/dccp/sysctl.c @@ -19,9 +19,7 @@ #endif /* Boundary values */ -static int zero = 0, - one = 1, - u8_max = 0xFF; +static int u8_max = 0xFF; static unsigned long seqw_min = DCCPF_SEQ_WMIN, seqw_max = 0xFFFFFFFF; /* maximum on 32 bit */ @@ -41,7 +39,7 @@ static struct ctl_table dccp_default_table[] = { .maxlen = sizeof(sysctl_dccp_rx_ccid), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &u8_max, /* RFC 4340, 10. */ }, { @@ -50,7 +48,7 @@ static struct ctl_table dccp_default_table[] = { .maxlen = sizeof(sysctl_dccp_tx_ccid), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &u8_max, /* RFC 4340, 10. 
*/ }, { @@ -59,7 +57,7 @@ static struct ctl_table dccp_default_table[] = { .maxlen = sizeof(sysctl_dccp_request_retries), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &u8_max, }, { @@ -68,7 +66,7 @@ static struct ctl_table dccp_default_table[] = { .maxlen = sizeof(sysctl_dccp_retries1), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &u8_max, }, { @@ -77,7 +75,7 @@ static struct ctl_table dccp_default_table[] = { .maxlen = sizeof(sysctl_dccp_retries2), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &u8_max, }, { @@ -86,7 +84,7 @@ static struct ctl_table dccp_default_table[] = { .maxlen = sizeof(sysctl_dccp_tx_qlen), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "sync_ratelimit", diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c index 90f09e47198b..cf5b4462fd42 100644 --- a/net/ipv4/sysctl_net_ipv4.c +++ b/net/ipv4/sysctl_net_ipv4.c @@ -28,8 +28,6 @@ #include <net/protocol.h> #include <net/netevent.h> -static int zero; -static int one = 1; static int two = 2; static int four = 4; static int thousand = 1000; @@ -575,7 +573,7 @@ static struct ctl_table ipv4_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "icmp_msgs_burst", @@ -583,7 +581,7 @@ static struct ctl_table ipv4_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "udp_mem", @@ -661,8 +659,8 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif { @@ -750,8 +748,8 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = ipv4_fwd_update_priority, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "ip_nonlocal_bind", @@ -781,8 +779,8 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif { @@ -842,7 +840,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one + .extra1 = SYSCTL_ONE }, #endif { @@ -947,7 +945,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &two, }, { @@ -989,7 +987,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_tfo_blackhole_detect_timeout, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, #ifdef CONFIG_IP_ROUTE_MULTIPATH { @@ -998,8 +996,8 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "fib_multipath_hash_policy", @@ -1007,8 +1005,8 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_fib_multipath_hash_policy, - .extra1 = &zero, - .extra2 = &one, + .extra1 
= SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif { @@ -1025,8 +1023,8 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, #endif { @@ -1056,7 +1054,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &four, }, { @@ -1200,7 +1198,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &gso_max_segs, }, { @@ -1209,7 +1207,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &one_day_secs }, { @@ -1218,8 +1216,8 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "tcp_invalid_ratelimit", @@ -1234,7 +1232,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &thousand, }, { @@ -1243,7 +1241,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &thousand, }, { @@ -1252,7 +1250,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(init_net.ipv4.sysctl_tcp_wmem), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, }, { .procname = "tcp_rmem", @@ -1260,7 +1258,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(init_net.ipv4.sysctl_tcp_rmem), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, }, { .procname = "tcp_comp_sack_delay_ns", @@ -1275,7 +1273,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &comp_sack_nr_max, }, { @@ -1284,7 +1282,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(init_net.ipv4.sysctl_udp_rmem_min), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one + .extra1 = SYSCTL_ONE }, { .procname = "udp_wmem_min", @@ -1292,7 +1290,7 @@ static struct ctl_table ipv4_net_table[] = { .maxlen = sizeof(init_net.ipv4.sysctl_udp_wmem_min), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one + .extra1 = SYSCTL_ONE }, { } }; diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c index 521e3203e83a..dc73888c7859 100644 --- a/net/ipv6/addrconf.c +++ b/net/ipv6/addrconf.c @@ -6432,8 +6432,6 @@ int addrconf_sysctl_disable_policy(struct ctl_table *ctl, int write, } static int minus_one = -1; -static const int zero = 0; -static const int one = 1; static const int two_five_five = 255; static const struct ctl_table addrconf_sysctl[] = { @@ -6450,7 +6448,7 @@ static const struct ctl_table addrconf_sysctl[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *)&one, + .extra1 = (void *)SYSCTL_ONE, .extra2 = (void *)&two_five_five, }, { @@ -6809,7 +6807,7 @@ static const struct ctl_table addrconf_sysctl[] = { .maxlen = 
sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *)&zero, + .extra1 = (void *)SYSCTL_ZERO, .extra2 = (void *)&two_five_five, }, { diff --git a/net/ipv6/route.c b/net/ipv6/route.c index 5f176a7aeb1a..fbaa7e9e0d9c 100644 --- a/net/ipv6/route.c +++ b/net/ipv6/route.c @@ -5469,9 +5469,6 @@ int ipv6_sysctl_rtcache_flush(struct ctl_table *ctl, int write, return 0; } -static int zero; -static int one = 1; - static struct ctl_table ipv6_route_table_template[] = { { .procname = "flush", @@ -5549,8 +5546,8 @@ static struct ctl_table ipv6_route_table_template[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { } }; diff --git a/net/ipv6/sysctl_net_ipv6.c b/net/ipv6/sysctl_net_ipv6.c index e15cd37024fd..4c6adfccc3d2 100644 --- a/net/ipv6/sysctl_net_ipv6.c +++ b/net/ipv6/sysctl_net_ipv6.c @@ -21,8 +21,6 @@ #include <net/calipso.h> #endif -static int zero; -static int one = 1; static int auto_flowlabels_min; static int auto_flowlabels_max = IP6_AUTO_FLOW_LABEL_MAX; @@ -149,8 +147,8 @@ static struct ctl_table ipv6_table_template[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_rt6_multipath_hash_policy, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "seg6_flowlabel", @@ -176,7 +174,7 @@ static struct ctl_table ipv6_rotable[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one + .extra1 = SYSCTL_ONE }, #ifdef CONFIG_NETLABEL { diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c index 198ec4fe4148..c312741df2ce 100644 --- a/net/mpls/af_mpls.c +++ b/net/mpls/af_mpls.c @@ -37,8 +37,6 @@ #define MPLS_NEIGH_TABLE_UNSPEC (NEIGH_LINK_TABLE + 1) -static int zero = 0; -static int one = 1; static int label_limit = (1 << 20) - 1; static int ttl_max = 255; @@ -2607,7 +2605,7 @@ static int mpls_platform_labels(struct ctl_table *table, int write, .data = &platform_labels, .maxlen = sizeof(int), .mode = table->mode, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &label_limit, }; @@ -2636,8 +2634,8 @@ static const struct ctl_table mpls_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { .procname = "default_ttl", @@ -2645,7 +2643,7 @@ static const struct ctl_table mpls_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &ttl_max, }, { } diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c index 84384d896e29..27f014e23976 100644 --- a/net/netfilter/ipvs/ip_vs_ctl.c +++ b/net/netfilter/ipvs/ip_vs_ctl.c @@ -1725,7 +1725,6 @@ static int ip_vs_zero_all(struct netns_ipvs *ipvs) #ifdef CONFIG_SYSCTL -static int zero; static int three = 3; static int @@ -1934,7 +1933,7 @@ static struct ctl_table vs_vars[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &three, }, { diff --git a/net/rxrpc/sysctl.c b/net/rxrpc/sysctl.c index 1e3fa67d91aa..2bbb38161851 100644 --- a/net/rxrpc/sysctl.c +++ b/net/rxrpc/sysctl.c @@ -11,7 +11,6 @@ #include "ar-internal.h" static struct ctl_table_header *rxrpc_sysctl_reg_table; -static const unsigned int one = 1; static const unsigned int four = 4; static const unsigned int thirtytwo = 32; static const unsigned int n_65535 
= 65535; @@ -97,7 +96,7 @@ static struct ctl_table rxrpc_sysctl_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *)&one, + .extra1 = (void *)SYSCTL_ONE, .extra2 = (void *)&rxrpc_max_client_connections, }, { @@ -115,7 +114,7 @@ static struct ctl_table rxrpc_sysctl_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *)&one, + .extra1 = (void *)SYSCTL_ONE, .extra2 = (void *)&n_max_acks, }, { @@ -124,7 +123,7 @@ static struct ctl_table rxrpc_sysctl_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *)&one, + .extra1 = (void *)SYSCTL_ONE, .extra2 = (void *)&n_65535, }, { @@ -133,7 +132,7 @@ static struct ctl_table rxrpc_sysctl_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *)&one, + .extra1 = (void *)SYSCTL_ONE, .extra2 = (void *)&four, }, diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c index 9a19147902f1..1250751bca1b 100644 --- a/net/sctp/sysctl.c +++ b/net/sctp/sysctl.c @@ -25,10 +25,7 @@ #include <net/sctp/sctp.h> #include <linux/sysctl.h> -static int zero = 0; -static int one = 1; static int timer_max = 86400000; /* ms in one day */ -static int int_max = INT_MAX; static int sack_timer_min = 1; static int sack_timer_max = 500; static int addr_scope_max = SCTP_SCOPE_POLICY_MAX; @@ -92,7 +89,7 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &timer_max }, { @@ -101,7 +98,7 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_sctp_do_rto_min, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &init_net.sctp.rto_max }, { @@ -137,8 +134,8 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &int_max + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, }, { .procname = "cookie_preserve_enable", @@ -160,7 +157,7 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &timer_max }, { @@ -178,7 +175,7 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &timer_max }, { @@ -187,8 +184,8 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, - .extra2 = &int_max + .extra1 = SYSCTL_ONE, + .extra2 = SYSCTL_INT_MAX, }, { .procname = "path_max_retrans", @@ -196,8 +193,8 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, - .extra2 = &int_max + .extra1 = SYSCTL_ONE, + .extra2 = SYSCTL_INT_MAX, }, { .procname = "max_init_retransmits", @@ -205,8 +202,8 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, - .extra2 = &int_max + .extra1 = SYSCTL_ONE, + .extra2 = SYSCTL_INT_MAX, }, { .procname = "pf_retrans", @@ -214,8 +211,8 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - 
.extra1 = &zero, - .extra2 = &int_max + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_INT_MAX, }, { .procname = "sndbuf_policy", @@ -286,7 +283,7 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &addr_scope_max, }, { @@ -295,7 +292,7 @@ static struct ctl_table sctp_net_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = &proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, .extra2 = &rwnd_scale_max, }, { diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c index 1f73a6a7e43c..ffb1684c4573 100644 --- a/net/sunrpc/xprtrdma/transport.c +++ b/net/sunrpc/xprtrdma/transport.c @@ -80,7 +80,6 @@ static unsigned int min_slot_table_size = RPCRDMA_MIN_SLOT_TABLE; static unsigned int max_slot_table_size = RPCRDMA_MAX_SLOT_TABLE; static unsigned int min_inline_size = RPCRDMA_MIN_INLINE; static unsigned int max_inline_size = RPCRDMA_MAX_INLINE; -static unsigned int zero; static unsigned int max_padding = PAGE_SIZE; static unsigned int min_memreg = RPCRDMA_BOUNCEBUFFERS; static unsigned int max_memreg = RPCRDMA_LAST - 1; @@ -122,7 +121,7 @@ static struct ctl_table xr_tunables_table[] = { .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &max_padding, }, { diff --git a/net/tipc/sysctl.c b/net/tipc/sysctl.c index 9df82a573aa7..6159d327db76 100644 --- a/net/tipc/sysctl.c +++ b/net/tipc/sysctl.c @@ -38,8 +38,6 @@ #include <linux/sysctl.h> -static int zero; -static int one = 1; static struct ctl_table_header *tipc_ctl_hdr; static struct ctl_table tipc_table[] = { @@ -49,7 +47,7 @@ static struct ctl_table tipc_table[] = { .maxlen = sizeof(sysctl_tipc_rmem), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &one, + .extra1 = SYSCTL_ONE, }, { .procname = "named_timeout", @@ -57,7 +55,7 @@ static struct ctl_table tipc_table[] = { .maxlen = sizeof(sysctl_tipc_named_timeout), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, }, { .procname = "sk_filter", diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl index 342c7c781ba5..c1be9f6958b7 100755 --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -2725,8 +2725,10 @@ sub process { ($line =~ /^\s*(?:WARNING:|BUG:)/ || $line =~ /^\s*\[\s*\d+\.\d{6,6}\s*\]/ || # timestamp - $line =~ /^\s*\[\<[0-9a-fA-F]{8,}\>\]/)) { - # stack dump address + $line =~ /^\s*\[\<[0-9a-fA-F]{8,}\>\]/) || + $line =~ /^(?:\s+\w+:\s+[0-9a-fA-F]+){3,3}/ || + $line =~ /^\s*\#\d+\s*\[[0-9a-fA-F]+\]\s*\w+ at [0-9a-fA-F]+/) { + # stack dump address styles $commit_log_possible_stack_dump = 1; } @@ -3069,21 +3071,21 @@ sub process { # check SPDX comment style for .[chsS] files if ($realfile =~ /\.[chsS]$/ && $rawline =~ /SPDX-License-Identifier:/ && - $rawline !~ /^\+\s*\Q$comment\E\s*/) { + $rawline !~ m@^\+\s*\Q$comment\E\s*@) { WARN("SPDX_LICENSE_TAG", "Improper SPDX comment style for '$realfile', please use '$comment' instead\n" . $herecurr); } if ($comment !~ /^$/ && - $rawline !~ /^\+\Q$comment\E SPDX-License-Identifier: /) { - WARN("SPDX_LICENSE_TAG", - "Missing or malformed SPDX-License-Identifier tag in line $checklicenseline\n" . $herecurr); + $rawline !~ m@^\+\Q$comment\E SPDX-License-Identifier: @) { + WARN("SPDX_LICENSE_TAG", + "Missing or malformed SPDX-License-Identifier tag in line $checklicenseline\n" . 
$herecurr); } elsif ($rawline =~ /(SPDX-License-Identifier: .*)/) { - my $spdx_license = $1; - if (!is_SPDX_License_valid($spdx_license)) { - WARN("SPDX_LICENSE_TAG", - "'$spdx_license' is not supported in LICENSES/...\n" . $herecurr); - } + my $spdx_license = $1; + if (!is_SPDX_License_valid($spdx_license)) { + WARN("SPDX_LICENSE_TAG", + "'$spdx_license' is not supported in LICENSES/...\n" . $herecurr); + } } } } @@ -6639,6 +6641,12 @@ sub process { "unknown module license " . $extracted_string . "\n" . $herecurr); } } + +# check for sysctl duplicate constants + if ($line =~ /\.extra[12]\s*=\s*&(zero|one|int_max)\b/) { + WARN("DUPLICATED_SYSCTL_CONST", + "duplicated sysctl range checking value '$1', consider using the shared one in include/linux/sysctl.h\n" . $herecurr); + } } # If we have no input at all, then there is nothing to report on diff --git a/scripts/decode_stacktrace.sh b/scripts/decode_stacktrace.sh index bcdd45df3f51..13e5fbafdf2f 100755 --- a/scripts/decode_stacktrace.sh +++ b/scripts/decode_stacktrace.sh @@ -28,7 +28,7 @@ parse_symbol() { local objfile=${modcache[$module]} else [[ $modpath == "" ]] && return - local objfile=$(find "$modpath" -name $module.ko -print -quit) + local objfile=$(find "$modpath" -name "${module//_/[-_]}.ko*" -print -quit) [[ $objfile == "" ]] && return modcache[$module]=$objfile fi @@ -73,7 +73,7 @@ parse_symbol() { if [[ "${cache[$module,$address]+isset}" == "isset" ]]; then local code=${cache[$module,$address]} else - local code=$(addr2line -i -e "$objfile" "$address") + local code=$(${CROSS_COMPILE}addr2line -i -e "$objfile" "$address") cache[$module,$address]=$code fi @@ -85,7 +85,7 @@ parse_symbol() { fi # Strip out the base of the path - code=${code//^$basepath/""} + code=${code#$basepath/} # In the case of inlines, move everything to same line code=${code//$'\n'/' '} diff --git a/scripts/spelling.txt b/scripts/spelling.txt index 86b87332b9e5..99f82a2a5b54 100644 --- a/scripts/spelling.txt +++ b/scripts/spelling.txt @@ -1009,6 +1009,8 @@ programers||programmers programm||program programms||programs progresss||progress +prohibitted||prohibited +prohibitting||prohibiting promiscous||promiscuous promps||prompts pronnounced||pronounced @@ -1145,7 +1147,6 @@ senarios||scenarios sentivite||sensitive separatly||separately sepcify||specify -sepc||spec seperated||separated seperately||separately seperate||separate diff --git a/security/keys/sysctl.c b/security/keys/sysctl.c index dd1e21fab827..b46b651b3c4c 100644 --- a/security/keys/sysctl.c +++ b/security/keys/sysctl.c @@ -9,8 +9,6 @@ #include <linux/sysctl.h> #include "internal.h" -static const int zero, one = 1, max = INT_MAX; - struct ctl_table key_sysctls[] = { { .procname = "maxkeys", @@ -18,8 +16,8 @@ struct ctl_table key_sysctls[] = { .maxlen = sizeof(unsigned), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *) &one, - .extra2 = (void *) &max, + .extra1 = (void *) SYSCTL_ONE, + .extra2 = (void *) SYSCTL_INT_MAX, }, { .procname = "maxbytes", @@ -27,8 +25,8 @@ struct ctl_table key_sysctls[] = { .maxlen = sizeof(unsigned), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *) &one, - .extra2 = (void *) &max, + .extra1 = (void *) SYSCTL_ONE, + .extra2 = (void *) SYSCTL_INT_MAX, }, { .procname = "root_maxkeys", @@ -36,8 +34,8 @@ struct ctl_table key_sysctls[] = { .maxlen = sizeof(unsigned), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *) &one, - .extra2 = (void *) &max, + .extra1 = (void *) SYSCTL_ONE, + .extra2 = (void *) 
SYSCTL_INT_MAX, }, { .procname = "root_maxbytes", @@ -45,8 +43,8 @@ struct ctl_table key_sysctls[] = { .maxlen = sizeof(unsigned), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *) &one, - .extra2 = (void *) &max, + .extra1 = (void *) SYSCTL_ONE, + .extra2 = (void *) SYSCTL_INT_MAX, }, { .procname = "gc_delay", @@ -54,8 +52,8 @@ struct ctl_table key_sysctls[] = { .maxlen = sizeof(unsigned), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *) &zero, - .extra2 = (void *) &max, + .extra1 = (void *) SYSCTL_ZERO, + .extra2 = (void *) SYSCTL_INT_MAX, }, #ifdef CONFIG_PERSISTENT_KEYRINGS { @@ -64,8 +62,8 @@ struct ctl_table key_sysctls[] = { .maxlen = sizeof(unsigned), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = (void *) &zero, - .extra2 = (void *) &max, + .extra1 = (void *) SYSCTL_ZERO, + .extra2 = (void *) SYSCTL_INT_MAX, }, #endif { } diff --git a/security/loadpin/loadpin.c b/security/loadpin/loadpin.c index 9e826041da41..f29efb03a29b 100644 --- a/security/loadpin/loadpin.c +++ b/security/loadpin/loadpin.c @@ -51,8 +51,6 @@ static struct super_block *pinned_root; static DEFINE_SPINLOCK(pinned_root_spinlock); #ifdef CONFIG_SYSCTL -static int zero; -static int one = 1; static struct ctl_path loadpin_sysctl_path[] = { { .procname = "kernel", }, @@ -67,8 +65,8 @@ static struct ctl_table loadpin_sysctl_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, - .extra1 = &zero, - .extra2 = &one, + .extra1 = SYSCTL_ZERO, + .extra2 = SYSCTL_ONE, }, { } }; diff --git a/security/yama/yama_lsm.c b/security/yama/yama_lsm.c index efac68556b45..bf8871301020 100644 --- a/security/yama/yama_lsm.c +++ b/security/yama/yama_lsm.c @@ -449,7 +449,6 @@ static int yama_dointvec_minmax(struct ctl_table *table, int write, return proc_dointvec_minmax(&table_copy, write, buffer, lenp, ppos); } -static int zero; static int max_scope = YAMA_SCOPE_NO_ATTACH; static struct ctl_path yama_sysctl_path[] = { @@ -465,7 +464,7 @@ static struct ctl_table yama_sysctl_table[] = { .maxlen = sizeof(int), .mode = 0644, .proc_handler = yama_dointvec_minmax, - .extra1 = &zero, + .extra1 = SYSCTL_ZERO, .extra2 = &max_scope, }, { } diff --git a/sound/soc/qcom/qdsp6/q6asm.c b/sound/soc/qcom/qdsp6/q6asm.c index 4f85cb19a309..e8141a33a55e 100644 --- a/sound/soc/qcom/qdsp6/q6asm.c +++ b/sound/soc/qcom/qdsp6/q6asm.c @@ -1194,7 +1194,7 @@ EXPORT_SYMBOL_GPL(q6asm_open_read); * q6asm_write_async() - non blocking write * * @ac: audio client pointer - * @len: lenght in bytes + * @len: length in bytes * @msw_ts: timestamp msw * @lsw_ts: timestamp lsw * @wflags: flags associated with write diff --git a/tools/testing/nvdimm/test/iomap.c b/tools/testing/nvdimm/test/iomap.c index c6635fee27d8..219dd0a1cb08 100644 --- a/tools/testing/nvdimm/test/iomap.c +++ b/tools/testing/nvdimm/test/iomap.c @@ -108,7 +108,9 @@ static void nfit_test_kill(void *_pgmap) { struct dev_pagemap *pgmap = _pgmap; + WARN_ON(!pgmap || !pgmap->ref || !pgmap->kill || !pgmap->cleanup); pgmap->kill(pgmap->ref); + pgmap->cleanup(pgmap->ref); } void *__wrap_devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap) diff --git a/tools/testing/selftests/proc/proc-pid-vm.c b/tools/testing/selftests/proc/proc-pid-vm.c index 853aa164a401..18a3bde8bc96 100644 --- a/tools/testing/selftests/proc/proc-pid-vm.c +++ b/tools/testing/selftests/proc/proc-pid-vm.c @@ -215,6 +215,11 @@ static const char str_vsyscall[] = "ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]\n"; #ifdef 
__x86_64__ +static void sigaction_SIGSEGV(int _, siginfo_t *__, void *___) +{ + _exit(1); +} + /* * vsyscall page can't be unmapped, probe it with memory load. */ @@ -231,11 +236,19 @@ static void vsyscall(void) if (pid == 0) { struct rlimit rlim = {0, 0}; (void)setrlimit(RLIMIT_CORE, &rlim); + + /* Hide "segfault at ffffffffff600000" messages. */ + struct sigaction act; + memset(&act, 0, sizeof(struct sigaction)); + act.sa_flags = SA_SIGINFO; + act.sa_sigaction = sigaction_SIGSEGV; + (void)sigaction(SIGSEGV, &act, NULL); + *(volatile int *)0xffffffffff600000UL; exit(0); } - wait(&wstatus); - if (WIFEXITED(wstatus)) { + waitpid(pid, &wstatus, 0); + if (WIFEXITED(wstatus) && WEXITSTATUS(wstatus) == 0) { g_vsyscall = true; } } diff --git a/tools/testing/selftests/ptrace/.gitignore b/tools/testing/selftests/ptrace/.gitignore index b3e59d41fd82..cfcc49a7def7 100644 --- a/tools/testing/selftests/ptrace/.gitignore +++ b/tools/testing/selftests/ptrace/.gitignore @@ -1 +1,2 @@ +get_syscall_info peeksiginfo diff --git a/tools/testing/selftests/ptrace/Makefile b/tools/testing/selftests/ptrace/Makefile index cb21c76a18ca..c0b7f89f0930 100644 --- a/tools/testing/selftests/ptrace/Makefile +++ b/tools/testing/selftests/ptrace/Makefile @@ -1,6 +1,6 @@ # SPDX-License-Identifier: GPL-2.0-only CFLAGS += -iquote../../../../include/uapi -Wall -TEST_GEN_PROGS := peeksiginfo +TEST_GEN_PROGS := get_syscall_info peeksiginfo include ../lib.mk diff --git a/tools/testing/selftests/ptrace/get_syscall_info.c b/tools/testing/selftests/ptrace/get_syscall_info.c new file mode 100644 index 000000000000..5bcd1c7b5be6 --- /dev/null +++ b/tools/testing/selftests/ptrace/get_syscall_info.c @@ -0,0 +1,271 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright (c) 2018 Dmitry V. Levin <ldv@altlinux.org> + * All rights reserved. + * + * Check whether PTRACE_GET_SYSCALL_INFO semantics implemented in the kernel + * matches userspace expectations. + */ + +#include "../kselftest_harness.h" +#include <err.h> +#include <signal.h> +#include <asm/unistd.h> +#include "linux/ptrace.h" + +static int +kill_tracee(pid_t pid) +{ + if (!pid) + return 0; + + int saved_errno = errno; + + int rc = kill(pid, SIGKILL); + + errno = saved_errno; + return rc; +} + +static long +sys_ptrace(int request, pid_t pid, unsigned long addr, unsigned long data) +{ + return syscall(__NR_ptrace, request, pid, addr, data); +} + +#define LOG_KILL_TRACEE(fmt, ...) 
\ + do { \ + kill_tracee(pid); \ + TH_LOG("wait #%d: " fmt, \ + ptrace_stop, ##__VA_ARGS__); \ + } while (0) + +TEST(get_syscall_info) +{ + static const unsigned long args[][7] = { + /* a sequence of architecture-agnostic syscalls */ + { + __NR_chdir, + (unsigned long) "", + 0xbad1fed1, + 0xbad2fed2, + 0xbad3fed3, + 0xbad4fed4, + 0xbad5fed5 + }, + { + __NR_gettid, + 0xcaf0bea0, + 0xcaf1bea1, + 0xcaf2bea2, + 0xcaf3bea3, + 0xcaf4bea4, + 0xcaf5bea5 + }, + { + __NR_exit_group, + 0, + 0xfac1c0d1, + 0xfac2c0d2, + 0xfac3c0d3, + 0xfac4c0d4, + 0xfac5c0d5 + } + }; + const unsigned long *exp_args; + + pid_t pid = fork(); + + ASSERT_LE(0, pid) { + TH_LOG("fork: %m"); + } + + if (pid == 0) { + /* get the pid before PTRACE_TRACEME */ + pid = getpid(); + ASSERT_EQ(0, sys_ptrace(PTRACE_TRACEME, 0, 0, 0)) { + TH_LOG("PTRACE_TRACEME: %m"); + } + ASSERT_EQ(0, kill(pid, SIGSTOP)) { + /* cannot happen */ + TH_LOG("kill SIGSTOP: %m"); + } + for (unsigned int i = 0; i < ARRAY_SIZE(args); ++i) { + syscall(args[i][0], + args[i][1], args[i][2], args[i][3], + args[i][4], args[i][5], args[i][6]); + } + /* unreachable */ + _exit(1); + } + + const struct { + unsigned int is_error; + int rval; + } *exp_param, exit_param[] = { + { 1, -ENOENT }, /* chdir */ + { 0, pid } /* gettid */ + }; + + unsigned int ptrace_stop; + + for (ptrace_stop = 0; ; ++ptrace_stop) { + struct ptrace_syscall_info info = { + .op = 0xff /* invalid PTRACE_SYSCALL_INFO_* op */ + }; + const size_t size = sizeof(info); + const int expected_none_size = + (void *) &info.entry - (void *) &info; + const int expected_entry_size = + (void *) &info.entry.args[6] - (void *) &info; + const int expected_exit_size = + (void *) (&info.exit.is_error + 1) - + (void *) &info; + int status; + long rc; + + ASSERT_EQ(pid, wait(&status)) { + /* cannot happen */ + LOG_KILL_TRACEE("wait: %m"); + } + if (WIFEXITED(status)) { + pid = 0; /* the tracee is no more */ + ASSERT_EQ(0, WEXITSTATUS(status)); + break; + } + ASSERT_FALSE(WIFSIGNALED(status)) { + pid = 0; /* the tracee is no more */ + LOG_KILL_TRACEE("unexpected signal %u", + WTERMSIG(status)); + } + ASSERT_TRUE(WIFSTOPPED(status)) { + /* cannot happen */ + LOG_KILL_TRACEE("unexpected wait status %#x", status); + } + + switch (WSTOPSIG(status)) { + case SIGSTOP: + ASSERT_EQ(0, ptrace_stop) { + LOG_KILL_TRACEE("unexpected signal stop"); + } + ASSERT_EQ(0, sys_ptrace(PTRACE_SETOPTIONS, pid, 0, + PTRACE_O_TRACESYSGOOD)) { + LOG_KILL_TRACEE("PTRACE_SETOPTIONS: %m"); + } + ASSERT_LT(0, (rc = sys_ptrace(PTRACE_GET_SYSCALL_INFO, + pid, size, + (unsigned long) &info))) { + LOG_KILL_TRACEE("PTRACE_GET_SYSCALL_INFO: %m"); + } + ASSERT_EQ(expected_none_size, rc) { + LOG_KILL_TRACEE("signal stop mismatch"); + } + ASSERT_EQ(PTRACE_SYSCALL_INFO_NONE, info.op) { + LOG_KILL_TRACEE("signal stop mismatch"); + } + ASSERT_TRUE(info.arch) { + LOG_KILL_TRACEE("signal stop mismatch"); + } + ASSERT_TRUE(info.instruction_pointer) { + LOG_KILL_TRACEE("signal stop mismatch"); + } + ASSERT_TRUE(info.stack_pointer) { + LOG_KILL_TRACEE("signal stop mismatch"); + } + break; + + case SIGTRAP | 0x80: + ASSERT_LT(0, (rc = sys_ptrace(PTRACE_GET_SYSCALL_INFO, + pid, size, + (unsigned long) &info))) { + LOG_KILL_TRACEE("PTRACE_GET_SYSCALL_INFO: %m"); + } + switch (ptrace_stop) { + case 1: /* entering chdir */ + case 3: /* entering gettid */ + case 5: /* entering exit_group */ + exp_args = args[ptrace_stop / 2]; + ASSERT_EQ(expected_entry_size, rc) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_EQ(PTRACE_SYSCALL_INFO_ENTRY, info.op) { + 
LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_TRUE(info.arch) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_TRUE(info.instruction_pointer) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_TRUE(info.stack_pointer) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_EQ(exp_args[0], info.entry.nr) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_EQ(exp_args[1], info.entry.args[0]) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_EQ(exp_args[2], info.entry.args[1]) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_EQ(exp_args[3], info.entry.args[2]) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_EQ(exp_args[4], info.entry.args[3]) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_EQ(exp_args[5], info.entry.args[4]) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + ASSERT_EQ(exp_args[6], info.entry.args[5]) { + LOG_KILL_TRACEE("entry stop mismatch"); + } + break; + case 2: /* exiting chdir */ + case 4: /* exiting gettid */ + exp_param = &exit_param[ptrace_stop / 2 - 1]; + ASSERT_EQ(expected_exit_size, rc) { + LOG_KILL_TRACEE("exit stop mismatch"); + } + ASSERT_EQ(PTRACE_SYSCALL_INFO_EXIT, info.op) { + LOG_KILL_TRACEE("exit stop mismatch"); + } + ASSERT_TRUE(info.arch) { + LOG_KILL_TRACEE("exit stop mismatch"); + } + ASSERT_TRUE(info.instruction_pointer) { + LOG_KILL_TRACEE("exit stop mismatch"); + } + ASSERT_TRUE(info.stack_pointer) { + LOG_KILL_TRACEE("exit stop mismatch"); + } + ASSERT_EQ(exp_param->is_error, + info.exit.is_error) { + LOG_KILL_TRACEE("exit stop mismatch"); + } + ASSERT_EQ(exp_param->rval, info.exit.rval) { + LOG_KILL_TRACEE("exit stop mismatch"); + } + break; + default: + LOG_KILL_TRACEE("unexpected syscall stop"); + abort(); + } + break; + + default: + LOG_KILL_TRACEE("unexpected stop signal %#x", + WSTOPSIG(status)); + abort(); + } + + ASSERT_EQ(0, sys_ptrace(PTRACE_SYSCALL, pid, 0, 0)) { + LOG_KILL_TRACEE("PTRACE_SYSCALL: %m"); + } + } + + ASSERT_EQ(ARRAY_SIZE(args) * 2, ptrace_stop); +} + +TEST_HARNESS_MAIN diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c index 73818f1b2ef8..68092d15e12b 100644 --- a/tools/vm/slabinfo.c +++ b/tools/vm/slabinfo.c @@ -79,6 +79,7 @@ int sort_size; int sort_active; int set_debug; int show_ops; +int sort_partial; int show_activity; int output_lines = -1; int sort_loss; @@ -110,7 +111,7 @@ static void fatal(const char *x, ...) static void usage(void) { printf("slabinfo 4/15/2011. 
(c) 2007 sgi/(c) 2011 Linux Foundation.\n\n" - "slabinfo [-aADefhilnosrStTvz1LXBU] [N=K] [-dafzput] [slab-regexp]\n" + "slabinfo [-aABDefhilLnoPrsStTUvXz1] [N=K] [-dafzput] [slab-regexp]\n" "-a|--aliases Show aliases\n" "-A|--activity Most active slabs first\n" "-B|--Bytes Show size in bytes\n" @@ -124,6 +125,7 @@ static void usage(void) "-n|--numa Show NUMA information\n" "-N|--lines=K Show the first K slabs\n" "-o|--ops Show kmem_cache_ops\n" + "-P|--partial Sort by number of partial slabs\n" "-r|--report Detailed report on single slabs\n" "-s|--shrink Shrink slabs\n" "-S|--Size Sort by size\n" @@ -131,9 +133,9 @@ static void usage(void) "-T|--Totals Show summary information\n" "-U|--Unreclaim Show unreclaimable slabs only\n" "-v|--validate Validate slabs\n" + "-X|--Xtotals Show extended summary information\n" "-z|--zero Include empty slabs\n" "-1|--1ref Single reference\n" - "-X|--Xtotals Show extended summary information\n" "\n" "-d | --debug Switch off all debug options\n" @@ -146,6 +148,8 @@ static void usage(void) " p | P Poisoning\n" " u | U Tracking\n" " t | T Tracing\n" + + "\nSorting options (--Loss, --Size, --Partial) are mutually exclusive\n" ); } @@ -1047,6 +1051,8 @@ static void sort_slabs(void) result = slab_activity(s1) < slab_activity(s2); else if (sort_loss) result = slab_waste(s1) < slab_waste(s2); + else if (sort_partial) + result = s1->partial < s2->partial; else result = strcasecmp(s1->name, s2->name); @@ -1307,33 +1313,46 @@ static void output_slabs(void) } } +static void _xtotals(char *heading, char *underline, + int loss, int size, int partial) +{ + printf("%s%s", heading, underline); + line = 0; + sort_loss = loss; + sort_size = size; + sort_partial = partial; + sort_slabs(); + output_slabs(); +} + static void xtotals(void) { + char *heading, *underline; + totals(); link_slabs(); rename_slabs(); - printf("\nSlabs sorted by size\n"); - printf("--------------------\n"); - sort_loss = 0; - sort_size = 1; - sort_slabs(); - output_slabs(); + heading = "\nSlabs sorted by size\n"; + underline = "--------------------\n"; + _xtotals(heading, underline, 0, 1, 0); + + heading = "\nSlabs sorted by loss\n"; + underline = "--------------------\n"; + _xtotals(heading, underline, 1, 0, 0); + + heading = "\nSlabs sorted by number of partial slabs\n"; + underline = "---------------------------------------\n"; + _xtotals(heading, underline, 0, 0, 1); - printf("\nSlabs sorted by loss\n"); - printf("--------------------\n"); - line = 0; - sort_loss = 1; - sort_size = 0; - sort_slabs(); - output_slabs(); printf("\n"); } struct option opts[] = { { "aliases", no_argument, NULL, 'a' }, { "activity", no_argument, NULL, 'A' }, + { "Bytes", no_argument, NULL, 'B'}, { "debug", optional_argument, NULL, 'd' }, { "display-activity", no_argument, NULL, 'D' }, { "empty", no_argument, NULL, 'e' }, @@ -1341,21 +1360,21 @@ struct option opts[] = { { "help", no_argument, NULL, 'h' }, { "inverted", no_argument, NULL, 'i'}, { "slabs", no_argument, NULL, 'l' }, + { "Loss", no_argument, NULL, 'L'}, { "numa", no_argument, NULL, 'n' }, + { "lines", required_argument, NULL, 'N'}, { "ops", no_argument, NULL, 'o' }, - { "shrink", no_argument, NULL, 's' }, + { "partial", no_argument, NULL, 'p'}, { "report", no_argument, NULL, 'r' }, + { "shrink", no_argument, NULL, 's' }, { "Size", no_argument, NULL, 'S'}, { "tracking", no_argument, NULL, 't'}, { "Totals", no_argument, NULL, 'T'}, + { "Unreclaim", no_argument, NULL, 'U'}, { "validate", no_argument, NULL, 'v' }, + { "Xtotals", no_argument, NULL, 'X'}, { "zero", 
no_argument, NULL, 'z' }, { "1ref", no_argument, NULL, '1'}, - { "lines", required_argument, NULL, 'N'}, - { "Loss", no_argument, NULL, 'L'}, - { "Xtotals", no_argument, NULL, 'X'}, - { "Bytes", no_argument, NULL, 'B'}, - { "Unreclaim", no_argument, NULL, 'U'}, { NULL, 0, NULL, 0 } }; @@ -1367,18 +1386,18 @@ int main(int argc, char *argv[]) page_size = getpagesize(); - while ((c = getopt_long(argc, argv, "aAd::Defhil1noprstvzTSN:LXBU", + while ((c = getopt_long(argc, argv, "aABd::DefhilLnN:oPrsStTUvXz1", opts, NULL)) != -1) switch (c) { - case '1': - show_single_ref = 1; - break; case 'a': show_alias = 1; break; case 'A': sort_active = 1; break; + case 'B': + show_bytes = 1; + break; case 'd': set_debug = 1; if (!debug_opt_scan(optarg)) @@ -1399,45 +1418,48 @@ int main(int argc, char *argv[]) case 'i': show_inverted = 1; break; + case 'l': + show_slab = 1; + break; + case 'L': + sort_loss = 1; + break; case 'n': show_numa = 1; break; + case 'N': + if (optarg) { + output_lines = atoi(optarg); + if (output_lines < 1) + output_lines = 1; + } + break; case 'o': show_ops = 1; break; case 'r': show_report = 1; break; + case 'P': + sort_partial = 1; + break; case 's': shrink = 1; break; - case 'l': - show_slab = 1; + case 'S': + sort_size = 1; break; case 't': show_track = 1; break; - case 'v': - validate = 1; - break; - case 'z': - skip_zero = 0; - break; case 'T': show_totals = 1; break; - case 'S': - sort_size = 1; - break; - case 'N': - if (optarg) { - output_lines = atoi(optarg); - if (output_lines < 1) - output_lines = 1; - } + case 'U': + unreclaim_only = 1; break; - case 'L': - sort_loss = 1; + case 'v': + validate = 1; break; case 'X': if (output_lines == -1) @@ -1445,11 +1467,11 @@ int main(int argc, char *argv[]) extended_totals = 1; show_bytes = 1; break; - case 'B': - show_bytes = 1; + case 'z': + skip_zero = 0; break; - case 'U': - unreclaim_only = 1; + case '1': + show_single_ref = 1; break; default: fatal("%s: Invalid option '%c'\n", argv[0], optopt); diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c index 74b6582eaa3c..17aa4acde7a2 100644 --- a/virt/kvm/arm/mmu.c +++ b/virt/kvm/arm/mmu.c @@ -141,7 +141,7 @@ static int mmu_topup_memory_cache(struct kvm_mmu_memory_cache *cache, if (cache->nobjs >= min) return 0; while (cache->nobjs < max) { - page = (void *)__get_free_page(PGALLOC_GFP); + page = (void *)__get_free_page(GFP_PGTABLE_USER); if (!page) return -ENOMEM; cache->objects[cache->nobjs++] = page; |
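
The vsyscall hunk above probes whether the legacy vsyscall page is mapped by letting a forked child take the fault and report the result through its exit status. Below is a minimal standalone sketch of that pattern, x86_64 only as in the guarded hunk; the helper names (on_segv, vsyscall_page_readable, main) are invented here for illustration and are not part of the selftest.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

static void on_segv(int sig, siginfo_t *si, void *uc)
{
	/* Reading the vsyscall page faulted: report failure via exit status. */
	_exit(1);
}

static int vsyscall_page_readable(void)
{
	pid_t pid = fork();

	if (pid < 0)
		return 0;
	if (pid == 0) {
		struct rlimit rlim = { 0, 0 };
		struct sigaction act;

		/* Avoid leaving a core dump behind for the expected fault. */
		(void)setrlimit(RLIMIT_CORE, &rlim);
		memset(&act, 0, sizeof(act));
		act.sa_flags = SA_SIGINFO;
		act.sa_sigaction = on_segv;
		(void)sigaction(SIGSEGV, &act, NULL);

		*(volatile int *)0xffffffffff600000UL;	/* probe the page */
		_exit(0);
	}

	int wstatus;

	if (waitpid(pid, &wstatus, 0) != pid)
		return 0;
	return WIFEXITED(wstatus) && WEXITSTATUS(wstatus) == 0;
}

int main(void)
{
	printf("vsyscall page readable: %s\n",
	       vsyscall_page_readable() ? "yes" : "no");
	return 0;
}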
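
For the new PTRACE_GET_SYSCALL_INFO selftest, here is a rough sketch of how a tracer might consume the request outside the kselftest harness. It assumes installed UAPI headers new enough to provide PTRACE_GET_SYSCALL_INFO and struct ptrace_syscall_info; the selftest instead carries its own linux/ptrace.h copy and, like this sketch, goes through syscall(__NR_ptrace, ...) rather than the libc wrapper. Error handling is deliberately minimal.

#define _GNU_SOURCE
#include <stdio.h>
#include <signal.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <linux/ptrace.h>	/* PTRACE_GET_SYSCALL_INFO, struct ptrace_syscall_info */

static long sys_ptrace(int request, pid_t pid,
		       unsigned long addr, unsigned long data)
{
	return syscall(__NR_ptrace, request, pid, addr, data);
}

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		/* Tracee: attach, stop, then issue a couple of syscalls. */
		sys_ptrace(PTRACE_TRACEME, 0, 0, 0);
		kill(getpid(), SIGSTOP);
		getppid();
		_exit(0);
	}

	int status;

	waitpid(pid, &status, 0);	/* initial SIGSTOP stop */
	sys_ptrace(PTRACE_SETOPTIONS, pid, 0, PTRACE_O_TRACESYSGOOD);

	while (sys_ptrace(PTRACE_SYSCALL, pid, 0, 0) == 0 &&
	       waitpid(pid, &status, 0) == pid && WIFSTOPPED(status)) {
		struct ptrace_syscall_info info;

		if (WSTOPSIG(status) != (SIGTRAP | 0x80))
			continue;	/* not a syscall stop */
		if (sys_ptrace(PTRACE_GET_SYSCALL_INFO, pid,
			       sizeof(info), (unsigned long)&info) < 0)
			continue;
		if (info.op == PTRACE_SYSCALL_INFO_ENTRY)
			printf("entry: nr=%llu\n",
			       (unsigned long long)info.entry.nr);
		else if (info.op == PTRACE_SYSCALL_INFO_EXIT)
			printf("exit:  rval=%lld%s\n",
			       (long long)info.exit.rval,
			       info.exit.is_error ? " (error)" : "");
	}
	return 0;
}

On success the request returns the number of bytes it wrote, which the selftest compares against the expected per-op sizes; the sketch above only checks for failure.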
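
On the slabinfo side, the new -P/--partial switch joins -S/--Size and -L/--Loss as a third, mutually exclusive sort key, and -X/--Xtotals now emits a third table sorted by the number of partial slabs. Assuming a binary built from tools/vm/slabinfo.c, an invocation like "slabinfo -P -N 10" would list the ten caches with the most partial slabs, while "slabinfo -X" prints the extended summary followed by the size-, loss- and partial-sorted tables.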