path: root/lib
Commit message (Author, Date, Files, Lines changed)
* dynamic_debug: add jump label support (Jason Baron, 2016-07-13, 1 file, -0/+7)

  Although dynamic debug is often only used for debug builds, sometimes it's
  enabled for production builds as well. Minimize its impact by using jump
  labels. This reduces the text section by 7000+ bytes in the kernel image
  below. It does increase data, but this should only be referenced when
  changing the direction of the branches, and hence usually not in cache.

         text    data     bss      dec     hex filename
      8194852 4879776  925696 14000324  d5a0c4 vmlinux.pre
      8187337 4960224  925696 14073257  d6bda9 vmlinux.post

  Link: http://lkml.kernel.org/r/d165b465e8c89bc582d973758d40be44c33f018b.1467837322.git.jbaron@akamai.com
  Signed-off-by: Jason Baron <jbaron@akamai.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Joe Perches <joe@perches.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
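  For context, a minimal sketch of the jump-label pattern this relies on, using
  the generic static-key API rather than the dynamic_debug internals (names
  below are illustrative, not from the patch):

      #include <linux/jump_label.h>
      #include <linux/printk.h>

      /* The key defaults to false, so the branch compiles to a NOP in the
       * hot path until someone flips it with static_branch_enable(). */
      static DEFINE_STATIC_KEY_FALSE(my_debug_key);

      void hot_path(void)
      {
              if (static_branch_unlikely(&my_debug_key))
                      pr_debug("debug path taken\n");
      }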
* Merge branch 'akpm-current/current' (Stephen Rothwell, 2016-07-13, 9 files, -23/+455)
|\
| * kcov: allow more fine-grained coverage instrumentation (Vegard Nossum, 2016-07-13, 1 file, -0/+11)

  For more targeted fuzzing, it's better to disable kernel-wide instrumentation
  and instead enable it on a per-subsystem basis. This follows the pattern of
  UBSAN and allows you to compile in the kcov driver without instrumenting the
  whole kernel.

  To instrument a part of the kernel, you can use either

      # for a single file in the current directory
      KCOV_INSTRUMENT_filename.o := y

  or

      # for all the files in the current directory (excluding subdirectories)
      KCOV_INSTRUMENT := y

  or

      # (same as above)
      ccflags-y += $(CFLAGS_KCOV)

  or

      # for all the files in the current directory (including subdirectories)
      subdir-ccflags-y += $(CFLAGS_KCOV)

  Link: http://lkml.kernel.org/r/1464008380-11405-1-git-send-email-vegard.nossum@oracle.com
  Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
| * lib: Add CRC64 ECMA module (Marian Chereji, 2016-07-13, 3 files, -0/+349)

  Add implementation of CRC64 ECMA checksum. We have an IP Acceleration driver
  for Freescale network processors which is using this CRC64. However, it still
  needs some work in order for it to become upstreamable.

  Signed-off-by: Marian Chereji <marian.chereji@freescale.com> Reviewed-by: Varvara Andrei-B21317 <andrei.varvara@freescale.com> Reviewed-by: Fleming Andrew-AFLEMING <AFLEMING@freescale.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
| * crc32: use ktime_get_ns() for measurement (Arnd Bergmann, 2016-07-13, 1 file, -12/+4)

  The crc32 test function measures the elapsed time in nanoseconds, but uses
  'struct timespec' for that. We want to remove timespec from the kernel for
  y2038 compatibility, and ktime_get_ns() also helps make the code simpler
  here. It is also slightly better to use monotonic time, as we are only
  interested in the time difference.

  Link: http://lkml.kernel.org/r/20160617143932.3289626-1-arnd@arndb.de
  Signed-off-by: Arnd Bergmann <arnd@arndb.de> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
| * lib/iommu-helper: skip to next segment (Sebastian Ott, 2016-07-13, 1 file, -2/+1)

  When a large enough area in the iommu bitmap is found but would span a
  boundary we continue the search starting from the next bit position. For
  large allocations this can lead to several useless invocations of
  bitmap_find_next_zero_area() and iommu_is_span_boundary().

  Continue the search from the start of the next segment (which is the next bit
  position such that we'll not cross the same segment boundary again).

  Link: http://lkml.kernel.org/r/alpine.LFD.2.20.1606081910070.3211@schleppi
  Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
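  A hedged sketch of the idea (helper name and arguments are illustrative, not
  the exact lib/iommu-helper.c code): when the candidate area would cross a
  segment boundary, restart the search at the start of the next segment rather
  than at the next bit.

      /* boundary_size is a power of two, so ALIGN() rounds the shifted index
       * up to the first bit of the following segment. */
      static unsigned long next_segment_start(unsigned long shift,
                                              unsigned long index,
                                              unsigned long boundary_size)
      {
              return ALIGN(shift + index, boundary_size) - shift;
      }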
| * printk, allow different timestamps for printk.time (Prarit Bhargava, 2016-07-13, 1 file, -3/+4)

  Over the past years I've seen many reports of bugs that include time-stamped
  kernel logs (enabled when CONFIG_PRINTK_TIME=y or printk.time=1 is specified
  as a kernel parameter) that do not align with either external time-stamped
  logs or /var/log/messages. This also makes determining the time of a failure
  difficult in cases where /var/log/messages is unavailable. For example,

      [root@intel-wildcatpass-06 ~]# date; echo "Hello!" > /dev/kmsg ; date
      Thu Dec 17 13:58:31 EST 2015
      Thu Dec 17 13:58:31 EST 2015

  which displays

      [83973.768912] Hello!

  on the serial console. Running a script to convert this to the stamped time,

      [root@intel-wildcatpass-06 ~]# ./human.sh | tail -1
      [Thu Dec 17 13:59:57 2015] Hello!

  which is already off by 1 minute and 26 seconds after ~24 hours of uptime.

  This occurs because the timestamp is obtained from a call to local_clock(),
  which (on x86) is a direct call to the hardware. These hardware clock reads
  are not adjusted by the standard ntp or ptp protocol, while the other
  timestamps are, and that results in situations where external time sources
  drift further and further away from the kernel log timestamps.

  This patch introduces printk.time=[0-3], allowing a user to specify an
  adjusted clock to use with printk timestamps. The hardware clock, i.e. the
  existing functionality, is preserved by default.

  Real clock & 32-bit systems: selecting the real clock printk timestamp may
  lead to unlikely situations where a timestamp is wrong because the real time
  offset is read without the protection of a sequence lock in the call to
  ktime_get_log_ts() in printk_get_ts().

  Signed-off-by: Prarit Bhargava <prarit@redhat.com> Cc: Petr Mladek <pmladek@suse.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Xunlei Pang <pang.xunlei@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Baolin Wang <baolin.wang@linaro.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Petr Mladek <pmladek@suse.cz> Cc: Tejun Heo <tj@kernel.org> Cc: Peter Hurley <peter@hurleysoftware.com> Cc: Vasily Averin <vvs@virtuozzo.com> Cc: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
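  A hedged illustration of what selecting a timestamp clock by an integer knob
  can look like (function names and value mapping are simplified assumptions,
  not the patch itself):

      /* 0 = off, 1 = local/hardware clock (existing behaviour),
       * 2 = monotonic clock, 3 = real (wall) clock. */
      static u64 example_printk_get_ts(int printk_time)
      {
              if (printk_time == 2)
                      return ktime_get_ns();       /* monotonic, rate-adjusted */
              if (printk_time == 3)
                      return ktime_get_real_ns();  /* wall clock */
              return local_clock();                /* default: raw hardware clock */
      }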
| * lib, switch CONFIG_PRINTK_TIME to int (Prarit Bhargava, 2016-07-13, 1 file, -2/+5)

  CONFIG_PRINTK_TIME is a bool and in order to add timestamp options for the
  monotonic and real time clock it must be expanded to an int.

  Signed-off-by: Prarit Bhargava <prarit@redhat.com> Cc: Petr Mladek <pmladek@suse.com> Cc: John Stultz <john.stultz@linaro.org> Cc: Xunlei Pang <pang.xunlei@linaro.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Baolin Wang <baolin.wang@linaro.org> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Petr Mladek <pmladek@suse.cz> Cc: Tejun Heo <tj@kernel.org> Cc: Peter Hurley <peter@hurleysoftware.com> Cc: Vasily Averin <vvs@virtuozzo.com> Cc: Joe Perches <joe@perches.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
| * mm, kasan: switch SLUB to stackdepot, enable memory quarantine for SLUB (Alexander Potapenko, 2016-07-13, 1 file, -2/+2)

  For KASAN builds:
   - switch the SLUB allocator to using stackdepot instead of storing the
     allocation/deallocation stacks in the objects;
   - change the freelist hook so that parts of the freelist can be put into
     the quarantine.

  Link: http://lkml.kernel.org/r/1468347165-41906-3-git-send-email-glider@google.com
  Signed-off-by: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <adech.fo@gmail.com> Cc: Christoph Lameter <cl@linux.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Steven Rostedt (Red Hat) <rostedt@goodmis.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Kostya Serebryany <kcc@google.com> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com> Cc: Kuthonuzo Luruo <kuthonuzo.luruo@hpe.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
| * radix-tree: implement radix_tree_maybe_preload_order() (Kirill A. Shutemov, 2016-07-13, 1 file, -5/+79)

  The new helper is similar to radix_tree_maybe_preload(), but tries to preload
  the number of nodes required to insert (1 << order) contiguous,
  naturally-aligned elements. This is required to push huge pages into the
  pagecache.

  Link: http://lkml.kernel.org/r/1466021202-61880-24-git-send-email-kirill.shutemov@linux.intel.com
  Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
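  The usual preload pattern the new helper slots into, as a hedged sketch (the
  order-aware call is assumed to take the gfp mask and order, mirroring
  radix_tree_maybe_preload(); the wrapper is illustrative):

      static int add_entry(struct address_space *mapping, pgoff_t index,
                           struct page *page, unsigned int order)
      {
              int error;

              /* Reserve enough radix-tree nodes for a naturally aligned
               * (1 << order) range before taking the tree lock. */
              error = radix_tree_maybe_preload_order(GFP_KERNEL, order);
              if (error)
                      return error;

              spin_lock_irq(&mapping->tree_lock);
              error = radix_tree_insert(&mapping->page_tree, index, page);
              spin_unlock_irq(&mapping->tree_lock);

              radix_tree_preload_end();
              return error;
      }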
| * mm/page_owner: use stackdepot to store stacktrace (Joonsoo Kim, 2016-07-13, 1 file, -0/+1)

  Currently, we store each page's allocation stacktrace in the corresponding
  page_ext structure and it requires a lot of memory. This causes the problem
  that memory-tight systems don't work well if page_owner is enabled. Moreover,
  even with this large memory consumption, we cannot get a full stacktrace
  because we allocate memory at boot time and just maintain 8 stacktrace slots
  to balance memory consumption. We could increase it further but that would
  make the system unusable or change system behaviour.

  To solve the problem, this patch uses stackdepot to store the stacktrace. It
  obviously provides memory saving but there is a drawback: stackdepot could
  fail. stackdepot allocates memory at runtime so it could fail if the system
  has not enough memory. But most allocation stacks are generated at a very
  early time and there is plenty of memory at that point, so failure would not
  happen easily. And one failure means that we miss just one page's allocation
  stacktrace, so it would not be a big problem. In this patch, when a memory
  allocation failure happens, we store a special stacktrace handle in the page
  that failed to save its stacktrace. With it, users can estimate memory usage
  properly even if a failure happens.

  Memory saving looks as follows (4GB memory system with page_owner; before the
  patch -> after the patch):

      static allocation: 92274688 bytes -> 25165824 bytes
      dynamic allocation after boot + kernel build: 0 bytes -> 327680 bytes
      total: 92274688 bytes -> 25493504 bytes

  72% reduction in total.

  Note that the implementation looks more complex than one would imagine
  because there is a recursion issue: stackdepot uses the page allocator and
  page_owner is called at page allocation, so using stackdepot in page_owner
  could re-enter the page allocator and then page_owner. To detect and avoid
  this, whenever we obtain a stacktrace, recursion is checked and page_owner is
  set to dummy information if it is found. Dummy information means that this
  page is allocated for the page_owner feature itself (such as stackdepot),
  which is understandable behaviour for the user.

  Link: http://lkml.kernel.org/r/1464230275-25791-6-git-send-email-iamjoonsoo.kim@lge.com
  Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Acked-by: Vlastimil Babka <vbabka@suse.cz> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: Minchan Kim <minchan@kernel.org> Cc: Alexander Potapenko <glider@google.com> Cc: Hugh Dickins <hughd@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
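  A hedged sketch of the stackdepot pattern described above (stackdepot API of
  that era; the surrounding helper and entry count are illustrative):

      #include <linux/stackdepot.h>
      #include <linux/stacktrace.h>

      static depot_stack_handle_t record_stack(gfp_t flags)
      {
              unsigned long entries[16];
              struct stack_trace trace = {
                      .entries     = entries,
                      .max_entries = ARRAY_SIZE(entries),
                      .skip        = 2,
              };

              save_stack_trace(&trace);
              /* The depot deduplicates identical traces and hands back a small
               * handle, which is where the memory saving comes from. */
              return depot_save_stack(&trace, flags);
      }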
| * dma-debug: track bucket lock state for static checkers (Stephen Boyd, 2016-07-13, 1 file, -0/+2)

  get_hash_bucket() and put_hash_bucket() acquire and release the same
  spinlock, but this confuses static checkers such as sparse:

      lib/dma-debug.c:254:27: warning: context imbalance in 'get_hash_bucket' - wrong count at exit
      lib/dma-debug.c:268:13: warning: context imbalance in 'put_hash_bucket' - unexpected unlock

  Add the appropriate acquire and release statements so that checkers can
  properly track the lock state.

  Link: http://lkml.kernel.org/r/20160701191552.24295-1-sboyd@codeaurora.org
  Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
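  For reference, a general illustration of sparse lock annotations of the kind
  involved here (simplified helpers, not the dma-debug code): the attributes
  tell sparse that one function exits with the lock held and its counterpart
  releases it, keeping the context count balanced.

      static void lock_bucket(struct hash_bucket *bucket, unsigned long *flags)
              __acquires(&bucket->lock)
      {
              spin_lock_irqsave(&bucket->lock, *flags);
      }

      static void unlock_bucket(struct hash_bucket *bucket, unsigned long flags)
              __releases(&bucket->lock)
      {
              spin_unlock_irqrestore(&bucket->lock, flags);
      }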
* | Merge remote-tracking branch 'rcu/rcu/next' (Stephen Rothwell, 2016-07-13, 1 file, -0/+20)
|\ \
| * | rcu: Disable RCU_PERF_TEST and RCU_TORTURE_TEST for usermode Linux (Fengguang Wu, 2016-07-12, 1 file, -0/+2)

  Usermode Linux currently does not implement arch_irqs_disabled_flags(), which
  results in a build failure in TASKS_RCU. Commit 570dd3c74241 ("rcu: Disable
  TASKS_RCU for usermode Linux") attempted to fix this by making TASKS_RCU
  depend on !UML, which does work in production builds. However, test builds
  that enable either RCU_PERF_TEST or RCU_TORTURE_TEST will select TASKS_RCU,
  defeating the dependency on !UML. This commit therefore makes both
  RCU_PERF_TEST and RCU_TORTURE_TEST also depend on !UML. The usermode Linux
  maintainers expect to merge arch_irqs_disabled_flags() into 4.8, at which
  point this commit may be reverted.

  Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | waketorture: Add utilization measurement (Paul E. McKenney, 2016-06-15, 1 file, -0/+1)

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | waketorture: Add a wakeup-torture module (Paul E. McKenney, 2016-06-15, 1 file, -0/+17)

  This commit adds a wakeup-torture module to assist tracking down an elusive
  lost-wakeup problem.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* | | Merge remote-tracking branch 'tip/auto-latest' (Stephen Rothwell, 2016-07-13, 7 files, -44/+70)
|\ \ \
| * \ \ Merge branch 'x86/microcode' (Ingo Molnar, 2016-07-10, 1 file, -1/+4)
| |\ \ \
| | * | | lib/cpio: Make find_cpio_data()'s offset arg optional (Borislav Petkov, 2016-06-08, 1 file, -1/+4)

  Some callers don't use it so make it optional.

  Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1465225850-7352-4-git-send-email-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | Merge branch 'x86/asm' (Ingo Molnar, 2016-07-10, 2 files, -5/+4)
| |\ \ \
| | * | | x86/hweight: Get rid of the special calling convention (Borislav Petkov, 2016-06-08, 2 files, -5/+4)

  People complained about ARCH_HWEIGHT_CFLAGS and how it throws a wrench into
  kcov, lto, etc. experimentations.

  Add asm versions for __sw_hweight{32,64}() and do explicit saving and
  restoring of clobbered registers. This gets rid of the special calling
  convention. We get to call those functions on !X86_FEATURE_POPCNT CPUs.

  We still need to hardcode POPCNT and register operands as some old gas
  versions which we support do not know about POPCNT. Btw, remove the redundant
  REX prefix from 32-bit POPCNT because alternatives can do padding now.

  Suggested-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Borislav Petkov <bp@suse.de> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1464605787-20603-1-git-send-email-bp@alien8.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
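  For reference, the portable bit-twiddling population count that the generic
  __sw_hweight32() is built on (a sketch of the algorithm, not the new x86
  assembly):

      static unsigned int example_hweight32(unsigned int w)
      {
              /* Fold bit pairs, then nibbles, then bytes; the final multiply
               * sums the per-byte counts into the top byte. */
              w -= (w >> 1) & 0x55555555;
              w  = (w & 0x33333333) + ((w >> 2) & 0x33333333);
              w  = (w + (w >> 4)) & 0x0f0f0f0f;
              return (w * 0x01010101) >> 24;
      }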
| * | | Merge branch 'timers/core' (Ingo Molnar, 2016-07-10, 1 file, -1/+0)
| |\ \ \
| | * | | timers: Remove set_timer_slack() leftovers (Thomas Gleixner, 2016-07-07, 1 file, -1/+0)

  We now have implicit batching in the timer wheel. The slack API is no longer
  used, so remove it.

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Alan Stern <stern@rowland.harvard.edu> Cc: Andrew F. Davis <afd@ti.com> Cc: Arjan van de Ven <arjan@infradead.org> Cc: Chris Mason <clm@fb.com> Cc: David S. Miller <davem@davemloft.net> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Cc: Eric Dumazet <edumazet@google.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: George Spelvin <linux@sciencehorizons.net> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Jaehoon Chung <jh80.chung@samsung.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: John Stultz <john.stultz@linaro.org> Cc: Josh Triplett <josh@joshtriplett.org> Cc: Len Brown <lenb@kernel.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mathias Nyman <mathias.nyman@intel.com> Cc: Pali Rohár <pali.rohar@gmail.com> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rik van Riel <riel@redhat.com> Cc: Sebastian Reichel <sre@kernel.org> Cc: Ulf Hansson <ulf.hansson@linaro.org> Cc: linux-block@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-mmc@vger.kernel.org Cc: linux-pm@vger.kernel.org Cc: linux-usb@vger.kernel.org Cc: netdev@vger.kernel.org Cc: rt@linutronix.de Link: http://lkml.kernel.org/r/20160704094342.189813118@linutronix.de Signed-off-by: Ingo Molnar <mingo@kernel.org>
| * | | Merge branch 'locking/core' (Ingo Molnar, 2016-07-10, 2 files, -4/+62)
| |\ \ \
| | |_|/
| |/| |
| | * | locking/atomic: Implement atomic{,64,_long}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}() (Peter Zijlstra, 2016-06-16, 2 files, -4/+62)

  Now that all the architectures have implemented support for these new atomic
  primitives, add on the generic infrastructure to expose and use it.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Borislav Petkov <bp@suse.de> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Will Deacon <will.deacon@arm.com> Cc: linux-arch@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
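  A hedged usage sketch of the difference between the existing *_return and the
  new *_fetch_* forms (generic atomic API; the wrapper function is
  illustrative):

      static void fetch_op_example(void)
      {
              atomic_t v = ATOMIC_INIT(5);
              int newv, oldv;

              newv = atomic_add_return(3, &v);   /* returns the new value: 8 */
              oldv = atomic_fetch_add(3, &v);    /* returns the old value: 8; v is now 11 */
              oldv = atomic_fetch_andnot(1, &v); /* old value; v &= ~1 afterwards */

              pr_info("new=%d old=%d\n", newv, oldv);
      }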
| * | torture: Remove CONFIG_RCU_TORTURE_TEST_RUNNABLE, simplify code (Paul E. McKenney, 2016-06-14, 1 file, -17/+0)

  This commit removes CONFIG_RCU_TORTURE_TEST_RUNNABLE in favor of the
  already-existing rcutorture.torture_runnable kernel boot parameter. It also
  converts an #ifdef into IS_ENABLED(), saving a few lines of code.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
| * | torture: Simplify code, eliminate RCU_PERF_TEST_RUNNABLE (Paul E. McKenney, 2016-06-14, 1 file, -16/+0)

  This commit applies the infamous IS_ENABLED() macro to eliminate a #ifdef. It
  also eliminates the RCU_PERF_TEST_RUNNABLE Kconfig option in favor of the
  already-existing rcuperf.perf_runnable kernel boot parameter.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* | Merge remote-tracking branch 'block/for-next' (Stephen Rothwell, 2016-07-13, 1 file, -30/+15)
|\ \
| * | iov_iter: use bvec iterator to implement iterate_bvec() (Ming Lei, 2016-06-09, 1 file, -30/+15)

  bvec has had a native/mature iterator for a long time, so there is no need to
  use a reinvented wheel for iterating bvecs in lib/iov_iter.c.

  Two ITER_BVEC test cases were run:
   - xfstest (-g auto) on loop dio/aio, no regression found
   - a swap file works well under extreme stress (stress-ng --all 64 -t 800 -v),
     with lots of OOMs triggered, and the whole system still survives

  Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Ming Lei <ming.lei@canonical.com> Tested-by: Hannes Reinecke <hare@suse.com> Signed-off-by: Jens Axboe <axboe@fb.com>
* | Merge remote-tracking branch 'kspp/for-next/kspp' (Stephen Rothwell, 2016-07-13, 2 files, -2/+2)
|\ \
| * | latent_entropy: Mark functions with __latent_entropy (Emese Revfy, 2016-07-06, 2 files, -2/+2)

  The __latent_entropy gcc attribute can only be placed on functions and
  variables. If it is on a function, the plugin will instrument it. If the
  attribute is on a variable, the plugin will initialize it with a random
  value. The variable must be an integer, an integer array type or a structure
  with integer fields.

  These functions have been selected because they are init functions, are
  called at random times, or they have variable loops, each of which provides
  some level of latent entropy.

  Signed-off-by: Emese Revfy <re.emese@gmail.com> Signed-off-by: Kees Cook <keescook@chromium.org>
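  A hedged illustration of where the attribute can sit (placeholder names; when
  the plugin is disabled the attribute expands to nothing):

      /* On a variable: the plugin initializes it with a random value. */
      static unsigned long entropy_seed __latent_entropy;

      /* On a function: the plugin instruments its local state. */
      static void __latent_entropy mix_some_entropy(void)
      {
              entropy_seed ^= jiffies;
      }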
| * | gcc-plugins: disable under COMPILE_TEST (Kees Cook, 2016-06-14, 1 file, -2/+2)

  Since adding the gcc plugin development headers is required for the gcc
  plugin support, we should ease into this new kernel build dependency more
  slowly. For now, disable the gcc plugins under COMPILE_TEST so that
  all*config builds will skip it.

  Signed-off-by: Kees Cook <keescook@chromium.org>
* | | Merge remote-tracking branch 'kbuild/for-next' (Stephen Rothwell, 2016-07-13, 1 file, -0/+2)
|\ \ \
| |/ /
| * | Add sancov plugin (Emese Revfy, 2016-06-07, 1 file, -0/+2)

  The sancov gcc plugin inserts a __sanitizer_cov_trace_pc() call at the start
  of basic blocks. This plugin is a helper plugin for the kcov feature. It
  supports all gcc versions with plugin support (from gcc-4.5 on). It is based
  on the gcc commit "Add fuzzing coverage support" by Dmitry Vyukov
  (https://gcc.gnu.org/viewcvs/gcc?limit_changes=0&view=revision&revision=231296).

  Signed-off-by: Emese Revfy <re.emese@gmail.com> Acked-by: Kees Cook <keescook@chromium.org> Signed-off-by: Michal Marek <mmarek@suse.com>
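  Conceptually, the plugin rewrites every basic block as if the source had an
  explicit call at its head (illustration only; the real insertion happens in
  the compiler's intermediate representation):

      void __sanitizer_cov_trace_pc(void);         /* collected by kernel/kcov.c */

      int abs_val(int x)
      {
              __sanitizer_cov_trace_pc();          /* block 1: function entry */
              if (x < 0) {
                      __sanitizer_cov_trace_pc();  /* block 2: taken branch */
                      x = -x;
              }
              return x;
      }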
* | | Merge remote-tracking branch 'crypto/master' (Stephen Rothwell, 2016-07-13, 2 files, -173/+92)
|\ \ \
| * | | lib/mpi: Do not do sg_virt (Herbert Xu, 2016-07-01, 1 file, -36/+50)

  Currently the mpi SG helpers use sg_virt which is completely broken. It
  happens to work with normal kernel memory but will fail with anything that
  is not linearly mapped. This patch fixes this by using the SG iterator
  helpers.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
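  A hedged sketch of the scatterlist mapping-iterator pattern that replaces the
  sg_virt() assumption (standard sg_miter API; consume() is a hypothetical
  consumer):

      #include <linux/scatterlist.h>

      static void walk_sg(struct scatterlist *sgl, unsigned int nents,
                          void (*consume)(const void *buf, size_t len))
      {
              struct sg_mapping_iter miter;

              sg_miter_start(&miter, sgl, nents,
                             SG_MITER_ATOMIC | SG_MITER_FROM_SG);
              while (sg_miter_next(&miter)) {
                      /* miter.addr is a kernel mapping of miter.length bytes,
                       * valid even for pages that sg_virt() cannot reach. */
                      consume(miter.addr, miter.length);
              }
              sg_miter_stop(&miter);
      }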
| * | | crypto: rsa - Generate fixed-length output (Herbert Xu, 2016-07-01, 1 file, -29/+26)

  Every implementation of RSA that we have naturally generates output with
  leading zeroes. The one and only user of RSA, pkcs1pad, wants to have those
  leading zeroes in place; in fact, because they are currently absent it has to
  write those zeroes itself. So we shouldn't be stripping leading zeroes in the
  first place. In fact this patch makes rsa-generic produce output with fixed
  length so that pkcs1pad does not need to do any extra work. This patch also
  changes DH to use the new interface.

  Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | | lib/mpi: refactor mpi_read_from_buffer() in terms of mpi_read_raw_data() (Nicolai Stange, 2016-05-31, 1 file, -21/+3)

  mpi_read_from_buffer() and mpi_read_raw_data() do basically the same thing
  except that the former extracts the number of payload bits from the first two
  bytes of the input buffer. Besides that, the data copying logic is exactly
  the same. Replace the open coded buffer to MPI instance conversion by a call
  to mpi_read_raw_data().

  Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | | lib/mpi: mpi_read_from_buffer(): sanitize short buffer printk (Nicolai Stange, 2016-05-31, 1 file, -2/+2)

  The first two bytes of the input buffer encode its expected length and
  mpi_read_from_buffer() prints a console message if the given buffer is too
  short. However, there are some oddities with how this message is printed:
   - It is printed at the default loglevel. This is different from the one used
     in the case that the first two bytes' value is unsupportedly large,
     i.e. KERN_INFO.
   - The format specifier '%d' is used for unsigned ints.
   - It prints the values of nread and *ret_nread. This is redundant since the
     former is always the latter + 1.

  Clean this up as follows:
   - Use pr_info() rather than printk() with no loglevel.
   - Use the format specifier '%u' in place of '%d'.
   - Do not print the redundant 'nread' but the more helpful 'nbytes' value.

  Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | | lib/mpi: mpi_read_from_buffer(): return -EINVAL upon too short buffer (Nicolai Stange, 2016-05-31, 1 file, -10/+8)

  Currently, if the input buffer is shorter than the expected length as
  indicated by its first two bytes, an MPI instance of this expected length
  will be allocated and filled with as much data as is available. The rest will
  remain uninitialized.

  Instead of leaving this condition undetected, an error code should be
  reported to the caller. Since this situation indicates that the input
  buffer's first two bytes, encoding the number of expected bits, are garbled,
  -EINVAL is appropriate here. If the input buffer is shorter than indicated by
  its first two bytes, make mpi_read_from_buffer() return -EINVAL.

  Get rid of the 'nread' variable: with the new semantics, the total number of
  bytes read from the input buffer is known in advance.

  Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | | lib/digsig: digsig_verify_rsa(): return -EINVAL if modulo length is zero (Nicolai Stange, 2016-05-31, 1 file, -3/+5)

  Currently, if digsig_verify_rsa() detects that the modulo's length is zero,
  i.e. mlen == 0, it returns -ENOMEM, which doesn't really fit here. Make
  digsig_verify_rsa() return -EINVAL upon mlen == 0.

  Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | | lib/mpi: mpi_read_from_buffer(): return error code (Nicolai Stange, 2016-05-31, 2 files, -7/+11)

  mpi_read_from_buffer() reads an MPI from a buffer into a newly allocated MPI
  instance. It expects the buffer's leading two bytes to contain the number of
  bits, followed by the actual payload. On failure, it returns NULL and updates
  the in/out argument ret_nread somewhat inconsistently:
   - If the given buffer is too short to contain the leading two bytes encoding
     the number of bits, or their value is unsupported, then ret_nread will be
     cleared.
   - If the allocation of the resulting MPI instance fails, ret_nread is left
     as is.

  The only user of mpi_read_from_buffer(), digsig_verify_rsa(), simply checks
  for a return value of NULL and returns -ENOMEM if that happens. While this is
  all of cosmetic nature only, there is another error condition which currently
  isn't detectable by the caller of mpi_read_from_buffer(): if the given buffer
  is too small to hold the number of bits as encoded in its first two bytes,
  the return value will be non-NULL and *ret_nread > 0.

  In preparation for communicating this condition to the caller, let
  mpi_read_from_buffer() return error values by means of the ERR_PTR()
  mechanism. Make the sole caller of mpi_read_from_buffer(),
  digsig_verify_rsa(), check the return value for IS_ERR() rather than == NULL.
  If IS_ERR() is true, return the associated error value rather than the fixed
  -ENOMEM.

  Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
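  For reference, a minimal sketch of the ERR_PTR()/IS_ERR() convention the
  function moves to (the blob type and helpers are hypothetical, not the mpi
  code):

      #include <linux/err.h>
      #include <linux/slab.h>

      struct blob { unsigned int len; };

      static struct blob *blob_parse(const void *buf, unsigned int len)
      {
              struct blob *b;

              if (len < 2)
                      return ERR_PTR(-EINVAL);  /* error encoded in the pointer */
              b = kzalloc(sizeof(*b), GFP_KERNEL);
              if (!b)
                      return ERR_PTR(-ENOMEM);
              b->len = len;
              return b;
      }

      static int blob_use(const void *buf, unsigned int len)
      {
              struct blob *b = blob_parse(buf, len);

              if (IS_ERR(b))
                      return PTR_ERR(b);        /* propagate the real error code */
              kfree(b);
              return 0;
      }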
| * | | lib/mpi: mpi_read_raw_data(): fix nbits calculation (Nicolai Stange, 2016-05-31, 1 file, -1/+1)

  The number of bits, nbits, is calculated in mpi_read_raw_data() as follows:

      nbits = nbytes * 8;

  Afterwards, the number of leading zero bits of the first byte get subtracted:

      nbits -= count_leading_zeros(buffer[0]);

  However, count_leading_zeros() takes an unsigned long and thus, the u8 gets
  promoted to an unsigned long. Thus, the above doesn't subtract the number of
  leading zeros in the most significant nonzero input byte from nbits, but the
  number of leading zeros of the most significant nonzero input byte promoted
  to unsigned long, i.e. BITS_PER_LONG - 8 too many. Fix this by subtracting
  count_leading_zeros(...) - (BITS_PER_LONG - 8) from nbits only.

  Fixes: e1045992949 ("MPILIB: Provide a function to read raw data into an MPI")
  Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
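  The corrected arithmetic, as a hedged sketch (leading zero bytes are assumed
  to have been stripped already, as the real function does; the wrapper is
  illustrative):

      static unsigned int mpi_nbits_example(const u8 *buffer, unsigned int nbytes)
      {
              unsigned int nbits = nbytes * 8;

              /* buffer[0] is promoted to unsigned long for count_leading_zeros(),
               * so discount the BITS_PER_LONG - 8 bits added by the promotion. */
              if (nbytes > 0)
                      nbits -= count_leading_zeros((unsigned long)buffer[0])
                               - (BITS_PER_LONG - 8);
              return nbits;
      }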
| * | | lib/mpi: mpi_read_raw_data(): purge redundant clearing of nbits (Nicolai Stange, 2016-05-31, 1 file, -2/+0)

  In mpi_read_raw_data(), unsigned nbits is calculated as follows:

      nbits = nbytes * 8;

  and redundantly cleared later on if nbytes == 0:

      if (nbytes > 0)
              ...
      else
              nbits = 0;

  Purge this redundant clearing for the sake of clarity.

  Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
| * | | lib/mpi: purge mpi_set_buffer() (Nicolai Stange, 2016-05-31, 1 file, -76/+0)

  mpi_set_buffer() has no in-tree users and similar functionality is provided
  by mpi_read_raw_data(). Remove mpi_set_buffer().

  Signed-off-by: Nicolai Stange <nicstange@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
* | | Introduce rb_replace_node_rcu() (David Howells, 2016-07-06, 1 file, -2/+24)

  Implement an RCU-safe variant of rb_replace_node() and rearrange
  rb_replace_node() to do things in the same order.

  Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
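  A hedged sketch of how the RCU variant is meant to be used (the writer holds
  the tree's update-side lock while readers traverse under rcu_read_lock(); the
  item structure and helper are illustrative):

      struct item {
              struct rb_node  rb;
              struct rcu_head rcu;
              unsigned long   key;
      };

      static void replace_item(struct rb_root *root, struct item *old,
                               struct item *new)
      {
              /* Swap 'old' for 'new' in place without rebalancing; concurrent
               * RCU readers see either the old or the new node. */
              new->key = old->key;
              rb_replace_node_rcu(&old->rb, &new->rb, root);
              kfree_rcu(old, rcu);
      }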
* | lib/uuid.c: use correct offset in uuid parser (Bjørn Mork, 2016-05-30, 1 file, -2/+2)

  Use '+ 0' and '+ 1' as offsets, like they were intended, instead of adding to
  the result.

  Fixes: 2b1b0d66704a ("lib/uuid.c: introduce a few more generic helpers")
  Signed-off-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
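  A hedged illustration of the class of bug being fixed (the real parser walks
  a table of string positions; 'pos' and the helper are illustrative):

      static u8 parse_hex_byte(const char *uuid, int pos)
      {
              /* Intended: '+ 0' and '+ 1' are offsets into the string... */
              int hi = hex_to_bin(uuid[pos + 0]);
              int lo = hex_to_bin(uuid[pos + 1]);

              /* ...whereas the buggy form added them to the converted value:
               * hex_to_bin(uuid[pos]) + 0 and hex_to_bin(uuid[pos]) + 1. */
              return (hi << 4) | lo;
      }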
* | lib/uuid: add a test module (Andy Shevchenko, 2016-05-30, 3 files, -0/+137)

  It appears that somehow I missed a test of the latest UUID rework which
  landed in the kernel. Present a small test module to avoid such cases in the
  future.

  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'hash' of git://ftp.sciencehorizons.net/linux (Linus Torvalds, 2016-05-28, 3 files, -0/+262)
|\
  Pull string hash improvements from George Spelvin:
  "This series does several related things:

   - Makes the dcache hash (fs/namei.c) useful for general kernel use.
     (Thanks to Bruce for noticing the zero-length corner case)

   - Converts the string hashes in <linux/sunrpc/svcauth.h> to use the above.

   - Avoids 64-bit multiplies in hash_64() on 32-bit platforms. Two 32-bit
     multiplies will do well enough.

   - Rids the world of the bad hash multipliers in hash_32. This finishes the
     job started in commit 689de1d6ca95 ("Minimal fix-up of bad hashing
     behavior of hash_64()"). The vast majority of Linux architectures have
     hardware support for 32x32-bit multiply and so derive no benefit from
     "simplified" multipliers. The few processors that do not (68000, h8/300
     and some models of Microblaze) have arch-specific implementations added.
     Those patches are last in the series.

   - Overhauls the dcache hash mixing. The patch in commit 0fed3ac866ea
     ("namei: Improve hash mixing if CONFIG_DCACHE_WORD_ACCESS") was an
     off-the-cuff suggestion. Replaced with a much more careful design that's
     simultaneously faster and better. (My own invention, as there was nothing
     suitable in the literature I could find. Comments welcome!)

   - Modify the hash_name() loop to skip the initial HASH_MIX(). This would
     let us salt the hash if we ever wanted to.

   - Sort out partial_name_hash(). The hash function is declared as using a
     long state, even though it's truncated to 32 bits at the end and the
     extra internal state contributes nothing to the result. And some callers
     do odd things:
      - fs/hfs/string.c only allocates 32 bits of state
      - fs/hfsplus/unicode.c uses it to hash 16-bit unicode symbols not bytes

   - Modify bytemask_from_count to handle inputs of 1..sizeof(long) rather
     than 0..sizeof(long)-1. This would simplify users other than
     full_name_hash"

  Special thanks to Bruce Fields for testing and finding bugs in v1. (I learned
  some humbling lessons about "obviously correct" code.)

  On the arch-specific front, the m68k assembly has been tested in a standalone
  test harness, I've been in contact with the Microblaze maintainers who mostly
  don't care, as the hardware multiplier is never omitted in real-world
  applications, and I haven't heard anything from the H8/300 world"

  * 'hash' of git://ftp.sciencehorizons.net/linux:
      h8300: Add <asm/hash.h>
      microblaze: Add <asm/hash.h>
      m68k: Add <asm/hash.h>
      <linux/hash.h>: Add support for architecture-specific functions
      fs/namei.c: Improve dcache hash function
      Eliminate bad hash multipliers from hash_32() and hash_64()
      Change hash_64() return value to 32 bits
      <linux/sunrpc/svcauth.h>: Define hash_str() in terms of hashlen_string()
      fs/namei.c: Add hashlen_string() function
      Pull out string hash to <linux/stringhash.h>
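  For orientation, a hedged usage sketch of the multiplicative hash interface
  being tuned here (hash_64() now returns a u32, as described above; the helper
  is illustrative):

      #include <linux/hash.h>

      static unsigned int pick_bucket(u64 ino, unsigned int table_bits)
      {
              /* Multiply by a golden-ratio constant and keep the top
               * 'table_bits' bits of the product. */
              return hash_64(ino, table_bits);
      }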
| * <linux/hash.h>: Add support for architecture-specific functions (George Spelvin, 2016-05-28, 3 files, -0/+262)

  This is just the infrastructure; there are no users yet.

  This is modelled on CONFIG_ARCH_RANDOM; a CONFIG_ symbol declares the
  existence of <asm/hash.h>. That file may define its own versions of various
  functions, and define HAVE_* symbols (no CONFIG_ prefix!) to suppress the
  generic ones.

  Included is a self-test (in lib/test_hash.c) that verifies the basics. It is
  NOT in general required that the arch-specific functions compute the same
  thing as the generic, but if a HAVE_* symbol is defined with the value 1,
  then equality is tested.

  Signed-off-by: George Spelvin <linux@sciencehorizons.net> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Greg Ungerer <gerg@linux-m68k.org> Cc: Andreas Schwab <schwab@linux-m68k.org> Cc: Philippe De Muyter <phdm@macq.eu> Cc: linux-m68k@lists.linux-m68k.org Cc: Alistair Francis <alistai@xilinx.com> Cc: Michal Simek <michal.simek@xilinx.com> Cc: Yoshinori Sato <ysato@users.sourceforge.jp> Cc: uclinux-h8-devel@lists.sourceforge.jp