config CRASH_CORE
	bool

config KEXEC_CORE
	select CRASH_CORE
	bool

config OPROFILE
	tristate "OProfile system profiling"
	depends on PROFILING
	depends on HAVE_OPROFILE
	select RING_BUFFER
	select RING_BUFFER_ALLOW_SWAP
	help
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

	  If unsure, say N.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	default n
	depends on OPROFILE && X86
	help
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than the
	  hardware provides counters for. This is realized by switching
	  between events at a user-specified time interval.

	  If unsure, say N.

config HAVE_OPROFILE
	bool

config OPROFILE_NMI_TIMER
	def_bool y
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !PPC64

config KPROBES
	bool "Kprobes"
	depends on MODULES
	depends on HAVE_KPROBES
	select KALLSYMS
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".

config JUMP_LABEL
	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	help
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. The update
	  of the condition is slower, but such updates are very rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )

config OPTPROBES
	def_bool y
	depends on KPROBES && HAVE_OPTPROBES
	depends on !PREEMPT

config KPROBES_ON_FTRACE
	def_bool y
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	help
	  If the function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

config UPROBES
	bool "Transparent user-space probes (EXPERIMENTAL)"
	depends on UPROBE_EVENT && PERF_EVENTS
	default n
	select PERCPU_RWSEM
	help
	  Uprobes is the user-space counterpart to kprobes: they
	  enable instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

	  If in doubt, say "N".

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	help
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	bool
	help
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler.)

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  much.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config ARCH_USE_BUILTIN_BSWAP
	bool
	help
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.

config KRETPROBES
	def_bool y
	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	bool
	depends on HAVE_USER_RETURN_NOTIFIER
	help
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT
	bool

config HAVE_KPROBES
	bool

config HAVE_KRETPROBES
	bool

config HAVE_OPTPROBES
	bool

config HAVE_KPROBES_ON_FTRACE
	bool

config HAVE_NMI_WATCHDOG
	bool

config HAVE_ARCH_TRACEHOOK
	bool

config HAVE_DMA_CONTIGUOUS
	bool

config USE_GENERIC_SMP_HELPERS
	bool

config GENERIC_SMP_IDLE_THREAD
	bool

config GENERIC_IDLE_POLL_SETUP
	bool

config ARCH_INIT_TASK
	bool

config ARCH_TASK_STRUCT_ALLOCATOR
	bool

config ARCH_THREAD_INFO_ALLOCATOR
	bool

config HAVE_REGS_AND_STACK_ACCESS_API
	bool
	help
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example the kprobes-based event tracer needs this API.

config HAVE_CLK
	bool
	help
	  The <linux/clk.h> calls support software clock gating and
	  thus are a key power management tool on many systems.

config HAVE_DMA_API_DEBUG
	bool

config HAVE_HW_BREAKPOINT
	bool
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	bool
	depends on HAVE_HW_BREAKPOINT
	help
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, others have mixed registers that store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints in the
	  latter fashion.

config HAVE_USER_RETURN_NOTIFIER
	bool

config HAVE_PERF_EVENTS_NMI
	bool
	help
	  System hardware can generate an NMI using the perf event
	  subsystem. It also supports calculating CPU cycle events
	  to determine how many clock cycles have elapsed in a given
	  period.

config HAVE_PERF_REGS
	bool
	help
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	bool
	help
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL
	bool

config HAVE_RCU_TABLE_FREE
	bool

config ARCH_HAVE_NMI_SAFE_CMPXCHG
	bool

config HAVE_ALIGNED_STRUCT_PAGE
	bool
	help
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However selecting this
	  might increase the size of a struct page by a word.

config HAVE_CMPXCHG_LOCAL
	bool

config HAVE_CMPXCHG_DOUBLE
	bool

config ARCH_WANT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
	bool

config HAVE_ARCH_SECCOMP_FILTER
	bool
	help
	  An arch should select this symbol if it provides all of these things:
	  - syscall_get_arch()
	  - syscall_get_arguments()
	  - syscall_rollback()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up

config SECCOMP_FILTER
	def_bool y
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	help
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/prctl/seccomp_filter.txt for details.

config HAVE_CC_STACKPROTECTOR
	bool
	help
	  An arch should select this symbol if:
	  - its compiler supports the -fstack-protector option
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config CC_STACKPROTECTOR
	def_bool n
	help
	  Set when a stack-protector mode is enabled, so that the build
	  can enable kernel-side support for the GCC feature.

choice
	prompt "Stack Protector buffer overflow detection"
	depends on HAVE_CC_STACKPROTECTOR
	default CC_STACKPROTECTOR_NONE
	help
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack-based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

config CC_STACKPROTECTOR_NONE
	bool "None"
	help
	  Disable the "stack-protector" GCC feature.

config CC_STACKPROTECTOR_REGULAR
	bool "Regular"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size
	  by about 0.3%.

config CC_STACKPROTECTOR_STRONG
	bool "Strong"
	select CC_STACKPROTECTOR
	help
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions:

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
	  size by about 2%.

endchoice

config HAVE_CONTEXT_TRACKING
	bool
	help
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter() through
	  the slow path using the TIF_NOHZ flag. Exception handlers must be
	  wrapped as well. IRQs are already protected inside
	  rcu_irq_enter()/rcu_irq_exit() but preemption or signal handling on
	  IRQ exit still needs to be protected.

config HAVE_VIRT_CPU_ACCOUNTING
	bool

config HAVE_IRQ_TIME_ACCOUNTING
	bool
	help
	  Archs need to ensure they use a high enough resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().

config HAVE_ARCH_TRANSPARENT_HUGEPAGE
	bool

config HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
	bool

config HAVE_ARCH_SOFT_DIRTY
	bool

config HAVE_ARCH_HUGE_VMAP
	bool

config HAVE_MOD_ARCH_SPECIFIC
	bool
	help
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	bool
	help
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	bool
	help
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config HAVE_UNDERSCORE_SYMBOL_PREFIX
	bool
	help
	  Some architectures generate an _ in front of C symbols; things like
	  module loading and assembly files need to know about this.

config HAVE_STACK_VALIDATION
	bool
	help
	  Architecture supports the 'objtool check' host tool command, which
	  performs compile-time stack metadata validation.

config ARCH_HAS_ELF_RANDOMIZE
	bool
	help
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_mmap_rnd()
	  - arch_randomize_brk()

config HAVE_ARCH_MMAP_RND_BITS
	bool
	help
	  An arch should select this symbol if it supports setting a variable
	  number of bits for use in establishing the base address for mmap
	  allocations, has MMU enabled and provides values for both:
	  - ARCH_MMAP_RND_BITS_MIN
	  - ARCH_MMAP_RND_BITS_MAX

config ARCH_MMAP_RND_BITS_MIN
	int

config ARCH_MMAP_RND_BITS_MAX
	int

config ARCH_MMAP_RND_BITS_DEFAULT
	int

config ARCH_MMAP_RND_BITS
	int "Number of bits to use for ASLR of mmap base address" if EXPERT
	range ARCH_MMAP_RND_BITS_MIN ARCH_MMAP_RND_BITS_MAX
	default ARCH_MMAP_RND_BITS_DEFAULT if ARCH_MMAP_RND_BITS_DEFAULT
	default ARCH_MMAP_RND_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations. This value will be bounded
	  by the architecture's minimum and maximum supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_bits tunable.

config HAVE_ARCH_MMAP_RND_COMPAT_BITS
	bool
	help
	  An arch should select this symbol if it supports running applications
	  in compatibility mode, supports setting a variable number of bits for
	  use in establishing the base address for mmap allocations, has MMU
	  enabled and provides values for both:
	  - ARCH_MMAP_RND_COMPAT_BITS_MIN
	  - ARCH_MMAP_RND_COMPAT_BITS_MAX

config ARCH_MMAP_RND_COMPAT_BITS_MIN
	int

config ARCH_MMAP_RND_COMPAT_BITS_MAX
	int

config ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	int

config ARCH_MMAP_RND_COMPAT_BITS
	int "Number of bits to use for ASLR of mmap base address for compatible applications" if EXPERT
	range ARCH_MMAP_RND_COMPAT_BITS_MIN ARCH_MMAP_RND_COMPAT_BITS_MAX
	default ARCH_MMAP_RND_COMPAT_BITS_DEFAULT if ARCH_MMAP_RND_COMPAT_BITS_DEFAULT
	default ARCH_MMAP_RND_COMPAT_BITS_MIN
	depends on HAVE_ARCH_MMAP_RND_COMPAT_BITS
	help
	  This value can be used to select the number of bits to use to
	  determine the random offset to the base address of vma regions
	  resulting from mmap allocations for compatible applications. This
	  value will be bounded by the architecture's minimum and maximum
	  supported values.

	  This value can be changed after boot using the
	  /proc/sys/vm/mmap_rnd_compat_bits tunable.

config HAVE_RELIABLE_STACKTRACE
	bool
	help
	  Architecture has a save_stack_trace_tsk_reliable() function which
	  only returns a stack trace if it can guarantee the trace is reliable.

config CLONE_BACKWARDS
	bool
	help
	  Architecture has tls passed as the 4th argument of clone(2),
	  not the 5th one.

config CLONE_BACKWARDS2
	bool
	help
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	bool
	help
	  Architecture has tls passed as the 3rd argument of clone(2),
	  not the 5th one.

config ODD_RT_SIGACTION
	bool
	help
	  Architecture has unusual rt_sigaction(2) arguments.

config OLD_SIGSUSPEND
	bool
	help
	  Architecture has the old sigsuspend(2) syscall, of the
	  one-argument variety.

config OLD_SIGSUSPEND3
	bool
	help
	  Even weirder antique ABI - three-argument sigsuspend(2).

config OLD_SIGACTION
	bool
	help
	  Architecture has the old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but a fairly different variant of sigaction(2), thanks to OSF/1
	  compatibility...

config COMPAT_OLD_SIGACTION
	bool

config REFCOUNT_FULL
	bool "Perform full reference count validation at the expense of speed"
	help
	  Enabling this switches the refcounting infrastructure from a fast
	  unchecked atomic_t implementation to a fully state-checked
	  implementation, which can be (slightly) slower but provides
	  protection against various use-after-free conditions that can be
	  used in security flaw exploits.

source "kernel/gcov/Kconfig"