/*
 * Only give sleepers 50% of their service deficit. This allows
 * them to run sooner, but does not allow tons of sleepers to
 * rip the spread apart.
 */
SCHED_FEAT(GENTLE_FAIR_SLEEPERS, 1)

/*
 * Place new tasks ahead so that they do not starve already running
 * tasks.
 */
SCHED_FEAT(START_DEBIT, 1)

/*
 * Should wakeups try to preempt running tasks?
 */
SCHED_FEAT(WAKEUP_PREEMPT, 1)

/*
 * Based on load and program behaviour, see if it makes sense to place
 * a newly woken task on the same cpu as the task that woke it --
 * improves cache locality. Typically used with SYNC wakeups as
 * generated by pipes and the like, see also SYNC_WAKEUPS.
 */
SCHED_FEAT(AFFINE_WAKEUPS, 1)

/*
 * Prefer to schedule the task we woke last (assuming it failed
 * wakeup-preemption), since it's likely going to consume data we
 * touched; increases cache locality.
 */
SCHED_FEAT(NEXT_BUDDY, 0)

/*
 * Prefer to schedule the task that ran last (when we did
 * wake-preempt), as it will likely touch the same data; increases
 * cache locality.
 */
SCHED_FEAT(LAST_BUDDY, 1)

/*
 * Consider buddies to be cache hot; decreases the likelihood of a
 * cache buddy being migrated away, increases cache locality.
 */
SCHED_FEAT(CACHE_HOT_BUDDY, 1)

/*
 * Use arch dependent cpu power functions.
 */
SCHED_FEAT(ARCH_POWER, 0)

SCHED_FEAT(HRTICK, 0)
SCHED_FEAT(DOUBLE_TICK, 0)
SCHED_FEAT(LB_BIAS, 1)

/*
 * Spin-wait on mutex acquisition when the mutex owner is running on
 * another cpu -- assumes that when the owner is running, it will soon
 * release the lock. Decreases scheduling overhead.
 */
SCHED_FEAT(OWNER_SPIN, 1)

/*
 * Decrement CPU power based on irq activity.
 */
SCHED_FEAT(NONIRQ_POWER, 1)