# SPDX-License-Identifier: GPL-2.0-only
#
# Block device driver configuration
#

menuconfig MD
        bool "Multiple devices driver support (RAID and LVM)"
        depends on BLOCK
        select SRCU
        help
          Support multiple physical spindles through a single logical device.
          Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
        tristate "RAID support"
        select BLOCK_HOLDER_DEPRECATED if SYSFS
        help
          This driver lets you combine several hard disk partitions into one
          logical block device. This can be used to simply append one
          partition to another one or to combine several redundant hard disks
          into a RAID1/4/5 device so as to provide protection against hard
          disk failures. This is called "Software RAID" since the combining of
          the partitions is done by the kernel. "Hardware RAID" means that the
          combining is done by a dedicated controller; if you have such a
          controller, you do not need to say Y here.

          More information about Software RAID on Linux is contained in the
          Software RAID mini-HOWTO, available from
          <https://www.tldp.org/docs.html#howto>. There you will also learn
          where to get the supporting user space utilities raidtools.

          If unsure, say N.

config MD_AUTODETECT
        bool "Autodetect RAID arrays during kernel boot"
        depends on BLK_DEV_MD=y
        default y
        help
          If you say Y here, then the kernel will try to autodetect raid
          arrays as part of its boot process.

          If you don't use raid and say Y, this autodetection can cause
          a several-second delay in the boot time due to the various
          synchronisation steps that autodetection performs.

          If unsure, say Y.

config MD_LINEAR
        tristate "Linear (append) mode (deprecated)"
        depends on BLK_DEV_MD
        help
          If you say Y here, then your multiple devices driver will be able to
          use the so-called linear mode, i.e. it will combine the hard disk
          partitions by simply appending one to the other.

          To compile this as a module, choose M here: the module
          will be called linear.

          If unsure, say Y.

config MD_RAID0
        tristate "RAID-0 (striping) mode"
        depends on BLK_DEV_MD
        help
          If you say Y here, then your multiple devices driver will be able to
          use the so-called raid0 mode, i.e. it will combine the hard disk
          partitions into one logical device in such a fashion as to fill them
          up evenly, one chunk here and one chunk there. This will increase
          the throughput rate if the partitions reside on distinct disks.

          Information about Software RAID on Linux is contained in the
          Software-RAID mini-HOWTO, available from
          <https://www.tldp.org/docs.html#howto>. There you will also
          learn where to get the supporting user space utilities raidtools.

          To compile this as a module, choose M here: the module
          will be called raid0.

          If unsure, say Y.

config MD_RAID1
        tristate "RAID-1 (mirroring) mode"
        depends on BLK_DEV_MD
        help
          A RAID-1 set consists of several disk drives which are exact copies
          of each other.  In the event of a mirror failure, the RAID driver
          will continue to use the operational mirrors in the set, providing
          an error free MD (multiple device) to the higher levels of the
          kernel.  In a set with N drives, the available space is the capacity
          of a single drive, and the set protects against a failure of (N - 1)
          drives.

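          As a purely illustrative example, a RAID-1 set built from three
          1 TB drives still provides only 1 TB of usable space, but it can
          survive the failure of any two of the three drives.
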
          Information about Software RAID on Linux is contained in the
          Software-RAID mini-HOWTO, available from
          <https://www.tldp.org/docs.html#howto>.  There you will also
          learn where to get the supporting user space utilities raidtools.

          If you want to use such a RAID-1 set, say Y.  To compile this code
          as a module, choose M here: the module will be called raid1.

          If unsure, say Y.

config MD_RAID10
        tristate "RAID-10 (mirrored striping) mode"
        depends on BLK_DEV_MD
        help
          RAID-10 provides a combination of striping (RAID-0) and
          mirroring (RAID-1) with easier configuration and more flexible
          layout.
          Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
          be the same size (or at least, only as much as the smallest device
          will be used).
          RAID-10 provides a variety of layouts that provide different levels
          of redundancy and performance.

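          The layouts are commonly referred to as 'near', 'far' and 'offset'
          copies; for example, the mdadm default of two 'near' copies behaves
          much like a conventional stripe of mirrors.
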
          RAID-10 requires mdadm-1.7.0 or later, available at:

          https://www.kernel.org/pub/linux/utils/raid/mdadm/

          If unsure, say Y.

config MD_RAID456
        tristate "RAID-4/RAID-5/RAID-6 mode"
        depends on BLK_DEV_MD
        select RAID6_PQ
        select LIBCRC32C
        select ASYNC_MEMCPY
        select ASYNC_XOR
        select ASYNC_PQ
        select ASYNC_RAID6_RECOV
        help
          A RAID-5 set of N drives with a capacity of C MB per drive provides
          the capacity of C * (N - 1) MB, and protects against a failure
          of a single drive. For a given sector (row) number, (N - 1) drives
          contain data sectors, and one drive contains the parity protection.
          For a RAID-4 set, the parity blocks are present on a single drive,
          while a RAID-5 set distributes the parity across the drives in one
          of the available parity distribution methods.

          A RAID-6 set of N drives with a capacity of C MB per drive
          provides the capacity of C * (N - 2) MB, and protects
          against a failure of any two drives. For a given sector
          (row) number, (N - 2) drives contain data sectors, and two
          drives contain two independent redundancy syndromes.  Like
          RAID-5, RAID-6 distributes the syndromes across the drives
          in one of the available parity distribution methods.

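          As a purely illustrative example, four drives of 1000 MB each
          provide roughly 3000 MB of usable capacity as a RAID-5 set
          (1000 * (4 - 1)) and roughly 2000 MB as a RAID-6 set
          (1000 * (4 - 2)).
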
          Information about Software RAID on Linux is contained in the
          Software-RAID mini-HOWTO, available from
          <https://www.tldp.org/docs.html#howto>. There you will also
          learn where to get the supporting user space utilities raidtools.

          If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y.  To
          compile this code as a module, choose M here: the module
          will be called raid456.

          If unsure, say Y.

config MD_MULTIPATH
        tristate "Multipath I/O support (deprecated)"
        depends on BLK_DEV_MD
        help
          MD_MULTIPATH provides a simple multi-path personality for use in
          the MD framework.  It is not under active development.  New
          projects should consider using DM_MULTIPATH which has more
          features and more testing.

          If unsure, say N.

config MD_FAULTY
        tristate "Faulty test module for MD (deprecated)"
        depends on BLK_DEV_MD
        help
          The "faulty" module allows for a block device that occasionally returns
          read or write errors.  It is useful for testing.

          If unsure, say N.


config MD_CLUSTER
        tristate "Cluster Support for MD"
        depends on BLK_DEV_MD
        depends on DLM
        default n
        help
        Clustering support for MD devices. This enables locking and
        synchronization across multiple systems on the cluster, so all
        nodes in the cluster can access the MD devices simultaneously.

        This brings the redundancy (and uptime) of RAID levels across the
        nodes of the cluster. Currently, it can work with raid1 and raid10
        (limited support).

        If unsure, say N.

source "drivers/md/bcache/Kconfig"

config BLK_DEV_DM_BUILTIN
        bool

config BLK_DEV_DM
        tristate "Device mapper support"
        select BLOCK_HOLDER_DEPRECATED if SYSFS
        select BLK_DEV_DM_BUILTIN
        depends on DAX || DAX=n
        help
          Device-mapper is a low level volume manager.  It works by allowing
          people to specify mappings for ranges of logical sectors.  Various
          mapping types are available, in addition people may write their own
          modules containing custom mappings if they wish.

          Higher level volume managers such as LVM2 use this driver.

          To compile this as a module, choose M here: the module will be
          called dm-mod.

          If unsure, say N.

config DM_DEBUG
        bool "Device mapper debugging support"
        depends on BLK_DEV_DM
        help
          Enable this for messages that may help debug device-mapper problems.

          If unsure, say N.

config DM_BUFIO
       tristate
       depends on BLK_DEV_DM
        help
         This interface allows you to do buffered I/O on a device and acts
         as a cache, holding recently-read blocks in memory and performing
         delayed writes.

config DM_DEBUG_BLOCK_MANAGER_LOCKING
       bool "Block manager locking"
       depends on DM_BUFIO
        help
         Block manager locking can catch various metadata corruption issues.

         If unsure, say N.

config DM_DEBUG_BLOCK_STACK_TRACING
       bool "Keep stack trace of persistent data block lock holders"
       depends on STACKTRACE_SUPPORT && DM_DEBUG_BLOCK_MANAGER_LOCKING
       select STACKTRACE
        help
         Enable this for messages that may help debug problems with the
         block manager locking used by thin provisioning and caching.

         If unsure, say N.

config DM_BIO_PRISON
       tristate
       depends on BLK_DEV_DM
        help
         Some bio locking schemes used by other device-mapper targets
         including thin provisioning.

source "drivers/md/persistent-data/Kconfig"

config DM_UNSTRIPED
       tristate "Unstriped target"
       depends on BLK_DEV_DM
        help
          Unstripes I/O so it is issued solely on a single drive in a HW
          RAID0 or dm-striped target.

config DM_CRYPT
        tristate "Crypt target support"
        depends on BLK_DEV_DM
        depends on (ENCRYPTED_KEYS || ENCRYPTED_KEYS=n)
        depends on (TRUSTED_KEYS || TRUSTED_KEYS=n)
        select CRYPTO
        select CRYPTO_CBC
        select CRYPTO_ESSIV
        help
          This device-mapper target allows you to create a device that
          transparently encrypts the data on it. You'll need to activate
          the ciphers you're going to use in the cryptoapi configuration.

          For further information on dm-crypt and userspace tools see:
          <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt>

          To compile this code as a module, choose M here: the module will
          be called dm-crypt.

          If unsure, say N.

config DM_SNAPSHOT
       tristate "Snapshot target"
       depends on BLK_DEV_DM
       select DM_BUFIO
        help
         Allow volume managers to take writable snapshots of a device.

config DM_THIN_PROVISIONING
       tristate "Thin provisioning target"
       depends on BLK_DEV_DM
       select DM_PERSISTENT_DATA
       select DM_BIO_PRISON
        help
         Provides thin provisioning and snapshots that share a data store.

config DM_CACHE
       tristate "Cache target (EXPERIMENTAL)"
       depends on BLK_DEV_DM
       default n
       select DM_PERSISTENT_DATA
       select DM_BIO_PRISON
        help
         dm-cache attempts to improve performance of a block device by
         moving frequently used data to a smaller, higher performance
         device.  Different 'policy' plugins can be used to change the
         algorithms used to select which blocks are promoted, demoted,
         cleaned etc.  It supports writeback and writethrough modes.

config DM_CACHE_SMQ
       tristate "Stochastic MQ Cache Policy (EXPERIMENTAL)"
       depends on DM_CACHE
       default y
        help
         A cache policy that uses a multiqueue ordered by recent hits
         to select which blocks should be promoted and demoted.
         This is meant to be a general purpose policy.  It prioritises
         reads over writes.  This SMQ policy (vs MQ) offers the promise
         of less memory utilization, improved performance and increased
         adaptability in the face of changing workloads.

config DM_WRITECACHE
        tristate "Writecache target"
        depends on BLK_DEV_DM
        help
           The writecache target caches writes on persistent memory or SSD.
           It is intended for databases or other programs that need extremely
           low commit latency.

           The writecache target doesn't cache reads because reads are supposed
           to be cached in standard RAM.

config DM_EBS
        tristate "Emulated block size target (EXPERIMENTAL)"
        depends on BLK_DEV_DM && !HIGHMEM
        select DM_BUFIO
        help
          dm-ebs emulates smaller logical block size on backing devices
          with larger ones (e.g. 512 byte sectors on 4K native disks).

config DM_ERA
       tristate "Era target (EXPERIMENTAL)"
       depends on BLK_DEV_DM
       default n
       select DM_PERSISTENT_DATA
       select DM_BIO_PRISON
        help
         dm-era tracks which parts of a block device are written to
         over time.  Useful for maintaining cache coherency when using
         vendor snapshots.

config DM_CLONE
       tristate "Clone target (EXPERIMENTAL)"
       depends on BLK_DEV_DM
       default n
       select DM_PERSISTENT_DATA
        help
         dm-clone produces a one-to-one copy of an existing, read-only source
         device into a writable destination device. The cloned device is
         visible/mountable immediately and the copy of the source device to the
         destination device happens in the background, in parallel with user
         I/O.

         If unsure, say N.

config DM_MIRROR
       tristate "Mirror target"
       depends on BLK_DEV_DM
        help
         Allow volume managers to mirror logical volumes, also
         needed for live data migration tools such as 'pvmove'.

config DM_LOG_USERSPACE
        tristate "Mirror userspace logging"
        depends on DM_MIRROR && NET
        select CONNECTOR
        help
          The userspace logging module provides a mechanism for
          relaying the dm-dirty-log API to userspace.  Log designs
          which are more suited to userspace implementation (e.g.
          shared storage logs) or experimental logs can be implemented
          by leveraging this framework.

config DM_RAID
       tristate "RAID 1/4/5/6/10 target"
       depends on BLK_DEV_DM
       select MD_RAID0
       select MD_RAID1
       select MD_RAID10
       select MD_RAID456
       select BLK_DEV_MD
        help
         A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6 mappings.

         A RAID-5 set of N drives with a capacity of C MB per drive provides
         the capacity of C * (N - 1) MB, and protects against a failure
         of a single drive. For a given sector (row) number, (N - 1) drives
         contain data sectors, and one drive contains the parity protection.
         For a RAID-4 set, the parity blocks are present on a single drive,
         while a RAID-5 set distributes the parity across the drives in one
         of the available parity distribution methods.

         A RAID-6 set of N drives with a capacity of C MB per drive
         provides the capacity of C * (N - 2) MB, and protects
         against a failure of any two drives. For a given sector
         (row) number, (N - 2) drives contain data sectors, and two
         drives contain two independent redundancy syndromes.  Like
         RAID-5, RAID-6 distributes the syndromes across the drives
         in one of the available parity distribution methods.

config DM_ZERO
        tristate "Zero target"
        depends on BLK_DEV_DM
        help
          A target that discards writes, and returns all zeroes for
          reads.  Useful in some recovery situations.

config DM_MULTIPATH
        tristate "Multipath target"
        depends on BLK_DEV_DM
        # nasty syntax but means make DM_MULTIPATH independent
        # of SCSI_DH if the latter isn't defined but if
        # it is, DM_MULTIPATH must depend on it.  We get a build
        # error if SCSI_DH=m and DM_MULTIPATH=y
        depends on !SCSI_DH || SCSI
        help
          Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
        tristate "I/O Path Selector based on the number of in-flight I/Os"
        depends on DM_MULTIPATH
        help
          This path selector is a dynamic load balancer which selects
          the path with the least number of in-flight I/Os.

          If unsure, say N.

config DM_MULTIPATH_ST
        tristate "I/O Path Selector based on the service time"
        depends on DM_MULTIPATH
        help
          This path selector is a dynamic load balancer which selects
          the path expected to complete the incoming I/O in the shortest
          time.

          If unsure, say N.

config DM_MULTIPATH_HST
        tristate "I/O Path Selector based on historical service time"
        depends on DM_MULTIPATH
        help
          This path selector is a dynamic load balancer which selects
          the path expected to complete the incoming I/O in the shortest
          time by comparing estimated service time (based on historical
          service time).

          If unsure, say N.

config DM_MULTIPATH_IOA
        tristate "I/O Path Selector based on CPU submission"
        depends on DM_MULTIPATH
        help
          This path selector selects the path based on the CPU the IO is
          executed on and the CPU to path mapping setup at path addition time.

          If unsure, say N.

config DM_DELAY
        tristate "I/O delaying target"
        depends on BLK_DEV_DM
        help
        A target that delays reads and/or writes and can send
        them to different devices.  Useful for testing.

        If unsure, say N.

config DM_DUST
        tristate "Bad sector simulation target"
        depends on BLK_DEV_DM
        help
        A target that simulates bad sector behavior.
        Useful for testing.

        If unsure, say N.

config DM_INIT
        bool "DM \"dm-mod.create=\" parameter support"
        depends on BLK_DEV_DM=y
        help
        Enable "dm-mod.create=" parameter to create mapped devices at init time.
        This option is useful to allow mounting rootfs without requiring an
        initramfs.
        See Documentation/admin-guide/device-mapper/dm-init.rst for dm-mod.create="..."
        format.

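        As an illustration only (the document referenced above is
        authoritative for the exact syntax), a boot command line might
        contain something along the lines of:
          dm-mod.create="lroot,,,rw, 0 4096 linear 98:16 0, 4096 4096 linear 98:32 0" root=/dev/dm-0
        where "lroot" and the 98:x device numbers are hypothetical.
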
        If unsure, say N.

config DM_UEVENT
        bool "DM uevents"
        depends on BLK_DEV_DM
        help
        Generate udev events for DM events.

config DM_FLAKEY
       tristate "Flakey target"
       depends on BLK_DEV_DM
        help
         A target that intermittently fails I/O for debugging purposes.

config DM_VERITY
        tristate "Verity target support"
        depends on BLK_DEV_DM
        select CRYPTO
        select CRYPTO_HASH
        select DM_BUFIO
        help
          This device-mapper target creates a read-only device that
          transparently validates the data on one underlying device against
          a pre-generated tree of cryptographic checksums stored on a second
          device.

          You'll need to activate the digests you're going to use in the
          cryptoapi configuration.

          To compile this code as a module, choose M here: the module will
          be called dm-verity.

          If unsure, say N.

config DM_VERITY_VERIFY_ROOTHASH_SIG
        def_bool n
        bool "Verity data device root hash signature verification support"
        depends on DM_VERITY
        select SYSTEM_DATA_VERIFICATION
        help
          Add the ability for a dm-verity device to be validated if the
          pre-generated tree of cryptographic checksums passed has a pkcs#7
          signature file that can validate the roothash of the tree.

          By default, rely on the builtin trusted keyring.

          If unsure, say N.

config DM_VERITY_VERIFY_ROOTHASH_SIG_SECONDARY_KEYRING
        bool "Verity data device root hash signature verification with secondary keyring"
        depends on DM_VERITY_VERIFY_ROOTHASH_SIG
        depends on SECONDARY_TRUSTED_KEYRING
        help
          Rely on the secondary trusted keyring to verify dm-verity signatures.

          If unsure, say N.

config DM_VERITY_FEC
        bool "Verity forward error correction support"
        depends on DM_VERITY
        select REED_SOLOMON
        select REED_SOLOMON_DEC8
        help
          Add forward error correction support to dm-verity. This option
          makes it possible to use pre-generated error correction data to
          recover from corrupted blocks.

          If unsure, say N.

config DM_SWITCH
        tristate "Switch target support (EXPERIMENTAL)"
        depends on BLK_DEV_DM
        help
          This device-mapper target creates a device that supports an arbitrary
          mapping of fixed-size regions of I/O across a fixed set of paths.
          The path used for any specific region can be switched dynamically
          by sending the target a message.

          To compile this code as a module, choose M here: the module will
          be called dm-switch.

          If unsure, say N.

config DM_LOG_WRITES
        tristate "Log writes target support"
        depends on BLK_DEV_DM
        help
          This device-mapper target takes two devices, one device to use
          normally, one to log all write operations done to the first device.
          This is for use by file system developers wishing to verify that
          their fs is writing a consistent file system at all times by allowing
          them to replay the log in a variety of ways and to check the
          contents.

          To compile this code as a module, choose M here: the module will
          be called dm-log-writes.

          If unsure, say N.

config DM_INTEGRITY
        tristate "Integrity target support"
        depends on BLK_DEV_DM
        select BLK_DEV_INTEGRITY
        select DM_BUFIO
        select CRYPTO
        select CRYPTO_SKCIPHER
        select ASYNC_XOR
        help
          This device-mapper target emulates a block device that has
          additional per-sector tags that can be used for storing
          integrity information.

          This integrity target is used with the dm-crypt target to
          provide authenticated disk encryption or it can be used
          standalone.

          To compile this code as a module, choose M here: the module will
          be called dm-integrity.

config DM_ZONED
        tristate "Drive-managed zoned block device target support"
        depends on BLK_DEV_DM
        depends on BLK_DEV_ZONED
        select CRC32
        help
          This device-mapper target takes a host-managed or host-aware zoned
          block device and exposes most of its capacity as a regular block
          device (drive-managed zoned block device) without any write
          constraints. This is mainly intended for use with file systems that
          do not natively support zoned block devices but still want to
          benefit from the increased capacity offered by SMR disks. Other uses
          by applications using raw block devices (for example object stores)
          are also possible.

          To compile this code as a module, choose M here: the module will
          be called dm-zoned.

          If unsure, say N.

endif # MD