linux/net/netfilter/ipvs/Kconfig
<<
>>
Prefs
   1#
   2# IP Virtual Server configuration
   3#
   4menuconfig IP_VS
   5        tristate "IP virtual server support"
   6        depends on NET && INET && NETFILTER
   7        depends on (NF_CONNTRACK || NF_CONNTRACK=n)
   8        ---help---
   9          IP Virtual Server support will let you build a high-performance
  10          virtual server based on cluster of two or more real servers. This
  11          option must be enabled for at least one of the clustered computers
  12          that will take care of intercepting incoming connections to a
  13          single IP address and scheduling them to real servers.
  14
  15          Three request dispatching techniques are implemented, they are
  16          virtual server via NAT, virtual server via tunneling and virtual
  17          server via direct routing. The several scheduling algorithms can
  18          be used to choose which server the connection is directed to,
  19          thus load balancing can be achieved among the servers.  For more
  20          information and its administration program, please visit the
  21          following URL: <http://www.linuxvirtualserver.org/>.
  22
  23          If you want to compile it in kernel, say Y. To compile it as a
  24          module, choose M here. If unsure, say N.
  25
  26if IP_VS
  27
  28config  IP_VS_IPV6
  29        bool "IPv6 support for IPVS"
  30        depends on IPV6 = y || IP_VS = IPV6
  31        select IP6_NF_IPTABLES
  32        ---help---
  33          Add IPv6 support to IPVS.
  34
  35          Say Y if unsure.
  36
  37config  IP_VS_DEBUG
  38        bool "IP virtual server debugging"
  39        ---help---
  40          Say Y here if you want to get additional messages useful in
  41          debugging the IP virtual server code. You can change the debug
  42          level in /proc/sys/net/ipv4/vs/debug_level
  43
  44config  IP_VS_TAB_BITS
  45        int "IPVS connection table size (the Nth power of 2)"
  46        range 8 20
  47        default 12
  48        ---help---
  49          The IPVS connection hash table uses the chaining scheme to handle
  50          hash collisions. Using a big IPVS connection hash table will greatly
  51          reduce conflicts when there are hundreds of thousands of connections
  52          in the hash table.
  53
  54          Note the table size must be power of 2. The table size will be the
  55          value of 2 to the your input number power. The number to choose is
  56          from 8 to 20, the default number is 12, which means the table size
  57          is 4096. Don't input the number too small, otherwise you will lose
  58          performance on it. You can adapt the table size yourself, according
  59          to your virtual server application. It is good to set the table size
  60          not far less than the number of connections per second multiplying
  61          average lasting time of connection in the table.  For example, your
  62          virtual server gets 200 connections per second, the connection lasts
  63          for 200 seconds in average in the connection table, the table size
  64          should be not far less than 200x200, it is good to set the table
  65          size 32768 (2**15).
  66
  67          Another note that each connection occupies 128 bytes effectively and
  68          each hash entry uses 8 bytes, so you can estimate how much memory is
  69          needed for your box.
  70
  71          You can overwrite this number setting conn_tab_bits module parameter
  72          or by appending ip_vs.conn_tab_bits=? to the kernel command line
  73          if IP VS was compiled built-in.
  74
  75comment "IPVS transport protocol load balancing support"
  76
  77config  IP_VS_PROTO_TCP
  78        bool "TCP load balancing support"
  79        ---help---
  80          This option enables support for load balancing TCP transport
  81          protocol. Say Y if unsure.
  82
  83config  IP_VS_PROTO_UDP
  84        bool "UDP load balancing support"
  85        ---help---
  86          This option enables support for load balancing UDP transport
  87          protocol. Say Y if unsure.
  88
  89config  IP_VS_PROTO_AH_ESP
  90        def_bool IP_VS_PROTO_ESP || IP_VS_PROTO_AH
  91
  92config  IP_VS_PROTO_ESP
  93        bool "ESP load balancing support"
  94        ---help---
  95          This option enables support for load balancing ESP (Encapsulation
  96          Security Payload) transport protocol. Say Y if unsure.
  97
  98config  IP_VS_PROTO_AH
  99        bool "AH load balancing support"
 100        ---help---
 101          This option enables support for load balancing AH (Authentication
 102          Header) transport protocol. Say Y if unsure.
 103
 104config  IP_VS_PROTO_SCTP
 105        bool "SCTP load balancing support"
 106        select LIBCRC32C
 107        ---help---
 108          This option enables support for load balancing SCTP transport
 109          protocol. Say Y if unsure.
 110
 111comment "IPVS scheduler"
 112
 113config  IP_VS_RR
 114        tristate "round-robin scheduling"
 115        ---help---
 116          The robin-robin scheduling algorithm simply directs network
 117          connections to different real servers in a round-robin manner.
 118
 119          If you want to compile it in kernel, say Y. To compile it as a
 120          module, choose M here. If unsure, say N.
 121 
 122config  IP_VS_WRR
 123        tristate "weighted round-robin scheduling"
 124        ---help---
 125          The weighted robin-robin scheduling algorithm directs network
 126          connections to different real servers based on server weights
 127          in a round-robin manner. Servers with higher weights receive
 128          new connections first than those with less weights, and servers
 129          with higher weights get more connections than those with less
 130          weights and servers with equal weights get equal connections.
 131
 132          If you want to compile it in kernel, say Y. To compile it as a
 133          module, choose M here. If unsure, say N.
 134
 135config  IP_VS_LC
 136        tristate "least-connection scheduling"
 137        ---help---
 138          The least-connection scheduling algorithm directs network
 139          connections to the server with the least number of active 
 140          connections.
 141
 142          If you want to compile it in kernel, say Y. To compile it as a
 143          module, choose M here. If unsure, say N.
 144
 145config  IP_VS_WLC
 146        tristate "weighted least-connection scheduling"
 147        ---help---
 148          The weighted least-connection scheduling algorithm directs network
 149          connections to the server with the least active connections
 150          normalized by the server weight.
 151
 152          If you want to compile it in kernel, say Y. To compile it as a
 153          module, choose M here. If unsure, say N.
 154
 155config  IP_VS_FO
 156                tristate "weighted failover scheduling"
 157        ---help---
 158          The weighted failover scheduling algorithm directs network
 159          connections to the server with the highest weight that is
 160          currently available.
 161
 162          If you want to compile it in kernel, say Y. To compile it as a
 163          module, choose M here. If unsure, say N.
 164
 165config  IP_VS_OVF
 166        tristate "weighted overflow scheduling"
 167        ---help---
 168          The weighted overflow scheduling algorithm directs network
 169          connections to the server with the highest weight that is
 170          currently available and overflows to the next when active
 171          connections exceed the node's weight.
 172
 173          If you want to compile it in kernel, say Y. To compile it as a
 174          module, choose M here. If unsure, say N.
 175
 176config  IP_VS_LBLC
 177        tristate "locality-based least-connection scheduling"
 178        ---help---
 179          The locality-based least-connection scheduling algorithm is for
 180          destination IP load balancing. It is usually used in cache cluster.
 181          This algorithm usually directs packet destined for an IP address to
 182          its server if the server is alive and under load. If the server is
 183          overloaded (its active connection numbers is larger than its weight)
 184          and there is a server in its half load, then allocate the weighted
 185          least-connection server to this IP address.
 186
 187          If you want to compile it in kernel, say Y. To compile it as a
 188          module, choose M here. If unsure, say N.
 189
 190config  IP_VS_LBLCR
 191        tristate "locality-based least-connection with replication scheduling"
 192        ---help---
 193          The locality-based least-connection with replication scheduling
 194          algorithm is also for destination IP load balancing. It is 
 195          usually used in cache cluster. It differs from the LBLC scheduling
 196          as follows: the load balancer maintains mappings from a target
 197          to a set of server nodes that can serve the target. Requests for
 198          a target are assigned to the least-connection node in the target's
 199          server set. If all the node in the server set are over loaded,
 200          it picks up a least-connection node in the cluster and adds it
 201          in the sever set for the target. If the server set has not been
 202          modified for the specified time, the most loaded node is removed
 203          from the server set, in order to avoid high degree of replication.
 204
 205          If you want to compile it in kernel, say Y. To compile it as a
 206          module, choose M here. If unsure, say N.
 207
 208config  IP_VS_DH
 209        tristate "destination hashing scheduling"
 210        ---help---
 211          The destination hashing scheduling algorithm assigns network
 212          connections to the servers through looking up a statically assigned
 213          hash table by their destination IP addresses.
 214
 215          If you want to compile it in kernel, say Y. To compile it as a
 216          module, choose M here. If unsure, say N.
 217
 218config  IP_VS_SH
 219        tristate "source hashing scheduling"
 220        ---help---
 221          The source hashing scheduling algorithm assigns network
 222          connections to the servers through looking up a statically assigned
 223          hash table by their source IP addresses.
 224
 225          If you want to compile it in kernel, say Y. To compile it as a
 226          module, choose M here. If unsure, say N.
 227
 228config  IP_VS_SED
 229        tristate "shortest expected delay scheduling"
 230        ---help---
 231          The shortest expected delay scheduling algorithm assigns network
 232          connections to the server with the shortest expected delay. The 
 233          expected delay that the job will experience is (Ci + 1) / Ui if 
 234          sent to the ith server, in which Ci is the number of connections
 235          on the ith server and Ui is the fixed service rate (weight)
 236          of the ith server.
 237
 238          If you want to compile it in kernel, say Y. To compile it as a
 239          module, choose M here. If unsure, say N.
 240
 241config  IP_VS_NQ
 242        tristate "never queue scheduling"
 243        ---help---
 244          The never queue scheduling algorithm adopts a two-speed model.
 245          When there is an idle server available, the job will be sent to
 246          the idle server, instead of waiting for a fast one. When there
 247          is no idle server available, the job will be sent to the server
 248          that minimize its expected delay (The Shortest Expected Delay
 249          scheduling algorithm).
 250
 251          If you want to compile it in kernel, say Y. To compile it as a
 252          module, choose M here. If unsure, say N.
 253
 254comment 'IPVS SH scheduler'
 255
 256config IP_VS_SH_TAB_BITS
 257        int "IPVS source hashing table size (the Nth power of 2)"
 258        range 4 20
 259        default 8
 260        ---help---
 261          The source hashing scheduler maps source IPs to destinations
 262          stored in a hash table. This table is tiled by each destination
 263          until all slots in the table are filled. When using weights to
 264          allow destinations to receive more connections, the table is
 265          tiled an amount proportional to the weights specified. The table
 266          needs to be large enough to effectively fit all the destinations
 267          multiplied by their respective weights.
 268
 269comment 'IPVS application helper'
 270
 271config  IP_VS_FTP
 272        tristate "FTP protocol helper"
 273        depends on IP_VS_PROTO_TCP && NF_CONNTRACK && NF_NAT && \
 274                NF_CONNTRACK_FTP
 275        select IP_VS_NFCT
 276        ---help---
 277          FTP is a protocol that transfers IP address and/or port number in
 278          the payload. In the virtual server via Network Address Translation,
 279          the IP address and port number of real servers cannot be sent to
 280          clients in ftp connections directly, so FTP protocol helper is
 281          required for tracking the connection and mangling it back to that of
 282          virtual service.
 283
 284          If you want to compile it in kernel, say Y. To compile it as a
 285          module, choose M here. If unsure, say N.
 286
 287config  IP_VS_NFCT
 288        bool "Netfilter connection tracking"
 289        depends on NF_CONNTRACK
 290        ---help---
 291          The Netfilter connection tracking support allows the IPVS
 292          connection state to be exported to the Netfilter framework
 293          for filtering purposes.
 294
 295config  IP_VS_PE_SIP
 296        tristate "SIP persistence engine"
 297        depends on IP_VS_PROTO_UDP
 298        depends on NF_CONNTRACK_SIP
 299        ---help---
 300          Allow persistence based on the SIP Call-ID
 301
 302endif # IP_VS
 303