Linux Base Driver for 10 Gigabit PCI Express Intel(R) Network Connection
=========================================================================

March 10, 2009

Contents
========

- In This Release
- Identifying Your Adapter
- Building and Installation
- Additional Configurations
- Support

In This Release
===============

This file describes the ixgbe Linux Base Driver for the 10 Gigabit PCI
Express Intel(R) Network Connection. This driver includes support for
Itanium(R) 2-based systems.

For questions related to hardware requirements, refer to the documentation
supplied with your 10 Gigabit adapter. All hardware requirements listed
apply to use with Linux.

The following features are available in this kernel:
 - Native VLANs
 - Channel Bonding (teaming)
 - SNMP
 - Generic Receive Offload
 - Data Center Bridging

Channel Bonding documentation can be found in the Linux kernel source:
/Documentation/networking/bonding.txt
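
A minimal bonding sketch follows (an illustration, not taken from this
document; the mode, miimon value, and interface names eth2/eth3 are
assumptions, and bonding.txt remains the authoritative reference):

  modprobe bonding mode=balance-rr miimon=100   # round-robin, link monitor
  ifconfig bond0 <IP_address> up                # bring up the bond device
  ifenslave bond0 eth2 eth3                     # enslave two ixgbe ports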

Ethtool, lspci, and ifconfig can be used to display device- and
driver-specific information.
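
For example (a quick sketch; ethx is a placeholder interface name):

  lspci | grep -i ethernet     # locate the adapter on the PCI bus
  ethtool -i ethx              # driver name, version, and firmware
  ifconfig ethx                # interface state and addresses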

Identifying Your Adapter
========================

This driver supports devices based on the 82598 and 82599 controllers.

For specific information on identifying which adapter you have, please
visit:

  http://support.intel.com/support/network/sb/CS-008441.htm

Building and Installation
=========================

Select m for "Intel(R) 10GbE PCI Express adapters support" located at:

  Location:
    -> Device Drivers
      -> Network device support (NETDEVICES [=y])
        -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])

1. Build and install the modules:

     make modules && make modules_install

2. Load the module:

     # modprobe ixgbe

   The insmod command can be used if the full path to the driver module
   is specified. For example:

     insmod /lib/modules/<KERNEL VERSION>/kernel/drivers/net/ixgbe/ixgbe.ko

   With 2.6 based kernels, also make sure that older ixgbe drivers are
   removed from the kernel before loading the new module:

     rmmod ixgbe; modprobe ixgbe

3. Assign an IP address to the interface by entering the following, where
   x is the interface number:

     ifconfig ethx <IP_address>

4. Verify that the interface works. Enter the following, where <IP_address>
   is the IP address of another machine on the same subnet as the interface
   being tested:

     ping <IP_address>
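
   As an additional sanity check (an assumption beyond the steps above;
   requires ethtool to be installed), the link state can also be queried
   directly:

     ethtool ethx | grep "Link detected"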

Additional Configurations
=========================

  Viewing Link Messages
  ---------------------
  Link messages will not be displayed to the console if the distribution
  is restricting system messages. In order to see network driver link
  messages on your console, set the console log level to eight by entering
  the following:

    dmesg -n 8

  NOTE: This setting is not saved across reboots.
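
  To make the higher log level persist across reboots (a sketch outside
  the scope of this document; the location for persistent settings varies
  by distribution), the console log level can also be set through the
  kernel.printk sysctl:

    sysctl -w kernel.printk="8 4 1 7"   # first field is the console level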

  Jumbo Frames
  ------------
  The driver supports Jumbo Frames for all adapters. Jumbo Frames support
  is enabled by changing the MTU to a value larger than the default of
  1500. Use the ifconfig command to increase the MTU size. For example:

    ifconfig ethx mtu 9000 up

  The maximum MTU setting for Jumbo Frames is 16110. This value coincides
  with the maximum Jumbo Frames size of 16128.
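
  To confirm that the new MTU took effect (a simple check, not part of
  the original text):

    ifconfig ethx | grep -i mtu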

  Generic Receive Offload, aka GRO
  --------------------------------
  The driver supports the in-kernel software implementation of GRO. GRO
  has shown that by coalescing Rx traffic into larger chunks of data, CPU
  utilization can be significantly reduced under large Rx load. GRO is an
  evolution of the previously-used LRO interface. GRO is able to coalesce
  protocols other than TCP, and it is also safe to use with configurations
  that are problematic for LRO, namely bridging and iSCSI.

  GRO is enabled by default in the driver. Future versions of ethtool will
  support disabling and re-enabling GRO on the fly.
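
  Once that ethtool support is available, toggling GRO will presumably
  look something like the following (a sketch based on ethtool's -k/-K
  offload interface, not a command documented here):

    ethtool -k ethx            # show offload settings, including GRO
    ethtool -K ethx gro off    # disable GRO
    ethtool -K ethx gro on     # re-enable GRO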

  Data Center Bridging, aka DCB
  -----------------------------
  DCB is a configurable Quality of Service implementation in hardware. It
  uses the VLAN priority tag (802.1p) to filter traffic, which means there
  are 8 different priorities into which traffic can be filtered. It also
  enables priority flow control, which can limit or eliminate the number
  of dropped packets during network stress. Bandwidth can be allocated to
  each of these priorities, and that allocation is enforced at the
  hardware level.

  To enable DCB support in ixgbe, you must enable the DCB netlink layer so
  that the userspace tools (see below) can communicate with the driver.
  This can be found in the kernel configuration here:

    -> Networking support
      -> Networking options
        -> Data Center Bridging support

  Once this is selected, DCB support must be selected for ixgbe. This can
  be found here:

    -> Device Drivers
      -> Network device support (NETDEVICES [=y])
        -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
          -> Intel(R) 10GbE PCI Express adapters support
            -> Data Center Bridging (DCB) Support

  After these options are selected, you must rebuild your kernel and your
  modules.

  In order to use DCB, userspace tools must be downloaded and installed.
  The dcbd tools can be found at:

    http://e1000.sf.net
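
  As a rough sketch of what driving DCB from userspace looks like (the
  dcbtool command and its sc/gc subcommands are taken from the dcbd
  package and may differ between versions; treat this as an assumption
  rather than documented usage):

    dcbtool sc ethx dcb on     # enable DCB on the interface
    dcbtool gc ethx dcb        # query the current DCB state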

  Ethtool
  -------
  The driver utilizes the ethtool interface for driver configuration and
  diagnostics, as well as for displaying statistical information. Ethtool
  version 3.0 or later is required for this functionality.

  The latest release of ethtool can be found at
  http://sourceforge.net/projects/gkernel.
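
  Typical invocations (ethx is a placeholder interface name, and the
  self-test assumes the driver implements the corresponding ethtool
  operation):

    ethtool ethx               # link settings: speed, duplex, link state
    ethtool -S ethx            # per-driver statistics
    ethtool -t ethx offline    # run the driver's diagnostic self-test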

  NAPI
  ----
  NAPI (Rx polling mode) is supported in the ixgbe driver and is enabled
  by default.

  See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.

Support
=======

For general information, go to the Intel support website at:

  http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

  http://e1000.sourceforge.net

If an issue is identified with the released source code on a supported
kernel with a supported adapter, email the specific information related
to the issue to e1000-devel@lists.sf.net.