
		HOWTO for multiqueue network device support
		===========================================

Section 1: Base driver requirements for implementing multiqueue support
Section 2: Qdisc support for multiqueue devices
Section 3: Brief howto using PRIO and RR for multiqueue devices


Intro: Kernel support for multiqueue devices
---------------------------------------------------------

Kernel support for multiqueue devices is only an API that is presented to the
netdevice layer for base drivers to implement.  This feature is part of the
core networking stack, and all network devices will be running on the
multiqueue-aware stack.  If a base driver only has one queue, then these
changes are transparent to that driver.

Section 1: Base driver requirements for implementing multiqueue support
-----------------------------------------------------------------------

Base drivers are required to use the new alloc_etherdev_mq() or
alloc_netdev_mq() functions to allocate the subqueues for the device.  The
underlying kernel API will take care of the allocation and deallocation of
the subqueue memory, as well as netdev configuration of where the queues
exist in memory.
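
For instance, a driver with four Tx queues might allocate its netdev as in
the following minimal sketch (the private structure, error path, and queue
count are illustrative, not from a real driver):

	#define MY_NUM_TX_QUEUES 4	/* illustrative queue count */

	struct net_device *netdev;

	/* Allocate an Ethernet device with room for the driver's private
	 * data and MY_NUM_TX_QUEUES transmit subqueues.  The kernel sets
	 * up the subqueue memory; free_netdev() will release it. */
	netdev = alloc_etherdev_mq(sizeof(struct my_adapter),
				   MY_NUM_TX_QUEUES);
	if (!netdev)
		return -ENOMEM;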

The base driver will also need to manage the queues as it does the global
netdev->queue_lock today.  Therefore base drivers should use the
netif_{start|stop|wake}_subqueue() functions to manage each queue while the
device is still operational.  netdev->queue_lock is still used when the device
comes online or when it's completely shut down (unregister_netdev(), etc.).
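
As a sketch of what per-queue management looks like in practice (the ring
structure and fullness tests are illustrative, not a real driver's API):

	/* Transmit path: stop only this ring when it runs out of
	 * descriptors, leaving the device's other subqueues running. */
	if (my_ring_full(tx_ring))
		netif_stop_subqueue(netdev, tx_ring->queue_index);

	/* Tx cleanup path: wake the ring once descriptors have been
	 * reclaimed so the stack resumes queuing skbs to it. */
	if (my_ring_has_room(tx_ring))
		netif_wake_subqueue(netdev, tx_ring->queue_index);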

Finally, the base driver should indicate that it is a multiqueue device.  The
feature flag NETIF_F_MULTI_QUEUE should be added to the netdev->features
bitmap on device initialization.  Below is an example from e1000:

	#ifdef CONFIG_E1000_MQ
		if ( (adapter->hw.mac.type == e1000_82571) ||
		     (adapter->hw.mac.type == e1000_82572) ||
		     (adapter->hw.mac.type == e1000_80003es2lan))
			netdev->features |= NETIF_F_MULTI_QUEUE;
	#endif

Section 2: Qdisc support for multiqueue devices
-----------------------------------------------

Currently two qdiscs support multiqueue devices: a new round-robin qdisc,
sch_rr, and sch_prio.  The qdisc is responsible for classifying the skb's to
bands and queues, and will store the queue mapping into skb->queue_mapping.
Use this field in the base driver to determine which queue to send the skb
to.
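
In the driver's transmit routine, that might look like the following sketch
(the adapter structure, ring array, and helper function are illustrative):

	static int my_xmit(struct sk_buff *skb, struct net_device *netdev)
	{
		struct my_adapter *adapter = netdev_priv(netdev);
		/* The qdisc has already classified the skb; use its
		 * queue mapping to pick the matching hardware Tx ring. */
		struct my_tx_ring *tx_ring =
			&adapter->tx_ring[skb->queue_mapping];

		return my_queue_skb_on_ring(tx_ring, skb);
	}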

sch_rr has been added for hardware that doesn't want scheduling policies from
software, so it's a straight round-robin qdisc.  It uses the same syntax and
classification priomap that sch_prio uses, so it should be intuitive to
configure for people who've used sch_prio.
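
Since the syntax matches sch_prio's, loading sch_rr should look like the
following (assuming an iproute2 build that knows about the 'rr' qdisc, and a
device called eth0 as in Section 3):

# tc qdisc add dev eth0 root handle 1: rr bands 4 multiqueue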

The PRIO qdisc naturally plugs into a multiqueue device.  If PRIO has been
built with NET_SCH_PRIO_MQ, then upon load, it will make sure the number of
bands requested is equal to the number of queues on the hardware.  If they
are equal, it sets up a one-to-one mapping between the queues and bands.  If
they're not equal, it will not load the qdisc.  This is the same behavior
for RR.  Once the association is made, any skb that is classified will have
skb->queue_mapping set, which will allow the driver to properly queue skb's
to multiple queues.

Section 3: Brief howto using PRIO and RR for multiqueue devices
---------------------------------------------------------------

The userspace command 'tc', part of the iproute2 package, is used to configure
qdiscs.  To add the PRIO qdisc to your network device, assuming the device is
called eth0, run the following command:

# tc qdisc add dev eth0 root handle 1: prio bands 4 multiqueue

This will create 4 bands, 0 being highest priority, and associate those bands
to the queues on your NIC.  Assuming eth0 has 4 Tx queues, the band mapping
would look like:

	band 0 => queue 0
	band 1 => queue 1
	band 2 => queue 2
	band 3 => queue 3

Traffic will begin flowing through each queue if your TOS values are assigning
traffic across the various bands.  For example, ssh traffic will always try to
go out band 0 based on TOS -> Linux priority conversion (realtime traffic),
so it will be sent out queue 0.  ICMP traffic (pings) falls into the "normal"
traffic classification, which is band 1.  Therefore pings will be sent out
queue 1 on the NIC.
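
An application can steer its own traffic the same way by setting the TOS on
its socket.  A minimal sketch using standard setsockopt() usage ('sock' is
assumed to be an already-created IPv4 socket):

	#include <netinet/in.h>
	#include <netinet/ip.h>
	#include <sys/socket.h>

	/* Request low-delay service; under the default priomap the
	 * TOS -> Linux priority conversion places this traffic in
	 * band 0, and therefore queue 0. */
	int tos = IPTOS_LOWDELAY;
	setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));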

Note the use of the multiqueue keyword.  This is only in versions of iproute2
that support multiqueue networking devices; if this is omitted when loading
a qdisc onto a multiqueue device, the qdisc will load and operate the same
as if it were loaded onto a single-queue device (i.e. it sends all traffic to
queue 0).

An alternative approach to multiqueue band allocation is to use the
multiqueue option and specify 0 bands.  In this case, the qdisc will
allocate the number of bands to equal the number of queues that the device
reports, and bring the qdisc online.
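
In that case the command would look something like the following (same eth0
assumption as above):

# tc qdisc add dev eth0 root handle 1: prio bands 0 multiqueue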

The behavior of tc filters remains the same: they will override TOS-based
priority classification.
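
For example, a u32 filter such as the one below (the port and band are chosen
purely for illustration) would force matching traffic into band 0, i.e.
class 1:1, and therefore queue 0, regardless of TOS:

# tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
	match ip dport 80 0xffff flowid 1:1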

Author: Peter P. Waskiewicz Jr. <peter.p.waskiewicz.jr@intel.com>