- Apr 30, 2018
- Paweł Jabłoński authored
  This patch fixes the problem where each MTU change turned TSO, GSO, and GRO back on from the off state. Now, when TSO, GSO, or GRO is turned off, an MTU change no longer turns it back on.
  Signed-off-by: Paweł Jabłoński <pawel.jablonski@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Jacob Keller authored
  Recent versions of the Linux kernel now warn about incorrect parameter definitions for function comments. Fix up several function comments to correctly reflect the current function arguments. This cleans up the warnings and helps ensure our documentation is accurate.
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Jia-Ju Bai authored
  i40evf_add_vlan() is never called in atomic context. It is only called by i40evf_vlan_rx_add_vid(), which is only set as ".ndo_vlan_rx_add_vid" in struct net_device_ops, and ".ndo_vlan_rx_add_vid" is not called in atomic context. Despite this, i40evf_add_vlan() calls kzalloc() with GFP_ATOMIC, which does not sleep for allocation. GFP_ATOMIC is not necessary here and can be replaced with GFP_KERNEL, which can sleep and improves the chance of a successful allocation. This was found by DCNS, a static analysis tool I wrote, and I also checked it manually.
  Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
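A minimal sketch of this class of change, using a simplified stand-in struct rather than the driver's actual filter type:

```c
#include <linux/slab.h>
#include <linux/types.h>

/* Simplified stand-in for the driver's VLAN filter entry. */
struct vlan_filter {
	u16 vlan;
};

/* Process context: GFP_KERNEL may sleep, which gives the allocation a
 * better chance of succeeding than GFP_ATOMIC (which must not sleep). */
static struct vlan_filter *add_vlan_filter(u16 vid)
{
	struct vlan_filter *f = kzalloc(sizeof(*f), GFP_KERNEL); /* was GFP_ATOMIC */

	if (!f)
		return NULL;
	f->vlan = vid;
	return f;
}
```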
- Apr 27, 2018
- Jeff Kirsher authored
  After many years of having a ~30 line copyright and license header in our source files, we are finally able to reduce that to one line with the advent of the SPDX identifier. Also caught a few files missing the SPDX license identifier, so fixed them up.
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Acked-by: Shannon Nelson <shannon.nelson@oracle.com>
  Acked-by: Richard Cochran <richardcochran@gmail.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
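For reference, the one-line form that replaces the long boilerplate header looks like this at the top of a C source file (the exact license tag and copyright line vary per file):

```c
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
```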
- Mar 23, 2018
- Jeff Kirsher authored
  Add the SPDX identifiers to all the Intel wired LAN driver files, as outlined in Documentation/process/license-rules.rst.
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Tested-by: Aaron Brown <aaron.f.brown@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
- Mar 19, 2018
- Alexander Duyck authored
  While doing some code review I noticed that we can get into a state where we exit with the "IN_CRITICAL_TASK" bit set while notifying the PF of flower filters. This patch addresses that, and also tweaks the ordering of the while loop waiting on the bit so that we don't wait an extra period after we have failed for the last time.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Mar 14, 2018
- Gustavo A. R. Silva authored
  It seems this is a copy-paste error and that the proper variable to use in this particular case is _src_ instead of _dst_.
  Addresses-Coverity-ID: 1465282 ("Copy-paste error")
  Fixes: 0075fa0f ("i40evf: Add support to apply cloud filters")
  Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
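The general shape of this kind of Coverity-flagged fix, with hypothetical field names standing in for the actual cloud filter code:

```c
/* before: source address erroneously copied from the 'dst' key */
memcpy(&filter->src_ip6, key->dst.s6_addr32, sizeof(filter->src_ip6));

/* after: source address copied from 'src', as intended */
memcpy(&filter->src_ip6, key->src.s6_addr32, sizeof(filter->src_ip6));
```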
- Feb 26, 2018
- Colin Ian King authored
  The checks to see if key->dst.s6_addr and key->src.s6_addr are null pointers are redundant because these are constant-size arrays, so the checks always return true. Fix this by removing the redundant checks. Also replace filter->f with vf, allowing wide lines to be condensed and some previously split lines to be rejoined.
  Detected by CoverityScan, CID#1465279 ("Array compared to 0")
  Signed-off-by: Colin Ian King <colin.king@canonical.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
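The pattern being removed, sketched with abridged surrounding code: s6_addr is a fixed-size array member of struct in6_addr, so the pointer it decays to can never be NULL and the test is always true.

```c
/* Always true -- the address of an embedded array member is never NULL: */
if (key->dst.s6_addr) {
	/* ... configure the IPv6 destination match ... */
}
/* The patch simply drops the test and keeps the body. */
```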
- Paweł Jabłoński authored
  Remove the locking of the adapter->mac_vlan_list_lock resource inside i40evf_add_filter(); the locking is moved up into the callers of i40evf_add_filter(). When called from i40evf_addr_sync(), i40evf_add_filter() was trying to take the lock a second time, and the double locking generated a kernel panic after bringing an interface up.
  Fixes: 8946b563 ("i40evf: use __dev_[um]c_sync routines in .set_rx_mode")
  Signed-off-by: Paweł Jabłoński <pawel.jablonski@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
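A sketch of the resulting locking discipline (function and field names from the commit, surrounding code abridged): the caller takes the lock exactly once, and i40evf_add_filter() no longer locks internally.

```c
spin_lock_bh(&adapter->mac_vlan_list_lock);
f = i40evf_add_filter(adapter, macaddr);	/* assumes caller holds the lock */
spin_unlock_bh(&adapter->mac_vlan_list_lock);
```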
- Feb 14, 2018
- Harshitha Ramamurthy authored
  This patch enables a tc filter to be applied as a cloud filter for the VF. It adds functions which parse the tc filter, extract the fields needed to configure the filter, and package them in a virtchnl message to be sent to the PF to apply.
  Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Harshitha Ramamurthy authored
  This patch adds support for configuring bandwidth for the traffic classes via the tc tool. The required information is passed to the PF, which uses it when setting up the traffic classes.
  Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Avinash Dayanand authored
  This patch allocates the number of queues requested by the user as part of the TC command when ADq is enabled on a VF. To stay consistent with the PF implementation of ADq, don't allow channels to be set via ethtool from a VF when ADq is already enabled. This means users will not be able to change the number of queues/channels via ethtool for a VF while ADq is on; to use set-channels, they must disable ADq first and then try setting the channels again. When ADq is enabled on a VF, the VF goes through a reset during which VSIs and queues are reconfigured. Meanwhile, if we receive a link-status message from the PF before the queues are reconfigured, just ignore this link-up message.
  Signed-off-by: Avinash Dayanand <avinash.dayanand@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Harshitha Ramamurthy authored
  This patch introduces the ndo_setup_tc callback in the VF driver, adding a wrapper function to make room for the upcoming cloud filter patches, which add calls to different functions from setup_tc. First, we add support for ADq capability exchange between the PF and VF. Next, we add support to take in the mqprio configuration and configure queues per the traffic classes, rate limits, and priorities specified by the user, by passing the channel config to the PF driver through a virtchnl message. The flags and bits added track whether ADq is enabled, set the maximum number of traffic classes to 4, and provide the ability to negotiate the capability with the PF.
  Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
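A minimal sketch of the wrapper shape described above; i40evf_setup_tc_mqprio() is a placeholder name for the mqprio handler, and the later cloud filter patches would hook additional tc_setup_type cases in here:

```c
#include <linux/netdevice.h>
#include <net/pkt_cls.h>

static int i40evf_setup_tc(struct net_device *netdev,
			   enum tc_setup_type type, void *type_data)
{
	switch (type) {
	case TC_SETUP_QDISC_MQPRIO:
		/* type_data is a struct tc_mqprio_qopt_offload here */
		return i40evf_setup_tc_mqprio(netdev, type_data);
	default:
		return -EOPNOTSUPP;
	}
}
```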
- Avinash Dayanand authored
  One of the previous patches fixed the link-up issue by ignoring the message if i40evf is not in the __I40EVF_RUNNING state. However, this doesn't fix the race condition when queues are disabled, especially with ADq on a VF. Hence, check that all queues are enabled before starting all queues.
  Signed-off-by: Avinash Dayanand <avinash.dayanand@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Feb 13, 2018
- Jacob Keller authored
  Similar to changes done to the PF driver in commit 6622f5cd ("i40e: make use of __dev_uc_sync and __dev_mc_sync"), replace our home-rolled method for updating the internal status of MAC filters with __dev_uc_sync and __dev_mc_sync. These new functions use internal state within the netdev struct to efficiently break the question of "which filters in this list need to be added or removed" into singular "add this filter" and "delete this filter" requests. This vastly improves our handling of .set_rx_mode, especially with a large number of MAC filters being added to the device, and even results in a simpler .set_rx_mode handler. Under some circumstances, such as when attached to a bridge, we may receive a request to delete our own permanent address. Prevent deletion of this address during i40evf_addr_unsync so that we don't accidentally stop receiving traffic.
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
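A sketch of the handler structure this enables; i40evf_addr_sync/i40evf_addr_unsync are the callback names mentioned above, with their bodies reduced to comments:

```c
#include <linux/netdevice.h>

static int i40evf_addr_sync(struct net_device *netdev, const u8 *addr)
{
	/* add (or mark for addition) a single MAC filter for 'addr' */
	return 0;
}

static int i40evf_addr_unsync(struct net_device *netdev, const u8 *addr)
{
	/* refuse to delete our own permanent address so we keep
	 * receiving traffic; otherwise mark the filter for removal */
	return 0;
}

static void i40evf_set_rx_mode(struct net_device *netdev)
{
	/* the core walks its address lists and calls the callbacks
	 * once per added or removed address */
	__dev_uc_sync(netdev, i40evf_addr_sync, i40evf_addr_unsync);
	__dev_mc_sync(netdev, i40evf_addr_sync, i40evf_addr_unsync);
}
```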
- Harshitha Ramamurthy authored
  When iterating through the linked list of VLAN filters, make the iterator the same type as that of the linked list.
  Signed-off-by: Harshitha Ramamurthy <harshitha.ramamurthy@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
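In kernel list terms, the fix amounts to declaring the cursor with the element type of the list being walked (struct and field names per the driver's filter lists):

```c
struct i40evf_vlan_filter *f;	/* same type as the list's entries */

list_for_each_entry(f, &adapter->vlan_filter_list, list) {
	/* ... inspect f->vlan, f->add, f->remove ... */
}
```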
- Feb 12, 2018
- Alexander Duyck authored
  This patch replaces the existing mechanism for determining the correct value to program for adaptive ITR with yet another new and more complicated approach. The basic idea, from a 30K-foot view, is that this new approach pushes the Rx interrupt moderation up so that by default it starts in low latency and is gradually pushed up into a higher latency setup as long as doing so increases the number of packets processed; if the number of packets drops to 4 or fewer per interrupt, we reset and just base our ITR on the size of the packets being received. For Tx we leave it floating at a high interrupt delay and do not pull it down unless we start processing more than 112 packets per interrupt; if we start exceeding that, we cut our interrupt rates in half until we are back below 112. The side effect of these patches is that we will be processing more packets per interrupt. This is both a good and a bad thing: it means we will not be blocking processing in the case of things like pktgen and XDP, but we will also be consuming a bit more CPU in the cases of things such as network throughput tests using netperf. One delta from the ixgbe version of the changes is that I have made the interrupt moderation a bit more aggressive when we are in bulk mode by moving our "goldilocks zone" up from 48–96 to 56–112 packets per interrupt. The main motivation behind moving this is that we need to update less frequently, and we have more fine-grained control due to the separate Tx and Rx ITR times.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Alexander Duyck authored
  This patch is mostly prep-work for replacing the current approach to programming the dynamic, aka adaptive, ITR. Specifically, we split the Tx and Rx ITR each into two separate values: current_itr represents the current value of the register, and target_itr represents the desired value. The general plan is that this allows us to defer the update of the ITR value under certain circumstances. For now we work with what we have, but in the future I hope to change the behavior so that we always update only one ITR at a time, using some simple logic to determine which ITR requires an update.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
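Sketched as data, the split looks roughly like this; the member names come from the commit message, but the containing struct is abridged and its name is an assumption:

```c
struct i40e_ring_container {
	/* ... rings, packet/byte counters, etc. ... */
	u16 target_itr;		/* desired register value */
	u16 current_itr;	/* value currently programmed */
};
```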
- Alexander Duyck authored
  This patch is a further clean-up related to the changeover to using q_vector->reg_idx when accessing the ITR registers. The code had several other spots where we were computing the register offset manually, and this resulted in errors in a few of them. Specifically, in the i40evf functions for mapping queues to vectors, it appears we had an off-by-one error, since (v_idx - 1) for the first q_vector, with an index of 0, would result in us returning -1 if I am not mistaken.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Alexander Duyck authored
  The rings are already split out into Tx and Rx rings, so it doesn't make sense for any single ring to store both a Tx and an Rx itr_setting value. Since that is the case, drop the pair in favor of storing just a single ITR value per ring.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Jan 26, 2018
- Alexander Duyck authored
  The drivers for i40e and i40evf had a reg_idx value stored in the q_vector that was going completely unused. I can only assume it was copied over from ixgbe and nobody knew how to use it. I'm going to make use of the value to avoid having to compute the vector, and thus the register index, in multiple paths throughout the drivers.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Paweł Jabłoński authored
  This patch adds back the capability to turn off offloads when the VF has a VLAN set. Commit 0a3b4f70 ("i40evf: enable support for VF VLAN tag stripping control") added the i40evf_set_features function and changed the turn-off flow for offloads. This patch restores that capability by moving the VF VLAN check to the next statement.
  Signed-off-by: Paweł Jabłoński <pawel.jablonski@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Jan 23, 2018
- Sudheer Mogilappagari authored
  In VFs, there is a known issue which can cause writebacks not to occur when interrupts are disabled and there are fewer than 4 descriptors, resulting in a Tx timeout. A timeout can also occur due to a lost interrupt. The current implementation for detecting and recovering from hung queues in the PF is problematic because it actively encourages lost interrupts: by triggering a SW interrupt, interrupts are forced on. If we are already in napi_poll and an interrupt fires, napi_poll will not be rescheduled and the interrupt is effectively lost, thereby potentially *causing* hung queues. This patch checks whether packets are being processed between watchdog cycles, determines which queue is potentially hung, and triggers a SW interrupt only for that particular queue.
  Signed-off-by: Sudheer Mogilappagari <sudheer.mogilappagari@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Avinash Dayanand authored
  When a host disables and enables a PF device, all the associated VFs are removed and added back in. This also generates a PFR, which in turn resets all the connected VFs. This behaviour is different from that of a Linux guest on a Linux host, so we end up in a situation where a PFR and device removal happen at the same time, and the watchdog, having no clue about this, schedules a reset task anyway. This patch adds code to signal the reset task that the device is currently being removed.
  Signed-off-by: Avinash Dayanand <avinash.dayanand@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Sudheer Mogilappagari authored
  flush_scheduled_work() blocks until completion of all scheduled work items in the global workqueue, which can cause a deadlock in some cases. i40evf_remove() already cleans up the necessary work items with cancel_delayed_work_sync and cancel_work_sync, so this fix removes the flush_scheduled_work() call from i40evf_remove().
  Signed-off-by: Sudheer Mogilappagari <sudheer.mogilappagari@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
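The shape of the resulting teardown, with work-item names that are plausible for this driver but illustrative here: each of the driver's own items is cancelled synchronously, and the global flush is gone.

```c
static void i40evf_remove_work_cleanup(struct i40evf_adapter *adapter)
{
	cancel_delayed_work_sync(&adapter->init_task);
	cancel_work_sync(&adapter->reset_task);
	cancel_work_sync(&adapter->adminq_task);
	/* flush_scheduled_work();  -- removed: would block on unrelated
	 * items in the global workqueue and can deadlock */
}
```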
- Jan 10, 2018
- Alexander Duyck authored
  Having the interrupts firing while we are polling causes extra overhead and isn't needed for most systems out there. If an interrupt is lost, experiencing a 2s latency spike before recovering is still not acceptable, and the strobing merely masks the issue. We are better off identifying systems that lose interrupts and enabling workarounds for those systems instead. To that end, I am dropping the code that was strobing the interrupts, as there is a narrow window where having them enabled can actually cause race issues anyway, in which a few stray packets might get missed if the interrupt is re-enabled and fires before we call napi_complete.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Alexander Duyck authored
  We should not be clearing the pending bit array for each vector manually. The documentation for the hardware states that when in MSI-X mode the pending bit array will be cleared automatically. Clearing it ourselves just results in multiple opportunities for us to drop an interrupt.
  Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Alice Michael authored
  Bump the i40e driver from 2.1.14 to 2.3.2. Bump the i40evf driver from 3.0.1 to 3.2.2.
  Signed-off-by: Alice Michael <alice.michael@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Jacob Keller authored
  If i40evf_open() is called quickly at the same time as a reset occurs (such as via ethtool), it is possible for the device to attempt to open while a reset is in progress. This occurs because the driver was not holding the critical task bit lock during i40evf_open, nor around the call to i40evf_up_complete() in i40evf_reset_task(). We didn't hold the lock previously because calls to i40evf_down() would take the bit lock directly, and this would have caused a deadlock. To avoid this, we'll move the bit lock handling out of i40evf_down() and into the callers of this function. Additionally, we'll now hold the bit lock over the entire set of steps when going up or down, to ensure that we remain consistent. Ultimately this causes us to serialize the transitions between down and up properly, and avoid changing status while we're resetting.
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Jacob Keller authored
  Although not strictly necessary, it is customary to release locks in the reverse of the order in which they were acquired. This helps preserve lock ordering during future refactors, which can help avoid potential deadlock situations.
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
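The convention in question, in its simplest form:

```c
mutex_lock(&lock_a);
mutex_lock(&lock_b);
/* ... critical section ... */
mutex_unlock(&lock_b);	/* release in reverse order of acquisition */
mutex_unlock(&lock_a);
```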
- Jacob Keller authored
  Stop overloading the __I40EVF_IN_CRITICAL_TASK bit lock to protect the mac_filter_list and vlan_filter_list. Instead, implement a spinlock to protect these two lists, similar to how we protect the hash in the i40e PF code. Ensure that every place where we access the lists uses the spinlock to ensure consistency, and stop holding the critical section around blocks of code which only need access to the macvlan filter lists. This refactor helps simplify the locking behavior, and is necessary because a future refactor to __I40EVF_IN_CRITICAL_TASK would otherwise cause a deadlock.
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Jacob Keller authored
  In i40evf_reset_task we use netif_running() to determine whether or not the device is currently up, which lets us properly free queue memory and shut things down before we request the hardware reset. It turns out that netif_running() can return true before the device is fully up, as the kernel core code sets __LINK_STATE_START prior to calling .ndo_open. Since we're not holding the rtnl_lock(), it's possible that the driver's i40evf_open handler function is being called while we're resetting. We can't simply hold the rtnl_lock() while checking netif_running(), as this could deadlock with the i40evf_open() function. Additionally, we can't avoid the deadlock by holding the rtnl_lock() over the whole reset path, as this essentially serializes all resets and can cause massive delays if we have multiple VFs on a system. Instead, let's just check our own internal __I40EVF_RUNNING state field. This allows us to ensure that the state is correct, as it is only set after we've finished bringing the device up. Without this change we might free data structures for device queues and other memory before they've been fully allocated.
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
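A sketch of the check this moves to; the state name is from the commit message, and the surrounding reset logic is elided:

```c
/* netif_running() can already be true while .ndo_open is still in
 * flight; the driver's own state becomes __I40EVF_RUNNING only once
 * open has fully completed, so it is the safe condition to test. */
if (adapter->state == __I40EVF_RUNNING) {
	/* free queue resources and stop the stack, then request reset */
}
```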
- Nov 22, 2017
- Alan Brady authored
  The current method for notifying clients of l2 parameters is broken: we fail to copy the new parameters to the client instance struct, we need to do the notification before the client 'open' function pointer gets called, and we should set the l2 parameters when first adding a client instance. This patch first introduces the i40evf_client_get_params function to prevent code duplication between the i40evf_client_add_instance and i40evf_notify_client_l2_params functions. It then fixes the notify-l2-params function to actually copy the parameters to the client instance struct, and does the same in the *_add_instance function. Lastly, this patch reorganizes the priority in which client tasks fire so that if the flag for notifying l2 params is set, it triggers before the open, because the client needs these new parameters as part of a client open task.
  Signed-off-by: Alan Brady <alan.brady@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Oct 18, 2017
- Kees Cook authored
  In preparation for unconditionally passing the struct timer_list pointer to all timer callbacks, switch to using the new timer_setup() and from_timer() to pass the timer pointer explicitly. This also switches a test of the .data field to .function, since .data will be going away.
  Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Cc: intel-wired-lan@lists.osuosl.org
  Cc: netdev@vger.kernel.org
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
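The conversion pattern, using the i40evf watchdog as a plausible example (the field and function names are assumptions for illustration):

```c
#include <linux/timer.h>

static void i40evf_watchdog_timer(struct timer_list *t)
{
	/* recover the container from the embedded timer field */
	struct i40evf_adapter *adapter = from_timer(adapter, t,
						    watchdog_timer);

	/* ... schedule the watchdog work, re-arm the timer ... */
}

/* at init time, replacing setup_timer(&t, fn, (unsigned long)adapter): */
timer_setup(&adapter->watchdog_timer, i40evf_watchdog_timer, 0);
```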
- Oct 09, 2017
- Jacob Keller authored
  The ITR register expects to be programmed in units of 2 microseconds. Because of this, all of the driver's I40E_ITR_* constants are in terms of this 2-microsecond register unit. Unfortunately, the rx_itr_default value is expected to be programmed in microseconds; effectively, the driver defaulted to an ITR value of half the expected value (in terms of minimum microseconds between interrupts). Fix this by calculating the default values using the ITR_REG_TO_USEC macro, which indicates that we're converting from register units into microseconds.
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
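Given 2-microsecond register units, the conversion and the corrected default have roughly this shape; the macro body is inferred from the description and the containing struct is illustrative:

```c
/* the register counts in units of 2 usecs, so usecs = reg_units * 2 */
#define ITR_REG_TO_USEC(itr_reg) ((itr_reg) << 1)

/* default now expressed in microseconds, not raw register units */
adapter->rx_itr_default = ITR_REG_TO_USEC(I40E_ITR_RX_DEF);
```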
- Alan Brady authored
  Due to the asynchronous nature in which mac filters are added and deleted, there exists a bug in which filters are erroneously removed if they are removed and then added again quickly. The events are as such:
  - the filter is marked for removal
  - the same filter is re-added before the watchdog that cleans up filters runs
  - we skip re-adding the filter because it is already in the list
  - the watchdog filter cleanup kicks off and the filter is removed
  So when we were re-adding the same filter, it didn't actually get added, because it already existed in the list but was marked for removal and had yet to actually be removed. This patch fixes the issue by making sure that when adding a filter, if we find it already existing in our list, it is no longer marked to be removed.
  Signed-off-by: Alan Brady <alan.brady@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
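A sketch of the add path after the fix; the lookup helper and the `remove` flag name are assumptions based on the description. Finding an existing entry now also cancels any pending deletion instead of returning early:

```c
f = i40evf_find_filter(adapter, macaddr);
if (f) {
	f->remove = false;	/* un-mark: it was queued for deletion */
	return f;
}
/* otherwise allocate and queue a brand-new filter as before */
```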
- Oct 06, 2017
- Jacob Keller authored
  A recent commit 809481484e5d ("i40e/i40evf: support for VF VLAN tag stripping control") added support for VFs to negotiate the control of VLAN tag stripping. This should have allowed VFs to disable the feature. Unfortunately, the flag was set only in netdev->features and not in netdev->hw_features. This ultimately causes the stack to assume that it cannot change the flag, so it was unchangeable and marked as [fixed] in the ethtool -k output. Fix this by setting the feature in hw_features first, just as we do in the PF code. This enables ethtool -K to disable the feature correctly and fully enables user control of VLAN tag stripping.
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
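The distinction at the heart of the fix: flags in netdev->features describe the current state, while only flags also present in netdev->hw_features are user-toggleable. A sketch:

```c
/* advertise the feature as changeable via 'ethtool -K ... rxvlan on|off' */
netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX;
/* and enable it by default */
netdev->features |= NETIF_F_HW_VLAN_CTAG_RX;
```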
- Jacob Keller authored
  Currently, when setting up the IRQ for a q_vector, we set an affinity hint based on the v_idx of that q_vector: a loop iterates over v_idx, an incremental value, and the cpumask is created from that value. This is a problem on systems with multiple logical CPUs per core (as in simultaneous multithreading (SMT) scenarios). If we disable some logical CPUs, by turning SMT off for example, we end up with a sparse cpu_online_mask, i.e., only the first CPU in each core is online, and incrementally filling in the q_vector cpumask might assign multiple offline CPUs to q_vectors. Example: on a system with 8 cores of 8 logical CPUs each (SMT == 8), there are 64 CPUs in total, but if SMT is disabled only the first CPU in each core remains online, so cpu_online_mask has only 8 bits set, in a sparse way. In the general case with SMT off, cpu_online_mask has only C bits set: 0, N, 2*N, ..., (C-1)*N, where C is the number of cores and N the number of logical CPUs per core. In our example, only bits 0, 8, 16, 24, 32, 40, 48, and 56 would be set. Instead, we should only assign hints for CPUs which are online. Even better, the kernel already provides a function, cpumask_local_spread(), which takes an index and returns a CPU, spreading the interrupts across local NUMA nodes first, and then remote ones if necessary. Since we generally have a 1:1 mapping between vectors and CPUs, there is no real advantage to spreading vectors to local CPUs first, so in order to avoid a mismatch of the default XPS hints, we pass -1 so that it spreads across all CPUs without regard to node locality. Note that we don't need to change q_vector->affinity_mask, as it is initialized to cpu_possible_mask until an actual affinity is set and then notified back to us.
  Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
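The gist of the change, sketched (the IRQ number variable is illustrative):

```c
/* pick a real online CPU for this vector; node == -1 means spread
 * across all CPUs without regard to NUMA locality */
int cpu = cpumask_local_spread(q_vector->v_idx, -1);

irq_set_affinity_hint(irq_num, get_cpu_mask(cpu));
```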
- Oct 02, 2017
- Mitch Williams authored
  This function cannot fail, so why is it returning a value? And why are we checking it? Why shouldn't we just make it void? Why is this commit message made up of only questions?
  Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
- Alan Brady authored
  Currently the VF gets a default number of allocated queues from HW on init, and it can choose to enable or disable those allocated queues. This change allows the VF to request more or fewer underlying allocated queues from the PF. First the VF negotiates the number of queues it wants; if the PF can support that, the VF asks for a reset. During reset the PF will reallocate the HW queues for the VF and will then remap the new queues.
  Signed-off-by: Alan Brady <alan.brady@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
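A sketch of the negotiation step using the virtchnl request-queues operation; the message-send helper name is illustrative:

```c
struct virtchnl_vf_res_request vfres;

vfres.num_queue_pairs = num_req_queues;
/* the PF answers with what it can support; on success the VF then
 * requests a reset so the new queues get allocated and remapped */
i40evf_send_pf_msg(adapter, VIRTCHNL_OP_REQUEST_QUEUES,
		   (u8 *)&vfres, sizeof(vfres));
```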