For a list of known issues with this release, refer to Chapter 1 (Release Notes), subsection 1.8 of the 'Mellanox WinOF VPI Documentation' bundled with the driver download. You can download and install the latest OpenFabrics Enterprise Distribution (OFED) software package from the Mellanox web site under Products > Software > Ethernet Drivers > Linux SW/Drivers: scroll down to the Download wizard and click the Download tab.


Current BlueLink customer?

1. Download the BlueLink Remoter [HERE].

2. Need help? Open TeamViewer [HERE].

  1. Note: Mellanox VPI drivers support both 10GbE and InfiniBand functionality. They can be obtained from the HP ProLiant SL390s G7 server page on www.hp.com or from www.mellanox.com. If the device is configured for 10GbE operation only, use the HP NC-Series Mellanox 10GbE Driver, which is also available from the HP ProLiant SL390s G7 server page.
  2. Mellanox EN Driver for Linux: Mellanox offers a robust, full set of protocol software and drivers for Linux with the ConnectX® EN family cards, designed to provide high-performance support for Enhanced Ethernet with fabric consolidation over TCP/IP-based LAN applications.

NOTE: An up-to-date Windows 7, 8, or 10 PC and a high-speed internet connection are required.

How to submit a ticket:

1. Connect your J2534 Pass-Thru device to the computer and OBDII port.

2. Connect a battery maintainer (12.5v - 14v).


3. Turn the key to position TWO - dash illuminated.


4. Plug your computer into a power outlet and set your PC to 'never sleep'.


5. Click the 'BlueLink Remoter' icon on your desktop. If you have not installed it, you can download it [HERE]. Be sure to enter the e-mail address you PAID WITH.


6. When you have four green lights, you may then lock in your VIN by hitting the “Confirm” button to obtain your fifth green light.


7. Next, fill out the “Coding Request” form on the right-hand side with all your vehicle particulars and the module you would like serviced.


8. Once you are ready, TURN THE KEY ON and click 'Submit Request' in the bottom right corner of the software. You will be unable to submit your ticket unless all lights are green, the VIN is locked in and you have completed the form.

J2534 Pass-Thru Drivers

European/Asian ‘BlueLink’ Mongooses:

  • MongoosePro 2 ISO/CAN - [Driver Download]

  • MongoosePro ISO/CAN - [64b Driver][32b Driver]

  • Mongoose ISO/CAN - [64b Driver][32b Driver]

Other Manufacturer’s Devices:


  • Autel MaxiFlash Pro/Elite - [Driver in MaxiPC Suite]

  • DG Tech VSI-2534 - [Driver Download]

  • Blue Streak iFlash - [Driver Download]


All-Brands/CarDAQ-Based Devices:

  • CarDAQ-Plus 3 - [Driver Download]

  • CarDAQ-Plus 2 - [Driver Download]

  • CarDAQ-Plus - [64b Driver][32b Driver]

  • CarDAQ-M - [Driver Download]

  • Snap-On PassThru Pro IV - [Driver Download]

  • Snap-On PassThru Pro III - [Driver Download]

  • Snap-On PassThru Pro II - [Driver Download]

  • Launch JBox 3 - [Driver Download]

  • Launch JBox 2 - [Driver Download]

  • Launch JBox - [Driver Download]

  • AEZ Flasher 3 - [Driver Download]

  • AEZ Flasher 2 - [Driver Download]

  • AEZ Flasher - [Driver Download]


D-1 Form/NASTF Guide

Below is a link to the form required when attempting Audi/VW programming that involves a vehicle security module. All BlueLink customers stand to benefit from registering with NASTF, which ensures your ability to program Audi/VW vehicles as well as to purchase parts from dealerships.

Legacy download of the classic CAN Analyzer software. It’s BlueLink’s old-school draw tool.
NOTE: We no longer sell or support the physical components for these packages.


* RECOMMENDED * Mellanox InfiniBand and Ethernet Driver for Microsoft Windows Server 2019

By downloading, you agree to the terms and conditions of the Hewlett Packard Enterprise Software License Agreement.
Note: Some software requires a valid warranty, current Hewlett Packard Enterprise support contract, or a license fee.

Type: Driver - Network
Version: 5.50.52000 (14 May 2019)
Operating System(s): Microsoft Windows Server 2019
File name: MLNX_VPI_WinOF-5_50_52000_All_Win2019_x64.zip (50 MB)

Description

InfiniBand and Ethernet driver for use with Microsoft Windows Server 2019

Enhancements

New Features in Version 5.50.52000:

  • The package contains the following component versions:
    • Bus, eth, IPoIB, and MUX drivers: version 5.50.14688.
    • CIM Provider: version 5.50.14688.

Installation Instructions

To ensure the integrity of your download, HPE recommends verifying your results with this SHA-256 Checksum value:

6edbe263a499906f51d5be5d838b3ac1345348af759f01127b22caaf2fd4dad0  MLNX_VPI_WinOF-5_50_52000_All_Win2019_x64.zip
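
A minimal way to verify this on the node, sketched here in Python (an illustrative helper, not an HPE tool; only the file name and digest are taken from this page), is to hash the archive in chunks and compare against the published value:

    import hashlib

    # Published values from this download page.
    EXPECTED_SHA256 = "6edbe263a499906f51d5be5d838b3ac1345348af759f01127b22caaf2fd4dad0"
    ARCHIVE = "MLNX_VPI_WinOF-5_50_52000_All_Win2019_x64.zip"

    def sha256_of(path, chunk_size=1 << 20):
        """Hash the file in chunks so a large archive is not read into memory at once."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    actual = sha256_of(ARCHIVE)
    print("Checksum OK" if actual == EXPECTED_SHA256 else f"MISMATCH: got {actual}")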

Reboot Requirement:
A reboot is required after installation for the updates to take effect and to maintain hardware stability.

Installation:

Steps for installing Mellanox VPI driver version 5.50 (a scripted variant is sketched after this list):

  1. Download 'MLNX_VPI_WinOF-5_50_52000_All_Win2019_x64.zip' onto the node.
  2. Extract the contents of the zip file.
  3. Double-click 'MLNX_VPI_WinOF-5_50_52000_All_Win2019_x64.exe' and follow the installation wizard as described below:
  • Click 'Next' on the Welcome screen.
  • Accept the license agreement and click 'Next'.
  • Accept the default location 'C:\Program Files\Mellanox\MLNX_VPI' and click 'Next'.
  • Check the box 'Configure your system for maximum performance' and click 'Next'.
  • Select setup type 'Complete' and click 'Next'.
  • Click 'Install'.
  • Once the installation completes, click 'Finish'.
  • Restart the node.
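
For repeatable staging across nodes, steps 2 and 3 can be scripted. The Python sketch below is illustrative (the extraction folder name is an assumption, and the wizard screens above must still be clicked through):

    import subprocess
    import zipfile
    from pathlib import Path

    ARCHIVE = Path("MLNX_VPI_WinOF-5_50_52000_All_Win2019_x64.zip")
    EXTRACT_DIR = Path("MLNX_VPI_WinOF-5_50_52000")  # assumed staging folder

    # Step 2: extract the contents of the zip file.
    with zipfile.ZipFile(ARCHIVE) as zf:
        zf.extractall(EXTRACT_DIR)

    # Step 3: launch the installer; the interactive wizard then walks through
    # the Welcome, license, location, and setup-type screens listed above.
    installer = EXTRACT_DIR / "MLNX_VPI_WinOF-5_50_52000_All_Win2019_x64.exe"
    subprocess.run([str(installer)], check=True)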

Release Notes

End User License Agreements:
HPE Software License Agreement v1

Upgrade Requirement:
Recommended - HPE recommends users update to this version at their earliest convenience.

Important:

Beta Features:

The following features are currently at the beta level:

  • 'ibdump'
  • IPv6 support for IPoIB (IP over InfiniBand) in an SR-IOV (single root I/O virtualization) guest OS over KVM (Kernel-based Virtual Machine).
  • IPoIB teaming, supported only on a native machine (not in Hyper-V or SR-IOV).

Unsupported features:
The following functionality/features are unsupported in WinOF Rev 5.50:

  • Wake-on-LAN.
  • Software RSC (Receive Segment Coalescing) for tunneled traffic.
  • RDMA in guest OSes.
  • ND over a virtual switch attached to an IPoIB port.
  • Memory Translation Table (MTT) Optimization.

Certain software including drivers and documents may be available from Mellanox. If you select a URL that directs you to http://www.mellanox.com/, you are then leaving HPE.com. Please follow the instructions on http://www.mellanox.com/ to download Mellanox software or documentation. When downloading the Mellanox software or documentation, you may be subject to Mellanox terms and conditions, including licensing terms, if any, provided on its website or otherwise. HPE is not responsible for your use of any software or documents that you download from http://www.mellanox.com/, except that HPE may provide a limited warranty for Mellanox software in accordance with the terms and conditions of your purchase of the HPE product or solution.

  • For a list of known issues with this release, refer to Chapter 1 (Release Notes), subsection 1.8 of the 'Mellanox WinOF VPI Documentation' bundled with the driver download.
  • The performance tuning guide can be obtained from the following download page:
  • The topology guide can be obtained from the following download page:
  • The Mellanox InfiniBand configurator can be obtained from the following download page:

Notes:

The Mellanox WinOF VPI Documentation, containing the 'Release Notes' and 'User Manual', is bundled with the driver download.

Supported Devices and Features:

Supported Network Adapter cards:

  • ConnectX-3 Pro InfiniBand (SDR/DDR/QDR/FDR10/FDR) and ConnectX-3 Pro Ethernet (10, 40, 50 and 56 Gb/s).
  • ConnectX-3 InfiniBand (SDR/DDR/QDR/FDR10/FDR) and ConnectX-3 Ethernet (10, 40, 50 and 56 Gb/s).

Supported Firmware Versions:

NICs                                   Recommended Firmware Version   Additional Firmware Supported
ConnectX-3 Pro / ConnectX-3 Pro EN     2.42.5044                      2.42.5004
ConnectX-3 / ConnectX-3 EN             2.42.5044                      2.42.5004

Note: Firmware version 2.40.5000 must be upgraded to version 2.40.5032 or later before installing driver version 5.50 or later.
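
That firmware gate is straightforward to enforce in tooling. The following Python sketch is illustrative (only the version strings come from the note above); it compares dotted version strings as integer tuples before allowing a 5.50 driver install:

    # Minimum firmware required before installing driver 5.50 or later,
    # per the note above.
    MIN_FIRMWARE_FOR_5_50 = "2.40.5032"

    def parse_version(v):
        """Turn a dotted version string such as '2.40.5032' into a comparable tuple."""
        return tuple(int(part) for part in v.split("."))

    def firmware_ready(installed):
        """True if the installed firmware permits installing driver 5.50 or later."""
        return parse_version(installed) >= parse_version(MIN_FIRMWARE_FOR_5_50)

    assert not firmware_ready("2.40.5000")  # must be upgraded first
    assert firmware_ready("2.42.5044")      # recommended firmware is fine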

Fixes

Upgrade Requirement:
Recommended - HPE recommends users update to this version at their earliest convenience.

The following issues have been resolved in version 5.50.52000:

  • An issue that resulted in a BSOD (Blue Screen of Death) when the number of queues/CQs was changed in VMMQ (Virtual Machine Multi-Queue).
  • A system crash that occurred while updating the VPort in the error flow.
  • A race condition when sending RDMA send messages between the storage nodes and the compute nodes, which resulted in RDMA (Remote Direct Memory Access) connectivity loss.
  • An issue that prevented the NIC from enabling NVGRE (Network Virtualization using Generic Routing Encapsulation) or VXLAN (Virtual Extensible LAN) even though they were enabled by the user.
  • A BSOD that occasionally occurred on machines with more than 128 cores.
  • The systems_snapshot tool would hang when the ETL (Extract, Transform, Load) folder was not present.
  • A race condition between simultaneously querying the Perfmon counters (the 'Mellanox Adapter Traffic Counters' and the 'Mellanox Adapter QoS Counters') and deleting the vPort OID, which resulted in a BSOD.
  • A rare deadlock between vPort deletion and the CheckForHang routine.


Revision History

Version: 5.50.52000 (14 May 2019)

Upgrade Requirement:
Recommended - HPE recommends users update to this version at their earliest convenience.

The following issues have been resolved in version 5.50.52000:

  • An issue that resulted in a BSOD (Blue Screen of Death) when the number of queues/CQs was changed in VMMQ (Virtual Machine Multi-Queue).
  • A system crash that occurred while updating the VPort in the error flow.
  • A race condition when sending RDMA send messages between the storage nodes and the compute nodes, which resulted in RDMA (Remote Direct Memory Access) connectivity loss.
  • An issue that prevented the NIC from enabling NVGRE (Network Virtualization using Generic Routing Encapsulation) or VXLAN (Virtual Extensible LAN) even though they were enabled by the user.
  • A BSOD that occasionally occurred on machines with more than 128 cores.
  • The systems_snapshot tool would hang when the ETL (Extract, Transform, Load) folder was not present.
  • A race condition between simultaneously querying the Perfmon counters (the 'Mellanox Adapter Traffic Counters' and the 'Mellanox Adapter QoS Counters') and deleting the vPort OID, which resulted in a BSOD.
  • A rare deadlock between vPort deletion and the CheckForHang routine.

New Features in Version 5.50.52000:

  • The package contains the following component versions:
    • Bus, eth, IPoIB, and MUX drivers: version 5.50.14688.
    • CIM Provider: version 5.50.14688.

Version: 5.50.50010 (19 Dec 2018)

Upgrade Requirement:
Recommended - HPE recommends users update to this version at their earliest convenience.

The following issues have been resolved in version 5.50.50010:

  • The system would cease functioning when the vNIC was detached from the VM during heavy traffic in VMQ/VMMQ mode.
  • A RoCE (RDMA over Converged Ethernet) connection occasionally failed when the Universal/Local (U/L) bit in the MAC address was set to 1.
  • An issue that caused the mlxtool PDDR (Port Diagnostic Database Register) tool to provide inaccurate information for InfiniBand links.
  • Disabled the option to stop the uninstall process once driver uninstallation has started.
  • Networks with new Subnet Managers (OpenSM 4.7.0 and up) would drop malformed multicast-join packets issued by the driver. The driver now constructs the multicast join request correctly.
  • When mapping a DSCP (Differentiated Services Code Point) value to a priority, if the DSCP value is lower than the maximum priority (e.g., DSCP(4) -> Prio(0)), the priority value is now set to the DSCP value (illustrated in the sketch after this list).
  • The driver would not load due to a race condition between the resiliency flow and the FLR request when issuing an OID_SRIOV_RESET_VF request to reset a specified PCI Express (PCIe) Virtual Function (VF).
  • The driver would reset the adapter as a result of a false alarm indicating that a receive queue was not processing.
  • An issue that caused a black screen upon driver removal under extremely low memory conditions, when memory allocations started to fail.
  • A Memory Region (MR) was displayed as registered when it was not, preventing the user from accessing it. The incorrect status was a result of the ND function 'INDEndpoint' reporting an error status when it returned from the underlying functions. The fix ensures the user receives the correct error status in such a scenario.
  • The number of MSI-X vectors in a Virtual Function was limited to 8. This limit has been expanded to 128.
  • A BSOD (Blue Screen of Death) occurred on servers with more than 64 cores because Tx traffic did not honor the Tx affinity implied by the TSS when the number of potential RSS CPUs was greater than 64.
  • When the mlxtool dbg resources command was executed, the FS_RULE quota number was displayed instead of the 'Managed by PF' message.
  • The WinOF bus driver failed to load when the LogNumQp and LogNumRdmaRc registry settings were set to their maximum values.
  • The 'TX Ring Is Full Packets' perfmon counter did not function properly on IPoIB.
  • When installing the driver over the Microsoft Windows Server 2012 R2 inbox driver, the LogNumQP parameter remained in the registry, so the number of QPs was limited to 64K instead of 512K (the driver's default).
  • An issue that caused a system crash when the interface connected to a vSwitch was disabled and the operating system did not clean up all VMQs (Virtual Machine Queues).
  • The Communication Manager would stop functioning while attempting to obtain an ND/NDK (Network Direct Kernel) connection.
  • Command failure and a protection domain violation occurred when running the ND application.
  • The mlxtool command “mlxtool dbg ipoib-ep []” reported partial results of the EndPoint list when there was a large number of endpoints.
  • A VM would stop functioning when the PF drivers and their peers were restarted on the target machine; the VM had to be force-restarted to restore functionality.
  • An issue that caused a memory leak when RoCE was enabled.
  • The *ReceiveBuffers key was set to a wrong value when it was restored to its default.
  • A crash that occurred when changing the Ethernet IP address while RDMA traffic was running.
  • A crash that occurred in the IPoIB driver stack.
  • A BSOD that occurred when a memory allocation failed upon driver startup.
  • Connection port numbers did not increase sequentially when running the nd_*_bw application with multiple QPs.
  • Removing a PKey that was part of an IPoIB team interface disabled the team and the option to delete it.
  • An incorrect number of HCAs was returned when executing the Get-MlnxPCIDeviceSriovSetting command.
  • The mlxtool dbg resources command failed to pull information about the last VF and showed the PF as VF0.
  • Using invalid parameters in the mlxtool perfstat command resulted in an infinite wait, and results were not returned.
  • The Get-MlnxPCIDeviceSriovSetting command failed on a server with more than one device when one of the devices was disabled. Following the fix, the command returns results only for the devices that are enabled.
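
The DSCP-to-priority rule above can be expressed as a short Python sketch. It is illustrative: the function name and the eight-priority assumption are mine, and the behavior for DSCP values at or above the maximum priority is an assumption, since the note covers only the lower case:

    MAX_PRIORITY = 7  # assumption: eight 802.1p-style priorities, 0-7

    def dscp_to_priority(dscp, configured_priority):
        # Per the 5.50.50010 note: a DSCP value lower than the max priority
        # maps to a priority equal to the DSCP value itself; other values keep
        # the configured mapping (assumed, as the note does not specify it).
        if dscp < MAX_PRIORITY:
            return dscp  # e.g. DSCP(4) now yields Prio(4) rather than Prio(0)
        return configured_priority

    assert dscp_to_priority(4, 0) == 4  # the DSCP(4) -> Prio(0) case from the note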

Changes and New Features in Version 5.50.50010:

  • Dump Me Now (DMN): a bus driver (mlx4_bus.sys) feature that generates dumps and traces from various components, including hardware, firmware, and software, upon internally detected issues (by the resiliency sensors), user requests (mlxtool), or ND application requests via the extended Mellanox ND API. DMN is unsupported on VFs.
  • Support for systems with up to 252 logical processors when Hyper-Threading is enabled and up to 126 logical processors when Hyper-Threading is disabled (see the sketch after this list).
  • An RSC (Receive Segment Coalescing) solution for TCP/IP traffic to reduce CPU overhead.
  • NDSPI to control CQ (Completion Queue) moderation.
  • A new counter for packets with no destination resource.
  • A new registry key that allows users to configure the E2E Congestion Control feature.
  • Added to the vlan_config tool the ability to create VLANs for the Physical Function (PF) in addition to the Virtual Function (VF).
  • Added support for VMQ (Virtual Machine Queue) over IPoIB in Windows Server 2016.
  • The ability to collect firmware MST dumps in cases of a system bug check.
  • Added an event log message (ID 273) that is printed when there are insufficient resources to load the VF.
  • A counter for the number of packets discarded due to an invalid QP (Queue Pair) number.
  • DSCP (Differentiated Services Code Point) based counters to support traffic where no VLAN/priority is present.
  • Added support for servers with more than 64 cores.
  • Added support for Windows Server 2019.
  • Modified the RSC (Receive Segment Coalescing) default mode for Windows Server 2019: RSC is disabled by default on Windows Server 2019.
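
As a sketch of the logical-processor limit quoted in the list above (Python; the helper itself is illustrative, only the 252/126 limits come from the release note):

    def within_winof_550_limit(logical_processors, hyperthreading):
        # Check a node's logical processor count against the WinOF 5.50
        # limits: up to 252 with Hyper-Threading, up to 126 without.
        limit = 252 if hyperthreading else 126
        return logical_processors <= limit

    assert within_winof_550_limit(128, hyperthreading=True)
    assert not within_winof_550_limit(128, hyperthreading=False)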

Legal Disclaimer: Products sold prior to the November 1, 2015 separation of Hewlett-Packard Company into Hewlett Packard Enterprise Company and HP Inc. may have older product names and model numbers that differ from current models.