Dec 26
pfSense + 10GbE + Bridge
As a recent upgrade, I switched to pfSense as my home router/firewall. In my pfSense test lab I have a Supermicro X10SDV, which has two onboard 10GbE ports, combined with a PCIe Intel X540-T2 dual-port 10GbE NIC, giving me 4x 10GbE ports.
Currently the smallest and most affordable 8-port 10GbE switches are still ~$600+ and, from what I can tell, loud and power hungry. My goal, in addition to setting up a new router/firewall, was to use this box as a low-power 4-port 10GbE switch by simply creating a bridge between the 4x 10GbE ports and one vmx (vmxnet3) NIC to give the local box 10GbE access as well. This would allow me to have up to 4 additional devices in my house all on 10Gb.
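For anyone curious what that amounts to: pfSense builds the bridge from Interfaces > Assignments > Bridges, but under the hood it is just FreeBSD's if_bridge(4). The shell equivalent is roughly the following (interface names are illustrative; ix0 through ix3 for the 10GbE ports and vmx0 for the VM NIC):

ifconfig bridge0 create
ifconfig bridge0 addm ix0 addm ix1 addm ix2 addm ix3 addm vmx0 up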
This ended up being a huge pain. I ran into all kinds of errors, mostly:
- ix0: 2 link states coalesced
- vmx0, ix0, ix1 rotating in drops from “up” to “no carrier”
- kernel: [zone: mbuf_cluster] kern.ipc.nmbclusters limit reached
- ix1: Could not setup receive structures
- kernel: [zone: mbuf_jumbo_9k] kern.ipc.nmbjumbo9 limit reached
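When the mbuf ones show up, netstat -m from a shell on the box is a quick sanity check: it reports how many clusters of each size are in use against the configured limits, plus any denied requests, so you can confirm you are actually hitting the caps rather than chasing a driver bug:

netstat -m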
To be honest, I changed so many things back and forth that I'm not even sure which changes made it work. Things I did:
- vmware: Net.ReversePathFwdCheckPromisc 1
- vmware: Enable Promiscuous Mode on vSwitch
- System Tunable: net.link.bridge.pfil_member 0
- System Tunable: net.link.bridge.pfil_bridge 1
- System Tunable: hw.intr_storm_threshold 10000
- Local Loader: kern.ipc.nmbclusters="1000000"
- Local Loader: kern.ipc.nmbjumbop="524288"
- Local Loader: kern.ipc.nmbjumbo9="524288"
- Reboot.
Magic?
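A few notes for anyone replicating this. On the ESXi side, the ReversePathFwdCheckPromisc change can be made from the host shell (that is my understanding of the advanced option path; verify against your ESXi version):

esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1

On the pfSense side, loader tunables survive upgrades when they live in /boot/loader.conf.local rather than /boot/loader.conf, so the three kern.ipc.* lines end up in that file as:

kern.ipc.nmbclusters="1000000"
kern.ipc.nmbjumbop="524288"
kern.ipc.nmbjumbo9="524288"

The pfil and intr_storm values go in through System > Advanced > System Tunables.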
I'm not really happy with how difficult that was. It ended up being multiple hours of trial and error, and now that I have it working I still have additional reading to do so that I can better understand the settings I've ended up with.
In particular, the kern.ipc.* settings and how they impact the virtual machine's resources; I probably need to tune these based on the resources actually allocated to the VM. I also want to re-evaluate the vSwitch changes I've made, as I don't have multiple pNICs in my vSwitch.
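As a rough starting point for that tuning (and as I understand the mbuf zones; these are ceilings on allocation, not memory reserved up front): standard mbuf clusters are 2 KB, nmbjumbop clusters are page-sized (4 KB), and nmbjumbo9 clusters are 9 KB, so the limits above allow roughly 2 GB + 2 GB + 4.5 GB of cluster memory, which is almost certainly more than this VM should be able to spend on network buffers. The current limits and usage are easy to check:

sysctl kern.ipc.nmbclusters kern.ipc.nmbjumbop kern.ipc.nmbjumbo9
vmstat -z | grep -i mbuf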
I'm sure this was all mostly because of 10GbE and jumbo frames combined, but I figured it would have been more of an out-of-the-box auto-configuration when they added support for the card. I did not expect this to be such a tough combination.
A couple of the resources I saved along the way (I wish I would've kept more):
- pfSense Tuning configuration… System tuning: Edit /boot/loader.conf as below: autoboot_delay="3" kern.ipc.nmbclusters="262144" kern.ipc.nmbjumbop="262144" net.isr.bindthreads=0 net.isr.maxthreads=1 kern.random.sys.harvest.ethernet=0 kern.random.sys.harvest…
Impressive, I'm trying to do the same right now, but pfSense doesn't seem to have native support for Mellanox 10Gb NICs, and I am struggling to install the mlx4/mlxen FreeBSD drivers to have the NICs recognized as OPT ports to bridge. Any help much appreciated, thanks in advance, Fabrice
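A general FreeBSD note rather than something tested with Mellanox cards here: if the mlx4/mlxen modules are actually present on the pfSense install, loading them by hand with kldload mlx4 and kldload mlxen from a shell, and then persisting them in /boot/loader.conf.local, is usually enough for the interfaces to show up for assignment:

mlx4_load="YES"
mlxen_load="YES"

If kldload can't find them, the modules simply aren't included in that pfSense build.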