pfSense + 10GbE + Bridge


As a recent upgrade I switched to pfSense as my home router/firewall. My pfSense test lab box is a Supermicro X10SDV, which has two onboard 10GbE ports, combined with a PCIe Intel X540-T2 dual-port 10GbE NIC, giving me four 10GbE ports in total.

Currently the smallest and most affordable 8-port 10GbE switches still run ~$600+ and, from what I can tell, are loud and power-hungry. My goal, in addition to setting up a new router/firewall, was to use this box as a low-power 4-port 10GbE switch by simply creating a bridge between the four 10GbE ports and one vmx3 NIC to give the local box 10GbE access. This would allow me to put up to four additional devices in my house on 10GbE.
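In pfSense this is all done through the GUI (Interfaces > Assignments > Bridges), but under the hood it generates FreeBSD if_bridge commands. A rough sketch of the equivalent CLI, with interface names (ix0 through ix3 for the 10GbE ports) assumed for illustration:

```shell
# Sketch of the FreeBSD bridge setup pfSense performs behind the GUI.
# Interface names ix0-ix3 are assumptions; check yours with ifconfig.
ifconfig bridge create                                  # creates bridge0
ifconfig bridge0 addm ix0 addm ix1 addm ix2 addm ix3 up

# For jumbo frames, every bridge member needs a matching MTU
for nic in ix0 ix1 ix2 ix3; do
    ifconfig "$nic" mtu 9000 up
done
```

This is a CLI sketch only; on pfSense you would do this through the web UI so the configuration persists across reboots.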

This ended up being a huge pain. I ran into all kinds of errors, mostly:

To be honest, I changed so many things back and forth that I'm not even sure which changes made it work. Things I did:


I'm not really happy with how difficult that was. It took multiple hours of trial and error, and now that I have it working I still have more reading to do to better understand the settings I've configured.

In particular, the kern.ipc.* settings and how they impact the virtual machine's resources. I probably need to tune these based on the currently allocated resources. I also want to re-evaluate the vSwitch changes I've made, since I don't have multiple pNICs in my vSwitch.
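As a starting point for that tuning, it's worth sizing the worst-case memory those mbuf settings can pin. Standard mbuf clusters are 2 KB and page-size jumbo clusters are 4 KB, so for the 262144 values suggested in the comments below, a quick back-of-the-envelope check looks like:

```shell
# Worst-case memory footprint of the suggested loader.conf values,
# assuming 2 KB standard clusters and 4 KB page-size jumbo clusters
nmbclusters=262144
nmbjumbop=262144
echo "$(( nmbclusters * 2048 / 1048576 )) MB for standard clusters"   # 512 MB
echo "$(( nmbjumbop   * 4096 / 1048576 )) MB for jumbo clusters"      # 1024 MB
```

That's up to roughly 1.5 GB of kernel memory reserved for network buffers if both pools fill, which is why these values should be scaled against what the VM is actually allocated.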

A couple of the resources I saved along the way (I wish I would’ve kept more):

I'm sure this was mostly down to the combination of 10GbE and Jumbo Frames, but I figured it would have been more of an out-of-the-box auto-configuration when they added support for the card. I did not expect this to be such a tough combination.
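One quick way to confirm the jumbo frame path end-to-end is to ping across the bridge with the Don't Fragment bit set and a payload sized to fill a 9000-byte MTU (the hostname here is a placeholder):

```shell
# 9000 MTU minus 20 bytes IP header and 8 bytes ICMP header = 8972 payload.
# -D sets Don't Fragment (FreeBSD ping); replace 10.0.0.2 with a real host.
ping -D -s 8972 10.0.0.2
```

If any hop in the path is still at a 1500 MTU, these pings fail rather than silently fragmenting, which makes misconfigured members easy to spot.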


2 Responses to “pfSense + 10GbE + Bridge”

  1. Confluence: Operations on July 22nd, 2017 10:11 am

    pfSense Tuning configuration…

    System tuning: Edit /boot/loader.conf as below: autobootdelay="3" kern.ipc.nmbclusters="262144" kern.ipc.nmbjumbop="262144" net.isr.bindthreads=0 net.isr.maxthreads=1 kern.random.sys.harvest.ethernet=0 kern.random.sys.harvest…

  2. fabservers on March 17th, 2018 4:06 pm

    Impressive, I'm trying to do the same right now, but pfSense doesn't seem to have native support for Mellanox 10Gb NICs, and I am struggling to install the mlx4/mlxen FreeBSD drivers to have the NICs recognized as OPT ports to bridge. Any help much appreciated, thanks in advance, Fabrice
