Tesla K80 Performance for Machine Learning

Follow the full discussion on Reddit.
I have an old server that I'd like to convert into a machine learning testing platform. It has two Xeon E5-2670 CPUs (16 cores total), 128 GB of RAM, and two PCIe x16 slots. I bought a pair of NVIDIA Tesla K80s, but I'm wondering whether I would get better performance by adding another pair of K80s, since the prices are so good right now. The issue is that my motherboard only has the two PCIe x16 slots. I was thinking I could install two PCIe bifurcation cards and use extension cables to plug in all four K80s. If I understand correctly, each x16 slot would then split into two x8 links, one per K80, and since each K80 carries two GPUs, each GPU would effectively get roughly x4 worth of bandwidth. Would the performance hit from the reduced lanes outweigh the benefit of the additional GPUs?
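For rough numbers on the bandwidth side: PCIe 3.0 (which the K80 uses) delivers about 985 MB/s per lane, so an x16 link tops out around 15.8 GB/s, x8 around 7.9 GB/s, and x4 around 3.9 GB/s. Whether that matters depends on how much host-to-device traffic your training jobs actually generate, so measuring copy bandwidth per GPU before and after splitting the slots is more informative than reasoning from lane counts alone. Below is a minimal sketch using PyTorch, assuming a PyTorch/CUDA combination old enough to still support the K80's compute capability 3.7; the function name and buffer sizes are arbitrary choices for illustration.

```python
import time
import torch

def measure_h2d_bandwidth(device: str = "cuda:0", size_mb: int = 256, repeats: int = 20) -> float:
    """Measure host-to-device copy bandwidth in GB/s for one GPU."""
    # Pinned host memory gives a realistic upper bound for PCIe transfer speed.
    host = torch.empty(size_mb * 1024 * 1024, dtype=torch.uint8, pin_memory=True)
    dev = torch.empty_like(host, device=device)

    # Warm-up copy so CUDA context creation is not included in the timing.
    dev.copy_(host, non_blocking=True)
    torch.cuda.synchronize(device)

    start = time.perf_counter()
    for _ in range(repeats):
        dev.copy_(host, non_blocking=True)
    torch.cuda.synchronize(device)
    elapsed = time.perf_counter() - start

    gb_moved = size_mb * repeats / 1024
    return gb_moved / elapsed

if __name__ == "__main__":
    for i in range(torch.cuda.device_count()):
        name = torch.cuda.get_device_name(i)
        bw = measure_h2d_bandwidth(f"cuda:{i}")
        print(f"GPU {i} ({name}): {bw:.1f} GB/s host-to-device")
```

If the measured numbers after bifurcation stay well above what your data loading pipeline needs per GPU, the extra K80s are likely a net win; if the copies become the bottleneck, the reduced lanes will eat into the gain from the additional cards.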
