TheFunk

GPU Cluster


Hey!

I wanna make a supercomputer, but I'm currently "balling on a budget".

I want it to be rackmounted! Why, you may be asking? Cuz how cool would that look in my basement!?

I have no idea what I'd use it for yet. Probably mining ethereum until that goes proof of stake, then some folding, some hash cracking, and a little bit of deep learning.

 

So let's say my budget is $1000. Here's what I want....

 

GPUs
I've messaged every nearby Craigslist poster with an AMD card better than a 7850. I want as much parallel processing power as possible, and I'm looking to use Virtual OpenCL (VCL) to cluster my machines. Think I can get 6 really nice cards for under $400? Ideally someone has a mess of 7950s they're not using anymore.
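If you do go the VCL route, a quick sanity check before any clustering is just enumerating what OpenCL actually sees on each node. Here's a minimal sketch, assuming PyOpenCL is installed (pyopencl isn't mentioned anywhere in this thread, it's just one convenient way to poke at the runtime):

```python
# Minimal sketch: list the OpenCL platforms/devices on a node (assumes `pip install pyopencl`).
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name}")
    for dev in platform.get_devices():
        print(f"  Device: {dev.name}")
        print(f"    Compute units : {dev.max_compute_units}")
        print(f"    Global memory : {dev.global_mem_size // (1024 ** 2)} MiB")
```

Run it on each box before and after wiring up VCL and you'll know immediately whether the remote cards are being exposed.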

 

Hardware

A rack-mountable server chassis and a server rack. I know I can get two 4U server chassis and a shitty Tripp Lite rack for something like $220. I like the idea of two computing nodes and a third master node.


CPU/MOBO
I need to get as many GPUs per motherboard as possible. A typical rack server chassis only has 7 expansion slots, and each video card is conceivably going to be 2 slots wide. That means a max of 3 video cards per board, with one single-width slot left over. Based on PCIe lane count, I'm going to go with the AMD 990FX/SB950 chipset combo to save money without sacrificing too much PCIe performance. I'd probably pick up a 6- or 8-core FX processor; Intel is out of the question on my budget. Looking at boards that meet these criteria, the Asus M5A99FX Pro R2.0 seems like a good option for ~$85 used on eBay. It also has a 4th PCIe slot, which could be useful because of my next point....


Networking
Eventually I'll need a darn good network connection. For now gigabit Ethernet is fine, but maybe, just maybe, I'll switch to InfiniBand later. If I do that, I'm going to need a free PCIe slot for the networking card. That will also add significant cost to the project.

 

Power

Now this is where things get expensive. A decent ATX 1500W PSU is like $250 used, and I need 2 of those! Any ideas?
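Before spending $500 on PSUs, it's worth a back-of-the-envelope check on whether 1500 W per node is even necessary. A rough sketch with assumed figures (the ~200 W per 7950-class card, ~125 W CPU, and ~75 W platform numbers are my assumptions, not from this thread):

```python
# Rough per-node power budget (all wattages are assumptions for illustration).
gpu_tdp_w     = 200   # assumed board power per HD 7950-class card
cpu_tdp_w     = 125   # assumed 8-core FX TDP
platform_w    = 75    # assumed motherboard, RAM, drives, fans
gpus_per_node = 3

load_w = gpus_per_node * gpu_tdp_w + cpu_tdp_w + platform_w
psu_w  = load_w / 0.8          # keep the PSU at ~80% load for headroom
print(f"Estimated load: {load_w} W, suggested PSU: ~{psu_w:.0f} W")
# -> roughly 800 W of load, ~1000 W of PSU, so 1500 W may be more than a 3-card node needs
```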

 

Storage

In order for this supercomputer to be truly super, I'm going to need fast storage... what should I do? An SSD would be the obvious choice, but how am I going to afford 2 of those on my budget? And what about RAM? Do I need a lot of RAM on a system like this?


Any ideas BinRev?

I think NVIDIA uses a grid computing architecture: basically networking to distribute load amongst different CUDA hosts. That has the potential to be much faster.

 

EDIT: please document this project. I may do the same. This is an awesome idea: DIY Super Computer.

 

 

SSDs in a RAID 0 array. Zippidy zoom...... You'll need to find them cheap, however. Maybe look for a bulk sale?
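As a very rough first-order estimate, sequential throughput in RAID 0 scales close to linearly with drive count (real numbers depend heavily on the controller and workload). A tiny sketch, with the per-drive speed being an assumption:

```python
# First-order RAID 0 sequential-throughput estimate (per-drive speed is an assumption).
drive_read_mb_s = 500          # typical SATA SSD sequential read, assumed
drives          = 2
print(f"~{drives * drive_read_mb_s} MB/s aggregate sequential read")  # ~1000 MB/s for two drives
```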

 

An AMD FX-8350 for the price. Maybe start out smaller and upgrade later....

RAM is really just a question of how much you can afford.

 

This thing will run really hot at full load as you've described it. Video cards get as hot as CPUs.

 

Maybe four or five boxes with NVIDIA GRID? But that would more than likely not be doable for $1000. If it were me, I'd start with two systems like Glitch got, but look for something with NVIDIA GRID capabilities. (Does ATI support something similar? I don't know. Do gamer cards even support NVIDIA GRID? I don't know. But at least research this.)

 

You will need to find deals on hardware. Maybe start with a 6-core AMD on a motherboard that's compatible with an 8-core FX, then upgrade when prices come down.

 

Try to find three NVIDIA cards that can be SLI'd, then put them into a grid in the future. Repeat every 8 months or whenever your budget permits. Pretty soon your basement could rival a DreamWorks rendering farm.

 

Just an idea. :-)

Edited by tekio

GPU:

A 7950/7970 would be great if you need or want double precision for whatever reason; they ran FP64 at 1/4 rate, so they're straight ripped. A 7970 has more double-precision FLOPS than a 390X, lol. The GHz Edition is roughly 4 TFLOPS single and 1 TFLOPS double, whereas a 290X is more like 750-850 GFLOPS of FP64 instead of 950-1000.
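Those ratios are easy to sanity check from published shader counts and clocks using the usual peak-FLOPS formula (shaders × 2 ops per clock × clock), then applying the FP64 rate. A quick sketch; the shader counts and clocks are approximate:

```python
# Peak-FLOPS sanity check: shaders * 2 ops/clock * clock, then the published FP64 ratio.
cards = {
    "HD 7970 GHz (Tahiti)": dict(shaders=2048, clock_ghz=1.05, fp64_ratio=1 / 4),
    "R9 290X (Hawaii)":     dict(shaders=2816, clock_ghz=1.00, fp64_ratio=1 / 8),
}
for name, c in cards.items():
    sp = c["shaders"] * 2 * c["clock_ghz"] / 1000   # TFLOPS, single precision
    dp = sp * c["fp64_ratio"]                        # TFLOPS, double precision
    print(f"{name}: ~{sp:.1f} TFLOPS FP32, ~{dp:.2f} TFLOPS FP64")
# Tahiti comes out around 4.3 / 1.08 TFLOPS, Hawaii around 5.6 / 0.70 TFLOPS,
# which is roughly what the numbers above are getting at.
```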

 

I'd look up benchmarks. Some stuff runs great on NVIDIA, but for mining, well... the 1080 was the first NVIDIA card that could beat the AMD flagship, lol, and they just haven't released a new one, so it's not even really a comparison currently. Their architecture seems very serial, so the workloads you're doing will probably determine which is faster, if you're looking at cards in a similar power range.

 

Hardware:

The PCIe lanes only make a difference for workloads where the data set is larger than the frame buffer but small enough that you can deliver it at high speed. If it's bigger than system memory and you have to pull it from drives, even an NVMe x4 SSD couldn't saturate an x8 link, so having x16 probably won't do much except for data that fits inside system memory, where you can deliver it at higher speed, unless you go with some kind of RAID setup that can hit serious throughput.
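To put rough numbers on that, here's a quick comparison of PCIe link bandwidth against a single NVMe drive feeding it. The per-lane figure is the usual PCIe 3.0 approximation and the SSD figure is an assumption; a 990FX board is PCIe 2.0, so roughly halve the link numbers there:

```python
# Approximate bandwidth comparison: PCIe slot width vs. a single NVMe SSD feeding it.
pcie3_lane_gb_s = 0.985      # usable GB/s per PCIe 3.0 lane after 128b/130b encoding
nvme_x4_gb_s    = 3.5        # assumed sequential read of a decent NVMe x4 drive

for lanes in (4, 8, 16):
    print(f"PCIe 3.0 x{lanes}: ~{lanes * pcie3_lane_gb_s:.1f} GB/s")
print(f"Single NVMe SSD:  ~{nvme_x4_gb_s} GB/s  (can't saturate even an x8 slot)")
# (990FX is PCIe 2.0, ~0.5 GB/s per lane, so halve the slot numbers for that platform.)
```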

 

If you're making your own case or rigging something up, you could look into riser cards/cables to use more slots; there is a slight performance hit, though.

 

Networking:

Kind of the same as hardware: it depends on the workload. If you were doing extremely heavy network traffic you could increase speeds a lot by going to a quad-port card or 10 Gb or something, but if it's just serving a page or something where you can see the results and tell it what to do, you shouldn't need much at all. If you had a NAS or something delivering files to this machine, then the network would limit the speed of operations, but if all the data is already on the box you just need to figure out the actual network load. :p

But if you can get drivers for quad-port cards, they might be a decent cheaper option: four 1 Gb ports, plus one or two on the motherboard, without having to get a 10 Gb switch. At least it's a significantly lower-cost option than most fiber gear, although slower. See the sketch below for what that buys you.
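For a rough feel of what bonding several 1 Gb ports buys you versus 10 GbE, here's a throughput/transfer-time sketch. Link speeds are nominal, and real bonded throughput depends on the bonding mode and number of flows, so treat these as upper bounds:

```python
# Nominal throughput and transfer time for a 100 GB dataset over different links.
dataset_gb = 100
links = {"1 GbE": 1, "4x 1 GbE bonded": 4, "10 GbE": 10}   # Gbit/s, nominal

for name, gbit in links.items():
    gb_per_s = gbit / 8          # Gbit/s -> GB/s, ignoring protocol overhead
    minutes  = dataset_gb / gb_per_s / 60
    print(f"{name}: ~{gb_per_s:.2f} GB/s, ~{minutes:.0f} min for {dataset_gb} GB")
```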

 

PSU:

Other than used, there's the EVGA SuperNOVA G2 1300W, $175 on Amazon currently. It seems to go fairly cheap sometimes if you're looking for new. They're built by Super Flower, which is good; the catch is you only have a single 12 V rail, so if something goes wrong it will probably be more violent about it. Then again, with the big units you'd still be pushing well over 30 A through a single rail, so if something shorts it's going to jack things up either way.

 

But I guess you could get a PSU tester if you don't have one, collect a bunch of lower-power PSUs, and make or buy jumpers; that's probably the cheapest route. Dumpster finds, or if you've got a Free Geek or something like that near you, a bunch of 400-600 W units could probably be had really cheap, but then you have to make a case or rig something up.

 

Storage: already kind of covered above, lol.

 

Anyway, super long post, but just some thoughts, lol.

 

 


1 Gbps will work fine. 10 Gbps is going to demolish a $1000 budget once you buy two NICs and a switch. Good luck finding a cheap 10 Gbps SOHO router.

 

You'd need to think about heat: a 1200 W PSU, two or three 7970s, and an 8- or 6-core AMD CPU? AMD CPUs suck up wattage and run really toasty. Now start thinking liquid cooling if you're going to use this like a supercomputer. Cluster computing would reduce the extra cooling needs, letting you spend more on GPU/CPU and memory. But eventually you're going to need a good 8- to 24-port 1 Gbps switch (you could also put dual NICs in each box that support teaming and get up to around 4 Gbps, but you'd need drivers for whatever OS you plan to run on them).

 

I'd lean toward clustering more semi-powerful boxes. In my last post I think I confused grid computing with clustering, sorry. Speeds can be much greater in a cluster than in one single supercomputer, especially when you want parallel processing for stuff like cracking password hashes. There is actually a company that makes a distributed hash cracker for CUDA-enabled clusters. It is expensive, but it can be found.

Edited by tekio

AMD CPUs suck up wattage and run really toasty.


...But I betcha they probably don't hold a candle to the BTU output of a decent Northwood ~15 years ago. Seriously, we actually used those as highly effective space heaters in the computer lab in winter because the school's 40+ year old HVAC never worked!

Eons ago one of my coworkers/conspirators played with the idea of writing a distributed program tentatively called "Expensive Room Heater". It would have run on each PC in the fleet during idle times (i.e. not logged in) and would put an otherwise unacceptably high load on the CPU, making them function as a crude substitute for the nonexistent HVAC. Basically, the idea was to convert an incoming data stream to raw thermal energy without having to worry about things like clock speed and bandwidth getting in the way. We didn't implement it but it was a useful idea at the time. The things that enter one's mind in the middle of a southwestern Washington January.


Forgot a couple things...

Power stuff:

Remember that the usual bedroom circuit is only 15 A, so you don't want to go over roughly 1600 W as an absolute max on a single circuit unless you have a dedicated one rated higher. If you're going with multiple PSUs and want redundancy, you can make or buy Y-cables for the 24-pin and 8-pin CPU connectors and such. But I'd still guess aiming for 400-500 W PSUs is probably the cheapest route: two 400-500 W units per box at maybe $15-25 a shot. People usually want excess capacity and expandability, so the lower-wattage units aren't nearly as desirable; once you're talking 750-850 W, they just keep them until they die. You could put out a Craigslist ad or something: "looking for 400-600 W name-brand PSUs".
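That 15 A figure is worth spelling out, since the wall circuit is what actually caps a single room. A quick check, assuming a standard North American 120 V branch circuit and the usual 80% continuous-load guideline:

```python
# Wall-circuit headroom check (assumes a standard 120 V / 15 A branch circuit).
volts, breaker_amps = 120, 15
peak_w       = volts * breaker_amps      # 1800 W absolute breaker limit
continuous_w = peak_w * 0.8              # ~1440 W sustained, per the 80% rule of thumb
print(f"Breaker limit: {peak_w} W, safe continuous load: ~{continuous_w:.0f} W")
# So two heavily loaded multi-GPU nodes almost certainly want separate circuits.
```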

 

Network:

https://www.amazon.com/HP-NC364T-Gigabit-Server-Adptr/dp/B000P0NX3G/

 

Maybe not that particular card, but one that works with your choice of OS would be preferred. There are probably loads of cheap used ones on eBay; Sun and a bunch of other server manufacturers made them. It will take more work to configure the network for max speeds, but since those aren't expensive, you'd just need a 24-port switch or two plus a bunch of patch cables, or a spool and connectors, and you'd be able to push 4-6 Gb through each box.

 

But as with the other stuff, how much bandwidth you'd actually use or need depends heavily on the workload. For example, if you wanted to render multiple video files on two machines, just giving each machine half the files or frames would probably be faster than trying to have each box render half of each frame, because you'd cut out the latency and networking overhead of having one box split the frames, send them to the other, and stitch everything back together afterward. It would be silly to make the boxes depend on each other at that level for that kind of task when you could just split the workload and let each one go full-bore until it's done.
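That "split whole files/frames across boxes instead of splitting each frame" idea is just coarse-grained data parallelism. A tiny sketch of dealing frames out to nodes round-robin; the node names and frame list are made up for illustration:

```python
# Coarse-grained split: deal whole frames out to nodes instead of splitting each frame.
nodes  = ["node1", "node2"]                          # hypothetical render boxes
frames = [f"frame_{i:04d}.exr" for i in range(8)]    # made-up frame list

# Round-robin assignment: each node gets every len(nodes)-th frame.
work = {node: frames[i::len(nodes)] for i, node in enumerate(nodes)}
for node, chunk in work.items():
    print(node, chunk)   # each box renders its own frames full-bore, no cross-talk
```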

13 hours ago, scratchytcarrier said:


...The things that enter one's mind in the middle of a southwestern Washington January.

Did you attend WASU? I cheer for the Huskies every year in the Apple Cup. Go Dawgs!!!!  ;-D

 

EDIT: WASU pronounced as WAZU vs UofW pronounced as U-Dub. Hahaha

Edited by tekio

Not sure if it will help since they're kind of old, but I have two HD 6970s and two HD 4870 X2s that I don't use. One of the 4870 X2s needs its thermal pads reapplied.


Did you attend WASU? I cheer for the Huskies every year in the Apple Cup. Go Dawgs!!!!




Nope, Clark Penguin here. Yeah, I know, community college weenie, so what. I worked ~3 years as a tech at a local high school; that was when we came up with the "CPUs-as-Cadet-heaters" thing. Around the turn of the century I had to burn a weekend supervising the cutover of most of the school from archaic slot-1 PIIs running WinDOS 98SE to socket-478 (IIRC) Northwoods; even with big bloated XPee the difference was like night and day.

The one thing I actually did like about slot-1 was how easy it was to swap out CPUs. I could upgrade a decrepit 233 MHz CPU to a 500 MHz K6-2 (well okay, slotkets) in maybe like 30 seconds.


Same here, those schools are expensive. I've heard WASU was one of the best party schools next to the ones in Cali, Arizona, and Florida, though. LoL

 

Went to Pierce College, then a tech center. Then I got smart and got a job at a place with crappy wages but on-the-job experience and free Microsoft Certified Training. They were a consulting firm, ISP, and MCT (Microsoft Certified Training Center).

Edited by tekio
