January 26th, 2006, 12:16
I work for a small (20 employees) engineering firm that is in need of a new server. The one we have now is a P4, a gig of RAM, and two 7200 RPM Seagates on a 10/100 network. Its main duties are routing, Samba, and the occasional SFTP for some employees that work at home. It's been rock solid, but it is time to beef it up.

We deal with a lot of massive drawing files. Most of the drawings include aerial images, and the aerial images are usually what make them so massive. When you've got three or so employees working on drawings at the same time, things begin to crawl.

I've got a budget of $1500-$2000. I am also thinking about moving all employees to gigabit networking, which means a new switch and new cards for each workstation. Good idea?
I've never done RAID or SCSI, so I imagine I need to move to one or the other. We've got about 100 gigs of files right now. It would be nice to double that, but we don't really need that much, as most of what's on there now can be archived if it means faster speeds.

The server now is running 3.4, so of course it needs updating; the new one will have 3.8 on it. Maybe this will help us out too? Also, I plan on using a spare machine to take over the routing duties.

I'm not really sure where my bottleneck is or really how to find out. Any help is appreciated.

January 26th, 2006, 14:43
Although the network would be my first guess for a bottleneck, getting some nice SCSI drives would be a good thing. Also, having one disk be an OS disk and one be a data disk would probably speed things up a bit. Are the files being transferred to workstations and worked on there, or are they being edited over the network?
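One way to make the OS/data split concrete is in /etc/fstab. A minimal sketch, assuming the OS lives on wd0 and the Samba share gets its own disk at wd1 (the device names and the /data mount point are placeholders, not the poster's actual layout):

```
# /etc/fstab -- keep the shared drawings on their own spindle
/dev/wd0a  /      ffs  rw                 1 1
/dev/wd0d  /usr   ffs  rw,nodev           1 2
/dev/wd1a  /data  ffs  rw,nodev,nosuid    1 2
```

Samba would then export /data, so client traffic never has to compete with OS disk activity.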

January 26th, 2006, 15:34
They are being edited over the network.

January 27th, 2006, 04:36
Depending on how many people hit the server at the same time, I would say it is the network and the disk system you have to look at.

I don't think a change to gigabit on the network would make much sense if your disk system can't keep up. So my suggestion, from what you write, is:

Change the network to gigabit and buy a gigabit card for the server. Clients still run 100Mbit (you could always upgrade them later on if needed).
Buy a 3ware card and four disks and run RAID 0+1 on them. Of course SCSI is better IMHO, but so is the price. I included some fault tolerance here, but that is up to you.

About the processor and RAM: I think they should do fine, maybe even overkill, for a file server, so just leave them as is.

*edit* Just saw that this is in the OpenBSD forum. I don't know if 3ware cards are supported on OpenBSD; if not, find a similar IDE/SATA RAID card which is supported by OpenBSD.

January 28th, 2006, 03:16
Disk I/O is likely going to be the biggest contention in this scenario. The more drive heads you can get serving the data, the better. For example, four 250GB drives give much better performance than two 500GB drives. A hardware RAID controller will offload the work from the CPU and give you the best performance. SATA/IDE will give you the most space for your money, but SCSI will give the best performance overall. You'll have to determine which criterion is more important for you. RAID 1+0 would be a good idea since you want performance and redundancy.
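To make the spindle math concrete, here's a quick back-of-the-envelope calculation in plain POSIX shell (just arithmetic on the 4x250GB example above):

```shell
# RAID 1+0 over four drives: mirror the drives in pairs,
# then stripe across the two mirrored pairs.
drives=4
size_gb=250
usable=$(( drives / 2 * size_gb ))   # mirroring halves the raw capacity
spindles=$drives                     # but every head can serve reads
echo "usable: ${usable}GB across ${spindles} spindles"
```

Half the raw space is the price of redundancy, but all four heads are available to satisfy concurrent reads, which is exactly what several drafters hitting big aerials at once needs.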

I would suggest that you use ttcp (http://www.pcausa.com/Utilities/pcattcp.htm) to test the network cards on the network. Make sure they can get at least 8MB/sec to the server, or you may want to replace them with cards that get better performance. I posted some results in a thread (http://www.screamingelectron.org/forum/showthread.php?t=2493) from testing my home network a few months ago. Make sure you are getting what you should be getting. Replacing lower-performing cards will go a long way toward increasing overall performance.
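ttcp is a two-ended test: start a receiver on the server, transmit from each workstation in turn, and read the throughput off the summary line. A rough sketch (the sample output line below is illustrative of the format, not captured from a real run):

```shell
# On the server:   ttcp -r -s         (receive and discard the stream)
# On a client:     ttcp -t -s server  (transmit a test stream)
# ttcp then prints a summary line; compare the KB/sec figure
# against roughly 11000 KB/sec, the realistic ceiling for
# 100Mbit Ethernet.
sample='ttcp-t: 16777216 bytes in 1.52 real seconds = 10777.9 KB/sec +++'
rate=$(echo "$sample" | awk '{ print $9 }')
echo "measured rate: $rate KB/sec"
```

A card/cable combination that tests well below that ceiling is the one worth swapping out first.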

Switching outright to gigabit seems like a good idea on the surface, but there are some gotchas to look out for. Every cable/jack/etc. has to be Cat5e or higher to run gigabit over it. Every flaw in the cabling will show up once you move to gigabit. Long runs that work fine at 100Mbit might not be so good at gigabit. The costs of downtime, running new lines, swapping out jacks/patch panels, etc. have to be considered before you dive into this. Save this for the next budget or do it as you go.

Save most of your money for the hard drives/controller(s), etc. Here's a couple suggestions for improving network performance for your consideration:

1. You might try an inexpensive gigabit switch and move the server to gigabit while leaving the clients on 100Mbit. This does a couple things for you. It opens up the bottleneck at the server, since multiple clients can now each get a full 100Mbit at the same time instead of sharing fractions of a single 100Mbit link. The other thing it gives you is an upgrade path, since you could upgrade individual clients as time/budget permit.

2. Put multiple network cards in the server and install multiple switches. You could then bridge the network cards, plug one switch into each card, and spread the clients across the switches. This gives you multiple 100Mbit paths to the server and fewer clients fighting over the bandwidth on each switch.
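On OpenBSD of that era, the bridging half of option 2 can be done in the base system with bridge(4). A rough sketch of the startup configuration, assuming two fxp NICs (the interface names and address are placeholders, and the exact file names are worth double-checking against your release's man pages):

```
# /etc/hostname.fxp0 -- first NIC carries the server's address
inet 192.168.1.1 255.255.255.0 NONE
# /etc/hostname.fxp1 -- second NIC just comes up, no address
up
# /etc/bridgename.bridge0 -- tie both NICs into one bridge
add fxp0 add fxp1 up
```

With one switch plugged into each card, clients on either switch reach the same server address.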

I'd double-check my network performance and then sink most of my budget into getting the most out of the hard drives. That's my 2 cents on this; I'd be happy to hash it out some more and am open to suggestions. :biggrin:

January 30th, 2006, 15:32
The 3ware cards should work. My file server at home runs four 250GB IDE drives off the 3ware 4-port card in RAID 5 mode.

February 8th, 2006, 21:04
Wow, thanks guys for the help.
Having never done RAID, I did a little reading up on it. Yes, it does look like 1+0 is what I am wanting. SCSI is about a thousand dollars out of my range, so I will skip that for now. The 3ware cards look as if they work fine, but the misc archive was full of LSI Logic praise, so I went with them in my list below.

This is gonna be a new server, magenta. The one we have now will go down the chain somewhere. I should have indicated that.

I like your ideas, Strog. I never considered #2. I will get an inexpensive gigabit switch (any recommendations?) and hook the drafters to that, then keep the engineers and others on the 10/100 switches. Also, I will definitely check out the TTCP utility for before-and-after results.

This is what I have come up with so far. I just did a quick price check on Newegg for everything. I will do some comparison shopping when I get ready to order.

Case - $124 Antec PERFORMANCE TX TX1050B Black Computer Case - Retail

Motherboard - $260 Intel SE7221BK1LX ATX Intel Motherboard - Retail

Processor - $113 Intel Pentium 4 506 Prescott 533MHz FSB 1MB L2 Cache LGA 775 EM64T Processor - Retail

Ram - (2) @ $59.99/stick Crucial 512MB 240-Pin DDR2 SDRAM ECC Unbuffered DDR2 533 (PC2 4200) System Memory - OEM

SATA Controller - $429.99 LSI Logic LSI00005 PCI-X SATA Controller Card - Retail

SATA Hard Drive - (4) @ $85/per drive Seagate Barracuda 7200.9 120GB 3.5" SATA 3.0Gb/s Hard Drive - OEM

Additional Network Card - $96 Intel PWLA8490MT 10/100/1000Mbps PCI Network Server Adapter 1x RJ-45 - OEM

CD Rom - $14.50 SONY Black IDE CD-ROM Drive Model CDU5225 - OEM

Keyboard - $5.75 LITE-ON Black Wired Keyboard - Retail

Did I forget anything?

This brings me to just a hair over $1500. That would be nice; then I could use the other $500 for new wiring, some new network cards, and a switch.

Am I on the right track? I'm open to anything; even AMD isn't out of the question. Though I checked, and it looks like it will cost more for me to run AMD using the two boards (Supermicro and Tyan) that I checked into. Maybe not.