Thursday, July 30, 2009
Many small companies run their own mail server, often Exchange or, increasingly, some sort of *nix-based MTA. The biggest mistake small companies make is investing too small and too cheap, which ends up being significantly more expensive. Not just in time and energy wasted, but in actual $$$ outlay.
1) Real Estate.
Renting a small office that can hold 10-30 people is not cheap. Often, things are cramped and the servers get tucked into a closet with inadequate ventilation, or onto a shelf in a back corner somewhere. The space that gets set aside for a server or five, plus random storage, would be much better used by a revenue-generating human. That space costs the company more than $50 a month, and often more than $300 a month, once all utilities are pro-rated for it.
2) Suitability of environment.
How often do you see a small company with a server / network gear closet that runs at 85-90F in the summer? How much less efficiently does that machine run, in terms of power consumption? Higher temperature means less efficiency from every component, and more bit errors. Heat sinks can only sink heat down to the ambient temperature, and inside the case, it's over 100F!
A common practice is to put a 5-15K BTU portable air conditioner in the closet. How often is warm air evacuation overlooked! Those units cost several thousand dollars, and require the reservoir to be emptied, or a drain line run, or they shut off once they collect enough condensate.
Warm air evacuation needs a 5-8" hole drilled through a wall, preferably an external one, so you can get that hot air out of the office! I've seen companies vent it into the dropped ceiling, only to have the heat go nowhere and fall right back into the room, or not evacuate the hot air at all!
Then there's the random risk of fire, flood, and power outages. Many smaller companies are located in mixed residential / retail / light commercial areas, on the first or second floor of buildings with way too much fuel for a fire in them. How about sprinklers? A good sprinkler system will dump 1-50K gallons of dirty water on a building as soon as the heads heat up enough. A Bic lighter will most certainly set one off.
3) Security.
Does every small office building have a good security system? You may think so, but places get robbed all the time. Dumpster diving is still a huge source of information for a good hacker, but that's another story for another article ;) Most colocation facilities have multiple layers of security: humans, cameras, keycard and/or biometric access, and finally locked cages. Couple that with labels that mean nothing to anyone outside your organization, and you're fairly tight. Much tighter than the home office.
4) Bandwidth and availability.
If you're in a co-location facility, chances are you're sharing some pretty robust network gear with a bunch of other people. If you're having a problem, a bunch of other people are having a problem too, so the economy of scale in maintenance and in the initial outlay for that gear is there. Using a random little Netgear gigabit switch you picked up at Office Depot for your gear is stupid. If you really don't have the budget for real network hardware, go to eBay and buy older stuff on the cheap. I can pick up a Cisco 3548 layer 2 switch for less than $100. You do not need gigabit Ethernet for your mail server. You don't usually even need it for a file server. There's nothing wrong with running gigabit cards at 100Mb; just make sure you hard-set all ports to "speed 100, duplex full" on both the switch and the servers (a quick sketch of what that looks like follows below), and you'll be pleasantly surprised.
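For reference, here's roughly what that hard-set looks like on each end. The interface names (FastEthernet0/1 on the switch, eth0 on the server) are placeholders, and the server side assumes a Linux box with ethtool installed; adjust for your own gear.
On the switch (Cisco IOS-style interface configuration):
interface FastEthernet0/1
 speed 100
 duplex full
On the Linux server (run as root):
ethtool -s eth0 speed 100 duplex full autoneg off
ethtool eth0 | grep -E 'Speed|Duplex'    # verify it actually took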
Bandwidth at the office, unless your business is PUSHING huge files out to clients constantly, should be access bandwidth only. Get away from the idea of serving anything from the office. At that point, you should go ahead and put your whole development environment into a co-lo anyway. Get used to the idea that you don't have to touch the gear physically to have full access and remote supportability!
5) Cost, or a simplistic case study.
A random server at the office to do mail costs $2500. The circuit to host that random server (a single T1) is commodity-priced at ~$400 a month in most metro markets. Real estate, power, and A/C cost at least another $100 a month anywhere, and that's conservative!
If you consider the server to have a 4 year life-span, that's $625 a year for hardware, $4800 a year for the T1, and $1200 a year for the space/lights/power/cooling. Quick math calls that $6625 per year, not counting licenses, administration, or time spent resolving trouble.
Take the same server, make it a 1U HP, like a last-model DL360 G5 (the G6 is out). Buy it used if you're really pinching the pennies, or new with 5 years of support for around $4K. Put it in co-location for $50 a month, including power and bandwidth. Get business DSL at the office for $200 a month or much less. Amortized over the same 4 years, that's $1000 + $600 + $2400 = $4000 a year, a net savings of $2625.
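If you want to sanity-check that math, here's the whole comparison as a quick bit of shell arithmetic, using the numbers above and the same 4-year amortization for both boxes:
[pserwe@host ~]$ echo "office: $(( 2500/4 + 400*12 + 100*12 )) per year"
office: 6625 per year
[pserwe@host ~]$ echo "co-lo: $(( 4000/4 + 50*12 + 200*12 )) per year"
co-lo: 4000 per year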
The real estate can be a little cheaper at the office, but the bandwidth is nowhere near as good or as cheap. Every situation is different, and scale affects things a lot. Individual situations require individual attention, and this model assumes a greenfield deployment, while most people are dealing with the gear they already have.
Anyone is free to contact me with questions.
Wednesday, July 29, 2009
Adtran Routers, 95% of the functionality for 50% of the price of Cisco.
Adtran is a well-known vendor in the telecom community, and they're getting even better known as small businesses move from traditional phones and key or hybrid key/PBX systems into the exciting, and considerably less reliable (during the first 30 days or so), world of VoIP with truly converged DIA (direct internet access) and voice. Better known because, simply, they are a great vendor.
1) 10 years support/warranty out of the box with no additional contract.
ZOMGWTF! Ask Cisco what 10 years of SmartNet on an IAD costs. You could buy a couple of spare units for that.
2) 95% of the functionality, 50% of the price.
We all know Cisco gear is pricey, but is it really better? Technically, yes. SNMP support is more robust on the Cisco device: if you want to do a GETNEXT walk on a table that lists your current firewall sessions, Cisco's MIBs will support that, and Adtran's won't (there's a quick sketch of what that kind of table walk looks like at the end of this section). Adtran didn't spend quite as much time and money engineering the ability to pull every minute detail out of the box via SNMP.
Another mild weakness: in some very unusual environments that need ancient protocol support, the Adtran IAD gear just won't cut it. That's very rare, though. Again, 95%.
Mostly, though, the important stuff is there. Again, this is a price/performance opinion, not a line-by-line comparison of which vendor is better.
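To make the SNMP point concrete, here's roughly what a table walk looks like with the stock net-snmp command-line tools. The community string (public) and the address (192.0.2.1) are placeholders, and I'm walking the standard IF-MIB interface description column that every vendor exposes; the Cisco difference is that their MIBs also publish tables like the active firewall sessions, which you can walk the exact same way, while Adtran's MIBs simply don't define them.
snmpgetnext -v2c -c public 192.0.2.1 IF-MIB::ifDescr   # a single GETNEXT step into the table
snmpwalk -v2c -c public 192.0.2.1 IF-MIB::ifDescr      # repeated GETNEXTs until the table ends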
3) Reliability?
About the same or better, with a much more streamlined support organization. Instead of waiting 30-60 minutes for a callback and trying to give all kinds of technical detail to someone who clearly hears everything you say as "blahblahblahblahblah", you get into a support queue fast, and the techs aren't bad. If an issue goes over the tech's head, or you've hit a bug, it goes right into engineering. There's about as much visibility into engineering with Adtran as there is with Cisco, but it's a lot faster to get access to.
Who uses them? All kinds of people. Probably the most notable one I can think of is Verizon, who use Adtran extensively throughout their network. If Adtran made a product competitive with the Actiontec routers Verizon pushes into everyone's house with FiOS, Adtran would probably have that business too. They also make some more hardcore telecom gear, because that's where they come from, the telecom business: CSU/DSUs, muxes, all kinds of ubiquitous gear you find at a phone company.
Tuesday, July 28, 2009
The incredibly underappreciated wget.
Everyone has to suck some random files down from an ftp or http site every now and then, right?
I have to do this something like 12 times a week, usually.
If you're running MacOS X, or any variant of *nix, you may or may not be aware of wget and how cool it is..
Case in point:
Vendor X just posted a file on their ftp site. In this case, it's Dialogic, a major telecom vendor for carrier-grade stuff.
[pserwe@host ~]$ wget -c --ftp-user=user --ftp-password=password ftp://ftp.dialogic.com/posted_file.zip
-c, my friends, is for continue, so if the transfer gets stalled, or there's some sort of transient issue, I can restart it using the exact same command line and pick up where I left off.
What's not to like? It's approximately 15 times faster than firing up some obnoxious GUI and sucking a file down.. I also get a really nice status line the entire time it's running..
--12:50:45-- ftp://ftp.dialogic.com/posted_file.zip
=> `posted_file.zip'
Resolving ftp.dialogic.com... 192.219.17.67
Connecting to ftp.dialogic.com|192.219.17.67|:21... connected.
Logging in as user ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD not needed.
==> SIZE posted_file.zip ... 55734149
==> PASV ... done. ==> REST 9646080 ... done.
==> RETR posted_file.zip ... done.
Length: 55734149 (53M), 46088069 (44M) remaining
100%[++++++++++++++++============================================================================>] 55,734,149 413K/s in 1m 50s
12:52:36 (408 KB/s) - `posted_file.zip' saved [55734149]
[pserwe@host ~]$
If there's no authentication required, as with a lot of http or anonymous ftp sites (mirrors.kernel.org, for instance), you can omit the --ftp-user= and --ftp-password= options.
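For example, grabbing something off an anonymous mirror looks like this (the path here is made up, substitute whatever file you're actually after):
[pserwe@host ~]$ wget -c http://mirrors.kernel.org/some/path/some_file.tar.gz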
Even better example..
I have this vendor that makes this super expensive (major fraction of $1M USD) test and measurement system for telephone calls. These guys can sniff and break down a protocol like nobody else. They have to do software updates at various times that require a bit over a GB of files dumped onto my system.
They typically do it by pushing from their office over a 3Mb bonded T1 or fractional DS3 (not sure, probably bonded T1's though). It takes quite a bit of time at that speed.
I tipped them off to wget, and now the files get pulled down to my system over its 1Gb link instead. It moves quite a bit faster! The actual source of the files is a server in a colo on serious bandwidth, so I get around 800K to 1MB/s pulling from there.
Good times, and time saved.. just by using wget.
Monday, July 27, 2009
Linux fear still lives in parts of North America.
Let's face it: the run-of-the-mill company that has no real Unix sysadmin staff is scared of Linux.
Not just slightly uncomfortable because it's something new (ya know, from the '90s) that they don't know, but deep-seated, instinctual fear, as if Linux were a great white shark about to come up from the murky depths and bite them in half like a scene from 'Jaws'.
The really funny part? Linux is *everywhere*. Want something to run on an embedded device? Who doesn't like Linux? There's a sprinkling of VxWorks here and there if you really need a commercial real-time OS, but in reality, 99% of the black boxes out there are Linux-based.
Linux is The Truth, The Light, and The Way.