I am in the middle of moving the mail server out of the house again and wondering if that is the best thing in the long run. Running a mail server, and wondering what a sufficiently good and cost-effective way of doing it looks like, has been a long-running affair.
Back in the dawn of Internet time I worked for a small company that decided that getting email on that newfangled Internet might be a good idea. The company was too small to have an IT department, so this sort of thing was handled by whatever software engineer took an interest, and I assisted in the setup. We had no budget for a 24×7 connection to the Internet, so the first setup was an old desktop PC with a very early version of Linux installed, running Sendmail, with a dial-up UUCP connection to a provider. Basically, a couple of times per day (the number of calls limited to reduce costs) the Linux box would dial up the provider and exchange mail with the ISP. Eventually the company’s owner decided he wanted “instant email” and agreed to pay for a 56 kbps digital connection. I recall it taking me a couple of days to re-read all the Sendmail documentation to figure out the two- or three-line change needed in the configuration files. Later, Sendmail came up with an alternative configuration scheme that is supposed to be easier and less error prone, but I was left with a bad taste about how hard Sendmail was to configure and maintain.
A bit later my wife decided to start her own business and needed email. As a graphic designer, she has always needed support for sending and receiving large emails. For the first several years we used the email provided by whatever ISP we were using. In those days ISPs came and went with alarming regularity, so the email address changed quite often. And these were all, of course, dial-up accounts. The only slight improvement was to set up an old computer as an internal mail server, so that when she sent or received a large email it was off her Mac and onto the dedicated machine for the slow process of dialing and transferring mail at 14.4 kbps. Macintoshes in those days did not really support multiple simultaneous programs, so offloading the actual dial-up and sending/receiving of mail was a big improvement, and having a gateway machine that connected and disconnected automatically allowed both of us to access mail and the then-new web simultaneously.
When DSL became available we were the first in the area to get it. Being first had a couple of advantages: a permanently assigned IP address and terms of service that allowed running servers. Both of those changed as the ISPs got smarter, but for the time being I could run a server at the house over a full-time high-speed (128 kbps uplink) connection. Since we would be changing ISPs, and thus email addresses, yet again, it seemed like a good idea to purchase a domain name and solve the email-address-change problem permanently.
It seemed to make sense to keep running our own server too. The machine was effectively free, the electricity cost was low, and the alternative of professional hosting was, at the time, expensive.
Still having a bad taste from Sendmail, I used a succession of cheap proprietary email server programs and then eventually open source software that would run on a Macintosh. These changed as anti-spam practices changed over the years and as the OS versions on the cast-off Macintoshes changed.
The final at-home setup was a PPC G4 Mac Mini running 10.4 and Postfix. Everything on it is well past its official “end of life,” but it has been chugging along dutifully with little attention. A hard disk failure years ago caused me to buy a duplicate machine and keep it on hand as a cold standby, but I’ve never had to put the standby into service since that one failure. The computer itself has been pretty reliable.
But there has been significant downtime because of ISP issues and other non-server hardware issues. The original DSL became untenable as everyone else in the area got DSL and the service went to heck while the phone company filled up all the good pairs in the 50-year-old cables strung through the neighborhood. As the service degraded, so did customer support. It started out with getting a live person immediately and worked its way into touch-tone hell, where eventually you would reach someone with English as a tenth language who couldn’t actually do anything.
So we changed to a “business class” service from a third-party ISP with better (at the time) support. Eventually it seemed a good idea to split the DSL off of the phone service onto a separate pair of wires, so we ordered a “dry pair” service. But there were still times when we would lose service.
The obvious solution was to have two different ISPs and some sort of load balancing and failover setup. That led to a whole succession of experiments with dual-WAN routers: some commercial boxes (with issues) and open source software running on small embedded Linux boxes (with other issues). I even contributed back to one open source project a bunch of fixes needed to actually make multiple WAN links work with quality-of-service-based queuing when running VoIP, mail, and web servers on the LAN side.
But the main issue was that a DSL line would fail for one of two reasons. First, rain seeped through the squirrel-claw pinholes in the outer insulation of the cables and soaked the paper insulating the individual pairs of wires. Second, the phone company started upgrading its facilities to offer higher speed digital connections, and each time they touched their wires they broke something. I grew to be terrified whenever I saw a telco truck in the neighborhood. In either case the result was that we would lose both DSL lines, to both providers, at the same time.
Finally cable based Internet access became available and we switched to having that as our secondary ISP. Except that it was far faster and proved to be far more reliable than the DSL. So eventually we dropped the DSL and along with it the need to have a dual WAN router.
However, that hasn’t changed the fundamental problem: we still lose connectivity from time to time. Sometimes it’s our router. Sometimes it’s the cable modem. Sometimes it’s the power company. Sometimes it’s actually the ISP. Often an issue occurs when we are out of town and I can’t do anything about it for days. Even that old Mac Mini is starting to be a bit flaky, and I’ve had to reboot it a couple of times in the last year. Finally, the cost of a fixed IP address with terms of service that allow home servers has gone up over the years.
Virtual private servers (VPS) are becoming quite cheap. Cost per month can be less than the difference between our “business class” Internet and residential service. Well over a year ago I got an “introductory” VPS for the ridiculous price of $12/year. Only $1/mo! It seemed too good to be true, so I just used it as a play machine. But it has had better up time than the much more expensive home setup. So a little while ago I decided to pop for a second VPS to host mail and calendaring. The price for the second one with more disk space, RAM and allowed bandwidth is more than a dollar per month but still very inexpensive.
Now the rub: I spent a good bit of time configuring the second VPS so that it could host virtual domains nicely and safely, and then transitioned the mail for the business, the family, and several volunteer organizations I host things for. Only a week or two after the transition, one or more hackers broke into the provider’s monitoring and control network and wiped out a number of physical servers, along with all the VPS instances that ran on them. It took several days to restore the machines, and it turns out that the backups for my VPS were corrupted.
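For the curious, the core of hosting several virtual domains in Postfix comes down to a handful of main.cf parameters; here is a minimal sketch (the domain names, paths, and IDs are illustrative placeholders, not my actual configuration):

```
# /etc/postfix/main.cf -- virtual mailbox domains (illustrative values)
virtual_mailbox_domains = example.com, example.org
virtual_mailbox_base    = /var/mail/vhosts
virtual_mailbox_maps    = hash:/etc/postfix/vmailbox
virtual_uid_maps        = static:5000
virtual_gid_maps        = static:5000
```

The vmailbox lookup table maps each address to a mailbox path under the base directory and is compiled with `postmap /etc/postfix/vmailbox`. The parameters themselves are the easy part; the time goes into the surrounding pieces that make the setup safe to expose to the Internet.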
I had to pull the old Mac Mini out of the closet, get email running again, and do all the falderal of getting DNS set up again (another long post about the relative merits of DNS providers might be in order).
Now the VPS is configured again and I have, I hope, a better personal backup of the configuration and am getting all the domains transitioned back to the new server.
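A “better personal backup” here can be as simple as pulling a dated tarball of the configuration directories off the VPS on a schedule. A minimal sketch in shell (the paths and names are assumptions, and a real setup would copy the archive to a second machine rather than leave it on the server):

```shell
#!/bin/sh
# Archive a configuration directory into a dated tarball.
# Illustrative sketch; directory names are placeholders.
backup_dir() {
    src="$1"    # directory to back up, e.g. /etc/postfix
    dest="$2"   # where the archive should land
    stamp=$(date +%Y%m%d)
    # -C keeps the archive paths relative to the parent of $src
    tar -czf "$dest/$(basename "$src")-$stamp.tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}

# Demo with a temporary directory standing in for the real config tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/postfix"
echo "myhostname = mail.example.com" > "$tmp/postfix/main.cf"
backup_dir "$tmp/postfix" "$tmp"
ls "$tmp"/postfix-*.tar.gz
```

From there, a cron entry on another machine can fetch the archive with scp or rsync, so a corrupted provider-side backup is no longer the only copy.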
But is it a good choice? By allowing us to drop the business class service in favor of residential service, the cost will be less. And, in theory, a server in a real data center with 24×7 support staff and multiple high speed links to the Internet ought to have higher availability than an old Mac on the shelf connected via a neighborhood cable modem. But will that really be true?