iRedMail on Nginx

This is my experiment getting iRedMail to work with Nginx. In the end I got everything working other than awstats, although with some caveats. I don’t like awstats very much and it seemed quite troublesome to get set up. There is a mode that lets awstats just generate static files, which seems to me a better solution. I tested only on Debian 6.0.7, and only briefly on brand new VMs, although it should work just fine on Ubuntu as well.

So I am starting out with a brand new Debian 6.0.7 system. First things first, we set up our hosts and hostname files. For my test environment I used mail.debian.test as the hostname. Then I grabbed the latest iRedMail, which happened to be 0.8.3 at the time of writing, via wget in an SSH session. I had to install bzip2 to “tar -xf” it, so a quick “apt-get install bzip2” resolved that. I then ran the iRedMail installer and let it complete.
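Roughly, the commands looked like this (the hostname is from my test setup; I’m not writing the tarball URL from memory, so grab the current one from iredmail.org):

```shell
# Hostname setup: short name in /etc/hostname, FQDN first in /etc/hosts
# so "hostname -f" resolves to mail.debian.test
echo "mail" > /etc/hostname
hostname mail
# /etc/hosts should contain a line like:
#   127.0.0.1   mail.debian.test   mail   localhost

# Fetch and unpack iRedMail 0.8.3, then run the installer
apt-get install -y bzip2
wget -O iRedMail-0.8.3.tar.bz2 "$IREDMAIL_URL"   # URL from iredmail.org downloads
tar -xf iRedMail-0.8.3.tar.bz2
cd iRedMail-0.8.3 && bash iRedMail.sh
```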

Now to stop apache services for good:
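On Debian’s sysvinit that amounts to something like:

```shell
# Stop apache now and keep it from coming back at boot
/etc/init.d/apache2 stop
update-rc.d -f apache2 remove
```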

Optionally we can run “apt-get remove apache2” to get rid of apache binaries as well.

Now I needed Nginx and php5-fpm (as I prefer FPM). This takes a little work, as Debian 6.0.7 doesn’t have php5-fpm in its default sources. This would have been easier on Ubuntu.

What I did here was first install nginx and curl. Then I added dotdeb to the sources list, added its key, and updated my sources. Finally I was able to install php5-fpm.
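A sketch of those steps (the repository lines and key URL are from dotdeb’s own squeeze instructions as I remember them; double-check them against dotdeb.org):

```shell
apt-get install -y nginx curl

# Add the dotdeb repository for squeeze and trust its signing key
cat >> /etc/apt/sources.list <<'EOF'
deb http://packages.dotdeb.org squeeze all
deb-src http://packages.dotdeb.org squeeze all
EOF
curl http://www.dotdeb.org/dotdeb.gpg | apt-key add -

apt-get update
apt-get install -y php5-fpm
```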
Now that the applications are in place, I need to write their configuration files. Here is the list of files I will be using:
Nginx’s iRedMail site configuration file
php5’s FPM iRedMail web pool file
iRedMail init.d file to launch the iredadmin python web service

During additional testing I uploaded the files and just used curl to put them into place. The init.d script is borrowed from the web (exactly where, I can’t remember, as I used bits and pieces from multiple places). However, I don’t feel the need to write out or explain all of the changes in great detail.

You will need to modify the nginx file (/etc/nginx/sites-available/iRedMail) to contain the correct domain. You will also need an additional DNS entry for iredadmin.domain.tld (in my case iredadmin.debian.test). If this is your only/first SSL site, or you prefer it to be the default, you will need to adjust the SSL section; I added comments to explain that. Nginx expects a default website, and if none exists it won’t start.
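To give an idea of the shape of the file without pasting the whole thing, here is a trimmed sketch. The certificate paths are iRedMail’s defaults as I remember them, and the Roundcube root and the iredadmin port are assumptions from my setup — adjust all of them:

```nginx
# Webmail (Roundcube) over SSL
server {
    listen 443 ssl;
    # add default_server to the listen line if this is your first/only SSL site
    server_name mail.debian.test;

    ssl_certificate     /etc/ssl/certs/iRedMail_CA.pem;    # assumption: iRedMail default
    ssl_certificate_key /etc/ssl/private/iRedMail.key;     # assumption: iRedMail default

    root /usr/share/apache2/roundcubemail;                 # assumption: adjust to your install
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

# iRedAdmin on its own subdomain, proxied to the standalone python server
server {
    listen 80;
    server_name iredadmin.debian.test;

    location / {
        proxy_pass http://127.0.0.1:8080;   # port is an assumption from my init.d script
    }
}
```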

As for the additional domain, I tried my best, but it seems there is no way to make the script aware that it is in a subdirectory so that it passes the correct URLs to its output templates. Although the template has a homepath variable, this seems to be set from ctx in the code, which from my limited knowledge I don’t believe is changeable via environment/server variables. I also didn’t see a way to change it in any setting. Hopefully the iRedMail developers can make this change in future versions.
The good news is the iRedMail developers had the foresight to set up the script to run very smoothly as a standalone python web server via a CGI socket, so no additional work is needed to make that run. I had hoped to use the iredapd service to launch it, but it appears to crash and fail horribly. So I set up a second instance to do this.

Now just a little more work to activate the new service, link the file as a live nginx site and restart some services.
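In my case that boiled down to the following (file names match what I used above; yours may differ):

```shell
# Register and start the new iredadmin service
chmod +x /etc/init.d/iredadmin
update-rc.d iredadmin defaults
/etc/init.d/iredadmin start

# Enable the nginx site and bounce the services
ln -s /etc/nginx/sites-available/iRedMail /etc/nginx/sites-enabled/iRedMail
/etc/init.d/php5-fpm restart
/etc/init.d/nginx restart
```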

That’s it. Now when I hit mail.debian.test I get the webmail portal. When I access iredadmin.debian.test I get the admin portal. phpMyAdmin is also set up at mail.debian.test/phpmyadmin

Setting this up on Ubuntu should be easier, as 12.04 has php5-fpm in its packages, so there is no need to add the dotdeb sources. Everything else would be the same.

Nginx has always been flaky for me with IPv6. I intended to include it, but it just wasn’t playing nicely. Sometimes just adding [::]:80 to a listen directive will make it listen. Other times I have to specify it twice (and it doesn’t complain). Then again, if I try [::]:443, nginx may not want to start at all, even though it accepted [::]:80 just fine. Because of how picky it can be, I opted for IPv4-only support here.

SFTP, SSHFS, VPN + exportFS, and WebDav

While working on some code, I needed something I could access much faster and more easily than with my current methods.  So after some testing, I’ve come across a solution.

I started with my simple SSH session.  This proved not to be so helpful when editing multiple files or needing to move around easily.  While the power of the command line is great, it isn’t so great for developing larger scripts or moving between multiple files.

So, on to SFTP.  I used an SFTP-jailed user by adding that user to a group and forcing that group in my sshd_config to always use SFTP:
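The relevant sshd_config pieces look like this (the group name sftponly is my choice; note the chroot target must be owned by root and not group-writable, or sshd will refuse the login):

```
# /etc/ssh/sshd_config
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```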

This works very well and is much more secure than FTP. It uses the SSH backend, which is very secure, but forces the session down to an FTP-like layer. The jailed user can’t run any commands, gets no forwarding, and is chrooted to a directory (their home in this case). However, this was slow. On average it would take 4 seconds to load a file. Directory listings were fairly fast (usually 1 second, sometimes 2). Unacceptable delays just to edit a file.

With SFTP out of the question, I figured SSHFS would perform similarly, but gave it a try anyway. I set it up using OSXFuse and SSHFS, plus a simple GUI to test with (if it worked, I would learn to use the CLI). With that setup it was even worse: files would open in 1-2 seconds, but directory listings just took forever, sometimes not loading at all.

As SSHFS was not an option, and since I wanted to try it anyway, I set up a VPN, OpenVPN being the choice here. I spent a few hours working on this. It took a bit to configure, as my firewall was blocking a lot of the connections; even once I had the right port configured, the firewall still blocked access. But once I sorted out allowing that private traffic and got the certs in the right place, I connected to my new VPN. I will note that if you don’t sign the certificate, it doesn’t produce a valid .crt file, so make sure to say yes to that.
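With the easy-rsa 2.x scripts that ship with OpenVPN, the cert dance looks roughly like this (the directory path is an assumption; the sign/commit prompts at the end of build-key-server are the ones you must answer yes to):

```shell
cd /etc/openvpn/easy-rsa        # wherever you copied the easy-rsa scripts
. ./vars                        # load the CA environment variables
./clean-all
./build-ca
./build-key-server server       # answer "y" to BOTH the sign and commit
                                # prompts, or server.crt comes out empty
./build-dh
```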

After setting up the VPN, I needed to set up exportfs so I could export the directory I wanted. More troubles there. It took a combination of the correct options on the server side (rw,sync,no_subtree_check,insecure,anonuid=1000,anongid=1001,all_squash) and the right ones on the client side (-o rw,noowners -t nfs) to finally get it to work properly. Alas, after all those troubles, it had the same issue as SSHFS: slow directory loading. This was unacceptable and would not do.
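Putting those options together (the export path and VPN addresses here are made up for illustration):

```shell
# Server: add the export, then reload the exports table
# /etc/exports line:
#   /srv/code 10.8.0.0/24(rw,sync,no_subtree_check,insecure,anonuid=1000,anongid=1001,all_squash)
exportfs -ra

# Client (OS X): mount the export over the VPN
mkdir -p /Volumes/code
mount -o rw,noowners -t nfs 10.8.0.1:/srv/code /Volumes/code
```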

Finally, I tried WebDav. At first I tried it in a directory, but my location directive for PHP files in Nginx was wreaking havoc, so I just set up another subdomain to deploy it under. It also appears that Nginx, at least on Ubuntu 12.04 (and possibly similar versions on Debian), has the dav module and extension (for full support) built in. I simply needed to set up the configuration for it. Really easy to do, and it didn’t take much time; I think I set it up in less than 30 minutes.
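The WebDav part of the config came out looking roughly like this (the subdomain, root, and htpasswd path are placeholders from my setup; dav_ext_methods comes from the dav-ext module that Ubuntu’s nginx build includes):

```nginx
server {
    listen 80;
    server_name dav.example.com;     # placeholder subdomain
    root /srv/code;                  # placeholder directory

    location / {
        dav_methods PUT DELETE MKCOL COPY MOVE;
        dav_ext_methods PROPFIND OPTIONS;   # the "full support" extension
        dav_access user:rw group:rw all:r;
        create_full_put_path on;

        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/htpasswd;
    }
}
```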

The result is great. WebDav is fast: directory listings are almost instant and files open in just a second. OS X (Mountain Lion), however, does not seem to have the correct support for WebDav, and attempts to look for resource files and other hidden files (such as .ql_disablethumbnails, which I assume tells QuickLook not to load thumbnails). So it was over to my FTP client that supports WebDav. I wish I could have had native Finder support for it, but oh well.

A IRC user said it best though and I couldn’t agree more now: < rnowak> SleePy: webdav rocks, totally underused.

OpenFiler with Rocket Raid card

Neither OpenFiler, nor Linux in general, likes working with the Rocket Raid cards. However, despite what is said, it is possible to set this up. It took some time, searching, testing, frustration, and putting a piece of tape over the card’s speaker (the RAID-failure beep got annoying).

Well, the first problem is installing OpenFiler. It does not like installing onto the RAID, for the same reason it takes work to get it to install the driver. I gave up early on trying to get anything to work and just opted to run the OS on a single non-RAID drive plugged directly into the system board. It took some time with the BIOS settings and toying with OpenFiler to get it to recognize the drive. I can’t be sure why, but I assume it’s because the motherboard also had built-in RAID support and OpenFiler was trying to load those drivers as well. I changed some BIOS settings and worked with the OpenFiler installer through a few rounds of trial and error to install. I think I had to load the IMB RAID and USB mass storage drivers for it to get to the install screen.
After completing this, I mostly reversed the changes I had made to the BIOS. I had to make some additional BIOS changes and move the SATA cable to a lower SATA port on the motherboard for the BIOS to recognize the hard drive as a boot option. In the end I needed it to boot off the standalone hard drive while the RAID card was plugged in.

After that the problem was down to getting the driver to work. This was the trickiest part. I found out while testing the drivers the manufacturer provides that they would not interact properly with the RAID card. In fact, they would split the RAID into two and send off alerts. I got tired of the beeps and put tape over the speaker while I worked, although it could still be heard, much to the disappointment of those around me.

How I finally got it to work was actually easier than I thought. First I downloaded the open source generic driver, then untarred it and changed my directory to hptdriver2/rr231x_0x-linux-src-v2.5/product/rr2310pm/linux

Then I ran the make commands:
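I no longer have the exact transcript; as far as I recall it was just the stock build, nothing exotic:

```shell
cd hptdriver2/rr231x_0x-linux-src-v2.5/product/rr2310pm/linux
make
```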

Then I exported some variables.

Finally, I changed back to hptdriver2/osm/linux and ran the command. It will tell you that it failed to update the Linux RAM image. That’s OK at this point.

Now, to get the image to compile, I had to copy the .ko file a few times. I am sure there is a reason it wanted those drivers, but nonetheless the copy command worked and things went just fine.
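Something along these lines, assuming the module needs to land in the running kernel’s module tree (the destination path is my assumption of where mkinitrd goes looking):

```shell
# rr231x_0x.ko is what the build above produces
cp rr231x_0x.ko /lib/modules/$(uname -r)/kernel/drivers/scsi/
depmod -a
```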

Then I finally built the RAM image. My first couple of tries failed, which is when I found out I needed to copy those files. When it still failed after that, I found out the sata_mv driver had been removed, yet the kernel wanted it (or at least thought it did). So I just told the image that it was built in, and it succeeded.
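With a Red Hat-style mkinitrd (which, as far as I can tell, is what OpenFiler ships), that looks like:

```shell
# --with pulls the HighPoint module into the image; --builtin tells
# mkinitrd to treat sata_mv as compiled-in so it stops demanding the
# missing module file
mkinitrd --with=rr231x_0x --builtin=sata_mv \
    /boot/initrd-$(uname -r)-rr.img $(uname -r)
```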

Now that it had completed, I opened up the grub config file (in /boot/grub) and proceeded to duplicate the first boot item, modifying its RAM image entry to point to the new one. I made sure to leave the old one in case the new image didn’t work, so I would have another way to boot the system.
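The duplicated entry ends up looking something like this (the kernel version and root device here are placeholders, not my actual values):

```
title Openfiler (RocketRaid initrd)
        root (hd0,0)
        kernel /vmlinuz-2.6.x ro root=/dev/sda1
        initrd /initrd-2.6.x-rr.img
```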

Finally, I issued a reboot and hoped for the best. To my luck it finally started up with no beeping, and a “fdisk -l” in the console showed only the main OS disk and a single RAID disk. When I did it wrong, or didn’t have the drivers, Linux would see each of the drives individually.

While I didn’t do this, at this point you should be able to copy over the OS to the raid card. Windows recognizes the raid card just fine. So you could use Hirens boot cd and run Raw Copy to clone it over. Grub does work fine with the raid card. It is only when it starts Linux without the Rocket Raid card drivers that it kernel panics and fails. Having the Rocket Raid drivers in the ram image should let it start up fine.

I should also note here that I had tried many times and failed on this same system. Prior to doing this, I thought I had searched for all instances of hpt*, rr2*, and anything else I could think of related to the driver, and removed them. It’s possible something else still existed, and that is how it worked.