Tag Archives: Nginx

LetsEncrypt Nginx SSL Sites

I got my hands on the LetsEncrypt beta and am already testing it out.  In case it wasn't obvious: if you have sites that are SSL only (I have a few subdomains which do not operate on http/port 80), you will need to set them up for the challenge.  Here is a quick example of how I adjusted my Nginx to support only the LetsEncrypt script over http, while making sure everyone else is https only.
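A minimal sketch of the port-80 server (domain and webroot are placeholders, not my real values):

    server {
        listen 80;
        server_name example.com;

        # let the LetsEncrypt client answer its challenges over plain http
        location /.well-known/acme-challenge/ {
            root /var/www/letsencrypt;  # assumed webroot handed to the client
        }

        # everyone else goes to https
        location / {
            return 301 https://$host$request_uri;
        }
    }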

And if it helps anyone, here is the relevant portion of the server setup with SSL:
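Trimmed to the certificate bits (the paths follow the client's live/ directory layout; the domain is a placeholder):

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    }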


Check your listen attributes.  I've sometimes seen the extra listen parameters break things, and other times (with IPv6) you need them for it to work at all.  Do a configtest to verify your changes before restarting nginx.
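On most systems that is just:

    nginx -t               # configtest
    service nginx restart  # only once the test passes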


iRedMail on Nginx

This is my experiment to get iRedMail working with Nginx. In the end I got everything to work other than awstats, although with some caveats. I don't like awstats very much and it seemed quite troublesome to set up. There is a mode that lets awstats just generate static files, which seems to me the better solution. I tested only on Debian 6.0.7, although it should work just fine on Ubuntu too. It was also limited testing on brand new VMs.

So I am starting out with a brand new Debian 6.0.7 system. First things first, we set up our hosts and hostname files; for my test environment I used mail.debian.test. Then I grabbed the latest iRedMail, which happened to be 0.8.3 at the time of writing, via wget in an SSH session. I had to install bzip2 to "tar -xf" it, so a quick "apt-get install bzip2" resolved that. I then ran the iRedMail installer and let it complete.
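Roughly, those steps look like this (grab the tarball from the iRedMail download page; I'm not reproducing the URL from memory):

    echo "mail.debian.test" > /etc/hostname
    hostname mail.debian.test
    # and make sure /etc/hosts has a matching line, e.g.:
    #   127.0.0.1   mail.debian.test mail localhost

    apt-get install bzip2
    # fetch iRedMail-0.8.3.tar.bz2 from the project's download page, then:
    tar -xf iRedMail-0.8.3.tar.bz2
    cd iRedMail-0.8.3/
    bash iRedMail.sh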

Now to stop apache services for good:
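Something along these lines with Debian's init tools:

    /etc/init.d/apache2 stop
    # keep apache from coming back at boot
    update-rc.d -f apache2 remove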

Optionally we can run “apt-get remove apache2” to get rid of apache binaries as well.

Now, I needed Nginx and php5-fpm (as I prefer fpm). This takes a little work, as Debian 6.0.7 doesn't have php5-fpm in its default sources. This would have been easier on Ubuntu.

What I did here was first install nginx and curl. Then I added dotdeb to the sources list, added its key, updated my sources, and finally installed fpm.
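A sketch of those steps (the dotdeb repo line and key URL as they were published for squeeze at the time):

    apt-get install nginx curl
    echo "deb http://packages.dotdeb.org squeeze all" >> /etc/apt/sources.list
    curl http://www.dotdeb.org/dotdeb.gpg | apt-key add -
    apt-get update
    apt-get install php5-fpm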
Now that the applications are in place, I need to write their configuration files. Here is the list of files I will be using:
- Nginx's iRedMail site configuration file
- php5-fpm's iRedMail web pool file
- an init.d file to launch the iredadmin Python webservice

During additional testing I uploaded the files and just used curl to put them into place. The init.d script is borrowed from the web (exactly where, I can't remember, as I used bits and pieces from multiple places). However, I don't feel the need to write out or explain all of the changes in great detail.

You will need to modify the nginx file (/etc/nginx/sites-available/iRedMail) to contain the correct domain. You will also need an additional DNS entry for iredadmin.domain.tld (in my case iredadmin.debian.test). If this is your only/first SSL site, or you prefer it to be the default, you will need to adjust the SSL section; I added comments to explain that. Nginx expects a default website and won't start if none exists.
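The shape of that file is roughly the following; treat it as a sketch, since the server names, certificate paths, root, and sockets here are placeholders rather than my exact values:

    # webmail (roundcube) on the main mail host
    server {
        listen 443 default ssl;   # drop "default" if another site is already the default
        server_name mail.debian.test;
        root /var/www/roundcubemail;   # assumed; point at wherever iRedMail put roundcube
        index index.php;
        ssl_certificate     /etc/ssl/certs/iRedMail_CA.pem;   # assumed self-signed pair
        ssl_certificate_key /etc/ssl/private/iRedMail.key;

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm-iredmail.sock;  # matches the fpm pool file
        }
    }

    # iredadmin on its own subdomain, handed to the standalone python service
    server {
        listen 443 ssl;
        server_name iredadmin.debian.test;

        location / {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/iredadmin.sock;  # the cgi socket the init.d script opens
        }
    }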

As for the additional domain: I tried my best, but it seems there is no way to make the iredadmin script aware that it is in a subdirectory and have it pass the correct URLs to its output templates. Although the template has the capability to use a homepath variable, this seems to be set from ctx in the code, which, from my limited knowledge, I don't believe is changeable via environment/server variables. I also didn't see any setting to change it. Hopefully the iRedMail developers can make this change in future versions.
The good news is the iRedMail developers had the foresight to set the script up to run very smoothly as a standalone Python web server via a cgi socket, so no additional work is needed to make it run. I had hoped to use the iredapd service to launch it, but it appears to crash and fail horribly. So I set up a second init script to do this.
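The core of that second init script is just a start-stop-daemon pair; here is a rough skeleton (the install path, user, and pidfile are assumptions, not the exact script I cobbled together):

    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          iredadmin
    # Required-Start:    $remote_fs $network
    # Required-Stop:     $remote_fs $network
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: iredadmin standalone web service
    ### END INIT INFO
    DAEMON=/usr/bin/python
    SCRIPT=/usr/share/apache2/iredadmin/iredadmin.py   # assumed install path
    case "$1" in
      start)
        start-stop-daemon --start --background --make-pidfile \
            --pidfile /var/run/iredadmin.pid --chuid iredadmin \
            --exec $DAEMON -- $SCRIPT
        ;;
      stop)
        start-stop-daemon --stop --pidfile /var/run/iredadmin.pid
        ;;
      *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
    esac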

Now just a little more work to activate the new service, link the file as a live nginx site, and restart some services:
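In concrete terms, something like:

    update-rc.d iredadmin defaults
    /etc/init.d/iredadmin start
    ln -s /etc/nginx/sites-available/iRedMail /etc/nginx/sites-enabled/iRedMail
    /etc/init.d/php5-fpm restart
    /etc/init.d/nginx restart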

That's it. Now when I hit mail.debian.test I get the webmail portal, and when I access iredadmin.debian.test I get the admin portal. phpMyAdmin is also set up at mail.debian.test/phpmyadmin.

Setting this up on Ubuntu should be easier, as 12.04 has php5-fpm in its packages, so there is no need to add the dotdeb sources. Everything else would be the same.

Nginx has always been flaky for me when doing IPv6 services. I intended to include them here, but it just wasn't playing nicely enough. Sometimes just adding [::]:80 to a listen directive makes it listen. Other times I have to specify it twice (and it doesn't complain). Then again, if I try [::]:443 nginx may not want to start at all, even though it accepted [::]:80 just fine. Because of how picky it can be at times, I opted for IPv4-only support here.


SFTP, SSHFS, VPN + exportFS, and WebDav.

While working on some code, I needed a way to access it much faster and much easier than my current methods.  So after some testing, I've come across a solution.

I started with my simple SSH session.  This proved not so helpful when editing multiple files or needing to move around.  While the power of the command line is great, it isn't so great for developing larger scripts or jumping between multiple files easily.

So, on to SFTP.  I used an sftp-jailed user by adding that user to a group and forcing that group, in my sshd_config, to always use sftp:
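The usual OpenSSH way to do this (the group name is whatever you created; mine is shown here as sftponly):

    # in /etc/ssh/sshd_config
    Subsystem sftp internal-sftp

    Match Group sftponly
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no
        X11Forwarding no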

This works really well and is much more secure than FTP. It uses the SSH backend, which is very secure, but forces the session down to an ftp-like layer. The jailed user can't run any commands, gets no forwarding, and is chrooted to a directory (their home in this case). However, this was slow. On average it would take 4 seconds to load a file. Directory listings were fairly fast (usually 1 second, sometimes 2). Unacceptable delays just to edit a file.

With SFTP out of the question, I figured SSHFS would be similar, but gave it a try anyway. I set it up using OSXFUSE and SSHFS, and then MacFusion.app (a simple GUI to test with; if it worked I would learn to use the CLI). With that setup, it was even worse. Files would open in 1-2 seconds, but directory listings just took forever, sometimes not even loading at all.

As SSHFS was not an option, and since I wanted to try it anyway, I set up a VPN, OpenVPN being the choice here. I spent a few hours on it. It took a bit to configure, as my firewall was blocking a lot of the connections; even once I had the right port configured, the firewall still blocked access. But once I sorted out allowing that private traffic and got the certs in the right place, I was connected to my new VPN. I will note that if you don't sign the certificate, it doesn't produce a valid .crt file, so make sure to say yes to that.
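For reference, the cert dance with OpenVPN's bundled easy-rsa 2.x scripts goes roughly like this; the signing prompt at the end is the "say yes" part:

    cd /etc/openvpn/easy-rsa   # assumed location of a copy of the easy-rsa scripts
    . ./vars
    ./clean-all
    ./build-ca
    ./build-key-server server
    ./build-key client1
    # build-key ends with "Sign the certificate? [y/n]" -- answer y,
    # or no valid .crt file is written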

After setting up the VPN, I needed exportfs so I could export the directory I wanted. More trouble there: it took a combination of the correct options on the server side (rw,sync,no_subtree_check,insecure,anonuid=1000,anongid=1001,all_squash) and the right ones on the client side (-o rw,noowners -t nfs) to finally get it working properly. Alas, after all that trouble, it was the same issue as SSHFS: slow directory loading. That was unacceptable and would not do.
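Put together, with the paths and VPN addresses as placeholders, that amounts to:

    # server: /etc/exports
    /srv/code 10.8.0.0/24(rw,sync,no_subtree_check,insecure,anonuid=1000,anongid=1001,all_squash)

    # server: reload the export table
    exportfs -ra

    # client (OS X):
    mkdir -p /Volumes/code
    sudo mount -o rw,noowners -t nfs 10.8.0.1:/srv/code /Volumes/code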

Finally, I tried WebDav. At first I set it up in a directory, but my location directive for php files in Nginx was wreaking havoc, so I just set up another subdomain to deploy it under. It also appears that Nginx, at least on Ubuntu 12.04 (and possibly similar versions on Debian), has the dav module and the dav extension module (for full support) built in. I simply needed to write the configuration for it. Really easy to do and it didn't take much time; I think I set it up in less than 30 minutes.
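The vhost only needs a handful of directives; a sketch (subdomain, root, and auth file are placeholders):

    server {
        listen 80;
        server_name dav.example.com;
        root /srv/dav;

        location / {
            dav_methods PUT DELETE MKCOL COPY MOVE;
            dav_ext_methods PROPFIND OPTIONS;   # the "extension" module for full support
            dav_access user:rw group:rw all:r;
            create_full_put_path on;
            auth_basic "WebDav";
            auth_basic_user_file /etc/nginx/dav.htpasswd;
        }
    }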

The result is great. WebDav is fast: directory listings are almost instant and files open in just a second. OS X (Mountain Lion), however, does not seem to have correct support for WebDav; it attempts to look for resource files and other hidden files (such as .ql_disablethumbnails, which I assume tells QuickLook not to load a thumbnail). So it was over to my FTP client, which supports WebDav. I wish I could have had native Finder support for it, but oh well.

An IRC user said it best, and I couldn't agree more now: < rnowak> SleePy: webdav rocks, totally underused.


SMF on Nginx+SPDY

No surprise here: since SPDY is a server-side thing, SMF works with Nginx+SPDY no problem, and the same should hold for SPDY support in any other web server. You just need to set it up on your web server. Nginx currently has a patch for it, but I would suspect this will be merged into its trunk soon and make its way into stable Nginx.
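Once you are running a patched build, enabling it appears to be just an extra flag on the ssl listener:

    listen 443 ssl spdy;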


Nginx with IPv6 and vhosts

Linode.com has recently set up IPv6 natively and is deploying it across their datacenters.  This is great, as I now have a native IPv6 address for my VPS.

I use Nginx as a replacement for Apache, and I noticed today that my vhosts were not responding correctly on the IPv6 address.  Since I use a wildcard for my subdomains, it would still respond with my main domain, but it wouldn't recognize any additional domain or subdomain.  The configuration documentation makes it sound like I only need to add "listen [::]:80;" to my vhosts to get this to work.  However, despite my tries, I received an error:

[emerg]: bind() to [::]:80 failed (98: Address already in use)

All the documentation supports that directive, and some of it suggests binding the sockets separately (by adding ipv6only=on to that listen).  However, this still failed to make it work.

So, after going through all my configs and test configs (for test subdomains I have) and disabling any listen directives (which broke a few things), I still couldn't get it to work.  In the end I am not quite sure why it was failing.  I even checked with "lsof -i :80" for anything else that might have been bound to the port and couldn't find anything.

But what I did to finally get this to work right was add this to my default config (i.e. for my main domain):

listen 80 default;
listen [::]:80 default ipv6only=on;

Then for each other vhost I added:

listen [::]:80;

This seems to make things work without any problem.  No errors whatsoever, and IPv6 responds as it should.

As a final note, I should mention my ISP does not natively support IPv6 yet; I am using a tunnel broker via Hurricane Electric.


phpMyAdmin using login with nginx behind an https auth login

The title may be confusing, but I am sure it is related to how I have things set up. I have phpMyAdmin configured to use http login, meaning it presents its own login form. phpMyAdmin also sits in a protected folder with an auth basic login (so dual auth is required to access my database). This is all behind https.

The problem has been that after I log in, phpMyAdmin redirects to http://domain.tld:443/phpmyadmin/index.php[…]
This causes Nginx to complain about a redirect to an https port arriving over the http protocol. Nginx won't even perform the redirect to https, even though I have that set up.

I know the blame here lies with phpMyAdmin. It took some time to figure out why, and sadly a fix inside phpMyAdmin isn't easy. It is much easier to fix in the Nginx configuration.

The issue is that HTTPS is not set in the server environment variables, so phpMyAdmin detects a port mismatch, and when it fixes up the URL it includes the port (since it doesn't see HTTPS on and the port is not 80).

The simplest solution is just to add this to my fastcgi_params. Since phpMyAdmin lives behind its own domain that always uses https, I don't have to worry about the variable being set where it shouldn't be.
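That is, a single line among the https server's fastcgi parameters:

    fastcgi_param HTTPS on;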

I also show an HTTP_SCHEME environment variable; phpMyAdmin will likewise detect this if it doesn't find HTTPS on. Either one should work. I only tested the first, but the second is checked by phpMyAdmin's config handling and it bypasses all the other scheme checks.
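Set the same way:

    fastcgi_param HTTP_SCHEME https;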


Multiviews in nginx (sorta)

I use wsvn on my svn subdomain. Nginx doesn't have real support for multiviews, but there is a way to sorta do it.

First we set this in our / location:
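Mine was along these lines (the script name is my wsvn.php; treat the rest as a sketch):

    location / {
        # anything that isn't a real file gets replayed through the wsvn script,
        # with the rest of the URI left on the end for it to interpret
        if (!-e $request_filename) {
            rewrite ^(.*)$ /wsvn.php$1 last;
        }
    }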

Now I just need to let fastcgi know about this. My fastcgi params are in their own file, but this is the only site that uses this, so I do not bother adding it to the params file; I just define it after my SCRIPT_FILENAME param.
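Roughly like so (the script path is an assumption; note that $request_uri is passed through still percent-encoded, which comes up below):

    fastcgi_param SCRIPT_FILENAME /var/www/svn/wsvn.php;  # assumed install path
    fastcgi_param PATH_INFO $request_uri;                 # the "multiview" part of the url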

That gets it working. However, I have run into two issues so far while working with this.
1. When trying to view a file that has a .php extension, nginx will try to run it through fastcgi. You can't use fastcgi inside of an if statement, so there was no way I could see to resolve this. This was actually my breaking point for using nginx to serve my wsvn pages.
2. For some reason, the data in PATH_INFO is urlencoded. Apache does not do this when it sets it (spaces are not converted to %20). I had to modify the wsvn code that used multiviews to urldecode() the path info before it is handled elsewhere in the script.

Maybe somebody else who knows more about nginx can resolve these two issues. I would be glad to hear anything about it.

Update:
After more work, I found a solution for the php issue. Not a nice solution, but it gets around the problem. The urlencode issue still exists, but a minor change to my wsvn.php to fix it was no biggie.

Update 2:
I did locate a solution on nginx's website, although I found it by chance.
http://wiki.nginx.org/HttpFcgiModule#fastcgi_split_path_info

However, I would like to note that while this solution works, it would still fail if a .php appears elsewhere in the URL.
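For reference, that approach looks like this (a sketch based on the linked docs; the fastcgi port is a placeholder):

    location ~ ^.+\.php {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_pass 127.0.0.1:9000;
    }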

I am including my current entire config for my svn subdomain just to show how it's being done. I know some things can be done better and would love to hear thoughts.

I should note that the note at the top is what I use as a reference for which fastcgi ports I can use on this virtual host. Since each php configuration needs its own .ini file, I need a simple way to know which ports to use.
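In outline, the config looks like this (domain, root, and ports are placeholders, not my exact values):

    # fastcgi ports reserved for this vhost: 9001 (wsvn)
    server {
        listen 80;
        server_name svn.example.com;
        root /var/www/svn;
        index wsvn.php;

        location / {
            # multiviews emulation: replay missing paths through wsvn.php
            if (!-e $request_filename) {
                rewrite ^(.*)$ /wsvn.php$1 last;
            }
        }

        location ~ ^.+\.php {
            fastcgi_split_path_info ^(.+\.php)(.*)$;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_pass 127.0.0.1:9001;
        }
    }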


Disabling php files in wordpress upload when using nginx

This isn't well documented anywhere for nginx; in fact it is sorta hidden and hard to find. Nginx does support a way to keep php from being executed in my uploads directory.
The way I came across I am actually loving, as it lets me control how content is handled. That is a plus on the server admin's end.

Simply put, I set up a location that only matches my uploads directory. Then I reset the types, defining only jpg, gif and png; all other files get sent as a download. Finally, since I run php as fastcgi, I set up a nested location for php files and tell nginx to stop evaluating rules there.
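A standalone sketch of that idea, assuming the standard wp-content/uploads path:

    location ^~ /wp-content/uploads/ {
        types {
            image/jpeg jpg jpeg;
            image/gif  gif;
            image/png  png;
        }
        # anything not in the types list above is sent as a download
        default_type application/octet-stream;

        # nested match so .php here never reaches the site-wide fastcgi location;
        # break stops rule evaluation and the file is served as a plain download
        location ~ \.php$ {
            break;
        }
    }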

In fact, this is all actually nested inside my primary location /. I did it this way as it was the easiest to get working, although I am sure I could remove that nesting.



Nginx with wordpress seo urls

I have been testing my site on Nginx instead of Apache.  One of the issues I have run across is getting WordPress to work right, since I use the SEO urls.  Not that SEO urls make much difference; it's a fun challenge to work with.

One issue I ran across was getting these urls to work right.  After some reading, I discovered there is a simple equivalent for the rewrite that is used in apache.  However, I couldn't get it to work the way the documentation examples showed.  After testing, I found that it must live inside the location block, which is actually better for the setup.
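What ended up working was along these lines (the classic WordPress rewrite, shown here as a sketch):

    location / {
        # hand anything that isn't a real file or directory to wordpress
        if (!-e $request_filename) {
            rewrite ^(.*)$ /index.php?q=$1 last;
        }
    }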

This makes things work as they should.

Update:

Use of "if" has been suggested by the Nginx team to be avoided.  So here is another solution that avoids it:
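The if-less form uses try_files (again a sketch):

    location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
    }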

