Dovecot with Ubuntu 11.10

I just recently updated my VPS to Ubuntu 11.10, which went mostly smoothly. However, I had some issues with Dovecot: I could not get it to start.

It seems the configuration guide I originally followed to set up Dovecot left me with settings that are now outdated. Thanks to the upgrade guide on the Dovecot wiki, I was able to convert my configuration file: http://wiki2.dovecot.org/Upgrading/2.0
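If memory serves, the conversion boils down to letting doveconf rewrite the old config for you (the file paths here are assumptions; point it at wherever your old config actually lives, and review the output before swapping it in):

doveconf -n -c /etc/dovecot/dovecot.conf.old > /etc/dovecot/dovecot.conf.new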

Even then, Dovecot refused to work properly. After much searching and many dead ends, I finally figured out that I had to install a new package, dovecot-mysql, to get this working. After that, a restart of the saslauthd service brought everything back into working order, at least for Dovecot.
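For the record, the fix came down to these two commands:

sudo apt-get install dovecot-mysql
sudo service saslauthd restart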

Nginx with IPv6 and vhosts

Linode.com has recently set up native IPv6 and is deploying it across their data centers.  This is great, as I now have a native IPv6 address for my VPS.

I use Nginx as a replacement for Apache, and I noticed today that my vhosts were not correctly responding on the IPv6 address.  Since I use a wildcard for my subdomains, it still responded with my main domain, but it wouldn’t recognize any additional domains or subdomains.  The configuration documentation makes it sound like I only need to add “listen [::]:80;” to my vhosts to get this to work.  However, despite my tries, I received an error:

[emerg]: bind() to [::]:80 failed (98: Address already in use)

All the documentation supports the suggested directive, and some of it suggests binding the sockets separately (by adding ipv6only=on to that listen).  However, this still failed to work.

So, after going through all my configs and test configs (for test subdomains I have) and disabling any listen directives (which broke a few things), I still couldn’t get it to work.  In the end I am not quite sure what finally unblocked it.  I even checked with “lsof -i :80” for anything else that might have been bound to the port and found nothing.

But what I did to finally get this to work right was add this to my default config (ie for my main domain):

listen 80 default;
listen [::]:80 default ipv6only=on;

Then for each other vhost I added:

listen [::]:80;

This seems to make things work without any problem.  No errors whatsoever, and IPv6 responds as it should.
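Pulled together, the relevant parts of the two server blocks look something like this (domain names and roots are placeholders):

server {
	listen 80 default;
	listen [::]:80 default ipv6only=on;	# this block owns the actual IPv6 socket
	server_name example.com;
	root /var/www/example;
}

server {
	listen 80;
	listen [::]:80;	# piggybacks on the socket bound above
	server_name sub.example.com;
	root /var/www/sub;
}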

As a final note, I should mention my ISP does not natively support IPv6 yet.  I am using a tunnel broker via HE.

Converting TS3 from SQLite to a MySQL database

I run a TeamSpeak 3 server.  However, when I set it up, I didn’t bother going any further than getting it running.  Now I find out that it’s using SQLite for its database, and that database is bloated with a lot of useless logs.

The first step was to figure out how to convert the database.  After some thankless Google searches, I found something that worked (after my own edits):

sqlite3 ts3server.sqlitedb .dump | egrep -vi '^(BEGIN TRANSACTION|PRAGMA|COMMIT|INSERT INTO "devices"|INSERT INTO "sqlite_sequence"|DELETE FROM "sqlite_sequence")' | perl -pe 's/INSERT INTO \"(.*)\" VALUES/INSERT INTO \1 VALUES/' | perl -pe 's/AUTOINCREMENT/auto_increment/' | perl -pe 's/varchar\)/varchar\(255\)\)/' > tsdb.sql

Basically, it dumps the database, strips out the statements MySQL doesn’t understand or has no use for, and fixes up the rest so it’s a proper script MySQL will accept.  Then I moved on to importing it.  I set up a teamspeak database and user before doing this.
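Setting up the database and user beforehand was the usual routine, something along these lines (the password is a placeholder):

mysql> CREATE DATABASE teamspeak;
mysql> GRANT ALL PRIVILEGES ON teamspeak.* TO 'teamspeak'@'localhost' IDENTIFIED BY 'Your_cool_password';
mysql> FLUSH PRIVILEGES;

With the database and user in place, the import itself was just: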

mysql -u teamspeak -p teamspeak < tsdb.sql

The next part was trial and error.  First I created a ts3server.ini file, then added this argument to it:

dbplugin=ts3db_mysql

I tried to start up the server, but it failed.  Judging from Google searches, others are getting this error as well:

|CRITICAL|DatabaseQuery |   | unable to load database plugin library "libts3db_mysql.so", halting

It turns out the plugin depends on a shared library that wasn’t present on the server.  You can see this with the ldd command:

$ ldd libts3db_mysql.so
linux-vdso.so.1 =>  (0x00007fffa27ff000)
libmysqlclient.so.15 => not found
libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f272c3f8000)
libm.so.6 => /lib/libm.so.6 (0x00007f272c174000)
libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00007f272bf5d000)
libc.so.6 => /lib/libc.so.6 (0x00007f272bbda000)
/lib64/ld-linux-x86-64.so.2 (0x00007f272c919000)

So I hoped I already had some version of the file:

/usr/lib/libmysqlclient.so.16
/usr/lib/libmysqlclient.so.16.0.0

I had located those two files, but I couldn’t get them to work.  Suggestions from my searching showed people symlinking the .15 version into their TeamSpeak home directory.  I tried the same trick with the .16, but no go.  Back to Google to find out how to get the .15 library for my version of Ubuntu.  An apt-get install of “libmysqlclient15off”, the package name suggested elsewhere, turned up nothing for my Ubuntu version.  But I found I could pull the package straight from the package mirror.  That works for me 🙂  I run 64 bit, so I got the 64 bit version:

$ wget http://mirrors.kernel.org/ubuntu/pool/universe/m/mysql-dfsg-5.0/libmysqlclient15off_5.1.30really5.0.83-0ubuntu3_amd64.deb

$ dpkg -i libmysqlclient15off_5.1.30really5.0.83-0ubuntu3_amd64.deb

I tried to restart TeamSpeak; still no luck.  So I tried the symlink suggestion (working from my TeamSpeak install directory):

$ ln -s /usr/lib/libmysqlclient.so.15 libmysqlclient.so.15

Finally it worked, but it gave errors because I had never set up the ini file containing the MySQL connection details (ts3db_mysql.ini).  So I created that and restarted TeamSpeak again.  The format of the file is as follows:

[config]
host=localhost
port=3306
username=mysql_user_name
password=Your_cool_password
database=mysql_database_name
socket=

Finally, things were working :).  After that I also started the server once with the “createinifile=1” parameter so it would dump my current configuration into an ini file.

I set up my log folder for TeamSpeak via a symlink to a folder in /var/log (I called mine ts3), as you can’t move it into /var/log directly since the server runs as an unprivileged user.  I wanted to set up automatic rotation of the log files, since the server almost never goes down and I don’t want a 100 MB log file :P.  Alas, it has gotten the best of me so far; I haven’t had time to figure out how to get the logs rotating automatically.
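When I do get around to it, plain logrotate is probably the answer.  A minimal sketch, assuming the logs live in /var/log/ts3 (copytruncate because the server keeps its log file open):

# /etc/logrotate.d/ts3 (hypothetical)
/var/log/ts3/*.log {
	weekly
	rotate 4
	compress
	missingok
	notifempty
	copytruncate
}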

The only other issue is that TeamSpeak also seems to log into the database (two places!).  I just ran this manually, but I may have to set up a cron job to do it for me later on:

DELETE FROM log WHERE log_timestamp < unix_timestamp() - 2592000

That little statement deletes all logs older than 30 days, which is more than good enough for me.  I haven’t even read the logs since I set the server up.
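If I do end up automating it, a crontab entry along these lines should do the trick (credentials are placeholders; a ~/.my.cnf would keep the password off the command line):

# Trim database logs older than 30 days, every day at 4am
0 4 * * * mysql -u teamspeak -pYour_cool_password teamspeak -e "DELETE FROM log WHERE log_timestamp < unix_timestamp() - 2592000"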

Multiviews in nginx (sorta)

I use wsvn on my svn subdomain. Nginx has no real MultiViews support, but there is a way to sorta fake it.

First, the / location needs to route any request that doesn’t match a real file through wsvn.php.

Then I just need to let FastCGI know about the extra path. My FastCGI params live in their own include file, but this vhost is the only thing that needs PATH_INFO, so rather than adding it to the shared params file I define it right after my SCRIPT_FILENAME param.
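The general shape is below; the paths, backend port, and exact rewrite here are reconstructions of the idea rather than a verbatim config:

location / {
	# Anything that isn't a real file gets handed to wsvn.php,
	# with the original URI tacked on as extra path info.
	if (!-e $request_filename) {
		rewrite ^(.*)$ /wsvn.php$1 last;
	}
}

location ~ ^/wsvn\.php(.*)$ {
	include fastcgi_params;
	fastcgi_param SCRIPT_FILENAME /var/www/svn/wsvn.php;	# assumed install path
	fastcgi_param PATH_INFO $1;	# defined right after SCRIPT_FILENAME, as noted
	fastcgi_pass 127.0.0.1:9000;	# assumed backend
}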

That gets it working. However, I have run into two issues so far while working with it:
1. When trying to view a repository file with a .php extension, nginx tries to run it through FastCGI. You can’t use the fastcgi directives inside an if statement, so I could see no way to resolve this. This was actually my breaking point for using nginx to serve my wsvn pages.
2. For some reason, nginx urlencodes the data in PATH_INFO, while Apache does not (spaces stay spaces instead of becoming %20). I had to modify the wsvn MultiViews code to urldecode() the path info before handing it off to the rest of the script.

Maybe somebody else who knows more about nginx can resolve these two issues. I would be glad to hear anything about it.

Update:
After more work, I did find a solution for the PHP issue. Not a nice one, but it gets around the problem. The urlencode issue still exists, but the minor change to my wsvn.php to fix it was no biggie.

Update 2:
I did locate a solution on nginx’s website, although I found it by chance:
http://wiki.nginx.org/HttpFcgiModule#fastcgi_split_path_info

However, I would like to note that while this solution works, it still fails if a .php appears elsewhere in the URL.
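For reference, the directive splits the URI on the script extension; a sketch of how it would slot in here (the backend is an assumption):

location ~ ^.+\.php {
	# Split "/wsvn.php/some/path" into the script and its trailing path info.
	fastcgi_split_path_info ^(.+\.php)(.*)$;
	include fastcgi_params;
	fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
	fastcgi_param PATH_INFO $fastcgi_path_info;
	fastcgi_pass 127.0.0.1:9000;
}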

I am including my entire current config for my svn subdomain just to show how it’s being done. I know some things could be done better and would love to hear thoughts.

I should note that the comment at the top is my reference for which FastCGI ports I can use on this virtual host. Since each PHP configuration needs its own .ini file, I need a simple way to know which ports are in use.

Disabling PHP files in WordPress uploads when using nginx

This isn’t well documented anywhere for nginx; in fact, it’s sorta hidden and hard to find. But nginx does support a way to keep PHP from being executed in my uploads directory.
The approach I came across I’m actually loving, as it lets me control exactly how content is handled. That’s a plus on the server admin’s end.

Simply put, I set up a location that only applies to my uploads directory. Inside it I redefine the types map so that only jpg, gif, and png are known; every other file gets sent as a download. Finally, since I run PHP via FastCGI, I add a nested location matching PHP files and tell it to stop evaluating rules.

In fact, all of this is nested inside my primary / location. I did it that way because it was easiest, although I am sure the nesting could be removed.
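A sketch of the shape of it, assuming the standard wp-content/uploads path (my real config differs in the details):

location /wp-content/uploads/ {
	# Only these types are recognized; everything else is served as a download.
	types {
		image/jpeg jpg jpeg;
		image/gif gif;
		image/png png;
	}
	default_type application/octet-stream;

	# PHP files in uploads stop here and never reach the FastCGI backend.
	location ~ \.php$ {
		break;
	}
}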


Nginx with WordPress SEO URLs

I have been testing running my site with Nginx instead of Apache.  One of the issues I have run across is getting WordPress to work right, since I use the SEO URLs.  Not that SEO URLs make much difference; it’s just a fun challenge to work with.

After some reading, I discovered there is a simple nginx equivalent of the rewrite rule Apache uses.  However, I couldn’t get it to work the way the documentation examples showed.  After testing, I found it must live inside the location block, which is actually better for the setup.
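The classic if-based form, roughly what I ended up with inside the location block (a sketch, not my verbatim config):

location / {
	# Hand anything that isn't a real file or directory to WordPress.
	if (!-e $request_filename) {
		rewrite ^(.*)$ /index.php?q=$1 last;
	}
}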

This makes things work as they should.

Update:

Use of “if” has been suggested by the Nginx team to be avoided, so here is another solution that avoids it.
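The if-less version relies on try_files instead; a minimal form of the pattern:

location / {
	# Serve the file or directory if it exists, otherwise fall back to WordPress.
	try_files $uri $uri/ /index.php?q=$uri&$args;
}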

Removing everything other than .svn

After I updated some code, I downloaded it into my local svn working copy and planned to do an svn commit.  I thought I had set my FTP settings correctly to merge folders, but alas, I hadn’t.  What I ended up with was a broken svn working copy.

So I decided to pull a fresh checkout into another directory.  After that, I needed a way to extract just the .svn folders and their files.  The quickest method I could think of was to use my FTP client (Transmit by Panic) and merge the folders together properly this time.  I am sure there is a better way, but I didn’t have much time to waste searching.

To accomplish this task, I needed all other files removed.  So I wrote a function to do this:

-----
function remove_non_svn($dir)
{
	$files = scandir($dir);

	foreach ($files as $file)
	{
		// Skip the special entries, OS X cruft, and the .svn folders we are keeping.
		if ($file == '.' || $file == '..' || $file == '.DS_Store' || $file == '.svn')
			continue;

		// Recurse into (and then remove) any other directory.
		// Note: is_dir() needs the full path, not just the entry name.
		if (is_dir($dir . '/' . $file))
		{
			remove_non_svn($dir . '/' . $file);
			rmdir($dir . '/' . $file);
		}
		else
			unlink($dir . '/' . $file);
	}
}
-----

Then I popped that into a script, pointed it at the checkout folder, and let it go to work.  It quickly did the job and cleaned everything up.  I then used my FTP client to merge the folders into the broken working copy, and after that an svn status showed my modified files and everything was working.

I should note that doing this is dangerous to your svn working copy and could break things if not done right.  There may also be better methods to restore your working copy to working order; I just didn’t have much time on my hands to search for them.

MySQL queries using offsets without limits

While working on a project, I needed a script to loop through a table and process some commands.  However, due to the size of the table, this would surely take longer than the default 30-second execution timeout found in most configurations.  I didn’t want to impose any limits; I wanted my script to detect when it was nearing the timeout and stop, otherwise keep processing.  So a standard LIMIT in MySQL wouldn’t do it.

Much to my surprise, MySQL doesn’t offer a way to do just an OFFSET.  You either use OFFSET together with LIMIT or not at all.  This was really annoying, as I thought I would have to go back to limiting the query size.

Well, then I realized this could be solved another way.  I added a column and populated it with incrementing numbers, then told my script to ORDER BY that id column ascending.  With that in place, I simply added a WHERE clause telling the query to skip everything at or below a certain id.  That id comes from a variable passed in by the user and cleaned up (safety first!).  After processing each batch of commands, the script updates the variable, and when it is time to pause so the script doesn’t time out, the variable is sent along with the forwarding URL.  This lets the script pick up where it left off when it starts back up.
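In query form the idea is just this (table and column names made up for illustration):

SELECT *
FROM queue_items
WHERE id > 1500	-- the last id handled before the previous pause
ORDER BY id ASC;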

It seems like a very simple workaround, although without the id column it wouldn’t have worked.

Automating modification packaging

Packaging mods is not the most fun part of building any mod, so why do it manually?  I run Mac OS X, which means I have a terminal and can run commands directly to accomplish the packaging process.  I just needed to build a script.  Easy to do, and now it’s done, so I will just walk through the script.

// Da mods location!
$dir = '/home/smf/Mods/';

// Disallowed stuff.
$disallowed_files = array('.', '..', '.DS_Store');

// Our tar binary executable.
$tar_bin = '/home/software/gnutar/bin/tar';

These are the settings.  The first tells the script where my mods are located; the path structure beneath it matches what I have in SVN for my mods, so you can picture what I have set up.  The next is an array of disallowed files to ignore when reading directories.  The final one is the full path to the tar binary.

I installed a custom tar binary since the built-in OS X tar adds resource forks, and I did not want to break anything by replacing the built-in tar with my own (I doubt it would, but I didn’t feel like finding out months later and having to fix it).

// No more changes!!!
Warp_header();

// Package them?
if (isset($_POST['package']))
	doPacking();

listMods();
Warp_footer();

This needs no real explanation.  It is my header, the packaging code, the mod list, and the footer (to properly close all HTML tags 😉).

// List the mods!
function listMods()
{
	global $dir, $disallowed_files;

	// Get the mods.
	$mods = scandir($dir);

This is the start of my mod listing, where I pull in the directory and disallowed files as globals, then perform a scandir to get a listing of all my mods.  The next section of code is HTML, so I will skip it since it isn’t important.

	$modOut = array();
	foreach ($mods as $mod)
	{
		if (in_array($mod, $disallowed_files))
			continue;

		$xmlData = simplexml_load_file($dir . '/' . $mod . '/package-info.xml');
		$modOut[strtolower($mod)] = $xmlData->name;
	}
	ksort($modOut);

Simply put, this code prepares the output by using SimpleXML to create an object from each mod’s XML data, which I can then use to get the mod’s name (much easier than reading the file and pulling it out with a regex).  Finally, I sort the array by key.  Again, more HTML outputs this data; I simply used checkboxes.

function DoPacking()
{
	global $dir, $disallowed_files, $tar_bin;

	echo '
	<div>
		Packing...<br />';

This function does the actual work.  Here I pull in the directory, the disallowed files, and the tar binary as globals.

	$force = isset($_REQUEST['force']) ? true : false;

This just lets me force a mod to be packaged even if a package already exists for that version.  I didn’t need anything complicated, as it is rarely used.

	// This just finds what mods we want to package.
	$allowed_mods = array();
	if (isset($_REQUEST['mods']))
		foreach ($_REQUEST['mods'] as $in)
			$allowed_mods[] = trim($in);

This loops over the request to collect all the mods I want to package.  If this were a public script, I would need to validate the input against the list of mods that actually exist; since it is only used internally, I didn’t.  Now we get to the actual work.

	// Get em!
	$mods = scandir($dir);
	foreach ($mods as $mod)
	{
		if (in_array($mod, $disallowed_files))
			continue;

		if (!empty($allowed_mods) && !in_array(strtolower($mod), $allowed_mods))
			continue;

We start by scanning the directory again and skipping the entries we don’t want; this time we also skip any mods that weren’t selected for packaging.

		// Files in this folder.
		$files = scandir($dir . '/' . $mod);

		foreach ($files as $key => $file)
			if (in_array($file, array_merge($disallowed_files, array('images', 'releases'))))
				unset($files[$key]);

This just puts all files inside each mod folder into an array and removes the files/folders we do not want to package.  I use the same structure for all my mods, so I don’t have to worry about individual cases.

		// Figure out our version, the first match is our keeper!
		preg_match('~version\s+([\d\.]+)(^\S+)?~i', file_get_contents($dir . '/' . $mod . '/Readme.txt'), $matches);

I don’t usually update the version in my .xml files, only my readme, so I pull the latest version from the readme file.  It is used to update my version info in multiple places.  The next part is more HTML, so I am skipping it again; it basically checks for existing or missing release versions and lets me know.

		// Update all version information.
		foreach ($files as $file)
		{
			if (substr($file, -4) != '.xml')
				continue;

			$new_contents = preg_replace('~<version>([^<]+)</version>~i', '<version>' . $matches[1] . '</version>', file_get_contents($dir . '/' . $mod . '/' . $file));

			// Null is ugly!
			if (!is_null($new_contents) && !is_array($new_contents))
				file_put_contents($dir . '/' . $mod . '/' . $file, $new_contents);
		}

Now I loop through all the files, looking for the .xml ones, as these have a version tag.  Once located, I update them with the new version, making sure nothing went wrong before writing the file back.

		// Change our directory.
		chdir($dir . '/' . $mod);

		// Tar it!
		// ZIP: zip -0XT ../path_name.zip ./* -x .svn
		exec($tar_bin . ' -czf releases/' . $mod . '_v' . $matches[1] . '.tgz ' . implode(' ', $files));

Now for the actual fun stuff.  Before packaging the mod, I change into its directory; this prevents a chain of parent folders from appearing when the mod is unpacked.  Finally, I run the command to package the mod.  It packages into the releases folder, named after the mod and its version, and explicitly names every file I want included, thus avoiding the disallowed files.

That is all the actual work to handle the packaging.  I haven’t tried it yet, but I added code that should theoretically allow this script to work from the CLI.

// Not used yet, but can handle cli stuff.
function handle_cli()
{
	if (in_array('force', $_SERVER['argv']))
		$_REQUEST['force'] = true;

	foreach ($_SERVER['argv'] as $in)
	{
		if (in_array($in, array(basename(__FILE__), '--', 'force')))
			continue;

		$_REQUEST['mods'][] = trim($in);
	}
}

Someday I may actually test that code, but oh well for now.  The final bit is more HTML for the header and footer.  So that is all the code I need.

Download: modpacking.php (Right click and save file)

SimpleDesk Download Manager

For the SimpleDesk website, I made a very easy-to-use and very sleek download manager, complete with branch, version, file, and mirror management.  Simply put, it is very powerful and flexible.  While I didn’t build it in, I could easily expand the script to manage multiple pieces of software as well.