Zabbix 3.0 on Ubuntu 16.04 with Percona

After upgrading to Ubuntu 16.04, I couldn’t get Zabbix to run; it failed on startup complaining about a missing MySQL client shared library.

To fix this I needed to symlink the Percona client library to the MySQL client library that Zabbix was expecting.
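
Something along these lines does the trick (the library paths and version number here are assumptions; check what ldd reports as missing for zabbix_server on your system):

  # Hypothetical paths/version; confirm the missing library with:
  #   ldd $(which zabbix_server)
  sudo ln -s /usr/lib/x86_64-linux-gnu/libperconaserverclient.so.18 \
             /usr/lib/x86_64-linux-gnu/libmysqlclient.so.18
  sudo ldconfig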


SMF 2.0 with PHP 7

SMF does not officially support running version 2.0.x with PHP 7.0.  This is because PHP 7 removed the old mysql extension in favor of the more secure MySQLi extension.  To get around this, if you have root access to your server, you could manually build the old mysql library functions back in as compatibility functions.  I’m releasing this as a proof of concept that it works.  I highly suggest migrating all code to MySQLi functions rather than relying on this, but it provides a simple path that lets you upgrade PHP while buying enough time to migrate your code base over.

In $sourcedir/Subs-Db-mysql.php, find:

Add before this:

Now find:

Replace with:

In $sourcedir/Subs-Compat.php, at the end before the closing ?>, add:
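
As a rough illustration of the compatibility-function approach (this is not SMF’s actual patch; the function coverage and the global link handling are my own assumptions), the code added to Subs-Compat.php looks something like:

  // Illustrative shim only: maps a few removed mysql_* calls onto MySQLi.
  if (!function_exists('mysql_connect'))
  {
      function mysql_connect($host, $user, $pass)
      {
          $GLOBALS['__mysqli_link'] = mysqli_connect($host, $user, $pass);
          return $GLOBALS['__mysqli_link'];
      }

      function mysql_select_db($db, $link = null)
      {
          return mysqli_select_db($link !== null ? $link : $GLOBALS['__mysqli_link'], $db);
      }

      function mysql_query($query, $link = null)
      {
          return mysqli_query($link !== null ? $link : $GLOBALS['__mysqli_link'], $query);
      }

      function mysql_fetch_assoc($result)
      {
          return mysqli_fetch_assoc($result);
      }

      function mysql_num_rows($result)
      {
          return mysqli_num_rows($result);
      }

      function mysql_error($link = null)
      {
          return mysqli_error($link !== null ? $link : $GLOBALS['__mysqli_link']);
      }
  }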


This should allow SMF to run just fine.  These are all the functions SMF calls from its database abstraction layer, and thus it should work.  Anyone using mysql functions outside of SMF’s database abstraction layer may need to add additional functions.


Fixing stuck Exchange delegated access

I recently ran into an issue where an admin account had stuck delegated access to user accounts.  Even after removing the access, the admin would still see the user’s mailbox showing in Outlook.  Force-updating the Offline Address Book and other tricks didn’t fix it.  PowerShell showed that the admin account was still there with deny permissions.
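
A sketch of the check (the identity and account names here are placeholders, not the originals):

  # Hypothetical names; inspect the stuck entry's flags.
  Get-MailboxPermission -Identity "user@domain.tld" |
      Where-Object { $_.User -like "*delegatedadmin*" } |
      Format-List User, AccessRights, Deny, IsInherited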

As seen, the permissions are inherited and not explicitly applied.  The below example is what it looks like while the delegated admin has access: the entry is not inherited and not denied.

When you remove the delegated admin’s permissions, the entry becomes not inherited and is now denied.

So using a proper Where-Object filter, I can narrow things down to just the bad access entries.
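
Something like this (the account name is a placeholder):

  # Find the explicit deny entries left behind for the admin account.
  Get-Mailbox -ResultSize Unlimited | Get-MailboxPermission |
      Where-Object { $_.User -like "*delegatedadmin*" -and $_.Deny -eq $true -and $_.IsInherited -eq $false }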

Applying Remove-MailboxPermission to the end of that query (as sketched below) properly removes them, and after the usual propagation time the entry removes itself from Outlook.  But I needed to trace this down to figure out where it was coming from, so I checked the mailbox permissions to see if it was applied there.
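
The removal pipe looks roughly like this (same placeholder account name as above):

  Get-Mailbox -ResultSize Unlimited | Get-MailboxPermission |
      Where-Object { $_.User -like "*delegatedadmin*" -and $_.Deny -eq $true -and $_.IsInherited -eq $false } |
      Remove-MailboxPermission -Confirm:$false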

So digging even more, I checked Active Directory Users and Computers under the “Microsoft Exchange System Objects” OU and looked at all the SystemMailbox objects’ permissions.  None of them had any weird permissions.  It also isn’t easy to tell which object belongs to which mailbox; loading up ADSI Edit, connecting to the “Default naming context” and opening “Microsoft Exchange System Objects” gets you back to the SystemMailbox objects, where you can open the properties to find the name of each mailbox object.

So, next I connected to ADSI Edit again, this time to the “Configuration” context, and went to “Services” > “Microsoft Exchange”.  I checked permissions here; the admin account didn’t exist.  Digging down one more level to “COS” (the name of the Exchange organization), I found the admin account had permissions applied.  Below “COS” are the objects for the Exchange mailbox databases.
When I tried to remove the admin permissions, the GUI gave me a dialog stating it would change 100+ permissions on all child objects.  That was a scary message, as the last thing I wanted to do was break all the child objects or change permissions on them.

After reading around I found the dsacls command (Technet article) and a way to remove the permissions.
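
A sketch of the removal (the DN and account are placeholders for my environment’s values):

  # /R removes all ACEs for the given user from the object.
  dsacls "CN=COS,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=domain,DC=tld" /R "DOMAIN\delegatedadmin"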

Running the command didn’t seem to change any child permissions from what I could see, and it removed the admin user’s inherited permissions.  Now removing or adding delegated access no longer leaves the user lingering in Outlook.

The only explanation I can come up with is that the Exchange Control Panel fails to properly remove delegated access because the user already has some other form of access to the mailbox.  Because it gets confused, I think it changes the entry to a deny rather than deleting it.


WordPress won’t update

I’ve had issues with WordPress failing to update.  After searching forever and updating manually for months, I found the problem: check the /wp-content/upgrade folder for any files.  A previously failed upgrade to core, themes or plugins will leave files there and silently cause WordPress updates to fail without any notice.  Furthermore, WordPress makes no attempt to clean up the folder.
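
A quick check and cleanup along these lines (the web root path is an assumption; adjust it for your install):

  # List leftovers from failed upgrades, then clear them out.
  ls -la /var/www/html/wp-content/upgrade/
  rm -rf /var/www/html/wp-content/upgrade/*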


LetsEncrypt Nginx SSL Sites

I got my hands on the LetsEncrypt beta and am already testing it out.  In case it wasn’t obvious: if you have sites that are SSL only (I have a few subdomains which do not operate on http/port 80), you will need to set them up so the challenge can be served.  Here is a quick example of how I adjusted my Nginx to support only the LetsEncrypt script over plain HTTP, while making sure everyone else is https only.
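
A sketch along the lines of what I used (the domain and webroot path are placeholders):

  # Port 80 only answers the ACME challenge; everything else goes to HTTPS.
  server {
      listen 80;
      server_name example.com;

      location /.well-known/acme-challenge/ {
          root /var/www/letsencrypt;
      }

      location / {
          return 301 https://$host$request_uri;
      }
  }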

And if it helps anyone, here is the relevant portion of the server setup with SSL.
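
An illustrative block rather than my exact config; the certificate paths assume the default LetsEncrypt layout for example.com:

  server {
      listen 443 ssl;
      listen [::]:443 ssl;
      server_name example.com;

      ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
      ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

      # ... the rest of the site configuration ...
  }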


Check your listen directives.  I’ve sometimes seen them cause things not to work, and other times you need them in order for it to work (with IPv6).  Do a configtest (nginx -t) to verify your changes before restarting nginx.


Fixing Warnings opening files in Explorer from SharePoint

While opening files in Explorer from a connected SharePoint document library, you may receive warnings that the action is unsafe.  The fix for network drives is to add them to the Intranet sites in Internet Settings.  It isn’t clear how to do this for SharePoint, as using SSL gives you an address such as \\sharepoint@SSL\davwwwroot\.  Adding this site through the dialog just results in it storing SSL as the address rather than the full address you want (i.e. sharepoint@SSL).  The fix is simple: just use the registry.
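
A sketch of the entry (the host name is a placeholder, and the exact key layout is my understanding of the ZoneMap; dword 1 maps the file scheme to the Local intranet zone):

  Windows Registry Editor Version 5.00

  ; Map the WebDAV host, including the @SSL suffix, into the intranet zone.
  [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\sharepoint@SSL]
  "file"=dword:00000001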

I’ve directly added sites to users’ intranet sites with the registry before.  It is how I manage things at my company: users can still modify their own trusted sites, but I can inject the trusted/intranet sites they need for things to work.


Raspberry Pi NOOBS install without DHCP

Having gotten a Raspberry Pi and never having set one up from scratch, I went ahead with the NOOBS install.  However, since my Cisco switches have Spanning Tree Protocol enabled, it takes a while before DHCP addresses are handed out; long enough that the NOOBS install would time out and give up without letting me continue.  The NOOBS installer doesn’t have a reboot or shutdown function, and power cycling the device causes the port to go offline on the switch, which makes the switch recheck it for loopbacks when it comes back up.  Furthermore, it appears that if I edited /etc/network/interfaces to give it a static IP address and rebooted, the changes were lost.

To get around this, we simply need to restart the recovery GUI.

  1. I booted up my Raspberry Pi and let the NOOBS installer fail
  2. Press ALT+F2 to switch to a virtual console
  3. Log in as root / raspberry
  4. Run:  killall recovery
  5. Edit our interfaces file:  vi /etc/network/interfaces
  6. My interfaces file looked like the example shown after this list; yours may vary
  7. Then we edit our resolver:  vi /etc/resolv.conf
  8. My resolv.conf just contained a single line:  nameserver 8.8.8.8
  9. I then restarted my network interface:  ifdown eth0 && ifup eth0
  10. Did a quick ping test to verify everything was working:  ping google.com
  11. Finally I restarted the recovery console:  /usr/bin/recovery qws
  12. At this point the recovery console started up and after a few minutes offered me my download options.
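
For reference, a static interfaces file along these lines (the addresses are examples; substitute your own network):

  auto eth0
  iface eth0 inet static
      address 192.168.1.50
      netmask 255.255.255.0
      gateway 192.168.1.1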


Although I didn’t need to, if you had a working DHCP server you could just run  ifdown eth0 && ifup eth0  until you get an IP address reported in ifconfig, and then restart the recovery console.


SlickGrid Autocomplete

I’ve been working with SlickGrid on a project recently and it has been fun to work with.  It isn’t the best-documented setup, but after a while I figured out how to work with it on most levels, giving me exactly what I want.  One of the things my users asked for is an autocomplete function.  I found a Stackoverflow question giving me the hint I needed to make this work.  The only problem was that the autocomplete example provided used a static list; I wanted it to build the list from that column’s existing values, just like you get in Excel.  So here is the init function where I set this up.
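
Roughly along these lines; this is an illustrative sketch rather than my original code, and it assumes jQuery UI’s autocomplete plus SlickGrid’s stock Text editor:

  // Editor that wraps SlickGrid's Text editor and adds jQuery UI
  // autocomplete sourced from the column's existing values.
  function AutocompleteEditor(args) {
      // Let the stock editor build the input and handle the editor plumbing.
      Slick.Editors.Text.call(this, args);

      // Collect the distinct existing values for this column.
      var field = args.column.field;
      var values = [];
      for (var i = 0; i < args.grid.getDataLength(); i++) {
          var v = args.grid.getDataItem(i)[field];
          if (v != null && values.indexOf(v) === -1) values.push(v);
      }

      // Attach the autocomplete to the editor's input element.
      $(args.container).find("input").autocomplete({ source: values });
  }

Point the column definition’s editor at AutocompleteEditor and each edit session offers that column’s existing values, Excel-style.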


Recovery of LUNs as VHDs

I recently worked on a SAN failure that resulted in a perfect storm of bad backups, broken offsite replication and disabled notifications.  The data was sent off to professionals to recover.  What was returned to me was a VHD file for each of the LUNs this SAN had, about 20 LUNs in all, with some Windows and Linux VMs attaching directly into the SAN for a data drive.  I assume this was done because VMware ESXi 4.1 appears to have had a 2 TB LUN limitation, which explains why we had two massive 1.5 TB LUNs attached to the hosts.  I won’t detail the complete 16 hours of trial and error I spent getting this working, which started on a Windows box, then my Mac, and finally the Ubuntu box I settled on.

During my initial testing of the VHDs, I found that Windows 7 wouldn’t attach any of them, saying they were corrupted.  I then tried to use Microsoft’s iSCSI Target software to push out the VHDs as iSCSI drives again to the hosts, but this also failed.  I then started copying the data off the 5 TB recovery drive I had in my hands to other external drives via USB 3, so I would have copies; I didn’t want to risk modifying or damaging any of the files and waiting for another copy from the recovery specialists.  I also found out during this time that the recovery specialists don’t handle this portion of the job, but they were able to verify and tell me which VHDs were good and bad before they sent me the data.

I then moved over to my Mac, as Windows just didn’t have the tools I needed.  I first tried to get the Mac to open the VHDs, with little result from the GUI.  Dropping into Terminal.app, I issued a “head -n20” on the VHDs; when the data returned on one, I saw a good old “NTLDR is missing” in the output along with some other lines.  I determined, without any research on the matter, that this was an MBR-formatted Windows drive.

After some research I decided I needed to download MacFUSE and install it.  I attempted to mount the VHD again, but this failed.  More research led me to the hdiutil command, but I had little luck mounting the drive with it.  While discussing this with another tech, the idea came up to change the file extension.  Normally I don’t put much faith in this, as judging what a file contains based only on its extension seems ridiculous.  Well, I changed that .vhd to a .img and attempted to mount it via the GUI, and to my shock I had the Windows drive showing in my sidebar along with plenty of Windows data!  (A fixed-size VHD is essentially a raw disk image with a footer appended, which is presumably why the rename was enough.)

I attempted to repeat this result with a VHD containing a Linux EXT3 volume, but while it would attach, it wouldn’t read properly.  I assumed it was because my Mac couldn’t read EXT2/3 volumes, so I downloaded the EXT2 FUSE module and installed it.  No difference.  I was unsure at the time why I couldn’t see what was inside the mounted volume, and I decided it was time to use a Linux VM for this.  FUSE works on Linux (and is included by default in Ubuntu 14.04), so no download was needed there.  Ubuntu was my go-to choice, as I figured it had lots of public support and would have the tools.

However, while my Mac would instantly mount the drive and work, Linux just said no to doing this at all.  After more research I found I could run “fdisk -l” on the .VHD and see the drive’s partition data.  This was great.  I also saw the partition had a start offset.  While researching other methods to attach this, I found a guide on using xmount, in which they had to know the start sector to set an offset byte count before it would mount the drive.  Taking that hunch, I did the same thing, calculated the offset, and was able to mount both the Windows and Linux VHDs.  I had data at this point.
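
The calculation is simply the start sector times the sector size; a sketch with assumed numbers:

  # Read the partition's start sector from fdisk, then compute the offset:
  fdisk -l disk.vhd
  # offset in bytes = start sector * sector size (typically 512)
  # e.g. 2048 * 512 = 1048576
  mount -o loop,offset=1048576 disk.vhd /mnt/recovered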

However, I still didn’t have my big 1.5 TB VHDs/LUNs yet.  I knew these would be a problem, as they were attached to the VMware servers themselves.  Sure enough, when checking them out I found them formatted with the VMFS file system.  More research quickly turned up vmfs-tools for mounting VMFS volumes on Ubuntu and Mac.  This ended in failure at first, as I tried installing VirtualBox and then compiling some code to mount it on my Mac, which failed.  All the guides instructing a simple “apt-get install” for Ubuntu didn’t work either; “apt-cache search” returned no results.  I ended up locating the .deb file on the Ubuntu Packages page, downloaded it manually, and installed it with “dpkg -i”.  Due to all my earlier trial and error on Ubuntu, this broke the package manager afterwards, because I had to force the install past a dependency error I had created installing VirtualBox.

After getting vmfs-fuse installed, I attempted to mount the VHD, but it failed with an error.  Looking around, I soon realized my error was because of that 128 byte offset on the VHD.  This was proving to be my enemy through the entire process, and I wasn’t having any luck getting at the data.  I quickly went another route and set up iscsitarget on the Ubuntu box, attaching the LUNs directly to the VHD files.  I had hoped that by doing this the hosts would see them as valid iSCSI resources and see the drives.  This didn’t work.  I even set one up with one of the LUNs/VHDs that had Windows data and attempted to present it to a Windows server via iSCSI Target services; it would connect but wouldn’t recognize the drive.

I don’t know where in my research it dawned on me, but I decided what I wanted was to create a loopback of the .VHD, attach it to a /dev/ resource, and hopefully see the partition tables.  Each VHD/LUN was basically a drive; if I could attach it to a /dev/ resource, each partition would hopefully get its own /dev entry and I would be free to use the mount command to get my data.  Well, this just wasn’t working.  While “mount -o loop” could mount the drive to a /mnt path, it wasn’t doing the job of exposing it as a device.

Finally I stumbled onto a post about losetup.  This looked promising, but while it would attach the VHD/LUN to a /dev resource, it wouldn’t attach each of the partitions.  I attempted to use “sfdisk -R” to force it to, without any luck, and research wasn’t turning up any reason why.  I then realized that losetup has a -o flag for an offset.  After calculating my offset again, I removed the original loop device and ran losetup again with the offset.  After that I had a /dev/loop1 with a VMware file system on it.  I then issued my vmfs-fuse command to mount it to a /mnt/ path and I could see data!  That was it: I had folders of VMDKs and other files to restore.
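
Put together, the working pair of commands looked something like this (the paths are placeholders, and the offset comes from the fdisk calculation above):

  # Attach the VHD at the VMFS partition's byte offset, then mount it.
  losetup -o "$OFFSET" /dev/loop1 /recovery/lun01.vhd
  vmfs-fuse /dev/loop1 /mnt/temp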

Now it was time to repeat this setup on other Ubuntu systems so we could import data faster, as the USB 3.0 interface was the bottleneck, not the network or the new SAN with RAID 50.  The basic summary of the working steps is:

  1. Installed Ubuntu
  2. Installed vmfs-tools via the .deb download
  3. Ran fdisk to get the start sector and calculated the offset in bytes
  4. Ran losetup with -o <bytes>, which attached the VHD to a /dev/loop
  5. Ran vmfs-fuse to mount the /dev/loop to a /mnt/temp drive
  6. Used scp to copy the data up


iRedMail on Nginx

This is my experiment to get iRedMail working with Nginx.  In the end I got everything to work other than awstats, although with some caveats.  I don’t like awstats very much and it seemed quite troublesome to set up.  There is a mode that lets awstats just generate static files, which seems to me a better solution.  I tested only on Debian 6.0.7, although it should work in Ubuntu just fine.  It was also limited testing on brand-new VMs.

So I am starting out with a brand new Debian 6.0.7 system.  First things first, we set up our hosts and hostname files; for my test environment I used mail.debian.test.  Then I grabbed the latest iRedMail, which happened to be 0.8.3 at the time of writing, via wget in an SSH session.  I had to install bzip2 to “tar -xf” it, so a quick “apt-get install bzip2” resolved that.  I then ran the iRedMail installer and let it complete.
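
Roughly these steps (the tarball name matches the 0.8.3 release naming; grab the current one from iredmail.org):

  apt-get install bzip2
  tar -xf iRedMail-0.8.3.tar.bz2
  cd iRedMail-0.8.3
  bash iRedMail.sh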

Now to stop apache services for good:
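
A plausible reconstruction with Debian 6’s sysvinit commands:

  /etc/init.d/apache2 stop
  update-rc.d -f apache2 remove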

Optionally we can run “apt-get remove apache2” to get rid of apache binaries as well.

Now I needed Nginx and php5-fpm (as I prefer FPM).  This takes a little work, as Debian 6.0.7 doesn’t have php5-fpm in its default sources; this would have been easier on Ubuntu.

What I did here was first install nginx and curl, then add dotdeb to the sources list, add its key, and update my sources.  Finally I was able to install fpm.
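
In sketch form (the dotdeb repository line and key URL are as I recall them from that era; verify before use):

  apt-get install nginx curl
  echo "deb http://packages.dotdeb.org squeeze all" >> /etc/apt/sources.list
  curl http://www.dotdeb.org/dotdeb.gpg | apt-key add -
  apt-get update
  apt-get install php5-fpm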
Now that the applications are in place, I need to write their configuration files. Here is the list of files I will be using:
Nginx’s iRedMail site configuration file
php5’s FPM iRedMail web pool file
iRedMail init.d file to launch the iredadmin Python webservice

During additional testing I uploaded the files and just used curl to put them into place.  The init.d script is borrowed from the web (exactly where, I can’t remember, as I used bits and pieces from multiple places).  However, I don’t feel the need to write out or explain in great detail all of the changes.

You will need to modify the nginx file (/etc/nginx/sites-available/iRedMail) to contain the correct domain.  You will also need an additional DNS entry for iredadmin.domain.tld (in my case iredadmin.debian.test).  If this is your only/first SSL site, or you prefer it to be the default, you will need to adjust the SSL section; I added comments to explain that.  Nginx expects a default website, and if none exists it won’t start.

As for the additional domain, I tried my best, but it seems there is no way to make the Perl script aware that it lives in a subdirectory and have it pass the correct URLs to its output templates.  Although the template has a homepath variable, this seems to be set from ctx in Perl, which from my limited knowledge I don’t believe is changeable via environment/server variables.  I also didn’t see a way to change it in any setting.  Hopefully the iRedMail developers can make this change in future versions.
The good news is the iRedMail developers had the foresight to set up the script to run very smoothly as a standalone Python web server via a CGI socket, so no additional work is needed to make that run.  I had hoped to use the iredapd service to launch it, but that appears to crash and fail horribly, so I set up a second instance to do the job.

Now just a little more work to activate the new service, link the file as a live nginx site and restart some services.
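
A plausible reconstruction of those steps (file names match the configuration files listed above):

  chmod +x /etc/init.d/iredadmin
  update-rc.d iredadmin defaults
  /etc/init.d/iredadmin start
  ln -s /etc/nginx/sites-available/iRedMail /etc/nginx/sites-enabled/iRedMail
  /etc/init.d/php5-fpm restart
  /etc/init.d/nginx restart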

That’s it.  Now when I hit mail.debian.test I get the webmail portal; when I access iredadmin.debian.test I get the admin portal.  phpMyAdmin is also set up at mail.debian.test/phpmyadmin.

Setting this up on Ubuntu should be easier, as 12.04 has php5-fpm in its packages, so there is no need to add the dotdeb sources.  Everything else would be the same.

Nginx has always been flaky for me with IPv6 services.  I intended to include them, but it just wasn’t playing nicely enough.  Sometimes just adding [::]:80 to a listen will make it listen; other times I have to specify it twice (and it doesn’t complain).  Then again, if I try [::]:443, nginx may not want to start at all, even though it accepted [::]:80 just fine.  Because of how picky it can be, I opted for IPv4-only support here.


Highslide for Wordpress Plugin