Recovery of LUNs as VHDs

I recently worked on a SAN failure that was a perfect storm of bad backups, broken offsite replication and disabled notifications. The drives were sent off to professionals for recovery, and what came back to me was a VHD file for each of the LUNs on the SAN. This SAN had about 20 LUNs, with some Windows and Linux VMs attaching directly to it for a data drive. I assume this was done because VMware ESXi 4.1 appears to have had a 2 TB LUN limitation, which explains why we had two massive 1.5 TB LUNs attached to the hosts. I won't detail the complete 16 hours of trial and error I spent getting this working, which started on a Windows box, moved to my Mac, and finally ended on the Ubuntu box I settled on.

During my initial testing of the VHDs, I found that Windows 7 wouldn't attach any of them, claiming they were corrupted. I then tried using Microsoft's iSCSI Target software to present the VHDs to the hosts as iSCSI drives again, but this also failed. Meanwhile, I started copying the data off the 5 TB recovery drive I had been handed to other external drives via USB 3, so I would have spare copies; I didn't want to risk modifying or damaging any of the files and then have to wait for another copy from the recovery specialists. I also found out during this time that the recovery specialists don't handle this portion of the job, though they were able to verify which VHDs were good and which were bad before they sent me the data.

I then moved over to my Mac, as Windows just wasn't proving powerful enough for my needs. I first tried to get the Mac to open the VHDs from the GUI, with little result. Dropping into Terminal.app, I issued a "head -n20" command against the VHDs, and in the output from one of them I saw a good old "NTLDR is missing" along with some other strings. From that, without any further research on the matter, I determined this was an MBR-formatted Windows drive.
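
If you want to poke at a recovered image the same way, something like this works; the file name here is hypothetical, and dumping the first sector in hex is a bit more forgiving than plain head on binary data:

# Peek at the first sector of a recovered image (file name assumed)
head -c 512 lun01.vhd | xxd | head -n 20
# Or let file(1) take a guess at the format
file lun01.vhd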

After some research I decided I needed to download MacFUSE, so I installed it and attempted to mount the VHD again, but this failed. More research led me to the hdiutil command, but I had little luck mounting the drive with it either. While discussing this with another tech, the idea came up to change the file extension. Normally I don't put much faith in that, as identifying what a file contains based only on its extension seems ridiculous. Well, I changed that .vhd to a .img and attempted to mount it via the GUI, and to my shock, I had the Windows drive showing in my sidebar along with plenty of Windows data!
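
In terminal form the trick amounts to something like this (file names assumed). A fixed-size VHD is just a raw image followed by a 512-byte footer, which is presumably why the rename is enough for macOS to treat it as a disk image:

# Work on a copy so the original recovery file stays untouched
cp lun01.vhd lun01.img
# Attach it; the volume should appear in Finder if macOS can read the filesystem
hdiutil attach lun01.img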

I attempted to repeat this result with a VHD containing a Linux ext3 volume, but while it would attach, it wouldn't read properly. I assumed it was because my Mac couldn't read ext2/3 volumes, so I downloaded and installed the ext2 FUSE module. No difference. I was unsure at the time why I couldn't see what was inside the mounted volume, and I decided it was time to use a Linux VM for this. FUSE works on Linux (and is included by default in Ubuntu 14.04), so no download was needed there. Ubuntu would be my go-to choice for this, as I figured it had lots of public support and would have the tools.

However, while my Mac would instantly mount the Windows drive and work, Linux just said no to doing this at all. After more research I found I could run "fdisk -l" against the .vhd and see the drive's partition data, which was great. I also saw that the partition had a start offset. While researching other methods to attach this, I found this guide on using xmount, in which the authors had to know the partition start and set an offset in bytes before the drive would mount. Taking that hunch, I did the same thing: I calculated the offset and was able to mount both the Windows and Linux VHDs. I had data at this point.

However, I still didn't have my big 1.5 TB VHDs/LUNs. I knew these would be a problem, as they had been attached to the VMware servers themselves, and sure enough, when I checked them out I found them formatted with the VMFS filesystem. More research quickly turned up VMFS tools for mounting VMFS volumes on Ubuntu and Mac. This ended in failure at first, as I tried installing VirtualBox and then compiling some code that was supposed to let me mount VMFS on my Mac; that failed too. All the guides that instructed me to do a simple "apt-get install" on Ubuntu didn't work, and "apt-cache search" returned no results. I ended up locating the .deb file on the Ubuntu Packages page, downloading it manually, and installing it with "dpkg -i". Due to all my earlier trial and error on Ubuntu, this broke the package manager afterwards, because I forced the install past a dependency error I had created while installing VirtualBox.

After getting vmfs-fuse installed, I attempted to mount the VHD, but it would only return an error. Looking around, I soon realized the error was because of that 128-byte offset on the VHD; the offset was proving to be my enemy through this entire process, and I wasn't having any luck getting at the data. I decided to quickly try another route: I set up iscsitarget on the Ubuntu box and pointed LUNs directly at the VHD files, hoping the hosts would see them as valid iSCSI resources and recognize the drives. This didn't work either. I even set one up with one of the LUNs/VHDs that held Windows data and attempted to present it to a Windows server via the iSCSI target service; it would connect but wouldn't recognize the drive.

I don't know where in my research it dawned on me, but I decided what I wanted to do was create a loopback of the .vhd and attach it to a /dev/ resource, in hopes of seeing the partition tables. Each VHD/LUN was basically a whole drive, and if I could attach it to a /dev/ resource, each partition would hopefully get its own /dev entry and I would be free to use the mount command to get at my data. Well, this just wasn't working. While "mount -o loop" could mount a drive under /mnt, it wasn't doing the job of exposing the device as a resource.

Finally I stumbled onto a post about losetup, which looked promising. But while it would attach the VHD/LUN to a /dev resource, it wouldn't attach each of the partitions. I attempted to use "sfdisk -R" to force it to re-read the partition table, without any luck, and research wasn't turning up any results as to why. I then realized that losetup has a -o flag for an offset. After calculating my offset again, I removed the original loop device and reran losetup with the offset. After that I had a /dev/loop1 with a VMware filesystem on it. I then issued my vmfs-fuse command to mount it under /mnt, and I could see data! That was it: I had folders of vmdks and other files to restore.

So now it was time to repeat this setup on other Ubuntu systems so we could import data faster, since the USB 3.0 interface would be the bottleneck, not the network or the new SAN with RAID 50. The basic summary of the working steps is below, with a rough sketch of the commands after the list.

  1. Install Ubuntu
  2. Install vmfs-tools via the dpkg download
  3. Run fdisk to find the start offset and calculate it in bytes
  4. Run losetup with -o <bytes>, which attaches the volume to a /dev/loop device
  5. Run vmfs-fuse to mount the /dev/loop device to a /mnt/temp drive
  6. scp the data up.
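
Here is a rough sketch of those steps as commands. Treat the file names, the offset and the destination host as assumptions; the offset in particular depends on what fdisk reports for your image:

# 2. Install vmfs-tools from the manually downloaded package (exact file name varies)
dpkg -i vmfs-tools_*.deb
# 3. Find the partition's start sector; offset = start sector * sector size (usually 512)
fdisk -l lun01.vhd
# e.g. a start sector of 128 gives 128 * 512 = 65536 bytes
# 4. Attach the volume to a loop device at that offset
losetup -o 65536 /dev/loop1 lun01.vhd
# 5. Mount the VMFS volume
mkdir -p /mnt/temp
vmfs-fuse /dev/loop1 /mnt/temp
# 6. Copy the recovered VM folders off
scp -r /mnt/temp/* user@newhost:/restore/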


iRedMail on Nginx

This is my experiment to get iRedMail working with Nginx. In the end I got everything working other than awstats, although with some caveats. I don't like awstats very much, and it seemed quite troublesome to set up; there is a mode that lets awstats just generate static files, which seems to me a better solution. I tested only on Debian 6.0.7, although it should work just fine on Ubuntu as well. Testing was also limited to brand new VMs.

So I am starting out with a brand new Debian 6.0.7 system. First things first, we set up the hosts and hostname files; for my test environment I used mail.debian.test. Then I grabbed the latest iRedMail, which happened to be 0.8.3 at the time of writing, via wget in an SSH session. I had to install bzip2 before I could "tar -xf" it, so a quick "apt-get install bzip2" resolved that. I then ran the iRedMail installer and let it complete.
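
The prep work looks roughly like this. The download URL is whatever iredmail.org lists for the release, and the installer script name is from memory, so treat both as assumptions:

echo "mail.debian.test" > /etc/hostname
# also make sure /etc/hosts maps 127.0.0.1 to mail.debian.test
apt-get install bzip2
tar -xf iRedMail-0.8.3.tar.bz2
cd iRedMail-0.8.3
bash iRedMail.sh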

Now to stop apache services for good:

update-rc.d -f apache2 remove
service apache2 stop

Optionally we can run "apt-get remove apache2" to get rid of the Apache binaries as well.

Now I needed Nginx and php5-fpm (as I prefer FPM). This takes a little work, as Debian 6.0.7 doesn't have php5-fpm in its default sources. This would have been easier on Ubuntu.

yes | apt-get install nginx curl
echo "" >> /etc/apt/sources.list
echo "# dotdeb packages" >> /etc/apt/sources.list
echo "deb http://packages.dotdeb.org stable all" >> /etc/apt/sources.list
echo "deb-src http://packages.dotdeb.org stable all" >> /etc/apt/sources.list
curl -0 http://www.dotdeb.org/dotdeb.gpg | apt-key add -
apt-get update
yes | apt-get install php5-fpm

What I did here is first install nginx and curl, then add dotdeb to the sources list, add its key and update my sources; finally I was able to install FPM. Now that the applications are in place, I need to put their configuration files in place. Here is the list of files I will be using:

  1. Nginx's iRedMail site configuration file
  2. php5-fpm's iRedMail web pool file
  3. An init.d file to launch the iredadmin Python web service

During additional testing I uploaded the files and just used curl to put them into place. The init.d script is borrowed from the web (exactly where, I can't remember, as I used bits and pieces from multiple places). However, I don't feel the need to write out or explain all of the changes in great detail.

curl -0 http://sleepycode.com/wordpress/wp-content/uploads/2013/03/iRedMail.nginx_.txt > /etc/nginx/sites-available/iRedMail
curl -0 http://sleepycode.com/wordpress/wp-content/uploads/2013/03/iRedMail.fpm_.txt > /etc/php5/fpm/pool.d/iRedMail.conf
curl -0 http://sleepycode.com/wordpress/wp-content/uploads/2013/03/iRedMail.initd_.txt > /etc/init.d/iredadmin

You will need to modify the nginx file (/etc/nginx/sites-available/iRedMail) to contain the correct domain. You will also need an additional DNS entry for iredadmin.domain.tld (in my case iredadmin.debian.test). If this is your only/first SSL site, or you prefer it to be the default, you will need to adjust the SSL section; I added comments in the file to explain that. Nginx expects a default website, and if none exists it won't start.
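
For orientation, the part of the site file you would touch looks something like this; the server name, certificate paths and web root here are assumptions, not a copy of my actual config:

server {
    listen 443 ssl default_server;                      # keep default_server only if this is your first/only SSL site
    server_name mail.debian.test;                       # change to your domain
    ssl_certificate /etc/ssl/certs/iRedMail.crt;        # assumed path
    ssl_certificate_key /etc/ssl/private/iRedMail.key;  # assumed path
    root /var/www/roundcubemail;                        # assumed web root
    index index.php;
}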

As for the additional domain, I tried my best, but it seems there is no way to make the iredadmin script aware that it's in a subdirectory and have it pass the correct URLs to its output templates. Although the template has a homepath variable, this seems to be set from ctx in the Python code, which from my limited knowledge I don't believe is changeable via environment/server variables, and I didn't see a way to change it in any setting either. Hopefully the iRedMail developers can change this in future versions. The good news is the iRedMail developers had the foresight to let the script run very smoothly as a standalone Python web server via a CGI socket, so no additional work is needed to make that run. I had hoped to use the iredapd service to launch it, but that appears to crash and fail horribly, so I set up a second init script to do it.

Now just a little more work to activate the new service, link the file as a live nginx site and restart some services.

chmod a+x /etc/init.d/iredadmin
ln -s /etc/nginx/sites-available/iRedMail /etc/nginx/sites-enabled/
service apache2 stop
service php5-fpm restart
service nginx restart
service iredadmin start

That's it. Now when I hit mail.debian.test I get the webmail portal, and when I access iredadmin.debian.test I get the admin portal. phpMyAdmin is also available at mail.debian.test/phpmyadmin.

Setting this up on Ubuntu should be easier, as 12.04 has php5-fpm in its packages, so there is no need to add the dotdeb sources. Everything else would be the same.

Nginx has always been flaky for me with IPv6. I intended to include IPv6 support here, but it just wasn't playing nicely. Sometimes just adding [::]:80 to a listen directive makes it listen; other times I have to specify it twice (and it doesn't complain). Then again, if I try [::]:443 nginx may refuse to start at all, even though it accepted [::]:80 just fine. Because of how picky it can be, I opted for IPv4-only support here.
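
For reference, a dual-stack listen pair looks like this; whether the ipv6only flag is needed depends on the nginx version and the OS's net.ipv6.bindv6only setting, which may be why the behavior looked so inconsistent to me:

server {
    listen 80;                     # IPv4
    listen [::]:80 ipv6only=on;    # IPv6; ipv6only stops this socket from also claiming the IPv4 side
    server_name mail.debian.test;  # assumed
}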


Dynamics CRM notes change ownership unexpectedly

I stumbled across a weird issue with Dynamics CRM 2011. I thought it was due to Rollup 12, as it only recently started happening, but it appears to be default behavior (whether I had noticed it before or not). The steps to reproduce it are very simple.

User A

  1. Create a case with some basics
  2. Add some notes
  3. Add the case to the queue

User B

  1. Access the queue
  2. Click Assign in the ribbon
  3. Assign the case to yourself
  4. Save and close

After a short while, the case notes that were once owned by User A are now owned by User B. That's not what I expect to happen; I expect the notes to stay with User A.

Well, it appears there is a simple workaround. Although it's not a change I wanted to make, it does get around the issue.

  1. Go to Settings -> Customizations -> Customize the System
  2. Navigate to Entities -> Case -> 1:N Relationships
  3. Open the Notes relationship
  4. Change the relationship type from Parental to Configurable Cascading
  5. Change the Assign behavior from Cascade All to Cascade None
  6. Save and close
  7. Publish all customizations (may not be needed, but just to be sure)


Hyperlinks in CRM 2011 Rollup 12 do not work in Safari

One of my co-workers, who uses Safari way more than I do, pointed out today that he couldn't click links in Safari on his MacBook Pro running Mountain Lion. I was able to reproduce this in my Safari as well. Starting my debugging, I noticed the links worked if I opened the developer console, which didn't help me much. I made sure the popup blocker was disabled, with no luck there either. Finally, thinking maybe an extension in Safari was causing it, I tried disabling all extensions. Again, nothing resolved the issue.

By chance I happened to stumble on at least a workaround. In the process of debugging, I changed my user agent, the page refreshed, and the links worked. Then I realized something: opening the debug console had refreshed the page as well. So I closed Safari, opened it again, came back to the page, and again couldn't open any links in CRM. I hit F5 and let the page refresh; once the refresh completed, all the links worked. I'm not sure why, but a refresh seems to resolve this.

Note: my co-worker couldn't use F5 in his Safari. I believe I remapped it in Safari at some point, since it's F5 in all other browsers and I just like it that way. I believe the default is Apple + R.


Chrome closes with CRM 2011 Rollup 12

Doing more testing with CRM 2011 Rollup 12, I found that Chrome was closing itself when I logged into CRM. This is very annoying, but having worked with CRM in IE before, I knew what it was. I was able to verify it by going to the CRM URL and changing the last part of the URL to /main.aspx, at which point I got a notification that a popup was blocked. Sure enough, after I added the CRM address to the popup blocker exception list, no more self-closing windows.

Update 2/14/13: I should also note that this affects Safari as well. Popup blockers cause quite a problem with CRM, and there is no notification about what it is trying to do. I find it a pain that CRM needs to launch into its own window. I personally keep it in a pinned tab in Chrome; I don't worry about it, and when I need CRM it's there and not in some other obscure window.


Using Chrome on CRM 2011 Rollup 12 will repeatedly show login prompt

While testing CRM 2011 Rollup 12, I noticed that it would not log me in from Chrome. Checking my security log and resetting Chrome back to defaults didn't shed any light on why this was happening.

After much searching, I happened to find the article explaining it: http://support.microsoft.com/kb/2709891/en-us?sd=rss&spid=15707

While this requires a registry edit and supposedly opens you up to a man-in-the-middle attack, it does indeed fix the problem. Hopefully a proper fix comes out in the near future; in the meantime, this works well.


The specified storage repository scan failed

This one has been bugging me for a while. I had a CIFS-based ISO share, and for some reason it would fail to update now and then with the error message "The specified storage repository scan failed". A restart of the CIFS system would fix it, but that was hardly a solution. Most Google results pointed to NAS/SAN issues rather than CIFS share issues, and I suspected the Windows system hosting the share was to blame because of the connection limit on non-server editions.

While manually browsing the CIFS share, I noticed something: there was an empty file on the share whose name was a UUID, and sure enough, I was getting the scan error at that time. So I copied the file somewhere else and deleted it from the share. A rescan then worked without any errors, and new files now show up right away when added to the ISO share.
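
For anyone who would rather trigger the rescan from the CLI: this error message comes from XenServer's SR scan, so assuming a XenServer ISO SR, it looks roughly like this (the SR name-label is an assumption):

# Find the ISO storage repository's UUID (name-label assumed)
xe sr-list name-label="ISO library" --minimal
# Trigger the scan that the error message refers to
xe sr-scan uuid=<sr-uuid>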


Mapping a drive letter to a SharePoint Document Library

I was looking for a way to make it easier for users to access a SharePoint document library, with the plus side of being able to put it into a GPO and map it across the domain. Of course you can add a network location pointing at a SharePoint document library, but I didn't know of a way to script that via a GPO logon script, and I really didn't research it much; I figured mapped drives would be easier to explain to users.

It turns out this is really simple to do. The SharePoint server exposes the library as a WebDAV path on a network share, so using \\SERVERNAME\DavWWWroot\ got me the root of the SharePoint site. From there, I just filled out the rest: for example, \\SharePoint\DavWWWroot\Shared%20Documents links to the Shared Documents library on the SharePoint site.
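
The actual mapping is a single net use command, run here at a command prompt; the drive letter and server name are assumptions:

rem Map S: to the Shared Documents library (server name assumed)
net use S: \\SharePoint\DavWWWroot\Shared%20Documents /persistent:yes
rem In a .bat logon script, escape the percent sign as %%20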

From that point on, it's as simple as using a GPO to deploy this and map the drive.


Adding search to Dynamics CRM Customer Portal

So I needed to add a search box to a Microsoft Dynamics CRM Customer Portal, and into Visual Web Developer 2010 Express I went. I opened the Customer Portal project and expanded Pages and eService in the tree.

Opening up ViewCases.aspx, I located the following:

<asp:GridView

I added this before that:

<asp:Label ID="Label2" Font-Bold="true" runat="server">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Search: </asp:Label> <asp:TextBox ID="CaseSearch" AutoPostBack="true" runat="server"></asp:TextBox>

I then opened the code-behind for the file (right-clicked and chose View Code) and located the following:

if (casesByCustomer.Count() == 0)

and added before that:

if (!string.IsNullOrEmpty(CaseSearch.Text))
{
    casesByCustomer = casesByStatus.Where(c => c.Title.ToLower().Contains(CaseSearch.Text.ToLower()));
}

I saved the files, built the code and tested. Everything worked just fine, except for an error caused by the view I had: it displayed the Follow Up By field, and some cases didn't have it set. If a search returned only cases with no Follow Up By, it threw an error, so I just swapped Follow Up By out for Modified On.


OEM like branding in Windows 7

I needed to figure out a simple way to set the default background image in Windows 7. I didn't want to force the background, I didn't want to apply a GPO, and I wasn't having much luck editing the default user hive. I happened to stumble onto this solution by chance, really. So into regedit we go.

The location I am looking for is: [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Themes]

You may need to add keys, but in general you want the values below set. I set DesktopBackground too, but excluded its data in this example because it contained a path I didn't want to post; it is just a hex-encoded string (hex(2) is an expandable string in .reg files) containing the full path to an image, and using the oobe folder is recommended. The oobe folder may not exist on your system, so go ahead and create it if needed.

"Drop Shadow"="FALSE"
"Flat Menus"="FALSE"
"SetupVersion"="10"
"InstallTheme"="C:\\Windows\\resources\\Themes\\aero.theme"
"InstallVisualStyle"="%ResourceDir%\\themes\\Aero\\Aero.msstyles"
"DesktopBackground"=hex(2):
"BrandIcon"="C:\\Windows\\System32\\oobe\\info\\logo.bmp"
"NoThemeInstall"=dword:00000000
"ThemeName"="Theme Name"
"WindowColor"="Slate"

Using this, plus other OOBE branding, I was able to make a .bat file which, upon running, completely sets up the branding on the System Information panel, the theme and the default background.
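
To give a flavor of what such a .bat file does, here is one of the values above written via reg add; the path is the one from the example, and the rest of the values follow the same pattern:

rem Point the System Information branding icon at our logo (value from the listing above)
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Themes" /v BrandIcon /t REG_SZ /d "C:\Windows\System32\oobe\info\logo.bmp" /f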


Highslide for Wordpress Plugin