SFTP, SSHFS, VPN + exportfs, and WebDAV.

While working on some code, I needed a way to access my remote files much faster and more easily than my current methods allowed. So after some testing, I came across a solution.

I started with my simple SSH session. This proved not to be so helpful when editing multiple files or moving around. While the power of the command line is great, it isn't so great for developing larger scripts or jumping between multiple files easily.

So, on to SFTP. I set up an SFTP-only user by adding that user to a group and forcing that group in my sshd_config to always use sftp:
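The original config block didn't survive, but the setup described works roughly like this in sshd_config (the group name here is an assumption):

```
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory %h
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Note that with ChrootDirectory, sshd requires the chroot target (the user's home here) to be owned by root and not group-writable, or logins will fail.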

This works very well and is much more secure than FTP. It uses the SSH backend, which is very secure, but restricts the session to an FTP-like layer. The jailed user can't run any commands, can't forward ports, and is chrooted to a directory (their home in this case). However, this was slow. On average it would take 4 seconds to load a file. Directory listings were fairly fast (usually 1 second, sometimes 2). Unacceptable delays just to edit a file.

Since SFTP was out of the question, I figured SSHFS would perform similarly, but gave it a try anyway. I set up SSHFS using OSXFUSE and SSHFS, then MacFusion.app (a simple GUI to test with; if it worked, I would learn to use the CLI). With that setup, it was even worse. Files would open in 1-2 seconds, but directory listings just took forever, sometimes not loading at all.

As SSHFS was not an option, and since I wanted to try it anyway, I set up a VPN, OpenVPN being the choice here. I spent a few hours getting this working. It took a while to configure, as my firewall was blocking a lot of the connections; even once I had the right port configured, the firewall still blocked access. But once I sorted out allowing that private traffic and got the certs in the right place, I was connected to my new VPN. I will note that if you don't sign the certificate, it doesn't produce a valid .crt file, so make sure to say yes to that.

After setting up the VPN, I needed to set up exportfs so I could export the directory I wanted. More trouble there: it took a combination of the correct options on the server side (rw,sync,no_subtree_check,insecure,anonuid=1000,anongid=1001,all_squash) and the right ones on the client side (-o rw,noowners -t nfs) to finally get it working properly. Alas, after all that trouble, it had the same issue as SSHFS: slow directory listings. That was unacceptable and would not do.
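Put together, the working combination looked something like this (the export path, client subnet, server name, and mount point are assumptions; only the option strings come from above):

```shell
# Server side: the export line in /etc/exports, then reload the exports.
#   /var/www  192.168.2.0/24(rw,sync,no_subtree_check,insecure,anonuid=1000,anongid=1001,all_squash)
exportfs -ra

# Client side (OS X), using the options mentioned above:
mount -t nfs -o rw,noowners vpnserver:/var/www /Volumes/www
```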

Finally, I tried WebDAV. At first I was trying it in a directory, but my location directive for PHP files in Nginx was wreaking havoc, so I just set up another subdomain to deploy it under. It also appears that Nginx, at least on Ubuntu 12.04 (and possibly similar versions of Debian), has the dav module and extension (for full support) built in. I simply needed to write the configuration for it. Really easy to do and it didn't take much time; I think I had it set up in less than 30 minutes.
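The configuration itself wasn't preserved; a sketch of the kind of server block this describes, using the dav module plus the dav_ext extension (subdomain, paths, and auth file are assumptions):

```
server {
    listen 80;
    server_name dav.example.com;
    root /var/www/dav;

    location / {
        # Core dav module:
        dav_methods PUT DELETE MKCOL COPY MOVE;
        # dav_ext module, for "full support" (PROPFIND is what makes
        # directory listings work in most clients):
        dav_ext_methods PROPFIND OPTIONS;
        dav_access user:rw group:rw;
        create_full_put_path on;

        auth_basic "WebDAV";
        auth_basic_user_file /etc/nginx/htpasswd;
    }
}
```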

The result is great. WebDAV is fast: directory listings are almost instant and files open in just a second. OS X (Mountain Lion) does not seem to have quite the right support for WebDAV, though, and attempts to look for resource files and other hidden files (such as .ql_disablethumbnails, which I assume tells QuickLook not to load thumbnails). So it was over to my FTP client, which supports WebDAV. I wish I could have had native Finder support for it, but oh well.

An IRC user said it best, and I couldn't agree more now: < rnowak> SleePy: webdav rocks, totally underused.


Droid 4 Verizon Update fails on rooted devices

Verizon has finally released Ice Cream Sandwich, and my Droid 4 got the notice. However, my device was also rooted to remove the bloatware, and to my frustration the update kept failing, again and again.

I want to make special note of a trick I learned. Just after the phone reboots to install the update, when the Android system update screen shows up, hit the up+down volume keys (no Konami code here). This brings up an event log/diagnostics screen that tells you what is going on, and this helpful menu also shows you why an update fails to apply.

In my case it was failing on “/system/app/VCASTVideo.apk”. To the internet, and after hours of searching, I finally came across a solution, though it only works on rooted devices. A kind internet user posted a Droid 4 dump (http://www.droidforums.net/forum/droid-4-hacks/198775-droid-4-dump.html). After downloading the parts, I was able to extract the files and locate the app/VCASTVideo.apk file.
Once I had that, I simply plugged my Droid into my computer as a USB mass storage device and transferred the file to my SD card. This was kind of a pain, as it appears there are two partitions it mounts. One of them shows up properly; the other I didn't try to locate (I should have, though).

Once it was on the SD card, I unplugged the phone from the computer, waited for it to reattach the SD card, and opened Root Manager. Using this utility (which requires root), I copied the file to /system/app, fixed the permissions (-rw-r--r--), and retried the update.
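Root Manager does this through its GUI, but the equivalent from a root shell on the phone would be roughly this (paths are assumptions):

```shell
# Remount /system writable, copy the APK in, set -rw-r--r-- (644),
# then put /system back to read-only.
mount -o remount,rw /system
cp /sdcard/VCASTVideo.apk /system/app/VCASTVideo.apk
chmod 644 /system/app/VCASTVideo.apk
mount -o remount,ro /system
```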

I had another application that also failed, but it was rinse and repeat at that point. After that, the update finally applied. This took me hours, mostly because the update had to be downloaded all over again each time before I could apply it. I never did try to see whether I could patch it from my SD card, as some suggest is possible.

I am very thankful to the anon who provided that Droid 4 dump, and to the many others I went through while trying to fix my device. Hopefully those files don't go away for a few months, as I am sure they will be very helpful for rooted phones trying to update. It also seems I shouldn't delete bloatware; freezing it seems like the better option.

As well, Voodoo OTA RootKeeper is a must. Having installed and run it prior to the updates, it appears that even after applying ICS I can restore root with no problem.


Changing ChoiceType to ChoiceTypeFillIn in InfoPath 2010

InfoPath 2010 does not let you change the field type to a fill-in or any other type; you have to use SharePoint to modify the column. However, once you do this, even when you reopen the modified form in InfoPath, the dropdown does not change to a Fill-In type. It seems that while the column is updated (and InfoPath asks to update the changed columns), the binding in the form is not updated.
In addition, it seems that InfoPath 2010 doesn't offer any option or checkbox to enable the Fill-In type. The only way around this is to delete the dropdown control (not the field) and add a new one by dragging it in. Not the friendliest of setups when you have rules and other changes attached to the control.


Microsoft CRM 2011 IFD 404 error

I had an issue today where one machine was getting a 404 error while logging into CRM over IFD, while the internal CRM login URL worked just fine. Since this was only affecting the one machine, I didn't suspect any issue with the IFD setup. That said, I did check the CRM server to verify that the login was actually successful (it was) and to make sure the ADFS relying party was updated.

What this issue finally came down to was that the organization URL (hxxps://org.crmhost.com) had been added as a trusted site. It seems that caused issues with the ADFS/IFD login page; removing it from trusted sites made everything work as expected.

The important piece here is that multiple URLs are used during the login process and elsewhere: dev, auth, and sts are other default subdomains used during the IFD setup process. Adding these to trusted sites allows the login to work properly. Even easier, a wildcard (hxxps://*.crmhost.com) can be added to trusted sites instead.

Finally, if you have a trust set up between the CRM host domain and your domain, you can use the internal CRM URL (hxxp://internalcrm.crmhost.com). In that case, add the internal CRM URL to the intranet sites zone. This allows your domain credentials to be passed directly to the server without a login prompt. Just remember to set up your security policies to prevent unwanted logins to machines and other security risks.


SMF on Nginx+SPDY

No surprise here: since SPDY is a server-side concern, SMF works with no problems on Nginx+SPDY, and there should be no problem with a SPDY plugin/module in any other web server either. It just needs to be set up on your web server. Nginx currently has a patch for it, but I suspect it will be merged into trunk soon and make its way into a stable Nginx release.


Another Ubuntu upgrade, Another dovecot+postfix breakage

It seems like every time I upgrade Ubuntu, dovecot+postfix breaks. Maybe it's just my luck, but it has gotten fairly annoying that this is the only service that breaks after every upgrade.

This time I spent hours last weekend installing, uninstalling, and reinstalling postfix and dovecot four or five times. I'm sad to say I don't know exactly what fixed it, but I was able to receive mail again.

Then today I found out I wasn't able to send mail, so it was back into debug mode to resolve that.

While doing some tests, I realized that I couldn't even log into my mail server over SMTP (port 25). After some digging around I came across this little post:

The most important part was a new line plus a change to an existing one:
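The snippet itself wasn't preserved; based on the description that follows, the SASL config ended up looking something like this (the file path and the lines other than the two called out are assumptions about a typical MySQL-backed setup):

```
# /etc/postfix/sasl/smtpd.conf
pwcheck_method: auxprop
auxprop_plugin: sql        # changed: this used to be "mysql"
sql_engine: mysql          # the new line added below it
mech_list: plain login
```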

For me, auxprop_plugin went from mysql to sql, and I added the new line below it. After a proper service restart, that resolved the problem. However, I still couldn't connect to mail over SMTP+SSL (i.e. SMTPS on port 465).

First off, I discovered my SSL certs for dovecot were outdated (expired, it seems). While this shouldn't have been causing the problem, I reissued the certificates. A quick search turned up makecert.sh, and I was quickly back in business after backing up, deleting, and generating the new certificates. I did modify the script to generate longer-lived certificates, though, so they wouldn't expire as fast (the default is 1 year).

In my research, I found a helpful command that would tell me whether SSL was working:
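The command itself wasn't preserved; the usual way to test an SMTPS (wrapper-mode SSL) port with OpenSSL looks like this (hostname is a placeholder):

```shell
openssl s_client -connect mail.example.com:465
```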

It failed, as you can see. I will also mention that you can test just TLS (STARTTLS) here by using:
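Again reconstructed rather than the original command, but the standard STARTTLS test against the plain SMTP port is (hostname is a placeholder):

```shell
openssl s_client -connect mail.example.com:25 -starttls smtp
```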

I ran this command on the server directly, and it gave me more output, which became the basis for my Google searches.

After many Google searches that turned up few results, I stumbled onto this blog post, which had my answer: http://abing.gotdns.com/posts/2008/getting-postfix-to-run-smtps-on-port-465/

I had those options commented out, so while the service was listening on 465, the options were not set to enable TLS on that port. A few quick changes and a service restart later, everything was working. Which leaves me with another note: be more careful merging config files over SSH during an upgrade. I most likely botched the file at that time.
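For reference, the relevant lines in Postfix's master.cf on this kind of setup would look something like this (the exact set of -o overrides beyond wrapper mode is an assumption):

```
smtps     inet  n       -       -       -       -       smtpd
  -o smtpd_tls_wrappermode=yes
  -o smtpd_sasl_auth_enable=yes
```

The key one is smtpd_tls_wrappermode=yes, which tells smtpd that connections on this port are SSL from the first byte rather than upgraded via STARTTLS.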


AirPort Extreme ignoring ICMP requests

After setting up my ISP's modem/router combo unit to stop routing and act only as a bridge, I came across an issue where I couldn't update my IPv6 settings with my new IP address, because my router was not responding to ICMP requests.

Well, it turns out that I had enabled a default host or, as the rest of the world knows it, a DMZ. The machine I had put in the DMZ was my Windows machine, and its firewall was blocking ICMP requests.

The simple solution here is to just remove the machine from the DMZ and let the router respond to ICMP requests itself.

I should also note that you can deliberately drop ICMP requests by setting the DMZ host to an unused IP on your network; the requests will silently fail as long as that IP is not in use. Probably the easiest way to do this is to tell the router to assign DHCP addresses only from a range such as 100 to 200. All your normal systems will get a DHCP address in that range, statically configured systems can use addresses outside it, and you can be sure no normal system will end up on the DMZ address.


OpenFiler with a RocketRAID card

Neither OpenFiler, nor Linux in general, likes working with RocketRAID cards. However, despite what is said, it is possible to set this up. It took some time, searches, testing, frustration, and a piece of tape over the card's speaker (the RAID-failure beep got annoying).

The first problem is installing OpenFiler. It does not like installing onto the RAID, for the same reason it takes work to get it to install the driver. I gave up early on trying to make that work and just opted to run the OS on a single non-RAID drive plugged directly into the system board. It took some time with the BIOS settings and the OpenFiler installer to get it to recognize the drive. I can't be sure why, but I assume it's because the motherboard also had built-in RAID support and OpenFiler was trying to load those drivers as well. After changing some BIOS settings and a few rounds of trial and error with the installer, it finally installed. I think I had to load the IBM RAID and USB mass storage drivers for it to get to the install screen.
After completing this, I mostly reversed the changes I had made to the BIOS. I did have to make some additional BIOS changes and move the drive to a lower SATA port on the motherboard for the BIOS to recognize it as a boot option. In the end, I needed it to boot off the standalone hard drive while the RAID card was plugged in.

After that, the problem came down to getting the driver to work, which was the trickiest part. During testing I found that the drivers the manufacturer provides would not interact properly with the RAID card; in fact, they would split the array in two and send off alerts. I got tired of the beeps and put tape over the speaker while I worked, although it could still be heard, much to the disappointment of those around me.

How I finally got it to work was actually easier than I thought. First, I downloaded the open-source generic driver, untarred it, and changed my directory to hptdriver2/rr231x_0x-linux-src-v2.5/product/rr2310pm/linux.

Then I ran the make commands:
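The exact commands weren't preserved; for a driver tree like this, the build step is typically just (an assumption based on the usual out-of-tree module workflow):

```shell
# Build the rr231x_0x module against the running kernel's headers,
# then install it under /lib/modules/$(uname -r)/.
make
make install
```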

Then I exported some variables.

Finally, I changed back to hptdriver2/osm/linux and ran install.sh. It will tell you that it failed to update the Linux RAM image; that's OK at this point.

Now, to get the image to build, I had to copy the .ko file a few times. I am sure there is a reason it wanted the driver under those names, but nonetheless the copy commands worked and things went just fine.
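The specific filenames weren't preserved; the shape of it was copying the one built module to each name the initrd script complained about, something like (the target names here are hypothetical):

```shell
# The initrd build expected the HighPoint module under several names,
# so the same .ko is copied to each; then rebuild the module index.
cd /lib/modules/$(uname -r)/kernel/drivers/scsi
cp rr231x_0x.ko rr2310_00.ko
cp rr231x_0x.ko rr231x_1x.ko
depmod -a
```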

Then I finally built the RAM image. My first couple of tries failed, which is when I found out I needed to copy those files. When it was still failing after that, I found that the sata_mv driver had been removed by install.sh and the kernel still wanted it (or at least thought it did). So I told the image build that it was built in, and it succeeded.
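With classic mkinitrd, "telling the image it was built in" maps to the --builtin flag, so the invocation would be along these lines (image filename is an assumption, and OpenFiler's mkinitrd may differ slightly):

```shell
# Include the HighPoint module, and treat sata_mv as built into the
# kernel so mkinitrd stops looking for the removed module file.
mkinitrd --with=rr231x_0x --builtin=sata_mv \
    /boot/initrd-$(uname -r)-hpt.img $(uname -r)
```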

Once that completed, I opened the GRUB config file (in /boot/grub), duplicated the first boot entry, and modified its RAM image line to point to the new image. I made sure to leave the old entry in place in case the new one didn't work, so I would have another way to boot the system.
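The duplicated entry would look roughly like this in GRUB legacy's config (titles, kernel version, and root device are placeholders; only the initrd line differs from the original entry):

```
title OpenFiler (RocketRAID initrd)
    root (hd0,0)
    kernel /vmlinuz-<version> ro root=/dev/sda1
    initrd /initrd-<version>-hpt.img
```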

Finally, I issued a reboot and hoped for the best. To my luck it started up, with no beeping, and running “fdisk -l” in the console showed only the main OS disk and a single RAID disk. When I had it wrong, or didn't have the drivers, Linux would see each of the drives individually.

While I didn't do this, at this point you should be able to copy the OS over to the RAID array. Windows recognizes the RAID card just fine, so you could use Hiren's BootCD and run Raw Copy to clone it over. GRUB works fine with the RAID card; it is only when Linux starts without the RocketRAID drivers that it kernel panics and fails. Having the RocketRAID drivers in the RAM image should let it start up fine.

I should also note that I had tried many times and failed on this same system. Before the successful attempt, I searched for all instances of hpt*, rr2*, and anything else I could think of related to the driver, and removed them. It's possible something else still existed and that is why it worked this time.


SMF $user_info as a class

I wrote up this method while writing my Pastebin, mostly as an experiment in not having to use as many globals in my main code. I think it turned out nice and easy. Knowing that SMF 3.0 will use OOP doesn't change much, as this is unlikely to be how it will be implemented there; at least this can act as a bridge to the new code at that time.

I should explain the _() method. I set it up so I could have a singleton. It also allows me to use the class in a sort-of-static fashion by doing “userInfo::_()->id;”. Nice, quick, and easy.

I could have written this like my smcFunc class, but chose otherwise because I wanted to use it as an object rather than through static methods. A setup like that should be possible, but I didn't test it.

I never tested it, but I don't think accessing multidimensional parts of $user_info will work; things like $user_info[‘group’][1] most likely won't work here. I should add support for drilling down into the array, but I'll save that for a later day.
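The class itself didn't survive, but from the description above, a minimal sketch would look like this (the exact contents of my version are assumptions; only the singleton _() accessor and the mapping onto the $user_info global are from the post):

```php
<?php
class userInfo
{
    private static $instance = null;

    // Singleton accessor, enabling the sort-of-static usage
    // described above: userInfo::_()->id;
    public static function _()
    {
        if (self::$instance === null)
            self::$instance = new self();
        return self::$instance;
    }

    // Map property reads onto SMF's $user_info global.
    public function __get($key)
    {
        global $user_info;

        // As noted above, multidimensional access such as
        // $user_info['group'][1] is not supported by this mapping.
        return isset($user_info[$key]) ? $user_info[$key] : null;
    }

    public function __isset($key)
    {
        global $user_info;
        return isset($user_info[$key]);
    }
}
```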


Highslide for Wordpress Plugin