Sunday, December 12, 2010

Setting up a VPN connection to a Linux Internet server

First, this has absolutely nothing to do with Ruby or Rails, but I wanted to post this since I couldn't find much good info anywhere.

If you need to connect to a server directly on the Internet (not on a private LAN), either to access services on the server that are not publicly exposed or to tunnel your Internet connection through that server, setting up a VPN connection to the server is the way to go. Most documentation and tutorials cover setting up a VPN to an entire behind-the-firewall network, but this guide is for setting up a VPN connection to just one server.

OpenVPN is the best server-based VPN solution out there. It's open source, so you can install it on just about any OS; I'll be guiding you through setting it up on an Ubuntu or Debian Linux server. Client software is readily available and easily configured for Windows, Linux, and Mac.

Setting up OpenVPN Server
Run "sudo apt-get install openvpn" to install the OpenVPN server.

Now you'll need to generate certificates and keys. Some example scripts are provided to make this easy.
  1. Run "sudo mkdir /etc/openvpn/easy-rsa/"
  2. Run "sudo cp -r /usr/share/doc/openvpn/examples/easy-rsa/2.0/* /etc/openvpn/easy-rsa/"
  3. Run "sudo chown -R $USER /etc/openvpn/easy-rsa/"
  4. Edit the file /etc/openvpn/easy-rsa/vars, and change the KEY_COUNTRY, KEY_PROVINCE, KEY_CITY, KEY_ORG, and KEY_EMAIL to what you want to show up in your server certificates.
  5. "cd /etc/openvpn/easy-rsa/"
  6. Run "source vars"
  7. Run "./clean-all"
  8. Run "./build-dh"
  9. Run "./pkitool --initca"
  10. Run "./pkitool --server server"
  11. Now generate the actual keys in the keys folder. "cd keys"
  12. Run "openvpn --genkey --secret ta.key"
  13. Now copy all keys to the main OpenVPN folder with "sudo cp server.crt server.key ca.crt dh1024.pem ta.key /etc/openvpn/"
  14. Now you'll need to generate a client certificate. "cd .."
  15. Run "source vars"
  16. Run "./pkitool client-certificate-name", replacing client-certificate-name with whatever you want to call it.
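Before moving on, it's worth confirming that easy-rsa actually produced every file the server config will reference. A minimal sanity-check sketch (the check_keys helper is my own convenience function, not part of OpenVPN):

```shell
# check_keys: verify that easy-rsa produced every file the server
# config references (server cert/key, CA cert, DH params, tls-auth key)
check_keys() {
  dir="$1"; missing=0
  for f in server.crt server.key ca.crt dh1024.pem ta.key; do
    [ -f "$dir/$f" ] || { echo "missing: $f"; missing=1; }
  done
  if [ "$missing" -eq 0 ]; then echo "all expected files present"; fi
  return "$missing"
}

check_keys /etc/openvpn/easy-rsa/keys || echo "generate the missing files before continuing"
```

If anything is reported missing, re-run the corresponding easy-rsa step before copying files into /etc/openvpn/.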
Now you'll need to create a server config file for OpenVPN to use. Place this file in /etc/openvpn/server.conf. An example file with plenty of comments explaining all of the options is in /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz. I'll show my example file here, which is set up for the client to access only the server and to use the server's Internet connection.


# Port to listen on, this needs to be opened
# in the firewall
port 1194

# TCP or UDP server. I had better download
# throughput when using TCP, but slower
# upload throughput. If you have speed
# issues try switching this.
proto tcp
;proto udp

# Use "dev tun" to create a
# routed IP tunnel, where clients get their
# own subnet behind the server. This is
# what you should use if you only want to
# VPN to the server to access private server
# resources, or to get the client out to the
# Internet through the server's connection.
# "dev tap0" is used for an ethernet bridge
# VPN, this config isn't for this type of
# VPN.
dev tun

# Certificate and key file locations
# (in /etc/openvpn/)
ca ca.crt
cert server.crt
# This file should be kept secret
key server.key
# Diffie hellman parameters.
dh dh1024.pem

# For "dev tun" configurations only.
# Configure server mode and supply a VPN
# subnet for OpenVPN to draw client
# addresses from. This can be modified to be
# any private /24 network (ie 192.168.10.0,
# 10.8.8.0, etc.) that the server doesn't
# already know about.
# The server will take 172.18.100.1 for
# itself, the rest will be made available to
# clients. Each client will be given its own
# /30 subnet in this range, and will be able to
# reach the server on 172.18.100.1. Comment
# this line out if you are using "dev tap"
# for ethernet bridging.
server 172.18.100.0 255.255.255.0

# Maintain a record of client <-> virtual IP address
# associations in this file. If OpenVPN goes down or
# is restarted, reconnecting clients can be assigned
# the same virtual IP address from the pool that was
# previously assigned.
ifconfig-pool-persist ipp.txt

# Push routes to the client if you have
# other subnets that you want the client to
# access through the VPN. Not necessary
# for this setup, since the client only
# needs to reach the VPN server itself.
;push "route 192.168.10.0 255.255.255.0"

# Specify a DNS server that the client
# should use, by default it will continue
# to use its regular DNS server, which you
# probably don't want it using with all 
# traffic going through the VPN. 8.8.8.8
# is the Google public DNS server
push "dhcp-option DNS 8.8.8.8"

# Enable this to cause all of the client's
# Internet traffic to go through the VPN,
# including DNS requests (if this is set
# you should enable the above DNS option).
push "redirect-gateway def1 bypass-dhcp"

# The keepalive directive causes ping-like
# messages to be sent back and forth over
# the link so that each side knows when
# the other side has gone down.
# Ping every 10 seconds, assume that remote
# peer is down if no ping received during
# a 120 second time period.
keepalive 10 120

# The server and each client must have
# a copy of this key.
# The second parameter should be '0'
# on the server and '1' on the clients.
tls-auth ta.key 0 # This file is secret

# Enable compression on the VPN link.
comp-lzo

# It's a good idea to reduce the OpenVPN
# daemon's privileges after initialization.
user nobody
group nogroup

# The persist options will try to avoid
# accessing certain resources on restart
# that may no longer be accessible because
# of the privilege downgrade.
persist-key
persist-tun

# Output a short status file showing
# current connections, truncated
# and rewritten every minute.
status openvpn-status.log

# By default, log messages will go to the syslog
# Use log or log-append to override this default.
# "log" will truncate the log file on OpenVPN startup,
# while "log-append" will append to it. Use one
# or the other (but not both).
;log openvpn.log
;log-append openvpn.log

# Set the appropriate level of log
# file verbosity.
# 0 is silent, except for fatal errors
# 4 is reasonable for general usage
# 5 and 6 can help to debug connection problems
# 9 is extremely verbose
verb 3

# Silence repeating messages. At most 20
# sequential messages of the same message
# category will be output to the log.
;mute 20


Now that you have the server configured, if you're running a firewall on the server, here are the rules to add to iptables (these are usually in /etc/iptables.rules) to allow traffic from your VPN clients through and to open the OpenVPN port. First, add this to the top section, above the current rules, substituting eth0 with the interface for the public Internet on your server.

*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A POSTROUTING -s 172.18.100.0/24 -o eth0 -j MASQUERADE
COMMIT

Now, add these to your filter rules:

-A INPUT -i tun+ -j ACCEPT
-A INPUT -p tcp -m tcp --dport 1194 -j ACCEPT
-A INPUT -p udp -m udp --dport 1194 -j ACCEPT
There's one last thing you'll need to do. To enable forwarding of data from your VPN clients to the server's Internet connection, edit the file /etc/sysctl.conf and uncomment the following line:

net.ipv4.ip_forward = 1
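Editing /etc/sysctl.conf only takes effect at boot; if you want forwarding on right away, sysctl can also set it at runtime (standard sysctl commands, nothing OpenVPN-specific):

```shell
# Turn on IP forwarding immediately, without a reboot
# (the uncommented line in /etc/sysctl.conf makes it permanent)
sudo sysctl -w net.ipv4.ip_forward=1

# Verify: should print "net.ipv4.ip_forward = 1"
sudo sysctl net.ipv4.ip_forward
```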

Now the server's configuration should be complete! The easiest way to make sure the firewall and forwarding settings take effect is to reboot the server.

Configuring the Client
Now you need a client to use with your OpenVPN server. For Linux, just use openvpn: you can simply place a client.conf file and all necessary certificates/keys in /etc/openvpn/. For Windows and Mac, get your client software from http://openvpn.net/index.php/openvpn-client.html. Use the Community version for Windows.

I had some trouble getting the Windows client to install on Windows 7, particularly the network driver install. The first time you install it, when the prompt to install the driver pops up, be sure to check the box to trust all software from this provider before clicking Install. If you didn't check this, you must first run Delete all TAP virtual ethernet drivers from the OpenVPN -> Utilities folder on the start menu as an administrator (right click, Run as Administrator), then uninstall OpenVPN, and then right click on the install file and click Run as Administrator to reinstall.

Now you'll need to copy the keys from the server to the client. For Windows, copy the following files to C:\Program Files (x86)\OpenVPN\config from /etc/openvpn/easy-rsa/keys on the server. For Linux, copy these files in to /etc/openvpn/.
  • client-certificate-name.crt
  • client-certificate-name.key
  • ca.crt
  • ta.key
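Since the .key files must stay private, avoid emailing them; scp over SSH is a convenient way to pull them down. A sketch, run from the client (user and the server address are placeholders for your own values, and the brace expansion assumes the remote shell is bash):

```shell
# Pull the client certificate, key, CA cert, and tls-auth key in one command.
# Replace user, your_server_ip_address, and client-certificate-name.
scp user@your_server_ip_address:"/etc/openvpn/easy-rsa/keys/{ca.crt,ta.key,client-certificate-name.crt,client-certificate-name.key}" .

# Keep the private key readable only by you
chmod 600 client-certificate-name.key ta.key
```

If the remote shell doesn't expand the braces, just run four separate scp commands, one per file.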
And you'll need to create a servername.ovpn file containing the client configuration in the same directory. On Linux, just call it client.conf in /etc/openvpn/. Here is my example servername.ovpn file:



client
dev tun
proto tcp
remote your_server_ip_address 1194
nobind
persist-key
persist-tun
ca ca.crt
cert client-certificate-name.crt
key client-certificate-name.key
tls-auth ta.key 1
comp-lzo
verb 3
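On Linux, with client.conf and the keys in /etc/openvpn/, you can bring the tunnel up from the command line. Running it in the foreground first makes any errors easy to see (the init script path is the standard Debian/Ubuntu one of this era):

```shell
# Start in the foreground to watch the log output; Ctrl+C to stop
cd /etc/openvpn
sudo openvpn --config client.conf

# Once it connects cleanly, let the init script manage it in the
# background instead (it picks up any .conf file in /etc/openvpn/)
sudo /etc/init.d/openvpn restart
```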

Now, in Windows, to launch the client, you'll need to right click on the desktop icon and click Run as Administrator. If you don't run it as administrator, the routes can't be modified. Once it's running, just right click on the tray icon and click Connect. That should be it! Now all of your Internet traffic will go through your server.
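Whichever client you use, a couple of quick checks from a shell will confirm the tunnel is actually up. The 172.18.100.1 address matches the "server" directive in the config above; the IP-echo URL is just one example of a what's-my-IP service:

```shell
# The server's VPN-side address, from the "server 172.18.100.0 ..." line
ping -c 3 172.18.100.1

# With redirect-gateway pushed, this should print the server's
# public IP rather than your own
curl http://checkip.amazonaws.com/
```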

Windows 7 or Mac?

This blog posting was originally made on brentsowers.com on January 1, 2010.

Your current Windows laptop is several years old and running pretty slow. You want to get a new laptop. The "I'm a Mac and I'm a PC" ads have been effective; they've made you think about getting a Mac. But you've also heard lots of good things about the new Windows 7. Do you make the switch and get a Mac, or get a new PC laptop with Windows 7? This is a question that many Windows users have thought about lately.

I've been a Windows user for a long time now, back to Windows 3.1. I used Macs a little 12-14 years ago in high school but haven't used one since. However, recently I got a Mac laptop at work for doing Ruby on Rails development. Shortly after I started using this at work, my Windows 7 upgrade discs arrived, and I upgraded my home desktop and home laptop to Windows 7. I've been using both for a few months now, and my experience with both has been very positive.

So I thought, why not write up my general impressions from using both Mac and Windows 7? I've been keeping constant notes on both as I've used them, learning how to use them, wondering why things work the way they do, finding problems with them, etc. By the end of this write up, I will give you my recommendation for the question at the beginning - make the switch or get a new Windows 7 computer.

Hardware
This isn't a true comparison, since I didn't get a new PC. My new Mac laptop is a 13" Macbook Pro. I've got to say that I am amazed at how nice of a laptop this is. The form factor is perfect, it's small enough to easily fit in to backpacks, it's easy to carry around, and it's light. The 13" screen looks great, it's still big enough to be able to do pretty much anything, however, I'm not sure I'd want to use it as my main computer, if I did I'd want an external display. All ports are on the same side which is nice. There aren't many ports, but when do you really need lots of ports on a laptop? 2 USB, 1 firewire (which I can't ever think of a reason why I'd need or even want), ethernet, power, display, SD card, and headphone. What else do you need? The DVD drive is on the other side, it's a slot load which is much better than a tray like many PC laptops have (no way to break a tray if there isn't one). The case is aluminum unibody, which means that it's basically one piece of aluminum. I LOVE this, the laptop feels much more sturdy than any PC laptop that I've ever used (well, maybe except for the big Toughbook). The only downside of this is that if you've just brought your laptop in from outside, and it's a cold day, the laptop will feel really cold!

In addition to all of the above things, Apple really seems to get the little details nearly perfect too. The screen opens and closes easier and more smoothly than any PC I've used. The power adapter is great - it attaches to the laptop with a magnetic connector. So if someone trips over your power cord, it will very easily yank out of the laptop without dragging the whole laptop with it. The keyboard keys are slightly lit up when it's dark in the room. The touchpad is huge and works great. The entire touchpad is a button, so there are no separate buttons to click, which is great.

Do I sound like an Apple fanboy yet?

My first gripe about the hardware is a pretty minor one. The display port is useless by itself unless you have a ridiculously expensive Apple monitor because it's a Mini Displayport connector. For it to be useful you have to buy the $30 Mini Displayport to VGA or $30 Mini Displayport to DVI adapter when you buy the laptop. Another gripe is the lack of choices for hardware. It's either a Macbook, Macbook Pro, or Macbook Air for laptops, and you don't get too much choice on components for them.

But those two gripes are far outweighed by how good the hardware is. I've never used or seen a PC laptop that has hardware that's as solid as the Macbook Pro.

Winner: Mac by a longshot


Turning on for the first time
I was very impressed with how simple and quick it was to get up and running on both. The Mac asked me a few questions like my name, then allowed me to easily select my wifi network. After this I was at the desktop! Windows 7 was about the same after upgrading with a clean install. There were very few questions, all were easy to understand, and it was very easy to select the network to connect to. There were no questions about drivers, hardware, etc.

Both just kind of leave you hanging after you get to the desktop. There is no tutorial, no video explaining how to use them, or what's different from previous versions of the OS. It seems pretty easy to me to figure out how to use, but I can see a lot of people being confused, particularly Windows users that are coming from XP and not Vista.

However, both operating systems are very intuitive. I didn't think too often "that's confusing, why is it like this?" This more than outweighs the lack of a tutorial or video in both.

Winner: Both

Using the OS
This is a pretty big category so I'm going to split it up.

Loading programs and navigating between them
Loading programs and navigating between running programs are two seemingly simple tasks, but they're core to making an operating system easier or harder to use. Both Windows 7 and Mac attempt to essentially blur the line between a program that's running and one that's not. Windows 7 has added a new taskbar that replaces the old taskbar, which had stayed essentially the same since Windows 95. When you load a program, it shows up as a large icon with no text in the area where the taskbar used to be. If a program has multiple windows or instances running, the icon looks stacked. If you right click on this icon, you can "pin" the program to the taskbar, so it always appears there. Programs that are actually running look different in the taskbar so you can easily see which ones are running. Moving your mouse over a running program's icon shows you a little preview of the program, which you can click on to bring the program to the front. Many programs also support additional options if you right click on this icon, like loading your most recent documents for Word. All programs have an option to load a new instance of the program from the right click menu.

The old start menu is still there, it's like the Vista start menu which is one big list of programs. However, with the new taskbar, I rarely use the Start menu.

Mac has something similar to the new Windows 7 taskbar called the Dock. All running programs show up here, and programs that you've pinned to the dock are always there with one click to load them. This works great, just like Windows 7. It looks cooler than the Windows 7 taskbar, programs slide in and out with a cooler visual effect, and icons automatically resize depending on how many icons are in the dock. But that's about the only advantage it's got over Windows 7.

First, looking cooler comes at the expense of screen space. I'm sure it would be fine on a 17" Macbook pro with a big screen, but on my 13" it takes up way too much space at the bottom of the screen. I've moved it to the right side of the screen instead, but now it doesn't look as nice. Second, it's harder to visualize which programs are running. There's only a little dot below the icon to indicate that it's running, the visualization that Windows 7 has is much nicer. You can't mouse over the dock icon to get a preview of all windows of that application, like you can in Windows.

For navigating between running programs, Mac has a really cool feature called Expose. Activating Expose shows previews of all running windows in the foreground, and you can click on the one that you want to bring to the front. This is nice, but it's still not as easy to navigate between running programs on a Mac. First, bringing up Expose isn't quick or convenient: press F3 on the keyboard, or swipe down with all 4 fingers on the touchpad. Second, if you've got a lot of open windows, the Expose screen seems really cluttered. You can always click on the program's icon in the dock, and if you have just one window for a program, this is a quick way to bring that program back to the front. But if a program has multiple windows, you've got to right click on it, and a non-user-friendly text list of all windows shows up, with no preview.

I consistently find myself getting frustrated when trying to bring back a different window of an already running program on Mac. I never have this problem in Windows. This is a huge annoyance for me. If Mac would just copy the hover over window previews that Windows 7 has, this would help a lot.

Mac also has an Applications folder which is similar to the start menu, except just for launching applications. I can see that it won't be quite as cluttered as the Windows start menu, since only application shortcuts get installed here. But that also means that icons don't get grouped by program name, and there aren't shortcuts for help pages, read me files, etc. This could be a good or bad thing, I'm not sure how I feel about it.

Winner: Windows

Program User Interface Features
In addition to the operating system wide capabilities for navigating around, another important part is navigating around individual programs. Individual programs have their own controls, but most have a common set of capabilities.

First, is the menu bar. Most programs (but not all in Windows) have a menu bar with File, Edit, etc, where clicking each main heading shows a list of options underneath for performing actions. In Windows, each program can have but doesn't have to have a menu bar. The menu bar is inside of the program's window. This works well, and it's always worked this way.

On Mac, however, there is always one menu bar at the very top of the screen, spanning the entire length of the screen. The contents of the menu bar change based on what program is the active program. I do not like the way this works at all. First, even on a 13" screen, there's always some empty white space for bigger applications like Firefox. Smaller applications have even more white space. I'd rather have that space back and let the application have the menu. Second, you've got to change which program is active to even see what menu options there are. Third, you might think one program is active and start clicking on the menu bar to do something, only to realize that you're actually clicking menu options in a different program. I think that having one menu bar for all programs is confusing, and I think the way Windows (and other operating systems) have always done it, with a menu bar per program, is much better.

Mac and Windows both have 3 common buttons in all applications - close, maximize, and minimize. This hasn't changed much since Windows 95 for Windows - close completely closes the window and program, maximize makes the window take up the whole screen, and minimize keeps the program running but hides the window down to the taskbar. Things work similarly on a Mac, with a few key differences. Maximize (the green button) works the same. Minimize (yellow) is similar, except that the minimized program window goes into a separate area of the dock. I'm not sure why there is a separate area of the dock for these; I don't find it any more useful, and these minimized windows could just be activated by clicking on the program's main icon in the dock, like the Windows taskbar. The close button (red) is what I have a problem with though. When you click Close, the window goes away, but the program itself doesn't close, it stays running. The only way to get a program to actually stop running is to click the program's name in the menu bar at the top and click Quit, press Command (keyboard button) + Q while the program is active, or right click it in the dock and click Quit. I don't understand this at all. If you close the last window of a program, why would you want it to still run?

Maybe some of this is just me being used to Windows, but I just find the program specific controls much better in Windows 7.

Winner: Windows

Exploring and navigating files
Windows has Windows Explorer to browse and find files, and Mac has Finder. In Windows 7, not too much has changed from Vista, but it is somewhat different than XP. Windows 7 has added "Libraries", where you can quickly see all of your Documents, Music, Pictures, and Videos. This is similar to the My Documents, My Music, etc. that Windows previously had, except that you can add in other folders to these views as well. This is very helpful. Other computers in your homegroup (see the networking section later) also show up on the left side here. Navigating folders and files is pretty much like it always has been in Windows, except by default you don't see a tree structure of all files and directories as you're navigating.

Mac is pretty similar. I don't see a similar thing to Libraries though, the link for Documents is just one folder, and there is no Pictures, Music, or Videos links. There are quick search links on Finder by default, click to find all Images, Movies, and Documents, or everything from today, yesterday, or the past week.

Both have quick searching capability that will update search results as you type.

The capabilities are very similar, and any problems with one are equaled by different problems in the other.

Winner: Both

Speed
I haven't run any official speed tests or timed anything, so I'm just going by what I see. Both seem very quick, programs load quickly, and things seem to run fine when lots of programs are loaded at once. Boot up times in Windows 7 are much better than Windows Vista; they're fast enough that you hopefully won't need to leave your computer on at night just to avoid a long boot. But Mac is MUCH faster at booting up. Shut down is the same story, Windows 7 seems quicker than Vista, but still a lot slower than Mac.

Winner: Mac

Stability
So far, both operating systems seem to be pretty stable, but not without issues. In Windows 7, I got an error at one point when copying lots of files to a USB flash drive, and no matter what I couldn't copy files. I ended up having to format the drive to be able to copy new files to it (files could be read no problem). I reset my computer without properly shutting down, and got the ugly start Windows in safe mode prompt. I'm used to this so I know what it is, but a lot of people might be a little scared off by this ugly screen.

Mac isn't without its problems either. The program that I use with my AT&T 3G card, Globetrotter Connect, always causes problems if I use it for a long time. The network will stop working, and if I click Disconnect and reconnect, nothing happens. Unplugging and plugging the card back in does nothing. I can't even shut the computer down; if I try, the computer just sits there, and I have to hold the power button in for 5 seconds. This isn't a one time thing, it's happened to me at least 5 times. No program should ever cause me to have to hold the power button in to turn the computer off. And I had an issue once where the computer would not get an IP address until I fiddled around with the network settings, not actually changing anything but clicking through different screens.

But, despite these few problems, both seem pretty stable.

Winner: Both

Included Programs
The operating system by itself doesn't do you much good, you need programs to do stuff. There isn't much good to say about what comes with Windows. Internet Explorer 8 is a very poor web browser, Microsoft still hasn't caught up to freely available browsers (Firefox, Chrome, Safari). It's slow, and has security issues. Windows Media Player is OK but seems a little clunky. Windows DVD maker seems like it could be good but I haven't tried it. Two small but useful applications are sticky notes and snipping tool.

Mac, on the other hand, has great programs included. First, the web browser is Safari, which is much better than Internet Explorer. iTunes is the included media player, which I'm not a huge fan of but is better than Windows Media Player. Where Mac really stands out are the programs where there isn't something comparable in Windows. Take iPhoto. This program allows you to very easily organize and edit your digital pictures. It's very simple to use and has as many capabilities as any photo program a regular person would ever want. It's also got a cool face recognition feature, which will find pictures of the same person. And, you can map photos (although it didn't read the coordinates on my geotagged photos). It's a better program for managing your photo collection than any program I've ever seen on Windows, and it comes with the OS. iCal is another pretty cool program that will use your Gmail or other online calendar, or use your own if you don't want to use an online calendar. iChat allows you to video chat with your IM buddies. Honestly, I haven't tried many of the other i programs, but the ones I have used are great.

Another really cool and useful utility that comes with Mac is Time Machine. If you buy a Time Machine-compatible network hard drive (NAS) (you don't have to spend the extra money on an Apple Time Capsule), Time Machine will automatically make backups of your computer to it. You can easily browse through prior backups and find old versions of files, files that you accidentally deleted, etc. Or worst case, if your hard drive dies (which has happened to a surprising number of people I know with Macs), you can restore from this. It's so easy to use!

Winner: Mac by a long shot

Other Programs
Windows has been the dominant desktop operating system for a really long time, and it definitely shows in the number of third party applications that are available. Windows 7 will run most old Windows applications without any problems, but I have run in to a few that won't run correctly (Winamp and VMWare Server). But just like Apple says "there's an app for that" about the iPhone, well, there's an application for just about anything on Windows. And most are free. I can't say the same thing about Mac. While there are a lot, there aren't anywhere near as many as Windows. And a lot of them cost money.

The abundance of programs for Windows can cause problems for security and stability. Many install spyware, and that's how the application company makes money. However, a good antivirus will keep most of these away.

Winner: Windows

Security
Windows 7 is MUCH improved over Windows XP for security, but it still has its problems. Most of it is related to the abundance of free applications that install spyware. In Windows 7, if you run a good anti virus (I would recommend Malwarebytes Anti Malware, it's fast and effective, buy the full version so you can get the real time protection), and allow updates to be installed automatically (the default setting), you shouldn't have any problems. But the simple fact remains, you don't have to run an anti virus on Mac to be safe.

Winner: Mac

Networking and file sharing
I only have one Mac, so I don't have a good comparison here. But from what I see with Windows 7, their home networking features would be tough to beat. Windows 7 has a new "homegroup" feature, which allows you to share your documents, music, pictures, and video libraries with anyone in your homegroup. This does just about everything right, other computers in your homegroup show up in Windows Explorer on the left pane, and you can very easily view and edit these files. You can also set up custom permissions, so things are read only, or only certain people in your homegroup have access. The biggest problem with this is that it's only available in Windows 7, not even Vista. So other computers have to have 7 to use this. Microsoft, why has it taken you so long to get this, and why don't you make a Vista and XP program to do the homegroup?

Media sharing is also integrated in to the libraries, pictures, music, and videos can be shared to other media devices. iTunes has this capability too but I haven't tried it.

Winner: Windows

Price
OK, price is a pretty important thing. The absolute cheapest Mac laptop that you can get is $1000. A few months ago Apple revamped this base Macbook laptop and it's actually a really good computer now. The biggest problem with this is that it has a 13" screen, and you can't get a bigger screen. I like the 13" screen of my Macbook Pro, but I wouldn't want to use it as my primary computer. Most PC laptops are 15", but from what I could find the 13" PCs are actually more expensive. A comparable 13" PC laptop will run you maybe 200 dollars less.

However, most PC laptops are 15". I think a 15" makes more sense as your primary computer. So let's compare prices for those. The base 15" Macbook pro is $1700 (ouch!). A comparable HP laptop with about the same specs is $925. That's almost $800 difference. Let's take it one step further and look at 17" laptops. A top of the line HP 17" laptop is $1500, and a comparable Macbook Pro is $2875, a difference of almost $1400!

Where the PC really stands out is the budget laptop. Say you don't have $1000 laying around to spend on a laptop. You can get a really good HP laptop with an AMD 2.2 GHz dual core CPU, 15" screen, 3 gigs of RAM for $500. This computer won't be slow, it'll run just about as fast as the 15" laptop that I priced out above. Now if you really want to go budget, you can get an Acer 15" laptop with a single core CPU, 3 gigs of RAM, for $330 from Best Buy. This laptop will run fine for years to come. The cheapest you can get a Mac for is $1000.

It's hard to argue against this. You can say that Windows costs more because you have to buy extra programs (anti virus, good image editing, etc.). Well, that will barely make a dent in the almost $800 difference for high end 15" laptops.

Winner: Windows, by a long shot

Other Factors
One thing that I love on Mac is the new multi touch mouse gestures. You can use two fingers on the touchpad to scroll up and down, three to go back and forward, etc. This is WAY better than what most PC laptops do with reserving an area on the top and right of the touchpad for this. Also, Apple has FINALLY added a right click capability: simply click the touchpad with two fingers for the equivalent of right click. Apple people, how did you go for so long with having to hold Ctrl for right click?

One annoyance I have with Mac is how programs are installed. When you download a program, you usually have to drag the icon for the program into the Applications folder. Why don't programs do this automatically? And it leaves a "drive" on the desktop for the installer files, which you have to eject by right clicking on it or dragging it to the trash.

However, the Downloads folder makes up for this. All downloaded files are stored in a Downloads folder which is accessible from the dock at all times. No wondering "Where did I download that file to?" like sometimes happens in Windows.

Winner: Mac


Final Verdict
It's very close, but I would recommend getting a new Windows 7 laptop instead of making the switch to Mac. While the Macbook Pro is a great computer, it just isn't worth the huge price difference. While Windows 7 has its shortcomings, particularly in bundled applications, in other ways it's better than the Mac OS. Honestly, if prices were the same, I MIGHT recommend making the switch. But prices aren't the same. However, you can't go wrong with either.

Sunday, November 7, 2010

New Twitterscour gem

I've finished working on a new Ruby gem, twitterscour. Simply type "gem install twitterscour" to install it. With this gem you can search for tweets by user or by specifying a search term. You may wonder what makes this different from the other Twitter gems out there. Twitterscour actually pulls tweets from the Twitter web page, so every tweet that is publicly viewable is returned. Other gems use the Twitter API to get tweets, which will only return what Twitter considers the "most popular" tweets. For example, if I use the API to search for tweets by me when I'm not logged in, I only get 2 tweets back, despite the fact that I have over 60 tweets. Twitterscour will return all of them.
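To give a flavor of how it's used, a session might look roughly like this. Note that the method names here are illustrative guesses, not taken from the gem's documentation - check the twitterscour README for the real API:

```ruby
require 'twitterscour'

# Hypothetical calls -- consult the gem's README for the actual method names.
# The point is that results come from scraping the public Twitter pages,
# not from the API, so every publicly viewable tweet comes back.
tweets  = TwitterScour.from_user('some_username')    # all public tweets by a user
results = TwitterScour.from_search('#rubyonrails')   # tweets matching a search

tweets.each do |tweet|
  puts tweet.text
end
```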

Sunday, October 3, 2010

Bazaar problems and lessons learned

As promised in my blog post My Distributed Version Control System comparison, I'm going to share some of the problems that I've encountered with Bazaar and how I've gotten around them. To summarize my previous post, I decided to use Bazaar on my current project at work over Git and Mercurial. We've been using it for almost 6 months now, and overall it's worked very well. But we've run into a few problems and made a few mistakes, so I figured I should share these experiences.

Data transferred for new branch is huge
We had been using sftp as the transport mechanism to push and pull from our central repository server. I noticed that the amount of data transferred to create a new branch just kept getting higher and higher, eventually topping out at 400 megs even though our repository was only 22 megs in size on the server! Two steps fixed this. First, bzr+ssh is more efficient than sftp - only use sftp if you can't install Bazaar on the server. Second, I ran "bzr pack" on the repository on the server. I thought that data would get packed automatically, but it didn't appear to be. After these two changes, a new branch only transfers about 12 megs. Both problems may have been tied to using sftp, so I would not recommend ever using sftp; always install Bazaar on the server and use bzr+ssh:// instead of sftp://.
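The two fixes boil down to a one-time repack on the server plus a change of URL scheme. Something like the following, where the paths and hostname are made up for illustration:

```shell
# One time, on the server: compact the repository's storage
cd /srv/bzr/myproject        # hypothetical repository path
bzr pack

# From then on, on developer machines: use bzr+ssh instead of sftp
bzr branch bzr+ssh://user@server.example.com/srv/bzr/myproject/trunk
```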

Revision numbers keep getting renamed in the top level branch
We had been going about getting new changes into the top level branch incorrectly. Here is our basic setup: a top level "integration" branch on the server (called trunk), plus branches on the server for the different tasks that we're working on. Say two developers are working on two tasks, A and B, each with its own branch on the server. As of a week ago, A and B had the same revisions as trunk, up through revision 600. John works on A and pushes changes from his local development branch to branch A on the server every day. I work on B and push my local changes up to B every day.

John completes task A and pushes his changes up to trunk (revisions 601-610). Now I'm done with B, having made my own revisions 601-620. I can't push directly to trunk because trunk has new revisions that I don't have (John's 601-610). When we first used Bazaar, I would have merged trunk into my local branch and then pushed the merged changes up to trunk. This is not optimal, because John's revision numbers get renamed (from 601-610 to 600.1.1 through 600.1.10). You don't want revision numbers that were already in trunk to change; this confuses everyone. My local revision numbers are the ones that should change when I push them up.

So we changed our workflow: we now NEVER push changes into trunk. From trunk, we always pull or merge in changes from other branches. The easy way to do this is to create a local trunk branch, pull or merge your changes there, and then push up to trunk on the server.
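The revised workflow can be sketched as a handful of commands. Branch names and server URLs here are made up:

```shell
# Never push task branches straight to trunk. Instead, bring changes
# INTO a local copy of trunk, then push that copy up.
bzr branch bzr+ssh://server/srv/bzr/trunk local-trunk   # hypothetical URL
cd local-trunk
bzr merge bzr+ssh://server/srv/bzr/B      # merge my task branch in
bzr commit -m "Merge task B"              # B's revisions get the dotted numbers
bzr push bzr+ssh://server/srv/bzr/trunk   # trunk's existing numbers are untouched
```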

Fixing accidental pushes, pulls, and merges
What if you accidentally push a revision to another branch too early, or pull revisions that you didn't mean to yet? My first thought was to just do a revert. WRONG. A revert only changes the local file system to put the files back in the state they were in at the revision you revert to, and you must commit again to record those changes as a new revision. Your original changes are STILL there as a revision. So if you later want those changes back, it's too late - Bazaar thinks it already has them, and if you push the reverted revisions out again, Bazaar sees that it already has those revisions. You'd have to manually reintroduce the changes. The proper way to do this is "bzr uncommit" on the upstream branch on the server. This steps back a number of revisions (run "bzr help uncommit" to see how to step back more than one). When you're actually ready to push or pull the changes that went out too early, you can just push or pull again. The upstream branch won't have those revisions anymore since they were uncommitted, so everything should work out.
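A sketch of the recovery, with made-up revision numbers:

```shell
# On the branch that received the premature push -- do NOT revert.
bzr uncommit          # removes the newest revision from the branch
bzr uncommit -r 605   # or: step back so 605 is the tip (see "bzr help uncommit")
# Later, when the changes really are ready, push or pull again as normal.
```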

Making changes after a merge
After you do a merge, the merged changes are sitting in your local file system, and Bazaar records that there is a pending merge. When you do a merge, do one thing at a time: completely resolve the merge, then immediately commit ALL changes from the merge. The commit automatically marks the merge as resolved. Do NOT make ANY other changes after running the merge command until you have completely resolved the merge and committed its changes.
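The safe sequence in full, with made-up branch and file names:

```shell
bzr merge ../feature-a            # hypothetical branch location
# ...fix any conflicted files in your editor, then mark them resolved:
bzr resolve app/models/user.rb
# Commit with NO file list -- everything from the merge must go in:
bzr commit -m "Merge feature A"
```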

We ran into a big problem here by mixing regular changes with a merge. A merge was done by accident in a local branch. At the same time, a file unrelated to the merge was changed in that local branch. If a normal commit had been done, all of the merge changes would have been committed. But the developer didn't want to keep the merge. Rather than uncommitting, the developer changed the one file unrelated to the merge and made a commit specifying just that one file. This commit was pushed up.

This caused problems: because a commit was done after the merge, Bazaar marked the merge as resolved. But the actual changes pulled in during the merge were stranded in the local file system, since they weren't included in the commit. The local branch was then deleted by accident, so all of those changes essentially disappeared. Looking at the log, all of the revisions from the merge were still listed, but their actual contents were gone because the commit after the merge didn't include them.

We ended up reverting back to the revision right before the post-merge commit. We lost all changes after the merge, and had to manually re-introduce those changes (this was easier than manually re-introducing the changes from the branch that we merged with). Lesson learned: when committing after a merge, include ALL changes from the merge, and don't make changes unrelated to the merge.


Saturday, September 25, 2010

My Distributed Version Control System comparison

If you want to cut to the chase: I've been using Bazaar for 6 months now and I really like it. Read on for background and details on why I chose Bazaar over other VCSes.

A while back I had decided that I wanted to switch the version control system that we use for the main project I'm on at my job. The company has used a proprietary, licensed version control system called Accurev for years now (as a side note, most of the work at the company is not Ruby on Rails, but C, C++, and C#). Accurev is pretty easy to pick up and start using but has lots of shortcomings and can be pretty inflexible, not to mention the yearly per developer license cost.

So 7-8 months ago I finally decided to evaluate other version control systems for our project, and possibly for the rest of the company. Here are my requirements for a new VCS:
  1. Free and preferably open source
  2. Easy branching so each task people are working on can be in its own branch
  3. Easy merging
  4. Can use a central server for the main repository with secure, encrypted transport to server
  5. Developers still need to be able to use VCS if server is down or if they have no Internet
  6. Command line support for all main operations
  7. GUI available for performing basic commands, viewing the commit log, doing diffs, and performing merges. Preferably the GUI is built in to the VCS (I don't want to have to do a whole other study on what 3rd party GUIs to use)
  8. Works the same in Windows, Mac, and Linux (I know what you're saying, why Windows - this isn't for our project but for other non Rails projects at the company, the majority of which are Windows)
  9. Has a large community of users (not some new VCS that could go away in a year)

Because of #5, traditional free VCSes like Subversion fall short. So, I evaluated the 3 main distributed version control systems. A problem with all 3 that will hopefully get better with time (although Git has had plenty of time, so I'm not sure if it will) is a lack of documentation. Most of the examples for all 3 seem to be geared either towards a single developer, or a large, widely distributed open source project. The work on my project is neither of those - it's a group of 3 people working on a company proprietary project.

Git
Git is by far the most widely used distributed version control system. I started using git before this evaluation for personal projects. If you are working on an open source project, this is definitely the way to go - git gives you a ton of functionality, and using github.com is really easy. And it's extremely fast and efficient; it definitely has the others beat in that department. Setting up a central server repository is very easy: it works over plain SSH, so no special server daemon needs to be configured. But I eliminated git pretty quickly, here's why:
  1. Doesn't work too well in Windows. You can either go through the hassle of using it in Cygwin, which is not fun, or use the Google project msysGit. During the Pragmatic Studios Advanced Ruby on Rails class that I took earlier in the year, I saw about half of the class who had Windows laptops struggling the entire class to get git to install and work correctly. Since then, msysGit has come a long way, but I still believe that the other VCSes work better in Windows.
  2. Terrible built in GUI. The gitk program that comes with Git looks like a UNIX GUI from 1995. Even forgetting how bad it looks, it isn't too functional, it won't do much. I did a brief investigation to find a 3rd party GUI but couldn't find much. While I use the command line for most of my work, it's no substitute for a GUI for doing merges, diffs, and viewing branches, forks, and merges in the log.
  3. Revisions are identified by a long hash. For convenience you can refer to a revision by just the first 6 or so characters of the hash. Not everyone writing source code is so hardcore that they remember all of their work by a 6 hex digit identifier. Which would you rather remember, revision 121.5 or revision 6f88ca?
  4. User unfriendliness/complicated - Using git just seems to require having to learn too much. This is more of an intangible thing. While geeks who eat, breathe, and sleep coding (still not sure if I'd put myself in that category) can easily pick up git, seeing how a large group of regular developers struggled with Git at the Advanced Ruby and Rails class did not give me a good feeling for using Git with other developers.
Mercurial
Next up is Mercurial. It's very similar to Git. A main difference is that it's built with Python and runs well on any OS that Python runs on. The user experience is pretty much the same on Windows, Mac, and Linux. But I decided not to use Mercurial for some of the same reasons as Git:
  1. Bad built in GUI. Not quite as ugly as the git GUI, but pretty much just as useless.
  2. User unfriendliness/complication. Same as git, it just seemed a bit too complicated for regular developers to be comfortable with. I'll admit I didn't look in to Mercurial too extensively but it just seemed too much like git for my project.
  3. Slow and inefficient - Mercurial definitely seemed to run a little slower than git, and according to the benchmarks on Bazaar's site, its repositories take up much more space than a git or Bazaar repository.
Bazaar
Bazaar isn't quite as widely used as Mercurial, but it does have the support of the Ubuntu group and MySQL, and a code hosting site, launchpad.net, which is like GitHub (but also has bug tracking, mailing lists, etc.). I decided to go with Bazaar. My team has been using it for about 5 months now, and things have gone pretty well; everyone seems to like it (although it did take a little getting used to, coming from Accurev). For a good "why should I use Bazaar" page look at http://doc.bazaar.canonical.com/migration/en/why-switch-to-bazaar.html. Let me explain my reasons why we're using it, and why I think it's better than the other VCSes:
  1. User friendliness is a core goal and not an afterthought. Somewhere on their web page they have a quote that a main goal of Bazaar is "version control for human beings". I don't think you'd ever see this quote anywhere about git; the attitude there seems to be more "it works perfectly for the Linux kernel development team so of course it will work for everything else". Most commands are very easy to use, and the workflow seems to be a little easier. For example, you don't have to add modified files - a commit will automatically commit modified files. Having to "add" an already tracked file in the same way that you actually add an untracked file to the repository, like you have to do in git, doesn't seem intuitive.
  2. Cross platform support - like Mercurial, it's built on Python and runs great in Windows, Mac, and Linux.
  3. Nice built in GUI - The Bazaar Explorer GUI that comes with Bazaar is actually pretty nice. It looks good, lets you do almost all operations from the GUI, and the log viewer is excellent. About the only downside is that there is no built in merge tool; you have to use a 3rd party tool (I've found DiffMerge to work the best across all OSes). But the hooks are there in Explorer to launch any 3rd party merge tool.
  4. Revision numbers are not hashes - Each revision has a full hash that never changes (called the revision ID), but there is also a "revision number", which is sequential. It's much easier to refer to revisions by a single number than by a hash. A complication with this is that after a merge, revision numbers from the branch that you're merging with get renamed. Say that revision 6 is the last common revision. New revisions 7 and 8 are created on branch A, and 7 is created on branch B. If a merge with B is done on branch A, what was 7 on branch B becomes 6.1.1, then a new revision 9 is created on branch A when you commit after performing the merge. At first this seems a little unsettling but you get used to it quickly. It also makes viewing the revision log after branches a little clearer - you immediately see the last common revision from the revision number instead of by tracing back. I believe Mercurial works this way too but I didn't look into it far enough.
  5. Both checkouts and branches. Bazaar has a checkout command which is the same as a Subversion checkout - a working directory is created from the server with all current files, and any operations performed (commit, log view, etc.) happen on the server; there is no local repository. Having the ability to do both checkouts and fully distributed branches gives you a ton of flexibility for your project management. Say that you want work done from your office desktops to always be preserved on the server for security in case of a disk failure, but you also want people to do distributed work - contractors at other facilities, developers on laptops, etc. Bazaar is the only VCS that I know of that can do both. This is also handy for test servers - simply do a checkout so that a full copy of the repository isn't on the test machine.
  6. Support for various transport mechanisms - the other two do this as well. You can do all operations over a variety of transport protocols: http, https, sftp (for servers where you don't have Bazaar installed), bzr+ssh (where Bazaar is installed on the server), plus I think a few others.
  7. Recovery from user error - we've made just about every mistake that there is to make (merging with the wrong branch, pushing code that isn't ready up to the main branch, specifying the wrong files to commit, reverting everything instead of just one change, etc.). While recovery from these problems hasn't always been straightforward, and sometimes is hard to figure out, in the end we've always been able to recover. I don't have much experience with the other two VCSes here; I imagine they would work pretty much the same way.
Bazaar is certainly not perfect. I've written a subsequent blog posting, Bazaar problems and lessons learned, explaining some of the problems we've run into and how to fix them. But, while git may work best for large open source projects, in my opinion Bazaar is by far the best VCS for the vast majority of software development work being done - businesses doing company proprietary development work with a team of developers of varying skill levels.
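One small illustration of the user-friendliness point from item 1 above: committing a change to an already tracked file takes one fewer step in Bazaar, because modified files don't need to be re-staged (the file name here is made up):

```shell
# git: a modified tracked file must be staged again before each commit
git add app/models/user.rb
git commit -m "Fix validation"

# bzr: commit picks up every modified tracked file automatically
bzr commit -m "Fix validation"
```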

Sunday, September 12, 2010

YouTube videos

Now that I have a phone with a camera that can take decent videos (Droid X), I've started taking videos when I go to concerts and posting them on YouTube. You can see all of them at my YouTube channel at http://www.youtube.com/user/valenshek?feature=mhum.

Friday, September 10, 2010

Can't run rake test in Rubymine with Ruby 1.9

This one has happened several times to me on new installs but I figured I should finally write something up about it. If you're using Rubymine (which I would highly recommend) with Ruby 1.9, and you attempt to run rake test using the built in rake task tools in Rubymine, you'll get an error that says "File 'test/unit/autorunner.rb' not found in $LOAD_PATH of Ruby SDK ...". There is an easy fix for this. Just install and attach the test-unit gem.
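In a terminal, that's just the following (then attach the gem to your Ruby SDK in Rubymine's settings):

```shell
# Installs the standalone test-unit gem, which Ruby 1.9 no longer bundles
gem install test-unit
```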

Tuesday, September 7, 2010

Upgrade to Rails 2.3.9 session no longer works

I just upgraded to the newly released Rails 2.3.9, and session data stopped getting saved. I could set session data and it was accessible within the same request, but on the next request, the session data was gone.

After digging a little deeper, I found that I was specifying the options for the session in the wrong place. Previously, the session options were specified in environment.rb. Now, Rails has moved this into a different file, config/initializers/session_store.rb. Simply create this file with the following code:
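(The code block appears to have been lost from this post. A Rails 2.3-style config/initializers/session_store.rb looks like the following; the key and secret values are placeholders:)

```ruby
# config/initializers/session_store.rb
# Placeholder values -- use your app's previous :session_key and :secret.
ActionController::Base.session = {
  :key    => '_myapp_session',
  :secret => 'a-long-random-secret-string-at-least-30-characters'
}

# Use the database for sessions instead of the cookie-based default
# ActionController::Base.session_store = :active_record_store
```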



Change the key to what you previously set as :session_key, and set secret to your previous :secret value. Be sure to uncomment the last line if you're using the database as the session store. Also, be sure to delete all session code from environment.rb after you do this.

However, this still didn't do the trick for the main Rails app that I work on. I'm using ActiveRecord to store session data for this app - data is too sensitive to store in a cookie. After adding the file session_store.rb to config/initializers, data still wasn't getting stored in the session. This appears to be a bug in Rails 2.3.9, as evidenced by ticket #5581. I tried the patch that Mislav posted in the comments of the ticket, but the session still didn't work for me. So, it's back to Rails 2.3.5 for my main app. The ticket is closed, so it appears as if this has been fixed in the Rails code, but I'm not sure if/when a 2.3.10 version of Rails will be released.

Sunday, August 29, 2010

Error installing ruby-debug-base19

The ruby-debug19 gems are used for interactive debugging with Ruby 1.9 (the gems without 19 on the end are for Ruby 1.8). To use the debugging feature in Rubymine (which is awesome if you haven't tried it yet) you need to install these gems (ruby-debug-base19 and ruby-debug-ide19). I had previously used these with no problems, but yesterday when I tried installing ruby-debug-base19 on a new Ubuntu 10 system with Ruby 1.9.1, I got the following error:

make
gcc -I. -I/usr/local/include/ruby-1.9.1/i686-linux -I/usr/local/include/ruby-1.9.1/ruby/backward -I/usr/local/include/ruby-1.9.1 -I. -DHAVE_VM_CORE_H -DHAVE_ISEQ_H -DHAVE_INSNS_INC -DHAVE_INSNS_INFO_INC -DHAVE_EVAL_INTERN_H -I/usr/local/include/ruby-1.9.1/ruby-1.9.1-p376 -fPIC -O2 -g -Wall -Wno-parentheses -o breakpoint.o -c breakpoint.c
gcc -I. -I/usr/local/include/ruby-1.9.1/i686-linux -I/usr/local/include/ruby-1.9.1/ruby/backward -I/usr/local/include/ruby-1.9.1 -I. -DHAVE_VM_CORE_H -DHAVE_ISEQ_H -DHAVE_INSNS_INC -DHAVE_INSNS_INFO_INC -DHAVE_EVAL_INTERN_H -I/usr/local/include/ruby-1.9.1/ruby-1.9.1-p376 -fPIC -O2 -g -Wall -Wno-parentheses -o ruby_debug.o -c ruby_debug.c
ruby_debug.c: In function ‘ruby_method_ptr’:
ruby_debug.c:141: error: ‘rb_method_entry_t’ undeclared (first use in this function)
ruby_debug.c:141: error: (Each undeclared identifier is reported only once
ruby_debug.c:141: error: for each function it appears in.)
ruby_debug.c:141: error: ‘method’ undeclared (first use in this function)
ruby_debug.c:142: warning: implicit declaration of function ‘rb_method_entry’
ruby_debug.c: In function ‘debug_event_hook’:
ruby_debug.c:719: error: ‘rb_method_entry_t’ undeclared (first use in this function)
ruby_debug.c:719: error: ‘me’ undeclared (first use in this function)
make: *** [ruby_debug.o] Error 1


Apparently version 0.11.24 of ruby-debug-base19 was released on August 22, 2010, and it won't install correctly on Ruby 1.9.1. This version fixed support for Ruby 1.9.2, but broke 1.9.1 support. I went back to the previous version and it works fine. So to install the ruby-debug19 gems on a Ruby 1.9.1 system, run these two commands:

sudo gem install ruby-debug-base19 -v=0.11.23
sudo gem install ruby-debug-ide19


One important note! If you've already attempted to install the latest version of ruby-debug-base19 and gotten the failure, ruby-debug-ide19 may fail, even if you install the working version of ruby-debug-base19. You have to actually delete the files for 0.11.24 of ruby-debug-base19, and then reinstall ruby-debug-ide19. For me on a Ubuntu system where Ruby was compiled from source:

sudo rm -rf /usr/local/lib/ruby/gems/1.9.1/gems/ruby-debug-base19-0.11.24
sudo gem install ruby-debug-ide19


Also, this bug has been reported as ticket #28512.

Thursday, August 26, 2010

Rails 2.3.8 automatically escaping HTML when you don't want it to

I was upgrading my application to Rails 2.3.8 from 2.3.5 and found a pretty annoying bug in Rails 2.3.8. This bug HAS BEEN FIXED in Rails 2.3.9, so simply install Rails 2.3.9 to get around this problem. However, there are a lot of other problems with Rails 2.3.9 - read my posting Upgrade to Rails 2.3.9 session no longer works for a bug that was a showstopper for me. Other problems have been reported too. I'm just sticking with 2.3.5.

The bug is that when you concatenate HTML strings in helper methods, Rails will automatically HTML escape the string under certain conditions. There is NO way to tell Rails not to do this. Here is an example that reproduces the problem. Add these two methods to your application helper:
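(The original code block was lost from this post. Judging from the output shown further down, the two helpers were something along these lines - this is a reconstruction, not the original code:)

```ruby
# Reconstructed from the rendered output below -- not the original code.
# Concatenating plain strings containing markup is what triggers the
# unwanted escaping in Rails 2.3.8.
def inner_helper
  "a space should be between the following words: hello" + "&nbsp;" + "world" +
    "more <span style=\"font-weight:bold;\">dirty HTML</span>"
end

def outer_helper
  content_tag(:p, "about to call inner_helper method") +
    content_tag(:p, "inside p content tag") +
    inner_helper +
    content_tag(:div, "inside div content tag") +
    content_tag(:p, "outside of inner_helper method in p tag")
end
```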


Then simply output the outer_helper method in one of your views:

  <%= outer_helper %>



This is the result:
about to call inner_helper method

inside p content tag

a space should be between the following words: hello&nbsp;worldmore <span style="font-weight:bold;">dirty HTML</span>
inside div content tag

outside of inner_helper method in p tag



This is obviously not what it should be producing. Rails 3 automatically escapes rendered HTML too, but from what I've read you can simply call .html_safe on the output to mark that you don't want it escaped, or call raw(string). These don't exist in Rails 2.3.8. This bug has been fixed in this commit to the Rails code, which is included in Rails 2.3.9.

The blog posting at http://breakthebit.org/post/647352254/rails-2-3-8-forced-html-escaping-of-concatenated shows some ways you can get around this, but in my opinion you shouldn't have to work around this. Just stick with 2.3.5, or if you're brave you can try 2.3.9.

Friday, August 20, 2010

Gem for getting Google static maps

I've created a gem called googlestaticmap for getting maps from the Google Maps static API service. With static maps, you specify parameters for a map - image size, image type, and markers, path lines, and polygons to draw on the map - and retrieve the finished map in a single HTTP GET to Google. This is great for mobile sites where many visitors won't be able to use the Google Maps 2D API. It's also great if you have a map image that you want people to see, but don't want to load all of the Google Maps JavaScript on your page.

To install the gem, simply type "gem install googlestaticmap" (the gem is on gemcutter, and I believe you need version 1.3.6 or higher of Rubygems to get gems from there). Documentation for the gem is at http://coordinatecommons.com/googlestaticmap with several examples of how to use it. Also if you want to see the source, it's on Github at http://github.com/brentsowers1/googlestaticmap.
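The gem's own API is covered in the documentation linked above, but to show what a static map request actually is - just one URL with query parameters - here is a hand-built one using only the Ruby standard library. The parameter names come from Google's public static maps API; the coordinates are arbitrary:

```ruby
require 'uri'

# Hand-build a static map URL -- roughly what a static map client
# assembles for you from its configuration.
params = {
  "center"  => "38.8977,-77.0365",   # arbitrary example coordinates
  "zoom"    => "12",
  "size"    => "400x400",
  "maptype" => "roadmap",
  "sensor"  => "false"
}
query = params.map { |k, v| "#{k}=#{URI.encode_www_form_component(v)}" }.join("&")
map_url = "http://maps.google.com/maps/api/staticmap?#{query}"
puts map_url
```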

Saturday, July 24, 2010

Getting Facebooker app to run in subdirectory

When I set up My Trips, my Facebook app built with the Facebooker gem, I intended to run it in a subdirectory of my domain name. This way I could set up multiple Facebook apps with the same domain name and IP address. However, no matter what I tried, I could not get everything to work correctly. I thought it was an issue with my web server, Nginx. However, after looking into this more, I found that it was an issue with Facebooker. You'll have to edit a Facebooker source code file. It's best to unpack the gem to your vendor directory, so that you can edit this file for just this project.

Once you've unpacked it, edit the file vendor/gems/facebooker-1.0.xx/lib/facebooker/rails/facebook_url_rewriting.rb. Modify the relative_url_root method to the following:
class Base
  class << self
    alias :old_relative_url_root :relative_url_root
    def relative_url_root
      # Facebooker.path_prefix
      '/subdirectory_name'
    end
  end
end


Where subdirectory_name is the name of the subdirectory that you want to run this app from. This is a bit of a hack - hard coding the directory that you want to run from - but it works. After you do this, you'll then have to configure your web server to run your app in a subdirectory. For Nginx with Passenger, simply add "passenger_base_uri /subdirectory_name;" to your nginx.conf in the server { } block. Now you can change your Canvas URL to http://www.yourdomainname.com/subdirectory_name.
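Putting the Nginx side together, the relevant part of nginx.conf looks roughly like this. The server name and paths are placeholders, and you should check the Passenger documentation for your version's exact directive names:

```nginx
server {
  listen 80;
  server_name www.yourdomainname.com;    # placeholder
  root /var/www/facebook_app/public;     # placeholder path to the app's public dir
  passenger_enabled on;
  passenger_base_uri /subdirectory_name; # serve the app under a sub-URI
}
```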

Twitter and LinkedIn

I've finally gotten with the times and have set up profiles on Twitter and LinkedIn. I'll probably post a bunch on Twitter for a week or so and then forget about it though, we'll see how this goes. I added a gadget to the blog which shows my 3 latest tweets.