
When I looked at payment processors for online signup, I was quickly surprised by how easy it was to integrate with PayPal. This ease, combined with very competitive rates, made it the obvious choice. For a company that is for all appearances run by suits, PayPal has a wonderfully simple and well-documented API, as well as multiple methods of accepting payments.

A good overview of the available methods can be found here. The method I ended up using, Instant Payment Notification (IPN), notifies you whenever a customer makes a payment. The basic checkout flow goes something like this: the customer selects the service or item they want on your website. Once they are ready to check out, they click a button to go to PayPal to make the actual payment. This button click posts information to PayPal, including who you (the seller) are, how much the customer is to be charged, and a URL where you want to be notified when the customer pays. As soon as the customer pays, PayPal pings your server, letting you know that you should activate the customer’s service account (or ship the item, or whatever).

To prevent forgery (i.e. the customer looking at the form data to figure out the notification URL and pinging it manually), when PayPal pings your server (in technical terms, an HTTP POST), you must take all of the data it sends you and post it back to PayPal via HTTPS. If the data is valid (i.e. the customer actually paid), PayPal confirms it, and you can be assured that the payment is legit. If your server is not available, or returns anything other than HTTP 200 OK, PayPal continues to ping your server until you respond correctly.
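To make that handshake concrete, here’s a minimal sketch in (modern) Python. The https://www.paypal.com/cgi-bin/webscr endpoint, the cmd=_notify-validate command, and the VERIFIED response are part of PayPal’s documented IPN protocol; the function name and the missing error handling are my own shorthand.

import urllib.request

PAYPAL_URL = 'https://www.paypal.com/cgi-bin/webscr'

def verify_ipn(raw_post_body: bytes) -> bool:
    """Echo PayPal's notification back to PayPal over HTTPS."""
    # Prepend the validation command; the original fields must go back
    # byte-for-byte, in the order PayPal sent them.
    data = b'cmd=_notify-validate&' + raw_post_body
    with urllib.request.urlopen(PAYPAL_URL, data=data) as resp:
        return resp.read().strip() == b'VERIFIED'

If verify_ipn() returns True, activate the account (or ship the item); either way, answer with HTTP 200 OK so PayPal stops retrying.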

To be sure, having the customer leave your website, even for a little bit, to go through PayPal is not ideal. However, the alternative is becoming PCI compliant, and trust me, if you’re doing less than several hundred thousand dollars per month, it’s not worth it.

I recently started developing an online signup application for Midwest Data Storage. (Using PayPal, more details later.) At the time, it seemed like a significant project, and since I typically spend less than 15 hours per week on projects like this, I was worried that the project would drag out for too long. So I made a rule: I would not do anything else on the computer until the project was done. No slashdot, no blogs, no personal email, nothing.

I was surprised by a couple of things. First, how often I found myself opening a new tab in Firefox, preparing to type sl[ashdot]. 🙂 Second, how much I got done in the absence of those things. The project that looked big was done in less than a week, with more polish than I had originally planned for. Unfortunately, when I opened up Bloglines after the sprint was over, I was informed that I had 640+ unread posts. (So that’s where my time goes!)

While I can’t do this kind of thing often or for extended periods of time (some of the blog posts are security-related announcements, for example), I do think that I’ll end up doing this kind of thing again when I have a larger project that I need to work on.

Regarding the post name, I’ve dubbed this idea “a personal sprint”, based on the open source community’s “sprints”. It’s become fairly popular for programmers on open source projects to gather together for a short time (a weekend or a week) and concentrate on improving one part of the project. Often these happen at conferences, and usually involve a lot of knowledge transfer from more experienced coders to new team members.

How do you get large projects done in your spare time?

I recently opened my work computer to add a second graphics card. (No, I don’t do gaming; I was adding a third monitor.) I happened to look at the existing graphics card, and found that the sticker on the heat sink fan was peeled up. Taking a closer look, I realized that this wasn’t a normal sticker, but one of those metallic stickers, and that it had peeled up because it had gotten hot. Apparently the fan gave out, and without air circulating over the heat sink, the temperatures skyrocketed. So much so that the plastic fan blades had actually melted down against the heat sink:

GPU heat sink

Out of morbid curiosity, I opened Nvidia’s temperature monitor, to find that it was running at 95 C (around 203 F). And yet it continues to work perfectly…

For a business I run on the side, I need to implement a remote support application. I’ll post more about that later, but first I need to get something off my chest.

UltraVNC is quite possibly the worst-run “open source” project I’ve ever encountered.

  • The website looks like a commercial program’s site, and is broken. Lots of links don’t work, and it’s not clear where to go to get what you want. The hundreds of forum threads asking what should be simple questions are a testament to this.
  • The documentation is horrible. It’s not just that there’s little of it; it’s that it’s all old (like, from 2005) and there are multiple copies that contradict each other.
  • There’s no clear project leader. No one is shepherding it along, gathering consensus, declaring direction. The community is fragmented in dozens of different directions, each going a different way.
  • (This one irks me the most.) Despite claiming to be open source (the code is GPL), there’s no central place to get the source. Oh, there’s a Subversion repository, but it’s hopelessly out of date, and the released packages don’t come from that source. To get the source for some of the central packages you have to (get this) PM someone on the forums, and he usually sends you a link to the source *after several weeks*. Wait until RMS gets a hold of them. Because of this, when people want to make modifications, they end up posting a new package with their changes (patches? what are those?!?) in the forums. For one package I looked at, there were literally six different copies of the same package, each based on a different version, and most incompatible with each other. It boggles the mind.

The project feels like the worst example of IT implementation: done just well enough to work most of the time, but with little thought to security. There’s also a slight conflict of interest, since apparently some of the original developers took the source code, closed it up, made some usability improvements, and released it as a new, incompatible product.

To be fair, UltraVNC isn’t the only one with issues here; the remote support category is filled with one-off, limited (closed-source) projects. In fact, of the projects that I found, UltraVNC actually looks the most promising. The user community is very strong; I really think that if the project were simply run better, it could be a fantastic solution.

CherryPy, starting with version 3.0, supports SSL out of the box. All you need to do is set the ssl_certificate and ssl_private_key settings, and it automatically serves HTTPS. Unfortunately, SSL support is recent and is missing a few features. One of these is the ability to specify certificate chain files.

What are certificate chain files?!? Well, cheap certificate authorities aren’t actually trusted by most browsers. Instead, they derive their authority from being signed by an authority that is trusted. In the case of GoDaddy, its certificates are signed by Starfield, and Starfield is trusted by the browser. The certificate chain file tells the browser, “See? I’m trusted by GoDaddy, and GoDaddy is trusted by Starfield, and you trust Starfield. So please trust me.”

Anyway, to use certificate chain files (and thus, GoDaddy certificates) in CherryPy, you’ll need to patch CherryPy. Download this patch, then apply it to CherryPy version 3.0.2 like so:

cd CherryPy-3.0.2
patch -p1 < cherrypy_ssl.patch

Install the patched version. This adds an ssl_certificate_chain_file setting, which can be used alongside ssl_certificate.

Now, to generate the certificate chain file. Download GoDaddy’s intermediate bundle into the same directory as your .key and .crt files. Next, combine all these files into a single file:
cat hostname.key hostname.crt gd_bundle.crt > certificate_chain.pem

Then simply set ssl_certificate_chain_file to the path of certificate_chain.pem, and you’re good to go!
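Putting it all together, a patched CherryPy 3.0.2 setup might look like the sketch below. The server.ssl_certificate and server.ssl_private_key keys are CherryPy’s own; server.ssl_certificate_chain_file is the setting the patch adds (I’m assuming it lives under the same server. prefix), and the paths are placeholders for your own files.

import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return 'Hello over HTTPS'

cherrypy.config.update({
    'server.socket_port': 8443,
    'server.ssl_certificate': 'hostname.crt',
    'server.ssl_private_key': 'hostname.key',
    # Added by the patch; points at the combined .pem built above.
    'server.ssl_certificate_chain_file': 'certificate_chain.pem',
})
cherrypy.quickstart(Root())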

I’ve long been a fan of the Rockbox open source project, which develops firmware for mp3 players. It is, in my mind, an excellent model of how to run an open source project. Since I started following the project in 2002, contributors have come and gone, yet it continues to improve an already excellent feature set, long after any commercial entity would have abandoned it. Not surprisingly, the project has a very powerful (and cool!) distributed build system. This past weekend, I took some time to set up a build server for Rockbox. It’s surprisingly easy, and it benefits a cool project. If you have a spare (fast) computer that’s not used heavily, why not donate some unused CPU cycles?

Post to Twitter from Vim.

Nothing like pairing a 15-year-old, modal editor with a web service from 2007.

My head is still spinning with the implications.

As many of you know, I am a big proponent of backups. I’ve lost count of how many times having reliable, historical backups has saved me. Like the time I only copied off part of my home folder before reformatting my hard drive. Just recently, I restored an old python project from a backup of a computer I parted with over a year ago. It’s more than just recovering lost files, too. Leave your USB flash drive with passwords at home? No problem, just download it from the web-accessible backup.

There are some attributes that I consider essential to backups:

  • Offsite – if my house burns down, I don’t want all my passwords, email, photos, etc. to go with it.
  • Automatic – CD and DVD backups are great in theory, but they just don’t work in practice, because they require manual intervention.
  • Revisions – If I accidentally overwrite a file, and discover it 3 days (or 3 months!) later, I need to be able to restore the file as it was before I overwrote it.
  • Easy restore – whether it’s a single file or my whole email folder, if I can’t get my files back with less than 30 seconds of work, a lot of the value is lost.

So how do you do backups? What is important to you?

Well, I’ve decided to start posting again, after a coworker and friend switched to WordPress from b2evolution (which I had been using) and I was thoroughly impressed. I’ll admit that it is a nice deal – $10/year for a blog hosted on my domain (subdomain, actually), complete with spam filtering and a really clean UI. Hopefully this will give me a chance to go on my signature rants once again…

SDL is a multi-platform multimedia library with functions for screen, input and audio access. I’ve been using it heavily in HyperType, where it makes drawing graphics much easier than using the underlying APIs.

SDL was developed with game programming in mind, and one area where it shows is in its keyboard handling. By default, there is no way to get at the translated key code, only the physical key and any modifiers. For example, if I hold down the Shift key and press the ‘a’ key, SDL tells me exactly that. But it doesn’t tell me that that key combination should really result in an ‘A’ being typed. Caps Lock works the same way. This is a real pain, especially when you need to deal with the number keys, which should produce symbols. For a while, I simply used the toupper() function to translate letters and a lookup table to translate the other keys.

This came to a head when I realized that under certain conditions (including NoMachine’s NX remote access software), some modifiers, such as Caps Lock, did not always work correctly. There had to be a better way. After a fair bit of googling, I found that if you call SDL_EnableUNICODE(SDL_ENABLE) before capturing keyboard input, SDL fills in the unicode member of the SDL_keysym structure with the translated character. For plain ASCII input, this value can be safely cast to a char. Now you know…
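For reference, here’s a stripped-down C sketch of that approach against SDL 1.2. The SDL calls are real; the window size and the printf() are just scaffolding to make it runnable.

#include <SDL/SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    SDL_Event event;

    SDL_Init(SDL_INIT_VIDEO);
    SDL_SetVideoMode(320, 240, 0, 0);   /* key events need a window */

    /* Ask SDL to fill in keysym.unicode on every SDL_KEYDOWN. */
    SDL_EnableUNICODE(SDL_ENABLE);

    while (SDL_WaitEvent(&event)) {
        if (event.type == SDL_QUIT)
            break;
        if (event.type == SDL_KEYDOWN) {
            Uint16 u = event.key.keysym.unicode;
            if (u >= 32 && u < 127)     /* printable ASCII only */
                printf("typed: %c\n", (char)u);
        }
    }

    SDL_Quit();
    return 0;
}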