FTP vs. HTTP


Right. I just tried getting the FTP server I’ve been working on to work with FileZilla for the umpteenth time. It still doesn’t work, no matter what I do.

Unfortunately, the FTP protocol is very poorly documented; the only real documentation is the set of IETF specifications, which are extremely terse, contain unneeded bloat and are generally just downright bad. I’ve made my way up to and including the LIST and/or MLSD command (MLSD being a newer extension command), but when transferring the data containing the directory listing for the root directory, FileZilla never acknowledges that the data was received! Instead it times out and decides that there was a problem receiving the directory listing.
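
For my own reference, this is roughly the exchange I understand a client expects at the end of a LIST/MLSD transfer in stream mode (per RFC 959 / RFC 3659). It is only a sketch in Python with illustrative names, not my actual server code, but it shows the shape of the sequence:

```python
import socket

def send_listing(ctrl: socket.socket, data: socket.socket, listing: str) -> None:
    """Sketch of a LIST/MLSD reply in stream mode.

    The listing carries no length indicator, so the client only knows the
    transfer has finished when the server closes the data connection; the
    226 reply on the control connection then confirms it.
    """
    # Tell the client the data transfer is about to start.
    ctrl.sendall(b"150 Opening data connection for directory listing.\r\n")

    # Send the listing itself over the data connection.
    data.sendall(listing.encode("utf-8"))

    # Closing the data connection is what marks the end of the listing.
    data.shutdown(socket.SHUT_WR)
    data.close()

    # Only now report success on the control connection.
    ctrl.sendall(b"226 Transfer complete.\r\n")
```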

So I’ve been thinking: would it make sense to just switch to HTTP for transferring patches to clients? It is starting to look extremely tempting. Arguably, I could code my way around the FTP protocol by introducing non-standard protocol “extensions”, but that would defeat the purpose of using FTP in the first place (namely that it’s a pre-designed file-transfer protocol).

I don’t know if I’ll have the time tomorrow (I should study), but one of these days I’m going to install a copy of Apache on my box and see about making my patch client communicate with it.
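
If I get that far, the client side shouldn’t need much more than a plain GET per file. Something along these lines (Python only as a sketch; the URL and file names are placeholders, not anything I’ve decided on):

```python
import urllib.request

# Placeholder URL: assumes Apache is serving a patch directory on this box.
BASE_URL = "http://localhost/patches/"

def fetch(relative_path: str) -> bytes:
    """Download one file from the patch server with a plain HTTP GET."""
    with urllib.request.urlopen(BASE_URL + relative_path) as response:
        return response.read()

if __name__ == "__main__":
    # Hypothetical manifest name; grab it and print what the server returns.
    manifest = fetch("manifest.txt").decode("utf-8")
    print(manifest)
```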

So far I’ve been thinking of two possible solutions for a directory structure on the server:

  • Dumping a manifest and all files into the root folder. This would have the downside of making it near-impossible to do incremental updates, unless the manifest contained information about which files belonged to which update.
  • Having a manifest in the root folder that points to different versions contained in separate directories (see the sketch after this list). This seems like the best way to go, as it would enable incremental updates without making the manifest needlessly complex.
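
Purely as an illustration of the second option (none of these names, version numbers or the manifest format are decided yet), the layout on the server could look something like this:

```
/manifest.txt      <- lists the available versions, newest last
/1.0.1/            <- only the files changed in the 1.0.1 update
/1.0.2/
/1.0.3/
```

The client would then compare its own version against the manifest and only fetch the directories for the updates it is missing.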

Once I’ve written some basic HTTP code for the patch client, I’ll make another post with pictures of the client to report on the progress.