Clusterfuck
I'm thinking about writing some programs to let jobs that can be split into small, independent tasks (think SETI, genetic algorithms, neural networks, etc.) run on any number of available computers. The basic idea is this: a client would check out a task from some server, run it in a sandbox, and then tar up and return the result data. I'm thinking about using chroot, plus a bunch of mount --binds and hard links, to get the needed binaries in place. For security, all mounts would be read-only, and only hard links to files that can't be modified would be allowed. No suid executables would be allowed either, and the client would su to nobody and cd into the new /, since otherwise the application could break out of the jail. Eventually, I'd like to set it up so that standard utilities could be requested by some ID.
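Something like this is what I have in mind for entering the jail. It's just a sketch: it assumes the client already runs as root (chroot needs that) and that the jail path and command come in on the command line, with error handling pared down to the essentials.

    /* Minimal sketch of the jail-entry sequence described above. */
    #include <stdio.h>
    #include <unistd.h>
    #include <pwd.h>

    int main(int argc, char *argv[])
    {
        struct passwd *pw;

        if (argc < 3) {
            fprintf(stderr, "usage: %s jaildir command [args...]\n", argv[0]);
            return 1;
        }

        /* Find the unprivileged user to drop to. */
        pw = getpwnam("nobody");
        if (pw == NULL) {
            fprintf(stderr, "no such user: nobody\n");
            return 1;
        }

        /* Enter the jail and immediately cd into the new /, so no
         * relative path still points outside it. */
        if (chroot(argv[1]) != 0 || chdir("/") != 0) {
            perror("chroot/chdir");
            return 1;
        }

        /* Drop group first, then user; once we're no longer root, the
         * task can't call chroot() again to climb back out. */
        if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
            perror("setgid/setuid");
            return 1;
        }

        /* Run the task's command inside the new pathspace. */
        execvp(argv[2], &argv[2]);
        perror("execvp");
        return 1;
    }

A real version would also drop supplementary groups (setgroups) before the exec, but the order above is the important part: chroot, cd into the new root, then give up root.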
A task would consist of a template filesystem (which chroot would set / to), a set of utility package requests (in a later version, probably), a command to run (in the new chrooted pathspace, of course), and a path to tar up and return as the result. A server for this system would, in effect, turn tasks into results, pushing the actual CPU grunt work out to the client machines. The clients would probably have to be run through nice, since they will likely eat up the CPU, and if this is meant to sop up spare CPU cycles it should do so politely.
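For concreteness, here's roughly how I picture a task record, plus the client reniceing itself. The field names and the be_polite helper are made up for illustration, not a fixed format.

    /* Sketch of a task record as described above. */
    #include <sys/resource.h>

    struct task {
        char  *template_fs;    /* filesystem image the client chroots into  */
        char **util_packages;  /* requested standard-utility package IDs    */
        char  *command;        /* command to run, in the chrooted pathspace */
        char  *result_path;    /* path inside the jail to tar up and return */
    };

    /* The client renices itself to the lowest priority so it only
     * soaks up idle cycles. */
    static int be_polite(void)
    {
        return setpriority(PRIO_PROCESS, 0, 19);
    }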
I'll probably get started on this within a few weeks. Any suggestions before I start?