I have an ancient “backup” sitting somewhere. It came from a provider called Nameplanet, which offered vanity addresses (lastname.[assorted TLDs], for example) and morphed into Global Name Registry when we launched the .name TLD… The webmail system itself was sold to NetIdentity in 2001 or 2002…
It had a few interesting details – we ran it on ReiserFS for its fast, efficient small-file support (at the time it really stood out), which made it great for Maildirs.
We also eventually used a small daemon that could be polled to find which backend server a user belonged to. It had a mechanism to mark a user as “busy”, so we could rebalance accounts between backends by marking the account busy on both servers, syncing the files over, and then marking it available again, without triggering errors anywhere. qmail on our MXes was modified to look up the right server that way.
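The lookup/“busy” dance can be sketched roughly like this. This is a hypothetical reconstruction, not the original daemon: the names (`Entry`, `lookup`, `begin_move`, `finish_move`) and the in-memory table are illustrative, and the real system coordinated state across servers rather than in one process.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical user -> backend table as the lookup daemon might hold it.
struct Entry {
    std::string server;
    bool busy = false;  // while busy, callers defer/retry instead of erroring
};

std::map<std::string, Entry> table;

// MX-side lookup: returns the backend, or "" if the account is mid-migration
// and delivery should be deferred rather than failed.
std::string lookup(const std::string& user) {
    auto it = table.find(user);
    if (it == table.end() || it->second.busy) return "";
    return it->second.server;
}

void begin_move(const std::string& user) {
    table[user].busy = true;  // conceptually: marked busy on both servers
    // ... sync the Maildir from the old backend to the new one here ...
}

void finish_move(const std::string& user, const std::string& new_server) {
    table[user] = {new_server, false};  // point at the new backend, unblock
}
```

The key property is that delivery and POP sessions never see a half-copied mailbox: they either get the old server, the new server, or a retryable “busy”.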
The biggest changes were the POP modifications I mentioned. The ones I remember off the top of my head were:
* We modified qmail-local and the POP server to append size changes (from writing a new message or deleting one) to a file used to manage quotas. We appended rather than rewrote because it reduced the need for file locking (we took care to issue single writes). We’d lock and coalesce the changes once the file grew past a certain size.
* qmail-local was also changed to append the message size and read status to the filename. That let us avoid stat() calls to get file sizes, and avoid opening and reading the files for unread counts etc. It was one of the first optimizations we did.
* Then we added a cache file that contained subject, sender, size, attachment status etc., for the web frontend, which would be dynamically re-generated automatically as needed.
* We made “+[something]” sort directly into folder “something”.
* When we sold it I was most of the way through adding Sieve support to our qmail-local replacement.
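The append-only quota ledger from the first bullet might look something like this minimal sketch. The file name, threshold, and line format are my assumptions, not the original patch:

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>
#include <string>

// Hypothetical quota ledger: each delivery or delete appends one signed
// delta in a single short write; readers sum the lines; a coalesce pass
// rewrites the file as a single total once it grows past a threshold.
const char* kLedger = "quota.log";
const long kCoalesceBytes = 4096;  // assumed cutoff

void append_delta(long bytes) {
    // One line per change, issued as a single append -- the idea being
    // that small single writes sidestep most of the locking a
    // read-modify-write of a running total would need.
    std::ofstream f(kLedger, std::ios::app);
    f << bytes << '\n';
}

long current_usage() {
    std::ifstream f(kLedger);
    long total = 0, d;
    while (f >> d) total += d;
    return total;
}

void maybe_coalesce() {
    std::ifstream f(kLedger, std::ios::ate | std::ios::binary);
    if (!f || f.tellg() < kCoalesceBytes) return;
    long total = current_usage();            // a lock would be taken here
    std::ofstream out(kLedger, std::ios::trunc);
    out << total << '\n';
}
```

A delivery of a 4 KB message appends `4096`, a delete appends `-1024`, and the quota check just sums the file.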
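For the filename metadata in the second bullet, the exact encoding isn’t given; the sketch below assumes the common Maildir convention of a `,S=<bytes>` size annotation and an `S` flag after `:2,` for seen/read, which is enough to build a folder listing without any per-message stat() or open():

```cpp
#include <cassert>
#include <string>

// Hypothetical parse of size and read status embedded in a Maildir
// filename such as "1234.M1.host,S=4096:2,S". The scheme is an assumption
// based on the widespread Maildir ",S=" / ":2,S" convention.
struct MsgMeta {
    long size = -1;   // -1 if no size annotation found
    bool seen = false;
};

MsgMeta parse_name(const std::string& name) {
    MsgMeta m;
    auto s = name.find(",S=");
    if (s != std::string::npos)
        m.size = std::stol(name.substr(s + 3));  // stops at the first non-digit
    auto f = name.find(":2,");
    if (f != std::string::npos)
        m.seen = name.find('S', f + 3) != std::string::npos;
    return m;
}
```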
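The “+[something]” rule is essentially qmail-style address extensions routed straight to a folder. A minimal sketch, assuming Maildir++-style dotted subfolder names (the original qmail-local patch isn’t shown):

```cpp
#include <cassert>
#include <string>

// Hypothetical mapping from the local part of an address to a delivery
// folder: "user+lists" files into subfolder ".lists"; no extension means
// the inbox. Folder naming follows the common Maildir++ dot convention.
std::string target_folder(const std::string& local_part) {
    auto plus = local_part.find('+');
    if (plus == std::string::npos || plus + 1 == local_part.size())
        return "";                              // no extension: inbox
    return "." + local_part.substr(plus + 1);   // "lists" -> ".lists"
}
```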
These changes were quite small, and each successive change lowered IO load dramatically (we handled about 2 million accounts before it was sold). Today, we could probably handle the IO load and storage we had with a single NVMe card…
The web frontend would try to use our extended POP3 command, and then fall back to scanning the messages (and storing a cache locally on the frontend) if it wasn’t available, so we could use it as a POP3 client for other backends too. (The frontend is a story in itself – a C++ CGI, statically linked to shorten load time (it made a big difference at the time), and with “delete” only for really large allocations, to avoid wasting time on deallocation since we knew each process would live for at most a few seconds.)
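That allocation strategy amounts to a bump/region allocator for a process that lives only a few seconds: small allocations are never individually freed (the OS reclaims everything at exit), and only large blocks take the real-heap path. A minimal sketch under those assumptions – the threshold, arena size, and function names are all illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Hypothetical allocator for a short-lived CGI process. Small allocations
// come from a bump-pointer arena and are deliberately leaked until process
// exit; only "really large" blocks use malloc/free.
constexpr std::size_t kLargeThreshold = 64 * 1024;  // assumed cutoff
static char arena[1 << 20];                         // 1 MiB bump region
static std::size_t arena_used = 0;

void* cgi_alloc(std::size_t n) {
    if (n >= kLargeThreshold) return std::malloc(n);   // large: real heap
    n = (n + 15) & ~std::size_t(15);                   // 16-byte alignment
    if (arena_used + n > sizeof(arena)) return std::malloc(n);  // overflow
    void* p = arena + arena_used;
    arena_used += n;
    return p;
}

void cgi_free(void* p, std::size_t n) {
    if (n >= kLargeThreshold) std::free(p);  // only large blocks are freed
    // Small blocks: no-op. The whole arena dies with the process.
}
```

Skipping deallocation is a reasonable trade only because the process lifetime is bounded; a long-running server built this way would just leak.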