[personal profile] giza
Looking back after this morning's stampede, I thought I'd share with folks how the webserver held up, since I know I'm not the only geek out there. And, truth be told, I was a bit nervous myself, since I wasn't quite sure just how much traffic we would get, or whether the webserver would survive or turn into a smoking crater.

Well, here's what we got:

Ethernet Traffic

The first hump is a manual backup I did last night. The second is the automatic backup that runs every morning and rsyncs the database and files to a machine at another data center. The third hump, at 9 AM, was when we opened hotel reservations. A peak of 1.4 megabits/sec doesn't look too bad, until you look at:

Active Connections

The peak of 336 simultaneous connections was far more interesting. That's about 16 times the normal number of connections to the webserver.

So, what were the effects? Let's look at MySQL first:

MySQL Queries

There were nearly 1,000 queries per second, about 500 of which were answered from the MySQL query cache. With roughly half of all queries missing the cache, there's definite room for improvement.
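If you're curious about your own hit rate, MySQL exposes the raw counters through SHOW STATUS. Here's a quick sketch in Python using the MySQLdb driver (the credentials are placeholders, and the Qcache_hits-to-Com_select ratio is only the usual rough estimate):

    # Rough query-cache hit rate from MySQL's status counters.
    # Host, user, and password are placeholders.
    import MySQLdb

    db = MySQLdb.connect(host="localhost", user="monitor", passwd="secret")
    cur = db.cursor()

    def status(name):
        cur.execute("SHOW GLOBAL STATUS LIKE %s", (name,))
        return int(cur.fetchone()[1])

    hits = status("Qcache_hits")    # SELECTs answered from the cache
    ran = status("Com_select")      # SELECTs that actually had to run
    print("cache hit rate: %.1f%%" % (100.0 * hits / (hits + ran)))

But before I look at the RAM situation, let's look at the CPU usage: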

CPU Usage

Load Average

One of the cores was close to 100%, but there was virtually no I/O wait. The load average was also good to see: it was actually lower than the load during the nightly backups. From a performance standpoint, both of these graphs look very good, since they mean the disk was not the bottleneck. But why not? Well, here's the final piece of the puzzle:

Memory Usage

The RAM usage is what ties all of the other graphs together. By keeping memory usage near-constant, I was able to avoid hitting swap space, which would have incurred a huge performance penalty and quite possibly a "death spiral".

How did I keep RAM usage so low? Instead of running the Apache webserver, which in its usual prefork configuration requires a separate process for each connection, I ran the Nginx webserver. Unlike Apache, it uses asynchronous I/O to handle incoming requests: a handful of worker processes multiplex all of the open connections. This approach scales much better than Apache's, which forks a separate child process for each listener and chews up a lot of memory.
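Nginx's event loop is C code built on epoll and kqueue, but the underlying idea is simple enough to sketch. Here's a toy Python version (purely illustrative, not how nginx is actually implemented): a single process watches every socket with select() and serves whichever ones are ready, so an idle connection costs a list entry instead of an entire child process.

    # Toy event-driven server: one process, many connections, no forking.
    # Illustrative only; nginx's real event loop is far more sophisticated.
    import select
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 8080))
    server.listen(128)

    watched = [server]  # every socket this one process is watching

    while True:
        # Sleep until at least one socket has work for us.
        readable, _, _ = select.select(watched, [], [])
        for sock in readable:
            if sock is server:
                conn, _ = server.accept()  # new client: just watch it too
                watched.append(conn)
            else:
                data = sock.recv(4096)
                if data:
                    sock.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\r\n")
                watched.remove(sock)       # one-shot reply, then clean up
                sock.close()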

For comparison, the number of simultaneous connections peaked at "only" around 100 during last year's convention. We broke the old record by a factor of 3.

"And what we have learned?"

Even under the highest load to date, we were in no danger of running out of RAM. This means that I can (and probably should) allocate more memory to MySQL so that more queries are cached, improving overall performance even further. There are also some more advanced caching modules that I intend to research, to see if we can cache straight off the filesystem and avoid the database altogether. More on that as it happens.
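I haven't settled on anything yet, but the core idea behind caching off the filesystem is simple. A minimal sketch in Python, where CACHE_DIR, TTL, and render_page() are all hypothetical stand-ins for the real site's pieces:

    # Minimal filesystem page cache: serve a saved copy while it's fresh,
    # and only touch the database when it isn't. All names are stand-ins.
    import os
    import time

    CACHE_DIR = "/var/cache/pages"
    TTL = 60  # seconds a cached copy stays valid

    def cached_page(path, render_page):
        name = path.strip("/").replace("/", "_") or "index"
        cache_file = os.path.join(CACHE_DIR, name)
        try:
            if time.time() - os.path.getmtime(cache_file) < TTL:
                with open(cache_file) as f:
                    return f.read()      # fresh hit: zero database queries
        except OSError:
            pass                         # never cached yet
        html = render_page(path)         # the expensive, database-backed path
        with open(cache_file, "w") as f:
            f.write(html)
        return html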

(no subject)

Date: 2010-02-03 02:49 pm (UTC)
From: [identity profile] davinwarter.livejournal.com
Pssst...LJ-cut

(no subject)

Date: 2010-02-03 03:48 pm (UTC)
From: [identity profile] graywolf769.livejournal.com
Now if only FurAffinity would figure this out ;)

(no subject)

Date: 2010-02-04 12:56 am (UTC)
From: [identity profile] andrew7782.livejournal.com
Nom, graphs!

(no subject)

Date: 2010-02-04 02:23 am (UTC)
From: [identity profile] thraxarious.livejournal.com
I was kind of curious: what kind of backups do you do? HDD-based or DVD-based?

I've been pondering getting something working with a DVD-R drive, just dumping data to a disc, but would love to get something I could automate, and have it append to a session till the disc finally fills up.

None of the software I've seen so far does anything close to that. I guess I'd have to write a bunch of scripts in Python maybe... dunno.
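Something along these lines, maybe? (Completely untested; assumes growisofs from dvd+rw-tools, which burns a blank disc with -Z and appends another session with -M.)

    # Untested sketch: dump a directory to DVD-R, appending a session if
    # the disc has already been started. Device and paths are placeholders.
    import subprocess

    DEVICE = "/dev/dvd"
    DATA = "/home/me/to-burn"  # whatever needs dumping today

    # dvd+rw-mediainfo reports the disc status; "blank" means unused.
    info = subprocess.Popen(["dvd+rw-mediainfo", DEVICE],
                            stdout=subprocess.PIPE).communicate()[0]
    flag = "-Z" if b"blank" in info.lower() else "-M"

    subprocess.check_call(["growisofs", flag, DEVICE, "-R", "-J", DATA])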

(no subject)

Date: 2010-02-04 03:26 am (UTC)
From: [identity profile] giza.livejournal.com
My primary backup is neither. It consists of:

- Running Duplicity (http://www.nongnu.org/duplicity/) to back up my filesystem, which consists of a single directory called $HOME/Data/, to which every other folder is symlinked. (OS X is very happy with that.)

- Running rsync on both my Data/ folder and the folder my Duplicity archives are in, syncing them over to my other desktop and my laptop, as well as to a machine I have in an undisclosed datacenter. This way I have both archives of my filesystem and a mirror of it.

Excluding my MP3 collection, I currently have 2.3 GB of data to back up. Once the initial rsync and Duplicity runs are done, I can do incremental backups in under 5 minutes each night. I just start the shell script and walk away.
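The nightly script itself is nothing fancy. Here's a rough Python equivalent of what it does (the real thing is shell, and the hostnames and destination paths below are placeholders):

    # Rough Python equivalent of my nightly backup script (the real one
    # is shell). Hostnames and destination paths are placeholders.
    import os
    import subprocess

    HOME = os.path.expanduser("~")
    DATA = os.path.join(HOME, "Data")
    ARCHIVES = os.path.join(HOME, "duplicity-archives")

    # 1) Incremental Duplicity archive of the Data/ tree.
    subprocess.check_call(["duplicity", DATA, "file://" + ARCHIVES])

    # 2) Mirror both the live data and the archives to every other machine.
    for host in ("desktop2", "laptop", "offsite.example.com"):
        for src in (DATA, ARCHIVES):
            subprocess.check_call(["rsync", "-az", "--delete",
                                   src, host + ":backups/"])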

I also occasionally do backups to DVD and drop those in my media safe, but that has little additional value given my current system.

(no subject)

Date: 2010-02-04 02:40 am (UTC)
From: [identity profile] farraptor.livejournal.com
What I learned: get my tail up right at the start and get a room at the get-go. No time to waste! :D

(no subject)

Date: 2010-02-04 03:21 am (UTC)
From: [identity profile] giza.livejournal.com

You're in the Pacific timezone, right?

Hmm, maybe we should consider a later starting time if we do this again next year...

(no subject)

Date: 2010-02-04 03:41 am (UTC)
From: [identity profile] farraptor.livejournal.com
Yep yep! It was no biggie (had to get up early for work anyway), but definitely required an earlier-than-usual rise-time.
