I’ll admit it. I’m a hardware junkie.
It started when I built my first PC in 1999. I spent hours reading articles on Tom’s Hardware and poring over Egghead and TigerDirect catalogs, obsessing over which components would provide the best bang for my buck. I settled on a K6/2 running at 380 MHz with 128MB of RAM and managed to scrounge a hard drive from a family member who had recently upgraded their PC. 128MB of RAM! I couldn’t believe it!
Whether you are a hardware junkie or not, I’d like to share with you some insight into how we select hardware here at DRH. In this post, I’ll focus on performance. While there are reliability considerations to keep in mind when choosing hardware (redundancy, error correction, etc.), I’ll save those for another blog post.
Nowadays, I get my “fix” by building servers for DRH’s cloud-hosted environment. To be honest, sometimes, I feel like a kid in a toy store. After all, I get to research the latest technology, order some amazing servers on the company credit card, assemble them when the parts come in, and run them through a series of tests to prove how awesome they are. Some days I still can’t believe they pay me for this!
Before we dive into the nitty-gritty of what goes into the hardware selection process, I’d like to explain a few things about our hosted environment.
A Peek Into Our Hosted Environment
We host GreenArrow servers for customers, with performance guarantees ranging from 50,000 to 8 million messages per hour. Most of our physical servers are KVM hosts running multiple VMs (virtual machines), and most customers get their own VM.
We have opted to run our cloud solution on our own hardware instead of using a cloud service like AWS for several reasons:
- The price-to-performance ratio for high CPU and IO performance is better
- We have more control over the performance
- This environment more closely matches how our on-premise customers install our software, simplifying our engineering
Here are some examples that give you an idea of what the GreenArrow software can do. The examples are not exhaustive, and their utilization figures differ widely because the second example asks the software to perform a much smaller subset of the work done in the first example:
Customer Example – Production Traffic of a GreenArrow Client, an Email Service Provider
- GreenArrow Engine and Studio
- Production traffic of 6-8 million messages/hour maximum speed; 750 million messages sent per month, tens of millions of email addresses in the system.
- 32 x 3.3GHz CPU cores (Quad Intel Xeon E5-4627 3.3GHz 8-core processors)
- 128GB of RAM
- 1 x 1.2TB Intel 750 PCIe SSD drive for GreenArrow’s queue and statistics
- GreenArrow’s PostgreSQL database is split between:
- RAID 1 array of 2 x 1.2TB Intel 750 PCIe SSD drives
- Two separate RAID 1 arrays, each containing 2 x 1.2TB Intel DC S3710 SSD drives
- System Utilization:
- CPU: fully utilized at peak sending speed; this was the bottleneck of the system
- RAM: fully utilized
- Storage IO: never close to saturation. In fact, the entire system would not even saturate the IOPS capacity of a single 1.2TB Intel 750 PCIe SSD drive.
Test Example – Synthetic Test Sending 4 million messages/hour
- GreenArrow Engine only
- Synthetic test of sending 4 million messages/hour with 70kB message size
- A synthetic 16% deferral rate and 1% failure rate, sending actual messages to our test cluster of dummy SMTP servers
- No click and open tracking
- Injecting email into GreenArrow using QMQP (which is the most efficient injection method)
- 16 x 2.7GHz CPU cores (Dual Intel Xeon E5-2680 2.7 GHz 8-core processors)
- 128GB RAM
- RAID 5 array of 4 x 400GB Intel S3700 Series Enterprise SATA SSD drives
- System Utilization:
- CPU: 72% idle
- RAM: mostly unused
- Storage IO:
- Average of 450 IO operations per second
- Average of 39,364 kB per second written to and read from disk
Building Your Perfect Server
Now that you’ve seen some general examples, let’s dig into the specifics. When considering what goes into a server, I find it helpful to break the hardware selection process down into four components: storage, CPU, RAM, and network. Let’s take a look at each one in detail.
I usually start by speccing out the storage requirements because they dictate what types of server chassis (cases) and motherboards are viable. For example, if you want to use NVMe drives (discussed later), you’ll need to ensure that your chassis and motherboard have enough slots to install them.
When I see a GreenArrow server hitting a hardware bottleneck, it’s often related to storage. A GreenArrow server can get bogged down either because its storage isn’t fast enough or because there isn’t enough space.
Here are some of the most important things to consider when picking out storage components for your server:
#1: Understand the Differences Between SSDs and Magnetic Drives
Both solid-state drives (SSDs) and older magnetic drives have their place in our hosted environment. We use SSDs where performance is critical, such as on customer servers, and magnetic drives where capacity matters more than speed, such as backup storage.
If you want to send more than 100,000 messages per hour, then I recommend investing in SSDs. You might be able to break past 100,000 messages per hour with non-SSD drives, but I wouldn’t count on it. Prices have fallen to the point where you can obtain quality SSDs for under a dollar a gigabyte.
#2: Don’t Rely on Labels
You may see SSD manufacturers put their offerings into categories like “Consumer,” “Data Center,” or “Enterprise.” While these categories hint at each drive’s capabilities, their meanings aren’t at all consistent between manufacturers. My advice: don’t rely solely on the manufacturer’s label. Look at the specs before making your choice.
#3: Don’t Forget Battery Backup Units
I’m cheating here a little bit since battery backup units (BBUs) aren’t directly applicable to performance. But they are one of the most important features to look for in an SSD. We try to use a BBU or its equivalent anyplace where data integrity matters–in other words, we use them almost everywhere.
Keep in mind that RAID controllers often come with a BBU or the ability to add one. And some SSD drives come with built-in capacitors that provide enough power to finish writing in-flight data in the event of a power outage. You can learn more by looking up Intel’s enhanced power-loss data protection feature.
#4: Avoid Interface Bottlenecks
Modern SSD drives are fast. Often, they’re so fast internally that their true bottleneck is the interface used to communicate with the rest of the server. Traditionally, this interface has been a SAS or SATA port. NVMe drives, however, use the much faster, lower-latency PCI Express (PCIe) interface. We’ve been adopting these drives in new servers and have been thrilled with the results.
#5: Find An SSD Manufacturer You Can Trust
Years ago, it was said that “nobody ever got fired for buying IBM.” I feel like we’re at the stage where the same could be said of Intel SSD drives. At any given point, they may or may not be the best value when looking at price and performance, but their reliability track record is impressive. We’ve been using mostly Intel SSD drives since 2010 and have been pleased with their performance and reliability. Our limited experience with other brands has been a mixed bag. I’m sure there are other good brands out there, but I can’t recommend any others from experience.
#6: Understand Write Cycles
SSD drives have a limited number of write cycles, past which they can stop functioning. I recommend taking into account the number of write cycles that the SSD drives you’re considering are capable of providing (the drive’s manufacturer should be able to supply this data). After your drives are installed, I also recommend monitoring their SMART data so that you’ll get an early warning if a drive starts to approach its write limit.
Write cycle limitations are a legitimate concern, but I’d like to share a story to put them into context. In 2011, we put some SSDs into production (Intel X25-M drives, in case you’re curious). We expected to exhaust their write cycles within a few years and planned to replace them once only 20% of their write cycles remained. However, it’s been five years, and they’re still going strong!
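To put write cycles into numbers, here’s a back-of-the-envelope lifetime estimate. All figures below are illustrative assumptions (a 400GB drive rated at 10 drive-writes-per-day over a 5-year warranty, and 200GB of writes per day observed from SMART data); check your drive’s datasheet for its real rating:

```shell
# Back-of-the-envelope SSD lifetime estimate.
# All figures are illustrative assumptions; check your drive's datasheet
# and SMART counters (e.g. `smartctl -a /dev/sda`) for real numbers.
capacity_gb=400       # drive capacity
dwpd=10               # rated drive-writes-per-day
warranty_years=5      # rating period
daily_writes_gb=200   # observed write volume per day

rated_endurance_gb=$(( capacity_gb * dwpd * 365 * warranty_years ))
awk -v e="$rated_endurance_gb" -v d="$daily_writes_gb" \
  'BEGIN { printf "Rated endurance: %d GB written (~%.1f years at current write rate)\n", e, e / d / 365 }'
```

If the estimated lifetime comes out to decades, as it does here, write cycles are unlikely to be your drive’s limiting factor in practice.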
#7: Use TRIM
TRIM can eliminate most of the performance degradation that SSD drives experience over time. We’ve turned TRIM on selectively in our hosted environment.
There are two ways to enable TRIM. It can be performed as files get deleted, most often by adding a mount option like discard, or performed in batches during off-peak hours using a utility like fstrim. TRIM has a negative performance impact while it’s running, so our strategy has been to use fstrim on servers where the workload drops at predictable points in the day, and the discard option or its equivalent on servers where the load is unpredictable.
Most SSDs support TRIM out of the box, but unfortunately, most RAID controllers do not. When a RAID controller prevents us from running TRIM on an SSD drive, we reduce the performance impact by leaving a portion of the drive’s space unallocated.
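To make the two TRIM strategies concrete, here’s a configuration sketch. The device, mount point, filesystem, and schedule are placeholders; adjust them to your own environment:

```shell
# Continuous TRIM: add the "discard" mount option in /etc/fstab
# (device, mount point, and filesystem are placeholders):
#
#   /dev/sda1  /var/lib/pgsql  ext4  defaults,discard  0 2
#
# Batch TRIM: run fstrim from cron during off-peak hours
# (here, 3 AM daily; pick your own low-traffic window):
#
#   0 3 * * *  root  /sbin/fstrim -v /var/lib/pgsql
#
# One-off manual run to confirm TRIM works on a mount point:
#
#   fstrim -v /var/lib/pgsql
```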
A Hands-On Comparison of Three SSDs
To illustrate some of the above points, I’d like to compare two of the SSDs that we’re using in our hosted environment and a third that we’re considering. The first drive is the Intel DC S3710 series – an old school 2.5″ SATA drive. This is a great drive, but it’s hampered by its SATA interface. We use it in servers that either don’t have a viable NVMe option or have all of their NVMe compatible slots used up.
The second and third drives are both from Intel’s 750 series. They use fast NVMe interfaces. The difference between the two is that one is a traditional PCIe card, and the other uses a new 2.5″ PCIe interface.
Right now we’re using the traditional PCIe card:
| Drive | Intel DC S3710 Series 1.2TB | Intel 750 Series ½ Height PCIe | Intel 750 Series 2.5″ PCIe |
|---|---|---|---|
| Capacity | 1.2 TB | 1.2 TB | 1.2 TB |
| Interface | 2.5″ SATA | ½ height PCIe 3.0 | 2.5″ PCIe 3.0 |
| Sequential read | 550 MBps | 2400 MBps | 2400 MBps |
| Sequential write | 520 MBps | 1200 MBps | 1200 MBps |
| Random 4kB read (up to) | 75,000 IOPS | 460,000 IOPS | 460,000 IOPS |
| Random 4kB write (up to) | 36,000 IOPS | 290,000 IOPS | 290,000 IOPS |
| Enhanced power-loss data protection | | | |
| Price in September 2016 | $1,507 | $660 | $803 |
There are many figures in the above table, so I’d like to zero in on the one that we’ve found to be most important to GreenArrow’s performance: the random write speed. As you can see, the 750 series is about 8 times as fast as the DC S3710 series—at least on paper. In the real world, the difference isn’t that large because we’ve never managed to saturate a 750 series drive. Something else, such as CPU, gets bottlenecked first.
I’m a big fan of Intel’s 750 series SSD drives and recommend considering them when configuring a server that has slots available to install them into. SSD technology is advancing quickly, so it’s possible that by the time you read this article, there will be better options out there, but as of October 2016, this drive looks like a solid choice to me.
After storage, the next component that I spec out is CPU because it also impacts your chassis and motherboard options. Here are some things to keep in mind when shopping for CPUs:
#1: Intel vs. AMD
Intel or AMD? That’s up to you. Intel made all of the CPUs that we’re using in our hosted environment right now, and our customers more commonly use Intel CPUs outside of our hosted environment, so I’ll focus on Intel in this article. You’re welcome to use AMD, though. We have many customers who do.
#2: Threads vs. Cores
Two values that are often included in the specs of a CPU are the number of cores and threads. For example, a CPU might have 8 cores and 16 threads. While both figures are useful, we’re going to focus on CPU core count since we’ve found that metric to be more relevant to GreenArrow’s performance than CPU thread count. The most important thing to remember in this area is not to treat the thread and core count as being interchangeable.
#3: CPU Core Speed
How important is the speed of each CPU core? Most of GreenArrow’s components are multithreaded, so aggregate performance among all CPU cores matters far more than the performance of any individual core. You can calculate a rough approximation of a server’s CPU performance by multiplying the total number of CPU cores by the speed of each core. For example, a server with four 2GHz cores could be said to have an aggregate performance of 8GHz.
#4: CPU Cache
CPU caches allow CPUs to be more efficient by storing frequently used data locally so that they don’t have to spend as much time waiting for data to be retrieved from RAM. This is a broad topic. If you’d like to do a deep dive, this video on how CPU caches work is an excellent starting point.
How do you make a selection that’s right for you? The short version of the selection process is this: The larger the CPU cache, the better the performance. And be sure to compare apples-to-apples. CPUs often have three caches referred to as the L1, L2, and L3 cache. The three caches work together. The L1 cache is faster than the L2 cache, which is faster than the L3 cache. Comparing the L1 cache of one CPU to the L3 cache of another wouldn’t be a fair comparison.
#5: Go 64-bit
GreenArrow supports both 32-bit and 64-bit x86 CPUs. All other things being equal, though, you should select a 64-bit (x86_64) CPU. Their primary benefits are that they support more RAM than 32-bit CPUs, and modern Linux distributions better support them. The vast majority of servers built in the last 10 years use 64-bit CPUs.
A Head-to-Head Comparison of Two CPUs
I’d like to compare two CPUs from Intel’s E7 series. I’ve selected one of the least expensive, and one of the most expensive. These two CPUs have very different performance characteristics and prices, so the reality is they’re not competing with each other. Rather, the contrast provides a nice illustration of the wide range of CPUs available, even within the same series.
| | Intel E7-2803 | Intel E7-8890V4 |
|---|---|---|
| Max number of CPUs | 2 | 8 |
| Memory types | DDR3 800 / 978 / 1066 / 1333 | DDR4 1333 / 1600 / 1866; DDR3 1066 / 1333 / 1600 |
As you can see, the more expensive CPU costs about nine times as much as the less expensive one. Does that mean that it’s nine times as fast? It’s faster, for sure, but if we look at just the aggregate speed of the CPU cores, the E7-8890V4 appears to be about five times as fast as the E7-2803.
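The “about five times” figure falls out of the cores-times-clock approximation described earlier. The core counts and base clocks below are my assumptions for illustration; verify them against Intel’s ARK pages before relying on them:

```shell
# Aggregate CPU speed ~= core count x base clock (GHz).
# Core counts and clocks are assumptions; confirm on Intel ARK.
awk 'BEGIN {
  low  = 6  * 1.73   # Intel E7-2803: 6 cores @ 1.73 GHz (assumed)
  high = 24 * 2.20   # Intel E7-8890V4: 24 cores @ 2.2 GHz (assumed)
  printf "E7-2803 aggregate:   %.1f GHz\n", low
  printf "E7-8890V4 aggregate: %.1f GHz\n", high
  printf "Ratio: %.1fx\n", high / low
}'
```

Remember that this is only a rough first-pass estimate; it ignores cache sizes, Turbo Boost, and per-core architectural improvements.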
The E7-8890V4 holds some additional advantages over the E7-2803. Here are the biggest ones:
- It has Turbo Boost, which allows each CPU core to run at up to 3.4GHz temporarily. This means that if your server’s performance becomes bottlenecked by CPU speed, you can get a temporary bump in speed. How long is temporary? That depends on several factors, including how busy the other CPU cores are and how effective the CPU’s cooling is. Turbo Boost has safeguards that prevent it from overheating your CPU or trying to consume more electricity than is available.
- It has a larger L3 cache, at 60MB vs. the E7-2803’s 18MB.
- It supports both DDR3 and faster DDR4 RAM. The E7-2803 only supports DDR3 RAM. (We’ll discuss this detail in this post’s RAM section below.)
- It has a better power usage to performance ratio.
- You can only use up to two Intel E7-2803 CPUs in a single server, but you can use up to eight E7-8890V4 CPUs. Part of what you’re paying for with the E7-8890V4 is this ability to place more CPUs into a single server.
For the purposes of running GreenArrow, Intel’s E7-8890V4 is typically going to be more than five times as fast as Intel’s E7-2803. How much faster depends on how much of an impact the additional performance boosts listed above provide. It’s probably not going to be a full nine times as fast, so you may end up paying more per CPU unit of work for the E7-8890V4.
Get Your RAM Up to Speed
It’s true: the more RAM you have, the better the performance. More RAM means less work for your hard drives and less time spent by your CPUs waiting on data to be read from them. But how much do you need? All production GreenArrow servers should have at least 4GB of RAM installed; that’s the minimum required for stable operation in most cases. How much more you add beyond that depends primarily on your performance requirements.
You can get a ballpark estimate of how much RAM we recommend by using the same formula that we use in our hosted environment. We allocate 4GB of RAM for every 200,000 messages our hosted customers wish to send per hour. For example, if a hosted customer wants to be able to send up to one million messages per hour, then we’d typically allocate 20GB of RAM to their server.
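That allocation rule is easy to script. A minimal sketch, with the target sending rate as an example value:

```shell
# RAM sizing rule of thumb: 4GB per 200,000 messages/hour.
target_rate=1000000                      # desired peak messages/hour (example)
ram_gb=$(( target_rate / 200000 * 4 ))   # integer math; round up if the rate
                                         # isn't a multiple of 200,000
echo "Recommended RAM: ${ram_gb}GB"
```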
GreenArrow’s largest memory user is usually its PostgreSQL database. If you’re importing lots of data for each subscriber or using complex segmentation criteria in your campaigns, then I recommend installing even more RAM.
After you’ve decided on how much RAM you need, it’s time to select the specific modules. In an earlier chart, I compared two CPUs, one of which supports DDR4 RAM, and the other which only supports slower DDR3 RAM. I recommend selecting the fastest available RAM that’s supported by your CPU / motherboard combination as long as the price difference between it and a slower option isn’t prohibitive. I think you’ll find the price difference is minor.
This is also an excellent time to review your motherboard or server’s manual to determine which memory slot configurations perform best.
Selecting Your Network Connection
If you’re sending fewer than a million messages per hour, I recommend a single Gigabit Ethernet network interface and a hosting provider that allows you to saturate it during peak sending times. While you might be able to get away with a slower connection, Gigabit Ethernet interfaces and switch ports are so inexpensive that it’s not worth spending time evaluating a slower option. The only exception is if you’re hosting your GreenArrow server at a location with severe bandwidth constraints or higher-than-typical network pricing.
If you’re planning to send more than a million messages per hour, then you might need faster network connectivity. We use a number of factors to calculate bandwidth requirements in that situation: average message size, peak sending rate, the average aggregate size of any images loaded when subscribers view emails, and click and open rates.
Calculating Your Bandwidth Needs
Here’s a formula that you can use to provide a rough estimate of your GreenArrow server’s upload bandwidth requirements in Gbps (gigabits per second):
(max messages per hour) x (average message size in kilobytes) / 225,000,000
For example, if you send one million messages per hour during your peak sending periods, and those messages have an average size of 80 kilobytes, your upload bandwidth requirements can be estimated as:
1,000,000 x 80 / 225,000,000 = 0.36Gbps
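The same arithmetic can be scripted so you can plug in your own numbers. The values below simply mirror the worked example:

```shell
# Upload bandwidth (Gbps) = msgs/hour x avg message size (kB) / 225,000,000
msgs_per_hour=1000000
avg_msg_kb=80
awk -v m="$msgs_per_hour" -v k="$avg_msg_kb" \
  'BEGIN { printf "Estimated upload bandwidth: %.2f Gbps\n", m * k / 225000000 }'
```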
Granted, this formula doesn’t take into account all of the variables that could impact bandwidth; it simply makes reasonable assumptions for a number of them. We recommend obtaining more network capacity than the formula indicates you’ll need.
Most high-volume email servers have greater upload than download bandwidth usage. Since most network connections have identical upload/download speeds, I won’t dig into how to estimate download requirements here.
Wrapping It Up
Once you’ve determined your requirements in these four areas (storage, CPU, RAM, and network), you should have a much better idea of the kind of email server you need. By giving this information to your system admin, hardware vendor, or hosting provider, you may well make their job easier by guiding them to just the right machine.
On the other hand, if reading this has caused you to break out in a sweat, fear not! We offer a fully managed Cloud option for customers who want to use the power of GreenArrow without the hassle of having to spec out their own server.
In any event, I hope this post was informative and helped you to better size up your hardware needs. If you have any questions, or just want us to perform a sanity check on your hardware plans, feel free to contact a member of our team. I’d also love to hear about your favorite hardware selection tips.
Hardware junkies, unite!