CentOS on BrandZ lx zones benchmarked

With Solaris 11, Sun/Oracle dropped “lx brandz” zones, the Linux guest support for the Solaris containers technology. Some people didn’t even know it existed. The prospect of using ZFS’s capabilities for provisioning was really exciting, but the project was never ready for production setups: support was only provided for Linux 2.4 kernels (although there were ways to run a 2.6.x), and the feature set was incomplete.

I ran some performance tests a few months ago. Let’s look at some figures that explain the Solaris 11 decision.

[Figure: a BrandZ lx zone, with the Linux guest’s syscalls translated to the Solaris kernel]

As shown above, brandz zones are full zones that provide a syscall translation layer to the guest OS. The Linux guest is instantiated by running its init process, much like a “User-Mode Linux” system would be.
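One quick way to see this layer at work from inside the guest is to ask for the kernel identity: there is no Linux kernel to answer, so the brand emulation fabricates one (the guest details below report “2.6.18 — BrandZ fake linux” for this setup). A minimal C sketch, not part of the benchmark suite:

/* Minimal sketch: query the kernel identity from inside the guest.
 * In a brandz lx zone there is no Linux kernel; the emulation layer
 * answers uname(2) with a fabricated release string (the guest
 * details below report "2.6.18 BrandZ fake linux" on this setup). */
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;

    if (uname(&u) == -1) {
        perror("uname");
        return 1;
    }
    /* sysname/release/version are whatever the syscall layer reports */
    printf("%s %s %s\n", u.sysname, u.release, u.version);
    return 0;
}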

Overhead estimation

I used the BYTE UNIX Benchmarks (version 5.1.2) to run a set of micro-benchmarks involving syscalls.
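For reference, a “System Call Overhead” style of test boils down to issuing a cheap system call in a tight timed loop. Here is a rough sketch of the idea (not the actual UnixBench code); inside an lx zone every iteration crosses the brandz translation layer:

/* Rough sketch of a syscall micro-benchmark in the spirit of
 * UnixBench's "System Call Overhead" test (not the actual UnixBench
 * code): issue a cheap system call in a tight loop and report a rate.
 * getppid() is used because some C libraries cache getpid(). */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    const long iterations = 10000000L;
    struct timeval start, end;
    long i;
    double secs;

    gettimeofday(&start, NULL);
    for (i = 0; i < iterations; i++)
        (void)getppid();                /* one syscall per iteration */
    gettimeofday(&end, NULL);

    secs = (end.tv_sec - start.tv_sec)
         + (end.tv_usec - start.tv_usec) / 1e6;
    printf("%.0f syscalls/sec\n", iterations / secs);
    return 0;
}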

The brandz zone was set up with a CentOS 5.4/i386 distribution, minimal install.

OpenSolaris Host details

System: opensolaris: OpenSolaris Development snv_134 X86
OS: SunOS — 5.11 — snv_134
Machine: i86pc: i86pc
Language: en_US.utf8 (charmap=, collate=)
CPUs: no details available
Uptime: 5:30pm up 1:08, 1 user, load average: 0.17, 3.21, 3.86; runlevel

Most Solaris details were not detected, but this is the global zone hosting the brandz zone whose results follow.

Every zone was reduced to a minimal set of processes and brought to runlevel 3 (or single-user milestone).

Linux Guest details (Centos 5.4 i386)

System: centos: GNU/Linux
OS: GNU/Linux — 2.6.18 — BrandZ fake linux
Machine: i686: i386
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPUs: 0: AMD Athlon(tm) 64 X2 Dual Core Processor 4800+ (0.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
1: AMD Athlon(tm) 64 X2 Dual Core Processor 4800+ (0.0 bogomips)
Hyper-Threading, x86-64, MMX, Physical Address Ext, SYSENTER/SYSEXIT, SYSCALL/SYSRET
Uptime: 10:28:23 up 1 min, 1 user, load average: 0.16, 0.58, 0.34; runlevel 3

The following table shows the scores for both zones, with one and two parallel copies of each test (columns (1) and (2)), and the relative difference of the CentOS brandz zone against the native zone. Units: lps = loops per second, lpm = loops per minute, KBps = kilobytes per second.

Test                                    CentOS (1)  CentOS (2)  Native (1)  Native (2)  Unit  Diff (1)  Diff (2)
Execl Throughput                             82.00       99.30      204.90      230.20  lps     -60.0%    -56.9%
File Copy 1024 bufsize 2000 maxblocks     31784.90    45180.30    40514.50    54613.10  KBps    -21.5%    -17.3%
File Copy 256 bufsize 500 maxblocks        8273.00    11576.50    10601.20    13860.50  KBps    -22.0%    -16.5%
File Copy 4096 bufsize 8000 maxblocks    122401.90   172447.90   149167.70   204321.90  KBps    -17.9%    -15.6%
Pipe Throughput                           58154.50    80420.50    89803.10   117446.00  lps     -35.2%    -31.5%
Pipe-based Context Switching               3773.30     6809.80     3838.50     7238.90  lps      -1.7%     -5.9%
Process Creation                            134.20      151.90      263.30      301.20  lps     -49.0%    -49.6%
Shell Scripts (1 concurrent)                210.00      238.70      432.50      482.00  lpm     -51.4%    -50.5%
Shell Scripts (16 concurrent)                15.80       14.90       32.40       33.00  lpm     -51.2%    -54.8%
Shell Scripts (8 concurrent)                 32.20       30.50       66.10       67.50  lpm     -51.3%    -54.8%
System Call Overhead                      60471.00    81388.60    58388.50    73052.70  lps      +3.6%    +11.4%

Running syscall micro-benchmarks is likely to expose the worst case, and we can see the overhead grow (up to 11.4%) when running multiple copies. The other figures show a real performance drop for I/O, where increasing the buffer size doesn’t help much: the syscall overhead accounts for only a small part of the overall decrease, and there is probably a further ~15% performance hit from another cause. Process-spawning performance is really bad, probably because a conversion has to be made before attaching each new process to the Solaris kernel (remember we do not run any Linux kernel, just its init process).
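To make the process-spawning figures concrete, the “Process Creation” and “Execl Throughput” style of test is essentially a fork()/exec()/wait() loop. A rough sketch of the idea follows (not the actual UnixBench code; /bin/true is just a convenient exec target), in which every spawned child must be set up through the brand emulation before it can run:

/* Rough sketch of a process-spawning micro-benchmark in the spirit
 * of UnixBench's "Process Creation" / "Execl Throughput" tests (not
 * the actual UnixBench code). Each iteration forks a child, exec()s
 * /bin/true, then waits for it. */
#include <stdio.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    const int spawns = 2000;
    struct timeval start, end;
    int i;
    double secs;

    gettimeofday(&start, NULL);
    for (i = 0; i < spawns; i++) {
        pid_t pid = fork();
        if (pid == 0) {                     /* child: replace image */
            execl("/bin/true", "true", (char *)NULL);
            _exit(127);                     /* only reached if exec fails */
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);          /* parent: reap the child */
        } else {
            perror("fork");
            return 1;
        }
    }
    gettimeofday(&end, NULL);

    secs = (end.tv_sec - start.tv_sec)
         + (end.tv_usec - start.tv_usec) / 1e6;
    printf("%.1f spawns/sec\n", spawns / secs);
    return 0;
}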

Really weak in the worst cases and on I/O, these zones were probably only usable for CPU-intensive tasks. Dropping such an unpolished technology is unsurprising. (Note that they also provide para-virtualized Linux support through VirtualBox on x86 platforms.)