Benchmarks
How Fast is Redis?

Redis includes the redis-benchmark utility that simulates SETs/GETs done by N clients at the same time sending M total queries (it is similar to Apache's ab utility). Below you'll find the full output of the benchmark executed against a Linux box.
Results: about 110000 SETs per second, about 81000 GETs per second. Latency percentiles follow.

./redis-benchmark -n 100000

====== SET ======
  100007 requests completed in 0.88 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

58.50% <= 0 milliseconds
99.17% <= 1 milliseconds
99.58% <= 2 milliseconds
99.85% <= 3 milliseconds
99.90% <= 6 milliseconds
100.00% <= 9 milliseconds
114293.71 requests per second

====== GET ======
  100000 requests completed in 1.23 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

43.12% <= 0 milliseconds
96.82% <= 1 milliseconds
98.62% <= 2 milliseconds
100.00% <= 3 milliseconds
81234.77 requests per second

====== INCR ======
  100018 requests completed in 1.46 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

32.32% <= 0 milliseconds
96.67% <= 1 milliseconds
99.14% <= 2 milliseconds
99.83% <= 3 milliseconds
99.88% <= 4 milliseconds
99.89% <= 5 milliseconds
99.96% <= 9 milliseconds
100.00% <= 18 milliseconds
68458.59 requests per second

====== LPUSH ======
  100004 requests completed in 1.14 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

62.27% <= 0 milliseconds
99.74% <= 1 milliseconds
99.85% <= 2 milliseconds
99.86% <= 3 milliseconds
99.89% <= 5 milliseconds
99.93% <= 7 milliseconds
99.96% <= 9 milliseconds
100.00% <= 22 milliseconds
100.00% <= 208 milliseconds
88109.25 requests per second

====== LPOP ======
  100001 requests completed in 1.39 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

54.83% <= 0 milliseconds
97.34% <= 1 milliseconds
99.95% <= 2 milliseconds
99.96% <= 3 milliseconds
99.96% <= 4 milliseconds
100.00% <= 9 milliseconds
100.00% <= 208 milliseconds
71994.96 requests per second

Notes: changing the payload from 256 to 1024 or 4096 bytes does not change the numbers significantly (but reply packets are glued together up to 1024 bytes, so GETs may be slower with big payloads). The same holds for the number of clients: from 50 to 256 clients I got the same numbers, while with only 10 clients it starts to get a bit slower.

You can expect different results from different boxes. For example, a low-profile box like an Intel Core Duo T5500 clocked at 1.66GHz running Linux 2.6 will output the following:

./redis-benchmark -q -n 100000
SET: 53684.38 requests per second
GET: 45497.73 requests per second
INCR: 39370.47 requests per second
LPUSH: 34803.41 requests per second
LPOP: 37367.20 requests per second

Another one using a 64 bit box, a Xeon L5420 clocked at 2.5 GHz:

./redis-benchmark -q -n 100000
PING: 111731.84 requests per second
SET: 108114.59 requests per second
GET: 98717.67 requests per second
INCR: 95241.91 requests per second
LPUSH: 104712.05 requests per second
LPOP: 93722.59 requests per second
My test on an oldish dual core Linux box with one kernel-based RAID1 volume and an average load of 0.1-0.2
Ludo: cool! I could basically handle my largest applications with a single server instead of the MySQL cluster I'm running currently :)
Dell E520 desktop with 1 Core2 cpu and 4GB RAM
I modified Redis in order to send multi-buffer (but small) replies in a single TCP packet. Results updated in this page.
I did a bit of testing on the Amazon EC2 virtual servers. The results are "a bit" disappointing for the small instance... (but it's a VPS)
P.S. If it's of any interest I have prepared a script that can be used to do the benchmarking on EC2 (launches the instances, downloads from svn, compiles, benchmarks, ...).
Fedora Base 32 bit ( ami-5647a33f )
Small ( m1.small )
High-CPU Medium ( c1.medium )
Fedora Base 64 bit ( ami-2547a34c )
Large ( m1.large )
Extra Large ( m1.xlarge )
High-CPU Extra Large ( c1.xlarge )
Hi, I've tested using the PHP client.
On a Dell Inspiron 6400 (Core Duo T2250 @ 1.73GHz, 1 GB RAM) with Ubuntu 8.04 (Linux 2.6).
I've generated 1000000 (1M) entries of the form id=text, and it took over 1'20" (1 minute 20 seconds).
With 100000 (100K) entries it took 9 seconds, as I expected.
I've run the same test against a local MySQL, and the results are similar.
The redis-benchmarks:

SET
57.12% <= 0 milliseconds
99.76% <= 1 milliseconds
99.83% <= 2 milliseconds
99.86% <= 3 milliseconds
99.89% <= 4 milliseconds
99.93% <= 5 milliseconds
99.93% <= 6 milliseconds
100.00% <= 7 milliseconds
100.00% <= 203 milliseconds
78994.47 requests per second

GET
52.85% <= 0 milliseconds
99.35% <= 1 milliseconds
99.68% <= 2 milliseconds
99.79% <= 3 milliseconds
99.86% <= 4 milliseconds
99.86% <= 5 milliseconds
99.93% <= 6 milliseconds
100.00% <= 7 milliseconds
100.00% <= 8 milliseconds
100.00% <= 203 milliseconds
70926.24 requests per second

INCR
49.58% <= 0 milliseconds
99.41% <= 1 milliseconds
99.71% <= 2 milliseconds
99.81% <= 3 milliseconds
99.89% <= 4 milliseconds
99.90% <= 5 milliseconds
99.90% <= 6 milliseconds
99.96% <= 7 milliseconds
100.00% <= 8 milliseconds
100.00% <= 202 milliseconds
66757.68 requests per second

LPUSH
53.13% <= 0 milliseconds
99.56% <= 1 milliseconds
99.74% <= 2 milliseconds
99.81% <= 3 milliseconds
99.85% <= 4 milliseconds
99.86% <= 5 milliseconds
99.92% <= 6 milliseconds
100.00% <= 7 milliseconds
100.00% <= 203 milliseconds
74128.98 requests per second

LPOP
50.45% <= 0 milliseconds
99.40% <= 1 milliseconds
99.65% <= 2 milliseconds
99.69% <= 3 milliseconds
99.83% <= 4 milliseconds
99.86% <= 6 milliseconds
99.96% <= 7 milliseconds
99.97% <= 8 milliseconds
99.99% <= 16 milliseconds
100.00% <= 17 milliseconds
100.00% <= 205 milliseconds
67026.14 requests per second
root@inspiron6400:/opt/redis-beta-6# ./redis-benchmark -n 100000
SET
39.43% <= 0 milliseconds
96.91% <= 1 milliseconds
98.56% <= 2 milliseconds
99.23% <= 3 milliseconds
99.63% <= 4 milliseconds
99.72% <= 5 milliseconds
99.83% <= 6 milliseconds
99.92% <= 7 milliseconds
99.95% <= 8 milliseconds
100.00% <= 11 milliseconds
74295.69 requests per second

GET
46.01% <= 0 milliseconds
98.75% <= 1 milliseconds
99.42% <= 2 milliseconds
99.51% <= 3 milliseconds
99.72% <= 4 milliseconds
99.79% <= 5 milliseconds
99.82% <= 6 milliseconds
99.89% <= 7 milliseconds
99.89% <= 10 milliseconds
99.93% <= 11 milliseconds
99.95% <= 20 milliseconds
99.96% <= 21 milliseconds
100.00% <= 22 milliseconds
100.00% <= 208 milliseconds
59560.45 requests per second

INCR
42.83% <= 0 milliseconds
97.54% <= 1 milliseconds
98.76% <= 2 milliseconds
99.32% <= 3 milliseconds
99.48% <= 4 milliseconds
99.63% <= 5 milliseconds
99.79% <= 6 milliseconds
99.81% <= 7 milliseconds
99.85% <= 9 milliseconds
99.89% <= 11 milliseconds
99.92% <= 12 milliseconds
99.94% <= 14 milliseconds
99.96% <= 15 milliseconds
99.99% <= 26 milliseconds
100.00% <= 27 milliseconds
100.00% <= 208 milliseconds
57013.11 requests per second

LPUSH
54.74% <= 0 milliseconds
99.32% <= 1 milliseconds
99.70% <= 2 milliseconds
99.83% <= 3 milliseconds
99.86% <= 4 milliseconds
99.87% <= 5 milliseconds
99.89% <= 6 milliseconds
99.97% <= 7 milliseconds
99.98% <= 16 milliseconds
100.00% <= 17 milliseconds
100.00% <= 205 milliseconds
73338.71 requests per second

LPOP
45.98% <= 0 milliseconds
99.43% <= 1 milliseconds
99.88% <= 2 milliseconds
99.96% <= 3 milliseconds
99.96% <= 5 milliseconds
100.00% <= 7 milliseconds
100.00% <= 208 milliseconds
68448.32 requests per second
I think the PHP client is not properly tuned.
If anyone is interested, I can send them the code I've used.
Hello jhernandis, I don't have numbers about the PHP client's performance, but I need more data about your tests: the PHP script, the average length of the values you SET, and so on. Note that if you are adding keys with a single connection you can't expect the same performance as the benchmarks, since they simulate 50 simultaneous clients, so the time spent in the round trip of request/reply is amortized. If you need to set a lot of keys from a single connection and make it very fast, you should use pipelining instead. Thanks!
@luca: thanks for these benchmarks. The small instance seems a bit too slow... indeed
@antirez, I've sent you the code of my tests (I've assumed your mail is antirez AT gmail.com, right?)
I've used a file containing 1000 phrases from "lorem ipsum" for data, and a loop like: for ($i = 0; $i < 1000000; $i++) { $r->set('id' . $i, $data[$datapos++]); ... }
@jhernandis: thanks, I'll look at the code later, but I think the numbers you got are more or less OK:
1000000 SETs / 80 sec = 12500 queries/sec, which is more or less what you will get using the benchmark with '-c 1'. Add to this a bit of PHP overhead, and the fact that the sentences are a bit bigger, and what you get is that you are actually measuring the round trip time more than Redis performance! :)
In order to compare these numbers with MySQL, what you really need is this:
Use one client to perform continuous writes and N clients performing random reads (a rough sketch follows). Under load Redis will keep performance similar to the vanilla benchmark, while MySQL will start to degrade and get a lot slower.
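(A minimal sketch of this "one writer, N random readers" comparison, added for illustration only and not taken from this thread. It assumes the Python redis-py client rather than the PHP client discussed above; the key space, payload and client count are made up, and Python's GIL means this shows the shape of the test rather than peak throughput.)

    # Sketch: one connection writing continuously while N clients do random reads.
    # Assumes the Python redis-py client is installed; all names and sizes are illustrative.
    import random
    import threading
    import time

    import redis

    N_READERS = 50       # "N clients performing random reads"
    KEYSPACE = 100000    # number of distinct keys touched
    RUN_SECONDS = 10

    stop = threading.Event()
    read_counts = [0] * N_READERS

    def writer():
        r = redis.Redis()                # one client doing continuous writes
        i = 0
        while not stop.is_set():
            r.set("key:%d" % (i % KEYSPACE), "x" * 100)
            i += 1

    def reader(idx):
        r = redis.Redis()                # each reader uses its own connection
        n = 0
        while not stop.is_set():
            r.get("key:%d" % random.randrange(KEYSPACE))
            n += 1
        read_counts[idx] = n

    threads = [threading.Thread(target=writer)]
    threads += [threading.Thread(target=reader, args=(i,)) for i in range(N_READERS)]
    for t in threads:
        t.start()
    time.sleep(RUN_SECONDS)
    stop.set()
    for t in threads:
        t.join()

    print("random reads/sec under write load: %.0f" % (sum(read_counts) / float(RUN_SECONDS)))

The same harness pointed at MySQL instead of Redis is what makes the degradation under load visible.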
Another test you may want to do is to use pipelining to send N queries, with N big (for example 1000 or 10000), then read the N replies, and so on. This will bring you back to the numbers you see in the benchmarks.
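(Again just an illustrative sketch, not code from the thread: the pipelined variant using the Python redis-py client; the batch size, key names and payload are only examples.)

    # Sketch: send commands in large pipelined batches instead of one round trip per command.
    # Assumes the Python redis-py client; key names, payload and batch size are illustrative.
    import redis

    r = redis.Redis()

    BATCH = 10000        # "N big", e.g. 1000 or 10000
    TOTAL = 1000000

    pipe = r.pipeline(transaction=False)     # plain pipelining, no MULTI/EXEC wrapper
    for i in range(TOTAL):
        pipe.set("id%d" % i, "some payload")
        if (i + 1) % BATCH == 0:
            pipe.execute()                   # one round trip carries BATCH commands and replies
    pipe.execute()                           # flush any remainder

Because each round trip now carries thousands of commands, the per-request network latency stops dominating the measurement.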
@antirez: Thanks for your explanation. I'll test again using the environment you describe.
AMD 2.1GHz (single core), 2GB RAM
redis-benchmark -q -n 100000
ruby bench.rb
Rehearsal ---------------------------------------
set   3.220000   0.710000   3.930000 (  5.794546)
------------------------------ total: 3.930000sec

          user     system      total        real
set   2.970000   0.670000   3.640000 (  5.281302)

@impactplayr: that's a bit slow, what kind of OS? This is what I get under Linux emulated under VMware on a MacBook (Intel Core Duo CPU T8300 @ 2.40GHz):
Another benchmark against a 64 bit Linux box, Xeon L5420 clocked at 2.5 Ghz:
@antirez Ubuntu 8.10 32bit
A slightly aging machine, I suppose: Dual AMD 285 Opteron, 16GB ram. (tbh, I had expected more)
Ubuntu 8.04 LTS x64
Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
MemTotal: 8189780 kB
redis-0.091
I tried ./redis-benchmark -n 1000000 (note the extra zero)
"top", "1" shows:
Mem:  8189780k total, 1043512k used, 7146268k free,  95896k buffers
Swap: 9968292k total,       0k used, 9968292k free, 625096k cached
@maxdemarzi: very interesting! Especially the CPU usage bit. Should I guess that all this time spent in software interrupts is due to the loopback interface?
Btw, it's worth noting that every benchmark against loopback must consume 100% of CPU in total, otherwise it means there are idle times to remove in the code somewhere.
Now with version 0.091:
conf1:
conf2 changes only "shareobjects" to "yes"
It appears that the new option, shareobjects, does give a bit of a performance boost.
I did put some critical comments on using Redis on EC2: https://michalfrackowiak.com/blog:redis-performance
On OS X on a 2.66GHz Core 2 Duo MacBook Pro I get:

$ redis-benchmark -q -n 100000
PING: 28646.62 requests per second
SET: 27887.62 requests per second
GET: 25755.09 requests per second
INCR: 25712.85 requests per second
LPUSH: 27951.09 requests per second
LPOP: 25544.32 requests per second
Seems very slow (well, not really, but in comparison to other numbers given here), but perhaps it's an OS X thing. I'll be using it in production on Linux anyway :)
Hello pcooper, indeed under Mac OS X the benchmark shows numbers that are very far from the Linux ones. My best guess is that this is a not very optimized loopback interface implementation, but I'm not sure about it; tests using a real interface against Linux and Mac OS X in similar conditions are really needed in order to verify this guess.
For comparisons against MySQL: https://colinhowe.wordpress.com/2009/04/27/redis-vs-mysql/
Why does nobody consider testing set performance?? The main thing I'm looking at Redis for is its ability to operate on sets... but I've got a terrible result: it does 1000 SADDs in about 40 seconds on a Core Duo E6850 3.00GHz.
Why is it so slooow? Has anybody else experienced that?
For me, operations with sets are only a little bit slower than with lists or strings.
You're right, my apologies... something was wrong with my Redis API. I've put set tests into redis-benchmark and it worked quite well:
hi,
has anybody benchmarked with keep alive disabled?
i've run benchmark:
./redis-benchmark -c 100 -n 10000 -d 200 -k 0 -l
and i've got a lot of:
Connect: connect: Cannot assign requested address
Connect: connect: Cannot assign requested address
Connect: connect: Cannot assign requested address
Connect: connect: Cannot assign requested address
is it normal?
@sebpaa: try this: "sudo sysctl -w net.inet.tcp.msl=1000" if you are using a Mac. In Linux instead use the following: "echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse"
great, thx (i didnt see that hint while running benchmark... )
I have done some benchmarking using Redis 1.0.2 on: Amazon EC2, Flexiscale and Slicehost (albeit a tiny VM)
I have compiled it all into a Google Docs spreadsheet and done some cost analysis for both request throughput and available storage. Please let me know if you find it useful:
https://spreadsheets.google.com/ccc?key=0AhcHKeq_S228dDE3QzlES2V3RHhCbmh5MDlQVjdhc1E&hl=en
PS. It would be great to put some bare-metal results on there. If someone could send me benchmarks for 1.0.2 using ./redis-benchmark -n 100000 and ./redis-benchmark -n 10000 -d 200
...along with the server spec and monthly cost of their hosting, I would be happy to add it to the analysis.
I have just written a blog post analysing the above results:
https://porteightyeight.com/2009/11/09/redis-benchmarking-on-amazon-ec2-flexiscale-and-slicehost/
My benchmark result for Mac OSX 10.5
centurydaily-lm:redis sabhinav$ ./redis-benchmark -q -n 100000
SET: 30442.01 requests per second
GET: 23855.24 requests per second
INCR: 27178.53 requests per second
LPUSH: 30552.23 requests per second
LPOP: 24934.68 requests per second
PING: 32311.79 requests per second
LPUSH (again, in order to bench LRANGE): 32306.85 requests per second
LRANGE (first 100 elements): 4253.33 requests per second
LRANGE (first 300 elements): 1377.75 requests per second
LRANGE (first 450 elements): 900.67 requests per second
LRANGE (first 600 elements): 685.02 requests per second
01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Error writing to client: Broken pipe 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:07 . Client closed connection 01 Dec 20:40:08 . DB 0: 3 keys (0 volatile) in 4 slots HT. 01 Dec 20:40:08 . 0 clients connected (0 slaves), 11991936 bytes in use, 0 shared objects 01 Dec 20:40:13 . DB 0: 3 keys (0 volatile) in 4 slots HT. 01 Dec 20:40:13 . 0 clients connected (0 slaves), 11991936 bytes in use, 0 shared objects 01 Dec 20:40:18 . DB 0: 3 keys (0 volatile) in 4 slots HT. 01 Dec 20:40:18 . 0 clients connected (0 slaves), 11991936 bytes in use, 0 shared objects
Why the performance on LRANGE is poor, I'm not sure. Also, connections were reported as broken pipes.
Otherwise a kick ass server.
Hey acharnock,
Awesome benchmarks. One thing to keep in mind when comparing Amazon to someone like slicehost is that Slicehost allows for bursting. So, if other people on the machine are not using resources you will get a boost in performance. This makes it hard to truly measure because as soon as the server becomes busy all of a sudden your slicehost server won't be performing top notch.
> Why a poor performance on LRANGE, i m not sure.
Why? It's obvious that the server can't return hundreds of records as fast as a single record.
redis-benchmark -n 1000000
SET
66.92% <= 0 milliseconds
99.80% <= 1 milliseconds
99.93% <= 2 milliseconds
99.94% <= 3 milliseconds
99.96% <= 4 milliseconds
99.97% <= 5 milliseconds
99.98% <= 7 milliseconds
99.99% <= 8 milliseconds
99.99% <= 15 milliseconds
100.00% <= 16 milliseconds
100.00% <= 32 milliseconds
148127.53 requests per second

GET
65.08% <= 0 milliseconds
99.93% <= 1 milliseconds
99.93% <= 2 milliseconds
99.97% <= 4 milliseconds
99.97% <= 5 milliseconds
99.97% <= 6 milliseconds
99.99% <= 7 milliseconds
100.00% <= 8 milliseconds
141804.17 requests per second

INCR
54.45% <= 0 milliseconds
98.89% <= 1 milliseconds
99.93% <= 2 milliseconds
99.96% <= 4 milliseconds
99.97% <= 5 milliseconds
99.97% <= 6 milliseconds
99.99% <= 7 milliseconds
100.00% <= 8 milliseconds
106501.27 requests per second

LPUSH
67.16% <= 0 milliseconds
99.94% <= 1 milliseconds
99.95% <= 2 milliseconds
99.98% <= 4 milliseconds
99.98% <= 5 milliseconds
99.99% <= 7 milliseconds
100.00% <= 8 milliseconds
150944.45 requests per second

LPOP
65.47% <= 0 milliseconds
99.93% <= 1 milliseconds
99.93% <= 2 milliseconds
99.93% <= 3 milliseconds
99.97% <= 4 milliseconds
99.98% <= 5 milliseconds
100.00% <= 7 milliseconds
100.00% <= 8 milliseconds
143369.33 requests per second

PING
71.63% <= 0 milliseconds
99.95% <= 1 milliseconds
99.95% <= 3 milliseconds
99.98% <= 4 milliseconds
99.99% <= 5 milliseconds
100.00% <= 7 milliseconds
100.00% <= 8 milliseconds
174734.58 requests per second
Wow... Redis does not perform well when virtualized! Here are some simple benchmark results comparing an 8GB, 4 vCPU monster Cloud Server (Rackspace) and... a 1GB, 1 vCPU VirtualBox VM running on my underpowered laptop. Both are running Ubuntu 9.10 x64. Not only is neither anywhere close to the expected performance, but my weakling laptop actually beats the bigger, arguably more capable Cloud Server!
Rackspace Cloud Server

root@Redis:~/redis-1.1.95-beta# ./redis-benchmark -q -n 100000
SET: 32900.00 requests per second
GET: 23049.54 requests per second
INCR: 19276.88 requests per second
LPUSH: 19455.25 requests per second
LPOP: 19531.25 requests per second
PING: 21834.06 requests per second
LPUSH (again, in order to bench LRANGE): 20618.56 requests per second
LRANGE (first 100 elements): 4500.27 requests per second
LRANGE (first 300 elements): 1366.31 requests per second
LRANGE (first 450 elements): 1047.67 requests per second
LRANGE (first 600 elements): 762.72 requests per second

root@Redis:~/redis-1.02# ./redis-benchmark -q -n 100000
SET: 29242.40 requests per second
GET: 28901.73 requests per second
INCR: 27551.79 requests per second
LPUSH: 28901.73 requests per second
LPOP: 27247.96 requests per second
PING: 30395.14 requests per second

VirtualBox on localhost

john@ubuntu:~/redis-1.1.95-beta$ ./redis-benchmark -q -n 100000
SET: 44822.95 requests per second
GET: 43535.05 requests per second
INCR: 41084.63 requests per second
LPUSH: 42955.32 requests per second
LPOP: 43572.98 requests per second
PING: 47460.84 requests per second
LPUSH (again, in order to bench LRANGE): 45808.52 requests per second
LRANGE (first 100 elements): 7945.75 requests per second
LRANGE (first 300 elements): 1849.93 requests per second
LRANGE (first 450 elements): 1204.59 requests per second
LRANGE (first 600 elements): 854.87 requests per second
It seems everyone in here is using the redis-benchmark utility to do benchmarks. If you look carefully into its source code, it's doing some pretty strange stuff - using the same value for each query, for example. There's a benchmark at SourceForge that shows much darker numbers in the 1000s range, falling fast with database saturation, but something is most probably wrong with it too... at least its tests are supposedly based on a real-life use case: https://sourceforge.net/projects/dbbenchmark/
If somebody can comment on Redis performance in real applications with real traffic, please do it. A long thread based on "./redis-benchmark" runs and no real tests seems really weird.
@equoblog
Hello!
This is why you should reconsider your statements:
a) redis-benchmark can SET/GET random keys. Just use the "-r" option. For instance "-r 1000000" means use random keys in the range 0-999999. Performance is exactly the same:
You can try with different "-r" values and what you'll get will be the same results. Why? Read "b".
b) With in-memory databases things are very different: operations of very different types run at the same speed. Did you notice that PING is the same speed as LPUSH? Yes: the networking overhead is so big compared to the fast in-memory operations that almost all the time is spent in I/O and request parsing and handling.
c) Most benchmarks you'll read that don't show the same performance as redis-benchmark are broken in one of these two ways: 1) they are single threaded, so what they measure is round trip time, not performance; 2) they use databases that don't fit in memory (and without the brand new Virtual Memory feature in Redis Git), so the OS is swapping like mad.
d) There is real-world usage of Redis showing that with 200/300 requests/second the CPU usage is mostly 0%. Guess what?
So please, before claiming that Redis is not as fast as I claim, try to design your own test with good methodology and show us that I and redis-benchmark are wrong.
Sorry for the aggressive tone of this reply, but I think that if you were not able to notice that there is a "-r" option in the redis-benchmark utility, your research was very poor, and still you are here claiming that Redis can handle 1000 requests/second in the real world at best.
@jazzman007 Redis can actually perform well on a VPS; here is an example running on 512MB RAM with 8 vCPUs:
@antirez Thanks for the explanation, and don't take it the wrong way; I like Redis immensely, though I'm confused and disappointed with the existing benchmarking tools for it. I checked the -r option already before writing :) The point was about the values being set in the benchmark. You're basically SETting a lot of keys to the same value. That's not a use case from real life, even remotely. And what's with the GETs? All of them return the same value, the value itself being very small. Basically, the benchmark works on a hyper-artificial toy task and provides numbers that are, it seems, at least partially misleading. A benchmark on some real-life workload would be much more interesting. The one on SourceForge claims to use a real workload and gets very low performance on that job from both MySQL and Redis. Both the starting performance and the slowdown with DB saturation seem to be similar, with Redis being slower on average.
Can you please update the benchmark code to work closer to "real-life" scenarios, or explain why in the SourceForge benchmark Redis is not flying as it should?
Thanks.
p.s. your tone was quite appropriate, hope that there is some misconfiguration that can explain the slowdown.
Getting some interesting performance on my win32 machine :) - I just compiled it with Cygwin, and the machine is doing like a million other things at the same time though. It would be interesting to find out what the bottleneck on win32 is.