Redis Is Fast – I'll Cache in Postgres
Key topics
The article compares PostgreSQL and Redis as caching solutions, concluding that PostgreSQL is a viable alternative, but the discussion reveals controversy over the benchmark's validity and the trade-offs between the two technologies.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 2h after posting
- Peak period: 75 comments in 0-6h
- Average per period: 22.9 comments
- Based on 160 loaded comments
Key moments
1. Story posted: Sep 25, 2025 at 7:34 PM EDT (3 months ago)
2. First comment: Sep 25, 2025 at 9:47 PM EDT (2h after posting)
3. Peak activity: 75 comments in the 0-6h window (the hottest period of the conversation)
4. Latest activity: Sep 29, 2025 at 6:05 PM EDT (3 months ago)
Edit: https://antonz.org/redka/#performance
I'd suggest using Redis pipelining -- or better: using the excellent rueidis redis client which performs auto-pipelining. Wouldn't be surprising to see a 10x performance boost.
https://github.com/redis/rueidis
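For illustration, a minimal sketch of what rueidis usage might look like in Go. This assumes a local Redis on the default port; auto-pipelining happens transparently when commands are issued concurrently, so there is no explicit pipeline API to call.

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/rueidis"
)

func main() {
	client, err := rueidis.NewClient(rueidis.ClientOption{
		InitAddress: []string{"127.0.0.1:6379"},
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := context.Background()

	// Commands are built with the type-safe builder; concurrent calls to
	// Do() are automatically batched onto the connection (auto-pipelining).
	if err := client.Do(ctx, client.B().Set().Key("greeting").Value("hello").Build()).Error(); err != nil {
		panic(err)
	}
	val, err := client.Do(ctx, client.B().Get().Key("greeting").Build()).ToString()
	if err != nil {
		panic(err)
	}
	fmt.Println(val)
}
```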
To this I would add that more often than not the extra cost and complexity of a memory cache does not justify shaving off a few hypothetical milliseconds from a fetch.
On top of that, some NoSQL offerings from popular cloud providers already offer CRUD operations that complete in under 20ms.
I didn't measure setting keys or req/sec because for my use case keys were updated infrequently.
I generally find latency in milliseconds to be a more useful metric than requests per second or latency at full load, since full load is not a typical load. Or at least it wasn't for my use case.
Of course all depends on your use case etc. etc. In some cases throughput does matter. I would encourage everyone to run their own benchmarks suited to their own use case to be sure – should be quick and easy.
As a rule I recommend starting with PostgreSQL and using something else only if you're heavily using the cache or you run into problems. Redis isn't too hard to run, but skipping it is still one less service to worry about. Or alternatively, just use an in-memory DB. Not always appropriate of course, but sometimes it is.
Of course such sensitive environments are easily imaginable but I wonder why you'd select either in that case.
Yes, that was my take-away.
Doesn’t require SQLite.
Works with other DBs:
https://github.com/rails/solid_cache
Definitely a premature optimization on my part.
Wherever you go, there you are.
Is redis not improving your latency? Is it adding complexity that isn’t worth it? Why bother removing it?
But when you have 0-10 users and 0-1000 requests per day, it can make more sense to write something more monolithic and with limited scalability. E.g., doing everything in Postgres. Caching is especially amenable to adding in later. If you get too far into the weeds managing services and creating scalability, you might get bogged down and never get your application in front of potential users in the first place.
E.g., your UX sucks and key features aren't implemented, but you're tweaking TTLs and getting a Redis cluster to work inside Docker Compose. Is that a good use of your time? If your goal is to get a functional app in front of potential users, probably not.
But I agree that it would be appropriate to start out that way in some projects.
We can't get rid of Postgres, but since we run Postgres on GCP we really never even think about it.
If your cache is so performance critical that you can't lose the data then it sounds like you need a (denormalized) database.
I always find these "don't use redis" posts kind of strange. Redis is so simple to operate at any scale, I don't quite get why it is important to remove it.
Maybe Postgres could use a caching feature. Until then, I'm gonna drop in Redis or memcached instead of reinventing the wheel.
Personally for a greenfield project, my thinking would be that I am paying for Postgres already. So I would want to avoid paying for Redis too. My Postgres database is likely to be underutilized until (and unless) I get any real scale. So adding caching to it is free in terms of dollars.
Usually Postgres costs a lot more than Redis if you're paying for a platform. Like a decent Redis or memcached in Heroku is free. And I don't want to waste precious Postgres connections or risk bogging down the whole DB if there's lots of cache usage, which actually happened last time I tried skipping Redis.
Postgres might cost more but I'm probably already paying. I agree that exhausting connections and writing at a high rate are easy ways to bring down Postgres, but I'm personally not going to worry about exhausting connections to Postgres until I have at least a thousand of them. Everything has to be considered within the actual problem you are solving, there are definitely situations to start out with a cache.
Edit: well a tiny bit, max $3/mo
You need to back up your unbelievable assertion with facts. A memory cache is typically far more expensive than a simple database, especially as provisioning the same capacity in RAM is orders of magnitude more expensive than storing the equivalent data in a database.
So be specific. What exactly did you want to say?
> But yeah a base tier Redis that will carry a small project tends to be a lot cheaper than the base tier Postgres.
This is patently false. I mean, some cloud providers offer NoSQL databases with sub-20ms performance as part of their free tier.
Just go ahead and provide any evidence, any at all, that supports the idea that Redis is cheaper than Postgres. Any concrete data will do.
I have no idea where you got that from.
I'm not sure how else to interpret this
You do not need cron jobs to run a cache. Sometimes you don't even need a TTL. All you need is a way to save data so that it is easy and cheap to retrieve. I feel these comments just misinterpret what a cache is by confusing it with what some specific implementation does. Perhaps that's why we see expensive and convoluted strategies using Redis and the like when they are absolutely not needed at all.
Do you have a bound? I mean, with Redis you do, but that's primarily a cost-driven bound.
Nevertheless, I think you're confusing the point of a TTL. TTLs are not used to limit how much data you cache. The whole point of a TTL is to be able to tell whether a cache entry is still fresh or whether it is stale and must be revalidated. Some cache strategies do use TTLs to decide which entries to evict, but that is a scenario that only comes into play when memory is at full capacity.
Non sequitur, and immaterial to the discussion.
> You should probably evict it when the write comes in.
No. This is only required if memory is maxed out and there is no more room to cache your entry. Otherwise you are risking cache misses by evicting entries that are still relatively hot.
You said:
> The whole point of a TTL is to be able to tell whether a cache entry is still fresh or it is stale and must be revalidated.
So I responded to it. I don't really understand why you think that's a non sequitur.
> No.
I'm a bit confused. We're not using TTLs and we're not evicting things when they become invalid. What is your suggestion?
I'm a big "just use Postgres" fan but I think Redis is sufficiently simple and orthogonal to include in the stack.
It seems like the autovacuum could take care of these expired rows during its periodic vacuum. The query planner could automatically add a condition that excludes any expired rows, preventing expired rows from being visible before autovacuum cleans them up.
What exactly is the challenge you're seeing? At the very least, you can save an expiry timestamp as part of the DB entry. Your typical caching strategy already involves revalidating the cache before it expires, and it's not as if returning stale data while revalidating is something completely unheard of.
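As a rough illustration of the "expiry timestamp column plus periodic cleanup" approach discussed above, here is a minimal sketch in Go using database/sql against Postgres. Table and function names are my own assumptions, not anyone's actual code; the caller is assumed to have opened the DB with a Postgres driver of their choice.

```go
package cache

import (
	"database/sql"
	"time"
)

// EnsureSchema creates a simple key/value cache table with a per-row expiry.
func EnsureSchema(db *sql.DB) error {
	_, err := db.Exec(`
		CREATE TABLE IF NOT EXISTS cache (
			key        text PRIMARY KEY,
			value      bytea NOT NULL,
			expires_at timestamptz NOT NULL
		)`)
	return err
}

// Set upserts a value with a TTL.
func Set(db *sql.DB, key string, value []byte, ttl time.Duration) error {
	_, err := db.Exec(`
		INSERT INTO cache (key, value, expires_at)
		VALUES ($1, $2, now() + make_interval(secs => $3))
		ON CONFLICT (key) DO UPDATE
		SET value = EXCLUDED.value, expires_at = EXCLUDED.expires_at`,
		key, value, ttl.Seconds())
	return err
}

// Get returns the value only if it has not expired; stale rows are filtered
// out at read time and physically removed later by Cleanup.
func Get(db *sql.DB, key string) ([]byte, bool, error) {
	var value []byte
	err := db.QueryRow(
		`SELECT value FROM cache WHERE key = $1 AND expires_at > now()`,
		key).Scan(&value)
	if err == sql.ErrNoRows {
		return nil, false, nil
	}
	if err != nil {
		return nil, false, err
	}
	return value, true, nil
}

// Cleanup deletes expired rows; run it from a cron job or a background ticker.
func Cleanup(db *sql.DB) error {
	_, err := db.Exec(`DELETE FROM cache WHERE expires_at <= now()`)
	return err
}
```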
Don't get me wrong, the idea that he wants to just use an RDBMS because his needs aren't great enough is a perfectly inoffensive conclusion. The path that led him there is very unpersuasive.
It's also dangerous. Ultimately the author is willing to do a bit more work rather than learn something new. This works because he's using a popular tool people like. But overall, he doesn't demonstrate he's even thought about any of the things I'd consider most important; he just sort of assumes running a Redis is going to be hard and he'd rather not mess with it.
To me, the real question is just cost vs. how much load the DB can even take. My most important Redis cluster basically exists to take load off the DB, which takes high load even by simple queries. Using the DB as a cache only works if your issue is expensive queries.
I think there's an appeal that this guy reaches the conclusion someone wants to hear, and it's not an unreasonable conclusion, but it creates the illusion the reasoning he used to get there was solid.
I mean, if you take the same logic, cross out the word Postgres, and write in "Elasticsearch," and now it's an article about a guy who wants to cache in Elasticsearch because it's good enough, and he uses the exact same arguments about how he'll just write some jobs to handle expiry -- does this still sound like solid, reasonable logic? No, it's crazy.
Perhaps you could have a second cron job that runs to verify that the first one completed. It could look for a last-ran entry. You shouldn't put it in the same database, so maybe you could use a key value store like Redis for that.
I mean what if an actual benchmark showed Redis is 100X as fast as postgres for a certain use case? What are the constraints you might be operating with? What are the characteristics of your workload? What are your budgetary constraints?
Why not just write a blog post saying "Unoptimized postgres vs redis for the lazy, running virtualized with a bottleneck at the networking level"
I even think that blog post would be interesting, and might be useful to someone choosing a stack for a proof of concept. For someone who wants to scale to large production workloads (~10,000 requests/second or more), this isn't a very useful article, so the criticism is fair, and I'm not sure why you're dismissing it offhand.
Within the constraints of my setup, postgres came out slower but still fast enough. I don't think I can quantify what fast enough is though. Is it 1000 req/s? Is it 200? It all depends on what you're doing with it. For many of my hobby projects which see tens of requests per second it definitely is fast enough.
You could argue that caching is indeed redundant in such cases, but some of those have quite a lot of data that takes a while to query.
Would it bother you as well if the conclusion was rephrased as "based on my observations, I see no point in rearchitecting the system to improve the performance by this much"?
I think you are so tied to a template solution that you don't stop to think about why you're using it, or even whether it is justified at all. Then, when you are faced with observations that challenge your unfounded beliefs, you somehow opt to get defensive? That's not right.
Add an app that actually uses Postgres as a database and you will probably see its performance crumble, as the app will contend with the cache for resources.
Nobody asked for benchmarking as rigorous as you would have in a published paper. But toy examples are toy examples, be it in a publication or not.
Otherwise, the article does well to show that we can get a lot of baseline performance either way. Sometimes a cache is premature optimisation.
Writes will go to RAM as well if you have synchronous=off.
Your comments suggest that you are definitely missing some key insights into the topic.
If you, like the whole world, consume Redis through a network connection, it should be obvious to you that network is in fact the bottleneck.
Furthermore, using an RDBMS like Postgres may indeed imply storing data in slower memory. However, you are ignoring the obvious fact that a service such as Postgres also has its own memory cache, and some query results can be, and indeed are, fetched from RAM. Thus it's not as if each and every single query forces a disk read.
And at the end of the day, what exactly is the performance tradeoff? And does it pay off to spend more on an in-memory cache like Redis to buy you that performance delta?
That's why real world benchmarks like this one are important. They help people think through the problem and reassess their irrational beliefs. You may nitpick about setup and configuration and test patterns and choice of libraries. What you cannot refute are the real world numbers. You may argue they could be better if this and that, but the real world numbers are still there.
Not to be annoying - but... what?
I specifically _do not_ use Redis over a network. It's wildly fast. High volume data ingest use case - lots and lots of parallel queue workers. The database is over the network, Redis is local (socket). Yes, this means that each server running these workers has its own cache - that's fine, I'm using the cache for absolutely insane speed and I'm not caching huge objects of data. I don't persist it to disk, I don't care (well, it's not a big deal) if I lose the data - it'll rehydrate in such a case.
Try it some time, it's fun.
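For illustration (not this commenter's actual stack, which is PHP/Laravel), a sketch of connecting to a local Redis over a unix socket with the Go go-redis client. The socket path is an assumption and depends on the unixsocket directive in your redis.conf.

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	// Talk to Redis over a local unix socket instead of TCP, skipping the
	// network stack entirely; the path below is a placeholder.
	rdb := redis.NewClient(&redis.Options{
		Network: "unix",
		Addr:    "/var/run/redis/redis.sock",
	})
	defer rdb.Close()

	ctx := context.Background()
	if err := rdb.Set(ctx, "user:123", "cached-record", 0).Err(); err != nil {
		panic(err)
	}
	val, err := rdb.Get(ctx, "user:123").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(val)
}
```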
> And at the end of the day, what exactly is the performance tradeoff? And does it pay off to spend more on an in-memory cache like Redis to buy you the performance Delta?
Yes, yes it is.
> That's why real world benchmarks like this one are important.
That's not what this is though. Just about nobody who has a clue is using default configurations for things like PG or Redis.
> They help people think through the problem and reassess their irrational beliefs.
Ok but... um... you just stated that "the whole world" consumes Redis through a network connection. (Which, IMO, is the wrong tool for the job - sure it will work, but that's not where/how Redis shines)
> What you cannot refute are the real world numbers.
Where? This article is not that.
Eh - while surely not everyone has the benefits of doing so, I'm running Laravel and using Redis is just _really_ simple and easy. To do something via memory mapped files I'd have to implement quite a bit of stuff I don't want/need to (locking, serialization, ttl/expiration, etc).
Redis just works. Disable persistence, choose the eviction policy that fits the use, config for unix socket connection and you're _flying_.
My use case is generally data ingest of some sort where the processing workers (in my largest projects, 50-80 concurrent processes chewing through tasks from a queue, also backed by Redis) are likely to end up running the same queries against the database (MySQL) to get 'parent' records (i.e. the user associated with an object by username, a post by slug, etc.), and there's no way to know whether there will be multiples (i.e. if we're processing 100k objects there might be 1 from UserA or there might be 5000 by UserA, where each one being processed will need the object/record of UserA). In this project in particular there are ~40 million of these 'user' records and hundreds of millions of related objects, so I can't store/cache _all_ users locally, but I sure would benefit from not querying for the same record 5000 times in a 10 second period.
For the most part, when caching these records over the network, the performance benefits were negligible (depending on the table) compared to just querying MySQL for them. They are just `select where id/slug =` queries. But when you lose that little bit of network latency and you can make _dozens_ of these calls to the cache in the time it would take to make a single networked call... it adds up real quick.
PHP has direct memory "shared memory" but again, it would require handling/implementing a bunch of stuff I just don't want to be responsible for - especially when it's so easy and performant to lean on Redis over a unix socket. If I needed to go faster than this I'd find another language and likely do something direct-to-memory style.
I think "you are definitely missing some key insights onto the topic". The whole world is a lot bigger than your anecdotes.
I sometimes read this stuff like people explaining how they replaced their spoon and fork with a spork and measured only a 50% decrease in food eating performance. And have you heard of the people with a $20,000 Parisian cutlery set to eat McDonald's? I just can't understand insane fork enjoyers with their over-engineered dining experience.
The fewer dependencies my project has, the better. If it is not needed, why use it?
Hardware... is cheap, and bare metal performance outweighs anything cloudy many times over. If I have to invest money into something, I'd rather invest it in bare metal tooling than pay for a managed service that's just a wrapper around tooling, e.g. RDS, EC2, Fargate, or their equivalents across other CSPs.
I can run a Postgres cluster on bare metal that will obliterate anything cloudy and cost a third as much, if not less. Is it easy? No. But that's where the investment comes in. A few good infra resources can do magic, and yes, I hope to be large enough that these labor costs will be way less than a cloud bill.
My own conclusions from your data:
- Under light workloads, you can get away with Postgres. 7k RPS is fine for a lot of stuff.
- Introducing Redis into the mix has to be carefully weighed against increased architectural complexity, and having a common interface allows us to change that decision down the road (a minimal sketch of such an interface follows below).
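For illustration, here is one way such a common cache interface might look in Go. The names are purely hypothetical; the point is only that application code depends on the interface, so the backing store can be swapped later.

```go
package cache

import (
	"context"
	"time"
)

// Store is the only thing application code depends on, so the backing
// implementation (Postgres table, Redis, in-process map) can be swapped
// later without touching call sites.
type Store interface {
	Get(ctx context.Context, key string) ([]byte, bool, error)
	Set(ctx context.Context, key string, value []byte, ttl time.Duration) error
	Delete(ctx context.Context, key string) error
}
```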
Yeah maybe that's not up to someone else's idea of a good synthetic benchmark. Do your load-testing against actual usage scenarios - spinning up an HTTP server to serve traffic is a step in the right direction. Kudos.
I don't see any point to this blend of cynical contrarianism. If you feel you can do better, put your money where your mouth is. Lashing out at others because they went through the trouble of sharing something they did is absurd and creates no value.
Also, maintaining a blog doesn't make anyone an expert, but not maintaining a blog doesn't mean you are suddenly more competent than those who do.
What exactly is your point? That you can further optimize either option? Well yes, that comes as no surprise. I mean, the latencies alone are in the range of some transcontinental requests. Were you surprised that Redis outperformed Postgres? I hardly think so.
So what's the problem?
The main point that's proven is that there are indeed diminishing returns in terms of performance. For applications where you can afford an extra 20ms when hitting a cache, caching using a persistent database is an option. For some people, it seems this fact was very surprising. That's food for thought, isn't it?
Comes with TTL support (which isn't precise, so you still need to check expiration on read), and can support long TTLs as there's essentially no limit to the storage.
All of this at a fraction of the cost of HA Redis. Only if you need that last millisecond of performance and have done all other optimizations should one consider Redis, imho.
This depends on your scale. Dynamodb is pay per request and the scaling isn’t as smooth. At certain scales Redis is cheaper.
Then if you don’t have high demand maybe it’s ok without HA for Redis and it can still be cheaper.
Can you specify in which scenario you think Redis is cheaper than caching things in, say, DynamoDB?
You posted a vague and meaningless assertion. If you do not have latency numbers and cost differences, you have absolutely nothing to show for it, and you failed to provide any rationale justifying whether any cache is required at all.
ElastiCache Serverless (Redis/Memcached): Typical latency is 300–500 microseconds (sub-millisecond response)
DynamoDB On-Demand: Typical latency is single-digit milliseconds (usually between 1–10 milliseconds for standard requests)
You need to be more specific than that. Depending on your read/write patterns and how much memory you need to allocate to Redis, back of the napkin calculations still point to the fact that Redis can still cost >$1k/month more than DynamoDB.
Did you actually do the math on what it costs to run Redis?
You would've used local memory first, at which point I cannot see getting to those request levels anymore.
> ElastiCache Serverless (Redis/Memcached): Typical latency is 300–500 microseconds (sub-millisecond response)
Sure
> DynamoDB On-Demand: Typical latency is single-digit milliseconds (usually between 1–10 milliseconds for standard requests)
I know of very few use cases where that difference is meaningful. Unless you have to do this many times sequentially, in which case optimizing that would be much more interesting than a single read being 0.5 ms versus the typical 3 to 4 ms for Dynamo (that last number is based on experience).
When not hosted on AWS? Who says we have to compare dynamodb to AWS managed Redis? Redis the company has paid hosted versions. You can run it as part of your k8s cluster too.
For HA redis you need at least 6 instances, 2 regions * 3 AZs. And you're paying for all of that 24/7.
And if you truly have 24/7 use then just 2 regions won't make sense as the latency to get to those regions from the other side of the globe easily removes any caching benefit.
If you're given the requirement of highly available, how do you not end up with at least 3 nodes? I wouldn't consider a single region to be HA but I could see that argument as being paranoid.
A cache is just a store for things that expire after a while and that takes load off your persistent store. It's inherently eventually consistent and supposed to help you scale reads. Whatever you use for storage is irrelevant to the concept of offloading reads.
It's $15/mo for 2x cache.t4g.micro nodes for ElastiCache Valkey with multi-az HA and a 1-year commitment. This gives you about 400 MB.
It very much depends on your use case though if you need multiple regions then I think DynamoDB might be better.
I prefer Redis over DynamoDB usually because it's a widely supported standard.
You need to be more specific with your scenario. Having to cache 100MB of anything is hardly a scenario that involves introducing a memory cache service such as Redis. This is well within the territory of just storing data in a dictionary. Whatever is driving the requirement for Redis in your scenario, performance and memory clearly isn't it.
Tell that to Github or HN or many other sites? So caching for them doesn't make sense?
Exactly. I think NoSQL offerings from any cloud provider already support both TTL and conditional requests out of the box, and the performance of basic key-value CRUD operations is often <10ms.
I've seen some benchmarks advertise memory cache services as having latencies around 1ms. Yeah, this would mean the latency of a database is 10 times higher. But relative numbers mean nothing on their own. What matters is absolute numbers, as they are the ones that drive tradeoff analysis. Does a feature afford an extra 10ms in latency, and is that performance improvement worth paying a premium for?
Conclusions aren't incorrect either, so what's the problem?
A takeaway could be that you can dedicate a postgres instance for caching and have acceptable results. But who does that? Even for a relatively simple intranet app, your #1 cost when deploying in Google Cloud would probably be running Postgres. Redis OTOH is dirt cheap.
Maybe I'm reading the article wrong, but it is representative of any application that uses a PostgreSQL server for data, correct?
In what way is that not a real-life scenario? I've deployed Single monolith + PostgreSQL to about 8 different clients in the last 2.5 years. It's my largest source of income.
If your don't mind overprovisioning your postgres, yes I guess the presented benchmarks are kind of representative. But they also don't add anything that you didn't know without reading the article.
Why would I mind it? I'm not using overpriced hosted PostgreSQL, after all.
And... do you do that with the default configuration?
Yes. Internal apps/LoB apps for a large company might have at most 5k users. PostgreSQL seems to manage it fine; none of my metrics are showing high latencies even when all employees log on in the morning during the same 30m period.
Kudos to you sir. Sincerely, I'm not hating, I'm actually jealous of the environment being that mellow.
A lot of us ate shit to stay in the Bay Area, to stay in computing. I have stories of great engineers doing really crappy jobs and "contracting" on the side.
I couldn't really have a 'startup' out of my house and a slice of rented hosting. Hardware was expensive and nothing was easy. Today I can set up a business and thrive on 1000 users at 10 bucks a month. That's a viable and easy-to-build business. It's an achievable metric.
But I'm not going to let Amazon, with its infinite "bill you for everything at 2012 prices so it can be profitable" hosting, be my first choice. I'm not going to do that when I can get fixed-cost hosting.
For me, all the interesting things going on in tech aren't coming out of FB, Google and hyperscalers. They aren't AI or ML. We don't need another Kubernetes or Kafka or React (no more Conway's law projects). There is more interesting work going on down at the bottom, in small 2 and 3 man shops solving their problems on limited time and budget with creative "next step" solutions. Their work is likely more applicable to most people reading HN than another well-written engineering blog post from Cloudflare about their latest massive Rust project.
> The way it is presented, a casual reader would think Postgres is 2/3rds the performance of Redis.
If a reader cares about the technical choice, they'll probably at least read enough to learn of the benchmarks in this popular use case, or even just the conclusion:
> Redis is faster than postgres when it comes to caching, there’s no doubt about it. It conveniently comes with a bunch of other useful functionality that one would expect from a cache, such as TTLs. It was also bottlenecked by the hardware, my service or a combination of both and could definitely show better numbers. Surely, we should all use Redis for our caching needs then, right? Well, I think I’ll still use postgres. Almost always, my projects need a database. Not having to add another dependency comes with its own benefits. If I need my keys to expire, I’ll add a column for it, and a cron job to remove those keys from the table. As far as speed goes - 7425 requests per second is still a lot. That’s more than half a billion requests per day. All on hardware that’s 10 years old and using laptop CPUs. Not many projects will reach this scale and if they do I can just upgrade the postgres instance or if need be spin up a redis then. Having an interface for your cache so you can easily switch out the underlying store is definitely something I’ll keep doing exactly for this purpose.
I might take an issue with the first sentence (might add "...at least when it comes to my hardware and configuration."), but the rest seems largely okay.
As a casual reader, you more or less just get:
If I wanted to read super serious benchmarks, I'd go looking for those (which would also have so many details that they would no longer be a casual read, short of just the abstract, but then I'm missing out on a lot anyways), or do them myself. This is more like your average pop-sci article, nothing wrong with that, unless you're looking for something else. Eliminating the bottlenecks would be a cool follow-up post though!
Ugh. I know this gives the illusion of fairness, but it's not how any self-respecting software engineer should approach benchmarks. You have hardware. Perhaps you have virtualized hardware. You tune to the hardware. There simply isn't another way, if you want to be taken seriously.
Some will say that in a container-orchestrated environment, tuning goes out the window since "you never know" where the orchestrator will schedule the service but this is bogus. If you've got time to write a basic deployment config for the service on the orchestrator, you've also got time to at least size the memory usage configs for PostgreSQL and/or Redis. It's just that simple.
This is the kind of thing that is "hard and tedious" for only about five minutes of LLM query or web search time and then you don't need to revisit it again (unless you decide to change the orchestrator deployment config to give the service more/less resources). It doesn't invite controversy to right-size your persistence services, especially if you are going to publish the results.
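As a rough illustration of what that right-sizing might look like, a few of the usual knobs (values are purely illustrative and must be matched to the resources you actually grant the containers):

```
# postgresql.conf -- illustrative values for an instance granted ~4 GB of RAM
shared_buffers = 1GB
effective_cache_size = 3GB

# redis.conf -- cap memory and pick an eviction policy suited to caching
maxmemory 1gb
maxmemory-policy allkeys-lru
```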
Postgres is a power tool usable for many many use cases - if you want performance it must be tuned.
If you judge Postgres without tuning it - that's not Postgres being slow, that's the developer being naive.
Didn't OP end by picking Postgres anyway?
It's the right answer even for a naive developer, perhaps even more so for a naive one.
At the end of the post it even says
>> Having an interface for your cache so you can easily switch out the underlying store is definitely something I’ll keep doing
IOW, he judged it fast enough.
Benchmarking the defaults and benchmarking a tuned setup will measure very different things, but both of them matter.
For example, if you keep adding data to a Redis server under default config, it will eat up all of your RAM and suddenly stop working. Postgres won't do the same, because its default buffer size is quite small by modern standards. It will happily accept INSERTs until you run out of disk, albeit more slowly as your index size grows.
The two programs behave differently because Redis was conceived as an in-memory database with optional persistence, whereas Postgres puts persistence first. When you use either of them with their default config, you are trusting that the developers' assumptions will match your expectations. If not, you're in for a nasty surprise.
Enough people use the default settings that benchmarking the default settings is very relevant.
It often isn't a good thing to rely on the defaults, but it's nevertheless the case that many do.
(Yes, it is also relevant to benchmark tuned versions, as I also pointed out, my argument was against the claim that it is somehow unfair not to tune)
If the defaults are fine for a use case then, unless I want to tune it out of personal interest, it's either a poor use of my fun time or a poor use of my clients' funds.
I don't think this holds true. Caches are used for reasons other than performance. For example, caches are used in some scenarios for stampede protection to mitigate DoS attacks.
Also, the impact of caches on performance is sometimes negative. With distributed caching, each match and put require a network request. Even when those calls don't leave a data center, they do cost far more than just reading a variable from memory. I already had the displeasure of stumbling upon a few scenarios where cache was prescribed in a cargo cult way and without any data backing up the assertion, and when we took a look at traces it was evident that the bottleneck was actually the cache itself.
Not really. Running out of computational resources to fulfill requests is not a performance issue. Think of things such as exhausting a connection pool. More often than not, some components of a system can't scale horizontally.
Amazon actually moved away from caches for some parts of its system because consistent behavior is a feature, because what happens if your cache has problems and the interaction between that and your normal thing is slow? What if your cache has some bugs or edge case behavior? If you don't need it you are just doing a bunch of extra work to make sure things are in sync.
There are async functions provided by the PostgreSQL client library (libpq). I've used them to process around 2000 queries per second on a single connection against a logged table.
Also, does anyone like memcached anymore? When I compared it with Redis in the past it appeared simpler.
140 more comments available on Hacker News