Go maps reuse memory on overwrites, which is why orcaman achieves 0 B/op for pure updates. xsync's custom bucket structure allocates 24 B/op per write even when overwriting existing keys.
At 1M writes/second with 90% overwrites: xsync allocates ~27 MB/s, orcaman ~6 MB/s. The trade is 24 bytes/op for 2x speed under contention. Whether this matters depends on whether your bottleneck is CPU or memory allocation.
Benchmark code: standard Go testing framework, 8 workers, 100k keys.
puzpuzpuz-hn 15 hours ago [-]
A comparison of allocation rates is included. If your application mostly writes into the map, you should go with a plain map + RWMutex (or orcaman/concurrent-map). But if, for instance, you're using the map as a cache, read operations will dominate and better read scalability becomes important. As an example, the Otter cache library uses a modified variant of xsync.Map, not a plain map + RWMutex.
hinkley 22 hours ago [-]
How does reuse avoid false sharing between cores? Since this is a concurrent hashmap we're talking about.
tl2do 19 hours ago [-]
I focused on B/op because it was the only apparent weakness I saw. My “reuse” note was about allocation behavior, not false sharing. We’re talking about different concerns.
hinkley 16 hours ago [-]
I thought you meant allocation behavior when one core deletes, another adds, and they reuse the same memory allocation.
Is that what you meant? Because if it is then you now have potential for the problem I described.
withinboredom 1 day ago [-]
Looks good! There's an important thing missing from the benchmarks though:
- CPU usage under concurrency: many of these spin-lock or use atomics, which can burn up to 100% CPU time just spinning.
- latency under concurrency: atomics cause cache-line bouncing, which hurts tail latency (p99 in particular).
puzpuzpuz-hn 15 hours ago [-]
Yup, that's a valid point. I'll consider adding these metrics.
candiddevmike 1 day ago [-]
Idk why but I tend to shy away from non std libs that use unsafe (like xsync). I'm sure the code is fine, but I'd rather take the performance hit I guess.
puzpuzpuz-hn 15 hours ago [-]
Unsafe usage in the recent xsync versions is very limited (runtime.cheaprand only). On the other hand, your point is valid and it'd be great to see standard library improvements.
mappu 23 hours ago [-]
A few release cycles back, Swiss Maps became popular (I think particularly thanks to CockroachDB) as a replacement for the standard Go map[K]V.
Later, Go's stdlib map implementation was updated to use Swiss Maps internally and everyone benefited.
Do you think the xsync.Map could be considered for upstreaming? Especially if it outperforms sync.Map at all the same use cases.
puzpuzpuz-hn 15 hours ago [-]
There are multiple GH issues around a better sync.Map. Among other alternatives, xsync.Map is also mentioned. But the Go core team doesn't seem interested in improving sync.Map (or adding a generic variant of it).
eatonphil 1 day ago [-]
Will we also eventually get a generic sync.Map?
puzpuzpuz-hn 15 hours ago [-]
It would be great to see that; there are multiple GH issues asking for it. But so far, I'm not convinced that Google prioritizes community requests over its own needs.
darkr 22 hours ago [-]
It'd be nice to have in the stdlib, but it's pretty trivial to write a generic wrapper for it.
jeffbee 1 day ago [-]
Almost certainly, since the internal HashTrieMap is already generic. But for now this author's package stands in nicely.
kgeist 21 hours ago [-]
Orcaman is a very straightforward implementation (just sharded RW locks and backing maps), but it limits the number of shards to a fixed 32. I wonder what the benchmarks would look like if the shard count were increased to 64, 128, etc.
puzpuzpuz-hn 15 hours ago [-]
My box is 12c/24t only, so it won't make any difference. But on a beefy box, it may improve performance in high cardinality key scenarios.
nasretdinov 11 hours ago [-]
It might still make a difference due to reduced contention: with more shards, the chance of two or more goroutines hitting the same shard is lower. In my mind the only downside of having more shards is the upfront cost, so it might only slow down the smallest examples.
kgeist 7 hours ago [-]
The upfront cost isn't that big: a Go map + RW lock is probably a few hundred bytes. Allocating them costs far less than 1 ms.
vanderZwan 1 day ago [-]
I don't write Go, but respect to the author for trying to list trade-off considerations for each of the implementations tested, and not just proclaim their library the overall winner.
puzpuzpuz-hn 15 hours ago [-]
Thanks. There are downsides to each approach, e.g. if you care about a minimal allocation rate, you should go with a plain map + RWMutex. So yeah, no silver bullet.
umairnadeem123 17 hours ago [-]
[dead]
puzpuzpuz-hn 15 hours ago [-]
Allocation rates are also compared. Long story short, a vanilla map + RWMutex (or a sharded variant like orcaman/concurrent-map) is the way to go if you want to minimize allocations. On the other hand, if reads dominate your workload, using one of the custom concurrent maps may be a good idea.
Pure overwrite workload (pre-allocated values):

  xsync.Map:              24 B/op   1 alloc/op    31.89 ns/op
  orcaman/concurrent-map:  0 B/op   0 allocs/op   70.72 ns/op

Real-world mixed (80% overwrites, 20% new):

  xsync.Map:              57 B/op   2 allocs/op   218.1 ns/op
  orcaman/concurrent-map: 63 B/op   3 allocs/op   283.1 ns/op