Post #A5ao1s2SRUDM7l01GC by rune@mastodon.nzoss.nz
2021-02-20T12:24:10Z
0 likes, 0 repeats
The more I think about the value proposition of Elasticsearch the more I realize it's probably mostly used for the wrong thing.

Keeping an index of everything in memory is expensive. Why would you do it for all your data?

It's not an accident that Elasticsearch built a SIEM solution into Kibana. They wanted to get into the insane corporate waste that is log management. Corporations load all their logs into ES and keep them there for 5 years. Wasting memory and getting no value out of it.
Post #A5ao1sT2qdTvSDRFhY by rune@mastodon.nzoss.nz
2021-02-20T12:27:13Z
0 likes, 0 repeats
And Elasticsearch, being Java, is very stingy with memory. You're advised to use no more than ~32GB for the JVM heap. You can potentially get some performance out of another 32GB of OS cache on top of that, but 64GB isn't a lot these days...

And the Elasticsearch license model is of course per node, so the more trash you put in, the more nodes you need, even if you never touch most of that data.

It's not the default to move old data out, so most people don't, and then they just keep growing their clusters.
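On the "nobody moves old data out" point: Elasticsearch does ship index lifecycle management (ILM), you just have to attach a policy yourself. Below is a rough sketch of what that looks like, driven from plain Java using only the standard java.net.http client (Java 15+ for the text block); the cluster URL, policy name, and retention thresholds are made-up examples, not anything from this thread.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PutIlmPolicy {
        public static void main(String[] args) throws Exception {
            // Hypothetical ILM policy: roll indices over weekly, delete them 90 days after rollover.
            String policyJson = """
                {
                  "policy": {
                    "phases": {
                      "hot":    { "actions": { "rollover": { "max_age": "7d", "max_size": "50gb" } } },
                      "delete": { "min_age": "90d", "actions": { "delete": {} } }
                    }
                  }
                }
                """;

            // PUT the policy to a (hypothetical) local cluster.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:9200/_ilm/policy/logs-90d"))
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString(policyJson))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

The policy still has to be referenced from an index template via index.lifecycle.name before it does anything, which is exactly the "not the default" step the post is complaining about.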
Post #A5ao1sofYEmMXHYWPI by idle@jauntygoat.net
2021-02-21T04:32:43Z
1 like, 0 repeats
@rune Also worth noting that the JVM above 32GB changes the size and behavior of object pointers from 32-bit compressed to 64-bit. Typically, going above 32GB of heap without additional tuning, or without allocating extra memory to accommodate the larger ordinary object pointers (OOPs), will result in less usable memory and different GC performance. Really common to see older Java devs terrified of heap sizes above 32GB.
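For anyone who wants to see that boundary rather than take it on faith, here is a minimal sketch using only JDK APIs; the HotSpotDiagnosticMXBean part is HotSpot-specific, so treat the exact behavior as an assumption on other JVMs.

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class OopsCheck {
        public static void main(String[] args) {
            // Report the max heap visible to the runtime (roughly -Xmx), in GB.
            double maxHeapGb = Runtime.getRuntime().maxMemory() / (1024.0 * 1024 * 1024);
            System.out.printf("Max heap: %.1f GB%n", maxHeapGb);

            // HotSpot-specific: ask the running VM whether compressed oops are in effect.
            HotSpotDiagnosticMXBean hotspot =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            System.out.println("UseCompressedOops = "
                    + hotspot.getVMOption("UseCompressedOops").getValue());
        }
    }

Run it once with -Xmx31g and once with -Xmx33g: the first should report UseCompressedOops = true and the second false, which is the switch from 32-bit compressed to 64-bit pointers described above.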