Post AWboA5jpPyLMZgIdua by zellerin@emacs.ch
(DIR) Post #AWU8SuR5lsxJsGwvxY by galdor@emacs.ch
2023-06-08T15:06:09Z
0 likes, 0 repeats
Interesting but disappointing: my #CommonLisp HTTP server in non-blocking mode (epoll) has 5x lower throughput and 200x higher latency compared to the multithreaded blocking approach. Way too much time spent in read/write calls, method invocation and text encoding/decoding.
(DIR) Post #AWVchz2k7DUaI3clxw by galdor@emacs.ch
2023-06-09T08:19:42Z
0 likes, 0 repeats
@lispi314 Nothing special, just a couple threads handling connections. See https://github.com/galdor/tungsten/blob/master/tungsten-http/src/server.lisp for more details.
(DIR) Post #AWVdcaXZt88ynq2RhQ by louis@emacs.ch
2023-06-09T08:29:55Z
0 likes, 0 repeats
@galdor Total beginner here and just curious. Do I read that code correctly in that it creates a new thread for every incoming connection? Because this is the approach I would choose. I'm not deep enough into systems programming to understand how expensive the creation of threads is. Could a thread pool with reusable threads perhaps be an improvement? @lispi314
(DIR) Post #AWVeI7rSuJ4W1C9BhI by louis@emacs.ch
2023-06-09T08:37:27Z
0 likes, 0 repeats
@galdor Strike my last message - I realize that you already implemented a thread pool. 🙂 @lispi314
(DIR) Post #AWVeJmEMIhzVVjduXg by galdor@emacs.ch
2023-06-09T08:37:46Z
0 likes, 0 repeats
@louis @lispi314 No, it uses a thread pool created during the initialization of the server [1]. The connection handler of the TCP server pushes new connections to a queue [2]. All threads wait on the connection queue [3], of course with a condition variable. It is a very simple, very performant design. The only issue is that clients could completely block the server by opening N concurrent network connections. In practice, if the server is running behind NGINX it cannot happen, since NGINX buffers all requests before sending them to the backend, but non-blocking IO will be a hard requirement for things such as WebSockets. Spawning one thread per request would be extremely slow and memory intensive.
[1] https://github.com/galdor/tungsten/blob/master/tungsten-http/src/server.lisp#L68
[2] https://github.com/galdor/tungsten/blob/master/tungsten-http/src/server.lisp#L103
[3] https://github.com/galdor/tungsten/blob/master/tungsten-http/src/server.lisp#L129
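[Editor's note: the design described above — a fixed pool of worker threads blocking on a shared connection queue guarded by a condition variable — can be sketched as follows. This is a hypothetical Python illustration, not the tungsten Common Lisp code; all names are made up. Python's queue.Queue implements the lock + condition variable internally.]

```python
import queue
import threading

def handle_connection(conn, results):
    # Placeholder for reading the request and writing the response.
    results.append(f"handled {conn}")

def worker(connections, results):
    while True:
        conn = connections.get()   # blocks on the queue's condition variable
        if conn is None:           # sentinel value: shut this worker down
            break
        handle_connection(conn, results)

connections = queue.Queue()
results = []
# Pool created once at server initialization; threads are reused.
pool = [threading.Thread(target=worker, args=(connections, results))
        for _ in range(4)]
for t in pool:
    t.start()

# The TCP accept loop would push accepted sockets here; integers stand in.
for i in range(10):
    connections.put(i)
for _ in pool:
    connections.put(None)         # one sentinel per worker
for t in pool:
    t.join()

print(len(results))               # 10 connections served by 4 reusable threads
```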
(DIR) Post #AWVlbpRXnIguMmMgiW by zellerin@emacs.ch
2023-06-09T09:59:28Z
0 likes, 0 repeats
@galdor What is (are) your test scenarios for this measurement? One/few/many clients, one/many pipelined/not requests per connection?
(DIR) Post #AWVn96to0y8M101AMi by zellerin@emacs.ch
2023-06-09T09:59:50Z
0 likes, 0 repeats
@galdor Also, for epoll scenario (supposedly over multiple connection), how did you handle situation when the client send just half of the line - were all other clients waiting for it to finish, or did you do something clever and possibly slow?
(DIR) Post #AWVn97dXGwiIIpPge0 by galdor@emacs.ch
2023-06-09T10:16:42Z
0 likes, 0 repeats
@zellerin For the moment test parameters don't really matter (I'm using "wrk -c 500 -d 10 -t 4"); I'm still finishing request handling. With non-blocking sockets, there is no issue with slow clients: if one only sends half of a request, the socket stays open and you don't get a read event until the rest is available. This is precisely why you want non-blocking sockets.
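[Editor's note: the key property being discussed — that a read on a non-blocking socket fails immediately instead of stalling the thread when no data has arrived — can be demonstrated with a small self-contained Python sketch (a local socketpair stands in for a client connection):]

```python
import socket

# A connected pair of Unix sockets: "a" plays the server side.
a, b = socket.socketpair()
a.setblocking(False)

try:
    a.recv(4096)             # nothing sent yet: would block
    blocked = False
except BlockingIOError:      # non-blocking socket returns immediately
    blocked = True

b.sendall(b"GET / HTTP/1.1\r\n")
data = a.recv(4096)          # now bytes are available, recv succeeds

a.close()
b.close()
print(blocked, data)
```

In an event loop, epoll (or a portable wrapper such as Python's selectors module) delivers a readiness event only when there are bytes to read, so a slow client simply generates no events rather than blocking a worker thread.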
(DIR) Post #AWbdms6cwAaA5oFjiS by zellerin@emacs.ch
2023-06-12T06:00:06Z
0 likes, 0 repeats
@galdor Thanks. I guess I'm missing something in how it works. Is this code available somewhere to see as well? (My understanding, maybe wrong, was that if a client sends a packet with just "GE", you will receive this and need to keep it somewhere on the server side until the rest arrives, possibly handling other connections in the meantime.)
(DIR) Post #AWblk9iJoXcDT26Awy by galdor@emacs.ch
2023-06-12T07:29:15Z
0 likes, 0 repeats
@zellerin See the non-blocking-http-server branch: https://github.com/galdor/tungsten/blob/non-blocking-http-server/tungsten-http/src/server.lisp
Not sure what you mean with "GE", we're talking about HTTP.
(DIR) Post #AWboA5jpPyLMZgIdua by zellerin@emacs.ch
2023-06-12T07:56:22Z
0 likes, 0 repeats
@galdor Thanks. "GE" is the first two letters of "GET / HTTP/1.1", a typical HTTP opener.
(DIR) Post #AWboEqoeKGU3k3p9sm by galdor@emacs.ch
2023-06-12T07:57:13Z
0 likes, 0 repeats
@zellerin Ah I see. So yes, this is the idea. Every request fragment you read must be kept aside.
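[Editor's note: "keeping request fragments aside" per connection, as described above, can be sketched like this. A hypothetical Python illustration, not the tungsten code: each connection gets a buffer that accumulates fragments, and the request head is only parsed once the HTTP header terminator ("\r\n\r\n") has arrived.]

```python
def feed(buffers, conn_id, fragment):
    """Append a fragment to this connection's buffer; return the
    request head once it is complete, or None if still partial."""
    buffers[conn_id] = buffers.get(conn_id, b"") + fragment
    head, sep, rest = buffers[conn_id].partition(b"\r\n\r\n")
    if not sep:
        return None              # incomplete: wait for more read events
    buffers[conn_id] = rest      # keep any pipelined leftover bytes
    return head

buffers = {}
assert feed(buffers, 1, b"GE") is None                 # just "GE" so far
assert feed(buffers, 1, b"T / HTTP/1.1\r\n") is None   # no blank line yet
head = feed(buffers, 1, b"Host: x\r\n\r\n")            # head now complete
print(head)
```

Between fragments the event loop is free to service other connections; the buffer is the only per-connection state that must survive across read events.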