[HN Gopher] Show HN: go-nbd - A Pure Go NBD Server and Client
___________________________________________________________________
Show HN: go-nbd - A Pure Go NBD Server and Client
Hey HN! I just released go-nbd, a lightweight Go library for
effortlessly creating NBD (Network Block Device) servers and
clients. It's a neat tool for creating custom Linux block devices
with arbitrary backends, such as a file, a byte slice or, what I'm
planning to use it for, a tape drive. While there are a few
partially abandoned projects like this out there already, this
library tries to stay maintainable by implementing only the most
recent handshake revision and baseline functionality for both the
client and the server, while still supporting enough of the
protocol to be useful. I'd love to get your feedback :)
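To give a rough idea of the shape of this, here is a minimal
sketch of an in-memory backend. The Backend interface below is
illustrative only (a simplification, not the library's exact API):
a backend is just something that can serve reads and writes at
offsets, report its size and flush.

  package backend

  import "sync"

  // Backend is an illustrative interface for what an NBD server
  // needs from its storage: offset-addressed reads and writes,
  // a size and a flush.
  type Backend interface {
      ReadAt(p []byte, off int64) (n int, err error)
      WriteAt(p []byte, off int64) (n int, err error)
      Size() (int64, error)
      Sync() error
  }

  // MemoryBackend serves a "block device" out of a byte slice.
  // (For brevity, out-of-range offsets are not handled.)
  type MemoryBackend struct {
      mu   sync.Mutex
      data []byte
  }

  func (b *MemoryBackend) ReadAt(p []byte, off int64) (int, error) {
      b.mu.Lock()
      defer b.mu.Unlock()

      return copy(p, b.data[off:]), nil
  }

  func (b *MemoryBackend) WriteAt(p []byte, off int64) (int, error) {
      b.mu.Lock()
      defer b.mu.Unlock()

      return copy(b.data[off:], p), nil
  }

  func (b *MemoryBackend) Size() (int64, error) { return int64(len(b.data)), nil }

  func (b *MemoryBackend) Sync() error { return nil } // nothing to flush in memory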
Author : pojntfx
Score : 83 points
Date : 2023-03-29 16:23 UTC (6 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| Thaxll wrote:
| You should probably defer the mutex unlock() and not use naked
| returns: https://github.com/pojntfx/go-
| nbd/blob/main/pkg/backend/file...
|
| Deferring overhead is very small nowadays.
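|
| A sketch of the pattern I mean (hypothetical code, not the
| actual backend from the repo; assumes the usual os and sync
| imports):
|
|   type FileBackend struct {
|       lock sync.Mutex
|       file *os.File
|   }
|
|   func (b *FileBackend) ReadAt(p []byte, off int64) (int, error) {
|       b.lock.Lock()
|       defer b.lock.Unlock() // released on every return path, even a panic
|
|       return b.file.ReadAt(p, off)
|   }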
| pojntfx wrote:
| I did think of doing that, but from my understanding there is a
| slight performance hit from `defer`, and there is no other
| branch where it could deadlock - or am I missing something here?
| Thanks either way!
|
| Edit: Oh I just saw the addition to your comment - that is
| exactly what I was thinking of ^^
| kyrra wrote:
| Defer overhead was mostly fixed in Go 1.14. From:
| https://go.dev/doc/go1.14
|
| > This release improves the performance of most uses of defer
| to incur almost zero overhead compared to calling the
| deferred function directly. As a result, defer can now be
| used in performance-critical code without overhead concerns.
|
| EDIT: https://github.com/golang/go/issues/14939 I believe is
| the main tracking bug for this.
| philosopher1234 wrote:
| what about during a panic?
| pojntfx wrote:
| Good point, thanks!
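|
| For reference, a deferred Unlock still runs while a panic
| unwinds the stack, so even then it won't deadlock. A quick
| illustration (assumes fmt and sync imports):
|
|   var mu sync.Mutex
|
|   func read() (err error) {
|       mu.Lock()
|       defer mu.Unlock() // runs even while a panic unwinds
|
|       defer func() {
|           if r := recover(); r != nil {
|               err = fmt.Errorf("recovered: %v", r)
|           }
|       }()
|
|       panic("boom") // mu is still released, so no deadlock later
|   }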
| Patrickmi wrote:
| I think there was no performance hit on panic; it was a memory
| leak that was fixed a long time ago.
| i5heu wrote:
| NBD = https://de.wikipedia.org/wiki/Network_Block_Device
| msla wrote:
| German: English, but Capitalized.
| toxik wrote:
| Or if you speak English,
| https://en.m.wikipedia.org/wiki/Network_block_device
| ronsor wrote:
| Or if you're not on a mobile device,
| https://en.wikipedia.org/wiki/Network_block_device
| toxik wrote:
| Why is there even a separate domain for mobile that has
| these issues?
| rkeene2 wrote:
| NBD is a simple protocol, I used it to recover a RAID5 hardware
| array that lost parity [0] [1], in just a few lines of C.
|
| [0] https://dev.to/rkeene/raid5-lost-raid5-recovered-3kld
|
| [1]
| https://www.rkeene.org/projects/info/resources/diatribes/blo...
| davidjfelix wrote:
| NBD = network block device. Hope I saved somebody a google.
| gpvos wrote:
| "The Network Block Device is a Linux-originated lightweight
| block access protocol that allows one to export a block device
| to a client."
| znpy wrote:
| how does nbd compare to, say, iSCSI ?
|
| beyond likely being simpler to understand/manage, i mean.
| duskwuff wrote:
| SCSI was a fairly wide-ranging protocol, supporting
| anything from hard disks to CD recorders to document
| scanners, and iSCSI could theoretically encapsulate all of
| that. SCSI also came with a lot of historical quirks, like
| 6/10/12/16 byte addressing, which were progressively added
| as devices got larger and requirements got more complex. As
| a result, implementing software to interact with iSCSI is a
| pain, because there's simply so much legacy weirdness to
| deal with.
|
| NBD is much more narrowly focused. It exposes a single
| block device to the kernel, with a minimal set of commands
| focused on that use case (read, write, trim, prefetch and so
| on). It doesn't do as many things as iSCSI, but that's
| probably for the better.
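|
| In Go terms, the whole baseline request set fits in a handful
| of constants (values as numbered in the NBD protocol spec):
|
|   // Baseline NBD request types, per the protocol spec.
|   const (
|       NBD_CMD_READ         = 0 // read a range
|       NBD_CMD_WRITE        = 1 // write a range
|       NBD_CMD_DISC         = 2 // disconnect cleanly
|       NBD_CMD_FLUSH        = 3 // flush writes to stable storage
|       NBD_CMD_TRIM         = 4 // discard a range
|       NBD_CMD_CACHE        = 5 // prefetch a range
|       NBD_CMD_WRITE_ZEROES = 6 // zero a range without a payload
|   )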
| tptacek wrote:
| It's much, much simpler than iSCSI, which is an advantage.
|
| It's possibly more idiomatically Linux. But the Linux iSCSI
| initiator might (last I checked?) do a better job of
| utilizing the kernel block multiqueue interface than nbd,
| and thus might get higher I/O performance.
|
| nbd is extremely simple to set up; iSCSI less so.
| [deleted]
| teenigma wrote:
| Thanks. My first thought was Next Business Day, followed by
| 'Why does this thing need a server/client?'
| zainhoda wrote:
| My first thought was that it was a No Big Deal server akin to
| Python's simple HTTP server
| alias_neo wrote:
| Did nobody else learn to spell out an acronym the first time
| it's used?
|
| I had heard of it, but I had to read "NBD" far too many times
| in that repo before I saw what it stood for.
| naikrovek wrote:
| no one knows how to write or how to use hypertext properly
| anymore and it drives me nuts.
| cellularmitosis wrote:
| Very cool! I'm curious if you've explored testing error cases
| yet? Years ago I fooled around with nbdkit and developed a "bad
| sectors" and "bad disk" plugin and found that the error handling
| around these scenarios left a little to be desired.
|
| https://github.com/pepaslabs/nbdkit-baddisk-plugin
|
| https://github.com/pepaslabs/nbdkit-badsector-plugin
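|
| The same idea would port naturally to Go: wrap one backend in
| another that injects failures. A rough sketch, assuming a
| Backend interface like the one sketched in the post above
| (hypothetical, not go-nbd's actual API; assumes the errors
| import):
|
|   // BadSectorBackend wraps another backend and fails any read
|   // that touches a configured set of "bad" sectors.
|   type BadSectorBackend struct {
|       Inner      Backend
|       SectorSize int64
|       Bad        map[int64]bool // sector index -> is bad
|   }
|
|   func (b *BadSectorBackend) ReadAt(p []byte, off int64) (int, error) {
|       first := off / b.SectorSize
|       last := (off + int64(len(p)) - 1) / b.SectorSize
|       for s := first; s <= last; s++ {
|           if b.Bad[s] {
|               return 0, errors.New("injected I/O error: bad sector")
|           }
|       }
|       return b.Inner.ReadAt(p, off)
|   }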
| pojntfx wrote:
| Thanks! I have not yet, actually - I am planning to test this
| with MHVTL to get some artificial delay in there (for the
| upcoming tape backend), but something like this would be
| interesting to integrate/port!
| latchkey wrote:
| Since this heavily involves networking, take a look at gnet
| [0]. You might find some interesting performance improvements
| by using it over plain net.Conn.
|
| [0] https://github.com/panjf2000/gnet
| tptacek wrote:
| Probably wouldn't do this unless you really needed to; nbd
| workloads are probably easier than HTTP workloads (a single NBD
| "mount" might have lots of connections, but you're not adding
| and removing hundreds of connections per second).
| latchkey wrote:
| You might be right. Different workloads will definitely have
| different effects. That said, implementing the gnet API is
| pretty easy and doesn't require a huge context switch. It is
| worth a test to see which one performs better.
|
| I used it for a TCP connection (JSON-RPC) workload and it was
| far better, and the code was cleaner.
| tptacek wrote:
| Right, I don't want to talk down gnet, it's neat, but
| you're basically writing non-idiomatic libevent-style
| networking code --- ie, very non-idiomatic Go code --- and
| it seems to me like most of the perf win here is minimizing
| the number of goroutines you have serving blocking
| operations, which is not really a problem you're going to
| have with an NBD implementation.
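|
| That is, the idiomatic shape here is just the standard blocking
| accept loop, one goroutine per connection (sketch; handle() is
| a placeholder for whatever serves the NBD session):
|
|   ln, err := net.Listen("tcp", ":10809") // 10809 is the IANA NBD port
|   if err != nil {
|       log.Fatal(err)
|   }
|   for {
|       conn, err := ln.Accept()
|       if err != nil {
|           continue
|       }
|       go handle(conn) // one blocking goroutine per connection
|   }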
| latchkey wrote:
| I see your point. In my case it was basically a proxy
| concentrator. On one side, accept and hold open a huge
| number of connections, then maintain a single open
| connection on the other side. It worked really well for
| this situation.
| pojntfx wrote:
| Thanks, I had not heard of that package, I will be sure to
| check it out!
___________________________________________________________________
(page generated 2023-03-29 23:00 UTC)