[HN Gopher] Vectorflow: Minimalist neural network library faster...
___________________________________________________________________
Vectorflow: Minimalist neural network library faster than
TensorFlow in D
Author : teleforce
Score : 66 points
Date : 2022-04-15 22:27 UTC (1 day ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| smegsicle wrote:
| what does it take to get a good word2vec-style model going --
| or are those not 'sparse'?
| VHRanger wrote:
| They should be sparse -- the inputs to Word2Vec are one-hot
| encodings of words in the model's dictionary
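|
| A toy sketch (plain Python, not Vectorflow's actual API) of why
| that makes the input sparse: each row is a one-hot vector over
| the vocabulary, so only one of len(vocab) entries is non-zero
| and it can be stored as a single (index, value) pair.
|
|     # hypothetical 4-word vocabulary; real ones have 10^5+ words
|     vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
|
|     def one_hot_dense(word):
|         # dense representation: len(vocab) floats per example
|         row = [0.0] * len(vocab)
|         row[vocab[word]] = 1.0
|         return row
|
|     def one_hot_sparse(word):
|         # sparse representation: only the non-zero coordinate
|         return [(vocab[word], 1.0)]
|
|     print(one_hot_dense("cat"))   # [0.0, 1.0, 0.0, 0.0]
|     print(one_hot_sparse("cat"))  # [(1, 1.0)]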
| MuffinFlavored wrote:
| What are interesting things you can do with a minimalist neural
| network, or even a full blown neural network for that matter?
|
| You just kind of like... train it on a data set, hope you don't
| under- or overtrain, and then what? Feed it some input, get out
| an approximate output?
| cinntaile wrote:
| Their use case is explained in the accompanying blog post.
| https://netflixtechblog.medium.com/introducing-vectorflow-fe...
| hintymad wrote:
| I wonder if "faster" really matters. Implementation-level
| optimization pales in comparison to productivity, ecosystem,
| and algorithm-level and hardware-level gains. Case in point:
| Caffe2 wanted to be the backend of PyTorch at FB, yet failed.
| MxNet claims to be "faster" than PyTorch in pretty much every
| way, and Smola personally pushed orgs in AWS AI really hard to
| adopt MxNet and Gluon, yet failed. The latter failure is
| particularly worth mentioning: all teams in AWS AI complied and
| adopted MxNet, but three years later decided to move to
| PyTorch. All in all, the incremental speed boost was simply
| irrelevant.
| stainablesteel wrote:
| if there were constant deployment in some large industrial
| niche, faster implementations might help you save on your
| electric bill
| dr_zoidberg wrote:
| I maintain a few projects at work where we train models with
| Keras+TensorFlow. Twice before I tried to switch to PyTorch,
| because it has nicer features/syntax for research and SOTA
| models, but deployment and support were limited on the
| toolchain we use, so we had to stay with Keras+TF. I'd still
| like to switch to PyTorch because of a few goodies, but
| deploying the models on some platforms is still simpler
| starting from a TF model (a sketch of the kind of path I mean
| is below).
|
| If anyone has a few links or resources to point me to, I'm more
| than happy to make a third try.
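|
| As an illustration of what "simpler starting from a TF model"
| looks like in practice (a rough sketch, not our actual pipeline
| or toolchain): a Keras model goes to an on-device format in a
| couple of lines, whereas PyTorch usually routes through ONNX or
| TorchScript first.
|
|     # sketch: Keras model -> TFLite flatbuffer for on-device
|     # deployment (model and paths are illustrative only)
|     import tensorflow as tf
|
|     model = tf.keras.Sequential([
|         tf.keras.layers.Dense(
|             64, activation="relu", input_shape=(16,)),
|         tf.keras.layers.Dense(1),
|     ])
|
|     converter = tf.lite.TFLiteConverter.from_keras_model(model)
|     tflite_bytes = converter.convert()
|     with open("model.tflite", "wb") as f:
|         f.write(tflite_bytes)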
| nivekkevin wrote:
| I wonder why "faster" matters; many neural network use cases,
| in the end, come down to exporting the model to common formats
| like ONNX (a minimal export sketch is below) and pairing it
| with inference servers like OpenVINO, Seldon Core, Triton,
| TensorRT, etc. I think this is why PyTorch took a big share of
| the market: it's in a language broadly used by model designers,
| and it has the ecosystem to also become production-grade, which
| is where "faster" really matters -- yet that is largely
| independent of which framework one uses while designing the
| model.
|
| A good use case for this, perhaps, is scientists who favor D
| over Scala and Python?
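|
| The export step mentioned above, as a minimal sketch
| (illustrative PyTorch model, not tied to any particular serving
| stack):
|
|     # sketch: export a trained PyTorch model to ONNX so an
|     # inference server (Triton, OpenVINO, ...) can serve it
|     import torch
|     import torch.nn as nn
|
|     model = nn.Sequential(
|         nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
|     model.eval()
|
|     dummy_input = torch.randn(1, 16)  # example input for tracing
|     torch.onnx.export(
|         model,
|         dummy_input,
|         "model.onnx",
|         input_names=["features"],
|         output_names=["score"],
|         dynamic_axes={"features": {0: "batch"}},
|     )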
| softinio wrote:
| I am curious to know more about what led to D lang being chosen
| for this project?
| VHRanger wrote:
| Check the blog post
|
| - They wanted the same language for implementation and user API
|
| - They wanted a single-machine, CPU-bound runtime where the
| executable is the model
|
| Given these constraints, D and Go make sense -- easy to pick
| up, fast to compile, and performant.
| softinio wrote:
| Thanks for the heads up on the blog post, I had missed that.
| Great!
| yvdriess wrote:
| Vectorflow was my gateway drug to Dlang.
___________________________________________________________________
(page generated 2022-04-16 23:01 UTC)