[HN Gopher] Show HN: ChainForge, a visual tool for prompt engine...
___________________________________________________________________
Show HN: ChainForge, a visual tool for prompt engineering and LLM
evaluation
Hi HN! We've been working hard on this low-code tool for rapid
prompt discovery, robustness testing, and LLM evaluation. We've just
released documentation to help new users learn how to use it and
what it can already do. Let us know what you think! :)
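For anyone who wants to try it locally, here is a rough sketch of the
pip-based setup described in the docs (exact commands may have
changed, so check the README):
    # assumes Python 3 is installed
    pip install chainforge
    chainforge serve
    # then open http://localhost:8000 in the browser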
Author : fatso784
Score : 91 points
Date : 2023-08-07 17:54 UTC (5 hours ago)
(HTM) web link (chainforge.ai)
(TXT) w3m dump (chainforge.ai)
| _puk wrote:
| I think you should probably mention that its source is available!
| [0]
|
| I don't personally have a need for this right now, but I can
| really see the use for the parameterised queries, as well as
| comparisons across models.
|
| Thanks for your efforts!
|
| 0: https://github.com/ianarawjo/ChainForge
| swyx wrote:
| "source available" means one thing, but this is properly MIT
| open source and i believe the authors should receive credit for
| that in this age of frankenlicenses
|
| EDIT: ah: "This work was partially funded by the NSF grant
| IIS-2107391." ok cool we the taxpayer funded it haha
| _puk wrote:
| I didn't want to open that can of worms: "If you use
| ChainForge for research purposes, or build upon the source
| code, we ask that you cite this project in any related
| publications. The BibTeX you can use for now is.."
|
| That's outside of the MIT licence as far as I'm concerned
| 7moritz7 wrote:
| It says "we ask", not "you have to". I'd say it's open to
| interpretation that this is an informal request and not
| legally binding. Also I do want to open the can of worms
| and say that whoever doesn't even have the respect to
| include a citation on request when using someone else's
| work should just write everything themselves.
| priyanmuthu wrote:
| Hi! I'm one of the great students working on this. This is
| merely a request to get more visibility. It will also help
| us get more grants. We don't have any intention of
| restricting the "openness" of it.
| gsuuon wrote:
| Y'all sure are some great students! ;)
| _puk wrote:
| Awesome, thank you!
|
| If I'd called it truly open source, I half expected to get
| shot down.
|
| I know where we stand now :)
| mabcat wrote:
| I think that's the wrong end of the stick. When you publish
| research, the software you used/built on is part of the
| methods and needs to be cited. The authors are doing you a
| courtesy by providing a pasteable citation.
|
| Similar "we would appreciate citations" statement for (BSD-
| licensed) pandas:
| https://pandas.pydata.org/about/citing.html
|
| 8000+ pubs citing pandas:
| https://scholar.google.com/scholar?cites=9876954816936339312
| KRAKRISMOTT wrote:
| It seems to be more powerful than langflow and flowise
|
| https://github.com/logspace-ai/langflow
|
| https://github.com/FlowiseAI/Flowise
| swyx wrote:
| OK, I need somebody to do a comparison table for us...
| priyanmuthu wrote:
| We will most likely do this comparison when we write our
| research paper. I can post it here when we do.
| sdesol wrote:
| I can't comment on the features, but ChainForge has some
| catching up to do, mind-share-wise. Below are some
| community insights for langflow, Flowise, and ChainForge
|
| https://devboard.gitsense.com/logspace-ai/langflow
|
| https://devboard.gitsense.com/FlowiseAI/Flowise
|
| https://devboard.gitsense.com/ianarawjo/ChainForge
|
| Flowise currently has the largest active community (based
| on GitHub data)
|
| Full Disclosure: This is my tool
| ericskiff wrote:
| We just used this on a project and it was very helpful! Cool to
| see it here on HN
| dekervin wrote:
| May I ask how it was useful? I find it cool, but I have a hard
| time justifying using it.
| saladdressing wrote:
| [dead]
| trentearl wrote:
| Cool project
| [deleted]
| koryk wrote:
| I like it! Any plans to add Google Vertex AI support?
| mabcat wrote:
| This looks excellent! It's a great interface for two things I'm
| struggling to make LlamaIndex do: explain and debug multi-step
| responses for agent flows, and cache queries aggressively. If I
| can work out how to hook it into my LlamaIndex-based pile, happy
| days.
|
| Feature/guidance request: how to actually call functions, and
| how to loop on responses to resolve multiple function calls. I've
| managed to mock a response to get_current_weather using this
| contraption: https://pasteboard.co/aO9BmHG5qsFt.png . But it's
| messy and I can't see a way to actually evaluate function calls.
| And if I involve the Chat Turn node, the message sequences seem
| to get tangled with each other. Probably I'm holding it wrong!
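|
| (For reference, the usual loop for resolving function calls
| against the OpenAI chat API, as of mid-2023, looks roughly like
| the sketch below; this is outside ChainForge, not its own
| mechanism, and get_current_weather is mocked:)
|
|     import json, openai  # openai-python 0.27.x style API
|
|     functions = [{
|         "name": "get_current_weather",
|         "description": "Get the current weather for a city",
|         "parameters": {
|             "type": "object",
|             "properties": {"city": {"type": "string"}},
|             "required": ["city"],
|         },
|     }]
|
|     def get_current_weather(city):
|         # mocked tool result; a real app would call a weather API
|         return json.dumps({"city": city, "temp_c": 21})
|
|     messages = [{"role": "user", "content": "Weather in Paris?"}]
|     while True:
|         resp = openai.ChatCompletion.create(
|             model="gpt-3.5-turbo-0613", messages=messages,
|             functions=functions)
|         msg = resp["choices"][0]["message"]
|         if msg.get("function_call"):
|             args = json.loads(msg["function_call"]["arguments"])
|             messages.append(msg)
|             messages.append({"role": "function",
|                              "name": msg["function_call"]["name"],
|                              "content": get_current_weather(**args)})
|             continue  # keep looping until no more function calls
|         print(msg["content"])
|         break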
| [deleted]
| maxlamb wrote:
| What exactly is prompt discovery?
| toxicFork wrote:
| AFAIK it's finding out which prompts to use with which LLM to
| get the answer you want
|
| E.g. this
|
| > Compare response quality across prompt permutations, across
| models, and across model settings to choose the best prompt and
| model for your use case.
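|
| In code terms it's basically a grid search over prompt templates
| and models; an illustrative sketch (query_model is a hypothetical
| stand-in for whatever client you use, not a ChainForge API):
|
|     from itertools import product
|
|     templates = ["Summarize: {text}",
|                  "Give a one-sentence TL;DR of: {text}"]
|     models = ["gpt-3.5-turbo", "gpt-4", "claude-2"]
|     inputs = [{"text": "LLM evaluation is hard."}]
|
|     def query_model(model, prompt):
|         # stub: replace with a real API call (OpenAI, Anthropic, ...)
|         return f"[{model}] response to: {prompt}"
|
|     results = []
|     for template, model, values in product(templates, models, inputs):
|         prompt = template.format(**values)
|         results.append({"model": model, "prompt": prompt,
|                         "response": query_model(model, prompt)})
|     # score or eyeball results to pick the best prompt/model pair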
___________________________________________________________________
(page generated 2023-08-07 23:00 UTC)