[HN Gopher] Show HN: I built a non-linear UI for ChatGPT
___________________________________________________________________
Show HN: I built a non-linear UI for ChatGPT
Hi HN, I built this out of frustration with the ever-growing list of
AI models and features to try, and to fit my workflow. The visual
approach clicks for me, so I went with it: it gives more freedom
and control over the outcome, and predictable results and increased
productivity are what I'm after when using conversational AI. The
app is packed with features; the ones I use most are the prompt
library, voice input and text search, and narration is useful too.
The app is local-first and works right in the browser, no sign-up
needed, and it's absolutely free to try. BYOAK - bring your own API
keys. Let me know what you think, any feedback is appreciated!
Author : setnone
Score : 420 points
Date : 2024-05-08 16:41 UTC (1 days ago)
(HTM) web link (www.grafychat.com)
(TXT) w3m dump (www.grafychat.com)
| ntonozzi wrote:
| This is wild! What have you found it most useful for?
|
| Have you tried a more straightforward approach that follows the
| ChatGPT model of being able to fork a chat thread? I could use
| something like this where I can fork a chat thread and see my old
| thread(s) as a tree, but continue participating in a new thread.
| Your model seems more powerful, but also more complex.
| setnone wrote:
| This is my daily GPT driver, so for almost anything from
| research to keeping my snippets tidy and well organized. I use
| voice input a lot so I can take my time and form my thoughts and
| requests, and text-to-speech to listen to answers too.
| Zambyte wrote:
| Looks cool! How can I host it?
| setnone wrote:
| Thanks! Self-host package comes with Extended license
| iknownthing wrote:
| Curious why you settled on the BYOAK approach rather than a
| subscription approach
| setnone wrote:
| Subscription fatigue is real :)
| iknownthing wrote:
| I was thinking it was because it would be easier than keeping
| track of usage, which I assume you would need to do with a
| subscription-based model, i.e. all users using your key.
| tomfreemax wrote:
| I have to say, I didn't realize there was no subscription until
| I saw this comment. That makes it much more interesting from the
| start.
|
| Yes, I hate subscriptions. Love your approach.
|
| I also love that you focus on your strength, which is the
| intuitive and flexible interface, rather than the LLM or prompts
| or whatever. This way it's also very extensible, as every good
| tool should be.
| rajarsheem wrote:
| The demo you shared shows you creating a child chat from the
| original parent chat. Have you tried something like connecting or
| merging two child chats to create a subsequent child chat? Or
| maybe simply creating a child chat from a previous child chat?
| visarga wrote:
| I wish there were a node to load a folder of JSON, TXT or CSV
| files, pipe them through one by one and collect the outputs in
| another folder. Like an LLM pipeline / prompt editor.
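|
| A minimal sketch of that kind of batch loop, assuming the OpenAI
| Python SDK and a made-up summarization prompt (not grafychat's
| own API):
|
| ```python
| from pathlib import Path
| from openai import OpenAI  # assumes the openai>=1.0 Python SDK
|
| client = OpenAI()  # reads OPENAI_API_KEY from the environment
| src, dst = Path("inputs"), Path("outputs")
| dst.mkdir(exist_ok=True)
|
| # Hypothetical prompt template applied to every file.
| PROMPT = "Summarize the following file:\n\n{body}"
|
| for f in sorted(src.iterdir()):
|     if f.suffix.lower() not in {".json", ".txt", ".csv"}:
|         continue
|     resp = client.chat.completions.create(
|         model="gpt-4o-mini",
|         messages=[{"role": "user",
|                    "content": PROMPT.format(body=f.read_text())}],
|     )
|     # Collect each answer in the output folder, one file per input.
|     (dst / f"{f.stem}.out.txt").write_text(resp.choices[0].message.content)
| ```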
| dav43 wrote:
| Datasette has this exact functionality and I used it. Works
| well.
|
| https://datasette.io/plugins/datasette-enrichments-gpt
|
| *edit links
| 7734128 wrote:
| Make sure to have very tight limits on any API key you provide to
| someone else. They could burn through tens of thousands of
| dollars each day if you do not have security in place.
| entherhe wrote:
| I always feel like whiteboarding & concept mapping works better
| for generative AI, especially given that we chat in a
| "multimodal way" these days -- just think of old plain-text SMS
| compared to the meme- and link-filled, rich-text-powered IM
| tools of today.
|
| Congrats! You may also check out flowith and ai.affine.pro for
| similar selling points.
|
| Also, Heptabase is good and they will definitely make an AI
| version sooner or later.
| ramoz wrote:
| I like this and wish OpenAI or Anthropic enabled something
| similar in their UIs... it would be simple, actually: "create a
| new chat from here"
|
| Otherwise, great job! It's cool, but it's pricey and that is a
| personal deterrent.
| gopher_space wrote:
| I've pegged my thinking on software purchases to local
| McDonald's drive-thru menu equivalencies.
| diebillionaires wrote:
| McDonald's is so overpriced, so I cannot condone this method
| :)
| tippytippytango wrote:
| I find editing a previous question accomplishes this well, the
| existing UI already keeps all your previous edits in a revision
| tree.
| shreezus wrote:
| Third-party clients support this. I like MindMac, for instance -
| it has a "Fork from this message" feature.
| pants2 wrote:
| From watching the demo it looks interesting, but I figure I would
| get tired of dragging nodes around and looking for ones that I'm
| interested in. Does it allow searching?
|
| It would be more interesting to me if it could use AI as an agent
| to create a graph view - or at least propose/highlight followup
| questions that self-organize into a graph.
| setnone wrote:
| Yes, search is one of my favorite features here, try the '/'
| shortcut
| setnone wrote:
| > I would get tired of dragging nodes around
|
| Personally, I find value in taking my time to organize and drag
| things around, probably because I'm a visual thinker
| yaantc wrote:
| For a text-based version of the "tree of chats" idea, using
| Emacs, Org mode and gptel, see `gptel-org-branching-context` in:
| https://github.com/karthink/gptel?tab=readme-ov-file#extra-o...
| tomfreemax wrote:
| Of course it can be done with Emacs and Org mode...
|
| It's almost like every piece of software or library will get
| ported to JavaScript eventually, with the difference that Emacs
| and Org mode were there first.
| rfc wrote:
| Nice! This is really cool. Well done.
| CuriouslyC wrote:
| It looks like you put a lot of work into this, but node-based
| workflows are OK when they're a necessary evil and just an evil
| otherwise.
|
| I'd be more interested in a tool where I can "add data" to it by
| drag and drop or folder import, then I can just type whatever
| prompt and the app's RAG system pulls relevant data/previous
| prompts/etc out of its store ranked by relevance, and I can just
| click on all the things that I want inserted into my context with
| a warning if I'm getting near the context limit. I waste a lot of
| time finding the relevant code/snippets to paste in manually.
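|
| The "ranked by relevance" part can be prototyped with plain
| lexical similarity before reaching for embeddings; a rough
| sketch with scikit-learn, where the snippet store and query are
| made up:
|
| ```python
| from sklearn.feature_extraction.text import TfidfVectorizer
| from sklearn.metrics.pairwise import cosine_similarity
|
| # Hypothetical store of previously imported snippets/prompts.
| store = [
|     "def connect_db(url): ...  # psycopg connection helper",
|     "Notes on retry/backoff strategy for the ingest worker",
|     "CSS for the settings modal",
| ]
| query = "how do we reconnect to postgres after a dropped connection?"
|
| # Rank stored snippets against the query by TF-IDF cosine similarity.
| vec = TfidfVectorizer().fit(store + [query])
| sims = cosine_similarity(vec.transform([query]),
|                          vec.transform(store)).ravel()
|
| # Top matches, which a UI could surface as click-to-insert context.
| for idx in sims.argsort()[::-1][:2]:
|     print(f"{sims[idx]:.2f}  {store[idx]}")
| ```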
| setnone wrote:
| For me this interface is canvas-based first, node-based second,
| meaning sometimes I might not even use connections to get my
| desired result from the LLM, but I have the place and form for
| the result and I know how to find it. Connections here are not
| set in stone like in mind-mapping software, for example; they're
| a tool.
| setnone wrote:
| > I'd be more interested in a tool where I can "add data" to it
| by drag and drop or folder import, then I can just type
| whatever prompt and the app's RAG system pulls relevant
| data/previous
|
| This is very similar to what I'm planning to add next, so stick
| around.
| ramoz wrote:
| Well, here's a somewhat limited version of your idea that really
| only helps mitigate the copy/paste effort when coding:
| https://github.com/backnotprop/prompt-tower
|
| My original idea was a DnD interface that works at the OS level
| as a HUD... and functions like your idea, but that is not so
| simple to develop.
| durch wrote:
| This sounds a lot like my dream setup. We've been slowly
| building something along those lines. I've linked a video at
| the bottom that shows how we did something similar with an
| Obsidian plugin. Hit me up if you're interested in more
| details; we'd be happy to get an alpha user who gets it.
|
| We've mostly had trouble explaining to people what exactly it is
| that we're building, which is fine, since we're mostly building
| for ourselves, but it still seems like something like this could
| be the killer app for LLMs.
|
| Obsidian Canvas UI demo ->
| https://www.youtube.com/watch?v=1tDIXoXRziA
|
| Also linking our Obsidian plugin repo in case someone wants to
| dive deeper into what we're about ->
| https://github.com/cloud-atlas-ai/obsidian-client
| yoouareperfect wrote:
| Very nice! Thanks for sharing, will definitely give it a try. I
| think we settled on the chat interface to play with LLMs, but
| there's nothing really holding us back from trying new ways.
| x3haloed wrote:
| Yeah, I'm annoyed that OpenAI has deprecated its text
| completion models and API. I think there's a ton of value to be
| had from constrained generation like what's available with the
| Guidance library.
| _boffin_ wrote:
| Yes! This is what I've been thinking about!
| LASR wrote:
| Wow. I was so frustrated with chat that I was almost going to
| write something like this myself. Now I don't have to :)
|
| Curious about the business model here though. How many sales
| have you had so far, if you don't mind me asking?
| kkukshtel wrote:
| I built a similar demo to this but for images - IMO this is a
| much better structure for working with LLMs as it allows you to
| really riff with a machine instead of feeling like you need a
| deterministic "next step"
|
| https://youtu.be/k_mJgFmdWWY
| dvt wrote:
| Sweet demo, you should do a Show HN! This is much more
| interesting to me, as the visual element makes much more sense
| here rather than just putting entire paragraphs in nodes.
| serial_dev wrote:
| The text nodes are also interesting; it's like a mind map. I
| can see how it could be great for learning, planning,
| collaboration, exploring...
| kkukshtel wrote:
| Thanks for the encouragement! I just put up a post, hope
| other people like it!
| setnone wrote:
| Great stuff! That deterministic "next step" is the last line of
| defense for us humans :)
| ag_hn wrote:
| Looks amazing! The Unity client is quite sleek. I'd wager the
| creative play can be taken to the next level with a low-latency
| model like https://fal.ai/models/fast-turbo-diffusion-turbo
| kkukshtel wrote:
| What I really want to do is make it model agnostic. SDXL was
| an easy choice at the time, but you could really easily just
| make it be a local model or any hosted visual model with an
| endpoint. The core idea is just tying an LLM to an image
| model and tying those to a force-directed graph, so really
| anything could be an input (or an output - you could also do
| it with text)
| lukan wrote:
| Looks good. I tried it out and it is indeed alpha in many
| regards (e.g. sometimes it does not save a picture on Windows,
| sometimes it does not show the prompt, ...), but the idea has
| potential. I would encourage you to keep working on it (and
| maybe keep in mind that if this suddenly goes viral and you
| have no API limits in place, you might get poor quickly).
| kkukshtel wrote:
| Yeah the idea was mostly to put a stake in the ground for an
| early UX experiment (I released it last year), but it's been
| in the back of my mind as something to continue experimenting
| with and honestly rebuilding for web in the custom game
| engine I'm working on.
| pasaley wrote:
| Interesting choice of questions in the demo.
|
| Are you from Nepal?
| setnone wrote:
| No, but I'm a frequent visitor; I love the mountains there!
| niutech wrote:
| You can get almost the same results for free using Obsidian
| Canvas and one of the following plugins:
|
| - https://github.com/MetaCorp/obsidian-augmented-canvas
|
| - https://github.com/phasip/obsidian-canvas-llm-extender
|
| - https://github.com/rpggio/obsidian-chat-stream
|
| - https://github.com/zatevakhin/obsidian-local-llm
| firtoz wrote:
| Thank you, now I really have to try Obsidian...
| btbuildem wrote:
| Interesting take! It does seem to address a typical
| "intermediate" workflow; even though we prefer linear finished
| products, we often work by completing a hierarchy first. I've
| been using Gingko [1] for years, I find it eases the struggle of
| organizing the structure of a problem by both allowing endless
| expansion of levels, and easily collapsing it into a linear
| structure.
|
| In your case, do you hold N contexts (N being the number of
| leaves in the tree)? Are the chats disconnected from each other?
| How do you propose to transition from an endless/unstructured
| canvas to some sort of a finished, organized deliverable?
|
| 1: https://gingkowriter.com/
| setnone wrote:
| Great questions!
|
| > In your case, do you hold N contexts (N being the number of
| leaves in the tree)?
|
| It depends; contexts are just a form of grouping.
|
| > Are the chats disconnected from each other?
|
| > How do you propose to transition from an endless/unstructured
| canvas to some sort of a finished, organized deliverable?
|
| RAG with in-app commands. I'm working on a local RAG solution;
| it's early but promising. Basically, chat with all your data and
| apply a wide range of commands to it.
| Ringz wrote:
| Slightly OT, but there was standalone software just like Gingko
| for the Mac. Do you know anything about it?
|
| Edit: I think it was an old version of Gingko as a desktop app.
| Still available at https://github.com/gingko/client/releases
| floam wrote:
| Are you thinking of FlowList?
|
| https://www.flowtoolz.com/flowlist/
| Ringz wrote:
| Thanks, but that's not the one. It was like a pure Markdown
| outliner, very keyboard driven.
| ludwigschubert wrote:
| Are you thinking of Bike?
|
| https://www.hogbaysoftware.com/bike/
|
| (Maybe not -- this isn't markdown first; but it is a very
| macOS-y, keyboard driven, hierarchical outliner that I
| enjoy.)
| Ringz wrote:
| Bike looks very nice and it's built on open file formats. I
| will try it out. See my edit above: it might be an old
| version of Gingko. But I'm on my phone right now and can't
| figure it out...
| TeMPOraL wrote:
| > _How do you propose to transition from an endless
| /unstructured canvas to some sort of a finished, organized
| deliverable?_
|
| Why would they, though? For me as a potential user of this (and
| someone who thought about building a tool like this for
| myself), the tree (or better, a directed graph) _is_ the
| desired end result.
| jdthedisciple wrote:
| Looks packed with stuff, how long did it take you to build this?
| asadalt wrote:
| I wish Perplexity had a similar UI option, so I could lay out my
| research in multiple paths.
| tomfreemax wrote:
| I didn't find it in the documentation. How would I go about it
| if I want to self-host it for a small team of around 14 people?
|
| Should I buy licenses for 14 instances (3x Extended), or one for
| all, where everyone can see everyone's conversations, or are
| there accounts? I have a central Ollama instance running and
| also OpenAI API keys.
|
| Thank you.
| setnone wrote:
| > How would I go about if I want to self-host it for a small
| team of like 14 people
|
| > Should I buy licenses for 14 (3x extended) instances
|
| Yes, that should work. Each license comes with 5
| seats/activations. Each seat has its own copy of the data.
| dangoodmanUT wrote:
| Super cool, would be great for prompt engineering and iteration
| rmbyrro wrote:
| Thank you so much for building this, it's exactly what I was
| looking for!
|
| Love the license instead of subscription model. Also loved that I
| can start trying right away without any hassle.
|
| Couple suggestions:
|
| I can't decide between Extended and Premium options. What does
| "premium support" mean?
|
| Also, it only shows an upgrade option on the check-out page;
| perhaps it'd be interesting to include it in the FAQ and also
| the Pricing section.
| setnone wrote:
| Thank you!
|
| > What does "premium support" mean?
|
| The Premium option includes prioritized support and access to
| new features that might be unavailable for other license types.
|
| I will update the website for more clarity.
| mubu wrote:
| This seems very cool and I'd like to try it out
| nirav72 wrote:
| This is great. More importantly - I love the pricing!!
| groby_b wrote:
| I have to admit, I don't get it. (And I want to be clear that's a
| personal statement, not an overall comment on the app. It looks
| quite well done, and if others get value from it, awesome!)
|
| But for me, I'm stuck with questions. What's the point of
| drawing connectors when there seems to be no implied data flow?
| Is this just a reminder for you of the hierarchy of your
| queries? Or do you actually set the upstream chat as context,
| and reflow the whole thing if you change upstream queries? (That
| one would definitely be fun to play with - still not sure about
| long-term value, but definitely interesting.)
|
| Good luck, and looking forward to seeing where you're taking
| this!
| setnone wrote:
| Thank you!
|
| Like I mentioned earlier, for me the app is canvas-based first,
| node-based second. So connections are a tool - a visual tool to
| craft or manage a prompt and then feed it to the LLM. The canvas
| is a visual tool to organize and keep large amounts of chats.
|
| I try to use the LLM not for the sake of chatting, but to get
| results, and those tools seem to help me with that.
|
| Hope that makes sense.
| jonnycoder wrote:
| Seems like organized ChatGPT in the form of a mind map. It's
| quite intuitive to me because I've had some chats where I kept
| scrolling back to the first GPT response. So you can map out a
| question and answer, then create nodes for follow-ups about
| specific details. Each branch of the tree structure can
| organize a rabbit hole of follow-ups on a specific topic.
| raxrb wrote:
| Do you plan to open source it? I would love to extend it. I had
| similar ideas about non-linear UIs.
| brunoborges wrote:
| Can you share details of the technology stack used to build the
| tool?
| wan888888 wrote:
| Amazing work, kudos! Love the canvas, drag'n'drop and line
| connectors. Did you use a library or build it yourself?
| nssmeher wrote:
| Great stuff! Interesting use cases will emerge.
| bredren wrote:
| Your full-stack dev graph seems to have 75 queries in it.
|
| Please consider providing a demo video showing how this works
| with code work.
|
| I get the overall behavior, but sometimes code segments can be
| quite long, or multiple specific sections need to be combined to
| create additional context.
|
| It would be helpful to see the current baseline product behavior
| for interaction on a "common" coding task, solving problems in
| TypeScript and/or Python.
| setnone wrote:
| Thank you for the feedback!
|
| I'm planning to release more videos, stay tuned.
| causal wrote:
| Congrats on the launch - I love this. Organizing text is often
| the hard part when working with LLMs.
|
| Only thing I don't love is heavy mouse use. Are there keyboard
| shortcuts for all the operations shown?
| setnone wrote:
| Thanks!
|
| > Are there keyboard shortcuts for all the operations shown?
|
| For now, yes; what would you like added?
| altruios wrote:
| The only feedback I would give is that I'm suspicious of (and
| will not buy) closed-source AI anything. With that said: thank
| you for sloughing off the subscription model trend! That is
| welcome.
|
| But going open source so that I know "for sure" no telemetry is
| being sent and charging for support would be the only way to get
| money out of me for this. I'm probably the odd one out for this,
| so take that with a fair helping of salt.
|
| This is a great idea, so much so that it's also something I
| could probably put together an MVP of in a weekend (or two) of
| dedicated work (the fancy features that I personally don't care
| about would probably take longer to implement, of course...).
|
| Good work! Keep it up.
| setnone wrote:
| Thank you!
|
| I would love it if we had some kind of 'open-build' methodology
| for projects that aren't willing to open the source but are
| willing to undergo any kind of necessary audit of the build;
| just a thought.
| IanCal wrote:
| > But going open source so that I know "for sure" no telemetry
| is being sent and charging for support would be the only way to
| get money out of me for this.
|
| Is the self hosted option a workable solution for you?
|
| https://www.grafychat.com/d/docs/selfhost
|
| Unless it's minified I guess.
| altruios wrote:
| I would only use this (or any AI) self-hosted if it works
| 100% offline.
|
| I would also not want it minified - as I would want the
| freedom to tinker with it to my personal specifications.
| Which makes me ask a question: what rights would I have to
| modify this software, per your license?
| joshuahutt wrote:
| Very cool! I built a version of this [1], but balked at trying to
| sell it. This is the third iteration of this idea I've seen so
| far. Your reply popup is a smart feature and a nice touch! Love
| it. I love the privacy focus and BYOK, as well.
|
| Congrats on the launch!
|
| Really cool to see graph interfaces for AI having their moment.
| :)
|
| [1] https://coloring.thinkout.app/
| diebillionaires wrote:
| Wow, this is really cool! Thanks for sharing!
| teruakohatu wrote:
| It seems to work well but a desktop app (or self hosted) is
| essential. I can't paste in valuable API keys to a third party
| website.
| setnone wrote:
| The desktop app is coming soon, and the self-host option is
| already available as part of the Extended license.
|
| I have no plans to open source it at the moment, but it would
| be great to come up with something like 'open build' for cases
| like that.
| teruakohatu wrote:
| The purchase screen made me think self-hosting was coming soon
| for Extended. How far off is the desktop app, and will it be
| self-hosted or an interface to the website?
| setnone wrote:
| Not far off, a few days I would say.
|
| Yes, it's a wrapper with opted-out Sentry and Vercel analytics,
| just like the self-host package.
| freedomben wrote:
| This looks really cool. I did not expect to see something I might
| actually buy but this is something that could be very nice for me
| :-)
|
| Will the Self-host package include source (i.e. source available)
| or is it just the transpiler output?
|
| Also, is there (or are there plans for) support for Postgres or
| another database for persistence?
| setnone wrote:
| Thank you!
|
| > Will the Self-host package include source (i.e. source
| available) or is it just the transpiler output?
|
| No sources, just a folder with compiled assets that you can run
| on a static server. This is already available.
|
| > Also, is there (or plan to be) support for postgres or other
| database for persistence?
|
| Yes, there are plans for local Postgres.
| Hrun0 wrote:
| You can create something like this easily by yourself using
| Obsidian and a plugin like
| https://github.com/AndreBaltazar8/obsidian-canvas-conversati...
| varispeed wrote:
| It's like when I replaced dropbox with just a few scripts and
| sftp.
| igor47 wrote:
| Syncthing, actually.
|
| I think you were joking but the benefit of designing software
| at personal scale is often an exponential reduction in
| complexity.
| siva7 wrote:
| "easily"? well, no except you're a techie.
| niutech wrote:
| What's the hassle for normal users?
|
| 1. Open Settings -> Community Plugins
|
| 2. Search for "Canvas Conversation" and install.
|
| Done!
| niutech wrote:
| Indeed, I mentioned even more free plugins for Obsidian Canvas
| in my comment below:
| https://news.ycombinator.com/item?id=40301465
| htrp wrote:
| Have you looked at AirOps? (Similar ideas that you could
| 'borrow' from.)
|
| https://www.airops.com/platform
| noashavit wrote:
| Congrats on the launch! I love that you let people try it
| without even signing up! The mobile experience needs to work,
| though.
| bschmidt1 wrote:
| Powerful stuff, this is the kind of workspace I've been waiting
| for for AI. Excited to see how it evolves!
| p1esk wrote:
| Hard to try it on my phone.
| invisitor wrote:
| Looks interesting. I'm working on an LLM client myself.
|
| Video: https://files.catbox.moe/zy4tbr.mp4
|
| Repo: https://github.com/Merkoba/Meltdown
| midnitewarrior wrote:
| Can you go get acquired by Phind please? Brainstorming with the
| robots is a non-linear activity and I believe you are on the
| right track.
| subhashp wrote:
| Excellent UI! I love it.
| damnever wrote:
| Awesome, this is similar to the thread conversations on Slack.
| shanghaikid wrote:
| This is interesting and all, but it's a tad complex to use. AI is
| supposed to simplify your life, but this just ends up making
| things more complicated.
|
| Ask -> answer, no more steps, that is the core value of ChatGPT
| or AI.
| social_quotient wrote:
| Suppose I have a conversation with ChatGPT about a macro, or
| better yet, a series of macros. We reach the 10th sub-module,
| but suddenly I find a bug in module 2 (the chat from 20 minutes
| ago). While I could redirect the chat back to module 2, it's a
| bit convoluted. Ideally, I'd want to return to an earlier point
| in the conversation, resolve module 2, and then continue where
| we left off. However, if I update my response from 20 chats
| ago, I risk orphaning the rest of the conversation. The
| response lag also complicates things, because I might move on
| to new ideas or debugging tasks in the meantime. I suppose I
| should say that because of the lag time I'm not in sync with
| the chat; that lag affords me the opportunity to keep doing
| other things. If the chat were more like Groq, maybe it would
| be less the case - not sure.
|
| The other thing I find is that if I change how I replied/asked,
| I get a different answer. I like the idea that I can fork this
| node and evaluate outcomes based on my varied inputs. You're
| right, it's hugely more complex. But it's complexity I think
| I'd love to have available.
| setnone wrote:
| > Ask -> answer, no more steps, that is the core value of
| ChatGPT or AI.
|
| This is the absolute ideal state of the product, I agree.
| carlosbaraza wrote:
| Some time ago I had an idea for a similar interface without the
| dragging feature - basically just a tree visualisation. I
| usually discuss a tangent topic in the same conversation, but I
| don't want to confuse the AI afterwards, so I edit a previous
| message where the tangent started. However, OpenAI would
| discard that tangent tree; instead it would be nice to have a
| tree of the tangent topics explored, without necessarily having
| to sort them manually - just visualising the tree.
| IanCal wrote:
| ChatGPT keeps the full tree, doesn't it? You can swap back and
| forth on any particular node, last I checked.
| endofreach wrote:
| I haven't seen that, so I have actually built what the parent
| wrote.
|
| So it seems I did waste time unnecessarily... but where exactly
| do I find the full tree in ChatGPT convos?
| noahjk wrote:
| I don't think it's available on mobile, if that's where you
| are. On desktop, you can switch between previous edits.
|
| I'd be interested in seeing what you made though because
| I'm really interested in the idea of a branching UI
| IanCal wrote:
| It's all kept, but it's not a nice UI. When you change a
| question you get (on the site, maybe just desktop?) left and
| right buttons to move between the different variations.
|
| One thing you could do is import your data, as the exported
| conversations had this full tree last time I tried.
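|
| A rough sketch of walking that tree, assuming the export's
| conversations.json layout (a "mapping" of node id to
| parent/children entries) at the time of writing - the format
| may have changed:
|
| ```python
| import json
|
| with open("conversations.json") as fh:
|     conversations = json.load(fh)
|
| def walk(mapping, node_id, depth=0):
|     # Print this node's message (if any), then recurse into children.
|     node = mapping[node_id]
|     msg = node.get("message")
|     if msg and msg.get("content", {}).get("parts"):
|         role = msg["author"]["role"]
|         text = str(msg["content"]["parts"][0])[:60]
|         print("  " * depth + f"{role}: {text}")
|     for child_id in node.get("children", []):
|         walk(mapping, child_id, depth + 1)
|
| for conv in conversations:
|     roots = [nid for nid, n in conv["mapping"].items()
|              if n.get("parent") is None]
|     print(f"== {conv.get('title')} ==")
|     for root in roots:
|         walk(conv["mapping"], root)
| ```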
| siva7 wrote:
| Good landing page, explained to me the product well enough. I
| like your concept also as i wished sometimes for something
| similiar in the past.
| lIIllIIllIIllII wrote:
| For what it's worth, one CSS rule lags the HELL out of my
| laptop on the site. It's backdrop-filter: blur(0.1875rem) for
| modals, like the YouTube video popup.
| wildrhythms wrote:
| I'm a front-end dev and I refuse to apply this effect for this
| reason. Even on high end laptops it uses way too much power and
| starts blasting the fans.
| LorenzoBloedow wrote:
| Does anyone know why the blur effect always takes so much
| power? Is there not a way to use the GPU, or is the problem
| something else entirely?
| niutech wrote:
| You can tell CSS to use the GPU by adding `transform:
| translate3d(0, 0, 0);`
|
| An explanation of slow CSS filter performance is in this video:
| https://www.youtube.com/watch?v=oie6KqSPPlE
| troupo wrote:
| I wanted the same for myself but balked at the amount of work I'd
| need to do to implement it :)
|
| Great job!
| xucian wrote:
| Nice, something I didn't know I needed :D
|
| You might want to increase the font weight in the pricing
| section, it's hard to read.
|
| also in "How much does it cost?" I think you should also add the
| Free option (for those like me who missed the Try For Free button
| at the top)
| Wheaties466 wrote:
| Something I built as an add-on, but which would be nice to
| integrate into some of these front ends, is a find/replace
| key:value store to guard against potentially "leaking"
| something.
|
| If you could replace IPs or domains or subdomains with a filler
| domain like something.contoso.com and send that to ChatGPT
| instead of my internal domain, that would be a feature I would
| pay money for.
|
| Like I said, I have an implementation written in Python for
| this, but it's an add-on to an additional frontend, which makes
| it extra clunky.
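|
| The core of the idea is just a reversible substitution table; a
| minimal sketch (all values made up, not the add-on mentioned
| above):
|
| ```python
| # Hypothetical mapping of sensitive values to safe placeholders.
| ALIASES = {
|     "10.0.0.5": "198.51.100.5",
|     "intranet.megacorp.local": "app.contoso.com",
| }
| REVERSE = {fake: real for real, fake in ALIASES.items()}
|
| def redact(text: str) -> str:
|     # Swap every sensitive value for its placeholder before sending.
|     for real, fake in ALIASES.items():
|         text = text.replace(real, fake)
|     return text
|
| def restore(text: str) -> str:
|     # Map placeholders in the model's reply back to the real values.
|     for fake, real in REVERSE.items():
|         text = text.replace(fake, real)
|     return text
|
| prompt = redact("Why can't 10.0.0.5 resolve intranet.megacorp.local?")
| # ...send `prompt` to the model, then map the reply back:
| # answer = restore(model_reply)
| ```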
| whiddershins wrote:
| Cool!
|
| You have a typo in the word 'presicion'
|
| Ironically
| buescher wrote:
| A tree visualization like this one would be great as a complement
| to tabs in web browsing, especially on a monster display.
| seedie wrote:
| Congrats on the launch. I've only played around a bit so far,
| but I'll take a closer look soon.
|
| Would be great if you could extend the documentation.
|
| If you're not open sourcing the app, what about at least open
| sourcing the documentation?
|
| One thing I'd like to extend is on
| https://www.grafychat.com/d/docs/intro
|
| _3. Configure Ollama server to make sure it allows connection
| from grafychat._
|
| That's not very helpful. Something along the lines of: set the
| environment variable OLLAMA_ORIGINS to "https://www.grafychat.com"
| and rerun "ollama serve". Use your custom host if you're using
| the self-host option.
|
| ```sh
| OLLAMA_ORIGINS="https://www.grafychat.com" ollama serve
| ```
|
| It's not that much more text, but it makes it way easier for
| people to go and try out your app with Ollama.
___________________________________________________________________
(page generated 2024-05-09 23:02 UTC)