[HN Gopher] Show HN: Mastra - Open-source JS agent framework, by...
       ___________________________________________________________________
        
       Show HN: Mastra - Open-source JS agent framework, by the developers
       of Gatsby
        
        Hi HN, we're Sam, Shane, and Abhi, and we're building Mastra
        (https://mastra.ai), an open-source JavaScript SDK for building
        agents on top of Vercel's AI SDK.

        You can start a Mastra project with `npm create mastra` and
        create workflow graphs that can suspend/resume, build a RAG
        pipeline and write evals, give agents memory, create
        multi-agent workflows, and view it all in a local playground.

        Previously, we built Gatsby, the open-source React web
        framework. Later, we worked on an AI-powered CRM, but it felt
        like we were having to roll all the AI bits (agentic workflows,
        evals, RAG) ourselves. We also noticed our friends building AI
        applications suffering from long iteration cycles: they were
        getting stuck debugging prompts, figuring out why their agents
        called (or didn't call) tools, and writing lots of custom
        memory retrieval logic. At some point we just looked at each
        other and were like, why aren't we trying to make this part
        easier? So we decided to work on Mastra.

        Demo video: https://www.youtube.com/watch?v=8o_Ejbcw5s8

        One thing we heard from folks is that seeing the input/output
        of every step, of every run, of every workflow, is very useful.
        So we took XState and built a workflow graph primitive on top
        with OTel tracing. We wrote the APIs to make control flow
        explicit: `.step()` for branching, `.then()` for chaining, and
        `.after()` for merging. We also added `.suspend()`/`.resume()`
        for human-in-the-loop.
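
        A minimal sketch of how those pieces compose (simplified;
        exact signatures are in the docs and may differ by version):

            import { Step, Workflow } from "@mastra/core";

            // Two trivial steps; real steps would call tools or models.
            const fetchData = new Step({
              id: "fetchData",
              execute: async () => ({ rows: [1, 2, 3] }),
            });
            const summarize = new Step({
              id: "summarize",
              // earlier step output is available via the workflow context
              execute: async ({ context }) => ({ summary: "3 rows" }),
            });

            const wf = new Workflow({ name: "daily-report" });
            wf.step(fetchData).then(summarize).commit();
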
        We abstracted the main RAG verbs like `.chunk()`, `.embed()`,
        `.upsert()`, `.query()`, and `.rerank()` across document types
        and vector DBs. We shipped an eval runner with evals like
        completeness and relevance, plus the ability to write your own.
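
        Directionally, the indexing half of that pipeline looks like
        this (a sketch: the chunk options and embedding call are
        illustrative; details may vary by version):

            import { MDocument } from "@mastra/rag";
            import { embedMany } from "ai";
            import { openai } from "@ai-sdk/openai";

            async function indexDocument(text: string) {
              // chunk: split the source into retrieval-sized pieces
              const doc = MDocument.fromText(text);
              const chunks = await doc.chunk({ strategy: "recursive", size: 512 });

              // embed: one vector per chunk
              const { embeddings } = await embedMany({
                model: openai.embedding("text-embedding-3-small"),
                values: chunks.map((chunk) => chunk.text),
              });
              // next: .upsert() into a vector DB, then .query()/.rerank()
              return embeddings;
            }
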
        Then we read the MemGPT paper and implemented agent memory on
        top of AI SDK with a `lastMessages` key, `topK` retrieval, and
        a `messageRange` for surrounding context (think `grep -C`).
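
        Roughly, the memory config ties those keys together like so (a
        sketch; the exact option shape may differ by version):

            import { Memory } from "@mastra/memory";

            const memory = new Memory({
              options: {
                // recent turns kept verbatim
                lastMessages: 20,
                // semantic retrieval of older turns, with surrounding
                // messages included for context (like `grep -C 2`)
                semanticRecall: { topK: 3, messageRange: 2 },
              },
            });
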
        But we still weren't sure whether our agents were behaving as
        expected, so we built a local dev playground that lets you curl
        agents/workflows, chat with agents, view evals and traces
        across runs, and iterate on prompts with an assistant. The
        playground uses a local storage layer powered by libsql (thanks
        Turso team!) and runs on localhost with `npm run dev` (no
        Docker).
        Mastra agents originally ran inside a Next.js app. But we
        noticed that AI teams' development was increasingly decoupled
        from the rest of their organization, so we built Mastra so that
        you can also run it as a standalone endpoint or service.
        Some things people have been building so far: one user
        automates support for an iOS app he owns with tens of thousands
        of paying users. Another bundled Mastra inside an Electron app
        that ingests aerospace PDFs and outputs CAD diagrams. Another
        is building WhatsApp bots that let you chat with objects like
        your house.
        We did (for now) adopt an Elastic v2 license. The agent space
        is pretty new, and we wanted to let users do whatever they want
        with Mastra but prevent, eg, AWS from grabbing it.
        If you want to get started:

        - On npm: `npm create mastra@latest`
        - Github repo: https://github.com/mastra-ai/mastra
        - Demo video: https://www.youtube.com/watch?v=8o_Ejbcw5s8
        - Our website homepage: https://mastra.ai (includes some nice
          diagrams and code samples on agents, RAG, and links to
          examples)
        - And our docs: https://mastra.ai/docs

        Excited to share Mastra with everyone here - let us know what
        you think!
        
       Author : calcsam
       Score  : 266 points
       Date   : 2025-02-19 15:25 UTC (7 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | aranibatta wrote:
       | Congrats on launching! Curious how early the Mastra team thinks
       | people should be thinking about evals and setting up a pipeline
       | for them.
        
         | calcsam wrote:
         | We tend to recommend folks spend a few hours writing evals
         | after they spend a couple weeks prototyping. Then they get a
         | sense of how valuable evals are for their use-case.
         | 
         | We think about evals a bit like perf monitoring -- it's good to
         | have RUM but also good to have some synthetic stuff in your CI.
         | So if you do find them valuable, useful to do both.
        
       | bobremeika wrote:
       | A TypeScript first AI framework is something that has been
       | missing. How do you work with AI SDK?
        
         | calcsam wrote:
          | We originally were wrapping AI SDK, but that confused people
          | who wanted to use both, so we decided to make the API more
          | explicit, eg:
          | 
          |     import { Agent } from "@mastra/core/agent";
          |     import { openai } from "@ai-sdk/openai";
          | 
          |     export const myAgent = new Agent({
          |       name: "My Agent",
          |       instructions: "You are a helpful assistant.",
          |       model: openai("gpt-4o-mini"),
          |     });
        
         | soulofmischief wrote:
         | Mine is written in TypeScript and I still think it's more
         | ergonomic than anything else I'm seeing in the wild. Maybe
         | there's finally an appetite for this stuff and I should release
         | it. The Mastra dashboard looks pretty nice, might take some
         | notes from it.
        
         | swyx wrote:
         | idk man
         | 
         | https://js.langchain.com/docs/introduction/
         | 
         | https://www.vellum.ai/products/workflows-sdk
         | 
         | https://github.com/transitive-bullshit/agentic
         | 
         | which is not to say any of them got it right or wrong, but it
         | is by no means "missing". the big question w all of them is do
         | they deliver enough value to last. kudos to those who at least
         | try, of course
        
       | yovboy wrote:
        | You're awesome guys! I had so many problems with langchain and
       | very happy since switching to Mastra
        
         | calcsam wrote:
         | that's great to hear!!
        
         | ge96 wrote:
         | that sus account with no activity until now
        
       | jobryan wrote:
       | Bamfs
        
         | calcsam wrote:
         | lol thanks
        
       | Gakho wrote:
        | Congrats on launching. I've noticed that switching prompts
        | between different LLM providers without editing them degrades
        | performance. I'm wondering whether you guys have noticed how
        | developers do these "translations"; maybe your eval framework
        | has data on best practices.
        
         | calcsam wrote:
         | Yeah, this is something we've heard as well. No particular
         | feature right now but we did ship an agent in local dev to help
         | people improve their prompts.
        
           | Gakho wrote:
            | I'm wondering since there seem to be a lot of
            | frameworks/websites that support evals; even OpenAI has
            | evals.
           | 
           | Do you think that a lot of these components like
           | observability and evals will eventually be consumed by either
           | providers (like OpenAI) or an orchestration framework like
           | Mastra (when using multiple providers, though even if you're
           | using just one provider for many tasks I can see it belonging
           | to the orchestration framework)?
        
             | calcsam wrote:
             | I could be wrong but don't think OpenAI wants to be
             | opinionated about that, except maybe the OpenAI solutions
             | engineers :)
        
           | swyx wrote:
           | link to this agent?
        
             | calcsam wrote:
             | demo: https://x.com/calcsam/status/1889856384549982419
        
       | realmikebernico wrote:
       | Congrats! This is exactly what the AI world needs. I'm thinking
       | about using Mastra for a class I'm working on with AI Agents.
        
         | calcsam wrote:
         | that's awesome!
        
         | ash_091 wrote:
         | So an AI Mastra Class?
        
       | 5Qn8mNbc2FNCiVV wrote:
        | I thought Kyle Mathews was the creator of Gatsby
        
         | calcsam wrote:
         | Kyle started the project, I started helping pretty shortly
         | thereafter, then he and I cofounded the company together.
         | Kyle's working on ElectricSQL now but is using us, we're doing
         | a meetup together next month, etc.
        
           | thruflo wrote:
           | Come along :)
           | 
           | https://lu.ma/sync-sf
        
         | dang wrote:
         | I put the "creators" bit in the title because I thought readers
         | would find it interesting. Sorry if that was not-quite-right!
         | I've turned them into developers now.
        
       | harliem wrote:
       | Impressive. Have you seen any success with Mastra being used to
       | build voice agents? Our company has been experimenting with VAPI,
       | which just launched a workflow builder into open beta
       | (https://docs.vapi.ai/workflows), but it has a lot of rough
       | edges.
        
         | calcsam wrote:
         | We're just starting to do that and have a few TTS providers:
         | ElevenLabs, OpenAI, PlayAI.
         | 
         | We hear a lot from people who are outgrowing the voice agent
         | platforms and moving to something like pipecat (in Python), and
         | we'd love to be the JS option.
        
         | soulofmischief wrote:
         | If you'd like, feel free to reach out to me via email with your
         | requirements and we can get a conversation going. I've built a
          | few voice agent systems in both Python and JavaScript and would
         | love to hear about what issues you're running into. Might be
         | able to build what you need.
        
       | levensti wrote:
       | Super excited to try out the new agent memory features
        
         | swyx wrote:
         | interesting to contrast the recent memory releases
         | 
         | - https://mastra.ai/docs/agents/01-agent-memory
         | 
         | - https://blog.langchain.dev/langmem-sdk-launch/
         | 
         | - https://help.getzep.com/concepts#adding-memory
         | 
         | not sure where all this is leading yet but glad people are
         | exploring.
        
           | calcsam wrote:
           | 100% and agree with this, we saw the langmem stuff last night
           | 
           | imho getting some sort of hierarchical memory is conceptually
           | fairly straightforward, the tricky part is having the storage
           | and vector db pieces well integrated so that the apis are
           | clean
        
         | calcsam wrote:
         | let us know what you think!
        
       | _1 wrote:
       | This looks really nice. We've been considering developing
       | something very similar in-house. Are you guys looking at
        | supporting MLC Web LLM, or some other local models?
        
         | calcsam wrote:
         | Yup! We rely on the AI SDK for model routing, and they have an
         | Ollama provider, which will handle pretty much any local model.
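          | 
          | A minimal sketch (the Ollama provider is a community AI SDK
          | package; name assumed):
          | 
          |     import { Agent } from "@mastra/core/agent";
          |     // community Ollama provider for the AI SDK
          |     import { ollama } from "ollama-ai-provider";
          | 
          |     export const localAgent = new Agent({
          |       name: "Local Agent",
          |       instructions: "You are a helpful assistant.",
          |       model: ollama("llama3.1"),
          |     });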
        
       | kylemathews wrote:
       | Very excited about Mastra! We have a number of Agent-ic things
       | we'll be building at ElectricSQL and Mastra looks like a breath
       | of fresh air.
       | 
       | Also the team is top-notch -- Sam was my co-founder at Gatsby and
       | I worked closely with Shane and Abhi and I have a ton of
       | confidence in their product & engineering abilities.
        
         | cpursley wrote:
         | Why not use Elixir for agents as Electric is already heavily
         | invested? It's a much better fit than JS.
        
       | joshstrange wrote:
       | This looks awesome! Quick question, are there plans to support
        | SSE MCP servers? I see Stdio [0] is supported and I can always
       | run a proxy but SSE would be awesome.
       | 
       | [0] https://mastra.ai/docs/reference/tools/client
        
         | tybaa wrote:
         | Hey! Glad to hear you're excited about it! Yes, we're currently
         | working on improving our MCP support in general - we'll have
         | more to share soon, but part of that is supporting SSE servers
         | directly
        
           | joshstrange wrote:
           | Very cool. Like I said I can make it work with Stdio but I
            | have an SSE MCP proxy I wrote to combine multiple MCP servers
           | (just to make plugging in all my tools to a new client easier
           | to test). That said, I think after looking at the docs that
           | I'll be tempted to move my tools in directly but I probably
           | will keep them behind MCP for portability.
        
             | tybaa wrote:
             | Oh nice, did you write your own proxy or are you using
             | something like https://www.npmjs.com/package/mcp-proxy ?
        
               | joshstrange wrote:
               | I have used `mcp-proxy` but (afaik) you can only use it
               | 1-to-1 and I wanted an N-to-1 proxy so that instead of
               | configuring all my MCP servers in the multiple clients
               | I've tested out I could just add 1 server and pull in
               | everything.
               | 
               | I found `mcp-proxy-server` [0] which seemed like it would
               | do what I want but I ran into multiple problems. I added
               | some minor debug logging to it and the ball sort of
               | rolled downhill from there. Now it's more my code than
               | what was there originally but I have tool proxying
               | working for multiple clients (respecting sessionIds, etc)
               | and I think I've solved most all the issues I've run into
               | and added features like optional tool prefixing so there
               | isn't overlap between MCP servers.
               | 
               | Given what I know now, I don't think N-to-1 is quite as
               | useful as I thought. Or rather, it really depends on your
               | "client". If you can toggle on/off tools in your client
               | then it's not a big problem but sometimes you don't want
               | "all" the tools and if you client only allows toggling
               | per MCP server then you will have an issue.
               | 
               | I love the ideas of workflows and how you have defined
               | agents. I think my current issue is almost too many tools
               | and the LLM sometimes gets confused over which ones to
                | use. I'm especially thrilled with the HTTP endpoints you
                | expose for the agents. My main MCP server (my custom
                | tools I wrote, vs the third-party ones) exposes an HTTP
                | GUI for calling the tools (faster iteration vs trying it
                | through LLMs) and I've been using that and 3rd-party chat
                | clients (LibreChat and OpenWebUI) as my "LLM testing"
                | platform (because I wasn't aware of better options) but
                | neither of those tools let you "re-expose" the agents via
                | an API.
               | 
               | All in all I'm coming to the conclusion that 90% of MCP
               | servers out there are really cool for seeing what's
               | possible but it's probably best to write your own
               | tools/MCP since most all MCP servers are just thin
               | wrappers around an API. Also it's so easy to create an
               | MCP server that they are popping up all over the place
               | and often of low quality (don't fully implement the API,
                | take shortcuts for the author's use-case, etc). Using LLMs
                | to write the "glue" code from API->Tool is fairly minor
               | and I think is worth "owning". To sum that all up: I
               | think my usage of 3rd party MCP servers is going to trend
               | towards 0 as I "assimilate" MCP servers into my own
               | codebase for more control but I really like MCP as a way
               | to vend tools to various different LLM clients/tools.
               | 
               | [0] https://github.com/adamwattis/mcp-proxy-server
        
               | tybaa wrote:
               | Thanks for sharing! It's so helpful to hear real world
               | experiences like this. Would you be interested in meeting
               | up on a call sometime? I'd love to chat about how you're
               | using MCP to help inform how we can make all of this
               | easier for folks. We're actively thinking about our APIs
               | for tool use and MCP right now.
        
               | joshstrange wrote:
               | I appreciate the offer but I think you'll probably find
               | someone better to talk to here in the comments.
               | 
               | MCP is super cool and I've loved playing with it but
               | playing with it is all I'm doing. I'm working on some
               | tools to use in my $dayJob and also just using it as an
                | excuse to learn about LLMs and play with new tech. Most of
                | my work is writing tools that connect to our
               | distributed fleet of servers to collect data, run
               | commands, etc. My goal is to build a SlackOps-type bot
               | that can provide extra context about errors we get in
               | Slack (Pull the latest commits/PRs around that code, link
               | to current deployed version, provide all the logs for the
               | request that threw an error, check system stats, etc).
               | And while I have tools written to do all of that I'm
               | still working on bringing it all together in something
               | more than a bot I can invoke from Slack and make MCP
               | calls.
               | 
               | All that to say, I'm not a professional user of
               | MCP/Mastra and my opinion is probably not one you want
               | shaping your framework.
        
               | tybaa wrote:
               | No worries! But I am definitely interested in chatting
               | still - that you've tried it in multiple ways, ran into
               | pain points, and overcame those in your own ways is super
               | interesting and valuable. Playing around is how everyone
               | starts and this "agents with tool use in prod" game is
               | still very new. These APIs should work well and make
                | sense for folks who are just getting into it as well as
                | folks who have been around the block. If you change your
               | mind let me know! Would love to chat
        
         | nilslice wrote:
         | we have a tutorial that covers this!
         | 
         | https://docs.mcp.run/tutorials/mcpx-mastra-ts
         | 
         | you don't even need to use SSE, as mcp.run brings the tools
         | directly to your agent, in-process, as secure wasm modules.
         | 
          | mcp.run does also have SSE support for all its servlet tools
          | in the registry, though.
        
         | tybaa wrote:
          | Added support in this PR:
          | https://github.com/mastra-ai/mastra/pull/1957! Isn't shipped
          | just yet but will be soon
        
       | epolanski wrote:
        | "By the developers of Gatsby" is a minus, not a plus; it makes
        | me think this is going to be the next abandonware.
        
         | user9999999999 wrote:
         | gatsby was one of the first static react frameworks, now you
         | have things like nextjs remix astro etc... i dont think
         | abandonware is fair, thats just the way software goes
        
           | mplewis wrote:
           | The Gatsby team made a lot of promises upon which they didn't
           | follow through. Not a great way to build confidence in your
           | next big project.
        
             | DSchau wrote:
             | ... such as?
        
         | squillion wrote:
         | Gatsby never made sense to me. Weird design decisions I
         | couldn't find any plausible reason for. As soon as Next.js
         | became capable of doing SSG I convinced my team to abandon
         | Gatsby. Definitely a minus, sorry.
        
         | paultannenbaum wrote:
          | Surprised this comment is not higher. Gatsby was one of the
          | worst technologies I have worked with in my long career of
          | working with various JS libraries and frameworks. I'm sure
          | the team is smart and capable, but I would not be advertising
          | their work with Gatsby.
        
         | benatkin wrote:
          | The character Gatsby didn't function very well either (as far
          | as being a successful person goes; I quite liked the book and
          | he functioned well as a character) :)
          | 
          | However, Gatsby had a couple of things that were really
          | interesting about it - especially runtime type safety through
          | GraphQL and doing headless WordPress.
        
       | gregpr07 wrote:
       | Any timeline for python?
        
         | calcsam wrote:
         | Not planning on it -- we think frameworks should be single-
         | language
        
       | brap wrote:
       | I don't really understand agents. I just don't get why we need to
       | pretend we have multiple personalities, especially when they're
       | all using the same model.
       | 
       | Can anyone please give me a usecase, that couldn't be solved with
       | a single API call to a modern LLM (capable of multi-step
       | planning/reasoning) and a proper prompt?
       | 
       | Or is this really just about building the prompt, and giving the
       | LLM closer guidance by splitting into multiple calls?
       | 
       | I'm specifically _not_ asking about function calling.
        
         | 2pointsomone wrote:
          | I don't work in prompt engineering but my partner does, and
          | she tells me there are numerous needs for agents: cases where
          | you want something that goes and seeks things on the live
          | web, then comes back, and you make sense of the found data
          | with the LLM and pre-written prompts (using that data as
          | variables), and then possibly go back out to the web if the
          | task remains unsolved.
        
           | dimgl wrote:
           | Can't that be solved with regular workflow tools and prompts?
           | Is that what an agent is, essentially?
           | 
           | Or is an agent a collection of prompts with a limited set of
           | available tools?
        
         | blainm wrote:
         | One of the key limitations of even state-of-the-art LLMs is
         | that their coherence and usefulness tend to degrade as the
         | context window grows. When tackling complex workflows, such as
         | customer support automation or code review pipelines - breaking
         | the process into smaller, well-defined tasks allows the model
         | to operate with more relevant and focused context at each step,
         | improving reliability.
         | 
         | Additionally, in self-hosted environments, using an agent-based
         | approach can be more cost-effective. Simpler or less
         | computationally intensive tasks can be offloaded to smaller
         | models, which not only reduces costs but also improves response
         | times.
         | 
         | That being said, this approach is most effective when dealing
         | with structured workflows that can be logically decomposed. In
         | more open-ended tasks, such as "build me an app," the results
         | can be inconsistent unless the task is well-scoped or has
         | extensive precedent (e.g., generating a simple Pong clone). In
         | such cases, additional oversight and iterative refinement are
         | often necessary.
        
         | weego wrote:
        | I don't get it either. Watching implementations on YouTube
        | etc., it primarily feels like a load of verbiage trying to
        | carve out a sub-industry, but the meat on the bone just seems
        | to be defining discrete units of AI actions that can be
        | chained into workflows that interact with non-AI services.
        
           | jacobr1 wrote:
            | > defining discrete units of AI actions that can be chained
            | into workflows that interact with non-AI services.
            | 
            | You got it. But that is the interesting part! To make AI useful,
           | beyond basic content generation in a chat context you need
           | interaction with the outside world. And you may need
           | iterative workflows that can spawn more work based on the
           | output of those interactions. The focus on Agents as
           | _personas_ is a tangent to the core use case. We could just
           | call this stuff  "AI Workflow Orchestration" or something ...
           | and it would remain pretty useful!
        
             | karn97 wrote:
              | I won't trust an agent with anything by itself in its
              | current state though.
        
         | bravura wrote:
         | https://aider.chat/2024/09/26/architect.html
         | 
         | "Aider now has experimental support for using two models to
         | complete each coding task:
         | 
         | An Architect model is asked to describe how to solve the coding
         | problem.
         | 
         | An Editor model is given the Architect's solution and asked to
         | produce specific code editing instructions to apply those
         | changes to existing source files.
         | 
         | Splitting up "code reasoning" and "code editing" in this manner
         | has produced SOTA results on aider's code editing benchmark.
         | Using o1-preview as the Architect with either DeepSeek or
         | o1-mini as the Editor produced the SOTA score of 85%. Using the
         | Architect/Editor approach also significantly improved the
         | benchmark scores of many models, compared to their previous
         | "solo" baseline scores (striped bars)."
         | 
         | In particular, recent discord chat suggests that o3m is the
         | most effective architect and Claude Sonnet is the most
         | effective code editor.
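          | 
          | A minimal sketch of that architect/editor split using the AI
          | SDK directly (model IDs and prompts illustrative):
          | 
          |     import { generateText } from "ai";
          |     import { openai } from "@ai-sdk/openai";
          |     import { anthropic } from "@ai-sdk/anthropic";
          | 
          |     async function architectEdit(task: string) {
          |       // 1) Architect: reason about the change; no code yet.
          |       const plan = await generateText({
          |         model: openai("o3-mini"),
          |         prompt: `Describe how to solve this coding task:\n${task}`,
          |       });
          |       // 2) Editor: turn the plan into concrete edits.
          |       const edits = await generateText({
          |         model: anthropic("claude-3-5-sonnet-latest"),
          |         prompt: `Write the code edits for this plan:\n${plan.text}`,
          |       });
          |       return edits.text;
          |     }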
        
         | andrewmutz wrote:
          | Modularity. We could put all code in a single function (it is
          | possible), but we prefer to organize it differently to make
          | it easier to develop and reason about. Agents are similar.
        
         | coffeemug wrote:
         | If you ignore the word "agent" and autocomplete it in your mind
         | to "step", things will make more sense.
         | 
         | Here is an example-- I highlight physical books as I read them
         | with a red pen. Sometimes my highlights are underlines,
         | sometimes I bracket relevant text. I also write some comments
         | in the margins.
         | 
         | I want to photograph relevant pages and get the highlights and
         | my comments into plain text. If I send an image of a
         | highlighted/commented page to ChatGPT and ask to get everything
         | into plain text, it doesn't work. It's just not smart enough to
         | do it in one prompt. So, you have to do it in steps. First you
         | ask for the comments. Then for underlined highlights. Then for
         | bracketed highlights. Then you merge the output. Empirically,
         | this produces much better results. (This is a really simple
         | example; but imagine you add summarization or something, then
         | the steps feed into each other)
         | 
         | As these things get complicated, you start bumping into
         | repeated problems (like understanding what's happening between
         | each step, tweaking prompts, etc.) Having a library with some
         | nice tooling can help with those. It's not especially magical
         | and nothing you couldn't do yourself. But you also could write
         | Datadog or Splunk yourself. It's just convenient not to.
         | 
         | The internet decided to call these types of programs agents,
         | which confuses engineers like you (and me) who tend to think
         | concretely. But if you get past that word, and maybe write an
         | example app or something, I promise these things will make
         | sense.
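          | 
          | As a sketch, those steps might look like this with the AI SDK
          | (model choice and prompts illustrative):
          | 
          |     import { generateText } from "ai";
          |     import { openai } from "@ai-sdk/openai";
          | 
          |     async function extractPage(image: Buffer) {
          |       const ask = (text: string) =>
          |         generateText({
          |           model: openai("gpt-4o"),
          |           messages: [{
          |             role: "user",
          |             content: [{ type: "text", text }, { type: "image", image }],
          |           }],
          |         });
          |       // One focused prompt per step, then merge the outputs.
          |       const comments = await ask("Transcribe only the margin comments.");
          |       const underlined = await ask("Transcribe only the underlined text.");
          |       const bracketed = await ask("Transcribe only the bracketed text.");
          |       return [comments.text, underlined.text, bracketed.text].join("\n\n");
          |     }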
        
           | fryz wrote:
           | To add some color to this
           | 
           | Anthropic does a good job of breaking down some common
           | architecture around using these components [1] (good outline
           | of this if you prefer video [2]).
           | 
           | "Agent" is definitely an overloaded term - the best framing
           | of this I've seen is aligns more closely with the Anthropic
           | definition. Specifically, an "agent" is a GenAI system that
           | dynamically identifies the tasks ("steps" from the parent
           | comment) without having to be instructed that those are the
           | steps. There are obvious parallels to the reasoning
           | capabilities that we've seen released in the latest cut of
           | the foundation models.
           | 
           | So for example, the "Agent" would first build a plan for how
           | to address the query, dynamically farm out the steps in that
           | plan to other LLM calls, and then evaluate execution for
           | correctness/success.
           | 
            | [1] https://www.anthropic.com/research/building-effective-agents
            | 
            | [2] https://www.youtube.com/watch?v=pGdZ2SnrKFU
        
             | eric-burel wrote:
              | This sums up as ranging from multiple LLM calls to build a
              | smart feature, to letting the LLM decide what to do next.
              | I think you can go very far with the former, but the
              | latter is more autonomous in unconstrained environments
              | (like chatting with a human etc.)
        
         | jacobr1 wrote:
         | One way to think about it is job orchestration. You end up with
         | some kind of DAG of work to execute. If all the work you are
         | doing is based on context from the initiation of the workflow,
         | then theoretically you could do everything in a single prompt.
         | But more interesting is when there is some kind of real-world
         | interaction, potentially multiple. Such as a websearch, or
         | executing code, calling an API. Then you take action based on
          | the result of them, which in turn might trigger another
         | decision to take some other action, iteratively, and
         | potentially branching.
        
         | nsonha wrote:
         | Without checking out this particular framework, the word is
         | sometimes overloaded with that meaning (LLM personality), but
         | actually in software engineering in general, "agent" generally
         | means something with its own inner loop and branching logic
        | (agent as in autonomy). It's a necessary abstraction when you
         | compose multiple workflows together under the same LLM
         | interface, things like which flow to run next, and edge case
         | handling for each of them etc.
        
       | fuddle wrote:
       | "You may not provide the software to third parties as a hosted or
       | managed service" - The Elastic v2 license isn't actually open
       | source like your title mentions: "Open-source JS agent framework"
       | 
       | https://github.com/mastra-ai/mastra/blob/main/LICENSE
        
         | calcsam wrote:
         | I mentioned that in the comment. We're using Elastic v2 for now
         | because we want users to be able to do anything with us, but
         | protect from eg AWS
        
           | Tomte wrote:
           | So it's a lie.
        
           | fuddle wrote:
           | If the license isn't open source, then the SDK shouldn't be
           | labeled as open source.
        
       | monideas wrote:
       | Are there any plans to add automatic retries for individual steps
       | (with configurable max attempts and backoff strategy)?
        
       | davedx wrote:
       | Why is it on top of Vercel's platform?
        
         | netcraft wrote:
          | It looks like they're using the Vercel AI SDK, which really
          | isn't the Vercel platform and doesn't have anything to do
          | with the rest of Vercel. It's actually quite nice and full
          | featured.
        
         | calcsam wrote:
         | It's not. It's on top of AI SDK, which is a popular open source
         | library maintained by Vercel.
        
       | netcraft wrote:
        | This looks really great! How do you make money? Do you charge
        | for deploying these to your platform? I couldn't find anything
        | on pricing.
        
         | calcsam wrote:
          | If you watch to the end of the demo video, you'll see the
          | cloud platform we're building. Right now it's in beta.
        
       | pablodecm wrote:
        | Very interesting set of abstractions that address lots of the
        | pain points when building agents. Also, the team is super
        | eager to help out!
        
         | calcsam wrote:
         | thank you!
        
       | alanwells wrote:
        | Happy Mastra user here! It strikes the right balance between
        | letting me build with higher-level abstractions and providing
        | lower-level controls when needed. I looked at a handful of
        | other frameworks before getting started, and the clarity and
        | ease of use of Mastra stood out. Nice work.
        
         | calcsam wrote:
         | thank you!
        
       | eliotthehacker wrote:
       | I basically learned everything about how agents work by using
       | Mastra's framework and going through their documentation. The
       | founders are also super hands-on and love to help!
        
       | dhorthy wrote:
       | i am very long on TS as the future of agent applications. nice
       | work team
        
         | calcsam wrote:
         | thanks!!
        
       | _pdp_ wrote:
       | I don't want to be that person but there are hundreds of other
       | similar frameworks doing more or less the same thing. Do you know
       | why? Because writing a framework that orchestrates a number of
       | tools with a model is the easy part. In fact, most of the time
        | you don't even need a framework. All of these frameworks focus
        | on the trivial, and you can tell that simply by browsing the
        | examples section.
       | 
       | This is like 5% of the work. The developer needs to fill the
       | other 95% which involves a lot more things that are strictly
        | outside the scope of the framework.
        
         | cpursley wrote:
          | I agree, and it feels like JS is just the wrong runtime for
          | agents. Really, languages that can model state in sane ways
          | and have a good concurrency story, like Elixir, make much
          | more sense.
         | 
         | And here's a fun exercise: ask Claude via Cursor or Perplexity
         | with R1 to create a basic agentic framework for you in your
         | language of choice on top of Instructor.
        
         | fullstackwife wrote:
         | You could describe all frontend JS frameworks the same way: you
         | spend 95% of time on content and mechanics of your webapp,
         | while the framework provides the easy 5%.
        
           | chipgap98 wrote:
           | I think most JS frameworks save more than 5% of the effort
           | for developers compared to writing raw JS. Especially when
           | you include the ecosystem around those frameworks
        
         | fsndz wrote:
         | True. That's the reason I see a lot of people dropping similar
         | frameworks like LangChain recently:
         | https://medium.com/thoughts-on-machine-learning/drop-langcha...
        
       | fnikacevic wrote:
        | Do the workflows support voice-to-voice models like OpenAI's
        | realtime API? If something like that exists, I'd be curious.
        
       ___________________________________________________________________
       (page generated 2025-02-19 23:00 UTC)