[HN Gopher] Show HN: Lume - automate data mappings using AI
___________________________________________________________________
Show HN: Lume - automate data mappings using AI
Hi HN! I'm Nicolas, co-founder of Lume, a seed-stage startup
(https://www.lume.ai/). At Lume, we use AI to automatically
transform your source data into any desired target schema, so
onboarding client data or integrating with new systems takes
seconds rather than days or weeks. In other words, we use AI to
automatically map data between any two data schemas, and output the
transformed data to you. We are live with customers and are just
beginning to open up our product to more prospects.

Although we do not have a sandbox yet, here is a video walkthrough
of how the product works:
https://www.loom.com/share/c651b9de5dc8436e91da96f88e7256ec?....
And here is our documentation: https://docs.lume.ai. We would love
to get you set up to test it, so please reach out.

Using Lume: we do not have self-serve yet. In the meantime, you can
request full access to our API through the Request Access button on
https://www.lume.ai. The form asks for quick information (e.g., an
email address) so that I can reach out to onboard you. Please
mention you came from HN and I'll prioritize your request.

How our full API product offering works: through Lume's API, you
specify your source data and target schema. Lume's engine, which
includes AI and rule-based models, creates the desired
transformation under the hood by producing the necessary logic, and
returns the transformed data in the response. We also support
mapper deployment, which lets you edit and save the AI-generated
mappers for important production use cases, so you can confidently
reuse a static and deterministic mapper in your data pipelines.
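
To make that concrete, here is a rough sketch of the request/response
flow; the endpoint, field names, and response shape below are
simplified for illustration and are not the exact API (see
https://docs.lume.ai for the real request format):

    # Illustrative sketch only -- endpoint and field names are simplified;
    # see https://docs.lume.ai for the actual API.
    import requests

    LUME_API = "https://api.lume.ai"  # illustrative base URL
    headers = {"Authorization": "Bearer <your-api-key>"}

    payload = {
        "name": "acme-orders-mapper",      # a reusable, deployable mapper
        "target_schema": {                 # the schema you want back
            "type": "object",
            "properties": {
                "customer_name": {"type": "string"},
                "amount": {"type": "number"},
            },
            "required": ["customer_name", "amount"],
        },
        "source_data": [                   # raw client records
            {"cust_nm": "Acme Corp", "amt_usd": "1,204.50"},
        ],
    }

    job = requests.post(f"{LUME_API}/jobs", json=payload, headers=headers).json()
    print(job["status"], job.get("mapped_data"))  # transformed rows in the response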
Our clients have three primary use cases:

- Ingest Client Data: Each client you work with handles data
differently. They name, format, and handle their data in their own
way, which means you have to iteratively ingest each new client's
data.

- Normalize data from unique data systems: To provide business
value, your team needs to connect to various data providers or
handle legacy data. Creating pipelines from each one is time
consuming, and things as small as column name differences between
systems make it burdensome to get started.

- Build and maintain data pipelines: Creating different pipelines
that map to your target schema, whether for BI tooling, downstream
data processing, or other purposes, means you have to manually
create and maintain these mappings between schemas.

We're still trying to figure out pricing, so we don't have that on
our website yet - sorry, but we wanted to share this even though
it's still at an early stage. We'd love your feedback, ideas &
questions. Also, feel free to reach out to me directly at
nicolas@lume.ai. Thank you.
Author : nmachado
Score : 39 points
Date : 2023-12-06 17:37 UTC (5 hours ago)
(HTM) web link (www.lume.ai)
(TXT) w3m dump (www.lume.ai)
| mtmail wrote:
| The animation on the homepage puts my processor to 100% (Firefox
| browser). I know that's only a UI annoyance, and not really
| product feedback, but it made me close the browser tab faster
| than usual and other users might, too.
| nmachado wrote:
| Thank you for calling this out! I'll look into getting a
| smaller version in there.
| r_singh wrote:
| So this is like Flatfile but also for APIs?
| nmachado wrote:
| Great question. We focus on embedding in your data pipelines
| themselves. So, our AI automatically maps data, and can be used
| as a data pipeline indefinitely. Indeed, it can connect to APIs
| and handle dynamic output or edge cases you did not expect.
| Also, we work on handling any complexity of transformations
| (1-1 mappings, all the way to string manipulation,
| classification, aggregations, etc.).
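|
| For a rough sense of what that covers, a simplified, illustrative
| mapper (not actual generated code) might look like:
|
|     # Illustrative only -- not actual generated code.
|     def map_record(src: dict) -> dict:
|         return {
|             # 1-1 mapping with a rename
|             "customer_name": src["cust_nm"],
|             # string manipulation: strip currency formatting
|             "amount": float(src["amt_usd"].replace(",", "")),
|             # simple classification into a target enum
|             "tier": "enterprise" if src["seats"] > 100 else "smb",
|         }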
| deely3 wrote:
| Hi, could you please roughly explain how you verify that a
| transformation is successful and correct?
| robert-te-ross wrote:
| Yes! Once the transformation job has been completed, you can
| review the mapping in the returned job payload and our Lume
| dashboard. You can review, edit, and deploy the mapping
| pipeline from the dashboard. There are two ways to fix
| mappings. You can edit the target schema (e.g., make a required
| target field nullable) or manually override our mapping by
| giving the correct mapping value from the source data. I have
| also attached a Loom video showing this workflow:
| https://www.loom.com/share/95e47ead923d4911b647456174142e00
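|
| To make the first option concrete (the schema format here is a
| simplified, JSON-Schema-style stand-in, not the exact format):
|
|     # Simplified, illustrative target schema -- making a required
|     # field nullable so the mapper no longer fails when the source
|     # has no value for it.
|     target_schema = {
|         "type": "object",
|         "properties": {
|             "first_name": {"type": "string"},
|             # was {"type": "string"} and listed under "required"
|             "middle_name": {"type": ["string", "null"]},
|         },
|         "required": ["first_name"],  # middle_name no longer required
|     }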
| qarl wrote:
| Best wishes from qarl, co-founder of Lume (1998)
|
| https://web.archive.org/web/19981201053816/http://www.lume.c...
| nmachado wrote:
| wow! It's a pleasure, qarl :) Reach out anytime.
| sighansen wrote:
| Are your transformations written in SQL?
| nmachado wrote:
| They are written in Python, but SQL is on the roadmap.
| paddy_m wrote:
| Check out generating ibis expressions, which can output SQL and
| run on many dataframe backends (pandas, polars, modin, ...).
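|
| A rough sketch of the idea (table and column names made up for
| illustration):
|
|     import ibis
|
|     # unbound table standing in for a client's raw data
|     src = ibis.table({"cust_nm": "string", "amt_usd": "float64"}, name="raw_orders")
|
|     # the "mapper" expressed once as an ibis expression
|     mapped = src.select(
|         customer_name=src.cust_nm.upper(),
|         amount=src.amt_usd,
|     )
|
|     # compile to SQL, or execute against pandas/polars/DuckDB backends
|     print(ibis.to_sql(mapped, dialect="duckdb"))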
| ldjkfkdsjnv wrote:
| These are all just OpenAI wrappers with a nice UX; better to
| build your own prompt and go straight to the source. More
| visibility into errors/edge cases and the ability to leverage new
| model capabilities as they come out. What's more, as your use
| case gets more complex, you will outgrow these APIs. You could
| have just written your own prompt to begin with and added edge
| cases as they arose.
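|
| Sketch of the DIY version (prompt, model, and schema details are
| up to you):
|
|     from openai import OpenAI
|
|     client = OpenAI()  # reads OPENAI_API_KEY from the environment
|
|     source_sample = {"cust_nm": "Acme Corp", "amt_usd": "1,204.50"}  # your raw record
|     target_schema = {"customer_name": "string", "amount": "number"}  # your target fields
|
|     prompt = (
|         "Write a Python function map_record(src: dict) -> dict that maps "
|         f"records shaped like {source_sample} into this target schema: "
|         f"{target_schema}. Return None for fields that cannot be mapped."
|     )
|
|     resp = client.chat.completions.create(
|         model="gpt-4",
|         messages=[{"role": "user", "content": prompt}],
|     )
|     # review the generated mapper, then add edge cases to the prompt as they come up
|     print(resp.choices[0].message.content)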
| hermitcrab wrote:
| So could your AI automatically create a data flow that solves one
| of the 'Advent of Code' problems, such as:
|
| https://adventofcode.com/2023/day/1
| https://adventofcode.com/2023/day/2
| justsocrateasin wrote:
| Hey Nicolas! Best of luck on the product, you gave me a demo a
| while back and it's excellent - excited to see what comes of it.
| nmachado wrote:
| Thank you!
___________________________________________________________________
(page generated 2023-12-06 23:00 UTC)