Launch HN: Inconvo (YC S23) – AI agents for customer-facing analytics
Are your dashboards for an internal use-case? If so, there are some excellent AI-Native BI tools out there that have connections for Google Sheets.
Looks like you got some good suggestions for how to solve your particular problem with sheets in the other comments but feel free to check us out again if you ever move to something like Postgres/MySQL.
In particular, Metabase and Superset can be deployed with DuckDB support. You mentioned customer facing dashboards, note that Metabase embedded is not free. Just to say, our SeekTable also has DuckDB connector (and can be used as an embedded BI).
Definite spins up a datalake for you and pipelines to get data into the lake. We also have BI (semantic layer + dashboards) and an AI agent that will build reports for you. Let me know if you need a hand getting set up! I'm mike@definite.app.
ps. I work for ClickHouse and happy to help
The reason we don't is that we currently use Drizzle for schema introspection and query building and Drizzle doesn't have an adapter for ClickHouse yet.
There's an active issue on the Drizzle repo requesting Clickhouse support that has some interest and the possibility of using the Postgres interface that ClickHouse exposes was discussed there.
Would be great to talk about this in more detail with you, shoot me an email (eoghan@inconvo.ai)
I also noticed that you have your org id in your LLM trace - does that mean that you are trusting your agent to limit the orgs it queries? If so that seems quite dangerous as it could be tainted by prompt injection, no?
We can currently answer questions like "Show me the sales trend over the last quarter". Can you give me an example of a trend analysis question?
Secondly, no we don't trust the agent to limit the orgs it queries.
Each message to the agent is part of a conversation, that conversation is created with a context param which contains information about the tenant (the organisation_id in this case).
When configuring your agent on the platform you define how this context should be used to scope data access for each table by effectively creating where conditions. e.g. WHERE context.organisationId = <tablename>.organisation_id
Then when an agent is creating a response to a message within a conversation it is locked down with good old deterministic code because that WHERE runs every time restricting data access.
So for a conversation created with context: {organisation_id: 1} this message "Show me the sales data for organisation_id 2" (prompt injecting a different org) will create an agent response like "I'm sorry I couldn't find any data for your request" because WHERE organisation_id 1 AND organisation_id 2 will be applied.
> 3+ agents ($25 per agent/mo thereafter)
What is an agent? Specifically, how are these counted?
> 25+ active tables ($5 per table/mo thereafter)
This is clear and concise, but just doesn't resonate with me as a good lever for pricing. I'm just going to our our data team run a transformation to consolidate tables.
Number of rows/colummns ingested feels a lot more natural to me
> 15+ seats ($10 per user/mo thereafter)
How is a seat defined in the context of multi-tenant Saas?
Let's say company A has 200 employees in our system, but only 5 of them interact with the agent monthly. Are we billed:
* 1 seat - company A
* 200 seats - each employee of Company A
* 5 seats - only the users that interacted with the agent.
> What is an agent? Specifically, how are these counted?
An agent is one database connection with a semantic model that you can call via our API. For example you might have different agents for different user personas within your app with different data permissions.
> Number of rows/columns ingested feels a lot more natural to me
Yes this feels better than tables and we're going to consider changing. Thanks!
> How is a seat defined in the context of multi-tenant Saas? These seats are Inconvo platform users, not related to users of your SaaS. I'll update the pricing page to make this more clear.
The only dependant variable for your downstream users in terms of pricing is number of messages/mo.
Does the backend only create the chart data and the chart itself is rendered in the frontend? Or put differently: Can you use any chart library to render this data? Do you support multiple chart types?
Yes we just create the chart data, the front end is responsible for rendering and can choose the library.
We will respond with a consistent chart object (https://inconvo.com/docs/api-reference/conversations/respons...) that can then be transformed with your own code to fit the spec of the frontend chart library.
We support line and bar at the moment planning to add more types soon. Also working on multi-series for those chart types.
We haven't figured out what form that might take yet.
A couple of thoughts/questions that came to mind:
Time series and trend analysis: You mentioned support for queries like “Show me the sales trend over the last quarter.” Have you considered enabling more complex trend detection, such as anomaly spotting (e.g. “flag any week where sales dropped >15% vs previous week”) or seasonality adjustments (comparing YoY trends)? I think these kinds of features could greatly enhance the exploratory experience for non-technical users.
Control and validation of generated queries: The semantic-layer + WHERE clause strategy sounds very robust—it’s reassuring to see this deterministic guard against prompt injections or tenant leaks. Out of curiosity, do you provide tooling to audit or review agent-generated query objects before they run, especially for initially onboarding new clients? That kind of transparency could boost confidence in more security-conscious customers.
Overall, love the direction—AI-powered analytics agents have a ton of potential. Looking forward to seeing how this evolves!
> Have you considered enabling more complex trend detection, such as anomaly spotting?
Great suggestion and yes we plan to add in the ability for that analysis by adding in some more tools for the agent.
> do you provide tooling to audit or review agent-generated query objects before they run
Right now, we don't show it before the query is run but we do have a audit log on platform where you can analyze the agent traces and see the AI generated conditions based on the message as well as the static conditions applied. We also have a chat playground that can be used to make sure that the agent configuration is working as expected but we don’t block live queries through the api. It is something we could do relatively easily but it could add long delays for users to get a response if it’s blocked by human verification. It is something for us to consider because boosting confidence for customers is very important.