Not

Hacker News!

Beta
Home
Jobs
Q&A
Startups
Trends
Users
Live
AI companion for Hacker News

Nov 23, 2025 at 8:43 PM EST

Mew Design – Natural language to editable UI/graphic design

bkidyy
1 point
1 comment

Mood: excited
Sentiment: positive
Category: startup_launch
Key topics: ui_design, graphic_design, natural_language_processing, design_tools

Discussion Activity: Light discussion
First comment: N/A
Peak period: 1 (Hour 1)
Avg / period: 1
Comment distribution: 1 data point (based on 1 loaded comment)

Key moments

  1. Story posted: Nov 23, 2025 at 8:43 PM EST (6h ago)
  2. First comment: Nov 23, 2025 at 8:43 PM EST (0s after posting)
  3. Peak activity: 1 comment in Hour 1 (the hottest window of the conversation)
  4. Latest activity: Nov 23, 2025 at 8:43 PM EST (6h ago)


Discussion (1 comment)
bkidyy
6h ago
Hi HN, I’m the founder of Mew Design.

Like many of you, I love tools like Midjourney and DALL-E, but I was frustrated that they output flattened raster images. If the text is misspelled or the layout is slightly off, the whole image is unusable.

We built Mew Design to solve the "text hallucination" problem in AI graphics.

The Tech: Instead of generating a single pixel layer, we use a Multi-Agent System (currently running on Gemini 3.0 and customized models).

One agent acts as the "Art Director," parsing your prompt to understand intent (e.g., "minimalist event poster").

It dispatches tasks to specialized sub-agents: one generates the background visuals, while another calculates the typography hierarchy and layout vectors.

The result is a fully editable design where text, images, and shapes are separate layers—not a flat JPEG.
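The dispatch pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not Mew Design's actual API: the agent functions, the `Layer`/`Design` classes, and the stand-in string logic are all hypothetical placeholders for the real model calls.

```python
from dataclasses import dataclass, field

@dataclass
class Layer:
    kind: str       # "background", "text", or "shape"
    content: str    # image description or the actual text run
    editable: bool  # text/shape layers stay editable after generation

@dataclass
class Design:
    intent: str
    layers: list[Layer] = field(default_factory=list)

def art_director(prompt: str) -> str:
    """Parse the prompt into a design intent (stand-in for the LLM call)."""
    return prompt.lower().strip()

def background_agent(intent: str) -> Layer:
    """Sub-agent producing the raster background visuals."""
    return Layer(kind="background", content=f"visuals for: {intent}", editable=False)

def typography_agent(intent: str) -> Layer:
    """Sub-agent laying out the headline as an editable text layer."""
    return Layer(kind="text", content=intent.title(), editable=True)

def generate(prompt: str) -> Design:
    # The "Art Director" parses intent, then dispatches to sub-agents;
    # each returns a separate layer, so the output is a layered document
    # rather than a single flattened image.
    intent = art_director(prompt)
    return Design(intent=intent,
                  layers=[background_agent(intent), typography_agent(intent)])

design = generate("Minimalist event poster")
print([(l.kind, l.editable) for l in design.layers])
# [('background', False), ('text', True)]
```

The key design choice this sketch captures is that only the background is rasterized; typography and layout live in their own structured layers.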

What makes it different:

Editable Text: You can actually correct typos or change fonts after generation.

Vector/Layout Awareness: It understands "logo top right" or "large headline" better than standard diffusion models.
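A toy example of why separate layers matter, assuming a hypothetical layer schema: fixing a generated typo becomes a field update on the text layer, while the raster background is untouched.

```python
# Hypothetical layered document; the schema is illustrative only.
poster = {
    "layers": [
        {"kind": "background", "src": "poster_bg.png"},
        {"kind": "text", "value": "Sumer Fesival", "font": "Inter", "size": 64},
    ],
}

def edit_text(design: dict, old: str, new: str) -> None:
    """Replace a text run in place; raster layers are not modified."""
    for layer in design["layers"]:
        if layer["kind"] == "text" and layer["value"] == old:
            layer["value"] = new

edit_text(poster, "Sumer Fesival", "Summer Festival")
print(poster["layers"][1]["value"])  # Summer Festival
```

In a flat JPEG the same fix would require regenerating the whole image and hoping the rest of the layout survives.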

We are currently exploring how far we can push agentic workflows in design. I’d love to hear your feedback on the generation quality and the editing experience!

(We just integrated Gemini 3.0, so speed should be improved.)

View full discussion on Hacker News
ID: 46029406 · Type: story · Last synced: 11/24/2025, 1:44:07 AM

Want the full context?

Jump to the original sources

Read the primary article or dive into the live Hacker News thread when you're ready.

Read Article · View on HN

© 2025 Not Hacker News! — independent Hacker News companion.

Not affiliated with Hacker News or Y Combinator. We simply enrich the public API with analytics.