AceAssist

AceAssist is an AI tutor that understands the full context of a student's academic life.

Timeline

February 2025, 1-month sprint + self-initiated iteration while waiting for YC results

Company

AceAssist

Role

Sole designer — user research, UI/UX design, prototyping, demo video production and voiceover

Team

1 founder/engineer (ex-Alibaba, CMU), 1 designer (me), fully remote

Research

7 qualitative interviews recruited via userinterview.com

Outcome

YC application submitted with full demo video. Core screens redesigned on own initiative while waiting for results. Team sunsetted after YC rejection.

Link

Problem

Students juggle 4–5 broken tools. No AI understands their full academic context.

Method/Design

College students juggle 4–5 disconnected tools: ChatGPT, Quizlet, YouTube, calendar, email. None of them talk to each other.

Existing AI tools report accuracy as low as 45% — but every student uses them anyway. Adoption isn't the problem. Trust and fragmentation are.

No single tool understands a student's full academic context. That's the gap AceAssist was built to close.

Research

Seven interviews in a week revealed the real problem wasn't AI quality — it was fragmentation.

Method/Design

7 qualitative interviews in a week — pre-med, CS, economics, physics, dance/psych, community college, MBA analytics. Diverse enough to surface patterns that weren't specific to one type of student.

INSIGHT 1

Tool fragmentation is universal

Every student used 4+ disconnected tools. No unified environment existed.

INSIGHT 2

Students tolerate broken AI because nothing better exists.

All 7 used ChatGPT despite its reported 45% accuracy. The problem wasn't adoption — it was trust.

INSIGHT 3

Price consensus at $5–10/month.

Unusually consistent across very different student profiles. A real willingness-to-pay signal.

"Interviewing 7 students in a week while racing toward a YC deadline felt reckless. It was the opposite. Without those conversations, we would have built a better ChatGPT wrapper. Instead, we understood the actual problem was fragmentation — not AI quality."

YC Sprint & First Version

One month to ship a working YC demo. Fast and functional — a conscious tradeoff, not carelessness.

Method/Design

KEY DECISION

Unified dashboard as the primary surface.

User Value

Courses, deadlines, AI actions, and inbox in one view. Addresses Insight 1.

Business Value

Value prop visible on first login. Critical for $5–10/month conversion.

The demo video was produced and voiced entirely by me. The opening line — "juggling all your courses and deadlines" — is the fragmentation insight in plain language.

"The deadline was hit. But I knew the UI wasn't where it needed to be — and I wasn't willing to leave it there."

Second Version

Nobody asked for this. The UI wasn't where it needed to be — so I rebuilt it before knowing the outcome.

Method/Design

The problem wasn't visual. It was structural. Information was organized by category rather than by user intent.

OPTION A

Polish the existing screens visually.

Faster, and the layout already worked functionally. Rejected — polishing a structural problem produces prettier friction.

OPTION B — CHOSEN

Restructure core screens around surfacing action.

From "organize information" → "surface action." AI chat moved front and centre. Required rethinking the layout, not restyling it.

KEY DECISION

AI prompt bar as the hero element on home screen.

User Value

The most important action — asking for help — is the first thing users see.

Business Value

AI is the core differentiator. First touchpoint = higher value prop exposure.

KEY DECISION

Notification panel restructured around Act / Know / Explore taxonomy.

User Value

Notifications sorted by intent, not content type. Addresses Insight 2.

Business Value

An intent-based taxonomy holds up as features grow; a content-type taxonomy breaks.
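
No code shipped for this — but as an illustration of why the intent-based split scales, the Act/Know/Explore taxonomy can be modelled as a single field on each notification. All names below are hypothetical:

```typescript
// Hypothetical sketch of an intent-first notification taxonomy.
// "act" = needs a user action, "know" = purely informational,
// "explore" = optional discovery. A new notification source maps
// onto an existing intent instead of forcing a new panel category.
type Intent = "act" | "know" | "explore";

interface AppNotification {
  id: string;
  intent: Intent;
  title: string;
  source: string; // e.g. "calendar", "ai-summary", "inbox"
}

// Group notifications into the panel's three fixed sections.
function groupByIntent(
  items: AppNotification[]
): Record<Intent, AppNotification[]> {
  const groups: Record<Intent, AppNotification[]> = {
    act: [],
    know: [],
    explore: [],
  };
  for (const n of items) {
    groups[n.intent].push(n);
  }
  return groups;
}
```

The panel always renders the same three sections; adding a new feature only means tagging its notifications with one of the existing intents.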

"The impulse to polish instead of restructure was strong. But the research pointed at a structural problem, and I'd rather ship fewer screens that are architecturally right than more screens that are superficially better."

Reflection

Structure before surface.

Method/Design

The first instinct after the sprint was to polish. The research said otherwise — the problem was how information was organized, not how it looked. The Act/Know/Explore taxonomy, the AI prompt bar as hero element, the shift from organizing to surfacing action — none of these are visual changes. They're structural. I carry this forward: when something feels wrong in a design, check the architecture before reaching for the pixel tools.

What's unfinished — honest: Flashcards, summary, and notification detail screens weren't redesigned before the team sunsetted. Only two core screens got the full treatment. The scope I'd prioritize next: notification detail flows first (highest user frequency), then flashcard generation (most directly requested — Ashlyn and Shamar both named it).

What I'd measure if the project continued:

  • AI query frequency per course — is the contextual prompt bar being used?

  • Feature adoption rate per action type — which AI actions are trusted, which are ignored?

  • Week 2 retention — if it drops, investigate whether file-upload onboarding creates too much friction before value is delivered

"Seven interviews in a week isn't textbook methodology — but every design decision I made traces back to something a real student said. Speed and rigour aren't opposites if you know what questions to ask."

© 2026 Built by Henry Yang 楊添 - All rights reserved.
