
Product Research, Behavioral Science, Learning Design, Human-AI Collaboration
Sometimes, it means building things to see what actually happens:
selected studies
Making AI Legible to Teachers
Teachers are constantly trying to answer a simple question: how is this student doing, and what should I do next? In many learning products, the answer exists, but it’s spread across signals: scores, pathways, remediation, and usage. Interpreting it takes time, context, and trust.
What I was trying to understand
• What information actually helps teachers make a decision in the moment
• Where more explanation helps vs where it creates friction
• How AI summaries change perception of the system
What we learned
• Teachers don’t need full explanations, but they do need to know if something requires action
• Timing matters more than completeness
• Longer AI explanations sometimes reduce confidence instead of increasing it
• Lightweight signals and summaries serve different purposes
What changed
• We shifted from “explain everything” to “support decisions in context”
• Prioritized simple indicators for quick interpretation
• Scoped AI summaries to short, focused outputs tied to action
Post-Launch Learning
(aka What Happens After You Ship?)
Most teams are good at launching features. Fewer are good at learning from them in flight. I’ve been working on ways to make post-launch learning more structured, faster, and actually usable in real product workflows.
What I was trying to understand
• How teams decide whether something is “working” after launch
• What signals are actually useful vs just available
• Where AI can help vs where it adds noise
What we learned
• Teams default to whatever data is easiest to access, not most meaningful
• Qualitative signals are often delayed or ignored entirely
• AI is helpful for structuring messy inputs, but not for deciding what matters
• Speed matters more than completeness in post-launch learning
What changed
• Focused on lightweight, repeatable evaluation loops
• Designed systems that combine quick signals with targeted follow-ups
• Used AI to organize inputs, not replace judgment
Understanding the Student Experience
We set out to understand how students actually experience one of our learning platforms: not only what they do, but how they make sense of it. The question was about meaning, not just usability.
What I was trying to understand
• How students interpret the purpose of the platform
• Where they feel stuck without support
• What drives engagement vs compliance
What we learned
• Many students don’t see the platform as a place to learn, but as a place to complete assignments
• Confusion often comes from unclear purpose, not poor UI
• Progress is meaningful only when it connects to something larger
What changed
• Shifted focus from interface tweaks to providing clarity of purpose
• Prioritized moments where students need orientation or feedback
• Reframed engagement as understanding, not activity
What AI Can Do in Product Design
I’ve been running small experiments with design and research teams to understand how AI actually shows up in day-to-day work: not in theory, but in practice. The goal isn’t to evaluate the tools. It’s to understand how they change thinking and decision-making.
What I was trying to understand
• How teams integrate AI into existing workflows
• Which tasks and workflows my teammates expect AI to genuinely accelerate
• How confident my team feels evaluating the quality of AI outputs
What I learned
• The strongest signal was about structure and expectations, not tools. The most consistent ask was for a shared playbook.
• Most outputs landed in the same place: useful, but not done. The average result was something you could work with, but almost never something you could use as-is.
• The biggest friction wasn’t accuracy; it was overhead. Many outputs were technically fine but verbose, generic, or slightly off, so time went into cleaning them up. That “verification tax” often canceled out the time saved.
• AI was most helpful at getting something started or shaping rough inputs. It struggled with anything that required prioritization, judgment, or tradeoffs without clear constraints.
• The difference between a good and a bad output was rarely the tool. It was the ask: clear constraints, audience, and iteration made a much bigger difference than model choice.
What I’m taking from this
• Treat AI as a collaborator for structure, not a source of answers
• Start by embedding it into specific moments in a workflow, not replacing the workflow
• Invest in shared patterns and prompts rather than tool exploration
my projects are structured experiments carefully disguised as sticky, magnetic products
built by teachers, for teachers

Teaching is ephemeral. Auly makes it lasting.
Auly helps educators turn everyday teaching artifacts into stories that sparkle. Upload instructional materials and get insights on what stands out. Build and share portfolios that feel truly you. It’s playful, helpful, and currently in beta.
Why I built it:
• Gap in the market for a truly teacher-centered LMS
• Frustration with the lack of clarity around teaching portfolios
• Curiosity about what lesson-building tools might look like if they felt more like thinking through ideas than filling out forms
What I'm observing:
• Teachers want tools that they own
• Over-structured tools slow thinking down
• The best tools feel like they’re “thinking with you,” not evaluating you
rate clouds, not people
nimbus is a playful photo app where you capture clouds and tag their “vibes.” a gentle reminder that the best things in life are free. currently in closed beta for iOS
Why I built it:
I wanted a photo-journaling app that treated everyday moments as something worth noticing.
What I'm observing:
• People care about how something feels more than how it looks
• Minimalist, joyful structure makes consistency much easier
• Reflection ends up mattering more than capture
painting with clouds

Nimbus Atelier is my custom GPT that generates photorealistic skies, scenes, and landscapes where clouds take center stage. It transforms short creative briefs into cinematic, true-to-life imagery: dramatic, serene, or anything in between.
Why I built it:
Nimbus Atelier is a quieter extension of Nimbus: an experiment in turning moments into something you can revisit, reshape, and share differently. I also live in Pittsburgh, where, paradoxically, it’s always cloudy yet there’s never a single cloud to photograph.
What I'm observing:
• People prefer reworking their own content over starting from scratch
• Small creative constraints increase engagement
• “Finished” matters less than “worth returning to”
Bite-sized prophecies
Noshtradamus is a custom GPT that reads fortunes from images of your leftovers. Speaking with folkloric wisdom and playful mystery, it transforms crumbs, pits, smears, and wrappers into omens that awaken, surprise, and delight.
Why I built it:
My grandmother used to read fortunes in tea leaves, and it delighted me and my cousins. Noshtradamus explores how and when people predict things, using cookies, garlic, and whatever else happens to be nearby. It leans into ritual, randomness, and the strange appeal of making meaning out of almost anything.
What I'm observing:
• People love prediction, even when they don’t believe it
• The ritual matters more than the outcome
• Playfulness lowers the barrier to sustained engagement
debate me, cowards!
Debatable is a single-player debate app where you spar with an AI on silly topics. Choose your preferred style (Wholesome, Smug, Chaotic, and more). It’s part game, part thought experiment, and always a little too real. Because, let's face it, cereal is just cold soup.
Why I built it:
Debatable explores how people engage with AI-generated arguments, especially when those arguments have style and personality. It’s less about winning debates and more about how tone, style, and framing shape the experience.
What I'm observing:
• Tone matters more than logic in how people judge responses
• Feeling seen and understood beats “winning”
• Style (wholesome, smug, chaotic) drives engagement more than content
Practice smarter, not harder
A delightfully unhinged product management interview question generator that can serve up both boardroom-ready scenarios and completely absurd challenges. Toggle between "Design a budgeting app for remote workers" and "Create an onboarding flow for a haunted mirror used by time travelers." Perfect for interview prep or just having a laugh at the wild world of PM questions.
Why I built it:
• I wanted a tool for generating PM interview questions that help people think, but also surprise and delight
• It’s designed to create space for humor as well as reflection in the interview experience
What I'm observing:
• Structure and novelty drive learning and reflection
• Good interview questions should reduce pressure, not increase it
• People want prompts that guide their thinking, but they also need to laugh
empathy, curiosity, creativity
Fishpickle is my UX research consultancy. Like the jar of pickled fish it’s named after, it’s not always the star of the dish, but it makes everything richer, sharper, and more memorable. I help teams uncover what their users really need, and turn those insights into products people love.
Why I built it:
• Product research is where most of my daily work lives
• I wanted a brand that captured my creative and thoughtful approach to research