
Extended T: Augment your design craft with AI tools

October 3, 2025 | 8 min read

TL;DR
AI doesn't replace design judgment. It widens my T-shaped skill set by surfacing on-brand options quickly. It's still on me to uphold craft, taste, and standards for what ships.

Designers on small teams, especially at startups, default to being T-shaped: deep in a core craft and broad enough to support adjacent disciplines. My vertical is brand and visual identity, while my horizontal spans marketing, product, illustration, creative strategy, and execution. Lately, AI tools have pushed that horizontal reach further than the usual constraints allow.

At AuthZed, I use AI to explore ideas that would normally be blocked by time or budget: 3D modeling, character variation, and light manufacturing for physical pieces. The point is not to replace design craft with machine output. It is to expand the number of viable ideas I can evaluate, then curate and polish a final product that meets our design standard.

Exploration vs. curation: what actually changed

Previous tools mostly sped up execution. AI speeds up exploration. When you can generate twenty plausible directions in minutes, the scarce skill is not pushing Bézier handles. It is knowing which direction communicates the right message, and why.

Concrete examples: Photoshop made retouching faster, but great photography still depends on eye and intent. Figma made collaboration faster, but good product design still depends on hierarchy, flows, and clarity. AI widens the search field so designers can spend more time on curation instead of setup.

Volume before polish
While at SVA, we focused on volume before refinement. We would thumbnail dozens (sometimes a hundred) poster concepts before committing to one. That practice shaped how I use AI today: explore wide, then curate down to the right solution. Richard Wilde's program emphasized iterative problem-solving and visual literacy long before today's tools made rapid exploration this easy.

Expanding horizontally with AI without losing the vertical

AI works best when it is constrained by the systems you already trust, whether that is the permission model that controls who can view a file or the rules you enforce when writing code. That clarity is what turns an AI model from a toy into a multiplier. When we developed our mascot, Dibs, I knew we would eventually need dozens of consistent, reference-accurate variations: expressions, poses, environments. Historically, that meant a lot of sketching and cleanup before we could show anything.

With specific instructions and a set of reference illustrations, I can review a new variation every few moments. None of those are final, but they land close while surfacing design choices I might not have explored on my own. I still adjust typography, tweak poses, and rebalance compositions before anything ships, so we stay on brand and accessible.

This mirrors every major tool shift. Photoshop did not replace photographers. Figma did not replace designers. AI does not replace design thinking. It gives you a broader search field so you can make better choices earlier.

Dibs mascot exploration and refinement process

Case study: turning 2D into 3D trophies

For our offsite hackathon, I wanted trophies the team would be proud to earn and motivated to chase next time. Our mascot, Dibs, was the obvious hero. I started with approved 2D art and generated a character turnaround that covered front, side, back, and top views. From there I used a reconstruction tool (Meshy has been the most reliable lately) to get a starter mesh before moving into Blender for cleanup, posing, and print prep.

Mesh cleanup and sculpting

I am not a Blender expert, but I have made a donut or two. With the starting mesh it was straightforward to get a printable file: repair holes, smooth odd vertices, and thicken delicate areas. Where my skills were rusty, I leaned on documentation and the right prompts to fill the gaps. Before doing any of that refinement, I printed the raw export on my Bambu Lab P1P in PLA, cleaned up the supports, and dropped the proof on a teammate's desk. We went from concept to a physical artifact in under a day.
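
For anyone who would rather script that cleanup pass than click through it, here is a minimal sketch using Blender's Python API (bpy). The thickness value and the assumption that the imported mesh is the active object are mine for illustration, not the exact settings I used; in practice I did most of this interactively.

```python
import bpy

# Assumes the imported starter mesh is the active object in the scene.
obj = bpy.context.active_object

# Repair holes and relax odd vertices in Edit Mode.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.fill_holes(sides=0)                    # sides=0 fills holes of any size
bpy.ops.mesh.vertices_smooth(factor=0.5, repeat=2)  # gentle smoothing, two passes
bpy.ops.object.mode_set(mode='OBJECT')

# Thicken thin walls so delicate areas survive printing.
solidify = obj.modifiers.new(name="Solidify", type='SOLIDIFY')
solidify.thickness = 0.8  # illustrative; tune to your scene scale and nozzle
bpy.ops.object.modifier_apply(modifier=solidify.name)
```

The same steps map one-to-one onto the UI (Mesh > Clean Up > Fill Holes, Vertex > Smooth Vertices, and a Solidify modifier), which is how I actually worked.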

We ended up producing twelve trophies printed in PETG with a removable base that hides a pocket for added weight (or whatever ends up in there). I finished them by hand with Rub 'n Buff, a prop-maker staple, to get a patinated metallic look. Once the pipeline was dialed in, I scaled it down for a sleeping Dibs keychain so everyone could bring something home, even if they were not on the podium. Small lift, real morale boost.

Prints and final Golden Dibs trophies

Dibs keychains, Blender pose, in-progress and final prints

Why this matters for T-shaped designers

When anyone can produce a hundred logos or pose variations, a designer's value shifts to selection with intent. Brand expertise tells you which pose reads playful versus chaotic, which silhouette will hold up at small sizes, and which material choice survives handling at an event. The models handle brute-force trial. You own the taste, the narrative, and the necessary constraints.

The result is horizontal expansion without vertical compromise. Consistency improves because character work starts from reference-accurate sources instead of ad-hoc one-offs. Physical production becomes realistic because you can iterate virtually before committing to materials and time.

With newer models, I can get much closer to production-ready assets with far less back-and-forth prompting. I render initial concepts, select top options based on color, layout, expression, and composition, then create a small mood board for stakeholders to review before building the final version. The goal is not to outsource taste. It is to see more viable paths sooner, pick one, and refine by hand so the final assets stay original and on-brand.

The guardrails that keep quality high

  • Define what success looks like, and what "done" means, before you generate anything.
  • Capture the non-negotiables: character traits, palette, typography, voice.
  • Provide references instead of adjectives.
  • Call out the exact angles, poses, or compositions you need.
  • Keep a human in the loop for selection, edits, and distribution.
  • Stay ethical: use your own IP, avoid mimicking living artists, and be transparent about where AI fits.

Mini checklist: stretch your own T

  • Pick one adjacent skill that will unlock an upcoming launch.
  • Codify the source material: references, palettes, schema, constraints.
  • Pair one AI assist with that project and track what you keep, edit, or reject.
  • Close with critique: share the work, gather feedback, and refine the pipeline for next time.

Process note: I drafted the outline and core ideas, then used an editor to tighten phrasing and proofread. Same pattern as the rest of my work: widen the search, keep the taste.


FAQs

What is a T-shaped designer?
A designer with deep expertise in one area (the vertical) and working knowledge across adjacent disciplines (the horizontal).

How does AI help T-shaped designers?
AI quickly generates plausible options so you can evaluate more directions, then apply judgment to pick, refine, and ship the best one.

How do I keep brand consistency with AI images?
Define non-negotiables (proportions, palette, silhouette), use reference images, and keep a human finish pass for polish.

Which tools did you use in this workflow?
Model-guided image generation (e.g., Midjourney or a tuned model with references), a 2D-to-3D reconstruction step for a starter mesh (Rodin/Hyper3D or Meshy), Blender for cleanup, a slicer to generate the G-code, and a Bambu Lab P1P to print.

