My first project for a large bank was designing the transfers screen. One screen. Just one. It took us four months. Coming from startups where we shipped complete features in two weeks, I thought it was madness. I was wrong.
Scale changes everything
When your design will be used by three million people, every decision amplifies. A confusing button does not generate ten support tickets. It generates thirty thousand. An animation that runs 200 milliseconds too long does not bother one user. Multiplied across three million daily interactions, it wastes six hundred thousand seconds per day, roughly 167 hours. Scale turns every detail into a systemic problem.
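The back-of-the-envelope math is worth making explicit. A minimal sketch, using the figures from this essay (a 200 ms extra delay, three million daily interactions):

```python
# Back-of-the-envelope cost of a small delay at scale.
# Figures are the essay's: 200 ms of extra animation time,
# three million interactions per day.
extra_delay_s = 0.2            # 200 milliseconds per interaction
daily_interactions = 3_000_000

wasted_seconds_per_day = extra_delay_s * daily_interactions
wasted_hours_per_day = wasted_seconds_per_day / 3600

print(wasted_seconds_per_day)           # 600000.0 seconds
print(round(wasted_hours_per_day, 1))   # 166.7 hours, every single day
```

The point is not the precision of the estimate but the shape of it: a detail invisible in a usability session becomes a daily, measurable cost at scale.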
Accessibility is not optional
At startups, accessibility was something we reviewed afterward. In banking it comes first, because your user base includes elderly people using the app at 200% zoom, colorblind people who need to distinguish positive from negative numbers, and people with low digital literacy who need explicit instructions at every step.
That trained me to design for the hardest use case first. If it works for the person with the most limitations, it works for everyone. It is the same principle I apply now with AI: if my AI-assisted design works without internet, on a small screen, for a novice user, then it is a good design.
Data humbles you
At a startup you can trust your intuition because the numbers are small. At a bank with millions of users, the data tells you exactly how wrong you were. I remember designing an onboarding flow I considered elegant and minimalist. The data showed 40% abandonment at the second step: users did not understand what to do, because I had stripped out the instructions for the sake of aesthetics.
That humility before data is the same you need with AI. Your prompt may seem perfect. The results will tell you the truth.
Bureaucracy as protection
I used to hate compliance, legal, security, and risk reviews. Each one added days to the timeline. Until I understood that each review protected the user from an error I had not considered. Financial bureaucracy exists because errors have real consequences. And that mindset is exactly what we need to apply to AI: not everything that can be generated should be published without human review.