[HN Gopher] Show HN: DesignArena - crowdsourced benchmark for AI...
       ___________________________________________________________________
        
       Show HN: DesignArena - crowdsourced benchmark for AI-generated
       UI/UX
        
       I've been using AI to generate some repetitive frontend (guilty),
       and while most outputs felt vibe-coded, some results were
       surprisingly good. So I cleaned it up and made a ranking game out
       of it with friends, and you can check it out here:
        https://www.designarena.ai/vote
         
        /vote: Your prompt will be answered by four random, anonymous
        models. You pick the one you prefer and crown the winner,
        tournament-style.
         
        /leaderboard: See the current winning models, as dictated by
        voter preferences.
         
        /play: Iterate quickly by seeing four models respond to the
        same input and pressing space to regenerate the results you
        don't lock in.
         
        We were especially impressed with the quality of DeepSeek and
        Grok, and with the variance between categories (judging by the
        results so far, OpenAI is very good for game dev but seems to
        suck everywhere else).
         
        We've learned a lot, and we're curious to hear your comments
        and questions. Excited to make this better!
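         
        (One common way to turn pairwise votes like these into a
        leaderboard is an Elo-style rating update; the TypeScript
        sketch below is purely illustrative and is not necessarily how
        DesignArena scores models.)
         
        // Illustrative Elo-style update: each vote nudges the winning
        // model's rating up and the losing model's rating down.
        const K = 32; // step size: how much one vote can move a rating
         
        function expectedScore(a: number, b: number): number {
          // Probability that a model rated `a` beats a model rated `b`.
          return 1 / (1 + Math.pow(10, (b - a) / 400));
        }
         
        function recordVote(winner: number, loser: number): [number, number] {
          const e = expectedScore(winner, loser);
          return [winner + K * (1 - e), loser - K * (1 - e)];
        }
         
        // Example: a 1500-rated model beats a 1600-rated one.
        const [newWinner, newLoser] = recordVote(1500, 1600);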
        
       Author : grace77
       Score  : 56 points
       Date   : 2025-07-12 15:07 UTC (7 hours ago)
        
 (HTM) web link (www.designarena.ai)
 (TXT) w3m dump (www.designarena.ai)
        
       | coryvirok wrote:
       | This is really good! It would be really cool to somehow get human
       | designs in the mix to see how the models compare. I bet there are
       | curated design datasets with descriptions that you could pass to
       | each of the models and then run voting as a "bonus" question
        | (comparing the human- and AI-generated versions) after the normal
       | genAI voting round.
        
         | grace77 wrote:
         | wow this is a super interesting idea, and the team loves it --
          | we'll fast-follow and follow up here when we add it,
         | thanks for the suggestion!
        
         | debesyla wrote:
          | This would be extra interesting for unique designs - something
          | more experimental, new. As for now, even when you ask AI to
          | break all rules, it still outputs standard BS.
        
       | a2128 wrote:
        | I tried the vote and both results always suck; there's no
        | option to say neither is a winner. Also, it seems from the
        | network tab that you're sending 4 (or 5?) requests but only
        | displaying the first two that respond, which biases it toward
        | the small models that respond more quickly and usually results
        | in showing two bad results.
        
         | ethan_smith wrote:
         | Adding a "neither is good" option would improve data quality by
         | preventing forced choices between two poor designs.
        
           | grxxxce wrote:
           | this is a great note -- will be sure to add!
        
         | grace77 wrote:
         | Yes -- great point. We originally waited for all model
         | responses and randomized the vote order, but that made it a
         | very bad user experience -- some models, especially open-source
         | ones, took over 4 minutes to respond, leading to a high voter
         | drop-off rate.
         | 
         | To preserve the voter experience without introducing bias, our
         | current approach waits for the slowest model within each binary
         | comparison -- so even if one model is faster, we don't display
         | until both are ready. You're right that this does introduce
         | some bias for the two smallest models, and we'd love to hear
         | suggestions for how to make this better!
         | 
         | As for the 5th request: we actually kick off one reserve model
         | alongside the four randomly selected for the tournament. This
         | backup isn't shown unless one of the four fails -- it's not the
         | fastest or lowest-latency model, just a randomly selected
         | fallback to keep the system robust without skewing results.
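         
        (A minimal TypeScript sketch of the pairing behaviour described
        above -- function names are hypothetical, not the actual
        DesignArena code: each pair is shown only once both of its
        models have responded, and a fifth reserve generation is
        started up front and used only if one of the four fails.)
         
        // Hypothetical stubs standing in for the real request/render layer.
        declare function requestGeneration(model: string, prompt: string): Promise<string>;
        declare function displayComparison(left: string, right: string): void;
         
        async function runRound(models: string[], prompt: string): Promise<void> {
          // models[0..3] are the four contestants; models[4] is the reserve.
          const pending = models.map((m) =>
            requestGeneration(m, prompt).catch(() => null) // null marks a failure
          );
          const reserve = pending[4];
         
          const showPair = async (i: number, j: number) => {
            // Wait for BOTH sides of the pair, so response speed never
            // decides which designs the voter sees.
            let [left, right] = await Promise.all([pending[i], pending[j]]);
            if (left === null) left = await reserve;   // swap in the backup
            if (right === null) right = await reserve;
            if (left !== null && right !== null) displayComparison(left, right);
          };
         
          await showPair(0, 1); // first semifinal
          await showPair(2, 3); // second semifinal
        }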
        
       | justusm wrote:
       | nice! Training models using reward signals for code correctness
       | is obviously very common; I'm very curious to see how good things
       | can get using a reward signal obtained from visual feedback
        
         | grace77 wrote:
          | As are we -- seems like the natural next step
        
       | muskmusk wrote:
       | This is a surprisingly good idea. The model vs model is fun, but
       | not really that useful.
       | 
       | But this could be a legitimate way to design apps in general if
       | you could tell the models what you liked and didn't like.
        
         | grace77 wrote:
         | yes! that is the hope -- /play is our first attempt at building
         | out utility, would love your feedback and will ship hard to
         | make it happen!
        
       | iJohnDoe wrote:
       | Can the code and design that is generated be used?
        
         | grace77 wrote:
         | yes! we have a copy code and copy react code button on
         | https://www.designarena.ai/play
        
       ___________________________________________________________________
       (page generated 2025-07-12 23:00 UTC)