Wow, ok, a lot to unpack here @elba_sindoni! I've got to come back to these to review. Great stuff.
Jumping into the ideas around stochastic gradient descent: I think you're suggesting that AI interactions work better when they proceed in small, controlled steps rather than dumping everything at once. Love this:
Ideas start small
Most ideas do not deserve equal weight
One idea earns depth
Synthesis happens later, after evidence
Movement between ideas is deliberate (not automatic)
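One way to read the list above is as a set of explicit rules you hand the model up front. Here's a minimal, hypothetical sketch of that idea in Python; the names (`RULES`, `build_system_prompt`) and the exact rule wording are my own illustration, not anything from the original article:

```python
# Hypothetical sketch: encoding the five principles above as explicit,
# numbered rules prepended to a task prompt. All names here are
# illustrative assumptions, not a real library or the author's code.

RULES = [
    "Start with a one-sentence version of each idea.",          # ideas start small
    "Rank the ideas; do not give them equal weight.",           # unequal weight
    "Expand only the single highest-ranked idea.",              # one idea earns depth
    "Defer synthesis until evidence has been gathered.",        # synthesis later
    "Switch ideas only when the user explicitly asks.",         # deliberate movement
]

def build_system_prompt(task: str) -> str:
    """Prepend the explicit interaction rules to a task description."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(RULES, 1))
    return f"Follow these rules:\n{numbered}\n\nTask: {task}"

print(build_system_prompt("Critique my onboarding flow."))
```

The point isn't the code itself, just that each principle becomes a checkable instruction instead of a vibe the model has to infer.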
On a personal note, here's how I see the value of AI in creation right now in my own work: idea evolution and convergence, rather than interaction control over time. Real-time synthesis limits the slop and controls where I'm going with an idea.
@elba_sindoni ahhh, thank you for dropping these! I can't lie, I had to parse them (coming from a non-designer role) haha. BUT the one that stood out to me most was "Designing AI Interactions Using Progressive Structure and Explicit Rules"
I often find that AI outputs can be SOOO long. While sometimes I do think the long outputs help with opening up different angles, I'd argue it's information overload. What I hear you advocating for are more "rules" to make sure the outputs are focused and intentional, which I love!! haha.
On the other side of the coin, I'd ask: how much structure does AI need to be trustworthy without becoming restrictive?
I really enjoyed "Designing AI Interactions Using Progressive Structure and Explicit Rules". It really helps break down an approach that feels actionable during a development process. Will definitely want this as a reference for future projects.
Yes, the idea is for the AI to respond to what the user actually needs, but it also depends on the personality you want to give the agent, and that goes hand in hand with the product.
In that sense, limits are important. The premise is not to restrict, but to ensure the AI does not make assumptions on behalf of the user. The user should explicitly declare their need, and the AI executes the request.
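That "no assumptions" premise can be sketched as a tiny guardrail: the agent only executes requests that carry an explicitly declared intent, and otherwise asks. This is purely my illustration of the idea; the `Request` shape and `handle` function are hypothetical, not from the article:

```python
# Hypothetical sketch of the "no assumptions" rule: execute only
# requests with an explicit, user-declared intent; otherwise ask.
# The Request dataclass and handle() are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    text: str
    declared_intent: Optional[str] = None  # e.g. "summarize", "rewrite"

def handle(req: Request) -> str:
    if req.declared_intent is None:
        # Instead of guessing on the user's behalf, ask them to declare.
        return "Which action do you want: summarize, rewrite, or critique?"
    return f"Executing '{req.declared_intent}' on: {req.text}"

print(handle(Request("my draft")))               # asks for an intent
print(handle(Request("my draft", "summarize")))  # executes the request
```

The limit here isn't restrictive in the UX sense; it just moves the decision about what to do from the model's guess to the user's explicit choice.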