This might just be me, a person who keeps going snorkeling in the vast amounts of innovation happening in the deep, dark depths of AI, but I’m getting to a point where I don’t really even want to interact with UIs anymore.
Agents and chat windows make this too easy for me. Instead of fumbling through several screens to get to an outcome, especially in more complex tools like n8n, I have a much more fun and productive time talking to an agent, providing the necessary context, and then having it go build out what I need through n8n’s MCP.
Another great example: I have a little side project that I’ve been working on for about a year now. It was always a pain in the butt to test new features, make code more robust, add error logging, and so on. But now I’ve hooked the whole system up to an MCP, and even had AI build a CLI so that an agent can work with it that way too.
Now, I can literally have the agent go and test everything itself, and I know that the core functionality is sound. You can also lock down UIs by hooking up e2e testing for pretty much everything (e2e, or end-to-end, testing lets scripted code drive the browser the way a real user would). And with Chrome’s latest web MCP, this will become easier than ever.
I’m curious about a couple of things:
Are other teams working with tools through agents? How fast is the adoption?
Do you prefer a UI? Or do you prefer a chat interface? Does it depend on the tool?
Do you like this change? Or does it scare you? Curious how people and teams are feeling.
Either way, the future is interesting. Keep learning and keep up with what’s coming out. Understand what’s changing for other people (and even yourself). Are you purposefully keeping AI out of your workflows (a valid reaction imo), or are you leaning in more than ever?
The hard thing about UIs is that they force decisions. When you build them, you have to define the problem and set boundaries. Those constraints are often what spark innovation.
Can we lose intentional thinking?
I think when everything feels easy, it’s tempting to skip the hard framing work. Agents can optimize what already exists, but they don’t naturally question whether the system should exist in the first place.
I love the experimentation you are doing. Though I am not doing this myself (mostly for lack of time), our platform team is. We have already built connectors via MCP to Google Drive, SharePoint, etc., and our belief is that the chat bar will flip the enterprise from system-specific data to data-specific workflows across any consumption layer, with the bar becoming that layer. Whether we allow this agent interaction for workflows or standardize output to get more deterministic results, the jury is still out, but the bar will be a good way to interact. It’s fun to play around with. It is still too early to say whether adoption is good, but I do have a plan for adoption, which will mostly be KPI/metric driven. If you mean whether companies are attempting this: not at scale. People are still pretty scared.
Personally I like the idea of the bar being the only place we give bespoke instructions, with everything else templatized. However, that is not enough on its own. I quite liked what Vellum has done with its connections and attached data sources. I would further like an Editor.js-style slash-command approach for added functionality, but that might be more relevant to pro users, and there is a fear of the base chat interface becoming crowded with functionality in an attempt to maximize usage. The UI gods are still out there trying to figure out what the market will accept.
I am currently using AI in templatized ways. I have a Notion list of prompts which I use for discovery, reimagining processes, etc., but those almost always need some tweaking. I think we are a few months away from having all our internal processes become AI-native.
Very valid point. To counter this, we have built enterprise-grade guardrails, along with a way to build RBAC into the platform, so that information floating outside regular channels is still not accessible to people who should not have access to it.
This addresses the question of access to the data rather than the system’s existence at all, but it is a step toward that thinking. I think we will have a better view of agent-based user questioning / system questioning once we build our context graphs, which I imagine are at least a year away.
People are going to ultimately struggle with this. When the “slot machine” that is AI makes it too easy to win without thinking, why not just delegate it all to an agent?
I’ve noticed this too. More specifically, with AI outputs, the model will typically get 80% of the way there, but there is always something to review and tweak.
Users also like having the control to edit both the inputs and outputs of the AI tools, whether they’re structured or not. It’s an interesting insight; now I’m thinking that we have to give users more power than previously thought.
Crazy times we’re in. I can see this too. I already have my own personal openclaw bot that does more and more, and knows more and more about me (of course, I made sure that this env is ultra secure, doesn’t have too many allowed permissions, etc.). Its current toolset:
Tasks - manage tasklists and tasks: get/create/add/update/done/undo/delete/clear, repeat schedules
Sheets - read/write/update spreadsheets, insert rows/cols, format cells, read notes, create new sheets (and export via Drive)
Forms - create/get forms and inspect responses
Apps Script - create/get projects, inspect content, and run functions
Docs/Slides - export to PDF/DOCX/PPTX via Drive (plus create/copy, docs-to-text, and sedmat sed-style document editing with Markdown formatting, images, and tables)