Should I set up a personal AI agent to help me with my daily tasks?
— Asking for help
As a general rule, I think relying on any kind of automation in your daily life is dangerous when taken to extremes and potentially alienating even when used in moderation, especially in relation to personal interactions. An AI agent that organizes my to-do list and collects web links for further reading? Incredible. An AI agent that automatically texts my parents every week with a quick life update? Absolutely not.
Still, the strongest argument for not incorporating more generative AI tools into your daily routine remains the environmental impact these models continue to have during training and output generation. With all that in mind, I dug through the WIRED archives, specifically issues published during the glorious dawn of this mess we call the internet, to find more historical context for your question. After a bit of searching, I came back convinced that you probably already use AI agents every day.
The idea of AI agents, or heaven forbid “agentic AI,” is the current buzzword for any tech leader trying to pump up their latest investments. But the concept of an automated assistant dedicated to executing software tasks is far from a fresh idea. So much of the discourse surrounding “software agents” in the 1990s mirrors the current conversation in Silicon Valley, where tech executives now promise a coming flood of AI-powered generative agents trained to do online business on our behalf.
“One problem I see is that people will question who is responsible for the agent’s actions,” said MIT professor Pattie Maes in a WIRED interview originally published in 1995. “Especially things like agents taking up too much time on the machine or buying something you don’t want on your behalf. The agents will raise many interesting questions, but I am convinced that we will not be able to live without them.”
I called Maes in early January to hear how her views on AI agents have changed over the years. She is as optimistic as ever about the potential for personal automation, but believes that “extremely naïve” engineers don’t spend enough time dealing with the complexities of human-computer interaction. In fact, she says, their recklessness could cause another AI winter.
“The way these systems are built right now, they’re optimized from a technical point of view, from an engineering point of view,” she says. “But they’re not optimized for human design problems at all.” She points to how AI agents are still easily deceived or fall back on biased assumptions, despite improvements to the underlying models. That misplaced confidence leads users to trust answers generated by AI tools when they shouldn’t.
To better understand other potential pitfalls for personal AI agents, let’s break that nebulous term into two distinct categories: those that feed you and those that represent you.
Feeders are algorithms armed with data about your habits and tastes that search through a range of information to find what’s relevant to you. Sounds familiar, right? Any social media recommendation engine that fills your timeline with custom posts, or the incessant ad tracker that shows me those mushroom gummies for the thousandth time on Instagram, could be considered a personal AI agent. As another example from an interview in the ’90s, Maes mentioned a newsgathering agent fine-tuned to return the articles she wanted. That sounds like my Google News landing page.