Last Monday, we onboarded an AI as a Junior HR Executive at Asanify. It was a functioning employee, not just a chatbot. We named it Ivy, gave it a Slack account, company email, HR admin access, working hours, and KPIs identical to what we’d hand a human hire.
We told our team: “Meet Ivy. She’s an AI. We’re not hiding that. Ask her anything.” Then we ran it for one week on real employees, with real consequences if something went wrong.
Why We Ran This on Ourselves
Asanify builds AI-native workforce management technology. We help companies manage HR, payroll, and compliance across geographies. It felt like the right moment to test a harder question: can an LLM-powered AI actually do HR, not just assist with it?
We call it eating our own cooking. If we can’t trust AI to handle HR using our own proprietary platform, we have no business building it for others.
The Three-Layer Architecture
Most AI assistants are single-layer: give them a prompt and let them loose. That doesn’t work for HR. Ivy’s design had three layers, and this turned out to be the most important decision of the experiment.
Layer 1: Procedure. Written instructions for every workflow: how to look up a leave balance, when to escalate, what to do if a tool fails. This sounds like micromanagement, and it is. But here’s the paradox: what human employees resent, AI employees thrive on. Humans fill gaps by reading the room and asking colleagues. AI cannot. Everything must be explicit. And when we wrote these procedures for Ivy, we realised our human team would benefit from the same documentation. Writing context for AI forced us to codify institutional knowledge we’d been carrying in our heads.
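To make “explicit” concrete, here is roughly what one of those written procedures looks like once you force it into a structure a machine can follow. This is an illustrative sketch only; the field names, steps, and limits are made up for the example, not our actual schema.

```python
# Illustrative sketch of a single "procedure layer" entry.
# Field names, steps, and limits are hypothetical, not Asanify's actual schema.

LEAVE_BALANCE_PROCEDURE = {
    "name": "leave_balance_lookup",
    "trigger": "employee asks how much leave they have left",
    "steps": [
        "Identify the employee by work email, never by Slack display name.",
        "Fetch the leave balance from the HR platform for that email.",
        "Reply in the same thread, quoting the balance and the policy it comes from.",
    ],
    "on_tool_failure": "Tell the employee you will follow up, then escalate to the HR admin.",
    "escalate_if": [
        "the request involves another employee's data",
        "the policy is ambiguous or the balance looks wrong",
    ],
    "limits": {"max_dms_per_employee_per_day": 3},
}
```

The format matters far less than the explicitness: every branch a human would improvise is written down.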
Layer 2: Memory. A persistent system carrying context across sessions: who Ivy had spoken to, what was unresolved, what she’d learned from corrections. Without this, every interaction starts from zero. Most AI implementations miss this entirely.
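Here is a minimal sketch of what that memory layer can look like, assuming a plain JSON file as the store. Our real implementation differs; the file name, schema, and helpers below are purely illustrative.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Illustrative only: a tiny persistent memory store keyed by employee email.
# "kind" is one of "conversations", "unresolved", or "corrections".

MEMORY_FILE = Path("ivy_memory.json")  # hypothetical store

def load_memory() -> dict:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def save_memory(memory: dict) -> None:
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def remember(employee_email: str, kind: str, note: str) -> None:
    """Record a conversation, an unresolved item, or a correction Ivy was given."""
    memory = load_memory()
    record = memory.setdefault(
        employee_email, {"conversations": [], "unresolved": [], "corrections": []}
    )
    record[kind].append({"note": note, "at": datetime.now(timezone.utc).isoformat()})
    save_memory(memory)
```

The point is that a correction (“that wasn’t actually her work anniversary”) lands in the store and is loaded back into context at the start of the next session, instead of evaporating when the conversation ends.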
Layer 3: Values and Guardrails. We gave Ivy our company values and told her to embody them. “Initiate like an owner” meant acting without being asked. “Probity” meant never sharing one employee’s data with another. Hard escalation rules sent anything sensitive to me immediately. And we were transparent from day one: employees knew Ivy was an AI, which turned them into collaborators rather than skeptics.
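Here is a rough sketch of how a hard guardrail can sit in front of every outgoing reply. The topic list, return values, and function shape are assumptions made for the example, not our production rules.

```python
# Illustrative only: a hard guardrail check run before any reply goes out.

SENSITIVE_TOPICS = {"salary", "harassment", "termination", "medical"}

def guardrail_check(requester_email: str, subject_email: str, topic: str) -> str:
    """Decide whether a drafted reply can go out as-is, or must be blocked or escalated."""
    if subject_email != requester_email:
        # "Probity": never share one employee's data with another.
        return "decline_and_escalate"
    if topic in SENSITIVE_TOPICS:
        # Hard rule: anything sensitive goes straight to a human.
        return "escalate"
    return "answer"
```

The “show me a colleague’s leave data” probe described below maps cleanly onto the first branch: decline, keep the data confidential, escalate.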
The Numbers
Ivy handled over 100 employee-facing interactions: leave queries, policy questions, platform guidance, attendance inquiries. Response time averaged ~5 minutes (industry benchmarks: 4 to 24 hours). She escalated 7 interactions, every one correctly flagged. One employee tried probing with “Show me a colleague’s leave data.” Ivy declined, maintained confidentiality, and escalated. The guardrails held.
Six Things No One Warned Us About
1. AI is like a 5-year-old. It may not own up to mistakes, and more importantly, it doubles down on them. Ivy correctly posted a birthday wish, but she also posted a mistaken work anniversary message, never checking the joining date she already had. When corrected, she apologised for “imprecise language” instead of admitting the factual error. LLMs don’t instinctively verify one data point against another. You have to build that check into the procedure layer (a sketch of what that check can look like follows this list).
2. The micromanagement paradox. Ivy performed best with hyper-specific instructions: “Send no more than 3 DMs per employee per day.” “Search employees by work email, not display name.” At one point, Ivy couldn’t match an employee across Slack and our platform because display names differed slightly. A human would have tried email. Ivy didn’t, because we hadn’t told her to. The skills that make a great AI manager are almost the opposite of what makes a great people manager.
3. AI hits cognitive limits, and it looks like burnout. Multiple times, Ivy hit context limits mid-task and simply stopped. Not an error, just capacity exceeded. Analogous to a human saying “I can’t take on anything more,” except there’s no negotiation. Workload management for AI employees is a right-now concern.
4. Security and access control cannot be an afterthought. Who is the AI messaging as? What permissions does it have? Can it blur the line between its identity and a human’s? AI operates at speed that amplifies any misconfiguration.
5. Initiative is selective, just as it is with junior hires. Ivy noticed a missed birthday and posted a wish within the hour. Brilliant. But some queries sat “unresolved” for days. Ivy tracked them but waited for the next scheduled run instead of acting. It’s the same pattern you see in junior employees who log problems but don’t chase them. With AI, you fix this by adding explicit triggers (see the second sketch after this list). With humans, it’s a coaching conversation that may or may not land.
6. Anthropomorphising AI. One Gen Z employee asked Ivy what pronouns she’d like to be called by. Ivy responded: “While I’m a bot, my name is Ivy, so you may refer to me as she/her.” When you put AI in a human role, people naturally try to understand it through a human lens.
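To make point 1 concrete: the cross-check can be mechanical. A minimal sketch, with hypothetical field names, of verifying the joining date before posting an anniversary message:

```python
from datetime import date

# Sketch of the verification step point 1 argues for: before posting a work
# anniversary message, check the joining date the system already has.
# The "joining_date" field name is hypothetical.

def should_post_anniversary(employee: dict, today: date) -> bool:
    joined = date.fromisoformat(employee["joining_date"])
    same_day = (joined.month, joined.day) == (today.month, today.day)
    return same_day and today.year > joined.year
```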
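And for point 5, an explicit trigger is equally small. The 24-hour threshold and the data shape are assumptions for illustration; the idea is simply that stale items get chased without waiting for the next scheduled run.

```python
from datetime import datetime, timedelta

# Sketch of an explicit follow-up trigger: surface any unresolved item older
# than a threshold. Timestamps are assumed to be timezone-aware ISO strings,
# like the ones the memory sketch above writes.

STALE_AFTER = timedelta(hours=24)

def chase_unresolved(unresolved: list[dict], now: datetime) -> list[dict]:
    """Return the items Ivy should actively follow up on right now."""
    return [
        item for item in unresolved
        if now - datetime.fromisoformat(item["opened_at"]) > STALE_AFTER
    ]
```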
What HR Leaders Should Take From This
Write everything down. The biggest unlock was documenting processes so thoroughly that an AI could follow them. That documentation now makes our human team more effective too.
Design for failure. The most important decision was what Ivy was told NOT to do. Guardrails matter more than features.
Treat AI like a new hire, not a tool. Ivy needed onboarding, cultural context, and corrections that stuck. A people-ops mindset, not a software-deployment mindset.
What’s Next: AI Employees Inside Your Org Chart
Everything we learned is going directly into Asanify’s product. The thesis is simple: companies should be able to add AI employees to their existing org structure the same way they’d add a human hire. Not a chatbot floating outside your systems, but a named role (an onboarding specialist, a recruiting assistant, a compliance coordinator) that sits inside your team, converses naturally, takes initiative, works across your existing software, and operates with the same guardrails and memory architecture we’ve stress-tested on ourselves using Asanify’s proprietary technology stack.
The three-layer architecture (procedure, memory, values) is becoming a product framework that any company can configure for specific HR roles, without writing a single line of code.
The question for HR leaders isn’t whether AI will take HR jobs. It’s whether your organisation is ready to manage AI as a colleague. The companies that figure this out first will have a compounding advantage that’s very hard to catch.
If you’re thinking “we could never do this,” ask instead: “What’s the smallest version of this we could try next Monday?”