Launch HN: HumanLayer (YC F24) – Human-in-the-Loop API for AI Systems
What’s really exciting is that we’re enabling teams to deploy AI systems that would otherwise be too risky. We let you focus on building powerful agents while knowing that critical steps will always get a human-in-the-loop. It’s been dope seeing people start to think bigger when they consider dynamic human oversight as a key ingredient in production AI systems.
This started when we were building AI agents for data teams. We wanted to automate tedious tasks like dropping unused tables, but customers were (rightfully!) opposed to giving AI agents direct access to production systems.
How reliable an AI system needs to be before it's "production grade" depends on how risky the task it's performing is. We didn't have the 3+ months it would have taken to sink into evals, fine-tuning, and prompt engineering to get the agent to 99.9%+ reliability, and even then, getting decision makers comfortable enough to flip the switch would have been a challenge. So instead we built some basic approval flows, like "ask in Slack before dropping tables".
But this communication itself needed guardrails—what if the agent contacted the wrong person? How would the head of data look if a tool he bought sent a nagging Slack message to the CEO? Our buyers wanted the agent to ask stakeholders for approval, but first they wanted to approve the “ask for approval” action itself. And then I started thinking about it… as a product builder + owner, I wanted to approve the “ask for approval to ask for approval” action!
I hacked together a human-AI interaction flow that handled each of these cases across both my and my customers' Slack instances. By then I was convinced that any team building AI agents would need this kind of infrastructure, so I decided to build it as a standalone product. I presented the MVP at an AI meetup in SF, had a ton of incredible conversations, and went all in on building HumanLayer.
When you integrate the HumanLayer SDK, your AI agent can request human approval at any point in its execution. We handle all the complexity of routing these requests to the right people through their preferred channels (Slack or email today; SMS and Teams coming soon), managing state while waiting for responses, and providing a complete audit trail. In addition to "ask for approval", we also support a more generic "human as tool" function that can be exposed to an LLM or agent framework and handles collecting a human response to a free-form question like "I'm stuck on $PROBLEM, I've tried $THINGS, please advise" (I get messages like this sometimes from in-house agents we rolled out for back-office automations).
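To make that concrete, here's a rough sketch of what the Python integration looks like (simplified; the docs at https://docs.humanlayer.dev have the exact, current API):

    from humanlayer import HumanLayer

    hl = HumanLayer()  # reads HUMANLAYER_API_KEY from the environment

    # Wrap a risky function so it pauses for human approval before running.
    @hl.require_approval()
    def drop_table(table_name: str) -> str:
        """Only runs after someone signs off in Slack or email."""
        return f"dropped {table_name}"

    # "Human as tool": a callable the agent can use to ask a person a
    # free-form question and wait for their reply.
    ask_data_team = hl.human_as_tool()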
Because it sits at the tool-calling layer, the HumanLayer SDK works with any AI framework (CrewAI, LangChain, etc.) and any language model that supports tool calling. If you're rolling your own agent/tools loop, you can use lower-level SDK primitives to manage approvals however you want. We're even exploring use cases where HumanLayer handles human-to-human approvals, not just AI-to-human.
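As an illustration of that framework-agnosticism: in a hand-rolled loop, the wrapped functions from the sketch above are just Python callables, so a generic dispatcher (nothing HumanLayer-specific here) picks up the approval behavior for free:

    # Generic tool dispatch for a hand-rolled agent loop. The approval (or
    # the ask-a-human round trip) happens inside the function call itself,
    # so the loop only ever sees the final tool result.
    TOOLS = {
        "drop_table": drop_table,        # approval-gated above
        "ask_data_team": ask_data_team,  # blocks until a human replies
    }

    def execute_tool_call(name: str, arguments: dict) -> str:
        return str(TOOLS[name](**arguments))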
We're already seeing HumanLayer used in some cool ways. One customer built an AI SDR that drafts personalized sales emails but asks for human approval in Slack before sending anything to prospects. Another uses it to power an AI newsletter where subscribers can have email conversations with the content: HumanLayer receives the inbound emails, routes them to agents, and gives those agents the tools to respond. A third team built a customer-facing DevOps agent that reviews PRs and plans and executes database migrations, getting human sign-off at critical steps and reaching out to the team for steering if it hits any issues.
We have a free tier and flexible credits-based pricing. Teams building customer-facing agents get whitelabeling, additional features, and priority support.
If you want to integrate HumanLayer into your systems, check out our docs at https://docs.humanlayer.dev or book a demo at https://humanlayer.dev.
Thank you for reading! We're admittedly early, and I welcome your ideas and experiences as they relate to agents, reliability, and balancing human+AI workloads.