The Making of our AI Assistant for Auth0 Actions

November 7, 2024
Pete Nicholls
We received a lot of interest in how we built our new free tool, the AI Assistant for Auth0 Actions, which helps developers author code and write tests for Auth0 Actions. In this post we break down how we designed, built, and deployed it.

The Backstory

Auth0 Actions are JavaScript functions that run in response to various events in the auth lifecycle, such as when a user logs in.
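To make that concrete, here is a minimal sketch of a post-login Action. The `Event` and `Api` types below are simplified stand-ins for the much richer objects Auth0 passes in; `setCustomClaim` is part of the real Actions API, but the claim name and logic are just an example.

```typescript
// Simplified stand-ins for the event and api objects Auth0 provides.
type Event = { user: { email: string } };
type Api = {
  idToken: { setCustomClaim: (name: string, value: unknown) => void };
};

// Runs after a user logs in: add the user's email domain
// as a custom claim on the ID token.
export const onExecutePostLogin = async (event: Event, api: Api) => {
  const domain = event.user.email.split("@")[1];
  api.idToken.setCustomClaim("https://example.com/email_domain", domain);
};
```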

Earlier this year we published an open source Auth0 Actions Testing library to let developers test and develop Actions locally. It was extracted from work we did building an extension for the Auth0 Marketplace that syncs leads from Auth0 to Salesforce.
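The idea behind testing an Action locally can be sketched with a hand-rolled mock: fabricate an event, record what the Action asks the `api` to do, and assert on it. This illustrates the approach only; the open source library provides richer, typed helpers, and the names below are not its actual API.

```typescript
// A tiny mock of the `api` object that records deny() calls.
type DenyCall = { reason: string };

function mockAccessApi() {
  const denials: DenyCall[] = [];
  return {
    denials,
    api: {
      access: {
        deny: (reason: string) => {
          denials.push({ reason });
        },
      },
    },
  };
}

// The Action under test: block logins from a denylisted domain.
async function onExecutePostLogin(
  event: { user: { email: string } },
  api: { access: { deny: (reason: string) => void } }
) {
  if (event.user.email.endsWith("@blocked.example")) {
    api.access.deny("This domain is not allowed.");
  }
}

// Run locally, no Auth0 tenant required.
const { denials, api } = mockAccessApi();
await onExecutePostLogin({ user: { email: "eve@blocked.example" } }, api);
```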

As we had been playing with LLMs and techniques for Retrieval Augmented Generation (RAG), we thought it would be interesting to see if we could build an AI assistant to help author both Action code and tests using our library.

Building the assistant

We experimented with different RAG techniques, but the best solution for our V1 release proved to be the simplest one: OpenAI’s Custom Assistants. We found that OpenAI already had a fair knowledge of Auth0 Actions. Rather than the entire Auth0 documentation, it needed only a little guidance and some concrete examples to give us good results. However, it needed more context than sensibly fits into a single Custom Assistant.

Our insight was to break the problem into different specialities and create multiple OpenAI Custom Assistants, each with just enough knowledge to produce a good answer for its chosen speciality.

Auth0 has seven Flows, or points of integration, such as when a user logs in or when they reset their password. For each of these Flows, we created a pair of assistants: one we call the Code Author, which takes a natural language prompt and writes a likely Action, and another we call the Test Author, which takes Action code and writes a test suite.
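That pairing can be sketched as a small registry keyed by flow and role. The flow names and assistant ID format below are illustrative placeholders, not the app’s real identifiers.

```typescript
// Seven Flows x two roles = 14 Custom Assistants.
// Flow names here are illustrative, not Auth0's exact trigger names.
const FLOWS = [
  "login",
  "machine-to-machine",
  "pre-user-registration",
  "post-user-registration",
  "post-change-password",
  "send-phone-message",
  "password-reset",
] as const;

type Flow = (typeof FLOWS)[number];
type Role = "code-author" | "test-author";

// One OpenAI Custom Assistant ID per (flow, role) pair.
const assistantIds = new Map<string, string>();
for (const flow of FLOWS) {
  assistantIds.set(`${flow}/code-author`, `asst_${flow}_code`);
  assistantIds.set(`${flow}/test-author`, `asst_${flow}_test`);
}

function assistantFor(flow: Flow, role: Role): string {
  const id = assistantIds.get(`${flow}/${role}`);
  if (!id) throw new Error(`No assistant for ${flow}/${role}`);
  return id;
}
```

Keeping the routing this dumb means each request goes straight to the one assistant that knows its speciality.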

To automate this, we built an Assistant Generator. It reads from dozens of examples of code and tests we have built, along with various bits of best practice advice for each flow. The Generator’s job is to update each of the 14 Custom Assistants with the information relevant to it. Currently, each Custom Assistant uses gpt-3.5-turbo.
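The Generator’s core step can be sketched as folding a flow’s examples into one instruction string and pushing it to the matching assistant. The `Example` shape, prompt wording, and function name below are assumptions for illustration, not the app’s actual code.

```typescript
// Illustrative shape for a stored example: an Action plus its test.
type Example = { title: string; code: string; test: string };

// Fold a flow's examples and guidance into a single instructions
// string suitable for a Custom Assistant.
function buildInstructions(flow: string, examples: Example[]): string {
  const header =
    `You write Auth0 Actions for the "${flow}" flow. ` +
    `Follow the style of the examples below.`;
  const body = examples
    .map((ex) => `### ${ex.title}\nAction:\n${ex.code}\nTest:\n${ex.test}`)
    .join("\n\n");
  return `${header}\n\n${body}`;
}

// Pushing the result would go through the OpenAI SDK's assistants
// API, along these lines:
//
//   await openai.beta.assistants.update(assistantId, {
//     model: "gpt-3.5-turbo",
//     instructions: buildInstructions(flow, examples),
//   });
```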


This approach works well when you’re working with a few pages worth of information. It won’t scale to tens of thousands of documents or database records, but it was a good fit for this project. It’s simple to build, simple to test, and simple to refine.

Building the app

The server is responsible for serving the UI, streaming responses, and storing answers in Postgres. It’s written in TypeScript and uses a fast, lightweight web framework called Hono. We use Bun as the JavaScript runtime. The server has three NPM dependencies: hono, openai, and pg.
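Hono has streaming helpers that wrap this up, but the shape of a streaming endpoint can be shown with only web-standard APIs: relay each model delta to the browser as a server-sent event the moment it arrives. `fakeDeltas` below stands in for chunks from the OpenAI stream.

```typescript
// Relay an async stream of text deltas as server-sent events.
function streamResponse(deltas: AsyncIterable<string>): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream<Uint8Array>({
    async start(controller) {
      for await (const delta of deltas) {
        // SSE framing: one `data:` line per chunk, blank-line terminated.
        controller.enqueue(encoder.encode(`data: ${delta}\n\n`));
      }
      controller.close();
    },
  });
  return new Response(body, {
    headers: { "Content-Type": "text/event-stream" },
  });
}

// Stand-in for the chunks an OpenAI streaming completion yields.
async function* fakeDeltas() {
  yield "Hello, ";
  yield "world!";
}
```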

As for the front-end, it’s written in plain HTML, CSS, and JavaScript. There are two dependencies: one for rendering Markdown, and one for syntax highlighting. There are no frameworks. You can View Source, for real. There are no separate requests for icons, images, or web fonts.

The app is deployed to Heroku, backed with Heroku Postgres and protected with a Cloudflare WAF to thwart bots. We have always enjoyed the simplicity and developer experience of working with Heroku, and it’s been nice to see the pace of innovation has been picking up again following a change of leadership.

What’s next?

We hope you found this interesting. Are you using the Assistant? We’d love your feedback on what you’d like to see next.

