Progress Update
Fixes, breaking changes and closer to beta!
It’s been a busy few weeks since the last update. A lot has shipped and I’ve made some unfortunate breaking changes. This will be mostly a technical post explaining the changes and the reason behind some decisions.
Breaking Changes (Sorry)
- Scoping got a lot of work; some behavior that was meant to happen but wasn't is now fixed.
- File storage moved to S3 completely. The old database storage is gone. This affects how you access files in workflows and how the MCP tools reference content. If you haven't updated, I recommend copying your repo to GitHub so that when the database migrates, you can use the /bifrost:migrate skill (in the repo) to help you fix your repo.
- The CLI has gone through a lot of work to support a better SDK-first workflow experience. You can now `bifrost push` and `bifrost watch` from your local repo — watch mode streams file changes to the server in real time, and push gates on a dirty check so you don't accidentally clobber in-flight edits. In my testing, `bifrost watch` works best when you're working on a large feature, like an app for a customer, because local coding tools are better at scanning lots of local files to make decisions, and you can take advantage of purpose-built skills to standardize and expedite deployment. MCP is still pretty great, but I expect I'll mostly use it to iterate on existing stuff or build the occasional thing from my phone.
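The dirty check that gates a push can be pictured as comparing local content hashes against the manifest the server last saw. A minimal sketch of the idea — all names here are illustrative, not the actual CLI internals:

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 of a file's contents, used as a cheap change detector."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def dirty_paths(local_root: Path, server_manifest: dict[str, str]) -> list[str]:
    """Return paths whose local content no longer matches the server manifest.

    server_manifest maps relative paths to the digest the server last saw.
    A push tool can refuse to run (or warn) when the comparison fails, so
    in-flight edits on the other side aren't silently clobbered.
    """
    changed = []
    for rel, remote_digest in server_manifest.items():
        local = local_root / rel
        if not local.exists() or file_digest(local) != remote_digest:
            changed.append(rel)
    return changed
```

The same comparison run in the other direction is essentially what a watch mode needs to decide which files to stream.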
Since the original design in Azure Functions, I've been battling to build a horizontally scalable system that executes raw code without losing native Python features. In an early version, we actually created a file watcher that attempted to fan out writes to all worker nodes so they each had an equivalent "local" copy, which let us treat it like a normal file system. This worked, but before we even hit our first issue, I opted to redesign everything. My gut told me that rebuilding a file-syncing solution was not going to be sustainable.
Some time ago I learned that you can actually modify the Python import system to create a "virtual importer", which meant we had a real opportunity to solve this problem. It lets you, for example, store your modules in a database and tell Python how to look things up by path and fetch content. This worked okay, but I ended up creating a virtual file system that tried to resolve the real file system, a couple of database tables, and entities like forms into a unified file tree. All of the entities had different read and update patterns, some did or didn't have organization scoping, and it was quite the fragile mess. Every time I made a change I worried about breaking something, and honestly, it was confusing to look at and know what things really were.
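For readers who haven't seen this trick: Python's import machinery is pluggable via `sys.meta_path`. Here's a minimal, self-contained illustration of the technique — a dict stands in for the database (or Redis/S3); none of this is Bifrost's actual code:

```python
import importlib.abc
import importlib.util
import sys

# Toy "database": module name -> source code. In a real system this role
# would be played by Redis, S3, or a database table.
MODULE_STORE = {
    "vmod": "def greet():\n    return 'hello from a virtual module'\n",
}

class StoreFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Meta path finder/loader that resolves imports from MODULE_STORE."""

    def find_spec(self, name, path=None, target=None):
        if name in MODULE_STORE:
            return importlib.util.spec_from_loader(name, self)
        return None  # fall through to the normal import machinery

    def create_module(self, spec):
        return None  # use default module creation

    def exec_module(self, module):
        # Execute the stored source inside the fresh module's namespace.
        exec(MODULE_STORE[module.__name__], module.__dict__)

sys.meta_path.insert(0, StoreFinder())

import vmod
print(vmod.greet())  # hello from a virtual module
```

Because the finder returns `None` for unknown names, regular imports keep working exactly as before; only names present in the store get the virtual treatment.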
What I landed on is something I think will be a huge improvement overall. For one, we're still using the virtual importer, but it works strictly with Redis and S3. This means files get to just be files and we don't need to reinvent the wheel.
The second thing is I yanked out a whole bunch of abstraction that tried to interpret what files were, how to update them and how to detect drift. So for example, if you had a workflow file, we had to say "this is a Python file, it has decorators, these decorators aren't in the database — are they new or updated?". Then we had to say "this is a JSON file, it is a form, we should validate and update it in the DB". Compounded with creating an abstract file view, there was bound to be a scenario where you moved something, it broke your apps or forms, and you wouldn't know why. The first time this happened and I saw the plan Claude produced to fix it, it looked like we were building an entire framework, which I have no interest in inflicting upon our team or anyone in the community.
The solution is to use metadata files in .bifrost/*.yaml. Essentially, if the entity is not registered there, it does not exist. It also means that we effectively have infrastructure as code now: if you drop your workspace into a dev instance, everything except secrets should come back. It also means that Git can work the way you would expect, without any mysteries. The reason this is a breaking change is that I didn't create a migration script — the old system was honestly too complicated, and since we're in development, I felt this was an appropriate trade-off. The good news is that the bifrost:build skill in the repo should be able to help you generate working .bifrost/*.yaml files from your repo.
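To make the idea concrete, a registry file might look something like this. This is purely illustrative — the field names and layout are my assumptions, not the actual schema (the bifrost:build skill generates the real thing):

```yaml
# .bifrost/workflows.yaml — hypothetical shape, not the real schema
workflows:
  - name: sync-invoices
    path: workflows/sync_invoices.py
    schedule: "0 * * * *"
  - name: onboard-customer
    path: workflows/onboard_customer.py
```

The point is that the file tree carries no hidden meaning: a Python file that isn't listed in a registry is just a file, and the registries are plain text that diffs and merges cleanly in Git.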
What’s New
App Embedding
You can now embed Bifrost apps and forms on external sites. It’s HMAC-signed JWT auth so you can reasonably trust things like query params (which often will identify the user or the context in which the form was accessed). The use case I’m excited about: putting forms on customer-facing websites without building a whole separate frontend. You generate an embed secret, drop a script tag, and the form runs scoped to whatever org or context you specify. Big thanks to @sdc535 on Reddit for suggesting this and submitting a pull request that fixed an SDK bug I missed.
Server-Side Compilation + NPM Dependencies
Apps used to compile in the browser with Babel. That was always a temporary hack. Now everything compiles server-side, and you can add npm dependencies via esm.sh. There’s a dependency panel in the editor where you search packages, pin versions, and manage what your app has access to. Smaller bundles, faster loads, and you can actually use real libraries instead of hoping the browser has them.
Knowledge Sources (RAG Without the Pain)
Upload documents to a knowledge base, and agents can search them semantically. The embedding and vector storage are abstracted away — you don’t manage Pinecone or figure out chunking strategies. Scoping works like everything else: org-specific knowledge shadows global, so you can have default documentation that customers can override with their own runbooks. Agents finally stop making things up when they have your actual procedures to reference.
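The shadowing behavior is the same pattern as a chained lookup: check the org-specific knowledge base first, fall back to the global one. A toy sketch of just the scoping rule (not the actual retrieval code, which does semantic search rather than key lookup):

```python
from collections import ChainMap

# Toy knowledge bases: title -> document text.
global_docs = {
    "reset-password": "Default reset procedure: use the admin console.",
    "escalation": "Page the on-call engineer.",
}
acme_docs = {
    # This org has its own runbook, which shadows the global entry.
    "reset-password": "Acme runbook: file a ticket in their portal first.",
}

# Org-specific entries win; everything else falls through to global.
acme_view = ChainMap(acme_docs, global_docs)
```

So a customer override never touches the global document — it just takes precedence within that org's view.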
Private Agents and Permission Overhaul
Users can now create their own agents without an admin blessing. There’s a new can_promote_agent permission if you want to control who can make agents visible to the whole org. This came from watching our own team — everyone wants to experiment with agents, but not everyone should be publishing them company-wide.
SDK Skills for Claude Code
If you’re using Claude Code (and you should be), there are now bifrost:setup and bifrost:build skills. The setup skill detects whether you’re in SDK mode or MCP mode and bootstraps accordingly. The build skill handles the full compile-and-deploy cycle. It’s the smoothest way to develop workflows right now.
Email Module
Workflows can now send email via bifrost.email.send() and bifrost.email.send_template(). Domain verification, sender management, and template CRUD are all handled through the settings UI. It’s the kind of thing that used to require wiring up SendGrid yourself — now it’s a one-liner in your workflow.
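As a rough illustration of what the template side involves, here's a generic sketch using `string.Template`. This is not Bifrost's implementation, and the call shape shown in the comment is my assumption, not the documented API:

```python
from string import Template

# Hypothetical call shape (arguments are an assumption, not the documented API):
# bifrost.email.send_template("welcome", to="a@example.com", vars={"name": "Ada"})

def render_template(body: str, variables: dict) -> str:
    """Substitute $-style placeholders; raises KeyError if one is missing."""
    return Template(body).substitute(variables)

welcome = "Hi $name, your workspace is ready."
```

The value of the built-in module is everything around this: verified sender domains, stored templates, and delivery handled for you instead of wiring up a provider by hand.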
Secret Redaction
Secrets in workflow execution logs are now automatically scrubbed. Any value loaded through bifrost.config.get() or bifrost.integrations.get() is registered for redaction, and you can manually register additional values with bifrost.register_secret(). This was overdue — it’s easy to accidentally log an API key in a debug statement, and now the platform catches it for you.
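Value-based redaction like this can be built on the standard `logging` module. A minimal sketch — illustrative only, not Bifrost's actual mechanism; `register_secret` here is a stand-in for `bifrost.register_secret()`:

```python
import logging

_REGISTERED_SECRETS: set[str] = set()

def register_secret(value: str) -> None:
    """Record a value that must never appear in log output."""
    if value:
        _REGISTERED_SECRETS.add(value)

class RedactionFilter(logging.Filter):
    """Scrub registered secret values from formatted log messages."""

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()  # message with %-args already applied
        for secret in _REGISTERED_SECRETS:
            if secret in msg:
                msg = msg.replace(secret, "***REDACTED***")
        record.msg, record.args = msg, None
        return True  # keep the record, just with scrubbed content
```

Attaching it with `logger.addFilter(RedactionFilter())` means even a careless `logger.debug("key=%s", api_key)` comes out scrubbed.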
Per-App CSS
Apps now support a styles.css file that gets injected server-side. Small quality-of-life improvement, but it means you can write real CSS for your apps instead of cramming everything into inline styles or hoping a component library covers your case.
llms.txt and MCP Tool Consolidation
We replaced 7 separate schema MCP tools with a single get_docs tool backed by auto-generated llms.txt. This is better for AI-assisted development — instead of the model having to discover and call multiple tools to understand the platform, it gets one comprehensive document. The llms.txt file also includes auto-generated manifest YAML docs, so Claude always has up-to-date schema references.
app.yaml Elimination
Individual app.yaml files per app are gone. App metadata (name, description, dependencies) now lives in .bifrost/apps.yaml alongside everything else. One less file to manage, and it’s consistent with how workflows, forms, and agents are already registered.
Export/Import for Portability
Full org export as a JSON bundle. Workflows, agents, forms, configs, tables, knowledge sources — everything. Reconstruct it elsewhere. This is the escape hatch I always wanted from other platforms. Your data is yours.
Community Contributions
We got our first batch of external PRs, which is exciting. @sdc535 submitted a fix for allow_as_query_param, form auto-fill support, a fix for checkbox defaults not persisting in the form builder, permanent deletion of inactive forms, and cache-control headers for the nginx proxy. If you’re interested in contributing, the repo is open.
What’s Actually Working Now
We’re running Bifrost internally at Covi now. Real workflows, real agents, real customer data (properly scoped). Here’s what has survived actual use:
• Workflow execution and scheduling is solid
• The MCP server integration with Claude Desktop is genuinely useful for development
• App embedding is being tested with a customer-facing form
• Knowledge search is replacing our internal wiki lookups
Less Tested:
• Git sync got a major refactor — persistent working tree, a unified GitHub Desktop-style sync button, and inline conflict resolution. The stash/pop corruption bug that was eating merge markers is fixed. It's in a much better place, but I still want to stress-test it more before I call it done.
• Performance at scale is still theoretical — we're not stressing it hard enough yet
• Error handling in the UI is… spotty. You get stack traces when things break, which is helpful for me, less so for users. Contributions on this are more than welcome.
What’s Next
The big focus now is stability and polish, but the next thing on my mind is autonomous agents. The thought here is to give agents the ability to be triggered from things like schedules, webhooks and the SDK. It would also involve an Agent History UI, similar to executions, but focused on the chain of events an agent followed, and the ability to generate an explanation to help you iterate on prompts. This is a pretty big lift as it requires consolidating a lot of stuff over to the worker process (like chat), but it will be fun once it's knocked out.
Stay tuned!