Compiling Workflows into Databases: The Architecture That Shouldn't Work (But Does)

Abstract

What if everything you know about building distributed systems is backwards? What if, instead of putting databases at the bottom of your architecture stack, you put them at the center: not just storing data, but orchestrating your application logic, managing your workflows and their state, and providing reliability guarantees? This talk presents a system that treats PostgreSQL not just as your database, but also as a durability layer for your application runtime. Instead of the typical pattern of building orchestration layers on top of databases, we compile workflow logic directly into database operations.
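To make the idea concrete, here is a minimal sketch of durable execution: each workflow step checkpoints its result in the database, so a restarted workflow skips completed steps instead of redoing them. This is an illustration only, using sqlite3 as a stand-in for Postgres; the names (open_store, checkpoint_step, run_workflow) are hypothetical and are not the DBOS Transact API.

```python
# Illustrative sketch of durable execution. sqlite3 stands in for
# Postgres; none of these names come from a real library.
import json
import sqlite3


def open_store(path=":memory:"):
    # One row per completed step, keyed by (workflow, step).
    db = sqlite3.connect(path)
    db.execute(
        """CREATE TABLE IF NOT EXISTS step_results (
               workflow_id TEXT, step_name TEXT, result TEXT,
               PRIMARY KEY (workflow_id, step_name))"""
    )
    return db


def checkpoint_step(db, workflow_id, step_name, fn):
    # If this step already ran, return the stored result instead of
    # re-executing it -- this is what makes the workflow resumable.
    row = db.execute(
        "SELECT result FROM step_results WHERE workflow_id=? AND step_name=?",
        (workflow_id, step_name),
    ).fetchone()
    if row is not None:
        return json.loads(row[0])
    result = fn()
    db.execute(
        "INSERT INTO step_results VALUES (?, ?, ?)",
        (workflow_id, step_name, json.dumps(result)),
    )
    db.commit()  # the step's result is now durable
    return result


calls = []  # tracks which steps actually executed


def reserve_inventory():
    calls.append("reserve")
    return 10


def charge_card(amount):
    calls.append("charge")
    return amount * 2


def run_workflow(db, wf_id):
    qty = checkpoint_step(db, wf_id, "reserve", reserve_inventory)
    total = checkpoint_step(db, wf_id, "charge", lambda: charge_card(qty))
    return total


db = open_store()
first = run_workflow(db, "order-1")   # executes both steps
second = run_workflow(db, "order-1")  # simulated restart: both steps skipped
```

On the second invocation both steps are served from the checkpoint table, so `calls` still contains only one entry per step: the external side effects happen exactly once even though the workflow ran twice.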

Drawing on our experience scaling systems at Reddit and Netflix and on years of research at Stanford and MIT, this talk will walk through use cases showing how the open source DBOS Transact library and durable-computing concepts create more reliable systems while reducing cost, processing time, and operational complexity.

Interview:

What is your session about, and why is it important for senior software developers?

Our session is about building reliable software directly on top of the database you already use, without relying on an external workflow coordinator. This matters because introducing extra services often adds new points of failure instead of improving reliability. By leveraging the database as the foundation for durability and recovery, applications become simpler and more reliable. Reliability should matter as much to senior developers as security does.

Why is it critical for software leaders to focus on this topic right now, as we head into 2026?

Durable computing and reliability are especially critical as we move into 2026 because AI agents are starting to take on more critical tasks on behalf of users. Today, these agents are still fragile and prone to failure. But as they gain the ability to take more real-world actions, the cost of unreliability grows dramatically. Building reliable agents will be one of the most important responsibilities for software leaders in 2026.

What are the common challenges developers and architects face in this area?

Developers and architects face recurring challenges in this space. Modern applications, especially AI agents, fail in unpredictable ways, and most systems don't provide enough observability to diagnose those failures. Existing workflow engines promise reliability but introduce heavy infrastructure, steep learning curves, and operational overhead. They often create more fragility than they remove. As AI coding agents write more software, expecting them to design full distributed systems is unrealistic. A library-based approach to durability, by contrast, lets those agents produce durable software with a high chance of success.

What's one thing you hope attendees will implement immediately after your talk?

We hope attendees leave with the mindset that reliability should be built in from the start, not bolted on later. The simplest next step is to try a lightweight library like DBOS Transact in a small part of their system and see how durable execution changes the way they think about reliability.

What makes QCon stand out as a conference for senior software professionals?

QCon attracts people who are actually building and operating systems at scale, not just talking about them. You’re sitting next to engineers solving the same problems you’ve dealt with, people who understand the difference between what sounds good in a blog post and what actually works. The conference has this unique ability to spot architectural shifts before they become mainstream, and the speakers aren't just evangelizing – they're sharing real war stories, including the parts that didn't work. Plus, the hallway conversations with practitioners who've been through the same scaling challenges, late-night outages, and architectural regrets are often more valuable than the talks themselves.

What was one interesting thing that you learned from a previous QCon?

(Jeremy here) My favorite QCon story of all time: I was a keynote speaker at QCon São Paulo, along with Neal Ford. Neal was giving a keynote on how to deliver a good technical presentation, and he went first. He proceeded to give a list of do's and don'ts, and I realized that I was doing every don't and none of the do's. I then had to speak after him! He graciously gave me a copy of his book on the topic after the conference; I read it cover to cover and completely changed how I give presentations.
The fun epilogue to this story is that five years later we were both keynote speakers, back to back, at another conference. We had different topics, but I asked him if he could rate my talk. He gave me perfect marks!


Speaker

Jeremy Edberg

CEO of DBOS, Creator of Chaos Engineering, Tech Editor for 'AWS for Dummies'; Previously Founding Reliability Engineer @Netflix, and First Engineer @Reddit

Jeremy is an angel investor and advisor for various incubators and startups, and the CEO of DBOS. He was the founding Reliability Engineer for Netflix, and before that he ran ops for Reddit as its first engineering hire. Jeremy also tech-edited the highly acclaimed 'AWS for Dummies', and he is one of the six original AWS Heroes. He is a noted speaker on serverless computing, distributed computing, availability, rapid scaling, and cloud computing, and holds a Cognitive Science degree from UC Berkeley.


Speaker

Qian Li

Co-founder, Architect @DBOS, Stanford CS Ph.D., Co-organizer of South Bay Systems

Qian is the co-founder of DBOS and co-leads engineering. She completed her Ph.D. in Computer Science at Stanford University, advised by Christos Kozyrakis, and worked closely with Matei Zaharia (CTO of Databricks and creator of Spark) and Mike Stonebraker (creator of Postgres) on the DBMS-oriented Operating System (DBOS) academic project, which is the foundation of DBOS, Inc. She has broad interests in computer systems, databases, architecture, and abstractions for efficient and reliable cloud computing. Prior to joining Stanford, she received her B.Sc. in Computer Science and Technology from Peking University.


From the same track

Session

How to Build an Exchange: Sub Millisecond Response Times and 24/7 Uptimes in the Cloud

Monday Nov 17 / 10:35AM PST

These days it is possible to achieve fairly good performance on cloud-provisioned systems. We discuss the design of a high-performance, strongly consistent system that maintains constant service in the face of regular updates to its core logic.


Frank Yu

Director of Engineering @Coinbase, Previously Principal Engineer and Director @FairX

Session

Building Resilient Platforms: Insights from 20+ Years in Mission-Critical Infrastructure

Monday Nov 17 / 11:45AM PST

In this talk, Matthew will share lessons learned from more than 20 years of building scalable, secure, and stable infrastructure platforms for software in financial services (electronic trading, credit card processing, etc.). The talk is relevant to anyone building platforms for mission-critical infrastructure.


Matthew Liste

Head of Infrastructure @American Express, Previously @JPMorgan Chase and @Goldman Sachs

Session

Unconference: Architectures You've Always Wondered About

Monday Nov 17 / 05:05PM PST

Session

Architecting a Centralized Platform for Data Deletion at Netflix

Monday Nov 17 / 01:35PM PST

What does it take to safely delete data at Netflix scale? In large-scale systems, data deletion cuts across infrastructure, reliability, and performance complexities.


Vidhya Arvind

Tech Lead & a Founding Architect for the Data Abstraction Platform @Netflix, Previously @Box and @Verizon


Shawn Liu

Senior Software Engineer @Netflix, Building Reliable and Extensible Systems for Consumer Data Lifecycle at Scale

Session

The Architecture of an Infinite Scroll

Monday Nov 17 / 03:55PM PST

Details coming soon.