Architecture & Ownership
Per-install everything: no shared infrastructure, no multi-tenant database, and the buyer pays the vendors directly. Why this is harder for me to operate, what it gives the buyer, and the firing test that ownership has to pass.
The question that shaped every other answer: who owns the data after the engagement ends. Most small-business software answers that question with the phrase 'we do.' The buyer rents access to their own data and pays a monthly subscription for the privilege. The data lives in the vendor's infrastructure, behind the vendor's auth, governed by the vendor's terms of service, and the buyer's leverage is limited to the threat of cancellation, which on a 90-day terms-of-service notice period is often slower than the next quarter's planning cycle.

The architecture I built for Canopy inverts that. The buyer owns the data, the buyer owns the infrastructure, the buyer owns the auth, and the buyer can fire me on any day they choose and keep operating tomorrow morning. That is the test that ownership actually has to pass, and it shaped every other technical decision in the stack.

The mechanism is per-install everything. Each Canopy install is its own Postgres database, its own deployment to a Vercel project the buyer owns, its own Google sign-in OAuth client wired to an admin-only allow-list of email addresses the buyer controls, its own domain registered to the buyer's account, its own SSL certificate issued to that domain, its own monitoring infrastructure, its own audit log. There is no shared infrastructure between customers. There is no multi-tenant database. There is no central admin dashboard where the studio can switch between installs by clicking a customer in a list. The studio gets access to a buyer's install only because the buyer has explicitly added the studio's email to that install's admin allow-list, and the buyer can revoke that access on any day with a single configuration change.

I considered multi-tenant SaaS with row-level security and a master billing dashboard, which is the standard pattern and the one that is cheaper for me to operate.
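The allow-list gate is simple enough to sketch. A minimal version in TypeScript, assuming a per-install config that carries a plain list of admin emails; the names here are illustrative, not Canopy's actual code:

```typescript
// Hypothetical per-install config: each deployment carries its own
// allow-list; there is no cross-install lookup to consult.
interface InstallConfig {
  domain: string;
  adminAllowList: string[]; // emails the buyer's primary admin maintains
}

// Gate applied after Google OAuth has verified the email. Anyone not on
// the list is refused, including the studio once its entry is removed.
function isAllowedAdmin(email: string, config: InstallConfig): boolean {
  const normalized = email.trim().toLowerCase();
  return config.adminAllowList.some((e) => e.toLowerCase() === normalized);
}

// Revocation is a single configuration change: the same config with one
// entry filtered out of the list.
function revoke(config: InstallConfig, email: string): InstallConfig {
  return {
    ...config,
    adminAllowList: config.adminAllowList.filter(
      (e) => e.toLowerCase() !== email.toLowerCase()
    ),
  };
}
```

The point of the sketch is that the gate consults nothing outside the install: no central account table, no vendor-side toggle, just a list the buyer edits.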
The infrastructure is one database, one deployment, one auth wall, one billing system; new customers are rows in tables instead of new infrastructure provisioning. Most agency-built dashboards ship this way because the alternative is more expensive to run. I rejected it for the buyer's worst case. Every multi-tenant system I have ever read about has eventually leaked across tenants: a row-level security misconfiguration, a query that forgot the tenant filter, an admin who clicked through the wrong customer, or a provider-level breach that affected the shared infrastructure. The buyer who matters in the rejection decision is the one who reads the postmortem and remembers it, not the one whose worst day was a slow login. If the worst case for the buyer is a row of their data appearing in someone else's account, that is too bad a worst case to design toward, and the cost saving on operations is not worth it.

The per-install architecture has consequences I have to absorb on the studio side. Provisioning a new install is more work than adding a row to a table; it involves creating a new Vercel project, a new Postgres database, a new Google Cloud OAuth client, configuring DNS, issuing the SSL certificate, running the schema migrations against the empty database, configuring the per-install settings, and onboarding the buyer to the admin sign-in. That is a few hours of work per install instead of a few seconds. The compounding cost of this choice is real, and I chose it deliberately because the alternative cost falls on the buyer, and the buyer is the one paying.

Operating consequences also flow from the per-install pattern. Software updates have to be deployed to each install separately because the deployments are separate; there is no central rollout. Schema migrations have to be applied to each Postgres database. Security patches that affect the auth layer have to be rolled out per install.
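The per-install rollout is mechanical: iterate a registry of installs and apply the same operation to each deployment. A sketch of that fan-out, with the actual migration call stubbed out; the registry shape and function names are assumptions for illustration, not the studio's real tooling:

```typescript
// Hypothetical install registry kept on the studio side: connection
// targets only, no customer data. Each entry points at infrastructure
// the buyer owns and has granted the studio access to.
interface Install {
  name: string;
  databaseUrl: string; // the buyer's Postgres, reachable with buyer-granted credentials
}

// Fan a single operation (a schema migration, an auth-layer patch) out
// across every install, one deployment at a time, reporting per-install
// results instead of aborting the whole run on the first failure.
function rolloutToAll(
  installs: Install[],
  apply: (install: Install) => void
): { succeeded: string[]; failed: string[] } {
  const succeeded: string[] = [];
  const failed: string[] = [];
  for (const install of installs) {
    try {
      apply(install); // in practice: run migrations against install.databaseUrl
      succeeded.push(install.name);
    } catch {
      failed.push(install.name); // one install's failure does not block the rest
    }
  }
  return { succeeded, failed };
}
```

The per-install isolation shows up in the error handling: a failed migration strands exactly one install, never the whole customer base.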
I built tooling that automates these per-install operations from a single command, so the operational tax is bounded, but the tax is real and the buyer is the one who benefits from it: the buyer's install does not get migrated to a new region without their permission, does not get a database schema change they did not approve, does not get a feature flag flipped by someone else's customer support ticket.

Role-based access control gates who can see what, with multiple permission tiers configurable per install. Each install's admin allow-list controls who can sign in at all. Inside the admin surface, role assignments determine who can see which sections, who can edit which records, who can change settings, and who can manage other operators' access. The role assignments are per install, not shared across installs, and the buyer's primary admin can revoke any role at any time. Every meaningful change to a record is attributable to a specific operator via the audit log, and every change to a role assignment is itself a record in the audit log. The whole architecture is designed around the principle that the buyer should be able to answer 'who did what' at any point in their install's history.

The audit log itself is the system of record for change attribution and is treated as immutable in normal operation. Every meaningful entity change writes a before-and-after snapshot attributed to the operator who made the change, with a timestamp and the rule or action that triggered the change. The audit log is queryable by the buyer's primary admin without any vendor mediation. If the buyer ever needs to investigate a question like 'who marked this deal closed-lost on this date and what was the previous value', the answer is one query away. If the buyer ever needs to investigate a security question like 'did anyone outside our operator team access this contact's record', the answer is also one query away.
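The shape of such a log entry, and the kind of question the buyer's admin can answer with it, can be sketched like this. The field names are illustrative assumptions, not Canopy's actual schema; in production the query runs as SQL against the buyer's own Postgres, here it is the same filter over an in-memory array:

```typescript
// Hypothetical audit log entry: one row per meaningful change, with
// before/after snapshots so previous values are always recoverable.
interface AuditEntry {
  entity: string;        // e.g. "deal"
  entityId: string;
  field: string;         // e.g. "status"
  before: string | null; // value before the change
  after: string | null;  // value after the change
  operator: string;      // email of the operator who made the change
  at: string;            // ISO timestamp
  trigger: string;       // the rule or action that caused the change
}

// "Who marked this deal closed-lost, and what was the previous value?"
function whoChanged(
  log: AuditEntry[],
  entity: string,
  entityId: string,
  field: string,
  after: string
): AuditEntry[] {
  return log.filter(
    (e) =>
      e.entity === entity &&
      e.entityId === entityId &&
      e.field === field &&
      e.after === after
  );
}
```

Because the entry stores both sides of the change, the answer carries the previous value along with the attribution, with no vendor in the loop.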
This is what attributability means in practice: not just that the system records who did what, but that the buyer can ask the question without me being in the loop.

The portability story matters too. Because the install is the buyer's deployment, the buyer's database, the buyer's auth, and the buyer's domain, the buyer can take the whole stack with them on any day they choose. There is no export-to-CSV pattern that hides the structure of the data behind a denormalized flat file. The data is already in the buyer's database in its native shape; the buyer can run any SQL they want against it, replicate it to another database, or hire a different developer to extend it. The integrations the install talks to are configured with the buyer's API credentials, not the studio's; the buyer can rotate the credentials and the studio loses access to those integrations. The deployment is the buyer's Vercel project; the buyer can change the deployment hooks to point at a different repository if they want to fork the codebase.

The firing test is the cleanest way I have found to articulate what ownership has to mean. If the buyer wakes up one morning and decides they no longer want to work with the studio, they should be able to keep operating tomorrow morning. The data is still there. The auth still works. The domain still resolves. The deployment still serves traffic. The audit log is still queryable. The role assignments are still in force. Nothing breaks, because the studio's role in the install was always advisory and configurational, never load-bearing on the runtime. The buyer can hand the install to a different developer for ongoing maintenance, extend the codebase without my involvement, or run the install for years without ever talking to me again. That is the test that ownership has to pass, and the per-install architecture is the version of the system that passes it.
The operational consequence the buyer feels is the one that quietly pervades every other section of this case study: a Canopy install is a piece of infrastructure the buyer fully owns, not a SaaS subscription with a long export procedure. The dashboards, the data, the authentication, the domain, the audit trail, and the deployment all live where the buyer keeps the rest of their business. The studio's job is to build the install correctly and hand it over; the buyer's job is to operate it. That handoff is the moment ownership becomes real, and the architecture is what makes the handoff a clean line instead of a renewable contract.