Deploying to Vercel + Supabase: The Gotchas Nobody Warns You About

By Elena Rodriguez · April 17, 2026 · 19 min read

Quick Answer

Vercel and Supabase work great together but have sharp edges in production. The biggest gotchas: edge runtime cannot use the Node.js Supabase client, connection pooling must use Supavisor on port 6543, RLS policies silently return empty arrays instead of errors, auth sessions need manual cookie handling in SSR, and environment variables must be configured separately for preview and production deployments.

The Stack Everyone Loves Until They Deploy

Vercel and Supabase are the darling combination of the modern full-stack developer. Next.js for the frontend and API routes, Supabase for the database, auth, storage, and real-time — it is a genuinely productive stack that can take you from idea to production in days.

Until you actually deploy to production. That is when the gotchas emerge.

After deploying over a dozen production applications on this stack and helping teams debug their deployments, I have catalogued every sharp edge, silent failure, and configuration trap that nobody warns you about. This guide is the document I wish I had before my first Vercel + Supabase production deploy.

If you are building authentication features, our Next.js Authentication Tutorial covers the auth patterns in detail.

Gotcha #1: Edge Runtime vs Node Runtime Incompatibilities

This is the gotcha that bites the hardest and the earliest. Vercel offers two runtimes for serverless functions: Node.js runtime and Edge Runtime. They look similar in your code but have fundamentally different capabilities.

The Problem

The Edge Runtime runs on Cloudflare Workers-style V8 isolates. It does not have access to Node.js-specific APIs like net, tls, fs, dns, or crypto (the full Node module, as opposed to the Web Crypto API). Direct Postgres connections go through node-postgres (pg), the driver underneath Drizzle, Prisma, and raw SQL clients, and pg needs net and tls to open its TCP sockets.

This means: you cannot open a direct Postgres connection in Edge Runtime functions; any client or ORM configured for direct database access will fail there.

Your code will work perfectly in local development (which runs Node.js), pass type checking, and even build successfully — then fail at runtime on Vercel Edge with a cryptic error like:

Error: The edge runtime does not support Node.js 'net' module

The Fix

You have three options depending on your use case:

Option A: Use the Supabase REST client (PostgREST)

Instead of direct Postgres connections, route queries through Supabase's PostgREST API, which uses HTTP/fetch — fully compatible with Edge Runtime.

typescript
// middleware.ts or any Edge Runtime route
import { createClient } from '@supabase/supabase-js'

// This works on Edge because it uses fetch, not pg
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
  {
    auth: { persistSession: false },
    // Every query goes over HTTPS to PostgREST; no TCP sockets required
    db: { schema: 'public' }
  }
)

// Standard query — goes through PostgREST, not pg
const { data, error } = await supabase
  .from('posts')
  .select('*')
  .eq('published', true)

Option B: Use @supabase/ssr for auth in middleware

For auth-specific operations in Next.js middleware (which always runs on Edge):

typescript
// middleware.ts
import { createServerClient } from '@supabase/ssr'
import { NextResponse, type NextRequest } from 'next/server'

export async function middleware(request: NextRequest) {
  let response = NextResponse.next({
    request: { headers: request.headers },
  })

  const supabase = createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll() {
          return request.cookies.getAll()
        },
        setAll(cookiesToSet) {
          cookiesToSet.forEach(({ name, value, options }) => {
            request.cookies.set(name, value)
            response.cookies.set(name, value, options)
          })
        },
      },
    }
  )

  // This refreshes the session — critical for SSR auth
  await supabase.auth.getUser()
  return response
}

Option C: Force Node.js runtime on specific routes

If you need direct Postgres access (e.g., for complex joins or Postgres functions), explicitly set the runtime:

typescript
// app/api/complex-query/route.ts
export const runtime = 'nodejs' // Force Node.js runtime

import { createClient } from '@supabase/supabase-js'

export async function GET() {
  // Now you can use direct Postgres features
  const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  )

  const { data } = await supabase.rpc('complex_function', { param: 'value' })
  return Response.json(data)
}

My recommendation: Use PostgREST (Option A) for most queries and reserve Node.js runtime (Option C) for routes that genuinely need direct Postgres features like RPCs, complex CTEs, or advisory locks.

Gotcha #2: Supabase Connection Pooling (Supavisor) Misconfiguration

The Problem

Every Vercel serverless function instance opens its own database connection. If your app handles 100 concurrent requests and each spins up a new function instance, that is 100 simultaneous Postgres connections. Supabase free tier allows 60 direct connections; Pro allows more but still has limits.

Without connection pooling, you will see errors like:

remaining connection slots are reserved for
non-replication superuser connections

Or worse, connections silently time out and your queries hang for 30 seconds before failing.

The Fix

Supabase provides Supavisor, a built-in connection pooler. You must use the pooler connection string in serverless environments.

bash
# Direct connection (DON'T use in serverless)
postgresql://postgres:[password]@db.[ref].supabase.co:5432/postgres

# Pooler connection (USE THIS in serverless)
postgresql://postgres.[ref]:[password]@aws-0-[region].pooler.supabase.com:6543/postgres

Key configuration details:

typescript
// lib/supabase-server.ts
import { createClient } from '@supabase/supabase-js'

const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL!
const supabaseKey = process.env.SUPABASE_SERVICE_ROLE_KEY!

export const supabase = createClient(supabaseUrl, supabaseKey, {
  db: {
    schema: 'public',
  },
  auth: {
    persistSession: false, // Server-side: no session persistence
    autoRefreshToken: false,
  },
})

If you are using Drizzle, Prisma, or raw pg for direct SQL, set the connection string to the pooler URL:

typescript
// drizzle.config.ts
import { config } from 'dotenv'
config({ path: '.env.local' })

export default {
  schema: './src/db/schema.ts',
  out: './drizzle',
  dialect: 'postgresql',
  dbCredentials: {
    // Use the POOLER URL, not the direct connection
    url: process.env.DATABASE_URL!, // Should be the port 6543 URL
  },
}

Supavisor modes: Supavisor supports two modes — Transaction (port 6543, default) and Session (port 5432 through pooler). Use Transaction mode for serverless. Session mode maintains state between queries, which does not work with ephemeral serverless functions.
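If you run raw SQL through the pooler, transaction mode has one more consequence: each transaction may land on a different backend connection, so server-side prepared statements break. A minimal sketch with the postgres.js driver (the driver choice and option names are specific to that library; pg and most ORMs have equivalent settings):

```typescript
// Sketch: a postgres.js client pointed at Supavisor transaction mode.
// Assumes DATABASE_URL is the port-6543 pooler URL from the dashboard.
import postgres from 'postgres'

export const sql = postgres(process.env.DATABASE_URL!, {
  // Transaction mode can route each transaction to a different backend,
  // so reusable server-side prepared statements must be disabled.
  prepare: false,
  // Keep the in-process pool small; Supavisor does the real pooling.
  max: 1,
})
```

Prisma has an equivalent setting: append pgbouncer=true to the pooler connection string.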

Gotcha #3: Row Level Security That Silently Returns Empty Results

The Problem

This gotcha has caused more debugging hours than any other Supabase issue. When a Row Level Security policy prevents access, Supabase does not return an error. It returns an empty array.

typescript
const { data, error } = await supabase
  .from('secret_documents')
  .select('*')

console.log(error) // null — no error!
console.log(data)  // [] — empty array, not an auth error

Your code thinks the query succeeded with zero results. You display "No documents found" instead of "You don't have access." This is by design in PostgreSQL (RLS is meant to be transparent), but it is infuriating when debugging.

The Fix

In development: Create a debug helper that compares results with and without RLS:

typescript
// lib/debug-rls.ts (ONLY for development)
import { createClient } from '@supabase/supabase-js'

export async function debugRLS(table: string, query: string) {
  if (process.env.NODE_ENV !== 'development') {
    throw new Error('debugRLS must not be used in production')
  }

  // Query with anon key (RLS enforced)
  const anonClient = createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
  )
  const anonResult = await anonClient.from(table).select(query)

  // Query with service role (RLS bypassed)
  const serviceClient = createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!
  )
  const serviceResult = await serviceClient.from(table).select(query)

  if (anonResult.data?.length === 0 && (serviceResult.data?.length ?? 0) > 0) {
    console.warn(
      `⚠️ RLS is blocking access to ${table}. ` +
      `Service role returns ${serviceResult.data?.length} rows, ` +
      `anon key returns 0. Check your RLS policies.`
    )
  }

  return { anonResult, serviceResult }
}

Common RLS mistakes:

  1. Forgetting to set the JWT in the Supabase client — Server-side clients default to the anon role unless you pass the user's JWT:
typescript
// This client has no user context — RLS policies
// using auth.uid() will not match anything
const supabase = createClient(url, anonKey)

// Correct: pass the user's session
const supabase = createServerClient(url, anonKey, {
  cookies: { /* cookie handlers */ }
})
const { data: { user } } = await supabase.auth.getUser()
// Now queries respect this user's RLS policies
  2. Policy uses auth.uid() but the user is not authenticated — Anonymous users have no uid, so policies like auth.uid() = user_id will never match.
  3. Missing SELECT policy — You created INSERT and UPDATE policies but forgot SELECT. RLS defaults to deny if no policy exists for the operation.
  4. Policy checks a column that does not exist — Typos in column names in RLS policies fail silently. auth.uid() = userid when the column is user_id returns zero rows with no error.
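Beyond the development-only debug helper, production code can fail loudly on reads that should never legitimately be empty, so an RLS misconfiguration surfaces as an error instead of a blank page. The guard below is a hypothetical pattern, not a Supabase API:

```typescript
// Hypothetical guard: turn "silently empty" results into loud failures
// for queries that must always return rows for an authorized user.
type QueryResult<T> = { data: T[] | null; error: { message: string } | null }

export function expectSome<T>(result: QueryResult<T>, context: string): T[] {
  if (result.error) {
    throw new Error(`${context}: query failed: ${result.error.message}`)
  }
  if (!result.data || result.data.length === 0) {
    // Could be genuinely empty, but for must-have data this usually
    // means an RLS policy filtered everything out.
    throw new Error(`${context}: 0 rows; check RLS policies and auth context`)
  }
  return result.data
}
```

Reserve it for queries where zero rows is impossible for a logged-in user, such as loading the user's own profile.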

Gotcha #4: Environment Variable Management Across Deployments

The Problem

Vercel scopes environment variables to three environments: Production, Preview, and Development. Variables configured only for Production will not be available in preview branch deployments — meaning your staging and PR preview URLs will fail silently.

Worse, Supabase requires multiple environment variables, and the Next.js convention of NEXT_PUBLIC_ prefixes creates its own confusion:

bash
# Client-side (exposed to browser — NEXT_PUBLIC_ prefix required)
NEXT_PUBLIC_SUPABASE_URL=https://xxx.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJ...

# Server-side only (NEVER expose to browser)
SUPABASE_SERVICE_ROLE_KEY=eyJ...
SUPABASE_DB_URL=postgresql://...

# These look similar but are DIFFERENT
DATABASE_URL=postgresql://...pooler...6543/postgres  # Pooler
DIRECT_URL=postgresql://...5432/postgres             # Direct

The Fix

  1. In Vercel dashboard, check all three environment scopes for every variable.
  2. Use a `.env.local.example` file committed to your repo as documentation:
bash
# .env.local.example
# Copy to .env.local and fill in values from Supabase dashboard

# Public (safe for browser)
NEXT_PUBLIC_SUPABASE_URL=
NEXT_PUBLIC_SUPABASE_ANON_KEY=

# Server-only (NEVER prefix with NEXT_PUBLIC_)
SUPABASE_SERVICE_ROLE_KEY=
DATABASE_URL=            # Pooler connection (port 6543)
DIRECT_URL=              # Direct connection (port 5432, migrations only)
  3. For preview environments with different Supabase projects (staging vs production databases), use Vercel's environment-specific variables. Set the Preview values to your staging Supabase project credentials.
  4. Validate at build time:
typescript
// lib/env.ts
function getEnvVar(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(
      `Missing environment variable: ${name}. ` +
      `Check your Vercel environment settings for this deployment.`
    )
  }
  return value
}

export const env = {
  supabaseUrl: getEnvVar('NEXT_PUBLIC_SUPABASE_URL'),
  supabaseAnonKey: getEnvVar('NEXT_PUBLIC_SUPABASE_ANON_KEY'),
  supabaseServiceKey: getEnvVar('SUPABASE_SERVICE_ROLE_KEY'),
  databaseUrl: getEnvVar('DATABASE_URL'),
}

For more on optimizing your Next.js application performance after deployment, see our React Performance Optimization guide.

Gotcha #5: Cold Start Latency on Serverless Functions

The Problem

Vercel serverless functions (Node.js runtime) have cold starts ranging from 250ms to 3+ seconds. When that function also needs to establish a database connection to Supabase, the total time for the first request can hit 4-5 seconds. For API routes that serve page data, this means your users stare at a loading spinner.

The Fix

Strategy 1: Use Edge Functions for latency-critical paths

Edge Functions have near-zero cold starts (typically under 50ms). If your query is simple enough for the PostgREST client, move it to Edge:

typescript
// app/api/quick-data/route.ts
export const runtime = 'edge'

export async function GET() {
  const res = await fetch(
    `${process.env.NEXT_PUBLIC_SUPABASE_URL}/rest/v1/posts?select=*&published=eq.true`,
    {
      headers: {
        apikey: process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
        Authorization: `Bearer ${process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY}`,
      },
    }
  )
  const data = await res.json()
  return Response.json(data)
}

Strategy 2: Vercel Fluid Compute

Vercel Fluid Compute reuses function instances across multiple requests, dramatically reducing cold starts. It is enabled from the Vercel dashboard in your project's function settings; vercel.json still controls per-route memory and duration:

json
{
  "functions": {
    "app/api/**/*.ts": {
      "memory": 1024,
      "maxDuration": 30
    }
  }
}

Strategy 3: Reuse the client across warm invocations

Instantiate your Supabase client once at module scope so warm function instances reuse it instead of paying the setup cost on every request:

typescript
// lib/supabase-server.ts
// Module-level client is reused across warm invocations
let supabaseInstance: ReturnType<typeof createClient> | null = null

export function getSupabase() {
  if (!supabaseInstance) {
    supabaseInstance = createClient(
      process.env.NEXT_PUBLIC_SUPABASE_URL!,
      process.env.SUPABASE_SERVICE_ROLE_KEY!,
      {
        auth: { persistSession: false },
      }
    )
  }
  return supabaseInstance
}

Gotcha #6: Supabase Auth Session Handling with SSR

The Problem

Supabase Auth uses JWTs stored in cookies for SSR. The default @supabase/supabase-js client stores sessions in localStorage, which does not exist on the server. This creates a split-brain problem: the client thinks the user is authenticated, but the server renders an unauthenticated page.

The symptoms:

  • Flash of unauthenticated content on page load
  • Protected pages briefly showing login form before client hydration
  • getUser() returning null on the server but valid user on the client

The Fix

Use the official `@supabase/ssr` package, which handles cookie-based sessions:

typescript
// lib/supabase/server.ts
import { createServerClient } from '@supabase/ssr'
import { cookies } from 'next/headers'

export async function createSupabaseServer() {
  const cookieStore = await cookies()

  return createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll() {
          return cookieStore.getAll()
        },
        setAll(cookiesToSet) {
          try {
            cookiesToSet.forEach(({ name, value, options }) =>
              cookieStore.set(name, value, options)
            )
          } catch {
            // The set method is called from a Server Component
            // where cookies cannot be set. This is fine because
            // middleware handles the refresh.
          }
        },
      },
    }
  )
}

And the critical middleware that refreshes sessions:

typescript
// middleware.ts
import { createServerClient } from '@supabase/ssr'
import { NextResponse, type NextRequest } from 'next/server'

export async function middleware(request: NextRequest) {
  let supabaseResponse = NextResponse.next({ request })

  const supabase = createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll() {
          return request.cookies.getAll()
        },
        setAll(cookiesToSet) {
          cookiesToSet.forEach(({ name, value }) =>
            request.cookies.set(name, value)
          )
          supabaseResponse = NextResponse.next({ request })
          cookiesToSet.forEach(({ name, value, options }) =>
            supabaseResponse.cookies.set(name, value, options)
          )
        },
      },
    }
  )

  // IMPORTANT: Do not remove this line
  // It refreshes the auth token and must run on every request
  const { data: { user } } = await supabase.auth.getUser()

  // Optional: redirect unauthenticated users
  if (!user && request.nextUrl.pathname.startsWith('/dashboard')) {
    const url = request.nextUrl.clone()
    url.pathname = '/login'
    return NextResponse.redirect(url)
  }

  return supabaseResponse
}

export const config = {
  matcher: ['/((?!_next/static|_next/image|favicon.ico|.*\\.(?:svg|png|jpg)$).*)'],
}

The line `await supabase.auth.getUser()` in middleware is non-negotiable. Without it, expired JWTs are not refreshed, and the server side will see stale or expired sessions.

Gotcha #7: Real-Time Subscriptions on Edge

The Problem

Supabase Realtime uses WebSocket connections for postgres_changes listeners. Edge Functions and Vercel Edge Runtime do not support persistent WebSocket connections — they are request/response only with a maximum execution time.

If you try to set up a real-time subscription in an Edge Function:

typescript
// THIS WILL NOT WORK ON EDGE
supabase
  .channel('changes')
  .on('postgres_changes', { event: '*', schema: 'public', table: 'messages' },
    (payload) => console.log(payload)
  )
  .subscribe()

The subscription will appear to succeed but will never receive events because the function terminates after the response is sent.

The Fix

Real-time subscriptions belong on the client (browser), not the server. Structure your app so that:

  1. Server (Edge or Node): Handles data fetching, mutations, and auth
  2. Client (browser): Handles real-time subscriptions and live updates
typescript
// components/RealtimeMessages.tsx (Client Component)
'use client'

import { useEffect, useState } from 'react'
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
)

export function RealtimeMessages({ initialMessages }: { initialMessages: Message[] }) {
  const [messages, setMessages] = useState(initialMessages)

  useEffect(() => {
    const channel = supabase
      .channel('messages')
      .on('postgres_changes',
        { event: 'INSERT', schema: 'public', table: 'messages' },
        (payload) => {
          setMessages(prev => [...prev, payload.new as Message])
        }
      )
      .subscribe()

    return () => { supabase.removeChannel(channel) }
  }, [])

  return (
    <ul>
      {messages.map(msg => <li key={msg.id}>{msg.text}</li>)}
    </ul>
  )
}
typescript
// app/chat/page.tsx (Server Component)
import { createSupabaseServer } from '@/lib/supabase/server'
import { RealtimeMessages } from '@/components/RealtimeMessages'

export default async function ChatPage() {
  const supabase = await createSupabaseServer()
  const { data: messages } = await supabase
    .from('messages')
    .select('*')
    .order('created_at', { ascending: true })

  // Server fetches initial data, client handles real-time
  return <RealtimeMessages initialMessages={messages ?? []} />
}

For server-to-server real-time needs (webhooks, background jobs), use Supabase Database Webhooks or pg_net to send HTTP requests to your API routes when data changes.
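A receiving route for such a webhook might look like the sketch below. The payload fields (type, table, schema, record, old_record) match what Database Webhooks send; the route path, the x-webhook-secret header name, and the WEBHOOK_SECRET variable are assumptions you configure yourself:

```typescript
// app/api/db-webhook/route.ts (hypothetical path): receiver for a
// Supabase Database Webhook fired on table changes.
type WebhookPayload = {
  type: 'INSERT' | 'UPDATE' | 'DELETE'
  table: string
  schema: string
  record: Record<string, unknown> | null
  old_record: Record<string, unknown> | null
}

export async function POST(request: Request): Promise<Response> {
  // Reject calls that lack the shared secret configured on the webhook
  if (request.headers.get('x-webhook-secret') !== process.env.WEBHOOK_SECRET) {
    return new Response('unauthorized', { status: 401 })
  }
  const payload = (await request.json()) as WebhookPayload
  // React to the change here: revalidate a cache, enqueue a job, etc.
  console.log(`${payload.type} on ${payload.schema}.${payload.table}`)
  return new Response('ok')
}
```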

Gotcha #8: Image and File Storage CORS Issues

The Problem

You upload a file to Supabase Storage, generate a public URL, and display it in your app. It works on localhost. You deploy to Vercel. The image shows a broken icon, and the console shows:

Access to fetch at 'https://xxx.supabase.co/storage/v1/object/public/...'
from origin 'https://your-app.vercel.app' has been blocked by CORS policy

The Fix

Supabase Storage CORS must be configured separately from your database API CORS settings.

  1. Go to Supabase Dashboard > Storage
  2. Select the bucket causing issues
  3. Under bucket settings, configure Allowed Origins

For Vercel deployments, you need to allow:

  • Your production domain: https://your-app.com
  • Your Vercel preview URLs: https://*.vercel.app
  • Localhost for development: http://localhost:3000

Alternative: Proxy through your Next.js API route

Instead of loading storage URLs directly from the browser, proxy them:

typescript
// app/api/storage/[...path]/route.ts
export const runtime = 'edge'

export async function GET(
  request: Request,
  { params }: { params: { path: string[] } }
) {
  const storagePath = params.path.join('/')
  const storageUrl = `${process.env.NEXT_PUBLIC_SUPABASE_URL}/storage/v1/object/public/${storagePath}`

  const response = await fetch(storageUrl)
  const blob = await response.blob()

  return new Response(blob, {
    headers: {
      'Content-Type': response.headers.get('Content-Type') ?? 'application/octet-stream',
      'Cache-Control': 'public, max-age=31536000, immutable',
    },
  })
}

This eliminates CORS entirely because the browser requests from your own domain.
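On the client, point image tags and fetch calls at the proxy path instead of the Supabase domain. A tiny helper keeps the path shape in one place (illustrative; it mirrors the catch-all route above):

```typescript
// Hypothetical helper: build a same-origin URL for the storage proxy
// route, so the browser never requests *.supabase.co directly.
export function proxiedStorageUrl(bucket: string, objectPath: string): string {
  return `/api/storage/${bucket}/${objectPath}`
}
```

Usage: `<img src={proxiedStorageUrl('avatars', 'user-1.png')} />`.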

Bonus: Use Next.js Image component with Supabase Storage

Configure next.config.js to allow Supabase Storage as an image source:

javascript
// next.config.js
module.exports = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: '*.supabase.co',
        pathname: '/storage/v1/object/public/**',
      },
    ],
  },
}

Gotcha #9: Database Migrations in Vercel Build Pipeline

The Problem

You need to run database migrations as part of your deployment. But Vercel build steps run in an ephemeral container that may not have direct database access, and running migrations in a serverless function is risky (concurrent deployments could conflict).

The Fix

Do NOT run migrations in the Vercel build pipeline. Instead:

  1. Use Supabase CLI migrations as a separate CI/CD step:
bash
# In GitHub Actions, run before the Vercel deployment
- name: Run Supabase Migrations
  run: |
    npx supabase db push --db-url ${{ secrets.DIRECT_DATABASE_URL }}
  2. Use Supabase Dashboard for simple schema changes in smaller projects.
  3. For Drizzle/Prisma users, run migrations from a local machine or CI runner — not from the Vercel build:
bash
# Local migration (uses DIRECT_URL, not pooler)
npx drizzle-kit push
# or
npx prisma migrate deploy

Important: Migrations must use the direct database URL (port 5432), not the pooler URL (port 6543). Supavisor transaction mode does not support the multi-statement transactions that migrations require.
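To keep the two URLs from getting crossed, a small accessor can encode the rule. The variable names follow the .env example earlier; the helper itself is illustrative:

```typescript
// Hypothetical accessor: runtime traffic uses the pooler URL (6543),
// migrations use the direct connection (5432).
export function connectionString(purpose: 'runtime' | 'migration'): string {
  const name = purpose === 'migration' ? 'DIRECT_URL' : 'DATABASE_URL'
  const url = process.env[name]
  if (!url) throw new Error(`Missing ${name} for ${purpose} use`)
  return url
}
```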

Putting It All Together: A Production-Ready Setup

Here is the file structure and configuration for a battle-tested Vercel + Supabase deployment:

├── lib/
│   ├── supabase/
│   │   ├── client.ts          # Browser client
│   │   ├── server.ts          # Server Component client
│   │   ├── middleware.ts       # Middleware client
│   │   └── admin.ts           # Service role client (server only)
│   └── env.ts                 # Environment validation
├── middleware.ts               # Auth refresh + redirects
├── next.config.js             # Image domains, headers
├── vercel.json                # Function config
└── .env.local.example         # Documented env vars

The key principle: separate your Supabase client instantiation by context. Browser, server component, middleware, and admin operations each have different requirements for auth handling, cookie management, and connection pooling.
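The admin.ts client in that tree holds the service role key, so the one invariant that matters is that it never executes in the browser. A guard like this (an illustrative pattern, not a Supabase API) makes accidental client-side imports fail fast:

```typescript
// Sketch for lib/supabase/admin.ts: call this at module top, before
// creating the service-role client.
export function assertServerOnly(moduleName: string): void {
  // `window` exists only in browsers; Node and Edge runtimes lack it.
  if (typeof (globalThis as { window?: unknown }).window !== 'undefined') {
    throw new Error(`${moduleName} uses the service role key and must never run in the browser`)
  }
}
```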

Check out the best AI tools for developers that can help you debug and optimize these deployment patterns faster.

Debugging Checklist: When Something Breaks in Production

When your Vercel + Supabase deployment fails, run through this checklist:

  1. Check Vercel Function Logs — Vercel > Project > Deployments > Functions tab. Look for runtime errors.
  2. Check Supabase Logs — Supabase Dashboard > Logs > PostgREST and Auth. Look for failed queries and auth errors.
  3. Verify environment variables — Vercel > Project > Settings > Environment Variables. Confirm all variables exist for the correct environment scope.
  4. Test RLS policies — Use Supabase SQL Editor with SET ROLE authenticated; SET request.jwt.claims = '{"sub": "user-uuid"}'; to simulate policy evaluation.
  5. Check connection count — Supabase Dashboard > Database > Connections. If near the limit, switch to Supavisor.
  6. Verify runtime — Add console.log(typeof globalThis.EdgeRuntime) to check if you are running on Edge or Node.
  7. Test with service role — Temporarily use the service role key (server-side only!) to determine if the issue is RLS, auth, or data.
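Checklist item 6 can be wrapped in a tiny helper you drop into any route while debugging (illustrative; EdgeRuntime is the global string Vercel defines on the Edge Runtime):

```typescript
// Hypothetical debug helper: report which runtime a route executes on.
export function detectRuntime(): 'edge' | 'nodejs' {
  // Vercel defines a global `EdgeRuntime` string only on Edge
  const isEdge =
    typeof (globalThis as { EdgeRuntime?: string }).EdgeRuntime === 'string'
  return isEdge ? 'edge' : 'nodejs'
}
```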

Final Thoughts: It Is Worth It (With the Right Knowledge)

The Vercel + Supabase stack is genuinely one of the most productive full-stack combinations available in 2026. The gotchas documented here are real, but they are all solvable — and once solved, they stay solved. The pattern of Edge for speed, Node for complexity, and Supabase for everything data is a foundation that scales from side project to production application.

The key is knowing the sharp edges before you cut yourself on them. Bookmark this guide, share it with your team, and save yourself the 3am debugging sessions.


This post is part of our developer tutorials series. For more full-stack development patterns, check out our [Next.js Authentication Tutorial](/blog/nextjs-authentication-tutorial-2026).

Key Takeaways

  • Edge Runtime functions cannot open direct Postgres connections (node-postgres and the ORMs built on it); use the @supabase/ssr package or fetch-based REST calls instead
  • Always connect through Supavisor (port 6543) in serverless environments to avoid exhausting direct database connections
  • Row Level Security policies that fail silently return empty results, not errors — add explicit RLS debugging with a service role bypass in development
  • Supabase Auth with SSR requires manual cookie management using @supabase/ssr and Next.js middleware for reliable session persistence
  • Environment variables in Vercel are deployment-scoped — preview deployments do not inherit production variables unless explicitly configured
  • Real-time subscriptions belong in the browser; the WebSocket-dependent postgres_changes listener cannot run on Edge functions, which are request/response only
  • Cold start latency on serverless functions connecting to Supabase can exceed 3 seconds — use connection warming, keep-alive, or edge functions for latency-critical paths

Frequently Asked Questions

Why does my Supabase query work locally but return empty results on Vercel?

The most common cause is Row Level Security. Locally you may be using the service role key (which bypasses RLS) or your development database has RLS disabled. In production on Vercel, the anon key respects RLS policies. If your policies don't match the user's JWT claims, queries silently return empty arrays instead of throwing errors. Check your RLS policies, verify the auth token is being passed correctly, and test with the same key locally.

Can I use Supabase with Vercel Edge Functions?

Yes, but not for direct Postgres access. Direct database connections rely on node-postgres, which uses Node.js APIs (net, tls, dns) unavailable in the Edge Runtime. Use @supabase/ssr for auth operations, use the Supabase REST API (PostgREST) via fetch for data queries, or move routes that genuinely need direct Postgres to the Node.js runtime.

How do I handle Supabase connection limits on Vercel serverless?

Vercel serverless functions spin up many concurrent instances, each opening its own database connection. Use Supavisor (Supabase connection pooler) on port 6543 instead of the direct database connection on port 5432. Supavisor pools connections and manages the lifecycle. Also set the connection string to use the pooler URL found in your Supabase dashboard under Settings > Database > Connection Pooling.

Why are my environment variables missing in Vercel preview deployments?

Vercel scopes environment variables to specific environments: Production, Preview, and Development. Variables set only for Production will not be available in preview branch deployments. Go to your Vercel project Settings > Environment Variables and ensure each variable is checked for both Production and Preview. For sensitive variables that differ between environments, create separate entries with the Preview checkbox.

How do I fix CORS errors with Supabase Storage on Vercel?

Supabase Storage uses its own CORS configuration separate from your API. Go to your Supabase dashboard > Storage > Policies and ensure the bucket's CORS settings include your Vercel deployment URLs (both production and preview domains). For preview deployments with dynamic URLs, add a wildcard pattern like "https://*-your-team.vercel.app". Also verify that your Storage bucket is set to public if you're serving files directly to browsers.

Why does Supabase Auth lose the session on page refresh in Next.js SSR?

Supabase Auth stores sessions in cookies for SSR, but the default setup doesn't persist cookies correctly across server and client. You need to use @supabase/ssr, configure the cookie options in your Supabase client factory, and add Next.js middleware that refreshes the session on every request. Without the middleware refresh, the session JWT expires silently and the server-rendered page shows an unauthenticated state even though the client-side session is still valid.

How do I reduce cold start latency for Supabase queries on Vercel?

Cold starts on Vercel serverless functions add 1-3 seconds when establishing a new Supabase connection. Mitigations: use the Supabase REST API instead of direct Postgres (REST connections are faster to establish), deploy latency-critical routes as Edge Functions (near-zero cold start), enable connection keep-alive in your Supabase client configuration, and consider Vercel Fluid Compute which reuses function instances across requests.

Can I use Supabase Realtime with Vercel Edge Functions?

Not with the standard WebSocket-based postgres_changes listener, because Edge Functions are short-lived and do not maintain persistent WebSocket connections. For real-time functionality, run Supabase Realtime subscriptions on the client side only (browser), or use Vercel serverless functions (Node.js runtime, not Edge) for server-side subscription processing that needs a persistent connection.

About the Author

Elena Rodriguez

Full-Stack Developer & Web3 Architect

BS Software Engineering, Stanford | Former Lead Engineer at Coinbase

Elena Rodriguez is a full-stack developer and Web3 architect with seven years of experience building decentralized applications. She holds a BS in Software Engineering from Stanford University and has worked at companies ranging from early-stage startups to major tech firms including Coinbase, where she led the frontend engineering team for their NFT marketplace. Elena is a core contributor to several open-source Web3 libraries and has built dApps that collectively serve over 500,000 monthly active users. She specializes in React, Next.js, Solidity, and Rust, and is particularly passionate about creating intuitive user experiences that make Web3 technology accessible to mainstream audiences. Elena also mentors aspiring developers through Women Who Code and teaches a popular Web3 development bootcamp.