Vanilla Breeze

Server-Side Service Facade

A first-party endpoint that proxies a third-party service. Clients talk to your origin; your origin talks to the provider. One indirection layer yields privacy, caching, swappability, and rate limiting.

The problem with direct third-party calls

When a browser fetches from tile.openstreetmap.org, fonts.googleapis.com, or api.sendgrid.com directly, four concerns travel with it:

  • Privacy. The user’s IP, referrer, and cookies go straight to the third party. Tile providers, font CDNs, and analytics services all get a full view of your audience.
  • Lock-in. Hostnames and keys are baked into client code. Switching providers means touching every component that calls them.
  • No caching control. Your cache policy is whatever the upstream sets. You can’t cache a slow endpoint aggressively, or bust a stale one.
  • No rate limiting. A misbehaving component (or an abusive user) hits the upstream directly. No per-user, per-page, or global budget.

These are not individually severe, but they compound. GDPR scrutiny, a provider outage, or a pricing change turns a one-line fetch() into a migration project.

The pattern

Route every third-party request through a first-party endpoint.

The client never contacts the third party. Your edge fetches upstream, caches the response, and serves it as a first-party resource.
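The rewiring is a one-line URL mapping on the client side. A minimal sketch, assuming a hypothetical /go/tiles facade path:

```typescript
// Map a third-party tile URL onto a first-party facade path.
// Before: the browser would fetch tile.openstreetmap.org directly,
// leaking IP and referrer. After: it only ever contacts your origin.
function toFacadeUrl(thirdPartyUrl: string): string {
  const u = new URL(thirdPartyUrl);
  return `/go/tiles${u.pathname}`; // same resource, first-party origin
}

toFacadeUrl("https://tile.openstreetmap.org/12/2047/1362.png");
// → "/go/tiles/12/2047/1362.png"
```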

Characteristics

  • Same-origin. No CORS, no third-party cookies, no mixed-content issues.
  • Swappable backend. Change provider server-side; the frontend stays the same.
  • Cacheable. Apply your own Cache-Control, edge cache, KV, or R2.
  • Observable. Log requests, latency, and errors at your proxy layer.
  • Rate-limited. Enforce fair-use policies before a request reaches upstream.
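As a sketch of the last point, the facade can enforce a fixed-window budget before any upstream fetch. The in-memory store, window size, and limit below are all illustrative; production code would use a shared store such as KV or Redis:

```typescript
// Fixed-window rate limiter: at most LIMIT requests per user per window.
const WINDOW_MS = 60_000; // 1-minute window (illustrative)
const LIMIT = 120;        // per-user budget (illustrative)
const hits = new Map<string, { windowStart: number; count: number }>();

function allow(userId: string, now = Date.now()): boolean {
  const h = hits.get(userId);
  if (!h || now - h.windowStart >= WINDOW_MS) {
    // New window: reset the counter and admit the request.
    hits.set(userId, { windowStart: now, count: 1 });
    return true;
  }
  h.count += 1;
  return h.count <= LIMIT; // reject once the budget is spent
}
```

The facade calls `allow(...)` before fetching upstream and returns 429 when it fails, so an abusive client exhausts your budget, never the provider's goodwill.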

Direct call vs. service facade

Concern            Direct call                          Service facade
User privacy       IP + referrer sent to third party    Only your server contacts upstream
Provider lock-in   URLs hardcoded in client             Swap backend transparently
Caching            Upstream headers only                Your cache policy + edge / KV
Rate limiting      None                                 Per-user, per-page, global
Offline / SSR      Client-only                          Pre-fetch, pre-cache server-side
Monitoring         No visibility                        Full request / response logging

Applications beyond maps

Any third-party asset or service is a candidate:

  • Icon CDNs. Proxy sprite sheets; swap Lucide for Heroicons without touching markup.
  • Font services. Proxy Google Fonts, self-cache, and avoid FOUT on flaky upstreams.
  • Analytics. A first-party endpoint sidesteps ad blockers and third-party tracking.
  • Payment SDKs. Proxy tokenization for PCI isolation; the client only sees your origin.
  • Social embeds. Render server-side; avoid loading tracking scripts into the page.
  • Image CDNs. Proxy and transform images through your own endpoint.
  • AI / LLM APIs. Keep keys server-side; rate-limit per user or session.

Relationship to /go/

Vanilla Breeze’s /go/ convention is one disciplined instance of this pattern. Every component that talks to a backend service uses a stable /go/{role} URL; the operator wires that URL to whatever provider currently fits. The broader pattern is the same — facades in front of third parties — but /go/ is opinionated about naming, reserved roles, and how service contracts are documented.

Think of the facade pattern as the general shape and /go/ as the VB-flavoured recipe. You can adopt the shape anywhere; adopt /go/ when you want VB’s components and tooling to plug in with zero configuration.

Geo-map integration

Future <geo-map> support will expose a tile-url attribute pointing to the first-party proxy.

The component doesn’t care where tiles come from — it only needs a URL template with {z}, {x}, and {y} placeholders.
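A sketch of that template expansion, with an illustrative /go/ path:

```typescript
// Expand a tile-url template of the kind <geo-map> would accept.
// The component substitutes zoom and tile coordinates into the
// {z}, {x}, {y} placeholders; it never needs to know the provider.
function expandTileUrl(template: string, z: number, x: number, y: number): string {
  return template
    .replace("{z}", String(z))
    .replace("{x}", String(x))
    .replace("{y}", String(y));
}

expandTileUrl("/go/tiles/{z}/{x}/{y}.png", 12, 2047, 1362);
// → "/go/tiles/12/2047/1362.png"
```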

Implementation sketch

A minimal Cloudflare Worker fronting OpenStreetMap tiles, with KV as the cache layer:
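A sketch of what that worker might look like. The KV binding name TILE_CACHE, the /go/tiles path, and the User-Agent string are all illustrative, not part of any published contract:

```typescript
// First-party facade over OSM tiles (Cloudflare Worker sketch).
// Assumes a KV namespace bound as TILE_CACHE in wrangler.toml.
const TILE_RE = /^\/go\/tiles\/(\d+)\/(\d+)\/(\d+)\.png$/;

// 1. Parse identifying parameters from the URL.
export function parseTile(pathname: string): { z: number; x: number; y: number } | null {
  const m = TILE_RE.exec(pathname);
  return m ? { z: +m[1], x: +m[2], y: +m[3] } : null;
}

export default {
  async fetch(request: Request, env: { TILE_CACHE: any }): Promise<Response> {
    const tile = parseTile(new URL(request.url).pathname);
    if (!tile) return new Response("Not found", { status: 404 });
    const key = `${tile.z}/${tile.x}/${tile.y}`;

    // 2. Check the cache layer.
    const cached = await env.TILE_CACHE.get(key, { type: "arrayBuffer" });
    if (cached) {
      return new Response(cached, {
        headers: { "Content-Type": "image/png", "X-Cache": "HIT" },
      });
    }

    // 3. Fetch upstream on cache miss. OSM's tile policy requires
    //    a valid, identifying User-Agent.
    const upstream = await fetch(`https://tile.openstreetmap.org/${key}.png`, {
      headers: { "User-Agent": "example-facade/1.0 (ops@example.com)" },
    });
    if (!upstream.ok) return new Response("Upstream error", { status: 502 });
    const body = await upstream.arrayBuffer();

    // 4. Store with a TTL — tiles change slowly, so 24h is reasonable.
    await env.TILE_CACHE.put(key, body, { expirationTtl: 86400 });

    // 5. Return with first-party headers and cache metadata.
    return new Response(body, {
      headers: {
        "Content-Type": "image/png",
        "Cache-Control": "public, max-age=86400",
        "X-Cache": "MISS",
      },
    });
  },
};
```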

The same shape works on any edge runtime — Vercel Edge Functions, Deno Deploy, Netlify Edge, Node behind Caddy. The five moving pieces are:

  1. Parse identifying parameters from the URL.
  2. Check a cache layer (KV, R2, Redis, filesystem).
  3. Fetch upstream on cache miss.
  4. Store the response with an appropriate TTL.
  5. Return with first-party headers and cache metadata.

Trade-offs

  • Infrastructure. You need an edge function or origin to run the proxy. For static-only sites, this is a new dependency.
  • Origin bandwidth. Cache misses pay egress to upstream through your infra.
  • Cache invalidation. Pick a TTL strategy. OSM tiles update slowly; 24h is fine. API responses may need shorter windows or purge hooks.
  • Upstream policies. Some providers (OSM tiles, Google Fonts, AI APIs) require a valid User-Agent, attribution, or usage caps. A caching proxy that reduces upstream load is usually welcomed; bulk scraping through your facade is still bulk scraping — read the tile usage policy or the equivalent before shipping.
  • Cold-start latency. The first uncached request adds a hop. Mitigate with prefetch, long TTLs, and warming jobs.
  • Cost. KV / R2 storage and edge invocations have a price. For tiles and icons this is typically pennies; for high-volume API calls, budget it.

When to use this pattern

Use it when:

  • Privacy matters (GDPR, user expectations, B2B trust).
  • You might switch providers — or have been bitten by a surprise deprecation.
  • You want caching control beyond what the upstream offers.
  • You need usage monitoring or rate limiting.

Skip it when:

  • You’re prototyping or building an internal tool.
  • The third party is already same-origin (a self-hosted tile server, your own microservice).
  • The service requires client-side auth (OAuth redirect flows) that a proxy would break.

Related