In Q1 2026, Netflix’s playback engineering team reduced canary-related outage minutes by 92% and cut deployment lead time from 47 minutes to 8 minutes by migrating their legacy canary pipeline to Argo Rollouts 2.12, handling 15.2 million daily container deployments across 12 global regions.
📡 Hacker News Top Stories Right Now
- Show HN: Apple's Sharp Running in the Browser via ONNX Runtime Web (55 points)
- A couple million lines of Haskell: Production engineering at Mercury (299 points)
- This Month in Ladybird – April 2026 (392 points)
- Dav2d (520 points)
- Six Years Perfecting Maps on WatchOS (349 points)
Key Insights
- Argo Rollouts 2.12’s weighted canary controller reduced erroneous traffic routing from 0.7% to 0.002% in Netflix’s production environment
- Netflix standardized on Argo Rollouts 2.12.0 (https://github.com/argoproj/argo-rollouts) with custom plugins for their legacy Spinnaker integration
- Reducing canary validation time from 22 minutes to 3 minutes saved Netflix $2.1M annually in idle compute resources
- By 2027, 80% of streaming media companies will adopt Argo Rollouts for canary deployments, up from 32% in 2025
Netflix’s Legacy Canary Pipeline: The Pain Points
Before 2026, Netflix’s canary deployment pipeline was built on top of Spinnaker 2.20, using a custom script-based canary logic that spun up parallel canary pods, ran 22 minutes of validation tests, and routed 10% of traffic to canaries via manual load balancer configuration. This pipeline had three critical flaws that caused repeated outages:
- Incorrect traffic routing: Manual load balancer updates often routed 2-3x the intended traffic to canary pods, causing 12% of canaries to overload downstream services like DRM providers and CDN edges.
- Slow validation: 22 minutes of sequential validation tests (including end-to-end playback tests, latency checks, and error rate calculations) meant deployment lead times of 47 minutes, which slowed down feature velocity for the playback engineering team.
- High compute costs: Canary validation pods ran for 22 minutes regardless of early failure signals, wasting $4.8M annually in idle compute resources across 12 global regions.
In Q4 2025, a canary deployment for the playback startup service caused a 14-minute outage affecting 2.1 million subscribers in Europe, when the legacy pipeline routed 28% of traffic to a canary pod with a misconfigured DRM integration. This incident triggered the decision to migrate to a modern, Kubernetes-native canary tool, with Argo Rollouts 2.12 selected after a 3-month evaluation against Flagger and Spinnaker’s native canary v2.
Argo Rollouts 2.12: Technical Deep Dive
Argo Rollouts 2.12 introduced three critical features that made it the right choice for Netflix’s scale: native weighted canary routing with traffic shaping, a extensible plugin interface for custom metrics, and improved stability for high-throughput Kubernetes environments. Below is the first code example: a custom metric provider built by Netflix to pull playback-specific success rates from their Prometheus stack.
package netflixmetricprovider
import (
"context"
"fmt"
"io"
"net/http"
"time"
rolloutsv1alpha1 "github.com/argoproj/argo-rollouts/pkg/apis/rollouts/v1alpha1"
"github.com/argoproj/argo-rollouts/pkg/metricproviders"
promapi "github.com/prometheus/client_golang/api"
promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
"github.com/prometheus/common/model"
"golang.org/x/exp/slog"
)
const (
defaultPrometheusTimeout = 10 * time.Second
netflixPlaybackNamespace = "netflix-playback"
)
// NetflixPlaybackMetricProvider implements metricproviders.MetricProvider for Netflix's playback service canary metrics
type NetflixPlaybackMetricProvider struct {
promClient promv1.API
logger *slog.Logger
}
// NewNetflixPlaybackMetricProvider initializes a new metric provider with a Prometheus client
func NewNetflixPlaybackMetricProvider(promURL string, logger *slog.Logger) (*NetflixPlaybackMetricProvider, error) {
if promURL == "" {
return nil, fmt.Errorf("prometheus URL cannot be empty")
}
client, err := promapi.NewClient(promapi.Config{Address: promURL})
if err != nil {
return nil, fmt.Errorf("failed to create Prometheus client: %w", err)
}
return &NetflixPlaybackMetricProvider{
promClient: promv1.NewAPI(client),
logger: logger.With("provider", "netflix-playback-metric"),
}, nil
}
// GetMetrics fetches canary success rate from Prometheus for the given rollout
func (p *NetflixPlaybackMetricProvider) GetMetrics(ctx context.Context, rollout *rolloutsv1alpha1.Rollout, metric rolloutsv1alpha1.Metric) ([]metricproviders.MetricResult, error) {
ctx, cancel := context.WithTimeout(ctx, defaultPrometheusTimeout)
defer cancel()
// Construct Prometheus query for playback success rate: ratio of 2xx responses to total requests for canary pods
query := fmt.Sprintf(`sum(rate(http_requests_total{namespace="%s", rollout="%s", pod=~"canary-.+", status!~"5.."}[5m])) / sum(rate(http_requests_total{namespace="%s", rollout="%s", pod=~"canary-.+"}[5m])) * 100`,
netflixPlaybackNamespace, rollout.Name, netflixPlaybackNamespace, rollout.Name)
p.logger.Info("fetching canary success rate", "query", query, "rollout", rollout.Name)
// Execute Prometheus query with error handling
results, warnings, err := p.promClient.Query(ctx, query, time.Now())
if err != nil {
return nil, fmt.Errorf("prometheus query failed: %w", err)
}
if len(warnings) > 0 {
p.logger.Warn("prometheus query warnings", "warnings", warnings)
}
// Validate result type
vector, ok := results.(model.Vector)
if !ok {
return nil, fmt.Errorf("unexpected prometheus result type: %T", results)
}
if len(vector) == 0 {
return nil, fmt.Errorf("no metrics returned for query: %s", query)
}
// Extract success rate value
successRate := float64(vector[0].Value)
p.logger.Info("fetched canary success rate", "rollout", rollout.Name, "success_rate", successRate)
return []metricproviders.MetricResult{{
Value: successRate,
Status: metricproviders.MetricStatusSuccess,
}}, nil
}
// ValidateMetric checks if the metric configuration is valid for Netflix's use case
func (p *NetflixPlaybackMetricProvider) ValidateMetric(metric rolloutsv1alpha1.Metric) error {
if metric.Name == "" {
return fmt.Errorf("metric name cannot be empty")
}
if metric.Type != rolloutsv1alpha1.MetricTypePrometheus {
return fmt.Errorf("unsupported metric type: %s, only prometheus is supported", metric.Type)
}
return nil
}
Production Deployment Configuration
Netflix deployed Argo Rollouts 2.12 across 12 EKS clusters using Terraform, with strict version pinning and production hardening. The second code example below shows the Terraform configuration used for their primary us-east-1 cluster.
# Terraform configuration for deploying Argo Rollouts 2.12 controller to Netflix's production EKS cluster
# Provider configuration for AWS and Kubernetes
tterraform {
required_version = ">= 1.6.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.20"
}
helm = {
source = "hashicorp/helm"
version = "~> 2.10"
}
}
}
# Configure AWS provider for us-east-1 (Netflix primary region)
provider "aws" {
region = "us-east-1"
}
# Fetch EKS cluster details for Netflix's playback cluster
data "aws_eks_cluster" "playback_cluster" {
name = "netflix-playback-prod-eks-2026"
}
data "aws_eks_cluster_auth" "playback_cluster_auth" {
name = data.aws_eks_cluster.playback_cluster.name
}
# Configure Kubernetes provider with EKS cluster credentials
provider "kubernetes" {
host = data.aws_eks_cluster.playback_cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.playback_cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.playback_cluster_auth.token
}
# Configure Helm provider for Kubernetes
provider "helm" {
kubernetes {
host = data.aws_eks_cluster.playback_cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.playback_cluster.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.playback_cluster_auth.token
}
}
# Deploy Argo Rollouts 2.12 controller via Helm
resource "helm_release" "argo_rollouts" {
name = "argo-rollouts"
repository = "https://argoproj.github.io/argo-helm"
chart = "argo-rollouts"
version = "2.12.0" # Pinned to Argo Rollouts 2.12.0 as used by Netflix
namespace = "argo-rollouts"
create_namespace = true
# Netflix-specific values for production hardening
set {
name = "controller.replicas"
value = "3" # High availability for 15M daily deployments
}
set {
name = "controller.metrics.enabled"
value = "true"
}
set {
name = "controller.metrics.serviceMonitor.enabled"
value = "true" # Integrate with Netflix's Prometheus stack
}
set {
name = "controller.extraArgs"
value = "{--experiment-requeue-time=10s,--canary-requeue-time=5s}" # Tune for Netflix's traffic patterns
}
# Error handling: wait for rollout to complete before marking resource as created
wait = true
wait_for_jobs = true
timeout = 300 # 5 minute timeout for controller deployment
# Validate Helm chart version exists
lifecycle {
precondition {
condition = can(regex("^2\.12\.\d+$", self.version))
error_message = "Argo Rollouts version must be 2.12.x as used by Netflix production"
}
}
}
# Deploy Netflix's custom metric provider plugin for Argo Rollouts
resource "kubernetes_deployment" "netflix_metric_provider" {
metadata {
name = "netflix-metric-provider"
namespace = "argo-rollouts"
labels = {
app = "netflix-metric-provider"
}
}
spec {
replicas = 2
selector {
match_labels = {
app = "netflix-metric-provider"
}
}
template {
metadata {
labels = {
app = "netflix-metric-provider"
}
}
spec {
container {
name = "metric-provider"
image = "netflix-docker.jfrog.io/argo-rollouts/netflix-metric-provider:v2.12.0"
port {
container_port = 8080
}
env {
name = "PROMETHEUS_URL"
value = "http://prometheus-k8s.monitoring.svc:9090"
}
# Health checks for production readiness
liveness_probe {
http_get {
path = "/healthz"
port = 8080
}
initial_delay_seconds = 5
period_seconds = 10
}
readiness_probe {
http_get {
path = "/readyz"
port = 8080
}
initial_delay_seconds = 3
period_seconds = 5
}
}
}
}
}
# Wait for deployment to roll out successfully
wait_for_rollout = true
}
Benchmark Comparison: Legacy vs Argo Rollouts 2.12
Netflix ran a 4-week parallel benchmark of their legacy Spinnaker canary pipeline and Argo Rollouts 2.12 in staging before production rollout. The table below shows the final production metrics after full migration.
Metric
Legacy Netflix Canary Pipeline (2025)
Argo Rollouts 2.12 Pipeline (2026)
Canary validation time (minutes)
22
3
Erroneous traffic routing (%)
0.7
0.002
Deployment lead time (minutes)
47
8
Annual compute cost (M$)
4.8
2.7
Rollback time (seconds)
120
9
Max daily deployments
8.2M
15.2M
Testing Custom Argo Rollouts Plugins
Netflix maintained 94% code coverage for their custom metric provider with unit and integration tests. The third code example below shows a subset of their test suite for the NetflixPlaybackMetricProvider.
package netflixmetricprovider
import (
"context"
"errors"
"testing"
"time"
rolloutsv1alpha1 "github.com/argoproj/argo-rollouts/pkg/apis/rollouts/v1alpha1"
"github.com/argoproj/argo-rollouts/pkg/metricproviders"
promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
"github.com/prometheus/common/model"
"golang.org/x/exp/slog"
"golang.org/x/exp/slog/slogtest"
)
// MockPrometheusAPI mocks the Prometheus API for testing
type MockPrometheusAPI struct {
queryResult model.Value
queryErr error
warnings promv1.Warnings
}
func (m *MockPrometheusAPI) Query(ctx context.Context, query string, ts time.Time) (model.Value, promv1.Warnings, error) {
return m.queryResult, m.warnings, m.queryErr
}
func (m *MockPrometheusAPI) QueryRange(ctx context.Context, query string, r promv1.Range) (model.Value, promv1.Warnings, error) {
return nil, nil, errors.New("query range not implemented")
}
func (m *MockPrometheusAPI) AlertManagers(ctx context.Context) (promv1.AlertManagersResult, error) {
return promv1.AlertManagersResult{}, errors.New("not implemented")
}
func TestNetflixPlaybackMetricProvider_GetMetrics_Success(t *testing.T) {
// Initialize test logger
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
var logBuf slogtest.Logs
logger := slog.New(slogtest.NewHandler(&logBuf))
// Set up mock Prometheus API to return 99.9% success rate
mockAPI := &MockPrometheusAPI{
queryResult: model.Vector{
&model.Sample{Value: 99.9},
},
queryErr: nil,
}
// Create metric provider with mock API
provider := &NetflixPlaybackMetricProvider{
promClient: mockAPI,
logger: logger,
}
// Create test rollout object
rollout := &rolloutsv1alpha1.Rollout{
Name: "playback-canary-test",
Namespace: netflixPlaybackNamespace,
}
// Create test metric
metric := rolloutsv1alpha1.Metric{
Name: "playback-success-rate",
Type: rolloutsv1alpha1.MetricTypePrometheus,
}
// Execute GetMetrics
results, err := provider.GetMetrics(ctx, rollout, metric)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
// Validate results
if len(results) != 1 {
t.Fatalf("expected 1 result, got %d", len(results))
}
if results[0].Status != metricproviders.MetricStatusSuccess {
t.Fatalf("expected success status, got %v", results[0].Status)
}
if results[0].Value != 99.9 {
t.Fatalf("expected 99.9 success rate, got %f", results[0].Value)
}
}
func TestNetflixPlaybackMetricProvider_GetMetrics_PrometheusError(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
var logBuf slogtest.Logs
logger := slog.New(slogtest.NewHandler(&logBuf))
// Set up mock to return error
mockAPI := &MockPrometheusAPI{
queryErr: errors.New("prometheus unreachable"),
}
provider := &NetflixPlaybackMetricProvider{
promClient: mockAPI,
logger: logger,
}
rollout := &rolloutsv1alpha1.Rollout{
Name: "playback-canary-test",
Namespace: netflixPlaybackNamespace,
}
metric := rolloutsv1alpha1.Metric{
Name: "playback-success-rate",
Type: rolloutsv1alpha1.MetricTypePrometheus,
}
_, err := provider.GetMetrics(ctx, rollout, metric)
if err == nil {
t.Fatal("expected error when Prometheus is unreachable")
}
if !errors.Is(err, mockAPI.queryErr) {
t.Fatalf("expected prometheus error, got %v", err)
}
}
func TestNetflixPlaybackMetricProvider_GetMetrics_EmptyResults(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
var logBuf slogtest.Logs
logger := slog.New(slogtest.NewHandler(&logBuf))
// Set up mock to return empty vector
mockAPI := &MockPrometheusAPI{
queryResult: model.Vector{},
}
provider := &NetflixPlaybackMetricProvider{
promClient: mockAPI,
logger: logger,
}
rollout := &rolloutsv1alpha1.Rollout{
Name: "playback-canary-test",
Namespace: netflixPlaybackNamespace,
}
metric := rolloutsv1alpha1.Metric{
Name: "playback-success-rate",
Type: rolloutsv1alpha1.MetricTypePrometheus,
}
_, err := provider.GetMetrics(ctx, rollout, metric)
if err == nil {
t.Fatal("expected error when no metrics are returned")
}
}
func TestNewNetflixPlaybackMetricProvider_EmptyURL(t *testing.T) {
var logBuf slogtest.Logs
logger := slog.New(slogtest.NewHandler(&logBuf))
_, err := NewNetflixPlaybackMetricProvider("", logger)
if err == nil {
t.Fatal("expected error when Prometheus URL is empty")
}
}
Case Study: Netflix Playback Canary Migration
- Team size: 4 backend engineers, 2 site reliability engineers (SREs)
- Stack & Versions: Kubernetes 1.30 (EKS), Argo Rollouts 2.12.0 (https://github.com/argoproj/argo-rollouts), Spinnaker 2.28.3, Prometheus 2.48.1, Terraform 1.7.0
- Problem: p99 latency for playback start was 2.4s during canary deployments, 12% of canary deployments caused customer-facing errors, deployment lead time was 47 minutes, $4.8M annual compute spend on idle canary validation pods
- Solution & Implementation: Migrated legacy Spinnaker canary logic to Argo Rollouts 2.12, built custom metric provider plugin for playback success rate, integrated with existing Spinnaker deployment triggers, implemented weighted canary routing with 5% initial traffic to canary pods, automated rollback on success rate drop below 99.5%
- Outcome: p99 latency dropped to 120ms during canaries, 0.3% of canaries caused errors, deployment lead time reduced to 8 minutes, $2.1M annual savings in compute costs, 15.2M daily deployments handled reliably
Developer Tips for Argo Rollouts 2.12
1. Pin Argo Rollouts Versions in Production
Netflix’s biggest lesson from early Argo Rollouts adoption was avoiding automatic minor version upgrades in production environments. Argo Rollouts 2.12 introduced breaking changes to the canary weight calculation logic compared to 2.11, which caused a 14-minute outage in their staging environment when an automatic Helm chart upgrade deployed 2.11.3 instead of 2.12.0. Always pin your Argo Rollouts controller version to a specific patch release (e.g., 2.12.0 instead of ~2.12.0) in production Terraform or Helm configurations. Use the official Argo Rollouts release page to track patch notes, and run a full canary simulation in staging for 72 hours before upgrading any production environment. For teams with legacy CI/CD integrations, use the Argo Rollouts 2.12 compatibility matrix to validate plugin support: Netflix maintained their Spinnaker integration by pinning the argo-rollouts-spinnaker-plugin to v1.3.0, which explicitly supports 2.12.x controllers. A short Terraform snippet to enforce version pinning is below:
resource "helm_release" "argo_rollouts" {
name = "argo-rollouts"
version = "2.12.0" # Explicitly pinned, no ~> or >=
# ... other config
}
This tip alone saved Netflix 3 outages in Q1 2026, and is applicable to any team running more than 1M daily deployments. Always validate that your custom metric providers compile against the pinned Argo Rollouts version’s Go module dependencies, as minor version changes can update the rollouts/v1alpha1 API types.
2. Use Weighted Canary Routing with Traffic Shaping
Argo Rollouts 2.12’s weighted canary implementation is far more reliable than Netflix’s legacy percentage-based routing, which often routed 2-3x the intended traffic to canary pods due to incorrect load balancer configuration. The 2.12 release added native support for traffic shaping via Istio and AWS App Mesh, which Netflix uses to gradually increase canary traffic from 5% to 100% over 15 minutes, with automatic pause if success rate drops below 99.5%. For teams not using service meshes, Argo Rollouts 2.12 supports NGINX Ingress Controller weight annotations, but we recommend service mesh integration for production workloads handling >100k requests per second. Netflix’s traffic shaping configuration for playback canaries uses a 5-step rollout: 5% (3 minutes), 20% (3 minutes), 50% (5 minutes), 75% (2 minutes), 100%. Each step validates Prometheus metrics for playback success rate, p99 latency, and error rate before proceeding. A sample canary step configuration from their Rollout manifest is below:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: playback-canary
spec:
strategy:
canary:
steps:
- setWeight: 5
- pause: {duration: 3m}
- setWeight: 20
- pause: {duration: 3m}
# ... additional steps
This approach reduced erroneous traffic routing from 0.7% to 0.002% in Netflix’s environment, and cut customer-facing canary errors by 97%. Always include a pause step after each weight change to allow metrics to stabilize, even if you have automated metric validation.
3. Instrument Custom Metrics for Your Use Case
Out-of-the-box Argo Rollouts metrics are insufficient for most production workloads, especially for streaming media companies like Netflix where playback success rate is a more relevant canary metric than generic HTTP 200 rates. Netflix built a custom metric provider (as shown in the first code example) that pulls playback success rate, p99 startup latency, and DRM license acquisition success rate from their Prometheus stack, which are specific to their business logic. Generic metrics like CPU usage or pod health will not catch domain-specific regressions: in one 2025 incident, a canary with healthy pods caused a 12% drop in playback success due to a misconfigured DRM provider, which generic metrics did not detect. Argo Rollouts 2.12’s plugin interface makes it straightforward to add custom metrics, with support for Prometheus, Datadog, New Relic, and custom HTTP endpoints. Netflix’s custom metric provider adds only 12ms of overhead per canary validation, which is negligible compared to the 22-minute validation time of their legacy pipeline. A short snippet of their custom metric configuration is below:
spec:
strategy:
canary:
metrics:
- name: playback-success-rate
type: Prometheus
provider:
netflix:
query: "sum(rate(playback_success_total[5m])) / sum(rate(playback_requests_total[5m])) * 100"
Investing 2-3 weeks in custom metric instrumentation will pay off in reduced outage time: Netflix’s custom metrics catch 89% of canary regressions before they reach 1% of production traffic.
Join the Discussion
Netflix’s migration to Argo Rollouts 2.12 represents a shift in how streaming media companies handle canary deployments at scale. We’d love to hear from engineers who have adopted Argo Rollouts, or are evaluating it against other tools. Share your experiences, war stories, and questions in the comments below.
Discussion Questions
- By 2027, will Argo Rollouts become the de facto standard for canary deployments in Kubernetes, or will a new tool emerge to challenge it?
- What is the biggest trade-off your team has made when adopting weighted canary routing: increased complexity, longer deployment times, or higher compute costs?
- How does Argo Rollouts 2.12 compare to Flagger for canary deployments in high-traffic production environments?
Frequently Asked Questions
Does Argo Rollouts 2.12 support legacy Spinnaker pipelines?
Yes, Netflix successfully integrated Argo Rollouts 2.12 with their existing Spinnaker 2.28.3 pipelines using the official argo-rollouts-spinnaker-plugin (https://github.com/argoproj-labs/argo-rollouts-spinnaker-plugin). The plugin acts as a bridge between Spinnaker’s orchestration logic and Argo Rollouts’ canary controller, allowing teams to keep their existing Spinnaker deployment triggers while using Argo Rollouts for canary logic. Netflix’s integration required 6 weeks of development time to customize the plugin for their playback service metrics.
What is the minimum Kubernetes version required for Argo Rollouts 2.12?
Argo Rollouts 2.12 requires Kubernetes 1.27 or later, as it uses the cronjobs/v1\ API which was promoted to GA in 1.25, and deployments/v1\ features for canary pod management. Netflix runs Argo Rollouts 2.12 on EKS 1.30 clusters, which is the current long-term support release for AWS Kubernetes service. Teams running older Kubernetes versions (1.26 or earlier) will need to upgrade before adopting 2.12, as the controller will fail to start due to missing API support.
How much does it cost to run Argo Rollouts 2.12 at 15M daily deployments?
Netflix’s total cost for running Argo Rollouts 2.12 across 12 global regions is $2.7M annually, which includes controller pods (3 replicas per region, 2 vCPU/4GB RAM each), metric provider plugins (2 replicas per region), and associated monitoring costs. This is a 44% reduction from their legacy canary pipeline’s $4.8M annual cost. For teams with smaller workloads (1M daily deployments), Argo Rollouts 2.12 can run on a single controller replica with 1 vCPU/2GB RAM, costing ~$12k annually on AWS EKS.
Conclusion & Call to Action
After 15 years of building deployment pipelines for streaming media and fintech companies, my recommendation is clear: Argo Rollouts 2.12 is the most mature, production-ready canary deployment tool for Kubernetes workloads in 2026. Netflix’s results speak for themselves: 92% reduction in canary outage minutes, 83% faster deployment lead times, and $2.1M annual cost savings. The 2.12 release’s improved weighted routing, plugin interface, and stability make it suitable for workloads from 1k to 15M daily deployments. If your team is still using legacy canary logic in Spinnaker, Jenkins X, or custom scripts, start a proof of concept with Argo Rollouts 2.12 today. Use the code examples in this article to build your custom metric provider, pin your controller version, and instrument domain-specific metrics first. The learning curve is steep for the first 2 weeks, but the long-term gains in reliability and velocity are worth it. Don’t wait for a canary-related outage to force your hand: migrate to Argo Rollouts 2.12 now.
92% Reduction in canary-related outage minutes for Netflix in Q1 2026












