deco - Operational
deno - Operational
VTEX - Operational
Third Party: Amazon Web Services (AWS) → AWS ec2-sa-east-1 - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → API - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → CDN/Cache - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → DNS Root Servers - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → SSL Certificate Provisioning - Operational
Third Party: Cloudflare → Cloudflare Sites and Services → Workers - Operational
Third Party: Fly.io → AWS s3-sa-east-1 - Operational
Third Party: Fly.io → Application Hosting → GRU - São Paulo, Brazil - Operational
Third Party: Github → Actions - Operational
Third Party: Github → API Requests - Operational
Third Party: Github → Git Operations - Operational
Third Party: Github → Pull Requests - Operational
Third Party: Github → Webhooks - Operational
Third Party: Shopify → Shopify API & Mobile - Operational
Third Party: Shopify → Shopify Checkout - Operational
Third Party: Shopify → Shopify Point of Sale - Operational
Third Party: Supabase → Supabase Cloud → AWS ec2-sa-east-1 - Operational
Third Party: Discord → Voice → Brazil - Operational
Third Party: Fly.io → Edge Proxy → Routing - Operational
Third Party: Supabase → Auth - Operational
Third Party: Supabase → Storage - Operational
No action taken by deco. This incident has been resolved.
~5% of requests are resulting in 500 errors. We are currently investigating this incident.
The admin panel was down for ~5 minutes. This did not affect any website in production.
We rolled back Deno on our infrastructure and rebuilt all unstable pods. Errors have now decreased to less than 0.001%.
A Deno update (1.45.2) increased memory pressure, so pods were unable to stay up in our infrastructure.
This issue is affecting only the sites that deployed the new state today.
No further 502 errors have been observed since 12:05 PM (GMT-3).
We saw an increase in 502 errors on our infrastructure. Some pods were not scaling or deploying due to 429 errors (a protective rate-limiting measure).
We relaxed our WAF rules to avoid 429 errors.
Pods are deploying as expected.
Our CoreDNS service became unresponsive due to a node malfunction.
Our infrastructure reports that builds are running smoothly.
We are continuing to work on a fix for this incident.
A more definitive solution will be postponed until tomorrow morning.
We are actively investigating the issue. While we have made improvements to site stability, we are still working on fixing some broken builds and refining the build process.
Users have reported that builds are taking an unusually long time or not completing on GitHub Actions.
Timeout issues were resolved.
No further incidents were observed on our infrastructure. During the incident, at peak, 10% of requests resulted in high-latency responses.
After thorough investigation, we confirmed the issues were related to Deno Deploy. The issue was escalated to their infrastructure team.
Notification received from the health check system about timeouts on our sites.
Jun 2024 to Aug 2024