This is not good. One major outage? Something exceptional. Several outages in a short time? That's a pattern. As someone that's worked in operations, I have empathy; there are so many "temp hacks" that get put in place during incidents. But the rest of the world won't… they're gonna suffer a massive reputation loss if this goes on as long as the last one.
At least this warrants a good review of anyone's dependency on Cloudflare.
If it turns out that this was really just random bad luck, it shouldn't affect their reputation (if humans were rational, that is...)
But if it is what many people seem to imply, that this is the outcome of internal problems/cuttings/restructuring/profit-increase etc, then I truly very much hope it affects their reputation.
But I'm afraid it won't. Just like Microsoft continues to push out software that, compared to competitors, is unstable, insecure, frustrating to use, lacks features, etc., without it harming their reputation or even their bottom line too much. I'm afraid Cloudflare has a de-facto monopoly (technically: a big moat) and can by now get away with offering poorer quality at increasing prices.
Microsoft's reputation couldn't be much lower at this point, that's their trick.
The issue is the uninformed masses being led to use Windows when they buy a computer. They don't even know how much better a system could work, and so they accept whatever is shoved down their throats.
Vibe infrastructure
So that is the best-case definition of what "Vibe Engineering" is.
> Just like Microsoft continues to push out software, that, compared to competitors, is unstable, insecure, frustrating to use, lacks features, etc, without it harming their reputation or even bottomlines too much.
Eh.... This is _kind_ of a counterfactual, tho. Like, we are not living in the world where MS did not do that. You could argue that MS was in a good place to be the dominant server and mobile OS vendor, and simply screwed both up through poor planning, poor execution, and (particularly in the case of server stuff) a complete disregard for quality as a concept.
I think someone who'd been in a coma since 1999 waking up today would be baffled at how diminished MS is, tbh. In the late 90s, Microsoft practically _was_ computers, with only a bunch of mostly-dying UNIX vendors for competition. And one reasonable lens through which to interpret its current position is that it's basically due to incompetence on Microsoft's part.
well that's the thing, such a huge number of companies route all their traffic through Cloudflare. This is at least partially because for a long time, there was no other company that could really do what Cloudflare does, especially not at the scales they do. As much as I despise Cloudflare as a company, their blog posts about stopping attacks and such are extremely interesting. The amount of bandwidth their network can absorb is jaw-dropping.
I've said to many people/friends that use Cloudflare to look elsewhere. When such a huge percentage of the internet flows through a single provider, and when that provider offers a service that allows them to decrypt all your traffic (if you let them install HTTPS certs for you), not only is that a hugely juicy target for nation-states but the company itself has too much power.
But again, what other companies can offer the insane amount of protection they can?
The crowdstrike incident taught us that no one is going to review any dependency whatsoever.
Yep, that's what late stage capitalism leaves you with: consolidation, abuse, helplessness and complacency/widespread incompetence as a result
We are now seeing which companies do not consider the third-party risk of single points of failure in systems they do not control as part of their infrastructure, and what their contingency plan is.
It turns out so far, there isn't one. Other than contacting the CEO of Cloudflare rather than switching on a temporary mitigation measure to ensure minimal downtime.
Therefore, many engineers at affected companies would have failed their own systems design interviews.
Alternative infrastructure costs money, and it's hard to get approval from leadership in many cases. I think many know what the ideal solution looks like, but anything linked to budgets is often out of the engineer's hands.
In some cases it is also a valid business decision. If you have 2 hours of downtime every 5 years, it may not have a significant revenue impact. Most customers think it's too much bother to switch to a competitor anyway, and even if it were simple, the competition might not be better. Nobody gets fired for buying IBM.
The decision was probably made by someone else who moved on to a different company, so they can blame that person. It's only when downtime significantly impacts your future ARR (and bonus) that leadership cares (assuming that someone can even prove that they actually lose customers).
On the other thread there were comments claiming it's unknowable what IaaS some SaaS is using, but SaaS vendors need to disclose these things one way or another, e.g. in DPAs. Here is, for example, Render's list of subprocessors: https://render.com/security
It’s actually fairly easy to know which 3rd party services a SaaS depends on and map these risks. It’s normal due diligence for most companies to do so before contracting a SaaS.
Sometimes it's not worth it. Your plan is just to accept you'll be off for a day or two, while you switch to a competitor.
If there's a fitting competitor worth switching to.
Plus most people don't get blamed when AWS (or to a lesser extent Cloudflare) goes down, since everyone knows more than half the world is down, so there's not an urgent motivation to develop multi-vendor capability.
Can't say that for time-critical services such as hospitals, banks, financial institutions, or air-traffic control.
Only a fool would build an architecture for critical air-traffic with Cloudflare as a SPoF.
My point still stands.
Having no backup or contingency plan for when a third-party system goes down, on a time-critical service, means that you are willing to risk another disaster around the corner.
In those industries, agreeing to wait for them for a "day or two" is not just unacceptable, it isn't even an option.
I'm quite sure the reputational damage has already been done.
How do they not have better isolation of these issues, or redundancy of some sort?
The seed has been planted. It will take a while for others to fill the void. Still, the big players see this as an opportunity to steal market share if Cloudflare cannot live up to its reputation.
This will be another post-mortem of... config file messed up... did not catch it... promise to do better next time... we are sorry.
The problem is architectural.
Cloudflare is a huge system in active development.
It will randomly fail; there is no way it cannot.
There is a point where the cost of not failing simply becomes too high.
Absolutely. I wouldn’t be surprised if they turned the heat up a little after the last incident. The result? Even more incidents.
Lots of big sites are down
Two days ago they had an outage that affected Europe; Cloudflare seems to be going down the drain. I removed it for my personal sites.
Probably fired a lot of their best people in the past few years and replaced them with AI. They have a de-facto monopoly, so we'll just accept it and wait patiently until they fix the problem. You know, business as usual in the grift economy.
>They have a de-facto monopoly
On what? There are lots of CDN providers out there.
They do far more than just CDN. It's the combination of service, features, reach, price, and the integration of it all.
There's only one that lets everyone sign up for free.
The "AI agents" are on holiday when an outage like this happens.
This didn't happen at all. You're just completely making shit up.
Yeah I am a bit.
This is a good reminder for everyone to reconsider making all of their websites depend on a single centralized point of failure. There are many alternatives to the different services which Cloudflare offers.
But a CDN, and most of the other products CF offers, is centralized by nature.
If you switch from CF to the next CF competitor, you've not improved this dependency.
The alternative here is complex or even non-existent. Complex would be some system that allows you to hot-swap CDNs, or to have fallback DDoS protection services, or to build your own in-house. Which, IMO, is the worst thing to do if your business is elsewhere. If you sell, say, pet food online, the dependency risk that comes with a vendor like CF is quite certainly less than the investment needed for, and the risk associated with, building DDoS protection or a CDN on your own; all investment that's not directed at selling more pet food or getting higher margins doing so.
You can load-balance between CDN vendors as well
With what? The only (sensible) way is DNS, but then your DNS provider is your SPOF. Amazon used to run 2 DNS providers (separate NS from 2 vendors for all of AWS), but when one failed, there was still a massive outage.
Then your load balancer becomes the single point of failure.
BGP Anycast will let you dynamically route traffic into multiple front-end load balancers - this is how GSLB is usually done.
Needs an ASN and a decent chunk of PI address space, though, so not exactly something a random startup will ever be likely to play with.
Then add a load balancer in front of your load balancer, duh. /s
Yeah, there is no incentive to do a CDN in-house, especially for businesses that are not tech-oriented. And the cost of the occasional outage has not really been higher than the cost of doing it in-house. And I'm sure other CDNs get outages as well; it's just that CF is so huge that everyone gets to know about it and it makes the news.
IPFS is a decentralized CDN.
We just love to merge the internet into single points of failure
This is just how free markets work; on the internet, with no "physical" limitations, it is simply accelerated.
Left alone, corporations emerge that rival governments and are completely unaccountable. At least there is some accountability of governments to the people, depending on your flavour of government.
No one loves the need for CDNs, other than maybe video streaming services.
The problem is, below a certain scale you can't operate anything on the internet these days without hiding behind a WAF/CDN combo... with the cut-off mark being "we can afford a 24/7 ops team". Even if you run a small niche forum no one cares about, all it takes is one disgruntled donghead that you ban to ruin the fun - DDoS attacks are cheap and easy to get these days.
And on top of that comes the shodan skiddie crowd. Some 0day pops up, and chances are high someone WILL try it out in less than 60 minutes. Hell, look into any web server log; the amount of blind guessing attacks (e.g. /wp-admin/..., /system/login, /user/login) or path traversal attempts is insane.
CDN/WAFs are a natural and inevitable outcome of our governments and regulatory agencies not giving a shit about internet security and punishing bad actors.
There are many alternatives
Of varying quality depending on the service. Most of the anti-bot/captcha crap seems to be equivalently obnoxious, but the handful of sites that use PerimeterX… I've basically sworn off DigiKey as a vendor since I keep getting their bullshit "press and hold" nonsense even while logged in. I don't like that we're trending towards a centralized internet, but that's where we are.
My Cloudflare Pages website works fine.
From the incident page:
A change made to how Cloudflare's Web Application Firewall parses requests caused Cloudflare's network to be unavailable for several minutes this morning. This was not an attack; the change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components. We will share more information as we have it today.
https://www.cloudflarestatus.com/incidents/lfrm31y6sw9q
I’m really curious what their rollout procedure is, because it seems like many of their past outages should have been uncovered if they released these configuration changes to 1% of global traffic first.
They don't appear to have a rollout procedure for some of their globally replicated application state. They had a number of major outages over the past years which all had the same root cause of "a global config change exposed a bug in our code and everything blew up".
I guess it's an organizational consequence of mitigating attacks in real time, where rollout delays can be risky as well. But if you're going to do that, it would appear that the code has to be written much more defensively than it is right now.
Yeah, agreed. This is the same discussion point that came up last time they had an incident.
I really don't buy this requirement to always deploy state changes 100% globally immediately. Why can't they just roll out to 1%, scaling to 100% over 5 minutes (configurable), with automated health checks and pauses? That would go a long way towards reducing the impact of these regressions.
Then if they really think something is so critical that it goes everywhere immediately, then sure set the rollout to start at 100%.
Point is, design the rollout system to give you that flexibility. Routine/non-critical state changes should go through slower ramping rollouts.
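A rough sketch of the kind of ramping loop I mean, purely as an illustration; `set_rollout_percent` and `health_ok` are made-up placeholders, not any real Cloudflare API:

    use std::{thread, time::Duration};

    // Placeholder: would update a replicated flag/config store.
    fn set_rollout_percent(pct: u8) {
        println!("rolling out to {pct}% of traffic");
    }

    // Placeholder: would check error rates, latency, crash loops, etc.
    fn health_ok() -> bool {
        true
    }

    fn staged_rollout(stages: &[u8], pause: Duration) -> Result<(), &'static str> {
        for &pct in stages {
            set_rollout_percent(pct);
            thread::sleep(pause);
            if !health_ok() {
                set_rollout_percent(0); // automatic rollback
                return Err("health check failed; rollout halted");
            }
        }
        Ok(())
    }

    fn main() {
        // e.g. 1% -> 10% -> 50% -> 100%, with a pause and health check between steps
        if let Err(e) = staged_rollout(&[1, 10, 50, 100], Duration::from_secs(60)) {
            eprintln!("{e}");
        }
    }

A change someone considers truly critical could start the same loop at 100%, which is exactly the flexibility point above.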
Can't get hacked when you are down.
For hypothetical conflicting changes (read worst case: unupgraded nodes/services can't interop with upgraded nodes/services), what's best practice for a partial rollout?
Blue/green and temporarily ossify capacity? Regional?
- Push a version with the new logic included but not yet enabled, still using the legacy logic but able to run both
- Push a version that enables the new logic for 1% of traffic (see the sketch after this list)
- Continue rollout until 100%
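A minimal sketch of how the "enable for 1% of traffic" step can be gated per request; the hashing scheme and function names are illustrative assumptions, not anyone's actual implementation:

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Deterministically bucket a request into 0..100 so the same request
    // always takes the same path while the rollout percentage ramps up.
    fn in_rollout(request_id: &str, percent: u64) -> bool {
        let mut h = DefaultHasher::new();
        request_id.hash(&mut h);
        (h.finish() % 100) < percent
    }

    fn new_parser(body: &str) -> String { format!("new:{body}") }
    fn legacy_parser(body: &str) -> String { format!("legacy:{body}") }

    // Both code paths ship together; the flag decides which one runs.
    fn parse_request(request_id: &str, body: &str, rollout_percent: u64) -> String {
        if in_rollout(request_id, rollout_percent) {
            new_parser(body)
        } else {
            legacy_parser(body)
        }
    }

    fn main() {
        // Start with rollout_percent = 1 and ramp it up as confidence grows.
        println!("{}", parse_request("req-42", "payload", 1));
    }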
Can also do canary rollout before that. Canary means rollout to endpoints only used by CF to test. Monitor metrics and automated test results.
That's ok but doesn't solve issues you notice only on actual prod traffic. While it can be a nice addition to catch issues earlier with minimal user impact, best practice on large scale systems still requires a staged/progressive prod rollout.
Yep. This is definitely an "as well as"
Unit test, Integration Test, Staging Test, Staging Rollout, Production Test, Canary, Progressive Rollout
It can all be automated; you can smash through all of that quickly with no human intervention.
You can selectively bypass many rollout procedures in a properly designed system.
If there is a proper rollout procedure that would've caught this, and they bypass it for routine WAF configuration changes, they might as well not have one.
Not sure I buy it. Do 1% for 10 minutes. I mean, it must have taken over half a day to code and test a patch. Why not wait another 10 minutes?
I believe they use Argo according to a previous post mortem.
https://blog.cloudflare.com/deep-dive-into-cloudflares-sept-...
"Please don‘t block the rollout pipleline with a simple react security patch update."
The update they describe should never bring down all services. I agree with other posters that they must lack a rollout strategy, and yet they sent spam emails mocking the reliability of other clouds.
The irony is they support rolling out incrementally with some of their products for deployment.
They need that same mindset for themselves in config/updates/infra changes but probably easier said than done.
So their parser broke again I guess.
And no staged rollout I assume?
Apparently somehow this had never been how Cloudflare did this. I expressed incredulity about this to one of their employees, but yeah, seems like their attitude was "We never make mistakes so it's fastest to just deploy every change across the entire system immediately" and as we've seen repeatedly in the past short while that means it sometimes blows up.
They have blameless post mortems, but maybe "We actually do make mistakes so this practice is not good" wasn't a lesson anybody wanted to hear.
Blameless post mortems should be similar to air accident investigations. I.e. don't blame the people involved (unless they are acting maliciously), but identify and fix the issues to ensure this particular incident is unlikely to recur.
The intent of the postmortems is to learn what the issues are and prevent or mitigate similar issues happening in the future. If you don't make changes as a result of a postmortem then there's no point in conducting them.
>don't blame the people involved (unless they are acting maliciously)
Or negligently.
That still shouldn't be part of the post-mortem; it's more of a performance review item.
They should be performantly removed.
The aviation industry regularly requires certifications, check rides, and re-qualifications when humans mess up. I have never seen anything like that in tech.
Sometimes the solution is to not let certain people do certain things which are risky.
Agree 100%. However, to use your example, there is no regulatory agency that investigates the issue and demands changes to avoid related future problems. Should the industry move in that direction?
However, one of the things you see (if you read enough of them) in accident investigation reports for regulated industries is a recurring pattern:
1. Accident happens.
2. Investigators conclude the accident would not have happened if people did X. They recommend the regulator require that people do X, citing previous such recommendations each iteration.
3. The regulator declines this recommendation, arguing it's too expensive to do X, or that people already do X, or even (hilariously) both.
4. Go to 1.
Too often, what happens is that eventually:
5. Extremely Famous Accident happens, e.g. killing beloved celebrity Space Cowboy.
6. Investigators conclude the accident would not have happened if people did X, and remind the regulator that they have previously recommended requiring X.
7. The press finally reads dozens of previous reports, and so the news story says: Regulator killed Space Cowboy!
8. The regulator decides they actually always meant to require X after all.
As bad as (3) sounds, I'll strongman the argument: it's important to keep the economic cost of any regulation in mind.*
On the one hand, you'd like to prevent the thing the regulation is seeking to prevent.
On the other hand, you'd have costs for the regulation to be implemented (one-time and/or ongoing).
"Is the good worth the costs?" is a question worth asking every time. (Not least because sometimes it lets you downscope/target regulations to get better good ROI)
*Yes, the easy pessimistic take is 'industry fights all regulation on cost grounds', but the fact that the argument is abused doesn't mean it doesn't have some underlying merit
I think conventionally the verb is "to steelman" with the intended contrast being to a strawman, an intentionally weak argument by analogy to how straw isn't strong but steel is. I understood what you meant by "strongman" but I think that "steelman" is better here.
There is indeed a good reason regulators aren't just obliged to institute all recommendations - that would be a lot of new rules. The only accident report I remember reading with zero recommendations was an MAIB (maritime accidents) report here which concluded that a crew member of a fishing boat died at sea after their vessel capsized because both they and the skipper (who survived) were on heroin. The rationale for not recommending anything was that heroin is already illegal, operating a fishing boat while on heroin is already illegal, and it's also obviously a bad idea, so there's nothing to recommend. "Don't do that".
Cost is rarely very persuasive to me, because it's very difficult to correctly estimate what it will actually cost to change something once you've decided it's required, based on the current reality where it is not. Mass production and clever cost reductions resulting from normal commercial pressures tend to drive down costs when we require something, but not before (and often not after we cease to require it, either).
It's also difficult to anticipate all the benefits of a good change without trying it. Lobbyists against a regulation will often try hard not to imagine benefits; after all, they're fighting not to be regulated. But once it's in action, it may be obvious to everyone that this was just a better idea, and absurd that it wasn't always the case.
Remember when you were allowed to smoke cigarettes on aeroplanes? That seems crazy, but at the time it was normal and I'm sure carriers insisted that not being allowed to do this would cost them money - and perhaps for a short while it did.
> it's very difficult to correctly estimate what it will actually cost to change something once you decided it's required - based on current reality where it is not. Mass production and clever cost reductions resulting from the normal commercial pressures tend to drive down costs
Difficult, but not impossible.
What is calculable and does NOT scale down is the cost of compliance documentation and processes. Changing from 1 form of documentation to 4 forms of documentation has a measurable cost that will be imposed forever.
> It's also difficult to anticipate all benefits from a good change without trying it.
That's not a great argument, because it can be counterbalanced by the equally true opposite: it's difficult to anticipate all downsides to a change without trying it.
> Remember when you were allowed to smoke cigarettes on aeroplanes?
Remember when you could walk up to a gate 5 minutes before a flight, buy a ticket, and fly?
The current TSA security theater has had some benefits, but it's also made using airports far worse as a traveler.
I mean, I'm pretty sure there was a long period where you could walk up 5 minutes before, and fly on a plane where you're not allowed to smoke. It's completely unrelated.
The TSA makes no sense as a safety intervention; it's theatre. It's supposed to look like we're trying hard to solve the problem, not be an attempt to solve the problem. And if there was an accident investigation for 9/11, I can't think why; that's not an accident.
As to your specific claim about enforcement, actually we don't even know whether we'd increase paperwork overhead in many cases. Rationalization driven by new regulation can actually reduce this instead.
For a non-regulatory (at least in the sense that there's no government regulators involved) example consider Let's Encrypt's ACME which was discussed here recently. ACME complies with the "Ten Blessed Methods". But prior to Let's Encrypt the most common processes weren't stricter, or more robust, they were much worse and much more labour intensive. Some of them were prohibited more or less immediately when the "Ten Blessed Methods" were required because they're just obviously unacceptable.
The Proof of Control records from ACME are much better than what had been the usual practice prior yet Let's Encrypt is $0 at point of use and even if we count the actual cost (borne by donations rather than subscribers) it's much cheaper than the prior commercial operators had been for much more value delivered.
> They have blameless post mortems, but maybe "We actually do make mistakes so this practice is not good" wasn't a lesson anybody wanted to hear.
Or they could say, "we want to continue to prioritise speed of security rollouts over stability, and despite our best efforts, we do make mistakes, so sometimes we expect things will blow up".
I guess it depends what you're optimising for... If the rollout speed of security patches is the priority then maybe increased downtime is a price worth paying (in their eyes anyway)... I don't agree with that, but at least it's an honest position to take.
That said, if this was to address the React CVE then it was hardly a speedy patch anyway... You'd think they could have afforded to stagger the rollout over a few hours at least.
It's just poor risk management at this point. Making sure that a configuration change doesn't crash the production service shouldn't take more than a few seconds in a well-engineered system even if you're not doing staged rollout.
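Even a crude pre-deploy gate gets you most of the way there. A minimal sketch, with a made-up file name and rule "format" for illustration; the real point is to run the same parsing path the production service uses before shipping the change:

    use std::{fs, process};

    fn main() {
        // Hypothetical candidate rules file produced by the config pipeline.
        let raw = match fs::read_to_string("waf-rules.candidate") {
            Ok(s) => s,
            Err(e) => {
                eprintln!("cannot read candidate config: {e}");
                process::exit(1)
            }
        };

        // Ideally this reuses the exact production parser, so a config that
        // would crash the edge fails the pipeline instead.
        let rule_count = raw
            .lines()
            .map(str::trim)
            .filter(|l| !l.is_empty() && !l.starts_with('#'))
            .count();

        if rule_count == 0 {
            eprintln!("candidate config has no rules; refusing to deploy");
            process::exit(1);
        }
        println!("candidate config OK ({rule_count} rules)");
    }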
React (a frontend JS framework) can now bring down critical Internet infrastructure.
I will repeat it because it's so surreal: React (a frontend JS framework) can now bring down critical Internet infrastructure.
That's Next.js, not React.
Mentioning React Server Components in the status page can be seen as a bad way to shift the blame. Would have been better to not specify which CVE they were trying to patch. The issue is their rollout management, not the Vendor and CVE.
> That's Next.js, not React.
React seems to think that it was React:
https://react.dev/blog/2025/12/03/critical-security-vulnerab...
True, thanks for sharing. Worth mentioning that it's in the "full-stack" part of the framework. It doesn't impact most React websites, while it impacts most Next.js websites.
It was React. Code in React's repository had to be patched to fix this.
Next.js just happens to be the biggest user of this part of React, but blaming Next.js is weird...
Thanks, that's what I acknowledged in the message you just replied to.
I'm not blaming anyone. Mostly outlining who was impacted as it's not really related to the front-end parts of the framework that the initial comment was referring to.
I think the "argument" is that it's a critical vuln so they can't "go slow".
So now a vuln check for a component deployed on, being generous, 1% of servers causes an outage for 30% of the internet.
The argument is dumb.
To be accurate: React developed server-side capabilities, and that's where the vulnerability exists.
It feels noteworthy because React started out frontend-only, but pedantically it's just another backend with a vulnerability.
[flagged]
What was the AI slop part?
When something goes wrong, people are starting to immediately assume it's because of the thing they don't like.
I wonder if this is the new normal? Weekly Cloudflare outages that break huge parts of the internet.
Ah yes, Cloudflare's worst enemy: The configuration change.
On Fridays, yes.
So it's React again in the end... zzzzzzz
So. Another regex problem?
Yes.
Weird that https://www.cloudflarestatus.com/ isn't reporting this properly. It should be full of red blinking lights.
Yeah. I only work for a small company, but you can be certain we will not update the status page if only a small portion of customers are affected. And if we are fully down, rest assured there will be no available hands to keep the status page updated.
>rest assured there will be no available hands to keep the status page updated
That's not how status pages work if implemented correctly. The real reason status pages aren't updated is SLAs. If you agree in a contract to 99.99% uptime, your status page had better reflect that, or it invalidates many contracts. This is why AWS also lies about its uptime and status page.
These services rarely experience outages according to their own figures, but rather 'degraded performance' or some other language that talks around the issue rather than acknowledging it.
It's like when buying a house you need an independent surveyor not the one offered by the developer/seller to check for problems with foundations or rotting timber.
SLAs usually just give you a small credit for the exact period of the incident, which is asymmetric to the impact. We always have to negotiate for termination rights for failing to meet SLA standards but, in reality, we never exercise them.
The reality is that during an incident, everyone is focused on fixing the issue, not updating status pages; automated checks fail or have false positives often, too. :/
Yep, every SLA I've ever seen only offers credit. The idea that providers are incentivized to fudge uptime % due to SLAs makes no sense to me. Reputation and marketing maybe, but not SLAs.
The compensation is peanuts. $137 off a $10,000 bill for 10 hours of downtime, or 98.68% uptime in a month, is well within the profit margins.
This is weird - at this level contracts are supposed to be rock solid so why wouldn't they require accurate status reporting? That's trivial to implement, and you can even require to have it on a neutral third-party like UptimeRobot and be done with it.
I'm sure there are gray areas in such contracts but something being down or not is pretty black and white.
> something being down or not is pretty black and white
This is so obviously not true that I'm not sure if you're even being serious.
Is the control panel being inaccessible for one region "down"? Is their DNS "down" if the edit API doesn't work, but existing records still get resolved? Is their reverse proxy service "down" if it's still proxying fine, just not caching assets?
I understand there are nuances here, and I may be oversimplifying, but if part of the contract effectively says "You must act as a proxy for npmjs.com" yet the site has been returning 500 Cloudflare errors across all regions several times within a few weeks while still reporting a shining 99.99% uptime, something doesn't quite add up. Still, I'm aware I don't know much about these agreements, and I'm assuming the people involved aren't idiots and have already considered all of this.
> I'm sure there are gray areas in such contracts but something being down or not is pretty black and white.
Is it? Say you've got some big geographically distributed service doing some billions of requests per day with a background error rate of 0.0001%, what's your threshold for saying whether the service is up or down? Your error rate might go to 0.0002% because a particular customer has an issue so that customer would say it's down for them, but for all your other customers it would be working as normal.
> something being down or not is pretty black and white
it really isn't. We often have degraded performance for a portion of customers, or just down for customers of a small part of the service. It has basically never happened that our service is 100% down.
Are the contracts so easy to bypass? Who signs a contract with an SLA knowing the service provider will just lie about the availability? Is the client supposed to sue the provider any time there is an SLA breach?
Anyone who doesn't have any choice financially or gnostically. Same reason why people pay Netflix despite the low quality of most of their shows and the constant termination of tv series after 1 season. Same reason why people put up with Meta not caring about moderating or harmful content. The power dynamics resemble a monopoly
Why bother to put the SLA in the contract at all, if people have no choice but to sign it?
Netflix doesn't put in the contract that they will have high-quality shows. (I guess, don't have a contract to read right now.)
Most services are not really critical, but customers want to have 99.999% on paper.
Most of the time people will just get by and write off even a full day of downtime as a minor inconvenience. Loss of revenue for the day - well, you most likely will have to eat that, because going to court and having lawyers fight over it will most likely cost you as much as just forgetting about it.
If your company goes bankrupt because AWS/Cloudflare/GCP/Azure is down for a day or two - guess what - you won't have money to sue them ¯\_(ツ)_/¯ and most likely will have bunch of more pressing problems on your hand.
The company that is trying to cancel its contract early needs to prove the SLA was violated, which is very easy if the company providing the service also provides a page that says their SLA was violated. Otherwise it's much harder to prove.
The client is supposed to monitor availability themselves, that is how these contracts work.
I imagine there will be many levels of "approvals" to get the status page actually showing down, since SLA uptime contracts are involved.
I work for a small company. We have no written SLA agreements.
I have to say that if an incident becomes so overwhelming that nobody can spare even a moment to communicate with customers, that points to a deeper operational problem. A status page is not something you update only when things are calm. It is part of the response itself. It is how you keep users informed and maintain trust when everything else is going wrong.
If communication disappears entirely during an outage, the whole operation suffers. And if that is truly how a company handles incidents, then it is not a practice I would want to rely on. Good operations teams build processes that protect both the system and the people using it. Communication is one of those processes.
> if we are fully down, rest assured there will be no available hands to keep the status page updated
There is no quicker way for customers to lose trust in your service than it to be down and for them to not know that you're aware and trying to fix it as quickly as possible. One of the things Cloudflare gets right is the frequent public updates when there's a problem.
You should give someone the responsibility for keeping everyone up to date during an incident. It's a good idea to give that task to someone quite junior - they're not much help during the crisis, and they learn a lot about both the tech and communication by managing it.
You won't be able to update the status page due to failures anyway.
Why not? A good status page runs on a different cloud provider in a different region, specifically to not be affected at the same time.
This is just business as usual, status pages are 95% for show now. The data center would have to be under water for the status page to say "some users might be experiencing disruptions".
They just did an update, and it is bad (in the sense that they do not seem to realize their clients are down?):
> Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.
> These issues do not affect the serving of cached files via the Cloudflare CDN or other security features at the Cloudflare Edge.
> Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed.
> (in the sense that they are not realizing their clients are down?)
Their own website seems down too https://www.cloudflare.com/
--
500 Internal Server Error
cloudflare
>Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed.
"Might fail"
Well, it does say that now, so…
which datacenter got flooded?
> In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary. Dec 05, 2025 - 09:00 UTC
It's a scheduled maintenance, so the SLA should not apply, right?
https://updog.ai/status/cloudflare reported the incident 13 minutes ago (at the moment of writing this).
Yeah, their status site reports nothing, but then clicking on some of the links on that site brings you to the 500 error.
Company internal status pages are always like this. When you don't report problems they don't exist!
It's wild how none of the big corporations can make a functional status page.
They could, but accurate reporting is not good for their SLAs
They can. They don't want to though.
Management is always going to take too long (in an engineer’s opinion) to manually throw the alerts on. They’re pressing people for quick fixes so they can claim their SLAs are intact.
They were intending to start a maintenance window starting 6 minutes ago, but they were already down by then.
There is an update:
"Cloudflare Dashboard and Cloudflare API service issues"
Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.
Customers using the Dashboard / Cloudflare APIs are impacted as requests might fail and/or errors may be displayed. Dec 05, 2025 - 08:56 UTC
Not weird, that’s tradition by now.
Interesting, I get a 500 if I try to visit coinbase.com, but my WebSocket connections to advanced-trade-ws.coinbase.com are still live with no issues.
Probably those WebSockets are not going through Cloudflare.
> In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary. Dec 05, 2025 - 07:00 UTC
Something must have gone really wrong.
It's 1AM in San Francisco right now. I don't envy the person having to call Matthew Prince and wake him up for this one. And I feel really bad for the person that forgot a closing brace in whatever config file did this.
Agreed, I feel bad for them. But mostly because cloudflare's workflows are so bad that you're seemingly repeatedly set up for really public failures. Like how does this keep happening without leadership's heads rolling. The culture clearly is not fit for their level of criticality
> The culture clearly is not fit for their level of criticality
I don't think anyone's is.
How often do you hear of Akamai going down and they host a LOT more enterprise/high value sites than Cloudflare.
There's a reason Cloudflare has been really struggling to get into the traditional enterprise space and it isn't price.
A quick google turned up an Akamai outage in July that took Linode down and two in 2021. At that scale nobody's going to come up smelling like roses. I mostly dealt with Amazon crap at megacorp, but nobody that had to deal with our Akamai stuff had anything kind to say about them as a vendor.
At first blush it's getting harder to "defend" use of Cloudflare, but I'll wait until we get some idea of what actually broke. For the time being I'll save my outrage for the AI scrapers that drove everyone into Cloudflare's arms.
The last place I heard of someone deploying anything to Akamai was 15 years ago in FedGov.
Akamai was historically only serving enterprise customers. Cloudflare opened up tons of free plans, new services, and basically swallowed much of that market during that time period.
> I don't envy the person having to call Matthew Prince
They shouldn't need to do that unless they're really disorganised. CEOs are not there for day to day operations.
> And I feel really bad for the person that forgot a closing brace in whatever config file did this.
If a closing brace takes your whole infra down, my guess is that we'll see more of this.
Life hack: Announce bug that brings your entire network down as scheduled maintenance.
Yes, the incident report claims this was limited to their client dashboard. It most certainly was not. I have the PagerDuty alerts to prove it...
> Investigating - Cloudflare is investigating issues with Cloudflare Dashboard and related APIs.
They seem to now, a few min after your comment
I'm much more concerned with customer sites being down, which the status page indicates are not impacted. They are... :/
They have enough data to at least automate yellow.
The AI agents can't help out this time.
Maybe we can go back to Stack Overflow :)
Now showing a message, posted at 08:56 UTC.
Yes, it’s really ‘weird’ that they refuse to share any details. Completely unlike AWS, for example. As if being open about issues with their own product wouldn’t be in their best interest. /s
Wow, just plain 500s on customer sites. That's a level of down you don't see that often.
Yeah that's a hard 500 right? Not even Cloudflare's 500 branded page like last time. What could have caused this, I wonder.
"A cable!"
"How do you know?"
"I'm holding it!"
I hope it’s not another Result.unwrap().
Maybe this would cause Rust to adopt exception handling, and by exception I mean panic.
Mine [0] seems to have very high latency but no 500s. But yes, most Cloudflare-proxied websites I tried seem to just return 500s.
A precious glimpse of the less seen page renders.
So, I don't understand the 5 nines they promote. One bad day and those nines are gone. So then next year you are pushing 2 nines.
It's just fabricated bullshit. It's how all the companies do it. 99.999% over a year is literally 5 minutes. Or under an hour in a decade; that's wildly unrealistic.
Reddit was once down for a full day and that month they reported 99.5% uptime instead of 99.99% as they normally claimed for most months.
There is this amazing combination of nonsense going on to achieve these kinds of numbers:
1. Straight-up fraudulent information on the status page, reporting incidents as more minor than any internal monitor would claim.
2. If it's working for at least a few percent of customers, it's not down. Degraded is not counted.
3. If any part of anything is working, then it's not down. For example, with the Reddit case, even if the site was dead, as long as the image server was 1% functional and answering some internal ping, the status is good.
Funnily enough, an hour in a decade on a good host, with a stable service running on it, occasionally updated by version number... it might even be possible. Maybe not quite, but close, if one tries. Meanwhile it seems completely impossible with Cloudflare, AWS, and whatnot, who are having outages every other week these days.
Unlike the previous outage, my server seems fine, and I can use Cloudflare's tunnel to ssh to the host as well.
Yes, Claude is down with a 500 (Cloudflare).
At least they branded it!
It's like someone-shut-down-the-power 500s.
Looking forward to the post mortem on this one. We weren't affected (just using the CDN), and people are saying they weren't affected who are using Cloudflare Workers (a previous culprit which we've since moved off), so I wonder what service / API was actually affected that brought down multiple websites with a 500 but not all of them.
Wise was just down which is a pretty big one.
Also odd how some websites were down this time that previously weren't down with the global outage in November
Our locations excluded from Cloudflare WAF were up, but the rest was down. I think WAF took a dump.
Yeah, it's strange. My sites that are proxied through Cloudflare remained up, but Supabase was taken offline so some backends were down. Either a regional PoP-style issue, or a specific API or service had to be in use to be affected.
The excuse:
>A change made to how Cloudflare's Web Application Firewall parses requests caused Cloudflare's network to be unavailable for several minutes this morning.
>The change was deployed by our team to help mitigate the industry-wide vulnerability disclosed this week in React Server Components.
>We will share more information as we have it today.
https://www.cloudflarestatus.com/incidents/lfrm31y6sw9q
It's quite an unfortunate coincidence that React has indirectly been the reason for two recent issues at Cloudflare haha
Two's a coincidence, three's a pattern; I guess we will have to wait until next month to see if it becomes a pattern. Was there a particular aspect of React Server Components that made it easy for this problem to appear? Would it have been caught or avoided in another framework or language?
Who sent an xml request?
The entire Cloud/SaaS story had a lot of happy-path cost optimization. The particular glitch that triggered the domino effect may be irrelevant relative to the fact that the effect reproduces.
We were not affected either, and we realised it was Cloudflare because Linear was down and they were mentioning an upstream service. Ecosia was also affected, and I then realised they might be relying on Cloudflare too.
The CDN was definitely down also. We were widely impacted by it with 500s.
The CDN was also affected for some customers. We were down with 500s.
Maven Repository was down for me for a while, now it recovered.
> Looking forward to the post mortem
This is becoming a meme.
This has to be setting off some alarm bells internally. A well-written postmortem on an occasional issue is great, but when your postmortems talk about learnings and improvements yet major outages keep happening, they become meaningless.
It was interesting: some of our stuff failed, but some other stuff that used Cloudflare indirectly didn't.
This is second time this week: https://news.ycombinator.com/item?id=46140145
The previous one affected European users for >1h and made many Cloudflare websites nearly unusable for them.
https://downdetector.com/ classic
hmm... https://downdetectorsdowndetector.com/
(edit: it's working now (detecting downdetector's down))
So,
This one is green: https://downdetectorsdowndetector.com
This one is not opening: https://downdetectorsdowndetectorsdowndetector.com
This one is red: https://downdetectorsdowndetectorsdowndetectorsdowndetector....
Lol. The fact that the 4x one actually works and is correctly reporting that the 3x one is down actually makes this a lot funnier to me.
it's like they didn't fully think it through/expect people to actually use it so soon
It’s down detectors all the way down!
downdetectorsdowndetector didn't detect the breakdown of downdetector with a 500 error
A wrong downdetectorsdowndetector is worse than a 500 one. :D
You had one job.
So down²detector was fake all along?
So DownDetector is down, but DownDetectorDownDetector does not detect it... We probably need one more DownDetector. (no)
Yes, we do have one[^1], but unfortunately it looks like it's not checking integrity, just reachability.
We have one. But according to Down Detector's Down Detector's Down Detector's Down Detector, that's also down.
Well Down Detector's Down Detector isn't down...What we might need is a Down Detector's Down Detector Validator
>half the internet is down >downdetector is down >downdetector down detector reports everything is fine
software was a mistake
This is a fake detector that just has frontend logic for mocking realistic data, you can easily see it in the source code.
Ehh, so down detector for down detector is up but it is inaccurate.
great news, schrodingersdetector.com is available!
At least it's still right in spite of being down.
That's the 30% vibe code they promised us.
Cynicism aside, something seems to be going wrong in our industry.
Going? I think we got there a long time ago. I'm sure we all try our best but our industry doesn't take quality seriously enough. Not compared to every other kind of engineering discipline.
Always been there. But it seems to be creeping into institutions that previously cared over the past few years, accelerating in the last.
Salaries are flat relative to inflation and profits. I've long felt that some of the hype around "AI" is part of a wage suppression tactic.
Also “Rewrite it in Rust”.
P.S. it’s a joke, guys, but you have to admit it’s at least partially what’s happening
No, it has nothing to do with Rust.
But it might have something to do with the "rewrite" part:
> The idea that new code is better than old is patently absurd. Old code has been used. It has been tested. Lots of bugs have been found, and they’ve been fixed. There’s nothing wrong with it. It doesn’t acquire bugs just by sitting around on your hard drive.
> Back to that two page function. Yes, I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.
> Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.
> When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.
From https://www.joelonsoftware.com/2000/04/06/things-you-should-...
A lot of words for a 'might'. We don't know what caused the downtime.
Not this time; but the rewrite was certainly implicated in the previous one. They actually had two versions deployed; in response to unexpected configuration file size, the old version degraded gracefully, while the new version failed catastrophically.
Both versions were taken off-guard by the defective configuration they fetched, it was not a case of a fought and eliminated bug reappearing like in the blogpost you quoted.
[dead]
The first one had something to do with Rust :-)
Not really. In C or C++ that could have just been a segfault.
.unwrap() literally means “I’m not going to handle the error branch of this result, please crash”.
Indeed, but fortunately there are more languages in the world than Rust and C++. A language that performed decently well and used exceptions systematically (Java, Kotlin, C#) would probably have recovered from a bad data file load.
There is nothing that prevents you from recovering from a bad data file load in Rust. The programmer who wrote that code chose to crash.
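For what it's worth, a minimal sketch of what "choosing not to crash" can look like in Rust; this is illustrative only, not Cloudflare's actual code or data format:

    use std::fs;

    #[derive(Debug, Clone)]
    struct FeatureFile {
        entries: Vec<String>,
    }

    fn load_feature_file(path: &str) -> Result<FeatureFile, String> {
        let raw = fs::read_to_string(path).map_err(|e| e.to_string())?;
        let entries: Vec<String> = raw.lines().map(String::from).collect();
        // Reject obviously bogus files instead of propagating them.
        if entries.len() > 10_000 {
            return Err(format!("unexpected size: {} entries", entries.len()));
        }
        Ok(FeatureFile { entries })
    }

    // On failure, log and keep serving with the last known-good data
    // instead of calling .unwrap() and taking the process down.
    fn refresh(path: &str, current: FeatureFile) -> FeatureFile {
        match load_feature_file(path) {
            Ok(new_file) => new_file,
            Err(e) => {
                eprintln!("failed to load new feature file ({e}); keeping old one");
                current
            }
        }
    }

    fn main() {
        let current = FeatureFile { entries: vec!["baseline".to_string()] };
        let current = refresh("features.dat", current); // hypothetical path
        println!("{} entries loaded", current.entries.len());
    }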
That's exactly my point. There should be no such thing as choosing to crash if you want reliable software. Choosing to crash is idiomatic in Rust but not in managed languages in which exceptions are the standard way to handle errors.
I am not a C# guy, but I wrote a lot of Java back in the day, and I can authoritatively tell you that it has so-called "checked exceptions" that the compiler forces you to handle. However, it also has "runtime exceptions" that you are not forced to handle, and they can happen anywhere and at any time. Conceptually, it is the same as error versus panic in Rust. One such runtime exception is the notorious `java.lang.NullPointerException`, a/k/a the billion-dollar mistake. So even software in "managed" languages can and does crash, and it is way more likely to do so than software written in Rust, because "managed" languages do not have all the safety features Rust has.
In practice, programs written in managed languages don't crash in the sense of aborting the entire process. Exceptions are usually caught at the top level (both checked and unchecked) and then logged, usually aborting the whole unit of work.
For trapping a bad data load it's as simple as:
    try {
        data = loadDataFile();
    } catch (Exception e) {
        LOG.error("Failed to load new data file; continuing with old data", e);
    }

This kind of code is common in such codebases and it will catch almost any kind of error (except out-of-memory errors). Here is the Java equivalent of what happened in that Cloudflare Rust code:
    try {
        data = loadDataFile();
    } catch (Exception e) {
        LOG.error("Failed to load new data file", e);
        System.exit(1);
    }
So the "bad data load" was trapped, but the programmer decided that either it would never actually occur, or that it is unrecoverable, so it is fine to .unwrap(). It would not be any less idiomatic if, instead of crashing, the programmer decided to implement some kind of recovery mechanism. It is that programmer's fault, and has nothing to do with Rust.Also, if you use general try-catch blocks like that, you don't know if that try-catch block actually needs to be there. Maybe it was needed in the past, but something changed, and it is no longer needed, but it will stay there, because there is no way to know unless you specifically look. Also, you don't even know the exact error types. In Rust, the error type is known in advance.
Yes, I know. But nobody writes code like that in Java. I don't think I've ever seen it outside of top level code in CLI tools. Never in servers.
> It is that programmer's fault, and has nothing to do with Rust.
It's Rust's fault. It provides a function in its standard library that's widely used and which aborts the process. There's nothing like that in the stdlibs of Java or .NET
> Also, if you use general try-catch blocks like that, you don't know if that try-catch block actually needs to be there.
I'm not getting the feeling you've worked on many large codebases in managed languages to be honest? I know you said you did but these patterns and problems you're raising just aren't problems such codebases have. Top level exception handlers are meant to be general, they aren't supposed to be specific to certain kinds of error, they're meant to recover from unpredictable or unknown errors in a general way (e.g. return a 500).
> It's Rust's fault. It provides a function in its standard library that's widely used and which aborts the process. There's nothing like that in the stdlibs of Java or .NET
It is the same as runtime exceptions in Java. In Rust, if you want to have a top-level "exception handler" that catches everything, you can do
    ::std::panic::catch_unwind(|| {
        // ...
    })

In the case of Cloudflare, the programmer simply chose not to handle the error. It would have been the same if the code had been written in Java; there simply would be no top-level try-catch block. Look at how much additional boilerplate it took in your example to ignore the error.
In the Rust case you just don’t call unwrap() if you want to swallow errors like that.
It’s also false that catching all exceptions is how you end up with reliable software. In highly available architectures (e.g. many containers managed by kubernetes), if you end up in a state where you can’t complete work at all, it’s better to exit the process immediately to quickly get removed from load balancing groups, etc.
General top level exceptions handlers are a huge code smell because catching exceptions you (by definition) didn’t expect is a great way to have corrupted data.
When .NET has an unhandled exception, it terminates with abort.
unwrap is NOT idiomatic in Rust
Did you consider rewriting your joke in Rust?
it's never the technology, it's the implementation
cc: @oncall then trigger pagerduty :)
> Cynicism aside, something seems to be going wrong in our industry.
It started after the GFC and the mass centralisation of infrastructure.
I'm just realizing how much we depend on Cloudflare working. Every service I use is unreachable. Even worse than last time. It's almost impossible to do any work atm.
https://downdetectorsdowndetector.com/ is up :) but the status is not correct.
Cloudflare uptime has worsened a lot lately, AI coding has increased exponentially, hmm
Not only do they make my browsing experience a LOT worse (seconds per site for bot detection and additional "are you human" clicks even without VPNs), now they are bringing the entire Internet down. They don't deserve the position they currently have.
> Not only do they make my browsing experience a LOT worse
No, I did (metaphorically, for the websites I control). And I did it because otherwise those sites are fully offline or unusable thanks to the modern floods of unfilterable scrapers.
Months of piecemeal mitigations, but Attack Mode is the only thing that worked. Blame the LLM gold rush and the many, many software engineers with no ethics and zero qualms about racing to find the bottom of the Internet.
The whole "not a bot" prompt every three hours seems like it could get out of the way more often.
You make it sound like the DDoS and Bots are their fault.
They make gazillions. I'm sure they can do better than that.
How many awful things in tech can be rationalized away by "sorry, but this is for you/our protection"?
Claude offline too. 500 errors on the web and the mobile app has been knocked out.
I had to switch to Gemini for it to help me form a thought so I could type this reply. It's dire.
Even LinkedIn is now down. Opening linkedin.com gives me a 500 server error with Cloudflare at the bottom. Quite embarrassing.
At least they were available when Front Door was down!
Somebody at Cloudflare is stretching that initial investigation time as much as possible to avoid having to update their status to being down and losing that Christmas bonus.
Wow, three times in a month is really crushing their trust.
I'll need to check up on DigitalOcean's uptime; it may be better than Cloudflare's.
My Hetzner servers have been running fine for years. Okay, there were times when I broke something, but at least I was able to fix it quickly and never felt dependent on others.
CxOs want to be dependent on someone else, specifically suppliers with pieces of paper saying "we are great, here's a 1% discount on next year's renewal".
If the in house tech team breaks something and fixes it, that's great from an engineer point of view - we like to be useful, but the person at the top is blamed.
If an outsourced supplier (one which the consultants recommend, look at Gartner Quadrants etc) fails, then the person at the top is not blamed, even though they are powerless and the outage is 10 times longer and 10 times as frequent.
Outsourcing is not about outcome, it's about accountability, and specifically avoiding it.
Three?! When was the second?
I can imagine the horror of the pressure on the people responsible for resolution. At that scale of impact it is very hard to keep calm, but the hive mind still has to cooperate and solve the puzzle while the world is basically halted and ready to blame the company you work for.
For us also Digital Ocean, Render, and a few other vendors are down.
At this point picking vendors that don't use Cloudflare in any way becomes the right thing to do.
Claude was also down (which brought me here)
I have a $10B idea: a Cloudflare that does not fail so often.
How about: internet that is actually decentralized.
Yes, on one hand, it was so wonderful. Cloudflare came and said, "Yeah, now we'll save everyone from DDoS, everything's perfect, we'll speed up your site," and bam, they became a bottleneck for the entire internet. It's some kind of nightmare. Why didn't several other startups of comparable popularity appear, with more money invested in them, so that there would be more than one point of failure? I don't understand this. Or at least Cloudflare itself should have had some backup mechanism, so that in case of failure something still works, even if slowly, or at least they could redirect traffic directly, bypassing their proxies. They just didn't do that at all. Something is definitely wrong.
> Why didn't several other such popular startups appear
bunny.net
fastly.com
gcore.com
keycdn.com
Cloudfront
Probably some more I forgot now. CF is not the only option and definitely not the best option.
> Yeah, now we'll save everyone from DDoS, everything's perfect, we'll speed up your site,
... and host the providers selling DDoS services. https://privacy-pc.com/articles/spy-jacking-the-booters.html
Thank you for sending these alternatives; they look good. And, of course, the most important thing is that Cloudflare is free, while these alternatives cost money. And at my traffic volume of tens of terabytes, they would cost hundreds of dollars. Of course, I really don't want to pay. So, as the saying goes, the mice cried and pricked themselves, but kept on eating the cactus.
Nothing's free - one day they will come knocking. Better be prepared to serve at an affordable level.
Nobody got fired for choosing clownflare
It exists and it's called Bunny.net
Looking at their market cap, it's a $71.5B idea.
Ooof, this one looks like a big one!
canva.com
chess.com
claude.com
coinbase.com
kraken.com
linkedin.com
medium.com
notion.so
npmjs.com
shopify.com (!)
and many more I won't add because I don't want to be spammy.
Edit: Just checked all my websites hosted there (~12), they're all ok. Other people with small websites are doing well.
Only huge sites seem to be down. Perhaps they deal with them separately, the premium tier of Cloudflare clients... and those went down, dang.
My small websites are also up. I wonder if they're going to go down soon, or if we're safe.
readthedocs down is hurting me the most. My small websites are doing OK.
zoom