[HN Gopher] Understanding the Worst .NET Vulnerability
___________________________________________________________________
Understanding the Worst .NET Vulnerability
Author : ingve
Score : 187 points
Date : 2025-10-28 11:03 UTC (11 hours ago)
(HTM) web link (andrewlock.net)
(TXT) w3m dump (andrewlock.net)
| r0x0r007 wrote:
| That feeling when you open a brand new project in VS and
| immediately get: "The solution contains packages with
| vulnerabilities"
| CharlieDigital wrote:
| That's a Good Thing rather than shipping vulnerable code.
| cm2187 wrote:
 | And now that everything is a package, it won't get fixed by
 | Windows Update. Which means that if the website isn't actively
 | developed and regularly deployed, it will remain vulnerable.
| lsbehe wrote:
 | M$ offers system-wide installations. Those don't seem to be
 | updated automatically either, but at least I don't have to
 | deploy 6 servers now.
| Uvix wrote:
| On Linux, system-wide installations are handled through the
| system's package manager.
|
| On Windows, if you have the "Install updates for other
| Microsoft products" option enabled, .NET [Core] runtimes
| will be updated through Windows Update.
|
| If the domain's group policy won't let you turn it on from
| the UI (or if you want to turn it on programmatically for
| other reasons), the PowerShell 7 installer has a PowerShell
 | script that can be adapted to do the trick:
 | https://github.com/PowerShell/PowerShell/blob/ba02868d0fa1d7...
| lsbehe wrote:
| archlinux doesn't offer the new version yet.
 | https://archlinux.org/packages/extra/x86_64/aspnet-runtime/
 | Only exposing stuff behind caddy so it doesn't seem to be an
 | issue.
| voxic11 wrote:
 | Actually this bug is in Microsoft.AspNetCore.App.Runtime,
 | which is an implicit package that comes from the runtime. So
 | simply updating your version of .NET should fix any
 | vulnerable applications.
| Traubenfuchs wrote:
| It's pretty much the same in Javaland with maven and spring.
|
| Create a new project with the latest spring version, and maven
| will warn you.
|
| At this point I consider this worthless noise.
| weinzierl wrote:
 | I think Spring doesn't consider vulnerabilities in one of
 | their components to be a Spring vulnerability. At least they
 | do not release an updated version until the next scheduled
 | patch release, not even in the paid version.
|
| You can either wait and accept being vulnerable or update the
| component yourself and therefore run an unsupported and
| untested configuration. Doomed if you do, doomed if you
| don't.
| pastage wrote:
 | It has been in the making for at least ten years. The problem
 | for me has been that production environments and test
 | environments are not the same when you use proxies. So you
 | need to check both, and you need to have the same type of
 | connection that your customers use.
|
| https://www.youtube.com/watch?v=B2qePLeI-s8
|
| From the HTTP must die thread a month ago.
| https://news.ycombinator.com/item?id=44915090
| radicalbyte wrote:
| The problem is that we have a culture of accepting mangled
| requests on the web. This happens in application code too -
| because web developers are sloppy it's common to either disable
| or not use strict input validation.
|
 | In a pure .NET world it's the norm to use strict input
 | validation and tell clients to fix their bad requests, and
 | this looks like one of those cultural blind spots. "We"
 | wouldn't naturally consider a case where a server accepted a
 | request which has not been strictly validated. With the move
 | to .NET Core and a broadening of the scope beyond just
 | enterprises, we'll find issues like this...
| jen20 wrote:
| I don't know about "not targeting enterprise" being the
| problem here - it's super common to find "enterprise" .NET
| APIs that return 200 for every possible condition and put
| some error text as a JSON blob in the response with "success"
| = "false" while setting caching headers.
|
| Mostly this stuff comes down to skill issues.
| j_w wrote:
| At one point I interacted with an API that would return 200
| for every condition, but with a "status" field that would
| have "OK" or "error", except on some browsers where it
| would use "OKAY" instead of "OK".
| PantaloonFlames wrote:
 | If I transmit SOAP or JSON-RPC over HTTP, both of which use
 | the response payload itself to convey whether the request
 | was an error or not, what should the status be in case of
 | error?
 |
 | In JSON-RPC I think 200 OK is correct, with an error payload
 | that says "you are not authorized" or similar.
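 |
 | For reference, a minimal sketch of what that looks like on
 | the wire. The error object shape and the -32000 range of
 | implementation-defined server error codes are from the
 | JSON-RPC 2.0 spec; the message text is made up:
 |
 |     HTTP/1.1 200 OK
 |     Content-Type: application/json
 |
 |     {
 |       "jsonrpc": "2.0",
 |       "error": { "code": -32000, "message": "Not authorized" },
 |       "id": 1
 |     }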
| immibis wrote:
| We have to accept mangled requests when there are clients out
| there that send mangled requests, which they will continue to
| do as long as servers accept them. Postel's law was good for
| prototyping but created a security nightmare in production.
| sfn42 wrote:
| First you create an API, _then_ others start using it. So
| if you never allow mangled requests your clients will
| necessarily send proper requests.
|
 | If you're maintaining an old API you can publish new
 | versions of endpoints that don't accept mangled requests.
 | If it's important, you can give clients a time limit, say
 | a few months, to update their software to use your updated
 | endpoints before you remove the old ones.
| pixl97 wrote:
| This tends to be huge in enterprise. Development, test/UAT, and
| production will all have different proxy methods and
| requirements. Devs may have a proxy with NTLM. Test may have
| something like proxy auto detect. Prod may be manually defined.
|
| It's really fun trying to test connectivity issues like this.
| giancarlostoro wrote:
 | In a high quality setup you have a staging server that is a
 | carbon copy of PROD. Bonus points if you make it so staging
 | and PROD are 100% interchangeable, to the level that you can
 | point PROD to staging, and then turn PROD into staging, and
 | do the same next deployment. If you can do that, you have a
 | stronger chance of at least reproducing production issues.
 |
 | Dev, UAT / QA, Staging, PROD. This is the ideal setup in my
 | eyes. It lets QA / UAT hold changes that are maybe not 100%
 | ready, while not blocking testing that is meant to go into
 | PROD ASAP because it can sit in staging.
| jen20 wrote:
 | To be of any use, staging environments should be scientific
 | tests. They should prove that a given change, if it goes to
 | production, will work.
 |
 | You cannot do this if you're changing more than that one
 | thing. The only way to make this work really is either
 | dynamic environments that completely mirror everything,
 | which tends to be time-consuming or expensive, or continuous
 | delivery to a production-like environment via feature flags
 | and so forth.
|
| Having a staging server that is a mirror of production[1]
| improves things a bit over doing nothing. You need the
| entire environment, including all your dependencies, to
| have a real test of anything, and that includes things that
| corporate IT departments typically hate.
|
| [1]: Why is it so common to see "PROD" written as if it
| were an acronym?
| giancarlostoro wrote:
| I always write it that way maybe for the same reason
| others do it, to emphasize how critical PROD is, so you
| don't overlook it if you just read prod, or production.
| If you see PRODUCTION you might slow down and go "oh
| crap" so it is definitely an emphasis I always add when
| talking about production in text. PROD is just shorter to
| write, but all caps makes the emphasis stick.
|
 | If your staging environment is pointing to the exact same
 | databases PROD is, and other similar dependencies, there's
 | no reason you can't hotswap it with PROD itself. I mean,
 | I've done something like this before.
|
 | It's much easier if your production deployment pipeline is
 | set up for it though. You'd want to scale down drastically
 | for staging, but in my eyes, if you're not going to have
 | staging be as close to a carbon copy of PROD as you humanly
 | can, you might as well not have that fourth environment and
 | just suffer when you cannot reproduce bugs. The real gem of
 | staging is that if it would break in PROD, it would
 | definitely break in staging. In the few companies where we
 | had a carbon copy of PROD set up as a staging environment,
 | where key things are pulled from PROD itself, we had way
 | fewer bugs promoted to PROD when QA tested in staging.
|
| In theory the ROI is worth it, if you care about quality.
| Sadly most places do not care about quality nearly
| enough.
| teddyh wrote:
 | But it makes the text look like it was written by a
 | schizophrenic:
 | <https://web.archive.org/web/20231122160401/https://prestersp...>
| giancarlostoro wrote:
 | I guess, but it's consistently the same word being
 | capitalized.
| pixl97 wrote:
| >In a high quality setup you have a staging server that is
| a carbon copy of PROD
|
 | In low throughput environments I see stuff like this. The
 | problem is that in high throughput environments it doesn't
 | tend to happen, because of the massive expense incurred.
| throwaway201606 wrote:
| just saw your answer - we are thinking exactly the same
| thing but I took the long-winded route to saying it
| throwaway201606 wrote:
 | There are many setups where this is just not possible. In
 | some cases, doing this is prohibitive because of cost or
 | prohibited by law.
|
 | + for the case of cost: lots of very large companies have prod
| environments that cost big $$$. Business will not double
| prod cost for a staging environment mirroring prod. Take an
| example of any large bank you know. The online banking
| platform will cost tens if not hundreds of millions of
| dollars to run. Now consider that the bank will have
| hundreds of different platforms. It is just not
| economically feasible.
|
 | + for the case of law: in some sectors, by law, only
 | workers with "need to know" can access data. Any dev
 | environment data cannot, by law, be a copy of prod. It has
 | to be test data; even anonymized prod data is not allowed
 | in dev/test because of de-anonymization risk.
|
 | Given this, consider a platform / app that is multi-tenant
 | (and therefore data driven), e.g. a SaaS app in a legally
 | regulated industry such as banking or health care. Or even
 | something like Shopify or GMail for corporate, where the
 | app hosts multiple organizations and the org to be used is
 | picked based on data (user login credentials).
|
| The app in this scenario is driven by data parameterization
| - the client site and content are data driven e.g. when
| clientXYZ logs on, the site becomes
| https://clientXYZ.yourAppName.com and all data, config etc
| are "clientXYZ" specific. And you have hundreds or
| thousands of clentsAAA through clientZZZ on this platform.
|
| In such a world, dev & test environments can never be
| matched with prod. Further, the behaviour of the client
| specific sites could be different even with the same code
| because data parameters drive app behaviour.
|
| Long story short, mirroring staging and prod is just not
| feasible in large corporate tech
| jon-wood wrote:
 | Today in petty off-topic complaints I expect to burn some
 | karma on: PROD shouldn't be capitalised; it's an abbreviation
 | of Production, not an initialism of Public Ready
 | Outside-world Delivery.
| drysart wrote:
| PROD isn't capitalized because it's an initialism. It's
| capitalized because the machine is screaming at you that
| this is production, be careful. ;)
| forksspoons wrote:
 | It sounds like this affects anything built upon Kestrel,
 | which is a lot. I was going to try to list it all here, but
 | holy cow.
| nirvana99 wrote:
| ASP.NET Core:
|
| >= 6.0.0 <= 6.0.36
|
| >= 8.0.0 <= 8.0.20
|
| >= 9.0.0 <= 9.0.9
|
| <= 10.0.0-rc.1
|
| Microsoft.AspNetCore.Server.Kestrel.Core:
|
| <= 2.3.0
| Uvix wrote:
| Those are just the ones they're _fixing_. Versions <6.0 are
| still vulnerable, they're just not getting patched because
| they're out of support.
| ozim wrote:
| Don't use out of support software or at least don't use out
| of support software exposed to the internet.
| pixl97 wrote:
| Internal attacks are easy enough in a large enough
| network.
| haydenbarnes wrote:
| >= 6.0.0 <= 6.0.36 versions are not being fixed by Microsoft.
|
 | Fixes are available for .NET 6 from HeroDevs' ongoing
 | security support for .NET 6, called NES* for .NET.
|
| *never ending support
| Bluescreenbuddy wrote:
 | 7 is also EOL. It did not receive a patch. The last time it
 | was updated was May 2024.
| fabian2k wrote:
| > And as a final reminder, even though request smuggling is
| typically described and demonstrated using a proxy in front of
| your server, just not using a proxy does not mean you're
| automatically safe. If you're reading, manipulating, or
| forwarding request streams directly in ASP.NET Core, as opposed
| to just relying on the built-in model binding, then you might be
| at risk to request smuggling attacks.
|
| I'm probably missing something, but I still don't get how this
| would work without a proxy unless my own code manually parses the
| request from scratch. Or maybe that is what the author means.
|
| The vulnerability, as far as I understand it, relies on two
| components interpreting these chunks differently. So one of them
| has to read \r or \n as valid markers for the chunk end, and the
| other one must only allow \r\n as specified.
|
| Kestrel used to allow \r and \n (and the fix is to not do that
| anymore). So only if my own code parses these chunks and uses
| \r\n would I be vulnerable, or?
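 |
 | To make the disagreement concrete, here is my own sketch
 | (not a payload from the article) of the kind of chunked body
 | where the two readings diverge:
 |
 |     POST / HTTP/1.1
 |     Transfer-Encoding: chunked
 |
 |     5;ext\rXXXXX\r\n
 |     0\r\n
 |     \r\n
 |
 | A parser that accepts the bare \r as ending the chunk-size
 | line reads XXXXX as chunk data and sees the body end at the
 | zero chunk. A parser that insists on \r\n is still reading
 | the chunk extension on the size line, so every later byte
 | lands at a different offset, and carefully chosen bytes can
 | form a second request.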
|
 | The proxy version of the vulnerability seems quite clear to me,
 | and pretty dangerous, as .NET parses non-compliantly and would
 | thereby be vulnerable behind any compliant proxy (if the proxy
 | is relevant for security aspects).
|
| But the single application version of the vulnerability seems to
| me to be very unlikely and to require essentially having a
| separate full HTTP parser in my own application code. Am I
| missing something here?
| gwbas1c wrote:
| Basically, if you handle the request at the stream level,
| there's a small chance you _might_ be vulnerable.
|
| For example, let's say you have an HTTP API that checks a few
| headers and then makes another outgoing HTTP request. You might
| just send the stream along, using
 | incomingHttpRequestStream.CopyTo(outgoingHttpRequestStream)
 | (or CopyToAsync).
 | (https://learn.microsoft.com/en-us/dotnet/api/system.io.strea...)
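 |
 | A minimal sketch of that pattern in ASP.NET Core (the
 | endpoint and backend URL are made up; this is the shape of
 | the risk, not code from the article):
 |
 |     // Relays the incoming body byte-for-byte to another
 |     // service, trusting Kestrel's framing of where the
 |     // request body ends.
 |     app.MapPost("/relay", async (HttpRequest request) =>
 |     {
 |         using var client = new HttpClient();
 |         var outgoing = new HttpRequestMessage(
 |             HttpMethod.Post, "http://backend.internal/api")
 |         {
 |             // If the two ends disagree on chunk boundaries,
 |             // smuggled bytes ride along inside this copy.
 |             Content = new StreamContent(request.Body)
 |         };
 |         using var response = await client.SendAsync(outgoing);
 |         return Results.StatusCode((int)response.StatusCode);
 |     });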
|
 | That _might_ be vulnerable, because it could trick your
 | server into sending what appears to be two HTTP requests,
 | where the 2nd one is whatever the malicious party wants it
 | to be... But only if you allow incoming HTTP versions < 2.
 | If you blanket disallow HTTP below 2.0, you aren't
 | vulnerable.
|
| ---
|
| But I agree that this seems to be more "much ado about nothing"
| and doesn't deserve 9.9:
|
 | > The python aiohttp and ruby puma servers, for example,
 | give the vulnerability only a moderate severity rating in
 | both cases. In netty it's even given a low severity.
|
 | I suspect the easiest way to handle this is to disallow HTTP
 | < 2 and then update .NET on your own schedule. (Every minor
 | release of .NET seemed to break something at my company, so
 | we had to lock down to the patch version, otherwise our
 | build was breaking every 2-3 months.)
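 |
 | For what it's worth, a hedged sketch of that lockdown in
 | Kestrel (Program.cs; it assumes all of your clients can
 | actually speak HTTP/2):
 |
 |     builder.WebHost.ConfigureKestrel(options =>
 |     {
 |         options.ConfigureEndpointDefaults(listenOptions =>
 |         {
 |             // Refuse HTTP/1.x entirely; this chunked
 |             // transfer-encoding trick only exists in
 |             // HTTP/1.1.
 |             listenOptions.Protocols = HttpProtocols.Http2;
 |         });
 |     });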
| cimtiaz wrote:
 | I also agree it should be patched anyway, but the 9.9 score
 | is somewhat misleading here. I think Microsoft is scoring
 | the theoretical maximum impact across all possible ASP.NET
 | Core applications, not the vulnerability in isolation. Most
 | production deployments behind modern proxies like nginx,
 | Cloudflare, AWS ALB etc. are likely already protected,
 | because these proxies reject the malformed chunked encoding
 | that Kestrel was incorrectly accepting. The real risk is for
 | apps directly exposing Kestrel to the internet or using
 | older or misconfigured proxies.
| WorldMaker wrote:
| I think the big reason this escalates to such a high score is
| because the Middleware abstraction common in a lot of HTTP
| server designs today (including Kestrel, ASP.NET being
| sometimes viewed in its modern implementation as entirely a
| stack of Middleware in a single trenchcoat) can also be a
| series of nesting doll "micro-proxies" manipulating the HTTP
| request in various ways before passing it to code that trusts
 | the Middleware did its job. With Middleware doing all sorts
 | of jobs, especially various steps of Authentication and
 | Authorization, there can be a lot of security risk if a
 | Middleware is vulnerable.
|
| It wouldn't surprise me _if_ Microsoft found a first-party or
| second-party (support contract) or open source /nuget
| Kestrel/ASP.NET Middleware somewhere in the wild that was
| affected by this vulnerability in a concerning way. In that
| case, it also somewhat makes sense that Microsoft doesn't
| necessarily want to victim blame the affected Middleware
| given that they recognized that Kestrel itself should have
| better handled the vulnerability before it ever passed to
| Middleware.
| fabian2k wrote:
 | But the middleware would usually not work on the raw HTTP
 | request, but on the version already parsed by Kestrel. So
 | everything should see the same version of the request, the
 | one with the non-spec-compliant parsing by Kestrel.
| WorldMaker wrote:
| "Usually", sure, but there's also nothing stopping a
| Middleware from doing whatever it likes with the raw HTTP
| request. A streaming large file upload middleware, for
| instance, might have reason to work more directly with
| Transfer-Encoding: Chunked to optimize its own processes,
| using a custom "BodyReader".
|
 | The CVE points out (and the article as well) some issue
 | with user-land code using `HttpRequest.BodyReader` on the
 | "parsed" request; it just doesn't include specifics of
 | who was using it to do what. Plenty of Middleware may
 | have reason to do custom BodyReader parsing, especially
 | if it applies _ahead of_ ASP.NET Model Binding.
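 |
 | A hedged sketch of what such a Middleware can look like
 | (the logic is invented; BodyReader is the real PipeReader
 | that ASP.NET Core exposes over the request body):
 |
 |     app.Use(async (context, next) =>
 |     {
 |         // Reads the body stream directly; any assumption
 |         // this code makes about where the body ends is only
 |         // as good as Kestrel's (formerly lenient) chunk
 |         // framing.
 |         var reader = context.Request.BodyReader;
 |         var result = await reader.ReadAsync();
 |         reader.AdvanceTo(result.Buffer.Start,
 |                          result.Buffer.End);
 |         await next(context);
 |     });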
| immibis wrote:
| There's actually a near 100% chance you're vulnerable if you
| handle HTTP - or any other non-binary protocol allowing
| connection reuse - at the stream level, and don't parse
| strictly (close connection on duplicate content-length, on
| chunked encoding with content-length, on duplicate transfer-
| encoding, on bare CR or LF, etc).
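 |
 | A hedged sketch of the kind of strict checks I mean, for
 | code that parses headers itself (a fragment; the names and
 | exception choice are mine, and it assumes the usual System
 | usings):
 |
 |     static void ValidateFraming(
 |         IReadOnlyList<KeyValuePair<string, string>> headers)
 |     {
 |         int contentLength = 0, transferEncoding = 0;
 |         foreach (var h in headers)
 |         {
 |             if (h.Key.Equals("Content-Length",
 |                 StringComparison.OrdinalIgnoreCase))
 |                 contentLength++;
 |             if (h.Key.Equals("Transfer-Encoding",
 |                 StringComparison.OrdinalIgnoreCase))
 |                 transferEncoding++;
 |         }
 |         // Any ambiguity about where the body ends: close
 |         // the connection rather than guess.
 |         if (contentLength > 1)
 |             throw new IOException("duplicate Content-Length");
 |         if (transferEncoding > 1)
 |             throw new IOException("duplicate Transfer-Encoding");
 |         if (contentLength > 0 && transferEncoding > 0)
 |             throw new IOException("Content-Length with chunked");
 |         // ...and reject any header line terminated by a
 |         // bare CR or LF instead of CRLF.
 |     }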
|
| If you blanket disallow old HTTP, clients will fail to reach
| you.
| froggertoaster wrote:
| I'm a simple man. I see Andrew Lock, I upvote.
| mzs wrote:
| https://w4ke.info/2025/06/18/funky-chunks.html
| colseph wrote:
| I wonder how many vulnerabilities have been accidentally created
| by adherence to postel's law rather than just being strict in
| what's accepted too.
| capitol_ wrote:
| If "A billion dollar mistake" wasn't already taken by 'null',
| then this would be a good candidate.
| ahoka wrote:
| Oh null is fine, but "everything is nullable" is the devil.
| klysm wrote:
| I frequently get into this argument with people about how
| Postel's law is misguided. Being liberal in what you accept
| comes at _huge_ costs to the entire ecosystem and there are
| much better ways to design flexibility into protocols.
| motorest wrote:
 | > Being liberal in what you accept comes at _huge_ costs to
 | the entire ecosystem
|
| Why do you believe that?
|
| Being liberal in what you accept doesn't mean you can't do
| input validation or you're forced to pass through unsupported
| parameters.
|
 | It's pretty obvious: you validate the input that is relevant
 | to your own use case, you do not throw errors if you stumble
 | upon input parameters you don't support, and you ignore the
 | irrelevant fields.
|
| The law is "be conservative in what you send, be liberal in
| what you accept". The first one is pretty obvious.
|
| How do you add cost to the entire ecosystem by only using the
| fields you need to use?
| SAI_Peregrinus wrote:
| The problem with Postel's law is that people apply it to
| interpreting Postel's law. They read it as encouraging you
| to accept _any_ input, and trying to continue in the face
| of nonsense. They accept malformed input & attempt to make
| sense of it, instead of rejecting it because the fields
| they care about are malformed. Then the users depend on
| that behavior, and it ossifies. The system becomes brittle
| & difficult to change.
|
| I like to call it the "hardness principle". It makes your
| system take longer to break, but when it does it's more
| damaging than it would have been if you'd rejected
| malformed input in the first place.
| motorest wrote:
| > They accept malformed input & attempt to make sense of
| it, instead of rejecting it because the fields they care
| about are malformed.
|
| I don't think that's true at all. The whole point of the
| law is that your interfaces should be robust, and still
| accept input that might be nonconforming in some way but
| still be possible to validate.
|
| The principle still states that if you cannot validate
| input, you should not accept it.
| robertlagrant wrote:
| The state of HTML parsing should convince you that if you
| follow postel's law in one browser then every other
| browser has to follow it in the same way.
| drysart wrote:
| That's a truism in general. If you're liberal in what you
| accept, then the allowances you make effectively become
| part of your protocol specification; and if you hope for
 | interoperability, then _everyone_ has to follow the
 | same protocol specification, which now has to include all
 | of those unofficial allowances you (and other
 | implementors) have paved the road to hell with. If that's
 | not the case, then you don't really have compatible
 | services, you just have services that coincidentally
| happen to work the same way sometimes, and fail other
| times in possibly spectacular ways.
|
 | I have always been a proponent of the exact opposite of
 | Postel's law: if it's important for a service to be
| accommodating in what it accepts, then those
| accommodations should be explicit in the written spec.
| Services MUST NOT be liberal in what they accept; they
| should start from the position of accepting nothing at
| all, and then only begrudgingly accept inputs the spec
| tells them they have to, and never more than that.
|
| HTML eventually found its way there after wandering
| blindly in the wilderness for a decade and dragging all
| of us behind it kicking and screaming the entire time;
| but at least it got there in the end.
| motorest wrote:
| > The state of HTML parsing should convince you that if
| you follow postel's law in one browser then every other
| browser has to follow it in the same way.
|
 | No. Your claim expresses a critical misunderstanding of
 | the principle. It's desirable that a browser should be
 | robust enough to support broken but still perfectly
 | parseable HTML. Otherwise, it fails to be even usable when
 | dealing with anything but perfectly compliant documents,
 | which, mind you, means absolutely none whatsoever.
 |
 | But just because a browser supports broken documents,
 | that doesn't make them less broken. It just means that
 | the severity of the issue is downgraded, and users of
 | said browser have one less reason to migrate.
| capitol_ wrote:
| The reason the internet consists of 99% broken html is
| that all browsers accept that broken html.
|
| If browsers had conformed to a rigid specification and
| only accepted valid input from the start, then people
| wouldn't have produced all that broken html and we
| wouldn't be in this mess that we are in now.
| Timwi wrote:
| It sounds like you didn't read the article. The
| vulnerability occurs precisely because a request parser
| tried to be lenient.
| wvenable wrote:
| My counter argument is that the entire web exists because of
| Postel's law. HTML would just be another obsolete boring
| document format from the 1980s.
|
| I agree that there are better ways to design flexibility into
| protocols but that requires effort, forethought, and most of
| all imagination. You might not imagine that your little
| scientific document format would eventually become the
| world's largest application platform and plan accordingly.
| marcosdumay wrote:
 | There are different interpretations of what "being
 | liberal" means.
|
| For example, some JSON parsers extend the language to accept
| comments and trailing commas. That is not a change that
| creates vulnerability.
|
 | Other parsers extend the language by accepting duplicated
 | keys and disambiguating them with some random rule. That is
 | a vulnerability factory.
|
 | Being flexible by creating a well-defined superlanguage is
 | completely different from doing it with an ill-defined one
 | that depends on heuristics and implementation details to be
 | evaluated.
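 |
 | A concrete example of the second kind (my own, chosen for
 | effect):
 |
 |     { "amount": 1, "amount": 1000000 }
 |
 | A first-wins parser sees 1, a last-wins parser sees 1000000,
 | and two such parsers on either side of a trust boundary give
 | you exactly the same class of desync as the chunked-encoding
 | bug in the article.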
| kikimora wrote:
 | Doesn't the problem come down to the proxy not rejecting a
 | request with two Content-Length headers? If the proxy and
 | upstream parse HTTP correctly, they would either not touch
 | data past the Content-Length or they would never see two
 | HTTP requests, even if the content is chunked and contains
 | bytes resembling an HTTP request.
| throw7 wrote:
 | Many moons ago, we used to run a full application-level HTTP
 | proxy firewall. It didn't last the year. False positives were
 | a headache, and sites would just send shit down the pipe and
 | browsers would happily power through.
|
| I don't hate postel's law, but I admit I try not to think about
| it lest I get triggered by a phone call that such and such site
| doesn't work.
| marcosdumay wrote:
| False positives on snake-oil security software have no relation
| at all with Postel's law.
| cimtiaz wrote:
 | On a related note, I would recommend readers running the
 | affected .NET 8/9 runtime in containerized applications to
 | rebuild their container images from the patched base images
 | and redeploy them. Unlike Azure App Service, the .NET
 | runtime embedded within container images is not
 | automatically patched by Microsoft's platform updates.
 | Images have to be rebuilt and redeployed to receive
 | security fixes.
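 |
 | A hedged sketch of that rebuild (the aspnet image tag is
 | Microsoft's published one; the app layers are made up):
 |
 |     # The floating 8.0 tag picks up the patched runtime on
 |     # rebuild.
 |     FROM mcr.microsoft.com/dotnet/aspnet:8.0
 |     WORKDIR /app
 |     COPY ./publish .
 |     ENTRYPOINT ["dotnet", "MyApp.dll"]
 |
 | Rebuild with `docker build --pull` so a stale cached base
 | image isn't reused, then redeploy.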
___________________________________________________________________
(page generated 2025-10-28 23:02 UTC)