As of 06/14/2023 05:20:00 EST, MD is not available in the USA. VPN from elsewhere works. [MD is fine]

Head Contributor Wrangler
Staff
Super Moderator
Joined
Jan 18, 2018
Messages
1,804
I can't see any issues with US traffic at the moment, and no-one is reporting issues in our Discord.
 
Solution
Yuri Enjoyer
Staff
Developer
Joined
Feb 16, 2020
Messages
446
If that was Wi-Fi, try mobile data, and vice versa. The site is definitely up and working just fine everywhere right now, including the east coast of the USA.
 
Yuri Enjoyer
Staff
Developer
Joined
Feb 16, 2020
Messages
446
While us-east-1 was messed up yesterday, we don’t use AWS for compute at all, and it was a mere coincidence that we had issues at the same time 😅
 
Dex-chan lover
Joined
Mar 24, 2018
Messages
600
It wasn't until yesterday, 6/16/2023, that things went back to normal MD speeds for me. I tracert'ed the path when it was a problem and it went all over the place (typical internet self-healing). Now it's back to normal with no extra hops.
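For anyone who wants to repeat that kind of check, here's a minimal sketch of the idea in Python. It assumes a Unix-like traceroute on the PATH (on Windows you'd call tracert instead), and the hostname is just an example:

import subprocess

def hop_count(host: str) -> int:
    # Run the system trace tool and count the hop lines it prints.
    # "-n" skips reverse DNS lookups so the trace finishes faster.
    out = subprocess.run(
        ["traceroute", "-n", host],
        capture_output=True, text=True, check=False,
    ).stdout
    # The first line is a header; every remaining line is one hop.
    return max(len(out.splitlines()) - 1, 0)

print(hop_count("mangadex.org"))  # compare the count on good days vs. bad days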

AWS will never discuss why they were messed up; "cloud services" is only a general description of what they offer. The DNS paths could have been disturbed, causing all kinds of routing issues for everyone, AWS users or not.
 
Dex-chan lover
Joined
Sep 29, 2018
Messages
996

Isn't that what happened with YouTube once? Iran blocked it for themselves and ended up blocking it for the world too.
 
Dex-chan lover
Joined
Mar 24, 2018
Messages
600
I was not aware of that one. But then nobody announces their blunders either. It wasn't until all the "trusted" DNS super servers were patched that a big takeover bullet was dodged. Before the patch, DNS protocols allowed DNS name changes to go unchallenged: call your server a DNS server and anybody could push changes to all the other DNS servers. Perfect man-in-the-middle attacks could happen, or whole takeovers.

Typical hush-hush rush job: sign the white-hat researcher to an NDA, investigate the damage, rush out a new protocol, and release the patch post-haste. It could not be covered up, because all the super servers and DNS servers needed to be patched and rebooted, with a significant effect on internet traffic. Everyone noticed. An after-the-fact report of the event was announced. Unknown/unpatched DNS servers could not pass the challenge. Everyone realized this was an oh-god-no, oh-sh*t moment.
 
Dex-chan lover
Joined
Jan 18, 2023
Messages
2,127
It’s common sense to try to patch things before you announce something like this. It’s incredibly stupid to release information about a vulnerability before there’s a fix ready for it.
 
Yuri Enjoyer
Staff
Developer
Joined
Feb 16, 2020
Messages
446
I was not aware of that one. But then nobody announces their blunders either. It wasn't until all the "trusted" DNS super servers were patched that a big takeover bullet was dodged. Before the patch, DNS protocols allowed DNS name changes to go unchallenged: call your server a DNS server and anybody could push changes to all the other DNS servers. Perfect man-in-the-middle attacks could happen, or whole takeovers.
The YouTube block, if we're talking about the same one, was an incorrect (whether malicious or not is unclear to this day, afaik) BGP announcement by a Pakistani ISP of some YouTube IP prefixes; see https://www.ripe.net/publications/n...s/youtube-hijacking-a-ripe-ncc-ris-case-study

And the "fix" for that is rather... interesting. The way internet networks have operated, historically, was almost entirely trust-based. And if anything, that is still largely the case. There has only somewhat recently been a bit of a bigger push to protect against it, headed by Cloudflare, AWS, and other very large networks [that are not old dinosaurs like historical ISPs] https://isbgpsafeyet.com/

And to be completely honest, the stability of the Internet so far is a testament to the fact that this rather cavalier approach to routing security wasn't necessarily stupid.

The DNS servers you're thinking of are the root servers, and those do much less than you think; they certainly don't answer queries for the IP of a website. Generally speaking, they focus on 2 specific services:
1. Listing the top-level zones (.com, .net, ...) in the "root" zone (which is referred to as ., as technically speaking mangadex.org is in fact mangadex.org., note the final dot)
2. Listing the top-level servers for these zones

For example, if you were to do "full" DNS resolution yourself for mangadex.org, things would go in this order (off the top of my head, some details potentially inaccurate; a rough code sketch follows the list):
1. Query the root servers (.) for where to find .org's nameservers => you get the servers of .org's operator, Public Interest Registry
2. Query PIR for the nameservers of mangadex.org => see diva.ns.cloudflare.com and max.ns.cloudflare.com
2b. pick 1 of these 2, and do the same work from step 1 to get its IP
3. Query one of these for the A record(s) of "mangadex.org" => 45.129.229.1 and 45.129.229.2
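Here's that walk as a rough sketch, not how any real resolver is implemented: it assumes the third-party dnspython package, a single hard-coded root server IP, and it skips CNAMEs, retries, TCP fallback, and IPv6 entirely.

import dns.message      # pip install dnspython
import dns.query
import dns.rdatatype

def ask(server_ip: str, name: str) -> None:
    # Send a single A query for `name` to `server_ip` and print either the
    # answer or the referral (the NS records telling us who to ask next).
    query = dns.message.make_query(name, dns.rdatatype.A)
    response = dns.query.udp(query, server_ip, timeout=5)
    section = response.answer if response.answer else response.authority
    for rrset in section:
        print(rrset)

# 1. Ask a root server (198.41.0.4 is a.root-servers.net) => referral to the .org servers
ask("198.41.0.4", "mangadex.org")
# 2. Ask one of the .org servers from that referral => referral to the
#    mangadex.org nameservers (diva/max.ns.cloudflare.com)
# 3. Ask one of those => the A records of mangadex.org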

And in this chain, there's little to be afraid of. Hijacks at this level just don't happen, historically speaking. It doesn't mean that a root server can't lie, but people operating those aren't clowns, so it's rather rare. Though mechanisms like DNSSEC are gaining more and more traction to protect that hierarchy.

But the thing is, nobody (on the end-user side) does all of this (well, almost nobody; yes, people running a PiHole with unbound doing full recursion can smirk proudly now, but shush). And this is arguably a good thing, because it's extremely inefficient to do so much work for every website.

So not only does your OS do caching, but the vast majority of people also use third-party DNS servers to do (and cache) most of that work.
And typically those are operated by ISPs.
And typically residential ISPs are absolute clowns, on top of being more commonly forced by their local courts to return lies in DNS.
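To contrast with the manual walk above, this is what a typical client-side lookup boils down to: one question to a single recursive resolver that does (and caches) all the work. Again a hedged sketch with dnspython; 1.1.1.1 is just an example public resolver, not what your OS is necessarily configured with.

import dns.resolver     # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["1.1.1.1"]   # normally this comes from your OS/ISP/DHCP settings
answer = resolver.resolve("mangadex.org", "A")
for record in answer:
    print(record.address)            # the recursive resolver did the whole walk for us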

Typical hush-hush rush job: sign the white-hat researcher to an NDA, investigate the damage, rush out a new protocol, and release the patch post-haste. [...] Unknown/unpatched DNS servers could not pass the challenge.

The thing is (and it's why I bother with this long reply): you're not wrong that the situation is shitty, but you got the reason for it being shitty backwards.

This never happens. Instead, what happens is that hacky stopgap measures are implemented, and then people scratch their heads for decades over how to secure things while still supporting old-ass hardware/software, including some random company's DNS server that was set up in 1995 and will never be updated to support new standards/protocols.

And thus you end up with extremely complicated mechanisms to make security features opt-in, features which then often stay off by default anyway, because otherwise someone somewhere is impacted by them even when they're only opt-in: the old-ass implementation of $whatever they rely on never actually followed the spec correctly, and the opt-in behavior causes it to crash.

As an example of how this happens, imagine a protocol where messages are 8 bytes, but in v1 of the protocol only the lower 6 of those bytes are used; they had to pick 8 for it to be a power of 2, so we end up with messages that look like this:
[ 0/unused | 0/unused | X | X | X | X | X | X]

I.e., the first 2 bytes are unused and by convention set to 0. The other bytes hold whatever value (it's not terribly important here, so I just wrote "X").

And the spec of the protocol says that implementations MUST ignore the first 2 bytes of messages. So theoretically, even if by convention they are 0, a participant in that protocol could put whatever instead and be compliant.

But, as it turns out, Dumbfuck Inc. has lazy devs, and they developed an implementation that doesn't really use the first two bytes, but doesn't work if those aren't zero either. Things work swimmingly for 10 years because everyone puts 0 there by convention.

Then 10 years later, we notice a security flaw in the protocol. And (whatever it is) it can be fixed by adding a clause to the spec that in certain cases, the first byte should be set to 1. Well now that's probably a bit of a hacky fix, but whatever, we had some spare space, might as well use it.

Except oh no, Dumbfuck Inc.'s implementation is very popular! And now people's stuff is crashing. And Dumberfuck Inc., a 100-year-old company whose core business isn't tech, is having issues with that machine they bought 20 years ago. They are not happy, but they're also not willing to replace the machine.
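As a toy sketch of that failure mode (hypothetical protocol, made-up function names, nothing to do with any real implementation):

def parse_compliant(msg: bytes) -> bytes:
    # Spec-compliant: MUST ignore the first 2 bytes, whatever they contain.
    assert len(msg) == 8
    return msg[2:]                    # only the lower 6 bytes carry data

def parse_dumbfuck(msg: bytes) -> bytes:
    # The lazy implementation: doesn't use the first 2 bytes, but chokes unless they're 0.
    assert len(msg) == 8
    if msg[0] != 0 or msg[1] != 0:
        raise ValueError("malformed message")   # nowhere in the spec, but shipped anyway
    return msg[2:]

old_msg = bytes([0, 0, 1, 2, 3, 4, 5, 6])       # fine everywhere for 10 years
patched_msg = bytes([1, 0, 1, 2, 3, 4, 5, 6])   # the security fix sets the first byte to 1
parse_compliant(patched_msg)                    # works, as the spec intended
try:
    parse_dumbfuck(patched_msg)                 # and this is where the 20-year-old machine falls over
except ValueError as crash:
    print("Dumbfuck Inc. parser:", crash)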

So their business partners are forced to roll back the fix, or the company will end the business relationship. And so things go.

So yeah, 90% (pulled the number out of my ass, but it's probably not far off) of security issues are due to things supporting decades-old crusty fucking protocol versions that some shitty company somewhere with a big wallet needs to keep working.

This scenario is also why Microsoft especially struggles. They have a LOT of random non-tech businesses as customers, and those make their life unthinkably harder than it is for other ecosystems (think Linux) whose users are, on average, slightly more tech-inclined. MSFT HAS to support shit like ATMs running a Windows XP PoS Edition from 2004. They HAVE to support shit like some random-ass multifunction printer from 1998, an era when HTTPS wasn't even a thing yet and storing passwords in cleartext was A-ok (no idea if 1998 is the right timeline for it, but you get the gist).

Proper protocol fixes are rare, and usually disgustingly complex instead. And they take decades to become widespread if they apply to some popular protocol.

---

Do note, most of this doesn't really apply to MD. We purposefully make our site impossible to use in an insecure way, because we don't have to give a shit about a dinocorp business partner somewhere. Update your devices and browsers; if you don't want to, then use other sites. Some things, like BGP hijacking, are however entirely outside of our control. But they're also very rare, as word gets out real quick and it's a death sentence in the networking world for an ISP to do that; they'd get blacklisted by all the others and lose all connectivity real quick.
 
Dex-chan lover
Joined
Mar 24, 2018
Messages
600
I was trying to be general with my explanation. 99.99% of internet users (that number may well be accurate) don't care how they get their YouTube fix or email; they just want it. I liked it because I was part of an engineering group working for a company creating some of that crummy stuff (while supporting old-ass hardware/software). Sorry, I only had to make it work, not design it. I am sure that crap does not exist anymore; it's been more than a decade since the company went away.
 
Dex-chan lover
Joined
Mar 24, 2018
Messages
600
Rather, I am very sure it does not work anymore, because it was built in China in the early days of producing that stuff (think very cheap and unreliable).
 
