112

As written in the title, my question is: why does TCP/IP use big-endian encoding when conveying data, and not the alternative little-endian scheme?

5
  • 63
    Despite the fact that it has been closed, this page was quite helpful
    – Goaler444
    Apr 8, 2013 at 10:59
  • 7
    From the Wikipedia article, under the Big Endian link: Networks generally use big-endian order, and thus it is called network order when sending information over a network in a common format. The telephone network, historically and presently, uses a big-endian order; doing so allows routing while a telephone number is being composed. [...] Presumably the early computer networks relied on the telephone networks of the day, and the rest is history...
    – atravers
    Dec 24, 2020 at 2:02
  • 1
    By the time the "standard" was created, the majority of the servers were big-endian. Nowadays it is the opposite, but we do not change the TCP/IP protocol due to backwards compatibility. New protocols can use little-endian, though.
    Apr 13, 2021 at 16:57
  • ...but if you are thinking of using little-endian in your shiny-new network protocol, this should interest you - humans switching between fundamentally-different formats or systems is a fraught exercise...
    – atravers
    Jul 18, 2021 at 1:38
  • This should be re-opened if so many questions point back to here as the original.
    May 6 at 9:40

1 Answer

92

RFC 1700 stated it must be so (and defined network byte order as big-endian).

The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.
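
To picture that wording concretely, here is a minimal C sketch (an illustration, not from the RFC itself): it writes a 32-bit value most-significant-octet-first, independent of the host CPU's own byte order.

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal sketch of RFC 1700's "big-endian" picture: the most
     * significant octet goes first (leftmost), the least significant
     * octet last (rightmost), regardless of the host CPU's byte order. */
    static void put_be32(uint8_t out[4], uint32_t value)
    {
        out[0] = (uint8_t)(value >> 24); /* most significant octet */
        out[1] = (uint8_t)(value >> 16);
        out[2] = (uint8_t)(value >> 8);
        out[3] = (uint8_t)(value);       /* least significant octet */
    }

    int main(void)
    {
        uint8_t wire[4];
        put_be32(wire, 0x0A0B0C0DU);
        printf("%02X %02X %02X %02X\n", wire[0], wire[1], wire[2], wire[3]);
        /* prints "0A 0B 0C 0D" on any platform */
        return 0;
    }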

The reference they make is to

On Holy Wars and a Plea for Peace
Cohen, D.
Computer

The paper can be located at IEN-137 or on this IEEE page.


Abstract:

Which way is chosen does not make too much difference. It is more important to agree upon an order than which order is agreed upon.

It concludes that both big-endian and little-endian schemes would have been possible. There is no better/worse scheme; either could be used in place of the other, as long as it is applied consistently across the whole system/protocol.
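
In practice, that consistency is what the standard POSIX htonl/ntohl family enforces. A small sketch (mine, not part of the RFC or the paper): the sender converts from host order to network order, the receiver converts back, and neither side needs to know the other's native endianness.

    #include <arpa/inet.h>  /* htonl, ntohl (POSIX) */
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the consistency argument: convert to network
     * (big-endian) order on send and back to host order on receive.
     * On a big-endian host these calls are no-ops; on a little-endian
     * host they swap bytes. Either way the wire format is identical. */
    int main(void)
    {
        uint32_t host_value = 0xDEADBEEFU;        /* arbitrary example value */
        uint32_t wire_value = htonl(host_value);  /* host -> network order */
        uint32_t round_trip = ntohl(wire_value);  /* network -> host order */
        printf("host=%08X round_trip=%08X\n", host_value, round_trip);
        return 0;
    }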

5
  • 1
    RFC 3232 appears to say "RFC 1700 is obsolete" without giving any replacement
    – M.M
    Apr 27, 2016 at 2:43
  • 27
    @Anirudh, This "answer" is avoiding the question. The question is asking for the underlying reason why big-endian was chosen instead of the alternative(s). Re "Which way is chosen does not make too much difference", that may be false, because in reality it matters due to the simple fact that there are performance issues (and such a convention is entrenched in the very bottom layers of network communications).
    – Pacerier
    Oct 2, 2016 at 7:38
  • 5
    @Pacerier There wouldn't be a difference in terms of performance, which is something the linked paper talks about in detail.
    Oct 4, 2016 at 5:44
  • 5
    There is a significant difference. As a lot of network protocol parsers are written in C or a derivative of it for performance reasons, having little-endian encoding on an Intel/AMD/little-endian computer means a simple cast of a "void *" to a "struct *". If a conversion is needed, "htonl, htons, ntohl, ntohs" needs to be called on each field, inherently creating a copy (a sketch of this trade-off follows these comments). Why is network-byte-order defined to be big-endian?
    – Pierre-LucBertrand
    Feb 17, 2023 at 19:04
  • @Pierre-LucBertrand you can detect and recover that behavior while being ambidextrous with #pragma
    May 6 at 9:39
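
To make the trade-off in the comment above concrete, here is a C sketch; the two-field struct hdr wire format is hypothetical. Parsing a big-endian wire format on a little-endian CPU requires the per-field ntohs/ntohl calls shown, whereas a wire format matching the host's byte order could use the copied fields as-is.

    #include <arpa/inet.h>  /* ntohs, ntohl */
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical two-field header, for illustration only:
     * a 16-bit length followed by a 32-bit sequence number. */
    struct hdr {
        uint16_t length;
        uint32_t seq;
    };

    /* Portable parse of a big-endian wire format: memcpy each field
     * out of the buffer (avoiding the alignment/aliasing pitfalls of
     * a raw pointer cast), then convert it with ntohs/ntohl. If the
     * wire format matched the host's byte order, the two conversion
     * lines could be dropped - that is the saving the comment above
     * points at. */
    static struct hdr parse_hdr(const uint8_t *buf)
    {
        struct hdr h;
        memcpy(&h.length, buf, sizeof h.length);
        memcpy(&h.seq, buf + 2, sizeof h.seq);
        h.length = ntohs(h.length);  /* network -> host, per field */
        h.seq    = ntohl(h.seq);     /* network -> host, per field */
        return h;
    }

    int main(void)
    {
        const uint8_t wire[6] = { 0x00, 0x10, 0x00, 0x00, 0x00, 0x2A };
        struct hdr h = parse_hdr(wire);
        /* h.length == 16 and h.seq == 42 on any platform */
        return (h.length == 16 && h.seq == 42) ? 0 : 1;
    }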
