21 Full Copyright Statement

Copyright (C) The Internet Society (1999). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Acknowledgement

Funding for the RFC Editor function is currently provided by the Internet Society.



Table of Contents

  • 1.1.     Requirements Notation
  • 1.2.     Syntax Notation
  • 2.1.     Client/Server Messaging
  • 2.2.     Implementation Diversity
  • 2.3.     Intermediaries
  • 2.4.     Caches
  • 2.5.     Conformance and Error Handling
  • 2.6.     Protocol Versioning
  • 2.7.1.     http URI Scheme
  • 2.7.2.     https URI Scheme
  • 2.7.3.     http and https URI Normalization and Comparison
  • 3.1.1.     Request Line
  • 3.1.2.     Status Line
  • 3.2.1.     Field Extensibility
  • 3.2.2.     Field Order
  • 3.2.3.     Whitespace
  • 3.2.4.     Field Parsing
  • 3.2.5.     Field Limits
  • 3.2.6.     Field Value Components
  • 3.3.1.     Transfer-Encoding
  • 3.3.2.     Content-Length
  • 3.3.3.     Message Body Length
  • 3.4.     Handling Incomplete Messages
  • 3.5.     Message Parsing Robustness
  • 4.1.1.     Chunk Extensions
  • 4.1.2.     Chunked Trailer Part
  • 4.1.3.     Decoding Chunked
  • 4.2.1.     Compress Coding
  • 4.2.2.     Deflate Coding
  • 4.2.3.     Gzip Coding
  • 4.3.     TE
  • 4.4.     Trailer
  • 5.1.     Identifying a Target Resource
  • 5.2.     Connecting Inbound
  • 5.3.1.     origin-form
  • 5.3.2.     absolute-form
  • 5.3.3.     authority-form
  • 5.3.4.     asterisk-form
  • 5.4.     Host
  • 5.5.     Effective Request URI
  • 5.6.     Associating a Response to a Request
  • 5.7.1.     Via
  • 5.7.2.     Transformations
  • 6.1.     Connection
  • 6.2.     Establishment
  • 6.3.1.     Retrying Requests
  • 6.3.2.     Pipelining
  • 6.4.     Concurrency
  • 6.5.     Failures and Timeouts
  • 6.6.     Tear-down
  • 6.7.     Upgrade
  • 7.     ABNF List Extension: #rule
  • 8.1.     Header Field Registration
  • 8.2.     URI Scheme Registration
  • 8.3.1.     Internet Media Type message/http
  • 8.3.2.     Internet Media Type application/http
  • 8.4.1.     Procedure
  • 8.4.2.     Registration
  • 8.5.     Content Coding Registration
  • 8.6.1.     Procedure
  • 8.6.2.     Upgrade Token Registration
  • 9.1.     Establishing Authority
  • 9.2.     Risks of Intermediaries
  • 9.3.     Attacks via Protocol Element Length
  • 9.4.     Response Splitting
  • 9.5.     Request Smuggling
  • 9.6.     Message Integrity
  • 9.7.     Message Confidentiality
  • 9.8.     Privacy of Server Log Information
  • 10.     Acknowledgments
  • 11.1.     Normative References
  • 11.2.     Informative References
  • A.1.1.     Multihomed Web Servers
  • A.1.2.     Keep-Alive Connections
  • A.1.3.     Introduction of Transfer-Encoding
  • A.2.     Changes from RFC 2616
  • Appendix B.     Collected ABNF
  • Authors' Addresses

Internet Engineering Task Force (IETF)                   R. Fielding, Editor
Request for Comments: 7230                                             Adobe
Obsoletes: 2145, 2616                                     J. Reschke, Editor
Updates: 2817                                                     greenbytes
Category: Standards Track                                          June 2014
ISSN: 2070-1721

Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing

Abstract

The Hypertext Transfer Protocol (HTTP) is a stateless application-level protocol for distributed, collaborative, hypertext information systems. This document provides an overview of HTTP architecture and its associated terminology, defines the "http" and "https" Uniform Resource Identifier (URI) schemes, defines the HTTP/1.1 message syntax and parsing requirements, and describes related security concerns for implementations.

Status of This Memo

This is an Internet Standards Track document.

This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741 .

Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7230 .

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents ( http://trustee.ietf.org/license-info ) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

This document may contain material from IETF Documents or IETF Contributions published or made publicly available before November 10, 2008. The person(s) controlling the copyright in some of this material may not have granted the IETF Trust the right to allow modifications of such material outside the IETF Standards Process. Without obtaining an adequate license from the person(s) controlling the copyright in such materials, this document may not be modified outside the IETF Standards Process, and derivative works of it may not be created outside the IETF Standards Process, except to format it for publication as an RFC or to translate it into languages other than English.

1.   Introduction

The Hypertext Transfer Protocol (HTTP) is a stateless application-level request/response protocol that uses extensible semantics and self-descriptive message payloads for flexible interaction with network-based hypertext information systems. This document is the first in a series of documents that collectively form the HTTP/1.1 specification:

  • "Message Syntax and Routing" (this document)
  • "Semantics and Content" [RFC7231]
  • "Conditional Requests" [RFC7232]
  • "Range Requests" [RFC7233]
  • "Caching" [RFC7234]
  • "Authentication" [RFC7235]

This HTTP/1.1 specification obsoletes RFC 2616 and RFC 2145 (on HTTP versioning). This specification also updates the use of CONNECT to establish a tunnel, previously defined in RFC 2817 , and defines the "https" URI scheme that was described informally in RFC 2818 .

HTTP is a generic interface protocol for information systems. It is designed to hide the details of how a service is implemented by presenting a uniform interface to clients that is independent of the types of resources provided. Likewise, servers do not need to be aware of each client's purpose: an HTTP request can be considered in isolation rather than being associated with a specific type of client or a predetermined sequence of application steps. The result is a protocol that can be used effectively in many different contexts and for which implementations can evolve independently over time.

HTTP is also designed for use as an intermediation protocol for translating communication to and from non-HTTP information systems. HTTP proxies and gateways can provide access to alternative information services by translating their diverse protocols into a hypertext format that can be viewed and manipulated by clients in the same way as HTTP services.

One consequence of this flexibility is that the protocol cannot be defined in terms of what occurs behind the interface. Instead, we are limited to defining the syntax of communication, the intent of received communication, and the expected behavior of recipients. If the communication is considered in isolation, then successful actions ought to be reflected in corresponding changes to the observable interface provided by servers. However, since multiple clients might act in parallel and perhaps at cross-purposes, we cannot require that such changes be observable beyond the scope of a single response.

This document describes the architectural elements that are used or referred to in HTTP, defines the "http" and "https" URI schemes, describes overall network operation and connection management, and defines HTTP message framing and forwarding requirements. Our goal is to define all of the mechanisms necessary for HTTP message handling that are independent of message semantics, thereby defining the complete set of requirements for message parsers and message-forwarding intermediaries.

1.1.   Requirements Notation

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119] .

Conformance criteria and considerations regarding error handling are defined in Section 2.5 .

1.2.   Syntax Notation

This specification uses the Augmented Backus-Naur Form (ABNF) notation of [RFC5234] with a list extension, defined in Section 7 , that allows for compact definition of comma-separated lists using a '#' operator (similar to how the '*' operator indicates repetition). Appendix B shows the collected grammar with all list operators expanded to standard ABNF notation.
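
For illustration, the expansion given in Section 7 means that a rule of the form "1#element" (a list of one or more comma-separated elements) can be read, for senders, as equivalent to:

     element *( OWS "," OWS element )

Recipient-side parsing is somewhat more lenient, allowing a reasonable number of empty list elements to be parsed and ignored.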

The following core rules are included by reference, as defined in [RFC5234] , Appendix B.1 : ALPHA (letters), CR (carriage return), CRLF (CR LF), CTL (controls), DIGIT (decimal 0-9), DQUOTE (double quote), HEXDIG (hexadecimal 0-9/A-F/a-f), HTAB (horizontal tab), LF (line feed), OCTET (any 8-bit sequence of data), SP (space), and VCHAR (any visible [USASCII] character).

As a convention, ABNF rule names prefixed with "obs-" denote "obsolete" grammar rules that appear for historical reasons.

2.   Architecture

HTTP was created for the World Wide Web (WWW) architecture and has evolved over time to support the scalability needs of a worldwide hypertext system. Much of that architecture is reflected in the terminology and syntax productions used to define HTTP.

2.1.   Client/Server Messaging

HTTP is a stateless request/response protocol that operates by exchanging messages ( Section 3 ) across a reliable transport- or session-layer " connection " ( Section 6 ). An HTTP " client " is a program that establishes a connection to a server for the purpose of sending one or more HTTP requests. An HTTP " server " is a program that accepts connections in order to service HTTP requests by sending HTTP responses.

The terms "client" and "server" refer only to the roles that these programs perform for a particular connection. The same program might act as a client on some connections and a server on others. The term " user agent " refers to any of the various client programs that initiate a request, including (but not limited to) browsers, spiders (web-based robots), command-line tools, custom applications, and mobile apps. The term " origin server " refers to the program that can originate authoritative responses for a given target resource. The terms " sender " and " recipient " refer to any implementation that sends or receives a given message, respectively.

HTTP relies upon the Uniform Resource Identifier (URI) standard [RFC3986] to indicate the target resource ( Section 5.1 ) and relationships between resources. Messages are passed in a format similar to that used by Internet mail [RFC5322] and the Multipurpose Internet Mail Extensions (MIME) [RFC2045] (see Appendix A of [RFC7231] for the differences between HTTP and MIME messages).

Most HTTP communication consists of a retrieval request (GET) for a representation of some resource identified by a URI. In the simplest case, this might be accomplished via a single bidirectional connection (===) between the user agent (UA) and the origin server (O).
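
              request   >
         UA ======================================= O
                                  <   response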

A client sends an HTTP request to a server in the form of a request message, beginning with a request-line that includes a method, URI, and protocol version ( Section 3.1.1 ), followed by header fields containing request modifiers, client information, and representation metadata ( Section 3.2 ), an empty line to indicate the end of the header section, and finally a message body containing the payload body (if any, Section 3.3 ).

A server responds to a client's request by sending one or more HTTP response messages, each beginning with a status line that includes the protocol version, a success or error code, and textual reason phrase ( Section 3.1.2 ), possibly followed by header fields containing server information, resource metadata, and representation metadata ( Section 3.2 ), an empty line to indicate the end of the header section, and finally a message body containing the payload body (if any, Section 3.3 ).

A connection might be used for multiple request/response exchanges, as defined in Section 6.3 .

The following example illustrates a typical message exchange for a GET request ( Section 4.3.1 of [RFC7231] ) on the URI "http://www.example.com/hello.txt":

Client request:
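
     GET /hello.txt HTTP/1.1
     User-Agent: curl/7.16.3 libcurl/7.16.3 OpenSSL/0.9.7l zlib/1.2.3
     Host: www.example.com
     Accept-Language: en, mi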

Server response:
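
     HTTP/1.1 200 OK
     Date: Mon, 27 Jul 2009 12:28:53 GMT
     Server: Apache
     Last-Modified: Wed, 22 Jul 2009 19:15:56 GMT
     ETag: "34aa387-d-1568eb00"
     Accept-Ranges: bytes
     Content-Length: 51
     Vary: Accept-Encoding
     Content-Type: text/plain

     Hello World! My payload includes a trailing CRLF.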

2.2.   Implementation Diversity

When considering the design of HTTP, it is easy to fall into a trap of thinking that all user agents are general-purpose browsers and all origin servers are large public websites. That is not the case in practice. Common HTTP user agents include household appliances, stereos, scales, firmware update scripts, command-line programs, mobile apps, and communication devices in a multitude of shapes and sizes. Likewise, common HTTP origin servers include home automation units, configurable networking components, office machines, autonomous robots, news feeds, traffic cameras, ad selectors, and video-delivery platforms.

The term "user agent" does not imply that there is a human user directly interacting with the software agent at the time of a request. In many cases, a user agent is installed or configured to run in the background and save its results for later inspection (or save only a subset of those results that might be interesting or erroneous). Spiders, for example, are typically given a start URI and configured to follow certain behavior while crawling the Web as a hypertext graph.

The implementation diversity of HTTP means that not all user agents can make interactive suggestions to their user or provide adequate warning for security or privacy concerns. In the few cases where this specification requires reporting of errors to the user, it is acceptable for such reporting to only be observable in an error console or log file. Likewise, requirements that an automated action be confirmed by the user before proceeding might be met via advance configuration choices, run-time options, or simple avoidance of the unsafe action; confirmation does not imply any specific user interface or interruption of normal processing if the user has already made that choice.

2.3.   Intermediaries

HTTP enables the use of intermediaries to satisfy requests through a chain of connections. There are three common forms of HTTP intermediary : proxy, gateway, and tunnel. In some cases, a single intermediary might act as an origin server, proxy, gateway, or tunnel, switching behavior based on the nature of each request.
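
          >             >             >             >
     UA =========== A =========== B =========== C =========== O
                <             <             <             <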

The figure above shows three intermediaries (A, B, and C) between the user agent and origin server. A request or response message that travels the whole chain will pass through four separate connections. Some HTTP communication options might apply only to the connection with the nearest, non-tunnel neighbor, only to the endpoints of the chain, or to all connections along the chain. Although the diagram is linear, each participant might be engaged in multiple, simultaneous communications. For example, B might be receiving requests from many clients other than A, and/or forwarding requests to servers other than C, at the same time that it is handling A's request. Likewise, later requests might be sent through a different path of connections, often based on dynamic configuration for load balancing.

The terms " upstream " and " downstream " are used to describe directional requirements in relation to the message flow: all messages flow from upstream to downstream. The terms "inbound" and "outbound" are used to describe directional requirements in relation to the request route: " inbound " means toward the origin server and " outbound " means toward the user agent.

A " proxy " is a message-forwarding agent that is selected by the client, usually via local configuration rules, to receive requests for some type(s) of absolute URI and attempt to satisfy those requests via translation through the HTTP interface. Some translations are minimal, such as for proxy requests for "http" URIs, whereas other requests might require translation to and from entirely different application-level protocols. Proxies are often used to group an organization's HTTP requests through a common intermediary for the sake of security, annotation services, or shared caching. Some proxies are designed to apply transformations to selected messages or payloads while they are being forwarded, as described in Section 5.7.2 .

A " gateway " (a.k.a. " reverse proxy ") is an intermediary that acts as an origin server for the outbound connection but translates received requests and forwards them inbound to another server or servers. Gateways are often used to encapsulate legacy or untrusted information services, to improve server performance through " accelerator " caching, and to enable partitioning or load balancing of HTTP services across multiple machines.

All HTTP requirements applicable to an origin server also apply to the outbound communication of a gateway. A gateway communicates with inbound servers using any protocol that it desires, including private extensions to HTTP that are outside the scope of this specification. However, an HTTP-to-HTTP gateway that wishes to interoperate with third-party HTTP servers ought to conform to user agent requirements on the gateway's inbound connection.

A " tunnel " acts as a blind relay between two connections without changing the messages. Once active, a tunnel is not considered a party to the HTTP communication, though the tunnel might have been initiated by an HTTP request. A tunnel ceases to exist when both ends of the relayed connection are closed. Tunnels are used to extend a virtual connection through an intermediary, such as when Transport Layer Security (TLS, [RFC5246] ) is used to establish confidential communication through a shared firewall proxy.

The above categories for intermediary only consider those acting as participants in the HTTP communication. There are also intermediaries that can act on lower layers of the network protocol stack, filtering or redirecting HTTP traffic without the knowledge or permission of message senders. Network intermediaries are indistinguishable (at a protocol level) from a man-in-the-middle attack, often introducing security flaws or interoperability problems due to mistakenly violating HTTP semantics.

For example, an " interception proxy " [RFC3040] (also commonly known as a " transparent proxy " [RFC1919] or " captive portal ") differs from an HTTP proxy because it is not selected by the client. Instead, an interception proxy filters or redirects outgoing TCP port 80 packets (and occasionally other common port traffic). Interception proxies are commonly found on public network access points, as a means of enforcing account subscription prior to allowing use of non-local Internet services, and within corporate firewalls to enforce network usage policies.

HTTP is defined as a stateless protocol, meaning that each request message can be understood in isolation. Many implementations depend on HTTP's stateless design in order to reuse proxied connections or dynamically load balance requests across multiple servers. Hence, a server MUST NOT assume that two requests on the same connection are from the same user agent unless the connection is secured and specific to that agent. Some non-standard HTTP extensions (e.g., [RFC4559] ) have been known to violate this requirement, resulting in security and interoperability problems.

2.4.   Caches

A " cache " is a local store of previous response messages and the subsystem that controls its message storage, retrieval, and deletion. A cache stores cacheable responses in order to reduce the response time and network bandwidth consumption on future, equivalent requests. Any client or server MAY employ a cache, though a cache cannot be used by a server while it is acting as a tunnel.

The effect of a cache is that the request/response chain is shortened if one of the participants along the chain has a cached response applicable to that request. The following illustrates the resulting chain if B has a cached copy of an earlier response from O (via C) for a request that has not been cached by UA or A.
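
             >             >
        UA =========== A =========== B - - - - - - C - - - - - - O
                   <             <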

A response is " cacheable " if a cache is allowed to store a copy of the response message for use in answering subsequent requests. Even when a response is cacheable, there might be additional constraints placed by the client or by the origin server on when that cached response can be used for a particular request. HTTP requirements for cache behavior and cacheable responses are defined in Section 2 of [RFC7234] .

There is a wide variety of architectures and configurations of caches deployed across the World Wide Web and inside large organizations. These include national hierarchies of proxy caches to save transoceanic bandwidth, collaborative systems that broadcast or multicast cache entries, archives of pre-fetched cache entries for use in off-line or high-latency environments, and so on.

2.5.   Conformance and Error Handling

This specification targets conformance criteria according to the role of a participant in HTTP communication. Hence, HTTP requirements are placed on senders, recipients, clients, servers, user agents, intermediaries, origin servers, proxies, gateways, or caches, depending on what behavior is being constrained by the requirement. Additional (social) requirements are placed on implementations, resource owners, and protocol element registrations when they apply beyond the scope of a single communication.

The verb "generate" is used instead of "send" where a requirement differentiates between creating a protocol element and merely forwarding a received element downstream.

An implementation is considered conformant if it complies with all of the requirements associated with the roles it partakes in HTTP.

Conformance includes both the syntax and semantics of protocol elements. A sender MUST NOT generate protocol elements that convey a meaning that is known by that sender to be false. A sender MUST NOT generate protocol elements that do not match the grammar defined by the corresponding ABNF rules. Within a given message, a sender MUST NOT generate protocol elements or syntax alternatives that are only allowed to be generated by participants in other roles (i.e., a role that the sender does not have for that message).

When a received protocol element is parsed, the recipient MUST be able to parse any value of reasonable length that is applicable to the recipient's role and that matches the grammar defined by the corresponding ABNF rules. Note, however, that some received protocol elements might not be parsed. For example, an intermediary forwarding a message might parse a header-field into generic field-name and field-value components, but then forward the header field without further parsing inside the field-value.

HTTP does not have specific length limitations for many of its protocol elements because the lengths that might be appropriate will vary widely, depending on the deployment context and purpose of the implementation. Hence, interoperability between senders and recipients depends on shared expectations regarding what is a reasonable length for each protocol element. Furthermore, what is commonly understood to be a reasonable length for some protocol elements has changed over the course of the past two decades of HTTP use and is expected to continue changing in the future.

At a minimum, a recipient MUST be able to parse and process protocol element lengths that are at least as long as the values that it generates for those same protocol elements in other messages. For example, an origin server that publishes very long URI references to its own resources needs to be able to parse and process those same references when received as a request target.

A recipient MUST interpret a received protocol element according to the semantics defined for it by this specification, including extensions to this specification, unless the recipient has determined (through experience or configuration) that the sender incorrectly implements what is implied by those semantics. For example, an origin server might disregard the contents of a received Accept-Encoding header field if inspection of the User-Agent header field indicates a specific implementation version that is known to fail on receipt of certain content codings.

Unless noted otherwise, a recipient MAY attempt to recover a usable protocol element from an invalid construct. HTTP does not define specific error handling mechanisms except when they have a direct impact on security, since different applications of the protocol require different error handling strategies. For example, a Web browser might wish to transparently recover from a response where the Location header field doesn't parse according to the ABNF, whereas a systems control client might consider any form of error recovery to be dangerous.

2.6.   Protocol Versioning

HTTP uses a "<major>.<minor>" numbering scheme to indicate versions of the protocol. This specification defines version "1.1". The protocol version as a whole indicates the sender's conformance with the set of requirements laid out in that version's corresponding specification of HTTP.

The version of an HTTP message is indicated by an HTTP-version field in the first line of the message. HTTP-version is case-sensitive.
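
In ABNF terms:

     HTTP-version  = HTTP-name "/" DIGIT "." DIGIT
     HTTP-name     = %x48.54.54.50 ; "HTTP", case-sensitive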

The HTTP version number consists of two decimal digits separated by a "." (period or decimal point). The first digit ("major version") indicates the HTTP messaging syntax, whereas the second digit ("minor version") indicates the highest minor version within that major version to which the sender is conformant and able to understand for future communication. The minor version advertises the sender's communication capabilities even when the sender is only using a backwards-compatible subset of the protocol, thereby letting the recipient know that more advanced features can be used in response (by servers) or in future requests (by clients).

When an HTTP/1.1 message is sent to an HTTP/1.0 recipient [RFC1945] or a recipient whose version is unknown, the HTTP/1.1 message is constructed such that it can be interpreted as a valid HTTP/1.0 message if all of the newer features are ignored. This specification places recipient-version requirements on some new features so that a conformant sender will only use compatible features until it has determined, through configuration or the receipt of a message, that the recipient supports HTTP/1.1.

The interpretation of a header field does not change between minor versions of the same major HTTP version, though the default behavior of a recipient in the absence of such a field can change. Unless specified otherwise, header fields defined in HTTP/1.1 are defined for all versions of HTTP/1.x. In particular, the Host and Connection header fields ought to be implemented by all HTTP/1.x implementations whether or not they advertise conformance with HTTP/1.1.

New header fields can be introduced without changing the protocol version if their defined semantics allow them to be safely ignored by recipients that do not recognize them. Header field extensibility is discussed in Section 3.2.1 .

Intermediaries that process HTTP messages (i.e., all intermediaries other than those acting as tunnels) MUST send their own HTTP-version in forwarded messages. In other words, they are not allowed to blindly forward the first line of an HTTP message without ensuring that the protocol version in that message matches a version to which that intermediary is conformant for both the receiving and sending of messages. Forwarding an HTTP message without rewriting the HTTP-version might result in communication errors when downstream recipients use the message sender's version to determine what features are safe to use for later communication with that sender.

A client SHOULD send a request version equal to the highest version to which the client is conformant and whose major version is no higher than the highest version supported by the server, if this is known. A client MUST NOT send a version to which it is not conformant.

A client MAY send a lower request version if it is known that the server incorrectly implements the HTTP specification, but only after the client has attempted at least one normal request and determined from the response status code or header fields (e.g., Server ) that the server improperly handles higher request versions.

A server SHOULD send a response version equal to the highest version to which the server is conformant that has a major version less than or equal to the one received in the request. A server MUST NOT send a version to which it is not conformant. A server can send a 505 (HTTP Version Not Supported) response if it wishes, for any reason, to refuse service of the client's major protocol version.

A server MAY send an HTTP/1.0 response to a request if it is known or suspected that the client incorrectly implements the HTTP specification and is incapable of correctly processing later version responses, such as when a client fails to parse the version number correctly or when an intermediary is known to blindly forward the HTTP-version even when it doesn't conform to the given minor version of the protocol. Such protocol downgrades SHOULD NOT be performed unless triggered by specific client attributes, such as when one or more of the request header fields (e.g., User-Agent ) uniquely match the values sent by a client known to be in error.

The intention of HTTP's versioning design is that the major number will only be incremented if an incompatible message syntax is introduced, and that the minor number will only be incremented when changes made to the protocol have the effect of adding to the message semantics or implying additional capabilities of the sender. However, the minor version was not incremented for the changes introduced between [RFC2068] and [RFC2616] , and this revision has specifically avoided any such changes to the protocol.

When an HTTP message is received with a major version number that the recipient implements, but a higher minor version number than what the recipient implements, the recipient SHOULD process the message as if it were in the highest minor version within that major version to which the recipient is conformant. A recipient can assume that a message with a higher minor version, when sent to a recipient that has not yet indicated support for that higher version, is sufficiently backwards-compatible to be safely processed by any implementation of the same major version.

2.7.   Uniform Resource Identifiers

Uniform Resource Identifiers (URIs) [RFC3986] are used throughout HTTP as the means for identifying resources ( Section 2 of [RFC7231] ). URI references are used to target requests, indicate redirects, and define relationships.

The definitions of "URI-reference", "absolute-URI", "relative-part", "scheme", "authority", "port", "host", "path-abempty", "segment", "query", and "fragment" are adopted from the URI generic syntax. An "absolute-path" rule is defined for protocol elements that can contain a non-empty path component. (This rule differs slightly from the path-abempty rule of RFC 3986, which allows for an empty path to be used in references, and path-absolute rule, which does not allow paths that begin with "//".) A "partial-URI" rule is defined for protocol elements that can contain a relative URI but not a fragment component.

Each protocol element in HTTP that allows a URI reference will indicate in its ABNF production whether the element allows any form of reference (URI-reference), only a URI in absolute form (absolute-URI), only the path and optional query components, or some combination of the above. Unless otherwise indicated, URI references are parsed relative to the effective request URI ( Section 5.5 ).

2.7.1.   http URI Scheme

The "http" URI scheme is hereby defined for the purpose of minting identifiers according to their association with the hierarchical namespace governed by a potential HTTP origin server listening for TCP ( [RFC0793] ) connections on a given port.

The origin server for an "http" URI is identified by the authority component, which includes a host identifier and optional TCP port ( [RFC3986] , Section 3.2.2 ). The hierarchical path component and optional query component serve as an identifier for a potential target resource within that origin server's name space. The optional fragment component allows for indirect identification of a secondary resource, independent of the URI scheme, as defined in Section 3.5 of [RFC3986] .

A sender MUST NOT generate an "http" URI with an empty host identifier. A recipient that processes such a URI reference MUST reject it as invalid.

If the host identifier is provided as an IP address, the origin server is the listener (if any) on the indicated TCP port at that IP address. If host is a registered name, the registered name is an indirect identifier for use with a name resolution service, such as DNS, to find an address for that origin server. If the port subcomponent is empty or not given, TCP port 80 (the reserved port for WWW services) is the default.

Note that the presence of a URI with a given authority component does not imply that there is always an HTTP server listening for connections on that host and port. Anyone can mint a URI. What the authority component determines is who has the right to respond authoritatively to requests that target the identified resource. The delegated nature of registered names and IP addresses creates a federated namespace, based on control over the indicated host and port, whether or not an HTTP server is present. See Section 9.1 for security considerations related to establishing authority.

When an "http" URI is used within a context that calls for access to the indicated resource, a client MAY attempt access by resolving the host to an IP address, establishing a TCP connection to that address on the indicated port, and sending an HTTP request message ( Section 3 ) containing the URI's identifying data ( Section 5 ) to the server. If the server responds to that request with a non-interim HTTP response message, as described in Section 6 of [RFC7231] , then that response is considered an authoritative answer to the client's request.

Although HTTP is independent of the transport protocol, the "http" scheme is specific to TCP-based services because the name delegation process depends on TCP for establishing authority. An HTTP service based on some other underlying connection protocol would presumably be identified using a different URI scheme, just as the "https" scheme (below) is used for resources that require an end-to-end secured connection. Other protocols might also be used to provide access to "http" identified resources — it is only the authoritative interface that is specific to TCP.

The URI generic syntax for authority also includes a deprecated userinfo subcomponent ( [RFC3986] , Section 3.2.1 ) for including user authentication information in the URI. Some implementations make use of the userinfo component for internal configuration of authentication information, such as within command invocation options, configuration files, or bookmark lists, even though such usage might expose a user identifier or password. A sender MUST NOT generate the userinfo subcomponent (and its "@" delimiter) when an "http" URI reference is generated within a message as a request target or header field value. Before making use of an "http" URI reference received from an untrusted source, a recipient SHOULD parse for userinfo and treat its presence as an error; it is likely being used to obscure the authority for the sake of phishing attacks.

2.7.2.   https URI Scheme

The "https" URI scheme is hereby defined for the purpose of minting identifiers according to their association with the hierarchical namespace governed by a potential HTTP origin server listening to a given TCP port for TLS-secured connections ( [RFC5246] ).

All of the requirements listed above for the "http" scheme are also requirements for the "https" scheme, except that TCP port 443 is the default if the port subcomponent is empty or not given, and the user agent MUST ensure that its connection to the origin server is secured through the use of strong encryption, end-to-end, prior to sending the first HTTP request.

Note that the "https" URI scheme depends on both TLS and TCP for establishing authority. Resources made available via the "https" scheme have no shared identity with the "http" scheme even if their resource identifiers indicate the same authority (the same host listening to the same TCP port). They are distinct namespaces and are considered to be distinct origin servers. However, an extension to HTTP that is defined to apply to entire host domains, such as the Cookie protocol [RFC6265] , can allow information set by one service to impact communication with other services within a matching group of host domains.

The process for authoritative access to an "https" identified resource is defined in [RFC2818] .

2.7.3.   http and https URI Normalization and Comparison

Since the "http" and "https" schemes conform to the URI generic syntax, such URIs are normalized and compared according to the algorithm defined in Section 6 of [RFC3986] , using the defaults described above for each scheme.

If the port is equal to the default port for a scheme, the normal form is to omit the port subcomponent. When not being used in absolute form as the request target of an OPTIONS request, an empty path component is equivalent to an absolute path of "/", so the normal form is to provide a path of "/" instead. The scheme and host are case-insensitive and normally provided in lowercase; all other components are compared in a case-sensitive manner. Characters other than those in the "reserved" set are equivalent to their percent-encoded octets: the normal form is to not encode them (see Sections 2.1 and 2.2 of [RFC3986] ).

For example, the following three URIs are equivalent:
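
     http://example.com:80/~smith/home.html
     http://EXAMPLE.com/%7Esmith/home.html
     http://EXAMPLE.com:/%7esmith/home.html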

3.   Message Format

All HTTP/1.1 messages consist of a start-line followed by a sequence of octets in a format similar to the Internet Message Format [RFC5322] : zero or more header fields (collectively referred to as the "headers" or the "header section"), an empty line indicating the end of the header section, and an optional message body.

The normal procedure for parsing an HTTP message is to read the start-line into a structure, read each header field into a hash table by field name until the empty line, and then use the parsed data to determine if a message body is expected. If a message body has been indicated, then it is read as a stream until an amount of octets equal to the message body length is read or the connection is closed.
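
A minimal sketch of that procedure, in Python (illustrative only; it omits the robustness, security, and error-handling requirements discussed in the rest of this section, and the function name is not part of any standard API):

   def parse_message_head(head: str):
       # "head" is the start-line plus header section, up to (but not
       # including) the empty line that terminates the header section.
       lines = head.split("\r\n")
       start_line = lines[0]

       # Read each header field into a hash table keyed by lowercased
       # field name; repeated fields are kept in order of appearance.
       headers = {}
       for field in lines[1:]:
           name, _, value = field.partition(":")
           headers.setdefault(name.lower(), []).append(value.strip(" \t"))

       # Use the parsed data to decide how a message body (if any) is
       # delimited; the precedence rules appear in Section 3.3.3.
       if "transfer-encoding" in headers:
           framing = "per Transfer-Encoding (e.g., read chunks)"
       elif "content-length" in headers:
           framing = "read exactly %s octets" % headers["content-length"][0]
       else:
           framing = "no body (request) or read until close (response)"
       return start_line, headers, framing

   parse_message_head("GET /hello.txt HTTP/1.1\r\nHost: www.example.com")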

A recipient MUST parse an HTTP message as a sequence of octets in an encoding that is a superset of US-ASCII [USASCII] . Parsing an HTTP message as a stream of Unicode characters, without regard for the specific encoding, creates security vulnerabilities due to the varying ways that string processing libraries handle invalid multibyte character sequences that contain the octet LF (%x0A). String-based parsers can only be safely used within protocol elements after the element has been extracted from the message, such as within a header field-value after message parsing has delineated the individual fields.

An HTTP message can be parsed as a stream for incremental processing or forwarding downstream. However, recipients cannot rely on incremental delivery of partial messages, since some implementations will buffer or delay message forwarding for the sake of network efficiency, security checks, or payload transformations.

A sender MUST NOT send whitespace between the start-line and the first header field. A recipient that receives whitespace between the start-line and the first header field MUST either reject the message as invalid or consume each whitespace-preceded line without further processing of it (i.e., ignore the entire line, along with any subsequent lines preceded by whitespace, until a properly formed header field is received or the header section is terminated).

The presence of such whitespace in a request might be an attempt to trick a server into ignoring that field or processing the line after it as a new request, either of which might result in a security vulnerability if other implementations within the request chain interpret the same message differently. Likewise, the presence of such whitespace in a response might be ignored by some clients or cause others to cease parsing.

3.1.   Start Line

An HTTP message can be either a request from client to server or a response from server to client. Syntactically, the two types of message differ only in the start-line, which is either a request-line (for requests) or a status-line (for responses), and in the algorithm for determining the length of the message body ( Section 3.3 ).

In theory, a client could receive requests and a server could receive responses, distinguishing them by their different start-line formats, but, in practice, servers are implemented to only expect a request (a response is interpreted as an unknown or invalid request method) and clients are implemented to only expect a response.

3.1.1.   Request Line

A request-line begins with a method token, followed by a single space (SP), the request-target, another single space (SP), the protocol version, and ends with CRLF.
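
In ABNF terms:

     request-line = method SP request-target SP HTTP-version CRLF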

The method token indicates the request method to be performed on the target resource. The request method is case-sensitive.

The request methods defined by this specification can be found in Section 4 of [RFC7231] , along with information regarding the HTTP method registry and considerations for defining new methods.

The request-target identifies the target resource upon which to apply the request, as defined in Section 5.3 .

Recipients typically parse the request-line into its component parts by splitting on whitespace (see Section 3.5 ), since no whitespace is allowed in the three components. Unfortunately, some user agents fail to properly encode or exclude whitespace found in hypertext references, resulting in those disallowed characters being sent in a request-target.

Recipients of an invalid request-line SHOULD respond with either a 400 (Bad Request) error or a 301 (Moved Permanently) redirect with the request-target properly encoded. A recipient SHOULD NOT attempt to autocorrect and then process the request without a redirect, since the invalid request-line might be deliberately crafted to bypass security filters along the request chain.

HTTP does not place a predefined limit on the length of a request-line, as described in Section 2.5 . A server that receives a method longer than any that it implements SHOULD respond with a 501 (Not Implemented) status code. A server that receives a request-target longer than any URI it wishes to parse MUST respond with a 414 (URI Too Long) status code (see Section 6.5.12 of [RFC7231] ).

Various ad hoc limitations on request-line length are found in practice. It is RECOMMENDED that all HTTP senders and recipients support, at a minimum, request-line lengths of 8000 octets.

3.1.2.   Status Line

The first line of a response message is the status-line, consisting of the protocol version, a space (SP), the status code, another space, a possibly empty textual phrase describing the status code, and ending with CRLF.
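
Expressed in ABNF:

     status-line = HTTP-version SP status-code SP reason-phrase CRLF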

The status-code element is a 3-digit integer code describing the result of the server's attempt to understand and satisfy the client's corresponding request. The rest of the response message is to be interpreted in light of the semantics defined for that status code. See Section 6 of [RFC7231] for information about the semantics of status codes, including the classes of status code (indicated by the first digit), the status codes defined by this specification, considerations for the definition of new status codes, and the IANA registry.

The reason-phrase element exists for the sole purpose of providing a textual description associated with the numeric status code, mostly out of deference to earlier Internet application protocols that were more frequently used with interactive text clients. A client SHOULD ignore the reason-phrase content.

3.2.   Header Fields

Each header field consists of a case-insensitive field name followed by a colon (":"), optional leading whitespace, the field value, and optional trailing whitespace.
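
In ABNF terms, this corresponds to:

     header-field = field-name ":" OWS field-value OWS
     field-name   = token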

The field-name token labels the corresponding field-value as having the semantics defined by that header field. For example, the Date header field is defined in Section 7.1.1.2 of [RFC7231] as containing the origination timestamp for the message in which it appears.

3.2.1.   Field Extensibility

Header fields are fully extensible: there is no limit on the introduction of new field names, each presumably defining new semantics, nor on the number of header fields used in a given message. Existing fields are defined in each part of this specification and in many other specifications outside this document set.

New header fields can be defined such that, when they are understood by a recipient, they might override or enhance the interpretation of previously defined header fields, define preconditions on request evaluation, or refine the meaning of responses.

A proxy MUST forward unrecognized header fields unless the field-name is listed in the Connection header field ( Section 6.1 ) or the proxy is specifically configured to block, or otherwise transform, such fields. Other recipients SHOULD ignore unrecognized header fields. These requirements allow HTTP's functionality to be enhanced without requiring prior update of deployed intermediaries.

All defined header fields ought to be registered with IANA in the "Message Headers" registry, as described in Section 8.3 of [RFC7231] .

3.2.2.   Field Order

The order in which header fields with differing field names are received is not significant. However, it is good practice to send header fields that contain control data first, such as Host on requests and Date on responses, so that implementations can decide when not to handle a message as early as possible. A server MUST NOT apply a request to the target resource until the entire request header section is received, since later header fields might include conditionals, authentication credentials, or deliberately misleading duplicate header fields that would impact request processing.

A sender MUST NOT generate multiple header fields with the same field name in a message unless either the entire field value for that header field is defined as a comma-separated list [i.e., #(values)] or the header field is a well-known exception (as noted below).

A recipient MAY combine multiple header fields with the same field name into one "field-name: field-value" pair, without changing the semantics of the message, by appending each subsequent field value to the combined field value in order, separated by a comma. The order in which header fields with the same field name are received is therefore significant to the interpretation of the combined field value; a proxy MUST NOT change the order of these field values when forwarding a message.
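
A minimal sketch of that combination rule (illustrative Python; the usual example of a well-known exception that cannot be combined this way is Set-Cookie [RFC6265]):

   def combine_field_values(values):
       # Append each subsequent field value to the combined value, in the
       # order received, separated by a comma.
       return ", ".join(values)

   combine_field_values(["no-cache", "no-store"])   # -> "no-cache, no-store"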

3.2.3.   Whitespace

This specification uses three rules to denote the use of linear whitespace: OWS (optional whitespace), RWS (required whitespace), and BWS ("bad" whitespace).

The OWS rule is used where zero or more linear whitespace octets might appear. For protocol elements where optional whitespace is preferred to improve readability, a sender SHOULD generate the optional whitespace as a single SP; otherwise, a sender SHOULD NOT generate optional whitespace except as needed to white out invalid or unwanted protocol elements during in-place message filtering.

The RWS rule is used when at least one linear whitespace octet is required to separate field tokens. A sender SHOULD generate RWS as a single SP.

The BWS rule is used where the grammar allows optional whitespace only for historical reasons. A sender MUST NOT generate BWS in messages. A recipient MUST parse for such bad whitespace and remove it before interpreting the protocol element.

3.2.4.   Field Parsing

Messages are parsed using a generic algorithm, independent of the individual header field names. The contents within a given field value are not parsed until a later stage of message interpretation (usually after the message's entire header section has been processed). Consequently, this specification does not use ABNF rules to define each "Field-Name: Field Value" pair, as was done in previous editions. Instead, this specification uses ABNF rules that are named according to each registered field name, wherein the rule defines the valid grammar for that field's corresponding field values (i.e., after the field-value has been extracted from the header section by a generic field parser).

No whitespace is allowed between the header field-name and colon. In the past, differences in the handling of such whitespace have led to security vulnerabilities in request routing and response handling. A server MUST reject any received request message that contains whitespace between a header field-name and colon with a response code of 400 (Bad Request) . A proxy MUST remove any such whitespace from a response message before forwarding the message downstream.

A field value might be preceded and/or followed by optional whitespace (OWS); a single SP preceding the field-value is preferred for consistent readability by humans. The field value does not include any leading or trailing whitespace: OWS occurring before the first non-whitespace octet of the field value or after the last non-whitespace octet of the field value ought to be excluded by parsers when extracting the field value from a header field.

Historically, HTTP header field values could be extended over multiple lines by preceding each extra line with at least one space or horizontal tab (obs-fold). This specification deprecates such line folding except within the message/http media type ( Section 8.3.1 ). A sender MUST NOT generate a message that includes line folding (i.e., that has any field-value that contains a match to the obs-fold rule) unless the message is intended for packaging within the message/http media type.

A server that receives an obs-fold in a request message that is not within a message/http container MUST either reject the message by sending a 400 (Bad Request) , preferably with a representation explaining that obsolete line folding is unacceptable, or replace each received obs-fold with one or more SP octets prior to interpreting the field value or forwarding the message downstream.

A proxy or gateway that receives an obs-fold in a response message that is not within a message/http container MUST either discard the message and replace it with a 502 (Bad Gateway) response, preferably with a representation explaining that unacceptable line folding was received, or replace each received obs-fold with one or more SP octets prior to interpreting the field value or forwarding the message downstream.

A user agent that receives an obs-fold in a response message that is not within a message/http container MUST replace each received obs-fold with one or more SP octets prior to interpreting the field value.

Historically, HTTP has allowed field content with text in the ISO-8859-1 charset [ISO-8859-1] , supporting other charsets only through use of [RFC2047] encoding. In practice, most HTTP header field values use only a subset of the US-ASCII charset [USASCII] . Newly defined header fields SHOULD limit their field values to US-ASCII octets. A recipient SHOULD treat other octets in field content (obs-text) as opaque data.

3.2.5.   Field Limits

HTTP does not place a predefined limit on the length of each header field or on the length of the header section as a whole, as described in Section 2.5 . Various ad hoc limitations on individual header field length are found in practice, often depending on the specific field semantics.

A server that receives a request header field, or set of fields, larger than it wishes to process MUST respond with an appropriate 4xx (Client Error) status code. Ignoring such header fields would increase the server's vulnerability to request smuggling attacks ( Section 9.5 ).

A client MAY discard or truncate received header fields that are larger than the client wishes to process if the field semantics are such that the dropped value(s) can be safely ignored without changing the message framing or response semantics.

3.2.6.   Field Value Components

Most HTTP header field values are defined using common syntax components (token, quoted-string, and comment) separated by whitespace or specific delimiting characters. Delimiters are chosen from the set of US-ASCII visual characters not allowed in a token (DQUOTE and "(),/:;<=>?@[\]{}").

A string of text is parsed as a single value if it is quoted using double-quote marks.

Comments can be included in some HTTP header fields by surrounding the comment text with parentheses. Comments are only allowed in fields containing "comment" as part of their field value definition.

The backslash octet ("\") can be used as a single-octet quoting mechanism within quoted-string and comment constructs. Recipients that process the value of a quoted-string MUST handle a quoted-pair as if it were replaced by the octet following the backslash.
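
A minimal sketch of that handling (illustrative Python; it assumes the enclosing DQUOTE marks have already been removed by the field parser):

   def unquote_pairs(content: str) -> str:
       out = []
       i = 0
       while i < len(content):
           if content[i] == "\\" and i + 1 < len(content):
               out.append(content[i + 1])   # keep the octet after "\"
               i += 2
           else:
               out.append(content[i])
               i += 1
       return "".join(out)

   unquote_pairs('a \\"quoted\\" value')   # -> 'a "quoted" value'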

A sender SHOULD NOT generate a quoted-pair in a quoted-string except where necessary to quote DQUOTE and backslash octets occurring within that string. A sender SHOULD NOT generate a quoted-pair in a comment except where necessary to quote parentheses ["(" and ")"] and backslash octets occurring within that comment.

3.3.   Message Body

The message body (if any) of an HTTP message is used to carry the payload body of that request or response. The message body is identical to the payload body unless a transfer coding has been applied, as described in Section 3.3.1 .

The rules for when a message body is allowed in a message differ for requests and responses.

The presence of a message body in a request is signaled by a Content-Length or Transfer-Encoding header field. Request message framing is independent of method semantics, even if the method does not define any use for a message body.

The presence of a message body in a response depends on both the request method to which it is responding and the response status code ( Section 3.1.2 ). Responses to the HEAD request method ( Section 4.3.2 of [RFC7231] ) never include a message body because the associated response header fields (e.g., Transfer-Encoding , Content-Length , etc.), if present, indicate only what their values would have been if the request method had been GET ( Section 4.3.1 of [RFC7231] ). 2xx (Successful) responses to a CONNECT request method ( Section 4.3.6 of [RFC7231] ) switch to tunnel mode instead of having a message body. All 1xx (Informational) , 204 (No Content) , and 304 (Not Modified) responses do not include a message body. All other responses do include a message body, although the body might be of zero length.
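
The cases above can be summarized in a rough sketch (illustrative Python; it covers only the rules enumerated in this paragraph):

   def response_has_message_body(request_method: str, status_code: int) -> bool:
       if request_method == "HEAD":
           return False                  # header fields only
       if request_method == "CONNECT" and 200 <= status_code <= 299:
           return False                  # connection becomes a tunnel
       if 100 <= status_code <= 199 or status_code in (204, 304):
           return False
       return True                       # body present, possibly zero length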

3.3.1.   Transfer-Encoding

The Transfer-Encoding header field lists the transfer coding names corresponding to the sequence of transfer codings that have been (or will be) applied to the payload body in order to form the message body. Transfer codings are defined in Section 4 .

Transfer-Encoding is analogous to the Content-Transfer-Encoding field of MIME, which was designed to enable safe transport of binary data over a 7-bit transport service ( [RFC2045] , Section 6 ). However, safe transport has a different focus for an 8bit-clean transfer protocol. In HTTP's case, Transfer-Encoding is primarily intended to accurately delimit a dynamically generated payload and to distinguish payload encodings that are only applied for transport efficiency or security from those that are characteristics of the selected resource.

A recipient MUST be able to parse the chunked transfer coding ( Section 4.1 ) because it plays a crucial role in framing messages when the payload body size is not known in advance. A sender MUST NOT apply chunked more than once to a message body (i.e., chunking an already chunked message is not allowed). If any transfer coding other than chunked is applied to a request payload body, the sender MUST apply chunked as the final transfer coding to ensure that the message is properly framed. If any transfer coding other than chunked is applied to a response payload body, the sender MUST either apply chunked as the final transfer coding or terminate the message by closing the connection.

For example,

   Transfer-Encoding: gzip, chunked

indicates that the payload body has been compressed using the gzip coding and then chunked using the chunked coding while forming the message body.
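
The constraints above lend themselves to a mechanical check. A hypothetical Python sketch that validates the codings listed for a request body, ignoring transfer parameters for simplicity, could be:

   # Sketch: check the codings listed in a request's Transfer-Encoding value
   # against the rules above: chunked at most once, and chunked must be final.
   # `field_value` is e.g. "gzip, chunked"; names are case-insensitive.

   def validate_request_transfer_encoding(field_value: str) -> list[str]:
       codings = [c.strip().lower() for c in field_value.split(",") if c.strip()]
       if codings.count("chunked") > 1:
           raise ValueError("chunked applied more than once")
       if codings and codings[-1] != "chunked":
           raise ValueError("request body must use chunked as the final coding")
       return codings

   # validate_request_transfer_encoding("gzip, chunked") -> ['gzip', 'chunked']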

Unlike Content-Encoding ( Section 3.1.2.1 of [RFC7231] ), Transfer-Encoding is a property of the message, not of the representation, and any recipient along the request/response chain MAY decode the received transfer coding(s) or apply additional transfer coding(s) to the message body, assuming that corresponding changes are made to the Transfer-Encoding field-value. Additional information about the encoding parameters can be provided by other header fields not defined by this specification.

Transfer-Encoding MAY be sent in a response to a HEAD request or in a 304 (Not Modified) response ( Section 4.1 of [RFC7232] ) to a GET request, neither of which includes a message body, to indicate that the origin server would have applied a transfer coding to the message body if the request had been an unconditional GET. This indication is not required, however, because any recipient on the response chain (including the origin server) can remove transfer codings when they are not needed.

A server MUST NOT send a Transfer-Encoding header field in any response with a status code of 1xx (Informational) or 204 (No Content) . A server MUST NOT send a Transfer-Encoding header field in any 2xx (Successful) response to a CONNECT request ( Section 4.3.6 of [RFC7231] ).

Transfer-Encoding was added in HTTP/1.1. It is generally assumed that implementations advertising only HTTP/1.0 support will not understand how to process a transfer-encoded payload. A client MUST NOT send a request containing Transfer-Encoding unless it knows the server will handle HTTP/1.1 (or later) requests; such knowledge might be in the form of specific user configuration or by remembering the version of a prior received response. A server MUST NOT send a response containing Transfer-Encoding unless the corresponding request indicates HTTP/1.1 (or later).

A server that receives a request message with a transfer coding it does not understand SHOULD respond with 501 (Not Implemented) .

3.3.2.   Content-Length

When a message does not have a Transfer-Encoding header field, a Content-Length header field can provide the anticipated size, as a decimal number of octets, for a potential payload body. For messages that do include a payload body, the Content-Length field-value provides the framing information necessary for determining where the body (and message) ends. For messages that do not include a payload body, the Content-Length indicates the size of the selected representation ( Section 3 of [RFC7231] ).

An example is

   Content-Length: 3495

A sender MUST NOT send a Content-Length header field in any message that contains a Transfer-Encoding header field.

A user agent SHOULD send a Content-Length in a request message when no Transfer-Encoding is sent and the request method defines a meaning for an enclosed payload body. For example, a Content-Length header field is normally sent in a POST request even when the value is 0 (indicating an empty payload body). A user agent SHOULD NOT send a Content-Length header field when the request message does not contain a payload body and the method semantics do not anticipate such a body.

A server MAY send a Content-Length header field in a response to a HEAD request ( Section 4.3.2 of [RFC7231] ); a server MUST NOT send Content-Length in such a response unless its field-value equals the decimal number of octets that would have been sent in the payload body of a response if the same request had used the GET method.

A server MAY send a Content-Length header field in a 304 (Not Modified) response to a conditional GET request ( Section 4.1 of [RFC7232] ); a server MUST NOT send Content-Length in such a response unless its field-value equals the decimal number of octets that would have been sent in the payload body of a 200 (OK) response to the same request.

A server MUST NOT send a Content-Length header field in any response with a status code of 1xx (Informational) or 204 (No Content) . A server MUST NOT send a Content-Length header field in any 2xx (Successful) response to a CONNECT request ( Section 4.3.6 of [RFC7231] ).

Aside from the cases defined above, in the absence of Transfer-Encoding, an origin server SHOULD send a Content-Length header field when the payload body size is known prior to sending the complete header section. This will allow downstream recipients to measure transfer progress, know when a received message is complete, and potentially reuse the connection for additional requests.

Any Content-Length field value greater than or equal to zero is valid. Since there is no predefined limit to the length of a payload, a recipient MUST anticipate potentially large decimal numerals and prevent parsing errors due to integer conversion overflows ( Section 9.3 ).

If a message is received that has multiple Content-Length header fields with field-values consisting of the same decimal value, or a single Content-Length header field with a field value containing a list of identical decimal values (e.g., "Content-Length: 42, 42"), indicating that duplicate Content-Length header fields have been generated or combined by an upstream message processor, then the recipient MUST either reject the message as invalid or replace the duplicated field-values with a single valid Content-Length field containing that decimal value prior to determining the message body length or forwarding the message.
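
A minimal Python sketch of this normalization (the normalize_content_length helper is hypothetical), assuming the received field-values are available as a list of strings, might be:

   # Sketch: normalize Content-Length per the rules above.  `values` holds the
   # Content-Length field-values as received (one entry per field line).
   # Identical duplicates are collapsed; anything else is rejected as invalid.

   def normalize_content_length(values: list[str]) -> int:
       members = []
       for v in values:
           members.extend(p.strip() for p in v.split(","))
       if not members or not all(m.isdigit() for m in members):
           raise ValueError("invalid Content-Length")
       if len(set(members)) != 1:
           raise ValueError("conflicting Content-Length values")
       return int(members[0])        # Python integers do not overflow

   # normalize_content_length(["42, 42", "42"]) -> 42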

3.3.3.   Message Body Length

The length of a message body is determined by one of the following (in order of precedence; a sketch of this decision procedure follows the list):

  • Any response to a HEAD request and any response with a 1xx (Informational) , 204 (No Content) , or 304 (Not Modified) status code is always terminated by the first empty line after the header fields, regardless of the header fields present in the message, and thus cannot contain a message body.
  • Any 2xx (Successful) response to a CONNECT request implies that the connection will become a tunnel immediately after the empty line that concludes the header fields. A client MUST ignore any Content-Length or Transfer-Encoding header fields received in such a message.
  • If a Transfer-Encoding header field is present and the chunked transfer coding ( Section 4.1 ) is the final encoding, the message body length is determined by reading and decoding the chunked data until the transfer coding indicates the data is complete.
  • If a Transfer-Encoding header field is present in a response and the chunked transfer coding is not the final encoding, the message body length is determined by reading the connection until it is closed by the server. If a Transfer-Encoding header field is present in a request and the chunked transfer coding is not the final encoding, the message body length cannot be determined reliably; the server MUST respond with the 400 (Bad Request) status code and then close the connection.
  • If a message is received with both a Transfer-Encoding and a Content-Length header field, the Transfer-Encoding overrides the Content-Length. Such a message might indicate an attempt to perform request smuggling ( Section 9.5 ) or response splitting ( Section 9.4 ) and ought to be handled as an error. A sender MUST remove the received Content-Length field prior to forwarding such a message downstream.
  • If a message is received without Transfer-Encoding and with either multiple Content-Length header fields having differing field-values or a single Content-Length header field having an invalid value, then the message framing is invalid and the recipient MUST treat it as an unrecoverable error. If this is a request message, the server MUST respond with a 400 (Bad Request) status code and then close the connection. If this is a response message received by a proxy, the proxy MUST close the connection to the server, discard the received response, and send a 502 (Bad Gateway) response to the client. If this is a response message received by a user agent, the user agent MUST close the connection to the server and discard the received response.
  • If a valid Content-Length header field is present without Transfer-Encoding , its decimal value defines the expected message body length in octets. If the sender closes the connection or the recipient times out before the indicated number of octets are received, the recipient MUST consider the message to be incomplete and close the connection.
  • If this is a request message and none of the above are true, then the message body length is zero (no message body is present).
  • Otherwise, this is a response message without a declared message body length, so the message body length is determined by the number of octets received prior to the server closing the connection.
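
A hypothetical Python sketch of the decision procedure above, assuming header fields have already been parsed into a dictionary of lower-cased names and that Content-Length has been validated as described earlier, could be:

   # Sketch of the body-length decision for a received message.  Returns
   # "none", "chunked", an integer octet count, or "until-close".

   def body_length(is_request, headers, status=None, request_method=None):
       if not is_request:
           if request_method == "HEAD" or status // 100 == 1 or status in (204, 304):
               return "none"
           if request_method == "CONNECT" and status // 100 == 2:
               return "none"                      # connection becomes a tunnel
       te = headers.get("transfer-encoding")
       if te is not None:                         # Transfer-Encoding overrides Content-Length
           codings = [c.strip().lower() for c in te.split(",") if c.strip()]
           if codings and codings[-1] == "chunked":
               return "chunked"
           if is_request:
               raise ValueError("400 Bad Request: request body length cannot be determined")
           return "until-close"
       if "content-length" in headers:
           return int(headers["content-length"])  # value assumed already validated
       return "none" if is_request else "until-close"

   # body_length(False, {"transfer-encoding": "gzip, chunked"},
   #             status=200, request_method="GET") -> 'chunked'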

Since there is no way to distinguish a successfully completed, close-delimited message from a partially received message interrupted by network failure, a server SHOULD generate encoding or length-delimited messages whenever possible. The close-delimiting feature exists primarily for backwards compatibility with HTTP/1.0.

A server MAY reject a request that contains a message body but not a Content-Length by responding with 411 (Length Required) .

Unless a transfer coding other than chunked has been applied, a client that sends a request containing a message body SHOULD use a valid Content-Length header field if the message body length is known in advance, rather than the chunked transfer coding, since some existing services respond to chunked with a 411 (Length Required) status code even though they understand the chunked transfer coding. This is typically because such services are implemented via a gateway that requires a content-length in advance of being called and the server is unable or unwilling to buffer the entire request before processing.

A user agent that sends a request containing a message body MUST send a valid Content-Length header field if it does not know the server will handle HTTP/1.1 (or later) requests; such knowledge can be in the form of specific user configuration or by remembering the version of a prior received response.

If the final response to the last request on a connection has been completely received and there remains additional data to read, a user agent MAY discard the remaining data or attempt to determine if that data belongs as part of the prior response body, which might be the case if the prior message's Content-Length value is incorrect. A client MUST NOT process, cache, or forward such extra data as a separate response, since such behavior would be vulnerable to cache poisoning.

3.4.   Handling Incomplete Messages

A server that receives an incomplete request message, usually due to a canceled request or a triggered timeout exception, MAY send an error response prior to closing the connection.

A client that receives an incomplete response message, which can occur when a connection is closed prematurely or when decoding a supposedly chunked transfer coding fails, MUST record the message as incomplete. Cache requirements for incomplete responses are defined in Section 3 of [RFC7234] .

If a response terminates in the middle of the header section (before the empty line is received) and the status code might rely on header fields to convey the full meaning of the response, then the client cannot assume that meaning has been conveyed; the client might need to repeat the request in order to determine what action to take next.

A message body that uses the chunked transfer coding is incomplete if the zero-sized chunk that terminates the encoding has not been received. A message that uses a valid Content-Length is incomplete if the size of the message body received (in octets) is less than the value given by Content-Length. A response that has neither chunked transfer coding nor Content-Length is terminated by closure of the connection and, thus, is considered complete regardless of the number of message body octets received, provided that the header section was received intact.

3.5.   Message Parsing Robustness

Older HTTP/1.0 user agent implementations might send an extra CRLF after a POST request as a workaround for some early server applications that failed to read message body content that was not terminated by a line-ending. An HTTP/1.1 user agent MUST NOT preface or follow a request with an extra CRLF. If terminating the request message body with a line-ending is desired, then the user agent MUST count the terminating CRLF octets as part of the message body length.

In the interest of robustness, a server that is expecting to receive and parse a request-line SHOULD ignore at least one empty line (CRLF) received prior to the request-line.

Although the line terminator for the start-line and header fields is the sequence CRLF, a recipient MAY recognize a single LF as a line terminator and ignore any preceding CR.

Although the request-line and status-line grammar rules require that each of the component elements be separated by a single SP octet, recipients MAY instead parse on whitespace-delimited word boundaries and, aside from the CRLF terminator, treat any form of whitespace as the SP separator while ignoring preceding or trailing whitespace; such whitespace includes one or more of the following octets: SP, HTAB, VT (%x0B), FF (%x0C), or bare CR. However, lenient parsing can result in security vulnerabilities if there are multiple recipients of the message and each has its own unique interpretation of robustness (see Section 9.5 ).
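
As a non-normative illustration, a lenient request-line parser along these lines might look like the following Python sketch:

   import re

   # Sketch: lenient request-line parsing as permitted above.  The line has
   # already had its CRLF (or bare LF) terminator removed; any run of SP,
   # HTAB, VT, FF, or bare CR is treated as a single separator, and leading
   # or trailing whitespace is ignored.

   _WS = "[ \t\x0b\x0c\r]+"

   def parse_request_line(line: str):
       parts = re.split(_WS, line.strip(" \t\x0b\x0c\r"))
       if len(parts) != 3:
           raise ValueError("400 Bad Request: malformed request-line")
       method, request_target, http_version = parts
       return method, request_target, http_version

   # parse_request_line("GET  /index.html\tHTTP/1.1")
   #   -> ('GET', '/index.html', 'HTTP/1.1')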

When a server listening only for HTTP request messages, or processing what appears from the start-line to be an HTTP request message, receives a sequence of octets that does not match the HTTP-message grammar aside from the robustness exceptions listed above, the server SHOULD respond with a 400 (Bad Request) response.

4.   Transfer Codings

Transfer coding names are used to indicate an encoding transformation that has been, can be, or might need to be applied to a payload body in order to ensure "safe transport" through the network. This differs from a content coding in that the transfer coding is a property of the message rather than a property of the representation that is being transferred.

Parameters are in the form of a name or name=value pair.

All transfer-coding names are case-insensitive and ought to be registered within the HTTP Transfer Coding registry, as defined in Section 8.4 . They are used in the TE ( Section 4.3 ) and Transfer-Encoding ( Section 3.3.1 ) header fields.

4.1.   Chunked Transfer Coding

The chunked transfer coding wraps the payload body in order to transfer it as a series of chunks, each with its own size indicator, followed by an OPTIONAL trailer containing header fields. Chunked enables content streams of unknown size to be transferred as a sequence of length-delimited buffers, which enables the sender to retain connection persistence and the recipient to know when it has received the entire message.

The chunk-size field is a string of hex digits indicating the size of the chunk-data in octets. The chunked transfer coding is complete when a chunk with a chunk-size of zero is received, possibly followed by a trailer, and finally terminated by an empty line.

A recipient MUST be able to parse and decode the chunked transfer coding.
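
As a non-normative illustration, a minimal Python sketch of a chunked-body decoder, assuming the body is read from a binary file-like object (a hypothetical rfile) positioned just after the header section, might be:

   import io

   # Sketch: decode a chunked message body from `rfile`.  Chunk extensions
   # are ignored and trailer fields are returned to the caller.

   def decode_chunked(rfile):
       body = bytearray()
       while True:
           size_line = rfile.readline().decode("ascii").strip()
           chunk_size = int(size_line.split(";", 1)[0], 16)   # drop any chunk-ext
           if chunk_size == 0:
               break
           body.extend(rfile.read(chunk_size))
           rfile.readline()                                   # consume CRLF after chunk-data
       trailers = []
       while True:
           line = rfile.readline()
           if line in (b"\r\n", b"\n", b""):                  # empty line ends the trailer
               break
           trailers.append(line.decode("ascii").rstrip("\r\n"))
       return bytes(body), trailers

   # decode_chunked(io.BytesIO(b"4\r\nWiki\r\n0\r\n\r\n")) -> (b'Wiki', [])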

4.1.1.   Chunk Extensions

The chunked encoding allows each chunk to include zero or more chunk extensions, immediately following the chunk-size , for the sake of supplying per-chunk metadata (such as a signature or hash), mid-message control information, or randomization of message body size.

The chunked encoding is specific to each connection and is likely to be removed or recoded by each recipient (including intermediaries) before any higher-level application would have a chance to inspect the extensions. Hence, use of chunk extensions is generally limited to specialized HTTP services such as "long polling" (where client and server can have shared expectations regarding the use of chunk extensions) or for padding within an end-to-end secured connection.

A recipient MUST ignore unrecognized chunk extensions. A server ought to limit the total length of chunk extensions received in a request to an amount reasonable for the services provided, in the same way that it applies length limitations and timeouts for other parts of a message, and generate an appropriate 4xx (Client Error) response if that amount is exceeded.

4.1.2.   Chunked Trailer Part

A trailer allows the sender to include additional fields at the end of a chunked message in order to supply metadata that might be dynamically generated while the message body is sent, such as a message integrity check, digital signature, or post-processing status. The trailer fields are identical to header fields, except they are sent in a chunked trailer instead of the message's header section.

A sender MUST NOT generate a trailer that contains a field necessary for message framing (e.g., Transfer-Encoding and Content-Length ), routing (e.g., Host ), request modifiers (e.g., controls and conditionals in Section 5 of [RFC7231] ), authentication (e.g., see [RFC7235] and [RFC6265] ), response control data (e.g., see Section 7.1 of [RFC7231] ), or determining how to process the payload (e.g., Content-Encoding , Content-Type , Content-Range , and Trailer ).

When a chunked message containing a non-empty trailer is received, the recipient MAY process the fields (aside from those forbidden above) as if they were appended to the message's header section. A recipient MUST ignore (or consider as an error) any fields that are forbidden to be sent in a trailer, since processing them as if they were present in the header section might bypass external security filters.

Unless the request includes a TE header field indicating "trailers" is acceptable, as described in Section 4.3 , a server SHOULD NOT generate trailer fields that it believes are necessary for the user agent to receive. Without a TE containing "trailers", the server ought to assume that the trailer fields might be silently discarded along the path to the user agent. This requirement allows intermediaries to forward a de-chunked message to an HTTP/1.0 recipient without buffering the entire response.

4.1.3.   Decoding Chunked

A process for decoding the chunked transfer coding can be represented in pseudo-code as:

   length := 0
   read chunk-size, chunk-ext (if any), and CRLF
   while (chunk-size > 0) {
      read chunk-data and CRLF
      append chunk-data to decoded-body
      length := length + chunk-size
      read chunk-size, chunk-ext (if any), and CRLF
   }
   read trailer field
   while (trailer field is not empty) {
      if (trailer field is allowed to be sent in a trailer) {
         append trailer field to existing header fields
      }
      read trailer field
   }
   Content-Length := length
   Remove "chunked" from Transfer-Encoding
   Remove Trailer from existing header fields

4.2.   Compression Codings

The codings defined below can be used to compress the payload of a message.

4.2.1.   Compress Coding

The "compress" coding is an adaptive Lempel-Ziv-Welch (LZW) coding [Welch] that is commonly produced by the UNIX file compression program "compress". A recipient SHOULD consider "x-compress" to be equivalent to "compress".

4.2.2.   Deflate Coding

The "deflate" coding is a "zlib" data format [RFC1950] containing a "deflate" compressed data stream [RFC1951] that uses a combination of the Lempel-Ziv (LZ77) compression algorithm and Huffman coding.

4.2.3.   Gzip Coding

The "gzip" coding is an LZ77 coding with a 32-bit Cyclic Redundancy Check (CRC) that is commonly produced by the gzip file compression program [RFC1952] . A recipient SHOULD consider "x-gzip" to be equivalent to "gzip".

4.3.   TE

The "TE" header field in a request indicates what transfer codings, besides chunked, the client is willing to accept in response, and whether or not the client is willing to accept trailer fields in a chunked transfer coding.

The TE field-value consists of a comma-separated list of transfer coding names, each allowing for optional parameters (as described in Section 4 ), and/or the keyword "trailers". A client MUST NOT send the chunked transfer coding name in TE; chunked is always acceptable for HTTP/1.1 recipients.

Three examples of TE use are below.

   TE: deflate
   TE:
   TE: trailers, deflate;q=0.5

The presence of the keyword "trailers" indicates that the client is willing to accept trailer fields in a chunked transfer coding, as defined in Section 4.1.2 , on behalf of itself and any downstream clients. For requests from an intermediary, this implies that either: (a) all downstream clients are willing to accept trailer fields in the forwarded response; or, (b) the intermediary will attempt to buffer the response on behalf of downstream recipients. Note that HTTP/1.1 does not define any means to limit the size of a chunked response such that an intermediary can be assured of buffering the entire response.

When multiple transfer codings are acceptable, the client MAY rank the codings by preference using a case-insensitive "q" parameter (similar to the qvalues used in content negotiation fields, Section 5.3.1 of [RFC7231] ). The rank value is a real number in the range 0 through 1, where 0.001 is the least preferred and 1 is the most preferred; a value of 0 means "not acceptable".
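
A hypothetical Python sketch that parses a TE field-value and ranks the listed codings by their q values (parameter handling deliberately simplified) could be:

   # Sketch: parse a TE field-value such as "trailers, deflate;q=0.5" and
   # rank the acceptable transfer codings by the "q" parameter (default 1;
   # 0 means not acceptable).

   def parse_te(field_value: str):
       accepts_trailers = False
       ranked = []
       for member in (m.strip() for m in field_value.split(",")):
           if not member:
               continue
           name, _, params = member.partition(";")
           name = name.strip().lower()
           if name == "trailers":
               accepts_trailers = True
               continue
           q = 1.0
           for p in params.split(";"):
               key, _, value = p.partition("=")
               if key.strip().lower() == "q" and value:
                   q = float(value.strip())
           if q > 0:
               ranked.append((q, name))
       ranked.sort(reverse=True)
       return accepts_trailers, [name for _, name in ranked]

   # parse_te("trailers, deflate;q=0.5") -> (True, ['deflate'])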

If the TE field-value is empty or if no TE field is present, the only acceptable transfer coding is chunked. A message with no transfer coding is always acceptable.

Since the TE header field only applies to the immediate connection, a sender of TE MUST also send a "TE" connection option within the Connection header field ( Section 6.1 ) in order to prevent the TE field from being forwarded by intermediaries that do not support its semantics.

4.4.   Trailer

When a message includes a message body encoded with the chunked transfer coding and the sender desires to send metadata in the form of trailer fields at the end of the message, the sender SHOULD generate a Trailer header field before the message body to indicate which fields will be present in the trailers. This allows the recipient to prepare for receipt of that metadata before it starts processing the body, which is useful if the message is being streamed and the recipient wishes to confirm an integrity check on the fly.

5.   Message Routing

HTTP request message routing is determined by each client based on the target resource, the client's proxy configuration, and establishment or reuse of an inbound connection. The corresponding response routing follows the same connection chain back to the client.

5.1.   Identifying a Target Resource

HTTP is used in a wide variety of applications, ranging from general-purpose computers to home appliances. In some cases, communication options are hard-coded in a client's configuration. However, most HTTP clients rely on the same resource identification mechanism and configuration techniques as general-purpose Web browsers.

HTTP communication is initiated by a user agent for some purpose. The purpose is a combination of request semantics, which are defined in [RFC7231] , and a target resource upon which to apply those semantics. A URI reference ( Section 2.7 ) is typically used as an identifier for the " target resource ", which a user agent would resolve to its absolute form in order to obtain the " target URI ". The target URI excludes the reference's fragment component, if any, since fragment identifiers are reserved for client-side processing ( [RFC3986] , Section 3.5 ).

5.2.   Connecting Inbound

Once the target URI is determined, a client needs to decide whether a network request is necessary to accomplish the desired semantics and, if so, where that request is to be directed.

If the client has a cache [RFC7234] and the request can be satisfied by it, then the request is usually directed there first.

If the request is not satisfied by a cache, then a typical client will check its configuration to determine whether a proxy is to be used to satisfy the request. Proxy configuration is implementation-dependent, but is often based on URI prefix matching, selective authority matching, or both, and the proxy itself is usually identified by an "http" or "https" URI. If a proxy is applicable, the client connects inbound by establishing (or reusing) a connection to that proxy.

If no proxy is applicable, a typical client will invoke a handler routine, usually specific to the target URI's scheme, to connect directly to an authority for the target resource. How that is accomplished is dependent on the target URI scheme and defined by its associated specification, similar to how this specification defines origin server access for resolution of the "http" ( Section 2.7.1 ) and "https" ( Section 2.7.2 ) schemes.

HTTP requirements regarding connection management are defined in Section 6 .

5.3.   Request Target

Once an inbound connection is obtained, the client sends an HTTP request message ( Section 3 ) with a request-target derived from the target URI. There are four distinct formats for the request-target, depending on both the method being requested and whether the request is to a proxy.

5.3.1.   origin-form

The most common form of request-target is the origin-form .

When making a request directly to an origin server, other than a CONNECT or server-wide OPTIONS request (as detailed below), a client MUST send only the absolute path and query components of the target URI as the request-target. If the target URI's path component is empty, the client MUST send "/" as the path within the origin-form of request-target. A Host header field is also sent, as defined in Section 5.4 .

For example, a client wishing to retrieve a representation of the resource identified as

   http://www.example.org/where?q=now

directly from the origin server would open (or reuse) a TCP connection to port 80 of the host "www.example.org" and send the lines:

   GET /where?q=now HTTP/1.1
   Host: www.example.org

followed by the remainder of the request message.
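
As a non-normative illustration, deriving the origin-form request-target and Host field-value from a target URI might look like the following Python sketch (the origin_form helper is hypothetical):

   from urllib.parse import urlsplit

   # Sketch: derive the origin-form request-target and Host field-value from
   # an "http" target URI, per the rules above (an empty path becomes "/").

   def origin_form(target_uri: str):
       parts = urlsplit(target_uri)
       path = parts.path or "/"
       request_target = path + ("?" + parts.query if parts.query else "")
       host = parts.netloc.rpartition("@")[2]   # drop any userinfo and "@"
       return request_target, host

   # origin_form("http://www.example.org/where?q=now")
   #   -> ('/where?q=now', 'www.example.org')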

5.3.2.   absolute-form

When making a request to a proxy, other than a CONNECT or server-wide OPTIONS request (as detailed below), a client MUST send the target URI in absolute-form as the request-target.

The proxy is requested to either service that request from a valid cache, if possible, or make the same request on the client's behalf to either the next inbound proxy server or directly to the origin server indicated by the request-target. Requirements on such "forwarding" of messages are defined in Section 5.7 .

An example absolute-form of request-line would be:

   GET http://www.example.org/pub/WWW/TheProject.html HTTP/1.1

To allow for transition to the absolute-form for all requests in some future version of HTTP, a server MUST accept the absolute-form in requests, even though HTTP/1.1 clients will only send them in requests to proxies.

5.3.3.   authority-form

The authority-form of request-target is only used for CONNECT requests ( Section 4.3.6 of [RFC7231] ).

When making a CONNECT request to establish a tunnel through one or more proxies, a client MUST send only the target URI's authority component (excluding any userinfo and its "@" delimiter) as the request-target. For example,

   CONNECT www.example.com:80 HTTP/1.1
   Host: www.example.com:80

5.3.4.   asterisk-form

The asterisk-form of request-target is only used for a server-wide OPTIONS request ( Section 4.3.7 of [RFC7231] ).

When a client wishes to request OPTIONS for the server as a whole, as opposed to a specific named resource of that server, the client MUST send only "*" (%x2A) as the request-target. For example,

   OPTIONS * HTTP/1.1

If a proxy receives an OPTIONS request with an absolute-form of request-target in which the URI has an empty path and no query component, then the last proxy on the request chain MUST send a request-target of "*" when it forwards the request to the indicated origin server.

For example, the request

   OPTIONS http://www.example.org:8001 HTTP/1.1

would be forwarded by the final proxy as

   OPTIONS * HTTP/1.1
   Host: www.example.org:8001

after connecting to port 8001 of host "www.example.org".

5.4.   Host

The "Host" header field in a request provides the host and port information from the target URI, enabling the origin server to distinguish among resources while servicing requests for multiple host names on a single IP address.

A client MUST send a Host header field in all HTTP/1.1 request messages. If the target URI includes an authority component, then a client MUST send a field-value for Host that is identical to that authority component, excluding any userinfo subcomponent and its "@" delimiter ( Section 2.7.1 ). If the authority component is missing or undefined for the target URI, then a client MUST send a Host header field with an empty field-value.

Since the Host field-value is critical information for handling a request, a user agent SHOULD generate Host as the first header field following the request-line.

For example, a GET request to the origin server for <http://www.example.org/pub/WWW/> would begin with:

   GET /pub/WWW/ HTTP/1.1
   Host: www.example.org

A client MUST send a Host header field in an HTTP/1.1 request even if the request-target is in the absolute-form, since this allows the Host information to be forwarded through ancient HTTP/1.0 proxies that might not have implemented Host.

When a proxy receives a request with an absolute-form of request-target, the proxy MUST ignore the received Host header field (if any) and instead replace it with the host information of the request-target. A proxy that forwards such a request MUST generate a new Host field-value based on the received request-target rather than forward the received Host field-value.

Since the Host header field acts as an application-level routing mechanism, it is a frequent target for malware seeking to poison a shared cache or redirect a request to an unintended server. An interception proxy is particularly vulnerable if it relies on the Host field-value for redirecting requests to internal servers, or for use as a cache key in a shared cache, without first verifying that the intercepted connection is targeting a valid IP address for that host.

A server MUST respond with a 400 (Bad Request) status code to any HTTP/1.1 request message that lacks a Host header field and to any request message that contains more than one Host header field or a Host header field with an invalid field-value.
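
A hypothetical Python sketch of these Host checks (the effective_host helper and its deliberately rough validity test are illustrative only) might be:

   # Sketch: enforce the Host requirements above for an HTTP/1.1 request.
   # `host_fields` is the list of Host field-values as received; `authority`
   # is the authority component of an absolute-form request-target, if any.
   # A real implementation would check the full uri-host grammar.

   def effective_host(host_fields, authority=None):
       if len(host_fields) != 1:
           raise ValueError("400 Bad Request: missing or repeated Host")
       host = host_fields[0].strip()
       if any(c.isspace() for c in host) or (host.count(":") > 1 and "[" not in host):
           raise ValueError("400 Bad Request: invalid Host field-value")
       # With an absolute-form request-target, the request-target's authority wins.
       return authority if authority is not None else host

   # effective_host(["www.example.org:8080"]) -> 'www.example.org:8080'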

5.5.   Effective Request URI

Since the request-target often contains only part of the user agent's target URI, a server reconstructs the intended target as an " effective request URI " to properly service the request. This reconstruction involves both the server's local configuration and information communicated in the request-target , Host header field, and connection context.

For a user agent, the effective request URI is the target URI.

If the request-target is in absolute-form , the effective request URI is the same as the request-target. Otherwise, the effective request URI is constructed as follows (a sketch of this procedure follows the list):

  • If the server's configuration (or outbound gateway) provides a fixed URI scheme , that scheme is used for the effective request URI. Otherwise, if the request is received over a TLS-secured TCP connection, the effective request URI's scheme is "https"; if not, the scheme is "http".
  • If the server's configuration (or outbound gateway) provides a fixed URI authority component, that authority is used for the effective request URI. If not, then if the request-target is in authority-form , the effective request URI's authority component is the same as the request-target. If not, then if a Host header field is supplied with a non-empty field-value, the authority component is the same as the Host field-value. Otherwise, the authority component is assigned the default name configured for the server and, if the connection's incoming TCP port number differs from the default port for the effective request URI's scheme, then a colon (":") and the incoming port number (in decimal form) are appended to the authority component.
  • If the request-target is in authority-form or asterisk-form , the effective request URI's combined path and query component is empty. Otherwise, the combined path and query component is the same as the request-target.
  • The components of the effective request URI, once determined as above, can be combined into absolute-URI form by concatenating the scheme, "://", authority, and combined path and query component.
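
A hypothetical Python sketch of this reconstruction, with server configuration represented by optional fixed_scheme and fixed_authority parameters (illustrative names only), could be:

   # Sketch: reconstruct the effective request URI per the steps above.
   # `request_form` is "origin", "absolute", "authority", or "asterisk".

   def effective_request_uri(request_form, request_target, host_field,
                             tls, incoming_port, default_host,
                             fixed_scheme=None, fixed_authority=None):
       if request_form == "absolute":
           return request_target
       scheme = fixed_scheme or ("https" if tls else "http")
       if fixed_authority:
           authority = fixed_authority
       elif request_form == "authority":
           authority = request_target
       elif host_field:
           authority = host_field
       else:
           authority = default_host
           default_port = 443 if scheme == "https" else 80
           if incoming_port != default_port:
               authority += f":{incoming_port}"
       path_and_query = "" if request_form in ("authority", "asterisk") else request_target
       return scheme + "://" + authority + path_and_query

   # effective_request_uri("origin", "/pub/WWW/TheProject.html",
   #                       "www.example.org:8080", False, 8080, "www.example.org")
   #   -> 'http://www.example.org:8080/pub/WWW/TheProject.html'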

Example 1: the following message received over an insecure TCP connection

   GET /pub/WWW/TheProject.html HTTP/1.1
   Host: www.example.org:8080

has an effective request URI of

   http://www.example.org:8080/pub/WWW/TheProject.html

Example 2: the following message received over a TLS-secured TCP connection

   OPTIONS * HTTP/1.1
   Host: www.example.org

has an effective request URI of

   https://www.example.org

Recipients of an HTTP/1.0 request that lacks a Host header field might need to use heuristics (e.g., examination of the URI path for something unique to a particular host) in order to guess the effective request URI's authority component.

Once the effective request URI has been constructed, an origin server needs to decide whether or not to provide service for that URI via the connection in which the request was received. For example, the request might have been misdirected, deliberately or accidentally, such that the information within a received request-target or Host header field differs from the host or port upon which the connection has been made. If the connection is from a trusted gateway, that inconsistency might be expected; otherwise, it might indicate an attempt to bypass security filters, trick the server into delivering non-public content, or poison a cache. See Section 9 for security considerations regarding message routing.

5.6.   Associating a Response to a Request

HTTP does not include a request identifier for associating a given request message with its corresponding one or more response messages. Hence, it relies on the order of response arrival to correspond exactly to the order in which requests are made on the same connection. More than one response message per request only occurs when one or more informational responses ( 1xx , see Section 6.2 of [RFC7231] ) precede a final response to the same request.

A client that has more than one outstanding request on a connection MUST maintain a list of outstanding requests in the order sent and MUST associate each received response message on that connection to the highest ordered request that has not yet received a final (non- 1xx ) response.
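
As a non-normative illustration, such bookkeeping might be sketched in Python as:

   from collections import deque

   # Sketch: associate each received response with the oldest request that
   # has not yet received a final response; 1xx responses do not retire a
   # request.

   class OutstandingRequests:
       def __init__(self):
           self._pending = deque()

       def sent(self, request):
           self._pending.append(request)

       def received(self, status_code):
           if not self._pending:
               raise ValueError("response received with no outstanding request")
           request = self._pending[0]
           if status_code >= 200:            # a final response retires the request
               self._pending.popleft()
           return request

   # q = OutstandingRequests(); q.sent("GET /a"); q.sent("GET /b")
   # q.received(100) -> 'GET /a'; q.received(200) -> 'GET /a'; q.received(404) -> 'GET /b'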

5.7.   Message Forwarding

As described in Section 2.3 , intermediaries can serve a variety of roles in the processing of HTTP requests and responses. Some intermediaries are used to improve performance or availability. Others are used for access control or to filter content. Since an HTTP stream has characteristics similar to a pipe-and-filter architecture, there are no inherent limits to the extent an intermediary can enhance (or interfere) with either direction of the stream.

An intermediary not acting as a tunnel MUST implement the Connection header field, as specified in Section 6.1 , and exclude fields from being forwarded that are only intended for the incoming connection.

An intermediary MUST NOT forward a message to itself unless it is protected from an infinite request loop. In general, an intermediary ought to recognize its own server names, including any aliases, local variations, or literal IP addresses, and respond to such requests directly.

5.7.1.   Via

The "Via" header field indicates the presence of intermediate protocols and recipients between the user agent and the server (on requests) or between the origin server and the client (on responses), similar to the "Received" header field in email ( Section 3.6.7 of [RFC5322] ). Via can be used for tracking message forwards, avoiding request loops, and identifying the protocol capabilities of senders along the request/response chain.

Multiple Via field values represent each proxy or gateway that has forwarded the message. Each intermediary appends its own information about how the message was received, such that the end result is ordered according to the sequence of forwarding recipients.

A proxy MUST send an appropriate Via header field, as described below, in each message that it forwards. An HTTP-to-HTTP gateway MUST send an appropriate Via header field in each inbound request message and MAY send a Via header field in forwarded response messages.

For each intermediary, the received-protocol indicates the protocol and protocol version used by the upstream sender of the message. Hence, the Via field value records the advertised protocol capabilities of the request/response chain such that they remain visible to downstream recipients; this can be useful for determining what backwards-incompatible features might be safe to use in response, or within a later request, as described in Section 2.6 . For brevity, the protocol-name is omitted when the received protocol is HTTP.

The received-by portion of the field value is normally the host and optional port number of a recipient server or client that subsequently forwarded the message. However, if the real host is considered to be sensitive information, a sender MAY replace it with a pseudonym. If a port is not provided, a recipient MAY interpret that as meaning it was received on the default TCP port, if any, for the received-protocol.

A sender MAY generate comments in the Via header field to identify the software of each recipient, analogous to the User-Agent and Server header fields. However, all comments in the Via field are optional, and a recipient MAY remove them prior to forwarding the message.

For example, a request message could be sent from an HTTP/1.0 user agent to an internal proxy code-named "fred", which uses HTTP/1.1 to forward the request to a public proxy at p.example.net, which completes the request by forwarding it to the origin server at www.example.com. The request received by www.example.com would then have the following Via header field:

   Via: 1.0 fred, 1.1 p.example.net

An intermediary used as a portal through a network firewall SHOULD NOT forward the names and ports of hosts within the firewall region unless it is explicitly enabled to do so. If not enabled, such an intermediary SHOULD replace each received-by host of any host behind the firewall by an appropriate pseudonym for that host.

An intermediary MAY combine an ordered subsequence of Via header field entries into a single such entry if the entries have identical received-protocol values. For example,

   Via: 1.0 ricky, 1.1 ethel, 1.1 fred, 1.0 lucy

could be collapsed to

   Via: 1.0 ricky, 1.1 mertz, 1.0 lucy

A sender SHOULD NOT combine multiple entries unless they are all under the same organizational control and the hosts have already been replaced by pseudonyms. A sender MUST NOT combine entries that have different received-protocol values.
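
A hypothetical Python sketch of such a collapse, applicable only under the conditions just stated, might be:

   from itertools import groupby

   # Sketch: collapse consecutive Via entries that share a received-protocol
   # value, replacing their received-by hosts with a single pseudonym.
   # `entries` is a list of (received_protocol, received_by) tuples, in order.

   def collapse_via(entries, pseudonym):
       collapsed = []
       for protocol, group in groupby(entries, key=lambda e: e[0]):
           members = list(group)
           received_by = members[0][1] if len(members) == 1 else pseudonym
           collapsed.append((protocol, received_by))
       return collapsed

   # collapse_via([("1.0", "ricky"), ("1.1", "ethel"), ("1.1", "fred"),
   #               ("1.0", "lucy")], "mertz")
   #   -> [('1.0', 'ricky'), ('1.1', 'mertz'), ('1.0', 'lucy')]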

5.7.2.   Transformations

Some intermediaries include features for transforming messages and their payloads. A proxy might, for example, convert between image formats in order to save cache space or to reduce the amount of traffic on a slow link. However, operational problems might occur when these transformations are applied to payloads intended for critical applications, such as medical imaging or scientific data analysis, particularly when integrity checks or digital signatures are used to ensure that the payload received is identical to the original.

An HTTP-to-HTTP proxy is called a " transforming proxy " if it is designed or configured to modify messages in a semantically meaningful way (i.e., modifications, beyond those required by normal HTTP processing, that change the message in a way that would be significant to the original sender or potentially significant to downstream recipients). For example, a transforming proxy might be acting as a shared annotation server (modifying responses to include references to a local annotation database), a malware filter, a format transcoder, or a privacy filter. Such transformations are presumed to be desired by whichever client (or client organization) selected the proxy.

If a proxy receives a request-target with a host name that is not a fully qualified domain name, it MAY add its own domain to the host name it received when forwarding the request. A proxy MUST NOT change the host name if the request-target contains a fully qualified domain name.

A proxy MUST NOT modify the "absolute-path" and "query" parts of the received request-target when forwarding it to the next inbound server, except as noted above to replace an empty path with "/" or "*".

A proxy MAY modify the message body through application or removal of a transfer coding ( Section 4 ).

A proxy MUST NOT transform the payload ( Section 3.3 of [RFC7231] ) of a message that contains a no-transform cache-control directive ( Section 5.2 of [RFC7234] ).

A proxy MAY transform the payload of a message that does not contain a no-transform cache-control directive. A proxy that transforms a payload MUST add a Warning header field with the warn-code of 214 ("Transformation Applied") if one is not already in the message (see Section 5.5 of [RFC7234] ). A proxy that transforms the payload of a 200 (OK) response can further inform downstream recipients that a transformation has been applied by changing the response status code to 203 (Non-Authoritative Information) ( Section 6.3.4 of [RFC7231] ).

A proxy SHOULD NOT modify header fields that provide information about the endpoints of the communication chain, the resource state, or the selected representation (other than the payload) unless the field's definition specifically allows such modification or the modification is deemed necessary for privacy or security.

6.   Connection Management

HTTP messaging is independent of the underlying transport- or session-layer connection protocol(s). HTTP only presumes a reliable transport with in-order delivery of requests and the corresponding in-order delivery of responses. The mapping of HTTP request and response structures onto the data units of an underlying transport protocol is outside the scope of this specification.

As described in Section 5.2 , the specific connection protocols to be used for an HTTP interaction are determined by client configuration and the target URI . For example, the "http" URI scheme ( Section 2.7.1 ) indicates a default connection of TCP over IP, with a default TCP port of 80, but the client might be configured to use a proxy via some other connection, port, or protocol.

HTTP implementations are expected to engage in connection management, which includes maintaining the state of current connections, establishing a new connection or reusing an existing connection, processing messages received on a connection, detecting connection failures, and closing each connection. Most clients maintain multiple connections in parallel, including more than one connection per server endpoint. Most servers are designed to maintain thousands of concurrent connections, while controlling request queues to enable fair use and detect denial-of-service attacks.

6.1.   Connection

The "Connection" header field allows the sender to indicate desired control options for the current connection. In order to avoid confusing downstream recipients, a proxy or gateway MUST remove or replace any received connection options before forwarding the message.

When a header field aside from Connection is used to supply control information for or about the current connection, the sender MUST list the corresponding field-name within the Connection header field. A proxy or gateway MUST parse a received Connection header field before a message is forwarded and, for each connection-option in this field, remove any header field(s) from the message with the same name as the connection-option, and then remove the Connection header field itself (or replace it with the intermediary's own connection options for the forwarded message).
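
A minimal, non-normative Python sketch of this forwarding step might be:

   # Sketch: `headers` is a list of (name, value) pairs.  Every field named
   # by a connection-option is removed, along with the Connection field
   # itself, before the message is forwarded.

   def strip_hop_by_hop(headers):
       options = set()
       for name, value in headers:
           if name.lower() == "connection":
               options.update(o.strip().lower() for o in value.split(",") if o.strip())
       return [(name, value) for name, value in headers
               if name.lower() != "connection" and name.lower() not in options]

   # strip_hop_by_hop([("Connection", "keep-alive, upgrade"),
   #                   ("Upgrade", "HTTP/2.0"), ("Host", "www.example.org")])
   #   -> [('Host', 'www.example.org')]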

Hence, the Connection header field provides a declarative way of distinguishing header fields that are only intended for the immediate recipient ("hop-by-hop") from those fields that are intended for all recipients on the chain ("end-to-end"), enabling the message to be self-descriptive and allowing future connection-specific extensions to be deployed without fear that they will be blindly forwarded by older intermediaries.

The Connection header field's value has the following grammar:

   Connection        = 1#connection-option
   connection-option = token

Connection options are case-insensitive.

A sender MUST NOT send a connection option corresponding to a header field that is intended for all recipients of the payload. For example, Cache-Control is never appropriate as a connection option ( Section 5.2 of [RFC7234] ).

The connection options do not always correspond to a header field present in the message, since a connection-specific header field might not be needed if there are no parameters associated with a connection option. In contrast, a connection-specific header field that is received without a corresponding connection option usually indicates that the field has been improperly forwarded by an intermediary and ought to be ignored by the recipient.

When defining new connection options, specification authors ought to survey existing header field names and ensure that the new connection option does not share the same name as an already deployed header field. Defining a new connection option essentially reserves that potential field-name for carrying additional information related to the connection option, since it would be unwise for senders to use that field-name for anything else.

The " close " connection option is defined for a sender to signal that this connection will be closed after completion of the response. For example,

in either the request or the response header fields indicates that the sender is going to close the connection after the current request/response is complete ( Section 6.6 ).

A client that does not support persistent connections MUST send the "close" connection option in every request message.

A server that does not support persistent connections MUST send the "close" connection option in every response message that does not have a 1xx (Informational) status code.

6.2.   Establishment

It is beyond the scope of this specification to describe how connections are established via various transport- or session-layer protocols. Each connection applies to only one transport link.

6.3.   Persistence

HTTP/1.1 defaults to the use of " persistent connections ", allowing multiple requests and responses to be carried over a single connection. The " close " connection option is used to signal that a connection will not persist after the current request/response. HTTP implementations SHOULD support persistent connections.

A recipient determines whether a connection is persistent or not based on the most recently received message's protocol version and Connection header field (if any); a sketch of this decision follows the list:

  • If the " close " connection option is present, the connection will not persist after the current response; else,
  • If the received protocol is HTTP/1.1 (or later), the connection will persist after the current response; else,
  • If the received protocol is HTTP/1.0, the "keep-alive" connection option is present, the recipient is not a proxy, and the recipient wishes to honor the HTTP/1.0 "keep-alive" mechanism, the connection will persist after the current response; otherwise,
  • The connection will close after the current response.
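
A hypothetical Python sketch of this decision could be:

   # Sketch: `version` is the received protocol version as (major, minor);
   # `connection_options` is the parsed set of lower-cased options from the
   # Connection header field.

   def connection_persists(version, connection_options, is_proxy=False):
       if "close" in connection_options:
           return False
       if version >= (1, 1):
           return True
       if version == (1, 0) and "keep-alive" in connection_options and not is_proxy:
           return True                  # honoring the HTTP/1.0 "keep-alive" mechanism
       return False

   # connection_persists((1, 1), set()) -> True
   # connection_persists((1, 0), {"keep-alive"}) -> True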

A client MAY send additional requests on a persistent connection until it sends or receives a " close " connection option or receives an HTTP/1.0 response without a "keep-alive" connection option.

In order to remain persistent, all messages on a connection need to have a self-defined message length (i.e., one not defined by closure of the connection), as described in Section 3.3 . A server MUST read the entire request message body or close the connection after sending its response, since otherwise the remaining data on a persistent connection would be misinterpreted as the next request. Likewise, a client MUST read the entire response message body if it intends to reuse the same connection for a subsequent request.

A proxy server MUST NOT maintain a persistent connection with an HTTP/1.0 client (see Section 19.7.1 of [RFC2068] for information and discussion of the problems with the Keep-Alive header field implemented by many HTTP/1.0 clients).

See Appendix A.1.2 for more information on backwards compatibility with HTTP/1.0 clients.

6.3.1.   Retrying Requests

Connections can be closed at any time, with or without intention. Implementations ought to anticipate the need to recover from asynchronous close events.

When an inbound connection is closed prematurely, a client MAY open a new connection and automatically retransmit an aborted sequence of requests if all of those requests have idempotent methods ( Section 4.2.2 of [RFC7231] ). A proxy MUST NOT automatically retry non-idempotent requests.

A user agent MUST NOT automatically retry a request with a non-idempotent method unless it has some means to know that the request semantics are actually idempotent, regardless of the method, or some means to detect that the original request was never applied. For example, a user agent that knows (through design or configuration) that a POST request to a given resource is safe can repeat that request automatically. Likewise, a user agent designed specifically to operate on a version control repository might be able to recover from partial failure conditions by checking the target resource revision(s) after a failed connection, reverting or fixing any changes that were partially applied, and then automatically retrying the requests that failed.

A client SHOULD NOT automatically retry a failed automatic retry.

6.3.2.   Pipelining

A client that supports persistent connections MAY " pipeline " its requests (i.e., send multiple requests without waiting for each response). A server MAY process a sequence of pipelined requests in parallel if they all have safe methods ( Section 4.2.1 of [RFC7231] ), but it MUST send the corresponding responses in the same order that the requests were received.

A client that pipelines requests SHOULD retry unanswered requests if the connection closes before it receives all of the corresponding responses. When retrying pipelined requests after a failed connection (a connection not explicitly closed by the server in its last complete response), a client MUST NOT pipeline immediately after connection establishment, since the first remaining request in the prior pipeline might have caused an error response that can be lost again if multiple requests are sent on a prematurely closed connection (see the TCP reset problem described in Section 6.6 ).

Idempotent methods ( Section 4.2.2 of [RFC7231] ) are significant to pipelining because they can be automatically retried after a connection failure. A user agent SHOULD NOT pipeline requests after a non-idempotent method, until the final response status code for that method has been received, unless the user agent has a means to detect and recover from partial failure conditions involving the pipelined sequence.

An intermediary that receives pipelined requests MAY pipeline those requests when forwarding them inbound, since it can rely on the outbound user agent(s) to determine what requests can be safely pipelined. If the inbound connection fails before receiving a response, the pipelining intermediary MAY attempt to retry a sequence of requests that have yet to receive a response if the requests all have idempotent methods; otherwise, the pipelining intermediary SHOULD forward any received responses and then close the corresponding outbound connection(s) so that the outbound user agent(s) can recover accordingly.

6.4.   Concurrency

A client ought to limit the number of simultaneous open connections that it maintains to a given server.

Previous revisions of HTTP gave a specific number of connections as a ceiling, but this was found to be impractical for many applications. As a result, this specification does not mandate a particular maximum number of connections but, instead, encourages clients to be conservative when opening multiple connections.

Multiple connections are typically used to avoid the "head-of-line blocking" problem, wherein a request that takes significant server-side processing and/or has a large payload blocks subsequent requests on the same connection. However, each connection consumes server resources. Furthermore, using multiple connections can cause undesirable side effects in congested networks.

Note that a server might reject traffic that it deems abusive or characteristic of a denial-of-service attack, such as an excessive number of open connections from a single client.

6.5.   Failures and Timeouts

Servers will usually have some timeout value beyond which they will no longer maintain an inactive connection. Proxy servers might make this a higher value since it is likely that the client will be making more connections through the same proxy server. The use of persistent connections places no requirements on the length (or existence) of this timeout for either the client or the server.

A client or server that wishes to time out SHOULD issue a graceful close on the connection. Implementations SHOULD constantly monitor open connections for a received closure signal and respond to it as appropriate, since prompt closure of both sides of a connection enables allocated system resources to be reclaimed.

A client, server, or proxy MAY close the transport connection at any time. For example, a client might have started to send a new request at the same time that the server has decided to close the "idle" connection. From the server's point of view, the connection is being closed while it was idle, but from the client's point of view, a request is in progress.

A server SHOULD sustain persistent connections, when possible, and allow the underlying transport's flow-control mechanisms to resolve temporary overloads, rather than terminate connections with the expectation that clients will retry. The latter technique can exacerbate network congestion.

A client sending a message body SHOULD monitor the network connection for an error response while it is transmitting the request. If the client sees a response that indicates the server does not wish to receive the message body and is closing the connection, the client SHOULD immediately cease transmitting the body and close its side of the connection.

6.6.   Tear-down

The Connection header field ( Section 6.1 ) provides a " close " connection option that a sender SHOULD send when it wishes to close the connection after the current request/response pair.

A client that sends a " close " connection option MUST NOT send further requests on that connection (after the one containing "close") and MUST close the connection after reading the final response message corresponding to this request.

A server that receives a " close " connection option MUST initiate a close of the connection (see below) after it sends the final response to the request that contained "close". The server SHOULD send a "close" connection option in its final response on that connection. The server MUST NOT process any further requests received on that connection.

A server that sends a " close " connection option MUST initiate a close of the connection (see below) after it sends the response containing "close". The server MUST NOT process any further requests received on that connection.

A client that receives a " close " connection option MUST cease sending requests on that connection and close the connection after reading the response message containing the "close"; if additional pipelined requests had been sent on the connection, the client SHOULD NOT assume that they will be processed by the server.

If a server performs an immediate close of a TCP connection, there is a significant risk that the client will not be able to read the last HTTP response. If the server receives additional data from the client on a fully closed connection, such as another request that was sent by the client before receiving the server's response, the server's TCP stack will send a reset packet to the client; unfortunately, the reset packet might erase the client's unacknowledged input buffers before they can be read and interpreted by the client's HTTP parser.

To avoid the TCP reset problem, servers typically close a connection in stages. First, the server performs a half-close by closing only the write side of the read/write connection. The server then continues to read from the connection until it receives a corresponding close by the client, or until the server is reasonably certain that its own TCP stack has received the client's acknowledgement of the packet(s) containing the server's last response. Finally, the server fully closes the connection.

It is unknown whether the reset problem is exclusive to TCP or might also be found in other transport connection protocols.

6.7.   Upgrade

The "Upgrade" header field is intended to provide a simple mechanism for transitioning from HTTP/1.1 to some other protocol on the same connection. A client MAY send a list of protocols in the Upgrade header field of a request to invite the server to switch to one or more of those protocols, in order of descending preference, before sending the final response. A server MAY ignore a received Upgrade header field if it wishes to continue using the current protocol on that connection. Upgrade cannot be used to insist on a protocol change.

A server that sends a 101 (Switching Protocols) response MUST send an Upgrade header field to indicate the new protocol(s) to which the connection is being switched; if multiple protocol layers are being switched, the sender MUST list the protocols in layer-ascending order. A server MUST NOT switch to a protocol that was not indicated by the client in the corresponding request's Upgrade header field. A server MAY choose to ignore the order of preference indicated by the client and select the new protocol(s) based on other factors, such as the nature of the request or the current load on the server.

A server that sends a 426 (Upgrade Required) response MUST send an Upgrade header field to indicate the acceptable protocols, in order of descending preference.

A server MAY send an Upgrade header field in any other response to advertise that it implements support for upgrading to the listed protocols, in order of descending preference, when appropriate for a future request.

The following is a hypothetical example sent by a client:
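   GET /hello.txt HTTP/1.1
   Host: www.example.com
   Connection: upgrade
   Upgrade: HTTP/2.0, SHTTP/1.3, IRC/6.9, RTA/x11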

The capabilities and nature of the application-level communication after the protocol change is entirely dependent upon the new protocol(s) chosen. However, immediately after sending the 101 (Switching Protocols) response, the server is expected to continue responding to the original request as if it had received its equivalent within the new protocol (i.e., the server still has an outstanding request to satisfy after the protocol has been changed, and is expected to do so without requiring the request to be repeated).

For example, if the Upgrade header field is received in a GET request and the server decides to switch protocols, it first responds with a 101 (Switching Protocols) message in HTTP/1.1 and then immediately follows that with the new protocol's equivalent of a response to a GET on the target resource. This allows a connection to be upgraded to protocols with the same semantics as HTTP without the latency cost of an additional round trip. A server MUST NOT switch protocols unless the received message semantics can be honored by the new protocol; an OPTIONS request can be honored by any protocol.

The following is an example response to the above hypothetical request:
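   HTTP/1.1 101 Switching Protocols
   Connection: upgrade
   Upgrade: HTTP/2.0

   [... data stream switches to HTTP/2.0 with an appropriate response
   (as defined by new protocol) to the "GET /hello.txt" request ...]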

When Upgrade is sent, the sender MUST also send a Connection header field ( Section 6.1 ) that contains an "upgrade" connection option, in order to prevent Upgrade from being accidentally forwarded by intermediaries that might not implement the listed protocols. A server MUST ignore an Upgrade header field that is received in an HTTP/1.0 request.

A client cannot begin using an upgraded protocol on the connection until it has completely sent the request message (i.e., the client can't change the protocol it is sending in the middle of a message). If a server receives both an Upgrade and an Expect header field with the "100-continue" expectation ( Section 5.1.1 of [RFC7231] ), the server MUST send a 100 (Continue) response before sending a 101 (Switching Protocols) response.

The Upgrade header field only applies to switching protocols on top of the existing connection; it cannot be used to switch the underlying connection (transport) protocol, nor to switch the existing communication to a different connection. For those purposes, it is more appropriate to use a 3xx (Redirection) response ( Section 6.4 of [RFC7231] ).

This specification only defines the protocol name "HTTP" for use by the family of Hypertext Transfer Protocols, as defined by the HTTP version rules of Section 2.6 and future updates to this specification. Additional tokens ought to be registered with IANA using the registration procedure defined in Section 8.6 .

7.   ABNF List Extension: #rule

A #rule extension to the ABNF rules of [RFC5234] is used to improve readability in the definitions of some header field values.

A construct "#" is defined, similar to "*", for defining comma-delimited lists of elements. The full form is "<n>#<m>element" indicating at least <n> and at most <m> elements, each separated by a single comma (",") and optional whitespace (OWS).

In any production that uses the list construct, a sender MUST NOT generate empty list elements. In other words, a sender MUST generate lists that satisfy the following syntax:
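   1#element => element *( OWS "," OWS element )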

and for n >= 1 and m > 1:
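   <n>#<m>element => element <n-1>*<m-1>( OWS "," OWS element )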

For compatibility with legacy list rules, a recipient MUST parse and ignore a reasonable number of empty list elements: enough to handle common mistakes by senders that merge values, but not so much that they could be used as a denial-of-service mechanism. In other words, a recipient MUST accept lists that satisfy the following syntax:
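   #element => [ ( "," / element ) *( OWS "," [ OWS element ] ) ]

   1#element => *( "," OWS ) element *( OWS "," [ OWS element ] )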

Empty elements do not contribute to the count of elements present. For example, given these ABNF productions:
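   example-list      = 1#example-list-elmt
   example-list-elmt = token ; see Section 3.2.6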

Then the following are valid values for example-list (not including the double quotes, which are present for delimitation only):
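   "foo,bar"
   "foo ,bar,"
   "foo , ,bar,charlie   "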

In contrast, the following values would be invalid, since at least one non-empty element is required by the example-list production:
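   ""
   ","
   ",   ,"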

Appendix B shows the collected ABNF for recipients after the list constructs have been expanded.

8.   IANA Considerations

8.1.   Header Field Registration

HTTP header fields are registered within the "Message Headers" registry maintained at < http://www.iana.org/assignments/message-headers/ >.

This document defines the following HTTP header fields, so the "Permanent Message Header Field Names" registry has been updated accordingly (see [BCP90] ).

Header Field Name    Protocol   Status     Reference
-----------------    --------   --------   -------------
Connection           http       standard   Section 6.1
Content-Length       http       standard   Section 3.3.2
Host                 http       standard   Section 5.4
TE                   http       standard   Section 4.3
Trailer              http       standard   Section 4.4
Transfer-Encoding    http       standard   Section 3.3.1
Upgrade              http       standard   Section 6.7
Via                  http       standard   Section 5.7.1

Furthermore, the header field-name "Close" has been registered as "reserved", since using that name as an HTTP header field might conflict with the "close" connection option of the Connection header field ( Section 6.1 ).

Header Field Name    Protocol   Status     Reference
-----------------    --------   --------   -------------
Close                http       reserved   Section 8.1

The change controller is: "IETF ([email protected]) - Internet Engineering Task Force".

8.2.   URI Scheme Registration

IANA maintains the registry of URI Schemes [BCP115] at < http://www.iana.org/assignments/uri-schemes/ >.

This document defines the following URI schemes, so the "Permanent URI Schemes" registry has been updated accordingly.

URI Scheme   Description                          Reference
----------   ----------------------------------   -------------
http         Hypertext Transfer Protocol          Section 2.7.1
https        Hypertext Transfer Protocol Secure   Section 2.7.2

8.3.   Internet Media Type Registration

IANA maintains the registry of Internet media types [BCP13] at < http://www.iana.org/assignments/media-types >.

This document serves as the specification for the Internet media types "message/http" and "application/http". The following has been registered with IANA.

8.3.1.   Internet Media Type message/http

The message/http type can be used to enclose a single HTTP request or response message, provided that it obeys the MIME restrictions for all "message" types regarding line length and encodings.

8.3.2.   Internet Media Type application/http

The application/http type can be used to enclose a pipeline of one or more HTTP request or response messages (not intermixed).

8.4.   Transfer Coding Registry

The "HTTP Transfer Coding Registry" defines the namespace for transfer coding names. It is maintained at < http://www.iana.org/assignments/http-parameters >.

8.4.1.   Procedure

Registrations MUST include the following fields:

  • Name
  • Description
  • Pointer to specification text

Names of transfer codings MUST NOT overlap with names of content codings ( Section 3.1.2.1 of [RFC7231] ) unless the encoding transformation is identical, as is the case for the compression codings defined in Section 4.2 .

Values to be added to this namespace require IETF Review (see Section 4.1 of [RFC5226] ), and MUST conform to the purpose of transfer coding defined in this specification.

Use of program names for the identification of encoding formats is not desirable and is discouraged for future encodings.

8.4.2.   Registration

The "HTTP Transfer Coding Registry" has been updated with the registrations below:

Name         Description                                        Reference
----------   ------------------------------------------------   -------------
chunked      Transfer in a series of chunks                      Section 4.1
compress     UNIX "compress" data format                         Section 4.2.1
deflate      "deflate" compressed data ([RFC1951]) inside the    Section 4.2.2
             "zlib" data format ([RFC1950])
gzip         GZIP file format                                    Section 4.2.3
x-compress   Deprecated (alias for compress)                     Section 4.2.1
x-gzip       Deprecated (alias for gzip)                         Section 4.2.3

8.5.   Content Coding Registration

IANA maintains the "HTTP Content Coding Registry" at < http://www.iana.org/assignments/http-parameters >.

The "HTTP Content Coding Registry" has been updated with the registrations below:

Name         Description                                        Reference
----------   ------------------------------------------------   -------------
compress     UNIX "compress" data format                         Section 4.2.1
deflate      "deflate" compressed data ([RFC1951]) inside the    Section 4.2.2
             "zlib" data format ([RFC1950])
gzip         GZIP file format                                    Section 4.2.3
x-compress   Deprecated (alias for compress)                     Section 4.2.1
x-gzip       Deprecated (alias for gzip)                         Section 4.2.3

8.6.   Upgrade Token Registry

The "Hypertext Transfer Protocol (HTTP) Upgrade Token Registry" defines the namespace for protocol-name tokens used to identify protocols in the Upgrade header field. The registry is maintained at < http://www.iana.org/assignments/http-upgrade-tokens >.

8.6.1.   Procedure

Each registered protocol name is associated with contact information and an optional set of specifications that details how the connection will be processed after it has been upgraded.

Registrations happen on a "First Come First Served" basis (see Section 4.1 of [RFC5226] ) and are subject to the following rules:

  • A protocol-name token, once registered, stays registered forever.
  • The registration MUST name a responsible party for the registration.
  • The registration MUST name a point of contact.
  • The registration MAY name a set of specifications associated with that token. Such specifications need not be publicly available.
  • The registration SHOULD name a set of expected "protocol-version" tokens associated with that token at the time of registration.
  • The responsible party MAY change the registration at any time. The IANA will keep a record of all such changes, and make them available upon request.
  • The IESG MAY reassign responsibility for a protocol token. This will normally only be used in the case when a responsible party cannot be contacted.

This registration procedure for HTTP Upgrade Tokens replaces that previously defined in Section 7.2 of [RFC2817] .

8.6.2.   Upgrade Token Registration

The "HTTP" entry in the upgrade token registry has been updated with the registration below:

Value   Description                   Expected Version Tokens         Reference
-----   ---------------------------   -----------------------------   -----------
HTTP    Hypertext Transfer Protocol   any DIGIT.DIGIT (e.g., "2.0")   Section 2.6

The responsible party is: "IETF ([email protected]) - Internet Engineering Task Force".

9.   Security Considerations

This section is meant to inform developers, information providers, and users of known security considerations relevant to HTTP message syntax, parsing, and routing. Security considerations about HTTP semantics and payloads are addressed in [RFC7231] .

9.1.   Establishing Authority

HTTP relies on the notion of an authoritative response : a response that has been determined by (or at the direction of) the authority identified within the target URI to be the most appropriate response for that request given the state of the target resource at the time of response message origination. Providing a response from a non-authoritative source, such as a shared cache, is often useful to improve performance and availability, but only to the extent that the source can be trusted or the distrusted response can be safely used.

Unfortunately, establishing authority can be difficult. For example, phishing is an attack on the user's perception of authority, where that perception can be misled by presenting similar branding in hypertext, possibly aided by userinfo obfuscating the authority component (see Section 2.7.1 ). User agents can reduce the impact of phishing attacks by enabling users to easily inspect a target URI prior to making an action, by prominently distinguishing (or rejecting) userinfo when present, and by not sending stored credentials and cookies when the referring document is from an unknown or untrusted source.

When a registered name is used in the authority component, the "http" URI scheme ( Section 2.7.1 ) relies on the user's local name resolution service to determine where it can find authoritative responses. This means that any attack on a user's network host table, cached names, or name resolution libraries becomes an avenue for attack on establishing authority. Likewise, the user's choice of server for Domain Name Service (DNS), and the hierarchy of servers from which it obtains resolution results, could impact the authenticity of address mappings; DNS Security Extensions (DNSSEC, [RFC4033] ) are one way to improve authenticity.

Furthermore, after an IP address is obtained, establishing authority for an "http" URI is vulnerable to attacks on Internet Protocol routing.

The "https" scheme ( Section 2.7.2 ) is intended to prevent (or at least reveal) many of these potential attacks on establishing authority, provided that the negotiated TLS connection is secured and the client properly verifies that the communicating server's identity matches the target URI's authority component (see [RFC2818] ). Correctly implementing such verification can be difficult (see [Georgiev] ).

9.2.   Risks of Intermediaries

By their very nature, HTTP intermediaries are men-in-the-middle and, thus, represent an opportunity for man-in-the-middle attacks. Compromise of the systems on which the intermediaries run can result in serious security and privacy problems. Intermediaries might have access to security-related information, personal information about individual users and organizations, and proprietary information belonging to users and content providers. A compromised intermediary, or an intermediary implemented or configured without regard to security and privacy considerations, might be used in the commission of a wide range of potential attacks.

Intermediaries that contain a shared cache are especially vulnerable to cache poisoning attacks, as described in Section 8 of [RFC7234] .

Implementers need to consider the privacy and security implications of their design and coding decisions, and of the configuration options they provide to operators (especially the default configuration).

Users need to be aware that intermediaries are no more trustworthy than the people who run them; HTTP itself cannot solve this problem.

9.3.   Attacks via Protocol Element Length

Because HTTP uses mostly textual, character-delimited fields, parsers are often vulnerable to attacks based on sending very long (or very slow) streams of data, particularly where an implementation is expecting a protocol element with no predefined length.

To promote interoperability, specific recommendations are made for minimum size limits on request-line ( Section 3.1.1 ) and header fields ( Section 3.2 ). These are minimum recommendations, chosen to be supportable even by implementations with limited resources; it is expected that most implementations will choose substantially higher limits.

A server can reject a message that has a request-target that is too long ( Section 6.5.12 of [RFC7231] ) or a request payload that is too large ( Section 6.5.11 of [RFC7231] ). Additional status codes related to capacity limits have been defined by extensions to HTTP [RFC6585] .

Recipients ought to carefully limit the extent to which they process other protocol elements, including (but not limited to) request methods, response status phrases, header field-names, numeric values, and body chunks. Failure to limit such processing can result in buffer overflows, arithmetic overflows, or increased vulnerability to denial-of-service attacks.

9.4.   Response Splitting

Response splitting (a.k.a., CRLF injection) is a common technique, used in various attacks on Web usage, that exploits the line-based nature of HTTP message framing and the ordered association of requests to responses on persistent connections [Klein] . This technique can be particularly damaging when the requests pass through a shared cache.

Response splitting exploits a vulnerability in servers (usually within an application server) where an attacker can send encoded data within some parameter of the request that is later decoded and echoed within any of the response header fields of the response. If the decoded data is crafted to look like the response has ended and a subsequent response has begun, the response has been split and the content within the apparent second response is controlled by the attacker. The attacker can then make any other request on the same persistent connection and trick the recipients (including intermediaries) into believing that the second half of the split is an authoritative answer to the second request.

For example, a parameter within the request-target might be read by an application server and reused within a redirect, resulting in the same parameter being echoed in the Location header field of the response. If the parameter is decoded by the application and not properly encoded when placed in the response field, the attacker can send encoded CRLF octets and other content that will make the application's single response look like two or more responses.
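As a hypothetical illustration (the parameter name, path, and injected content below are invented for this example), suppose the decoded value of a "lang" parameter is

   en<CR><LF>Content-Length: 0<CR><LF><CR><LF>HTTP/1.1 200 OK<CR><LF>...

and the application copies that value, unencoded, into the Location field of its redirect. A recipient would then parse the emitted header section as two messages:

   HTTP/1.1 302 Found
   Location: /search?lang=en
   Content-Length: 0

   HTTP/1.1 200 OK
   (attacker-supplied header fields and content follow)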

A common defense against response splitting is to filter requests for data that looks like encoded CR and LF (e.g., "%0D" and "%0A"). However, that assumes the application server is only performing URI decoding, rather than more obscure data transformations like charset transcoding, XML entity translation, base64 decoding, sprintf reformatting, etc. A more effective mitigation is to prevent anything other than the server's core protocol libraries from sending a CR or LF within the header section, which means restricting the output of header fields to APIs that filter for bad octets and not allowing application servers to write directly to the protocol stream.

9.5.   Request Smuggling

Request smuggling ( [Linhart] ) is a technique that exploits differences in protocol parsing among various recipients to hide additional requests (which might otherwise be blocked or disabled by policy) within an apparently harmless request. Like response splitting, request smuggling can lead to a variety of attacks on HTTP usage.

This specification has introduced new requirements on request parsing, particularly with regard to message framing in Section 3.3.3 , to reduce the effectiveness of request smuggling.

9.6.   Message Integrity

HTTP does not define a specific mechanism for ensuring message integrity, instead relying on the error-detection ability of underlying transport protocols and the use of length or chunk-delimited framing to detect completeness. Additional integrity mechanisms, such as hash functions or digital signatures applied to the content, can be selectively added to messages via extensible metadata header fields. Historically, the lack of a single integrity mechanism has been justified by the informal nature of most HTTP communication. However, the prevalence of HTTP as an information access mechanism has resulted in its increasing use within environments where verification of message integrity is crucial.

User agents are encouraged to implement configurable means for detecting and reporting failures of message integrity such that those means can be enabled within environments for which integrity is necessary. For example, a browser being used to view medical history or drug interaction information needs to indicate to the user when such information is detected by the protocol to be incomplete, expired, or corrupted during transfer. Such mechanisms might be selectively enabled via user agent extensions or the presence of message integrity metadata in a response. At a minimum, user agents ought to provide some indication that allows a user to distinguish between a complete and incomplete response message ( Section 3.4 ) when such verification is desired.

9.7.   Message Confidentiality

HTTP relies on underlying transport protocols to provide message confidentiality when that is desired. HTTP has been specifically designed to be independent of the transport protocol, such that it can be used over many different forms of encrypted connection, with the selection of such transports being identified by the choice of URI scheme or within user agent configuration.

The "https" scheme can be used to identify resources that require a confidential connection, as described in Section 2.7.2 .

9.8.   Privacy of Server Log Information

A server is in the position to save personal data about a user's requests over time, which might identify their reading patterns or subjects of interest. In particular, log information gathered at an intermediary often contains a history of user agent interaction, across a multitude of sites, that can be traced to individual users.

HTTP log information is confidential in nature; its handling is often constrained by laws and regulations. Log information needs to be securely stored and appropriate guidelines followed for its analysis. Anonymization of personal information within individual entries helps, but it is generally not sufficient to prevent real log traces from being re-identified based on correlation with other access characteristics. As such, access traces that are keyed to a specific client are unsafe to publish even if the key is pseudonymous.

To minimize the risk of theft or accidental publication, log information ought to be purged of personally identifiable information, including user identifiers, IP addresses, and user-provided query parameters, as soon as that information is no longer necessary to support operational needs for security, auditing, or fraud control.

10.   Acknowledgments

This edition of HTTP/1.1 builds on the many contributions that went into RFC 1945 , RFC 2068 , RFC 2145 , and RFC 2616 , including substantial contributions made by the previous authors, editors, and Working Group Chairs: Tim Berners-Lee, Ari Luotonen, Roy T. Fielding, Henrik Frystyk Nielsen, Jim Gettys, Jeffrey C. Mogul, Larry Masinter, and Paul J. Leach. Mark Nottingham oversaw this effort as Working Group Chair.

Since 1999, the following contributors have helped improve the HTTP specification by reporting bugs, asking smart questions, drafting or reviewing text, and evaluating open issues:

Adam Barth, Adam Roach, Addison Phillips, Adrian Chadd, Adrian Cole, Adrien W. de Croy, Alan Ford, Alan Ruttenberg, Albert Lunde, Alek Storm, Alex Rousskov, Alexandre Morgaut, Alexey Melnikov, Alisha Smith, Amichai Rothman, Amit Klein, Amos Jeffries, Andreas Maier, Andreas Petersson, Andrei Popov, Anil Sharma, Anne van Kesteren, Anthony Bryan, Asbjorn Ulsberg, Ashok Kumar, Balachander Krishnamurthy, Barry Leiba, Ben Laurie, Benjamin Carlyle, Benjamin Niven-Jenkins, Benoit Claise, Bil Corry, Bill Burke, Bjoern Hoehrmann, Bob Scheifler, Boris Zbarsky, Brett Slatkin, Brian Kell, Brian McBarron, Brian Pane, Brian Raymor, Brian Smith, Bruce Perens, Bryce Nesbitt, Cameron Heavon-Jones, Carl Kugler, Carsten Bormann, Charles Fry, Chris Burdess, Chris Newman, Christian Huitema, Cyrus Daboo, Dale Robert Anderson, Dan Wing, Dan Winship, Daniel Stenberg, Darrel Miller, Dave Cridland, Dave Crocker, Dave Kristol, Dave Thaler, David Booth, David Singer, David W. Morris, Diwakar Shetty, Dmitry Kurochkin, Drummond Reed, Duane Wessels, Edward Lee, Eitan Adler, Eliot Lear, Emile Stephan, Eran Hammer-Lahav, Eric D. Williams, Eric J. Bowman, Eric Lawrence, Eric Rescorla, Erik Aronesty, EungJun Yi, Evan Prodromou, Felix Geisendoerfer, Florian Weimer, Frank Ellermann, Fred Akalin, Fred Bohle, Frederic Kayser, Gabor Molnar, Gabriel Montenegro, Geoffrey Sneddon, Gervase Markham, Gili Tzabari, Grahame Grieve, Greg Slepak, Greg Wilkins, Grzegorz Calkowski, Harald Tveit Alvestrand, Harry Halpin, Helge Hess, Henrik Nordstrom, Henry S. Thompson, Henry Story, Herbert van de Sompel, Herve Ruellan, Howard Melman, Hugo Haas, Ian Fette, Ian Hickson, Ido Safruti, Ilari Liusvaara, Ilya Grigorik, Ingo Struck, J. Ross Nicoll, James Cloos, James H. Manger, James Lacey, James M. Snell, Jamie Lokier, Jan Algermissen, Jari Arkko, Jeff Hodges (who came up with the term 'effective Request-URI'), Jeff Pinner, Jeff Walden, Jim Luther, Jitu Padhye, Joe D. Williams, Joe Gregorio, Joe Orton, Joel Jaeggli, John C. Klensin, John C. Mallery, John Cowan, John Kemp, John Panzer, John Schneider, John Stracke, John Sullivan, Jonas Sicking, Jonathan A. Rees, Jonathan Billington, Jonathan Moore, Jonathan Silvera, Jordi Ros, Joris Dobbelsteen, Josh Cohen, Julien Pierre, Jungshik Shin, Justin Chapweske, Justin Erenkrantz, Justin James, Kalvinder Singh, Karl Dubost, Kathleen Moriarty, Keith Hoffman, Keith Moore, Ken Murchison, Koen Holtman, Konstantin Voronkov, Kris Zyp, Leif Hedstrom, Lionel Morand, Lisa Dusseault, Maciej Stachowiak, Manu Sporny, Marc Schneider, Marc Slemko, Mark Baker, Mark Pauley, Mark Watson, Markus Isomaki, Markus Lanthaler, Martin J. Duerst, Martin Musatov, Martin Nilsson, Martin Thomson, Matt Lynch, Matthew Cox, Matthew Kerwin, Max Clark, Menachem Dodge, Meral Shirazipour, Michael Burrows, Michael Hausenblas, Michael Scharf, Michael Sweet, Michael Tuexen, Michael Welzl, Mike Amundsen, Mike Belshe, Mike Bishop, Mike Kelly, Mike Schinkel, Miles Sabin, Murray S. Kucherawy, Mykyta Yevstifeyev, Nathan Rixham, Nicholas Shanks, Nico Williams, Nicolas Alvarez, Nicolas Mailhot, Noah Slater, Osama Mazahir, Pablo Castro, Pat Hayes, Patrick R. McManus, Paul E. 
Jones, Paul Hoffman, Paul Marquess, Pete Resnick, Peter Lepeska, Peter Occil, Peter Saint-Andre, Peter Watkins, Phil Archer, Phil Hunt, Philippe Mougin, Phillip Hallam-Baker, Piotr Dobrogost, Poul-Henning Kamp, Preethi Natarajan, Rajeev Bector, Ray Polk, Reto Bachmann-Gmuer, Richard Barnes, Richard Cyganiak, Rob Trace, Robby Simpson, Robert Brewer, Robert Collins, Robert Mattson, Robert O'Callahan, Robert Olofsson, Robert Sayre, Robert Siemer, Robert de Wilde, Roberto Javier Godoy, Roberto Peon, Roland Zink, Ronny Widjaja, Ryan Hamilton, S. Mike Dierken, Salvatore Loreto, Sam Johnston, Sam Pullara, Sam Ruby, Saurabh Kulkarni, Scott Lawrence (who maintained the original issues list), Sean B. Palmer, Sean Turner, Sebastien Barnoud, Shane McCarron, Shigeki Ohtsu, Simon Yarde, Stefan Eissing, Stefan Tilkov, Stefanos Harhalakis, Stephane Bortzmeyer, Stephen Farrell, Stephen Kent, Stephen Ludin, Stuart Williams, Subbu Allamaraju, Subramanian Moonesamy, Susan Hares, Sylvain Hellegouarch, Tapan Divekar, Tatsuhiro Tsujikawa, Tatsuya Hayashi, Ted Hardie, Ted Lemon, Thomas Broyer, Thomas Fossati, Thomas Maslen, Thomas Nadeau, Thomas Nordin, Thomas Roessler, Tim Bray, Tim Morgan, Tim Olsen, Tom Zhou, Travis Snoozy, Tyler Close, Vincent Murphy, Wenbo Zhu, Werner Baumann, Wilbur Streett, Wilfredo Sanchez Vega, William A. Rowe Jr., William Chan, Willy Tarreau, Xiaoshu Wang, Yaron Goland, Yngve Nysaeter Pettersen, Yoav Nir, Yogesh Bang, Yuchung Cheng, Yutaka Oiwa, Yves Lafon (long-time member of the editor team), Zed A. Shaw, and Zhong Yu.

See Section 16 of [RFC2616] for additional acknowledgements from prior revisions.

11.   References

11.1.   Normative References

11.2.   Informative References

Appendix A.   HTTP Version History

HTTP has been in use since 1990. The first version, later referred to as HTTP/0.9, was a simple protocol for hypertext data transfer across the Internet, using only a single request method (GET) and no metadata. HTTP/1.0, as defined by [RFC1945] , added a range of request methods and MIME-like messaging, allowing for metadata to be transferred and modifiers placed on the request/response semantics. However, HTTP/1.0 did not sufficiently take into consideration the effects of hierarchical proxies, caching, the need for persistent connections, or name-based virtual hosts. The proliferation of incompletely implemented applications calling themselves "HTTP/1.0" further necessitated a protocol version change in order for two communicating applications to determine each other's true capabilities.

HTTP/1.1 remains compatible with HTTP/1.0 by including more stringent requirements that enable reliable implementations, adding only those features that can either be safely ignored by an HTTP/1.0 recipient or only be sent when communicating with a party advertising conformance with HTTP/1.1.

HTTP/1.1 has been designed to make supporting previous versions easy. A general-purpose HTTP/1.1 server ought to be able to understand any valid request in the format of HTTP/1.0, responding appropriately with an HTTP/1.1 message that only uses features understood (or safely ignored) by HTTP/1.0 clients. Likewise, an HTTP/1.1 client can be expected to understand any valid HTTP/1.0 response.

Since HTTP/0.9 did not support header fields in a request, there is no mechanism for it to support name-based virtual hosts (selection of resource by inspection of the Host header field). Any server that implements name-based virtual hosts ought to disable support for HTTP/0.9. Most requests that appear to be HTTP/0.9 are, in fact, badly constructed HTTP/1.x requests caused by a client failing to properly encode the request-target.

A.1.   Changes from HTTP/1.0

This section summarizes major differences between versions HTTP/1.0 and HTTP/1.1.

A.1.1.   Multihomed Web Servers

The requirements that clients and servers support the Host header field ( Section 5.4 ), report an error if it is missing from an HTTP/1.1 request, and accept absolute URIs ( Section 5.3 ) are among the most important changes defined by HTTP/1.1.

Older HTTP/1.0 clients assumed a one-to-one relationship of IP addresses and servers; there was no other established mechanism for distinguishing the intended server of a request than the IP address to which that request was directed. The Host header field was introduced during the development of HTTP/1.1 and, though it was quickly implemented by most HTTP/1.0 browsers, additional requirements were placed on all HTTP/1.1 requests in order to ensure complete adoption. At the time of this writing, most HTTP-based services are dependent upon the Host header field for targeting requests.

A.1.2.   Keep-Alive Connections

In HTTP/1.0, each connection is established by the client prior to the request and closed by the server after sending the response. However, some implementations implement the explicitly negotiated ("Keep-Alive") version of persistent connections described in Section 19.7.1 of [RFC2068] .

Some clients and servers might wish to be compatible with these previous approaches to persistent connections, by explicitly negotiating for them with a "Connection: keep-alive" request header field. However, some experimental implementations of HTTP/1.0 persistent connections are faulty; for example, if an HTTP/1.0 proxy server doesn't understand Connection , it will erroneously forward that header field to the next inbound server, which would result in a hung connection.

One attempted solution was the introduction of a Proxy-Connection header field, targeted specifically at proxies. In practice, this was also unworkable, because proxies are often deployed in multiple layers, bringing about the same problem discussed above.

As a result, clients are encouraged not to send the Proxy-Connection header field in any requests.

Clients are also encouraged to consider the use of Connection: keep-alive in requests carefully; while they can enable persistent connections with HTTP/1.0 servers, clients using them will need to monitor the connection for "hung" requests (which indicate that the client ought to stop sending the header field), and this mechanism ought not be used by clients at all when a proxy is being used.

A.1.3.   Introduction of Transfer-Encoding

HTTP/1.1 introduces the Transfer-Encoding header field ( Section 3.3.1 ). Transfer codings need to be decoded prior to forwarding an HTTP message over a MIME-compliant protocol.

A.2.   Changes from RFC 2616

HTTP's approach to error handling has been explained. ( Section 2.5 )

The HTTP-version ABNF production has been clarified to be case-sensitive. Additionally, version numbers have been restricted to single digits, due to the fact that implementations are known to handle multi-digit version numbers incorrectly. ( Section 2.6 )

Userinfo (i.e., username and password) are now disallowed in HTTP and HTTPS URIs, because of security issues related to their transmission on the wire. ( Section 2.7.1 )

The HTTPS URI scheme is now defined by this specification; previously, it was done in Section 2.4 of [RFC2818] . Furthermore, it implies end-to-end security. ( Section 2.7.2 )

HTTP messages can be (and often are) buffered by implementations; despite it sometimes being available as a stream, HTTP is fundamentally a message-oriented protocol. Minimum supported sizes for various protocol elements have been suggested, to improve interoperability. ( Section 3 )

Invalid whitespace around field-names is now required to be rejected, because accepting it represents a security vulnerability. The ABNF productions defining header fields now only list the field value. ( Section 3.2 )

Rules about implicit linear whitespace between certain grammar productions have been removed; now whitespace is only allowed where specifically defined in the ABNF. ( Section 3.2.3 )

Header fields that span multiple lines ("line folding") are deprecated. ( Section 3.2.4 )

The NUL octet is no longer allowed in comment and quoted-string text, and handling of backslash-escaping in them has been clarified. The quoted-pair rule no longer allows escaping control characters other than HTAB. Non-US-ASCII content in header fields and the reason phrase has been obsoleted and made opaque (the TEXT rule was removed). ( Section 3.2.6 )

Bogus Content-Length header fields are now required to be handled as errors by recipients. ( Section 3.3.2 )

The algorithm for determining the message body length has been clarified to indicate all of the special cases (e.g., driven by methods or status codes) that affect it, and that new protocol elements cannot define such special cases. CONNECT is a new, special case in determining message body length. "multipart/byteranges" is no longer a way of determining message body length. ( Section 3.3.3 )

The "identity" transfer coding token has been removed. (Sections 3.3 and 4 )

Chunk length does not include the count of the octets in the chunk header and trailer. Line folding in chunk extensions is disallowed. ( Section 4.1 )

The meaning of the "deflate" content coding has been clarified. ( Section 4.2.2 )

The segment + query components of RFC 3986 have been used to define the request-target, instead of abs_path from RFC 1808. The asterisk-form of the request-target is only allowed with the OPTIONS method. ( Section 5.3 )

The term "Effective Request URI" has been introduced. ( Section 5.5 )

Gateways do not need to generate Via header fields anymore. ( Section 5.7.1 )

Exactly when "close" connection options have to be sent has been clarified. Also, "hop-by-hop" header fields are required to appear in the Connection header field; just because they're defined as hop-by-hop in this specification doesn't exempt them. ( Section 6.1 )

The limit of two connections per server has been removed. An idempotent sequence of requests is no longer required to be retried. The requirement to retry requests under certain circumstances when the server prematurely closes the connection has been removed. Also, some extraneous requirements about when servers are allowed to close connections prematurely have been removed. ( Section 6.3 )

The semantics of the Upgrade header field is now defined in responses other than 101 (this was incorporated from [RFC2817] ). Furthermore, the ordering in the field value is now significant. ( Section 6.7 )

Empty list elements in list productions (e.g., a list header field containing ", ,") have been deprecated. ( Section 7 )

Registration of Transfer Codings now requires IETF Review. ( Section 8.4 )

This specification now defines the Upgrade Token Registry, previously defined in Section 7.2 of [RFC2817] . ( Section 8.6 )

The expectation to support HTTP/0.9 requests has been removed. ( Appendix A )

Issues with the Keep-Alive and Proxy-Connection header fields in requests are pointed out, with use of the latter being discouraged altogether. ( Appendix A.1.2 )

Appendix B.   Collected ABNF

Index

  • absolute-form (of request-target)   5.3.2
  • accelerator   2.3
  • application/http Media Type   8.3.2
  • asterisk-form (of request-target)   5.3.4
  • authoritative response   9.1
  • authority-form (of request-target)   5.3.3
  • BCP115    8.2 , 11.2
  • BCP13    8.3 , 11.2
  • BCP90    8.1 , 11.2
  • browser   2.1
  • cache   2.4
  • cacheable   2.4
  • captive portal   2.3
  • chunked (Coding Format)   3.3.1 , 3.3.3 , 4.1
  • client   2.1
  • close   3.2.1 , 4.3 , 5.7 , 6.1 , 6.1 , 6.3.2 , 6.6 , 6.6 , 6.7 , 8.1 , 8.1 , A.2
  • compress (Coding Format)   4.2.1
  • connection   2.1
  • Connection header field   3.2.1 , 4.3 , 5.7 , 6.1 , 6.1 , 6.3.2 , 6.6 , 6.6 , 6.7 , 8.1 , 8.1 , A.2
  • Content-Length header field   3.3.2 , 8.1 , A.2
  • deflate (Coding Format)   4.2.2
  • Delimiters   3.2.6
  • downstream   2.3
  • effective request URI   5.5
  • gateway   2.3
  • Georgiev    9.1 , 11.2
  • absolute-form    5.3 , 5.3.2
  • absolute-path    2.7
  • absolute-URI    2.7
  • ALPHA   1.2
  • asterisk-form    5.3 , 5.3.4
  • authority    2.7
  • authority-form    5.3 , 5.3.3
  • BWS    3.2.3
  • chunk    4.1
  • chunk-data    4.1
  • chunk-ext    4.1 , 4.1.1
  • chunk-ext-name    4.1.1
  • chunk-ext-val    4.1.1
  • chunk-size    4.1
  • chunked-body    4.1 , 4.1.1
  • comment    3.2.6
  • Connection    6.1
  • connection-option    6.1
  • Content-Length    3.3.2
  • CR   1.2
  • CRLF   1.2
  • ctext    3.2.6
  • CTL   1.2
  • DIGIT   1.2
  • DQUOTE   1.2
  • field-content    3.2
  • field-name    3.2 , 4.4
  • field-value    3.2
  • field-vchar    3.2
  • fragment    2.7
  • header-field    3.2 , 4.1.2
  • HEXDIG   1.2
  • Host    5.4
  • HTAB   1.2
  • HTTP-message    3
  • HTTP-name    2.6
  • http-URI    2.7.1
  • HTTP-version    2.6
  • https-URI    2.7.2
  • last-chunk    4.1
  • LF   1.2
  • message-body    3.3
  • method    3.1.1
  • obs-fold    3.2
  • obs-text    3.2.6
  • OCTET   1.2
  • origin-form    5.3 , 5.3.1
  • OWS    3.2.3
  • partial-URI    2.7
  • port    2.7
  • protocol-name    5.7.1
  • protocol-version    5.7.1
  • pseudonym    5.7.1
  • qdtext    3.2.6
  • query    2.7
  • quoted-pair    3.2.6
  • quoted-string    3.2.6
  • rank    4.3
  • reason-phrase    3.1.2
  • received-by    5.7.1
  • received-protocol    5.7.1
  • request-line    3.1.1
  • request-target    5.3
  • RWS    3.2.3
  • scheme    2.7
  • segment    2.7
  • SP   1.2
  • start-line    3.1
  • status-code    3.1.2
  • status-line    3.1.2
  • t-codings    4.3
  • t-ranking    4.3
  • tchar    3.2.6
  • TE    4.3
  • token    3.2.6
  • Trailer    4.4
  • trailer-part    4.1 , 4.1.2
  • transfer-coding    4
  • Transfer-Encoding    3.3.1
  • transfer-extension    4
  • transfer-parameter    4
  • Upgrade    6.7
  • uri-host    2.7
  • URI-reference    2.7
  • VCHAR   1.2
  • Via    5.7.1
  • gzip (Coding Format)   4.2.3
  • header field   3
  • header section   3
  • headers   3
  • Host header field   5.3.1 , 5.4 , 8.1 , A.1.1
  • http URI scheme   2.7.1
  • https URI scheme   2.7.2
  • inbound   2.3
  • interception proxy   2.3
  • intermediary   2.3
  • ISO-8859-1    3.2.4 , 11.2
  • Klein    9.4 , 11.2
  • Kri2001    3.2.2 , 11.2
  • Linhart    9.5 , 11.2
  • application/http   8.3.2
  • message/http   8.3.1
  • message   2.1
  • message/http Media Type   8.3.1
  • method   3.1.1
  • non-transforming proxy   5.7.2
  • origin server   2.1
  • origin-form (of request-target)   5.3.1
  • outbound   2.3
  • phishing   9.1
  • proxy   2.3
  • recipient   2.1
  • request   2.1
  • request-target   3.1.1
  • resource   2.7
  • response   2.1
  • reverse proxy   2.3
  • RFC0793    2.7.1 , 11.1
  • RFC1919    2.3 , 11.2
  • RFC1945    2.6 , 10 , 11.2 , A
  • RFC1950    4.2.2 , 8.4.2 , 8.5 , 11.1
  • RFC1951    4.2.2 , 8.4.2 , 8.5 , 11.1
  • RFC1952    4.2.3 , 8.4.2 , 8.5 , 11.1
  • Section 6    3.3.1
  • RFC2047    3.2.4 , 11.2
  • Section 19.7.1    6.3 , A.1.2
  • RFC2119    1.1 , 11.1
  • RFC2145    1 , 10 , 11.2
  • Section 16    10
  • Section 7.2    8.6.1 , A.2
  • Section 2.4    A.2
  • RFC3040    2.3 , 11.2
  • Section 2.1    2.7.3
  • Section 2.2    2.7.3
  • Section 3.1    2.7
  • Section 3.2    2.7
  • Section 3.2.1    2.7.1
  • Section 3.2.2    2.7 , 2.7.1
  • Section 3.2.3    2.7
  • Section 3.3    2.7 , 2.7
  • Section 3.4    2.7
  • Section 3.5    2.7 , 2.7.1 , 5.1
  • Section 4.1    2.7
  • Section 4.2    2.7
  • Section 4.3    2.7
  • Section 6    2.7.3
  • RFC4033    9.1 , 11.2
  • RFC4559    2.3 , 11.2
  • Section 4.1    8.4.1 , 8.6.1
  • Appendix B.1    1.2
  • RFC5246    2.3 , 2.7.2 , 11.2
  • Section 3.6.7    5.7.1
  • RFC6265    2.7.2 , 3.2.2 , 4.1.2 , 11.2
  • RFC6585    9.3 , 11.2
  • Section 2    2.7
  • Section 3    3.3.2
  • Section 3.1.2.1    3.3.1 , 8.4.1
  • Section 3.3    5.7.2
  • Section 4    3.1.1
  • Section 4.2.1    6.3.2
  • Section 4.2.2    6.3.1 , 6.3.2
  • Section 4.3.1    2.1 , 3.3
  • Section 4.3.2    3.3 , 3.3.2
  • Section 4.3.6    3.3 , 3.3.1 , 3.3.2 , 5.3.3
  • Section 4.3.7    5.3.4
  • Section 5    4.1.2
  • Section 5.1.1    6.7
  • Section 5.3.1    4.3
  • Section 6    2.7.1 , 3.1.2
  • Section 6.2    5.6
  • Section 6.3.4    5.7.2
  • Section 6.4    6.7
  • Section 6.5.11    9.3
  • Section 6.5.12    3.1.1 , 9.3
  • Section 7.1    4.1.2
  • Section 7.1.1.2    3.2
  • Section 8.3    3.2.1
  • Appendix A    2.1
  • Section 4.1    3.3.1 , 3.3.2
  • RFC7233    1 , 11.1
  • Section 2    2.4
  • Section 3    3.4
  • Section 5.2    5.7.2 , 6.1
  • Section 5.5    5.7.2
  • Section 8    9.2
  • RFC7235    1 , 4.1.2 , 11.1
  • sender   2.1
  • server   2.1
  • spider   2.1
  • target resource   5.1
  • target URI   5.1
  • TE header field   4 , 4.1.2 , 4.3 , 8.1
  • Trailer header field   4.4 , 8.1
  • Transfer-Encoding header field   3.3 , 3.3.1 , 4 , 8.1 , A.1.3
  • transforming proxy   5.7.2
  • transparent proxy   2.3
  • tunnel   2.3
  • Upgrade header field   5.7.1 , 6.7 , 8.1 , A.2
  • upstream   2.3
  • http   2.7.1
  • https   2.7.2
  • USASCII    1.2 , 3 , 3.2.4 , 11.1
  • user agent   2.1
  • Via header field   5.7.1 , 8.1 , A.2
  • Welch    4.2.1 , 8.4.2 , 8.5 , 11.1

RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1 (Q47469725)

  • Hypertext Transfer Protocol -- HTTP/1.1
Language Label Description Also known as
English

Identifiers

Wikipedia (1 entry).

  • enwiki RFC 2616

Wikibooks (0 entries)

Wikinews (0 entries), wikiquote (0 entries), wikisource (0 entries), wikiversity (0 entries), wikivoyage (0 entries), wiktionary (0 entries), multilingual sites (0 entries).

internet engineering task force document rfc2616

Navigation menu

ACM Digital Library home

  • Advanced Search
  • United States

RFC2068: Hypertext Transfer Protocol -- HTTP/1.1

  • RFC 7235: Hypertext Transfer Protocol (HTTP/1.1): Authentication ,
  • RFC 7234: Hypertext Transfer Protocol (HTTP/1.1): Caching ,
  • RFC 7231: Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content ,
  • RFC 7230: Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing ,
  • RFC 7233: Hypertext Transfer Protocol (HTTP/1.1): Range Requests ,

RFC 7232: Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests

  • RFC 6266: Use of the Content-Disposition Header Field in the Hypertext Transfer Protocol (HTTP) ,
  • RFC 6585: Additional HTTP Status Codes ,
  • RFC 5785: Defining Well-Known Uniform Resource Identifiers (URIs) ,
  • RFC2817: Upgrading to TLS Within HTTP/1.1

Save to Binder

ACM Digital Library

The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. It is a generic, stateless, protocol which can be used for many tasks beyond its use for hypertext, such as name servers and distributed object management systems, through extension of its request methods, error codes and headers [47]. A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred.

RFC Downloads

ACM

  • Li Z, Xue K, Li J, Chen L, Li R, Wang Z, Yu N, Wei D, Sun Q and Lu J (2023). Entanglement-Assisted Quantum Networks: Mechanics, Enabling Technologies, Challenges, and Research Directions, IEEE Communications Surveys & Tutorials , 25 :4 , (2133-2189), Online publication date: 1-Oct-2023 .
  • Huang Q, Chiu M, Chen Y, Sun H and Yeh K (2022). Attacking Websites, Security and Communication Networks , 2022 , Online publication date: 1-Jan-2022 .
  • Cheng Z, Cui B, Qi T, Yang W, Fu J and Liu Z (2021). An Improved Feature Extraction Approach for Web Anomaly Detection Based on Semantic Structure, Security and Communication Networks , 2021 , Online publication date: 1-Jan-2021 .
  • Oest A, Safaei Y, Zhang P, Wardman B, Tyers K, Shoshitaishvili Y, Doupé A and Ahn G PhishTime Proceedings of the 29th USENIX Conference on Security Symposium, (379-396)
  • Oest A, Zhang P, Wardman B, Nunes E, Burgis J, Zand A, Thomas K, Doupé A and Ahn G Sunrise to sunset Proceedings of the 29th USENIX Conference on Security Symposium, (361-377)
  • Mareca M and Bordel B (2019). The educative model is changing: toward a student participative learning framework 3.0—editing Wikipedia in the higher education, Universal Access in the Information Society , 18 :3 , (689-701), Online publication date: 1-Aug-2019 .
  • Tumuluri R, Dahl D, Paternò F and Zancanaro M Standardized representations and markup languages for multimodal interaction The Handbook of Multimodal-Multisensor Interfaces, (347-392)
  • Fu X, Wang Z, Chen Y, Zhang Y and Wu H (2019). Bead Strand Model, Service Oriented Computing and Applications , 13 :2 , (95-103), Online publication date: 1-Jun-2019 .
  • Ayala I, Amor M, Fuentes L and Risi M (2019). An Energy Efficiency Study of Web-Based Communication in Android Phones, Scientific Programming , 2019 , Online publication date: 1-Jan-2019 .
  • Vetterl A and Clayton R Bitter harvest Proceedings of the 12th USENIX Conference on Offensive Technologies, (9-9)
  • Kohout J, Komrek T, ech P, Bodnr J and Loko J (2018). Learning communication patterns for malware discovery in HTTPs data, Expert Systems with Applications: An International Journal , 101 :C , (129-142), Online publication date: 1-Jul-2018 .
  • Kesavan S and Jayakumar J (2018). Effective client-driven three-level rate adaptation (TLRA) approach for adaptive HTTP streaming, Multimedia Tools and Applications , 77 :7 , (8081-8114), Online publication date: 1-Apr-2018 .
  • Rodriguez-Gil L, Orduña P, García-Zubia J and López-De-Ipiña D (2018). Interactive live-streaming technologies and approaches for web-based applications, Multimedia Tools and Applications , 77 :6 , (6471-6502), Online publication date: 1-Mar-2018 .
  • Wang D, Zhang X, Ming J, Chen T, Wang C, Niu W and Liu X (2018). Resetting Your Password Is Vulnerable, Wireless Communications & Mobile Computing , 2018 , Online publication date: 1-Jan-2018 .
  • Tripathi N and Hubballi N (2018). Slow rate denial of service attacks against HTTP/2 and detection, Computers and Security , 72 :C , (255-272), Online publication date: 1-Jan-2018 .
  • Antoniazzi F, Paolini G, Roffia L, Masotti D, Costanzo A and Cinotti T A Web of Things Approach for Indoor Position Monitoring of Elderly and Impaired People Proceedings of the 21st Conference of Open Innovations Association FRUCT, (51-56)
  • Diogo P, Lopes N and Reis L (2017). An ideal IoT solution for real-time web monitoring, Cluster Computing , 20 :3 , (2193-2209), Online publication date: 1-Sep-2017 .
  • Barbaglia G, Murzilli S and Cudini S (2017). Definition of REST web services with JSON schema, Software—Practice & Experience , 47 :6 , (907-920), Online publication date: 1-Jun-2017 .
  • Vega C, Roquero P and Aracil J (2017). Multi-Gbps HTTP traffic analysis in commodity hardware based on local knowledge of TCP streams, Computer Networks: The International Journal of Computer and Telecommunications Networking , 113 :C , (258-268), Online publication date: 11-Feb-2017 .
  • Kesavan S and Jayakumar J (2017). Improvement of adaptive HTTP streaming using advanced real-time rate adaptation, Computers and Electrical Engineering , 58 :C , (49-66), Online publication date: 1-Feb-2017 .
  • Quezada-Naquid M, Marcelín-Jiménez R and González-Compeán J (2016). Babel, International Journal of Web Services Research , 13 :4 , (36-53), Online publication date: 1-Oct-2016 .
  • Hwang J, Lee J and Yoo C (2016). Eliminating bandwidth estimation from adaptive video streaming in wireless networks, Image Communication , 47 :C , (242-251), Online publication date: 1-Sep-2016 .
  • Zhuang E, Tian Z, Cui X, Li J and Wang Z ERI Proceedings of the 9th EAI International Conference on Mobile Multimedia Communications, (126-129)
  • Lu Y, Motani M and Wong W (2016). A QoE-aware resource distribution framework incentivizing context sharing and moderate competition, IEEE/ACM Transactions on Networking , 24 :3 , (1364-1377), Online publication date: 1-Jun-2016 .
  • Banos V and Manolopoulos Y (2016). A quantitative approach to evaluate Website Archivability using the CLEAR+ method, International Journal on Digital Libraries , 17 :2 , (119-141), Online publication date: 1-Jun-2016 .
  • Pérez Méndez A, Marín López R and López Millán G (2016). Providing efficient SSO to cloud service access in AAA-based identity federations, Future Generation Computer Systems , 58 :C , (13-28), Online publication date: 1-May-2016 .
  • Lokoăź J, Kohout J, Čech P, Skopal T and Pevný T k-NN Classification of Malware in HTTPS Traffic Using the Metric Space Approach Proceedings of the 11th Pacific Asia Workshop on Intelligence and Security Informatics - Volume 9650, (131-145)
  • Li Z, Wang W, Xu T, Zhong X, Li X, Liu Y, Wilson C and Zhao B Exploring cross-application cellular traffic optimization with Baidu TrafficGuard Proceedings of the 13th Usenix Conference on Networked Systems Design and Implementation, (61-76)
  • Zheng X, Jiang J, Liang J, Duan H, Chen S, Wan T and Weaver N Cookies lack integrity Proceedings of the 24th USENIX Conference on Security Symposium, (707-721)
  • Hartig O and Pirrò G A Context-Based Semantics for SPARQL Property Paths Over the Web Proceedings of the 12th European Semantic Web Conference on The Semantic Web. Latest Advances and New Domains - Volume 9088, (71-87)
  • Bai W, Chen L, Chen K, Han D, Tian C and Wang H Information-agnostic flow scheduling for commodity data centers Proceedings of the 12th USENIX Conference on Networked Systems Design and Implementation, (455-468)
  • Pieczul O and Foley S The Dark Side of the Code Revised Selected Papers of the 23rd International Workshop on Security Protocols XXIII - Volume 9379, (1-11)
  • Shin Y, Myers S, Gupta M and Radivojac P (2015). A link graph-based approach to identify forum spam, Security and Communication Networks , 8 :2 , (176-188), Online publication date: 25-Jan-2015 .
  • Zohar E and Cassuto Y Automatic and dynamic configuration of data compression for web servers Proceedings of the 28th USENIX conference on Large Installation System Administration, (97-108)
  • Brambilla G, Picone M, Cirani S, Amoretti M and Zanichelli F A simulation platform for large-scale internet of things scenarios in urban environments Proceedings of the First International Conference on IoT in Urban Space, (50-55)
  • Karapanos N and Capkun S On the effective prevention of TLS man-in-the-middle attacks in web applications Proceedings of the 23rd USENIX conference on Security Symposium, (671-686)
  • Frey D, Goessens M and Kermarrec A Behave Proceedings of the 14th IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems - Volume 8460, (89-103)
  • Flohr J and Charzinski J A Comparative Study of Traffic Properties for Web Pages Optimized for Mobile Hand-Held and Non-mobile Devices Proceedings of the 17th International GI/ITG Conference on Measurement, Modelling, and Evaluation of Computing Systems and Dependability and Fault Tolerance - Volume 8376, (29-42)
  • Weaver N, Kreibich C, Dam M and Paxson V Here Be Web Proxies Proceedings of the 15th International Conference on Passive and Active Measurement - Volume 8362, (183-192)
  • Gisbert J, Palau C, Uriarte M, Prieto G, Palazn J, Esteve M, Lpez O, Correas J, Lucas-Esta M, Gimnez P, Moyano A, Collantes L, Gozlvez J, Molina B, Lzaro O and Gonzlez A (2014). Integrated system for control and monitoring industrial wireless networks for labor risk prevention, Journal of Network and Computer Applications , 39 :C , (233-252), Online publication date: 1-Mar-2014 .
  • Faria B, Korhonen J and Souto E (2014). A comparison study between the TLS-based security framework and IKEv2 when protecting DSMIPv6 signaling, Computer Standards & Interfaces , 36 :3 , (489-500), Online publication date: 1-Mar-2014 .
  • Braun B, Pollak C and Posegga J A Survey on Control-Flow Integrity Means in Web Application Frameworks Proceedings of the 18th Nordic Conference on Secure IT Systems - Volume 8208, (231-246)
  • Zefferer T, Golser F and Lenz T Towards Mobile Government Proceedings of the Second Joint International Conference on Technology-Enabled Innovation for Democracy, Government and Governance - Volume 8061, (140-151)
  • Polleres A, Hogan A, Delbru R and Umbrich J RDFS and OWL reasoning for linked data Proceedings of the 9th international conference on Reasoning Web: semantic technologies for intelligent data access, (91-149)
  • Auer S, Lehmann J, Ngonga Ngomo A and Zaveri A Introduction to linked data and its lifecycle on the web Proceedings of the 9th international conference on Reasoning Web: semantic technologies for intelligent data access, (1-90)
  • Al-Zoubi K and Wainer G (2013). RISE, Journal of Parallel and Distributed Computing , 73 :5 , (580-594), Online publication date: 1-May-2013 .
  • Qian F, Huang J, Erman J, Mao Z, Sen S and Spatscheck O How to reduce smartphone traffic volume by 30%? Proceedings of the 14th international conference on Passive and Active Measurement, (42-52)
  • Järvinen I, Chemmagate B, Ding A, Daniel L, Isomäki M, Korhonen J and Kojo M Effect of competing TCP traffic on interactive real-time communication Proceedings of the 14th international conference on Passive and Active Measurement, (94-103)
  • Cheng Y, Çetinkaya E and Sterbenz J Transactional traffic generator implementation in ns-3 Proceedings of the 6th International ICST Conference on Simulation Tools and Techniques, (182-189)
  • Castronova A, Goodall J and Elag M (2013). Models as web services using the Open Geospatial Consortium (OGC) Web Processing Service (WPS) standard, Environmental Modelling & Software , 41 , (72-83), Online publication date: 1-Mar-2013 .
  • Braun B, Gemein P, Reiser H and Posegga J Control-Flow integrity in web applications Proceedings of the 5th international conference on Engineering Secure Software and Systems, (1-16)
  • Granell C, Díaz L, Schade S, Ostländer N and Huerta J (2013). Enhancing integrated environmental modelling by designing resource-oriented interfaces, Environmental Modelling & Software , 39 :C , (229-246), Online publication date: 1-Jan-2013 .
  • Villegas N and Müller H The smartercontext ontology and its application to the smart internet The Personal Web, (151-184)
  • Berger L, Schwager A and Escudero-Garzás J (2013). Power line communications for smart grid applications, Journal of Electrical and Computer Engineering , 2013 , (3-3), Online publication date: 1-Jan-2013 .
  • Dunn J and Crosby B What your CDN won't tell you Proceedings of the 26th international conference on Large Installation System Administration: strategies, tools, and techniques, (195-202)
  • Renzel D, Schlebusch P and Klamma R Today's top "RESTful" services and why they are not restful Proceedings of the 13th international conference on Web Information Systems Engineering, (354-367)
  • Thatmann D, Slawik M, Zickau S and Küpper A Towards a federated cloud ecosystem Proceedings of the 9th international conference on Economics of Grids, Clouds, Systems, and Services, (223-233)
  • Gionta J, Ning P and Zhang X iHTTP Proceedings of the 10th international conference on Applied Cryptography and Network Security, (381-399)
  • McCusker J, Lebo T, Graves A, Difranzo D, Pinheiro P and McGuinness D Functional requirements for information resource provenance on the web Proceedings of the 4th international conference on Provenance and Annotation of Data and Processes, (52-66)
  • Yang C, Shih W and Huang C Implementation of a distributed data storage system with resource monitoring on cloud computing Proceedings of the 7th international conference on Advances in Grid and Pervasive Computing, (64-73)
  • Tumin S and Encheva S A closer look at authentication and authorization mechanisms for web-based applications Proceedings of the 5th WSEAS congress on Applied Computing conference, and Proceedings of the 1st international conference on Biologically Inspired Computation, (100-105)
  • Nowlan M, Tiwari N, Iyengar J, Aminy S and Fordy B Fitting square pegs through round pipes Proceedings of the 9th USENIX conference on Networked Systems Design and Implementation, (28-28)
  • Maggiorini D, Ripamonti L and Scambia A Videogame technology to support seniors Proceedings of the 5th International ICST Conference on Simulation Tools and Techniques, (270-277)
  • Halvorson T, Szurdi J, Maier G, Felegyhazi M, Kreibich C, Weaver N, Levchenko K and Paxson V The BIZ top-level domain Proceedings of the 13th international conference on Passive and Active Measurement, (221-230)
  • Kastaniotis G, Maragos E, Douligeris C and Despotis D (2012). Using data envelopment analysis to evaluate the efficiency of web caching object replacement strategies, Journal of Network and Computer Applications , 35 :2 , (803-817), Online publication date: 1-Mar-2012 .
  • Maciá-Fernández G, Wang Y, Rodríguez-Gómez R and Kuzmanovic A (2012). Extracting user web browsing patterns from non-content network traces, Computer Networks: The International Journal of Computer and Telecommunications Networking , 56 :2 , (598-614), Online publication date: 1-Feb-2012 .
  • Yadav R, Likhar P and Rao M SecWEM Proceedings of the 7th international conference on Information Systems Security, (309-321)
  • Salmon S and ElAarag H Simulation based experiments using EDNAS Proceedings of the Winter Simulation Conference, (3266-3277)
  • Diallo S, Tolk A, Graff J and Barraco A Using the levels of conceptual interoperability model and model-based data engineering to develop a modular interoperability framework Proceedings of the Winter Simulation Conference, (2576-2586)
  • Hogan A, Harth A, Umbrich J, Kinsella S, Polleres A and Decker S (2011). Searching and browsing Linked Data with SWSE, Web Semantics: Science, Services and Agents on the World Wide Web , 9 :4 , (365-401), Online publication date: 1-Dec-2011 .
  • François J, State R, Engel T and Festor O Enforcing security with behavioral fingerprinting Proceedings of the 7th International Conference on Network and Services Management, (64-72)
  • Ell B, Vrandečic D and Simperl E Labels in the web of data Proceedings of the 10th international conference on The semantic web - Volume Part I, (162-176)
  • Wendzel S and Keller J Low-attention forwarding for mobile network covert channels Proceedings of the 12th IFIP TC 6/TC 11 international conference on Communications and multimedia security, (122-133)
  • Zhou Y and Evans D Protecting private web content from embedded scripts Proceedings of the 16th European conference on Research in computer security, (60-79)
  • Jarnikov D and Doumen J Watermarking for adaptive streaming protocols Proceedings of the 8th VLDB international conference on Secure data management, (101-113)
  • Hogan A, Pan J, Polleres A and Ren Y Scalable OWL 2 reasoning for linked data Proceedings of the 7th international conference on Reasoning web: semantic technologies for the web of data, (250-325)
  • Auer S, Lehmann J and Ngomo A Introduction to linked data and its lifecycle on the web Proceedings of the 7th international conference on Reasoning web: semantic technologies for the web of data, (1-75)
  • Ocaya R (2011). A framework for collaborative remote experimentation for a physical laboratory using a low cost embedded web server, Journal of Network and Computer Applications , 34 :4 , (1408-1415), Online publication date: 1-Jul-2011 .
  • Westermann B and Kesdogan D Malice versus AN.ON Proceedings of the 15th international conference on Financial Cryptography and Data Security, (62-76)
  • Li N, Xie T, Jin M and Liu C (2010). Perturbation-based user-input-validation testing of web applications, Journal of Systems and Software , 83 :11 , (2263-2274), Online publication date: 1-Nov-2010 .
  • Suoranta S, Heikkinen J and Silvekoski P Authentication session migration Proceedings of the 15th Nordic conference on Information Security Technology for Applications, (17-32)
  • Gruschka N and Iacono L Security for XML data binding Proceedings of the 11th IFIP TC 6/TC 11 international conference on Communications and Multimedia Security, (53-63)
  • Haslhofer B and Schandl B (2010). Interweaving OAI-PMH data sources with the linked data cloud, International Journal of Metadata, Semantics and Ontologies , 5 :1 , (17-31), Online publication date: 1-Apr-2010 .
  • Salah K, Sattar K, Baig Z, Sqalli M and Calyam P (2010). Discovering last-matching rules in popular open-source and commercial firewalls, International Journal of Internet Protocol Technology , 5 :1/2 , (23-31), Online publication date: 1-Apr-2010 .
  • Rieck K, Krueger T, Brefeld U and Müller K (2010). Approximate Tree Kernels, The Journal of Machine Learning Research , 11 , (555-580), Online publication date: 1-Mar-2010 .
  • De Ryck P, Desmet L, Heyman T, Piessens F and Joosen W CsFire Proceedings of the Second international conference on Engineering Secure Software and Systems, (18-34)
  • Kaspar D, Evensen K, Engelstad P, Hansen A, Halvorsen P and Griwodz C Enhancing video-on-demand playout over multiple heterogeneous access networks Proceedings of the 7th IEEE conference on Consumer communications and networking conference, (47-51)
  • Bozzon A, Brambilla M, Ceri S, Corcoglioniti F and Gatti N Chapter 14 Search Computing, (268-290)
  • Ruth M, Diakov V, Goldsby M and Sa T Macro-system model Winter Simulation Conference, (1555-1561)
  • Bromberg Y, Réveillère L, Lawall J and Muller G Automatic generation of network protocol gateways Proceedings of the 10th ACM/IFIP/USENIX International Conference on Middleware, (1-20)
  • Adida B, Barth A and Jackson C Rootkits for JavaScript environments Proceedings of the 3rd USENIX conference on Offensive technologies, (4-4)
  • Evans N, Dingledine R and Grothoff C A practical congestion attack on tor using long paths Proceedings of the 18th conference on USENIX security symposium, (33-50)
  • Gajek S, Manulis M and Schwenk J (2009). User-aware provably secure protocols for browser-based mutual authentication, International Journal of Applied Cryptography , 1 :4 , (290-308), Online publication date: 1-Aug-2009 .
  • Olmedo V, Villagrá V, Konstanteli K, Burgos J and Berrocal J (2009). Network mobility support for Web Service-based Grids through the Session Initiation Protocol, Future Generation Computer Systems , 25 :7 , (758-767), Online publication date: 1-Jul-2009 .
  • Yue C, Chu Z and Wang H RCB Proceedings of the 2009 conference on USENIX Annual technical conference, (29-29)
  • Luo X, Chan E and Chang R Design and implementation of TCP data probes for reliable and metric-rich network path monitoring Proceedings of the 2009 conference on USENIX Annual technical conference, (4-4)
  • Agarwal Y, Hodges S, Chandra R, Scott J, Bahl P and Gupta R Somniloquy Proceedings of the 6th USENIX symposium on Networked systems design and implementation, (365-380)
  • Zander S and Murdoch S An improved clock-skew measurement technique for revealing hidden services Proceedings of the 17th conference on Security symposium, (211-225)
  • Chi C, Chua C and Song W A novel ownership scheme to maintain web content consistency Proceedings of the 3rd international conference on Advances in grid and pervasive computing, (352-363)
  • Szymaniak M, Presotto D, Pierre G and van Steen M (2008). Practical large-scale latency estimation, Computer Networks: The International Journal of Computer and Telecommunications Networking , 52 :7 , (1343-1364), Online publication date: 1-May-2008 .
  • Schneider F, Agarwal S, Alpcan T and Feldmann A The new web Proceedings of the 9th international conference on Passive and active network measurement, (31-40)
  • Patterson M, Sassaman L and Chaum D Freezing more than bits Proceedings of the 1st Conference on Usability, Psychology, and Security, (1-5)
  • Vazquez J and Lopez-De-Ipina D Social devices Proceedings of the 1st international conference on The internet of things, (308-324)
  • Billington J and Han B (2007). Formalising TCP's Data Transfer Service Language: A Symbolic Automaton and its Properties, Fundamenta Informaticae , 80 :1-3 , (49-74), Online publication date: 1-Mar-2008 .
  • Boteanu D, Fernandez J, McHugh J and Mullins J Queue management as a DoS counter-measure? Proceedings of the 10th international conference on Information Security, (263-280)
  • Ravid G, Bar-Ilan J, Baruchson-Arbib S and Rafaeli S (2007). Popularity and findability through log analysis of search terms and queries, Journal of Information Science , 33 :5 , (567-583), Online publication date: 1-Oct-2007 .
  • Curbera F, Duftler M, Khalaf R and Lovell D Bite Proceedings of the 5th international conference on Service-Oriented Computing, (94-106)
  • Jorissen P, Di Fiore F, Vansichem G and Lamotte W A virtual interactive community platform supporting education for long-term sick children Proceedings of the 4th international conference on Cooperative design, visualization, and engineering, (58-69)
  • Ingham K and Inoue H Comparing anomaly detection techniques for HTTP Proceedings of the 10th international conference on Recent advances in intrusion detection, (42-62)
  • Brumley D, Caballero J, Liang Z, Newsome J and Song D Towards automatic discovery of deviations in binary implementations with applications to error detection and fingerprint generation Proceedings of 16th USENIX Security Symposium on USENIX Security Symposium, (1-16)
  • Lufei H and Shi W (2007). Energy-aware QoS for application sessions across multiple protocol domains in mobile computing, Computer Networks: The International Journal of Computer and Telecommunications Networking , 51 :11 , (3125-3141), Online publication date: 1-Aug-2007 .
  • Chadwick D and Anthony S Using WebDAV for improved certificate revocation and publication Proceedings of the 4th European conference on Public Key Infrastructure: theory and practice, (265-279)
  • Chan K and Chu X Design of a Fuzzy PI Controller to Guarantee Proportional Delay Differentiation on Web Servers Proceedings of the 3rd international conference on Algorithmic Aspects in Information and Management, (389-398)
  • Cheng R and Lin H (2007). Protecting TCP from a misbehaving receiver, International Journal of Network Management , 17 :3 , (209-218), Online publication date: 1-Jun-2007 .
  • Mondal A and Kuzmanovic A When TCP Friendliness Becomes Harmful Proceedings of the IEEE INFOCOM 2007 - 26th IEEE International Conference on Computer Communications, (152-160)
  • Casado M and Freedman M Peering through the shroud Proceedings of the 4th USENIX conference on Networked systems design & implementation, (13-13)
  • Cvrk L, Vrba V and Molnar K Advanced autonomous access control system for web-based server applications Proceedings of the third conference on IASTED International Conference: Advances in Computer Science and Technology, (84-89)
  • Ingham K, Somayaji A, Burge J and Forrest S (2007). Learning DFA representations of HTTP for protecting web applications, Computer Networks: The International Journal of Computer and Telecommunications Networking , 51 :5 , (1239-1255), Online publication date: 1-Apr-2007 .
  • Billington J and Han B (2007). Formalising TCP's Data Transfer Service Language: A Symbolic Automaton and its Properties, Fundamenta Informaticae , 80 :1-3 , (49-74), Online publication date: 1-Jan-2007 .
  • Yeh P, Li J and Yuan S Tracking the changes of dynamic web pages in the existence of URL rewriting Proceedings of the fifth Australasian conference on Data mining and analystics - Volume 61, (169-176)
  • Koukis D, Antonatos S and Anagnostakis K On the privacy risks of publishing anonymized IP network traces Proceedings of the 10th IFIP TC-6 TC-11 international conference on Communications and Multimedia Security, (22-32)
  • Marquis S, Dean T and Knight S Packet decoding using context sensitive parsing Proceedings of the 2006 conference of the Center for Advanced Studies on Collaborative research, (20-es)
  • Cerny R Topincs Proceedings of the 2nd international conference on Topic maps research and applications, (175-183)
  • Gonzalez-Barahona J, Dimitrova V, Chaparro D, Tebb C, Romera T, Canas L, Matravers J and Kleanthous S Towards community-driven development of educational materials Proceedings of the First European conference on Technology Enhanced Learning: innovative Approaches for Learning and Knowledge Sharing, (125-139)
  • Yoo S, Ju H and Hong J Performance improvement methods for NETCONF-Based configuration management Proceedings of the 9th Asia-Pacific international conference on Network Operations and Management: management of Convergence Networks and Services, (242-252)
  • Gonzalez J and Paxson V Enhancing network intrusion detection with integrated sampling and filtering Proceedings of the 9th international conference on Recent Advances in Intrusion Detection, (272-289)
  • Lu C, Lu Y, Abdelzaher T, Stankovic J and Son S (2006). Feedback Control Architecture and Design Methodology for Service Delay Guarantees in Web Servers, IEEE Transactions on Parallel and Distributed Systems , 17 :9 , (1014-1027), Online publication date: 1-Sep-2006 .
  • Wei Y, Lin C, Chu X and Voigt T (2006). Fuzzy control for guaranteeing absolute delays in web servers, International Journal of High Performance Computing and Networking , 4 :5/6 , (338-346), Online publication date: 11-Aug-2006 .
  • Chi C, Liu L and Yu X Data integrity related markup language and HTTP protocol support for web intermediaries Proceedings of the 2006 international conference on Embedded and Ubiquitous Computing, (328-335)
  • Øverlier L and Syverson P Valet services Proceedings of the 6th international conference on Privacy Enhancing Technologies, (223-244)
  • Benedyczak K, Nowiński A, Nowiński K and Bała P Unigrids streaming framework Proceedings of the 8th international conference on Applied parallel computing: state of the art in scientific computing, (809-818)
  • Ali A Zero footprint secure internet authentication using network smart card Proceedings of the 7th IFIP WG 8.8/11.2 international conference on Smart Card Research and Advanced Applications, (91-104)
  • Bry F and Eckert M Twelve theses on reactive rules for the web Proceedings of the 2006 international conference on Current Trends in Database Technology, (842-854)
  • Sun H, Fang B and Zhang H User-Perceived web qos measurement and evaluation system Proceedings of the 8th Asia-Pacific Web conference on Frontiers of WWW Research and Development, (157-165)
  • Miyamoto D, Hazeyama H and Kadobayashi Y SPS Proceedings of the First Asian Internet Engineering conference on Technologies for Advanced Heterogeneous Networks, (195-209)
  • Sugiki A, Kono K and Iwasaki H A practical approach to automatic parameter-tuning of web servers Proceedings of the 10th Asian Computing Science conference on Advances in computer science: data management on the web, (146-159)
  • Arlitt M, Krishnamurthy B and Mogul J Predicting short-transfer latency from TCP arcana Proceedings of the 5th ACM SIGCOMM conference on Internet measurement, (19-19)
  • Pang R, Allman M, Bennett M, Lee J, Paxson V and Tierney B A first look at modern enterprise traffic Proceedings of the 5th ACM SIGCOMM conference on Internet measurement, (2-2)
  • Savorić M, Karl H, Schläger M, Poschwatta T and Wolisz A (2005). Analysis and performance evaluation of the EFCM common congestion controller for TCP connections, Computer Networks: The International Journal of Computer and Telecommunications Networking , 49 :2 , (269-294), Online publication date: 5-Oct-2005 .
  • Bahat O and Makowski A (2005). Measuring consistency in TTL-based caches, Performance Evaluation , 62 :1-4 , (439-455), Online publication date: 1-Oct-2005 .
  • Groß T, Pfitzmann B and Sadeghi A Browser model for security analysis of browser-based protocols Proceedings of the 10th European conference on Research in Computer Security, (489-508)
  • Alanen M and Porres I Model Interchange Using OMG Standards Proceedings of the 31st EUROMICRO Conference on Software Engineering and Advanced Applications, (450-459)
  • Tomonaga K, Ohta M and Araki K Privacy-aware location dependent services over wireless internet with anycast Proceedings of the 3rd international conference on Human Society@Internet: web and Communication Technologies and Internet-Related Social Issues, (311-321)
  • Yuan J, Chi C and Sun Q Exploiting fine grained parallelism for acceleration of web retrieval Proceedings of the 3rd international conference on Human Society@Internet: web and Communication Technologies and Internet-Related Social Issues, (125-134)
  • Sayre R (2005). Atom, IEEE Internet Computing , 9 :4 , (71-78), Online publication date: 1-Jul-2005 .
  • Straub T, Ginkel T and Buchmann J A multipurpose delegation proxy for WWW credentials Proceedings of the Second European conference on Public Key Infrastructure, (1-21)
  • Sun Y, Yan C and Chen M Content-Aware automatic qos provisioning for upnp AV-Based multimedia services over wireless LANs Proceedings of the 5th international conference on Computational Science - Volume Part II, (444-452)
  • Elovici Y, Shapira B, Last M, Zaafrany O, Friedman M, Schneider M and Kandel A Content-Based detection of terrorists browsing the web using an advanced terror detection system (ATDS) Proceedings of the 2005 IEEE international conference on Intelligence and Security Informatics, (244-255)
  • Shieh A, Myers A and Sirer E Trickles Proceedings of the 2nd conference on Symposium on Networked Systems Design & Implementation - Volume 2, (175-188)
  • Carrera E and Bianchini R (2005). PRESS, IEEE Transactions on Parallel and Distributed Systems , 16 :5 , (385-395), Online publication date: 1-May-2005 .
  • Younis O and Fahmy S (2005). FlowMate, IEEE/ACM Transactions on Networking , 13 :2 , (288-301), Online publication date: 1-Apr-2005 .
  • Park J and Chong K An implementation of the client-based distributed web caching system Proceedings of the 7th Asia-Pacific web conference on Web Technologies Research and Development, (759-770)
  • Cohen E, Halperin E and Kaplan H (2005). Performance aspects of distributed caches using, TTL-based consistency, Theoretical Computer Science , 331 :1 , (73-96), Online publication date: 15-Feb-2005 .
  • Debar H and Viinikka J Intrusion detection Foundations of Security Analysis and Design III, (207-236)
  • Margasiński I and Szczypiorski K VAST Enhanced methods in computer security, biometric and artificial intelligence systems, (71-82)
  • Olshefski D, Nieh J and Nahum E ksniffer Proceedings of the 6th conference on Symposium on Operating Systems Design & Implementation - Volume 6, (23-23)
  • Park J, Jin H and Kim D Intrusion detection system for securing geographical information system web servers Proceedings of the 4th international conference on Web and Wireless Geographical Information Systems, (110-119)
  • Nayate A, Dahlin M and Iyengar A Transparent information dissemination Proceedings of the 5th ACM/IFIP/USENIX international conference on Middleware, (212-231)
  • Zugenmaier A FLASCHE – a mechanism providing anonymity for mobile users Proceedings of the 4th international conference on Privacy Enhancing Technologies, (121-141)
  • Khare R and Taylor R Extending the Representational State Transfer (REST) Architectural Style for Decentralized Systems Proceedings of the 26th International Conference on Software Engineering, (428-437)
  • Bavier A, Bowman M, Chun B, Culler D, Karlin S, Muir S, Peterson L, Roscoe T, Spalink T and Wawrzoniak M Operating system support for planetary-scale network services Proceedings of the 1st conference on Symposium on Networked Systems Design and Implementation - Volume 1, (19-19)
  • Mogul J, Chan Y and Kelly T Design, implementation, and evaluation of duplicate transfer detection in HTTP Proceedings of the 1st conference on Symposium on Networked Systems Design and Implementation - Volume 1, (4-4)
  • Funasaka J, Nagayasu K and Ishida K Improvements on Block Size Control Method for Adaptive Parallel Downloading Proceedings of the 24th International Conference on Distributed Computing Systems Workshops - W7: EC (ICDCSW'04) - Volume 7, (648-653)
  • Kawash J Consistency models for Internet caching Proceedings of the winter international synposium on Information and communication technologies, (1-6)
  • Billington J and Han B Closed form expressions for the state space of TCP's Data Transfer Service operating over unbounded channels Proceedings of the 27th Australasian conference on Computer science - Volume 26, (31-39)
  • Hexel R, Johnson C, Kummerfeld B and Quigley A "Powerpoint to the people" Proceedings of the fifth conference on Australasian user interface - Volume 28, (49-56)
  • References Grid resource management, (507-566)
  • Craswell N, Crimmins F, Hawking D and Moffat A Performance and cost tradeoffs in Web search Proceedings of the 15th Australasian database conference - Volume 27, (161-169)
  • Wu C and Jan R (2003). System integration of WAP and SMS for home network system, Computer Networks: The International Journal of Computer and Telecommunications Networking , 42 :4 , (493-502), Online publication date: 15-Jul-2003 .
  • Yuan J and Chi C Web caching performance Proceedings of the 2nd international conference on Human.society@internet, (23-33)
  • Delicato F, Pires P, Pirmez L and da Costa Carmo L A flexible middleware system for wireless sensor networks Proceedings of the ACM/IFIP/USENIX 2003 International Conference on Middleware, (474-492)
  • Fiedler U and Plattner B Using latency quantiles to engineer QoS guarantees for web services Proceedings of the 11th international conference on Quality of service, (345-362)
  • Anderson M, Altas I and Fellows G Web personalisation with the cover coefficient algorithm Proceedings of the 2003 international conference on Computational science: PartIII, (422-431)
  • Park K and Ryou H Anomaly detection scheme using data mining in mobile environment Proceedings of the 2003 international conference on Computational science and its applications: PartII, (21-30)
  • Charzinski J (2003). Observed performance of elastic Internet applications, Computer Communications , 26 :8 , (914-925), Online publication date: 1-May-2003 .
  • Cardellini V, Colajanni M and Yu P (2003). Request Redirection Algorithms for Distributed Web Systems, IEEE Transactions on Parallel and Distributed Systems , 14 :4 , (355-368), Online publication date: 1-Apr-2003 .
  • VanderMeer D, Datta A, Dutta K, Ramamritham K and Navathe S (2003). Mobile User Recovery in the Context of Internet Transactions, IEEE Transactions on Mobile Computing , 2 :2 , (132-146), Online publication date: 1-Apr-2003 .
  • Di Nitto E, Sassaroli G and Zuccalà M Adaptation of Web Contents and Services to Terminals Capabilities Proceedings of the First IEEE International Conference on Pervasive Computing and Communications
  • Savorić M, Karl H and Wolisz A (2003). The TCP control block interdependence in fixed networks-new performance results, Computer Communications , 26 :4 , (366-375), Online publication date: 1-Mar-2003 .
  • Curcio I Multimedia streaming over mobile networks Wireless internet handbook, (77-104)
  • Wong J, Mirlas L, Kou W and Lin X Credit card-based secure online payment Payment technologies for E-commerce, (227-243)
  • Hoschek W The Web Service Discovery Architecture Proceedings of the 2002 ACM/IEEE conference on Supercomputing, (1-15)
  • Wang J, Min R, Zhu Y and Hu Y (2002). UCFS-A Novel User-Space, High Performance, Customized File System for Web Proxy Servers, IEEE Transactions on Computers , 51 :9 , (1056-1073), Online publication date: 1-Sep-2002 .
  • Libman L and Orda A (2002). Optimal retrial and timeout strategies for accessing network resources, IEEE/ACM Transactions on Networking , 10 :4 , (551-564), Online publication date: 1-Aug-2002 .
  • Fu Y, Vahdat A, Cherkasova L and Tang W EtE Proceedings of the General Track of the annual conference on USENIX Annual Technical Conference, (115-130)
  • Colajanni M and Yu P (2002). A Performance Study of Robust Load Sharing Strategies for Distributed Heterogeneous Web Server Systems, IEEE Transactions on Knowledge and Data Engineering , 14 :2 , (398-414), Online publication date: 1-Mar-2002 .
  • Graunke P, Findler R, Krishnamurthi S and Felleisen M Automatically Restructuring Programs for the Web Proceedings of the 16th IEEE international conference on Automated software engineering
  • Wong J, Evans D and Kwok M On staleness and the delivery of web pages Proceedings of the 2001 conference of the Centre for Advanced Studies on Collaborative research
  • Plank J, Bassi A, Beck M, Moore T, Swany D and Wolski R (2001). Managing Data Storage in the Network, IEEE Internet Computing , 5 :5 , (50-58), Online publication date: 1-Sep-2001 .
  • Fu K, Sit E, Smith K and Feamster N Dos and don'ts of client authentication on the web Proceedings of the 10th conference on USENIX Security Symposium - Volume 10
  • Regan J and Jensen C Capability file names Proceedings of the 10th conference on USENIX Security Symposium - Volume 10
  • Ghandeharizadeh S Alternative Approaches to Distribute An E-Commerce Document Management System Proceedings of the 11th International Workshop on research Issues in Data Engineering
  • Ju H, Choi M and Hong J (2001). EWS-Based Management Application Interface and Integration Mechanisms for Web-Based Element Management, Journal of Network and Systems Management , 9 :1 , (31-50), Online publication date: 1-Mar-2001 .
  • Henricksen K and Indulska J Adapting the web interface Proceedings of the 2nd Australasian conference on User interface, (21-28)
  • Henricksen K and Indulska J (2001). Adapting the web interface, Australian Computer Science Communications , 23 :5 , (21-28), Online publication date: 25-Jan-2001 .
  • Saif U, Gordon D and Greaves D (2001). Internet Access to a Home Area Network, IEEE Internet Computing , 5 :1 , (54-63), Online publication date: 1-Jan-2001 .
  • Bianchini R and Carrera E (2000). Analytical and experimental evaluation of cluster-based network servers, World Wide Web , 3 :4 , (215-229), Online publication date: 1-Dec-2000 .
  • Yang C and Luo M Realizing fault resilience in Web-server cluster Proceedings of the 2000 ACM/IEEE conference on Supercomputing, (21-es)
  • Yang C and Luo M Building an Adaptable, Fault Tolerant, and Highly Manageable Web Server on Clusters of Non-Dedicated Workstations Proceedings of the Proceedings of the 2000 International Conference on Parallel Processing
  • Curtin M Shibboleth Proceedings of the 9th conference on USENIX Security Symposium - Volume 9, (20-20)
  • Smith B, Acharya A, Yang T and Zhu H Exploiting result equivalence in caching dynamic web content Proceedings of the 2nd conference on USENIX Symposium on Internet Technologies and Systems - Volume 2, (19-19)
  • Krannig A Towards web security using PLASMA Proceedings of the 7th conference on USENIX Security Symposium - Volume 7, (14-14)

Recommendations

RFC 1945: Hypertext Transfer Protocol -- HTTP/1.0


Window Sizing for Zstandard Content Encoding draft-ietf-httpbis-zstd-window-size-00

  • Document type: Active Internet-Draft
  • Last updated: 2024-06-11
  • RFC stream: Internet Engineering Task Force (IETF)
  • Stream: WG Document
  • IESG: I-D Exists
  • Consensus boilerplate: Unknown
  • Formats: txt, html, xml, htmlized, pdf, bibtex, bibxml


  • DOI: 10.17487/RFC7231
  • Corpus ID: 14399078

Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content

  • R. Fielding, Julian Reschke
  • Published in Request for Comments, 1 June 2014
  • Computer Science

301 Citations

Citing documents include:

  • Hypertext Transfer Protocol (HTTP/1.1): Authentication
  • Hypertext Transfer Protocol (HTTP) Client-Initiated Content-Encoding
  • HTTP Usage in the Registration Data Access Protocol (RDAP)
  • HTTP Digest Access Authentication
  • Initial Hypertext Transfer Protocol (HTTP) Method Registrations
  • The Hypertext Transfer Protocol Status Code 308 (Permanent Redirect)
  • Indicating Character Encoding and Language for HTTP Header Field Parameters
  • The 'Basic' HTTP Authentication Scheme

Centre for Internet & Society


Internet Engineering Task Force

The Internet Engineering Task Force (IETF) is an open standards body with no membership requirements and no formal membership process.

It is responsible for developing and promoting Internet Standards, technological specifications applicable to the internet and internet access. The IETF also works closely with the World Wide Web Consortium (W3C) and other standard-setting bodies. It mainly deals with the standards of the Internet Protocol suite (TCP/IP), the set of communication protocols used on the internet.

The mission of the IETF is to "produce high quality, relevant technical and engineering documents that influence the way people design, use, and manage the internet in such a way as to make the internet work better." [1]

The IETF consists of working groups and informal discussion groups. The subject areas of the working groups can be broadly divided into the following categories:

  • Applications
  • General
  • Internet
  • Operations and Management
  • Real-time Applications and Infrastructure
  • Routing
  • Security
  • Transport

The working groups are organised into the areas listed above, and each area is managed by area directors.

IETF Standards Process

The process of developing standards at the IETF looks simple on paper but faces certain complications when put into practice.

A specification for an Internet Standard goes through a period of development followed by review by the community at large. Based on the reviews and implementation experience, the specification is revised, adopted by the appropriate body, and then published.

"In practice, the process is more complicated, due to (1) the difficulty of creating specifications of high technical quality; (2) the need to consider the interests of all of the affected parties; (3) the importance of establishing widespread community consensus; and (4) the difficulty of evaluating the utility of a particular specification for the internet community." [2]

The main goals of the Internet Standards Process are:

  • Technical Excellence;
  • Prior Implementation and Testing;
  • Clear, Concise, and Easily Understood Documentation;
  • Openness and Fairness; and
  • Timeliness [3]

World Wide Web Consortium (W3C)

W3C is a multi-stakeholder organization that involves groups from various sectors, including multinationals. It is also an international community dedicated to developing open standards "to ensure the long term growth of the web", and it is led by the inventor of the web, Tim Berners-Lee.

The guiding principles of the W3C [4] are:

  • Web for All: The W3C recognizes the social value of the internet, as it enables communication, commerce and opportunities to share knowledge. One of its main goals is to make these benefits available to all, irrespective of hardware, software, network infrastructure, native language, culture, geographical location, or physical or mental ability.
  • Web on Everything: The second guiding principle is to ensure that all devices are able to access the web. With the proliferation of mobile devices and smartphones, it has become more important to ensure access to the web irrespective of the type of device.
  • Web for Rich Interaction: W3C standards recognize that the web was created as a tool for sharing information, a role that has become even more significant with the rise of platforms such as Wikipedia and social networking platforms.
  • Web of Data and Services: The web is often viewed as a giant repository of data and information, but it can also be seen as a set of services that exchange messages. The two views complement each other, and how the web is perceived depends on the application.
  • Web of Trust: Interaction on the web has increased; people 'meet on the web' and carry out commercial as well as social relationships. "W3C recognizes that trust is a social phenomenon, but technology design can foster trust and confidence." [5]

[ 1 ]. Mission Statement for the IETF available at http://www.ietf.org/rfc/rfc3935.txt

[ 2 ]. http://www.ietf.org/about/standards-process.html

[ 3 ]. http://www.ietf.org/about/standards-process.html

[ 4 ]. http://www.w3.org/Consortium/mission

[ 5 ]. http://www.w3.org/Consortium/mission




Internet Engineering Task Force (IETF)

Katie Terrell Hanna


What is the Internet Engineering Task Force (IETF)?

The Internet Engineering Task Force (IETF) is the body that defines standard operating internet protocols such as TCP/IP.

The IETF is an open standards organization supervised by the Internet Society's Internet Architecture Board (IAB). However, prior to 1993, the IETF was supported by the United States federal government.

IETF organizational structure

IETF members are volunteers, drawn from the Internet Society's individual and organization membership. Members form working groups, and area directors appoint a chairperson (or co-chairs) for each group to deal with a particular topic discussed in IETF meetings.

Ultimately, the area directors, together with the IETF chair, form the Internet Engineering Steering Group (IESG), which is responsible for approving the internet standards expressed in the form of Requests for Comments (RFCs).

Decisions on a standards track are made by rough consensus instead of formal voting protocols.

As part of overseeing the work of the IETF, the IAB supervises the RFC editor and offers technical direction to ensure the smooth operation of the internet.

The IAB is also responsible for the Internet Research Task Force (IRTF), an organization parallel to the IETF that focuses on long-term research into issues relevant to the evolution of the internet.

Additionally, the Internet Assigned Numbers Authority (IANA), an organization responsible for overseeing global IP address allocation, root zone management in the Domain Name System ( DNS ), autonomous system number allocation, and other Internet Protocol-related symbols and numbers, also works closely with the IETF.

Funding for IETF activities is provided by meeting fees, sponsors, and the Internet Society, through its organizational membership and the proceeds of the Public Interest Registry.

IETF areas of focus

The common areas of focus for the IETF include:

  • applications
  • infrastructure
  • operations and management
  • real-time applications

The internet standards process includes proposing specifications, developing standards based on agreed-upon specifications, coordinating independent testing and revising proposals based on testing results.

Before proposals become official standards, multiple interoperable implementations must be demonstrated. In practice, the protocols are exercised in many different systems as running code that fleshes out the system architecture.

The internet of things is one of many areas where Internet Engineering Task Force groups work to develop internet governance and regulation standards.

IETF notable projects

In addition to the IETF standards process, the group also coordinates a number of other activities.

One such example is the hackathons hosted by the IETF, which are geared toward improving the interoperability and quality of the internet.

Internet of things (IoT)

The IoT is a network of software, electronics and sensors that facilitate data exchange and communication for manufacturers, operators and their connected devices. Multiple IETF working groups have developed the internet governance standards that regulate the IoT.

Legislation

The IETF also cooperates with a number of standards bodies that seek to regulate the internet and make it safer. Some examples include the International Organization for Standardization (ISO), the International Telecommunication Union (ITU) and the World Wide Web Consortium (W3C).

Continue Reading About Internet Engineering Task Force (IETF)

  • Opportunistic encryption: The IETF's 50 shades of protection
  • 12 common network protocols and their functions explained
  • Common application layer protocols in IoT explained
  • The future of trust will be built on data transparency
  • The 3 types of DNS servers and how they work



Internet-Draft: SLURM for RPKI ASPA, Snijders & Cartwright-Cox, May 2024 (expires 22 November 2024)

Simplified Local Internet Number Resource Management (SLURM) with RPKI Autonomous System Provider Authorizations (ASPA)

ISPs may want to establish a local view of exceptions to the Resource Public Key Infrastructure (RPKI) data in the form of local filters or additional attestations. This document defines an addendum to RFC 8416 by specifying a format for local filters and local assertions for Autonomous System Provider Authorizations (ASPA) for use with the RPKI.

Requirements Language

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on 22 November 2024.

Copyright Notice

Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.

Table of Contents

1. Introduction

See [RFC8416] for an overview of the SLURM mechanism, specifically Section 3 and Section 4.

2. SLURM v2 File Overview

A SLURM file consists of a single JSON [RFC8259] object containing the following members:

  • A "slurmVersion" member that MUST be set to 2, encoded as a number

A "validationOutputFilters" member whose value is an object. The object MUST contain exactly three members:

  • A "prefixFilters" member, see Section 3.3.1 of [RFC8416]
  • A "bgpsecFilters" member, see Section 3.3.2 of [RFC8416]
  • An "aspaFilters" member, see Section 3.1

A "locallyAddedAssertions" member whose value is an object. The object MUST contain exactly three members:

  • A "prefixAssertions" member, see Section 3.4.1 of [RFC8416]
  • A "bgpsecAssertions" member, see Section 3.4.2 of [RFC8416]
  • An "aspaAssertions" member, see Section 3.2

The following JSON structure with JSON members represents a SLURM file that has no filters or assertions:
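
A minimal sketch consistent with the member list above (reconstructed for illustration, not copied from the draft):

   {
     "slurmVersion": 2,
     "validationOutputFilters": {
       "prefixFilters": [],
       "bgpsecFilters": [],
       "aspaFilters": []
     },
     "locallyAddedAssertions": {
       "prefixAssertions": [],
       "bgpsecAssertions": [],
       "aspaAssertions": []
     }
   }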

3. Validation Output Filters for ASPA

3.1. Validated ASPA Filters

The RP can configure zero or more Validated ASPA Filters ("ASPA Filters" for short). Each ASPA Filter contains a single 'customerAsid' and optionally a single 'comment'.

  • The 'customerAsid' member has a number as its value.
  • It is RECOMMENDED that an explanatory comment is included with each ASPA Filter so that it can be shown to users of the RP software.

Any Validated ASPA Payload (VAP) [I-D.ietf-sidrops-aspa-profile] that matches any configured ASPA Filter MUST be removed from the RP's output.

A VAP is considered to match with an ASPA Filter if the following condition applies:

  • The VAP Customer ASID is equal to the ASPA Filter Customer ASID.

The following example JSON structure represents an "aspaFilters" member with one object as described above:
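
A minimal illustrative sketch (the customer ASN is a documentation value, not taken from the draft):

   "aspaFilters": [
     {
       "customerAsid": 64496,
       "comment": "Drop all VAPs whose customer ASID is 64496"
     }
   ]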

3.2. Locally Added ASPA Assertions

Each RP is locally configured with a (possibly empty) array of ASPA Assertions. Each ASPA Assertion MUST contain a 'customerAsid' member containing the Customer ASID and a 'providerSet' array of numbers, reflecting the set of Provider ASNs. It is RECOMMENDED that an explanatory comment is also included so that it can be shown to users of the RP software.

The above is expressed as the value of the "aspaAssertions" member, as an array of zero or more objects. Each object MUST contain one each of the following members:

  • A "customerAsid" member whose value is a number.
  • A "providerSet" member whose value is an array of numbers.
  • An optional "comment" member whose value is a string.

The following example JSON structure represents an "aspaAssertions" member with one object as described above:
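
A minimal illustrative sketch (the ASNs are documentation values, not taken from the draft):

   "aspaAssertions": [
     {
       "customerAsid": 64496,
       "providerSet": [64497, 64498],
       "comment": "Locally assert AS 64497 and AS 64498 as providers for AS 64496"
     }
   ]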

Note that an "aspaAssertions" member matches the syntax of the ASPA PDU described in Section 5.12 of [I-D.ietf-sidrops-8210bis]. Relying Parties MUST add any "aspaAssertions" member thus found to the set of ASPA PDUs, excluding duplicates, when using version 2 of the RPKI-Router protocol [I-D.ietf-sidrops-8210bis]. An "aspaAssertions" does not act as an implicit filter.

4. Example of a SLURM file with ASPA Filters and Assertions
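
The combined example below is an illustrative sketch only, assembling the "aspaFilters" and "aspaAssertions" forms from Section 3 into one file using documentation ASNs; it is not the draft's own example.

   {
     "slurmVersion": 2,
     "validationOutputFilters": {
       "prefixFilters": [],
       "bgpsecFilters": [],
       "aspaFilters": [
         {
           "customerAsid": 64496,
           "comment": "Drop all VAPs whose customer ASID is 64496"
         }
       ]
     },
     "locallyAddedAssertions": {
       "prefixAssertions": [],
       "bgpsecAssertions": [],
       "aspaAssertions": [
         {
           "customerAsid": 64499,
           "providerSet": [64497, 64498],
           "comment": "Locally assert AS 64497 and AS 64498 as providers for AS 64499"
         }
       ]
     }
   }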

5. Security Considerations

For Security Considerations see Section 6 of [RFC8416].

6. IANA Considerations

This document has no IANA actions.

7. Acknowledgements

The authors would like to thank Tim Bruijnzeels for their helpful review of this document.

8. References

8.1. Normative References

8.2. Informative References

Appendix A. Implementation Status - RFC Editor: Remove Before Publication

This section records the status of known implementations of the protocol defined by this specification at the time of posting of this Internet-Draft, and is based on a proposal described in RFC 7942. The description of implementations in this section is intended to assist the IETF in its decision processes in progressing drafts to RFCs. Please note that the listing of any individual implementation here does not imply endorsement by the IETF. Furthermore, no effort has been spent to verify the information presented here that was supplied by IETF contributors. This is not intended as, and must not be construed to be, a catalog of available implementations or their features. Readers are advised to note that other implementations may exist.

According to RFC 7942, "this will allow reviewers and working groups to assign due consideration to documents that have the benefit of running code, which may serve as evidence of valuable experimentation and feedback that have made the implemented protocols more mature. It is up to the individual working groups to use this information as they see fit".

StayRTR [stayrtr]

Authors' Addresses

IMAGES

  1. RFC 2616
  2. Http 1.1 rfc2616
  3. PPT Version
  4. PPT
  5. Internet Engineering Task Force
  6. Report

VIDEO

  1. Introducing the Internet Engineering Task Force (IETF)

  2. IETF Documents

  3. What is the meaning of the Internet Engineering Task Force (IETF)? [Audio Explainer]

  4. RFC,s and Internet Standards

  5. Multipath TCP Tutorial

  6. IETF 94

COMMENTS

  1. RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1

    RFC 2616 HTTP/1.1 June 1999 In HTTP/1.0, most implementations used a new connection for each request/response exchange. In HTTP/1.1, a connection may be used for one or more request/response exchanges, although connections may be closed for a variety of reasons (see section 8.1). 2 Notational Conventions and Generic Grammar 2.1 Augmented BNF All of the mechanisms specified in this document are ...

  2. Hypertext Transfer Protocol -- HTTP/1.1

    A feature of HTTP is the typing and negotiation of data representation, allowing systems to be built independently of the data being transferred. HTTP has been in use by the World-Wide Web global information initiative since 1990. This specification defines the protocol referred to as "HTTP/1.1", and is an update to RFC 2068 [33] .

  3. PDF Internet Engineering Task Force (IETF) M. Nottingham Updates: 2616 R

    This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.

  4. HTTP/1.1: Full Copyright Statement

    this document and the information contained herein is provided on an "as is" basis and the internet society and the internet engineering task force disclaims all warranties, express or implied, including but not limited to any warranty that the use of the information herein will not infringe any rights or any implied warranties of ...

  5. The Tao of IETF

    A Novice's Guide to the Internet Engineering Task Force. The "Tao of the IETF", previously published as a very long individual webpage, is in the process of being replaced by other documents covering the same topics. This page now contains only the remaining content still to be moved across, and retains the numbering scheme from the stand-alone ...

  6. IETF

    RFCs are the core output of the IETF. The IETF publishes its technical documentation as RFCs, an acronym for their historical title *Requests for Comments*. They define the Internet's technical foundations, such as addressing, routing and transport technologies. They recommend operational best practice and specify application protocols that are ...

  7. Information on RFC 2616 » RFC Editor

    File formats: Status: DRAFT STANDARD Obsoletes: RFC 2068 Obsoleted by: RFC 7230, RFC 7231, RFC 7232, RFC 7233, RFC 7234, RFC 7235 Updated by: RFC 2817, RFC 5785, RFC ...

  8. Hypertext Transfer Protocol (HTTP/1.1): Conditional Requests

    This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.

  9. IETF RFC 2616

    The Hypertext Transfer Protocol (HTTP) is an application-level protocol for distributed, collaborative, hypermedia information systems. This specification defines the protocol referred to as "HTTP/1.1". 1 Distribution. European Union Public Licence, Version 1.1 or later (EUPL)

  10. Internet Engineering Task Force (IETF) J. Reschke

    This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.

  11. Internet Engineering Task Force

    The Internet Engineering Task Force (IETF) is a standards organization for the Internet and is responsible for the technical standards that make up the Internet protocol suite (TCP/IP). It has no formal membership roster or requirements and all its participants are volunteers. Their work is usually funded by employers or other sponsors.

  12. The HTTP protocol

    The HTTP protocol. The correct format for HTTP requests and responses depends on the version of the HTTP protocol (or HTTP specification) that is used by the client and by the server. The versions of the HTTP protocol (or "HTTP versions") commonly used on the Internet are HTTP/1.0, which is an earlier protocol including fewer ...
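
    The version dependence mentioned in this entry is visible directly on the request line. Below is a hedged sketch that builds a raw HTTP/1.0 and a raw HTTP/1.1 request by hand; the host name and the send_raw helper are illustrative assumptions, not part of any of the documents listed here.

    ```python
    # The protocol version is stated on the request line; HTTP/1.1 additionally
    # requires a Host header and keeps the connection open unless told otherwise.
    import socket

    HTTP10_REQUEST = (
        b"GET /index.html HTTP/1.0\r\n"
        b"\r\n"
    )
    HTTP11_REQUEST = (
        b"GET /index.html HTTP/1.1\r\n"
        b"Host: www.example.com\r\n"   # mandatory in HTTP/1.1
        b"Connection: close\r\n"       # opt out of the persistent default
        b"\r\n"
    )

    def send_raw(request: bytes, host: str = "www.example.com", port: int = 80) -> bytes:
        """Send a pre-built request and return whatever the server sends back."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(request)
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
            return b"".join(chunks)
    ```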

  13. Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing

    This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.

  14. RFC 2616: Hypertext Transfer Protocol -- HTTP/1.1

    RFC2616; Hypertext Transfer Protocol -- HTTP/1.1; Statements: instance of Request for Comments (1 reference) ... Internet Engineering Task Force (1 reference); stated in RFC Index, retrieved 21 January 2018; language of work or name: English ... MIME E-mail Encapsulation of Aggregate Documents, such as HTML (MHTML) (1 reference).

  15. IETF

    Older documents (before about RFC5705) say "Network Working Group" there, so you have to dig a bit more to find out whether they represent IETF consensus; look at the "Status of this Memo" section for a start, as well as the RFC Editor site. Under that is the "Request for Comments" number. If it says "Internet-Draft" instead, it ...

  16. RFC2616: Hypertext Transfer Protocol -- HTTP/1.1

    RFC2616: Hypertext Transfer Protocol -- HTTP/1.1, 1999 RFC, June 1999. ... Djalaliev P. and Brustoloni J., Secure web-based retrieval of documents with usage controls, Proceedings of the 2009 ACM Symposium on Applied Computing, 2062-2069 ... Hazeyama H. and Kadobayashi Y., SPS, Proceedings of the First Asian Internet Engineering Conference ...

  17. [PDF] Hypertext Transfer Protocol

    This paper discusses how URNs (Uniform Resource Names), a resource identification scheme proposed by a special work group of the Internet Engineering Task Force (IETF), can be employed to implement a support for the mirroring of Web resources.

  18. RFC 9110: HTTP Semantics

    This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG).

  19. draft-ietf-httpbis-zstd-window-size-00

    Status of This Memo This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts.

  20. Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content

    This document defines the semantics of HTTP/1.1 messages, as expressed by request methods, request header fields, response status codes, and response headers, along with the payload of messages (metadata and body content) and mechanisms for content negotiation. The Hypertext Transfer Protocol (HTTP) is a stateless application-level protocol for distributed, collaborative, hypertext ...
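
    The status codes this entry refers to are grouped by their first digit. A small sketch of that grouping follows; the class names come from the specification's categories, while the helper function itself is only illustrative.

    ```python
    # Status codes are grouped by their first digit; the classes below follow
    # the specification, the helper function is only an illustration.
    def status_class(code: int) -> str:
        return {
            1: "informational",
            2: "successful",
            3: "redirection",
            4: "client error",
            5: "server error",
        }.get(code // 100, "unknown")

    assert status_class(200) == "successful"    # OK
    assert status_class(304) == "redirection"   # Not Modified
    assert status_class(404) == "client error"  # Not Found
    assert status_class(503) == "server error"  # Service Unavailable
    ```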

  21. Internet Engineering Task Force

    It mainly deals with the standards of the Internet Protocol suite (TCP/IP), which is a communication protocol used for the internet. The mission of the IETF is to "produce high quality, relevant technical and engineering documents that influence the way people design, use, and manage the internet in such a way as to make the internet work better."

  22. What is a Request for Comments (RFC)?

    A Request for Comments (RFC) is a formal document from the Internet Engineering Task Force (IETF) that contains specifications and organizational notes about topics related to the internet and computer networking, such as routing, addressing and transport technologies. The IETF is a large international community that includes researchers, vendors ...

  23. Internet Engineering Task Force (IETF)

    IETF (Internet Engineering Task Force): The IETF is the body that defines standard Internet operating protocols such as TCP/IP. The IETF is supervised by the Internet Society's Internet Architecture Board (IAB). IETF members are drawn from the Internet Society's individual and organization membership. ...

  24. The Request / Response Cycle Flashcards

    What is the topic of the Internet Engineering Task Force document RFC2616? ... Which of these Internet Engineering Task Force (IETF) documents described the "Internet Control Message Protocol"? RFC792. What is the purpose of encode() in the socket1.py code? To convert the data to UTF-8 before sending.
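
    The encode() answer reflects a general rule: sockets carry bytes, so request text must be encoded before send() and response bytes decoded on the way back. The sketch below illustrates the idea only; it is not the actual socket1.py, and the host and path are invented.

    ```python
    # Why encode()/decode() appear around socket I/O: the wire carries bytes,
    # the program works with str. Host and path are illustrative, not from socket1.py.
    import socket

    request = "GET /index.html HTTP/1.0\r\nHost: www.example.com\r\n\r\n"

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("www.example.com", 80))
    sock.send(request.encode())        # str -> bytes (UTF-8) before sending

    while True:
        data = sock.recv(512)
        if len(data) < 1:
            break
        print(data.decode(), end="")   # bytes -> str when printing the response

    sock.close()
    ```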

  25. IETF

    About the IETF. The Internet Engineering Task Force (IETF), founded in 1986, is the premier standards development organization (SDO) for the Internet. The IETF makes voluntary standards that are often adopted by Internet users, network operators, and equipment vendors, and it thus helps shape the trajectory of the development of the Internet.

  26. [PDF] Internet Engineering Task Force (IETF) J. Falk

    This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 5741.

  28. Simplified Local Internet Number Resource Management (SLURM) with RPKI

    This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts.