Re: I-D Action: draft-thomson-postel-was-wrong-01.txt


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

heasley
Mon, Jun 12, 2017 at 06:29:30AM -0700, [hidden email]:
>         Title           : The Harmful Consequences of Postel's Maxim
> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01

Perhaps instead of requiring two implementations for a protocol draft to
proceed to rfc, it should first or also have a test suite that

        ... fails noisily in response to bad or undefined inputs.

Having a community-developed test suite for any protocol would be a great
asset.
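
As a rough illustration of the sort of test I mean (purely a sketch;
parse_message() stands in for whatever parser the protocol under test
provides, and the sample inputs are invented):

    import unittest

    # Hypothetical parser for the protocol under test; assumed to raise
    # ValueError for anything the specification does not define.
    from myproto import parse_message  # illustrative import, not a real package

    class UndefinedInputTests(unittest.TestCase):
        """Feed inputs the spec leaves undefined and expect a loud failure."""

        CASES = [
            b"",                    # empty message
            b"\xff" * 64,           # every bit set in every field
            b"\x02\x00\x00\x00",    # version 2 of a version-1-only protocol
        ]

        def test_undefined_inputs_fail_noisily(self):
            for raw in self.CASES:
                with self.subTest(raw=raw):
                    # A conforming implementation must reject, not silently coerce.
                    with self.assertRaises(ValueError):
                        parse_message(raw)

    if __name__ == "__main__":
        unittest.main()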


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

John C Klensin


--On Tuesday, June 13, 2017 03:38 +0000 heasley
<[hidden email]> wrote:

> Mon, Jun 12, 2017 at 06:29:30AM -0700,
> [hidden email]:
>>         Title           : The Harmful Consequences of
>>         Postel's Maxim
>> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
>
> Perhaps instead of requiring two implementations for a
> protocol draft to proceed to rfc, it should first or also have
> a test suite that
>
>         ... fails noisily in response to bad or undefined
> inputs.
>
> Having a community-developed test suite for any protocol would
> be a great asset.

Actually, a number of standards bodies have found, to their
chagrin, that test suites that are developed and/or certified by
the standards body are a terrible idea.  The problem is that
they become the real standard, substituting "passes the test
suite" for "conformance to the standard" or the IETF's long
tradition of "interoperates successfully in practice".

And we have never had a global requirement for "two
implementations to proceed to rfc".

    john




Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

heasley
Tue, Jun 13, 2017 at 05:10:41AM -0400, John C Klensin:

> --On Tuesday, June 13, 2017 03:38 +0000 heasley
> <[hidden email]> wrote:
>
> > Mon, Jun 12, 2017 at 06:29:30AM -0700,
> > [hidden email]:
> >>         Title           : The Harmful Consequences of
> >>         Postel's Maxim
> >> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
> >
> > Perhaps instead of requiring two implementations for a
> > protocol draft to proceed to rfc, it should first or also have
> > a test suite that
> >
> >         ... fails noisily in response to bad or undefined
> > inputs.
> >
> > Having a community-developed test suite for any protocol would
> > be a great asset.
>
> Actually, a number of standards bodies have found, to their
> chagrin, that test suites that are developed and/or certified by
> the standards body are a terrible idea.  The problem is that
> they become the real standard, substituting "passes the test
> suite" for "conformance to the standard" or the IETF's long

reference?

> tradition of "interoperates successfully in practice".

So, both - a test suite and a pair of interoperable implementations.

> And we have never had a global requirement for "two
> implementations to proceed to rfc".

Is that only IDR?


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Christian Huitema-3


On 6/13/2017 7:28 AM, heasley wrote:

> Tue, Jun 13, 2017 at 05:10:41AM -0400, John C Klensin:
>> --On Tuesday, June 13, 2017 03:38 +0000 heasley
>> <[hidden email]> wrote:
>>
>>> Mon, Jun 12, 2017 at 06:29:30AM -0700,
>>> [hidden email]:
>>>>         Title           : The Harmful Consequences of
>>>>         Postel's Maxim
>>>> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
>>> Perhaps instead of requiring two implementations for a
>>> protocol draft to proceed to rfc, it should first or also have
>>> a test suite that
>>>
>>>         ... fails noisily in response to bad or undefined
>>> inputs.
>>>
>>> Having a community-developed test suite for any protocol would
>>> be a great asset.

I am not a great fan of the test suite approach. I saw that used in OSI
protocols back in the days, and there are multiple issues -- including
for example special-casing the test suite in the implementation.

To come back to Martin's draft, I think two points are missing. One
about change, and one about grease. First, the part about change. The
Postel Principle was quite adequate in the 80's and 90's, when the
priority was to interconnect multiple systems and build the Internet.
Writing protocols can be hard, especially on machines with limited
resources. Many systems were gateways, interfacing, for example, Internet
mail with preexisting mainframe systems, and the protocol
implementations could not hide the quirkiness of the system that they
were interfacing. The principle was a trade-off. It made development and
interoperability easier, by tolerating some amount of non-conformance.
As the draft points out, it also tends to make evolution somewhat harder
-- although that probably cuts both ways, as overly rigid
implementations would also be ossified. Martin's draft advocates a
different trade-off. It would be nice to understand under which
circumstances the different trade-offs make sense.

Then there is grease, or greasing, which is a somewhat recent
development. The idea is to have some implementations forcefully
exercise the extension points in the protocol, which will trigger a
failure if the peer did not properly implement the negotiation of these
extensions, "grease the joints" in a way. That's kind of cool, but it
can only be implemented by important players. If an implementation has
0.5% of the market, it can try greasing all it wants, but the big
players might just as well ignore it, and the virtuous greasers will
find themselves cut off. On the other hand, if a big player does it, the
new implementations had better conform. Which means that greasing is
hard to distinguish from old-fashioned "conformity with the big
implementations", which might have some unwanted consequences. Should it
be discussed?
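
To make the mechanism concrete, here is a rough sketch in the spirit of
the GREASE values being discussed for TLS: a sender sprinkles reserved,
never-to-be-assigned codepoints into its advertised extension list, and a
tolerant peer simply ignores them. The codepoint values and the Extension
structure below are illustrative, not any particular protocol's wire format.

    import random
    from dataclasses import dataclass

    # Reserved codepoints permanently set aside for greasing, so they can
    # never collide with a real extension (values chosen for illustration).
    GREASE_CODEPOINTS = [0x0A0A, 0x1A1A, 0x2A2A, 0x3A3A]

    @dataclass
    class Extension:
        codepoint: int
        payload: bytes = b""

    def add_grease(extensions):
        """Sender side: inject one random grease extension so that peers
        which mishandle unknown extensions fail early and visibly."""
        greased = list(extensions)
        greased.insert(random.randrange(len(greased) + 1),
                       Extension(random.choice(GREASE_CODEPOINTS)))
        return greased

    def negotiate(offered, known_codepoints):
        """Receiver side: a tolerant peer uses the extensions it knows and
        ignores the rest -- including the grease it cannot recognize."""
        return [ext for ext in offered if ext.codepoint in known_codepoints]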

-- Christian Huitema



Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Joe Touch
Hi, all,

...
>   Title           : The Harmful Consequences of
>         Postel's Maxim
> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
I completely agree with John Klensin that a test suite defines the
protocol standard (warts and all).

However, I disagree with the characterization of the Postel Principle in
this doc, and strongly disagree with one of its key conclusions ("fail
noisily in response to...undefined inputs"). Failing in response to bad
inputs is fine, but "undefined" needs to be treated agnostically.

IMO, the Postel Principle is an admission of that sort of agnosticism -
if you don't know how the other end will react, act conservatively. If
you don't know what the other end intends, react conservatively. Both
are conservative actions - in one sense, you try not to trigger
unexpected behavior (when you send), and in another you try not to
create that unexpected behavior (when you receive).

That's the very definition of how unused bits need to be handled. Send
conservatively (use the known default value), but allow any value upon
receipt.
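
A tiny sketch of that rule, for a hypothetical one-octet flags field with
two defined bits and the rest reserved (the names and layout are mine, not
from any spec):

    FLAG_A = 0x01                    # defined by the specification
    FLAG_B = 0x02                    # defined by the specification
    DEFINED_MASK = FLAG_A | FLAG_B   # everything else is reserved

    def build_flags(a, b):
        """Conservative in what you send: reserved bits stay at their default, 0."""
        return (FLAG_A if a else 0) | (FLAG_B if b else 0)

    def read_flags(octet):
        """Liberal in what you accept: use the defined bits, ignore the rest."""
        return bool(octet & FLAG_A), bool(octet & FLAG_B)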

The principle does not set up the feedback cycle in Sec 2; a bug is a bug
and should be fixed, and accommodating alternate behaviors is the very
definition of "be generous in what you receive". "Being conservative in
what you send" doesn't mean "never send anything new" - it means do so
only deliberately.

-----
Failing noisily is, even when appropriate (e.g., on a known incorrect
input), an invitation for a DOS attack.

That behavior is nearly as bad as interpreting unexpected (but not
prohibited) behavior as an attack. Neither one serves a useful purpose
other than overreaction, which provides increased leverage for a real
DOS attack.

Joe


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Dave Cridland
In reply to this post by Christian Huitema-3
On 13 June 2017 at 18:50, Christian Huitema <[hidden email]> wrote:

> Then there is grease, or greasing, which is a somewhat recent
> development. The idea is to have some implementations forcefully
> exercise the extension points in the protocol, which will trigger a
> failure if the peer did not properly implement the negotiation of these
> extensions, "grease the joints" in a way. That's kind of cool, but it
> can only be implemented by important players. If an implementation has
> 0.5% of the market, it can try greasing all it wants, but the big
> players might just as well ignore it, and the virtuous greasers will
> find themselves cut off. On the other hand, if a big player does it, the
> new implementations had better conform. Which means that greasing is
> hard to distinguish from old fashioned "conformity with the big
> implementations", which might have some unwanted consequences. Should it
> be discussed?

I think the pressure of large deployments on smaller ones works in
both good, and more often bad, ways. Within the XMPP community, we saw
this multiple times. Facebook and Google both deployed XMPP services
over the years, and both were larger than the rest of the community
put together. You've described this case as "important players", but
I'd prefer to simply describe them as large.

Facebook's involvement was restricted to client/server (C2S) links,
and in general worked reasonably well, though various custom
extensions were used that performed similar functions to standardized
ones.

Google's service did not support SRV lookups, and mandated the (at the
time) legacy immediate-mode TLS instead of XMPP's standard Start TLS,
for example. This was by design, apparently for security (though
likely for deployment considerations), but had the effect that client
developers often had to hardcode server discovery for the service.

Google did provide S2S peering, but - paradoxically given the above -
did not operate TLS at all, preventing any other service from
mandating TLS for several years. When Google withdrew support for the
service (about four years ago with the introduction of Hangouts), the
community almost universally switched to mandatory TLS within a few
months.

At the same time, Google's S2S peering made heavy use of multiplexing
(known as piggybacking within XMPP), which was unusual at the time,
and could easily cause problems with servers - these servers were
simply forced to update. One might consider this case "virtuous
greasing".

One can very easily look at DMARC as another example of larger players
versus small - DMARC obviously fails to work in the mailing list case
(amongst others), yet the large deployments simply don't care, since
it solves their problems, and to hell with everyone else.

Perhaps standards simply work best in a balanced community?


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Petr Špaček
In reply to this post by Joe Touch
On 14.6.2017 00:03, Joe Touch wrote:
> Hi, all,
>
> ...
>>   Title           : The Harmful Consequences of
>>         Postel's Maxim
>> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01

Before I dive into details, let me state that I support this document in its
current form.


> I completely agree with John Klensin that a test suite defines the
> protocol standard (warts and all).
>
> However, I disagree with the characterization of the Postel Principle in
> this doc, and strongly disagree with one of its key conclusions ("fail
> noisily in response to...undefined inputs"). Failing in response to bad
> inputs is fine, but "undefined" needs to be treated agnostically.
>
> IMO, the Postel Principle is an admission of that sort of agnosticism -
> if you don't know how the other end will react, act conservatively. If
> you don't know what the other end intends, react conservatively. Both
> are conservative actions - in one sense, you try not to trigger
> unexpected behavior (when you send), and in another you try not to
> create that unexpected behavior (when you receive).
>
> That's the very definition of how unused bits need to be handled. Send
> conservatively (use the known default value), but allow any value upon
> receipt.

This very much depends on the original specification.

If the spec says "send zeros, ignore on receipt" and marks this clearly
as an extension mechanism then it might be okay as long as the extension
mechanism is well defined.


On the other hand, accepting values/features/requests which are not
specified is asking for trouble, especially in the long term. Look at the
DNS protocol; it is a mess.

- CNAME at apex? Some resolvers will accept it, some will not.
- Differing TTLs inside RRset during zone transfer? Some servers will
accept it and some not.
...


To sum it up, the decision about what is acceptable and what is
unacceptable should be in the protocol developer's hands. Implementations
should reject any non-specified messages/things unless the protocol
explicitly says otherwise. No more "ignore this for interoperability"!
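
One way to picture it: tolerance for unknown values should be an explicit,
per-field decision written into the spec, not the implementer's default.
A toy sketch (the field names and policies are invented for illustration):

    from enum import Enum

    class Unknown(Enum):
        REJECT = "reject"   # spec is silent: refuse the message
        IGNORE = "ignore"   # spec explicitly marks this as an extension point

    # Per-field policy, taken from the (hypothetical) specification text.
    FIELD_POLICY = {
        "opcode":   (Unknown.REJECT, {0, 1, 2}),
        "reserved": (Unknown.IGNORE, {0}),
    }

    def validate(message):
        for field, (policy, defined_values) in FIELD_POLICY.items():
            value = message.get(field)
            if value in defined_values:
                continue
            if policy is Unknown.REJECT:
                raise ValueError("%s=%r is not defined by the spec" % (field, value))
            # Unknown.IGNORE: the spec said future values may appear here.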


With my DNS-software-developer hat on, I very clearly see the value of
the new design principle in Section 4.

Set it in stone! :-)


> The principle does not setup the feedback cycle in Sec 2; a bug is a bug
> and should be fixed, and accommodating alternate behaviors is the very
> definition of "be generous in what you receive". "Being conservative in
> what you send" doesn't mean "never send anything new" - it means do so
> only deliberately.
>
> -----
> Failing noisily is, even when appropriate (e.g., on a known incorrect
> input), an invitation for a DOS attack.
>
> That behavior is nearly as bad as interpreting unexpected (but not
> prohibited) behavior as an attack. Neither one serves a useful purpose
> other than overreaction, which provides increased leverage for a real
> DOS attack.

Sorry, but I cannot agree. This very much depends on the properties of
the "hard fail" messages.

If "error messages" are short enough they will not create significantly
more problems than mere flood of random packets (which can be used for
DoS no matter what we). In fact, short predictible error message is even
better because it gives you ability to filter it somewhere.

Also, passing underspecified messages further down the pipeline causes
problems of its own. (Imagine cases where a proxy passes
malformed/underspecified messages to the backend because it can.)


So again, I really like this document. Thank you!

--
Petr Špaček  @  CZ.NIC


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

heasley
In reply to this post by Joe Touch
Tue, Jun 13, 2017 at 03:03:07PM -0700, Joe Touch:
> Hi, all,
>
> ...
> >   Title           : The Harmful Consequences of
> >         Postel's Maxim
> > https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
> I completely agree with John Klensin that a test suite defines the
> protocol standard (warts and all).

This may be, even if a pair of implementations were required alongside it,
but is that wholly negative?  The test suite is not stuck in time; it can
evolve to test things that its authors had not anticipated, its own bugs,
or evolution of the protocol itself.  It is also more likely to catch bugs
and protocol design flaws than interoperability testing alone.

At the very least, an open test suite allows everyone to test against a
known baseline before they test interoperation with other implementations.
For example, a rigid test suite that fails when a reserved field does not
hold its prescribed value would be an asset for future development that
allocates the field.
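
Something as blunt as this, say (build_header() is a hypothetical encoder
for the protocol under test, and the byte offset is invented):

    from myproto import build_header  # hypothetical encoder, illustrative only

    def test_reserved_field_is_zero_on_the_wire():
        """Deliberately rigid: the reserved octet must carry its prescribed
        value (zero here) so that a future revision can safely allocate it."""
        wire = build_header(version=1, flags=0)
        reserved = wire[3]   # assumed offset of the reserved octet
        assert reserved == 0, "reserved octet must be 0, got %#04x" % reserved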

Certainly the mentioned JSON problems would have benefitted from a test
suite and interoperability testing.  Clearly, peer review and
interoperability testing are insufficient.


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Joel M. Halpern-3
Test suites are often quite useful.
But I can not imagine holding up an I-D from publication while we work
out a test suite.

Demonstrably, there are cases where we have had trouble because people
took divergent paths based on liberality in interpretation.
Equally clearly, the liberal acceptance approach has served us well in
many cases.

Unclear specifications (unclear RFCs) are a problem.  Even Jon's
original formulation was not, from what I saw, intended to permit sloppy
writing.  Or sloppy implementation.

There are indeed contexts where an application calling attention to a
problem is very useful.  Silently ignoring things that indicate trouble
is usually a mistake (although not always.)

I would be very unhappy to see us take the lesson from cases where we
were sloppy to be that we should tell everyone to have their
implementations break at the slightest error.

Yours,
Joel

On 6/14/17 3:56 PM, heasley wrote:

> Tue, Jun 13, 2017 at 03:03:07PM -0700, Joe Touch:
>> Hi, all,
>>
>> ...
>>>    Title           : The Harmful Consequences of
>>>          Postel's Maxim
>>> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
>> I completely agree with John Klensin that a test suite defines the
>> protocol standard (warts and all).
>
> This may be, even if a pair of implementations were required company,
> but is that wholly negative?  The test suite is not stuck in time; it can
> evolve to test things that its authors had not anticipated, its own bugs,
> or evolution of the protocol itself.  It is also more likely to catch bugs
> and protocol design flaws than interoperability testing alone.
>
> At the very least, an open test suite allows everyone to test against a
> know baseline, before they test interoperation with other implementations.
> For example, a rigid test suite that fails when an reserved field is not
> its prescribed value, would be an asset for future development that
> allocates the field.
>
> Certainly the mentioned JSON problems would have benefitted from a test
> suite and interoperability testing.  Clearly, peer review and
> interoperability testing are insufficient.
>
>


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Paul Wouters-4
On Wed, 14 Jun 2017, Joel M. Halpern wrote:

> There are indeed contexts where an application calling attention to a problem
> is very useful.  Silently ignoring things that indicate trouble is usually a
> mistake (although not always.)
>
> I would be very unhappy to see us take the lesson from cases where we were
> sloppy to be that we should tell everyone to have their implementations break
> at the slightest error.

I fully agree. Postel's advice was and is good guidance. It is not a
dogma, and the reverse should also not become a dogma.

Paul


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Joe Touch
In reply to this post by heasley


On 6/14/2017 12:56 PM, heasley wrote:

> Tue, Jun 13, 2017 at 03:03:07PM -0700, Joe Touch:
>> Hi, all,
>>
>> ...
>>>   Title           : The Harmful Consequences of
>>>         Postel's Maxim
>>> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
>> I completely agree with John Klensin that a test suite defines the
>> protocol standard (warts and all).
> This may be, even if a pair of implementations were required company,
> but is that wholly negative?
Not necessarily, but it does negate the need for a specification.

"A designer with one set of requirements always knows what to follow; a
designer with two is never sure" (a variant of "a person with one watch
always knows what time it is...")

If you have a test suite, that singularly defines the protocol to the
exclusion of the spec, otherwise you don't have a complete suite.

>   The test suite is not stuck in time; it can
> evolve to test things that its authors had not anticipated, its own bugs,
> or evolution of the protocol itself.  It is also more likely to catch bugs
> and protocol design flaws than interoperability testing alone.
Interoperability tests are just a variant of test suites - just that
they are all relative to each other, rather than to a single entity.

However, forcing testing against a single entity will necessarily catch
fewer issues than cross-testing can.

> At the very least, an open test suite allows everyone to test against a
> know baseline, before they test interoperation with other implementations.
> For example, a rigid test suite that fails when an reserved field is not
> its prescribed value, would be an asset for future development that
> allocates the field.

That then begs the question:
    - what happens if my version passes the test suite but fails
interoperability tests?

> Certainly the mentioned JSON problems would have benefitted from a test
> suite and interoperability testing.

But what happens when they provide conflicting results? At that point,
you need to decide which one wins; and if one always wins, why do the other?

>  Clearly, peer review and
> interoperability testing are insufficient.
Nothing is ever sufficient unless it is complete, and complete protocol
tests are not feasible for anything other than toy examples. The state
space blows up.

Joe


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

heasley
In reply to this post by Joel M. Halpern-3
Wed, Jun 14, 2017 at 04:20:50PM -0400, Joel M. Halpern:
> There are indeed contexts where an application calling attention to a
> problem is very useful.  Silently ignoring things that indicate trouble
> is usually a mistake (although not always.)

I believe that it is always a mistake; it will either bite now or later,
or it's not a real error and, while ignoring it, real errors are missed.
It should be addressed, whether in your implementation, the test suite,
the spec, or the other implementation.  And, to Joe's point about resolving
the conflicts between those: that dispute is not for the individual
to resolve.  The standards body should determine what the correct behavior
is.  That loop improves the spec, the suite, and interoperability - for
everyone's benefit.

Quality, speed, price; pick two.  If you don't pick quality, you will likely
be disappointed - whether beer, brakes, or BGP.  Testing is a large
contributor to quality.

> I would be very unhappy to see us take the lesson from cases where we
> were sloppy to be that we should tell everyone to have their
> implementations break at the slightest error.

That is the suggestion of the draft; I suggest only that a test suite
should follow this - be devilishly rude - about the slightest error.


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Joe Touch
In reply to this post by Petr Špaček


On 6/14/2017 8:41 AM, Petr Špaček wrote:
> To sum it up, decision what is acceptable and what is unacceptable
> should be in protocol developer's hands.
That should be in the specification.

What the specification leaves open, implementations should respect and
honor as allowed.

Joe



Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Brian E Carpenter-2
In reply to this post by Joel M. Halpern-3
On 15/06/2017 08:20, Joel M. Halpern wrote:
...
> I would be very unhappy to see us take the lesson from cases where we
> were sloppy to be that we should tell everyone to have their
> implementations break at the slightest error.

Indeed. We need implementations to be as robust as possible. That
means careful thought, both in the specification and in every
implementation, about how to handle malformed incoming messages.
There's no single correct answer, as I am certain Jon would have
agreed. Some types of malformation should simply be ignored,
because the rest of the message is valid. Others should cause the
message to be discarded, or should cause an error response to be
sent back, or should cause the error to be logged or reported to
the user. There is no single correct solution.

Clearly the Postel principle was intended as general guidance.

Looking at the core of the draft:

      Protocol designs and implementations should fail noisily in
      response to bad or undefined inputs.

that seems a very reasonable principle for *prototype* and
*experimental* implementations, and a lousy one for production
code, where the response to malformed messages should be much
more nuanced; and the users will prefer the Postel principle
as a fallback.

   Brian


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Mark Andrews-4
In reply to this post by Petr Špaček

In message <[hidden email]>, Petr Špaček writes:

> On 14.6.2017 00:03, Joe Touch wrote:
> > Hi, all,
> >
> > ...
> >>   Title           : The Harmful Consequences of
> >>         Postel's Maxim
> >> https://tools.ietf.org/html/draft-thomson-postel-was-wrong-01
>
> Before I dive into details, let me state I support this documents in its
> current form.
>
>
> > I completely agree with John Klensin that a test suite defines the
> > protocol standard (warts and all).
> >
> > However, I disagree with the characterization of the Postel Principle in
> > this doc, and strongly disagree with one of its key conclusions ("fail
> > noisily in response to...undefined inputs"). Failing in response to bad
> > inputs is fine, but "undefined" needs to be treated agnostically.
> >
> > IMO, the Postel Principle is an admission of that sort of agnosticism -
> > if you don't know how the other end will react, act conservatively. If
> > you don't know what the other end intends, react conservatively. Both
> > are conservative actions - in one sense, you try not to trigger
> > unexpected behavior (when you send), and in another you try not to
> > create that unexpected behavior (when you receive).
> >
> > That's the very definition of how unused bits need to be handled. Send
> > conservatively (use the known default value), but allow any value upon
> > receipt.
>
> This very much depends on the the original specification.
>
> If the spec says "send zeros, ignore on receipt" and marks this clearly
> as an extension mechanism then it might be okay as long as the extension
> mechanism is well defined.
>
>
> On the other hand, accepting values/features/requests which are not
> specified is asking for trouble, especially in long-term. Look at DNS
> protocol, it is a mess.
>
> - CNAME at apex? Some resolvers will accept it, some will not.

Well this is something that should be checked on the authoritative
server when the zone is loaded / updated.  A resolver can't check
this as DNS is loosely coherent.  Delegations can come or go.  Other
data comes and goes.
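
Roughly the kind of load-time check I mean; `records` here is a simplified
list of (owner, rrtype) pairs rather than real zone data structures:

    def check_apex_cname(zone_origin, records):
        """Refuse to load a zone whose apex owns a CNAME alongside other data
        (the apex always owns at least SOA and NS, so this is always an error)."""
        apex_types = {rrtype for owner, rrtype in records if owner == zone_origin}
        if "CNAME" in apex_types and apex_types - {"CNAME"}:
            raise ValueError("%s: CNAME at apex cannot coexist with %s"
                             % (zone_origin, sorted(apex_types - {"CNAME"})))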

> - Differing TTLs inside RRset during zone transfer? Some servers will
> accept it and some not.

And we have rules to truncate to the minimum value.

Named fails hard on a number of zone content issues when running
as the master server for a zone, issues which it deliberately ignores
when running in slave mode because the slave operator can't fix them.
That said, if there were IETF consensus that these should be fatal on
slave zones, so that we aren't left to pick up a bad reputation for
failing to serve the zone, it would be easy to change that.

We are already failing to resolve signed zones, when validating, where
the authoritative servers return FORMERR or BADVERS or fail to
respond to queries with a DNS COOKIE EDNS option present.  In
both cases we fall back to plain DNS queries, which are incompatible
with DNSSEC.  There are a number of .GOV zones served by QWEST that
fall into this category.  Yes, we have attempted to inform QWEST
for the last 2+ years that their servers are broken.

See https://ednscomp.isc.org/compliance/gov-full-report.html#eo
for zones.  They are highlighted in orange.

If we ever need to set an EDNS flag or send EDNS version 1 queries,
the number of zones that will fail on a validating resolver will
increase, mostly because too many firewalls default to "these fields
must be zero" and drop the request instead of listening to the EDNS
RFC, which says to IGNORE unknown flags and to return BADVERS with
the highest version you do support if you don't support the requested
version.

Firewalls are capable of generating TCP RST so they should be capable
of generating a BADVERS response if they don't want to pass version
!= 0 queries.
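
A sketch of the behaviour the RFC asks for; the response dict below just
stands in for building a real packet:

    SUPPORTED_EDNS_VERSION = 0   # highest EDNS version this responder implements
    RCODE_BADVERS = 16           # extended RCODE defined by RFC 6891

    def handle_edns(request_version, request_flags):
        """Ignore unknown EDNS flag bits and answer BADVERS (carrying the
        highest supported version) instead of silently dropping the query."""
        if request_version > SUPPORTED_EDNS_VERSION:
            return {"rcode": RCODE_BADVERS, "edns_version": SUPPORTED_EDNS_VERSION}
        # Unknown bits in request_flags are simply not acted on or copied back.
        return {"rcode": 0, "edns_version": SUPPORTED_EDNS_VERSION}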

Note: it doesn't have to be resolvers that detect DNS protocol errors.
You can test for these sorts of errors easily and refuse to delegate
to servers that don't follow the protocol.

Resolvers can't continue to work around every stupid response
authoritative servers return.  They don't have enough time.

> To sum it up, decision what is acceptable and what is unacceptable
> should be in protocol developer's hands. Implementations should reject
> and non-specified messages/things unless protocol explicitly says
> otherwise. No more "ignore this for interoperability"!
>
>
> With my DNS-software-develoepr hat on, I very clearly see value of
> The New Design Principle in section 4.
>
> Set it to stone! :-)
>
>
> > The principle does not setup the feedback cycle in Sec 2; a bug is a bug
> > and should be fixed, and accommodating alternate behaviors is the very
> > definition of "be generous in what you receive". "Being conservative in
> > what you send" doesn't mean "never send anything new" - it means do so
> > only deliberately.
> >
> > -----
> > Failing noisily is, even when appropriate (e.g., on a known incorrect
> > input), an invitation for a DOS attack.
> >
> > That behavior is nearly as bad as interpreting unexpected (but not
> > prohibited) behavior as an attack. Neither one serves a useful purpose
> > other than overreaction, which provides increased leverage for a real
> > DOS attack.
>
> Sorry but I cannot agree. This very much depens on properies of "hard
> fail" messages.
>
> If "error messages" are short enough they will not create significantly
> more problems than mere flood of random packets (which can be used for
> DoS no matter what we). In fact, short predictible error message is even
> better because it gives you ability to filter it somewhere.
>
> Also, passing underspecified messages further in the pipeline is causing
> problems on its own. (Imagine cases when proxy passes
> malformed/underspecified messages to the backend because it can.)
>
>
> So again, I really like this document. Thank you!
>
> --
> Petr Špaček  @  CZ.NIC
>
--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: [hidden email]


Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

John C Klensin
In reply to this post by Brian E Carpenter-2
--On Thursday, June 15, 2017 10:44 +1200 Brian E Carpenter
<[hidden email]> wrote:

> On 15/06/2017 08:20, Joel M. Halpern wrote:
> ...
>> I would be very unhappy to see us take the lesson from cases
>> where we  were sloppy to be that we should tell everyone to
>> have their  implementations break at the slightest error.
>
> Indeed. We need implementations to be as robust as possible.
> That means careful thought, both in the specification and in
> every implementation, about how to handle malformed incoming
> messages. There's no single correct answer, as I am certain
> Jon would have agreed. Some types of malformation should
> simply be ignored, because the rest of the message is valid.
> Others should cause the message to be discarded, or should
> cause an error response to be sent back, or should cause the
> error to be logged or reported to the user. There is no single
> correct solution.
>
> Clearly the Postel principle was intended as general guidance.
>
> Looking at the core of the draft:
>
>       Protocol designs and implementations should fail noisily
> in       response to bad or undefined inputs.
>
> that seems a very reasonable principle for *prototype* and
> *experimental* implementations, and a lousy one for production
> code, where the response to malformed messages should be much
> more nuanced; and the users will prefer the Postel principle
> as a fallback.

+1 and exactly right.

It is also a good principle for independently-developed test
suites.  Indeed, I agree with

--On Wednesday, June 14, 2017 22:27 +0000 heasley
<[hidden email]> wrote:

>> I would be very unhappy to see us take the lesson from cases
>> where we  were sloppy to be that we should tell everyone to
>> have their  implementations break at the slightest error.
>
> That is the suggestion of the draft; i suggest only that a
> test suite should follow this - be devilishly rude - about the
> slightest error.

But I don't see that as a contradiction: test suites that are
developed by third parties reading standards, deciding what
should be tested, and being brutally narrow about requirements
may be very useful in developing good implementations.  The type
of close reading of standards that such development encourages can
also, if used as feedback, improve the standards.  The problem
arises when test suites (and/or certifications) are produced or
endorsed by the standards developer.  If they are endorsed in
that way, they become alternate statements of the standards
themselves and Joe's observation:

> Not necessarily, but it does negate the need for a
> specification.
>
> "A designer with one set of requirements always knows what to
> follow; a designer with two is never sure" (a variant of "a
> person with one watch always knows what time it is...")
>
> If you have a test suite, that singularly defines the protocol
> to the exclusion of the spec, otherwise you don't have a
> complete suite.

Applies.  If they are independently developed, they are just
like another implementation: two implementations either
interoperate or they do not; an implementation may conform to
the test suite or not, but the standard is the standard and the
authority.  It is possible for the test suite to simply be in
error and for things to conform to the test suite but not the
standard.  It is the two sets of requirements that is the
problem.

The I-D notwithstanding, very little of that has anything to do
with the robustness principle, which, as Brian suggests, is
intended for production implementations and much less so for
experiments, demonstrations, prototypes, reference
implementations, etc.   Certainly the robustness principle has
been misused by sloppy or lazy implementers to claim that they
can produce and send any sort of garbage they like and that
recipients are responsible for figuring out what was intended,
but that was never the intent and (almost) everyone knows that,
including most of the sloppy/lazy (or arrogant) implementers,
who would probably behave that way whether or not Postel had ever
stated that principle [1]. I suggest that one of the reasons the
Internet has been successful is precisely because of sensible
application of the robustness principle.   Not only do things
mostly work, or at least produce sensible error messages or
warnings rather than blowing up, in the presence of small
deviations or errors, but (in recent years at least in theory)
it avoids our having to spend extra years in standards
development while we analyze every possible error and edge case
and specify what should happen.  Instead, when appropriate, we
get to say "this is the conforming behavior; if you don't
conform, the standard has nothing to say to you, but you should
not depend on its working".  The robustness principle is
important guidance for those edge cases.

That assumes a higher level of thinking and responsibility on
the part of implementers than may be justified in the current
world, but I suggest that the fix is not to abandon the
robustness principle.  Personally, I'd like to see more
litigation or other sorts of negative reinforcement against
sloppy implementations and implementers that cause damage.
YMMV, but there is lots of evidence that standards that try to
cover and specify every case are not the solution (see below).

Finally, because I don't want to write a lot of separate,
slightly-connected, notes...

--On Tuesday, June 13, 2017 14:28 +0000 heasley
<[hidden email]> wrote:

>> Actually, a number of standards bodies have found, to their
>> chagrin, that test suites that are developed and/or certified
>> by the standards body are a terrible idea.  The problem is
>> that they become the real standard, substituting "passes the
>> test suite" for "conformance to the standard" or the IETF's
>> long
>
> reference?

Sorry, but I don't have time to do the research to dig that
material out and some of what I found quickly has "members only"
availability.   I speak from a "been there, done that"
perspective, with a background that includes standards
development and oversight body leadership roles in ANSI SDOs,
ANSI, and ISO.  Most of my first-hand experience on that side of
things involved programming languages (including observing a
certain one named after a famous early woman programmer with a
three-letter first name that came, in practice, to be defined
entirely by the test/conformance suite) and, closer to IETF's
interests, some OSI-related protocols in which the effort to
specify every case led to specifications and profiles that either
were never finished or that became so cumbersome as to be
unimplementable.  I won't claim that is what killed OSI and let
the Internet succeed instead (I think there is a rather long
list of contributors), but it was one element.

best,
    john

[1] I note that we have had some worked examples of software
vendors who have taken the position that they are so important,
or that their ideas have achieved sufficient perfection, that
their systems don't need to conform to standards and that
everyone else, including the standards bodies, just need to
conform to them and whatever they produce.  That behavior can't
be blamed on Postel either, nor can the consequences if the IETF
decides to go along and adjust the standards.

>
>    Brian
>





Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Joe Touch


> On Jun 15, 2017, at 6:54 AM, John C Klensin <[hidden email]> wrote:
>
> I suggest that one of the reasons the
> Internet has been successful is precisely because of sensible
> application of the robustness principle.   Not only do things
> mostly work, or at least produce sensible error messages or
> warnings rather than blowing up, in the presence of small
> deviations or errors, but (in recent years at least in theory)
> it avoids our having to spend extra years in standards
> development while we analyze every possible error and edge case
> and specify what should happen.  Instead, when appropriate, we
> get to say, when appropriate, "this is the conforming behavior,
> if you don't conform, the standard has nothing to say to you,
> but you should not depend on its working".  The robustness
> principle is important guidance for those edge cases.

+1

Further, testing for all those edge cases becomes itself a performance burden on operational code, amplifying the leverage of a DOS attack based on all that excess validation.

The Postel principle helps make achieving the balance between usefulness and correctness tractable.

Joe




Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Bob Hinden-3
In reply to this post by Brian E Carpenter-2
Brian,

> On Jun 14, 2017, at 3:44 PM, Brian E Carpenter <[hidden email]> wrote:
>
> On 15/06/2017 08:20, Joel M. Halpern wrote:
> ...
>> I would be very unhappy to see us take the lesson from cases where we
>> were sloppy to be that we should tell everyone to have their
>> implementations break at the slightest error.
>
> Indeed. We need implementations to be as robust as possible. That
> means careful thought, both in the specification and in every
> implementation, about how to handle malformed incoming messages.
> There's no single correct answer, as I am certain Jon would have
> agreed. Some types of malformation should simply be ignored,
> because the rest of the message is valid. Others should cause the
> message to be discarded, or should cause an error response to be
> sent back, or should cause the error to be logged or reported to
> the user. There is no single correct solution.
>
> Clearly the Postel principle was intended as general guidance.
>
> Looking at the core of the draft:
>
>      Protocol designs and implementations should fail noisily in
>      response to bad or undefined inputs.
>
> that seems a very reasonable principle for *prototype* and
> *experimental* implementations, and a lousy one for production
> code, where the response to malformed messages should be much
> more nuanced; and the users will prefer the Postel principle
> as a fallback.
I agree.

It also seems to me that having implementations "fail noisily in response to bad or undefined inputs" is a great formula for making implementations very fragile and consequently very easy to attack.  Overall, I think the approach outlined in this draft would not have allowed us to build the current Internet.

Bob

p.s. The file name chosen for this draft appears to be a good example of stepping on the toes of those who came before, instead of standing on their shoulders.  See: http://wiki.c2.com/?ShouldersOfGiants



>
>   Brian
>



Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Christian Huitema-3


On 6/15/2017 11:28 AM, Bob Hinden wrote:
> p.s. The file name chosen for this draft appears to be a good example of stepping on the toes of those who came before, instead of standing on their shoulders.  See: http://wiki.c2.com/?ShouldersOfGiants

On the other hand, there is something to be said for "being nice
considered harmful".

Martin describes a very real failure mode. Implementations deviate from
the standard, gain market share as the deviations are happily tolerated,
and then prevent standard evolution that would contradict their own
"extensions". Martin gives examples of that happening with JSON.

Or, implementations fail to properly implement the extension mechanism
specified in the standard, and then prevent deployments of perfectly
good options. The slow deployment of Explicit Congestion Notification comes
to mind.

--
Christian Huitema




Re: I-D Action: draft-thomson-postel-was-wrong-01.txt

Mark Andrews-4

In message <[hidden email]>, Christian Huitema writes:

> On 6/15/2017 11:28 AM, Bob Hinden wrote:
> > p.s. The file name chosen for this draft appears to be a good example
> > of stepping on the toes of those who came before, instead of standing on
> > their shoulders.  See: http://wiki.c2.com/?ShouldersOfGiants
>
> On the other hand, there is something to be said for "being nice
> considered harmful".
>
> Martin describes a very real failure mode. Implementations deviate from
> the standard, gain market share as the deviations are happily tolerated,
> and then prevent standard evolution that would contradict their own
> "extensions". Martin gives examples of that happening with JSON.
>
> Or, implementations fail to properly implement the extension mechanism
> specified in the standard, and then prevent deployments of perfectly
> good options. The slow deployment of early congestion notification comes
> to mind.

DNS and EDNS fall into this category.  The AD bit in responses can't
be trusted as there are servers that just echo back (formerly)
reserved bits.  STD 13 says these bits MUST be zero.
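
Which is why a client should do something like the following with the flag
word of any response that did not come over a secured path from a validator
it trusts (0x0020 is the AD bit's position in the DNS header flags word):

    AD_BIT = 0x0020   # AD flag in the second 16-bit word of the DNS header

    def sanitize_ad(flags, from_trusted_validator):
        """Clear AD unless the response came from a trusted validating
        resolver over a secured channel; echoed AD bits mean nothing."""
        return flags if from_trusted_validator else flags & ~AD_BIT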

Then there are all the servers that fail to do EDNS version negotiation
correctly.  They talk EDNS but ignore the version field or return
FORMERR rather than the specified BADVERS error code, etc.  About
half the servers we have tested get EDNS version negotiation wrong
in one way or another.

When we tightened the unknown EDNS option behaviour (RFC 6891; RFC
2671 underspecified the behaviour), updating the EDNS version field
would have been appropriate, but we couldn't do this due to the level
of poor implementation of EDNS version negotiation and idiotic (yes,
name calling is appropriate) default firewall rules from multiple
vendors that dropped any EDNS request with a version field that
wasn't 0.

Mark
> --
> Christian Huitema

--
Mark Andrews, ISC
1 Seymour St., Dundas Valley, NSW 2117, Australia
PHONE: +61 2 9871 4742                 INTERNET: [hidden email]
