On 1253075737 seconds since the Beginning of the UNIX epoch,
"Ken Raeburn via RT" wrote:
>
>On Sep 15, 2009, at 22:02, elric@mournblade.imrryr.org via RT wrote:
>> Unfortunately, if you receive a datagram of over sizeof(pktbuf),
>> you will succeed with cc == sizeof(pktbuf), not detecting the fact
>> that there was additional data. This results in an ASN.1 parse
>> error. What should happen is that the KDC should return an
>> appropriate error to the client indicating that TCP should be used.
>
>Regardless of other options, it sounds like cc == sizeof(pktbuf) should
>trigger the use-TCP error, since we can't distinguish between a packet
>equal in size to the buffer and a packet that was larger but got
>truncated. Either that, or we could peek at the size of the next
>datagram and grow the buffer as needed, but I'm not sure that peeking
>can be done portably.

Yes, that is exactly the approach I would consider implementing.

>> I noticed this while debugging a JGSS problem. Apparently, the
>> Java Kerberos libraries do not fail over from UDP to TCP unless
>> the KDC specifically tells them to, and they have no default
>> setting for udp_preference_limit. So, if you are constructing
>> tickets of over 4K because, let's say, a user is in a lot of groups
>> in Windows, JGSS will simply fail against an MIT KDC.
>
> From what I've read, the common wisdom still seems to be that some
>gateways/routers/NAT boxes/firewalls/whatever will not properly
>process UDP fragments, so UDP traffic over ~1500 bytes (or less) may
>never reach the KDC. So this sounds like a bug in the Java Kerberos
>libraries.

It is most certainly a bug in the Java Kerberos libraries. I have also
run into them breaking when fragments are dropped, etc.

Thanks,

--
Roland Dowdeswell                      http://Imrryr.ORG/~elric/