On Sep 15, 2009, at 22:02, elric@mournblade.imrryr.org via RT wrote:

> Unfortunately, if you receive a datagram of over sizeof(pktbuf)
> you will succeed with cc == sizeof(pktbuf) not detecting the fact
> that there was additional data.  This results in an ASN.1 parse
> error.  What should happen is that the KDC should return an
> appropriate error to the client indicating that TCP should be used.

Regardless of other options, it sounds like cc == sizeof(pktbuf) should
trigger the use-TCP error, since we can't distinguish between a packet
equal in size to the buffer and a packet that was larger but got
truncated.  Either that, or we could peek at the size of the next
datagram and grow the buffer as needed, but I'm not sure that peeking
can be done portably.

> Or maybe the buffer size should be increased to the maximum allowable
> for UDP.  I prefer the second option as there is nothing inherently
> wrong with 64K UDP packets.

With jumbograms, UDP messages larger than 64K are possible.  (RFC 2675)
Still, 64K does seem like a reasonable limit (i.e., way larger than we
would normally expect).

> I noticed this while debugging a JGSS problem.  Apparently, the
> Java Kerberos libraries do not fail over from UDP to TCP unless
> the KDC specifically tells them to.  And they have no default
> setting for udp_preference_limit.  And so, if you are constructing
> tickets of over 4K because, let's say, a user is in a lot of groups
> in Windows, JGSS will just fail against an MIT KDC.

From what I've read, the common wisdom still seems to be that some
gateways/routers/NAT boxes/firewalls/whatever will not properly process
UDP fragments, so UDP traffic over ~1500 bytes (or less) may never get
to the KDC.  So this sounds like a bug in the Java Kerberos libraries.

Ken
--
Ken Raeburn / raeburn@mit.edu / no longer at MIT Kerberos Consortium
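
A minimal sketch of the first option discussed above (treating cc ==
sizeof(pktbuf) as possible truncation so the caller can answer with
KRB5KRB_ERR_RESPONSE_TOO_BIG and push the client to TCP).  The function
read_udp_request() and its arguments are illustrative names, not the
KDC's actual network code:

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/*
 * Read one UDP request into pktbuf (a stand-in for the KDC's receive
 * buffer).  A datagram exactly as large as the buffer can't be told
 * apart from a larger one that the kernel silently truncated, so both
 * cases set *maybe_truncated; the caller would reply with
 * KRB5KRB_ERR_RESPONSE_TOO_BIG to make the client retry over TCP.
 */
static ssize_t
read_udp_request(int sock, unsigned char *pktbuf, size_t buflen,
                 struct sockaddr_storage *from, socklen_t *fromlen,
                 int *maybe_truncated)
{
    ssize_t cc;

    *maybe_truncated = 0;
    *fromlen = sizeof(*from);
    cc = recvfrom(sock, pktbuf, buflen, 0,
                  (struct sockaddr *)from, fromlen);
    if (cc < 0) {
        perror("recvfrom");
        return -1;
    }
    if ((size_t)cc == buflen)
        *maybe_truncated = 1;

    /*
     * The "peek and grow" alternative would be a first recvfrom() with
     * MSG_PEEK | MSG_TRUNC to learn the real datagram length, but that
     * use of MSG_TRUNC is Linux-specific, not portable.
     */
    return cc;
}

The false-positive case (a request that is exactly buflen bytes long)
only costs that one client an extra round trip over TCP, which seems an
acceptable price for never handing a truncated request to the ASN.1
decoder.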