In src/kdc/network.c, in process_packet(), we find:

    response = NULL;
    saddr_len = sizeof(saddr);
    cc = recvfrom(port_fd, pktbuf, sizeof(pktbuf), 0,
                  (struct sockaddr *)&saddr, &saddr_len);
    if (cc == -1) {
        if (errno != EINTR
            /* This is how Linux indicates that a previous
               transmission was refused, e.g., if the client
               timed out before getting the response packet. */
            && errno != ECONNREFUSED
            )
            com_err(prog, errno, "while receiving from network");
        return;
    }
    if (!cc)
        return;             /* zero-length packet? */

Unfortunately, if you receive a datagram larger than sizeof(pktbuf),
recvfrom() succeeds with cc == sizeof(pktbuf), silently discarding the
excess and giving no indication that the datagram was truncated.  The
truncated packet then produces an ASN.1 parse error.  What should
happen is either that the KDC returns an appropriate error to the
client indicating that TCP should be used, or that the buffer is
enlarged to the maximum allowable UDP datagram size.  I prefer the
second option, as there is nothing inherently wrong with 64K UDP
packets.

I noticed this while debugging a JGSS problem.  Apparently, the Java
Kerberos libraries do not fail over from UDP to TCP unless the KDC
specifically tells them to, and they have no default setting for
udp_preference_limit.  So if you are constructing tickets of over 4K
-- say, because a user is in a lot of groups in Windows -- JGSS will
simply fail against an MIT KDC.

Fix: change MAX_DGRAM_SIZE in /include/krb5/stock/osconf.h to the
actual maximum datagram size, 65536.

--
    Roland Dowdeswell                      http://Imrryr.ORG/~elric/