Received: from mailman.mit.edu (PCH.MIT.EDU [18.7.21.90]) by krbdev.mit.edu (8.9.3p2) with ESMTP id UAA29339; Tue, 11 Jan 2005 20:59:20 -0500 (EST)
Received: from pch.mit.edu (pch.mit.edu [127.0.0.1]) by mailman.mit.edu (8.12.8p2/8.12.8) with ESMTP id j0C1x0YR017813 for ; Tue, 11 Jan 2005 20:59:00 -0500
Received: from biscayne-one-station.mit.edu (BISCAYNE-ONE-STATION.MIT.EDU [18.7.7.80]) by mailman.mit.edu (8.12.8p2/8.12.8) with ESMTP id j0C1x0YR017810 for ; Tue, 11 Jan 2005 20:59:00 -0500
Received: from outgoing.mit.edu (OUTGOING-AUTH.MIT.EDU [18.7.22.103]) j0C1wgf0020459; Tue, 11 Jan 2005 20:58:42 -0500 (EST)
Received: from all-in-one.mit.edu (ALL-IN-ONE.MIT.EDU [18.18.1.71]) (authenticated bits=56) (User authenticated as raeburn@ATHENA.MIT.EDU) by outgoing.mit.edu (8.12.4/8.12.4) with ESMTP id j0C1weBQ021008 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT); Tue, 11 Jan 2005 20:58:41 -0500 (EST)
Received: (from raeburn@localhost) by all-in-one.mit.edu (8.12.9) id j0C1we9i000457; Tue, 11 Jan 2005 20:58:40 -0500
To: krb5-bugs@mit.edu
From: Ken Raeburn
Date: Tue, 11 Jan 2005 20:58:40 -0500
Message-Id:
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-Spam-Score: -4.9
X-Spam-Flag: NO
X-Scanned-BY: MIMEDefang 2.42
Subject: memory leak in dns code
X-Beenthere: krb5-bugs-incoming@mit.edu
X-Mailman-Version: 2.1
Precedence: list
Sender: krb5-bugs-incoming-bounces@mit.edu
Errors-To: krb5-bugs-incoming-bounces@mit.edu
X-RT-Original-Encoding: us-ascii
Content-Length: 1460

I set up a series of realms R1.MIT.EDU .. R4.MIT.EDU with cross-realm keys, got a ticket as principal x@R1, and ran "kvno service2@R4.MIT.EDU" with the current 1.4 branch sources, under valgrind on x86-linux. Intermediate TGTs were therefore needed for R1->R2, R2->R3, and R3->R4. Aside from the leaks reported in ticket 2541, this one showed up.
Some experimentation with different service principal realms and different sets of existing tickets indicates that the number of leaked blocks varies, presumably with the number of KDC requests.

==30513== 280 bytes in 10 blocks are definitely lost in loss record 7 of 7
==30513==    at 0x1B903D38: malloc (vg_replace_malloc.c:131)
==30513==    by 0x1B9D118B: __libc_res_nsend (in /lib/libresolv-2.3.2.so)
==30513==    by 0x1B9CFE19: __libc_res_nquery (in /lib/libresolv-2.3.2.so)
==30513==    by 0x1B9D056A: __libc_res_nquerydomain (in /lib/libresolv-2.3.2.so)
==30513==    by 0x1B9D0131: __libc_res_nsearch (in /lib/libresolv-2.3.2.so)
==30513==    by 0x1B9D0479: __res_nsearch (in /lib/libresolv-2.3.2.so)
==30513==    by 0x1B9787EC: krb5int_dns_init (dnsglue.c:106)
==30513==    by 0x1B978C34: krb5int_make_srv_query_realm (dnssrv.c:106)
==30513==    by 0x1B97BAB1: krb5_locate_srv_dns_1 (locate_kdc.c:518)
==30513==    by 0x1B97BC45: krb5int_locate_server (locate_kdc.c:595)

At first glance, I think it may be a glibc bug. There is a res_nclose routine that we aren't calling, but I don't think it'll fix this.

Ken