Subject: fopen file descriptor limit
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6221296
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6234782
http://mail.opensolaris.org/pipermail/kerberos-discuss/2008-September/000272.html

In 32-bit mode, Solaris has an 8-bit field for storing the file descriptor number in a FILE
structure, so fopen cannot return a stream for any descriptor above 255. Some types of
applications may well have more than 255 file descriptors open when they call into the krb5
library, where we use fopen. (The kadm5, kdb, and RPC libraries do as well.) From some
messages in the thread indicated above, it sounds like Sun's integration of 1.6.3 will use a
Sun-specific extension to stdio (not listed in the fopen man page in the Solaris 10 rev we're
running around MIT) to work around this; our code as shipped would just fail.
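
For anyone who wants to check whether another platform has the same problem, here's a rough
probe; I haven't run it on an affected Solaris box myself, so treat it as a sketch. It just
burns through low-numbered descriptors and then tries fopen; on an affected 32-bit build the
fopen call should fail once descriptors 0-255 are all in use.

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int i, fd = -1;
        FILE *f;

        /* Burn through descriptors so the next free one is >= 256.
           (May require raising the limit first, e.g. "ulimit -n 1024".) */
        for (i = 0; i < 300; i++) {
            fd = open("/dev/null", O_RDONLY);
            if (fd < 0) {
                fprintf(stderr, "open failed at iteration %d: %s\n",
                        i, strerror(errno));
                return 1;
            }
        }
        printf("last fd opened: %d\n", fd);

        f = fopen("/etc/passwd", "r");
        if (f == NULL)
            printf("fopen failed: %s\n", strerror(errno));
        else
            printf("fopen succeeded, fileno = %d\n", fileno(f));
        return 0;
    }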

The FILE structure on the Mac has a 16-bit field for a file descriptor and a fileno_unlocked()
macro that examines it, though fileno() is a function that could bury some additional
workaround behavior. Using 65536 or more file descriptors does seem a bit excessive.
(But then, more than 640K of RAM did once too.)

GNU libc uses an int, so that looks okay.

I don't have functional AIX, Tru64, etc., systems to examine. It's possible that this is only a real-world
problem on Solaris.

Possible approaches:
1) Ignore it unless someone gives us a patch. :)
2) Replace fopen calls with basic POSIX I/O, at least on these systems, and manage the
buffering (if we need it) ourselves. A shim layer would let us map it to stdio on systems
where it's not a problem, which is probably most of the modern ones, since they're too new to
have the big legacy-compatibility constraints Solaris has to worry about. (Keep ticket 6062
in mind if tackling this.) A rough sketch of what such a shim might look like follows this
list.
3) Use Sun's extension on recent-enough Solaris, and ignore the problem elsewhere until/unless
we know it's an issue. I don't know whether that extension still supports using fileno() the
way we do for the close-on-exec support.
4) ...?
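
For option 2, here's a very rough sketch of the shim idea. All of the names here (k5_file,
k5_fopen, k5_fileno, k5_read, k5_fclose, USE_RAW_POSIX_IO) are made up for illustration;
nothing like this exists in the tree, and a real interface would need to cover whatever modes
and operations the library actually uses, plus the buffering if we decide we need it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    #ifdef USE_RAW_POSIX_IO   /* hypothetically set by configure on affected systems */

    typedef struct {
        int fd;
        /* buffering state would go here if we decide we need it */
    } k5_file;

    /* Only "r" and "w" handled here. */
    static k5_file *k5_fopen(const char *path, const char *mode)
    {
        int flags = (*mode == 'w') ? (O_WRONLY | O_CREAT | O_TRUNC) : O_RDONLY;
        k5_file *f = malloc(sizeof(*f));

        if (f == NULL)
            return NULL;
        f->fd = open(path, flags, 0600);
        if (f->fd < 0) {
            free(f);
            return NULL;
        }
        return f;
    }

    static int k5_fileno(k5_file *f) { return f->fd; }

    static ssize_t k5_read(k5_file *f, void *buf, size_t len)
    {
        return read(f->fd, buf, len);
    }

    static int k5_fclose(k5_file *f)
    {
        int st = close(f->fd);
        free(f);
        return st;
    }

    #else  /* stdio is safe; map straight through */

    typedef FILE k5_file;
    #define k5_fopen(path, mode)    fopen((path), (mode))
    #define k5_fileno(f)            fileno(f)
    #define k5_read(f, buf, len)    fread((buf), 1, (len), (f))
    #define k5_fclose(f)            fclose(f)

    #endif

Keeping the interface fopen-shaped means the mapping on unaffected systems is just a handful
of #defines, so most platforms pay nothing for it; the fileno() pass-through also keeps the
close-on-exec code working unchanged on those platforms.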