
Re: gcc 4.1 bug miscompiles pointer range checks, may place you at risk



On Mon, 17 Apr 2006, Felix von Leitner wrote:

I wrote a small library of functions to do typical range checks as they
are needed in code that handles incoming packets or messages from
untrusted sources.  My impetus was SMB code, in case you want to know.

Here is one of my functions:

static inline int range_ptrinbuf(const void* buf,unsigned long len,const void* ptr) {
 register const char* c=(const char*)buf;      /* no pointer arithmetic on void* */
 return (c && c+len>c && (const char*)ptr-c<len);
}

Of course, when developing security-critical code like this, you also
write a good test suite that exercises all the cases.  Here is part of
my test suite:

 assert(range_ptrinbuf(buf,(unsigned long)-1,buf+1)==0);

Imagine my surprise when this assertion failed.  I had compiled the
code with gcc 4.1, without optimizing (I mention this because for most
gcc bugs, a workaround is disabling the optimizer).
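
For reference, here is the failing test as a minimal standalone program; the #include line, the 64-byte buffer, and main() are additions for illustration, while the function and the assertion are the ones quoted above.

#include <assert.h>

static inline int range_ptrinbuf(const void* buf,unsigned long len,const void* ptr) {
 register const char* c=(const char*)buf;      /* no pointer arithmetic on void* */
 return (c && c+len>c && (const char*)ptr-c<len);
}

int main(void) {
 char buf[64];   /* any non-NULL buffer will do; the size is arbitrary */
 /* len = (unsigned long)-1 makes c+len wrap around, so the overflow check
    c+len>c is meant to reject it; the post reports that when built with
    gcc 4.1 this assertion fails, i.e. the call returns nonzero. */
 assert(range_ptrinbuf(buf,(unsigned long)-1,buf+1)==0);
 return 0;
}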

gcc 3 compiles this code correctly.  I tested this on x86 and amd64.
I mention this here because "c+len>c" is the code with which you would
typically check for integer overflow, a check that, for example, an IP
stack or Samba would do.  So, if you compiled your kernel, your Samba,
or some other packet-handling code in a security-relevant context with
gcc 4.1, you might want to recompile with gcc 3.
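
One way to sidestep the problem entirely (a sketch, not code from the post) is to do the bounds check on lengths and offsets instead of on pointers, so the compiler never sees a pointer expression that could wrap.  The names range_ok, off and need are illustrative:

static inline int range_ok(unsigned long len, unsigned long off, unsigned long need) {
 /* off is how far into the len-byte buffer we already are, need is how many
    more bytes we want.  Because off <= len is tested first, len - off cannot
    underflow, and no pointer arithmetic (hence no pointer overflow) occurs. */
 return off <= len && need <= len - off;
}

A packet parser would track its current offset and call range_ok(len, off, need) before consuming need more bytes, rather than comparing raw pointers.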

Hi,

This is interesting. But I am not sure that it is really a compiler bug. Rules for pointer arithmetic in C are rather restrictive, and stepping outside of them results in "undefined behavior". I don't have the current ANSI C standard available, but even my old copy of K&R I says that you shouldn't compare pointers which point to different arrays, or you may get nonsense. So I have a suspicion that this code may be illegal, and the different compiler versions just happen to have chosen different interpretations.

In fact, in some sense the new result is correct. What if buf is an array of size 2^32? Then buf + 0xffffffffU does in fact point to an element of buf beyond the 0th, so 'buf + 0xffffffffU > buf' is in that sense a true statement. Of course, no existing x86 operating system is set up to work like that, but the compiler doesn't know that.

I guess a more general question is "if q is a pointer, and buf is an array of size n, how to tell if q points to an element of buf?" You would like to be able to do

q >= buf && q < buf+n

but I think maybe this is not right. In fact, on a machine with very crazy memory management, it might be very difficult or impossible to answer that question.
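
As far as the C standard goes, the only fully defined way to ask that question seems to be pointer equality, which, unlike relational comparison, is allowed between pointers into different objects.  A strictly conforming but O(n) sketch, with illustrative names:

#include <stddef.h>

/* Returns 1 if q points to one of the n elements of buf.  Equality
   comparison of valid pointers is defined even when they point into
   different objects; the loop simply tries every element address. */
static int points_into(const char* q, const char* buf, size_t n) {
 size_t i;
 for (i = 0; i < n; i++)
  if (q == buf + i)
   return 1;
 return 0;
}

This is obviously too slow for real packet handling, which is another argument for validating indices rather than pointers, as suggested next.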

I guess the correct approach is to apply the test not to the pointer but to the index. E.g. if given an untrusted index i, rather than letting q=buf+i and then trying to validate q, just keep i around and check whether i >= 0 && i < n (or i <= n, if the one-past-the-end pointer is acceptable). That seems foolproof to me.
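
A sketch of that approach, with illustrative names: validate the untrusted index first and only then form the pointer, so an out-of-range pointer is never created.

#include <stddef.h>

/* buf has n elements and i is untrusted.  Returns a pointer to element i,
   or NULL if i is out of range.  size_t is unsigned, so i >= 0 is implicit. */
static const char* element_at(const char* buf, size_t n, size_t i) {
 if (i >= n)
  return NULL;
 return buf + i;
}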

You might want to bring this up on a forum like comp.lang.c where people know a lot about the C language. Alternatively, if you've already reported this as a bug to the gcc maintainers (which of course you would do before posting to bugtraq, right? :), they will probably be able to explain what's going on.

In any case, it's useful to know about this, if nothing else so that people know to avoid code like that. Thanks for bringing it up.

--
Nate Eldredge
nge@xxxxxxxxxx