AppSec Blog

Top 25 Series - Rank 18 - Incorrect Calculation of Buffer Size

Incorrect Calculation of Buffer Size (CWE-131) is another shameful member of the buffer overflow family. A buffer overflow generally occurs when a piece of data is copied or moved into a memory region too small to hold it, overwriting adjacent data and potentially corrupting the program's execution path. The most basic case of buffer overflow is failing to check the buffer length before copying data. Even when the developer does write length checks, there is still plenty of room for error, and this is exactly where incorrect calculation of buffer size fits in.

When the developer writes code or a routine to check the length of the buffer to be moved or copied, the arithmetic is sometimes not quite correct, which leads to an incorrect size calculation and, in turn, a buffer overflow. Most occurrences are due to human error. A very common and well-known flaw, the off-by-one, is usually caused by the developer forgetting about the NULL terminator at the end of a string, or forgetting that array indexing starts at 0 rather than 1 in the programming language.
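As a minimal sketch of the off-by-one described above (the function names here are hypothetical, chosen for illustration): sizing a buffer with strlen(src) leaves no room for the terminating '\0', so strcpy writes one byte past the end of the allocation.

```c
#include <stdlib.h>
#include <string.h>

/* FLAWED: allocates strlen(src) bytes, but strcpy writes
 * strlen(src) + 1 bytes, including the '\0' terminator. */
char *copy_name_flawed(const char *src)
{
    char *dst = malloc(strlen(src));      /* BUG: off by one */
    if (dst == NULL)
        return NULL;
    strcpy(dst, src);                     /* writes one byte too many */
    return dst;
}

/* FIXED: one extra byte reserved for the '\0' terminator. */
char *copy_name_fixed(const char *src)
{
    char *dst = malloc(strlen(src) + 1);
    if (dst == NULL)
        return NULL;
    strcpy(dst, src);
    return dst;
}
```

The two versions differ by a single byte in the allocation request, which is why this class of bug is so easy to write and so hard to spot in review.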

I will borrow an example from MITRE (example 4 on the page).

#include <stdlib.h>

int *id_sequence;

/* Allocate space for an array of three ids. */

id_sequence = (int*) malloc(3);   /* BUG: reserves 3 bytes, not 3 ints */
if (id_sequence == NULL) exit(1);

/* Populate the id array; everything past the third byte lands
   outside the allocation. */

id_sequence[0] = 13579;
id_sequence[1] = 24680;
id_sequence[2] = 97531;

In this example, the developer's intention is to create space for 3 integers (int), but the coding mistake reserves only 3 bytes of memory. On most platforms an int is 4 bytes, so the three integers add up to 12 bytes. Now we have twelve bytes being written into a 3-byte allocation, which causes an overflow.
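A corrected version of the allocation, sketched here as a hypothetical helper function, sizes the request in terms of the element type rather than a hard-coded byte count:

```c
#include <stdlib.h>

/* Corrected allocation: ask for 3 * sizeof(int) bytes instead of 3. */
int *make_id_sequence(void)
{
    int *id_sequence = malloc(3 * sizeof *id_sequence);
    if (id_sequence == NULL)
        return NULL;

    id_sequence[0] = 13579;
    id_sequence[1] = 24680;
    id_sequence[2] = 97531;
    return id_sequence;
}
```

Writing `sizeof *id_sequence` rather than `sizeof(int)` keeps the calculation correct even if the pointer's element type changes later, and it avoids hard-coding a platform-specific integer size.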

The solution to this problem is developer education combined with a review process. Peer review and code scanners can help tremendously. Using more modern languages also tends to significantly reduce the likelihood of these vulnerabilities.


Posted March 19, 2010 at 8:32 PM | Permalink | Reply

Matthew Wollenweber

I know the point that you're trying to make, but I don't think peer review is the way to go for an issue like this. Spotting this sort of problem in real-life code is very tough. You're probably better off fuzzing/regression testing/using valgrind.
With your specific example, you're also *usually* safe. malloc will usually align allocations to at least 4 bytes, depending on your compiler, CPU, and OS. Thus if you run the above code, you probably won't get a segfault, as malloc is (usually) going to give you some padding.
I think part of the problem with this type of bug is that overall it looks right and it often works. So I think the education portion should emphasize the low-level details of why this is a bug despite not manifesting on many platforms.

Posted March 31, 2010 at 5:37 PM | Permalink | Reply

David Brodbeck

Buffer overflows of this type can also happen when moving from a 32-bit platform to a 64-bit platform. I've run into older code that hard-coded the size of an integer. When compiled on a 64-bit system, the resulting binary would segfault due to buffer overflows. The lesson here is the correct fix for the above code is *not* to use "malloc(12)", it's to use something like "malloc(sizeof(int)*3)".
