Security Considerations of x86 vs x64

This article discusses the security considerations of using an x64 architecture compared to an x86 architecture. It explores the advantages and disadvantages of each in terms of integer and buffer overflows, address space layout randomization (ASLR), resource consumption, and vulnerability mitigation.

In most ways, x64 is better. A few reasons:

  • Because size_t is 64 bits, it’s far less likely that an integer overflow will occur due to addition or multiplication, which makes some common buffer overflow scenarios less likely.
  • Because the actual amount of physical or virtual memory that can be allocated is much lower than SIZE_MAX, some scenarios that might otherwise lead to overflows instead lead to memory allocation failures, which are easier to detect. For example, suppose an attacker can instantiate an arbitrary number of 32-byte structs in a buffer based on the length of an input string. Requesting 0x0800 0001 of them will overflow a 32-bit size_t but not a 64-bit one, while still being a long (but not impossibly long) input; requesting 0x0800 0000 0000 0001 of them would overflow a 64-bit size_t, but would also require an utterly impossible input length.
  • Because the address space is so much larger, higher-entropy ASLR can be used. With low-entropy ASLR, an attacker that can quickly try the same attack multiple times, or that can try attacking many victims at once, may succeed simply out of sheer luck. The smaller address space also sometimes limits what relocations are even possible for loaded libraries, which also impairs ASLR. This is the main aspect of the question linked above.
  • Because a 32-bit process can address at most 4GB of virtual memory (minus kernel-reserved space), a large but finite memory leak or expensive allocation can starve or crash it, whereas a 64-bit process would be fine (assuming there’s enough physical RAM).

However, it’s not all upside. One advantage of 32-bit processes is that a misbehaving one simply can’t consume all available RAM on a modern system. I once found my PC running extremely slowly, and realized it was because a chat app had a remotely triggerable memory leak and had consumed all 32GB of RAM on my PC. While in this case it wasn’t actually malicious, it was a sober reminder that a bug (or vuln, even if not usable for code execution) in a single misbehaving app, running in the background, can severely impair the functioning of the whole computer. If the app had been 32-bit, it would only have been able to consume a few GB, and the performance of the machine wouldn’t have been meaningfully impacted.

That said, the correct way to achieve this advantage is with limits and/or sandboxes. All major consumer and server OSes have ways to cap how many resources a process can consume, which are both more customizable than simply relying on the 32-bit limit, and can be applied to any program regardless of how it was compiled. Furthermore, sandboxing can reduce the impact of other vulnerabilities (or non-exploitable bugs).
