TrueSolvers is an independent technology publisher with a professional editorial team. Every article is independently researched, sourced from primary documentation, and cross-checked before publication.
Apple's support documentation offers familiar advice for slow Macs: clear cache, close apps, restart. But when these standard fixes fail, the real culprits often lie in architectural decisions Apple doesn't publicize. Modern macOS slowdowns frequently stem from how the APFS file system interacts with different storage types, combined with aggressive memory management that assumes unlimited fast storage. Understanding these underlying mechanisms reveals why some performance problems persist regardless of basic troubleshooting.

Apple introduced APFS with macOS High Sierra in 2017, replacing HFS+, a file system whose lineage traced back to the original HFS of the mid-1980s. APFS was built from scratch for solid-state storage, and its central architectural choice was to store filesystem metadata (file names, folder hierarchy, attributes) interleaved alongside file data across the drive, rather than keeping it in a dedicated, pre-allocated catalog zone.
On SSDs, this is a perfectly reasonable design. The drive reads any block at the same speed regardless of where it sits physically. Metadata scattered across the drive surface costs nothing extra to retrieve.
On spinning hard drives, that same design becomes structurally problematic. The mechanical read/write head must physically travel to each metadata location during file enumeration. Because the metadata is spread across the platter alongside all the file data, the head spends most of its time repositioning rather than reading. Users hear this as the chattering sound a Mac hard drive makes when opening folders or checking folder sizes.
Testing by Bombich Software quantified this precisely. Measuring how long it took to enumerate one million files on APFS and HFS+ partitions on identical hardware, they found APFS already ran three times slower in the first test cycle. After simulating normal usage over 20 cycles of adding and removing files, APFS enumeration degraded to 15 to 20 times slower than HFS+, while HFS+ performance remained essentially flat throughout.
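The shape of that degradation follows from seek dominance. A toy model makes the point (the 10 ms seek time and the seeks-per-file figures are illustrative assumptions, not Bombich's measurements):

```python
def enumeration_time_s(files: int, seeks_per_file: float,
                       seek_ms: float = 10.0) -> float:
    """Toy HDD model: enumeration cost is dominated by head seeks.

    Every scattered metadata record forces a mechanical reposition;
    the actual read, once the head arrives, is comparatively free.
    """
    return files * seeks_per_file * seek_ms / 1000.0

# Fragmentation does not slow the drive itself down; it multiplies the
# seek count per file, so enumeration time scales linearly with it.
print(enumeration_time_s(1000, seeks_per_file=1.0))   # 10.0 seconds
print(enumeration_time_s(1000, seeks_per_file=15.0))  # 150.0 seconds
```

On an SSD the seek term is effectively zero, which is why the same metadata layout costs nothing there.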
Apple added background defragmentation to APFS in macOS Mojave, which might seem like the obvious remedy. However, this defragmentation addresses file data fragmentation, not the interleaved placement of filesystem metadata that causes the seek-time problem. Bombich's testing with defragmentation enabled found no measurable improvement in enumeration performance on HDDs.
The further complication is that this fragmentation doesn't stand still. Since Mojave, macOS has converted hard drives and Fusion Drives to APFS automatically, with no option to decline during installation. Users on older iMacs or Mac minis with spinning hard drives received this conversion through a routine OS upgrade, with no indication that the update would fundamentally alter how their storage performed under load.
The fragmentation problem on spinning hard drives has no straightforward software remedy. Each file addition worsens metadata fragmentation, and standard macOS maintenance tools cannot reset it. The only reliable fix is cloning the drive to a fresh volume, which restores a defragmented baseline because the clone writes metadata sequentially.
macOS treats RAM as a staging area for active work and storage as overflow. The operating system deliberately fills available RAM with cached content, on the theory that keeping more data immediately accessible is better than leaving RAM unused. When physical memory fills, inactive data moves to a swap file on storage, freeing RAM for active processes. When that swapped data is needed again, it moves back into RAM.
This is a sensible design when the storage handling swap operations is both fast and plentiful. It degrades predictably when either condition is violated.
The swap file in APFS lives in a hidden volume called VM that shares the same free space pool as the main system volume. There is no dedicated swap partition with guaranteed space. When storage fills and free space shrinks, the swap volume competes with everything else for what remains.
Activity Monitor's memory pressure graph uses color bands to signal how hard the system is working to manage this balance. Based on testing by developer La Clementine, the green band covers 50 to 100 percent free memory, yellow covers 30 to 50 percent free, and red appears only when free memory drops below 30 percent. Swap usage typically does not climb meaningfully until the system crosses from yellow into red territory.
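Those band boundaries can be written down as a small classifier. A sketch (the thresholds are the figures from the cited testing, not an Apple-published API):

```python
def pressure_band(free_pct: float) -> str:
    """Map percentage of free memory to an Activity Monitor color band.

    Boundaries follow the cited testing: green at 50-100% free,
    yellow at 30-50% free, red below 30% free.
    """
    if free_pct >= 50:
        return "green"
    if free_pct >= 30:
        return "yellow"
    return "red"

# A machine with 35% of its RAM free is already in the yellow band,
# even though a third of its memory is nominally unused.
print(pressure_band(35))  # yellow
print(pressure_band(72))  # green
print(pressure_band(12))  # red
```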
The practical implication is that yellow memory pressure is already a signal of significant constraint. A Mac showing sustained yellow for several hours is not running comfortably; it is actively moving data between RAM and storage on a regular basis, even if no individual application shows alarming memory consumption.
Both major indicators users rely on — the memory pressure graph and the storage available figure — are calibrated in ways that tend to understate actual system stress. The memory pressure graph does not display any numerical threshold labels, so yellow does not register as urgent. The storage figure, covered below, includes space that may not be genuinely available for writes. Together, these tools can make a Mac under meaningful architectural strain appear to be operating within normal parameters. This appears to be systematic rather than accidental, though Apple has not published documentation explaining the reasoning behind these specific interface calibration choices.
One additional note on Activity Monitor: swap file size is a lagging indicator. The system compresses memory aggressively before writing to swap, meaning significant memory pressure can build up while the swap file remains small. Checking swap size alone and concluding RAM is adequate is a diagnostic error the tool design invites.
When Finder displays available storage, that number includes space occupied by local Time Machine snapshots. Apple's own documentation describes how macOS saves hourly snapshots of the startup disk and counts that space as available on the premise that it can be reclaimed automatically when needed. The operating system will delete snapshots as storage fills, but until deletion occurs, the snapshot blocks are reported to the user as free space.
Disk Utility reports a different number. It counts only space that is not holding any data, snapshot or otherwise. The difference between these two figures can be tens of gigabytes on an actively used Mac.
This creates a consistent pattern: users believe they have more free space than is immediately available for new writes, swap operations, or the wear-leveling overhead that SSDs need to perform efficiently. The 20 percent free-space floor that storage engineers widely recommend for SSD performance (a figure that comes from enterprise storage practice, reflecting the headroom wear-leveling algorithms need to redistribute blocks effectively) is much harder to maintain when the baseline available figure includes purgeable space.
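The arithmetic behind that gap is worth making explicit. A sketch (the function names and example figures are illustrative, not Apple APIs or measured values):

```python
def writable_free_gb(finder_free_gb: float, purgeable_gb: float) -> float:
    """Free space actually available for new writes right now.

    Finder counts purgeable snapshot space as available; Disk Utility
    does not. Subtracting the purgeable figure approximates Disk
    Utility's view of the same drive.
    """
    return finder_free_gb - purgeable_gb

def meets_headroom_floor(capacity_gb: float, free_gb: float,
                         floor: float = 0.20) -> bool:
    """Check the ~20% free-space floor recommended for SSD wear-leveling."""
    return free_gb / capacity_gb >= floor

# A 256 GB drive where Finder shows 60 GB free, 35 GB of it purgeable:
free_now = writable_free_gb(60, 35)        # 25 GB genuinely writable
print(meets_headroom_floor(256, 60))       # True by Finder's figure
print(meets_headroom_floor(256, free_now)) # False by Disk Utility's figure
```

The same drive passes or fails the headroom check depending on which free-space figure you feed it, which is the article's point about the two tools disagreeing.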
System Data adds another layer of opacity to this picture. Macworld documented a case where a 256GB Mac's System Data category had consumed 108 gigabytes through entirely routine use, representing more than 40 percent of total drive capacity. After a macOS update, System Data dropped to roughly 46 gigabytes, freeing more than 60 gigabytes. The space had been there; it just had not been visible or controllable. Other editors at the same publication, running Macs with one-terabyte drives, showed System Data figures between 55 and 87 gigabytes: substantial on their own, but far less consequential given the additional headroom.
A 256GB Mac under normal use will have significantly less genuinely free space than Finder reports, and that gap grows over time as snapshots accumulate and System Data expands. The 10 to 15 percent free-space threshold that Apple cites in its support documentation exists partly to leave room for swap operations and partly to give wear-leveling algorithms room to function. It was never a comfort zone; it is a floor, and reaching it is easier than the Finder display suggests.
The 256GB/8GB configuration at the bottom of Apple's Mac lineup sits at the intersection of every architectural pressure point described so far. Limited storage capacity creates less room for swap, less headroom for wear-leveling, and less buffer against System Data expansion. Eight gigabytes of unified memory generates more frequent swap operations than 16 gigabytes. And on the M2 generation, a hardware decision compounded all of it.
MacRumors reported that M2-era base MacBook Air and 13-inch MacBook Pro models with 256GB storage shipped with a single NAND chip rather than the two 128GB chips used in M1 predecessors. Multiple NAND chips allow the drive controller to run read and write operations in parallel, similar to how two-lane traffic flows faster than single-lane even at the same speed limit. A single chip processes all operations serially. Benchmark results showed M2 base-model SSD speeds 30 to 50 percent slower than equivalent M1 speeds as a result.
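The parallelism argument reduces to simple arithmetic. An idealized sketch (the per-chip bandwidth is a made-up figure; real controllers scale short of perfectly, which is consistent with the measured 30 to 50 percent gap rather than a clean 2x):

```python
def transfer_time_s(total_mb: float, per_chip_mb_s: float, chips: int) -> float:
    """Idealized time to move data striped evenly across NAND chips.

    With one chip, all operations serialize through it; with two, the
    controller can interleave operations, roughly doubling peak throughput.
    """
    return total_mb / (per_chip_mb_s * chips)

# Hypothetical per-chip bandwidth of 1500 MB/s moving a 3000 MB file:
single = transfer_time_s(3000, 1500, chips=1)  # one 256 GB chip: 2.0 s
dual = transfer_time_s(3000, 1500, chips=2)    # two 128 GB chips: 1.0 s
print(single, dual)
```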
Apple addressed questions about the change by stating that the M2 chip's overall performance gains made the single-chip SSD acceptable in real-world use. Independent testing offered more mixed conclusions. With the M3 MacBook Air and M4 Mac mini, Apple returned to two-chip configurations.
The timing matters here. The M2 generation shipped with a slower SSD for swap operations at the same time that a 256GB drive could lose 40 percent of its total capacity to System Data through normal use. An 8GB system relying on slower swap, with less genuinely free space than Finder reported, while the swap volume shared a shrinking pool of free blocks with system files: each of these factors was defensible in isolation. Together, they produced a configuration where the architectural assumptions underlying macOS memory management were violated on hardware Apple was actively selling as an upgrade.
No single factor explains the performance gap some M2 base-model users reported. The slowdowns emerged from the interaction of three independent architectural constraints applying simultaneously to one hardware tier — each defensible in isolation, collectively corrosive.
For users weighing whether to manage these constraints or upgrade, the timeline for service access matters too. Apple's vintage product policy closes the repair window at five years from discontinuation and ends full service at seven, though Mac laptops retain battery replacement availability for a full decade, a distinction that changes the calculus on how long a constrained configuration is worth maintaining.
The file operations that define development work (installing npm packages, running Git commands, clearing and rebuilding caches, managing Docker images) share a common characteristic: they create, modify, and delete enormous numbers of small files in rapid succession. This is precisely the workload pattern that accelerates APFS metadata fragmentation on any storage type.
On HDDs, this accelerates the enumeration degradation described earlier. On SSDs, it triggers a different APFS limitation: lock contention during concurrent file operations. APFS manages file-level locking for data integrity, but under heavy parallel access from multiple processes (common when package managers, build tools, and version control systems run simultaneously), that locking creates a bottleneck that would not exist on a filesystem designed around concurrent workload patterns. Developers have built dedicated benchmark repositories specifically to measure and document this APFS behavior during Git and package manager operations.
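The contention effect is easy to reproduce in miniature. The sketch below is not APFS code; it contrasts one global lock, analogous to filesystem-level locking that serializes unrelated processes, against per-shard locks that let independent work proceed in parallel:

```python
import threading

def run(workers: int, ops: int, shards: int) -> list[int]:
    """Increment counters from many threads, locking per shard.

    shards=1 models a single global lock: every worker contends for
    the same one. More shards let unrelated operations proceed in
    parallel, which is what a filesystem designed around concurrent
    workloads would allow.
    """
    counters = [0] * shards
    locks = [threading.Lock() for _ in range(shards)]

    def work(wid: int) -> None:
        shard = wid % shards
        for _ in range(ops):
            with locks[shard]:  # the contention point
                counters[shard] += 1

    threads = [threading.Thread(target=work, args=(w,)) for w in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counters

# The total work is identical either way; only the contention pattern
# differs, and with shards=1 every thread queues behind one lock.
print(sum(run(workers=8, ops=1000, shards=1)))  # 8000
print(sum(run(workers=8, ops=1000, shards=4)))  # 8000
```

Timing the two variants under heavier loads shows the sharded version finishing sooner on multi-core machines, which is the same structural advantage a concurrency-oriented filesystem has over a single serialization point.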
Time Machine local snapshots add overhead to every write-heavy development operation. Each file change that touches a snapshot-tracked volume adds filesystem overhead beyond the write itself, because APFS must maintain snapshot consistency as files change. Developers sometimes respond by excluding node_modules or build output directories from Time Machine, which reduces that overhead but sacrifices backup coverage for the directories most likely to contain work in progress.
The only remedy for the underlying metadata fragmentation problem is a full clone-and-restore cycle: cloning the drive to a fresh volume, then restoring from the clone. The restore writes metadata sequentially, which defragments it completely; normal macOS maintenance tools cannot replicate this. For developers who generate high write volumes, the degradation rebuilds steadily after each reset, making periodic maintenance an ongoing requirement rather than a one-time fix.
Developers running Git repositories, package managers, and build tools on base-tier Mac configurations are working in an environment where every major APFS performance limitation applies simultaneously. The recommendation to stay on well-provisioned SSD with substantial free space is not generic advice; it addresses the specific architectural conditions that make development workflows disproportionately slow on underpowered configurations.
Intel-based Macs add a hardware constraint that compounds the storage and memory issues above under any sustained CPU workload. Understanding how Intel's turbo boost and thermal throttling interact clarifies why these Macs can feel slow during tasks where Activity Monitor shows available CPU capacity.
Intel processors boost to high clock frequencies when workload demands it and when the CPU is cool enough to sustain those speeds safely. Under sustained load, the processor generates heat that the chassis cooling system must dissipate. In Apple's thin laptop designs, the cooling system can handle brief thermal peaks but cannot sustain the heat output of maximum turbo frequencies indefinitely. The processor settles into a thermal equilibrium between its rated base clock and its maximum turbo frequency, wherever the cooling system can hold the temperature stable.
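That equilibrium can be sketched as a simple power-balance model (all numbers are illustrative, not measurements of any particular Mac):

```python
def settle_frequency(base_ghz: float, turbo_ghz: float,
                     watts_per_ghz: float, cooling_watts: float) -> float:
    """Steady-state clock under sustained load in a toy thermal model.

    The chip runs as fast as it can while the heat it generates
    (frequency * watts_per_ghz) stays within what the chassis can
    dissipate (cooling_watts). The result lands between the base
    clock and the turbo peak.
    """
    sustainable = cooling_watts / watts_per_ghz
    return max(base_ghz, min(turbo_ghz, sustainable))

# Illustrative: a 2.0 GHz base / 4.0 GHz turbo chip drawing 8 W per GHz
# in a chassis that can shed 24 W settles at 3.0 GHz, well below turbo.
print(settle_frequency(2.0, 4.0, watts_per_ghz=8.0, cooling_watts=24.0))  # 3.0
```

A roomier chassis (larger `cooling_watts`) moves the settling point toward the turbo peak, which is why the same processor sustains higher clocks in a desktop enclosure than in a thin laptop.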
This equilibrium is not the severe throttling that occurs at thermal limits. True severe throttling on Intel Macs kicks in at approximately 100 degrees Celsius with fans running at maximum speed, at which point clock speeds drop to roughly 800MHz, effectively idle speed. That state represents cooling system failure, not normal sustained load.
What users experience during ordinary heavy work is different: the CPU running at a steady frequency well below the advertised turbo peak. Activity Monitor may show 60 or 70 percent CPU utilization with cycles available, suggesting the processor has more capacity. But the constraint is not processor cycles; it is thermal headroom. The chip cannot run faster without breaching the cooling system's limits.
Activity Monitor CPU graphs on Intel Macs during sustained workloads require thermal context to read correctly. The processor may show available cycles precisely because it has throttled below the speeds needed to generate more heat — not because the system genuinely has spare capacity. The result is degraded performance that does not produce an obvious alert in any single diagnostic view.
This matters most when multiple architectural pressures overlap. A Mac already handling heavy swap activity due to constrained storage and memory pressure is simultaneously writing to and reading from storage. If that workload also requires sustained CPU throughput (compiling code, processing video, running simulations), the thermal constraint on Intel Macs applies at exactly the moment when the system is already managing several other bottlenecks.
The architectural picture that emerges from this analysis is not a list of bugs to file with Apple. Each decision described above had a coherent design rationale. APFS was optimized for the storage that ships in modern Macs. Memory management was designed to get maximum utility from available RAM. Thermal design balanced cooling capacity against the weight and size of the laptop chassis. The base storage tier was priced for accessibility.
The problems emerge when real-world usage violates the assumptions each design was built around. Managing performance on these systems means understanding which assumptions your usage is closest to violating.
A few thresholds from this research are worth keeping in mind:
Storage headroom matters more than total capacity. Keeping at least 20 percent of SSD capacity free (30 percent if the system has 8GB RAM) gives wear-leveling algorithms adequate room to operate and preserves headroom for swap allocation. Below 20 percent, performance degradation from multiple sources begins simultaneously. The Finder display of available space is not a reliable baseline for this calculation; Disk Utility provides a more accurate figure.
Memory pressure color outweighs swap file size as a diagnostic signal. A Mac showing sustained yellow pressure for hours is under real memory constraint, even if swap usage appears modest. The system has been compressing and evicting RAM contents continuously. Adding RAM is the only structural solution; restarting provides temporary relief by clearing accumulated state.
HDD users face an irreversible degradation trend. APFS on spinning disks produces metadata fragmentation that standard macOS tools cannot reset. Performance will worsen gradually and consistently with use. Migrating to SSD resolves the underlying architectural mismatch.
The M2 base configuration deserves specific scrutiny. If performance has been unexpectedly poor on a base M2 Mac with 256GB storage, the single-NAND chip design is a contributing hardware factor, not a software problem that can be tuned away. The M3 and later generations returned to two-chip storage.
Developer workflows require different specifications. Running Git, npm, package managers, or Docker on a base-tier configuration with less than 20 percent free space will produce performance that no amount of optimization corrects. Development work generates the exact file operation patterns that trigger worst-case APFS behavior.
Apple's architecture performs well within its assumed operating conditions, and those conditions are more specific than the marketing framing of base configurations suggests. A Mac used primarily for browsing, documents, and light media work on a well-maintained SSD with adequate free space rarely encounters these problems. A Mac running development tools, managing local backups, handling memory-intensive applications, and running consistently near storage capacity hits every assumption simultaneously.
Apple's support documentation addresses the surface symptoms. Understanding the architecture below those symptoms makes it possible to identify which threshold your specific usage is crossing and respond accordingly.
Does APFS fragmentation affect SSDs the same way it affects hard drives?
The metadata interleaving that causes catastrophic performance on HDDs has no seek-time cost on SSDs, so enumeration performance does not degrade the same way. However, APFS on SSDs does face a different limitation under developer workloads: lock contention during concurrent multi-process file operations. This is a separate problem from fragmentation, affects solid-state storage, and does not have a straightforward user-side remedy.
How can I tell if my Mac's slowness is storage-related or memory-related?
Open Activity Monitor and check the Memory tab. If the memory pressure graph is showing sustained yellow or red, memory constraint is a primary factor. Then open Disk Utility and compare its free-space figure to what Finder shows. A large gap between the two indicates significant snapshot space is being counted as available, which may be masking how close you are to the storage thresholds that affect swap performance.
Will adding an external SSD help if my internal storage is nearly full?
Offloading files to an external SSD can restore free space on the internal drive, which improves wear-leveling efficiency and swap allocation headroom. It will not directly address memory pressure, but more usable free space on the boot volume does allow the swap file to operate more reliably. For Time Machine, sending backups to an external drive rather than relying solely on local snapshots also reduces the snapshot overhead on the internal volume.
Is the 256GB storage issue on M2 Macs fixable with software updates?
No. The single-NAND chip architecture is a hardware design choice. No software update can add a second chip or create the parallel I/O pathways that two chips provide. The performance gap relative to M1 base models persists at the hardware level.
How often should I restart to manage memory pressure?
Daily restarts are more beneficial than they might seem for 8GB RAM systems running memory-intensive workflows. Memory compression accumulates over uptime; a clean restart returns the system to its lowest swap and pressure state. For users seeing sustained yellow pressure in afternoon hours after a morning start, restarting at the beginning of each workday and periodically during heavy sessions is a practical mitigation for what is otherwise a hardware configuration constraint.