TrueSolvers is an independent technology publisher with a professional editorial team. Every article is independently researched, sourced from primary documentation, and cross-checked before publication.
Samsung is developing camera sensor technology that captures all pixels nearly simultaneously rather than scanning line by line, targeting the warped action shots and bent vertical lines that plague current smartphone photography. The 12-megapixel sensor uses embedded analog-to-digital converters and motion-compensation algorithms to approach global-shutter performance at practical smartphone pixel sizes, and it is aimed at the Galaxy S27 series, where motion artifacts are most problematic.

If you've switched from a Galaxy phone to a Pixel and suddenly found it easier to capture clean action shots of your dog mid-sprint or your kid's soccer goal, that experience reflects something real and measurable. Android Authority's stopwatch-based testing found that the Galaxy S24 Ultra averaged approximately 300 to 400 milliseconds slower shutter response than the Pixel 8 Pro under identical conditions. Samsung closed about half that gap with the Galaxy S25 Ultra, but the testing still identified the Galaxy as the only phone in the comparison where testers repeatedly missed the subject entirely.
The Galaxy S26, which launched in February 2026, didn't change this picture in any meaningful way. Camera improvements centered on a wider f/1.4 aperture, not a new sensor architecture. Samsung executives acknowledged at the post-Unpacked roundtable that more significant hardware changes are coming with the Galaxy S27, a hint that lines up directly with the sensor research Samsung published just weeks later.
Documented shutter lag tests across the Galaxy S23 through S25 point to two distinct root causes behind Samsung's action photography gap. The first is capture timing: Galaxy phones historically fire later relative to the button press than competitors, a software and pipeline issue Samsung has been incrementally addressing. The second is rolling shutter geometry distortion: vertical objects that bend, subjects that warp, and video that wobbles even when timing is perfect. That second problem is structural, and software patches cannot fully correct it. The new sensor Samsung is building addresses the geometry problem at the hardware level.
Every CMOS image sensor in current smartphones reads pixel data sequentially. The sensor doesn't capture the entire frame at the same moment. Instead, it works from the top row of pixels downward, recording each horizontal line before moving to the next. By the time the sensor reaches the bottom of the frame, several milliseconds have passed since it recorded the top.
Under ordinary conditions, those milliseconds are imperceptible. But when the camera or subject moves quickly, the top and bottom of the image represent two different positions in time. A vertical pole appears straight to the eye, but if the camera pans across it during those milliseconds of scan time, the top of the pole was recorded when the camera was pointing slightly left and the bottom when it was pointing slightly right. The pole leans in the final image, not because anything bent physically, but because time elapsed between the top and bottom exposures.
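The geometry of that lean can be put into rough numbers. A minimal Python sketch, using illustrative readout and pan figures rather than any published sensor specification:

```python
import math

def rolling_shutter_lean(readout_ms: float, pan_px_per_s: float,
                         frame_height_px: int) -> float:
    """Approximate lean angle (degrees) of a vertical line under a lateral pan.

    The bottom row is exposed `readout_ms` after the top row, so the image of
    the scene has drifted horizontally by pan speed x readout time in between.
    All numbers here are illustrative, not measured sensor figures.
    """
    drift_px = pan_px_per_s * (readout_ms / 1000.0)  # horizontal drift during scan
    return math.degrees(math.atan2(drift_px, frame_height_px))

# A ~10 ms full-frame readout and a brisk 4000 px/s pan on a 3000 px tall frame:
lean = rolling_shutter_lean(10.0, 4000.0, 3000)  # under one degree, but visible
```

Even a sub-degree lean is conspicuous on straight architectural lines, which is why the artifact is so commonly noticed in panning shots.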
The distortion scales with speed. A fast lateral pan produces leaning vertical lines. A running athlete's legs appear stretched or compressed depending on their direction relative to the scan. A spinning propeller or wheel becomes a warped arc rather than a circle. Concert photography introduces a different manifestation: LED stage lighting cycles on and off faster than the eye detects, but the rolling shutter records some lines at the bright phase and others at the dark phase, producing horizontal banding across the image.
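The LED banding follows the same arithmetic: every on/off cycle of the lighting that elapses during the top-to-bottom scan leaves roughly one band pair in the frame. A sketch with illustrative numbers:

```python
def led_band_count(readout_ms: float, led_hz: float) -> float:
    """Approximate number of bright/dark banding cycles across one frame.

    Each PWM cycle of the stage lighting that elapses while the sensor scans
    from top to bottom is recorded as one horizontal band pair.
    Figures are illustrative, not sensor-specific data.
    """
    return (readout_ms / 1000.0) * led_hz

bands = led_band_count(10.0, 1000.0)  # 1 kHz PWM lighting, 10 ms readout
```

A global shutter sidesteps this entirely: every line samples the lighting at the same instant, so the whole frame is either bright or dark, never striped.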
Correcting these artifacts in post-processing requires guessing the motion that caused them, which produces its own artifacts. The only real fix is capturing the entire frame simultaneously so there is no temporal displacement to correct. That is what global shutter sensors do, and it is why the professional photography world has treated global shutter as the gold standard for action capture. The challenge for smartphones is that true global shutter imposes costs that consumer devices cannot easily absorb.
Samsung's approach threads the needle between the theoretical ideal and the practical constraints of a device that fits in a pocket. Rather than a true global shutter, the company developed what it formally calls a "global-shutter-equivalent" sensor. The distinction matters, and Samsung itself maintains it.
The sensor's core mechanism works like this: every four pixels, arranged in a 2x2 group, share a single analog-to-digital converter. Within each group, the four pixels still convert sequentially, retaining rolling shutter behavior at the micro level. All groups operate in parallel, however, so exposure timing is effectively coordinated across the frame. The ADC sits inside the pixel array itself rather than being routed externally. That proximity cuts the distance a signal must travel before conversion, which tightens the time window during which motion can introduce geometric errors into the readout.
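One way to see why shared, embedded ADCs help is to compare worst-case exposure skew under the two readout schemes. A simplified model, with made-up timing figures since Samsung has not published these:

```python
def max_skew_rolling(rows: int, row_time_us: float) -> float:
    """Worst-case exposure-timing skew for a conventional rolling shutter:
    the last row is read (rows - 1) row-times after the first."""
    return (rows - 1) * row_time_us

def max_skew_shared_adc(group_size: int, conv_time_us: float) -> float:
    """Worst-case skew when each 2x2 group owns an embedded ADC and all groups
    convert in parallel: only the sequential conversions inside one group
    (up to four pixels) are offset in time from one another."""
    return (group_size - 1) * conv_time_us

# Illustrative timing figures only:
rolling_skew = max_skew_rolling(3000, 3.0)   # 3000-row frame, 3 us per row
grouped_skew = max_skew_shared_adc(4, 3.0)   # four pixels sharing one ADC
```

Under these toy numbers the temporal spread collapses from milliseconds to microseconds, which is the sense in which the design is "global-shutter-equivalent" rather than truly global.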
The residual distortion from those 2x2 bundles is handled computationally. Samsung's optical flow algorithm analyzes per-pixel brightness changes across the frame, extracts motion vectors indicating how the camera or subject moved, and applies geometric correction in real time during capture, not as a post-processing step applied afterward. Samsung described the process to Korean publication Sisa Journal this way: it extracts optical flow from pixel brightness changes during motion and performs motion compensation from that data. Samsung also acknowledged directly that because rolling shutter operation is included in the architecture, some slight image distortion remains possible, and the result "cannot be seen as a perfect global shutter."
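As a toy illustration of the correction idea, not Samsung's actual pipeline: suppose the optical-flow stage yields a single horizontal motion estimate per row interval; each row can then be shifted back by the skew accumulated up to its exposure time. The function name, rounding scheme, and data are invented for illustration.

```python
def unskew_rows(frame, px_shift_per_row: float):
    """Undo a uniform horizontal rolling-shutter skew, row by row.

    `frame` is a list of rows (lists of pixel values); `px_shift_per_row` is
    the horizontal motion estimated between consecutive row exposures, e.g.
    from an optical-flow stage. This toy version truncates the shift to whole
    pixels and wraps at the edges; a real pipeline would interpolate.
    """
    corrected = []
    for y, row in enumerate(frame):
        s = int(y * px_shift_per_row) % len(row)  # accumulated skew at this row
        corrected.append(row[s:] + row[:s])
    return corrected

# A "pole" skewed one pixel to the right every two rows:
skewed = [[0, 1, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 1, 0]]
straight = unskew_rows(skewed, 0.5)  # pole realigned into a single column
```

The hard part in practice is estimating the motion field reliably, especially in low-contrast scenes, which is exactly where Samsung's residual-distortion caveat applies.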
That candor is important. This sensor will not eliminate rolling shutter distortion with the absolute completeness that a true global shutter achieves. What it offers is a substantial reduction in geometric artifacts, achieved within a pixel size and power envelope that fits inside a smartphone.
The clearest evidence that this is no longer a concept comes from Samsung's presentation at ISSCC 2026 in San Francisco this past February. The International Solid-State Circuits Conference is the semiconductor industry's most rigorous annual academic forum. Samsung's paper, titled "A 1.09e⁻-Random-Noise 1.5μm-Pixel-Pitch 12MP Global-Shutter-Equivalent CMOS Image Sensor with 3μm Digital Pixels Using Quad-Phase-Staggered Zigzag Readout and Motion Compensation," passed peer review for that forum. Academic conference presentation requires reproducible technical results, not marketing projections.
At ISSCC 2025, Samsung presented a 3-stacked sensor capable of switching between a 50MP rolling shutter mode and a 12.5MP global shutter mode. That design registered 2.4 electrons of random noise in global shutter mode. The 2026 sensor brought that figure down to 1.09 electrons. One architecture targeted switchable mode capability for high-resolution applications; the other optimized purely for global-shutter-equivalent performance on secondary camera positions. Samsung is not experimenting. It is refining a product track.
The Sony A9 III, released in 2024 as the world's first full-frame mirrorless camera with a native global shutter, provides the clearest available evidence of what true global shutter costs at the hardware level. Every significant independent review documented the same pattern. DXOMark's sensor measurements recorded a peak dynamic range of 13 EVs for the A9 III, trailing competitors like the Nikon Z9 at 14.4 EVs, and documented a base ISO of 250 versus 100 on rolling shutter peers. That ISO penalty of more than a full stop is baked in before the photographer ever raises the camera.
The underlying reason is structural. A true global shutter requires every pixel to simultaneously capture light and store the resulting charge until the readout cycle can process it. Each pixel must therefore include both a photodiode for capturing photons and a storage region for holding the charge. Those two functions compete for the same microscopic area. The storage region physically displaces some of the photodiode, reducing how much light each pixel can collect. Smaller effective photodiode area means less light sensitivity, which forces higher ISO for equivalent exposure, which introduces noise.
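The cost can be expressed in photographic stops directly from the base ISO figures reported earlier. A small sketch; the interpretation of ISO as lost light-gathering area is a first-order simplification:

```python
import math

def iso_penalty_stops(base_iso: float, reference_iso: float = 100.0) -> float:
    """Sensitivity penalty, in photographic stops, implied by a raised base ISO.

    Each stop is a halving of light collected, so the penalty is
    log2(base_iso / reference_iso). Uses the DXOMark-reported base ISOs
    cited in the article (A9 III at 250 vs. rolling-shutter peers at 100).
    """
    return math.log2(base_iso / reference_iso)

penalty = iso_penalty_stops(250.0)  # roughly 1.3 stops
```

That 1.3-stop figure is the structural bill that in-pixel charge storage presents, and it is precisely the bill Samsung's hybrid design is trying not to pay.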
Samsung's hybrid architecture is designed precisely to avoid the A9 III's penalties. By retaining rolling shutter as the operational foundation and adding embedded ADCs and optical flow correction on top, Samsung keeps the pixels' light-gathering geometry largely intact. The photodiodes do not need to dedicate space to charge storage, because the architecture does not require simultaneous hold-and-readout. The optical flow algorithm then corrects the residual distortion that the hardware cannot eliminate on its own. Whether this approach actually avoids the A9 III's dynamic range penalty will only become visible when independent lab testing examines production hardware; Samsung has not published pixel-level geometry data that would allow a pre-release estimate.
For photographers, the practical translation is this: the A9 III delivers zero rolling shutter distortion at the cost of reduced low-light and dynamic range performance. Samsung's sensor targets a substantial reduction in rolling shutter distortion while attempting to preserve the low-light characteristics that smartphone users expect. Whether it threads that needle cleanly is the open question.
The 12MP specification immediately tells experienced observers where this sensor will not go: the primary camera. Samsung's Galaxy flagship main cameras have operated at 50MP or 200MP across multiple generations. Dropping to 12MP on the main sensor would be a significant resolution step backward that Samsung's marketing positioning would not support.
Telephoto and ultrawide lenses are the logical targets, and not merely by process of elimination. Both positions suffer rolling shutter distortion more severely than the main camera, for different reasons. Telephoto lenses magnify everything: subject detail, background elements, and motion artifacts alike. At 3x optical zoom, rolling shutter bending that might be subtle on the main camera becomes clearly visible in the final image. Ultrawide lenses present a different issue: they are the go-to choice for action video, wide panning shots, and any scenario involving movement across a broad frame. Rolling shutter wobble is particularly disruptive in ultrawide video precisely because the wide field of view exaggerates horizontal motion.
Telephoto and ultrawide are the exact positions where consumers encounter rolling shutter artifacts most frequently and most visibly, and 12MP provides perfectly adequate resolution for both still photography and video capture at either position.
The 1.5µm pixel pitch reflects a calibrated compromise. Smaller pixels gather less light individually, which would ordinarily hurt low-light performance. At 1.5µm, Samsung lands in a range that balances photon collection against the physical compactness a secondary smartphone lens module requires. This is not the largest pixel Samsung could engineer. It is the largest pixel that fits the deployment target.
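To first order, per-pixel light collection scales with the square of the pixel pitch (ignoring microlenses and fill-factor differences). A quick sketch comparing the 1.5µm pitch against a hypothetical 1.0µm pixel; the reference value is illustrative, not a specific competing sensor:

```python
def relative_light(pitch_um: float, reference_pitch_um: float) -> float:
    """Relative per-pixel light collection, assuming photodiode area scales
    with the square of the pixel pitch (a first-order approximation)."""
    return (pitch_um / reference_pitch_um) ** 2

# 1.5 um pixels vs. a hypothetical 1.0 um pixel: 2.25x the collecting area.
ratio = relative_light(1.5, 1.0)
```

That quadratic scaling is why even a few tenths of a micron of pitch matter, and why Samsung sized the pixel to the lens module rather than simply maximizing it.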
Apple holds a granted patent, not merely a pending application, for a global shutter image sensor designed for use in iPhones. Patent No. 12274099, granted April 8, 2025, describes a vertically integrated design using three functional layers within each pixel: the first for light capture, the second as an in-pixel charge storage area, and the third for analog-to-digital conversion. The in-pixel storage layer is the key mechanism. It allows the entire frame to freeze simultaneously by holding captured charge while readout proceeds row by row, achieving functional global shutter without requiring simultaneous readout of every pixel at once. This is architecturally distinct from Samsung's optical flow approach, but it targets the same problem.
That Samsung and Apple have both made formal commitments to the same class of sensor architecture in overlapping timeframes is more than coincidence. These are the two manufacturers whose camera capabilities attract the most consumer attention and competitive scrutiny. When both commit formal intellectual property and sustained research resources to the same problem simultaneously, the technology transitions from experimental to pre-commercial. Consumers will not have to wait for one company to prove it viable before the other follows.
The Galaxy S26's camera continuity turned out to be informative. Samsung shipped its fourth consecutive flagship generation using essentially the same core 200MP sensor architecture. The wider f/1.4 aperture was the headlining change, but the underlying sensor structure remained unchanged. That continuity created a specific dynamic: the engineering resources that did not go into a new S26 sensor architecture went somewhere. Samsung's ISSCC 2026 paper, presented weeks after the S26 launched, suggests strongly where.
Products that reach ISSCC presentation typically reflect hardware at approximately 12 to 24 months from consumer availability. The Galaxy S27 is expected in early 2027, placing it comfortably within that window. Samsung's executives, in post-S26 briefings, acknowledged that more meaningful hardware camera improvements are positioned for the S27 generation. Neither statement is a product announcement, and Samsung has not formally confirmed which sensor architecture the S27 will carry. The timeline is an inference from the research cadence and public statements, not a disclosure.
The S26 skip and the ISSCC 2026 paper both point in the same direction: the S27 is the most credible deployment window available. The question for users is what arrival on the S27 will actually mean in practice.
The sensor will not fully eliminate rolling shutter distortion. Samsung's own description is explicit on this. What it will substantially reduce is the geometric distortion that bends vertical lines, warps limbs, and wobbles video during handheld shooting. The timing-based shutter lag problem is separate and has been addressed by different means across recent generations. Whether the optical flow algorithm introduces its own processing artifacts under challenging conditions, particularly in low-contrast scenes where motion detection is more difficult, is not knowable until production hardware reaches independent testers.
For Samsung users who have watched Pixel and iPhone owners reliably freeze fast subjects while their own shots came back bent or blurry, the S27 development represents the first time Samsung has attacked the hardware root cause rather than patching the symptom. The embedded ADC design does carry a power consumption cost, much as ultra-slim phone designs trade battery capacity for thinness, a reminder that hardware gains at the component level always require management at the system level. Samsung will need to balance the ADC power draw against the S27's thermal and battery constraints. That is a meaningful engineering challenge, but the ISSCC confirmation gives the effort more credibility than a camera upgrade claim typically carries this early in a product cycle.
Does Samsung's new sensor completely eliminate rolling shutter distortion?
No. Samsung describes it as a "global-shutter-equivalent" design, not a true global shutter. The 2x2 pixel bundles within the architecture still operate with some sequential rolling shutter behavior. The optical flow algorithm corrects for the resulting distortion in real time, but Samsung's own technical description acknowledges that slight image distortion remains possible. The improvement will be substantial compared to current Galaxy cameras, but it will not match the absolute zero-distortion performance of a true global shutter like the one in the Sony A9 III.
Why isn't Samsung putting this on the main 200MP camera?
The 12MP resolution rules out the primary camera position on Galaxy flagships, which have used 50MP or 200MP sensors for multiple generations. Telephoto and ultrawide cameras operate at lower resolution anyway, and they are the positions where rolling shutter artifacts are most severe and most frequently encountered by users.
Will image quality suffer compared to current Galaxy cameras?
That is the central engineering challenge Samsung is navigating. True global shutters, like the one in the Sony A9 III, require each pixel to incorporate charge storage circuitry that reduces light-gathering area, forcing trade-offs in dynamic range and low-light performance. Samsung's hybrid approach retains rolling shutter hardware as the foundation, which may preserve more of the photodiode's light-gathering area. Independent lab testing of production hardware will be the only reliable measure.
Is Apple doing the same thing?
Apple holds a granted patent for a different but related approach: in-pixel charge storage layers that allow the entire frame to freeze simultaneously before readout. The architectures differ, but both companies are formally developing global or near-global shutter technology for consumer smartphones in overlapping timeframes.
When will the Galaxy S27 arrive?
Samsung has not announced the Galaxy S27. Based on Samsung's established product cycle, a launch in early 2027 is consistent with historical patterns. The ISSCC 2026 paper, presented in February 2026, fits a timeline where production-ready hardware could be available for a 2027 device.