
How to create a 40 Megapixel photo from 8 frames with sensor shift and processing?



I still do not know exactly how Olympus' new sensor shift feature will work. I only know from trusted sources that you get up to 40 Megapixels from the 16 Megapixel sensor by combining 8 shots. A reader sent me this with his very personal guess about how this could work:

I am not a source, just a normal E-M5 user, so what I give here is only a suggestion of how Olympus could create a 40 Megapixel photo from 8 frames with sensor shift and processing; it has nothing to do with what they have actually done. (I assume the engineers of Olympus are smarter than me.)
The attached illustration shows how 8 frames with sensor shift may result in a 32 Megapixel raw photo where the pixels are arranged diagonally, with all color information recorded at each pixel position. This color information can then be rearranged into a normal photo through processing. The resolution of the photo depends on the resolution chosen in the processing and is therefore not the actual resolution of the sensor, but with all color information in all the recorded pixel positions, my guess is that the end result will be just as good as a 40 Megapixel sensor where the color information needs to be calculated by interpolation between the pixels.
If you find the illustration and explanation useful, you are free to use them on your webpage, but make it clear that this is not a rumor, just a suggestion of how it could be done. Also, I want to remain anonymous even though I write to you under my real name.

Hope sources can confirm or deny his “tech-guess” soon :)
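To make the reader's guess a bit more concrete, here is a small toy sketch of my own (an illustration only, not anything confirmed by Olympus) of why stepping a Bayer sensor through the four whole-pixel positions of its 2×2 color-filter cell would give all three colors at every scene position with no interpolation; the rumored 8-shot mode would presumably add further half-pixel steps on top of this to gain spatial resolution:

```python
# Toy model of pixel-shift color sampling on an RGGB Bayer sensor.
# Assumption (not confirmed by Olympus): the sensor is stepped through
# the four whole-pixel positions of the 2x2 CFA cell, so every scene
# position gets sampled by an R, a B, and two G photosites.

BAYER = [["R", "G"],
         ["G", "B"]]  # RGGB color filter array, repeating every 2x2 pixels

def color_at(y, x):
    """Color recorded by the photosite at (y, x)."""
    return BAYER[y % 2][x % 2]

H = W = 4                                  # tiny toy sensor
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]  # whole-pixel sensor positions

samples = {}  # scene position -> set of colors recorded there
for dy, dx in shifts:
    for y in range(H):
        for x in range(W):
            # With the sensor shifted by (dy, dx), photosite (y, x)
            # looks at scene position (y - dy, x - dx).
            scene = (y - dy, x - dx)
            samples.setdefault(scene, set()).add(color_at(y, x))

# Every scene position covered by all four frames has full RGB.
inner = {p: c for p, c in samples.items()
         if 0 <= p[0] < H - 1 and 0 <= p[1] < W - 1}
print(all(c == {"R", "G", "B"} for c in inner.values()))  # True
```

Four more frames shifted diagonally by half a pixel would then interleave a second full-color grid between the first, which is one way the diagonal pixel arrangement in the reader's illustration could arise.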

  • BdV

    Whatever they do to improve IQ is fine with me, I’d just be interested to know if this means anything to fast shutter times. Can the sensor shake itself to 8 different positions within 1/8000 of a second?

    • Hubertus Bigend

      No. If it’s no new technology with a completely non-mechanical electronic shutter (“global shutter”), the eight exposures will need something like a half of a second, even if a very short exposure time like 1/8000 s is selected, similar to the E-M1’s “Hand-Held Starlight” Scene Mode or its HDR shooting modes. So that option will be for tripod shots only.

      If there were a completely new sensor with a “global shutter”, which will not be the case if the E-M5 II uses the same sensor as the original E-M5, eight exposures of 1/8000 s would still need at least 1/1000 s. That would offer at least some range of hand-holdability, though. But I don’t expect we will see something like that yet.

      • BdV

        Thank you. This doesn’t sound like a big reason to get excited. Yet.

        There is something a bit confusing in your comment though. You said 8 exposures need around half a second, and 8 exposures need at least 1/1000 second. That’s a big difference…

        • He probably meant two different things there: 8× 1/1000 s exposures plus the processing take 1/2 sec in total?

          • BdV

            Yes, this could be what he meant. I hope it’s possible to turn this sensor shift resolution option off.

        • Hubertus Bigend

          If the mechanical shutter is still needed to record each shot, as is the case with current Olympus cameras, eight shots will need roughly half a second. If Olympus were able to eliminate the need for the mechanical shutter in the E-M5 II, e.g. by using a new sensor with a global shutter (which I don’t expect they will yet, though), it could theoretically be reduced to something slightly more than 1/1000 s.

      • cab10886

        Well, which hardware component controls the electronic curtain / global shutter: is it the sensor itself or the image processor? If it’s the image processor, it doesn’t matter whether the sensor is the same, as long as the image processor knows how to filter out the information it does not need. Perhaps the old sensor already had a global shutter feature that was not implemented by Olympus due to some other technical limitation.
        It would be like taking a video (continuous image capture) with the sensor shift on, but with the processing chip deciding which images to consider.

        • Hubertus Bigend

          Not quite, because taking a video works without a global shutter, too – but suffers from “rolling shutter” effects which could be eliminated with a global shutter, too.

          Whatever the exact technical details, I think it is safe to assume that Olympus would have enabled a global shutter by all means if the sensor offered it. The advantages of a global shutter are tremendous, and they are for video, too…

  • michael

    I don’t care too much about how it works… as long as it really works as well as advertised (40 MP with full color information on each pixel should be absolutely STUNNING. Think about the DP Merrill series, and that’s “only” 15MP with full color information).

    What I care more about:
    – will it be usable without a tripod? (My guess: no)
    – any chance that a firmware update brings this tech to E-M1 owners?? (post-processing the 8 images at home with some Oly software would be fine for me…)

    • ElysiumFarm

      Hope my E-M1 can be upgraded too… I wonder why they need 8 separate positions though… maybe it’s RGBG at low sensitivity and then 4 for high sensitivity?

      • Piotr Kosewski

        Pictures in the rumor!

        AARGH Hulk smash!

        • Camaman


    • Andrea P.

      The Foveon is 15MP per layer, so you have to multiply that by 3 (for red, green and blue): 45MP worth of information. In terms of resolution this translates roughly into a 30MP Bayer image. The Sigma cameras collect this information in one single take, not over 8 frames. It seems to me the Olympus solution would only work with still subjects.

      • Tom

        As you’ve written, the Merrill cameras capture images at 15MP whilst a CFA equipped sensor with or without OLPF blurs at the pixel level meaning any equivalence between the 2 is qualitative and depends on the subject being captured — the Merrills can approach (but ultimately fall short) a D800 for resolving power so there’s a flaw in CFA MPs but I think that it’s a mistake to say that a Merrill is anything other than 15MP. In a similar fashion to X-Trans, they just produce different looking photos.
        A 40MP multi-shot image should have beautiful tone, micro-contrast and resolve small details spectacularly. If the E-M5 II outputs it as RAW, I may not be waiting on a DP1 Quattro!

        • Andrea P.

          Yes, those conversions are always problematic. Probably the only truly reliable test would be to print bigger and bigger prints from, say, a Merrill and a D800, and see how far each can go in resolving detail. Foveon technology has a number of limitations. In the best of circumstances it creates stunning images, though. The Quattro I am going to pass on. If interpolation is introduced I might as well get a more flexible camera (which I did).

          • Tom

            Quattro’s interpolation was/is a worry but, Foveon’s layers are panchromatic so it’s a coin-toss as to whether Quattro is a significant step-backwards or not… theoretically anyhow. I certainly prefer the original Foveon design but I don’t like the Merrill colour palette/tonal response so they’re out! have some beautiful DP2Q images that pique my interest once more.
            But yes, I’m looking at a Pentax K-3 or E-M5 II as more complete landscape photographic tools.

    • John Norton

      E-M1 won’t be upgraded any further – you want something new you’ll have to wait and pay for the new model.

      • SendJunkMail

        Which is why the E-M1 may be my last Olympus camera, if this is the case. I had hopes for better video on the E-M1 from the start, among other things. The E-M1 is in reality the E-M5 II. They had the opportunity to leapfrog the competition by just releasing the 4K firmware, and they didn’t. This is just another stinking pile of rumor designed to hold people off from their buying decisions. #AnotherOlyRumorFail

    • Peter

      I think it would be perfect with tripod!

      IBIS is not used on a tripod so it can be used for this.

      I wonder if you can use IBIS and this at the same time.

  • Jesus multiplied the loaves, Olympus multiplies the pixels; unfortunately I’m an atheist ;(

  • Camaman

    4 shots vertical/horizontal and 4 shots diagonal left/right?

    • I’m a total noob; I was thinking 4 shifts/shots would be enough to get a significant increase, but 8 is a lot, as I read before. I still don’t understand the stuff. But I’ve seen super resolution used in macro photography, where it works great. Not my cup of tea, but I can dig it.

      • Yun

        Not my cup of tea either.
        I think Oly wants to participate in the MP race against Sony & Canon.
        But such tech is not convincing enough; better leave it to Pana (88 dB sensor) to do the job.

        • You think a lot in decibels lately, Yun

          • Yun

            Maybe.
            2015 is a very interesting year for the digital photography world.

        • Warren Kato

          Nikon will be coming out with the D890 with 90 mp (based on the same technology) so m43 will always be behind in the megapixel race.

          • Yun

            Doesn’t matter .
            A 21 MP from current 16 MP & A 1.5 stops improvement from GX7 sensor would have delight me

            • Agachart Sukchouy

              i don’t think panny use shifting color capturing to their new lumix body,
              MCS V APCS was so stable light and color capturing,

            • Agachart Sukchouy

              APCS = active pixel color sampling
              MCS = micro color splitter

  • spam

    Hasselblad has used a similar technology for a couple of years. It can work well for static subjects, maybe also for subjects with some movement if the shutter speed is fast enough.

    Olympus would need to be able to position the sensor accurately in several positions to implement this feature. Maybe the new camera has improved sensor hardware; if they can do it with the existing sensor stabilizer, then it should be possible to implement it on existing cameras via firmware updates.

    I’d be happy to get 8 separate files (if they really make 8 exposures) and use a utility on the PC to stitch the final image, I’d actually prefer that to in camera stitching to one file.

    • Most of what you said was in Admin’s first thread about this.
      And isn’t stitching something other than this technique of sensor shift?

      • spam

        Stitching is a technique for increasing resolution (often also field of view) by combining more than one image. This is obviously not exactly the same as the traditional use of the word, but the algorithms used for combining the images in the new Olympus are still a form of stitching IMO.

        • pharque moi

          This is not photo stitching.

    • Dmitry Anisimov

      Have you tried PhotoAcute?

  • Gregori

    The whole process takes 2 minutes. It will be great for my product photography!

  • TheTree

    That’s fine, but let’s hope they did something with AF-C; the current E-M5 is very weak in this area…

  • How long it takes is a critical factor. My take is that it shouldn’t take longer than an Art Filter shot, otherwise one loses patience. Even if the sensor is the same, Oly could include an electronic shutter. And why 8 shots, instead of a variable number? I suppose it is the best choice for resolution while not taking too long, so probably the processing is speedier in the new camera.

    • Piotr Kosewski

      Because the number of shots depends on the method used to process them.
      At least look at the picture in the text you’re commenting on. :o

      This is a method for 8 shots. End of story.

      • rudeness shouldn’t be an option, What I mean is that we miss some v. important details on how the thing works in practice. If it’s fast, if it works in low light, etc. For instance I discarded superresolution Photoacute, because it was cumbersome.
        If this must become the main new feature of the new model, we need to know if it works well for everyday shots, so that IBIS makes a substantial difference, for instance in speed of acquisition. And if it is limited or not by the lack of electronic shutter.

        • A simplistic way to explain it is that it is ‘PhotoAcute’ in Firmware, limited to accepting 8 Shots. Those 8 Shots are produced automatically (in Hardware, within the Camera) by shifting the Sensor a half pixel while the IBIS attempts to hold the Sensor steady so it can shift that tiny amount accurately.

          While automatic and simple it will certainly take 8 times longer to take exactly 8 times as many Shots under the same circumstances; thus objects might move and wreck the Shot.

          To attempt to avoid that you can use a faster Shutter speed and open it up (which changes the conditions) in order to get each Shot faster and avoid the blur (blur makes it more difficult to produce an enhanced Image with higher resolution).

          See post above for Link that explains more.

          • Yes. we are on the same page. But if one has to shoot at 1/6000 one will be limited to sunny days.

        • Piotr Kosewski

          First of all: this is still a rumor. I’ve read about it on multiple pages, but they all give 43rumors as the source. Given what happened to other “sure things”, I’d suggest waiting.
          As for now, the best way to learn something is to read about the Hasselblad implementation, because it actually exists and works. This is our expected solution. And it is slow, cumbersome and pretty much useful only in a fully controlled environment with non-moving objects.

          Maybe Olympus will optimize the process. Maybe it will be useful for shooting outdoors: landscapes, architecture and so on.
          But seriously… people have been asking questions like “will this work for shooting sport?” No, it won’t!

          Olympus has used mechanical IBIS in previous models. I’ll be very surprised if it can move the sensor by 0.5 px arbitrarily.
          I would guess it has to measure the signal and “hunt” for the position. I would also guess that’s how Hasselblad’s shift works, because it’s *really slow*.

          If S is your shutter speed, you should not expect the process to take 8 * S.
          You should think about 8 * S + X [7 shifts] + Y [to process].

          To give you an idea: Hasselblad’s 4-shot exposure needs an additional 20 s (X+Y).
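Piotr's timing model above is easy to play with. Here is a minimal sketch under assumed numbers: the per-shift time and processing time below are illustrative guesses of mine, with only Hasselblad's quoted ~20 s of 4-shot overhead as a rough reference point:

```python
# Capture-time model from the comment above: total = 8*S + X + Y,
# where S is shutter speed, X is total sensor-shift time and Y is
# processing time. The default shift/process values below are
# illustrative assumptions, not measured figures.

def multishot_time(shutter_s, n_shots=8, shift_s=0.1, process_s=2.0):
    """Total seconds for an n-shot pixel-shift capture."""
    exposures = n_shots * shutter_s           # 8 * S
    shifting = (n_shots - 1) * shift_s        # X: 7 shifts between frames
    return exposures + shifting + process_s   # + Y

# Even at 1/8000 s per frame, the overhead dominates:
print(round(multishot_time(1 / 8000), 3))                # 2.701
print(multishot_time(1 / 8000, shift_s=0, process_s=0))  # 0.001
```

With those placeholder values, even a 1/8000 s shutter gives a multi-second capture: the pure exposure time is only 1/1000 s, and everything else is shift-and-process overhead, which matches the "tripod only" expectation voiced above.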

    • Joseph Ferrari

      Wouldn’t it require electronic shutter?

      If it uses mechanical shutter, I wouldn’t touch it with a ten foot pole!

  • JohnH

    This article is amazingly uninformative . . .

    • ever hopeful

      It’s not meant to inform anybody – if you read it again it is someone’s proposition as to how it may work.

  • Joseph Ferrari

    Somehow it seems that in order to play in the professional league, you need to step up to the plate with the largest bat possible.

    As a wedding photographer, I would want to shoot the few shots that are candidates for enlargement with at least a 24MP camera. For everything else, 16MP is more than enough. So when I do a wedding, I take both.

    As an event photographer, 16mp is more than enough. The days of print are at an end. So in most cases, I leave the FF home.

    Since I brought home the E-M5 (I pre-ordered) it has been part of my bag in shooting paid gigs.

    I don’t see this feature (40mp) as compelling to my work.

    • If the days of printing are at an end, that means you only use it for screen viewing, which means you don’t need much MP. Or do you like to crop around a lot?

      • Olli Burger

        Ulli, did you forget to take your medicine? I know prints from the M43 sensors are not that good, but the rest of the world prints a lot.

        • digifan

          What BS is this? Prints from a (m)43 are every bit as good as from whatever camera. What a st.p.d statement.
          Mr Olli Burger, go troll somewhere else. Do you even own a camera? If you do, you have never had anything printed from it, that’s for sure!
          There are other factors at play when you put pictures on paper, and the quality of the output very much depends on the printing method and equipment. A standard print will have a totally different output than one from a machine set up for a specific picture. Compare it to the development process in the film age.

        • Come here, Mr Hamburger, and I’ll show you some Blurb mags and a 50×70 lab print

    • Dale

      I have not printed a picture in over five years.

  • BLI

    8 frames by 16 Mpx should be 104 Mpx — if they move the sensor precisely each time. Why 40 Mpx then? To compensate for slight inaccuracies in the sensor positioning?? To attempt to compensate for movement in the subject? To average out noise???

    • BLI

      …eh 128 Mpx, of course :-)

    • Tom

      Look at the image used in the article: it’s not photo stitching! It’s a method of producing panchromatic pixels whilst using a CFA; the fact that it produces 40MP is, I’d guess, more side-effect than raison d’être.

  • Early Leaf digital backs did something similar with four shot backs. Shooting time was slightly longer than four times the shutter speed. Really only useful when nothing in the field of view was moving.

  • > “Hope sources can confirm or deny his “tech-guess” soon :) ”

    To do something as close as possible to this with your existing camera, and obtain a “stacked” (and NOT “stitched”) result, you could use this (or a similar) FREE program. This is how it works:

  • Richard

    I think people are taking the 8 shots and going off on a tangent.

    The easiest implementation is drizzle / super-resolution, which doesn’t rely on actual measured shifts but on random shifting, combining the frames mathematically.

    The results are usually 2×2, meaning it would end up being less than 63MP and probably “somewhere” around an effective 40MP of resolving power.

    An example: basically, if something is spread over three pixels and there is a shift in the subsequent image, then by analyzing the position of the light in those three pixels again you can sub-sample. The more samples you have, the greater the degree of accuracy and the more “effective” resolution you can get.
    There are obvious throwaways, such as motion blur and the fact that, heck, it’s not perfect.
    But in this case the motion of the sensor can be fairly random in X, Y, and even roll to determine the effective pixel.

    Surprisingly, this is the same technology that is used in Olympus microscopes, so it’s not a big stretch for their engineers to make the leap.
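Richard's drizzle/shift-and-add idea can be sketched in a few lines. This toy version is my own illustration, not Olympus' algorithm: it assumes the sub-pixel offsets are already known exactly, whereas a real drizzle pipeline would estimate them per frame and use a weighted drop footprint:

```python
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Naive drizzle-style super-resolution ("drop" step only).

    frames  : list of 2-D low-resolution exposures
    offsets : per-frame (dy, dx) sub-pixel shifts, in low-res pixels,
              assumed known exactly (a real pipeline estimates them)
    scale   : upsampling factor (2 -> a 2x2 finer grid, as Richard says)
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Drop each low-res sample onto the fine grid at its shifted spot.
        ys = np.arange(h)[:, None] * scale + round(dy * scale)
        xs = np.arange(w)[None, :] * scale + round(dx * scale)
        acc[ys, xs] += frame
        hits[ys, xs] += 1
    return np.divide(acc, hits, out=acc, where=hits > 0)

# Toy check: a fine 8x8 "scene" sampled at four half-pixel offsets
# is recovered exactly on the 2x grid.
rng = np.random.default_rng(0)
scene = rng.random((8, 8))
offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
frames = [scene[round(2 * dy)::2, round(2 * dx)::2] for dy, dx in offsets]
sr = shift_and_add(frames, offsets)
print(np.allclose(sr, scene))  # True
```

In the real camera the shifts would come from IBIS actuation rather than random motion, and frames with subject movement would have to be rejected or down-weighted, which is the "throwaway" Richard mentions.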

  • Abraham Latchin

    If they can do this and have the option to get a Live Composite style raw out the other end, it is even more exciting… I know for sure I would use it for my product work, but I can see a number of additional areas, such as:
    landscape photography,
    architecture and interiors,
    food and product,
    art and abstract.

    Either way, as I didn’t opt for the E-M1, this is almost a sure upgrade for the tethering, wifi, improved AF and EVF, colour corrector, and Live Composite… a bunch of goodies. But this MP boost is the cherry.

  • Techman

    It’s meaningless, just like HDR…

    • ever hopeful

      ….and your post

  • Jyri Kaasinen

    40MP from subjects that don’t move? If Olympus (or any other manufacturer) comes up with something like that in the OMD price range, then basically it could be a “scanner killer”. I’m actually a bit surprised to see it has not been discussed here(?). Or does nobody care about film shooting anymore? :D

    Your old negatives won’t move on your table! :)

    So you’d get all of this in one device:
    + negative scanner
    + OMD type of camera
    + medium format(?) quality digital camera for non-moving subjects
    + yet still you’d get decent FullHD video, though it’s not for “video guys” and people wanting 4K

    I’m a film shooter and I have to say it sounds tempting.

    • Jyri Kaasinen

      A negative scanner with autofocus, that is!

  • Duarte Bruno

    Move along people, there’s nothing to see here…
    If you want super-resolution done right, just shoot your 4-6 shot burst in camera and then feed those to PhotoAcute which will always do a much better job of subpixel aligning them than the camera will ever do.

    • Abraham Latchin

      I am not sure you are right there. The IBIS can compensate for hand shake down to the pixel level… but this could very well give multiple benefits such as lower noise, greater resolution and better colour.

      Getting one raw file would also make post work much easier. As they can provide 1 raw file in live composite, they can do it here.

      • Duarte Bruno

        Photoacute gives you all those benefits you have mentioned, plus focus stacking and object removal.

        You are right about workflow of course, but what I wanted to stress was that the possibility is already available in software. And no in-camera processor does radiometric alignment as far as I know. :(

        • Abraham Latchin

          Do they use RAW? I suspect they don’t, so ultimately they are improving elements from JPEG forward, while here we are talking about extra info for colour, DR and noise in a workable raw file… but of course I could be wrong.

          • Guest

            You suspect wrong.
            PhotoAcute reads mostly everything from JPEG/TIFF/RAWs (the last ones through Adobe DNG Converter)
            The image processor engine is either 16/32 bit, I can’t be sure but you are the master of your own workflow and it can’t give caviar if you feed it with sardines (even though sometimes the output almost looks like it).

          • Duarte Bruno

            You suspect wrong!
            PhotoAcute reads mostly everything from JPEG/TIFF/RAWs (the last ones through Adobe DNG Converter)
            The image processor engine is either 16/32 bit, I can’t be sure, but you are the master of your own workflow and it can’t give caviar if you feed it with sardines (even though sometimes the output almost looks like it).

            • Abraham Latchin

              Echo :)
              Well, that certainly makes them more interesting. I would still suspect that if Oly is able to actually shift the sensor as accurately as is rumored, we would be gaining a lot more resolution and clarity.

              My reason for thinking this is that if I am on a tripod and shoot for stacking in PhotoAcute, the sensor hasn’t moved; we are technically not getting any more data per pixel point. What we are doing is resampling the same point for better output.

              Not to say they are different, but I suspect the Olympus output will be cleaner, sharper, with better colour etc.

              I will have a closer look at PhotoAcute, though.

    • Richard

      Except PhotoAcute is so behind the times with camera and lens support.

      • Duarte Bruno

        PhotoAcute’s specific lens modules are mostly useful to correct for CA.
        Even if the profiles are behind the times as you said, it’s results are still above anything else, especially when it comes to aligning, which is the real challenge here.
        The camera support isn’t all that important either. You can still read RAWs indirectly, mark the images as having been taken by another camera with similar sensor and still obtain optimal results.

        Disclaimer: I’m not affiliated in anyway with Almalence, but I’ve profiled over a half a dozen cameras/lens combinations, so I haven’t paid for my PhotoAcute license.

      • Duarte Bruno

        BTW, I don’t really care for lens/camera support. As I shoot RAW, when PhotoAcute gets in the workflow, is always as the last step from RAWs processed into TIFF.
