All is possible

Category: Technique

The importance of ISO in digital

After chatting with a few people, I thought I would throw out a few thoughts on the sensitivity of modern sensors and the misconceptions that are rampant about it…

These days, if a camera can't shoot in the dark at 128,000 ISO (yes, three zeros), it seems unusable; after all, cinema has always stocked up on 128,000 ISO film… of course…
When I mentioned that I use a digital camera that works at 400 ISO, I was taken for a fool; they assumed I was talking about some ancient product.

Cinema is about sensitivity and light: light that you receive, that you capture, that you reflect…
When cinema meets adventure: Fitzcarraldo
If you shoot documentaries I can understand chasing natural light, but I don't recall any film approved by National Geographic using sensitivities above 1600 ASA, and for still photography the limit for documentary work was a strict 400; beyond that, the work was rejected, because it did not meet their standards.

Kodak and Fuji produced motion picture film until last year (now only for archival copies), and they never made overly sensitive stocks, because the more sensitive emulsions were less sharp, whether because of the film's grain structure or because of push processing during development. In fact, the golden rule in photography is to use low sensitivity, because it corresponds to fine grain and therefore detail and definition.

Digital artists who were born with digital, and have never made the effort to learn at least the theory behind analog, are convinced that everything revolves around the sensitivity of the sensor and the camera. They forget that a sensor that is too sensitive either amplifies the signal digitally or becomes unusable in daylight: you have to close the aperture so much that you create diffraction (and thus lose sharpness), or use ND filters so thick that you lose sharpness and/or introduce lateral vignetting and color casts that can never be removed in post except with absurd labor.

For example, with a 400 ISO sensor at 3 p.m., in an average situation I have to close the aperture to f/8-11 to expose correctly, or use an ND16 to take away the stops needed to manage the correct exposure.
If I have to shoot in the evening or at night, it would look unnatural to see everything clearly all the time anyway; still, using fast prime lenses (f/1.8-1.2), I can work with excellent, detailed images.
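
To put numbers on the ND example, here is a quick sketch (my own arithmetic, not taken from any camera manual) of how an ND factor translates into stops, and how far those stops let you open the aperture:

```python
import math

def nd_stops(nd_factor):
    # An ND16 passes 1/16 of the light: log2(16) = 4 stops removed.
    return math.log2(nd_factor)

def open_up(f_number, stops):
    # Each stop of extra filtration lets you divide the f-number by sqrt(2).
    return f_number / (math.sqrt(2) ** stops)

print(nd_stops(16))                 # 4.0 stops
print(open_up(8, nd_stops(16)))     # f/8 plus an ND16 -> shoot at roughly f/2
```

So the f/8-11 daylight situation above, with an ND16 in front, comes back into fast-prime territory.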

Too often we push for native images that, between the noise and the distortions introduced by signal amplification, become unnatural…
A 50mm f/1.4 lens at 1600 ISO and 1/48 provides an image comparable to that of the average human eye, in contrast and brightness, in nighttime exteriors.

Too many people think the secret of a filmic look is pushing technology to the limit, while sometimes the real secret is working within the limits of the machines and knowing how to handle them as well as possible…
Any serious DoP knows that if the camera offers 14 stops of dynamic range, it's better to use 8-9 of them when possible, because the head and tail, as when making good whiskey, have to be thrown away. That's why Log was born, and today it's used more than Raw, and alongside raw: you take the best of the dynamic range, remap it into the available recording range by compressing the information, and can re-expand it afterwards. Besides recovering the full dynamic range, you get a quality you would never achieve working in normal video mode.
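
As a toy illustration of that trick (this is not any real camera's transfer curve, just the shape of the idea), here is a log encode/decode pair that compresses linear scene values spanning several stops into the [0, 1] recording range and expands them back losslessly:

```python
import math

# Brightest linear value we choose to keep (an assumption for the example):
MAX_LIN = 16.0

def encode_log(lin):
    # Compress linear light into [0, 1] with a logarithmic curve,
    # giving proportionally more code values to the shadows.
    return math.log2(lin + 1.0) / math.log2(MAX_LIN + 1.0)

def decode_log(code):
    # Exact inverse: re-expand the recorded value back to linear.
    return 2.0 ** (code * math.log2(MAX_LIN + 1.0)) - 1.0

# A highlight 4 stops above mid grey survives the round trip:
x = 16.0 * 0.18
print(round(decode_log(encode_log(x)), 4))
```

The point is that the compression is invertible: nothing is thrown away, it is only redistributed across the available code values.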

Another legend that circulates is that ISO is just metadata; this is not quite correct, it depends on the camera.

Most cameras have a sensor with a fixed native sensitivity at some ISO value, and that information is stored in the raw file. Then, depending on the camera settings, if you have set an ISO different from the native one, a re-processed image is shown based on the metadata recorded in the file. For this reason, saving a raw file lets you manipulate the image in post-production using all the original information.
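
A minimal sketch of the concept (illustrative only, not any manufacturer's actual pipeline): the raw samples stay as captured, and the ISO written in the metadata is applied as gain when the file is developed:

```python
def develop(raw_linear, native_iso, metadata_iso):
    # The ISO tag relative to the native ISO implies a linear gain...
    gain = metadata_iso / native_iso
    # ...applied at develop time; gained-up values clip at full scale.
    return [min(v * gain, 1.0) for v in raw_linear]

# Shot at native ISO 400 but tagged ISO 1600 -> 4x gain on development;
# the brighter samples clip, the raw data underneath is untouched.
print(develop([0.1, 0.4, 0.6], 400, 1600))
```

Re-tag the same file at the native ISO and you get the original values back, which is exactly why full raw recording preserves latitude.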

However, this only holds if the raw is recorded completely, which is the case with high-end cameras such as Arriflex or Red. In the case of the Blackmagic Cinema Camera there are indeed metadata indicating the deviation between the native ISO and the artificial ISO, but there is a fundamental difference in how the data is recorded.

The native ISO of the BMCC is 800 for 2.5K and 400 for 4K, while the other ISOs are created artificially by undersampling or oversampling the information. In theory we can manipulate the information in post, but in the case of the BMCC there is a catch…

The BMCC is built around a very good 16-bit sensor, but it records the information in a 12-bit log raw DNG file. That means it records as much as possible when shooting at the native ISO, which is what BMD suggests for the most latitude in post-production. When working at non-native ISOs, curves are applied to the log to favor the highlights or the shadows, so the 16-bit data remapped onto the 12-bit log is pushed slightly in one of the two directions, and the extremes of the tonal range may end up slightly less protected.

So, do all these extra ISOs help or not?

Hard to give a single answer; let's say it depends on the situation and on how the camera is used:

PRO extended high and low ISOs:

  • recording in ProRes, you have more latitude in post because you capture more information in shadow areas that would otherwise be crushed, or in the highlights
  • with higher ISOs you can see better to pull focus
  • a lower ISO can be useful when you don't have ND filters to cut the light

AGAINST extended high and low ISOs:

  • recording at too high an ISO can cause loss of definition and/or excessive noise that remains "etched" into the images
  • recording at a low ISO obtained by downsampling can clip the highlights, depending on the algorithm doing the downsampling
  • used without understanding, they can limit what you can do in post

Then there are exceptions: some cameras have sensor circuitry designed to handle low and high ISO with two readout gains, so they effectively have two native ISOs (the new Panasonic Varicam, for example), and in that case this argument does not apply.

BMCC

So what about these "insensitive" Blackmagic Design cameras?

After a simple experiment, a BMCC 4K side by side with a Canon 60D, shooting raw on both and bringing the shots into Lightroom: the native 400 ISO shots from the BMCC 4K were brightened in post until they matched the detail obtained from the 60D at 6400 ISO raw. The result is pretty much the same, only noisier, because the Canon applies noise reduction in camera; once we run our own noise reduction pass, we get the same level of information.

Of course there is one handicap for the BMCC, which Blackmagic never considered: a higher ISO also means a brighter image for manual focusing.

The crop: oh, how it scares me…

When photography and cinematography were born, users chose lenses for their artistic and technical expression and did not worry about the crop and its consequences. Today it seems an insurmountable and, above all, dramatic problem.
Let's bring some clarity to the various practical and technical debates.

In photography, film set a standard: for amateurs with compacts and for semi-professionals, the 24×36 format (a 2:3 ratio), while super-professionals used 6×9 roll-film backs or single sheets of film as large as 60×90 mm.

Film running horizontally.

In cinema, classic film was divided between:

  • 16mm, with a frame of 10.26 × 7.49 mm
  • Super 16mm, with a frame of 12.52 × 7.41 mm, sacrificing one of the two rows of perforations
  • 35mm in the Academy format, 22 × 16 mm
  • 35mm in the 1.85:1 wide format, 21.98 × 18.6 mm
  • Super 35mm, 24.89 × 18.66 mm (using the full width out to the perforations)

Film running vertically.

As you can see, 35mm photographic film and 35mm motion picture film share only the width; since one runs perpendicular to the other, they are not comparable at all.

Digital photography has since created several new formats, to save on both lenses and sensors.

  • 36 × 24 is the standard format; it has been renamed FullFrame
  • 23.6 × 15.7 is the APS-C format, born for the cheaper ranges (it actually varies between manufacturers by a few mm; Canon, for example, reduces it to 22.2 × 14.8)
  • 18 × 13.5 is the Micro Four Thirds format, created by the consortium around Olympus and Panasonic

This means that the focal plane onto which a FullFrame lens (24×36mm) projects its image is larger than the sensor that captures it, and this involves a number of aesthetic and practical consequences:

  • if the lens covers more than the sensor, only a portion of the image circle is captured, so the angle of view is narrower
  • less of the gathered light is used, so even a fast lens (wide open) cannot express itself at its best
  • often, to recover a wider angle, you use extreme wide-angle lenses in situations where narratively they should not be used, so the perspective is more stretched

Why and when does the crop problem arise?

Because the manufacturers (definitely an economic choice) who develop still and cine cameras prefer to keep a single full-frame mount and then put the same lenses in front of both full-frame sensors and smaller ones, which is how people started talking about this infamous crop. By convention the so-called FullFrame is taken as the reference, crop 1; but since larger formats such as 6×9 exist, in reality full frame has a crop of its own, roughly 2.4 relative to a 6×9 back.

Why is the crop feared?

measured sensor

Because most people fear the focal-length multiplication and the resulting loss of angle of view, and so being pushed into buying ever wider wide-angle lenses; pay for a more extreme wide-angle, recover the angle, and everything is back in place, they think.

But that is not really what happens, unless the differences between sensors are extreme. I wrote an article with direct comparisons between a 4/3 sensor and an S35 sensor, with side-by-side photographs.

What exactly is the crop? And what is it NOT…

The crop cuts out part of the image ON the focal plane: when you talk about crop, think of taking the original image and keeping only the central portion. It partially affects the brightness, because the projection cone of a full-frame lens's image is not captured completely; the rays are not all brought to use, so even if the light the lens collects is X, the result is X reduced by the crop percentage.

What is the most common error of thinking?

You often read: "equivalent to a full-frame focal length of xxx".
Wrong.
The crop CANNOT change the FOCAL LENGTH OF A LENS!!! It changes neither the curvature of the glass nor the distances between the elements, only the portion of the focal plane CAPTURED, and therefore the angle of view.
You often read that a 12-35 type lens on the m4/3 format (crop ×2) is the equivalent of a 24-70 on the FullFrame format… but we should not speak of focal length, only of angle of view.
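
In numbers, the "equivalence" is an equivalence of angle only. A quick sketch with nominal sensor widths (17.3mm for m4/3, 36mm for full frame, both my assumptions for the example):

```python
import math

def hfov_deg(focal_mm, sensor_width_mm):
    # Horizontal angle of view of a rectilinear lens:
    # 2 * arctan(sensor_width / (2 * focal_length)), in degrees.
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# Similar ANGLE, very different focal lengths (and so different rendering):
print(round(hfov_deg(12, 17.3), 1))  # 12mm on m4/3
print(round(hfov_deg(24, 36.0), 1))  # 24mm on full frame
```

The two angles come out within a couple of degrees of each other, which is all the "24mm equivalent" label ever meant.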

Anyone who thinks that an extreme wide-angle like a 12mm and a mild 35mm can have the perspective rendering of a moderate 24mm wide-angle and a near-telephoto 70mm is so ignorant that they don't even deserve the right to write; and yet exactly that could be read in a Panasonic advertisement a few months ago for their excellent 12-35 f/2.8 lens.

What is the biggest mistake caused by the crop?

Many people, misled by the conversion, think that the crop multiplier also changes the focal length rather than just the angle of view, so they use lenses and focal lengths incorrectly.
There have always been optical rules in photography, and no post-production correction will change them.
When we speak of a Normal lens we mean the 50mm focal length, defined as normal because it corresponds to the perspective rendering of the human eye.
When you photograph with a 50mm, looking with one eye through the viewfinder and the other at reality, you get the same kind of view.

There is a very simple rule in photography:

do you want to see exactly as in reality? use a 50mm.
do you want to compress perspective? go up toward the telephotos.
do you want to expand perspective? go down toward the wide-angles.

The crop ALTERS ONLY the angle of view; it cannot change the shape of the internal lens elements, nor the distances between them.

Some manufacturers, such as Panasonic, add in-camera image correction to give the illusion that using lenses wider than the sensor warrants creates no perspective-distortion problems. The pity is that the algorithm only works on barrel distortion: it compensates for the barrel distortion of the ultra-wide and reduces the lateral deformation, but it cannot alter the perspective, the distance between the planes, and so on, and the image is therefore perceived differently.

So whether you are shooting stills or moving images, you should always choose a lens for the perspective rendering you want, and only in extreme cases pick a shorter focal length just to capture a wider frame.

When you use the wrong lens the results are obvious, especially on nearby elements and in portraits, where an 85mm is usually chosen (called the portrait lens par excellence) because it slightly compresses the perspective and flatters the face, besides blurring the background to reduce distractions.

The focal length makes all the difference. If you use a wide-angle, the face will come out deformed and rounded, the cheekbones inflated, the nose distorted, so the portrait will look more caricatured than natural. In the photo on the left, just look at how the eyes shift relative to the sides of the face.

So ultimately?

When you can choose a focal length, you do it for its rendering of perspective, not for the angle of view; so choose the classic 35-50-85-135mm for the different cinematic situations.

Moving to wider wide-angle lenses should be a deliberate choice to emphasize a certain effect of space and perspective, not a way to get a wider angle of view.
It is no coincidence that those who shoot with prime lenses (fixed focal lengths) move around to find the right frame; in cinema it is well known that sets often have movable walls, and even in indie productions one opens up the set and frames through the removable "ghost" wall.

But what about depth of field with the crop?

With the crop there is always the fear of losing depth-of-field control, memories of half-frame in photography or Super 8 in amateur film, but again these are errors born of superficial evaluation. It all stems from the fact that on small sensors, to manage the framing, you often use wide-angle lenses specified by their angle of view rather than their focal length, which forces the depth of field: it is easier to have everything in focus and harder to blur, BUT…
NOTE: ALL of this happens ONLY when we reason in terms of angle of view rather than focal length.

In fact, those who make this kind of demonstration show similar photographs taken with the same lens on different sensors; but with the crop in between that is not physically possible, so the self-styled demonstrator, to cover the same angle of view, has stepped back. That violates the rules of the comparison, because as every professional photographer knows, one of the elements governing depth of field is the distance to the subject in focus. So if I have a strong crop on a lens like a Noktor 0.95 and, to keep the same angle of view, I move back 5 meters from the subject, I can easily approach the hyperfocal; it is not the sensor that blurs less, but the traditional optical principles on which photography is based. Even a 6×6 shot from that distance would be less blurred.
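
The distance effect is easy to quantify with the standard approximation for depth of field when the subject is well short of the hyperfocal distance (the 0.015mm circle of confusion is my assumption for the example, not a figure from the article):

```python
def dof_mm(focal_mm, f_number, coc_mm, subject_mm):
    # Approximate total depth of field (subject distance << hyperfocal):
    # DoF ~ 2 * N * c * s^2 / f^2, all values in millimetres.
    return 2.0 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

# Same 50mm f/0.95 lens: stepping back from 1m to 6m multiplies the
# depth of field by 36. The distance does the work, not the sensor.
print(dof_mm(50, 0.95, 0.015, 1000))  # a sliver of sharpness at 1m
print(dof_mm(50, 0.95, 0.015, 6000))  # a far deeper zone at 6m
```

Depth of field grows with the square of the subject distance, which is exactly why stepping back to match the angle of view invalidates the comparison.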

To confirm this point, I can trivially attach some photographs taken with Micro 4/3 (a crop-2 sensor) which can nevertheless render remarkable out-of-focus gradations without problem, because it is the lens that blurs, not the sensor…

And who says otherwise?

Compressed perspective cheat sheet

Then articles come out, like the one you can see above, that believe they prove the opposite, demonstrating an even greater lack of common sense… They take a landscape image shot with a wide-angle, compare a faraway element with the same shot made with a telephoto, and think they have shown that perspective distortion depends not on the lens but on the shooting position… when the truth is the sum of the two things.

Moreover, by taking only the central part of the image, the distortion is even less noticeable, which makes the comparison even more useless and misleading. They also never explain why the elements draw closer to each other as the focal length rises…
[ironic mode on] maybe because, as you move away, they huddle together to stay closer to you… [ironic mode off]

Nikon 8mm

Take a picture with this lens, and I challenge you to take any part of the image and replicate it with a telephoto; however small a portion you pick, the light-gathering curve involves a distortion so strong that no compensation factor can match it.

Comparison of nearby elements

Perspective distortion is related to the angle covered by the lens, hence to its very shape: the greater the curvature, the greater the expansion of perspective. So it is normal that if you shoot a very distant element with a wide-angle, it covers such a narrow angle of the lens that the perspective distortion is reduced or nil. But these kinds of demonstrations are never done, because they cannot be, with elements close to the camera body, which is where the difference is felt most strongly.

I also challenge these self-styled experts to replicate with a wide-angle, simply by shifting the point of view, the repositioning of elements across the planes, like the branches behind the girl in the second photograph.

Too often these claims are built on an exception to the focal-length rule, such as shooting very distant elements, where the differences between short and long focal lengths shrink; just take elements a few meters away… and you immediately see that it is impossible to achieve the same result.


FinalCut Pro 7: today it is no longer Pro…

With a provocative title like this, I can imagine how many people, from fanboys to editors, will be ready to burn me at the stake, but… read the article carefully, and then let's see who really deserves the stake.

When Apple created the multimedia product named QuickTime, they created a very powerful tool: much more than a player, it is a set of libraries, codecs, and active audio and video streaming services that manipulate video in real time as it is loaded into memory. Building an editing program on top of QuickTime was simple.

Several years ago Apple created the Final Cut suite, combining its editing program with the Final Touch application acquired with the company Silicon Color in 2006, creating an editing and finishing system based on QuickTime ProRes. Final Touch became Color.
Color supports only ProRes, because it does not read system codecs but has its own internal decoder.

Fast, ultra-efficient, high-quality system at the time …

Let's jump ahead… in 2009 Apple releases the last version, FinalCutPro 7, a 32-bit package for the Snow Leopard operating system, based on QuickTime 7. Everything works fine, or nearly so; QuickTime X appears in the later updates, but by default the system still uses QuickTime 7.

2015, Yosemite, QuickTime X by default: many professional users have noticed that things have changed. Codecs that the system used to read are no longer supported, and the player insists on converting anything that is not H264 or ProRes… slowing down the viewing of movies and genuinely complicating life for anyone who has to use other codecs…

FinalCutPro 7 relies on QuickTime 7, on its ability to handle codecs and act as a host for various real-time processing layers, which on new systems does not always work, or not correctly…

The right QuickTime. On Yosemite we can still install QuickTime 7 (the Leopard version), because Apple still makes it available, but QuickTime X remains the default. To use QuickTime 7 properly we have to right-click a .mov file, open the file's information panel, change the application at the bottom where it says "Open with", and confirm that QuickTime 7 will always be used to open any .mov file.

This helps you work with FinalCutPro 7, but it doesn't solve every problem: not having been born for 64-bit systems, it allocates at most about 3 GB of RAM even if the machine has 64, which means editing with proxies even when you could work with the originals. I have also run into instabilities in various situations, particularly where files are being moved: while consolidating a project, the program crashed on me several times under modern systems, even freshly installed ones, while a Snow Leopard laptop completed the same operation without a single error or crash.

There are steps to be taken in life, and when dealing with computers, whatever your job, you have to keep up to date, not only with hardware and operating systems but also with programs. FinalCutPro 7 died in 2009; since then no updates have been made and plugins are no longer developed, so it has gradually become obsolete.

Apple developed a new program from scratch: FinalCut Pro X

  1. 64-bit
  2. GPU acceleration
  3. support for every codec on the system
  4. hundreds of supporting plugins
  5. a completely new concept, allowing media and edit management impossible before
  6. support for every resolution
  7. optimizations and presets to simplify video creation in time-constrained situations: news, reporting, etc.
  8. actively supported by hundreds of external software houses
  9. integrated into the system, so accessing internal resources such as images, music, and movies becomes child's play
  10. a very simple workflow even in the most complex projects.

Now, at this point, who should be burned at the stake?
Whoever developed QuickTime X, which cut out hundreds of codecs, the QuickTime VR technology, QuickTime 3D, the ability to easily edit movies directly from the player, and many other things… Because, for those who don't know, from the QuickTime 7 player, besides watching a movie, you can simply and directly do the following:

  • edit audio and video by cutting and pasting parts, frame-accurately, without recompression
  • add tracks and create a multi-track audio-video file
  • add multiple audio tracks and prepare a multi-language movie, like a DVD
  • add chapters and subtitles, DVD-style, but directly in the QT movie
  • add metadata tracks for information of all kinds
  • stack multiple video layers, with alpha channels supported and composited in different modes
  • extract parts of the movie, saving them as separate movies, either self-contained or as references, without recompression
  • create movie variations with real-time filters, direct or referenced
  • control about 200 features and parameters of a movie by driving it with a script.

If you want to pick on someone, you know who to pick on.

Nikon 85mm 1.8 H series, a lady (lens) that 50 years after its birth still offers much elegance

85mm 1.8 Nikon H series

In the digital world, where everything has to be pushed to the extreme, where everything has to be perfect, every image perfectly sharp, contrasted, and accentuated, I find much more elegance in vintage lenses of a certain type, which offer, yes, excellent sharpness, but a taste and elegance of image that I find in few modern lenses.

I had been looking for this historic lens for some time, because it ranks among the best 85mm lenses Nikon ever made, and it was made famous by Antonioni's film Blow-Up, in which David Hemmings used it to capture Vanessa Redgrave. The aura of magic surrounding it has been amplified by the many famous photographers who made it their workhorse, which makes it hard to find on the secondhand market in good condition at an affordable price.

Nikon produced this gem between 1964 and 1972: an 85mm 1.8 designed for portraiture but expressing light and a pleasing image in many other situations as well. The lens was born fully manual, with a solid all-metal body as they once were, ready to withstand gunfire, and with a pleasant focus ring, smooth and suitable for video use, not just photography.

With a Nikon-to-EOS EF adapter mounted, it was love at first sight: the smoothness, and at the same time the detail, this little lady offers is extraordinary.

Photographing a beautiful girl would have been too easy a test; the lens was born for that and would have flattered her. I preferred to stress it in other conditions, at long distances, although I did shoot a close-up of my cat to check its detail and cleanliness at near focusing distances.

Here are a few frames extracted from footage shot with a Blackmagic 4K in raw: 24 frames per second, the 16-bit sensor data saved as 12-bit log DNG, then developed in Lightroom as if they were photographs.

Each of these frames was captured at 400 ASA, the camera's native sensitivity, with the aperture between 1.8 and 5.6 depending on the available light; in most cases I worked wide open at 1.8, and despite this the images offer good sharpness. Of course, what you see here are simple JPEGs, in which a little of the original sharpness is lost.


BMC: Blackmagic Cinema Camera 2.5K

Classic BMC picture

In 2012 the market was shaken by an announcement: a camera recording raw with 13 stops of latitude for less than 3000 dollars, with a Canon EF mount…

Today the camera is in use by many people around the world, along with its younger sister (the BMC Pocket) and its older sister in 4K.
Let's look, on a practical level, at what this machine is and how it works, its strengths and weaknesses.

I worked as assistant director on the film "Quaffer"; with DoP Doriano Paolozza we used the BMC to shoot the entire film in raw 2.5K.

Thanks to 2.4 million recorded frames and about 20 TB of data stored on hard disks, I built up a little experience with the machine and can pass some judgment on the product.

In general, as value for money the machine is great: it offers a quality-to-price ratio unmatched by any other camera on the market. It has flaws, which will be corrected over time with firmware updates and/or hardware revisions.

Highlights

  • in very high contrast situations it manages to save enough information to recover a great deal from the RAW files
  • despite having a (noisy) fan under the body, even after hours of work in the sun it never missed a take or showed failures or overheating problems
  • compatibility with the Canon EF mount, with support for aperture control, focus, etc., makes lens management very easy
  • currently available with PL, EF, or passive m4/3 mount
  • the sturdy body, carved from a block of aluminum with various holes and threads, allows accessories to be mounted directly on the camera
  • the camera includes an HD-SDI output and an audio input to record from external sources
  • with an additional external battery attached, the internal battery is kept charged, so the external one is drained first and only then does the camera switch to the internal one
  • the names of the clip folders can be customized quickly from the camera, to avoid folders named with little more than dates and times; enter metadata so that you can quickly retrieve information about the clip you are working on
  • the camera does not allow deleting clips, which for some is a weak point; I consider it a strength, since it avoids fragmenting the media, which would risk dropped frames during shooting
  • raw recording as 12-bit log DNG from the 16-bit sensor

Weaknesses:

  • the camera's monitor is very dark, even at 100% brightness and with its simulated color range, so outdoors it is unusable
  • the camera records 2.5K only in raw DNG format, while in DNxHD and ProRes it records only FullHD (of excellent quality, however)
  • the rather small sensor (crop ×2.5) forces the use of extreme wide-angles indoors, which can limit camera movements; the passive m4/3 version, though, can take advantage of the Metabones Speed Booster adapter, which offers an extra stop and recovers much of the working angle of view

To date the camera is on sale for less than 1000 euros, and anyone who wants to make cinema seriously has the opportunity, with this camera and a few lenses, whether vintage S16 glass (of excellent quality, and with a more favorable crop ratio) or lenses from various makers, to produce work of the highest visual quality.

Many do not understand why 2.5K, i.e. 2400 × 1350 pixels. The answer is very simple: you get a cinematic 1.85 frame, and the abundance beyond classic 2K lets you stabilize the images and do a bit of reframing without losing quality or sharpness.
Blackmagic has thought through both the sensor and the practical choice of formats well.
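
The arithmetic is quick to verify (the 1998 × 1080 "flat" 2K container is my reference for the delivery format here, not a Blackmagic figure):

```python
# Headroom of the BMCC's 2400x1350 raw frame over a 1.85:1 2K delivery.
src_w, src_h = 2400, 1350   # BMCC 2.5K recording frame
dst_w, dst_h = 1998, 1080   # assumed 1.85:1 2K delivery frame

# Fraction of extra pixels available for stabilization / reframing:
margin_w = (src_w - dst_w) / dst_w
margin_h = (src_h - dst_h) / dst_h
print(round(margin_w * 100), round(margin_h * 100))  # roughly 20% and 25%
```

That is a generous margin: you can push a stabilizer or a reframe a fifth of the frame in either direction before touching the delivery area.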

