
4k, the old frontier of the new, waiting for the 8k TVs we won’t be able to see

While I am a proponent of 4k for filming, because it offers so much material to work with and so many advantages in any manipulation of the footage, I am convinced that it is yet another 21st-century hoax to sell you the new TV and the same old content repackaged as new.

Why is 4k useless in televisions, cell phones, amateur cameras?

Not to mention that if the footage is never used for post-production, 4k is a waste of resources: CPU load on the capturing device, memory space, overheating. On a device below a certain size, 4k is simply pointless. The visual acuity of an average human being allows them to resolve roughly 1/10 of a millimeter at about 50 cm; an average-sized 4k panel has a density of about 110 ppi, i.e. on a 40-inch screen we are talking about roughly 4 pixels per linear millimeter. Too bad that a 40-inch set will be watched from at least 150 cm away, where the eye’s resolving power has already dropped to a few tenths of a millimeter, so the panel carries more information than an average viewer can actually perceive…
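
For the curious, here is a quick back-of-the-envelope check of those numbers; a rough sketch that assumes a 40-inch 16:9 UHD (3840×2160) panel and the commonly cited one-arcminute resolving power of a normal eye, nothing more scientific than that.

```python
# Rough check of pixel pitch vs what the eye can resolve at typical distances.
# Assumes a 40-inch 16:9 UHD panel and ~1 arcminute of visual acuity.
import math

diag_in    = 40.0                 # panel diagonal, inches
h_px       = 3840                 # UHD horizontal resolution
aspect     = 16 / 9

width_in   = diag_in * aspect / math.hypot(aspect, 1)   # ~34.9 in
ppi        = h_px / width_in                            # ~110 ppi
pitch_mm   = 25.4 / ppi                                 # ~0.23 mm per pixel

acuity_rad = math.radians(1 / 60)                       # 1 arcminute
print(f"pixel pitch: {pitch_mm:.2f} mm  (~{1 / pitch_mm:.1f} px per mm)")
for dist_cm in (50, 150, 300):
    resolvable_mm = math.tan(acuity_rad) * dist_cm * 10
    print(f"at {dist_cm:>3} cm the eye resolves ~{resolvable_mm:.2f} mm")
# From 150 cm the smallest detail the eye resolves is already bigger than
# one pixel, so the extra 4k detail goes unseen from a normal couch distance.
```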

This calculation only holds if we have a pure 4k source, because if on that TV we watch a 4k movie from streaming, 4k YouTube, a cell phone or other sources, we won’t actually get all that information: the panel offers that definition, but the transmitted data doesn’t carry it, so the TV will be completely useless…

so why buy a 4k television today?

Let’s do a very simple and quick analysis of the pros and cons:

Pros:

  1. it is a great high-resolution digital frame for your photographs
  2. if I have a real 4k camera I can see my pictures in their full glory, provided I stand very close to the 40-inch set I bought.

Cons:

  1. there is no real 4k content yet to take advantage of a 4k TV with
  2. the 4k Blu-ray standard is still theoretical, and there are no 4k Blu-rays on the market
  3. there are still no movies shot entirely in 4k, so you would still be watching “bloated”, non-native 4k movies
  4. 4k streaming actually offers less than 2.7k of effective resolution; the rest comes from interpolation, so again useless
  5. 4k broadcasting is theory; in reality, to date there are few channels even in FullHD (1920×1080), and most are at best HD (1280×720)
  6. 4k TVs do not have digital decoders able to receive 4k signals, because 4k transmission standards have not yet been defined, so by the time 4k channels exist our TV will already be obsolete and unable to show them at 4k resolution
  7. playing FullHD movies gives either a blurry picture (because the image is inflated four times) or an over-sharpened one, because to mask the softness the image is processed, actually eating away several levels of detail during the sharpening; so it is a poor vehicle for watching a Blu-ray
  8. it costs much more than a quality FullHD equivalent, while not offering, to date, the ability to display truly different images
  9. amateur 4k footage from cell phones, cameras, etc. may not have enough detail to deliver quality images that take advantage of the 4k matrix
  10. to perceive the real difference between a 4k TV and a FullHD TV you have to go from 50 inches up, which however you will have to watch from farther away, and then you are back to the initial absurdity: useless for most people, who do not have the visual ability to appreciate the detail at a physiological level.

why is 4k not needed in cinema?

4k projection is a bit of a scam, because in reality most movies are shot with 2k digital cameras, such as the Alexa, so projecting them in 4k brings no actual advantage; it is an unnecessary flexing of the whole system’s muscles, because it requires four times the storage of 2k and more resources for playback and content management, without giving any real benefit.

But can you really not see the difference?

Nolan bragged that he shot The Dark Knight Rises and Interstellar on IMAX (an ultra-high-definition film format), and a lot of people said they noticed the difference…
I’d be curious to ask those same people in which shots they noticed it, because neither film was shot entirely in IMAX: too expensive, the cameras too big and unwieldy, etc. etc. So traditional S35 shots were alternated with IMAX shots (mostly exteriors, where it was easier to handle the bulkier cameras)… especially since, in most situations, these films were seen in digital theaters on 2k projectors, where everything was flattened and leveled down.

Another point in favor of the overall mix is that many films undergo very heavy digital post-production, even if only at the level of editing and color correction, so that in short takes one can no longer tell footage from DSLRs, GoPros and camcorders apart from that of professional cinema cameras. All thanks to the fact that each device is used in the situation that best extracts the visual quality its sensor can deliver.

so, is it too early for 4k?

well, you shoot in 4k for the future, and because you can extract a very high quality 2k and FullHD from it, but using 4k directly at the home level is a waste, because it is not directly perceptible to the eye in most home situations.

why then are 4k televisions and all the 4k peripherals proliferating?

they have to sell you something, don’t they?
In marketing, numbers have always been used to give a tangible perception of value, even when those numbers have no real connection to the value of the product.

For example, burners started at 2x for CDs and went up to 52x for DVDs, but nobody tells you that 52x media does not really exist, because burning is a balance between write speed and the number of errors introduced: depending on the quality of the media, the speed is dosed so as to introduce a minimal number of write errors, allowing the data to be read back and, thanks to an error-correction system, reconstructed into the original data. Error correction on reads was originally conceived to compensate for manufacturing defects and/or scratches or damage to the media; over time it has become a way to speed up writing, relying on the fact that in the end the data can still be read.
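
To visualize the principle in its very simplest form, here is a toy XOR-parity example; CDs and DVDs actually use far more powerful Reed-Solomon codes, this only shows the idea that stored redundancy lets you rebuild damaged data.

```python
# Toy illustration of error correction: keep a parity block so that one
# damaged/lost block can be reconstructed (same principle as RAID parity,
# much simpler than the Reed-Solomon codes real optical media use).
blocks = [b"film", b"foto", b"docs"]                 # three data blocks

# parity block: byte-wise XOR of all the data blocks
parity = bytes(a ^ b ^ c for a, b, c in zip(*blocks))

# simulate "scratching" block 1 ...
lost_index = 1
survivors = [blk for i, blk in enumerate(blocks) if i != lost_index]

# ... and rebuild it from the parity block plus the surviving blocks
rebuilt = bytes(p ^ a ^ b for p, a, b in zip(parity, *survivors))
print(rebuilt)                                       # b'foto'  -> recovered
```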

Where does the problem lie? In the fact that if you push a disc to the limit of readability because you want to burn it at 52x instead of 8x, slight wear is enough to make the written data unreadable. Not only that: slow writing applies the write laser differently and, by introducing fewer errors, also makes the disc more resistant to heavier wear, UV exposure, deformation of the writable layer, and so on.
Which makes you think about how superficially people write data to a medium, without any notion of how to do it or how to store it… good luck to the Formula 1 burners: maybe after 3-4 months they will still be able to read something back from their discs.

another example, megapixels in cameras:

It has always seemed that megapixels are an indication of quality, but if you squeeze 40 megapixels onto a 1/4-inch sensor you cannot expect the same cleanliness and light as 12 megapixels on a full-frame sensor, because the light captured by each photosite is far greater on the larger one. In reality it is not only the megapixel count but also the ability of each photosite to capture information, and the area it covers, that determine the actual quality of the captured image; but the concept is too complicated, so for the masses megapixels = quality.
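
To put rough numbers on it, here is a quick back-of-the-envelope comparison; a sketch that assumes a 1/4-type sensor of roughly 3.6 × 2.7 mm and a full-frame sensor of 36 × 24 mm (exact dimensions vary slightly by manufacturer).

```python
# Compare the light-gathering area available to each photosite.
small_w, small_h, small_mp = 3.6, 2.7, 40e6     # tiny 40-megapixel sensor
ff_w, ff_h, ff_mp          = 36.0, 24.0, 12e6   # full-frame 12-megapixel sensor

small_site_um2 = (small_w * small_h) / small_mp * 1e6   # mm^2 -> um^2 per photosite
ff_site_um2    = (ff_w * ff_h) / ff_mp * 1e6

print(f"1/4-type, 40 MP  : {small_site_um2:6.2f} um^2 per photosite")
print(f"full frame, 12 MP: {ff_site_um2:6.2f} um^2 per photosite")
print(f"each full-frame photosite collects roughly "
      f"{ff_site_um2 / small_site_um2:.0f}x more light")
```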

I still remember when I gave my sister a three-megapixel compact camera, in a world where several 5-6 megapixel cameras had already come out, yet the photographs and their detail were unmatched by the equivalents in that price range, because some of them interpolated, some had more photosites but less sensitive ones, and so on.

Today one of the battlegrounds for cameras and camcorders is sensitivity (actually it was 25 years ago too, since even then people talked about shooting by candlelight).
If it can’t shoot in the dark, and I’m not talking about natural light, I’m talking about the dark, then the camera is not worth having… so a Red or an Alexa, the digital cinema cameras used to make movies, with a native sensitivity of only 800 ISO, must be duds…

Why is it already late to buy a 4k television?

let’s say mine is a provocative statement, but not too provocative…
because the Japanese are already experimenting with 8k broadcasts, so why buy an outdated product? You might as well go straight to 8k 😀

Jokes aside, the Japanese have always been at the forefront of experimentation. I remember the first FullHD shooting and viewing system I saw in person at SIM Audio HiFi in Milan in 1992, a joint experiment between RAI and the Japanese giant Ikegami; ironically I captured those images with my very powerful 200-line VHS, and that quality and power seemed so far out of reach.

Well before those pioneers, back in 1986, Francis Ford Coppola, produced by George Lucas, made a special 4D video clip (3D plus what is now called augmented reality) using experimental HD cameras and starring the great Michael Jackson: Captain EO.
This is to point out that if HD already existed as a technology in 1986 and today, almost 30 years later, it is still not the TV standard, we should think carefully about how far 4k can really penetrate into our homes within a couple of years.
Above all, consider that 4k does not only mean changing the receivers, which are inexpensive, but changing all the production and broadcasting systems, and for the broadcasters that would be a monstrous investment, which I doubt they will make so quickly, since many are still at standard definition.

Digital miracles

When shooting with a normal (amateur) camera, a DSLR or another low-cost device, the footage is captured at a decent quality, designed to be viewed and edited as is; then, to optimize quality against the space occupied in memory, the color is subsampled, so that less color information has to be recorded.

(image: chroma subsampling ratios)

So a classic video has its color recorded with 4:2:0 sampling; this means that once decoded into RGB the red channel will carry much less information than the others. In a normal situation this causes no particular problem and most people will not notice it, but…

during post-production, saturating the colors brings out the problem, causing an increase in blocking, i.e. the display of the codec’s compression blocks, as you can see in the image below.
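
To make the information loss concrete, here is a tiny numpy sketch (real codecs add DCT compression on top of the subsampling, which is where the visible blocks actually come from; the conversion weights are the standard BT.601 ones):

```python
# What 4:2:0 does to fine red detail: chroma is stored at half resolution
# in both directions, so red information ends up smeared across pixels.
import numpy as np

# synthetic frame: thin pure-red lines alternating with gray columns (RGB 0..1)
h, w = 8, 8
rgb = np.full((h, w, 3), 0.5)
rgb[:, ::2] = [1.0, 0.0, 0.0]

# RGB -> Y'CbCr (BT.601), then keep only 1 chroma sample out of every 4 (4:2:0)
y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
cr = 0.713 * (rgb[..., 0] - y)
cr420 = cr[::2, ::2]

# bring chroma back to full size the cheap way (nearest neighbor) and rebuild red
cr_up = cr420.repeat(2, axis=0).repeat(2, axis=1)
r_rec = np.clip(y + 1.403 * cr_up, 0, 1)

print("original red row:     ", np.round(rgb[0, :, 0], 2))
print("reconstructed red row:", np.round(r_rec[0], 2))
# the red/gray alternation has been wiped out: the red detail lived in Cr,
# and Cr only survived at a quarter of the resolution.
```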

why red is a bad color for filmmakers

There are colors based on the red channel that can give obvious problems, as you see in the image, ruining a shot.
Sometimes, and I stress sometimes, you can save these images by converting them in the most appropriate way, using utilities that upsample the red channel so as to reduce those blocking artifacts.

There are different tools that act on these channels to reduce the defects; depending on the tools you use, you can rely on different solutions:

  • Inside the Red Giant Shooter Suite, the Deartifacter tool
  • The standalone 5D2RGB conversion utility, which converts 4:2:0 files into ProRes 4:2:2 files
  • The old HD Link program from the Cineform Pro and Premium suites (no longer available, thanks to GoPro killing the suite).

I personally recommend the Red Giant suite, because it gives you more control, as well as many other tools useful to any filmmaker.
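
For the curious, a rough illustration of the principle such utilities exploit, assuming the core trick is simply smarter chroma upsampling (the tools listed above obviously do far more than this; scipy is used here only for convenience):

```python
# Replicating each low-res chroma sample into a 2x2 block gives hard-edged
# chroma blocks; interpolating the chroma plane instead softens those edges.
import numpy as np
from scipy.ndimage import zoom

def upsample_chroma(plane_420: np.ndarray, order: int = 1) -> np.ndarray:
    """Upsample a half-resolution chroma plane back to full resolution.

    order=0 -> nearest neighbor (blocky), order=1 -> bilinear, order=3 -> cubic.
    """
    return zoom(plane_420, 2, order=order)

# toy half-resolution Cr plane containing a sharp red edge
cr_420 = np.array([[0.0, 0.0, 0.5, 0.5]] * 4)

print("nearest neighbor (blocky):\n", np.round(upsample_chroma(cr_420, 0), 2))
print("bilinear (smoother edge):\n",  np.round(upsample_chroma(cr_420, 1), 2))
```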

The importance of backup

In a world where so many words are used out of all proportion, the word Cloud above all, backing up your data has never been more crucial.

If something happens to your computer, your smartphone, your camera’s memory cards… you would lose everything: your data, your memories, your work…

I know what most people think: “It won’t happen to me, the data is safe on my hard drive, I have a copy in the cloud…” and so on…

Well… I’m going to tell you something disturbing… none of these storage systems is secure, no one guarantees the salvation of your data, and above all, when you activate one of these services, or when you buy a hard drive or a card, the only guarantee you are given is that, in the case of some cards, in the event of data loss they will give you back nothing more than a new card…

If you are on my site it means we have something in common, for example 3D animation, video, images, photography, and therefore losing your data can be no small problem…

Many of you have a backup system and feel safe…

Well, do a little research on the loss of the Toy Story 2 project files, and then come back here… you may find that no one is safe, since an extraordinary company like Pixar risked losing the Toy Story 2 assets to a trivial filesystem problem, and they have hundreds of servers and technicians dedicated to handling and managing backups… Now…

I have a RAID, I’m safe…

I used to hear these words often, and I was convinced too; too bad it was precisely the RAID that betrayed me 10 years ago, when identical disks (because the super-experts advise using identical disks for RAIDs, even better with consecutive serial numbers, “so they work better”, say the ignorant) abandoned me at the same time, so my mirror RAID waved me goodbye… through my tears…

Then I relied on a wider RAID, with 4 redundancy disks, guaranteed by the super-experts; too bad that this time the damage was caused by a firmware defect on brand-new disks, an entire pallet of hundreds with the same defect, acknowledged by the manufacturer, but by then my data was already dead: within a few minutes the heads began to crash into the platters, and in a short time the damage had spread beyond what the RAID’s redundancy could recover. More tears shed, about 6 years ago…

A solution?

No one has the ultimate solution; I can only describe what I use to back up my data: three copies of the data, one local on the computer and two on external hard drives, updated on alternate days.

Each block of disks is of a different brand and a different manufacturer (some brands are produced by the same manufacturers, with the same chipsets and hardware), to avoid shared chipset and firmware defects.

What do I use to keep my backups up to date?

Under Windows and under Mac there are several packages that keep data in sync, so you don’t have to update copies by hand, because it is impossible to remember every single file changed each time.

Under Mac I used an app called Synkron, fine as long as you are on High Sierra; from Mojave onwards it gives a lot of trouble, AVOID IT. Under Windows I use Allway Sync. For both OSes an interesting solution is FreeFileSync.

Both have automation systems to synchronize multiple folders, whether local, on the network or in the cloud.

Edit 2020: Synkron seems to no longer be updated by its author and does not work properly under Catalina; I suggest another interesting free product for Windows, macOS and Linux called FreeFileSync.
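
Under the hood these tools all do a variation of the same thing; here is a minimal one-way mirror sketch in Python (the paths are purely illustrative, and real tools add two-way sync, conflict handling, filters and scheduling):

```python
# Minimal one-way mirror: copy files that are missing, newer or of a
# different size from SRC to DST. Run it against each backup disk in turn.
import os
import shutil

SRC = "/Volumes/WorkDisk/Projects"      # hypothetical source folder
DST = "/Volumes/BackupA/Projects"       # hypothetical backup folder

def mirror(src: str, dst: str) -> None:
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # copy only if missing, newer, or a different size
            if (not os.path.exists(d)
                    or os.path.getmtime(s) > os.path.getmtime(d)
                    or os.path.getsize(s) != os.path.getsize(d)):
                shutil.copy2(s, d)      # copy2 also preserves timestamps

if __name__ == "__main__":
    mirror(SRC, DST)
```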

USB 3.0 … it depends, let’s say 2.5, oh well…


USB 3.0: an evolution of the standard that increases data-transfer speed xx times.

Everyone is happy: copying data, securing our data, photographs and movies now takes far less time.

The market is full of USB 3.0 storage peripherals, laptops and desktops have almost exclusively USB 3.0 ports: a paradise… almost…

The theory says the 3.0 standard takes us from 60 MB/s to 640 MB/s, so we are talking about more than ten times the speed when transferring data between devices.

The theory, but what about the practice?

That is the theory, while reality is quite different: real performance is indeed far superior to USB 2.0, but there are bottlenecks that are often not considered (see the sketch after this list):

  • the speed of the chipset of the computer’s USB 3.0 controller
  • the speed of the chipset of the external drive’s controller
  • the speed of the media the data is copied from
  • the speed of the media the data is copied to
  • whether, if the source and destination media share the same controller, the chipset is able to distribute the data stream evenly between them.
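
A back-of-the-envelope sketch of why this matters: the slowest link in the chain decides the real copy time (the speeds below are illustrative figures, not measurements of any specific device).

```python
# The effective speed of a copy is the minimum of all the links involved.
SIZE_GB = 100                                     # amount of data to copy

links_MBps = {
    "USB 3.0 bus (theoretical)":   640,
    "enclosure chipset":           400,
    "source disk (5400 rpm)":      100,
    "destination disk (3900 rpm)":  60,
}

bottleneck = min(links_MBps, key=links_MBps.get)
effective  = links_MBps[bottleneck]

for name, speed in links_MBps.items():
    minutes = SIZE_GB * 1024 / speed / 60
    print(f"{name:30s} {speed:4d} MB/s -> {minutes:5.1f} min if it were the only limit")

print(f"\nreal transfer is limited by the {bottleneck}: "
      f"about {SIZE_GB * 1024 / effective / 60:.0f} minutes for {SIZE_GB} GB, "
      "however fast the other links are.")
```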

Let’s take a practical example: I buy an external USB 3.0 disk from well-known brand XX (considering the speed at which models and devices change, it makes no sense to name brand and model, since the same model bought several times contained different disks of different speeds), I try to copy some data and find it decidedly slow…
I try changing the computer port, nothing; I try changing the computer, nothing; I try changing the data source, nothing… Being a stubborn person, I open the disk enclosure (voiding the warranty, but whatever) and discover that the disk inside is a 3900 rpm unit, a rugged, low-speed disk, which for a 2.5-inch laptop drive is great because it reduces the chance of damage from bumps and falls while spinning, but which reduces the actual performance during copies.

Now, in most cases single mechanical disks don’t have the capacity to saturate the bandwidth of SATA or USB 3.0, but if I use RAIDs, where the performance of the disks adds up, I might even reach it. The average person never runs into this kind of problem, nor especially notices the differences.

On the other hand, those who have to handle a lot of data professionally (data backups, footage backups, etc.) have to take several technical factors into account, not in isolation but combined with each other, because a fast disk with little cache can be outperformed by a slightly slower disk with more cache; and disk capacity affects performance, because denser platters at the same RPM deliver more data per rotation, so drives tend to get faster as capacity goes up.

The incompatibilities that didn’t exist on USB 2.0

In a market where everyone is competing to offer the cheapest USB 3.0 product, it feels like paradise, but…
Not everyone knows that there are more or less serious incompatibilities between the chipsets of certain motherboard controllers and those of external enclosures/NAS/disks.

After a series of problems with different motherboards dropping different disks, I did a bit of research and found out that the various chipset manufacturers pass the buck among themselves over responsibility for drives disconnecting and/or communication problems between them. There are hundreds of threads in computer forums pointing out that the riskiest pairings are between these chipsets:

– JMICRON JMS539 + NEC/RENESAS D720200
– JMICRON JMS551 + NEC/RENESAS D720200
– JMICRON JMS539 + ETRON EJ168A
– JMICRON JMS551 + ETRON EJ168A

When you combine these chipsets the risk, or rather the certainty, since the behavior is quite consistent, is that the connected device will suffer slowdowns and disconnections every 10-15 minutes.

The palliative is to keep the drivers for both chipsets up to date and to disable any power saving on the drives and on the system features related to the chipsets. There are firmware updates on the chipset manufacturers’ sites, with which you can hope to reduce the problems.

Why is it important to know which chipset we are using?

Because, depending on the products, we can have multiple chipsets on the same machine. For example, the Gigabyte board I used before had two different chipsets, and with an add-in card I introduced a third chipset that was not among the incriminated ones. The current Asus board has three different USB 3.0 chipsets, so I have to be careful which USB 3.0 port I use for external hard drives: on two of them I have the incriminated chipsets, so if I connect the WD mini drive, which doesn’t have the problematic controller, it’s fine, but if I connect a NAS (I have three, two 4-bay and one 8-bay) I have to use the third set of USB 3.0 ports, which however are external ports, so I bought an external adapter board to bring them around the back, within reach of the connectors; otherwise the disks in the NAS disconnect every 15 minutes.

So it can be concluded that:

the USB 3.0 standard allows us to copy data at UP TO 640 MB/s, as long as we copy data from a disk connected to a chipset DIFFERENT from the receiving one.

What can I do to optimize data transfer?

  • use disks connected to different chipsets and standards, for example copying from internal SATA disks to or from fast USB 3.0 disks
  • use external USB 3.0 disks on two different chipsets, so that no single chipset has to handle both the incoming and the outgoing data stream and slow down
  • disable any kind of power saving on internal and external disks
  • disable antivirus checking on the data coming in from disk X (only for data you know is safe)
  • use software that is optimized for copying data and that performs a verification pass, so you can be certain the copy is intact (see the sketch below).
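
A minimal sketch of what such a “verified copy” amounts to, assuming the goal is simply to be certain the destination file is bit-for-bit identical to the source (dedicated tools add queues, retries and logging; the paths below are purely illustrative):

```python
# Copy a file, then re-read both copies and compare their SHA-256 hashes.
import hashlib
import shutil

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so huge video files don't fill the RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def verified_copy(src: str, dst: str) -> None:
    shutil.copy2(src, dst)                        # copy data and timestamps
    if sha256_of(src) != sha256_of(dst):          # independent re-read of both
        raise IOError(f"verification failed: {dst} does not match {src}")

if __name__ == "__main__":
    verified_copy("footage/clip001.mov", "/Volumes/BackupA/clip001.mov")
```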

APS-C lenses, these mysterious hybrid creatures…

But when I buy an APS-C lens, do I always get the crop?

Crop arises when we have a sensor that does not cover the whole image circle projected by the rear element of the lens (see the article on crop), whereas when a lens is born for APS-C it is made to properly cover the size of the APS-C sensor.

As you can see in the image on the left, when an APS-C lens is made, the rear element that focuses the light rays onto the sensor is sized to match the smaller sensor.

The fact that APS-C lenses have a large front element, larger than the sensor, does not mean that the rear is not resized for proper coverage; otherwise there would be no point in making an APS-C lens at all, they would only make full-frame lenses.

All the companies that make lenses have in their catalog lenses optimized for full frame and lenses optimized for APS-C, so applying the crop concept of full-frame lenses to APS-C lenses may be a wrong concept to begin with.

In the same image you can also see the coverage of a cine sensor such as that of the BlackMagic Cinema Camera 4k, which is declared to be S35, slightly smaller than APS-C, by about half a millimeter in width and a millimeter and a half in height; this means that the lens is slightly generous with respect to the sensor, just enough to avoid any risk of vignetting with a wider-angle lens, but at the same time without risking crop factors that would force strange calculations or the purchase of extreme wide-angle lenses.

This size factor allows me to say with some confidence that on S35 cameras like the BlackMagic Cinema Camera, quality lenses designed for APS-C would be the optimal choice in terms of focal-length-to-sensor ratio.

Why do I use the conditional? Because, having done a bit of practical experimentation with multiple lenses, I have found that lens manufacturers really seem to want to hurt their users… depending on the lens, even from the same manufacturer, there are APS-C lenses that introduce the crop and others that don’t, depending on how the lens groups inside are built and positioned.
So, in a nutshell… you have to try and compare lenses when you get them, to see how and whether they exhibit a crop factor.

An example? Canon’s excellent 17-55 f/2.8 is an APS-C lens designed for APS-C sensors, but in reality only the rear element has been optimized for the APS-C format; in the position and arrangement of its internal elements it keeps the crop, so if you set this lens to 55mm you get the same crop as if you had mounted a 55mm full-frame lens…
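
For reference, this is the standard field-of-view equivalence people mean when they talk about “crop”; a sketch that assumes Canon APS-C dimensions of roughly 22.3 × 14.9 mm against a 36 × 24 mm full frame.

```python
# Crop factor = ratio of the sensor diagonals; multiply the focal length by it
# to get the full-frame focal length with the same angle of view.
import math

def diagonal(w_mm: float, h_mm: float) -> float:
    return math.hypot(w_mm, h_mm)

FF_DIAG   = diagonal(36.0, 24.0)        # ~43.3 mm
APSC_DIAG = diagonal(22.3, 14.9)        # ~26.8 mm (Canon APS-C)
CROP      = FF_DIAG / APSC_DIAG         # ~1.6

for focal in (17, 55):
    print(f"{focal} mm on Canon APS-C frames like a "
          f"{focal * CROP:.0f} mm lens on full frame")
# 17 mm -> ~27 mm equivalent, 55 mm -> ~89 mm equivalent
```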

For the proponents of the idea that focal length and angle of view are determined only by the spacing of the internal elements, I would point out that the different shape of the lens elements determines the ability to collect light and convey rays from multiple directions: the most extreme wide-angle lenses are called fish eyes not only for their angular coverage but also for the dome shaped like a fish’s eye. And if it really were only a matter of spacing, please explain to me the existence of focal multipliers, which double the focal length without actually moving the lens away from the sensor by the corresponding distance (a 200mm becomes a 400mm, but I don’t make the tele 20cm longer), and of focal reducers, which have existed since the 1940s, either placed in front of the lens (from anamorphic adapters to additional wide-angle converters) or, like the Speed Boosters from Metabones, behind it, recovering information that supposedly could not exist.
I reiterate the point that there are lenses created to the size of a sensor and lenses cut down to the size of a sensor, so the results differ and confirm both the crop and the non-crop camps; it is simply technically cheaper to cut a lens down to a sensor’s size, or to apply the crop directly, than to introduce a focal reducer and obtain the true focal length both as focal length (perspective rendering) and as focal angle (the angular field of view).

