
Now, media technology research continued. First, I start by reviewing the article on the Kinescope. Then, I come across an interesting term: Filmizing.

20150911/https://en.wikipedia.org/wiki/Kinescope

Why interlaced scan? Again, this is because early CRT technology wasn’t good enough to display progressive scan at 60 frames per second; the circuitry and broadcast bandwidth could only handle the equivalent of 30 full frames per second. Interlacing splits each frame into two half-resolution fields, so the screen refreshes 60 times per second (taming flicker) while transmitting no more data than 30 progressive frames would.
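
To make fields and frames concrete, here is a minimal sketch in plain Python (the scanline data and function name are mine, not from the article) of weaving two interlaced fields back into one progressive frame:

```python
# Illustrative sketch: weave two interlaced fields into one progressive frame.
# A "field" holds only the even or only the odd scanlines of a full frame.

def weave(even_field, odd_field):
    """Interleave even and odd fields (lists of scanlines) into a frame."""
    frame = []
    for even_line, odd_line in zip(even_field, odd_field):
        frame.append(even_line)  # lines 0, 2, 4, ...
        frame.append(odd_line)   # lines 1, 3, 5, ...
    return frame

# 60 fields per second at half vertical resolution gives a 60 Hz flicker
# rate while using only the bandwidth of 30 full frames per second.
even = ["scanline 0", "scanline 2", "scanline 4"]
odd  = ["scanline 1", "scanline 3", "scanline 5"]
print(weave(even, odd))
```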

Filmizing as an intentional effect? Yep. Okay, so what are the differences between film and television? Film has a slower frame rate and a narrower shutter angle; film has a wider dynamic range (though the gap is closing with modern digital cameras); field of view is about the same; television’s depth of field is typically wider (less blurability); film can be color timed whereas television is mainly white-balanced; film grain noise is significantly different from PMT (photomultiplier tube)/digital sensor noise; and film features jump-and-weave whereas television does not. The frame-rate mismatch is the easiest difference to pin down numerically, as sketched below.
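
Since the frame-rate mismatch is numerically crisp, here is a minimal sketch, assuming the classic 3:2 pulldown cadence used by telecine (my illustration; the Filmizing article is not the source of this code):

```python
# Sketch of the frame-rate gap in concrete terms: 3:2 pulldown spreads
# 24 film frames across 60 video fields each second (2 fields, then 3,
# alternating), which is how telecine bridges film and NTSC television.

def pulldown_32(film_frames):
    """Map film frames to video fields using the 3:2 cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        repeats = 2 if i % 2 == 0 else 3
        fields.extend([frame] * repeats)
    return fields

frames = ["A", "B", "C", "D"]   # 4 film frames...
print(pulldown_32(frames))      # ...become 10 video fields
# 24 frames/s * (10 fields / 4 frames) = 60 fields/s
```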

“US productions most often use actual film for prime time dramas and situation comedy series and filmizing is more common outside North America. Video production is cheaper than film, in line with the traditionally lower budgets outside the US.” Americans have more money. Film is “richer.” “I know it’s richer, but it costs more money.” Plus, it is less convenient. That’s the main show-stopper for casual use. It’s interesting that the article goes into depth about how filmizing was once used to successfully convince viewers of the show Heartbeat that it was actually recorded on film, and when one episode was accidentally aired without filmizing, there was concern among fans that the style of the show would permanently change.

20150911/https://en.wikipedia.org/wiki/Filmizing
20150911/https://en.wikipedia.org/wiki/Color_grading
20150911/https://en.wikipedia.org/wiki/Film_tinting

What is shutter angle? Shutter angle / 360 degrees = exposure time / frame interval; equivalently, exposure time = (shutter angle / 360 degrees) × frame interval. It is a way of measuring how choppy a video is, or how much motion blur is apparent in a video. The sketch below puts numbers on it.
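
A quick sketch of that formula in plain Python (the values are illustrative):

```python
# Exposure time from shutter angle:
#   shutter_angle / 360 = exposure_time / frame_interval

def exposure_time(shutter_angle_deg, frame_rate):
    """Exposure time in seconds for a given shutter angle and frame rate."""
    frame_interval = 1.0 / frame_rate
    return (shutter_angle_deg / 360.0) * frame_interval

# The classic film look: 180 degrees at 24 fps -> 1/48 s of motion blur.
print(exposure_time(180, 24))   # 0.02083... ~= 1/48 s
# A narrow shutter angle shortens the exposure and looks choppier:
print(exposure_time(45, 24))    # 0.00520... ~= 1/192 s
```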

20150911/https://en.wikipedia.org/wiki/Shutter_angle

Where was that article on the Geneva drive being used to control film advancement in a camera? No, it wasn’t the Geneva drive article itself, it was some other tiny article. Oh, here it is!

20150911/https://en.wikipedia.org/wiki/Intermittent_mechanism

What’s reversal film? Oh, it’s film that develops directly into a positive image: basically transparent slides to be used with projectors.

20150911/https://en.wikipedia.org/wiki/Reversal_film

Remember, f-stop. Here, one f-stop of dynamic range means “black” (the dimmest possible non-totally-black color) is half the intensity of “white” (the brightest possible color). Two f-stops (or two stops of exposure) means 2^2 = 4: white is four times as intense as “black.” These are measures of dynamic range, and here we have photographers expressing dynamic range in terms of f-stops. F-stops on their own principally refer to the aperture setting. For apertures, increasing the f-number by one stop halves the light-gathering area, which means the f-number itself grows by a factor of √2 per stop. DIN, ASA, ISO film speeds… The arithmetic is sketched below.
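
A sketch of the stop arithmetic in plain Python (my illustration; the computed f-number series is approximate):

```python
import math

# Dynamic range in stops: each stop is a doubling of intensity, so
# stops = log2(white / black) for the brightest and dimmest usable levels.

def dynamic_range_stops(white, black):
    """Number of stops between the brightest and dimmest intensities."""
    return math.log2(white / black)

print(dynamic_range_stops(2, 1))    # 1.0 stop: white is twice black
print(dynamic_range_stops(4, 1))    # 2.0 stops: white is 2**2 = 4x black

# For apertures: one stop down halves the light-gathering area, so the
# f-number grows by sqrt(2) per stop, roughly the familiar series
# 1.4, 2, 2.8, 4, 5.6, 8.
f_numbers = [1.4 * math.sqrt(2) ** i for i in range(6)]
print([round(f, 1) for f in f_numbers])  # [1.4, 2.0, 2.8, 4.0, 5.6, 7.9]
```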

20150912/https://en.wikipedia.org/wiki/F-stop

Note optical resolution as in telescopes. “With telescopes, bigger is always better.” Oh, and shift lens?

20150912/https://en.wikipedia.org/wiki/View_camera
20150912/https://en.wikipedia.org/wiki/Perspective_control_lens

Color correction. Early systems adjusted the voltage gain. And, as we know, a display’s light output has a nonlinear relation to its drive voltage, so adjusting gain in the voltage domain in effect adjusted the luminosity of the signals as it would be perceived by a human. I think. Just as the same was so with voltages in analog modular synthesizers. “A happy coincidence between engineering and design.” Is it? I’ve got to do a fact check on this. Primary gain voltages of the three photomultiplier tubes.
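
For the fact check, here is my working assumption in code form: a CRT’s light output is commonly modeled as a power law of drive voltage, with gamma around 2.2 (an assumed figure; none of this is from the color grading article):

```python
# Sketch for the fact check: a CRT's light output is modeled as a power
# law of drive voltage: luminance ~ voltage ** gamma, gamma ~= 2.2 assumed.

GAMMA = 2.2

def luminance(voltage, gain=1.0):
    """Relative light output for a normalized drive voltage in [0, 1]."""
    return (gain * voltage) ** GAMMA

# A modest gain change in the voltage domain compounds in the light domain:
print(luminance(0.5))            # ~0.218
print(luminance(0.5, gain=1.1))  # ~0.268, a 10% voltage gain is ~23% more light
```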

“Military intelligence” is a counterintuitive term: “intelligence” here sometimes refers to just information. However, when referring to the practice of the field, the entire term is apt. Gathering information to help commanders make a decision, I like that. It fits very well with my other example in regard to physics.

20150911/https://en.wikipedia.org/wiki/Military_intelligence

The differences between the camera and the human eye. The major difference is that whereas a camera’s processor gets every single pixel, the human retina does internal processing and passes only about 10% of the information on to the rest of the brain. Then the information is cross-analyzed in the optic chiasm before it reaches the subconscious brain, and the subconscious brain is coupled tightly with the memory system. Not until all of this low-level processing is finished can the information move on to the conscious, high-level brain. Though the subconscious brain is reprogrammable, the retina in fact offers a fixed-function pipeline that can only be reconfigured in a finite number of ways. Also, the human eye cannot process both peripheral vision and central vision at the same time. Furthermore, the eye scans a surface rapidly, and persistence of vision obscures blinks. And as noted in nowyouknow (TODO ADD LINK), there is the stopped clock effect: eye movements are too fast for the brain to process, so processing is paused for a split second. Don’t scan the image too fast, though. During the scanning time, your brain just holds the image your eye last saw, since the motion is way too much for your brain to take in. Humans reconstruct scenes, whereas cameras can capture exact 2D images. In fact, humans cannot see the “pixels” of their own retina; it is physically impossible for a human to see these.

Humans adjust to dynamic range as the eye scans the image area. Well, that is when you are standing in the environment; when you look at a shrunken-down photo, your brain processes the image differently. Your eye has special logic to process patterns, but only when they appear at a certain angular size. Larger than that, and the algorithms fail.

20150911/http://petapixel.com/2012/11/17/the-camera-versus-the-human-eye/
20150911/http://www.cambridgeincolour.com/tutorials/cameras-vs-human-eye.htm
20150911/http://www.madsci.org/posts/archives/2001-11/1006763147.Ns.r.html

Is it really true that people like black-and-white photography better than color photography? Maybe sometimes. Sometimes. I remember times when I was very, very annoyed looking at black-and-white feature animations, feeling like I was dying for the missing colors. And I especially remember being very annoyed comparing the rainbow-colored Apple logo on the Macintosh SE computer keyboard with the black-and-white screen on the computer. “Oh my gosh, how did they print that color Apple? If the computer couldn’t even display color, they should have printed a black-and-white Apple in varying dithered patterns; after all, the computer couldn’t even display shades of gray, so the logo should have been constrained to dithers of black and white.” Later I would learn that the rainbow Apple logo was advertising for the Apple II computer’s color capabilities, which the Macintosh unfortunately did not have. Wow, that explains a lot. People don’t necessarily like black and white when given no other choice. Sometimes, it can get really annoying to see no color for long hours at a time.

50 frames per second might be fast enough for a movie, where the viewer has no control over the on-screen action, but we know better: for video games, faster than that is noticeably better, by a large margin, for most people. Is 1000 FPS the limit? Definitely not! Why not? Because you might want to transfer machine-readable data through a video interface, of course! Hence, the technology should be manufactured to go as fast as can feasibly and economically be done. The arithmetic below shows what that demands of the link.
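
Some back-of-envelope arithmetic in plain Python (the resolution and bit depth are illustrative assumptions, not tied to any particular standard):

```python
# Raw bit rate needed to push uncompressed video at high frame rates,
# e.g. for machine-readable data over a video interface.

def raw_bitrate(width, height, bits_per_pixel, fps):
    """Uncompressed video bit rate in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

print(raw_bitrate(1920, 1080, 24, 60))    # ~3.0 Gbit/s at a common 60 fps
print(raw_bitrate(1920, 1080, 24, 1000))  # ~49.8 Gbit/s at 1000 fps
```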

I really like this article. It’s what I’ve been looking for for a long time. I tried to argue that you could create a really high-resolution computer monitor by repurposing a commercial high-volume laser printer, and that you would need a monitor able to display over 1000 frames per second in order to convey realistic motion. Sometimes people wouldn’t really believe me. But you know what? It’s true, and I was right. We do have the technology to do so today; however, the reason you don’t see that technology, much less hear of it, is that it would cost way too much money to be worth commercializing. This article hits the nail right on the head and puts this in concrete terms. We are still very far away from having cameras and display devices that can compete with the quality of the human eye, much less two of them. Such a setup would cost somewhere in the ballpark of $100 million with today’s technology.

20150911/http://www.premiumbeat.com/blog/if-the-human-eye-was-a-camera-how-much-would-it-cost/

Take a peek at this article. It turns out it wasn’t as useful as I was hoping it would be.

20150911/http://www.cambridgeincolour.com/tutorials/graduated-neutral-density-filters.htm

Renote, just for safekeeping.

20150911/https://en.wikipedia.org/wiki/Exposure_range