Techniques and Technologies

Click on images to link.

Latest additions 2/6/12

There is a lot of HOW going around these days. How do I change this, tweak that, achieve the other? This page attempts to bring you some answers.

Audio Coolness for HDSLR 6/2/10

Zoom has just announced their H1 recorder for hand-held or camera-mounted stereo recording. At $99, your HDSLR needs it.

As with prior Zoom recorders, the stereo mic configuration is created by two cardioid mics at a 90° cross angle, yielding superior stereo imaging from a central location. With the H1, the mics are caged against mechanical contact, and the shape is much more like a hand mic.

Ergonomics of the H1 make hand-holding much easier than previous models, and the tripod screw socket on the back lets you easily adapt it to the hot shoe atop your HDSLR.

Light at just 2 ounces, it runs on one AA cell (10 hours of recording) and can store up to 50 hours on a 32GB Micro SD card (16-bit/44.1 kHz). Audio quality is supreme. Options for 16- and 24-bit recording are available at 96 kHz, 48 kHz and 44.1 kHz. A low cut filter plus a foam puff wind sock control breeze noises. For the long-winded, it can record in MP3 format from 48 to 320 kbps.
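That 50-hour figure checks out with simple arithmetic. Here's a quick sketch (the function name is ours; card formatting overhead is ignored):

```python
# Rough recording-time estimate for uncompressed 16-bit stereo WAV.
# Assumptions: 44.1 kHz sample rate, 2 channels, decimal gigabytes
# (card makers count 1 GB = 1,000,000,000 bytes), no card overhead.
def recording_hours(card_gb, bit_depth=16, sample_rate=44_100, channels=2):
    bytes_per_sec = sample_rate * channels * (bit_depth // 8)
    card_bytes = card_gb * 1_000_000_000
    return card_bytes / bytes_per_sec / 3600

print(round(recording_hours(32)))  # roughly 50 hours on a 32GB card
```

At 24-bit/96 kHz the same card holds far less, which is why the 16-bit/44.1 kHz figure is the one quoted.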

Many will want to use the H1 for double-system sound recording, but if you wish, audio may be routed out of the H1 with a short cable to your HDSLR’s stereo input, if yours has one.

Features include headphone out, external mic in, auto or manual levels, track marker, built-in reference speaker, Hi-Speed USB port, high SPL tolerance, and an optional accessory package includes a table tripod mic stand, windscreen, AC adapter and padded case. There’s even a lanyard mount so you can carry it around your neck.

Available this summer.

Shooting for Slit-Scan 4/1/10

The Photoshop software is not ready yet. But that doesn’t mean you must do nothing.

Here are the seven basic guidelines to shoot scenes with an HDSLR for future Slit-Scan processing:

  1. The image size is a function of image time. The number of frames in the shot must be rather long for HDSLR shooting: if you wish to make an image that is eventually going to show up as 2000 pixels wide, you must shoot a scene at least 2000 frames long. Shorter scenes will produce narrower images.

  2. Slowly moving subject matter spreads out wider in the final image. In the top image on the story below, some people walked through the shot close to the camera, so they end up as narrow vertical spikes. Speed across the slit determines the relative amount of anamorphic distortion.

  3. Faster frame rates, e.g., 720p60, produce more horizontally stretched results.

  4. There are expensive remedies for slow frame rates. The image above required them.

  5. A new, exclusive technique allows you to shoot slit-scan against a realistic background, as in the shot above. The camera for this type of shot must be locked down.

  6. Movie mode requires even longer original scenes and much longer processing time. A ten-second shot similar to the one above would need (in 720p60 mode) at least 35-ish seconds of original footage. Prep would take many hours. Rendering the final in PS CS4 would take a few days; less in PS CS5.

  7. It takes time to play with the techniques and their options to get a feel for what works and what doesn't. Plan to experiment a lot.
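Guideline 1's frames-to-pixels arithmetic is easy to sketch. The helper names below are ours, and this assumes the simplest case of one output pixel column per frame:

```python
# Slit-scan shoot planning: one output pixel column per frame.
# Hypothetical helpers; the real Photoshop processing is more involved.
def required_frames(output_width_px):
    """Minimum frames to shoot for a target slit-scan width."""
    return output_width_px

def shoot_seconds(output_width_px, fps):
    """How long the camera must roll to capture that many frames."""
    return required_frames(output_width_px) / fps

# A 2000-pixel-wide slit-scan image:
print(required_frames(2000))        # 2000 frames minimum
print(shoot_seconds(2000, 60))      # about 33.3 seconds at 720p60
print(shoot_seconds(2000, 30))      # about 66.7 seconds at 30 fps
```

This is also why faster frame rates (guideline 3) stretch subjects horizontally: the same real-world motion is sliced into more columns.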

Photoshop Actions and Scripts for moving images.

A rather large number of Photoshop owners purchased Photoshop Extended as part of various bundles from Adobe. If you’re a web designer, for instance, you may have acquired one of the Design Premium, Web Premium, Production Premium or Master Collection bundles which includes PSx (a user’s nickname) along with various other Adobe programs. 

Regular Photoshop has the tools most used for still images and graphics, and Photoshop Extended has extra features for motion graphic production, movie scene processing, 3D model handling and more. For HDSLR photographers, it’s the right one to buy, because it can do Photoshop things to movie scenes.

A beach scene has been treated with a PSx Action that creates a Grad filter effect. The first segment is the original followed by six variations, created through color and blending options. Note how the blend can be hand-retouched to reveal the beach and add distant cloud effects. Click to run. This PSx Action is included within the HDSLR eBook.

Like regular Photoshop (but not Photoshop Elements), Photoshop Extended allows users to create Actions: strings of steps in a Photoshop manipulation that are memorialized and can be replayed on new images at a mouse-click.

Photoshop also has a Scripting feature that allows its various functions to be orchestrated with Scripts, which are short programs. With Scripts, similar functions may be achieved as with Actions, but Scripts can do many things that Actions can’t. Scripts can track things by frame number. A gradual fade in would be a trivial example. But trivial only until a process or visual effect depends on it.
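The gradual fade-in example amounts to computing a value from the frame number, which is exactly the bookkeeping Scripts can do and Actions can't. Here's the idea in schematic Python (a real Photoshop Script would do this in ExtendScript, setting layer opacity per frame):

```python
# Per-frame opacity for a linear fade-in, as a function of frame number.
# Schematic only -- in Photoshop Extended this logic would live in a
# Script that sets layer opacity for each processed frame.
def fade_in_opacity(frame, fade_frames):
    """Opacity 0-100 for a linear fade-in lasting fade_frames frames."""
    if frame >= fade_frames:
        return 100.0
    return 100.0 * frame / fade_frames

print(fade_in_opacity(0, 30))   # 0.0   -- black at the first frame
print(fade_in_opacity(15, 30))  # 50.0  -- halfway through the fade
print(fade_in_opacity(45, 30))  # 100.0 -- fully faded in
```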

Between the two, Actions and Scripts, a near infinite number of specialized visual effects, scene repairs, image enhancements and visual retouching operations may be achieved.

ScriptAction Update 1.26.10

In late January, 2010 we will be releasing new software for Photoshop Extended that brings both Scripts and Actions together in the same visual effect. The simplicity of running an Action is married to the complexity of Scripts in these new ScriptActions.

The first ScriptAction combines a superlative image-sharpening technique with movie files, resulting in HDSLR frames that are as pixel-perfect as an image can be.

Please visit the Resources page for details as they emerge.





Moiré via Bayer Pattern Scanning Techniques
And the troubled history of a key HDSLR technology.

This article reveals some of the principles of HDSLR moving images that give rise to an artifact that can work against clear, clean images, and the contravening super technologies that are propelling it into our future.

Digital photography achieves its color images using a single image chip surface filtered for RGB color using a Bayer Pattern. Each 2 x 2 square is a tiny tile with two green and one each red and blue filtered photosites—individual light sensors. This pattern repeats over the whole surface, continuously. Sorting out a color image from this array of color samples is a task only a computer could achieve, and that’s the core principle of all digital photography and videography.

Electronic moving images were originally created with scan lines. Video tubes and CRT displays both scan images, horizontal line by horizontal line. Interlaced video scans every other image line, creating a “field” with half of a frame’s visual information, then the next scan is offset one image line, filling in the gaps. After two sweeps of the image, a whole frame is captured, with every other line of image offset slightly in time. For a 30 fps display (NTSC) each field is displaced by 1/60 sec.

When digital still cameras started capturing video images as a novel feature, it was through technologies that were added to image chips to implement real-time viewing. To speed up the viewing, individual photosites were polled in a pattern that skipped up to 96% of the available sensors. 

If only 1 in every 25 photosites were sampled (1 in each 5 x 5 grid), the speed of image display could speed up by 2500%, and early digital cameras needed all the help they could borrow. A camera that made a still image with 1500 x 2000 pixels could form a live image with a crude 300 x 400 pixels for live viewing, and some did. A faster camera that employed one sample in every nine (from a 3 x 3 grid) photosites would form a live image with 500 x 667 pixels while still running 900% faster than it could for continuous full still frames. Hey, that’s higher resolution than standard-definition video!
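The resolution/speed trade is simple arithmetic. A sketch (our own helper; real cameras use more complex readout patterns):

```python
# Live-view resolution and polling speedup for an n x n skip grid,
# i.e., one photosite sampled per n x n block of the sensor.
# Simplified model; actual readout schemes are proprietary.
def live_view(still_w, still_h, n):
    speedup_pct = n * n * 100  # only 1/n^2 of the sensors are read
    return round(still_w / n), round(still_h / n), speedup_pct

# A 3 MP still sensor (1500 x 2000) sampling 1 photosite per 3 x 3 grid:
print(live_view(2000, 1500, 3))  # (667, 500, 900) -- better than SD, 900% faster
```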

The actual schemes used for live viewing were far more complex than these examples, but the common feature in all was skip-pixel sampling to increase the speed of image acquisition.

As megapixels rose from 2000’s 3 MP cameras to today’s 12-21 MP cameras, compact models kept increasing their video modes all the way up to HD, usually in 720p format where images are 1280 x 720 pixels per frame. Competition drives most technological developments, and when DSLR cameras adopted the feature of live viewing, they embraced the skip-pixel practice to form fast, live images.

Live View features became ubiquitous in DSLRs. Flip up the mirror, open the shutter, grab the image fast enough to look live on the monitor and you have done your job. Grab a smaller and smaller patch of the image chip, and the Live View image zooms in for fine focusing. 

Simultaneously, the idea of recording that Live View image was considered. Boosting camera processing speed and developing image chips that could lift a skip-pixel image fast enough to form HD movie frames became the engineering challenges, and in late summer 2008, Nikon showed its first HDSLR, the D90. Four months later, Canon showed the 5D Mark II with full 1080p30 HD cine capture, along with Vincent Laforet’s mini movie, Reverie, which demonstrated what it could do.

A year after the D90 appeared, Nikon showed the D3s, a full-frame camera with ISO 102,400 available, along with a 720p24 cine mode. Nikon had the spotlight all to themselves until ten minutes later, when Canon announced the coming EOS-1D Mark IV with identical hyper-low-light sensitivity.

Once again Canon put prototype cameras into Laforet’s hands, and once again his movie stunned the photographic world. Nocturne was shot in an industrial part of Los Angeles in the dead of night under nothing more than streetlights. This was late September, 2009.

We watched the movie, put our socks back on and vowed to save up the $5000 it would take to bring one of these techno-wonders into our hands, and were stunned when on October 8, Canon insisted that Nocturne be taken off the web! Wha?

Here was an example of science fiction become reality, and some Office of Great Ideas inside Canon didn’t want you to drool? By December 23, the embargo, or whatever you’d call it, was lifted, and Nocturne was once again available, as you can see by clicking on the image.

At this moment, it’s mid-January, and the D3s has not yet shown its face. But cinephotographers are acquiring the Canon 1DM4 (when it can be found) and peering into the dark through it.

Still, both of these supercams have skip-pixel video image scanning and can be coaxed into color moiré by repeating patterns in the image. Any tiny amount of focus drift or motion blur gets rid of it.

The problem with skip-pixel imaging techniques comes from the tendency of the lens to form fine detail that is small enough to only land on color-filtered photosites that are separated from each other. A white detail landing only on a line of red/green photosites looks red to the camera’s computer. And that sharp detail, by not being at all represented in the nearest blue photosites gives the image computer no choice. “Red it is!” says the computer, and suddenly the image has a false color hugging detail that wasn’t supposed to be there: Color moiré. Sacrebleu (ou rouge)!
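The "Red it is!" failure can be mimicked with a toy reconstruction. This is grossly simplified (real demosaicing is far smarter), and the function is purely illustrative:

```python
# Toy illustration of skip-line color moire. A fine white detail lands
# only on a green/red Bayer row; the nearest blue row was skipped, so
# the reconstruction sees red + green light and zero blue -> false color.
# Grossly simplified; real demosaic algorithms interpolate intelligently.
def naive_color(red, green, blue):
    """Reconstruct RGB from whatever photosites were actually read.
    None means 'that row was skipped' and is treated as no signal."""
    return tuple(0 if v is None else v for v in (red, green, blue))

# White detail (R=G=B=255) falling only on a grgr row, bgbg row skipped:
print(naive_color(255, 255, None))  # (255, 255, 0) -- false warm color
# The same detail blurred across both rows: true white survives.
print(naive_color(255, 255, 255))   # (255, 255, 255)
```

This is also why a touch of defocus or motion blur kills the moiré: the detail spreads onto both row types and the blue sample reappears.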

The image-forming process in HDSLR cameras is sensitive to color moiré only from finely focused dots or lines of repeating detail that intercept the image chip at near-horizontal angles. You might see some in a focused spiderweb, but only in strands that are almost perfectly horizontal. With HDSLR cameras, the designers found a way to gather images by inhaling whole horizontal scan lines of photosites, skipping others, then grabbing a new horizontal line. In Bayer Pattern terms, this means absorbing a stream of green + red or green + blue photosites in each line. 

Since it takes a small moment of time to scan the whole frame off the chip, sequentially sweeping horizontal lines of photosites, the last scan line leaves the chip a fraction of a second later than the first one did. They call this a “Rolling Shutter,” but it is a dead ringer for the scanning of video signals back in the day of image tubes and interlaced fields.

In technology, we call this the two steps forward, one step back approach.

Moiré Busting Techniques
No sense living with that which is flawed.

Moiré, as we noted a few stories ago, is the result of detail impacting the surface of the image chip in a way that falls between the cracks, or, in this case, the gaps between photosites that are read during video frame construction. Briefly, video frames are made out of horizontal rows of Bayer patterned sensors, and that makes them some combination of green+blue or green+red for the whole row. By skipping whole rows of sensors, the opportunity exists for photographic subject matter to arrive mainly in the gap between rows. 

Two rows of sensors are skipped in most schemes. To skip one row would cause a space between Bayer patterned rows of identical color sensing filters, exacerbating the problem.

In schemes that read a full row of Bayer patterns, bgbg and grgr together as a unit, two or three rows are skipped before the next row(s) are read. Why? Speed. It’s faster to read only some of the sensors than to deal with all of them. Several times faster, and every microsecond counts.

In schemes that read 1, skip 2, the processing is sped up by 300% over reading every photosite. In schemes that read 2, skip 3, the processing is sped up 250%. Potentially a read 2, skip 2 scheme would speed up by 200%. Which cameras have which schemes? Not one of them is fessing up. They’re protecting hard-won technologies.
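The speedups quoted above follow directly from the fraction of rows touched. A sketch:

```python
# Row-scheme speedup relative to reading every row:
# read r rows, skip s rows -> only r/(r+s) of the rows are read,
# so readout runs (r+s)/r times as fast, expressed here as a percentage.
def speedup_pct(read_rows, skip_rows):
    return (read_rows + skip_rows) / read_rows * 100

print(speedup_pct(1, 2))  # 300.0 -- read 1, skip 2
print(speedup_pct(2, 3))  # 250.0 -- read 2, skip 3
print(speedup_pct(2, 2))  # 200.0 -- read 2, skip 2
```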

So how to get that moiré OFF your shot? By defeating its very nature. Moiré comes from fine, contrasty detail—generally repeating detail as in architectural subjects—being focused fine enough to fall in the gap between sensors that are polled for image content. If the detail is smaller than a pair of photosites, it won’t make moiré. But if it is linear, and nearly horizontal, it can fall on, or mostly on, a row that has only green plus a color. 

Green is a special case. 59% of luminance is defined by light passing through the green filters, and unless the camera’s image processor sees a much lower relative photon count of both red and blue adjacent photosites, it won’t paint that detail green in hue. But if it sees green and red or blue, it will tend to paint them reddish or bluish: Color moiré.
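That 59% figure is the green coefficient in the standard-definition luma weighting (Rec. 601), which is worth seeing in numbers:

```python
# Rec. 601 luma weighting: green dominates perceived brightness,
# which is why Bayer patterns devote two of every four photosites to it.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luma(0, 255, 0)))      # 150 -- pure green alone carries ~59% of white's luma
print(round(luma(255, 255, 255)))  # 255 -- the three weights sum to 1.0
```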

If the detail were blurred enough, that detail would spread several pixels, triggering both grgrgr and bgbgbg rows of photosites, and color moiré (as well as luminance moiré, or “detail moiré”) would evaporate.

Potential Remedies
Programmed defocusing. You can force this effect on some cameras by adjusting “back focus” controls, in theory, but that tends to be f-stop-sensitive. A back focus that de-sharpens the image at f/5.6 may have much less effect at f/8. Nikon cameras have focus fine tuning that may do this.

Shooting wide open with fast lenses usually fails to focus the image to best effect. But cine modes are far less acute than an HDSLR’s best still image, so you can afford lower performance from them. For cine shots you actually want a lens that is ever-so-slightly unsharp wide open, where the detail will be spread across a few pixels.

Manual defocusing intentionally may help. Use the focus magnification feature in live view to blur the image enough to fool the camera into avoiding moiré. Each focal length and f-stop will require a slightly different adjustment. You’ll be forced to test this to gain experience. Good hunting.

Filter it away. There’s a thin, low-pass filter above every HDSLR sensor. Its job is to microscopically defocus the image just enough to avoid color moiré for still images. Alas, if the camera designers had been clever enough to pull this filter off the image chip a few micrometers for cine mode, we wouldn’t have this topic. 

Can an external filter act as a low-pass filter? Theoretically, yes. And we have tried a bunch, but the only help we’ve seen is through an ancient Cokin 083 (not 830) low-level diffusion filter—and only on subject matter with low dynamic range, such as graphic material. It adds a small degree of diffusion, so images of higher-dynamic-range subjects end up somewhat fogged.

Printer Fresher Freebie!

Keep your inkjet fresh


As we shoot more in movie mode, our printers are less frequently used. It’s just a fact of life. But we don’t want them to go through times of no use because stray ink dries out on their microscopic nozzles. That can put clogs, lines and flaws in the next print.

If you let the printer’s own refreshing process have its way, the machine will use a lot of ink to clean the nozzles. Ink = $. So what could you do to keep that printer exercised just enough to stay in shape?

It’s going to cost you very little money, but it does demand routine. First the Bad News: You have to do this once a week on every infrequently-used inkjet printer in your system, and that takes discipline. Now the Good News: It only costs a penny or two of ink to exercise your machine with our magic image.

Download the image file at right. It’s a full-size, letter-paper image with faint grads of color plus red, green, blue, cyan, magenta, yellow and black mini-grads to purge clogged nozzles or show you where the “issue” may be.

It prints on only half the sheet, so you can turn and/or flip the page for additional prints.

HDSLR Shooting and Tele Converters

Go long for the touchdown.


Tele converters are optical devices that spread the output of a lens over a greater area. They’re negative lenses that retain the focus settings of a lens mounted on them, but they spread out the rays emitted at the back of the primary lens, thus enlarging and dimming the result. You see 1.4X and 2.0X tele converters, commonly, the first lowering the light by one stop, and the second lowering the light by two stops.

By spreading the image over a larger area, they can’t possibly retain all of the detail that the prime lens was prepared to deliver by itself, but modern tele converters work well with telephoto lenses which are often paragons of sharpness over the whole image surface. In a geometric sense, telephoto lenses are optics that only have to achieve image perfection in a proportionally smaller area behind the glass. It’s not unusual to see long lenses that are almost identical in performance from center of image right into the corners.

Some photographers pooh-pooh the use of tele converters because they can lower the ultimate resolution of the prime lens, but for HDSLR cinephotography, they don’t lower the resolution enough to see.

New Rule: If you can’t see a difference, there is no visual difference.

We mention this because today Kenko has released four new tele converters, two at 2.0X and two at 1.4X for automatic lenses fitted to both Canon and Nikon DSLRs. And HDSLRs.

The Bottom Line: If you have a long auto-focus, auto-aperture lens of good quality, you can now shoot longer tele shots with your HDSLR with complete freedom from image quality loss. Your camera shoots somewhere between 12 MP and 22 MP, but the best movie file you can shoot is 1 MP to 2 MP for 720p and 1080p, respectively.

These extenders will lose a stop or two of light, but the images they gather will look movie-perfect (assuming your prime tele is quite good). Above: 200mm Nikkor f/4 on a 2.0X tele converter, shooting wide open on a Canon 7D.

Tip: There are the MC4 versions and the 300 Pro versions, which cost more. Since prices range from $180 US for the MC4 1.4X to $380 US for the 300 Pro 2.0X, you can exercise your ability to save some cash with the MC4 versions.

More Mics for HDSLRs
Tech Three: On-camera stereo units. 2/20/10

Questions to ask before buying:

Suspension material lasts how long?

Røde: PVM, Pro Video Microphone. $250 MSRP. Dual cardioid elements at 90° (X-Y configuration), doubling forward sensitivity. 2-step filter blocks low frequencies. Coolest-looking camera mic. Hairy wind screen included. Rubber-band suspension. Aluminum body. 9V battery.


Sennheiser: MKE 400 mini shotgun. $200 US. Super-cardioid pattern. Mono audio pickup. Stereo plug. Metal housing. Foam wind screen (hairy available). AAA battery. Bottom flex-support sound suspension. Low cut filter.


Audio Technica: PM 24-CM. $140 US [$72 Internet]. Super-cardioid pattern, X-Y configuration. Foam wind screen included. Surround collar/foam plastic sound suspension. Battery powered, or battery-free powered by the camera.


Double System Sound 
Tech Two: Mics/Recorders
Better sound some more.

To shoot with double system sound, you need a separate recorder and microphone. Today’s tech offers several low-cost solutions. Perhaps the best small, portable recorder is the Zoom Handy Recorder H4n. So-called “list price” for this is $610 US, but you can get one from various Web sources for about $300 brand new. Its mics are oriented at 90 degrees for wide front stereo capture. For ambient sound, this is fine. For dialog, you will want something more directional. The H4n is fundamentally a digital hand-held 4-channel recorder that grabs pristine digital sound. With an external shotgun mic and enough hands to hold everything, you can catch ambient audio and isolated microphones all at once.

A second idea is exemplified with the Blue Microphones’ Mikey. Now in its second generation, it’s a cardioid stereo mic that plugs into the 30-pin connector of any iPhone or most any iPod (not for iPad, though). A three-position switch simplifies selecting for high, medium and low sound levels. With digital audio, you have great latitude for mixing volume, but the thing you dare not do is crash the audio level during recording. The original Mikey can be found for about $55 US on the Internet (good for the 3G iPhone, but not the 3Gs), and the new Mikey (mounts to the 3Gs) has an MSRP of $99 US.

Sound quality: CD. iPhone not included.

Double System Sound
Better sound through tech.

HDSLR shooting means in-camera microphones, usually mono and less than stellar performance. That’s fine if the sound around the camera is just for low ambience or verbal notes as you shoot, but the camera is not the tool you need for good interview or sound effect audio.

Do what Hollywood does all the time: Shoot “double system,” meaning a separate audio recorder—stereo would be nice—that gathers much higher quality audio, then synchronize the second audio source with the image from your camera.

For years, the only way to do this was to employ some sort of synch “pop,” to give the editor a chance of matching a visual (clapperboard slate, hand clap, etc.) with the distinct whack picked up by a mic.

That will work. And it has the advantage of being cheap. Especially the hand clap. But the downside is that you need to do it for Every Single Shot. %$#@!

Tech to the rescue. Software that drops right into Final Cut Pro and several other video editing programs can match the sound from camera audio to other sound clips from a second recording device, automatically re-aligning everything to synchronize image and sound.
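Under the hood, such tools find the time offset at which the two recordings' waveforms best line up, which is cross-correlation in effect. A toy sketch of the idea (pure Python on tiny sample lists; real tools work on full-rate audio with far more robust matching):

```python
# Find the lag (in samples) at which a short reference clip best matches
# a longer recording -- the cross-correlation idea behind automatic
# double-system sync. Toy version; real products are far more robust.
def best_lag(recording, clip):
    best, best_score = 0, float("-inf")
    for lag in range(len(recording) - len(clip) + 1):
        score = sum(recording[lag + i] * clip[i] for i in range(len(clip)))
        if score > best_score:
            best, best_score = lag, score
    return best

camera_audio = [0, 0, 0, 5, 9, 5, 0, 0]  # the "pop" starts at sample 3
recorder_clip = [5, 9, 5]                # the same pop from the recorder
print(best_lag(camera_audio, recorder_clip))  # 3
```

Once the lag is known, the editor shifts the recorder track by that many samples and the two sources play in sync.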

Say goodbye to the %$#@!

We hope to bring you some examples of this in coming articles. In the mean time, check out PluralEyes from Singular Software.

The Trouble With Hairy Audio

Audio for HDSLR shooting has a big hairy problem: the camera. Every HDSLR we’ve seen has its audio processed by circuits an engineer decided you would screw up if he didn’t “fix” things for you. So he added compression circuits and automatic gain circuits between the microphone and the digital audio recording elements. The sound going down with the picture is compromised, and nothing you can do to the camera is going to fix it.

Well, that’s not true. You could take the audio into a program such as Apple’s Soundtrack Pro (part of Final Cut Pro suite) or any of many audio manipulation editing programs, and manually ride the camera audio track, moment by moment, eventually getting the sound level you wanted. But that would take a truck load of time and effort, and the results would not be the best you could achieve for all that work.

Separate recorders with manual audio are the best solution. HDSLR photographers have quickly adopted the Zoom H4n recorder (c. $300 US) for double-system sound recording, and some top-level pros are working with the lesser-known Tascam DR-07 (c. $141 US). Both of these have stereo recording with manual volume settings (auto, optional) and both accept input from external microphones.

The H4n wins my vote due to its extreme range of features. It’s a 4-channel machine, and can play two tracks while recording the other two. External inputs include XLR and 1/4-inch jacks, so you’re covered with almost any mics or line-level sources.

One caveat: With double-system sound, it’s a full-time job being the sound person. Both recorders have a tripod screw socket, and some have used this with a blank hot-shoe mounting foot, creating a camera-mounted recorder, but that pretty much precludes using the camera on a Steadicam and requires a lot of extra consideration from the cinematographer. Every complexity added to the list of things that must be done before the shot starts will drive up the number of errors. Forgetting to also switch on audio before the shot starts is a killer, and forgetting to switch it off after “Cut!” wastes 1s and 0s.

Some shots may end up using the camera’s own mono mic after all.

Cinemetrics? Whazzat?
Measuring Hollywood for 1/f.

Here’s a real-world explanation of a novel way to analyze movies that was recently posted in a distinguished psychology journal.

Click on Whazzat for the scoop.

Slit-Scan HDSLR 3/10/10

Hall Of Ween. Original is over 4000 pixels wide. Peter iNova.

HDSLR cameras have a hidden attribute for still images that no still image camera can ordinarily touch. With a series of techniques and specialized software, they can be coaxed into creating photography that suspends the X-dimension of an image and replaces it with the T-dimension; Time. 

As in the above image, the camera was held approximately steady while people in Halloween costumes sauntered through the frame. The earliest people are on the left. The people on the right arrived many seconds later.


Slit-scan photography started with roll-film and 35mm film cameras modified to pull the film smoothly past a gate that had been narrowed to a tiny vertical slit. One practical use is photographic proof of horse races, where the slit is aligned with the finish line. On the interesting, but distorted, image, the nose that reached the wire first appears ahead of whichever horse came in second.

Flong shows a number of slit-scan images on this page.

Digital slit-scan photography has long been a staple of machine-vision techniques used in manufacturing. Devices capable of generating slit-scan images were almost universally custom-built cameras; image chips just one line of pixels tall have long been produced for machine-vision cameras.

The core concept of slit-scan image generation is to isolate a narrow slice of moving, dynamic life, then spread that slice out over the X or Y axis of a two-dimensional image. If you set up a camera on a tripod, any subject that passes through the slit automatically scans itself into the picture over the time it took to pass through. Stable background elements, unmoving, are portrayed as horizontal streaks.

One movie; two different images. The position of the slit that generated the image was changed. The cars along the bottom were going in the opposite direction, but the whole concept of direction has been replaced by time. All the cars are really driving toward their future. The future in this case, is on the left.

HDSLR cameras can divide time into small, sequential moments. Today’s gear tops out at 60 frames per second, and most cameras can shoot at 30 frames per second, delivering equally-spaced slices of both graphic space in the X, Y dimensions and time space in what I call the T (or perhaps “Z”) dimension. 

If you imagine a movie as a series of 2D images in a stack with the topmost image showing, then moving through the stack in 3D space’s Z-axis makes it move*. If you were to cut through the stack at any place, the side of your cut would show an image created by the passing of time.

When you gain a mental map of how this can be used to produce a T or Z-axis image, you can begin thinking in slit-scan terms. Pan the camera/scan the subject. Move the subject/it scans itself into the shot.

Once a movie has been shot, selecting a vertical column of pixels can define where the scan originates. Change the column position/a different image appears. Do this in an organized manner, and the 2D slit-scan image itself can become a series of frames.* Turn the slit-column sideways/get a different effect. The 3.5D movie here shows a small boy playing at the beach. Each frame is a time slice, part of a timed event. That’s worth at least 0.5 dimensions right there.
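The column-per-frame idea is compact enough to sketch: take one pixel column from each successive frame and butt them together, so X becomes the time axis. (Toy data structures here; the actual Actions and Scripts operate on real movie frames in Photoshop.)

```python
# Build a slit-scan image from a stack of frames: take the pixel column
# at slit_x from each frame and place the columns side by side.
# Frames are toy row-major grids (lists of rows) of pixel values.
def slit_scan(frames, slit_x):
    height = len(frames[0])
    # One output column per frame; the X axis of the result is time.
    return [[frame[y][slit_x] for frame in frames] for y in range(height)]

# Three 2-row, 3-column "frames" with a bright pixel drifting right:
frames = [
    [[9, 0, 0], [0, 0, 0]],
    [[0, 9, 0], [0, 0, 0]],
    [[0, 0, 9], [0, 0, 0]],
]
print(slit_scan(frames, 1))  # [[0, 9, 0], [0, 0, 0]]
```

The bright pixel registers only in the frame where it crossed the slit, exactly as a passer-by scans themselves into the Halloween image above.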

If a camera were produced* that lifted not a full 2D image with every frame, but a single column of pixels—and at a much higher frame rate—it could achieve slit-scan images, instantly.

People have exploited video cameras for slit-scan still image generation for several years, but these produce low-resolution, limited-interest images. Now all that has changed.

I have written a series of Actions* for Photoshop that create slit-scan images of high quality from movie files shot with HDSLR cameras and HD video cameras. Our writing associate, Uwe Steinmueller has converted some of these into Photoshop Script form.

We are considering selling this software so others can enjoy making slit-scan images. Let us know if you are interested. 

*Notes: Several ideas in this document are the subject of pending patents for both software and hardware implementation.

Site iPad friendly
New eBook
also iPad friendly

Click on the Cover for a FREE Copy: http://www.digitalsecrets.net/secrets/eBookCS5/ActionsCS5-Fast.pdf