Infra-Red/UV Video Image Segmentation Technique Theory

October 27th, 2010

 

Infra-Red/UV Video Image Segmentation Technique Theory

Mickael Maddison, May 2010

Currently, the movie and photography industries utilize techniques often referred to as “chroma-key”, “luma-key” or “thermo-key” to film subjects so that the subject can be separated from the background of the image. Once the subject is removed from the background, it can be superimposed on alternative backgrounds. For example, filming an actor in front of a “green screen” and using the chroma-key technique to remove the actor from the green background allows a video editor to place the actor on an image of the moon, without ever having to go to the moon.

Existing techniques for image segmentation require very careful and often expensive sets, lighting and filming techniques, in addition to powerful post-production processing, to achieve a quality result. This document proposes using the near-infrared spectrum (with optional ambient UV) as the background, allowing for a much simpler means of extracting the desired image from an infrared background. By capturing a selection of isolated IR wavelengths with CCD or CMOS digital camera technologies adapted to record these isolated wavelengths at the same time as the standard RGB (Red Green Blue) or RGBY (Red Green Blue Yellow) visible light, software and devices could be produced that allow subject(s) to be removed from backgrounds with a higher degree of accuracy while requiring far less effort and processing.

* Adding detection of the UV spectrum would also allow for additional processing options.

Uses: Cameras equipped with combined RGB/RGBY and IR/UV CMOS or CCD sensors would have a wide variety of uses.

  • Standard Video capture and recording.
  • Image Segmentation.
  • Capturing and recording Near-infrared and UV used for special effects/artistic purposes.
  • Capturing a wide range of light spectrum useful for night-vision image capture.
  • Reconnaissance and security systems.
  • Scientific research requiring combined access to visible and non-visible spectrum images.
  • Other techniques and uses not yet considered or developed.

Technologies:

1 – CCD: Existing CCD and CMOS sensors already have the capability to record near-infrared wavelengths. Most cameras use a special filter to block infrared wavelengths from being captured; cameras without this filter blend the infrared information into the RGB image. Currently, CCD and CMOS type sensors capture RGB light by using a special “Bayer Color Filter”, as seen here:

Standard Bayer CCD Color Filter

Each square represents one “pixel” of information captured by the sensor. Processing techniques may vary, but in effect a square of 4 pixels is combined to create a single pixel of “true” color.

The following is an example of how a new filter could be designed to allow for the capture of the additional non-visible spectrum using the existing sensors:

Visible Light plus Infrared spectrum sensor

In this sample image, instead of using a pattern of 4 pixels, the pattern is spread over 9 pixels. The 4 existing RGB pixels are captured, along with 5 additional pixels represented by the white and various shades of grey boxes. The optimal configuration is subject to analysis, but as an example it could be laid out like this:

  • Red = red, Blue = blue, Green1/Green2 = green (as in the standard Bayer pattern)
  • White = wide-spectrum UV
  • Lightest grey = 840nm IR; second-lightest grey = 900nm IR
  • Third-lightest grey = 950nm IR; darkest grey = 1000nm IR or wide-spectrum IR
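
As a rough sketch of how such a layout could be handled in software, the following Python fragment represents the proposed 3x3 filter tile and pulls individual channels out of a raw sensor frame. The channel names and wavelengths follow the layout above, but the placement of each filter within the tile, and the code itself, are illustrative assumptions rather than a finished design.

import numpy as np

# Hypothetical 3x3 colour-filter-array tile for the proposed sensor.
# The channel labels follow the layout suggested above; the placement of
# each filter within the tile is an assumption made for illustration.
CFA_TILE = np.array([
    ["R",     "G1",    "UV"    ],
    ["G2",    "B",     "IR840" ],
    ["IR900", "IR950", "IR1000"],
])

def channel_mask(shape, channel):
    """Boolean mask selecting every sensor pixel belonging to `channel`.

    `shape` is the (rows, cols) size of the raw sensor frame; the 3x3
    tile is assumed to repeat across the entire sensor."""
    rows, cols = shape
    tiled = np.tile(CFA_TILE, (rows // 3 + 1, cols // 3 + 1))[:rows, :cols]
    return tiled == channel

def extract_channel(raw_frame, channel):
    """Pull one channel out of the raw frame as a sparse plane (zeros elsewhere).

    Interpolating the gaps (demosaicking) would follow the same principle
    as the existing Bayer pipeline, just spread over the larger tile."""
    return np.where(channel_mask(raw_frame.shape, channel), raw_frame, 0)

For example, extract_channel(raw, "IR900") would return only the 900nm samples, which is the starting point for building the segmentation masks discussed later in this document.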

As a future consideration, there are also emerging technologies that could utilize the optical properties of carbon nanotubes to capture information at specific wavelengths. Research has shown that a single carbon nanotube connected to a pair of electrodes can measure IR radiation effectively. This technology may be a long way from practical use, but it provides an ongoing opportunity to continue developing and refining the technique.

2 – File Format: In addition to the filter, a new image/video file format would be created to store the captured information in a useful form. Many cameras have built-in processors that convert the RAW pixel data from the CCD into consumer file formats such as MPEG, JPEG, TIFF, etc. A new processor could be developed to provide traditional file formats plus a masking file, or, for more advanced use, the camera could save all of the data together to allow for more advanced processing and use of the recorded data.

The new RAW image/video format would carry more information than existing formats and would need to store unique information for each displayed pixel to be effective for external image processing.
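
As a loose illustration only, and not a proposed standard, the following sketch shows one way such a RAW container could be laid out: a small header followed by one plane per channel. The magic bytes, channel order and 8-bit depth are assumptions made for the sake of the example.

import struct
import numpy as np

# Illustrative channel order, matching the hypothetical filter layout above.
CHANNELS = ["R", "G1", "G2", "B", "UV", "IR840", "IR900", "IR950", "IR1000"]
MAGIC = b"IRUV"  # made-up file signature for this sketch

def write_frame(path, planes):
    """Write one frame: `planes` maps channel name -> 2D uint8 array.

    All planes are assumed to share the same (height, width). A real
    format would likely store the sparse mosaic or compressed planes,
    but for this sketch full planes are written back to back."""
    height, width = planes[CHANNELS[0]].shape
    with open(path, "wb") as f:
        f.write(MAGIC)
        f.write(struct.pack("<HHB", width, height, len(CHANNELS)))
        for name in CHANNELS:
            f.write(planes[name].astype(np.uint8).tobytes())

def read_frame(path):
    """Read a frame written by write_frame back into a dict of planes."""
    with open(path, "rb") as f:
        assert f.read(4) == MAGIC
        width, height, count = struct.unpack("<HHB", f.read(5))
        return {
            name: np.frombuffer(f.read(width * height), dtype=np.uint8)
                    .reshape(height, width)
            for name in CHANNELS[:count]
        }

The important point is simply that every channel survives all the way to the editing software, rather than being collapsed into RGB inside the camera.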

3 - Software: Image and video editing software would require modifications and/or filters to make full use of the extended information available through the new file format and/or the processing of the masking file alongside the video file. This could take the form of modifications and additions to existing industry-standard software; research may also show that entirely new software should be developed to make fuller and wider-ranging use of the data.

Some image processing systems have experimented with using non-visible light to increase the accuracy and quality of the visible image they produce. For the purpose of image segmentation, the various wavelengths of non-visible light would be used to create highly detailed “mask(s)” useful for cropping the background away from the desired image(s).
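
A minimal sketch of that idea, assuming an evenly IR-lit background that appears bright in the IR plane while the subject appears dark, might look like the following; the threshold and feather values are assumptions that would be tuned per scene.

import numpy as np

def ir_to_mask(ir_plane, threshold=128, feather=1):
    """Turn an IR plane into a subject mask.

    Assumes the IR-lit background is bright and the subject is dark in
    the IR plane, so pixels below `threshold` are treated as subject.
    `feather` softens the edge with a naive box average so fine detail
    such as hair keeps partial coverage rather than a hard edge."""
    subject = (ir_plane < threshold).astype(np.float32)
    if feather > 0:
        k = 2 * feather + 1
        padded = np.pad(subject, feather, mode="edge")
        subject = sum(
            padded[dy:dy + subject.shape[0], dx:dx + subject.shape[1]]
            for dy in range(k) for dx in range(k)
        ) / (k * k)
    return subject  # 0.0 = background, 1.0 = subject, in between = edge

def composite(rgb, mask, background):
    """Place the masked subject over a new RGB background."""
    alpha = mask[..., np.newaxis]
    return (rgb * alpha + background * (1.0 - alpha)).astype(np.uint8)

With multiple isolated IR wavelengths available, the same routine could be run per wavelength to produce separate masks for different objects lit by different IR sources.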

4 - Non-Visible Illumination: To make the most of the technology, a wide range of electronic devices would be developed as the technology is adopted by the relevant industries. Some examples of devices that would be developed and produced:

  • IR floodlights – wavelength specific floodlights to provide a suitable non-visible background.
  • IR spotlights – wavelength specific spotlights that could be used to segment multiple objects within a single image by using multiple wavelengths.
  • IR backdrops – currently, most green-screen applications shine light evenly onto a controlled, smooth surface. An IR backdrop could be a “screen” that actually emits the light from its surface.
  • IR/RGB backdrops – using modifications of technologies that are hitting the market today, large LCD screens with LED backlights could be redeveloped to emit a combination of visible and non-visible light, allowing for a fully visible background while at the same time providing the non-visible background needed for image segmentation.
  • IR absorbing and reflecting materials – these could be used to achieve a variety of effects. For example, the current industry uses chroma-suits to allow the segmentation of parts of a subject such as a body-less person.

Benefits of the Technology

  • Subjects could be photographed or filmed against a variety of backgrounds and still be easily removed from the scene.
  • Subjects could be filmed against a background that has similar colors and textures to the scene they will be placed into. For example, an actor could be filmed against a projection of a mountain scene similar to the one that will be added during post-production. This would allow complex details such as hair to be segmented more easily, with minimal artifacts.
  • A single background could be used for full color filming, even if the background color matches the color of subjects being segmented from the scene.
  • Shadows appearing on the background may have little or no effect on the IR mask, allowing lighting of the subject to be tailored to the final scene rather than to achieving the best separation of color from the green screen.
  • Artistic photography, such as family portraits, would no longer require a wide variety of physical backgrounds to photograph subjects against. Instead, the photographer would use a generic or projected image as the background. After (or during) the photography session, the photographer could select the actual background from an unlimited selection of background images. This allows a single quality photograph to be used for any number of scenes.

 

Patent and related technology research:

Live Action Compositing example http://www.scribd.com/doc/654481/Live-Action-Compositing

Using human-IR heat for image segmentation http://nae-lab.org/project/thermo-key/

Patents:

http://ip.com/patapp/US20060066738

http://www.google.com/patents/about?id=4TwGAAAAEBAJ&dq=6198503

http://www.google.com/patents/about?id=-iEzAAAAEBAJ&dq=6198503

http://www.google.com/patents/about?id=ij8dAAAAEBAJ&dq=6198503

http://www.google.com/patents/about?id=TPgfAAAAEBAJ&dq=6198503

This is not for IR - it is a device that uses a mapped background

http://www.google.com/patents/about?id=deMbAAAAEBAJ&dq=6198503

http://www.google.com/patents/about?id=9NUWAAAAEBAJ&dq=6198503

References:

http://www.computer.org/portal/web/csdl/doi/10.1109/MCG.2004.10009


Practical Example of Theory

 

Apparatus

Sony DCR DVD203 Digital Camcorder used in photograph mode.

Stable Tripod.

IR narrow-beam floodlight (wide beam would be much more effective)

Blue floor mat for background

Doll with hair

A Dark room

Computer system with Adobe Photoshop Elements 8.0

Methodology

Step 1 - Photograph a still image of the subject in NightShot Plus mode, which uses NIR to enhance image visibility.

Infrared Greenscreen image

Step 2 - Photograph a still image of the subject in normal RGB mode (sorry for the poor-quality photo).

Infrared Greenscreen image 2

Step 3 - Open these two images as a single layered file in Photoshop Elements.

Step 4 - On the NightShot (mask) layer, convert to black and white and increase the contrast. In this case, with a blue background used, I also adjust the blue level to get the best differential I can, considering the poor lighting. You can see in this step that the segmentation opportunity is pretty good, but not nearly perfect. The beam from the IR spotlight is a little too intense and focused; developing proper infra-red lighting would not be difficult, at least for a scene like this.

Infrared Greenscreen image 3

Step 5 - Due to the inadequate lighting, I cut the excess dark regions (which would normally appear as a fairly even white background with proper IR lighting) and fill them with pure white to match the area around the subject. As part of this step I have increased the contrast to show clearly the mask that was created. Proper lighting and some minor filtering would remove the manual portions of this step, making it easy to automate.

Infrared Greenscreen image 4

Step 6 - Using the Magic Wand tool (which would be an automated part of the process), I select all of the white area and invert the selection so that the mask is selected.

Step 7 - Switch to the layer with the RGB image, copy the subject, and paste it into a new layer.

Step 8 - Turn off the visibility on all layers except the cutout of the subject. You now have a fairly good cutout of the subject to work with.
Infrared Greenscreen image 5

Step 9 - Insert the background and adjust the cutout layer as desired.
Infrared Greenscreen image 6
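
For what it's worth, Steps 4 through 9 above could be automated outside of Photoshop along the following lines, assuming the two photographs and the replacement background are saved as ordinary image files and that Pillow and NumPy are available; the filenames and the threshold value are assumptions for illustration only.

import numpy as np
from PIL import Image

# Hypothetical filenames for the photographs taken in Steps 1 and 2.
NIGHTSHOT_FILE = "subject_nightshot.jpg"   # NIR-enhanced shot (mask source)
RGB_FILE = "subject_rgb.jpg"               # normal visible-light shot
BACKGROUND_FILE = "new_background.jpg"     # scene to composite onto

# Step 4: convert the NightShot frame to greyscale and threshold it. With
# proper, even IR lighting the background is bright and the subject dark,
# so a single threshold separates them; 100 is an assumed value.
ir_grey = np.asarray(Image.open(NIGHTSHOT_FILE).convert("L"), dtype=np.float32)
subject_mask = ir_grey < 100

# Steps 5-6: with proper IR lighting no manual clean-up should be needed,
# so the raw thresholded mask is used directly as the "selection".

# Steps 7-8: cut the subject out of the RGB frame.
rgb = np.asarray(Image.open(RGB_FILE).convert("RGB"), dtype=np.float32)
background = np.asarray(
    Image.open(BACKGROUND_FILE).convert("RGB").resize((rgb.shape[1], rgb.shape[0])),
    dtype=np.float32)

# Step 9: composite the cutout onto the new background and save the result.
alpha = subject_mask.astype(np.float32)[..., np.newaxis]
composite = rgb * alpha + background * (1.0 - alpha)
Image.fromarray(composite.astype(np.uint8)).save("composited.jpg")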

Notes

NightShot Plus mode captures IR data and incorporates it into an RGB image. Because of this, the mask layer is not nearly as good quality as it would be if an actual IR layer were saved in addition to the RGB layer. This limitation also requires the mask layer of the subject to be shot in (visible) darkness to generate a strong contrast; the RGB layer is then shot in RGB mode with visible lights enabled. If I had two NightShot cameras of the same type, split the image between the two cameras, and put a filter on the RGB-mode camera to remove all IR data and a filter on the NightShot-mode camera to remove all RGB data, I believe I would get a much more accurate mask.

For example, in the unedited images the hair sticking up off the head is clearly visible. With even IR lighting and no conflicting RGB data being stored on the mask layer, this should result in an even sharper, more accurate mask layer. The sharper, more accurate mask could then be used to cut out these fine details from the RGB subject layer, providing a nice, crisp image to work with.

Also of note: based on previous experience with still-frame image compositing, the process outlined in this document was very quick and simple to do. With proper camera(s), proper lighting, suitable software, and a good studio environment, the results should be far better than this simple test.

 

Working out the bugs in my studio

November 24th, 2005

I have hit a series of small yet demanding bugs as I've begun working on recording my second CD. In the process of getting started, I had ordered a few items that I would need, now that James Lipoth is in Ottawa and his equipment is all with him.

With the electronic drums (Pintech with Roland TD8), I'm having trouble with other toms triggering whenever I hit a cymbal or another tom. I'm using an M-Audio device that channels the MIDI signals through to nTrack on my PC, and when the MIDI track is reviewed, it shows all sorts of extra hits that shouldn't be there. I suspect the easiest fix will be to get a number of cheap snare stands, which will take the toms off the rack and isolate them.

The next issue was with the M-Audio Transit USB soundcard I picked up to handle the recording of vocals and other analog tracks. Well, it seems to do a fine job of recording, and it plays back the recorded tracks fine. What it doesn't seem to be able to do is let me monitor the input while it's being recorded. Fortunately, Line6 has released their new product "Toneport", which should handle this a lot more effectively. So that'll be on order.

The biggest issue I'm facing is some strange computer performance problems. I've got a nice 3.0GHz P4 with 2GB of dual-channel DDR400 and 2x SATA HDs. While this seemed to be a perfectly speedy machine last time through, suddenly it seems to be having latency issues. Fortunately, I run a separate install of XP just for the music and video software, so I can reformat and clean things up fresh before I go hardware crazy.

Using well-built, task-oriented yet inexpensive products does present a challenge. There are many options to choose from, and there's no real way of knowing what will handle the task until it's time to handle the task. Fortunately, there are a lot of great products on the market that make it easy to avoid the big expensive items. It'll just take a little figuring and tweaking to get things set up optimally.

Mickael Maddison

Brother Cadfael

November 12th, 2005

With a heavy heart, I find myself at the end of my journey with Brother Cadfael. Written by the late Ellis Peters, Cadfael is the main character in a series of murder mysteries centered around a Benedictine Monk from Shrewsbury, many centuries past.

Sir Derek Jacobi as Brother Cadfael Mickael Maddison the Monk
Left: Sir Derek Jacobi as Brother Cadfael -- Right: Myself dressed as a Monk for Halloween

I was first introduced to Brother Cadfael through the TV series starring Sir Derek Jacobi. Though at first I found it a little difficult to keep all the characters straight and decipher good from evil, I found myself increasingly intrigued by the stories surrounding the Abbey of Shrewsbury. It wasn't until I began reading the series that I truly came to love and understand this fictional monk.

Although the books are based on murder mysteries, I find myself more interested in the characters than the mysteries themselves. Cadfael formed an unlikely friendship with Hugh, the sheriff of Shrewsbury, and often found himself in the company of earls and bishops, and the would-be king or queen battling for control during the civil war. Ellis Peters had a masterful way of detailing her characters, bringing them to life in our imagination.

Unfortunately, the final chapter of Cadfael's many adventures, "Brother Cadfael's Penance" leaves me selfishly wishing that Ellis Peters could return from her eternal rest and continue the series until Cadfael finds his own ending. Her books never quite seemed to end until the new one began and it seems there were many stories left to be written.

Ellis Peters died well before I had ever heard of her works. I am glad to have read her books and seen the stories that were put to film. Hopefully, when my time is due, she'll be able to tell me the rest of the story.

Mickael Maddison

It feels so good to pay $1.00/litre for fuel!

October 27th, 2005

As I went to run my errands yesterday, I noticed that fuel prices here in Kamloops have dropped to ~$1.004 per litre. I was honestly surprised to see them drop below $1.10 per litre, given all the inclement weather and the world's growing demand for automotive fuel. What was even more surprising was the urge to go and fill the tank; to keep it filled right to the top now that gas is "cheap" again.

Well, it seems the executives of the big oil companies have successfully completed their latest campaign to increase profits. Supply and demand is such a powerful tool when only a handful of like-minded companies hold the keys to the supply. Within the last 7 years we've gone from around $0.40 per litre up to ~$1.20 per litre; a threefold increase over seven years definitely doesn't match the rate of inflation.

Sure, many of us may have held back from long trips this summer to keep some of our money in our pockets. Perhaps sales volume was down by 10-20% compared to most summers. Somehow, bringing the price up by ~50% this last year leads me to believe that there are some nice bonuses coming for the executives who planned the whole thing. There aren't many industries that can drop sales and increase profit in such a short period of time.

I wonder, how long will the current prices last? Will we see the prices drop below $1.00 per litre again? What will we be paying next summer?

Mickael Maddison

Suspicious network activity welcomed

October 13th, 2005

A couple of days ago I noticed that the hits to my music videos had suddenly jumped from the usual few hundred for the start of the month to over 3000 views. Since the largest combined viewing for a month was 2374, back in August, I was immediately suspicious of the traffic.

I did a check with AWStats, which reported that a handful of IPs were responsible for the vast majority of the traffic. The next step was to find out who owned these IPs. First I tried "nslookup 192.168.1.10" (note: that's not the actual IP!):

bash$ nslookup 192.168.1.10

Server: 204.244.87.2
Address: 204.244.87.2#53

** server can't find 10.1.168.192.in-addr.arpa: SERVFAIL

OK, so there are no reverse DNS entries for these IPs; not a definitive sign of something suspicious, but often among the indicators.

Next up, whois:

bash$ whois 192.168.1.10
[Querying whois.arin.net]
[whois.arin.net]
United Layer, Inc. NETBLK-UNITEDLAYER-3 (NET-192-168-1-0-1)
192.168.1.0 - 192.168.128.255

A quick search and I find out that United Layer is a San Francisco-area data pipe provider. So next, I look up their website and email them the log information showing their IP traffic, asking for assistance in tracking down the source.

The helpful techs on the other end reply: the traffic is from a new search engine called Truveo, which specializes in indexing web-based video content. They graciously accept my suggestion that they set up reverse DNS for their servers' IP addresses to help avoid future confusion.

Now the best part of the whole thing is that the 3837 video views to my site so far this month are legitimate! Truveo seems to relay the content to their viewers through their own servers (I'm not sure what the reason for that is), but the views would be initiated by visitors to their search engine. A quick search confirms my videos are in their database. It's only October 13th today; I wonder how many views I'll be able to report by the end of this month?

Mickael Maddison