Infra-Red/UV Video Image Segmentation Technique Theory

October 27th, 2010

 


Mickael Maddison, May 2010

Currently the movie and photography industries use techniques often referred to as “chroma-key”, “luma-key” or “thermo-key” to film subjects so that the subject can be separated from the background of the image. Once the subject is removed from the background, it can be superimposed on alternative backgrounds. For example, filming an actor in front of a “green screen” and using the chroma-key technique to remove the actor from the green background allows the video editor to place the actor on an image of the moon, without ever having to go to the moon.
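
As a minimal sketch of how this kind of colour keying works, the following Python snippet builds a mask from the distance between each pixel and an assumed key colour, then swaps the keyed pixels for a new background. The key colour, the tolerance, and the use of plain RGB distance are illustrative assumptions, not how any particular production keyer is implemented.

    import numpy as np

    def chroma_key_mask(rgb, key=(0, 177, 64), tol=60.0):
        """Return True where a pixel is close enough to the key (green-screen)
        colour to be treated as background. Key colour and tolerance are
        assumed values that would be tuned per shoot."""
        diff = rgb.astype(np.float32) - np.array(key, dtype=np.float32)
        dist = np.sqrt((diff ** 2).sum(axis=-1))
        return dist < tol

    def composite(subject_rgb, background_rgb, mask):
        """Replace the keyed (background) pixels with the new background."""
        out = subject_rgb.copy()
        out[mask] = background_rgb[mask]
        return out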

Existing techniques for image segmentation require careful and often expensive sets, lighting and filming techniques, in addition to powerful post-production processing, to achieve a quality result. This document proposes using the near-infrared spectrum, with optional ambient UV, for the background, allowing a much simpler means of extracting the desired image from an infrared backdrop. Using a selection of isolated IR wavelengths, together with CCD or CMOS digital camera technologies adapted to capture and record those wavelengths while simultaneously recording standard RGB (Red Green Blue) or RGBY (Red Green Blue Yellow) visible light, would allow software and devices to be produced that remove subject(s) from backgrounds with a higher degree of accuracy while requiring far less effort and processing.

* Adding detection of the UV spectrum would also allow for additional processing options.

Uses: Cameras equipped with combined RGB/RGBY and IR/UV CMOS or CCD sensors would have a wide variety of uses.

  • Standard video capture and recording.
  • Image segmentation.
  • Capturing and recording near-infrared and UV for special effects/artistic purposes.
  • Capturing a wide range of the light spectrum, useful for night-vision image capture.
  • Reconnaissance and security systems.
  • Scientific research requiring combined access to visible and non-visible spectrum images.
  • Other techniques and uses not yet considered or developed.

Technologies:

1 – CCD: Existing CCD and CMOS sensors already have the capability to record near-infrared wavelengths. Most cameras use a special filter to block infrared wavelengths from being captured; cameras without this filter end up storing the infrared information mixed into the RGB image. Currently, CCD and CMOS sensors capture RGB light by using a special “Bayer color filter”, as seen here:

Standard Bayer CCD Color Filter

Each square represents one “pixel” of information captured by the sensor. Processing techniques may vary, but in effect a square of 4 pixels is combined to create a single pixel of “true” color.

The following is an example of how a new filter could be designed to allow for the capture of the additional non-visible spectrum using the existing sensors:

Visible Light plus Infrared spectrum sensor

In this sample image, instead of using a pattern of 4 pixels, the pattern is spread over 9 pixels. The 4 existing RGB pixels are captured along with 5 additional pixels, represented by the white box and the various shades of grey. The optimal configuration is subject to analysis, but as an example it could be laid out like this:

  • Red = red, Blue = blue, Green 1/Green 2 = green (unchanged from the Bayer layout)
  • White = wide-spectrum UV
  • Lightest grey = 840nm IR
  • Second-lightest grey = 900nm IR
  • Third-lightest grey = 950nm IR
  • Darkest grey = 1000nm IR or wide-spectrum IR
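
To make the layout above concrete, here is a small Python sketch of one possible 3x3 filter pattern and a helper that splits a raw sensor frame into one plane per filter element. The pattern and the wavelength labels follow the example above and are assumptions, not a finalised sensor design.

    import numpy as np

    # Hypothetical 3x3 colour-filter layout; labels follow the example above.
    FILTER_PATTERN = [
        ["R",     "G1",    "UV"    ],
        ["G2",    "B",     "IR840" ],
        ["IR900", "IR950", "IR1000"],
    ]

    def split_planes(raw):
        """Split a raw frame (height and width assumed to be multiples of 3)
        into one low-resolution plane per filter element."""
        return {name: raw[r::3, c::3]
                for r, row in enumerate(FILTER_PATTERN)
                for c, name in enumerate(row)}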

As a future consideration, there are also emerging technologies that could utilize the optical properties of carbon nanotubes to capture information at specific wavelengths. Research has shown that a single carbon nanotube connected to a pair of electrodes can measure IR radiation effectively. This technology may be a long way from practical use; however, it provides an ongoing opportunity to continue developing and refining the technique.

2 – File Format: In addition to the filter, a new image/video file format would be created to store the captured information in a useful form. Many cameras have built-in processors that convert the RAW pixel data from the CCD into consumer file formats such as MPEG, JPEG, TIFF, etc. A new processor could be developed to provide traditional file formats plus a masking file, or, for more advanced use, the camera could save all the data together to allow more advanced processing and usage of the recorded data.

The new image/video RAW format would carry more information than today's formats and would need to store unique information for each displayed pixel to be effective for external image processing.
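
One way to picture such a format is a container that keeps the visible-light image and every non-visible plane side by side for each frame. The sketch below is an illustrative in-memory layout only; the field names and the compressed-archive output are assumptions, not a proposed standard.

    from dataclasses import dataclass, field
    from typing import Dict, Optional
    import numpy as np

    @dataclass
    class MultiSpectralFrame:
        rgb: np.ndarray                                           # H x W x 3 visible-light image
        ir: Dict[str, np.ndarray] = field(default_factory=dict)   # e.g. {"840nm": H x W plane}
        uv: Optional[np.ndarray] = None                           # optional wide-spectrum UV plane

        def save(self, path):
            """Write every plane to one compressed archive so masks can be
            rebuilt later without re-processing inside the camera."""
            planes = {"rgb": self.rgb}
            planes.update({f"ir_{k}": v for k, v in self.ir.items()})
            if self.uv is not None:
                planes["uv"] = self.uv
            np.savez_compressed(path, **planes)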

3 – Software: Image and video editing software would require modifications and/or filters to make full use of the extended information available through the new file format and/or the processing of the masking file alongside the video file. This may mean modifications and additions to existing industry-standard software; research may also show that entirely new software should be developed to make fuller and wider-ranging use of the data.

Some image processing systems have experimented with using non-visible light to increase the accuracy and quality of the visible image. For the purpose of image segmentation, the various wavelengths of non-visible light would be used to create highly detailed “mask(s)” useful for cropping the background away from the desired image(s).
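
A rough sketch of that masking step, assuming per-wavelength planes like those produced by the filter above and per-scene threshold values (both assumptions), might look like this:

    import numpy as np

    def wavelength_masks(planes, thresholds):
        """Build one mask per recorded IR wavelength. planes maps a label such
        as "840nm" to an H x W array; thresholds holds per-wavelength cut-offs
        tuned to the lighting of each scene."""
        masks = {}
        for name, plane in planes.items():
            norm = plane.astype(np.float32) / max(float(plane.max()), 1.0)
            masks[name] = norm >= thresholds.get(name, 0.5)  # True where that wavelength lights the pixel
        return masks

With one IR spotlight per object, each mask can then be used to crop a different subject out of the same RGB frame.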

4 - Non-Visible Illumination: To make the most of the technology, a wide range of electronic devices would be developed as the technology is adopted by the relevant industries. Some examples of devices that would be developed and produced:

  • IR floodlights – wavelength specific floodlights to provide a suitable non-visible background.
  • IR spotlights – wavelength specific spotlights that could be used to segment multiple objects within a single image by using multiple wavelengths.
  • IR backdrops – currently most greenscreen-type applications shine light evenly onto a controlled, smooth surface. An IR backdrop could be a “screen” that actually emits the light from its surface.
  • IR/RGB backdrops – using modifications of technologies that are hitting the market today, large LCD screens with LED backlights could be redeveloped to emit a combination of visible and non-visible light, allowing for a fully visible background while at the same time providing the non-visible background needed for image segmentation.
  • IR absorbing and reflecting materials – these could be used to achieve a variety of effects. For example, the current industry uses chroma-suits to allow the segmentation of parts of a subject such as a body-less person.

Benefits of the Technology

  • Subjects could be photographed or filmed against a variety of backgrounds and still be easily removed from the scene.
  • Subjects could be filmed against a background that has similar colors and textures to the scene they will be placed into. For example, an actor could be filmed against a projection of a mountain scene similar to the one that will be added during post-production. This would allow complex details such as hair to be segmented more easily, with minimal artifacts.
  • A single background could be used for full color filming, even if the background color matches the color of subjects being segmented from the scene.
  • Shadows appearing on the background may have little or no effect on the IR mask, allowing lighting of the subject to be tailored to the final scene rather than to achieving the best separation of color from the green screen.
  • For artistic photography, such as family portraits, this would eliminate the need to keep a wide variety of backgrounds available to photograph subjects against. Instead, the photographer would use a generic or projected image as the background. After (or during) the photography session, the photographer could select the actual background image from an unlimited selection, allowing a single quality photograph to be used for any number of scenes.

 

Patent and related technology research:

Live Action Compositing example http://www.scribd.com/doc/654481/Live-Action-Compositing

Using human-IR heat for image segmentation http://nae-lab.org/project/thermo-key/

Patents:

http://ip.com/patapp/US20060066738

http://www.google.com/patents/about?id=4TwGAAAAEBAJ&dq=6198503

http://www.google.com/patents/about?id=-iEzAAAAEBAJ&dq=6198503

http://www.google.com/patents/about?id=ij8dAAAAEBAJ&dq=6198503

http://www.google.com/patents/about?id=TPgfAAAAEBAJ&dq=6198503

This is not for IR - it is a device that uses a mapped background

http://www.google.com/patents/about?id=deMbAAAAEBAJ&dq=6198503

http://www.google.com/patents/about?id=9NUWAAAAEBAJ&dq=6198503

References:

http://www.computer.org/portal/web/csdl/doi/10.1109/MCG.2004.10009


Practical Example of Theory

 

Apparatus

  • Sony DCR DVD203 digital camcorder, used in photograph mode
  • Stable tripod
  • IR narrow-beam floodlight (a wide beam would be much more effective)
  • Blue floor mat for the background
  • Doll with hair
  • A dark room
  • Computer system with Adobe Photoshop Elements 8.0

Methodology

Step 1 - Photograph a still image of the subject in NightShot Plus mode, which uses near-infrared (NIR) to enhance image visibility.

Infrared Greenscreen image

Step 2 - Photograph a still image of the subject in normal RGB mode (apologies for the poor-quality photo).

Infrared Greenscreen image 2

Step 3 - Open these two images as a single layered file in Photoshop Elements.

Step 4 - On the NightShot (mask) layer, convert to black and white and increase the contrast. In this case, with a blue background used, I also adjust the blue level to get the best differential I can given the poor lighting. You can see in this step that the segmentation opportunity is good but not nearly perfect; the beam from the IR spotlight is a little too intense and focused. Developing proper infra-red lighting would not be difficult, at least for a scene like this.

Infrared Greenscreen image 3

Step 5 - Due to the inadequate lighting, I cut the excess dark regions (which would appear as a fairly even white background with proper IR lighting) and delete them to pure white to match the area around the subject. As part of this step I have increased the contrast to show the mask that was created clearly. Proper lighting and some minor filtering would remove the manual portions of this step, making it easy to automate.

Infrared Greenscreen image 4

Step 6 - Using the magic wand (which would be an automated part of the process), I select all of the white area and invert the selection so that the mask is selected.

Step 7 - Switch to the layer with the RGB image, copy the subject, and paste it into a new layer.

Step 8 - Turn off the visibility of all layers except the cutout of the subject. You now have a fairly good cutout of the subject to work with.
Infrared Greenscreen image 5

Step 9 - Insert the background and adjust the cutout layer as desired.
Infrared Greenscreen image 6
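
For comparison, steps 4 through 9 can be approximated in a few lines of Python once the two photographs exist as files. The file names, the 0.55 threshold, and the requirement that all three images share the same dimensions are assumptions made for this sketch.

    import numpy as np
    from PIL import Image

    # Steps 4/5: threshold the IR-lit (NightShot) frame; the bright backdrop
    # becomes the discarded region.
    nightshot = np.array(Image.open("nightshot_mask.jpg").convert("L"), dtype=np.float32)
    background = nightshot / nightshot.max() > 0.55

    # Steps 6/7: invert to select the subject and lift it from the RGB frame.
    subject = ~background
    rgb = np.array(Image.open("subject_rgb.jpg").convert("RGB"))

    # Steps 8/9: composite the cutout over the replacement background.
    backdrop = np.array(Image.open("new_background.jpg").convert("RGB"))
    out = backdrop.copy()
    out[subject] = rgb[subject]
    Image.fromarray(out).save("composite.png")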

Notes

NightShot Plus mode captures IR data and incorporates it into an RGB image. Because of this, the mask layer is not nearly as good quality as it would be if an actual IR layer were saved in addition to the RGB layer. This limitation also requires the mask layer of the subject to be shot in (visible) darkness to generate strong contrast, with the RGB layer then shot in RGB mode with visible lights enabled. If I had two NightShot cameras of the same type, split the image into the two cameras, and put a filter on the RGB-mode camera to remove all IR data and a filter on the NightShot-mode camera to remove all RGB data, I believe I would get a much more accurate mask.

For example, in the unedited images you can see that the hair sticking up off the head is visible. With even IR lighting and no conflicting RGB data stored on the mask layer, this should result in an even sharper, more accurate mask layer. That sharper mask could then be used to cut these fine details out of the RGB subject layer, providing a nice crisp image to work with.

Also of note: based on previous experience doing still-frame image compositing, the process outlined in this document was very quick and simple to carry out. With proper camera(s), proper lighting, suitable software, and a good studio environment, the results should be far better than this simple test.

 

Anti-Spam, Anti-Virus Conspiracy theory?

June 4th, 2006

We all hate three things on the internet: SPAM, viruses, and hackers. Each of these is responsible for turning a very useful tool into something that terrifies even those of us who work to embrace and expand the use of technology on a daily basis.

Over the years there has been a lot of criticism of companies that work to help secure us from these threats. Companies such as Symantec provide "solutions" to help remove or reduce the risk of getting a virus (or worm), an inbox filled with SPAM, or hackers breaking into your computer. The industry these massive companies are targeting is not small; billions of our dollars are shovelled into their coffers every year. We all know that companies like to protect their investments, so one has to wonder whether these big companies are really out to fix the problem, or whether they're purposely keeping the problem alive to stay in business.

Well, it's almost ridiculous to suggest they'll ever do anything to actually stop viruses, hackers, and spam from being rampant. If anyone in these big companies has come up with a solution to these problems, you can bet they've been nicely silenced. If the world was paying you millions of dollars each year to provide a service, and you realized a way to prevent the world from needing your service -- would you tell?

Some very interesting information has emerged about a specific company (Symantec) I'm having some difficulty with. First and foremost, as I try to work with some ISPs that use their "Brightmail" anti-spam software, I have come to realize that there appears to be absolutely no information listed anywhere on Symantec's website(s) about how companies affected by their blacklists can contact them to be removed. There's no contact information, there are no instructions, there are no lists of criteria, no information about how they put you on their lists, nothing at all. The only information you can find is the heavy marketing and the user/administrator documentation.

I've spent dozens of hours searching their website, newsgroups and the internet, and talking to ISPs, and have had no success in getting any information on how to work with them about being removed from their list. Further, there does not seem to be any way for us to check whether we are on their blacklists and, if so, why.

The most interesting tidbit of information came through today. If you are a company that uses Symantec Brightmail to filter email on your network (in other words, a paying customer) and you happen to be falsely listed in their blacklist, you have recourse to be removed from it. Even the ISP contacts have no contact information or documentation that they can forward on to companies that contact them about being blocked. Simply stated, if you're not putting money in their pockets, they don't seem to care if you are affected by their actions.

I was pointed to an article that makes an interesting point about Symantec's angle:
http://deliver-my-mail.sitesell.com/deliver-my-mail-5.html#SEND-FALSE-POSITIVE-TO-BRIGHTMAIL

Another site explains the battle with ISP's very well here:
http://www.free-tarot-reading.net/contact/ispblock.php

While any small webhosting or ISP business owner can appreciate how difficult it is to keep up with all the technologies in use on the internet today, one of the biggest problems is dealing with these big companies that have no interest in working with the little guys. They make their big dollars from big companies buying big quantities of their big software. The big companies that buy their big software don't care any more than Symantec about the little companies that might get falsely blocked and have no recourse. When selecting an anti-spam solution, it seems no one is asking whether the supplier works in an open and responsible way. They just want good stats that they can show to their bosses, so they can use the good stats in their marketing campaigns.

Yet again, I find myself wishing for a revolution in email. I firmly believe that there is a way to prevent 99% of junk mail from ever being sent, thus eliminating the need for anti-spam solutions. The problem is the same as always: who's going to pay to develop the new generation of email server software, and who's going to get the big players on board?

Mickael Maddison

DDOS hits Tucows

May 4th, 2006

It happens to us all at some point. If you're running a website, sometime, someday, your site will be down. Yesterday my many attempts to log into my reseller area of Tucows/OpenSRS resulted in timeouts for most of the day. At first I thought I might be having some trouble with my wireless setup or my router. This wasn't the case, as Tucows has notified us that there was a Distributed Denial of Service (DDOS) attack that impacted their network of servers.

Tucows isn't a small fish. They have multiple upstream providers and multiple servers. Without knowing the details of their network, I'm sure it's safe to say that they have a talented systems administration and programming staff and have taken many steps to ensure their network stays operational under almost any circumstance. It's quite likely they have systems in place to help protect against DDOS attacks, yet this attack was so large that they found themselves knocked offline yesterday.

I've used this analogy when discussing hosting and websites with clients over the years. An online business is not a lot different than having a physical store or office location. It's incredibly difficult and terribly expensive to have a physical business that is open and doing business 24/7. The problem is, sometimes there are storms or earthquakes that knock out the power and gas lines. Sometimes there are traffic accidents that knock out the roads to your business. The point is, there's any number of possibilities that can bring business to a halt for both online and offline businesses.

That said... there are ways to ensure that "some" business is always running under both scenarios. If you have multiple stores or offices, anything that affects one location may not have any effect on the other; you may lose some business for a while. If you have the cash and the know-how, you can emulate the same concept for online businesses. Any attempt to divide your eggs into multiple baskets will probably help you maximize your ability to cope with disasters, but don't expect that disasters will never come.

Today, Tucows seems to be back up and running much as they were before the attack. The outage did inconvenience me by delaying my ability to help some clients with domain name orders and renewals, but all is well now.

Mickael Maddison

Database driven web sites suck!

May 3rd, 2006

Well, that's a touch harsh I suppose, but the fact remains true. There are thousands of web sites out there with what is basically static content, yet every single web site visit results in this static data being pulled from a database. A good example of that would be this blog.

To my thinking, a database engine is best used to handle data that is dynamic, frequently updated, customized by the viewer, or requires strong search features. Using this blog software as my example, once I've completed this posting and hit submit, there's little chance I'll ever need to come back here and change the information. If I do, I don't really need the data to be pulled from a database.

Whenever a viewer comes to my site to read one of my posts, the blogware must connect to MySQL and run a query to pull up the latest posts based on the configuration I've entered in my administrative area. It's a handy tool that makes the site easier for me to maintain, there's no doubt about that. Yet, given the number of visitors compared to the number of times I access the administration area to post something, this just doesn't seem necessary.

The difference in load between a static web site and a database-driven site is nominal for low-traffic sites. When traffic levels are high, however, there is a significant difference in the number of visitors a static site can handle compared to a dynamic site. The difference can be very significant for sites that have huge traffic volumes on a tight budget.

In my thinking, this blog would be better served by some old-school programming solutions. Rather than have each visitor pull data out of the database directly, instead the software could be setup to dump the data out to regular HTML files at regular intervals. That way the load on the database is not tied to the number of visitors.

So now I've gone and made blogging software look bad. Well, I have to redeem B2Evolution. This blog software does have a feature to do exactly what I've suggested. It's not the default way the software is used, but it is configurable in the administration area with some tweaking, or possibly some add-on modules, to get it set up just right. So in this specific case, the fact that my blog is not in HTML format is really my own fault... someday, when time permits, I'll likely go ahead and change that.

There are a lot of sites out there, most of which are not blogs, that don't have this flexibility. Over the last year we've had a number of clients that were sold on the idea of dynamic sites (by other companies) that they could easily update through a custom-built administration area -- all of which runs on a database. The concept is great, but these customers have simply found that they wish they'd never gone this route. The cost to make significant changes to their sites seems too high, the amount of use most businesses make of their custom-built admin areas is negligible, and, worst of all, most of these sites have terrible search engine content. By converting these clients to HTML-based sites and connecting them with our SiteBuilder software, they've been able to enjoy even more flexibility in managing their sites, they've been able to implement search engine optimization suggestions, their sites are faster, and they're a lot happier with the bill.

My suggestion to all businesses looking to change their web site: be very careful when you are approached with the idea of a database-driven web site. Consider carefully whether your site needs to be served up this way. There are some sites that really do need to be driven from a database, but there are a whole lot that really don't.

Mickael Maddison

The joys of replicating

April 30th, 2006

One of the most wonderful and most frustrating parts of working in the web hosting and development industry is the continual need to learn how to implement new or alternative technologies to get the job done. For example, one of my most active e-commerce clients has outgrown their current dedicated server setup and has asked me to provide a pair of servers for increased performance and redundancy. The idea is that two separate machines with the traffic split between them provide better reliability and performance than a single machine.

This is where replication comes into the picture. In order to have 2 servers running as if they were both the same identical site, both servers need to have identical data in their databases. Fortunately, for me, this isn't a new concept and it's just up to me to learn how replication works with MySQL and test it out.

Initially, as with most technologies, MySQL's approach to replication looked very complicated, but it is in fact a very simple concept: start both servers off with the same data, have each server log any changes to its data, and send those changes to the other server(s) in the chain. With the help of a special auto-increment setting, new rows inserted on both servers at the same time can be prevented from having overlapping numeric IDs.
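
A toy illustration of the offset scheme (in MySQL the relevant settings are auto_increment_increment and auto_increment_offset): with an increment of 2, one server hands out odd IDs and the other even IDs, so rows inserted on both sides can never collide.

    def next_ids(offset, increment, count):
        """IDs a server would assign with the given auto-increment settings."""
        return [offset + increment * n for n in range(count)]

    server1 = next_ids(offset=1, increment=2, count=5)   # [1, 3, 5, 7, 9]
    server2 = next_ids(offset=2, increment=2, count=5)   # [2, 4, 6, 8, 10]
    assert not set(server1) & set(server2)                # no overlapping IDs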

With the 2 servers for this client, each server will run as a Master MySQL server and as a Slave. This "Master to Master Replication" setup will allow both machines to run independently while keeping their peer updated. The major caveat seems to be that the replication circle can be broken, causing the data to fall out of sync. This requires a backup plan: a method of getting both machines back in sync when things inevitably go awry.

I'm still testing out the 2 servers, but so far things are looking very promising. I am, however, looking forward to the future release of MySQL, which is expected to have even more robust replication support.

Mickael Maddison