Enquire to Annotate


Asking how a new generation of co-working spaces and networks is impacting your city.

Find Out More

This site has no mission. It serves as a meeting place between meeting places, joining you to a server hosted within London's Second Home. Using simple intelligence, the server mines content that you can find here. It's your touch card into a growing archive of information that responds to a network of arteries. The climate of how you work and live is developing; Enquire to Annotate hopes you'll check in and see how we expand. Stay tuned to see how you can contribute.

Vietnam - Saigon - Weaving - 2

Weaving is a method of textile production in which two distinct sets of yarns or threads are interlaced at right angles to form a fabric or cloth. Other methods are knitting, crocheting, felting, and braiding or plaiting. The longitudinal threads are called the warp and the lateral threads are the weft or filling. (Weft is an old English word meaning "that which is woven"; compare leave and left.) The method in which these threads are inter-woven affects the characteristics of the cloth. Cloth is usually woven on a loom, a device that holds the warp threads in place while filling threads are woven through them. A fabric band which meets this definition of cloth (warp threads with a weft thread winding between) can also be made using other methods, including tablet weaving, back strap loom, or other techniques without looms.

The way the warp and filling threads interlace with each other is called the weave. The majority of woven products are created with one of three basic weaves: plain weave, satin weave, or twill. Woven cloth can be plain (in one colour or a simple pattern), or can be woven in decorative or artistic design.

PROCESS AND TERMINOLOGY
In general, weaving involves using a loom to interlace two sets of threads at right angles to each other: the warp which runs longitudinally and the weft (older woof) that crosses it. One warp thread is called an end and one weft thread is called a pick. The warp threads are held taut and in parallel to each other, typically in a loom. There are many types of looms.

Weaving can be summarized as a repetition of these three actions, also called the primary motions of the loom.

Shedding: where the warp threads (ends) are separated by raising or lowering heald frames (heddles) to form a clear space where the pick can pass.
Picking: where the weft or pick is propelled across the loom by hand, an air-jet, a rapier or a shuttle.
Beating-up or battening: where the weft is pushed up against the fell of the cloth by the reed.

The warp is divided into two overlapping groups, or lines (most often adjacent threads belonging to the opposite group), that run in two planes, one above another, so the shuttle can be passed between them in a straight motion. Then the upper group is lowered by the loom mechanism and the lower group is raised (shedding), allowing the shuttle to pass in the opposite direction, also in a straight motion. Repeating these actions forms a fabric mesh, but without beating-up the final distance between adjacent wefts would be irregular and far too large.
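As an illustration, the three primary motions can be written out as a short simulation loop (a minimal Python sketch under the assumption of a plain weave, where odd and even ends alternate between sheds; this is not loom-control code):

```python
# Minimal sketch of the loom's three primary motions for a plain weave.
# Warp ends are numbered 0..n_ends-1; on each pick, one group of ends
# is raised and the other lowered, and the groups swap for the next pick.

def weave(n_ends, n_picks):
    cloth = []  # one row of interlacements per pick
    for pick in range(n_picks):
        # Shedding: raise one group of ends to open a clear space (shed).
        raised = {end for end in range(n_ends) if (end + pick) % 2 == 0}
        # Picking: the weft crosses the shed, passing under every raised
        # end and over every lowered end.
        row = ["under" if end in raised else "over" for end in range(n_ends)]
        # Beating-up: the new pick is pushed against the fell of the cloth.
        cloth.append(row)
    return cloth

fabric = weave(n_ends=4, n_picks=2)
# Adjacent picks interlace in opposite phase, as plain weave requires:
# fabric[0] == ['under', 'over', 'under', 'over']
# fabric[1] == ['over', 'under', 'over', 'under']
```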

The secondary motions of the loom are the:

Let-off motion: where the warp is let off the warp beam at a regulated speed to make the filling even and of the required design.
Take-up motion: where the woven fabric is taken up in a regulated manner so that the density of filling is maintained.

The tertiary motions of the loom are the stop motions, which stop the loom in the event of a thread break. The two main stop motions are the

warp stop motion
weft stop motion

The principal parts of a loom are the frame, the warp-beam or weaver's beam, the cloth-roll (apron bar), the heddles and their mounting, and the reed. The warp-beam is a wooden or metal cylinder on the back of the loom on which the warp is delivered. The threads of the warp extend in parallel order from the warp-beam to the front of the loom, where they are attached to the cloth-roll. Each thread or group of threads of the warp passes through an opening (eye) in a heddle. The warp threads are separated by the heddles into two or more groups, each controlled and automatically drawn up and down by the motion of the heddles. In the case of small patterns, the movement of the heddles is controlled by cams, which move the heddles up by means of a frame called a harness; in larger patterns the heddles are controlled by a dobby mechanism, where the healds are raised according to pegs inserted into a revolving drum. Where a complex design is required, the healds are raised by harness cords attached to a Jacquard machine. Every time the harness (the heddles) moves up or down, an opening (shed) is made between the threads of warp, through which the pick is inserted. Traditionally the weft thread is inserted by a shuttle.

On a conventional loom, the weft thread is carried on a pirn, in a shuttle that passes through the shed. A handloom weaver could propel the shuttle by throwing it from side to side with the aid of a picking stick. The "picking" on a power loom is done by rapidly hitting the shuttle from each side using an overpick or underpick mechanism controlled by cams 80–250 times a minute. When a pirn is depleted, it is ejected from the shuttle and replaced with the next pirn held in a battery attached to the loom. Multiple shuttle boxes allow more than one shuttle to be used. Each can carry a different colour which allows banding across the loom.

The rapier-type weaving machines do not have shuttles; they propel the weft by means of small grippers or rapiers that pick up the filling thread and carry it halfway across the loom, where another rapier picks it up and pulls it the rest of the way. Some carry the filling yarns across the loom at rates in excess of 2,000 metres per minute. Manufacturers such as Picanol have reduced the mechanical adjustments to a minimum and control all the functions through a computer with a graphical user interface. Other types use compressed air to insert the pick. They are all fast, versatile and quiet.

The warp is sized in a starch mixture for smoother running. The loom is warped (loomed or dressed) by passing the sized warp threads through two or more heddles attached to harnesses. The power weaver's loom is warped by separate workers. Most looms used for industrial purposes have a machine that ties new warp threads to the waste of previously used warp threads while still on the loom; an operator then rolls the old and new threads back on the warp beam. The harnesses are controlled by cams, dobbies or a Jacquard head.

The raising and lowering sequence of warp threads in various sequences gives rise to many possible weave structures:

plain weave: plain cloth, hopsacks, poplin, taffeta, poult-de-soie, pibiones and grosgrain.
twill weave: described by the weft float followed by the warp float, arranged to give a diagonal pattern; e.g. 2/1 twill, 3/3 twill, 1/2 twill. These are softer fabrics than plain weaves.
satin weave: satins and sateens.
complex computer-generated interlacings.
pile fabrics: such as velvets and velveteens.
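The raising and lowering sequences behind these structures can be pictured as small interlacement grids. A minimal Python sketch for illustration (the a/b float notation follows the twill description above; the function name is invented for the example):

```python
# Generate an interlacement grid for an a/b twill. Each cell marks one
# warp/weft crossing: 1 = the weft floats on top, 0 = the warp is on top.
# The weft floats over `over` ends, then under `under` ends, and each
# pick shifts by one end, producing the characteristic diagonal.

def twill(over, under, n_picks=None, n_ends=None):
    unit = over + under                      # length of one repeat
    n_picks = n_picks or unit
    n_ends = n_ends or unit
    return [
        [1 if (end - pick) % unit < over else 0 for end in range(n_ends)]
        for pick in range(n_picks)
    ]

plain = twill(1, 1)    # a 1/1 interlacement is simply plain weave
two_one = twill(2, 1)  # 2/1 twill: [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
```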

Both warp and weft can be visible in the final product. By spacing the warp more closely, it can completely cover the weft that binds it, giving a warp faced textile such as repp weave. Conversely, if the warp is spread out, the weft can slide down and completely cover the warp, giving a weft faced textile, such as a tapestry or a Kilim rug. There are a variety of loom styles for hand weaving and tapestry.

HISTORY
There are some indications that weaving was already known in the Paleolithic era, as early as 27,000 years ago. An indistinct textile impression has been found at the Dolní Věstonice site. According to the find, Upper Palaeolithic weavers manufactured a variety of cordage types, produced plaited basketry, and made sophisticated twined and plain woven cloth. The artifacts include imprints in clay and burned remnants of cloth.

The oldest known textiles found in the Americas are remnants of six finely woven textiles and cordage found in Guitarrero Cave, Peru. The weavings, made from plant fibres, are dated between 10,100 and 9,080 BCE.

MIDDLE EAST AND AFRICA
The earliest known Neolithic textile production in the Old World is supported by a 2013 find of a piece of cloth woven from hemp, in burial F. 7121 at the Çatalhöyük site, suggested to be from around 7000 BCE. Further finds come from the advanced civilisation preserved in the pile dwellings in Switzerland. Another extant fragment from the Neolithic was found in Fayum, at a site dated to about 5000 BCE. This fragment is woven at about 12 threads by 9 threads per cm in a plain weave. Flax was the predominant fibre in Egypt at this time (3600 BCE) and remained popular in the Nile Valley, though wool became the primary fibre used in other cultures around 2000 BCE. Weaving was known in all the great civilisations, but no clear line of causality has been established. Early looms required two people to create the shed and one person to pass the filling. Early looms wove a fixed length of cloth, but later ones allowed warp to be wound out as the fell progressed. The weavers were often children or slaves. Weaving became simpler when the warp was sized.

THE AMERICAS
The Indigenous people of the Americas wove textiles of cotton throughout tropical and subtropical America and, in the South American Andes, of wool from camelids, primarily domesticated llamas and alpacas. Cotton and the camelids were both domesticated by about 4,000 BCE. American weavers are "credited with independently inventing nearly every non-mechanized technique known today."

In the Inca Empire of the Andes, women did most of the weaving using backstrap looms to make small pieces of cloth and vertical frame and single-heddle looms for larger pieces. Andean textile weavings were of practical, symbolic, religious, and ceremonial importance and used as currency, tribute, and as a determinant of social class and rank. Sixteenth-century Spanish colonists were impressed by both the quality and quantity of textiles produced by the Inca Empire. Some of the techniques and designs are still in use in the 21st century.

The oldest-known weavings in North America come from the Windover Archaeological Site in Florida. The Windover hunter-gatherers produced "finely crafted" twined and plain weave textiles from plant fibres, dating from 4900 to 6500 BCE.

EAST ASIA
The weaving of silk from silkworm cocoons has been known in China since about 3500 BCE. Silk that was intricately woven and dyed, showing a well developed craft, has been found in a Chinese tomb dating back to 2700 BCE.

Silk weaving in China was an intricate and involved process. Men and women, usually from the same family, had their own roles in the weaving process, and the actual work of weaving was done by both. Women were often weavers since it was a way they could contribute to the household income while staying at home; they would usually weave simpler designs within the household, while men were in charge of weaving the more intricate and complex pieces. The process of sericulture and weaving emphasized the idea that men and women should work together rather than women being subordinate to men. Weaving became an integral part of Chinese women’s social identity, and several rituals and myths were associated with the promotion of silk weaving, especially as a symbol of female power. Weaving balanced men’s and women’s economic contributions and brought many economic benefits.

There were many paths into the occupation of weaver. Women usually married into the occupation, belonged to a family of weavers, and/or lived in a location whose climate allowed for the process of silk weaving. Weavers usually belonged to the peasant class. Silk weaving became a specialized job requiring specific technology and equipment that was carried out domestically within households. Although most silk weaving was done within the confines of the home and family, there were also some specialized workshops that hired skilled silk weavers. These workshops took care of the weaving process, although the raising of the silkworms and reeling of the silk remained work for peasant families. The silk woven in workshops rather than homes was of higher quality, since a workshop could afford to hire the best weavers. These weavers were usually men who operated more complicated looms, such as the wooden draw-loom. This created a competitive market of silk weavers.

The quality and ease of the weaving process depended on the silk produced by the silkworms. The easiest silk to work with came from breeds of silkworms that spun their cocoons so that they could be unwound in one long strand. The reeling, or unwinding, of silkworm cocoons begins by placing the cocoons in boiling water in order to break apart the silk filaments and kill the silkworm pupae. Women would then find the end of the strands of silk by sticking their hands into the boiling water. Usually this task was done by girls of ages eight to twelve, while the more complex jobs were given to older women. From the unwound cocoons they would then create a silk thread, which could vary in thickness and strength.

After the reeling of the silk, the silk would be dyed before the weaving process began. There were many different looms and tools for weaving. For high quality and intricate designs, a wooden draw-loom or pattern loom was used. This loom would require two or three weavers and was usually operated by men. There were also other smaller looms, such as the waist loom, that could be operated by a single woman and were usually used domestically.

Sericulture and silk weaving spread to Korea by 200 BCE, to Khotan by 50 CE, and to Japan by about 300 CE.

The pit-treadle loom may have originated in India, though most authorities place its invention in China. Pedals were added to operate heddles. By the Middle Ages such devices also appeared in Persia, Sudan, Egypt and possibly the Arabian Peninsula, where "the operator sat with his feet in a pit below a fairly low-slung loom." By 700 CE, horizontal looms and vertical looms could be found in many parts of Asia, Africa and Europe. In Africa, the rich dressed in cotton while the poorer wore wool. By the 12th century it had come to Europe, either from Byzantium or from Moorish Spain, where the mechanism was raised higher above the ground on a more substantial frame.

SOUTHEAST ASIA
In the Philippines, numerous pre-colonial weaving traditions exist among different ethnic groups. They used various plant fibers, mainly abacá or banana, but also including tree cotton, buri palm (locally known as buntal) and other palms, various grasses (like amumuting and tikog), and barkcloth. The oldest evidence of weaving traditions are Neolithic stone tools used for preparing barkcloth found in archeological sites in Sagung Cave of southern Palawan and Arku Cave of Peñablanca, Cagayan. The latter has been dated to around 1255–605 BCE.

MEDIEVAL EUROPE
The predominant fibre was wool, followed by linen and nettlecloth for the lower classes. Cotton was introduced to Sicily and Spain in the 9th century. When Sicily was captured by the Normans, they took the technology to Northern Italy and then the rest of Europe. Silk fabric production was reintroduced towards the end of this period and the more sophisticated silk weaving techniques were applied to the other staples.

The weaver worked at home and marketed his cloth at fairs. Warp-weighted looms were commonplace in Europe before the introduction of horizontal looms in the 10th and 11th centuries. Weaving became an urban craft and to regulate their trade, craftsmen applied to establish a guild. These initially were merchant guilds, but developed into separate trade guilds for each skill. The cloth merchant who was a member of a city’s weavers guild was allowed to sell cloth; he acted as a middleman between the tradesmen weavers and the purchaser. The trade guilds controlled quality and the training needed before an artisan could call himself a weaver.

By the 13th century, an organisational change took place, and a system of putting out was introduced. The cloth merchant purchased the wool and provided it to the weaver, who sold his produce back to the merchant. The merchant controlled the rates of pay and economically dominated the cloth industry. The merchants’ prosperity is reflected in the wool towns of eastern England, Norwich, Bury St Edmunds and Lavenham being good examples. Wool was a political issue. The supply of thread has always limited the output of a weaver. About that time, the spindle method of spinning was replaced by the great wheel, and soon after by the treadle-driven spinning wheel. The loom remained the same, but with the increased volume of thread it could be operated continuously.

The 14th century saw considerable flux in population. The 13th century had been a period of relative peace, and Europe had become overpopulated. Poor weather led to a series of poor harvests and starvation, and there was great loss of life in the Hundred Years War. Then in 1346 Europe was struck by the Black Death, and the population was reduced by up to a half. Arable farming was labour-intensive, and sufficient workers could no longer be found. Land prices dropped, and land was sold and put to sheep pasture. Traders from Florence and Bruges bought the wool, then sheep-owning landlords started to weave wool outside the jurisdiction of the city and trade guilds. The weavers started by working in their own homes, then production was moved into purpose-built buildings. The working hours and the amount of work were regulated. The putting-out system had been replaced by a factory system.

The migration of the Huguenot weavers, Calvinists fleeing religious persecution in mainland Europe, to Britain around 1685 challenged the English weavers of cotton, woollen and worsted cloth, who subsequently learned the Huguenots’ superior techniques.

INDUSTRIAL REVOLUTION
Before the Industrial Revolution, weaving was a manual craft and wool was the principal staple. In the great wool districts a form of factory system had been introduced, but in the uplands weavers worked from home on a putting-out system. The wooden looms of that time might be broad or narrow; broad looms were those too wide for the weaver to pass the shuttle through the shed, so the weaver needed an expensive assistant (often an apprentice). This ceased to be necessary after John Kay invented the flying shuttle in 1733. The shuttle and the picking stick sped up the process of weaving so much that there was now a shortage of thread, or equivalently a surplus of weaving capacity. The opening of the Bridgewater Canal in June 1761 allowed cotton to be brought into Manchester, an area rich in fast-flowing streams that could be used to power machinery. Spinning was the first process to be mechanised (spinning jenny, spinning mule), and this led to limitless thread for the weaver.

Edmund Cartwright first proposed building a weaving machine that would function similarly to recently developed cotton-spinning mills in 1784, drawing scorn from critics who said the weaving process was too nuanced to automate. He built a factory at Doncaster and obtained a series of patents between 1785 and 1792. In 1788, his brother Major John Cartwright built Revolution Mill at Retford (named for the centenary of the Glorious Revolution). In 1791, he licensed his loom to the Grimshaw brothers of Manchester, but their Knott Mill burnt down the following year (possibly a case of arson). Edmund Cartwright was granted a reward of £10,000 by Parliament for his efforts in 1809. However, success in power-weaving also required improvements by others, including H. Horrocks of Stockport. Only during the two decades after about 1805 did power-weaving take hold. At that time there were 250,000 hand weavers in the UK. Textile manufacture was one of the leading sectors in the British Industrial Revolution, but weaving was a comparatively late sector to be mechanised. The loom became semi-automatic in 1842 with Kenworthy and Bullough’s Lancashire Loom. The various innovations took weaving from a home-based artisan activity (labour-intensive and man-powered) to a steam-driven factory process. A large metal manufacturing industry grew to produce the looms: firms such as Howard & Bullough of Accrington, and Tweedales and Smalley, and Platt Brothers. Most power weaving took place in weaving sheds, in small towns circling Greater Manchester away from the cotton-spinning area. The earlier combination mills, where spinning and weaving took place in adjacent buildings, became rarer. Wool and worsted weaving took place in West Yorkshire, and in particular Bradford, where there were large factories such as Lister’s or Drummond’s in which all the processes took place.
Both men and women with weaving skills emigrated, and took the knowledge to their new homes in New England, to places like Pawtucket and Lowell.

Woven ‘grey cloth’ was then sent to the finishers where it was bleached, dyed and printed. Natural dyes were originally used, with synthetic dyes coming in the second half of the 19th century. The need for these chemicals was an important factor in the development of the chemical industry.

The invention in France of the Jacquard loom in about 1803 enabled complicated patterned cloths to be woven, by using punched cards to determine which threads of coloured yarn should appear on the upper side of the cloth. The Jacquard allowed individual control of each warp thread, row by row without repeating, so very complex patterns were suddenly feasible. Samples exist showing calligraphy, and woven copies of engravings. Jacquards could be attached to handlooms or powerlooms.
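The punched-card principle is easy to sketch in code: each card row selects, hole by hole, which warp ends are lifted for that pick, so the lift plan can change on every row. A hypothetical Python illustration (the card format and function name are invented for the example):

```python
# Decode a stack of punched cards into a lift plan, one entry per pick.
# 'o' = hole (lift that warp end for this pick), '.' = no hole.

def lifts_from_cards(cards):
    plan = []
    for card in cards:
        plan.append([end for end, punch in enumerate(card) if punch == "o"])
    return plan

# A made-up 4-end, 3-pick pattern: no row need repeat any other.
cards = ["o.o.",
         ".o.o",
         "oo.."]
print(lifts_from_cards(cards))  # [[0, 2], [1, 3], [0, 1]]
```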

THE ROLE OF WEAVER
A distinction can be made between the role, lifestyle and status of a handloom weaver and those of the powerloom weaver and craft weaver. The perceived threat of the power loom led to disquiet and industrial unrest. Well-known protest movements such as the Luddites and the Chartists had hand loom weavers amongst their leaders. In the early 19th century power weaving became viable. Richard Guest in 1823 made a comparison of the productivity of power and hand loom weavers:

A very good Hand Weaver, a man twenty-five or thirty years of age, will weave two pieces of nine-eighths shirting per week, each twenty-four yards long, and containing one hundred and five shoots of weft in an inch, the reed of the cloth being a forty-four, Bolton count, and the warp and weft forty hanks to the pound, A Steam Loom Weaver, fifteen years of age, will in the same time weave seven similar pieces.

He then speculates about the wider economics of using powerloom weavers:

…it may very safely be said, that the work done in a Steam Factory containing two hundred Looms, would, if done by hand Weavers, find employment and support for a population of more than two thousand persons.

HAND LOOM WEAVERS
Hand loom weaving was done by both sexes, but men outnumbered women, partly due to the strength needed to batten. They worked from home, sometimes in a well-lit attic room. The women of the house would spin the thread they needed and attend to finishing. Later, women took to weaving; they obtained their thread from the spinning mill and worked as outworkers on a piecework contract. Over time, competition from the power looms drove down the piece rate, and they lived in increasing poverty.

POWER LOOM WEAVERS
Power loom workers were usually girls and young women. They had the security of fixed hours and, except in times of hardship such as the cotton famine, regular income. They were paid a wage and a piece-work bonus. Even when working in a combined mill, weavers stuck together and enjoyed a tight-knit community. The women usually minded four machines and kept the looms oiled and clean. They were assisted by ‘little tenters’, children on a fixed wage who ran errands and did small tasks; they learnt the job of the weaver by watching. Often they would be half-timers, carrying a green card which the teacher and overlookers would sign to say they had turned up at the mill in the morning and at the school in the afternoon. At fourteen or so they came full-time into the mill and started by sharing looms with an experienced worker, where it was important to learn quickly, as both would be on piece work. Serious problems with the loom were left to the tackler to sort out; he would inevitably be a man, as were usually the overlookers. The mill had its health and safety issues: there was a reason why the women tied their hair back with scarves. Inhaling cotton dust caused lung problems, and the noise caused total hearing loss. Weavers would mee-maw (mouth words in an exaggerated way), as normal conversation was impossible. Weavers used to ‘kiss the shuttle’, that is, suck thread through the eye of the shuttle. This left a foul taste in the mouth due to the oil, which was also carcinogenic.

CRAFT WEAVERS
Arts and Crafts was an international design philosophy that originated in England and flourished between 1860 and 1910 (especially in the second half of that period), continuing its influence until the 1930s. Instigated by the artist and writer William Morris (1834–1896) during the 1860s and inspired by the writings of John Ruskin (1819–1900), it had its earliest and most complete development in the British Isles but spread to Europe and North America. It was largely a reaction against mechanisation, and the philosophy advocated traditional craftsmanship using simple forms and often medieval, romantic or folk styles of decoration. Hand weaving was highly regarded and taken up as a decorative art.

BAUHAUS WEAVING WORKSHOP
In the 1920s the weaving workshop of the Bauhaus design school in Germany aimed to raise weaving, previously seen as a craft, to a fine art, and also to investigate the industrial requirements of modern weaving and fabrics. Under the direction of Gunta Stölzl, the workshop experimented with unorthodox materials, including cellophane, fiberglass, and metal. From expressionist tapestries to the development of soundproofing and light-reflective fabric, the workshop’s innovative approach instigated a modernist theory of weaving. Former Bauhaus student and teacher Anni Albers published the seminal 20th-century text On Weaving in 1965. Other notables from the Bauhaus weaving workshop include Otti Berger, Margaretha Reichardt, and Benita Otte.

OTHER CULTURES
WEAVING IN THE AMERICAN COLONIES (1500-1800)
Colonial America relied heavily on Great Britain for manufactured goods of all kinds. British policy was to encourage the production of raw materials in colonies and discourage manufacturing. The Wool Act 1699 restricted the export of colonial wool. As a result, many people wove cloth from locally produced fibres. The colonists also used wool, cotton and flax (linen) for weaving, though hemp could be made into serviceable canvas and heavy cloth. They could get one cotton crop each year; until the invention of the cotton gin it was a labour-intensive process to separate the seeds from the fibres.

A plain weave was preferred as the added skill and time required to make more complex weaves kept them from common use. Sometimes designs were woven into the fabric but most were added after weaving using wood block prints or embroidery.

AMERICAN SOUTHWEST
Textile weaving, using cotton dyed with pigments, was a dominant craft among pre-contact tribes of the American southwest, including various Pueblo peoples, the Zuni, and the Ute tribes. The first Spaniards to visit the region wrote about seeing Navajo blankets. With the introduction of Navajo-Churro sheep, the resulting woolen products have become very well known. By the 18th century the Navajo had begun to import yarn with their favorite color, Bayeta red. Using an upright loom, the Navajo wove blankets worn as garments, and then rugs after the 1880s for trade. The Navajo traded for commercial wool, such as Germantown, imported from Pennsylvania. Under the influence of European-American settlers at trading posts, Navajos created new and distinct styles, including "Two Gray Hills" (predominantly black and white, with traditional patterns); "Teec Nos Pos" (colorful, with very extensive patterns); "Ganado" (founded by Don Lorenzo Hubbell), red-dominated patterns with black and white; "Crystal" (founded by J. B. Moore), Oriental and Persian styles (almost always with natural dyes); "Wide Ruins" and "Chinlee", banded geometric patterns; "Klagetoh", diamond-type patterns; and "Red Mesa", bold diamond patterns. Many of these patterns exhibit a fourfold symmetry, which is thought to embody traditional ideas about harmony, or hózhó.

AMAZON CULTURES
Among the indigenous people of the Amazon basin, densely woven palm-bast mosquito netting, or tents, were utilized by the Panoans, Tupinambá, Western Tucano, Yameo, Záparoans, and perhaps by the indigenous peoples of the central Huallaga River basin (Steward 1963:520). Aguaje palm-bast (Mauritia flexuosa, Mauritia minor, or swamp palm) and the frond spears of the Chambira palm (Astrocaryum chambira, A. munbaca, A. tucuma, also known as Cumare or Tucum) have been used for centuries by the Urarina of the Peruvian Amazon to make cordage, net bags and hammocks, and to weave fabric. Among the Urarina, the production of woven palm-fiber goods is imbued with varying degrees of an aesthetic attitude, which draws its authentication from referencing the Urarina’s primordial past. Urarina mythology attests to the centrality of weaving and its role in engendering Urarina society. The post-diluvial creation myth accords women’s weaving knowledge a pivotal role in Urarina social reproduction. Even though palm-fiber cloth is regularly removed from circulation through mortuary rites, Urarina palm-fiber wealth is neither completely inalienable nor fungible, since it is a fundamental medium for the expression of labor and exchange. The circulation of palm-fiber wealth stabilizes a host of social relationships, ranging from marriage and fictive kinship (compadrazgo, spiritual compeership) to perpetuating relationships with the deceased.

COMPUTER SCIENCE
The Nvidia Parallel Thread Execution ISA derives some terminology (specifically the term Warp to refer to a group of concurrent processing threads) from historical weaving traditions.

WIKIPEDIA

Posted by asienman on 2019-06-13 20:11:47

Tagged: Vietnam, Saigon, Weaving, asienman-videography

How to get Windows 10 cheap (or even for free) | PCWorld

via WordPress bit.ly/31wZ4bf

Windows 10 licenses are expensive—almost painfully so. Shelling out $139 for Windows 10 Home or $200 for Windows 10 Pro feels rough when Linux is free and Windows 7 still hasn’t been completely put down. That amount of cash can easily be a third of a budget PC build.

But with less developer support for Linux and the end-of-life deadline rapidly approaching for Windows 7, Windows 10 is an inescapable necessity for most of us. What’s not a given is paying full retail.

Cheap Windows 10: Fast summary

The list below outlines our suggested methods, with links to the explanations of each.

Yes, it’s possible to snag a discount on Windows 10. The amount you’ll save depends on how much hassle you can tolerate—as well as your circumstances. If you’re lucky, you could technically get Windows 10 for free. (Legitimately for free, that is; installing Windows 10 without ever activating it doesn’t quite count as getting a full, sanctioned copy of Windows.)

Here’s how.

Packrat’s loophole: Try a Windows 7 or 8 key

If you have an old Windows 7 or Windows 8 PC lying around, you may still be able to reuse its key to activate Windows 10.

When Microsoft first launched Windows 10 back in 2015, it offered Windows 7 and Windows 8 users a truly free, no-strings upgrade to the new operating system. The promotion was available for just one year—presumably to push up Windows 10 adoption rates—and expired in July 2016.

But even though Microsoft officially ended this program three years ago, it still hasn’t completely shut everything down. The activation servers have been allowing Windows 7 and 8 keys on some Windows 10 installs.

The Windows 7 or Windows 8 product keys that commonly work for this method are the retail and OEM varieties, while only sporadic reports exist for volume license keys (i.e., enterprise or educational licenses) working with this loophole.

While there’s no exact science for what works, the following guidelines take into account various data points floating around in articles, forums, and Reddit. First off, you’re limited to using keys for a specific version of Windows 7 or 8 with the equivalent in Windows 10. If you have a Windows 7 or 8 Home license, that will only work for Windows 10 Home, and Windows 7 or 8 Pro only work for Windows 10 Pro.

If using a Windows 7 or 8 key works for activation, a digital license will be issued to you.

An additional rule of thumb is that you may need a retail product key if you’re doing a clean install of Windows 10 on a new computer. OEM product keys should work if you’re doing an upgrade or clean install of Windows 10 on the machine the Windows 7 or 8 license is tied to.

If you don’t have your license key easily accessible, you can find it by using a program like Magical Jelly Bean Product KeyFinder. (You can read our step-by-step guide for how to use that particular program here.)

Once you have that on hand, you’ll enter it one of two ways: Either when prompted during the installation process if you’re doing a clean install, or through the “Change product key” option in the Activation section of Windows 10’s settings.

If the product key is recognized, you’ll be issued a digital license that associates your machine with the key, so you should be good to go for the future if this method ever expires, as Microsoft had previously said it would.

Easiest discount: An OEM license

Our next suggestion is a method that’s available to everyone and has the least amount of hassle: Purchasing an OEM license.

License types are different than operating system versions: They dictate what you can do with the software, while OS versions are distinguished by the features available. Multiple Windows license types exist, but the two commonly available to a home user are the retail and OEM varieties.

When you walk into a store or pop over to Microsoft’s website, handing over that $139 for Windows 10 Home (or $200 for Windows 10 Pro) gets you the retail license. If you visit an online retailer like Amazon or Newegg, you can find both retail and OEM licenses for sale. You can usually spot an OEM license by its price, which tends to run about $110 for a Windows 10 Home license and $150 for a Windows 10 Pro license.

All the features of the operating system version are the same for both license types. The difference is that with a retail license, you can transfer the license key to a different PC later on.

The process for activating a Windows 10 OEM license is the same as for a retail license.

You can’t do that with an OEM license. In exchange for a lower price, you get to use the license key on only one PC, period. If you build a system but roll a new one four years later, you can’t transfer the license to the new machine.

Also, if the hardware used to identify your system fails—namely, the motherboard—Microsoft’s registration servers won’t recognize your license as valid after you replace the dead part. Microsoft has historically been kind about such situations, however; you can usually call to reactivate the license after replacing a fried mobo. But it is an extra hassle.

For further savings, you’ll have to wait for the rare sale or Black Friday, when you can get an OEM license in the neighborhood of $85 (Windows 10 Home) to $120 (Windows 10 Pro). Otherwise, if you want to shave down costs further, it’s going to take work—or a deep locus of calm when your associates criticize your life choices. (Skip down to “Low prices with a caveat” for details.)

Deepest savings: The education discount

Not all student discounts are reserved for the under-24 set. Your local community college might be a source for a free or extremely discounted copy of Windows 10—and it’s nearly the equivalent of Windows 10 Enterprise, to boot. You’ll just have to put in some legwork (perhaps literally) to get it.

As mentioned above, license types determine what you can do with Windows—and who can use it, as well. Through the Academic Volume Licensing agreements, schools can purchase access to Windows 10 Education for their students, faculty, and staff. Some make it available only on campus machines. Others will grant a license for use on a home machine.

In that latter camp are a number of community colleges, and they often make the Windows 10 license free or supremely affordable (usually $15). The catch: You have to sign up for at least one course to qualify for campus discounts.

For California’s community colleges, CollegeBuys is the vendor through which you’ll “buy” Windows 10. Other states use OnTheHub, which has a tool to look up your school.

To get access to the software, you’ll typically need to register for your class first, then find and register separately at whatever online store your campus uses for software purchases. (Many community colleges use OnTheHub as their distributor, so you can use their lookup tool to begin research about your school’s options.) The storefront will require verification of your student status before you can “buy” Windows 10.

A one-unit class suffices, though, and depending on your state, it costs as little as $76 including administrative fees. Typical options are usually of the physical education or dance variety (swim, ballet, jazz, boot camp workouts, etc.), but you can also find the occasional class on topics like Beginning Drawing, Intro to HTML & CSS, and Video for the Web.

If you were already planning on taking a class in one of these subjects, you’re getting an amazing deal. Windows 10 Education, which is similar to the enterprise version of Windows 10, includes popular Windows 10 Pro features like Bitlocker encryption and the Windows 10 May Update’s Sandbox feature. You’re essentially getting Windows 10 Pro (and then some) for as much as 60 percent off and you get to learn something new.

Windows 10 Pro’s Bitlocker feature makes encrypting a drive a very easy process.

Even if you aren’t interested in the classes, you’re still paying considerably less than what you would for even a Windows 10 Pro OEM license. We don’t encourage truancy, but there’s nothing saying you have to show up for class, so long as you’re comfortable with a failing grade on your record.

Obviously, if your local community college doesn’t have an agreement with Microsoft in place, this strategy won’t work. Also, if the total cost of the class, administrative fees, and license fee adds up to more than the retail cost of a Windows 10 Pro license, and you wouldn’t have otherwise taken the class, that negates this deal, too. In those cases, your main options are the OEM license (outlined above) or buying through Kinguin (detailed below).

Note: If you use this method, also keep an eye out for other software deals through your school. For example, your school might offer a free Microsoft 365 account, or a heavily discounted Adobe Creative Cloud account (usually $20/mo).

Low prices with a caveat: Kinguin

Scoring Windows 10 at an 85 percent discount isn’t too good to be true, but this surprisingly low-hassle approach comes with a large dose of controversy.

Kinguin is a website that allows buyers to purchase product keys from third-party sellers—think of it like an eBay or Amazon Marketplace for digital software sales. To buy Windows 10, you’ll look through the Windows 10 Home OEM or Windows 10 Pro OEM listings, pick a seller’s product to add to your cart, and then check out. It’s the same as any other digital storefront.

What makes the license keys so cheap—and opinions about buying through Kinguin so fierce—is that they’re gray market at best. In other words, while not illegal, they’re likely extra keys from a volume licensing agreement that were never meant to be sold individually to home users. Opponents of Kinguin swear the keys will eventually lose their activation status because of their unknown origins.

Like on Amazon Marketplace, you pick a specific seller from which to buy the product (in this case, the Windows 10 license key).

These keys are also for an OEM license of Windows 10, which means they’re meant for only one PC to use at one time. So as mentioned above, if these keys are already somehow tied to an original (but unactivated) PC, things could go sideways during your own activation process.

Additionally, if you want to transfer the license to a different PC down the road, you can’t. On top of that, if the hardware used to identify your system (i.e., the motherboard) bites the dust and you replace it, Microsoft’s registration servers won’t recognize your system and automatically activate the key. Though Microsoft has historically gone easy on home PC builders caught in this situation, their goodwill may be harder to rely upon if you can’t prove that you directly purchased an OEM license from an authorized retailer.

Kinguin’s Buyer Protection works like eBay’s: If anything goes wrong with your purchase, you’re covered. It’s a must if you buy a license key through them.

Proponents of this method counter that buying Kinguin’s Buyer Protection (currently an additional $5.57, and included in the price listed above) covers you in sticky situations with bad sellers and eliminates the risk of troubles. In our office, staff members who’ve advocated this method of obtaining Windows 10 for years have reported no problems so far.

Each camp makes valid arguments, so ultimately, your comfort level with risk and gray market goods should determine whether this is the option for you. If you opt for this path, we recommend ignoring Windows 10 Home. An extra $2 for the Pro license nets you Bitlocker encryption and other Pro features, which are more than worth it.


This content was originally published here.


Posted by Sreys on 2019-06-14 10:15:40

Tagged: Allgemein

Infrared HDR Palmer Park, Colorado Springs

Infrared converted Sony A6000 with Sony E 16mm F2.8 mounted with the Sony Ultra Wide Converter. HDR AEB +/-2 total of 3 exposures at F8, 16mm, auto focus and processed with Photomatix HDR software.

High Dynamic Range (HDR)

High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

HDR images can represent a greater range of luminance levels than can be achieved using more ‘traditional’ methods, such as many real-world scenes containing very bright, direct sunlight to extreme shade, or very faint nebulae. This is often achieved by capturing and then combining several different, narrower range, exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.

Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).

In photography, dynamic range is measured in exposure value (EV) differences (known as stops). An increase of one EV, or ‘one stop’, represents a doubling of the amount of light. Conversely, a decrease of one EV represents a halving of the amount of light. Therefore, revealing detail in the darkest of shadows requires high exposures, while preserving detail in very bright situations requires very low exposures. Most cameras cannot provide this range of exposure values within a single exposure, due to their low dynamic range. High-dynamic-range photographs are generally achieved by capturing multiple standard-exposure images, often using exposure bracketing, and then later merging them into a single HDR image, usually within a photo manipulation program. Digital images are often encoded in a camera’s raw image format, because 8-bit JPEG encoding does not offer a wide enough range of values to allow fine transitions (and, regarding HDR, later introduces undesirable effects due to lossy compression).
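The stop arithmetic above is simple enough to sketch in a few lines of Python. This is a hypothetical helper, not tied to any camera API; it just shows that each EV step scales the light (here, the shutter time) by a power of two, matching the ±2 EV, three-exposure AEB bracket mentioned in the photo caption above.

```python
# Sketch of EV (stop) arithmetic: one stop (1 EV) doubles or halves the
# light, so an exposure bracket is a series of shutter times scaled by
# powers of 2. (Hypothetical helper, not any real camera API.)

def bracket_shutter_times(base_seconds, step_ev=2.0, frames=3):
    """Return shutter times for a centered AEB bracket, e.g. -2/0/+2 EV."""
    half = frames // 2
    return [base_seconds * 2 ** (step_ev * i) for i in range(-half, half + 1)]

# A 3-frame bracket at +/-2 EV around a base exposure of 1/60 s:
times = bracket_shutter_times(1 / 60, step_ev=2.0, frames=3)
# -> [1/240, 1/60, 1/15] seconds: each 2 EV step quadruples the light.
```

Exposure variation is done with shutter time rather than aperture, for the depth-of-field reason explained a few paragraphs below.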

Any camera that allows manual exposure control can make images for HDR work, although one equipped with auto exposure bracketing (AEB) is far better suited. Images from film cameras are less suitable as they often must first be digitized, so that they can later be processed using software HDR methods.

In most imaging devices, the degree of exposure to light applied to the active element (be it film or CCD) can be altered in one of two ways: by either increasing/decreasing the size of the aperture or by increasing/decreasing the time of each exposure. Exposure variation in an HDR set is only done by altering the exposure time and not the aperture size; this is because altering the aperture size also affects the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image.

An important limitation for HDR photography is that any movement between successive images will impede or prevent success in combining them afterwards. Also, as one must create several images (often three or five and sometimes more) to obtain the desired luminance range, such a full ‘set’ of images takes extra time. HDR photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is, at least, advised.

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II. As the popularity of this imaging method grows, several camera manufacturers are now offering built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone mapped JPEG file. The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format. Nikon’s approach is called ‘Active D-Lighting’, which applies exposure compensation and tone mapping to the image as it comes from the sensor, with the accent being on retaining a realistic effect. Some smartphones provide HDR modes, and most mobile platforms have apps that provide HDR picture taking.

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect resulting high-dynamic-range images.

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.

Tone mapping
Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDRI files by the same software package.
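The compression that tone mapping performs can be illustrated with one of the simplest global operators, Reinhard's L/(1+L) curve. This is a minimal sketch for illustration only; the packages listed below implement far more sophisticated operators with local contrast handling.

```python
# Minimal sketch of global tone mapping using the simple Reinhard operator
# L_out = L / (1 + L), which compresses an unbounded HDR luminance range
# into [0, 1) for display. (Illustrative only; real tone mappers also
# preserve localized contrast, as the text notes.)

def reinhard(luminance):
    return luminance / (1.0 + luminance)

hdr_luminances = [0.01, 1.0, 100.0, 10000.0]   # scene-referred values
ldr = [reinhard(L) for L in hdr_luminances]
# Every output lies in [0, 1): bright values are compressed far more
# strongly than dark ones, which is what preserves shadow detail.
```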

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include:

Adobe Photoshop
Aurora HDR
Dynamic Photo HDR
HDR Efex Pro
HDR PhotoStudio
Luminance HDR
MagicRaw
Oloneo PhotoEngine
Photomatix Pro
PTGui

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.

Unlike traditional images, HDR images often do not use fixed ranges per color channel, which lets them represent many more colors over a much wider dynamic range. Rather than integer values for the single color channels (e.g., 0–255 per channel in an 8-bit-per-channel encoding for red, green and blue), they use a floating point representation. Common are 16-bit (half precision) or 32-bit floating point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth that has as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
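The 16-bit half-precision floats mentioned above can be poked at directly with Python's standard `struct` module (the `'e'` format, available since Python 3.6). This sketch just demonstrates the trade-off: limited precision per value, but a far wider usable range than an 8-bit integer channel.

```python
import struct

# Round-trip a value through IEEE 754 half precision (16-bit float),
# the common per-channel format for HDR pixels mentioned above.

def to_half_and_back(x):
    return struct.unpack('e', struct.pack('e', x))[0]

small = to_half_and_back(0.1)      # ~0.09998: a small rounding error
big = to_half_and_back(30000.0)    # exactly representable; half floats
                                   # reach about 65504
# An 8-bit integer channel tops out at 255 discrete levels; the
# half-float channel instead keeps relative precision across many
# orders of magnitude of luminance.
```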

History of HDR photography
The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.

Mid 20th century
Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took 5 days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System.

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.

Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended range film has been estimated as 1:10^8. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff’s detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

Late 20th century
Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera’s image sensor, increasing the sensor’s dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel, led by Dr. Oliver Hilsenrath and Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured successively by a sensor or simultaneously by two sensors of the camera. This process is a form of bracketing applied to a video stream.

In 1991, Hymatom, a licensee of Georges Cornuéjols, introduced the first commercial video camera that captured multiple images with different exposures in real time and produced an HDR video image.

Also in 1991, Georges Cornuéjols introduced the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal to noise ratio.

In 1993, another commercial medical camera producing an HDR video image was introduced by the Technion.

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (High Dynamic Range + Graphic) images of STS-95 on the launch pad at NASA’s Kennedy Space Center. It consisted of four film images of the shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum, Issue 3 1993, Volume 35 ISSN 0282-5449.

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory. Mann’s method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann’s process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.

21st century
In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.

HDR sensors
Modern CMOS image sensors can often capture a high dynamic range from a single exposure. The wide dynamic range of the captured image is non-linearly compressed into a smaller dynamic range electronic representation. However, with proper processing, the information from a single exposure can be used to create an HDR image.

Such HDR imaging is used in extreme dynamic range applications like welding or automotive work. Some cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30fps video will give out 60fps with the odd frames at a short exposure time and the even frames at a longer exposure time. Some of these sensors may even combine the two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.

en.wikipedia.org/wiki/High-dynamic-range_imaging

Infrared Photography

In infrared photography, the film or image sensor used is sensitive to infrared light. The part of the spectrum used is referred to as near-infrared to distinguish it from far-infrared, which is the domain of thermal imaging. Wavelengths used for photography range from about 700 nm to about 900 nm. Film is usually sensitive to visible light too, so an infrared-passing filter is used; this lets infrared (IR) light pass through to the camera, but blocks all or most of the visible light spectrum (the filter thus looks black or deep red). ("Infrared filter" may refer either to this type of filter or to one that blocks infrared but passes other wavelengths.)

When these filters are used together with infrared-sensitive film or sensors, "in-camera effects" can be obtained: false-color or black-and-white images with a dreamlike or sometimes lurid appearance known as the "Wood Effect," an effect mainly caused by foliage (such as tree leaves and grass) strongly reflecting infrared in the same way visible light is reflected from snow. There is a small contribution from chlorophyll fluorescence, but this is marginal and is not the real cause of the brightness seen in infrared photographs. The effect is named after the infrared photography pioneer Robert W. Wood, and not after the material wood, which does not strongly reflect infrared.

The other attributes of infrared photographs include very dark skies and penetration of atmospheric haze, caused by reduced Rayleigh scattering and Mie scattering, respectively, compared to visible light. The dark skies, in turn, result in less infrared light in shadows and dark reflections of those skies from water, and clouds will stand out strongly. These wavelengths also penetrate a few millimeters into skin and give a milky look to portraits, although eyes often look black.

Until the early 20th century, infrared photography was not possible because silver halide emulsions are not sensitive to longer wavelengths than that of blue light (and to a lesser extent, green light) without the addition of a dye to act as a color sensitizer. The first infrared photographs (as distinct from spectrographs) to be published appeared in the February 1910 edition of The Century Magazine and in the October 1910 edition of the Royal Photographic Society Journal to illustrate papers by Robert W. Wood, who discovered the unusual effects that now bear his name. The RPS co-ordinated events to celebrate the centenary of this event in 2010. Wood’s photographs were taken on experimental film that required very long exposures; thus, most of his work focused on landscapes. A further set of infrared landscapes taken by Wood in Italy in 1911 used plates provided for him by CEK Mees at Wratten & Wainwright. Mees also took a few infrared photographs in Portugal in 1910, which are now in the Kodak archives.

Infrared-sensitive photographic plates were developed in the United States during World War I for spectroscopic analysis, and infrared sensitizing dyes were investigated for improved haze penetration in aerial photography. After 1930, new emulsions from Kodak and other manufacturers became useful to infrared astronomy.

Infrared photography became popular with photography enthusiasts in the 1930s when suitable film was introduced commercially. The Times regularly published landscape and aerial photographs taken by their staff photographers using Ilford infrared film. By 1937, 33 kinds of infrared film were available from five manufacturers, including Agfa, Kodak and Ilford. Infrared movie film was also available and was used to create day-for-night effects in motion pictures, a notable example being the pseudo-night aerial sequences in the James Cagney/Bette Davis movie The Bride Came C.O.D.

False-color infrared photography became widely practiced with the introduction of Kodak Ektachrome Infrared Aero Film and Ektachrome Infrared EIR. The first version of this, known as Kodacolor Aero-Reversal-Film, was developed by Clark and others at Kodak for camouflage detection in the 1940s. The film became more widely available in 35mm form in the 1960s, but KODAK AEROCHROME III Infrared Film 1443 has been discontinued.

Infrared photography became popular with a number of 1960s recording artists, such as Jimi Hendrix and Donovan, because of the unusual results.

Because infrared light focuses at a slightly different point than visible light, sharp infrared photos can usually be made with a small aperture and a slow shutter speed without focus compensation; however, wider apertures like f/2.0 can produce sharp photos only if the lens is meticulously refocused to the infrared index mark, and only if this index mark is the correct one for the filter and film in use. Diffraction effects inside a camera are also greater at infrared wavelengths, so stopping down the lens too far may actually reduce sharpness.

Most apochromatic (‘APO’) lenses do not have an infrared index mark and do not need to be refocused for the infrared spectrum because they are already optically corrected into the near-infrared spectrum. Catadioptric lenses often do not require this adjustment because their mirror elements do not suffer from chromatic aberration, so the overall aberration is comparably less. Catadioptric lenses do, of course, still contain lenses, and these lenses do still have a dispersive property.

Infrared black-and-white films require special development times but development is usually achieved with standard black-and-white film developers and chemicals (like D-76). Kodak HIE film has a polyester film base that is very stable but extremely easy to scratch, therefore special care must be used in the handling of Kodak HIE throughout the development and printing/scanning process to avoid damage to the film. The Kodak HIE film was sensitive to 900 nm.

As of November 2, 2007, "KODAK is preannouncing the discontinuance" of HIE Infrared 35 mm film stating the reasons that, "Demand for these products has been declining significantly in recent years, and it is no longer practical to continue to manufacture given the low volume, the age of the product formulations and the complexity of the processes involved." At the time of this notice, HIE Infrared 135-36 was available at a street price of around $12.00 a roll at US mail order outlets.

Arguably the greatest obstacle to infrared film photography has been the increasing difficulty of obtaining infrared-sensitive film. However, despite the discontinuance of HIE, other newer infrared-sensitive emulsions from EFKE, ROLLEI, and ILFORD are still available, but these formulations have differing sensitivity and specifications from the venerable KODAK HIE that has been around for at least two decades. Some of these infrared films are available in 120 and larger formats as well as 35 mm, which adds flexibility to their application. With the discontinuance of Kodak HIE, Efke’s IR820 film has become the only IR film on the market with good sensitivity beyond 750 nm; the Rollei film does extend beyond 750 nm, but its IR sensitivity falls off very rapidly.

Color infrared transparency films have three sensitized layers that, because of the way the dyes are coupled to these layers, reproduce infrared as red, red as green, and green as blue. All three layers are sensitive to blue so the film must be used with a yellow filter, since this will block blue light but allow the remaining colors to reach the film. The health of foliage can be determined from the relative strengths of green and infrared light reflected; this shows in color infrared as a shift from red (healthy) towards magenta (unhealthy). Early color infrared films were developed in the older E-4 process, but Kodak later manufactured a color transparency film that could be developed in standard E-6 chemistry, although more accurate results were obtained by developing using the AR-5 process. In general, color infrared does not need to be refocused to the infrared index mark on the lens.

In 2007 Kodak announced that production of the 35 mm version of their color infrared film (Ektachrome Professional Infrared/EIR) would cease as there was insufficient demand. Since 2011, all formats of color infrared film, specifically Aerochrome 1443 and SO-734, have been discontinued.

There is no currently available digital camera that will produce the same results as Kodak color infrared film although the equivalent images can be produced by taking two exposures, one infrared and the other full-color, and combining in post-production. The color images produced by digital still cameras using infrared-pass filters are not equivalent to those produced on color infrared film. The colors result from varying amounts of infrared passing through the color filters on the photo sites, further amended by the Bayer filtering. While this makes such images unsuitable for the kind of applications for which the film was used, such as remote sensing of plant health, the resulting color tonality has proved popular artistically.

Color digital infrared, as part of full-spectrum photography, is gaining popularity. The ease of creating a softly colored photo with infrared characteristics has found interest among hobbyists and professionals.

In 2008, Los Angeles photographer Dean Bennici started cutting and hand-rolling Aerochrome color infrared film. All the Aerochrome medium- and large-format film that exists today came directly from his lab. The trend in infrared photography continues to gain momentum with the success of photographer Richard Mosse and practitioners all around the world.

Digital camera sensors are inherently sensitive to infrared light, which would interfere with normal photography by confusing the autofocus calculations, softening the image (because infrared light is focused differently from visible light), or oversaturating the red channel. Also, some clothing is transparent in the infrared, leading to unintended (at least by the manufacturer) uses of video cameras. Thus, to improve image quality and protect privacy, many digital cameras employ infrared blockers. Depending on the subject matter, infrared photography may not be practical with these cameras because the exposure times become overly long, often in the range of 30 seconds, creating noise and motion blur in the final image. However, for some subject matter the long exposure does not matter, or the motion-blur effects actually add to the image. Some lenses will also show a ‘hot spot’ in the centre of the image, as their coatings are optimised for visible light and not for IR.

An alternative method of DSLR infrared photography is to remove the infrared blocker in front of the sensor and replace it with a filter that removes visible light. This filter is behind the mirror, so the camera can be used normally – handheld, normal shutter speeds, normal composition through the viewfinder, and focus, all work like a normal camera. Metering works but is not always accurate because of the difference between visible and infrared refraction. When the IR blocker is removed, many lenses which did display a hotspot cease to do so, and become perfectly usable for infrared photography. Additionally, because the red, green and blue micro-filters remain and have transmissions not only in their respective color but also in the infrared, enhanced infrared color may be recorded.

Since the Bayer filters in most digital cameras absorb a significant fraction of the infrared light, these cameras are sometimes not very sensitive as infrared cameras and can sometimes produce false colors in the images. An alternative approach is to use a Foveon X3 sensor, which does not have absorptive filters on it; the Sigma SD10 DSLR has a removable IR blocking filter and dust protector, which can be simply omitted or replaced by a deep red or complete visible light blocking filter. The Sigma SD14 has an IR/UV blocking filter that can be removed/installed without tools. The result is a very sensitive digital IR camera.

While it is common to use a filter that blocks almost all visible light, the wavelength sensitivity of a digital camera without internal infrared blocking is such that a variety of artistic results can be obtained with more conventional filtration. For example, a very dark neutral density filter can be used (such as the Hoya ND400) which passes a very small amount of visible light compared to the near-infrared it allows through. Wider filtration permits an SLR viewfinder to be used and also passes more varied color information to the sensor without necessarily reducing the Wood effect. Wider filtration is however likely to reduce other infrared artefacts such as haze penetration and darkened skies. This technique mirrors the methods used by infrared film photographers where black-and-white infrared film was often used with a deep red filter rather than a visually opaque one.

Another common technique with near-infrared filters is to swap the blue and red channels in software (e.g., Photoshop), which retains much of the characteristic ‘white foliage’ while rendering skies a glorious blue.
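For illustration, that red/blue swap can be sketched in a few lines of NumPy; the array shape, channel order, and function name here are our assumptions for the sketch, not anything the original text specifies:

```python
import numpy as np

# Hypothetical sketch of the red/blue channel swap described above,
# assuming an RGB image held as a (height, width, 3) NumPy array.
def swap_red_blue(rgb: np.ndarray) -> np.ndarray:
    """Return a copy with the channel order reversed: R,G,B -> B,G,R."""
    return rgb[..., ::-1].copy()

# A 1x2 test image: one pure-red pixel, one pure-blue pixel.
img = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
swapped = swap_red_blue(img)  # red pixel becomes blue and vice versa
```

Reversing the last axis exchanges red and blue while leaving green (the middle channel) untouched, which is the same operation a channel-mixer preset performs in an image editor.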

Several Sony cameras had the so-called NightShot facility, which physically moves the blocking filter away from the light path, making the cameras very sensitive to infrared light. Soon after its introduction, this facility was ‘restricted’ by Sony to make it difficult for people to take photos that saw through clothing. To do this, the iris is opened fully and exposure duration is limited to long times of more than 1/30 second or so. It is possible to shoot infrared, but neutral density filters must be used to reduce the camera’s sensitivity, and the long exposure times mean that care must be taken to avoid camera-shake artifacts.

Fuji have produced digital cameras for use in forensic criminology and medicine which have no infrared blocking filter. The first camera, designated the S3 PRO UVIR, also had extended ultraviolet sensitivity (digital sensors are usually less sensitive to UV than to IR). Optimum UV sensitivity requires special lenses, but ordinary lenses usually work well for IR. In 2007, FujiFilm introduced a new version of this camera, based on the Nikon D200/FujiFilm S5, called the IS Pro, also able to take Nikon lenses. Fuji had earlier introduced a non-SLR infrared camera, the IS-1, a modified version of the FujiFilm FinePix S9100. Unlike the S3 PRO UVIR, the IS-1 does not offer UV sensitivity. FujiFilm restricts the sale of these cameras to professional users, with their EULA specifically prohibiting "unethical photographic conduct".

Phase One digital camera backs can be ordered in an infrared modified form.

Remote sensing and thermographic cameras are sensitive to longer wavelengths of infrared (see Infrared spectrum#Commonly used sub-division scheme). They may be multispectral and use a variety of technologies which may not resemble common camera or filter designs. Cameras sensitive to longer infrared wavelengths including those used in infrared astronomy often require cooling to reduce thermally induced dark currents in the sensor (see Dark current (physics)). Lower cost uncooled thermographic digital cameras operate in the Long Wave infrared band (see Thermographic camera#Uncooled infrared detectors). These cameras are generally used for building inspection or preventative maintenance but can be used for artistic pursuits as well.

en.wikipedia.org/wiki/Infrared_photography

Posted by Brokentaco on 2019-06-16 03:02:15


Infrared HDR Palmer Park, Colorado Springs

Infrared converted Sony A6000 with Sony E 16mm F2.8 mounted with the Sony Ultra Wide Converter. HDR AEB +/-2 total of 3 exposures at F8, 16mm, auto focus and processed with Photomatix HDR software.

High Dynamic Range (HDR)

High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

HDR images can represent a greater range of luminance levels than can be captured using more ‘traditional’ methods, in scenes ranging from very bright, direct sunlight to extreme shade, or very faint nebulae. This is often achieved by capturing and then combining several different, narrower-range exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.

Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).

In photography, dynamic range is measured in exposure value (EV) differences, known as stops. An increase of one EV, or ‘one stop’, represents a doubling of the amount of light; conversely, a decrease of one EV represents a halving. Therefore, revealing detail in the darkest shadows requires high exposures, while preserving detail in very bright situations requires very low exposures. Most cameras cannot provide this range of exposure values within a single exposure, due to their low dynamic range. High-dynamic-range photographs are generally achieved by capturing multiple standard-exposure images, often using exposure bracketing, and then merging them into a single HDR image, usually within a photo manipulation program. Digital images are often encoded in a camera’s raw image format, because 8-bit JPEG encoding does not offer a wide enough range of values to allow fine transitions (and, regarding HDR, later introduces undesirable effects due to lossy compression).
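The EV arithmetic above can be made concrete with a short sketch (the function name is ours, chosen purely for illustration):

```python
# One EV ("one stop") doubles the light; a negative EV halves it, so the
# relative amount of light between two settings is 2 ** (EV difference).
def relative_light(ev_delta: float) -> float:
    return 2.0 ** ev_delta

assert relative_light(1) == 2.0     # one stop up: twice the light
assert relative_light(-3) == 0.125  # three stops down: 1/8 the light
```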

Any camera that allows manual exposure control can make images for HDR work, although one equipped with auto exposure bracketing (AEB) is far better suited. Images from film cameras are less suitable as they often must first be digitized, so that they can later be processed using software HDR methods.

In most imaging devices, the degree of exposure to light applied to the active element (be it film or CCD) can be altered in one of two ways: by either increasing/decreasing the size of the aperture or by increasing/decreasing the time of each exposure. Exposure variation in an HDR set is only done by altering the exposure time and not the aperture size; this is because altering the aperture size also affects the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image.
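As a sketch of that constraint, an exposure bracket can be generated by scaling only the shutter time; the helper below is hypothetical, not taken from any HDR package:

```python
# Build an exposure bracket by varying shutter time only, keeping the
# aperture (and hence depth of field) fixed. Each step of `step_ev`
# stops multiplies the shutter time by 2 ** step_ev.
def bracket_shutter_times(base_seconds: float, step_ev: float, frames: int):
    """Return `frames` shutter times centred on `base_seconds`."""
    half = frames // 2
    return [base_seconds * 2.0 ** (step_ev * (i - half)) for i in range(frames)]

# A three-frame, +/-2 EV bracket around 1/60 s: 1/240 s, 1/60 s, 1/15 s.
times = bracket_shutter_times(1 / 60, 2, 3)
```

This mirrors the AEB setup mentioned in the photo description above (three exposures at +/-2 EV, aperture fixed at f/8).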

An important limitation for HDR photography is that any movement between successive images will impede or prevent success in combining them afterwards. Also, as one must create several images (often three or five and sometimes more) to obtain the desired luminance range, such a full ‘set’ of images takes extra time. HDR photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is, at least, advised.

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II. As the popularity of this imaging method grows, several camera manufacturers are now offering built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone-mapped JPEG file. The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format. Nikon’s approach is called ‘Active D-Lighting’, which applies exposure compensation and tone mapping to the image as it comes from the sensor, with the accent on retaining a realistic effect. Some smartphones provide HDR modes, and most mobile platforms have apps that provide HDR picture taking.

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect resulting high-dynamic-range images.

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.

Tone mapping
Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDRI files by the same software package.
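A minimal global tone-mapping operator can be sketched as follows; this uses the well-known Reinhard curve L/(1+L) purely as an illustration, not as the method any particular software package implements:

```python
import numpy as np

def reinhard_tonemap(luminance: np.ndarray) -> np.ndarray:
    """Compress arbitrary non-negative HDR luminance into [0, 1)."""
    return luminance / (1.0 + luminance)

# Radiance values spanning six orders of magnitude all land in [0, 1),
# ready for a standard display, while their ordering is preserved.
hdr = np.array([0.01, 1.0, 100.0, 10000.0])
ldr = reinhard_tonemap(hdr)
```

A production tone mapper also handles local contrast, as the paragraph above notes; this global curve only shows the core range-compression step.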

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include

Adobe Photoshop
Aurora HDR
Dynamic Photo HDR
HDR Efex Pro
HDR PhotoStudio
Luminance HDR
MagicRaw
Oloneo PhotoEngine
Photomatix Pro
PTGui

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.

HDR images often don’t use fixed ranges per color channel, unlike traditional images, in order to represent many more colors over a much wider dynamic range. For that purpose, they don’t use integer values to represent the single color channels (e.g., 0–255 in an 8-bit-per-channel interval for red, green and blue) but instead use a floating-point representation. Common are 16-bit (half precision) or 32-bit floating-point numbers to represent HDR pixels. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth that has as few as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
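The difference between integer and floating-point channels can be demonstrated directly; the radiance values below are arbitrary, chosen only to show the clipping behaviour:

```python
import numpy as np

# An 8-bit integer channel clips anything outside 0-255, whereas a
# 16-bit half float keeps very small and very large radiances alike.
radiances = np.array([0.001, 0.5, 2.0, 3000.0])

as_half = radiances.astype(np.float16)                        # all values survive
as_uint8 = np.clip(radiances * 255, 0, 255).astype(np.uint8)  # bright values clip

# 2.0 and 3000.0 both saturate to 255 in the integer encoding and become
# indistinguishable; the half-float encoding still tells them apart.
```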

History of HDR photography
The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.

Mid 20th century
Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took 5 days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System.

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.

Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended-range film has been estimated as 1:10^8. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff’s detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

Late 20th century
Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera’s image sensor, increasing the sensor’s dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel, led by Dr. Oliver Hilsenrath and Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured successively by one sensor, or simultaneously by two sensors of the camera. This process is a form of bracketing applied to a video stream.

In 1991, Hymatom, a licensee of Georges Cornuéjols, introduced the first commercial video camera to perform real-time capture of multiple images with different exposures and produce an HDR video image.

Also in 1991, Georges Cornuéjols introduced the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal to noise ratio.

In 1993, the Technion introduced another commercial medical camera producing an HDR video image.

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (High Dynamic Range + Graphic) images of STS-95 on the launch pad at NASA’s Kennedy Space Center. It consisted of four film images of the shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum, Issue 3 1993, Volume 35 ISSN 0282-5449.

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory. Mann’s method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann’s process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.
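The two-step procedure can be caricatured in a few lines. The naive merge below (pixel value divided by exposure time, averaged, with no camera response curve or weighting) is our simplification for illustration, not Mann's actual algorithm:

```python
import numpy as np

def radiance_map(frames, exposure_times):
    """Step 1 (global-only): estimate relative scene radiance per pixel."""
    return np.mean([f / t for f, t in zip(frames, exposure_times)], axis=0)

def tonemap(radiance):
    """Step 2: compress the radiance map for display (global Reinhard curve)."""
    return radiance / (1.0 + radiance)

# Two synthetic one-pixel "exposures" of the same scene at 1/2 s and 1/8 s:
frames = [np.array([100.0]), np.array([25.0])]
rmap = radiance_map(frames, [0.5, 0.125])  # both frames agree: radiance 200
```

The intermediate `rmap` plays the role of the lightspace image or radiance map described above, available for further processing before tone mapping.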

21st century
In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.

HDR sensors
Modern CMOS image sensors can often capture a high dynamic range from a single exposure. The wide dynamic range of the captured image is non-linearly compressed into a smaller dynamic range electronic representation. However, with proper processing, the information from a single exposure can be used to create an HDR image.

Such HDR imaging is used in extreme dynamic range applications like welding or automotive work. Some cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30 fps video will give out 60 fps, with the odd frames at a short exposure time and the even frames at a longer exposure time. Some of these sensors may even combine the two images on-chip, so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.

en.wikipedia.org/wiki/High-dynamic-range_imaging

Infrared Photography

In infrared photography, the film or image sensor used is sensitive to infrared light. The part of the spectrum used is referred to as near-infrared to distinguish it from far-infrared, which is the domain of thermal imaging. Wavelengths used for photography range from about 700 nm to about 900 nm. Film is usually sensitive to visible light too, so an infrared-passing filter is used; this lets infrared (IR) light pass through to the camera, but blocks all or most of the visible light spectrum (the filter thus looks black or deep red). ("Infrared filter" may refer either to this type of filter or to one that blocks infrared but passes other wavelengths.)

When these filters are used together with infrared-sensitive film or sensors, "in-camera effects" can be obtained; false-color or black-and-white images with a dreamlike or sometimes lurid appearance known as the "Wood Effect," an effect mainly caused by foliage (such as tree leaves and grass) strongly reflecting in the same way visible light is reflected from snow. There is a small contribution from chlorophyll fluorescence, but this is marginal and is not the real cause of the brightness seen in infrared photographs. The effect is named after the infrared photography pioneer Robert W. Wood, and not after the material wood, which does not strongly reflect infrared.

The other attributes of infrared photographs include very dark skies and penetration of atmospheric haze, caused by reduced Rayleigh scattering and Mie scattering, respectively, compared to visible light. The dark skies, in turn, result in less infrared light in shadows and dark reflections of those skies from water, and clouds will stand out strongly. These wavelengths also penetrate a few millimeters into skin and give a milky look to portraits, although eyes often look black.

Until the early 20th century, infrared photography was not possible because silver halide emulsions are not sensitive to wavelengths longer than that of blue light (and to a lesser extent, green light) without the addition of a dye to act as a color sensitizer. The first infrared photographs (as distinct from spectrographs) to be published appeared in the February 1910 edition of The Century Magazine and in the October 1910 edition of the Royal Photographic Society Journal to illustrate papers by Robert W. Wood, who discovered the unusual effects that now bear his name. The RPS co-ordinated events to celebrate the centenary of this event in 2010. Wood’s photographs were taken on experimental film that required very long exposures; thus, most of his work focused on landscapes. A further set of infrared landscapes taken by Wood in Italy in 1911 used plates provided for him by C. E. K. Mees at Wratten & Wainwright. Mees also took a few infrared photographs in Portugal in 1910, which are now in the Kodak archives.

Infrared-sensitive photographic plates were developed in the United States during World War I for spectroscopic analysis, and infrared sensitizing dyes were investigated for improved haze penetration in aerial photography. After 1930, new emulsions from Kodak and other manufacturers became useful to infrared astronomy.

Infrared photography became popular with photography enthusiasts in the 1930s when suitable film was introduced commercially. The Times regularly published landscape and aerial photographs taken by its staff photographers using Ilford infrared film. By 1937, 33 kinds of infrared film were available from five manufacturers, including Agfa, Kodak and Ilford. Infrared movie film was also available and was used to create day-for-night effects in motion pictures, a notable example being the pseudo-night aerial sequences in the James Cagney/Bette Davis movie The Bride Came C.O.D.

False-color infrared photography became widely practiced with the introduction of Kodak Ektachrome Infrared Aero Film and Ektachrome Infrared EIR. The first version, known as Kodacolor Aero-Reversal-Film, was developed by Clark and others at Kodak for camouflage detection in the 1940s. The film became more widely available in 35 mm form in the 1960s, but KODAK AEROCHROME III Infrared Film 1443 has since been discontinued.

Infrared photography became popular with a number of 1960s recording artists because of the unusual results; Jimi Hendrix, Donovan, Frank Zappa and the Grateful Dead all issued albums featuring infrared cover photography. Infrared film photography is usually done with a small aperture and a slow shutter speed without focus compensation; wider apertures like f/2.0 can produce sharp photos only if the lens is meticulously refocused to the infrared index mark, and only if this index mark is the correct one for the filter and film in use. Diffraction effects inside a camera are also greater at infrared wavelengths, so stopping down the lens too far may actually reduce sharpness.

Most apochromatic (‘APO’) lenses do not have an infrared index mark and do not need to be refocused for the infrared spectrum because they are already optically corrected into the near-infrared. Catadioptric lenses often do not require this adjustment because their mirror-containing elements do not suffer from chromatic aberration, so the overall aberration is comparatively less. Catadioptric lenses do, of course, still contain lenses, and these lenses do still have a dispersive property.

Infrared black-and-white films require special development times, but development is usually achieved with standard black-and-white film developers and chemicals (like D-76). Kodak HIE film has a polyester base that is very stable but extremely easy to scratch; therefore, special care must be taken when handling Kodak HIE throughout the development and printing/scanning process to avoid damaging the film. The Kodak HIE film was sensitive to 900 nm.

As of November 2, 2007, "KODAK is preannouncing the discontinuance" of HIE Infrared 35 mm film stating the reasons that, "Demand for these products has been declining significantly in recent years, and it is no longer practical to continue to manufacture given the low volume, the age of the product formulations and the complexity of the processes involved." At the time of this notice, HIE Infrared 135-36 was available at a street price of around $12.00 a roll at US mail order outlets.

Arguably the greatest obstacle to infrared film photography has been the increasing difficulty of obtaining infrared-sensitive film. Despite the discontinuance of HIE, other infrared-sensitive emulsions from EFKE, ROLLEI, and ILFORD are still available, though these formulations differ in sensitivity and specifications from the venerable KODAK HIE, which had been around for at least two decades. Some of these infrared films are available in 120 and larger formats as well as 35 mm, which adds flexibility to their application. With the discontinuance of Kodak HIE, Efke's IR820 has become the only IR film on the market with good sensitivity beyond 750 nm; the Rollei film does extend beyond 750 nm, but its IR sensitivity falls off very rapidly.

Color infrared transparency films have three sensitized layers that, because of the way the dyes are coupled to these layers, reproduce infrared as red, red as green, and green as blue. All three layers are sensitive to blue so the film must be used with a yellow filter, since this will block blue light but allow the remaining colors to reach the film. The health of foliage can be determined from the relative strengths of green and infrared light reflected; this shows in color infrared as a shift from red (healthy) towards magenta (unhealthy). Early color infrared films were developed in the older E-4 process, but Kodak later manufactured a color transparency film that could be developed in standard E-6 chemistry, although more accurate results were obtained by developing using the AR-5 process. In general, color infrared does not need to be refocused to the infrared index mark on the lens.

In 2007 Kodak announced that production of the 35 mm version of their color infrared film (Ektachrome Professional Infrared/EIR) would cease as there was insufficient demand. Since 2011, all formats of color infrared film, specifically Aerochrome 1443 and SO-734, have been discontinued.

There is no currently available digital camera that will produce the same results as Kodak color infrared film although the equivalent images can be produced by taking two exposures, one infrared and the other full-color, and combining in post-production. The color images produced by digital still cameras using infrared-pass filters are not equivalent to those produced on color infrared film. The colors result from varying amounts of infrared passing through the color filters on the photo sites, further amended by the Bayer filtering. While this makes such images unsuitable for the kind of applications for which the film was used, such as remote sensing of plant health, the resulting color tonality has proved popular artistically.

Color digital infrared, as part of full spectrum photography is gaining popularity. The ease of creating a softly colored photo with infrared characteristics has found interest among hobbyists and professionals.

In 2008, Los Angeles photographer Dean Bennici started cutting and hand-rolling Aerochrome color infrared film. All Aerochrome in medium and large format that exists today came directly from his lab. The trend in infrared photography continues to gain momentum with the success of photographer Richard Mosse and many users around the world.

Digital camera sensors are inherently sensitive to infrared light, which would interfere with normal photography by confusing the autofocus calculations, softening the image (because infrared light is focused differently from visible light), or oversaturating the red channel. Also, some clothing is transparent in the infrared, leading to unintended (at least by the manufacturer) uses of video cameras. Thus, to improve image quality and protect privacy, many digital cameras employ infrared blockers. Depending on the subject matter, infrared photography may not be practical with these cameras because the exposure times become overly long, often in the range of 30 seconds, creating noise and motion blur in the final image. However, for some subject matter the long exposure does not matter, or the motion blur effects actually add to the image. Some lenses will also show a ‘hot spot’ in the centre of the image, as their coatings are optimised for visible light and not for IR.

An alternative method of DSLR infrared photography is to remove the infrared blocker in front of the sensor and replace it with a filter that removes visible light. This filter is behind the mirror, so the camera can be used normally – handheld, normal shutter speeds, normal composition through the viewfinder, and focus, all work like a normal camera. Metering works but is not always accurate because of the difference between visible and infrared refraction. When the IR blocker is removed, many lenses which did display a hotspot cease to do so, and become perfectly usable for infrared photography. Additionally, because the red, green and blue micro-filters remain and have transmissions not only in their respective color but also in the infrared, enhanced infrared color may be recorded.

Since the Bayer filters in most digital cameras absorb a significant fraction of the infrared light, these cameras are sometimes not very sensitive as infrared cameras and can sometimes produce false colors in the images. An alternative approach is to use a Foveon X3 sensor, which does not have absorptive filters on it; the Sigma SD10 DSLR has a removable IR blocking filter and dust protector, which can be simply omitted or replaced by a deep red or complete visible light blocking filter. The Sigma SD14 has an IR/UV blocking filter that can be removed/installed without tools. The result is a very sensitive digital IR camera.

While it is common to use a filter that blocks almost all visible light, the wavelength sensitivity of a digital camera without internal infrared blocking is such that a variety of artistic results can be obtained with more conventional filtration. For example, a very dark neutral density filter can be used (such as the Hoya ND400) which passes a very small amount of visible light compared to the near-infrared it allows through. Wider filtration permits an SLR viewfinder to be used and also passes more varied color information to the sensor without necessarily reducing the Wood effect. Wider filtration is however likely to reduce other infrared artefacts such as haze penetration and darkened skies. This technique mirrors the methods used by infrared film photographers where black-and-white infrared film was often used with a deep red filter rather than a visually opaque one.

Another common technique with near-infrared filters is to swap the blue and red channels in software (e.g. Photoshop), which retains much of the characteristic ‘white foliage’ while rendering skies a glorious blue.
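That channel swap is a one-liner outside of Photoshop as well. A small illustrative helper, assuming the image is an (H, W, 3) RGB numpy array:

```python
import numpy as np

def swap_red_blue(img):
    """Swap the red and blue channels of an (H, W, 3) RGB array by
    reversing the channel axis (R, G, B -> B, G, R); green is unchanged."""
    return img[..., ::-1]
```

Because only red and blue trade places, the near-white foliage (bright in all channels) is barely affected, while the red-dominated sky becomes blue-dominated.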

Several Sony cameras had the so-called Night Shot facility, which physically moves the blocking filter away from the light path, which makes the cameras very sensitive to infrared light. Soon after its development, this facility was ‘restricted’ by Sony to make it difficult for people to take photos that saw through clothing. To do this the iris is opened fully and exposure duration is limited to long times of more than 1/30 second or so. It is possible to shoot infrared but neutral density filters must be used to reduce the camera’s sensitivity and the long exposure times mean that care must be taken to avoid camera-shake artifacts.

Fuji have produced digital cameras for use in forensic criminology and medicine which have no infrared blocking filter. The first camera, designated the S3 PRO UVIR, also had extended ultraviolet sensitivity (digital sensors are usually less sensitive to UV than to IR). Optimum UV sensitivity requires special lenses, but ordinary lenses usually work well for IR. In 2007, FujiFilm introduced a new version of this camera, based on the Nikon D200/FujiFilm S5, called the IS Pro, which can also take Nikon lenses. Fuji had earlier introduced a non-SLR infrared camera, the IS-1, a modified version of the FujiFilm FinePix S9100. Unlike the S3 PRO UVIR, the IS-1 does not offer UV sensitivity. FujiFilm restricts the sale of these cameras to professional users, with their EULA specifically prohibiting "unethical photographic conduct".

Phase One digital camera backs can be ordered in an infrared modified form.

Remote sensing and thermographic cameras are sensitive to longer wavelengths of infrared (see Infrared spectrum#Commonly used sub-division scheme). They may be multispectral and use a variety of technologies which may not resemble common camera or filter designs. Cameras sensitive to longer infrared wavelengths including those used in infrared astronomy often require cooling to reduce thermally induced dark currents in the sensor (see Dark current (physics)). Lower cost uncooled thermographic digital cameras operate in the Long Wave infrared band (see Thermographic camera#Uncooled infrared detectors). These cameras are generally used for building inspection or preventative maintenance but can be used for artistic pursuits as well.

en.wikipedia.org/wiki/Infrared_photography

Posted by Brokentaco on 2019-06-16 20:34:14

Tagged:

Preparing Equipment for Overseas Shipping

pixabay.com/photos/ship-container-ship-port-of-loading-40…
[overlay:] Overseas cargo shipping is a rapidly growing industry
Credit: scholty1970

When you need equipment shipped across the globe, you need it done safely and efficiently, so that you can stay focused on your day-to-day operations. Understandably, the shipping process may be mysterious to many of us. This isn’t like shipping holiday gifts to your relatives! To alleviate concerns about your property, it helps to understand the steps that go into preparing equipment for overseas shipping. This process starts well before a cargo vessel sets off, and it concludes with a thorough inspection of all items, to make sure no damage has been incurred. To help this process go as smoothly as possible, be as prepared and informed as possible.

To feel confident about the shipping process, start by understanding its various steps. If this is your first time shipping large equipment, an experienced shipping company will walk you through the following options in detail.

unsplash.com/photos/Jx5IB1f5-4w
[overlay:] Sea spray causes rust if metal is unprotected
Credit: Peter Fogden

Rust Proofing
During sea travel, machinery may sit in an open-air environment on cargo ships. This exposes your valuable equipment to the elements, including wind and salt water. Travel by train can also leave cargo vulnerable to rain and snow. While any metal can suffer from corrosion, common rust (as we know it) is specific to iron-based metals. Other forms of corrosion may appear as green or white streaks. Because any corrosion can seriously compromise your equipment’s ability to function, a variety of protective techniques will be used to prevent damage. Anti-corrosion treatments are an essential first step before packaging. Depending on the size and condition of equipment, one of the following protective layers will be utilized:

Galvanization – The “hot dip” method coats your machinery in zinc, forming zinc-iron alloy layers that bond with the steel. The final layer coating a galvanized item is usually pure zinc. Because zinc is shock absorbent, this outer layer also helps protect against jostling and impacts. This method can protect equipment for many years after transport. The “hot dip” coating requires no maintenance, and will not chip or peel.

Paint – While galvanization creates a compound that bonds with the surface of your machinery, paint sits on top as a separate layer, covering the metal. The paint will help prevent corrosion. Oil-based coatings are well suited to long journeys and extremely harsh conditions, but their removal is more involved than with water-based solutions. With oil, a degreaser or solvent-based compound will be required for removal.

Plating – Typically composed of zinc and tin, this will provide an additional layer to prevent corrosion. Plating can be used in conjunction with a coating layer or without, depending on the mode of transport. The plating will also provide enhanced protection against abrasions and chemical damage.

pixabay.com/photos/business-cargo-freight-industrial-21823/
[overlay:] Pallets increase stability
Credits: PublicDomainPictures

Shrink Wrapping
The specialized plastic coating used in this process is sensitive to temperature. Heat is applied to its surface, causing it to shrink snugly around the contours of your machinery.
In some cases, vacuum packing is also employed to attain a perfect fit. The wrapping ultimately forms a customized outer container. This will resist wind, water and other corrosive substances while serving as a buffer to cushion against impacts. The plastic surface of this wrap is UV resistant and protects against road grime and dust as well as sea air (making it ideal for rail or sea travel). Shrink wrap can also be used to keep essential machine components together, preventing breakage or damage to delicate individual parts.

Because stability is pivotal when complex equipment is shipped, your shipping contractor may use a pallet to further secure large items. This sturdy platform will be shrink-wrapped with the machinery, reducing the likelihood of any shifting or sliding during export.

pixabay.com/photos/business-cargo-containers-crate-1845350/
[overlay:] Containers streamline the shipping process
Credit: Pexels
Containerization
This process entails loading large items into metal shipping containers. These containers come in standardized proportions, ranging from 20 to 53 feet in size. Pallets or skids may be loaded into the container with machinery, to increase stability. Smaller equipment items may be crated and shrink-wrapped before containerization. Containers provide the ultimate protection against corrosion and other damage. This system is most commonly used to transport over water, but smaller containers may also be shipped by rail or air.

pixabay.com/photos/loading-cargo-container-transport-652296/
[overlay:] Cranes are used to move transport containers
Credit: Skeeze

Once the freight vessel has reached its destination, containerization aids in the transportation of your machinery over land since containers are easy to move with cranes or forklifts. Your items will be carefully tracked since containers are monitored and sorted by an advanced computer system. This method also simplifies the storage process, since machines can be stored within their containers after arrival, until they’re needed for industrial use. Using a container streamlines the packing process, eliminating the additional travel costs and irritating delays that can result when working with unwieldy (or unusual) items.

pixabay.com/photos/train-transport-railroad-scene-863295/
[overlay:] Travel by train also requires careful preparation
Credit: Foundry

Rail Car Tie-Downs
Trains can be used to transport heavy machinery and large equipment. An expert crew will carefully load, tie down and secure items. This process requires customized lashes and hooks, and equipment will often be skidded as a first step.

The government regulates transport conditions, which helps promote worker safety. If the equipment is not expertly secured, it can damage the freight and threaten the health of anyone involved in the transportation process. Working with an experienced shipping company will guarantee that tie-down regulations are strictly adhered to, avoiding delays and fines as a result of potential mishaps.

Break Bulk Services
Break Bulk is a method of transporting oversized or awkwardly shaped items. It is used for equipment that cannot fit into regular shipping containers or for equipment that is too heavy to be moved by plane. Break bulk requires the use of cranes and experienced handlers to load it safely on (and off) the transport vessel. Examples of common break bulk items include transformers, cranes, and boats, as well as large construction vehicles.

With break bulk shipping, additional costs may be incurred due to the sensitive, time-consuming loading process, as well as the additional storage space required by large items in transit. However, this method also saves time and money upon arrival. The alternative is disassembling machinery to be treated, packed, and crated separately. Individually crated items need to be carefully unpacked and reassembled after transport. Break bulk shipping means that your machinery is intact and ready to roll upon arrival.

Before you Ship

In the export business, careful planning makes all the difference. Before export, take care to provide your shipping contractor with accurate weight counts and unit dimensions, and be sure to articulate any pressing concerns or questions. By better understanding your needs, a company will be fully equipped to determine which mode of transport will best suit your property – both by land to the freight carrier and during the ensuing journey itself. Clear communication will also help them select the most effective protective methods for your particular machines. Working with an experienced contractor means that they will be familiar with a wide array of industrial items, and will therefore be able to conduct a thorough risk assessment before packaging.

Whether shipping by rail, sea or air, be sure to work with a full-service moving company. By employing experts, your valuable equipment will arrive intact and unscathed, and you will have a clear understanding of the steps involved in the process. With questions about shipping any type of machinery, contact Ready Machinery Movers at 1-800-211-2500. With over 30 years of national and international shipping experience, we’re proud to serve industries throughout Toronto and the Kitchener area.

Posted by readymachinerymoverscanada on 2019-06-17 10:17:31

Tagged:

Waiting for Input

Posted by Spirou333 on 2019-06-19 08:25:30

Tagged: lost, place, lostplace, old, dirt, dust, computer, typewriter

Forgotten Office

Posted by Spirou333 on 2019-06-19 08:25:30

Tagged: lost, place, lostplace, old, dirt, dust, computer, typewriter

Infrared HDR Garden of the Gods, Colorado Springs

Infrared converted Sony A6000 with Sony E 16mm F2.8 mounted with the Sony Ultra Wide Converter. HDR AEB +/-2 total of 3 exposures at F8, 16mm, auto focus and processed with Photomatix HDR software. Blue and red color channels swapped.

High Dynamic Range (HDR)

High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

HDR images can represent a greater range of luminance levels than can be achieved using more ‘traditional’ methods, such as many real-world scenes containing very bright, direct sunlight to extreme shade, or very faint nebulae. This is often achieved by capturing and then combining several different, narrower range, exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.

Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).

In photography, dynamic range is measured in exposure value (EV) differences (known as stops). An increase of one EV, or ‘one stop’, represents a doubling of the amount of light. Conversely, a decrease of one EV represents a halving of the amount of light. Therefore, revealing detail in the darkest of shadows requires high exposures, while preserving detail in very bright situations requires very low exposures. Most cameras cannot provide this range of exposure values within a single exposure, due to their low dynamic range. High-dynamic-range photographs are generally achieved by capturing multiple standard-exposure images, often using exposure bracketing, and then merging them into a single HDR image, usually within a photo manipulation program. Digital images are often encoded in a camera’s raw image format, because 8-bit JPEG encoding does not offer a wide enough range of values to allow fine transitions; for HDR work, its lossy compression also introduces undesirable artifacts.
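The EV arithmetic above translates directly into bracketed shutter times. A small illustrative helper (the function name is ours; a fixed aperture is assumed, since changing the aperture would also change depth of field):

```python
def bracketed_shutter_times(base, ev_offsets):
    """Shutter times (in seconds) for an exposure bracket around `base`.
    One EV is a doubling of light, so at fixed aperture the exposure
    time scales by 2**ev for an offset of ev stops."""
    return [base * 2.0 ** ev for ev in ev_offsets]

# A three-exposure +/-2 EV bracket around 1/60 s
# gives 1/240 s, 1/60 s and 1/15 s.
times = bracketed_shutter_times(1 / 60, [-2, 0, +2])
```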

Any camera that allows manual exposure control can make images for HDR work, although one equipped with auto exposure bracketing (AEB) is far better suited. Images from film cameras are less suitable as they often must first be digitized, so that they can later be processed using software HDR methods.

In most imaging devices, the degree of exposure to light applied to the active element (be it film or CCD) can be altered in one of two ways: by either increasing/decreasing the size of the aperture or by increasing/decreasing the time of each exposure. Exposure variation in an HDR set is only done by altering the exposure time and not the aperture size; this is because altering the aperture size also affects the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image.

An important limitation for HDR photography is that any movement between successive images will impede or prevent success in combining them afterwards. Also, as one must create several images (often three or five and sometimes more) to obtain the desired luminance range, such a full ‘set’ of images takes extra time. HDR photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is, at least, advised.

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II. As the popularity of this imaging method grows, several camera manufacturers now offer built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone-mapped JPEG file. The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format. Nikon’s approach, called ‘Active D-Lighting’, applies exposure compensation and tone mapping to the image as it comes from the sensor, with the accent on retaining a realistic effect. Some smartphones provide HDR modes, and most mobile platforms have apps that provide HDR picture taking.

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect resulting high-dynamic-range images.

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.

Tone mapping
Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDRI files by the same software package.
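As a concrete illustration, the simplest global tone-mapping operator, Reinhard's L / (1 + L), maps unbounded luminance into the displayable range. Note that this global form only reduces overall contrast; the local-contrast preservation described above requires spatially varying operators, which this sketch does not attempt:

```python
import numpy as np

def reinhard_tonemap(luminance):
    """Reinhard's simple global operator L / (1 + L): compresses an
    unbounded HDR luminance map into [0, 1) for display. Dark values
    pass through almost unchanged; highlights are rolled off smoothly."""
    luminance = np.asarray(luminance, dtype=np.float64)
    return luminance / (1.0 + luminance)
```

A mid-grey luminance of 1.0 maps to 0.5, while even extreme highlights stay below 1.0 instead of clipping.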

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include

Adobe Photoshop
Aurora HDR
Dynamic Photo HDR
HDR Efex Pro
HDR PhotoStudio
Luminance HDR
MagicRaw
Oloneo PhotoEngine
Photomatix Pro
PTGui

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.

Unlike traditional images, HDR images typically do not use fixed ranges per color channel; this lets them represent many more colors over a much wider dynamic range. Instead of integer values for the individual color channels (e.g., 0–255 per channel in an 8-bit-per-channel encoding for red, green and blue), they use a floating-point representation, commonly 16-bit (half precision) or 32-bit floating-point numbers per HDR pixel. However, when an appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth as low as 10–12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
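The difference between a device-referred 8-bit encoding and a scene-referred floating-point one can be seen with a toy set of luminance values (the specific numbers are illustrative only):

```python
import numpy as np

# Scene luminances spanning six orders of magnitude (values illustrative).
radiance = np.array([1e-3, 1e-1, 1.0, 1e2, 1e3])

# Device-referred 8-bit storage: scale to the display range and quantize.
# The three darkest values all collapse into the same integer code.
as_8bit = np.clip(radiance / radiance.max() * 255, 0, 255).astype(np.uint8)

# Scene-referred half-precision floats keep all five values distinct.
as_half = radiance.astype(np.float16)
```

Linear integer quantization wastes its codes on the highlights, which is why HDR formats prefer floating-point or logarithmic encodings.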

History of HDR photography
The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.

Mid 20th century
Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took 5 days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System.

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.

Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers: an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended-range film has been estimated as 1:10^8. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff’s detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

Late 20th century
Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of the HDR video image in 1986, by interposing a matricial LCD screen in front of the camera’s image sensor, increasing the sensor’s dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras by a group from the Technion in Israel, led by Dr. Oliver Hilsenrath and Prof. Y. Y. Zeevi, who filed for a patent on this concept in 1988.

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured successively by a sensor or simultaneously by two sensors of the camera. This process is a form of bracketing applied to a video stream.

In 1991, the first commercial video camera was introduced that performed real-time capturing of multiple images with different exposures, and producing an HDR video image, by Hymatom, licensee of Georges Cornuéjols.

Also in 1991, Georges Cornuéjols introduced the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal to noise ratio.

In 1993, another commercial medical camera producing an HDR video image was introduced by the Technion.

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, resulting in a mathematical theory of differently exposed pictures of the same subject matter that was published in 1995 by Steve Mann and Rosalind Picard.

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (High Dynamic Range + Graphic) images, of STS-95 on the launch pad at NASA’s Kennedy Space Center. It consisted of four film images of the shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters Great Hall, Washington, DC, in 1999 and then published in Hasselblad Forum, Issue 3, 1999, Volume 35, ISSN 0282-5449.

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory. Mann’s method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann’s process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.
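The two-step structure described above can be sketched schematically. This is a toy version only: the hat-shaped weighting and the global stand-in for the second step are assumptions of the sketch, not Mann's published method:

```python
import numpy as np

def radiance_map(images, times):
    """Step 1 (global-only operations): estimate a scene-referred
    radiance map by dividing each exposure by its shutter time and
    averaging, with a hat-shaped weight that favours well-exposed
    mid-range pixels. The weighting scheme is an assumption here."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    weights = [1.0 - 2.0 * np.abs(im - 0.5) for im in images]
    num = sum(w * im / t for w, im, t in zip(weights, images, times))
    den = sum(weights)
    return num / np.maximum(den, 1e-6)

def display_image(radiance):
    """Step 2 (tone remapping): here just a simple global stand-in
    for the local neighborhood processing described in the text."""
    return radiance / (1.0 + radiance)
```

The intermediate array returned by the first step plays the role of the "lightspace image" or radiance map mentioned above.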

21st century
In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.

HDR sensors
Modern CMOS image sensors can often capture a high dynamic range from a single exposure. The wide dynamic range of the captured image is non-linearly compressed into a smaller dynamic range electronic representation. However, with proper processing, the information from a single exposure can be used to create an HDR image.

Such HDR imaging is used in extreme dynamic range applications like welding or automotive work. Some other cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor intended for 30 fps video will give out 60 fps, with the odd frames at a short exposure time and the even frames at a longer one. Some sensors may even combine the two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.

en.wikipedia.org/wiki/High-dynamic-range_imaging

Infrared Photography

In infrared photography, the film or image sensor used is sensitive to infrared light. The part of the spectrum used is referred to as near-infrared to distinguish it from far-infrared, which is the domain of thermal imaging. Wavelengths used for photography range from about 700 nm to about 900 nm. Film is usually sensitive to visible light too, so an infrared-passing filter is used; this lets infrared (IR) light pass through to the camera, but blocks all or most of the visible light spectrum (the filter thus looks black or deep red). ("Infrared filter" may refer either to this type of filter or to one that blocks infrared but passes other wavelengths.)

When these filters are used together with infrared-sensitive film or sensors, "in-camera effects" can be obtained; false-color or black-and-white images with a dreamlike or sometimes lurid appearance known as the "Wood Effect," an effect mainly caused by foliage (such as tree leaves and grass) strongly reflecting in the same way visible light is reflected from snow. There is a small contribution from chlorophyll fluorescence, but this is marginal and is not the real cause of the brightness seen in infrared photographs. The effect is named after the infrared photography pioneer Robert W. Wood, and not after the material wood, which does not strongly reflect infrared.

The other attributes of infrared photographs include very dark skies and penetration of atmospheric haze, caused by reduced Rayleigh scattering and Mie scattering, respectively, compared to visible light. The dark skies, in turn, result in less infrared light in shadows and dark reflections of those skies from water, and clouds will stand out strongly. These wavelengths also penetrate a few millimeters into skin and give a milky look to portraits, although eyes often look black.

Until the early 20th century, infrared photography was not possible because silver halide emulsions are not sensitive to longer wavelengths than that of blue light (and to a lesser extent, green light) without the addition of a dye to act as a color sensitizer. The first infrared photographs (as distinct from spectrographs) to be published appeared in the February 1910 edition of The Century Magazine and in the October 1910 edition of the Royal Photographic Society Journal to illustrate papers by Robert W. Wood, who discovered the unusual effects that now bear his name. The RPS co-ordinated events to celebrate the centenary of this event in 2010. Wood’s photographs were taken on experimental film that required very long exposures; thus, most of his work focused on landscapes. A further set of infrared landscapes taken by Wood in Italy in 1911 used plates provided for him by CEK Mees at Wratten & Wainwright. Mees also took a few infrared photographs in Portugal in 1910, which are now in the Kodak archives.

Infrared-sensitive photographic plates were developed in the United States during World War I for spectroscopic analysis, and infrared sensitizing dyes were investigated for improved haze penetration in aerial photography. After 1930, new emulsions from Kodak and other manufacturers became useful to infrared astronomy.

Infrared photography became popular with photography enthusiasts in the 1930s when suitable film was introduced commercially. The Times regularly published landscape and aerial photographs taken by their staff photographers using Ilford infrared film. By 1937, 33 kinds of infrared film were available from five manufacturers, including Agfa, Kodak and Ilford. Infrared movie film was also available and was used to create day-for-night effects in motion pictures, a notable example being the pseudo-night aerial sequences in the James Cagney/Bette Davis movie The Bride Came C.O.D.

False-color infrared photography became widely practiced with the introduction of Kodak Ektachrome Infrared Aero Film and Ektachrome Infrared EIR. The first version of this, known as Kodacolor Aero-Reversal-Film, was developed by Clark and others at Kodak for camouflage detection in the 1940s. The film became more widely available in 35mm form in the 1960s, but KODAK AEROCHROME III Infrared Film 1443 has since been discontinued.

Infrared photography became popular with a number of 1960s recording artists because of the unusual results; Jimi Hendrix, Donovan and Frank Zappa among them. Because infrared light comes to focus at a slightly different point than visible light, a small aperture gives enough depth of field to allow a slow shutter speed without focus compensation; wider apertures like f/2.0 can produce sharp photos only if the lens is meticulously refocused to the infrared index mark, and only if this index mark is the correct one for the filter and film in use. However, diffraction effects inside a camera are greater at infrared wavelengths, so stopping down the lens too far may actually reduce sharpness.

Most apochromatic (‘APO’) lenses do not have an infrared index mark and do not need to be refocused for the infrared spectrum because they are already optically corrected into the near-infrared. Catadioptric lenses often do not require this adjustment because their mirror-containing elements do not suffer from chromatic aberration, so the overall aberration is comparatively less. Catadioptric lenses do, of course, still contain lenses, and these lenses do still have a dispersive property.

Infrared black-and-white films require special development times, but development is usually achieved with standard black-and-white film developers and chemicals (like D-76). Kodak HIE film has a polyester film base that is very stable but extremely easy to scratch; therefore, special care must be taken in handling Kodak HIE throughout the development and printing/scanning process to avoid damage to the film. Kodak HIE film was sensitive to wavelengths up to 900 nm.

As of November 2, 2007, "KODAK is preannouncing the discontinuance" of HIE Infrared 35 mm film, stating: "Demand for these products has been declining significantly in recent years, and it is no longer practical to continue to manufacture given the low volume, the age of the product formulations and the complexity of the processes involved." At the time of this notice, HIE Infrared 135-36 was available at a street price of around $12.00 a roll at US mail-order outlets.

Arguably the greatest obstacle to infrared film photography has been the increasing difficulty of obtaining infrared-sensitive film. Despite the discontinuance of HIE, newer infrared-sensitive emulsions from EFKE, ROLLEI and ILFORD are still available, though these formulations have different sensitivities and specifications from the venerable KODAK HIE, which had been around for at least two decades. Some of these infrared films are available in 120 and larger formats as well as 35 mm, which adds flexibility to their application. With the discontinuance of Kodak HIE, Efke’s IR820 became the only IR film on the market with good sensitivity beyond 750 nm; the Rollei film does extend beyond 750 nm, but its IR sensitivity falls off very rapidly.

Color infrared transparency films have three sensitized layers that, because of the way the dyes are coupled to these layers, reproduce infrared as red, red as green, and green as blue. All three layers are sensitive to blue so the film must be used with a yellow filter, since this will block blue light but allow the remaining colors to reach the film. The health of foliage can be determined from the relative strengths of green and infrared light reflected; this shows in color infrared as a shift from red (healthy) towards magenta (unhealthy). Early color infrared films were developed in the older E-4 process, but Kodak later manufactured a color transparency film that could be developed in standard E-6 chemistry, although more accurate results were obtained by developing using the AR-5 process. In general, color infrared does not need to be refocused to the infrared index mark on the lens.

In 2007, Kodak announced that production of the 35 mm version of their color infrared film (Ektachrome Professional Infrared/EIR) would cease, as there was insufficient demand. Since 2011, all formats of color infrared film, specifically Aerochrome 1443 and SO-734, have been discontinued.

There is no currently available digital camera that will produce the same results as Kodak color infrared film although the equivalent images can be produced by taking two exposures, one infrared and the other full-color, and combining in post-production. The color images produced by digital still cameras using infrared-pass filters are not equivalent to those produced on color infrared film. The colors result from varying amounts of infrared passing through the color filters on the photo sites, further amended by the Bayer filtering. While this makes such images unsuitable for the kind of applications for which the film was used, such as remote sensing of plant health, the resulting color tonality has proved popular artistically.

Color digital infrared, as part of full spectrum photography is gaining popularity. The ease of creating a softly colored photo with infrared characteristics has found interest among hobbyists and professionals.

In 2008, Los Angeles photographer Dean Bennici started cutting and hand-rolling Aerochrome color infrared film. All the medium- and large-format Aerochrome that exists today came directly from his lab. The trend in infrared photography continues to gain momentum with the success of photographer Richard Mosse and many other practitioners around the world.

Digital camera sensors are inherently sensitive to infrared light, which would interfere with normal photography by confusing the autofocus calculations, softening the image (because infrared light is focused differently from visible light), or oversaturating the red channel. Also, some clothing is transparent in the infrared, leading to unintended (at least on the manufacturer's part) uses of video cameras. Thus, to improve image quality and protect privacy, many digital cameras employ infrared blockers. Depending on the subject matter, infrared photography may not be practical with these cameras because the exposure times become overly long, often in the range of 30 seconds, creating noise and motion blur in the final image. However, for some subject matter the long exposure does not matter, or the motion blur effects actually add to the image. Some lenses will also show a ‘hot spot’ in the centre of the image as their coatings are optimised for visible light and not for IR.

An alternative method of DSLR infrared photography is to remove the infrared blocker in front of the sensor and replace it with a filter that removes visible light. This filter is behind the mirror, so the camera can be used normally – handheld, normal shutter speeds, normal composition through the viewfinder, and focus, all work like a normal camera. Metering works but is not always accurate because of the difference between visible and infrared refraction. When the IR blocker is removed, many lenses which did display a hotspot cease to do so, and become perfectly usable for infrared photography. Additionally, because the red, green and blue micro-filters remain and have transmissions not only in their respective color but also in the infrared, enhanced infrared color may be recorded.

Since the Bayer filters in most digital cameras absorb a significant fraction of the infrared light, these cameras are sometimes not very sensitive as infrared cameras and can sometimes produce false colors in the images. An alternative approach is to use a Foveon X3 sensor, which does not have absorptive filters on it; the Sigma SD10 DSLR has a removable IR blocking filter and dust protector, which can be simply omitted or replaced by a deep red or complete visible light blocking filter. The Sigma SD14 has an IR/UV blocking filter that can be removed/installed without tools. The result is a very sensitive digital IR camera.

While it is common to use a filter that blocks almost all visible light, the wavelength sensitivity of a digital camera without internal infrared blocking is such that a variety of artistic results can be obtained with more conventional filtration. For example, a very dark neutral density filter can be used (such as the Hoya ND400) which passes a very small amount of visible light compared to the near-infrared it allows through. Wider filtration permits an SLR viewfinder to be used and also passes more varied color information to the sensor without necessarily reducing the Wood effect. Wider filtration is however likely to reduce other infrared artefacts such as haze penetration and darkened skies. This technique mirrors the methods used by infrared film photographers where black-and-white infrared film was often used with a deep red filter rather than a visually opaque one.

Another common technique with near-infrared filters is to swap the blue and red channels in software (e.g. Photoshop), which retains much of the characteristic ‘white foliage’ while rendering skies a glorious blue.
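A minimal sketch of that channel swap using NumPy (the Pillow round trip is shown in comments; file names are placeholders):

```python
import numpy as np

def swap_red_blue(rgb: np.ndarray) -> np.ndarray:
    """Return a copy of an H x W x 3 array with the red and blue
    channels exchanged (RGB -> BGR). Applied to a near-infrared
    capture, this keeps the IR-bright foliage white while turning
    the (red-recorded) sky blue."""
    return rgb[:, :, ::-1].copy()

# Typical workflow with Pillow (file names are hypothetical):
#   from PIL import Image
#   img = np.array(Image.open("ir_capture.jpg").convert("RGB"))
#   Image.fromarray(swap_red_blue(img)).save("ir_swapped.jpg")
```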

Several Sony cameras had the so-called NightShot facility, which physically moves the blocking filter away from the light path, making the cameras very sensitive to infrared light. Soon after its introduction, this facility was ‘restricted’ by Sony to make it difficult for people to take photos that saw through clothing. To do this, the iris is opened fully and exposure duration is limited to long times of 1/30 second or more. It is possible to shoot infrared, but neutral density filters must be used to reduce the camera’s sensitivity, and the long exposure times mean that care must be taken to avoid camera-shake artifacts.

Fuji have produced digital cameras for use in forensic criminology and medicine which have no infrared blocking filter. The first camera, designated the S3 PRO UVIR, also had extended ultraviolet sensitivity (digital sensors are usually less sensitive to UV than to IR). Optimum UV sensitivity requires special lenses, but ordinary lenses usually work well for IR. In 2007, FujiFilm introduced a new version of this camera, based on the Nikon D200/FujiFilm S5, called the IS Pro, also able to take Nikon lenses. Fuji had earlier introduced a non-SLR infrared camera, the IS-1, a modified version of the FujiFilm FinePix S9100. Unlike the S3 PRO UVIR, the IS-1 does not offer UV sensitivity. FujiFilm restricts the sale of these cameras to professional users with their EULA specifically prohibiting "unethical photographic conduct".

Phase One digital camera backs can be ordered in an infrared modified form.

Remote sensing and thermographic cameras are sensitive to longer wavelengths of infrared (see Infrared spectrum#Commonly used sub-division scheme). They may be multispectral and use a variety of technologies which may not resemble common camera or filter designs. Cameras sensitive to longer infrared wavelengths including those used in infrared astronomy often require cooling to reduce thermally induced dark currents in the sensor (see Dark current (physics)). Lower cost uncooled thermographic digital cameras operate in the Long Wave infrared band (see Thermographic camera#Uncooled infrared detectors). These cameras are generally used for building inspection or preventative maintenance but can be used for artistic pursuits as well.

en.wikipedia.org/wiki/Infrared_photography

Posted by Brokentaco on 2019-06-22 19:11:40

Infrared HDR Palmer Park, Colorado Springs

Infrared converted Sony A6000 with Sony E 16-70mm F4 ZA OSS. HDR AEB +/-2 total of 3 exposures at F8, 16mm, auto focus and processed with Photomatix HDR software. Blue and red color channels swapped in GIMP.

High Dynamic Range (HDR)

High-dynamic-range imaging (HDRI) is a high dynamic range (HDR) technique used in imaging and photography to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging or photographic techniques. The aim is to present a similar range of luminance to that experienced through the human visual system. The human eye, through adaptation of the iris and other methods, adjusts constantly to adapt to a broad range of luminance present in the environment. The brain continuously interprets this information so that a viewer can see in a wide range of light conditions.

HDR images can represent a greater range of luminance levels than more ‘traditional’ methods can capture, covering scenes such as real-world views containing both very bright, direct sunlight and extreme shade, or very faint nebulae. This is often achieved by capturing and then combining several different, narrower-range exposures of the same subject matter. Non-HDR cameras take photographs with a limited exposure range, referred to as LDR, resulting in the loss of detail in highlights or shadows.

The two primary types of HDR images are computer renderings and images resulting from merging multiple low-dynamic-range (LDR) or standard-dynamic-range (SDR) photographs. HDR images can also be acquired using special image sensors, such as an oversampled binary image sensor.

Due to the limitations of printing and display contrast, the extended luminosity range of an HDR image has to be compressed to be made visible. The method of rendering an HDR image to a standard monitor or printing device is called tone mapping. This method reduces the overall contrast of an HDR image to facilitate display on devices or printouts with lower dynamic range, and can be applied to produce images with preserved local contrast (or exaggerated for artistic effect).

In photography, dynamic range is measured in exposure value (EV) differences (known as stops). An increase of one EV, or ‘one stop’, represents a doubling of the amount of light. Conversely, a decrease of one EV represents a halving of the amount of light. Therefore, revealing detail in the darkest of shadows requires high exposures, while preserving detail in very bright situations requires very low exposures. Most cameras cannot provide this range of exposure values within a single exposure, due to their low dynamic range. High-dynamic-range photographs are generally achieved by capturing multiple standard-exposure images, often using exposure bracketing, and then later merging them into a single HDR image, usually within a photo manipulation program. Digital images are often encoded in a camera’s raw image format, because 8-bit JPEG encoding does not offer a wide enough range of values to allow fine transitions (and, regarding HDR, later introduces undesirable effects due to lossy compression).
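The stop arithmetic above can be sketched directly; the base exposure and bracket offsets below are illustrative values, not ones from the text:

```python
def shutter_for_ev_offset(base_time: float, ev_offset: float) -> float:
    """Shutter time giving an exposure shifted by ev_offset stops,
    with aperture and ISO held fixed: each +1 EV doubles the light,
    so the exposure time scales by 2**ev_offset."""
    return base_time * (2.0 ** ev_offset)

# A three-frame +/-2 EV bracket around a 1/100 s base exposure:
bracket = [shutter_for_ev_offset(0.01, ev) for ev in (-2, 0, 2)]
# -> 1/400 s, 1/100 s, 1/25 s
```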

Any camera that allows manual exposure control can make images for HDR work, although one equipped with auto exposure bracketing (AEB) is far better suited. Images from film cameras are less suitable as they often must first be digitized, so that they can later be processed using software HDR methods.

In most imaging devices, the degree of exposure to light applied to the active element (be it film or CCD) can be altered in one of two ways: by either increasing/decreasing the size of the aperture or by increasing/decreasing the time of each exposure. Exposure variation in an HDR set is only done by altering the exposure time and not the aperture size; this is because altering the aperture size also affects the depth of field and so the resultant multiple images would be quite different, preventing their final combination into a single HDR image.

An important limitation for HDR photography is that any movement between successive images will impede or prevent success in combining them afterwards. Also, as one must create several images (often three or five and sometimes more) to obtain the desired luminance range, such a full ‘set’ of images takes extra time. HDR photographers have developed calculation methods and techniques to partially overcome these problems, but the use of a sturdy tripod is, at least, advised.
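A simplified sketch of such a merge follows. This is one common weighted-average approach on linear pixel values, not any particular program's algorithm: each frame estimates scene radiance as pixel value divided by exposure time, and a "hat" weight discounts nearly black or blown-out pixels.

```python
import numpy as np

def merge_exposures(images, times):
    """Merge aligned, linear-valued exposures (floats in 0..1) into a
    relative radiance map. Each frame votes for the radiance as
    pixel / exposure_time; the hat weight peaks at mid-gray and
    falls to zero at the clipped extremes."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight in [0, 1]
        acc += w * img / t
        wsum += w
    return acc / np.maximum(wsum, 1e-8)   # avoid division by zero
```

With perfectly linear, unclipped input frames the weighted votes agree, and the merge recovers the underlying radiance exactly.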

Some cameras have an auto exposure bracketing (AEB) feature with a far greater dynamic range than others, from the 3 EV of the Canon EOS 40D to the 18 EV of the Canon EOS-1D Mark II. As the popularity of this imaging method grows, several camera manufacturers now offer built-in HDR features. For example, the Pentax K-7 DSLR has an HDR mode that captures an HDR image and outputs (only) a tone-mapped JPEG file. The Canon PowerShot G12, Canon PowerShot S95 and Canon PowerShot S100 offer similar features in a smaller format. Nikon’s approach, called ‘Active D-Lighting’, applies exposure compensation and tone mapping to the image as it comes from the sensor, with the accent on retaining a realistic effect. Some smartphones provide HDR modes, and most mobile platforms have apps that provide HDR picture taking.

Camera characteristics such as gamma curves, sensor resolution, noise, photometric calibration and color calibration affect resulting high-dynamic-range images.

Color film negatives and slides consist of multiple film layers that respond to light differently. As a consequence, transparent originals (especially positive slides) feature a very high dynamic range.

Tone mapping
Tone mapping reduces the dynamic range, or contrast ratio, of an entire image while retaining localized contrast. Although it is a distinct operation, tone mapping is often applied to HDRI files by the same software package.

Several software applications are available on the PC, Mac and Linux platforms for producing HDR files and tone mapped images. Notable titles include:

Adobe Photoshop
Aurora HDR
Dynamic Photo HDR
HDR Efex Pro
HDR PhotoStudio
Luminance HDR
MagicRaw
Oloneo PhotoEngine
Photomatix Pro
PTGui

Information stored in high-dynamic-range images typically corresponds to the physical values of luminance or radiance that can be observed in the real world. This is different from traditional digital images, which represent colors as they should appear on a monitor or a paper print. Therefore, HDR image formats are often called scene-referred, in contrast to traditional digital images, which are device-referred or output-referred. Furthermore, traditional images are usually encoded for the human visual system (maximizing the visual information stored in the fixed number of bits), which is usually called gamma encoding or gamma correction. The values stored for HDR images are often gamma compressed (power law) or logarithmically encoded, or floating-point linear values, since fixed-point linear encodings are increasingly inefficient over higher dynamic ranges.

Unlike traditional images, HDR images often do not use fixed ranges per color channel, so they can represent many more colors over a much wider dynamic range. Instead of integer values for the single color channels (e.g., 0-255 in an 8-bit-per-channel interval for red, green and blue), they use a floating-point representation; 16-bit (half precision) or 32-bit floating-point numbers per channel are common. However, when the appropriate transfer function is used, HDR pixels for some applications can be represented with a color depth that has as few as 10-12 bits for luminance and 8 bits for chrominance without introducing any visible quantization artifacts.
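A quick numerical check of why half precision suffices, using NumPy's published float16 limits:

```python
import numpy as np

# 8-bit integer channels span the codes 0..255. The same 16 bits
# spent on a half-precision float instead buy roughly 3 decimal
# digits of *relative* precision from the smallest normal value
# (~6.1e-5) all the way up to 65504.
f16 = np.finfo(np.float16)
span = float(f16.max) / float(f16.tiny)
assert span > 1e8  # vs. a 255:1 span for 8-bit integer channels
```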

History of HDR photography
The idea of using several exposures to adequately reproduce a too-extreme range of luminance was pioneered as early as the 1850s by Gustave Le Gray to render seascapes showing both the sky and the sea. Such rendering was impossible at the time using standard methods, as the luminosity range was too extreme. Le Gray used one negative for the sky, and another one with a longer exposure for the sea, and combined the two into one picture in positive.

Mid 20th century
Manual tone mapping was accomplished by dodging and burning – selectively increasing or decreasing the exposure of regions of the photograph to yield better tonality reproduction. This was effective because the dynamic range of the negative is significantly higher than would be available on the finished positive paper print when that is exposed via the negative in a uniform manner. An excellent example is the photograph Schweitzer at the Lamp by W. Eugene Smith, from his 1954 photo essay A Man of Mercy on Dr. Albert Schweitzer and his humanitarian work in French Equatorial Africa. The image took 5 days to reproduce the tonal range of the scene, which ranges from a bright lamp (relative to the scene) to a dark shadow.

Ansel Adams elevated dodging and burning to an art form. Many of his famous prints were manipulated in the darkroom with these two methods. Adams wrote a comprehensive book on producing prints called The Print, which prominently features dodging and burning, in the context of his Zone System.

With the advent of color photography, tone mapping in the darkroom was no longer possible due to the specific timing needed during the developing process of color film. Photographers looked to film manufacturers to design new film stocks with improved response, or continued to shoot in black and white to use tone mapping methods.

Color film capable of directly recording high-dynamic-range images was developed by Charles Wyckoff and EG&G "in the course of a contract with the Department of the Air Force". This XR film had three emulsion layers, an upper layer having an ASA speed rating of 400, a middle layer with an intermediate rating, and a lower layer with an ASA rating of 0.004. The film was processed in a manner similar to color films, and each layer produced a different color. The dynamic range of this extended range film has been estimated as 1:10^8. It has been used to photograph nuclear explosions, for astronomical photography, for spectrographic research, and for medical imaging. Wyckoff’s detailed pictures of nuclear explosions appeared on the cover of Life magazine in the mid-1950s.

Late 20th century
Georges Cornuéjols and licensees of his patents (Brdi, Hymatom) introduced the principle of HDR video imaging in 1986, by interposing a matricial LCD screen in front of the camera’s image sensor, increasing the sensor’s dynamic range by five stops. The concept of neighborhood tone mapping was applied to video cameras in 1988 by a group from the Technion in Israel, led by Dr. Oliver Hilsenrath and Prof. Y. Y. Zeevi, who filed for a patent on the concept.

In February and April 1990, Georges Cornuéjols introduced the first real-time HDR camera, which combined two images captured successively by a sensor, or simultaneously by two sensors of the camera. This bracketing process was thus applied to a video stream.

In 1991, Hymatom, licensee of Georges Cornuéjols, introduced the first commercial video camera to perform real-time capture of multiple images with different exposures and produce an HDR video image.

Also in 1991, Georges Cornuéjols introduced the HDR+ image principle by non-linear accumulation of images to increase the sensitivity of the camera: for low-light environments, several successive images are accumulated, thus increasing the signal to noise ratio.

In 1993, the Technion introduced another commercial camera, for medical imaging, that produced an HDR video image.

Modern HDR imaging uses a completely different approach, based on making a high-dynamic-range luminance or light map using only global image operations (across the entire image), and then tone mapping the result. Global HDR was first introduced in 1993, and resulted in a mathematical theory of differently exposed pictures of the same subject matter, published in 1995 by Steve Mann and Rosalind Picard.

On October 28, 1998, Ben Sarao created one of the first nighttime HDR+G (High Dynamic Range + Graphic) images, of STS-95 on the launch pad at NASA’s Kennedy Space Center. It consisted of four film images of the shuttle at night that were digitally composited with additional digital graphic elements. The image was first exhibited at NASA Headquarters Great Hall, Washington DC, in 1999 and then published in Hasselblad Forum (Issue 3, Volume 35, ISSN 0282-5449).

The advent of consumer digital cameras produced a new demand for HDR imaging to improve the light response of digital camera sensors, which had a much smaller dynamic range than film. Steve Mann developed and patented the global-HDR method for producing digital images having extended dynamic range at the MIT Media Laboratory. Mann’s method involved a two-step procedure: (1) generate one floating point image array by global-only image operations (operations that affect all pixels identically, without regard to their local neighborhoods); and then (2) convert this image array, using local neighborhood processing (tone-remapping, etc.), into an HDR image. The image array generated by the first step of Mann’s process is called a lightspace image, lightspace picture, or radiance map. Another benefit of global-HDR imaging is that it provides access to the intermediate light or radiance map, which has been used for computer vision, and other image processing operations.

21st century
In 2005, Adobe Systems introduced several new features in Photoshop CS2 including Merge to HDR, 32 bit floating point image support, and HDR tone mapping.

On June 30, 2016, Microsoft added support for the digital compositing of HDR images to Windows 10 using the Universal Windows Platform.

HDR sensors
Modern CMOS image sensors can often capture a high dynamic range from a single exposure. The wide dynamic range of the captured image is non-linearly compressed into a smaller dynamic range electronic representation. However, with proper processing, the information from a single exposure can be used to create an HDR image.
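One way such on-chip non-linear compression can work is a "knee" response. The sketch below is a hypothetical piecewise-linear curve and its inverse, with made-up knee and slope parameters rather than any vendor's actual transfer function; the point is that when the response is known, processing can invert it to recover linear scene values:

```python
def knee_compress(x: float, knee: float = 0.5, slope: float = 0.1) -> float:
    """Hypothetical sensor response: linear below the knee point,
    strongly compressed above it (slope < 1)."""
    return x if x <= knee else knee + slope * (x - knee)

def knee_expand(y: float, knee: float = 0.5, slope: float = 0.1) -> float:
    """Invert the knee response to recover linear scene values."""
    return y if y <= knee else knee + (y - knee) / slope

# A value 4x over the knee occupies little extra code range...
stored = knee_compress(2.0)  # ~0.65
# ...yet is recovered by the inverse during processing
assert abs(knee_expand(stored) - 2.0) < 1e-9
```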

Such HDR imaging is used in extreme dynamic range applications like welding or automotive work. Some other cameras designed for use in security applications can automatically provide two or more images for each frame, with changing exposure. For example, a sensor for 30fps video will give out 60fps with the odd frames at a short exposure time and the even frames at a longer exposure time. Some of the sensor may even combine the two images on-chip so that a wider dynamic range without in-pixel compression is directly available to the user for display or processing.

en.wikipedia.org/wiki/High-dynamic-range_imaging

Infrared Photography

In infrared photography, the film or image sensor used is sensitive to infrared light. The part of the spectrum used is referred to as near-infrared to distinguish it from far-infrared, which is the domain of thermal imaging. Wavelengths used for photography range from about 700 nm to about 900 nm. Film is usually sensitive to visible light too, so an infrared-passing filter is used; this lets infrared (IR) light pass through to the camera, but blocks all or most of the visible light spectrum (the filter thus looks black or deep red). ("Infrared filter" may refer either to this type of filter or to one that blocks infrared but passes other wavelengths.)

When these filters are used together with infrared-sensitive film or sensors, "in-camera effects" can be obtained; false-color or black-and-white images with a dreamlike or sometimes lurid appearance known as the "Wood Effect," an effect mainly caused by foliage (such as tree leaves and grass) strongly reflecting in the same way visible light is reflected from snow. There is a small contribution from chlorophyll fluorescence, but this is marginal and is not the real cause of the brightness seen in infrared photographs. The effect is named after the infrared photography pioneer Robert W. Wood, and not after the material wood, which does not strongly reflect infrared.

The other attributes of infrared photographs include very dark skies and penetration of atmospheric haze, caused by reduced Rayleigh scattering and Mie scattering, respectively, compared to visible light. The dark skies, in turn, result in less infrared light in shadows and dark reflections of those skies from water, and clouds will stand out strongly. These wavelengths also penetrate a few millimeters into skin and give a milky look to portraits, although eyes often look black.

Until the early 20th century, infrared photography was not possible because silver halide emulsions are not sensitive to longer wavelengths than that of blue light (and to a lesser extent, green light) without the addition of a dye to act as a color sensitizer. The first infrared photographs (as distinct from spectrographs) to be published appeared in the February 1910 edition of The Century Magazine and in the October 1910 edition of the Royal Photographic Society Journal to illustrate papers by Robert W. Wood, who discovered the unusual effects that now bear his name. The RPS co-ordinated events to celebrate the centenary of this event in 2010. Wood’s photographs were taken on experimental film that required very long exposures; thus, most of his work focused on landscapes. A further set of infrared landscapes taken by Wood in Italy in 1911 used plates provided for him by CEK Mees at Wratten & Wainwright. Mees also took a few infrared photographs in Portugal in 1910, which are now in the Kodak archives.

Infrared-sensitive photographic plates were developed in the United States during World War I for spectroscopic analysis, and infrared sensitizing dyes were investigated for improved haze penetration in aerial photography. After 1930, new emulsions from Kodak and other manufacturers became useful to infrared astronomy.

Infrared photography became popular with photography enthusiasts in the 1930s when suitable film was introduced commercially. The Times regularly published landscape and aerial photographs taken by their staff photographers using Ilford infrared film. By 1937 33 kinds of infrared film were available from five manufacturers including Agfa, Kodak and Ilford. Infrared movie film was also available and was used to create day-for-night effects in motion pictures, a notable example being the pseudo-night aerial sequences in the James Cagney/Bette Davis movie The Bride Came COD.

False-color infrared photography became widely practiced with the introduction of Kodak Ektachrome Infrared Aero Film and Ektachrome Infrared EIR. The first version, known as Kodacolor Aero-Reversal-Film, was developed by Clark and others at Kodak for camouflage detection in the 1940s. The film became more widely available in 35mm form in the 1960s, but its final incarnation, KODAK AEROCHROME III Infrared Film 1443, has been discontinued.

Infrared photography became popular with a number of 1960s recording artists because of the unusual results; Jimi Hendrix, Donovan and Frank Zappa, among others, issued albums with infrared cover photographs.

When focusing black-and-white infrared film, the focus shift for infrared light can usually be ignored when using a small aperture and a slow shutter speed without focus compensation; however, wider apertures like f/2.0 can produce sharp photos only if the lens is meticulously refocused to the infrared index mark, and only if this index mark is the correct one for the filter and film in use. Diffraction effects inside a camera are also greater at infrared wavelengths, so stopping down the lens too far may actually reduce sharpness.

Most apochromatic ('APO') lenses do not have an infrared index mark and do not need to be refocused for the infrared spectrum because they are already optically corrected into the near-infrared. Catadioptric lenses often do not require this adjustment because their mirror-containing elements do not suffer from chromatic aberration, so the overall aberration is comparatively small. Catadioptric lenses do, of course, still contain lenses, and these retain their dispersive properties.

Infrared black-and-white films require special development times, but development is usually achieved with standard black-and-white film developers and chemicals (like D-76). Kodak HIE film has a polyester film base that is very stable but extremely easy to scratch, so special care must be taken when handling it throughout the development and printing/scanning process to avoid damage to the film. Kodak HIE film was sensitive to 900 nm.

As of November 2, 2007, "KODAK is preannouncing the discontinuance" of HIE Infrared 35 mm film stating the reasons that, "Demand for these products has been declining significantly in recent years, and it is no longer practical to continue to manufacture given the low volume, the age of the product formulations and the complexity of the processes involved." At the time of this notice, HIE Infrared 135-36 was available at a street price of around $12.00 a roll at US mail order outlets.

Arguably the greatest obstacle to infrared film photography has been the increasing difficulty of obtaining infrared-sensitive film. Despite the discontinuance of HIE, newer infrared-sensitive emulsions from EFKE, ROLLEI and ILFORD are still available, though these formulations differ in sensitivity and specifications from the venerable KODAK HIE, which had been around for at least two decades. Some of these infrared films are available in 120 and larger formats as well as 35 mm, which adds flexibility to their application. With the discontinuance of Kodak HIE, Efke's IR820 became the only IR film on the market with good sensitivity beyond 750 nm; the Rollei film does extend beyond 750 nm, but its IR sensitivity falls off very rapidly.

Color infrared transparency films have three sensitized layers that, because of the way the dyes are coupled to these layers, reproduce infrared as red, red as green, and green as blue. All three layers are sensitive to blue so the film must be used with a yellow filter, since this will block blue light but allow the remaining colors to reach the film. The health of foliage can be determined from the relative strengths of green and infrared light reflected; this shows in color infrared as a shift from red (healthy) towards magenta (unhealthy). Early color infrared films were developed in the older E-4 process, but Kodak later manufactured a color transparency film that could be developed in standard E-6 chemistry, although more accurate results were obtained by developing using the AR-5 process. In general, color infrared does not need to be refocused to the infrared index mark on the lens.

In 2007 Kodak announced that production of the 35 mm version of their color infrared film (Ektachrome Professional Infrared/EIR) would cease as there was insufficient demand. Since 2011, all formats of color infrared film, specifically Aerochrome 1443 and SO-734, have been discontinued.

There is no currently available digital camera that will produce the same results as Kodak color infrared film, although equivalent images can be produced by taking two exposures, one infrared and the other full-color, and combining them in post-production. The color images produced by digital still cameras using infrared-pass filters are not equivalent to those produced on color infrared film. The colors result from varying amounts of infrared passing through the color filters on the photo sites, further modified by the Bayer filtering. While this makes such images unsuitable for the kind of applications for which the film was used, such as remote sensing of plant health, the resulting color tonality has proved popular artistically.
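The two-exposure compositing described above can be sketched in a few lines of array code. This is a minimal illustration, assuming both frames are already registered and normalized to the same size; the function name and array shapes are hypothetical, and the channel mapping imitates the film's IR-to-red, red-to-green, green-to-blue scheme described later in this article:

```python
import numpy as np

def eir_style_composite(ir: np.ndarray, visible: np.ndarray) -> np.ndarray:
    """Combine a registered single-channel IR frame (H, W) with a
    full-color frame (H, W, 3) into a false-color image using an
    EIR-style channel shift."""
    false_color = np.empty_like(visible)
    false_color[..., 0] = ir               # infrared recorded as red
    false_color[..., 1] = visible[..., 0]  # red recorded as green
    false_color[..., 2] = visible[..., 1]  # green recorded as blue
    return false_color
```

Healthy foliage, which reflects strongly in the near-infrared, then comes out red in the composite, mimicking the look of the discontinued film.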

Color digital infrared, as part of full-spectrum photography, is gaining popularity. The ease of creating a softly colored photo with infrared characteristics has found interest among hobbyists and professionals.

In 2008, Los Angeles photographer Dean Bennici started cutting and hand-rolling Aerochrome color infrared film. All Aerochrome in medium and large formats that exists today came directly from his lab. Infrared photography continues to gain momentum with the success of photographer Richard Mosse and practitioners all around the world.

Digital camera sensors are inherently sensitive to infrared light, which would interfere with normal photography by confusing the autofocus calculations, softening the image (because infrared light is focused differently from visible light), or oversaturating the red channel. Also, some clothing is transparent in the infrared, leading to unintended (at least by the manufacturer) uses of video cameras. Thus, to improve image quality and protect privacy, many digital cameras employ infrared blockers. Depending on the subject matter, infrared photography may not be practical with these cameras because the exposure times become overly long, often in the range of 30 seconds, creating noise and motion blur in the final image. However, for some subject matter the long exposure does not matter, or the motion blur effects actually add to the image. Some lenses will also show a 'hot spot' in the centre of the image, as their coatings are optimised for visible light and not for IR.

An alternative method of DSLR infrared photography is to remove the infrared blocker in front of the sensor and replace it with a filter that removes visible light. This filter is behind the mirror, so the camera can be used normally – handheld, normal shutter speeds, normal composition through the viewfinder, and focus, all work like a normal camera. Metering works but is not always accurate because of the difference between visible and infrared refraction. When the IR blocker is removed, many lenses which did display a hotspot cease to do so, and become perfectly usable for infrared photography. Additionally, because the red, green and blue micro-filters remain and have transmissions not only in their respective color but also in the infrared, enhanced infrared color may be recorded.

Since the Bayer filters in most digital cameras absorb a significant fraction of the infrared light, these cameras are sometimes not very sensitive as infrared cameras and can sometimes produce false colors in the images. An alternative approach is to use a Foveon X3 sensor, which does not have absorptive filters on it; the Sigma SD10 DSLR has a removable IR blocking filter and dust protector, which can be simply omitted or replaced by a deep red or complete visible light blocking filter. The Sigma SD14 has an IR/UV blocking filter that can be removed/installed without tools. The result is a very sensitive digital IR camera.

While it is common to use a filter that blocks almost all visible light, the wavelength sensitivity of a digital camera without internal infrared blocking is such that a variety of artistic results can be obtained with more conventional filtration. For example, a very dark neutral density filter can be used (such as the Hoya ND400) which passes a very small amount of visible light compared to the near-infrared it allows through. Wider filtration permits an SLR viewfinder to be used and also passes more varied color information to the sensor without necessarily reducing the Wood effect. Wider filtration is however likely to reduce other infrared artefacts such as haze penetration and darkened skies. This technique mirrors the methods used by infrared film photographers where black-and-white infrared film was often used with a deep red filter rather than a visually opaque one.

Another common technique with near-infrared filters is to swap the blue and red channels in software (e.g. Photoshop), which retains much of the characteristic 'white foliage' while rendering skies a glorious blue.
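The same red/blue swap can be done outside Photoshop with a single slice of array code. A minimal sketch, assuming the infrared capture has already been loaded as an (H, W, 3) RGB array; the function name is illustrative:

```python
import numpy as np

def swap_red_blue(image: np.ndarray) -> np.ndarray:
    """Swap the red and blue channels of an (H, W, 3) RGB image array,
    the digital equivalent of the channel-swap step described above."""
    return image[..., ::-1]  # reverses channel order: R,G,B -> B,G,R
```

With a library such as Pillow, the array can be obtained via np.asarray(Image.open(path)) and written back with Image.fromarray; the green channel is untouched, so foliage brightness is preserved while sky tones move from red to blue.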

Several Sony cameras had the so-called NightShot facility, which physically moves the blocking filter away from the light path, making the cameras very sensitive to infrared light. Soon after its introduction, this facility was 'restricted' by Sony to make it difficult for people to take photos that saw through clothing. To do this, the iris is opened fully and the exposure duration is limited to long times of 1/30 second or more. It is possible to shoot infrared, but neutral density filters must be used to reduce the camera's sensitivity, and the long exposure times mean that care must be taken to avoid camera-shake artifacts.

Fuji have produced digital cameras for use in forensic criminology and medicine which have no infrared blocking filter. The first camera, designated the S3 PRO UVIR, also had extended ultraviolet sensitivity (digital sensors are usually less sensitive to UV than to IR). Optimum UV sensitivity requires special lenses, but ordinary lenses usually work well for IR. In 2007, FujiFilm introduced a new version of this camera, based on the Nikon D200/FujiFilm S5 and called the IS Pro, which can also take Nikon lenses. Fuji had earlier introduced a non-SLR infrared camera, the IS-1, a modified version of the FujiFilm FinePix S9100. Unlike the S3 PRO UVIR, the IS-1 does not offer UV sensitivity. FujiFilm restricts the sale of these cameras to professional users, with their EULA specifically prohibiting "unethical photographic conduct".

Phase One digital camera backs can be ordered in an infrared modified form.

Remote sensing and thermographic cameras are sensitive to longer wavelengths of infrared (see Infrared spectrum#Commonly used sub-division scheme). They may be multispectral and use a variety of technologies which may not resemble common camera or filter designs. Cameras sensitive to longer infrared wavelengths including those used in infrared astronomy often require cooling to reduce thermally induced dark currents in the sensor (see Dark current (physics)). Lower cost uncooled thermographic digital cameras operate in the Long Wave infrared band (see Thermographic camera#Uncooled infrared detectors). These cameras are generally used for building inspection or preventative maintenance but can be used for artistic pursuits as well.

en.wikipedia.org/wiki/Infrared_photography

Posted by Brokentaco on 2019-07-01 15:33:54
