Archive for April 2019

Tulips at an Amsterdam flower market

The quest for the perfect tulip

In his 1850 novel The Black Tulip, French author Alexandre Dumas (père) describes a competition, initiated by the Dutch city of Haarlem in the 1670s, in which 100,000 florins (150 florins being the average yearly income at the time) would be given to the first person who could grow a black tulip. Although Dumas’s story is fictional, it is based on a real phenomenon that took place in the Netherlands in the early 17th century.

Between 1634 and 1637, the Netherlands (then called the United Provinces) saw the rise and fall of many fortunes due to an intense period of tulip trading. Now described as tulipomania, or simply tulip mania, it involved the wild overvaluation of certain types of tulip, leading to the eventual crash of the inflated market.

In Rare Form

First cultivated in the East, tulips were brought to Europe from the Ottoman Empire during the 16th century (the name tulip is derived from the Turkish word for turban). Soon after their introduction, tulips became popular in various countries, but nowhere so much as in the Netherlands. There are many theories as to why the Dutch developed such an avid interest in tulips; in his book The Botany of Desire, Michael Pollan suggests that the bleakness of the Dutch landscape may be one reason colorful tulips were so quickly embraced. He observes that “what beauty there is in the Netherlands is largely the result of human effort…” making the cultivation of beautiful blooms an attractive pastime.

Another reason for their popularity was their relative rarity. While tulips can be grown simply from seed, there is no guarantee that the resulting flowers will resemble their parent plants at all. The only way to obtain a particularly prized bloom is to grow one from an offset, which Pollan describes as “the little, genetically identical bulblets” found at the base of a tulip bulb. The process of cultivating offsets was a lengthy one, adding to the scarcity of tulips. In addition, the most valued tulips of the time were ones said to be “broken,” that is, those tulips with bright flame or feather-like patterns on their petals. The most famous of this type of tulip was the Semper Augustus, a white flower marked by brilliant red strokes. These tulips produced fewer offsets, making them even rarer; although it was not known at the time, the “broken” effect was caused by a virus that weakened the plant.

Gone to Seed

The genesis of tulipomania is usually ascribed to the 1593 arrival in Leiden of Carolus Clusius, a plant collector and gardener. Bringing with him some tulip bulbs he had acquired while working as the director of the Imperial Botanical Garden in Vienna, Clusius proceeded to cultivate beautiful specimens from them, attracting attention from his new neighbors. However, Clusius was reluctant to part with his bulbs, refusing to sell to eager buyers. Frustrated by his refusal, thieves helped themselves to his garden, stealing many bulbs and selling the seeds they gained from them. These seeds were eventually distributed throughout all the Dutch territories, leading to the increased propagation and variation of tulips. Those lucky enough to grow a particularly beautiful bloom from seed could profit greatly from the sale of its offsets, making tulip cultivation an increasingly lucrative vocation.

As the taste for certain types of tulip became more focused, prices for the most valued bulbs rose dramatically among the upper classes. At first the trade was limited to collectors and the wealthy, but the large amounts of money to be made soon inspired people of more limited means to sell everything they had to cash in on it. At the market’s highest point, single bulbs sold for thousands of florins, the most famous being a Semper Augustus bulb that sold for 6,000 florins (or 40 times the average yearly income).

As more people entered the trade, the sale of real bulbs eventually gave way to windhandel, or wind trade, in which the future production of bulbs was bought and sold. This increasingly risky venture couldn’t last: the tulip bubble burst in February 1637, when fears of oversupply, on the heels of the dramatic price increases earlier that year, caused prices to drop precipitously.

Back Petal

While the story of tulip mania is often told as a cautionary tale and as an analogue to more modern episodes of market inflation and collapse, such as the dotcom bubble, historian Anne Goldgar thinks this description is overblown. In her book Tulipmania: Money, Honor and Knowledge in the Dutch Golden Age, Goldgar finds that tulip speculation was in reality not as frenzied as it is commonly portrayed. She blames the writer Charles Mackay, whose 1841 book Extraordinary Popular Delusions and the Madness of Crowds based its depiction of the craze for tulips on satirical songs from 1637 that tended to exaggerate the facts of the situation. Goldgar argues that the trade, far from being irrational, rested on valid reasons for treating tulips as a valuable commodity, and that the market’s subsequent rise and fall was neither as precipitous as claimed nor the personal ruin of hordes of unlucky investors.

Dutch Treat

Although this volatility in the tulip market was unsettling at the time, out of that early trade came an enduring business for the Netherlands. Now the tulip is a beloved symbol of the country, and plays an important role in economic and cultural activities. It seems unlikely that anyone at the time the tulip came to the Netherlands could have predicted the enormous effect this flower would have over a nation’s history and economy. It is a vivid reminder that when human nature meets Mother Nature, interesting results are sure to follow.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on April 6, 2007.

Image credit: Alice Achterhof [CC0], via Wikimedia Commons

Source: Interesting Thing of the Day

A De'Longhi Superautomatic Coffee Machine

The lazy way to make a perfect cup of coffee

There are those who believe half the pleasure of a great cup of coffee comes from the ritual of making it. The details of the ritual vary from person to person and place to place, but the desired effect is the same: a perfect cup of hot, rich, fresh coffee. “Perfect,” of course, is quite subjective. Among people who take coffee seriously, there is a great deal of disagreement as to what types of bean, roast, and grind make the best coffee, how concentrated the grounds should be, whether the coffee should be infused into the water by dripping, steeping, or steaming, and many other details. Regardless of the precise outcome, however, coffee purists often insist that if you want coffee done right, you must make it by hand, with a great deal of care and attention to detail.

I certainly count myself among those who cherish a perfect cup of coffee. And yet, I’ve never been much for ritual. All things being equal, I’d prefer to have my coffee with as little effort as possible, but I draw the line at those trendy machines with the prefilled plastic pods (you know, Keurig K-Cups, Nespresso, and the like)—the beans are not freshly ground, there’s too much waste, it’s too expensive per cup, and you have too little control over the final product. Fortunately, technology allows me to have my café and drink it too, thanks to a breed of coffee maker known as a superautomatic.

Coffee Making 101

First, a few background concepts about coffee brewing. The standard American method for making coffee is to allow hot water to drip through a filter full of ground beans and then into a carafe sitting on a hot plate. You’ll get eight or ten cups of coffee this way in about five minutes. While operating the coffee maker itself is usually just a matter of flipping a switch, that doesn’t include measuring and pouring the water, inserting the filter, measuring the ground coffee, or disposing of the used grounds. (Add another step or two if you grind your own coffee beans—which you should.) The end result is a relatively dilute coffee whose taste rapidly deteriorates as it ages and evaporates. The person who drinks the first cup often has a much better experience than the one who drinks the last cup.

By contrast, espresso is made one or two cups at a time by forcing hot, pressurized water through a much finer grind of coffee and through a metal filter that allows slightly larger particles of grounds through than a paper filter would. This normally results in a stronger coffee, mainly because less water is used; if you kept forcing water through the grounds for a longer period of time, the coffee would become increasingly weak, eventually reaching the strength most North Americans consider normal. (Think of the Americano, which is just espresso diluted with hot water.) Making espresso (and its milk-added cousins, cappuccino and latte) is normally an exacting manual procedure, but one that results in a fresher cup because the coffee never sits around in a carafe becoming bitter.

I’ll Have a Digital Cappuccino

A superautomatic coffee machine uses the espresso method (pressurized hot water) to make a single cup of coffee at a time, but without any of the manual steps. With the press of a single button, the machine grinds beans stored in an internal hopper; tamps them down into the filter assembly; forces hot water through them into your cup; and then ejects the used grounds into a holding bin. The whole process takes about a minute, and it produces a wonderfully rich, creamy coffee. Most superautomatics allow you to adjust a wide variety of settings, such as the coarseness of the grind, the amount of ground coffee per cup, and the volume and temperature of the coffee. With various combinations of settings, you can get a tiny cup of ultra-concentrated espresso, a large mug of American-style coffee, or anything in between. (My personal preference is Swiss-style café crema, which is stronger than American but weaker than espresso, served in a demitasse cup with a golden foamy finish.)

My wife’s favorite feature of our first superautomatic was its automatic milk frother. This is not simply a wand that squirts steam into a container of milk (though you can do that too if you want). Instead, you drop a small hose into a container of milk, press a button, and the machine sucks in the cold milk and delivers hot frothed milk from a nozzle right into your coffee cup. The frother enabled us to make an excellent cappuccino by pressing exactly two buttons. (Due to reasons, our current superautomatic lacks a frother, but we’ll think about that again the next time we’re in the market for a new model.) Depending on the model and manufacturer, superautomatics have a variety of additional features. Some have a built-in cup warmer, an internal water filtering system, or a second steam pump so that they can brew coffee and steam milk at the same time. Programmable digital models feature an alphanumeric display and one-touch access to popular features, to save your delicate fingers from having to physically move levers or knobs to adjust settings.

You Can Put a Price on True Happiness

Superautomatics don’t come cheap. A good mid-range model, with a digital display and most of the bells and whistles, will run in the neighborhood of US$1,500. A high-end consumer machine can go for as much as $6,000 (which, by the way, is a bargain compared to commercial models); on the other end of the spectrum, if you’re willing to forgo a few of the more esoteric frills, you can find a good basic unit for as little as $500. Unsurprisingly, superautomatics are a frequent cause of buyer’s remorse, which means some good bargains on lightly used machines can often be found on eBay or at dealers with money-back guarantees.

The best-known manufacturers of superautomatic coffee machines are Saeco, Jura, De’Longhi, and Miele, all of which offer a wide selection of models in various price ranges. However, don’t expect to find a great selection of such machines on display at your local Wal-Mart. High-end kitchen stores like Sur la Table and Williams-Sonoma carry superautomatics; apart from that, your best bet is usually an online retailer (such as Seattle Coffee Gear) with a good return policy. Also be prepared to get picky when it comes to coffee beans. Shiny, oily beans are to be avoided; a dark but dry bean such as Illy will make your superautomatic purr.

I Love the Java Jive and It Loves Me

You may be thinking: My generic $25 drip coffee maker works just fine. Why should I spend such an outrageous amount of money on a fancy coffee machine? Sure, the coffee from these machines may be excellent, but is it really worth the difference in price? Speaking for myself, the answer is yes. The combination of outstanding coffee and one-button convenience is worth quite a lot to me, and I’ve never regretted buying either of the two superautomatic coffee machines I’ve owned. Needless to say, superautomatics are not for everyone. If you don’t drink much coffee or can’t tell the difference between instant and fresh-brewed, a superautomatic is a frivolous investment. On the other hand, if you are—or aspire to be—a coffee connoisseur, this marvel of engineering may lead you to wonder what you ever found so endearing about your beloved French press or copper coffee pot.

Since I bought my first superautomatic, my contributions to the Starbucks empire have fallen off dramatically. My kitchen may not have quite the ambiance of a local coffee shop, but the wireless network is faster and the coffee is better. That digital biscotti maker is still a dream, but I always know where to get a good cup of Joe.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on April 8, 2003, and again in a slightly revised form on June 7, 2004.

Image credit: De'Longhi Deutschland GmbH [CC BY-SA 3.0], via Wikimedia Commons

Source: Interesting Thing of the Day

Take Control of Your Browser cover

For most of us, the one app we couldn’t possibly live without is a web browser. You can do almost anything in a browser these days…but are you browsing with one hand tied behind your back? It’s easy to get into inefficient browsing habits, but you might be surprised at what a little know-how about this everyday tool can do for your efficiency and happiness.

Take Control of Your Browser, by veteran tech writer Robyn Weisman, helps you discover your browser’s hidden talents, increase browsing speed, solve many common problems, and configure settings and extensions for maximum efficiency. If you’re troubled by ads, frustrated by ineffective searches, or confused by inscrutable error messages, this book will help you overcome your problems. Beginners will find lots of practical how-to advice, and even power users will learn tips and tricks for better browsing.

This book, like all Take Control titles, comes as an ebook, and you can download any combination of formats—PDF, EPUB, and/or Kindle’s Mobipocket format—so you can read it on pretty much any computer, smartphone, tablet, or ebook reader. The cover price is $14.99, but as an Interesting Thing of the Day reader, you can buy it this week for 30% off, or just $10.49.

Source: Interesting Thing of the Day

Illustration of a cochlear implant

The sound and the fury

Today’s article was going to be a pretty straightforward technological exposition. I was going to describe a procedure that can improve hearing in ways that conventional hearing aids cannot, mention some of the limitations and risks involved, and pretty much leave it at that. Then I got an email from a friend wondering if I was planning to cover the political issues cochlear implants raise for the Deaf community. Um…political issues? I hadn’t known there were any. But after a bit of research, I discovered that the controversy surrounding this procedure is at least as interesting as the procedure itself, which has been called everything from a miracle cure to genocide.

Can You Hear Me Now?

First, a bit of background. There are many different types and causes of deafness. Some kinds of hearing loss can be compensated for very adequately with just a bit of amplification—namely, a hearing aid. However, if there is a defect or damage in the inner ear, a hearing aid may do no good. Our perception of sound results from the vibrations of tiny hairs lining the cochlea, a spiral, fluid-filled organ in the inner ear. When the hairs move, the hair cells convert the movement into nerve impulses, which are then sent to the brain for decoding. If the vibrations never reach the cochlea, or if the hair cells themselves are damaged, no neural stimulation occurs and deafness results.

However, if most of the underlying nerve fibers themselves (and the neural pathways to the brain) are intact, they can be stimulated electrically, producing a sensation interpreted by the brain as sound. A cochlear implant places a series of electrodes inside the cochlea to do just that; a wire connects these electrodes to a small receiver with its antenna placed under the skin. Outside the skin, a device that looks somewhat like a hearing aid picks up sounds with a microphone, digitizes them in such a way that they produce meaningful signals for the electrodes, and transmits them via radio waves to the receiver. The net result is the perception of sounds picked up by the microphone, but because this apparatus completely bypasses the eardrum and middle ear, it’s really an artificial ear rather than a hearing aid. The technology was developed by Dr. Graeme Clark at the University of Melbourne in the 1960s and 1970s; the first implant was performed in 1978.

Although any number of technological innovations have occurred in the decades since, cochlear implants are still by no means perfect. They vary greatly in their effectiveness, depending on a large number of variables. And the effect they produce, while auditory in nature, is not identical to what would be experienced with a fully functional ear. In addition, patients with cochlear implants require months or years of training to associate their new perceptions with sounds as they are usually known. In the most successful cases, implant recipients can eventually understand someone talking on the phone—but there is no guarantee of that level of hearing. Still, tens of thousands of people around the world have received the implants, and the procedure is rapidly gaining in popularity.

You Will All Be Assimilated

To a hearing person such as myself, all this sounds very rosy and optimistic. Of course, the surgery is rather delicate and carries with it the usual risks associated with putting holes in one’s head; plus, the cost of the procedure and rehabilitative therapy is quite high. But these are not the primary concerns of the Deaf community. Although the controversy has diminished greatly in recent years, cochlear implants—particularly for children—were strongly opposed by many deaf people for some time because of a fear that they would destroy the Deaf culture in general and the use of sign language in particular.

On the surface, this argument may seem sort of silly to hearing persons. But the Deaf community has a unique culture and language that they rightly consider quite valuable; the thought of losing such a culture to technology is understandably offensive. One of the key beliefs of the Deaf community is that deafness is simply another perfectly valid way of life, not a problem that needs to be fixed. So the intimation that deafness is a “disease” for which cochlear implants are a “cure” smacks of assimilationism: “You must all be like us.” (The 2000 documentary film Sound and Fury examines the controversy over cochlear implants in detail as it follows members of two families through their decisions about whether or not to undergo the procedure.)

Even detractors of cochlear implants allow that this must be an individual decision, and that implants may be a reasonable choice for people who have lost hearing later in life (and who therefore may not have integrated themselves into the Deaf community). But when it comes to implants for children, the story is different. If a deaf child does not receive an implant, he or she is likely to learn sign language easily and adopt the Deaf culture. With an implant, the child is more likely to be treated as a hearing child. However, the imperfect nature of “hearing” provided by the implants may make it difficult to learn spoken English; meanwhile, because the parents have little incentive to raise the child as a deaf person, the child may never learn sign language. The result is that the child has less ability to communicate than if the implant had not been performed. In addition, if the child has partial hearing, the implant may eliminate any possibility of later using a conventional hearing aid by impeding normal functioning of the cochlea.

On the whole, decades of experience with cochlear implants in thousands of children have not borne out these worries, so resistance to implants in children is decreasing somewhat. Conventional wisdom holds that someone with a cochlear implant is still deaf, and many people with implants—children and adults alike—continue to learn and use sign language, participating actively in the Deaf culture. If cochlear implants, in a roundabout way, can promote both bilingualism and biculturalism, that may be their most compelling advantage.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on October 14, 2004.

Image credit: BruceBlaus [CC BY-SA 4.0], via Wikimedia Commons

Source: Interesting Thing of the Day

An old clock

Split-second thinking

The whole notion of time fascinates me endlessly—speaking metaphorically, of course. Numerous articles here at Interesting Thing of the Day have involved time or timekeeping in one form or another. In one of these articles, about analog clocks, I made what I thought was a commonsense and uncontroversial remark:

…time itself is continuous, not an infinite series of discrete steps…. Units like seconds, minutes, and hours are just a convenient, arbitrary fiction, after all—they don’t represent anything objectively real in the world.

A reader wrote in to suggest that I wasn’t up to date on my quantum physics, according to some theories of which time is indeed quantized, or fundamentally composed of very tiny but indivisible units.

At first, I had a hard time getting my head around this notion, and after considerable research…I still have a hard time getting my head around this notion. Although I try to keep generally abreast of the latest developments in the world of science, I can’t claim to do anything more than dabble in theoretical physics, and complex equations simply make my eyes glaze over. Nevertheless, it’s not only true that many scientists take the notion of quantized time for granted; there was also a fairly major uproar in the early 2000s when a young upstart from New Zealand published a paper that dared to challenge this notion with a theory that says, in effect, that there’s no such thing as an indivisible moment in time.

Second Thoughts

To understand what it would mean for time to be quantized, think of a unit of time, such as a second. You can divide that in half, getting two shorter periods of a half-second each. You can go much smaller, too, dividing a second into a thousand parts called milliseconds, a million parts called microseconds, a billion parts called nanoseconds, a trillion parts called picoseconds, and so on. A trillionth of a second is, to me, such an unimaginably short period of time that I’d be happy to consider it, for all practical purposes, indivisible—an “atom” of time, as it were. But that’s nothing. A trillionth of a second is a decimal point, 11 zeroes, and a 1. Some scientists say that meaningful distinctions in time can be made down to 10⁻⁴⁴ second, or 43 zeroes after the decimal point before you reach that 1. But the question is: how low can you go? Is there some point, some number of zeroes, beyond which time cannot be divided any further?
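Purely as an illustration (the unit list and formatting are mine, not part of the original argument), the successive subdivisions described above can be written out with a few lines of Python:

```python
# Successive named subdivisions of one second, written out in full.
units = {
    "millisecond": 3,   # a thousandth of a second
    "microsecond": 6,   # a millionth
    "nanosecond": 9,    # a billionth
    "picosecond": 12,   # a trillionth
}

for name, exponent in units.items():
    value = 10 ** -exponent
    # Print each unit with exactly as many decimal places as its exponent.
    print(f"1 {name:<11} = {value:.{exponent}f} s")

# A trillionth of a second, written out in full:
trillionth = f"{10 ** -12:.12f}"
print(trillionth)  # 0.000000000001
```

Floating-point numbers give out long before anything near the 10⁻⁴⁴-second scale, so this is strictly a formatting exercise, not a physics one.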

One of the fundamental notions of calculus, and of physics, is that one can determine a moving object’s exact position at some instant in time. That there should be such a thing as an “instant” is taken as a given. An instant effectively has no duration; if it did, a moving object would change its position between the start of that instant and its end—in other words, its position couldn’t be known precisely. However, seemingly it can, or at least that operational assumption has served calculus well all these centuries. But is the notion of an instant merely a convenient fiction, or does it in some sense represent reality?

Among scientists studying quantum theory, and particularly among those working on the quixotic task of unifying general relativity with quantum physics, the question of whether time is truly continuous or not is of particular interest. Some scientists say that, as far as general relativity goes, time is continuous, but that in order to unify the two theories, we might have to accept that time can be treated as a succession of temporal quanta (or chronons), in much the same way that light can be treated as either a wave or a particle. Others say that time is not merely a fourth dimension but is itself three-dimensional: from our point of view time is continuous, but from a point of view that encompasses time’s other dimensions, it is quantized.

But all kinds of mysterious things happen in the quantum realm. What about the macro world we’re all familiar with?

Time for a Kiwi

In 2003, a then-27-year-old student from New Zealand named Peter Lynds published a paper in the peer-reviewed journal Foundations of Physics Letters that caused a great deal of controversy. Lynds claimed, essentially, that the whole notion of an instant is flawed, because if there were such a thing, a moving object measured and observed at that instant would appear to be static, and thus indistinguishable from a genuinely static object measured at that same instant. Since the two measurements clearly represent objects with different states, Lynds argued, it must be the case that there really aren’t any instants, only intervals (though those intervals might be very tiny). If true, this means that a moving object’s position can only ever be approximated—whether at the macro level or at the quantum level. And for this very reason, most of Zeno’s paradoxes turn out not to be paradoxical after all. Lynds went on to claim that time doesn’t flow because flow presumes an ongoing series of instants, that there is no “now” as such, and that our perception of time is just an odd consequence of the way our brains are wired.

The term “snapshot” is frequently used to describe the instant of time at which an object’s position might be determined, but I think it actually helps to make Lynds’s point. If you’re taking a picture of something that’s moving, you need a fast shutter speed to “freeze” the action, and the faster your subject is moving, the faster the shutter speed has to be. But if you set your shutter to, say, 1/4000 of a second and the photograph shows an arrow in mid-flight, with no blurring to suggest motion, that still doesn’t mean the arrow didn’t cover any distance during that tiny portion of a second the shutter was open. Of course it did. It’s just that the distance was sufficiently small, given the resolution of the camera and the human eye, to create the illusion of being frozen. So even if your hypothetical “shutter speed” is a zillionth of a second long, so that your measurement appears to give an exact, fixed location, that, too, is merely an illusion. The object in fact occupies more than one position during that time. Nothing mysterious about that at all.
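The arithmetic behind that point is worth making explicit. As a quick sketch (the 60 m/s arrow speed is my assumption, roughly typical for a modern bow, and not a figure from the text):

```python
# How far does an arrow travel while a fast shutter is open?
# Assumption (mine, not the article's): arrow speed of 60 m/s.
arrow_speed_m_per_s = 60.0
shutter_s = 1 / 4000  # exposure time in seconds

distance_m = arrow_speed_m_per_s * shutter_s
print(f"{distance_m * 100:.1f} cm")  # 1.5 cm of travel in a "frozen" shot
```

A centimeter and a half is real motion; at typical subject distances it simply spans too few pixels to register as blur, which is exactly the illusion described above.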

Instant Controversy

When I heard Lynds’s idea, I thought it made perfectly good sense, and what I couldn’t comprehend was how scientists claimed, with considerable fervor, that they either couldn’t understand it or thought it was wrong-headed. I confess that I have not followed the debate about Lynds’s paper very closely in the years since its publication, and that I can understand only part of what I’ve read. However, it seems to me that many criticisms tend to mention either or both of two facts. First, critics note that Lynds was uncredentialed—he had only six months of university study at the time, so who was he to gainsay PhDs with years of experience? And second, if he were correct, that would mean that calculus as we know it must be essentially wrong or at least incomplete. And we all know it’s right. Right?

As to the matter of Lynds’s lack of an advanced degree at the time, all I have to say is: if he’s correct, that doesn’t matter, and those who say otherwise take themselves, and their formal education, way too seriously. As for the supposed assault on calculus, well, Lynds implies that calculus is not exactly wrong so much as very slightly inaccurate. Calculus as it stands appears to be right, but then, so are Newton’s laws of physics. Except they aren’t always: Newtonian physics breaks down both at the quantum level and when objects approach the speed of light. It seems to me—and again, I’m speaking as a nonmathematician here—that the very same thing could be true in this case. Calculus can be right at one level, and the absence of quantized time can be right at another level.

Of course, those are not the only criticisms, and the debate between Lynds’s supporters and detractors has gone through so many rounds of rebuttals and rejoinders that I can no longer keep track of who thinks what. But on the whole, the debate has made me feel even more secure in my personal, nonscientific belief that time is continuous, and I’m not going to doubt that for one instant.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on July 21, 2006.

Image credit: Illymarry [CC BY-SA 4.0], via Wikimedia Commons

Source: Interesting Thing of the Day