New Rule: Just because a country elects a smart president doesn’t make it a smart country. A few weeks ago I was asked by Wolf Blitzer if I thought Sarah Palin could get elected president, and I said I hope not, but I wouldn’t put anything past this stupid country. It was amazing - in the minute or so between my calling America stupid and the end of the Cialis commercial, CNN was flooded with furious emails and the twits hit the fan. And you could tell that these people were really mad because they wrote entirely in CAPITAL LETTERS!!! It’s how they get the blood circulating when the Cialis wears off. Worst of all, Bill O’Reilly refuted my contention that this is a stupid country by calling me a pinhead, which A) proves my point, and B) is really funny coming from a doody-face like him.
Now, the hate mail all seemed to have a running theme: that I may live in a stupid country, but they lived in the greatest country on earth, and that perhaps I should move to another country, like Somalia. Well, the joke’s on them because I happen to have a summer home in Somalia... and no I can’t show you an original copy of my birth certificate because Woody Harrelson spilled bong water on it.
And before I go about demonstrating how sadly easy it is to prove the dumbness dragging down our country, let me just say that ignorance has life and death consequences. On the eve of the Iraq War, 69% of Americans thought Saddam Hussein was personally involved in 9/11. Four years later, 34% still did. Or take the health care debate we’re presently having: members of Congress have recessed now so they can go home and "listen to their constituents." An urge they should resist because their constituents don’t know anything. At a recent town-hall meeting in South Carolina, a man stood up and told his Congressman to "keep your government hands off my Medicare," which is kind of like driving cross country to protest highways.
I’m the bad guy for saying it’s a stupid country, yet polls show that a majority of Americans cannot name a single branch of government, or explain what the Bill of Rights is. 24% could not name the country America fought in the Revolutionary War. More than two-thirds of Americans don’t know what’s in Roe v. Wade. Two-thirds don’t know what the Food and Drug Administration does. Some of this stuff you should be able to pick up simply by being alive. You know, like the way the Slumdog kid knew about cricket.
Not here. Nearly half of Americans don’t know that states have two senators and more than half can’t name their congressman. And among Republican governors, only 30% got their wife’s name right on the first try.
Sarah Palin says she would never apologize for America. Even though a Gallup poll says 18% of Americans think the sun revolves around the earth. No, they’re not stupid. They’re interplanetary mavericks. A third of Republicans believe Obama is not a citizen, and a third of Democrats believe that George Bush had prior knowledge of the 9/11 attacks, which is an absurd sentence because it contains the words "Bush" and "knowledge."
People bitch and moan about taxes and spending, but they have no idea what their government spends money on. The average voter thinks foreign aid consumes 24% of our federal budget. It’s actually less than 1%. And don’t even ask about cabinet members: seven in ten think Napolitano is a kind of three-flavored ice cream. And last election, a full one-third of voters forgot why they were in the booth, handed out their pants, and asked, "Do you have these in a relaxed-fit?"
And I haven’t even brought up America’s religious beliefs. But here’s one fun fact you can take away: did you know only about half of Americans are aware that Judaism is an older religion than Christianity? That’s right, half of America looks at books called the Old Testament and the New Testament and cannot figure out which one came first.
And these are the idiots we want to weigh in on the minutia of health care policy? Please, this country is like a college chick after two Long Island Iced Teas: we can be talked into anything, like wars, and we can be talked out of anything, like health care. We should forget town halls, and replace them with study halls. There’s a lot of populist anger directed towards Washington, but you know who concerned citizens should be most angry at? Their fellow citizens. "Inside the beltway" thinking may be wrong, but at least it’s thinking, which is more than you can say for what’s going on outside the beltway.
And if you want to call me an elitist for this, I say thank you. Yes, I want decisions made by an elite group of people who know what they’re talking about. That means Obama budget director Peter Orszag, not Sarah Palin.
Which is the way our founding fathers wanted it. James Madison wrote that "pure democracy" doesn’t work because "there is nothing to check... an obnoxious individual." Then, in the margins, he doodled a picture of Joe the Plumber.
Until we admit there are things we don’t know, we can’t even start asking the questions to find out. Until we admit that America can make a mistake, we can’t stop the next one. A smart guy named Chesterton once said: "My country, right or wrong is a thing no patriot would ever think of saying... It is like saying ’My mother, drunk or sober.’" To which most Americans would respond: "Are you calling my mother a drunk?"
A judge on Tuesday ordered Microsoft (NSDQ: MSFT) to stop selling its popular Word document creation application in the United States in 60 days, after finding that the software contains technology that violates a patent held by a third party.
Microsoft Office, which includes Word, accounted for more than $3 billion in worldwide sales in Microsoft’s most recent fiscal year and is used by literally millions of businesses and consumers for everyday tasks like word processing and making spreadsheets and presentations.
The judge said the ruling applies to Word 2003 and Word 2007.
Investors shrugged off the news—perhaps in anticipation of a higher court overturning the ruling, which arose from the plaintiff-friendly Eastern Texas federal jurisdiction. Microsoft shares were up 1.6% to $23.50 in early trading Wednesday.
Judge Leonard Davis, of U.S. District Court for Eastern Texas, said Microsoft "unlawfully infringed" on a patent that describes how programs go about "manipulating a document’s content and architecture separately." The patent, No. 5,787,449, is held by Toronto-based i4i, Inc. i4i develops "collaborative content solutions," according to its Web site. i4i originally sued Microsoft for patent infringement in 2007.
"We are disappointed by the court’s ruling," Microsoft spokesman Kevin Kutz said in an e-mail. "We believe the evidence clearly demonstrated that we do not infringe and that the i4i patent is invalid. We will appeal the verdict."
Davis on Tuesday ordered Microsoft to pay $240 million in damages to i4i, plus court costs and interest. More significantly, he enjoined Microsoft from continuing to sell Microsoft Word, in its current form, in the U.S.
Specifically, Davis said Microsoft can’t sell versions of Word that can open documents saved in the .XML, .DOCX, or .DOCM formats that contain custom XML. Those formats were at the heart of the patent dispute. .DOCX is the default format for the most current version of Word, which is included in Microsoft Office 2007. Custom XML is used by businesses to link their corporate data to Word documents.
"Microsoft Corporation is hereby permanently enjoined" from selling Word 2003 and Word 2007 in the U.S. Davis, wrote in his order.
Davis also prohibited Microsoft from providing technical support for infringing products sold after the injunction takes effect, or from "testing, demonstrating, or marketing the ability of the infringing and future Word products to open an XML file containing custom XML."
Davis said the injunction does not apply to versions of Word that open an XML file as plain text or which apply a transform that removes all custom XML elements—possibly paving the way for Microsoft to issue a patch that rectifies the problem.
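To make that carve-out concrete: a .docx file is a ZIP package of XML parts, and custom XML markup appears as <w:customXml> wrapper elements inside the word/document.xml part. Below is a minimal, hypothetical sketch (Python with lxml) of the kind of "transform that removes all custom XML elements" the order contemplates. It unwraps the tagging while keeping the visible content; it is an illustration only, not Microsoft's actual fix.

```python
from lxml import etree

# WordprocessingML main namespace (standard OOXML; not specific to this case).
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def remove_custom_xml_elements(document_xml: bytes) -> bytes:
    """Unwrap every <w:customXml> element, keeping its children.

    A rough sketch: a real transform would also have to clean up
    relationship parts and the custom XML data store, which this
    deliberately ignores.
    """
    root = etree.fromstring(document_xml)
    # Snapshot the matches first, since we mutate the tree as we walk it.
    for el in list(root.iter(W + "customXml")):
        parent = el.getparent()
        idx = parent.index(el)
        # Hoist the wrapper's children into its position...
        for child in reversed(list(el)):
            parent.insert(idx, child)
        # ...then drop the custom XML wrapper itself.
        parent.remove(el)
    return etree.tostring(root, xml_declaration=True, encoding="UTF-8")
```

Run against a document.xml extracted from a package, this leaves the document text intact while the custom tagging disappears, which is broadly what the injunction's plain-text/transform exception describes.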
Coming soon? And apparently different from both the rather childish original and the rather fantastic recent Ron Moore version. By your command:
One of the many casualties of Sept. 11, 2001, was the Tom DeSanto/Bryan Singer version of "Battlestar Galactica," designed as a sequel to the ’70s show, which was less than three months from shooting when the attacks on America happened. Since the Cylon sneak attack was a big part of the $14 million backdoor pilot they were about to shoot, Sci-Fi got very nervous about the film, and everything fell apart.
In the time since, obviously, Ron Moore and David Eick and the entire amazing creative team who did bring "Battlestar Galactica" back to television managed to not only get a new show on the air, but they’ve completed their run and they’re gearing up on a spinoff series, "Caprica." Despite the amount of critical love displayed for the show during its run, I wouldn’t call Moore’s "Galactica" a phenomenon; it drew more of a very enthusiastic and vocal cult audience. As a result, Universal seems to feel that there’s more life in the property, and that there is room for another interpretation.
That’s why they’re nearing a deal with Bryan Singer to produce and possibly direct a brand-new "Battlestar Galactica" feature film.
The question this raises, of course, is how close will this be to the plans that Singer had for the material before Ron Moore’s show aired? Right now, my sources indicate that the big decisions haven’t been made yet. Singer is the first major creative element to be approached, so once they sign him, they’ll go find a writer and they’ll figure out exactly which story they’re telling. It seems like he’d want to get back to the ideas he originally loved about the piece, but since that was developed with another studio, I’m not sure that would work.
And I’m not sure I buy that Singer’s going to come in just to direct a big-screen version of the show that just finished its run. The series wrapped up pretty conclusively, with "The Plan" and "Caprica" already in motion as extensions of that story in different directions.
So is this going to be yet another all-new take on the premise? In February of this year, the announcement was made that Glen Larson had signed a deal with Universal to develop a "Galactica" film that was not tied to any previous version. This has got to be that same project, right? So I guess that means Singer and Larson are going to be sitting down to figure out what take they want to pursue.
Tell me... does Singer’s attachment mean you’re interested? Should Larson revisit this material already? Do you think Universal’s going to be able to get lightning to strike again in the same spot?
UPDATED (8.13) - Variety has confirmed, through Universal, that Singer has come onboard the project. (...)
Classically ’usury’ meant any rate whatsoever charged on a loan, no matter how low. The prohibition of usury was a prohibition against any interest charge on a loan.
With one exception, no one in the ancient world - whether in Greece, China, India or Mesopotamia - prohibited interest. That one exception was the Hebrews who (...) permitted charging interest to non-Jews but prohibited it among Jews.
The fierce medieval Christian assault on usury is decidedly odd. For one thing, there is nothing in the Gospels or the early Fathers, despite their hostility to trade, that can be construed as urging the prohibition of usury. In fact, the parable of the talents in Matthew can easily be taken as approval for earning interest on commercial loans. The campaign against usury begins with the first Church council, in Nicaea in 325 (...)
(T)he prohibition of all usury enters secular legislation for the first time in the all-embracing totalitarian regime of the Emperor Charlemagne.
Murray Rothbard, Economic Thought Before Adam Smith
While we clearly need health-care reform, the last thing our country needs is a massive new health-care entitlement that will create hundreds of billions of dollars of new unfunded deficits and move us much closer to a government takeover of our health-care system. Instead, we should be trying to achieve reforms by moving in the opposite direction—toward less government control and more individual empowerment. Here are eight reforms that would greatly lower the cost of health care for everyone:
• Remove the legal obstacles that slow the creation of high-deductible health insurance plans and health savings accounts (HSAs). The combination of high-deductible health insurance and HSAs is one solution that could solve many of our health-care problems. For example, Whole Foods Market pays 100% of the premiums for all our team members who work 30 hours or more per week (about 89% of all team members) for our high-deductible health-insurance plan. We also provide up to $1,800 per year in additional health-care dollars through deposits into employees’ Personal Wellness Accounts to spend as they choose on their own health and wellness.
(...) Our plan’s costs are much lower than typical health insurance, while providing a very high degree of worker satisfaction.
• Equalize the tax laws so that employer-provided health insurance and individually owned health insurance have the same tax benefits. Now employer health insurance benefits are fully tax deductible, but individual health insurance is not. This is unfair.
• Repeal all state laws which prevent insurance companies from competing across state lines. We should all have the legal right to purchase health insurance from any insurance company in any state, and we should be able to use that insurance wherever we live. Health insurance should be portable.
• Repeal government mandates regarding what insurance companies must cover. These mandates have increased the cost of health insurance by billions of dollars. What is insured and what is not insured should be determined by individual customer preferences and not through special-interest lobbying.
• Enact tort reform to end the ruinous lawsuits that force doctors to pay insurance costs of hundreds of thousands of dollars per year. These costs are passed back to us through much higher prices for health care.
• Make costs transparent so that consumers understand what health-care treatments cost. How many people know the total cost of their last doctor’s visit and how that total breaks down? What other goods or services do we buy without knowing how much they will cost us?
• Enact Medicare reform. We need to face up to the actuarial fact that Medicare is heading towards bankruptcy and enact reforms that create greater patient empowerment, choice and responsibility.
• Finally, revise tax forms to make it easier for individuals to make a voluntary, tax-deductible donation to help the millions of people who have no insurance and aren’t covered by Medicare, Medicaid or the State Children’s Health Insurance Program.
Many promoters of health-care reform believe that people have an intrinsic ethical right to health care—to equal access to doctors, medicines and hospitals. While all of us empathize with those who are sick, how can we say that all people have more of an intrinsic right to health care than they have to food or shelter?
Health care is a service that we all need, but just like food and shelter it is best provided through voluntary and mutually beneficial market exchanges. A careful reading of both the Declaration of Independence and the Constitution will not reveal any intrinsic right to health care, food or shelter. That’s because there isn’t any. This "right" has never existed in America.
Even in countries like Canada and the U.K., there is no intrinsic right to health care. Rather, citizens in these countries are told by government bureaucrats what health-care treatments they are eligible to receive and when they can receive them. All countries with socialized medicine ration health care by forcing their citizens to wait in lines to receive scarce treatments.
Although Canada has a population smaller than California, 830,000 Canadians are currently waiting to be admitted to a hospital or to get treatment, according to a report last month in Investor’s Business Daily. In England, the waiting list is 1.8 million.
(...) Rather than increase government spending and control, we need to address the root causes of poor health. This begins with the realization that every American adult is responsible for his or her own health.
Unfortunately many of our health-care problems are self-inflicted: two-thirds of Americans are now overweight and one-third are obese. Most of the diseases that kill us and account for about 70% of all health-care spending—heart disease, cancer, stroke, diabetes and obesity—are mostly preventable through proper diet, exercise, not smoking, minimal alcohol consumption and other healthy lifestyle choices.
(...) We should be able to live largely disease-free lives until we are well into our 90s and even past 100 years of age.
Health-care reform is very important. Whatever reforms are enacted it is essential that they be financially responsible, and that we have the freedom to choose doctors and the health-care services that best suit our own unique set of lifestyle choices. (...)
The Belgian historian Jean Stengers has estimated that, up to 1908, the Congo yielded Leopold roughly 60 million francs in profit, with another 24 million after Belgium took it over. Colonial administration, defense and transport cost the king and the country some 210 million all told: a net loss of 126 million francs.
Given the shifts in population patterns, the missionaries’ reports and estimates of the number of souls in their mission territories, and recent historical research, it is plausible that between 1885 and 1908 more than ten million people (...) were murdered by Leopold’s henchmen. (...)
But even with its unprecedented cruelty, the exploitation of the Belgian Congo was genuinely profitable for only a few years. Leopold’s ‘murder business’ could only function because he pocketed the profits and shifted his debts and the (...) costs of (...) administration onto his country. Leopold murdered in the Congo and robbed his own people, and in return he had grandiose buildings and shameful bronze statues of himself put up.
Philipp Blom, De duizelingwekkende jaren, pp. 157-158
Consider the following definition of freedom: the absence of monopoly.
The absence of monopoly means that you can exercise exit, even if you cannot exercise voice. The presence of monopoly means that, at most, you can exercise voice.
Neither my local supermarket nor any of its suppliers has a way for me to exercise voice. They don’t hold elections. They don’t have town-hall meetings where they explain their plans for what will be in the store. By democratic standards, I am powerless in the supermarket.
And yet, I feel much freer in the supermarket than I do with respect to my county, state, or federal government. For each item in the supermarket, I can choose whether to put it into my cart and pay for it or leave it on the shelf. I can walk out of the supermarket at any time and go to a competing grocery.
The exercise of voice, including the right to vote, is not the ultimate expression of freedom. Rather, it is the last refuge of those who suffer under a monopoly. If we take it as given that the political jurisdiction where I reside is a monopoly, then perhaps I will have more influence over that monopoly if I have a right to vote and a right to organize opposition than if I do not. However, (...), the reality is that the amount of influence I have is shrinking while the scope of the monopolist is growing.
The idea of charter cities (or seasteading) will be a success to the extent that it creates a viable exit option vis-a-vis government. Suppose that the Chinese government loses its monopoly power, because it becomes easy for people residing in China to choose to live under alternative governments. In that hypothetical case, I would argue that those residents are free, even if those who choose the Chinese government are not allowed to vote in contested elections or to freely criticize their government. If you lived in North Korea, which would you rather have--the right to vote or the right to leave?
In fact, if we had real competitive government, then we would be no more interested in elections and speaking out to government officials than we are in holding elections and town-hall meetings at the supermarket. I repeat: real freedom is the absence of monopoly.
YouTube, the site on which anyone can upload, watch and share video clips free of charge, has had dozens of music videos by Belgian artists removed.
The site did so after findings by the copyright society Sabam. "YouTube shouldn’t try to pass the buck to us," Sabam spokesman Thierry Dachelet responds in Het Laatste Nieuws. "The solution is for them to conclude an agreement with us about the rights."
Let a thousand business models finally bloom, Sabam!
Incredibly, President George W. Bush told French President Jacques Chirac in early 2003 that Iraq must be invaded to thwart Gog and Magog, the Bible’s satanic agents of the Apocalypse.
Honest. This isn’t a joke. The president of the United States, in a top-secret phone call to a major European ally, asked for French troops to join American soldiers in attacking Iraq as a mission from God.
Now out of office, Chirac recounts that the American leader appealed to their “common faith” (Christianity) and told him: “Gog and Magog are at work in the Middle East…. The biblical prophecies are being fulfilled…. This confrontation is willed by God, who wants to use this conflict to erase his people’s enemies before a New Age begins.”
This bizarre episode occurred while the White House was assembling its “coalition of the willing” to unleash the Iraq invasion. Chirac says he was boggled by Bush’s call and “wondered how someone could be so superficial and fanatical in their beliefs.”
After the 2003 call, the puzzled French leader didn’t comply with Bush’s request. Instead, his staff asked Thomas Romer, a theologian at the University of Lausanne, to analyze the weird appeal. Dr. Romer explained that the Old Testament book of Ezekiel contains two chapters (38 and 39) in which God rages against Gog and Magog, sinister and mysterious forces menacing Israel. Jehovah vows to smite them savagely, to “turn thee back, and put hooks into thy jaws,” and slaughter them ruthlessly. In the New Testament, the mystical book of Revelation envisions Gog and Magog gathering nations for battle, “and fire came down from God out of heaven, and devoured them.”
In 2007, Dr. Romer recounted Bush’s strange behavior in Lausanne University’s review, Allez Savoir. A French-language Swiss newspaper, Le Matin Dimanche, printed a sarcastic account titled: “When President George W. Bush Saw the Prophesies of the Bible Coming to Pass.” France’s La Liberte likewise spoofed it under the headline “A Small Scoop on Bush, Chirac, God, Gog and Magog.” But other news media missed the amazing report.
Subsequently, ex-President Chirac confirmed the nutty event in a long interview with French journalist Jean-Claude Maurice, who tells the tale in his new book, Si Vous le Répétez, Je Démentirai (If You Repeat it, I Will Deny), released in March by the publisher Plon.
Oddly, mainstream media are ignoring this alarming revelation that Bush may have been half-cracked when he started his Iraq war. My own paper, The Charleston Gazette in West Virginia, is the only U.S. newspaper to report it so far. Canada’s Toronto Star recounted the story, calling it a “stranger-than-fiction disclosure … which suggests that apocalyptic fervor may have held sway within the walls of the White House.” Fortunately, online commentary sites are spreading the news, filling the press void.
The French revelation jibes with other known aspects of Bush’s renowned evangelical certitude. For example, a few months after his phone call to Chirac, Bush attended a 2003 summit in Egypt. The Palestinian foreign minister later said the American president told him he was “on a mission from God” to defeat Iraq. At that time, the White House called this claim “absurd.”
Recently, GQ magazine revealed that former Defense Secretary Donald Rumsfeld attached warlike Bible verses and Iraq battle photos to war reports he hand-delivered to Bush. One declared: “Put on the full armor of God, so that when the day of evil comes, you may be able to stand your ground.”
If each human brain had only one synapse—corresponding to a monumental stupidity—we would be capable of only two mental states. If we had two synapses, then 2^2 = 4 states; three synapses, then 2^3 = 8, and, in general, for N synapses, 2^N states. But the human brain is characterized by some 10^13 synapses. Thus the number of different states of a human brain is 2 raised to this power—i.e., multiplied by itself ten trillion times. This is an unimaginably large number, far greater, for example, than the total number of elementary particles (electrons and protons) in the entire universe, which is much less than 2 raised to the power 10^3. … [Therefore] there must be an enormous number of mental configurations that have never been entered or even glimpsed by any human being in the history of mankind.
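For scale, a quick worked conversion (mine, not Sagan’s) of that count into ordinary powers of ten:

$$2^{10^{13}} = \left(10^{\log_{10} 2}\right)^{10^{13}} \approx 10^{0.301 \times 10^{13}} \approx 10^{3 \times 10^{12}},$$

a number with roughly three trillion digits, whereas the universe’s particle count is commonly put near $10^{80}$, itself far below $2^{10^{3}} \approx 10^{301}$.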
Perhaps the most dangerous legacy of the war was the Northern claim that it could use force and go to war to prevent any state from withdrawing from the Union. This has haunted us in the past decade and will continue to do so, as the Soviet Union’s Mikhail Gorbachev claimed the right to use force to hold his union together and cited Abraham Lincoln as good authority for doing so. In 1999, the Chinese premier reminded President Clinton that he had the right to use force to hold China together, to go to war to reclaim Taiwan, and he too cited Abraham Lincoln as good authority.
What happens when companies engage in fraudulent activity? Short-sellers get wind of it, and, by selling the stock of the company in question, depress the share price and save uninformed investors some of the loss they would otherwise have suffered had they bought in at an undepressed level. How much is that worth? According to Xiaoxia Lou and Jonathan Karpoff, somewhere between 0.2% and 1.5% of the firm’s market cap.
But what if the short sellers have it wrong, and the company in question is not engaged in fraud? Well, in that case the uninformed investors have just been given the opportunity to buy into that company at a discount, thanks to the shorts. They win again!
Is there any downside to short-selling? Not really: the authors say that “there is no evidence that short selling exacerbates a downward price spiral when the misconduct is publicly revealed”.
So thank you, short-sellers, for saving us from buying in to fraudulent firms at inflated prices, and for giving us a nice discount on the share price of non-fraudulent firms. You rock!
I think this distaste for efficient markets comes from two sources. First, many people distrust the “invisible hand.” They do not think markets are fair games that reward virtue and promote social welfare. Secondly, there are critics (stockbrokers, talking heads on CNBC, financial journalists) whose livelihood depends on markets being wrong; otherwise their special insight as to why one should be in telecoms, or bonds, would have no value.
By definition, an efficient outcome is one that cannot be improved upon. (...) I think government power should be minimized, because government failure is far more common than market failure, as the I.Q. of a group is diminished by centralized interaction. Not only do government bureaucrats suffer from the same cognitive and emotional limitations as consumers and investors, they are politically motivated rather than merely self-interested.
(...) Robert Shiller noted that housing was on an unprecedented tear in 2004. But Shiller did not predict an aggregate housing decline; instead, he merely stated the recent increase in home prices was unlikely to continue. In the 2005 edition of Irrational Exuberance, he wrote that in some cities “the price increases may start to slow down, and then to fall. At the same time, it is likely the boom will continue for quite a while in other cities.”
Now, compare this modest warning by a lone economist to the forces promoting home lending from all directions. It was not just a Wall Street phenomenon, but one pushed by our government, legislators, regulators, and even academics (...).
In 2002, President Bush bragged in a speech about how Freddie Mac had begun a program to “help deserving families who have bad credit histories to qualify for homeownership loans.” Bank acquisitions were evaluated in part by their Community Reinvestment Act record, which necessitated lowering underwriting criteria on homes. Furthermore, the Federal Housing Administration was, and is, offering loans with only three percent down, and during the boom, the Department of Housing and Urban Development promoted a program where even this minor investment could be paid for by the homebuilder, allowing a homebuyer to purchase an overpriced house with no money down. As the Republicans discovered in 2004 when they tried to add more oversight to Fannie Mae, there was little legislative appetite for anything close to more stringent lending standards during the boom.
In light of this governmental housing exuberance, I doubt that a more powerful government would have mitigated the boom — rather, it would have made this crisis worse. Indeed, it was only the collapse of the subprime market at the beginning of 2007 as reflected by the ABX-HE subprime housing index that alerted people to the severity of this problem, and shut off financing by mid-2007, six months later. Market prices, not legislators, instigated the end of the insanity. How quickly are failed governmental initiatives usually stopped, once identified?
The nice thing is that markets rely on decentralized self-interest to keep prices in line, which is surely more dependable than legislators building patronage systems and pandering to their base with other people’s money. Letting markets, as opposed to bureaucrats, signal people how to get paid and how to invest, is simply better than the undefined alternative.
First, neoliberalism is employed asymmetrically across ideological divides: it is used frequently by those who are critical of free markets, but rarely by those who view marketization more positively. In part, proponents avoid the term because neoliberalism has come to signify a radical form of market fundamentalism with which no one wants to be associated. Second, neoliberalism is often left undefined in empirical research, even by those who employ it as a key independent or dependent variable. Third, the term is effectively used in many different ways, such that its appearance in any given article offers little clue as to what it actually means.
The contemporary use of neoliberalism is even more striking because scholars once employed the term to mean nearly the opposite of how it is commonly used today. (...) (T)he term neoliberalism was first coined by the Freiburg School of German economists to denote a philosophy that was explicitly moderate in comparison to classical liberalism, both in its rejection of laissez-faire policies and its emphasis on humanistic values. These characteristics imbued neoliberalism with a common substantive meaning and a positive normative valence: it denoted a new liberalism that would improve upon its classical predecessor in specific ways. Only once the term had migrated to Latin America, and Chilean intellectuals started using it to refer to radical economic reforms under the Pinochet dictatorship, did neoliberalism acquire negative normative connotations and cease to be used by market proponents.
Since it was introduced in February, Representative Ron Paul’s "Audit the Fed" bill (H.R. 1207) has gained 282 congressional cosponsors. If adopted, the bill would allow the Government Accountability Office to review, not only the Federal Reserve’s balance sheet, but its recent monetary policy deliberations and transactions.
Fed Chairman Ben Bernanke opposes the plan, saying it would undermine the Fed’s hallowed independence.
But Mr. Paul, a noted libertarian who ran for president last year, also wants to keep the Fed out of Congress’s clutches – by scrapping it altogether. That’s the goal of his follow-up Federal Reserve Board Abolition Act (H.R. 833). Although that measure has yet to gain a single cosponsor, it has plenty of grass-roots support, and Paul hopes that members of Congress will jump on the bandwagon once their eyes are opened by a no-holds-barred audit.
Wacky stuff? Well, if not having a ghost of a chance is enough to make a bill bonkers, Paul’s measure probably qualifies. But that doesn’t mean you’ve got to be crazy to believe that the US economy would be better off without the Fed.
The Fed’s apologists suggest otherwise, of course. They note that the US spent nearly half the years between 1854 and 1913 in recession, as opposed to just 21 percent of the time since the Fed’s establishment in 1913. Who would want to go back to those bad old days?
But consider: the US economy has actually grown less rapidly since 1914 than it did before. And inflation has been much worse, despite both the Civil War, which featured the nation’s worst inflation, and the Great Depression, which featured its severest deflation!
What’s more, the frequent downturns before 1914 were due, not to the lack of a central bank, but to foolish government regulations. Topping the list were bans on branch banking, initiated by state governments and then incorporated into federal banking law. The bans propped up thousands of undercapitalized and under-diversified banks – banks unfit to survive major local shocks, let alone macroeconomic ones. They also caused bank notes – competitively supplied counterparts of today’s Federal Reserve notes – to trade at discounts whenever they traveled far from the solitary offices of banks that issued them.
During the Civil War, state bank notes were taxed out of existence to make way for those of new national banks. Because national banks had to accept one another’s notes at full value, their currency was uniform. But national bank notes had to be backed by government bonds.
That requirement, designed to bolster the Union’s finances while the war raged on, proved disastrous afterward, when government surpluses led to a halving of the federal debt, and to a corresponding shortage of bonds for securing bank notes. The resulting currency panics – in 1873, 1884, 1893, and 1907 – prompted the Fed’s establishment.
But they didn’t have to. Until 1907, prominent reformers favored simply abolishing Civil War-era restrictions on banks’ freedom to issue notes and allowing all banks to branch nationwide to ease the mopping-up of unwanted paper money.
They drew inspiration from Canada, where a similar "asset currency" arrangement had been working smoothly for decades. Between the panic of 1893 and that of 1907, Congress considered more than a dozen "asset currency" measures. But none got anywhere, thanks to local bankers’ determination to block any proposal for branch banking that would threaten their cozy monopolies.
It was only once these deregulatory efforts failed that reformers fell back on the plan of establishing a "central reserve bank." The resulting Federal Reserve Act was, in essence, merely a plan to allow 12 new banks to do what other banks were prevented from doing themselves, namely, establish branch networks and issue currency backed by commercial assets.
But the Federal Reserve plan proved to be a poor substitute for deregulation. By granting monopoly privileges to the Federal Reserve banks, it allowed them to inflate recklessly: By 1919, the US inflation rate, which had cleaved close to zero ever since the Civil War, was close to 20 percent! Yet the Fed was also capable of failing to supply enough money to avert crises. The first downturn over which it presided – that of 1921 – was among the sharpest in US history. Still it was nothing compared to the unprecedented monetary contraction of 1929-1933.
Would asset currency have been any better? Canada’s was: Between 1929 and 1933, for instance, 6,000 US banks failed, and a third of the US money stock was wiped out. In contrast, and despite a fixed Canadian-US dollar exchange rate, Canada’s money stock shrank by just 13 percent, and no Canadian bank failed.
Notwithstanding this superior outcome, the Canadian government itself abandoned asset currency in favor of central banking in 1935, to placate a growing Canadian movement for easy money.
So a call to end the Fed would have been anything but crazy in 1934. Three-quarters of a century and a dozen crises later, there are plenty of grounds for insisting that it hasn’t gotten any crazier.
The adoption of Keynesian and monetarist ideas by central bankers and elected officials subsequently cast the Fed in a proactive macroeconomic role. William McChesney Martin, who served as chairman from 1951 to 1970, said that the job of the Fed was “to take away the punch bowl just as the party gets going.” This might have been wise in theory, but it wasn’t mandated by the law. In 1977, an amendment to the 1913 Act explicitly charged the Fed with promoting “maximum” employment and “stable” prices. The Humphrey-Hawkins Full Employment Act that followed in 1978 mandated the Fed to promote “full” employment while maintaining “reasonable” price stability.
Legislation also has increased the Fed’s responsibilities for overseeing the mechanics of the financial system. The Bank Holding Company Act of 1956 gave the Fed responsibility over holding companies designed to circumvent restrictions placed on individual banks. It was tasked with regulating the formation and acquisition of such companies.
Congress further tasked the Fed with enforcing consumer-protection and fair-lending rules. The Fed was made the primary regulator of the 1968 Truth in Lending Act that required proper disclosure of interest rates and terms. Similarly, the Community Reinvestment Act of 1977 forced the Fed to address discrimination against borrowers from poor neighborhoods.
The expansion of bank holding companies into activities such as investment banking and off-balance-sheet exposures to complex instruments such as credit-default swaps also required the Fed to increase the scope of its supervisory capabilities.
In principle, an exceptionally talented theorist might capably run a Fed focused just on monetary policy. Setting the discount rate and regulating the money supply are centralized, top-down activities that do not require much administrative capacity. But without deep managerial experience and considerable industry knowledge, effective chairmanship of a Fed that relies on far-flung staff to regulate financial institutions and practices is almost unimaginable. The vast territory the Fed covers would challenge the most exceptional and experienced executives.
As it happens, the Fed has been led for more than 20 years by chairmen who had no senior management experience. Prior to running the Fed, Alan Greenspan started a small consulting firm and Ben Bernanke was head of Princeton’s economics department. Given their understandable preoccupation with monetary and macroeconomic matters, how much attention could they be expected to devote to mastering and managing the plumbing side of the Fed? While the record of the Fed’s monetary policy has been mixed, its supervision of financial institutions has been a predictable and comprehensive failure.
The Fed’s excessively broad mandate also has thwarted accountability. The CEOs of Citibank, AIG, Bear Stearns, Lehman and Countrywide are all gone—albeit with too much delay and with no clawback of unmerited compensation. At the Fed, no high-level heads have rolled. Mr. Geithner was promoted to treasury secretary. Mr. Bernanke is treated with great deference as he solemnly testifies that if it weren’t for the Fed, the crisis would have been much worse. But then, how can anyone be held responsible for failing at a job no human could do?
At the very least we should split the monetary policy and regulatory functions of the Fed, as was done through the Maastricht Treaty that established the European Central Bank. What we need now is a debate about how to break up the Fed—and some of the sprawling financial institutions it supervises—in order to make both the regulator and the regulated more manageable and accountable.
You have to be careful with a club anthem. They know that now at Schalke, too. In recent days the German club received angry phone calls and e-mails over the third verse of its club song, which mentions the prophet Mohammed.
"Mohammed was a prophet who understood nothing about football. But of all the beautiful colors, he chose blue and white," Schalke’s fans have been singing for years. Now, suddenly, it is a problem. The Muslim community in particular takes offense at the words.
"We are taking the situation seriously and have contacted the police," Schalke says. "We have also appointed an expert, who will study the song. After that we will see what we do."
Much media attention has relentlessly focused on the influence of "Big Oil"—but the numbers don’t add up. Exxon Mobil is still vilified for giving around 23 million dollars, spread over roughly ten years, to skeptics of the enhanced greenhouse effect. It amounts to about $2 million a year, compared to the US government input of well over $2 billion a year. The total funds supplied by Exxon amount to less than one five-thousandth of the value of carbon trading in just the single year of 2008.
Apparently Exxon was heavily "distorting the debate" with a mere 0.8% of what the US government spent on the climate industry each year at the time. (If so, it’s just another devastating admission of how effective government funding really is.)
As an example for comparison, nearly three times the amount Exxon has put in was awarded to the Big Sky sequestration project to store just 0.1% of the annual carbon-dioxide output of the United States of America in a hole in the ground. The Australian government matched five years of Exxon funding with just one feel-good advertising campaign, "Think Climate. Think Change." (but don’t think about the details).
Perhaps if Exxon had balanced up its input both for and against climate change, it would have been spared the merciless attacks? It seems not, since it has donated more than four times as much to the Stanford-based Global Climate and Energy Project (GCEP). Exxon’s grievous crime is apparently just to help give skeptics a voice of any sort. The censorship must remain complete.
The vitriol against Exxon reached fever pitch in 2005-2008. Environmental groups urged a boycott of Exxon for its views on Global Warming. It was labeled An Enemy of the Planet. James Hansen called for CEOs of fossil energy companies to be "tried for high crimes against humanity and nature." In the next breath he mentioned Exxon.
What do you think are the odds that you will die during the next year? Try to put a number to it — 1 in 100? 1 in 10,000? Whatever it is, it will be twice as large 8 years from now.
This startling fact was first noticed by the British actuary Benjamin Gompertz in 1825 and is now called the “Gompertz Law of human mortality.” Your probability of dying during a given year doubles every 8 years. For me, a 25-year-old American, the probability of dying during the next year is a fairly minuscule 0.03% — about 1 in 3,000. When I’m 33 it will be about 1 in 1,500, when I’m 42 it will be about 1 in 750, and so on. By the time I reach age 100 (and I do plan on it) the probability of living to 101 will only be about 50%. This is seriously fast growth — my mortality rate is increasing exponentially with age.
And if my mortality rate (the probability of dying during the next year, or during the next second, however you want to phrase it) is rising exponentially, that means that the probability of me surviving to a particular age is falling super-exponentially.
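Here is a minimal sketch of that arithmetic in Python, using only the numbers quoted above (a 0.03% annual risk at age 25, doubling every 8 years). It illustrates the shape of the law, not real actuarial data, and its outputs will differ a little from the article's rounded figures.

```python
BASE_AGE = 25
BASE_RATE = 0.0003    # annual probability of death at age 25, per the article
DOUBLING_TIME = 8.0   # years for the mortality rate to double

def annual_death_risk(age):
    """Gompertz law: the annual mortality rate doubles every 8 years."""
    return BASE_RATE * 2 ** ((age - BASE_AGE) / DOUBLING_TIME)

def survival_from_25(age):
    """Probability of reaching `age`, compounding each year's survival.

    Because the hazard grows exponentially, this product falls
    super-exponentially, as the article notes.
    """
    p = 1.0
    for a in range(BASE_AGE, age):
        p *= 1.0 - annual_death_risk(a)
    return p

for age in (33, 42, 65, 100):
    print(f"age {age:3d}: annual risk ~1 in {1 / annual_death_risk(age):,.0f}, "
          f"chance of getting there from 25: {survival_from_25(age):.1%}")
```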
There are many good reasons for legalising drugs. Some of the better arguments are found in government reports:
A report yesterday from the UK Drug Policy Commission, an independent body, came to the stark conclusion that trying to reduce supply by enforcement of the law has little or no impact. The drugs market, the report says, is “large, resilient and quick to adapt”. Arrest a dealer, and another appears. But our attempts to slay this modern Hydra lead to more problems: turf wars and violence. The report concludes that the police should target their resources more effectively: ignore the low-level dealers and concentrate on the harm caused by the market.
But the report does not go far enough. The harm caused by the drugs market derives both from its existence and its illegality. We have tried and failed to do something about the former; it is time to tackle the latter. Drugs must be legal and licensed, like alcohol. Not decriminalised, the fudge policy of the defeated, but legalised.
Those who want to take drugs should be able to walk into a shop and order a pill from the blue jar, or a powder from the red, like a chemist crossed with a sweetie shop. Free, clean needles should be available; two free with every bag of heroin!
The market must come out of the darkness. It must be made a headache for the Revenue bit of Her Majesty’s Revenue & Customs, leaving the Customs bit to deal with the odd bootlegged stash. The economics of legalisation suggest that consumption would rise initially. Prohibition makes drugs expensive; the risk of illegal production and distribution is built into the price. Cheaper drugs are likely to lead to higher use; there is a correlation between street price and which drugs land users in hospital emergency rooms.
But price matters to those already embroiled in drugs. Would it make a difference to the unconverted? Would anyone not already immersed in the culture seriously choose an evening’s entertainment based on the relative price of smack versus a pint?
The present system does even less for addicts than for collateral victims. They are thrown on to methadone and abandoned. Licensed, regulated and taxed, there could be a hypothecated revenue from drugs to pay for rehabilitation and life creation. Life as an ex-junkie can be even worse than life as a junkie.
The Gateway theory, which suggests that teenagers start on cheap cider and fags, graduate on to Ecstasy by way of pot, and end up injecting smack into their eyeballs, has been debunked. The Advisory Council on the Misuse of Drugs found no convincing evidence of a causal chain, linking cider in a bus stop with smack in a burger bar toilet. Any teenager can stop at any point in the chain. Whether or not drugs are legal is irrelevant. But the illegality of drugs means that every teenager who smokes pot is exposed to a dealer who has a vested interest in making him an addict.
There is already a horrifyingly casual attitude to recreational drug use in Britain. I am one of the only people I know who has never popped a pill or used cocaine. Legalisation could not make drugs much more ubiquitous than they already are.
All drugs are dangerous. As I walk my daughter to her nursery in the morning, we see the alkies and the junkies split in tribal groupings. The alkies are as toothless and grubby as the junkies, although more cheerful at that time of the morning — the first few cans have taken the edge off. My greatest fear is that my daughter will choose to join their zombie-ish ranks; the legal status of drugs makes no difference at all as to how likely that is.
The main difference between the tribes is that the alkies surrender their souls to Special Brew sitting on the grass. The junkies hide in the bushes. Legalisation could draw them out. But this would be an effective deterrent. The likes of Pete Doherty have a kind of doomed glamour in a photograph. Close up and shooting up, there’s no glamour at all; only revulsion and pity.
By legalising, we would have a fighting chance of wresting the market from the hands of the drug barons: the ones who ruin lives and distort global politics and are untouched by our laughable efforts to police them. They are the only winners in the current futile war.
Says the Mail, the Food Standards Agency wants to "persuade" chocolate manufacturers to reduce the size of their bars "by up to a fifth". At the moment, they’re talking the language of voluntary agreement, but their long-term aim is presumably some form of legislation.
(...) Chocolate bars (and cans of fizzy drinks) are the size they are for good reasons. They are the optimum compromise between the manufacturer’s desire to make the largest possible profit and the consumer’s desire to have a moderately filling snack. If they are legislated smaller, or perhaps made smaller because of a voluntary agreement, then they would no longer fulfil their function. Many people would respond by buying more, rendering the whole scheme counterproductive. In any case, the notion that some quango should be setting more or less arbitrary targets for what people should consume would be scary were it not so absurd.
The alleged obesity "epidemic" is largely nonsense anyway, and not just because fatness is not a contagious disease. As reputable scientific studies show, there’s almost no link between being "overweight" - as defined by the notoriously arbitrary Body Mass Index - and health problems. If anything, technically overweight people actually live longer than those whose svelte physiques meet with government approval. (As waistlines expand, after all, so does life expectancy.) Of course, there’s such a thing as being morbidly obese. We all know what that looks like. Morbidly obese people are susceptible to diabetes and heart attacks, and probably get less sex, but they are and will remain in the minority.
(...) It is not the great mass of moderately overweight or even borderline obese people who concern the government calculators, it’s the fact that, with the rise in average weight, there will be more people who are morbidly obese. That’s why they’re set on a course of bullying the whole population into losing weight, even when there’s no health benefit for the great majority.
Officially, this is just a "consultation". FSA’s Gill Fine denied that her quango was telling people what to eat. "We want to make it easier for people to make healthier choices — to choose foods with reduced saturated fat and sugar — or smaller portion sizes." I suppose she means that if people have the option of a large bar of chocolate or a small one, they are likely to choose the big one. Taking away that option will therefore make it easier to "choose" the smaller bar.
The most exciting and underreported news of the past few weeks in Iran has been that the emerging challenger to the increasingly frantic and isolated "Supreme Leader" Ayatollah Ali Khamenei is former President Ali Akbar Hashemi Rafsanjani. And Rafsanjani has recently made a visit to the city of Najaf in Iraq to confer with Ayatollah Ali Husaini Sistani, a long-standing opponent of the Khamenei doctrines, as well as meeting in the city of Qum with Jawad al-Shahristani, who is Sistani’s representative in Iran. It is this dialectic between Iraqi and Iranian Shiites that underlies the flabbergasting statement issued from Qum last weekend to the effect that the Ahmadinejad government has no claim to be the representative of the Iranian people.
One of the apparent paradoxes involved in visiting Iran is this: If you want to find deep-rooted opposition to the clerical autocracy, you must make a trip to the holy cities of Mashad and Qum. It is in places like this, consecrated to the various imams of Shiite mythology, that the most stubborn and vivid criticism is often to be heard—as well as the sort of criticism that the ruling mullahs find it hardest to deal with.
So it is very hard to overstate the significance of the statement made last Saturday by the Association of Teachers and Researchers of Qum, a much-respected source of religious rulings, which has in effect come right out with it and said that the recent farcical and prearranged plebiscite in the country was just that: a sham event. (In this, the clerics of Qum are a lot more clear-eyed than many American "experts" on Iranian public opinion, who were busy until recently writing about Mahmoud Ahmadinejad as the rough-hewn man of the people.)
It’s not too much to read two things into the association’s statement. The first is that public discontent with the outrages of the last few weeks must be extremely deep and extremely widespread. Differences among the clerisy are usually solved in much more discreet ways. If the Shiite scholars of Qum are willing to go public and call the Ahmadinejad regime an impostor, they must be impressed with the intensity of feeling at the grass roots. The second induction follows from the first: It is not an exaggeration to say that the Islamic republic in its present form is now undergoing a serious crisis of legitimacy.
An excellent article by Abbas Milani in the current issue of the New Republic gives a historical and ideological backdrop to the discrepant forces within Shiism and in particular to the long disagreement between those who think that the clergy must rule on behalf of the people (the ultra-reactionary notion of the velayat-e faqih, ...) and those who do not. Among the more surprising members of the anti-Khomeini opposition is the late ayatollah’s grandson Sayeed Khomeini, a relatively junior cleric in Qum (...). And among the best-known of those who think it is profane for the clergy to degrade and compromise themselves with political power is Grand Ayatollah Sistani, spiritual leader of neighboring Iraq. (To emphasize the cross-fertilization a bit further, bear in mind that Sistani is in fact an Iranian, while Ayatollah Khomeini did much of his brooding on a future religious despotism while in exile in Iraq.)
Which brings me to a question that I think deserves to be asked: Did the overthrow of the Saddam Hussein regime, and the subsequent holding of competitive elections in which many rival Iraqi Shiite parties took part, have any germinal influence on the astonishing events in Iran? Certainly when I interviewed Sayeed Khomeini in Qum some years ago, where he spoke openly about "the liberation of Iraq," he seemed to hope and believe that the example would spread. One swallow does not make a summer. But consider this: Many Iranians go as religious pilgrims to the holy sites of Najaf and Kerbala in southern Iraq. They have seen the way in which national and local elections have been held, more or less fairly and openly, with different Iraqi Shiite parties having to bid for votes (and with those parties aligned with Iran’s regime doing less and less well). They have seen an often turbulent Iraqi Parliament holding genuine debates that are reported with reasonable fairness in the Iraqi media. Meanwhile, an Iranian mullah caste that classifies its own people as children who are mere wards of the state puts on a "let’s pretend" election and even then tries to fix the outcome. Iranians by no means like to take their tune from Arabs—perhaps least of all from Iraqis—but watching something like the real thing next door may well have increased the appetite for the genuine article in Iran itself.
There are, no doubt, other determining factors as well. Contrary to the simplistic distinction between the "liberal urban" and the "conservative rural" that is made by so many glib commentators, Iran is a country whose formerly rural population is undergoing very rapid urbanization, and all good Marxists ought to know that historically this has always been a moment pregnant with revolutionary discontent. In Saddam’s Iraq, the possession of a satellite dish was punishable by death; everybody knows that the mullahs in Iran cannot enforce their own ban on informal media and unofficial transmission. And yet, precisely because they are so dense and so fanatical, they doom themselves to keep on trying. Every Iranian I know is now convinced that if this is not the end for the Khamenei system, it is at least the harbinger of the beginning of the end.