Labeling carbon like calories: can food labels change consumer choice?

The world emitted 32.5 gigatons of carbon dioxide, or 32.5 billion metric tons, in 2017. Globally, agriculture accounts for 24% of greenhouse gas emissions, with livestock accounting for about 14.5% to 18%. This makes animal production alone more polluting than the entire global transportation industry. And though agriculture isn’t always the most popular topic in policy conversations around climate change, the data make a compelling argument to change the way we consume animal products.

Pie chart showing emissions by sector. 25% is from electricity and heat production; 14% from transport; 6% from residential and commercial buildings; 21% from industry; 24% from agriculture, forestry and other land use; 10% from other energy uses.

Source: EPA

According to Drawdown, the self-proclaimed “most comprehensive plan […] to reverse global warming,” a shift to plant-rich diets is the fourth most impactful solution (out of 80), with the potential to reduce atmospheric CO2 by 66.11 gigatons.

If 50 percent of the world’s population restricts their diet to a healthy 2,500 calories per day and reduces meat consumption overall, we estimate at least 26.7 gigatons of emissions could be avoided from dietary change alone. If avoided deforestation from land use change is included, an additional 39.3 gigatons of emissions could be avoided, making healthy, plant-rich diets one of the most impactful solutions at a total of 66 gigatons reduced.
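The arithmetic behind Drawdown’s headline figure can be checked in a couple of lines:

```python
# Drawdown's plant-rich diet estimate, as quoted above (gigatons of CO2-eq avoided)
dietary_change = 26.7          # from dietary change alone
avoided_deforestation = 39.3   # from avoided deforestation via land-use change

total = dietary_change + avoided_deforestation
print(f"Total avoided emissions: {total:.1f} gigatons")  # 66.0 gigatons
```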

Despite the well-documented research on the benefits of a plant-rich diet, global meat demand has been unaffected: consumption in all animal categories has increased steadily since the 1960s, and it doesn’t appear to be slowing down.

Source: Our World in Data

A recent paper authored by Joseph Poore of Oxford University and published in Science presents alarming new findings on food emissions and revisits pragmatic solutions. Poore’s findings corroborate existing research showing that animal products, particularly red meat, contribute substantially more to greenhouse gas emissions than any plant-based food. Average emissions for 100g of protein from beef are 50 kilograms of CO2 equivalent (kgCO2eq). For peas, that number is 0.4 kg, or 0.8% of beef’s footprint. Beef emissions are unequivocally higher, even when factoring in the high quantities of “food miles” that many plants bear. Food miles indicate the distance, say, an avocado from Mexico has to travel to the café in Notting Hill where you enjoy your avocado toast on a Sunday morning.
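To put Poore’s numbers in perspective, a quick sketch of the beef-to-peas comparison:

```python
# Emissions per 100 g of protein, using the figures from Poore's paper quoted above
beef_kg_co2eq = 50.0
peas_kg_co2eq = 0.4

print(f"Peas emit {peas_kg_co2eq / beef_kg_co2eq:.1%} of beef's footprint")  # 0.8%
print(f"Beef emits {beef_kg_co2eq / peas_kg_co2eq:.0f}x more than peas")     # 125x
```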

Poore revisits a powerful solution: give more power to the consumer. Food consumption is a uniquely personal choice involving individual preferences and dietary requirements, making any outright restriction on high-carbon foods undesirable. Poore instead advocates for labeling a food’s carbon impact, a measure that aims to reduce the overall demand for animal products. Reducing demand would, in theory, reduce production, as the free-market, profit-maximizing model dictates. In practice, carbon labeling would look like an additional piece of information required on nutrition labels. Carbon impact, like calorie content, would be mandatory.

On unpackaged food, carbon may be labeled on grocery store tags or on the glass panes that shelter the meat counter. Enforcing a policy such as this not only democratizes information about food impact, but also allows customers to make conscious choices about the foods they purchase.

Providing more information to the consumer nearly always sounds like a good idea. But there are very real costs attached to a benefit such as this. Labeling all food products requires impact studies and manufacturing changes. Simply updating a food label costs businesses, on average, $6,000 per SKU, a significant cost for firms that produce hundreds of food items. We can imagine that labeling carbon would cost much more, as there are no current metrics to update; the data would have to be created for the first time. This information would be gleaned from impact studies, research that involves tracing each ingredient to its origin and calculating its carbon impact throughout the supply chain, an activity that is sure to be far more costly than traditional nutrition labeling, where information can be tested and obtained in a lab. Supply chains are rarely transparent or easy to track, and tracking them will cost companies substantial amounts of money in compliance. There is also the matter of verification. Should companies be charged with labeling the carbon impact of each product, it would be easy, and almost expected, that some of those numbers would be inaccurate and, therefore, counterproductive. To execute this successfully, there must be verification agencies in place, auditing for environmental impacts, not just financial ones.
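A rough sense of how that compliance burden scales, using the $6,000-per-SKU figure above (the product counts below are purely illustrative assumptions):

```python
COST_PER_SKU = 6_000  # average cost of a label update, per the figure cited above

# Hypothetical firm sizes, chosen only to illustrate how the burden scales
firms = {
    "small family farm": 12,
    "mid-size food brand": 300,
    "national manufacturer": 5_000,
}

for name, sku_count in firms.items():
    print(f"{name}: {sku_count} SKUs -> ${COST_PER_SKU * sku_count:,}")
```

The same flat per-item cost that is a rounding error for the manufacturer is an existential expense for the farm, which is the dynamic the next paragraph turns to.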

To many companies, $6,000 or more per item is pocket change. For others, like emerging start-ups in the food industry or small family farms, it’s the end of the company. That said, food giants are constantly updating their labels to market their product’s “new look, same great taste!” Carbon labels give these companies a new excuse to rebrand. The requirement does, however, put extraordinary pressure on small players in the industry like family grocers, many of whom provide the healthiest, least polluting items. This dynamic indicates the need for government subsidies to help finance a project of this size.

Government subsidies may cause public outcry, particularly given the intense budget negotiations and lobbying power in Washington. In 2017, the United States government issued $16,185,786,300 in farm subsidies, over $7 billion of which was allocated to commodities. Over $5 billion was allocated to corn subsidies alone in 2017, a crop whose primary use is, yes, to feed livestock.

Source: World of Corn

If we were to allocate a fraction of these subsidies away from crops that we artificially overproduce, we could provide substantial funding for these impact studies that may assist in tangibly relieving the environmental impact of carbon in the food system. This money would not be difficult to find—the low-hanging inefficiency fruit in the budget office is bountiful and ripe. The Economist reports that “between 2007 and 2011 Uncle Sam paid some $3m in subsidies to 2,300 farms where no crop of any sort was grown. Between 2008 and 2012, $10.6m was paid to farmers who had been dead for over a year.”
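The subsidy figures above suggest how little reallocation would be needed. A sketch, treating the “over $5 billion” corn figure as a lower bound:

```python
total_subsidies = 16_185_786_300  # 2017 US farm subsidies, per the figure above
corn_subsidies = 5_000_000_000    # "over $5 billion" to corn: a lower bound

share = corn_subsidies / total_subsidies
print(f"Corn alone received at least {share:.0%} of all farm subsidies")  # ~31%

# Even a 1% reallocation would free a substantial research budget
print(f"1% of total subsidies: ${total_subsidies * 0.01:,.0f}")
```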

By offering initial subsidies to domestic companies, we could numb the pain of a potentially jarring regulation, offsetting the initial start-up costs associated with new research and new labels. Once this research and methodology improves and becomes standardized, government subsidies could be phased out, and costs to individual companies would normalize.

Do we really need to put a carbon label on kale, though? Can’t we trust the common consumer to be educated enough to distinguish between environmentally impactful foods and benign ones? Not really, and it’s not because consumers are inept. Adrian Williams, an agricultural researcher commissioned by the British government to study the carbon footprint of different foods, addressed this succinctly in a 2008 New Yorker essay.

“Everyone always wants to make ethical choices about the food they eat and the things they buy… And they should. It’s just that what seems obvious often is not. And we need to make sure people understand that before they make decisions on how they ought to live.”

Perhaps the most glaring hurdle to implementing these labels is educating consumers enough that they understand them. In 2007, Tesco, the largest supermarket chain in Britain, pledged to put carbon labels on all 70,000 of its products. Four years into the project, the grocer abandoned the initiative “because the message [was] too complicated” after labeling only 500 products. Though the move to change business strategy was multi-faceted, Tesco’s decision ultimately came down to two elements: no one else followed suit, and consumers didn’t know how to read the labels.

Developed with the Carbon Trust, most of these labels appeared as black-and-white footprints with the corresponding grams of CO2 emitted printed in the middle. And while the labels offered a point of comparison in the lower corner, most consumers simply overlooked it. When I was living in London for a few months, I saw these labels frequently, had no idea what they were, and didn’t bother to find out, either.

These labels must not only be designed better; we must also educate the public about what they mean. That means more media coverage, more government campaigns, and more exposure to the labels at a young age. Demand, and therefore the carbon impact of the food system, will not change if consumers don’t know what’s going on.

As with all new ideas, these suggestions are bound to bring warranted debate and discussion, but the debates alone should not discourage us from enacting such policies. As with any action, there are trade-offs. An investment in labeling carbon is a plausible first step towards a new version of economic growth, one that considers environmental health in addition to financial health.

A common argument against carbon labeling is the question of where in the supply chain tracking begins. And doesn’t it all get too complicated? We can get wrapped up in whether the gasoline the farmer used to buy the seed that planted the corn should be accounted for in the model. But those nuances, while crucial in the execution, miss the point. Carbon labeling gives us the benefit of comparison between products. It doesn’t matter where in the supply chain the tracking starts, as long as it’s standardized and a true reflection of reality. The most crucial element of all is that the average man going grocery shopping on his way home from work can easily see that a pound of ground beef produces a lot more carbon than a pound of turkey. It’s then up to the consumer how far he wants to exercise his carbon freedoms. Maybe he’ll become a vegetarian, but maybe he’ll just choose to eat turkey tonight. That may be where we are as a society right now. And that’s okay.

Because there will be no strong economic advantage to choosing a food item lower on the carbon impact scale, it’s necessary to note that this measure alone will not change market conditions enough to reduce emissions. It is, however, a powerful way for early adopters to advocate through their purchases, and an effective way to spread information about the impact of individual choices on the environment.

On a practical level, this is a policy for consumer education. On a philosophical level, this is a policy to get people closer to the goods they consume, exposing, label by label, what’s really going on in the supply chain.

To be clear, labeling carbon will not curb emissions enough to meet the IPCC’s goal of limiting warming to 1.5 degrees C. True and meaningful action requires putting a real price on carbon, one reflective of its value, and integrating the environment into our economic fabric. This policy must be part of an ecosystem of changing action, thought, and discussion. It forces consumers to literally look at their choices in black and white, showing them the environmental costs of a lifestyle. Carbon labeling is not going to save the ice caps, but it may be our best chance at bringing consumers closer to the goods they consume.


Rate of change: how to utilize a changing workforce

There are about 6.6 million job openings in the United States right now, according to the Bureau of Labor Statistics. Despite these opportunities, however, an increasing number of individuals are moving towards the “gig economy.” By 2020, 42 million workers are anticipated to be self-employed, and the latest estimates project that more than 36% of the workforce already works freelance. This trend towards an increasingly independent workforce, one not tied to any employer’s constraints or benefits, presents promising opportunities in the industries that need the most innovation.

With the healthcare industry constantly under scrutiny for excessive costs, mismanagement, and poor patient outcomes, this new trend in labor preference may prove to be a promising opportunity for providers to cut costs. The United States currently faces a dramatic shortage of healthcare workers from the home care level to the operating room, and the country is on track to face a shortage of between 40,000 and 104,000 physicians by 2030.

By connecting the gig economy to the healthcare industry, the result appears to be win-win. Workers have more flexibility, can negotiate their own contracts, and can select the opportunities most appealing to them. Employers, like hospitals, can dramatically reduce costs and respond more nimbly to varying demand. In an industry that is the poster child for egregious costs, treating health aides and doctors like Uber drivers starts to look appealing for the bottom line, particularly when labor accounts for 60% of spending.

Before we rush to allocate freelance workers into the healthcare (or any other) industry, we must consider the broader implications of incentivizing such volatile jobs. The irony of suggesting that freelancers, members of the gig economy, enter the healthcare workforce is that one central tenet of working project-to-project, operation-to-operation, is that employers do not offer healthcare benefits to these temporary workers. As Reuters points out, freelancers’ income is constantly in flux, making coverage options ever-changing as well. This makes finding health insurance a particularly perplexing problem. “If you are a freelancer facing the pure retail cost of healthcare, then it is horrifying,” notes Kathy Hempstead, senior advisor at the Robert Wood Johnson Foundation.

This dilemma affects more than just freelancers, however. Without a critical mass of individuals insured through traditional insurance plans, we may face a new problem: not having enough enrolled individuals to pool risk, rendering our health insurance system obsolete. While programs like the Affordable Care Act have attempted to address this growing problem, legislation is too slow, and resistance too strong, to keep pace with its rapid growth.

While we should be finding ways to innovate the labor market as we innovate industry, we must also be futurists, considering not merely the short-term benefits of our actions, but also long-term implications. The main problem with this employment shift may be, like so many others, not the evolution itself, but the rate at which it is occurring.

Reality check: the lack of issue literacy among voters

It’s Tuesday, November 6th. Tonight marks the conclusion of election day, the day when Americans vote to elect or re-elect representatives in the House and the Senate and, perhaps more importantly, voice the direction in which they think the country should move. Americans are opinionated, and they’re sharing those opinions. 36 million people voted early in this election, indicating turnout some are comparing to a presidential election.

So why are Americans turning up to vote more than expected? They have something to say.

According to the Pew Research Center, immigration is a top priority for both Republicans and Democrats and the most important issue on the ballot today, a potential reason voters are showing up in staggering numbers at the polls. As a caravan of migrants moving from Central America towards the United States increasingly became an election strategy over the past two weeks, the topic has emerged as a “closing statement” on the campaign trail, making the issue immediately relevant. And voters have strong opinions about it. Basic immigration literacy, however, as with so many other policy issues, is largely disconnected from reality.

Americans have voiced that immigration policy is a top priority, even a reason they are showing up in record numbers at the polls, but data show that many fail to grasp basic facts about the subject.

More than 42% of individuals polled incorrectly believed that fewer than half of US immigrants are here legally. Current estimates suggest that around 75% of immigrants are in the United States legally. Perhaps even more alarming is the fact that beliefs about basic immigration facts, supposedly grounded in data rather than opinion, are so starkly split along party lines.

The above graph, reporting public opinion about crime rates, shows a wide spread, distinctively along party lines, on an issue that is theoretically nonpartisan. While 42% of red voters believe that undocumented immigrants are more likely than US citizens to commit serious crimes, only 12% of blue voters hold the same opinion, a 30-percentage-point spread.

While the divide between the right and the left is endlessly reported and increasingly evident in everyday interactions, this problem manifests itself in crucial policy discussions. Without knowledge of rudimentary facts, facts integral to voters’ most crucial issues, the public cannot engage in the meaningful and nuanced conversations required to debate immigration policy.

I would argue that the emphasis placed on immigration in the polls is both warranted and necessary. These policies are part of an elite group that impacts almost every facet of the American story: GDP, employment, inequality, and social trajectory. But we cannot possibly discuss the true issues behind the policy, issues of economic value creation, job displacement, and widening inequality, if we do not first address the fact that Americans don’t understand the issues we claim to care the most about.

We should fear—and fight—those who speak (and vote) without adequate information, regardless of what colors they wear.

The prisoner’s dilemma: how conflicting incentives make healthcare worse

A healthcare provider, an insurance payer, and a patient all walk into a bar. You already know how well this is going to go.

National Health Expenditure, the amount the United States spends on healthcare each year, was $3.3 trillion in 2016. That’s $10,348 per capita, or 17.9% of GDP. The OECD average in the same year was $4,003 per capita, or 9% of GDP.

The data show that we spend disproportionately more on healthcare services than our fellow OECD members; that’s clear. Spending money is not inherently bad. But in exchange for these expenditures, more than $6,000 more per capita, we must expect superior results. We must expect our citizens to have less chronic disease, higher life expectancy, and lower infant mortality rates, yet we do not see that realized. The Wall Street Journal published the following infographic in July, illustrating the failure of the American healthcare system in comparison to our fellow economically developed nations.
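The per-capita gap in those figures is easy to verify:

```python
# 2016 per-capita health spending, from the figures quoted above (USD)
us_per_capita = 10_348
oecd_avg_per_capita = 4_003

gap = us_per_capita - oecd_avg_per_capita
print(f"US spends ${gap:,} more per person than the OECD average")      # $6,345
print(f"That is {us_per_capita / oecd_avg_per_capita:.1f}x the average")  # 2.6x
```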


This story, the story of why this is happening and how it can change, is excessively complex, and conversations on every chapter of it deserve to be had and heard. I’m going to discuss one element, the one I believe sheds the most light on what is actually going on in our system: the payer-provider relationship that makes the patient, the payer, and the provider all worse off.

There are three main players in this game, the three walking into a bar—provider, payer, patient. What does each player want?

The patient wants to get treated, treated well, and treated well at a good price.

The payer (also known as the insurance company) wants to provide care at the cheapest level of care possible that will meet the patient’s minimum requirements.

The provider wants to treat the patient at the highest price while maintaining exclusive relationships with the payers, since the insurance companies are ultimately the ones paying the bills.

The payer-provider relationship is, to say the least, a complex one. Let’s look at it through the lens of Mr. Edward Winchell. Mr. Winchell is a 65-year-old California resident who was admitted to Mercy San Juan’s hospital facility for a right hip fracture in 2014; he had fallen down. During his time at the hospital, Mr. Winchell began suffering from symptoms of C. difficile, an infection that inflames the colon and can cause severe colon damage or even death. According to the public record complaint filed on Mr. Winchell’s behalf, Mercy San Juan attempted to discharge and relocate him to a skilled nursing facility once his Medicare coverage was due to expire; however, the hospital “consciously or reckless[ly] chose to omit the fact that Mr. Winchell had C. difficile from his records,” knowing that such a condition would make him an undesirable candidate for another facility. He was deemed too expensive to care for. “Mr. Winchell was unsafely discharged” to a skilled nursing facility with no knowledge of his presenting symptoms. This is a phenomenon called “patient dumping,” where providers essentially pawn off patients who are too expensive, or for whom their insurers won’t pay, to lower, cheaper care facilities. The end of the story? Mr. Winchell’s colon was removed, and he will be forced to use a colostomy bag for the remainder of his life.

This story is both devastating and disheartening, but let’s look at it from a business perspective. The hospital, Mercy San Juan, had an expensive patient. The insurance provider was nearing the end of its contractual agreement to pay and was refusing to pay any more. The hospital could not deliver care at a cheaper rate, but a skilled nursing facility could. Therefore, the logical option for the hospital was to transfer the patient to that cheaper facility. This cheaper facility also ends up delivering a lower level of care, precisely the opposite of what Mr. Winchell needed to recover and be discharged quickly and safely.

Health care is, though we often forget it, an industry, one with a vibrant economy. Each firm competes against the others in an attempt to claim profits, just as firms in the automobile or CPG industries do. The only difference is the extreme amount of influence that the suppliers, in this case the insurance companies, command over the firms.

So what are the consequences of this payer-provider relationship, of this patient dumping, of this subprime care?

To examine these consequences, hospital readmission rates are a useful tool. Theoretically, if a patient receives a poor level of care before being discharged, they will have to return to the hospital to receive the care they originally needed, and that return shows up in readmission rates. Though the data are severely disjointed, one study published in the AAP Journal examining this rate among children with complex chronic conditions (CCCs) reveals that, among children with 1 or more CCC, 19% had at least 1 readmission within 30 days of discharge. In patients taking 8 or more medications, that number was 29%.

In 2011, The New Yorker ran a story following Dr. Jeffrey Brenner as he played around with data in a New Jersey town. Brenner “found that between January of 2002 and June of 2008 some nine hundred people in […] two buildings accounted for more than four thousand hospital visits and about two hundred million dollars in health-care bills. One patient had three hundred and twenty-four admissions in five years. The most expensive patient cost insurers $3.5 million.”

Another widespread analysis found that 14.4% of 12.5 million discharged patients were readmitted. These readmissions resulted in annual costs of $50.7 billion.

Of these hospital readmissions, 26.9% are considered potentially preventable.
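Combining the figures above gives a rough estimate of the avoidable spend, under the simplifying assumption that preventable readmissions cost the same as average ones:

```python
# Readmission figures quoted above
discharges = 12_500_000
readmission_rate = 0.144
annual_readmission_cost = 50.7e9  # USD
preventable_share = 0.269

readmissions = discharges * readmission_rate
avoidable = preventable_share * annual_readmission_cost  # assumes average cost

print(f"Readmissions: {readmissions:,.0f}")                            # 1,800,000
print(f"Potentially avoidable spend: ${avoidable / 1e9:.1f} billion")  # $13.6 billion
```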

These hospital readmissions are discouraging and costly, but they also represent an impossible relationship between provider and payer. Due to pressure from insurance companies, hospitals are financially pushed to discharge patients such as Mr. Winchell, often prematurely, in order to cut costs for the payer. Insurance companies, you’ll recall, want the patient discharged as quickly as possible to keep costs down. However, when these patients are forced to return to the hospital to receive adequate levels of care, as the 26.9% are, hospitals are hit with large fines (3% of total Medicare payments in 2015). This puts providers in the ultimate Catch-22: do they discharge the patient and risk readmission penalties, or keep the patient longer despite the payer’s refusal to pay, which eats into the hospital’s bottom line and destroys relationships with insurance companies?

Herein lies the prisoner’s dilemma: with incomplete information, neither payer nor provider knows how to best treat a patient at the lowest cost. As a result, the dominant strategy for the payer will always be the lowest cost option that produces the lowest level of care, an option that will indeed result in increased costs for the provider as the patient is readmitted yet again.
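The dominant-strategy claim can be illustrated with a toy payoff matrix. All numbers below are invented for illustration; entries are costs, so lower is better for that player:

```python
# Toy payoff matrix for the payer-provider game described above.
# Keys: (payer strategy, provider strategy); values: (payer cost, provider cost).
# All numbers are hypothetical, chosen only to mirror the dynamic in the text.
payoffs = {
    ("full",    "thorough"):        (8, 2),  # good care; payer bears most cost
    ("full",    "quick-discharge"): (6, 1),
    ("minimal", "thorough"):        (3, 7),  # provider eats the unreimbursed care
    ("minimal", "quick-discharge"): (5, 6),  # readmission penalties hit both
}

# Whatever the provider does, the payer's own cost is lower under "minimal",
# which is what makes "minimal" the payer's dominant strategy here.
for provider_move in ("thorough", "quick-discharge"):
    full_cost = payoffs[("full", provider_move)][0]
    minimal_cost = payoffs[("minimal", provider_move)][0]
    assert minimal_cost < full_cost
    print(f"Provider plays {provider_move}: payer prefers minimal "
          f"({minimal_cost} vs {full_cost})")
```

Note that the joint outcome under ("minimal", "quick-discharge") is worse for both players than ("full", "thorough"), which is exactly the prisoner's-dilemma structure the essay describes.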

Both readmission rates and the penalties slapped on those readmissions suggest that poor care inevitably costs everyone more in the long run: more pressure and time for the provider, more money for the payer, more grief for the patient. More than that, these numbers tell a powerful story about the careless costs within the American healthcare system that make the payer, the provider, and the patient all worse off. Could we shave $50.7 billion off the total National Health Expenditure simply by improving this relationship?

This is not even to speak of the effects that incentivizing proper nutrition and exercise, addressing the pharmaceutical market, and reforming regulations on price of care could have on this cost. If we analyze this situation from purely a financial standpoint, completely ignoring every humanitarian, moral, and ethical argument, our healthcare system is inefficient at best and damning at worst.

Healthcare spending affects more than just the chronically sick; it affects you, the taxpayer, whose dollars directly fund our national budget.

As data from CBPR shows, Medicare and Medicaid represent roughly 26% of our national budget, and that 26% sits within “mandatory” spending, the part of the budget that politicians claim we can’t touch.

I would argue that we can touch mandatory spending. We can shrink it by lowering hospital readmission rates, by raising the level of care, and by changing policy to encourage collaboration between provider and payer rather than pitting them against each other.

A healthcare provider, an insurance payer, and a patient all walk into a bar. Let’s not let them get into a bar fight.

A positive feedback loop: how interest rates drive poverty

In 2015 it was Greece. In 2018 it has been Argentina, Venezuela, and Turkey, to name a few. Across the globe, countries unable to execute the delicate ballet of monetary policy have seen staggering inflation, interest rates to match, and devalued currencies. Perhaps most urgent, however, is these countries’ inability to pay back their loans. Moreover, 40% of low-income developing countries (which are not always sprawled across headlines) are either in a debt crisis or close to one.

Let’s look at Turkey, for example. Despite raised interest rates (now at 24%), inflation is at 18% and the country doesn’t appear solvent enough to repay the foreign money dumped into the economy over the past few years. GDP is expected to contract in Q3. Just yesterday, Turkey’s Treasury borrowed around $347 million at the interest rate of 25.05%. These numbers scream impending doom. As interest payments come due, Turkey will likely have to refinance the loans or, more probably, borrow more money, this time at an even higher interest rate. This process will increase the deficit again, drive interest rates higher, and propel the positive feedback loop yet again.
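The refinancing loop described above can be sketched numerically. The starting figures echo the ones in this section ($347 million at roughly 25%); the two-point risk premium added at each rollover is purely an illustrative assumption:

```python
# A toy sketch of the sovereign debt spiral described above.
# Starting figures echo the article ($347M at 25.05%); the 2-point
# risk premium lenders demand at each refinancing is an assumption.

principal = 347.0        # millions of USD borrowed
rate = 0.2505            # annual interest rate on the first loan
risk_premium = 0.02      # assumed bump in rate at each rollover

for year in range(1, 6):
    interest_due = principal * rate
    # The government cannot pay from revenue, so it borrows the interest
    # too, rolling the whole balance into a new, more expensive loan.
    principal += interest_due
    rate += risk_premium
    print(f"Year {year}: owes ${principal:,.0f}M at {rate:.2%}")
```

Under these assumed terms, the balance roughly triples in five years, from $347M to over $1.2 billion, while the rate climbs past 35%. Each rollover makes the next one harder, which is exactly what makes the feedback loop positive.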

It’s easy to sit behind the news headlines, shaking our heads at the recklessness of Treasury officials in Washington or Ankara or Buenos Aires or Athens. The reality is, however, that this insidious cycle happens not merely within governments, but also within a much quieter space—the world’s poorest.

A financial epiphany hit the world in the 1970s: microfinance. As inflation skyrocketed in the US, nonprofits began to understand and prove that the “poor are creditworthy.” This realization opened the door to a now widely used practice of granting loans to individuals who wouldn’t ordinarily have access to capital, such as women in sub-Saharan Africa, so they can start small businesses. This reinvention of the financial system promised to become a powerful tool in alleviating worldwide poverty.

But people, like governments, have trouble with debt.

Source: CGAP

While the lending interest rate in the United States was about 8% in 2006, the average microfinance interest rate was about 35%, as shown in the figure above. For individuals in Uzbekistan, that number was 85%. It’s intuitive, practical, and even necessary for lenders to be compensated for risky investments through higher interest rates; this reality, however, leaves individuals trying to start a small business, much like Greece in 2015, broke, desperate, and unable even to attempt repayment. For microloan recipients in India, the reality was bleaker than desperation. “More than 80 people [took] their own lives in the last few months after defaulting on micro-loans,” the BBC reported in December 2010. These are devastating outcomes for individuals who fall prey to interest rates that are wildly unsustainable.

Let’s suppose that a microfinance organization agrees to lend $100 to a woman in Malawi to make and sell resilient water jugs in her community. Completely ignoring start-up costs, she will have to generate a 37% return ($137) in the first year just to pay off the loan and keep a 2% ($2) profit. When she cannot pay off her loan at the end of the year, she’s forced to borrow more money at a higher interest rate in order to pay off the first loan. Of course, it’s unlikely that she’ll be able to pay off the second loan either. This is an infeasible system.
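Worked through in code, the numbers above ($100 at 35% interest, a $2 profit target) give the 37% figure directly; the five-point rate increase on each subsequent loan is my assumption, for illustration only:

```python
# The microloan arithmetic from the example above, made explicit.
# The $100 principal, 35% rate, and $2 profit target come from the text;
# the +5-point rate bump at each refinancing is an illustrative assumption.

loan = 100.0
rate = 0.35
target_profit = 2.0

repayment = loan * (1 + rate)  # $135 due at year's end
required_return = (repayment + target_profit - loan) / loan
print(f"Return needed just to clear ${target_profit:.0f}: {required_return:.0%}")  # 37%

# If she cannot pay, each year a new, more expensive loan retires the old one:
debt = loan
for year in range(1, 4):
    debt = debt * (1 + rate)   # new loan covers the full repayment
    rate += 0.05               # assumed risk premium added per rollover
    print(f"Year {year}: owes ${debt:.2f} at {rate:.0%}")
```

Under these assumed terms, the $100 loan becomes a debt of $274 within three years: the personal-scale version of the sovereign spiral.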

When interest rates are highest among individuals with the least amount of power to pay them back, these citizens turn into a personification of Greece or Argentina or Venezuela, desperately looking around, pleading for someone to help pay their debts. There is no IMF for individuals.

Invisible influence—why traditional supply and demand aren’t the only forces driving commodity prices

If you’re like me, and every other average American, you’re happy to reach for your 1.96 cups of coffee each morning. You’re probably even happier that market prices for commodity coffee have been in sharp decline since 2015 and that, if you’re reading this from the US, the average price of a cup of coffee is $2.70. Life is good for the caffeine addict and the general consumer alike: commodity prices are down. In theory, this means abundant supply and stable demand have combined to produce low prices and happy consumers.

As a zealous consumer of coffee, I find it simple to examine the economy from my perspective as a consumer, the perspective where lower prices equate to a “better” economy. That perspective, however, is not only narrow-minded but also ignorant. A full economic picture must examine each step of the supply chain, from green-coffee grower to distributor. Why? Because each player takes a cut of that $2.70 you pay at Starbucks each morning before your 8am lecture, morning meeting, or daily workout.

The free-market economy suggests that prices are determined by two factors—supply and demand.

Chart showing commodity coffee prices, including the steady decline of the late 1990s and early 2000s.

These economic rules may seem fairly pedestrian, but they are visible in the graph above, which shows the steady decline in coffee prices in the late 90s and early 2000s. General consensus credits this decline in the commodity price of coffee to a new entrant on the coffee-producing scene: Vietnam (“The Global Coffee Trade,” Stanford Graduate School of Business). In contrast, 2014 brought abnormally high coffee prices after a prolonged drought in Brazil left too little supply to satiate the appetites of worldwide caffeine consumers. With coffee demand growing relatively uniformly worldwide, market price fluctuations appear most strongly correlated with supply disruptions.

Supply volatility, however, is not the singular issue.

Here’s the reality—coffee farmers cannot live on their negotiated wages, are subject to the whims of climate (even more so in the past decade), and lack the infrastructure needed to process and roast their coffee beans, leaving them with little bargaining power. While processors like Nestle and P&G certainly take advantage of their roasting infrastructure and supply chain, there is a deeper truth about commodity pricing—consumers demand superior products at low prices. How can free markets support and satisfy all members of the economy, producer to consumer, while operating under these parameters?

An attempt at a short answer to a long question about coffee economics, supply, and demand is this: because consumers, coffee enthusiasts like you and me, want lower prices, companies will fight on our behalf to get them. That is their role as competitive businesses. When we think about supply and demand, we cannot merely consider how much we demand, but what exactly we are demanding. We (I, certainly) demand high-quality, superior-tasting, convenient, caffeine-rich coffee at a good price every time I step into the coffee shop. Those are intangible properties I associate with my tangible good. If I’m not willing to pay for those intangibles at the coffee shop, a coffee grower in Brazil will be paying for them instead.