Hippo Reads is proud to be working with Oxford University Press to highlight unmissable excerpts from recent and upcoming titles. Today’s post is reprinted from From Field to Fork: Food Ethics for Everyone by Paul B. Thompson with permission from Oxford University Press, USA, © 2015, Paul B. Thompson.

Dory lives on five acres near a major metropolitan area. She derives most of her income from substitute teaching in several nearby school districts. She likes that work because it allows her to taper off her teaching in the springtime so that she can farm her land. She sells fruits and vegetables to local chefs and to the public at farmers’ markets throughout the summer. Dory is especially known for her strawberries, grown without any synthetic pesticides or chemical fertilizers. She has a neighbor who also grows organic strawberries, so occasionally they team up by pooling their berries. One or the other of them will take a turn selling them at the downtown farmers’ market. Whether it’s Dory or her neighbor Pat behind the market stand, people seek out these berries both for their wonderful flavor and because they like to buy from people they know.

Is Dory doing anything unethical? Many people will be surprised by such a question because the description just given hardly suggests any basis for suspecting unethical behavior. But by selling her neighbor’s strawberries Dory is violating the rules of many urban farmers’ markets. Although these rules are far from universal, many urban markets limit farmers to selling only the things they grow themselves. Such rules were put in place to give farmers an economic opportunity, but also because people who shop in farmers’ markets want to know that they are buying food directly from the person who grew it. Although it might seem unreasonable to apply such a rule so strictly in a case like Dory’s, a very similar type of horse-trading among local farmers led to a scandal in 2011, when people in Oregon were sickened by E. coli-contaminated strawberries. The source of the contamination was eventually traced to deer that had been on the farm of a single grower who was supplying numerous roadside stands as well as farmers selling in farmers’ markets. The scandal was less the result of the contamination itself than of the difficulty authorities had in tracing the contaminated berries through the chain of trades being made by farmers whose customers thought that they were buying direct from the field.

Consider Walker, a student who purchases the meal plan at a college located in an urban area. Walker is a health-conscious vegetarian who eschews most of what is available at lunch and dinner in the campus dining hall, but his meal plan includes a budget for items that can be purchased anytime at the campus snack bar. Walker is not much for snacks, but he can also use this part of his plan to buy a few nonperishable food items: candy, chips, processed meat sticks, and packaged bakery items. One of his similarly health-conscious friends has started a campaign for like-minded students to spend their snack budget on these nonperishable goods and then donate them to the local food bank. The director of the food bank says he would love to have them. His clients like candy and chips and especially those peppered sticks of jerky and sausage! But Walker is not so sure. He is all in favor of lending a helping hand to people who are short on food—and after all, he’s already paid for the plan, whether he spends the money or not. Yet how can it be ethical to give needy people food that he is not willing to eat himself? I learned about Walker’s quandary by speaking with students at the university where I work, but questions very much like the one he is asking are faced by the managers of local food banks and charitable assistance programs everywhere.
Similar questions apply to public policies such as the Supplemental Nutrition Assistance Program (SNAP), or what many still call “food stamps.” No one thinks that a diet consisting entirely of chips, candy, and soft drinks is healthy. Not even the manufacturers of these foods would suggest that. Still, snack foods consumed in moderation can be part of a healthy diet. Denying access to them seems like telling the clients of a food bank that they cannot be trusted to make their own choices simply because they are poor or have fallen on hard times. It looks rather like a paternalistic form of disrespect. Yet as the old adage has it, “Beggars can’t be choosers.” Don’t people who contribute to food assistance programs have every right to insist that the churches, government agencies, and charitable organizations who run them shape the program according to the donors’ values?

And finally, take Camille, a local legislator from a part of the country that is heavily dependent on pork production for employment and tax revenue. Camille has just met one of her constituents, a pig farmer demanding that she support a new piece of legislation. It seems that one of his neighbors hired a college kid to work on his swine farm over the summer, but the kid turned out to be an animal-rights activist. The kid smuggled in some high-priced vodka to drink with a few of the farm’s regular employees and then cajoled them into play-acting some scenes inspired by the horrible Abu Ghraib photographs of tortured Iraqi prisoners—only this time with pigs playing the role of the torture victims. The neighbor was furious when he found out. He fired the college kid and docked the pay of his regulars, warning them never to let something like that happen again. But now the video that the kid took of these fake abuse scenes has gone viral on YouTube. The news stations are starting to pick it up and are playing the story as if this is what happens all the time on area pig farms!
Camille’s constituent wants her to support a law that would make distribution and reproduction of photographs or video recordings obtained without the farmer’s permission a crime. Camille is not so sure. They call these “ag-gag” laws, and versions of them have been passed by state legislatures throughout the Midwestern farm belt of the United States. Although it’s easy to sympathize with the plight of her constituent’s neighbor (assuming he is telling the truth), such photographs and films are viewed as political speech by the animal protection organizations that circulate them. Camille suspects that her politically conservative farming constituents would not be very sympathetic to government interference in their own speech. And how can she (or anyone) be sure this neighbor was telling the truth when he accused the college kid of filming a set-up? Maybe those pigs were actually being abused. At the same time, she’s troubled because—having campaigned on plenty of pig farms herself—she’s satisfied that even if this kind of abuse happens from time to time it is rare. But the whole industry suffers when pictures like this are made public, and where is the justice in that?

Dory, Walker, and Camille are struggling with tough questions in food ethics, but many issues in food ethics are not tough at all. In 2009, Chinese officials revealed a conspiracy in which infant formula had been deliberately adulterated through the substitution of melamine, an ingredient in industrial glues and plastics, for milk powder. At least three infants died as a result, and some estimates indicated that 300,000 were sickened and may experience long-term health consequences. The perpetrators of the conspiracy are believed to have gained millions of dollars, but at an intolerable cost in human misery and loss of life. Unlike the questions being posed for Dory, Walker, or Camille, there is no mystery here about what should be done.
Yet as much as we might like to think that the matter of what we eat or how it is produced and distributed will always be simple and clear-cut, the preparation and consumption of the foods we eat every day are replete with opportunities for ambiguity, confusion, and disagreement. Some of the most enduring and deep disagreements occur when one person thinks the ethical choices are easy and unambiguous, but the next person is not so sure.

The Rise of Food Ethics

Dory, Walker, and Camille are philosophical thought experiments—stories cooked up to give us insight into an ethical problem—rather than real people. Their situations are typical of problems that will be discussed throughout this book. To many people, food ethics means making better dietary choices. Choices could be better in terms of health or they could have better environmental and social consequences for others. Food choices become ethical when they intersect with complex economic supply chains in ways that cause better or worse outcomes for other people, for nonhuman animals, or for the environment. It is worth reminding ourselves that this is a relatively new idea. Enthusiasm for farmers’ markets; humanely produced animal products; and fairly traded coffee, tea, and cocoa has grown markedly over the last decade. Over the same time period, we have also gained greater recognition of links between diet and the alarming growth in diabetes, heart disease, and other degenerative conditions. Thus food ethics might include not only making better choices yourself but also designing menus, public policies, or even cities to encourage better food choices by everyone. The examples of Dory, Walker, and Camille illustrate further problems in food ethics that do not even involve dietary choice in any simple or straightforward way. The growing number of ways that food becomes embroiled in ethical quandaries coincides with key industrial and commercial developments in the production and distribution of food.
As food historians have demonstrated, the early decades of the twentieth century saw the emergence of food manufacturing firms and chain grocery stores. During this period, many factors conspired to create a food system in which consumers were quite ignorant of where their food came from and hence could not make choices on ethical grounds. On the one hand, urban populations simply lacked a kind of personal experience with food production that had been virtually ubiquitous a century earlier. On the other hand, technological changes in rail transport and food processing were creating longer supply chains and smoothing out seasonal variation in food availability. Branded products arose in response to consumer demands for some reasonable certainty as to the quality of processed foods, and with branding came food advertising. Home economists promoted the use of canned and packaged food as “progressive,” and as more women entered the workforce, it became necessary to economize on the time invested in procurement and preparation of meals at home. By the 1960s, these trends were being augmented by rapid growth in meals eaten outside the home. These developments in marketing and distribution were occurring as a two-centuries-long process of transformation was being completed in the agricultural sector of industrialized economies. The years after World War II were especially significant for a rapid growth in the use of chemical methods for controlling plant diseases and insect pests in crop production, and the creation of intensive concentrated animal-feeding operations (CAFOs) or, as their critics characterized them, “factory farms.” The combination of consumer ignorance and complex technological change began to be associated with a series of high-profile problems, beginning with food adulteration during the early decades of the twentieth century and continuing with Rachel Carson’s exposé Silent Spring in 1962. 
The consumer backlash began to mount in the counterculture with increased attention to the health and environmental impacts of industrially produced and distributed food. Sometimes this backlash took the form of small farms and food co-ops that attempted to create an alternative food system, but the more typical response has been for economically successful farmers and well-entrenched food industry firms to develop and market products that appeal to “alternative” values. In the 1970s, foods were advertised as “natural,” but consumers rapidly turned to “organic” as a more meaningful alternative value. Soon “humane” and “fairly traded” labels began to be added to the alternative food lexicon. By the early years of the twenty-first century, consumers who had been made skeptical by large food industry firms’ abilities to exploit all these terms began to seek “local” foods as a way to eat more ethically.

This book engages these topics, but it does not tell you what to eat. Chapter 1 reviews the recent history of the rise of food ethics in more detail, emphasizing seminal ideas that emerged in the 1970s. One theme is the difference between food ethics focused on supply chains and their socio-environmental impact, on the one hand, and an ethics constructed wholly in terms of one’s own dietary choices, on the other. From there, the book takes a deeper dive into a few of the big themes in food ethics. Injustice in the food system is the focus of Chapter 2. I ask whether food issues tell us something new about social justice, or if they are simply case studies for more general philosophical ideas about justice. Chapter 3 follows the ethics of diet and health from the ancient world to our current obesity crisis. In Chapter 4 we come to what I call the “fundamental problem” in food ethics: the enduring tension between the interests of poor farmers in the developing world and the hungry masses of growing urban centers.
The case for vegetarianism is discussed briefly in Chapter 5, but the main focus is on the ethical difficulties in food animal production that we should be thinking about while we wait for people to become vegetarians. The environmental sustainability of the food system is the subject of Chapter 6, and Chapter 7 takes us back to the developing world to consider how Green Revolution-style development projects should be evaluated in ethical terms. The final chapter revisits themes of risk, personal diet, and the nature of ethical thinking itself by considering some questions in the debate over genetically engineered foods. These eight chapters provide a microcosm of the issues that might be included under the heading of “food ethics.” In each case I do more to complicate the ethical analysis of our contemporary food system than to support specific recommendations for policy change or personal choice. There are many more topics that could be added.

What Is Ethics?

I believe that ethics should be viewed as a discipline for asking better questions. Common speech equates “ethics” with “acting rightly,” and like many college professors who teach and write on ethics, I am frequently beset by someone who recounts an episode of outrageous behavior by some person or group. The tirade ends with the question, “So is that ethical?” It seems as if they want an expert to certify their opinion. When the behavior in question is truly outrageous, it is easy enough to agree, but philosophical ethics is more attuned toward developing the vocabulary and patterns of thinking that make for more perceptive and imaginative ethical reasoning than it is toward training someone to judge particular cases in a uniform manner. In this section and the next, I provide a brief introduction to the way that philosophers approach ethics. It is intended to ease readers who come to the book with a keen interest in food issues but little background in philosophical ethics.
These remarks provide a sketch and cover some standard terminology for lay readers, but take caution: in answering the question “What is ethics?” in this way, I do not pretend to be offering definitive accounts of the various schools or theories of ethics that are studied in philosophy departments.

We can start with a graphic that illustrates some key elements of a decision-making situation in very simple terms. Figure 1 represents the agent, a person or group that will undertake some action or engage in some kind of activity. One can think of oneself occupying this position, but the figure might also represent an organization, such as a business or political party, or it might even be taken to represent society as a whole. The activity to be undertaken may or may not be the result of a conscious or deliberately considered and calculated choice. The situation of a very generalized “decision maker” orients us to some key elements of human conduct that can be the target of ethical reflection or evaluation. After noticing these elements we can reflect equally on the actions that we undertake as individuals or on the combined activity of people acting in organizations, groups, and even random groupings (such as crowds). Human activity is always constrained in a number of different ways. Some things are simply impossible: they violate the laws of physics and chemistry. One cannot become invisible on a whim, and in the world of food, biophysical constraints define some of the possibilities for the production and distribution of what people will eat. In the graphic, such constraints are represented by the ring labeled technology. In referring to these biophysical constraints as technology, we acknowledge that although the laws of physics, chemistry, and biology limit what can be done rather robustly, the way that these inflexible laws are reflected in our material circumstances is not fixed once and for all.
The history of food and agricultural technology has dramatically changed the biophysical constraints that someone living in the nineteenth century would have faced. Some key issues in food ethics concern the way that we should invest research dollars in the quest to make even further changes in our food technology. However, the main focus of ethics is usually on the two “softer” types of constraint. First, there are things that are forbidden by law and policy. Some things are against the law, while others may violate a policy that one is required to follow as a condition of employment, for example. Second, there are biophysically possible courses of action that are forbidden by customs and norms. While not against the law, they are things that one knows full well not to do. In typical decision making, people have so thoroughly internalized both kinds of constraint that they simply do not even consider a course of action that would violate them. Eventually, people do something; the agents act. Perhaps they rather unreflectively order lunch at a restaurant, or perhaps they make a life-changing decision to quit their job and start an organic winery in Oregon. A company acts when it launches a new product or advertising program. Even disorganized groups can act. A riot would be one example, but we say that “the public” acts (or speaks) when a new product succeeds or fails to attract enough buyers to achieve market success. Whatever action or activity is performed or undertaken is referred to as conduct in the graphic. It might be cooking mashed potatoes or buying chips at the store, and it might be passing new laws on food safety or making a multi-million dollar investment in some speculative venture to produce test-tube meats. For firms, it might be manufacturing cheesecake or marketing a product through online sales. Any description of something that people or groups are doing could qualify as conduct, however broadly or narrowly it is constructed. 
We start to see that this very general depiction of the human situation has some relevance to ethics when we notice that all these forms of conduct have consequences. Here, consequences are changes or effects on the health, wealth, and well-being of any affected party, including the agent herself (or itself, as the case may be). From the perspective of clear ethical thinking, it is important to distinguish the conduct from its impacts or consequences. The sum total of all consequences to the health, wealth, and well-being of all affected parties is often referred to as an outcome. It would be possible to develop more detailed or nuanced terminology for describing ethically significant action or activity, but this very simple model will suffice for present purposes. Each element of an agent’s decision-making situation can have ethical significance, and the nature of this significance is often signaled by distinctive terminology. Ethically significant consequences are described as benefits, costs, or harms. If an agent does something that affects the health, wealth, or well-being of someone else positively, it can be characterized as a benefit. If it affects that person adversely, it is a harm. We commonly refer to adverse impacts as costs, but this is not how economists understand that word, so there are reasons to avoid it. Ethically significant constraints that take the form of law and policy, on the one hand, or customs and norms, on the other, are often described in terms of rights and duties. If someone has a right that I must respect, doing so constrains my potential range of action. Possible courses of action that would violate that person’s rights are considered out of bounds and I must not undertake them. If I have a duty (perhaps because I have made a contract or promise) my action is similarly constrained: I can consider only those possible activities that are consistent with fulfilling that duty. 
Generally speaking, rights and duties can be thought of as correlative: if I have a right, then others have a duty to respect it. Finally, there are certain types of conduct that are named directly with words that imply ethical significance. Lying, mendacity, and dishonesty are rough synonyms for one type of conduct; truthfulness, honesty, and sincerity name its opposite. Such words tie ethical significance directly to a given type of conduct without referring back to rights and duties or looking ahead to outcomes. They classify conduct in terms of virtue or vice. Philosophical ethics is an organized practice—a discipline. Its practitioners focus first and foremost on these three ways that action or activity can be characterized as ethically significant. They study the way that people formulate, specify, and discuss the rightness or wrongness of action and activity by characterizing it in terms of virtue or vice, noting the constraints that function as rights or duties, and attending carefully to its beneficial and harmful consequences. It is only a slight exaggeration to say that if a point of reference cannot be described in terms of relevance to virtues and vices, rights and duties, or benefits and harms, it is unlikely to have any ethical significance at all. Food ethics, then, is the study of how virtue, vice, rights, duties, benefits, and harms arise in connection with the way that we produce, process, distribute, and consume our food.

How Philosophers Approach Ethics

For the last two hundred years or so, academically trained philosophers have tended to organize themselves into several schools of thought on how actions and activity can be deemed ethically correct or, to say the same thing, ethically justified. They have developed theoretical accounts of what makes one action right and another wrong, and these accounts are referred to as ethical theory. Some of the most influential theories focus on only one of the elements described above.
Utilitarianism is a theory (actually a family of theories) holding that ultimately only consequences matter. Claims about rights and duties are, at the end of the day, reducible to benefits and harms. A duty, for example, might simply be a rule that, if followed carefully, will lead to the best consequences—the best available outcome in terms of total benefit and harm. Classical utilitarianism specified an ethically right action as the one that achieves “the greatest good” (i.e., the most net benefit) for “the greatest number” (i.e., the largest possible number of affected parties). This specification was modified by welfare economists who were willing to sanction any action that achieves a “potential Pareto improvement” (i.e., achieves more benefits than harms). Notice also that this simple description of utilitarianism does not say who or what counts as an affected party. Is it only one’s fellow citizens or is it all of humankind? What about nonhuman animals? Is it possible to benefit or harm an ecosystem or an endangered species? There are other details, too. Clearly, not every adverse event counts as a morally significant harm. If you beat me at Monopoly or Scrabble, it’s doubtful that I can claim to be harmed in any morally significant way. But what about the losers in an economic competition? Is losing your job because your employer went broke just an example of “the way things go” (like losing at Scrabble), or is it a morally significant form of harm? Here there are disagreements, and as we will see at several junctures, this disagreement matters in food ethics. For now, just note that spelling out the details of a full-fledged ethical theory becomes an exceedingly complex task. For present purposes it is the contrast between a utilitarian’s laser-beam attentiveness to consequences and an approach focused on rights and duties that is more significant to notice.
Such a theory, which we will simply call rights theory, must derive an account of the way that actions are constrained by the rights of others. Although (once again) this can get very complicated, it is worth pointing out two general strategies. One approach is called contractualism (or sometimes social contract theory). This approach assumes that rights are grounded in the promises that we make to one another. If I promise to meet you at the pub at five o’clock, you have a right to expect me to keep that promise, and I have a duty to do so. When we (or our representatives) make laws or set policies, we are, in effect, promising to act according to a system of rights and duties. More generally, perhaps we can think of our social interaction as a set of implicit promises, and we can develop our ethical theory by asking each other, “What are the promises that we would most hope to govern our interactions?” Historically, this question has often been tied to rationality: what is the social contract—the system of rights and duties—that a rational person would accept? The contractualist approach emphasizes the reasons why a given social bargain (i.e., a system or configuration of rights and duties) would be rationally acceptable, rather than the benefits and harms that such a configuration can be expected to produce. The alternative approach to rights suggests that we can derive a binding set of rights and duties by thinking hard and deeply about the nature of human freedom. The widespread practice of human slavery was overturned largely because people came to believe that it could not be reconciled with basic human rights. Of course a truly free person must not be a slave to passion, either. And passion is governed (or constrained) by a good or moral will. 
On this view we can obtain a sense of mastery over passions that destroy our freedom—a perspective on freedom indicated by the term autonomy—by recognizing (or in some sense giving ourselves) a set of constraints to guide our conduct. Immanuel Kant developed the most influential version of this approach to ethical theory, arguing that we can impose correct moral constraints upon ourselves by asking whether we would be willing to see a proposed constraint treated as a universal law—as a principle of duty binding on all persons at all times. An approach to rights theory that probes the meaning and achievement of human freedom is often described as Kantian (or neo-Kantian in deference to some deviations from Kant’s own view). Again, fully specifying such a theory brings us into further complexities. The point here is definitely not to convey an adequate basis for understanding neo-Kantian ethical theory but simply to indicate how someone pursuing this line of reasoning will ask rather different questions than someone who thinks that ethics can be satisfactorily theorized by calculating the net value of benefits and harms. And there are still more options. The type of philosophy that gives rise to debates over rights and duties on the one hand and consequences on the other fails to really capture all of the meaning that is sometimes packed into the claim that a particular type of conduct is virtuous or vicious. While talk of the virtues seems less able to drive ethical thinking to a specific prescription—that is, to a formulation of which action really is the right thing to do—it nonetheless does articulate the way that patterns or habits of conduct become morally significant, even when they are undertaken relatively unreflectively. Emphasizing virtue (or aretē, as Aristotle might have had it) may be especially useful when an overall pattern of behavior rather than a single instance of choice is morally significant.
Some advocates of virtue theory emphasize an individual’s disposition or character, not whether any given decision conforms to a rule. Alternatively, achieving virtue may depend upon living in an environment or culture that shapes our behavior in ways that we are barely aware of. Promoting virtue may have more to do with structuring human behavior and social interaction in ways that make it easier to be reflective about the things that really matter and that steer us onto autopilot when proper action can be reliably left to habit. So some philosophers have defended virtue theory as an alternative to relying on close inspection of consequences or rights that arise in connection with any single instance of action. There is less than total agreement among contemporary philosophers that choosing one of these tracks in ethical theory is required at all. One might be a pluralist who sees each way of thinking as achieving a kind of partial truth. Or one might follow Jürgen Habermas, who argues that we should be focused on the process of engaging these different types of ethical reasoning in a form of discourse or debate where discussants trade arguments in the spirit of reaching a kind of agreement on what is right for the case at hand (a view Habermas calls discourse ethics). Perhaps I should admit to being more fully persuaded by Habermas than by any contemporary advocates of utilitarianism, rights theory, or virtue theory. Suffice it to say that I will not pursue detailed development and application of any of these theoretical approaches in the following excursion through food ethics. Nevertheless, it will be helpful to readers to be attentive to the way that ethical arguments can function in these three somewhat different ways.
Sometimes a reason for doing one thing rather than another is based on the outcome of action, and sometimes it is grounded in the way that action should be constrained, either by social convention or by the nature of our desire for true freedom. Sometimes a reason for doing something or even for thinking harder about what one is doing out of habit appeals to a more nebulous but nevertheless palpable sense of virtue. All these types of reasoning make occasional appearances in food ethics, and the point of this brief summary is simply a heads-up to the reader unschooled in the ways of the philosophers.

A Note on My Method

My approach in this book aims to steer a path between the Scylla of always keeping your peas and mashed potatoes separate and the Charybdis of mushy thinking rationalized by whatever seems right at the moment. In other words, I deploy the philosopher’s penchant for clear and distinct ideas, but I deploy it in moderation. I treat ethical theories (such as rights theory, utilitarianism, or natural law) as argument forms that provide alternative (but sometimes also complementary) ways to frame a descriptive account of the situation that confronts us. In doing so, these accounts make claims upon the emotions, habits, and institutional structures that allow us to act as individuals, as informally coordinated groups, and as formally structured organizations. I take it that one job for ethics is to inspect and query the circumstances in which such claims arise. Although these claims are often made explicitly by people who wish to motivate action, I do not presuppose that the claims upon us have always been clear or explicitly articulated. I do suppose that in investigating ethical issues in the food system over the last thirty-five years, I have been engaged in inquiry.
By inquiry I mean a loosely structured activity that arises in reaction to some disturbance or disruption and that expects to conclude with an active response that resolves or at least responds to the distress or curiosity with which the inquiry began. I have been trying to think of these issues in the right way. However, in pledging my allegiance to getting it right I am not also promising to frame matters in terms of ethical or scientific theories. I am not intrinsically interested in portraying the issues as social constructions or functions of underlying biological drives, to note just two of the many ways that social or biophysical scientists treat food issues. There may be occasions in which either of these modalities is helpful to my inquiry, but I am not here to peddle theoretical constructs. In undertaking ethical inquiries, I am hoping to arrive at better and more correct answers than I started with. One influential tradition in philosophy has supposed that I must have some prior conception of what it means to get things right in order to do this, but my own view is that any conception of what it means to get things right would itself be the product of an inquiry. So I am inclined to think that in concluding this introduction it may be more important to say something about inquiry itself. John Dewey illustrated his basic conception of inquiry in the 1896 article “The Reflex Arc Concept in Psychology.” Sitting comfortably and engrossed in a book, he is startled by a noise. His first response is a bit scatterbrained as he reorients his attention to the disturbance that is making a claim on his attention. He forms a hypothesis: the wind has blown the window open. Next he forms a plan: get up and close the window. Finally, he actually gets up and closes the window, bringing the inquiry to a close through an action that simultaneously corroborates the hypothesis, executes the plan, and addresses the disturbance.
For Dewey, it is the whole sequence that illustrates inquiry, including the scatterbrained part and the physical activity of closing the window itself. The educational psychologist David Kolb offers some helpful terminology and a schematic of Dewey’s learning theory that identifies four distinct phases—five phases if we count the disturbance that sets the whole thing off (see figure 2). The scatterbrained search for a general orientation is divergence. As the dizzy and unfocused divergent stage begins to coalesce into a more structured and organized search for answers, the hypothesis-forming or assimilation phase begins to construct an explanation or model that accounts for the disturbance. Once this is in place, it is possible to formulate conditional if-then components that, in turn, suggest a plan, a process that Kolb calls convergence. Finally, it is worth noticing how executing the plan requires many elements that were probably not anticipated by the hypothesis. One has to actually get up out of the chair, which may require finding a place to put one’s book. Closing the window may require jiggling or bumping it. These supplements to the hypothesis involve an active and engaged kind of intelligence that Kolb calls accommodation. A learning cycle is completed when a person or group has moved through all phases of the inquiry process. A detailed discussion of Dewey or Kolb would take us far from food ethics, yet it may prove helpful to notice how this four-phased account of inquiry helps us orient ourselves to a number of tasks that are crucial for practical ethics. Each phase in the process of inquiry is associated with a distinct potential for error, for mistakes that will result in a failure of the overall process. In the divergent phase, it is important to keep the opportunities for brainstorming and bringing possible responses to the disturbance open in order to avoid over-commitment. 
An unfruitful investment of cognitive resources (e.g., time and energy) into a hypothesis that turns out to be unrelated to the disturbance or curiosity that the process of inquiry was initiated to address is one kind of error. It is, in fact, an all too frequent kind of error in today’s world. In assimilation, the goal of explanation or modeling takes over. Assimilation points us toward classic philosophical characterizations of truth and moral correctness. To be in error here is simply to have a model or explanation that does not accurately map or correspond to the situation. Twentieth-century moral philosophers became obsessed with modeling a universal standard for moral correctness, so much so that for many philosophers, “getting it right” came to be understood exclusively in terms of limiting assimilative errors. As one moves into convergence, a number of more practical considerations begin to be relevant. There may be many ways that one could undertake action given the working model that has been developed in the assimilative phase, but it would be a mistake to ignore the relative costs and risks that would be associated with any given possibility as one converges toward implementation. In the accommodation phase, it becomes important for somebody to actually get up and close the window. It is here that the classic divide between theory and practice emerges, for the proverbial “man of action”—quite possibly a woman, I hasten to add—may be the person or group who is most able to avoid the distracting tendency to revise the theory instead of finally doing something. The accommodator adjusts the plan so that the initial disturbance is materially addressed in its particulars. This discussion of errors suggests that a strictly sequential interpretation of these four phases of inquiry may be misleading. Indeed, Kolb himself has stressed the idea that different individuals may be characterized by a particular learning style. 
Each learning style is typified by different types of intelligence, capability, or skill. The accommodative learning style encompasses a set of competencies that Kolb calls acting skills: leadership, initiative, and action. The diverging learning style is associated with valuing skills: relationship, helping others, and sense-making. The assimilating learning style is related to thinking skills: information gathering, information analysis, and theory building. Finally, the converging learning style is associated with decision skills like quantitative analysis, use of technology, and goal setting. Kolb argues that in the university we see these different learning styles strongly associated with different academic departments. Academics with diverging learning styles tend to become English professors or teach in the arts. The natural sciences are dominated by assimilators, while engineers and others in technology fields tend toward a convergent learning style. Accommodators wind up in the business school. Without endorsing the stereotyping implied by such classification of individuals, it is nonetheless striking that a lot of work by academic philosophers tends to valorize criteria that reflect the assimilation phase of inquiry—an approach that Richard Rorty critiqued as seeking to become “the mirror of nature.” My approach in this book is pragmatic in holding (with Dewey) that it is the totality that should remain foremost in our thinking while undertaking a process of inquiry. There are many ways in which we can err, and there are numerous ways in which we can get it right. Kolb’s schematic is most relevant to our topics in that it helps us recall that criteria for right thinking and right action will depend on where we are in the process of inquiry. 
It suggests a picture of learning or inquiry that errs when any one of these learning styles comes to dominate our thinking, and in this it intersects nicely with recent trends in feminist and postcolonial epistemology. We are not likely to get things right when we systematically exclude people who have a particular perspective from the processes of deliberation and social decision making. There is a growing recognition among people from many walks of life that this kind of exclusion is not only unjust, it is spectacularly stupid in its tendency to discard or ignore what may turn out to be crucial pieces of information. This has resulted in a wave of philosophical and social science research that explores the process of inclusion. Here, too, there is an important intersection point with the approach I take in this book. Social action to address the issues discussed in the following chapters must clearly take cognizance of this work and must experiment with more inclusive modes for organizing and effecting responses. However, right action in food ethics will likely require a bit more than a general theory of inclusion or social process. Consistent with feminist epistemology and critical theory, my approach to food ethics does indeed emphasize inclusion and listening. Consistent with an interest in social justice and with at least some goals that have bound people into a food movement, the following chapters probe a series of food issues with an eye toward identifying the key points at which divergent social concerns meet. Consistent with the pragmatist orientation, the analysis seeks to identify the points at which divergent, assimilative, convergent, and accommodative learning styles intersect. Consistent with yet another line of recent scholarship in science studies, my discussion of food and food systems will focus on the objects, organizations, and activities that reside along the boundaries of these intersections.
But consistent with all of the above values, I largely ignore the theoretical apparatus in favor of plain talk, whenever I can do so. I hope that I uncover unnoticed and underappreciated sites where action might be focused, and that I identify some divergent themes that need to be recognized before we prematurely converge upon plans of action. Ironically perhaps, it is not necessary to call a great deal of attention to the theoretical and methodological themes of epistemology, pragmatism, and moral theory while doing so.