
Monday, September 30, 2019

Merck Case

Pharmaceuticals: Merck. Sustaining Long-term Advantage Through Information Technology. Hiroshi Amari. Working Paper No. 161, Working Paper Series, Center on Japanese Economy and Business, Columbia Business School, December 1998.

Columbia-Yale Project: Use of Software to Achieve Competitive Advantage. PHARMACEUTICALS: MERCK. Sustaining Long-term Advantage Through Information Technology. Prepared by Hiroshi Amari, Research Associate, Yale University. William V. Rapp and Hugh T. Patrick, Co-principal Project Investigators. Center for International and Area Studies, Yale University, New Haven, CT 06520. 203-432-9395 (Fax: 5963). e-mail: william. [email protected] edu. Revised December 1998.

Table of Contents
1. Introduction: Objective of this Study
2. The Pharmaceutical Industry in a Global Context
3. Product R&D and Clinical Trials
4. Manufacturing and Process R&D
5. Technological Factors; Structure-Based Drug ("Rational Drug") Design
6. Merck
7. Managerial Decision Making
8. Decision Making on IT Projects
9. Joint Ventures
10. Information Technology and Organization
11. Appendix I: Summary Answers to Questions for Merck: Strategy & Operations
12. Appendix II: Industry and Firm Business Data
13. Bibliography

Introduction: Objective of this Study

This case study of Merck was completed under a three-year research grant from the Sloan Foundation. The project's purpose is to examine in a series of case studies how U.S. and Japanese firms that are recognized leaders in using information technology to achieve long-term sustainable advantage have organized and managed this process. While each case is complete in itself, each is part of this larger study.

This pharmaceutical industry case, together with the other cases, supports an initial research hypothesis that leading software users in both the U.S. and Japan are very sophisticated in the ways they have integrated software into their management strategies and use it to institutionalize organizational strengths and capture tacit knowledge on an iterative basis. In Japan this strategy has involved heavy reliance on customized and semi-customized software (Rapp 1995) but is changing toward a more selective use of package software managed via customized systems. In turn, U.S. counterparts, such as Merck, who have often relied more on packaged software, are doing more customization, especially for systems needed to integrate software packages into something more closely linked with their business strategies, markets, and organizational structure. Thus, coming from different directions, there appears to be some convergence in approach by these leading software users. The cases thus confirm what some other analysts have hypothesized: a coherent business strategy is a necessary condition for a successful information technology strategy (Wold and Shriver 1993). These strategic links for Merck are presented in the following case.

Industries and firms examined are food retailing (Ito-Yokado and H.E. Butt), semiconductors (NEC and AMD), pharmaceuticals (Takeda and Merck), retail banking (Sanwa and Citibank), investment banking (Nomura and Credit Suisse First Boston), life insurance (Meiji and USAA), autos (Toyota), steel (mini-mills and integrated mills: Nippon Steel, Tokyo Steel and Nucor), and apparel retailing (Wal-Mart). The case writer and the research team wish to express their appreciation to the Alfred P. Sloan Foundation for making this work possible and to the Sloan industry centers for their invaluable assistance.
They especially appreciate the time and guidance given by the center for research on pharmaceuticals at MIT as well as Mr. Sato at Takeda.

(Footnotes: the cases referred to are those for which interviews have been completed. These and other summary results are presented in another Center on Japanese Economy and Business working paper: William V. Rapp, "Gaining and Sustaining Long-term Advantage Through Information Technology: The Emergence of Controlled Production," December 1998.)

Yet this case, along with the other cases, also illustrates that the implementation and design of each company's software and software strategy are unique to its competitive situation, industry and strategic objectives. These factors influence how they choose between packaged and customized software options for achieving specific goals and how they measure their success. Indeed, as part of their strategic integration, Merck and the other leading software users interviewed have linked their software strategies with their overall management goals through clear mission statements that explicitly note the importance of information technology to firm success. They have coupled this with active CIO (Chief Information Officer) and IT (information technology) support group participation in the firm's business and decision-making structure. Thus for firms like Merck the totally independent MIS (Management Information Systems) department is a thing of the past. This may be one reason why outsourcing for them has not been a real option, though their successful business performance is not based solely on software. Rather, as shall be described below, software is an integral element of their overall management strategy and plays a key role in serving corporate goals such as enhancing productivity, improving inventory management or strengthening customer relations. These systems thus must be coupled with an appropriate approach to manufacturing, R&D, and marketing, reflecting Merck's clear understanding of their business, their industry and their firm's competitive strengths within this context. This clear business vision has enabled them to select, develop and use the software they require for each business function and to integrate these into a total support system for their operations to achieve corporate objectives. Since this vision impacts other corporate decisions, they have good human resource and financial characteristics too (Appendices I & II).

Yet Merck does share some common themes with other leading software users, such as the creation of large proprietary interactive databases that promote automatic feedback between various stages and/or players in the production, delivery and consumption process. Their ability to use IT to reduce inventories and improve control of the production process is also common to other leading software users. They are also able, organizationally and competitively, to build beneficial feedback cycles or loops that increase productivity in areas as different as R&D, design and manufacturing, while reducing cycle times and defects or integrating production and delivery.
Improved cycle times reduce costs and increase the reliability of forecasts, since forecasts need to cover a shorter period. On-time delivery improves customer satisfaction and lowers inventories. Thus, software inputs are critical factors in Merck's and other leading users' overall business strategies, with strong positive competitive implications for doing it successfully and potentially negative implications for competitors.

An important consideration in this respect is the possible emergence of a new strategic manufacturing paradigm in which Merck is probably a leading participant. In the same way that mass production dramatically improved on craft production through the economies of large-scale plants that produced and used standardized parts, and lean production improved on mass production by making the production line more continuous, reducing inventories and tying production more closely to actual demand, what might be called "controlled" production seems to significantly improve productivity through monitoring, controlling and linking every aspect of producing and delivering a product or service, including after-sales service and repair. Such controlled production is only possible by actively using information technology and software systems to continuously provide the monitoring and control function for what had previously been a rather automatic system response to changes in expected or actual consumer demand. This may be why their skillful use of information technology is seen by themselves and industry analysts as important to their business success, but only when it is integrated with the business from both an operational and an organizational standpoint, reflecting their overall business strategy and clarity of competitive vision.

Therefore at Merck the software and systems development people are part of the decision-making structure, while the system itself is an integral part of organizing, delivering and supporting its drug pipeline from R&D through to sales post-FDA approval. This sequence is particularly critical in pharmaceuticals, where even after clinical trials there is a continuous need to monitor potential side effects. Therefore Seagate Technology may be correct for Merck too when they state in their 1997 Annual Report: "We are experiencing a new industrial revolution, one more powerful than any before it. In this emerging digital world of the Third Millennium, the new currency will be information. How we harness it will mean the difference between success and failure, between having competitive advantage and being an also-ran."

In Merck's case, as with the other leading software users examined, the key to using software successfully is to develop a mix of packaged and customized software that supports their business strategies and differentiates them from competitors. However, they have not tried to adapt their organizational structure to the software. Given this perspective, functional and market gains have justified the additional expense incurred through customization, including the related costs of integrating customized and packaged software into a single information system. They do this by assessing the possible business uses of software organizationally and operationally, and especially its role in enhancing their core competencies.
While they will use systems used by competitors if there is no business advantage to developing their own, they reject the view that information systems are generic products best developed by outside vendors who can achieve low cost through economies of scale and who can more easily afford to invest in the latest technologies.

In undertaking this and the other case studies, the project team sought to answer certain key questions while still recognizing firm, country and industry differences. These have been explained in the summary paper referenced above. We have set them forth in Appendix I, where Merck's profile is presented based on our interviews and other research. Readers who wish to assess for themselves the way Merck's strategies and approaches to using information technology address these issues may wish to review Appendix I prior to reading the case. For others it may be a useful summary.

Merck and the other cases have been developed using a common methodology that examines cross-national pairs of firms in key industries. In principle, each pair of case studies focuses on a Japanese and an American firm in an industry where software is a significant and successful input into competitive performance. The firms examined are ones recognized by the Sloan industry centers and by the industry as using software successfully. To develop the studies, we combined analysis of existing research results with questionnaires and direct interviews. Further, to relate these materials to previous work as well as the expertise located in each industry center, we held working meetings with each center and coupled new questionnaires with the materials used in the previous study to either update or obtain a questionnaire similar to the one used in the 1993-95 research (Rapp 1995). This method enabled us to relate each candidate and industry to earlier results. We also worked with the industry centers to develop a set of questions that specifically relate to a firm's business strategy and software's role within that. Some questions address issues that appear relatively general across industries, such as inventory control. Others, such as managing the drug pipeline, are more specific to a particular industry. The focus has been to establish the firm's perception of its industry and its competitive position, as well as its advantage in developing and using a software strategy.

The team also contacted customers, competitors, and industry analysts to determine whether competitive benefits or impacts perceived by the firm were recognized outside the organization. These sources provided additional data on measures of competitiveness as well as industry strategies and structure. The case studies are thus based on extensive interviews by the project team on software's use and integration into management strategies to improve competitiveness in specific industries, augmenting existing data on industry dynamics, firm organizational structure and management strategy collected from the Sloan industry centers. In addition, we gathered data from outside sources and firms or organizations with which we worked in the earlier project.
Finally, the U.S. and Japanese companies in each industry that were selected on the basis of being perceived as successfully using software in a key role in their competitive strategies in fact saw their use of software in this exact manner, and these competitive benefits were generally confirmed after further research. The questions are broken into the following categories: General Management and Corporate Strategy, Industry Related Issues, Competition, Country Related Issues, IT Strategy, IT Operations, Human Resources and Organization, Various Metrics such as Inventory Control, Cycle Times and Cost Reduction, and finally some Conclusions and Results. They cover a range of issues from direct use of software to achieve competitive advantage, to corporate strategy, to criteria for selecting software, to industry economics, to measures of success, to organizational integration, to beneficial loops, to training and institutional dynamics, and finally to interindustry comparisons.

The Pharmaceutical Industry in a Global Context

In the advanced countries that represent Merck's primary market, the pharmaceutical industry is an exceptionally research-intensive industry where many firms are large multinationals (MNCs). It is also heavily regulated for both local producers and MNCs. Regulations work as both constraints and performance boosters, since drugs are used with other medical and healthcare services. Therefore, healthcare expenditures are divided among many industries and providers, of which pharmaceuticals are only one. All parties involved are interested in influencing the regulatory environment and in participating in the growth in healthcare services. This means understanding the industry requires appreciating its political economic context. In this regard, healthcare providers in rich nations are currently under pressure to control costs due to aging populations. Regulators, who have the authority to change the demand structure through laws and regulations, are considering various measures to reduce costs, such as generic drug substitution, which may mean lower returns for discovering and developing drugs. Still, if drugs are more effective at reducing healthcare costs than other treatments, pharmaceutical companies can benefit. Since R&D is at the heart of competition, each drug company must respond to these cost containment pressures cautiously and strategically in competing for healthcare expenditures.

Another important aspect of this industry is technological change arising from the convergence of the life and biological sciences. Many disciplines now work together to uncover the mechanisms that lie behind our bodies and various diseases. Examples are molecular biology, cell biology, biophysics, genetics, evolutionary biology, and bioinformatics. As scientists see life from these new chemical and physical viewpoints, the ability to represent, process and organize the massive data based on these theories becomes critical. Because computers are very flexible scientific instruments (Rosenberg 1994), progress in information technology and computer science has broadened scientific frontiers for the life and biological sciences. These advances have opened new doors to attack more complex diseases, including some chronic diseases of old age. These therapeutic areas present opportunities for pharmaceutical companies since they address demographic and technical changes in advanced countries. Still, to take advantage of these opportunities requires information technology capabilities.
Historically, the drug industry has been relatively stable, with the big players remaining unchanged for years. This has been due to various entry barriers such as R&D costs, advertising expense, and strong expertise in managing clinical trials. It is difficult and expensive for a new company to acquire this combination of skills quickly. However, there are signs the industry and the required mix of skills may be changing. There have been several cross-national mergers, especially between U.S. and European companies. In addition, new biotechnology companies are very good at basic research, which may force pharmaceutical R&D to transform itself. For example, no single company, even among the new mega-companies, is large enough to cover all new areas of expertise and therapeutic initiatives. Thus, many competitors have had to form strategic alliances to learn or access new technologies and to capture new markets. Conversely, a stand-alone company can have a lot to lose. The challenge facing large pharmaceutical companies is how fast and how effectively they can move to foster both technological innovation and cost containment without exposing themselves to too much risk.

The pharmaceutical industry in all of Merck's major markets reflects these cost containment pressures, the need to harmonize expensive and time-consuming clinical trials, and the impact of extensive regulations. Information technology has had its impacts too. For example, to respond to these challenges Merck is using more management techniques based on consensus decision making among top functional managers. This requires better communication support using e-mail and groupware combined with face-to-face communication. This is part of an industry trend towards greater parallel decision making in R&D and less sequential decision making, where A must first concur on a project before moving to B, etc. Now all elements of the firm evaluate the project simultaneously at each stage. In this manner, Merck has significantly reduced coordination costs while centralizing and speeding the overall decision-making process.

Additionally, first-tier firms have had to follow a trend in R&D strategies that increasingly use information technologies. Exchange of data and ideas across national borders has become relatively easy, and contracts may specify access to another company's database. Because many companies share similar R&D instruments and methods, one company's instruments may be compatible with other companies'. Indeed, the trend towards greater use of Web-based technology in R&D and other operations may change our notion of a firm and its boundaries. Firms may eventually be characterized by knowledge-creating capabilities (Nonaka and Takeuchi 1995). Having more ways to communicate with other companies makes frequent communication with greater nuance possible. This supports the trend towards more strategic alliances, unless overtaken by the creation of larger firms through continued mergers. This is also partially due to the nature of the industry, which is part of the fine chemical industry, where changes in technologies are rapid and often discontinuous. It therefore requires different management skills from other technology-based industries, especially as the knowledge required for innovation tends to be more specialized, thus demanding less coordination than assembly industries. Transferring mass production know-how to R&D is also limited. Still, the U.S.
and European industries have been undergoing massive reorganization to achieve economies of scope and scale in R&D and marketing, where firms are taking advantage of the fact that the U.S. industry is much less regulated than most foreign industries (Bogner and Thomas 1996). The U.S. companies grew after World War II due to a huge home market combined with the global market for antibiotics; this was before British firms began to recapture market share. At that time, European firms did not have the resources to sell drugs directly to U.S. doctors. The European recovery period gave U.S. firms enough time to take advantage of antibiotics. Then, when the U.S. market became saturated, U.S. firms expanded into global markets in the early 1960s. This forced U.S. firms to diversify their R&D as well. At the same time, the 1962 amendments to the Food, Drug and Cosmetic Act increased the rigor of drug regulation, creating an entry barrier to industry R&D that favored large established firms (Bogner and Thomas 1996). The U.S. effectively tightened its regulations after its industry had acquired sufficient R&D skills and resources. This timing seems to account for today's industry success.

Another factor is that, unlike the European industry, U.S. firms had few incentives to integrate vertically. During the War the military distributed antibiotics. Therefore, U.S. firms were generally bulk chemical producers, such as Merck and Pfizer, or sellers of branded drugs, such as Abbott and Upjohn. At the end of the War, only a few firms such as Squibb were fully integrated. However, as promotion and other downstream functions became more critical, controlling functions such as distribution became a strategic objective. To accomplish this they acquired other firms (Merck acquired Sharp & Dohme and Pfizer acquired Roerig), developing expansion via merger and acquisition as a business strategy and core competency. This helped lay the foundation for subsequent industry consolidation.

Today, American healthcare is based on the belief that while making progress in science is the best way to solve medical problems, cost containment is also important. As a result, while American healthcare is the most expensive in the world, it is also not available to everyone and is the most subject to cost scrutiny. Indeed, since drugs are just one way to improve health, consumers should want to remain healthy and choose cost-effective means to do this. However, the reality is that insurance systems covering different services give incentives and disincentives for particular care (Schweitzer 1997). Thus, coordinated adjustment of prices for healthcare is necessary to get markets for healthcare products to work better. In the U.S., this has led to a public policy push for HMOs. These healthcare purchasers have in turn set the reward schemes available to healthcare providers such as pharmaceutical companies so as to reduce transaction costs (Ikegami and Campbell 1996) and promote innovation. These developments and trends are putting more pressure on major firms to put more resources into R&D, to focus more critically on just ethical drug development for the global market, and to be more careful in gathering information on clinical trials and side effects. The most important market for Merck in this regard is the U.S., where the NIH has pursued a unified approach. This is because the NIH (National Institutes of Health) has actively supported basic life science research in U.S. universities, especially after World War II.
The NSF (National Science Foundation) also encouraged collaboration between academia and industry, with partial funding by the government. Other federal and state funding has been important to the scientific community as well, especially in biotechnology. In biotechnology, the funding of basic research has led to a complex pattern of university-industry interaction that includes gene patenting and the immediate publishing of results (Rabinow 1996). U.S. drug companies are of course profit motivated but are regulated by the FDA (Food and Drug Administration), which is rigorous about its drug approvals, demanding clear scientific evidence in clinical research, as its operation is basically science oriented.

Product R&D and Clinical Trials

Still, despite this R&D support, industry economics are driven by pharmaceutical R&D's very lengthy process of discovering, developing and bringing to market new ethical drugs, with the latter heavily determined by the drug approval process in major markets such as the U.S., Europe and Japan. (Ethical drugs are biological and medicinal chemicals advertised and promoted primarily to the medical, pharmacy, and allied professions; they include products available only by prescription as well as some over-the-counter drugs (Pharmaceutical Manufacturers Association 1970-1991).)

These new therapeutic ethical products fall into four broad categories (U.S. Congress, OTA 1993): one, new chemical entities (NCEs), or new therapeutic entities (NTEs): new therapeutic molecular compounds never before used or tested in humans; two, drug delivery mechanisms: new approaches to delivering therapeutic agents at the desired dose to the desired part of the body; three, next-stage products: new combinations, formulations, dosing forms, or dosing strengths of existing compounds that must be tested in humans before market introduction; four, generic products: copies of drugs not protected by patents or other exclusive marketing rights.

From the viewpoint of major pharmaceutical firms such as Merck, NCEs are the most important for the R&D of innovative drugs that drives industry success. Since this is a risky and very expensive process, understanding a company's R&D and drug approval process is critical to understanding the firm's strategy and competitiveness both domestically and globally. Statistics indicate that only about 1 in 60,000 compounds synthesized by laboratories can be regarded as "highly successful" (U.S. Congress, OTA 1993). Thus, it is very important to stop the R&D process whenever one recognizes that success is not likely. Chemists and biologists used to decide which drugs to pursue, but R&D is now more systematic and is a collective company decision, since it can involve expenditures of $250 to $350 million prior to market launch; thus the need for more parallel decision making. Key factors in the decision-making process are expected costs and returns, the behavior of competitors, liability concerns, and possible future government policy changes (Schweitzer 1997). Therefore, stage reviews during drug R&D are common, and past experiences in development, manufacturing, regulatory approvals, and marketing can provide ample guidance. NCEs are discovered either through screening existing compounds or designing new molecules. Once synthesized, they go through a rigorous testing process. Their pharmacological activity, therapeutic promise, and toxicity are tested using isolated cell cultures and animals as well as computer models.
The compound is then modified into related compounds to optimize its pharmacological activity with fewer undesirable biological properties (U.S. Congress, OTA 1993). Once preclinical studies are completed and the NCE has been proven safe in animals, the drug sponsor applies for Investigational New Drug (IND) status. If it receives approval, it starts Phase I clinical trials to establish the tolerance of healthy human subjects at different doses and to study pharmacological effects on humans at anticipated dosage levels. It also studies the drug's absorption, distribution, metabolism, and excretion patterns. This stage requires careful supervision, since one does not know if the drug is safe in humans. During Phase II clinical trials a relatively small number of patients participate in controlled trials of the compound's potential usefulness and short-term risks. Phase III trials gather precise information on the drug's effectiveness for specific indications and determine whether it produces a broader range of adverse effects than those exhibited in the smaller Phase I and II trials. Phase III trials can involve several hundred to several thousand subjects and are extremely expensive. Stage reviews occur before and during each phase, and drug development may be terminated at any point in the pipeline if the risk of failure and the added cost needed to prove effectiveness outweigh the weighted probability of success.

There is a data and safety monitoring board in the U.S. This group has access to "unblinded data" throughout the conduct of a trial but does not let anyone else know what the data show until it is necessary. For example, they will not divulge the efficacy data until the trial reaches a point where it seems appropriate to recommend stopping it because the null hypothesis of no efficacy has effectively been accepted or rejected. The FDA will usually insist on the drug proving efficacy with respect to ameliorating a disease before giving approval. If clinical trials are successful, the sponsor seeks FDA marketing approval by submitting a New Drug Application (NDA). If approved, the drug can be marketed immediately, though the FDA often requires some amendments before marketing can proceed (Schweitzer 1997). However, successful drug development and sales require not only approval of therapeutic value and validity but also that the manufacturing process meet stringent "best-practice" standards. To meet U.S. regulations, Phase IV trials are required: manufacturers selling drugs must notify the FDA periodically about the performance of their products. This surveillance is designed to detect uncommon, yet serious, adverse reactions typically not revealed during premarket testing. This post-approval process is especially important when Phase III trials were completed under smaller fast-track reviews. These additional studies usually include use by children or by those using multiple drugs, where potential interactions can be important (Schweitzer 1997).

Furthermore, because drug development costs are so high relative to production costs, patent protection is another key aspect of a company's management strategy. Under U.S. law, one must apply for a patent within one year of developing an NCE or the innovation enters the public domain. Therefore, patenting usually occurs early in the development cycle, prior to filing the NCE. But as this begins the patent life, shortening the approval period extends a drug's effective revenue life under patent.
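The stage-review logic and patent-timing trade-off just described can be made concrete with a small sketch. All numbers and function names below are hypothetical illustrations, not Merck figures; the point is only that development continues while the probability-weighted payoff exceeds the cost still to be incurred, and that each year cut from the approval period adds a year of patent-protected revenue.

```python
# Hypothetical illustration of the stage-gate rule and patent-timing
# trade-off described above; all figures are invented, not Merck data.

def continue_development(p_success: float, payoff: float, remaining_cost: float) -> bool:
    """Continue only if the probability-weighted payoff exceeds the
    added cost still needed to prove effectiveness."""
    return p_success * payoff - remaining_cost > 0

def effective_patent_life(patent_term: float, years_to_approval: float) -> float:
    """Years of patent-protected revenue left after approval,
    assuming the patent clock starts at filing."""
    return max(0.0, patent_term - years_to_approval)

# Stage review: 30% chance of approval, $900M expected payoff,
# $350M of trials still to fund, so the expected value is negative.
print(continue_development(0.30, 900, 350))   # False: terminate

# A one-year-faster approval adds a full year of protected revenue.
print(effective_patent_life(20, 12))          # 8.0 years
print(effective_patent_life(20, 11))          # 9.0 years
```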
This makes managing clinical trials and the approval process an important strategic variable. Although creating a drug pipeline through various stages of development is relatively standardized, it is changing as companies use different methods to reduce the time and related costs of new drug development. Companies are constantly pressuring the authorities to reduce NDA review times. As a consequence, the FDA did introduce an accelerated approval process for new drugs in oncology, HIV (AIDS) and other life-threatening illnesses. A familiar feature of this new fast-track review is the use of surrogate end points, or proxies for clinical end points, which are measured by laboratory values but lack supporting clinical outcomes data. Accelerated approval speeds new drugs to market, saving companies tens of millions of dollars in negative cash flow. However, it does not generate the clinical values that insurers and managed care organizations demand.

Countering this situation is thus the trend among drug firms to increase the complexity of their analyses during clinical trials. Companies have begun to use cost-effectiveness analysis in their evaluation of new drugs, in assessing competing product development investment alternatives, and by integrating cost-effectiveness analysis into their clinical trials. They also try to capture quality-of-life measures, such as how patients perceive their lives while using the new drug. Companies vary their analysis by country (Rettig 1997), since measures of effectiveness shift according to clinical practice, accessibility to doctors, and what different cultures value as important. There are no universal measures of the quality of life. At present, the components measured depend largely on the objectives of each researcher, but some companies are trying to introduce more systematic measures. Nevertheless, no matter what components are chosen for these studies, capturing, storing and using the data requires sophisticated software and database management techniques, which must be correlated with various families of molecules. Also, to avoid the moral hazard of focusing on the weaknesses in a competitor's drug or molecule, some analysts argue companies should examine all domains and their components (Spilker 1996) and move towards agreed performance standards. Furthermore, quality-of-life measures should only be used when they are of practical use to doctors in treating patients (Levine 1996). Such judgments should be sensitive and informed and should cover criteria related and important to a broad spectrum of patients, while balancing measures which can be easily gathered against those that are more complex due to multiple treatments. These trends make clinical trials and data gathering complex and expensive and put a premium on a firm's ability to manage the process efficiently, including creating and using large patient and treatment databases.

Manufacturing and Process R&D

The research process differs from production. Yet both are important, particularly the firm's knowledge of scale-up. This is difficult because production requires uniformity at every stage. Making the average chemical make-up constant is not enough. Careful scale-up is essential to avoid contamination. Variations from the mean in commercial production must be very small. This requires constant control of variables such as the preparation of raw materials, solvents, reaction conditions, and yields. Often, experience will help achieve purer output in the intermediate processes. This better output alleviates problems in later processes. Thus, there is a learning curve in process R&D which starts at the laboratory. An important distinction is between continuous process and batch process.
This better output alleviates problems in later processes. Thus, there is a learning curve in process R which starts at 16 the laboratory. An important distinction is between continuous process and batch process.I n the continuous process, raw materials and sub-raw materials go into a flow process that produces output continuously. This continuous process is more difficult because many parameters and conditions have to be kept constant. This requires a good understanding of both optimizing the chemical process and maintaining safeguards against abnormal conditions. However, continuous processes are less dangerous and require fewer people to control at the site than batch processing where the chemicals are produced in batches, put in pill form and then stored for future distribution and sale (Takeda 1992).The following compares initial process R once a compound is discovered and commercial manufacturing for a representative chemical entity proceeds (Pisano 1996). Comparison research process and commercial production for representative chemical 17 Process R in chemical pharmaceuticals involves three stages: (1) process research, where basic process chemistry (synthetic route) is explored and ch osen; (2) pilot development, where the process is run and refined in an intermediate-scale pilot plant; and (3) technology transfer and startup, where process is run at a commercial manufacturing site (Pisano 1997).Pisano argues that the scientific base of chemistry is more mature than biotechnology and this difference accounts for the more extensive use of computer simulations in drugs made by chemical synthesis than biotechnology-based drugs. Codifying the knowledge in chemistry and chemical engineering in software has a higher explanatory power than in biotechnology. In chemistry, many scientific laws are available for process variables such as pressure, volume, and temperature.Computer models can simulate these in response to given parameters to predict cost, throughput and yield (Pisano 1997). By contrast, biotechnology has aspects that resemble art dependent on an opprator's skill more than science which only requires the proper formulation. This is particularly true for large -scale biotechnology process (Pisano 1997). Simulation is thus less reliably extrapolated to commercial production. An additional factor is the importance of purification after large-scale production in bioreactors in biotechnology-based drugs.It is not rare at this stage of extraction and purification that commercial application becomes impossible, even though the scale-up is successful. Since avoiding contamination is the key in biotechnology-based drugs, extracting and purifying a small amount of the desired materials from a large amount of broth is critical. This process is done using filters, chromatography, and other methods specific to organisms (Koide 1994). Technological Factors All scientific frontiers affect pharmaceutical companies.Since no company can be an expert on everything, what technology to develop in-house and what to license or subcontract have become important issues. In general, pharmaceutical companies were skeptical of new developments in small biotechnolog y firms. Yet the latter now provide new techniques in basic research and fermentation to the MNCs. Other pharmaceutical 18 companies then tend to follow when competitors adopt ideas from less well known biotech companies. 
This is why many such companies announce platform deals with drug companies: to get more financial resources and opportunities. Biotechnology-based pharmaceuticals have entered a new development stage which requires the capital, manufacturing and marketing expertise of the large companies. New drug discovery methods and biotechnology each demand skills different from earlier times. Emerging biotech companies offer new ideas and research tools. Other new technologies, such as stripping out side effects, specialized drug delivery systems, and "antisense," which cancels out the disease-causing messages of faulty RNA, also come from biotechnology (Fortune 1997). These are promising areas of drug research and potential products. Further, these biotech companies develop new drugs more quickly than large firms. Where they often have difficulty is in managing clinical trials and the approval process, an area where large firms have considerable experience and expertise, including sophisticated software for tracking the large databases and handling the new computerized application procedure. In addition, biotechnology demands skills in large-scale commercial production which smaller startups may not possess. Thus, close association with large firms is logical and efficient, and one should expect more future alliances and joint ventures, though outsourcing to organizations that will manage clinical trials is growing.

Another important factor which further encourages specialization in a network of companies is the industry's heavy use of information technology. Indeed, software strategies have become an important part of the industry through their impact on R&D, drug approval (including clinical trials), and control of manufacturing. There is some evidence suggesting that when innovation is dependent on trial and error, it is best done when many players try different strategies and are held responsible for the projects they choose (Columbia Engineering Conference on Quality, September 1997). If the large drug companies can successfully form principal-agent relationships with biotechnology companies doing advanced research in a particular area, in the same way that Japanese parts manufacturers have with large assemblers, there may be opportunities for major breakthroughs without the drug companies having to put such trial and error processes inside the company, where they may be less easy to manage. If the make-or-buy decision in a science-based industry is generally driven by knowledge creation capability dependent on human resources, the basis for new product development, i.e. drug development, becomes more dependent on the nature and facility of information exchange between groups and individuals than on asset ownership. Creating information sharing and access mechanisms so that complementary capabilities can be efficiently exchanged and used then becomes the key to successful corporate strategy in knowledge-based industries, especially when that knowledge base is growing and becoming increasingly diverse, as in the ethical drug industry.
Another information-sharing issue related to biotech is pharmacology. Classical pharmacology models are often irrelevant for biotech-based drugs. While some proteins express their activities across other species, others can be more species-specific. Neither poor nor good animal trial results need be predictive for humans. Particularly difficult problems are those related to toxicology, since some animals develop neutralizing antibodies (Harris 1997).

Technical support systems are important in biotechnology as well. One is transgenic animals. They provide information on the contribution of particular genes to a disease. This is done by inserting genes that have the function of expressing the phenotype, or by interbreeding heterozygotic animals to produce "knockout animals" that suffer from inherited metabolic diseases. Transgenic animals are relevant to early-phase clinical trials, since data from these animals contribute useful information on dose selection and therapeutic ratios in human studies. In addition, they offer hints as to which variables are secondary. This simplifies the clinical trial design. In general, significant input in the design and running of Phase I and II trials must come from the bench scientists who built the molecule (Harris 1997). Since clinical trials for biotech drugs lack clear guidelines, in-house communication among drug discovery, preclinical and clinical trials is important, especially due to the increased use of transgenic animals bred to examine inherited diseases. This process in Phase I/II trials can be greatly facilitated by information-sharing technologies and acts as another driver towards a more integrated approach to decision making using IT.

Structure-Based Drug ("Rational Drug") Design

This is also true of structure-based drug ("rational drug") design, or molecular modeling, which is a range of computerized techniques based on theoretical chemistry methods and experimental data, used either to analyze molecules and molecular systems or to predict molecular and biological properties (Cohen 1996). Traditional methods of drug discovery consist of taking a lead structure and developing a chemical program for finding analog molecules exhibiting the desired biological properties in a systematic way. The initial compounds were found by chance or random screening. This process involved several trial and error cycles developed by medicinal chemists using their intuition to select a candidate analog for further development. This traditional method has been supplemented by structure-based drug design (Cohen 1996), which tries to use the molecular targets involved in a disorder. The relationship between a drug and its receptor is complex and not completely known. Structure-based ligand design attempts to create a drug that has a good fit with the receptor. This fit is optimized by minimizing the energies of interaction. But this determination of the optimum interaction energy of a ligand in a known receptor site remains difficult. Computer models permit manipulations such as superposition and energy calculation that are difficult with mechanical models. They also provide an exhaustive way to analyze molecules and to save and store this data for later use or after a research chemist has left. However, models must still be tested and used, and eventually chemical intuition is required to analyze the data (Gund 1996). Then the drug must proceed through animal and clinical trials.
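As a toy illustration of "minimizing the energies of interaction," the sketch below scores a single ligand-receptor atom pair with a Lennard-Jones potential and adjusts the separation by gradient descent. This is an assumption-laden simplification: real modeling packages use full 3-D force fields over thousands of atoms, and every parameter here is an arbitrary illustrative value.

```python
# Toy illustration of energy minimization in structure-based design:
# a single Lennard-Jones pair potential minimized by gradient descent.
# All parameters are arbitrary illustrative values.

def lennard_jones(r: float, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Interaction energy of two atoms at separation r."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def d_energy_dr(r: float, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Analytic derivative of the Lennard-Jones energy with respect to r."""
    return 4 * epsilon * (-12 * sigma**12 / r**13 + 6 * sigma**6 / r**7)

r = 1.8                   # initial ligand-receptor separation (arbitrary units)
for _ in range(200):      # simple gradient descent toward the energy minimum
    r -= 0.01 * d_energy_dr(r)

# The optimum is at r = 2**(1/6) * sigma, about 1.122, where energy = -epsilon.
print(round(r, 3), round(lennard_jones(r), 3))
```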
Still, the idea behind this modeling is the principle that a molecule's biological properties are related to its structure. This reflects the better understanding of biochemistry achieved in the 1970s, so rational drug design has also benefited from biotechnology. In the 1970s and 1980s, drug discovery was still grounded in organic chemistry. Now rational drug design provides customized drugs synthesized specifically to activate or inactivate particular physiological mechanisms. This technique is most useful in particular therapeutic areas. For example, histamine receptor knowledge was an area where firms first took advantage of rational design, since its underlying mechanism was understood early (Bogner and Thomas 1996). The starting point is the molecular target in the body, so one is working from demand rather than finding a use for a new molecule. The scientific concepts behind this approach have been available for a long time. The existence of receptors and the lock-and-key concepts currently considered in drug design were formulated by P. Ehrlich (1909) and E. Fischer (1894). Their subtleties were understood, though, only in the 1970s with the use of X-ray crystallography to reveal the molecular architecture of isolated pure samples of protein targets (Cohen 1996). The first generation of this technology, conceived in the 1970s, considered molecules as two-dimensional topological entities. In the 1980s it was used together with quantitative structure-activity relationship (QSAR) concepts. This first generation has proven to be useful only for the optimization of a given series (Cohen 1996). The second generation of rational drug design has considered the full detailed properties of molecules in three-dimensional (3-D) form. This difference is significant, since the numerical parameters in the QSAR approaches do not tell the full story about the interaction between a ligand and a protein (Cohen 1996).
Still, no one system meets all the molecular modeler's needs. The industry therefore desperately needs an open, high-level programming environment allowing various applications to work together (Gund 1996). This means those who for strategic reasons want to take advantage of this technology must now do their own software development. This is the competitive software compulsion facing many drug producers.In turn, the better they can select systems, develop their capabilities, and manage their use, the more successful they will be in drug development and in managing other aspects of the drug pipeline. 23 The choice of hardware is based on software availability and the performance criteria needed to run it. Current major constraints are the power of graphics programs and the way the chemist interacts with the data and its representation (Hubbard 1996). Apple computers have frequently been used in R because of superior graphics, though this edge may be eroded by new PCs using Pentium MMX as well as moves to more open systems.However, Dr. Popper, Merck's CIO, feels that the real issue, is the software packages for the MAC that research scientists know and rely on but that are not yet available for Windows NT. Thus, MACs continue to be used for Medical R&D which keeps the Windows ma rket from developing. There are, in addition, the elements of inertia, emotional attachment and training which are apparent at major medical schools too. In sum, rational design has opened a wide range of new research based on a firm's understanding of biochemical mechanisms. This means tremendous opportunities to enter new therapeutic areas.However, since rational design is very expensive, it has raised entry costs and the minimum effective size for pharmaceutical firms by putting a premium on those with a sequence of cash generating drugs. It also has favored firms with broader product lines able to spread the costs of equipment over many projects and to transfer knowledge across therapeutic areas, contributing to the increased cost of new drugs through higher R and systems support spending (Bogner and Thomas 1996). A similar analysis applies to the use of other new technologies because major U. S. nd Japanese companies to discover and develop drugs systematically, such as combina torial chemistry, robotic high-throughput screening, advances in medical genetics, and bioinformatics. These technologies affect not only R but also the organization and the way they deal with other organizations as many new technologies are complementary. For example, high-throughput screening automates the screening process to identify compounds for further testing or to optimize the lead compound. Thus, both regulatory and technological change have raised the advantage of developing innovative drugs, even 24 hough it is inherently risky and forces firms to develop better skills in using information technology to support the process. The Pharmaceutical Industry in the United States As explained above, healthcare and the pharmaceutical industry are closely intertwined, especially in the U. S.. Ever since the election of the Clinton Administration, U. S. healthcare has been the focus of heated debate. The pricing of pharmaceuticals in particular is one of the most controversial aspe cts of the industry. Estimates of the cost of bringing a new drug to market are up to over $250 million (DiMasi et. l. 1991). However, once drugs are on the market, the costs of manufacturing, marketing and distribution are relatively small. 
This loose connection between marginal cost and the market price seems to require further justification for drug pricing. While the obvious answer lies in the high fixed cost of drug development and the expensive and time-consuming approval process prior to any positive cash flow, the answer is still not easy. Furthermore, the drug market is very complex for several reasons. First, there are many drug classes for which only a few products exist. Secondly, HMOs (health maintenance organizations) and other managed-care plans can negotiate substantial discounts because they are able to control the prescription decisions made by their participating physicians and because they buy in large quantities. These health organizations are highly price-sensitive. This means drug prices are substantially determined by the purchaser's demand elasticity. This demand in turn determines investment decisions (Schweitzer 1997). Thirdly, the market for pharmaceuticals is highly segmented, both domestically and internationally, and price discrimination between and within national markets is common. Research studies cannot even agree on a common measure of wholesale price. Indeed, no measure captures actual transaction prices, including discounts and rebates (Schweitzer 1997). Fourth, consumers do not have enough scientific knowledge to assess different drugs. Thus, gatekeepers such as doctors are important (Hirsch 1975).

Yet the current trend is towards managed care and HMOs, which closely control costs. This development clearly indicates physicians are losing some autonomy in drug selection. Thus it is not surprising that the market share of generic drugs increased from 15% to over 41% between 1983 and 1996. This has forced the ethical drug manufacturers to communicate more effectively with the HMOs and managed care organizations in addition to physicians, and to demonstrate the improved efficacy of their products as compared with generics. The acquisition of PBMs (pharmacy benefit managers) by pharmaceutical companies is an important development in this regard. Physicians now have to prescribe drugs available in the formularies of the managed-care organization. PBMs suggest cheaper alternatives to physicians for a given therapeutic benefit to save money. Eighty percent of the 100 million patient/member PBM market as of 1993 was controlled by the five big PBMs (Schweitzer 1997). In turn, when PBMs and mail-order companies expand, the small pharmacies lose the data necessary to examine various drug interactions. Since current U.S. law protects the proprietary data of pharmacists and pharmacy chains, prescription information for those patients who use pharmacies and mail-order companies actually becomes fragmented. It is likely this development could affect pharmacists' jobs as well.

A fifth reason is that FDA approval does not mean new drugs are better than old ones. As noted above, this has pressured drug companies to prove the effectiveness in cost and quality of life their drugs bring to patients. Recently, drug companies have often tried to show how their drugs can help patients restore a normal quality of life. As already described, these concerns complicate the design of clinical trials. Consolidation among wholesalers, the greater complexity of clinical trials and globalization favor firms with substantial resources and are part of the reason for the industry's merger trend, especially between U.S. and European companies.
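The second reason above, that prices are substantially determined by the purchaser's demand elasticity rather than by marginal cost, can be illustrated with the textbook monopoly markup rule (the Lerner index), under which (P - MC)/P = 1/|elasticity|. This is a standard economics identity applied here for illustration only; the numbers below are hypothetical, not actual drug prices.

```python
# Textbook Lerner-index illustration of the elasticity point above:
# a profit-maximizing seller prices so that (P - MC) / P = 1 / |elasticity|.
# Numbers are hypothetical.

def markup_price(marginal_cost: float, elasticity: float) -> float:
    """Price implied by the monopoly markup rule, valid for |elasticity| > 1."""
    e = abs(elasticity)
    if e <= 1:
        raise ValueError("rule applies only when demand is elastic (|e| > 1)")
    return marginal_cost * e / (e - 1)

# A loosely constrained buyer (|e| = 1.25) supports a far higher price
# than a price-sensitive HMO (|e| = 4) for the same production cost.
print(markup_price(marginal_cost=10, elasticity=-1.25))  # 50.0
print(markup_price(marginal_cost=10, elasticity=-4))     # about 13.3
```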
The leading pharmaceutical firms ranked by 1994 sales are as follows (Scrip Magazine, Jan. 1996), with five of them the result of cross-border mergers; Merck ranks 2nd. [Ranking table not reproduced. Table notes: *3 comparison is based on U.S. dollars; *4 calculations based on the sales of companies before mergers; *5 including OTC (over-the-counter) drugs; *6 excludes sales through strategic alliances.]

Merck

Merck is a multibillion-dollar pharmaceutical firm with a long history, going back to the 19th century in the U.S. and the 17th century in Germany. While in the past they have diversified into areas like animal health care, they are now focused almost exclusively on human health, in particular on ethical branded prescription drugs, since they have found this is their most profitable business area. Also, given the many opportunities that exist, it will demand all their capital and energy for the foreseeable future. The company has therefore spun off its animal health care business to a joint venture and sold its specialty chemical business. This strategy and motivation is similar to Takeda's focus on human health, whose market is more lucrative than its other businesses. The company appears to stress its ability to bring innovative drugs to market. Merck briefly tried to produce generic versions of its drugs but found it was not worth the investment. In addition, they now assume someone else will produce their OTC (over-the-counter) versions too. This strategic focus is now underscored by their active formation of strategic alliances. For example, in the OTC medicine market in the U.S. and Europe, but not in Japan, Merck relies on Johnson & Johnson through a joint venture with J&J to market, distribute and sell the OTC versions of Merck's prescription drugs. This means Merck has seen the OTC market as one way to lengthen the revenue stream for some of its products after their patents expire. In Japan, Merck's agreement is with Chugai Pharmaceutical Co. Ltd.; they formed a joint venture in September 1996 to develop and market Merck's OTC medicines there (Merck 1996 Annual Report). Moreover, Merck and Rhone-Poulenc have announced plans to combine their animal health and poultry genetics businesses to form
This firm has capabilities in fermentation, genetic engineering/rDNA, cell culture, hybridoma, protein engineering, and tissue culture. By forming this alliance, Merck was able to exchange its strengths with Du Pont, an early investor in biotechnology. Du Pont-Merck Pharmaceuticals has also developed its own drugs in cardiovascular disease. (Merck sold its share to Du Pont in 1998 for over $4 billion, apparently due to its ability to manage more drugs itself.) Like other pharmaceutical companies, they continue to sell their branded products as long as they can once the products have gone off patent, but at a lower price in order to meet generic competition. Cost-conscious HMOs increase this downward price pressure. Yet, according to Merck, some demand for the branded product continues once they adjust the price downward. This is due to better quality, consistent dosage, and brand awareness of the original. Strategically, Merck sees itself as a growth company with a growth target of about 15% per year. This signals a continuing need for cash flow, i.e. from existing drugs, and a constant flow of new drugs, i.e. from R&D. They need this growth to continue to offer their shareholders the return they expect and to attract the personnel they need to develop drugs, which is their corporate mission. Their products now cover 15-16 therapeutic categories. In five years this will expand to between 20 and 25 categories depending on the success of various stages of drug testing. Important new products in the pipeline include Singulair for asthma, Aggrastat for cardiovascular disorders, Maxalt for migraine headaches, and VIOXX, an anti-inflammatory drug, which works as a selective inhibitor targeted at rheumatoid arthritis. They are in phase III trials for all of these new drugs. Propecia for male pattern baldness recently received FDA approval. Merck's R&D is done internationally. To avoid duplicate investment, each research center tends to be focused. For example, the Neuroscience Research Centre in the United Kingdom focuses on compounds which affect the nervous system; Maxalt was developed in this Centre. The laboratory in Italy studies viruses, while the one in Tsukuba, Japan (Banyu Pharmaceuticals) emphasizes the circulatory system, antibiotics, and anti-cancer research (Giga, Ueda and Kuramoto 1996). This concentration pattern often reflects the comparative strengths in R&D and the therapeutic demand structure in each local market. Still, selecting the appropriate R&D projects, while critical to their success, is very difficult. This is because no discipline in science has as blurred a distinction between basic and applied research as biotechnology. The distinction is usually not well-defined because applied research often contributes to basic research. Indeed, in molecular biology, science often follows technology. Still, as a general approach, Merck tries to focus on applied research and development rather than basic science, relying on universities and smaller biotech firms for the latter. However, they do some basic research. For instance, th

Sunday, September 29, 2019

Language and Memory Essay

Language is the medium of communication. It can be verbal or written, making use of different conventional symbols and sounds. Many social creatures on Earth, such as bees, ants, and apes, have their own languages. Human language is the most complicated of all because of speech. It is an evolving process of signs and symbols. It consists of different elements such as phonemes, syllables, words, grammatical categories, sentences, discourses, and many more. One of the characteristics of language is that it is symbolic. It makes use of symbols like pictures, diagrams, letters, numbers, and the like. Examples of this characteristic of language include the hieroglyphics of ancient Egypt and the ancient symbols of the Mayans. Thus, it is important for humans to be able to understand and memorize the symbols in order to establish communication.

Memory plays an important role in the process of language. It is the faculty of the mind which stores knowledge, previous thoughts, impressions, or events. Every word that is used, whether in isolation or in a sentence, has a meaning, and that meaning is stored in our brains (Kutas et al., 2000). There are different types of memory. The first one is short-term memory, which recalls events that happened from a few seconds to less than a minute ago. Long-term memory, on the other hand, is a stronger memory, which can recall events well after they happened. Episodic memory is responsible for personal experiences. Since language is composed of symbols and sounds, the human brain acts as a catalog of these symbols and their corresponding meanings. This is called semantic memory ("Types", n.d.).

Nature and Function of Semantic Memory

Semantic memory is essential in language. It consists of independent ideas, such as the location of the Great Wall, the shape of an apple, or the colors of the rainbow. Semantic memory organizes ideas and assigns them to words and language, which are essential in establishing communication. In his book "Essentials of Human Memory," Alan D. Baddeley argues that semantic memory does not actually mean an association between words (1999). Baddeley pointed out that semantic memory is actually concerned with concepts or ideas, which relate to words but are not words themselves. He argued that much of the information stored in the semantic system consists of perceptions and acquired knowledge. It is mainly a collection of experiences, more than what words can convey (p. 157). There are many views as to the nature of semantic memory. Baddeley quoted a number of psychologists who have their own theories. Roger Brown and Eric Lenneberg described the nature of semantic memory using colors. According to them, focal colors, or colors that have short names, such as red, blue, and green, are easier to remember. The findings support the Whorfian hypothesis, which states that shorter words can more easily be remembered (157).

Functions of Language

Language is a medium of expression that can either be spoken or written. According to Patrick Lockerby, language is "a coding system and a means by which information may be transmitted or shared between two or more communicators for purposes of command, instruction or play" (2009). Language has many functions, but they can be simplified into three. The first is the informative language function. This is essential in communication and the channeling of information. It is used to describe the world or ideas about it. This function involves statements that have truth value.
The second is the expressive language function. Here, language is used as a medium of feelings and attitudes. Examples of this are poetry and prose. There are two aspects to this function of language: evoking certain feelings and expressing feelings. The third function of language is called the directive language function. It is commonly found in requests or commands. It is not normally regarded as true or false. There are other functions of language aside from the three basic ones. Ceremonial language, for example, mixes the expressive and the directive functions for use in performance. The statement "I do" in a marriage is an example of a performative utterance denoting action. There is also phatic language, where there is a transition from spoken language to body language ("Functions", n.d.).

Stages of Production

Basically, the process of language production begins at the source of the information, which is the sender. The message is conceptualized and then encoded into linguistic form, which involves the usage of words and sentences. The linguistic form is then encoded into speech. Speech is responsible for delivering the encoded information to the listener through sound. The sound is decoded by the listener into its linguistic form, which is then decoded into its original meaning ("Language", n.d.).

Memory and Language

Bruce Crosson discussed the relationship between language and memory in his book "Subcortical Functions in Language and Memory". Before any information is stored in long-term memory, it must first be converted into a linguistic system with semantic characteristics. Thus, the ability to retrieve a verbal memory of a certain entity depends on how the represented entity is accessed. This supports the importance of language, since verbal memory depends on it (1992). Moreover, the meanings of words and symbols are stored in the semantic memory. Evidence of this is the ability to develop one's vocabulary (325). There are also studies which suggest a significant relationship between semantic memory and language. A study by Marta Kutas and Kara D. Federmeier showed that semantic memory plays a role in language comprehension, as revealed by electrophysiology. An electrophysiological brain component called the N400 reveals the nature and timing of an active semantic memory during language comprehension. Results show that sentence processing is influenced by the organization of semantic memory. In the left hemisphere, the semantic memory appears to pre-activate the meaning of forthcoming words (2000). The relationship between memory and language was also studied by Viorica Marian and Margarita Kaushanskaya. Their study involved testing the accessibility of general knowledge across two languages in bilinguals. Mandarin-English speakers were asked questions such as "name a statue of someone standing with a raised arm while looking into the distance". The respondents were likely to answer the Statue of Liberty when asked in English and the Statue of Mao when asked in Mandarin. When the accuracy of the answers was measured, it showed that language-dependent memory has an effect on both languages. When the speed of answering was measured, it showed that only the bilinguals' more proficient language was affected by language-dependent memory (2007). The results of this study suggest that there is a strong relationship between memory and language.
Also, the linguistic context at the time of learning may become integrated into memory content.

Conclusion

In conclusion, language plays a very important role in communication and learning. It represents ideas, thoughts, and attitudes that are embedded in the linguistic system. Language also has many different functions; basically, these functions are informative, expressive, and directive. Memory and language are closely related. As mentioned before, any information, before entering long-term memory, must be converted into a linguistic system. Semantic memory is thus significant in language production, since the information in verbal memory depends on how its representations are accessed.

References

Baddeley, A. D. (1999). "Essentials of Human Memory". The Psychology Press, Ltd.

"Common Forms and Functions of Language" (n.d.). Introduction to Logic. Retrieved 16 May 2010 from http://philosophy.lander.edu/logic/form_lang.html.

Crosson, B. (1992). "Subcortical Functions in Language and Memory". New York, New York: The Guilford Press.

Kutas, M., & Federmeier, K. D. (2000). "Electrophysiology Reveals Semantic Memory Use in Language Comprehension". Trends in Cognitive Sciences, 4 (12).

"Language Production" (n.d.). Wikipedia. Retrieved 16 May 2010 from http://en.wikipedia.org/wiki/Language_production.

Lockerby, P. (2009). "What is Language?". The Chatter Box. Retrieved 16 May 2010 from http://www.scientificblogging.com/chatter_box/blog/what_language.

Marian, V., & Kaushanskaya, M. (2007). "Language Context Guides Memory Content". Psychonomic Bulletin & Review, 14 (5), 925-933.

"Types of Memory" (n.d.). Brain Training Software. Retrieved 16 May 2010 from http://www.positscience.com/about-the-brain/brain-facts/types-of-memory.

Saturday, September 28, 2019

Research Methods Essay

The main factors that influence a sociologist's choice of research method stem from two different theoretical approaches to the study of society: positivism and interpretivism. Positivism is an approach in sociology that believes society can be studied using scientific techniques similar to those used in the natural sciences, such as physics, biology, and chemistry. Interpretivism is an approach emphasizing that people have consciousness involving personal beliefs, values, and interpretations that influence the way they act, and that they do not simply respond to forces outside them. These two theoretical approaches often use different research methods because they have different assumptions about the nature of society, and this influences the type of data they are interested in collecting. Practicality, ethics, theory, and subject of study also contribute to the methods used for research.

There are various methods sociologists use to carry out research on society. The two common forms are quantitative and qualitative research methods. I will begin by analysing the meanings behind the words qualitative and quantitative. Quantitative methods are used by people who support the use of scientific investigation; they usually involve numerical statistical methods, and the purpose is to develop and apply mathematical techniques, conjectures, and hypotheses. In contrast stands the qualitative research method, usually used by sociologists who support humanistic research. It differs from quantitative methods in the sense that qualitative research looks for the specific reasons behind the way some people in society behave. Using the qualitative method, researchers are prone to ask questions like 'why?' or 'how?', whereas quantitative research would more likely ask straightforward questions like 'what?' or 'where?'. Qualitative research usually focuses on small samples, whereas quantitative research tends to use large, random samples. Unlike a quantitative method, where the research depends strictly on the investigation of arithmetical or quantifiable statistics, data from qualitative research comes in many media, e.g. moving images, text, or sound. Qualitative research was first recognised in the 1970s. Examples of qualitative data are participant observation, direct observation, unstructured interviews, case studies, etc. Examples of quantitative data are questionnaires, surveys, attitude scales, or standardised tests.

There are practical issues that affect the methods a sociologist may use. These can range from financial issues to ethical issues. * Training interviewers is comparatively clear-cut and economical; however, it costs more than merely distributing questionnaires to people. Surveys that resort to structured interviews can cover great groups of people with restricted resources because they are moderately cheap to administer; however, they cannot match the huge numbers reached by postal questionnaires. * Questionnaires and interviews collect straightforward factual information. * Questionnaire results are quantitative because they come from closed-ended questions with coded answers. This makes them suitable for hypothesis-testing. Sometimes specific factors can cause problems for certain research methods, such as: * Time – questionnaires are more time-consuming, while the workload of surveys can be shared by a team. * Money – researchers need an income, and large-scale research costs more.
Social surveys are more expensive than small focus groups. * Characteristics and skills of the researcher – some situations may be risky, and not every sociologist could handle them; a woman may have difficulty doing participant observation in a monastery. * Access and opportunity – if there is no access to certain groups, then secondary sources may have to be used as an alternative. An example of this is when researchers hope to conduct a study on a specific gang or cult. This could be dangerous, especially if that gang has a record of crime and callous behaviour. The researcher may find it very hard, if not impossible, to get access to the gang or cult; and if he were to get access he could be in immense trouble, especially if he went under cover. * Some issues are ethical issues; sometimes research is undertaken on an undercover basis. This could be seen as deceptive. Some people would argue that researchers should be 100% honest with the people they are researching; it is only ethical, moral, and honest that this form of sincerity is shown to those the research is based on. Nevertheless, when doing research as an undercover researcher, the question of ethics arises: is it morally correct that someone should be studied and researched without consent or acknowledgement? The rule is that undercover research can only be approved as long as no other alternative is available. Positivists like their research to be scientific, whereas interpretivists like to get into their subjects' shoes and go through the situation. The feminist Ann Oakley decided her choice of methods and topic according to her own experience of childbirth and motherhood. As a feminist she avoided methods which she described as having a male-stream bias (positivism). She selected the more qualitative and intimate methods of unstructured interviews and participant observation. She believes that the mission of sociology is to include the lives of the respondents.

Friday, September 27, 2019

The traditional view of the legal supremacy of the UK Parliament Essay

The traditional view of the legal supremacy of the UK Parliament withstood all challenges to it. The UK's membership of the European Union has, though, finally ki… …on of the statute by both the Houses of Parliament and the grant of Royal Assent for those statutes, then the courts do not question the validity or legitimacy of the statutes; they only apply them. In Edinburgh & Dalkeith Railway Co. v Wauchope, the plaintiff railway company had obtained a private Act for its purposes. The defendant approached the court and argued that this private Act was detrimental to his interests and that it affected him unfavourably. He beseeched the court to examine the legitimacy of the Act. The court refused to intervene in the matter on the grounds that the Act had been passed by both the Houses of Parliament and that it had also received the Royal Assent. Consequently, the court rejected the plea of the defendant. Thus, courts comply with statutes that have been properly enacted by Parliament (Edinburgh & Dalkeith Railway Co. v Wauchope). The tendency of courts in dealing with the legitimacy of statutes enacted by Parliament was clearly exhibited in Ex parte Canon Selwyn (Ex parte Canon Selwyn) and Pickin v British Railways Board (Pickin v British Railways Board). The Factortame case challenged this sovereignty and compelled the English courts to suspend legislation that had been enacted by Parliament in due course. As such, the Factortame case proved to be a major blow to the constitutional provision of Parliamentary sovereignty. In R v Secretary of State for Employment (R v Secretary of State for Employment, ex p. Equal Opportunities Commission), the House of Lords, on the basis of the Factortame decision, adopted a much more liberal approach. The Factortame decision had clearly demarcated the sovereignty of Parliament, and this made it possible for their Lordships to bring about far-reaching changes to the constitution. In this regard, their Lordships refrained from instructing the Secretary of State, and they also did not inform him that EC law was being breached by him. The House of Lords restricted their intervention to

Thursday, September 26, 2019

How important was nationalism as a cause of the collapse of communism Essay

In this paper, the term nationalism is used in the sense of a political doctrine and ideology justified by the goal of making a certain nation succeed in pursuing its interests. In this context, a nation is understood as "a political, ethnic, territorial, cultural, or religious group united by a common economy, mass culture, common legal rights and duties, and a belief system that emphasizes either shared history and genealogy or other common myths distinguishing this group from others" (Smith, 1991: 14). Molchanov (2000: 264) observes that nationalism today is a product of modernization as well as mass education. It is also a product of the elites' conscious manipulation of a country's masses. Nationalism is based on national feeling, i.e. a feeling of belonging to a community which is culturally distinct, goes beyond people's circles of important others, crosses the barriers of status and class, and on a legitimate basis commands its members' loyalty. In this respect, a national community is perceived as an imagined community with its contours being reconstructed during the process of national mobilization (Molchanov, 2000: 263). As for the national elite, it serves as a mobilizing agent in this process. Its parochial interests get the status of national interests and become values for which all compatriots fight. Typically, the elite is made up of well-educated classes "from the indigenous nationality and local administrators" who have become dissatisfied with their current social standing (Molchanov, 2000: 264). Exploring the political agenda of contemporary nationalism, one may state that it necessarily develops in connection with the state (Tibor, 2010: 36). The relationship between nationalism and the state is discussed in two major modes. The first one describes the situation when the representatives of the indigenous nationality already have control over the state. They make efforts to unite people and create their specific nation. It is characterized by a focus on cultural and linguistic homogenization, consolidation of political connections, and creating the feeling of solidarity. The activity of the government results in the standardization of education and a decrease in the social distance found between the layers of society (Gellner, 1998). The second mode of nationalism development within the state is when the elites do not possess control over

TRADE LIBERALIZATION IN DEVELOPING COUNTRIES Essay

As part of its conclusion, the paper also provides some recommendations relating to the overall improvement of some of the problems identified in the conduct of the research.

Introduction

The African continent has been divided into three regions to account for differences in the level of development. The disaggregation is a manifestation of the level of development: North Africa, the South African Customs Union, and the 'Rest of Africa'. The South African Customs Union is composed of South Africa, Lesotho, Swaziland, Namibia, and Botswana. Both North and South Africa are middle-income, while most countries in the Rest of Africa category are low-income countries. North Africa consists of seven countries, namely Algeria, Egypt, Libya, Morocco, Sudan, Tunisia, and Western Sahara. As of today, the region has an estimated population of more than 208 million. Among the African regions, North Africa has the biggest non-black population, which comprises more than half of the population at 160 million. Largely, North Africa is an Arab region with the largest Arab population in the world. This can be traced to the historical, cultural, and religious influence of the Middle East. North African countries have diverse cultural and historical backgrounds that account for the variance in their political, economic, and cultural policies. Apart from being collectively referred to as part of the same region and sharing a common adherence to an Islamic cultural identity and a colonial history, it is now difficult to find commonality among the countries of North Africa.

As the African continent is known for its enormous wealth of natural resources, with some of the world's largest deposits of diamonds, gold, chrome, uranium, copper, iron, cobalt, and many other minerals, the same is true of the North African region. It is endowed with rich natural resources, especially in terms of agriculture and minerals. Libya and Algeria are also known to have large amounts of petroleum. North Africa is a region that can spur development internally with all of its natural and human resources. The full use and maximization of its natural resources alone could carry this region to full economic development. The North African region is not a rookie when it comes to globalization. It relies principally on oil, natural gas, phosphates, and agricultural products for exports. Tradable industrial output and non-traded goods and services play a less significant role in North Africa's economy in terms of manufacturing, and most of the existing firms and entrepreneurs are mainly family-owned and out of date compared to other emerging markets in the global industry.

The North African region has also been a region of dispute. Just recently, three of its countries have suffered political chaos, namely Tunisia, Egypt, and Libya. The growing discontent of the people has greatly affected the ruling power in these countries, and the governments' retaliation has further destabilized their respective governments. Protests come in different forms, yet the people's clamour is the same: protests against political suppression and demands for significant economic reforms. It has been apparent from the series of

Wednesday, September 25, 2019

Basis of Emelio and Charita's Assets Essay

Since the car is used for business as well as personal use, its adjusted basis of $21,000 (19,500 + 1,500) is allocated to business purposes based on the extent of use for the same. Therefore, the basis would be $14,280 [21,000 x 6,800/(6,800 + 3,200)].

The basis of the property received by Charita from her former spouse as part of the divorce settlement would be the same as the former spouse's basis in it. The former spouse's basis in the property was the lower of the following amounts: FMV of the property on the date of its conversion to rental property - $90,000; adjusted basis on the date of conversion - $50,000. Therefore, the basis of the property is $50,000.

h. Desmond Inc. Stocks Inherited

Since the alternate valuation is elected and the stock is distributed to Charita within 6 months after her uncle's death, Charita's basis in the stock would be the FMV of the stock on the date of distribution, which is $14,500. If alternate valuation had not been elected by the executor, the basis would have been the FMV on the date of her uncle's death - $14,000. If alternate valuation had been elected, but the date of distribution was not within 6 months of death, then the basis would be the FMV on the alternate valuation date - $13,300.

i. Stock in Software Corporation

The basis of the original stock is $20,100 (price paid for the stock $20,000 + brokerage $100). The basis of the stocks after the split would be their FMV on the date of the split, which is $200,000 [(1000 x 2) x $100].

j. Shares gifted by Emelio's father

The basis of any shares received as a gift would be as follows: Gain basis - adjusted basis of donor + gift tax paid on appreciation = $3,500 (100 x $35); Loss basis - lower of gain basis and FMV on date of gift = $3,500 (lower of $3,500 and $4,500). If gift tax was paid on the appreciation, the basis would have to be computed using the following formula: Adjusted basis of the donor + [Gift tax X (FMV - Adjusted basis)/(FMV -
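The inheritance and gift rules applied above reduce to a few comparisons. The sketch below restates them in TypeScript as a rough illustration; the function names and parameters are invented for this example, not drawn from the essay or any tax library, and the figures simply mirror the numbers worked above.

// Basis of inherited stock: FMV at the date of death, unless alternate
// valuation is elected; if elected, FMV at distribution when the stock is
// distributed within six months of death, otherwise FMV at the alternate
// valuation date.
function inheritedBasis(
  altValuationElected: boolean,
  distributedWithinSixMonths: boolean,
  fmvAtDeath: number,
  fmvAtDistribution: number,
  fmvAtAltDate: number,
): number {
  if (!altValuationElected) return fmvAtDeath;
  return distributedWithinSixMonths ? fmvAtDistribution : fmvAtAltDate;
}

// Basis of gifted shares: the gain basis is the donor's adjusted basis plus
// any gift tax paid on the appreciation; the loss basis is the lower of the
// gain basis and the FMV on the date of the gift.
function giftBases(donorBasis: number, giftTaxOnAppreciation: number, fmvAtGift: number) {
  const gainBasis = donorBasis + giftTaxOnAppreciation;
  const lossBasis = Math.min(gainBasis, fmvAtGift);
  return { gainBasis, lossBasis };
}

// The essay's figures: $14,500 for the inherited stock; $3,500 for both
// gift bases (100 shares at a $35 donor basis, no gift tax, $4,500 FMV).
console.log(inheritedBasis(true, true, 14000, 14500, 13300)); // 14500
console.log(giftBases(100 * 35, 0, 4500)); // { gainBasis: 3500, lossBasis: 3500 }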

Tuesday, September 24, 2019

Occupational Safety & Liability Case Study

As such, one is convinced that a comprehensive definition of a safe workplace should extend beyond it being injury-free, which is only one facet of adhering to the standards of safety prescribed by the Occupational Safety and Health Administration (OSHA). Concurrently, it has been revealed that employers were required by OSHA to provide a safe workplace that conforms to the following description: "one that is free of dangers that could physically harm those who work there… requiring employers to inform employees about potential hazards, to train them in how to deal with hazards, and to keep records of workplace injuries". Overall, other facets of safety and conformity to health standards must be adhered to, not only those focusing on an injury-free definition. Actually, it was emphasized that a safe workplace should consider communicating explicitly to all personnel the sources of potential risks and hazards, identifying machinery that could endanger the employees' lives, preventing illnesses, making sure that lighting, ventilation, emergency exits, and fire-protection strategies are in place, providing vaccination as deemed necessary, and "even tracking the effects of workplace conditions on employees' health through periodic medical examinations". Thus, the information confirms that workplace safety does not merely mean being injury-free. All aspects of hazards and risks in the workplace must be properly addressed.

Monday, September 23, 2019

Adobe Frames Interface Essay

Executable files that can be played from a compact disc can also be created using Flash without the need for any additional software. Flash is capable of capturing user input through the keyboard, mouse, camera, or even the microphone. In order to use Flash, one does not need to know any programming language, even though Flash itself contains a scripting language called ActionScript, which is object-oriented and offers support for automation through JSFL (the JavaScript Flash language).

Overview of ActionScript

ActionScript is an object-oriented scripting language, much like JavaScript, that is used by Flash to control objects within its movies. It is based on the ECMA-262 specification, just like JavaScript. ActionScript enables interactive design in Flash by allowing the execution of different actions within a movie (Waldron, 2006). It was initially referred to just as 'actions' and was introduced in Flash Player 4. It enabled simple interactivity in Flash, and it was not as such a complicated language, since its semantics and syntax were nowhere close to ECMAScript. ECMAScript-based syntax was later adopted, however, and thus ActionScript 1.0 was born and introduced in Flash Player 5. ActionScript evolved and its semantics were tweaked with the releases of Flash Player 6 and 7. In 2003, ActionScript 2.0 was introduced in Flex 1.0 and Flash MX 2004, but it could still work in Flash Player 6 and 7 since it used an object model similar to that of ActionScript 1.0 (Waldron, 2006). In Flash Player 9, ActionScript 3.0 was introduced as a result of a new ActionScript Virtual Machine (AVM2), which is now the main virtual machine for the execution of ActionScript code, although support for AVM1 is still provided in order to accommodate earlier versions of ActionScript.

ActionScript 2.0 and ActionScript 3.0

ActionScript 2.0 was introduced in Flash MX 2004. Although it still utilized the same object model as ActionScript 1.0, it is better equipped for larger and more complex applications. It adds some new runtime capability and functionality by improving object-oriented programming in Flash through the introduction of object-oriented syntax and semantics. ActionScript 1.0 lacked an official vocabulary for the creation of objects and classes, even though it was considered to be object-oriented. In ActionScript 1.0, prototypical objects were used as classes, since there was no class keyword for class creation or extends keyword for establishing inheritance, which ActionScript 2.0 now provides, thus making the language more familiar to programmers with OOP backgrounds. ActionScript 2.0 has also made it possible to create Java-like interfaces via the interface statement; ActionScript 1.0 did not offer support for interfaces. In ActionScript 1.0, classes could be defined in in-line code or in external .as files; ActionScript 2.0 now requires classes to be defined in external class files. This enables editing class files in the Flash MX Professional 2004 editor or in any other external editor.
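Since ActionScript 2.0's class, extends, and interface keywords behave much like their counterparts in other ECMAScript-family languages, the pattern the essay describes can be sketched as follows. The example below uses TypeScript, whose syntax is close to ActionScript 2.0's; the class and interface names are invented for illustration.

// A Java-like interface: implementing classes must provide describe().
interface Describable {
  describe(): string;
}

// A class created with the class keyword, implementing the interface.
class Clip implements Describable {
  constructor(protected name: string) {}
  describe(): string {
    return "Clip: " + this.name;
  }
}

// Inheritance through the extends keyword, which ActionScript 1.0 lacked.
class ButtonClip extends Clip {
  describe(): string {
    return "Button " + super.describe();
  }
}

const clip: Describable = new ButtonClip("play");
console.log(clip.describe()); // "Button Clip: play"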

Sunday, September 22, 2019

Collision Course †NEOs Essay Example for Free

Collision Course – NEOs Essay When looking at the Earth in the Solar System, there are many fascinating objects, and also many dangerous ones. NEOs, or Near Earth Objects are constantly being studied to determine the actual possibility of collision with the Earth. Most NEOs consist of meteors, meteorites, comets and asteroids. Though most of the objects are too small to cause any sort of substantial damage, there are a few that are capable of causing the next major extinction. In order for an object to be considered a NEO, it must be within 1. 3 AUs (or astronomical units) from the Sun. 1.3 AU is the same as about 93 million miles. The NEOs are objects that have been bumped by the gravity of other planets which let them get close to the Earths orbit.   Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚   One of the major groups of NEOs are meteoroids. The term meteor is actually used to describe the streak of visible light after its trip through the Earths orbit. One of the most famous craters, which   is like a giant scar caused by a NEO hitting the Earth, is in Arizona. Meteor Crater, or Barringer Meteorite Crater as it is also known as, is a jarring reminder of what kind of damage a NEO can do upon impact. Most meteors are small enough that once they are pulled in by the Earths orbit and hit the atmosphere, they burn up and disintegrate before they ever get the chance to actually hit the Earths surface.   Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚   Another group of NEOs are asteroids. One of the largest asteroids that astronomers keep their eyes on is Apophis. This giant is due to hit the Earth in 2036. The size of Apophis is estimated to be a bit larger then the Rose Bowl, and if it were to hit the Earth would cause global damage. If it hits the ocean, the damage occurring from the huge tsunamis by themselves would be catastrophic.   Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚   Another class of objects that are visible to the naked eye are comets. When speaking of them as NEOs, then they are considered to be old comet nuclei whose perihelia are less then 1.3 AU from the Sun. One comet in particular that has been known through history is Halleys Comet. Also, the trail of cosmic dust, or tail of the comet can also be passed through by the Earth.   Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚  Ã‚   In conclusion, NASA is taking steps to try and diminish the risk of being hit by a NEO by continually watching and cataloging the orbits and behaviors of NEOs. What remedies that have been looked at so far to try and deal with the risk of being hit is to explode nuclear weapons near the object to try and change its course. Other considerations that have been looked at is sending high-speed ballistic missiles towards the object to make an impact, or to send a hovering spacecraft to pull the object into a different orbit, thereby allowing it to miss the Earth altogether. The future for watching NEOs is strongly backed now more then ever before. Washington has allowed a $4 million dollar budget for listing potential and real threats to the Earth, and sent a new report to congress in March 2007. Considering the probability that the Earth will be hit again, as it has been hit in the past requires that the scientific community take heed of the risk, and not only be able to prepare the world for such a catastrophe, but be able to prevent it as well. References http://www.jpl.nasa.gov/multimedia/neo/index.cfm http://www.nasa.gov/centers/hq/home/index.html http://newton.dm.unipi.it/cgi-bin/neodys/neoibo?info:0;faq#nea

Saturday, September 21, 2019

Yum! Pizza-Hut KFC

Yum! Pizza-Hut KFC Yum! Brands Inc, Pizza Hut, and KFC The fast food industry has exploded over the preceding century in both the United States and foreign markets. Rising income, greater affluence among a larger percentage of American households, higher divorce rates, and the marriage of people later in life contributed to the rising number of single households and the demand for fast food (Krug (2004) pg. 632). In 2004, Yum! Brands, Inc. was the worlds largest fast food company. It operated more that 33,000 KFC, Taco Bell, Pizza Hut, Long John Silvers, and AW restaurants worldwide. Yum! Brands also operated more that 12,000 restaurants outside the United States (Krug (2004) pg. 627). In 2004, the company was focusing on international strategy and portfolio management to develop a strong market share with little high growth markets. The companies main focus in 2004 was to focus its international strategy on developing strong market share positions in a small number of high-growth markets such as Japan, Canada, the United Kingdom, China, Australia, Korea, and Mexico (Krug (2004) pg. 627). International strategy is based on diffusion and adaptation of the parent companys knowledge and expertise to foreign markets. The primary goal of the strategy is worldwide exploitation of the parent firms knowledge and capabilities (Dess, Lumkin, Eisner 2007 pg. 256). The analysis begins by looking at the strengths of the firm. Yum! Brands, Inc. has numerous strengths throughout its internal environment. The company was the market leader in the chicken, pizza, Mexican, and seafood segments of the U.S. fast food industry. It operates more than 33,000 units worldwide (Krug (2004) pg. 627). The focus of the company went from individual to multibranded units. Multibranded units attracted a larger consumer base by offering a broader menu selection in one location. The company operates more than 2400 multibrand restaurants in the U.S (Krug (2004) pg. 628). An additional strength within its internal environment comes from franchising. Franchising allowed firms to expand more quickly, minimize capital expenditures, and maximize return on invested capital (Krug (2004) pg. 633). Franchising has the advantage of limiting the risk exposure that a firm has in overseas markets while expanding the revenue base of the parent company (Dess, Lumkin, Eisner 2007 pg . 265). As we have come to realize, companies are never perfect and can have numerous weaknesses within its internal environment. Long distances between headquarters and foreign franchises made it more difficult to control the quality of individual restaurants. Large distances also caused servicing and support problems, and transportation and other resource costs were higher. In addition, time, cultural, and language differences increased communication problems and made it more difficult to get timely and accurate information (Krug (2004) pg. 635). A companys opportunities are the most influential to building an effective strategy. As the U.S. market matured, more restaurants turned to international markets to expand sales. Foreign markets were attractive because of their large customer bases and comparatively low competition. A great opportunity for Yum! Brands Inc. is to move its investment locations to Mexico. 
From a regional point of view, Latin America is appealing because of its close proximity to the United States, language and cultural similarities, and the potential for a future Free Trade Area of the Americas, which would eliminate tariffs on trade within North and South America (Krug (2004) pg. 627).

The external environment creates numerous threats for Yum! Brands Inc. One of the prime threats the company faces is the aging of the population. Restaurants rely heavily on teenage and college-aged workers. As the population ages, fewer young workers are available to fill food service jobs. Many restaurants were forced to hire less reliable workers, which affected both service and restaurant cleanliness. A related problem was that turnover rates were notoriously high: the National Restaurant Association estimated that 96% of all fast food workers quit within a year (Krug (2004) pg. 633). Another giant threat the company faces is the proliferation of new diets. Many Americans were eating pizza less often as they pursued the Atkins Diet (low carbohydrates), "The Zone" (balanced meals containing equal parts of carbohydrates, protein, and unsaturated fat), or a traditional low-fat diet (Krug (2004) pg. 632). Chicken costs were also a threat to the company. A boneless chicken breast, which cost $1.20 per pound in early 2001, cost $2.50 per pound in 2004, an increase of more than 100 percent. Profit margins were being squeezed from both the revenue and cost sides (Krug (2004) pg. 632).

In 2004, Yum! Brands Inc. started to pay more attention to portfolio management. The key purpose of creating portfolio models is to assist a firm in achieving a balanced portfolio of businesses: businesses whose profitability, growth, and cash flow characteristics complement each other and add up to a satisfactory overall corporate performance. Imbalance, for example, could be caused either by excessive cash generation with too few growth opportunities or by insufficient cash generation to fund the growth requirements in the portfolio (Dess, Lumpkin, Eisner 2007 pg. 214). When using portfolio strategy approaches, a corporation tries to create synergies and shareholder value in a number of ways. One of the best-known portfolio strategy approaches is the Boston Consulting Group's (BCG) growth/share matrix. In the BCG matrix, each business unit is placed in one of four quadrants: stars, cash cows, question marks, and dogs. Stars are business units competing in high-growth industries with relatively high market shares. Question marks compete in high-growth industries with weak market shares. Cash cows are business units with high market shares in low-growth industries. Finally, dogs have weak market shares in low-growth industries (Dess, Lumpkin, Eisner 2007 pg. 214).

Yum! Brands Inc. has several business units that are considered cash cows. The first is Pizza Hut. In 2003, Pizza Hut's sales were $5 billion, almost 50 percent of the segment's market share. Although its market share is fairly high, its growth rate is only 1.3 percent. Average sales per unit are $605,700 across its 7,523 units (Krug (2004) pg. 631).
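Once a growth cutoff and a relative-share cutoff are chosen, assigning a unit to a quadrant is mechanical. Here is a minimal sketch in TypeScript; the 10 percent growth and 1.0 relative-share thresholds are illustrative assumptions, not figures from the case or the BCG literature.

type Quadrant = "Star" | "Cash cow" | "Question mark" | "Dog";

// Classify a business unit by its market growth rate (in percent) and its
// market share relative to the largest competitor.
function bcgQuadrant(growthPct: number, relativeShare: number): Quadrant {
  const highGrowth = growthPct >= 10;
  const highShare = relativeShare >= 1.0;
  if (highGrowth) return highShare ? "Star" : "Question mark";
  return highShare ? "Cash cow" : "Dog";
}

// Pizza Hut, per the case: roughly half its segment but only 1.3% growth,
// so it lands in the cash cow quadrant.
console.log(bcgQuadrant(1.3, 1.5)); // "Cash cow"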
Another cash cow is Kentucky Fried Chicken (KFC). Like Pizza Hut, KFC is the market leader in its segment, the chicken chain. In 2003, KFC's total sales were almost $5 billion, more than 50 percent of the market share in the chicken chain segment, and its growth rate was 2.8 percent. Average sales per unit are $897,800 across its 5,524 units. Despite its dominance, KFC is slowly losing market share as other chicken chains increase sales at a faster rate. Sales figures indicated that KFC's share of the chicken segment fell from a high of 64 percent in 1993, a 10-year drop of 14 percent (Krug (2004) pg. 631). The last cash cow of Yum! Brands Inc. is Taco Bell, the most profitable of the company's business units. In 2003, its sales were $5.3 billion, averaging $879,700 per unit. Although it has a high market share, its growth rate is only 2.8 percent (Krug (2004) pg. 631). Taco Bell was able to generate greater overall profits because of its lower operating costs (Krug (2004) pg. 627). Its profits also were greater because the cooking machinery was simple, less costly, and required less space than a pizza oven or chicken broiler (Krug (2004) pg. 631).

Despite the fact that the company has many cash cows among its business units, it also has two dogs in A&W restaurants and Long John Silver's. In 2003, A&W had sales of only $200 million, over $5 billion less than the sales of Taco Bell. Additionally, Long John Silver's had sales of $777 million, averaging $640,000 per unit. Its growth rate was a low 2.8 percent, six percent less than that of the industry leader, McDonald's (Krug (2004) pg. 631).

Even though there are numerous benefits of portfolio models, there are also some downsides. First, the approach views each Strategic Business Unit (SBU) as a stand-alone entity, ignoring common core business practices and value-creating activities that may hold promise for synergies across business units. Second, unless care is exercised, the process becomes largely mechanical, substituting an oversimplified graphical model for the important contributions of the CEO's experience and judgment. Third, the reliance on "strict rules" regarding resource allocation across SBUs can be detrimental to a firm's long-term viability. Finally, while colorful and easy to comprehend, the imagery of the BCG matrix can lead to some troublesome and overly simplistic prescriptions (Dess, Lumpkin, Eisner 2007 pg. 216).

Since 2004, Yum! Brands Inc. has been narrowing its focus to an international strategy, which it pursues by developing strong market share positions in a small number of high-growth markets. There are a few advantages of international expansion. First, it increases the size of potential markets for a firm's products and services (Dess, Lumpkin, Eisner 2007 pg. 243). Second, it can reduce the costs of research and development as well as operating costs. Finally, it can enable a firm to optimize the physical location of every activity in its value chain (Dess, Lumpkin, Eisner 2007 pg. 247). There are four risks in international strategy: political risk, economic risk, currency risk, and management risk. Political and economic risk can range from social unrest, military turmoil, and elections to violent conflict or terrorist attacks. Any country with such high risk is less attractive for most types of business. Currency risk can pose a substantial risk for companies. When business units are in different countries, they must pay very close attention to exchange rates; even a small change in the exchange rate can result in a significant difference in the cost of production or net profit when doing business overseas.
Management risk is the risk managers face when they must respond to the inevitable differences they encounter in foreign markets. Managers must also pay very close attention to the culture of the country in which they are looking to place their business units (Dess, Lumpkin, Eisner 2007 pg. 248-249).

In conclusion, the SWOT analysis has given us a good view of the internal and external environments of Yum! Brands Inc. It has shown what the company can use as the building blocks of its strategic plan. To be successful, the firm must address all the factors in the analysis. The Boston Consulting Group matrix has shown which of the business units throughout Yum! Brands Inc. are the most successful and which units need vast improvement. For Yum! Brands Inc. to succeed with its international strategy, managers must pay close attention to the different risks each country presents. The international strategy must succeed in developing strong market share positions throughout the world; if the strategy fails, the company's market share could drop significantly.

Work Cited

Krug, Jeffrey A. (2004). Yum! Brands, Pizza Hut, and KFC. Appalachian State University, 627-638.

Dess, Gregory G., Lumpkin, G. T., & Eisner, Alan B. (2007). Strategic Management (3e). McGraw-Hill.