Thursday, October 31, 2019

Comparison of the novel by Robert Penn Warren and the film Adaptation Essay - 1

The book displays Jack’s character in a more complex manner than the film does (Warren 45). His pessimistic outlook on life is evident in the book, along with a clear obsession with Anne Stanton; in the film, Jack’s feelings are only partially captured. The novel presents Jack’s philosophical discussion, but “The Great Twitch” is not shown in the film. The 1930s era was characterized by racism (Warren 56). To keep with the era, Jack displays attitudes that are racist by the standards of a later era; the film does not capture this aspect. Jack is a doctor and performs doctoral research. Which is more effective and powerful, the visit with Judge Irwin and the conflicts presented between Willie, Jack and the Judge in the film version, or the novel? (pages 63-73) Explain your answer, providing specific examples from both the film and the novel. The film version is stronger, as it displays a critical review of the scenario. The discussion of Jack’s doctoral research is presented in the novel but does not appear in the film. In his research, Jack studies Cass Mastern, a descendant of the Antebellum South who fought in the Civil War. The book gives a detailed passage on Mastern and his influence on people’s lives, which places him at the center of the novel’s moral theme. Jack stops his research on Mastern because he refuses to accept the study’s finding of how people’s actions affect the destiny of others (Warren 98). Jack’s reaction in the book is more powerful than in the film: he is more enraged in the book on learning that Willie has taken Anne as his mistress. The book does not carry the storyline of Tom Stark as displayed in the film; the film does show Tom Stark, but only briefly. Both the book and the film depict the scandal caused by Tom after he impregnated a girl.
The father of the girl faces

Tuesday, October 29, 2019

Health communication Essay Example | Topics and Well Written Essays - 500 words

I can achieve greater success in the team through hard work to overcome the stress. I prefer feeling to thinking. The main reason for this is the need to maintain interpersonal relationships with other people. I always choose personal concern for the welfare of others over impersonal facts and objective principle, because doing so considers the points of view of the persons involved in a situation. ‘Feeling’ enables one to maintain harmony and consider people’s values, unlike ‘thinking’, which requires the application of principles. Based on the article, the higher prevalence of “feeling” individuals in nursing is because of the need for harmony in the workplace. Those with a preference for ‘thinking’ endure less stress than those with a preference for ‘feeling’ (Nash, 2011). Nursing requires one to advocate for human-centered values that are in line with the way that people react and feel. It enables one to make exceptions for individual cases, as opposed to thinking, which requires one to stand firm on outlined principles and hold firm to policies. I agree with Anthony Espinoza that one can handle stress well, depending on the situation. In addition, stress reduces a person’s level of focus, and it may hinder teamwork activities that require coordination of efforts. Feelings often override thinking because of the need to maintain interpersonal relationships. For instance, a nurse should be caring to patients, and nurses often achieve this through a display of feelings, despite the stress they endure in their long and exhausting work hours. I agree with Angela Bregstrom that stress from multiple sources is difficult to handle. For instance, if one experiences stress at the workplace, it is only right that they should not experience stress at home or at school. The ability of stress to arise from virtually all sectors makes it a variable that requires utmost attention before its

Sunday, October 27, 2019

Internet of Things Paradigm

Introduction

According to a 2016 statistical forecast, there are almost 4.77 billion mobile phone users globally, and the figure is expected to pass five billion by 2019 [1]. The main driver of this significant increase is the growing popularity of smartphones. In 2012, about a quarter of all mobile users were smartphone users, and this share will double by 2018, meaning there will be more than 2.6 billion smartphone users. Of these smartphone users, more than a quarter use Samsung or Apple smartphones. As of 2016, there were 2.2 million and 2 million apps in the Google Play store and the Apple App Store respectively. Such explosive growth of apps offers potential benefits to developers and companies; the mobile application market generates about $88.3 billion in revenue. Prominent exponents of the IT industry estimate that the IoT paradigm will generate $1.7 trillion in value added to the global economy in 2019. By 2020 the Internet of Things device base will be more than double the size of the smartphone, PC, tablet, connected car, and wearable markets combined. Technologies and services belonging to the Internet of Things generated global revenues of $4.8 trillion in 2012 and will reach $8.9 trillion by 2020, growing at a compound annual growth rate (CAGR) of 7.9%. Alongside this impressive market growth, malicious attacks have also increased dramatically. According to Kaspersky Security Network (KSN) data, there have been more than 171,895,830 malicious attacks from online resources worldwide. In the second quarter of 2016, KSN detected 3,626,458 malicious installation packages, 1.7 times more than in the first quarter of 2016. These attacks take many forms, such as RiskTool, AdWare, Trojan-SMS, Trojan-Dropper, Trojan, Trojan-Ransom, Trojan-Spy, Trojan-Banker, Trojan-Downloader, Backdoor, etc.
http://resources.infosecinstitute.com/internet-things-much-exposed-cyber-threats/#gref Unfortunately, the rapid diffusion of the Internet of Things paradigm has not been accompanied by an equally rapid improvement in efficient security solutions for these smart objects, while the criminal ecosystem is exploring the technology as a new attack vector. Technological solutions belonging to the Internet of Things are forcefully entering our daily life. Let’s think, for example, of wearable devices or the SmartTV. The greatest problem for the development of the paradigm is the low perception of the cyber threats and their possible impact on privacy. Cybercrime is aware of the difficulties faced by the IT community in defining a shared strategy to mitigate cyber threats, and for this reason it is plausible that the number of cyber attacks against smart devices will rapidly increase. As long as there is money to be made, criminals will continue to take advantage of opportunities to pick our pockets. While the battle with cybercriminals can seem daunting, it is a fight we can win: we only need to break one link in their chain to stop them dead in their tracks. Some tips for success: deploy patches quickly; eliminate unnecessary applications; run as a non-privileged user; increase employee awareness; recognize our weak points; reduce the threat surface. Currently, the two major app store companies, Google and Apple, take different positions on spam app detection: one takes an active approach and the other a passive one. There is strong demand worldwide for malware detection.

Background (Previous Study)

The paper “Early Detection of Spam Mobile Apps” was published by Dr. Suranga Seneviratne and his colleagues at the 2015 International World Wide Web Conference. In it, he emphasised the importance of early detection of malware and also introduced a unique idea of how to detect spam apps.
Every market operates its own policies for deleting applications from its store, and this is done through continuous human intervention. The authors wanted to find the reasons and patterns behind the deleted apps and thereby identify spam apps. The diagram simply illustrates how they approach early spam detection using manual labelling.

Data Preparation

A new dataset was prepared from a previous study [53]. The initial seed of 94,782 apps was curated from the list of apps obtained from more than 10,000 smartphone users. Over roughly five months, the researchers collected metadata from the Google Play Store (application name, application description, and application category) for all the apps, and discarded apps with non-English descriptions from the metadata.

Sampling and Labelling Process

One of the important processes of their research was manual labelling, the first methodology proposed, which allows the reason behind each removal to be identified. Manual labelling proceeded over about 1.5 months with 3 reviewers at NICTA. Each reviewer labelled apps against heuristic checkpoints, and the majority reason from voting was recorded as shown in Graph 3. They identified 9 key reasons with heuristic checkpoints. The full list of checkpoints can be found in their technical report (http://qurinet.ucdavis.edu/pubs/conf/www15.pdf) []. In this report, we list only the checkpoints for the reason “spam”. Graph 3. Labelled spam data with checkpoint reason.

Checkpoint S1 - Does the app description describe the app function clearly and concisely? 100 word bigrams and trigrams describing app functionality were manually curated from previous studies. There is a high probability that spam apps do not have a clear description. Therefore, the 100 bigrams and trigrams were compared with each description and the frequency of occurrence was counted.

Checkpoint S2 - Does the app description contain too many details, incoherent text, or unrelated text? Literary style, known as stylometry, was used to address Checkpoint S2.
In the study, 16 features were listed in Table 2.

Table 2. Features associated with Checkpoint S2
1. Total number of characters in the description
2. Total number of words in the description
3. Total number of sentences in the description
4. Average word length
5. Average sentence length
6. Percentage of upper case characters
7. Percentage of punctuation
8. Percentage of numeric characters
9. Percentage of common English words
10. Percentage of personal pronouns
11. Percentage of emotional words
12. Percentage of misspelled words
13. Percentage of words with both alphabetic and numeric characters
14. Automated readability index (AR)
15. Flesch readability score (FR)

For the characterization, greedy feature selection [ ] was used with a decision tree classifier of maximum depth 10, and performance was optimized with an asymmetric F-measure [55]. They found that features 2, 3, 8, 9, and 10 were the most discriminative, and that spam apps tend to have less wordy descriptions than non-spam apps: about 30% of spam apps had descriptions of fewer than 100 words.

Checkpoint S3 - Does the app description contain a noticeable repetition of words or key words? They used vocabulary richness to detect spam apps: Vocabulary Richness (VR) = number of distinct words / total number of words. The researchers expected low VR for spam apps, reflecting repetition of keywords. However, the result was the opposite of the expectation: surprisingly, apps with VR close to 1 were likely to be spam, and none of the non-spam apps had a high VR [ ]. This might be due to the terse style of app description among spam apps.

Checkpoint S4 - Does the app description contain unrelated keywords or references? A common spamming technique is adding unrelated keywords to increase an app’s search visibility, and the topics of those keywords can vary significantly. A new strategy was proposed for this limitation: counting mentions of popular application names in an app’s description. In the previous research, the names of the top-100 apps were used for counting mentions.
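The mention-counting idea behind Checkpoint S4 can be sketched as a simple scan of the description for known top-app names. A minimal sketch follows; the `TOP_APP_NAMES` list is a stand-in for the real top-100 list, which is not reproduced in this report:

```python
# Checkpoint S4 sketch: count how often popular app names are mentioned
# in a description. TOP_APP_NAMES is a placeholder; the study used the
# names of the top-100 apps.
TOP_APP_NAMES = ["facebook", "whatsapp", "instagram", "twitter", "youtube"]

def count_top_app_mentions(description, top_app_names=TOP_APP_NAMES):
    # Case-insensitive substring counting over the whole description.
    text = description.lower()
    return sum(text.count(name.lower()) for name in top_app_names)

# A description stuffed with unrelated popular-app keywords scores high.
spammy = "Better than Facebook, WhatsApp and Instagram! Works with Facebook."
plain = "A simple flashlight app with a timer."
```

As the study notes, the signal runs both ways: legitimate apps also mention top apps (social media integrations, fan pages), so the count is one feature among several rather than a spam test on its own.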
Only 20% of spam apps mentioned the popular apps more than once in their description, whereas 40 to 60% of non-spam apps did. They found that many top apps have social media interfaces and fan pages to keep connected with users; these can therefore serve as identifiers to discriminate spam from non-spam apps.

Checkpoint S5 - Does the app description contain excessive references to other applications from the same developer? This counts the number of times a developer’s other app names appear. Only 10 spam apps were flagged by this checkpoint, because descriptions tended to contain links to the applications rather than the app names.

Checkpoint S6 - Does the developer have multiple apps with approximately the same description? For this checkpoint, 3 features were considered: the total number of other apps developed by the same developer; the total number of those apps with English descriptions (to measure description similarity); and whether descriptions from the same developer have cosine similarity over 60%, 70%, 80%, and 90%. Pre-processing was required to calculate the cosine similarity [ ]: first, the words are converted to lower case and punctuation symbols are removed; then each document is represented as a word-frequency vector, and the cosine similarity between vectors is computed (http://blog.christianperone.com/2013/09/machine-learning-cosine-similarity-for-vector-space-models-part-iii/). They observed that the similarity between app descriptions was the most discriminative feature: only 10-15% of non-spam apps had 60% description similarity with 5 other apps by the same developer, while more than 27% of spam apps did. This evidence indicates the tendency of spam apps to be multiple clones with similar app descriptions.

Checkpoint S7 - Does the app identifier (appid) make sense and have some relevance to the functionality of the application, or does it appear to be auto-generated?
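Stepping back to Checkpoint S6, the cosine-similarity computation it relies on (lower-casing, stripping punctuation, building word-frequency vectors, then comparing) might look like the sketch below; the helper names are our own, not from the original study:

```python
import math
import string

def word_freq(text):
    # Lower-case, strip punctuation, and build a word-frequency vector.
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    freq = {}
    for word in cleaned.split():
        freq[word] = freq.get(word, 0) + 1
    return freq

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|), over the words of the two vectors.
    dot = sum(a.get(w, 0) * b.get(w, 0) for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)
```

Two near-duplicate descriptions from the same developer would score close to 1.0, tripping the 60-90% thresholds used in the study.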
The application identifier (appid) is a unique identifier in the Google Play Store, named following the Java package naming convention; for example, for Facebook the appid is com.facebook.katana. For 10% of the spam apps the average word length in the appid was higher than 10, which was true for only 2-3% of the non-spam apps. None of the non-spam apps had more than 20% non-letter bigrams in the appid, whereas 5% of spam apps did.

Training and Result

From 1,500 randomly sampled apps, 551 (36.73%) were flagged as suspected spam. [ ]

Methods

Automation

We used Checkpoints S1 and S2 for data management because of their comparability and the highest number of reviewer agreements. Due to limited access to the description data, only 100 samples were used for testing. We automated Checkpoints S1 and S2 according to the following algorithm. The collected data were log-transformed; this can be valuable both for making patterns in the data more interpretable and for helping to meet the assumptions of inferential statistics. The most time-consuming part was description collection, which took more than two weeks to find and store. The raw data pointed to the description link for each appID; however, many descriptions could not be found because the app was an old version or no longer available. So we searched all this information manually on the web, and each description found was saved as a file named by its appID (Diagram). This allowed us to recall the descriptions more efficiently in the automation code. S1 was automated using the 100 identified word-bigrams and word-trigrams that describe the functionality of applications. Because there is a high probability that a spam app does not have these words in its description, we counted the number of occurrences in each application. The full list of these bigrams and trigrams is found in Table 1.
Table 1. Bigrams and trigrams from the descriptions of top apps: play games are available is the game app for android you can get notified to find learn how get your is used to your phone to search way to core functionality a simple match your is a smartphone available for app for to play key features stay in touch this app is available that allows to enjoy take care of you have to you to can you beat buy your is effortless its easy to use try to allows you keeps you action game take advantage tap the take a picture save your makes it easy follow what is the free is a global brings together choose from is a free discover more play as on the go more information learn more turns on is an app face the challenges game from in your pocket your device on your phone make your life with android it helps delivers the offers essential is a tool full of features for android lets you is a simple it gives support for need your help enables your game of how to play at your fingertips to discover brings you to learn this game play with it brings navigation app makes mobile is a fun your answer drives you strategy game is an easy game on your way app which on android application which train your game which helps you make your

S2 had the second highest number of reviewer agreements in the previous study. Among the 551 identified spam apps, 144 were confirmed by S2: 63 with all 3 reviewers agreeing and 81 with 2 reviewers agreeing. We knew from the pre-research results that the total number of words in the description, the percentage of numeric characters, the percentage of non-alphabet characters, and the percentage of common English words would give the most distinctive features. Therefore, we automated the total number of words in the description and the percentage of common English words using C++.

Algorithm 1. Counting the total number of bi/tri-grams in the description

From the literature [], 16 features were used to extract the information for Checkpoint S2.
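The body of Algorithm 1 is not shown above; a minimal sketch of counting how many of the curated bi/tri-grams occur in a description could be as follows (the three-item `NGRAMS` list is an abbreviated excerpt of Table 1):

```python
# Algorithm 1 sketch: count occurrences of functionality bi/tri-grams
# in an app description. NGRAMS is a short excerpt of Table 1.
NGRAMS = ["app for android", "key features", "allows you"]

def count_ngrams(description, ngrams=NGRAMS):
    # Normalize case and whitespace, then count substring matches.
    text = " ".join(description.lower().split())
    return sum(text.count(g) for g in ngrams)

desc = "This app for android has key features that make life easy."
```

A low count suggests the description does not talk about app functionality at all, which is exactly the S1 signal the study relies on.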
This characterization was done with a wrapper method using a decision tree classifier, and they found that 30% of spam apps had fewer than 100 words in their description, while only 15% of the most popular apps did. We extracted a simple but key point from their result: the number of words in the description and the percentage of common English words. This was developed in C++ as follows.

Algorithm 2. Counting the total number of words in the description

int count_words(const std::string &input_text) {
    int number_of_words = 1;
    for (std::size_t i = 0; i < input_text.size(); ++i) {
        if (input_text[i] == ' ')
            ++number_of_words;
    }
    return number_of_words;
}

The percentage of common English words was not completed properly, due to the difficulty of selecting a standard word list. However, here is a corrected sketch of the code that we will develop in a future study.

Algorithm 3. Calculating the percentage of common English words (CEW) in the description

// Count how many words of the description appear in a set of common English words.
int count_cew(const std::vector<std::string> &words,
              const std::set<std::string> &cew) {
    int number_of_cew = 0;
    for (const std::string &w : words)
        if (cew.count(w))
            ++number_of_cew;
    return number_of_cew;
}

int percentage(int c_words, int words) {
    // Multiply before dividing so integer division does not truncate to zero.
    return words == 0 ? 0 : (c_words * 100) / words;
}

Normalization

For S1 and S2 we had variables in the range [min, max]. Because of the high skewness of the data, normalization was strongly required. Here normalization means min-max scaling: each value x is transformed to (x - min) / (max - min), so that all values lie in a fixed, comparable range. Using Excel, we normalized the data as shown in the following diagram. Through normalization, the transformed data lie between 0 and 1; this range is important for the later LVQ step. Diagram. Excel spreadsheet of automated data (left) and normalized data (right). After the transformation we wanted to test the data to show how the LVQ algorithm works with the modified attributes. Therefore, we sampled only 100 entries from the modified data set.
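The min-max normalization performed in Excel can equally be done in code; a small sketch (the example word counts are illustrative, not taken from the dataset):

```python
def min_max_normalize(values):
    # Rescale a list of numbers into [0, 1]: (x - min) / (max - min).
    lo, hi = min(values), max(values)
    if hi == lo:
        # Degenerate case: all values identical, map everything to 0.
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

word_counts = [84, 180, 121, 9, 692]  # e.g. description word counts
normalized = min_max_normalize(word_counts)
```

Because min-max scaling is driven by the extremes, a single outlier (like the 692-word description here) compresses the rest of the values toward 0, which is why the log transformation mentioned earlier is applied first on skewed data.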
Even though the result was not significant, the test was important: after this step, we can add more attributes in a future study and adjust the calibration. We randomly sampled 50 entities from the top-ranked 100 apps and 50 from the pre-identified spam data. The top 100 ranked apps were assumed highly likely to be non-spam. Diagram.

Initial Results

We used Python to perform Learning Vector Quantization (LVQ). LVQ is a prototype-based supervised classification algorithm that belongs to the field of artificial neural networks. It can be implemented for multi-class classification problems, and the model can be modified during the training process. The information-processing objective of the algorithm is to prepare a set of codebook (or prototype) vectors in the domain of the observed input data samples and to use these vectors to classify unseen examples. An initially random pool of vectors is prepared and then exposed to training samples. A winner-take-all strategy is employed: one or more of the vectors most similar to a given input pattern are selected and adjusted to be closer to the input vector, and in some cases further away for runners-up of a different class. Repeating this process results in a distribution of codebook vectors in the input space that approximates the underlying distribution of the samples. Our experiments were done using only the two attributes, due to data size. We performed 10-fold cross-validation on the data. It gives an average value of 56%, which was quite high compared to the previous study, considering that only two attributes were used to separate spam from non-spam. The LVQ program was done in 3 steps [ ]: 1. Euclidean distance; 2. Best matching unit; 3. Training codebook vectors.

1. Euclidean Distance. The distance between two rows in the dataset is required, computed across all the dataset’s dimensions.
The formula for calculating the distance takes the difference between the two rows, squares it, and sums it over the p variables:

distance(row1, row2) = sqrt( sum over i = 1..p of (row1_i - row2_i)^2 )

def euclidean_distance(row1, row2):
    distance = 0.0
    for i in range(len(row1) - 1):  # the last column holds the class label
        distance += (row1[i] - row2[i]) ** 2
    return sqrt(distance)

2. Best Matching Unit. Once the distance to every codebook vector can be computed, the codebooks are sorted by their distance to a new piece of data and the closest one is selected.

def get_best_matching_unit(codebooks, test_row):
    distances = list()
    for codebook in codebooks:
        dist = euclidean_distance(codebook, test_row)
        distances.append((codebook, dist))
    distances.sort(key=lambda tup: tup[1])
    return distances[0][0]

3. Training Codebook Vectors. Initial patterns are constructed from random rows of the training dataset.

def random_codebook(train):
    n_records = len(train)
    n_features = len(train[0])
    codebook = [train[randrange(n_records)][i] for i in range(n_features)]
    return codebook

Future Work

During the writing process, I found that data collection from the Google Play Store can be automated using a Java client. This would increase the size of the dataset and could improve accuracy while saving a great deal of time. Given the small number of attributes and the small random sample, the results of this research are not yet appropriate to call significant; however, a basic framework was developed from which accuracy can be improved.

Acknowledgement

Last summer, I did some research reading work under the supervision of Associate Professor Julian Jang-Jaccard. I received really great support from Julian and INMS. Thanks to the financial support I received from INMS, I could fully focus on my academic research and benefited a great deal from this amazing opportunity. The following is a general report of my summer research. At the beginning of summer, I studied the paper “A Detailed Analysis of the KDD CUP 99 Data Set” by M. Tavallaee et al. This gave me a basic idea of how to handle machine learning techniques.
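Returning to the LVQ implementation above: the three helpers (Euclidean distance, best matching unit, random codebook initialization) still need the training loop that nudges the winning codebook toward same-class inputs and away from different-class ones. A sketch of that step, with the helpers repeated so the block is self-contained and using the same convention of the class label in the last column (the linear learning-rate decay is a common choice, not taken from the report):

```python
from math import sqrt
from random import randrange, seed

def euclidean_distance(row1, row2):
    # Distance over feature columns only; the last column is the class label.
    return sqrt(sum((row1[i] - row2[i]) ** 2 for i in range(len(row1) - 1)))

def get_best_matching_unit(codebooks, test_row):
    return min(codebooks, key=lambda c: euclidean_distance(c, test_row))

def train_codebooks(train, n_codebooks, lrate, epochs):
    # Initialize codebooks from randomly sampled training rows, then pull
    # the winner toward each row on class match, push it away on mismatch.
    codebooks = [list(train[randrange(len(train))]) for _ in range(n_codebooks)]
    for epoch in range(epochs):
        rate = lrate * (1.0 - epoch / float(epochs))  # linear decay
        for row in train:
            bmu = get_best_matching_unit(codebooks, row)
            for i in range(len(row) - 1):
                error = row[i] - bmu[i]
                bmu[i] += rate * error if bmu[-1] == row[-1] else -rate * error
    return codebooks

seed(1)  # reproducible codebook initialization
train = [[0.1, 0.1, 0], [0.2, 0.0, 0], [0.9, 0.8, 1], [1.0, 0.9, 1]]
books = train_codebooks(train, n_codebooks=2, lrate=0.3, epochs=10)
```

Prediction is then just `get_best_matching_unit(books, new_row)[-1]`: the class label of the nearest trained prototype.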
Approach of KNN and LVQ

The main project followed the paper “Why My App Got Deleted: Detection of Spam Mobile Apps” by Suranga Seneviratne et al. I have tried my best to keep the report simple yet technically correct. I hope I have succeeded in my attempt.

Reference

Appendix

Modified Data
Number of words (thousands); bigram/trigram present; identified as spam (b) / not (g)
0.084 0 b
0.18 0 b
0.121 0 b
0.009 1 b
0.241 0 b
0.452 0 b
0.105 1 b
0.198 0 b
0.692 1 b
0.258 1 b
0.256 1 b
0.225 0 b
0.052 0 b
0.052 0 b
0.021 0 b
0.188 1 b
0.188 1 b
0.092 1 b
0.098 0 b
0.188 1 b
0.161 1 b
0.107 0 b
0.375 0 b
0.195 0 b
0.112 0 b
0.11 1 g
0.149 1 g
0.368 1 g
0.22 1 g
0.121 1 g
0.163 1 g
0.072 1 g
0.098 1 g
0.312 1 g
0.282 1 g
0.229 1 g
0.256 1 g
0.298 0 g
0.092 0 g
0.189 0 g
0.134 1 g
0.157 1 g
0.253 1 g
0.12 1 g
0.34 1 g
0.57 1 g
0.34 1 g
0.346 1 g
0.126 1 g
0.241 1 g
0.162 1 g
0.084 0 g
0.159 0 g
0.253 1 g
0.231 1 g

Friday, October 25, 2019

Essay --

Jefferson Finis Davis was born on June 3, 1808, in ____ Kentucky. He was the tenth of ten children. Davis was named after the third president of the United States of America, Thomas Jefferson. During his childhood Davis moved twice: at the age of 3 he moved to St. Mary Parish, Louisiana, and less than a year later to Wilkinson County, Mississippi. Three of his brothers served in the War of 1812. He began his education in 1813 at Wilkinson Academy, near the family cotton plantation. Davis later attended a Catholic school called Saint Thomas, where he was the only Protestant student in attendance. Davis went on to attend Jefferson College in Washington, Mississippi, in 1818, and then Transylvania University at Lexington, Kentucky, in 1821. His father Samuel died on July 4, 1824, when Jefferson was 16 years old. He attended the United States Military Academy starting in 1824. He was placed under house arrest after his involvement in the Eggnog Riot. In June 1828 he graduated 23rd in a class of 33. Following graduation, Second Lieutenant Davis was assigned to the 1st Infantry Regiment and was stationed at Fort Crawford, Prairie du Chien, Wisconsin Territory. Zachary Taylor had recently been placed in command of the fort when Davis arrived in early 1829. Davis returned to Mississippi on furlough in March 1832, his first leave since arriving at the fort. The Black Hawk War broke out while Davis was still in Mississippi, and he quickly returned to the fort in August 1832. At the end of the war, Colonel Taylor assigned him to the transportation of Black Hawk to prison. Davis soon fell in love with Sarah Knox Taylor, his commanding officer’s daughter. He pressed Sarah’s father for permission to marry, but he... ...inted and then elected to the U.S. Senate. He resigned his position to run for Governor of Mississippi. Although he was not successful, he was ultimately named Secretary of War under President Pierce.
He went back to the Senate in the 1840s and remained there until Mississippi seceded on January 9, 1861. Davis waited for official notification and addressed the Senate on January 21, 1861, calling it the “saddest day of his life.” He returned to Mississippi. Davis was first named Major General for the Army of Mississippi on January 23, 1861, and then elected Provisional President of the Confederate States of America and inaugurated in February. He was selected because of his military and political background. When Virginia joined the Confederacy, Davis moved the capital to Richmond in May 1861. By November he had been elected to a full six-year term as President.

Thursday, October 24, 2019

How We Are Teaching Children to Think Inside the Box Essay

When children come home from school, parents usually sit down with them, go through their homework folders and ask their child, “so, what did you learn at school today?” Twenty years ago, the child may have commented on what they learned in art, music, social studies or geography. Now, a child will comment only on what they learned in their reading circle or in their math book. The fault for this lies with the No Child Left Behind (NCLB) Act. Standardized testing has turned teachers into test proctors and schools into testing facilities. Students are no longer receiving a broad education that covers many subjects; instead, their learning is streamlined to fit the content that is on the standardized tests. The NCLB Act is not working as it was intended, and as a result American children are falling even further behind other developed nations. In fact, American students are ranked 19th out of 21 countries in math, 16th in science and last in physics (DeWeese 2). The No Child Left Behind Act needs to be tossed out before we do irreversible damage to the education system. It is not too late – we can turn everything around by getting rid of costly standardized tests, ensuring students receive a broad education that includes classes in arts and music, which will better prepare them for higher education, and giving control back to the individual states. In 2002, the No Child Left Behind Act was enacted by Congress, intended to close the learning gap between Caucasian students and minority students. The NCLB promised to promote accountability amongst teachers and school administrators, as well as assuring that all children would be proficient – according to standards set by the individual states – in reading and math by the end of the 2013-2014 school year (Ravitch 2). In addition, NCLB stated that by the end of the 2005-2006 school year every classroom in America would have a highly qualified teacher (Paige 2).
The most reliable way that the drafters of No Child Left Behind proposed collecting the data they needed in order to keep track of accountability and proficiency was by mandating that each state issue its students in grades 3 through 12 a standardized test annually covering the subjects of reading, writing and math (Beveridge 1). The test is given to all students, whether they are Caucasian, African American, Hispanic, disabled, etc., and schools are graded based on the proficiency of their students. Each state sets a yearly goal that increases each year based on the mandates of the NCLB Act, under which all students will be 100 percent proficient in those three subjects by the year 2014 (Ravitch 2). On paper, the NCLB Act looked like a blessing to schools located in low-income, minority areas and to advocates for children with learning disabilities, because these tests were meant to highlight the schools that are doing poorly and ensure they receive funding and training in order to turn the scores around (Darling-Hammond 1). In a letter addressed to parents on its website, the U.S. Department of Education explains that the NCLB Act provides “more resources to schools” through funding and “allows more flexibility” when allocating the funds (3). According to Linda Darling-Hammond, a Professor of Education at Stanford University, “the funding allocated by NCLB – less than 10 percent of most schools’ budgets – does not meet the needs of the under-resourced schools, where many students currently struggle to learn” (2). Another way schools get their funding is through the taxes that we pay. It makes sense that schools located in an area with higher income would receive more funds than schools located in a low-income area.
What happens is that with the limited funding, schools in low-income areas need to prioritize funding to raise the standardized test scores of their students, because once a school fails to show improvement in its standardized test scores, it is placed on probation the second year, and parents are given a choice to leave the failing school, taking their child and the funding attached to that child to a school that is rated better. “In the third year of a school’s failure, students are entitled to free tutoring after school,” according to Diane Ravitch, a research professor of education at New York University (2). The funding provided by NCLB is supposed to help pay for the free tutoring, but, as stated before, the funding provided is not enough. What happens when a school is mandated by law to provide resources but cannot find room in its budget? That’s right, it cuts funding elsewhere. In an article written by Angela Pascopella, the Austin Independent School District superintendent Pascal D. Forgione explains that “NCLB also requires that schools in need of improvement set aside 10 percent of their local Title 1 funds for professional development … this creates no flexibility in budgeting” (1). When schools need to restructure their budgets in order to pay for tutoring and retraining teachers, the arts and music programs are the ones that suffer most. NCLB places enormous emphasis on the outcome of the standardized tests. Can you really blame the school districts for re-emphasizing the importance of standardized tests when their funding relies on it? States were put in charge of providing their own assessment tests in order to provide a more focused education to their students and ensure that the students meet the state’s standards of proficiency.
Tina Beveridge explains that "in 2007, the Washington Assessment of Student Learning (WASL) cost the state $113 million … [and] many districts eliminated teaching positions as a result, despite the use of stimulus money. As budgets are cut nationwide, the funding for nontested subjects are affected first" (1). Basing the distribution of funds on standardized test scores means we are blatantly failing the inner-city schools. A school will be placed on probation if it fails just one category, ranging from the proficiency of Caucasian students all the way down to the proficiency of students who are just learning the English language. Schools in higher-income areas do not have to worry as much about budget cuts, because those schools are located in areas that are predominantly white, with parents who are active in their children's education. Schools in low-income areas, on the other hand, must provide tutoring and other mandated interventions to improve their proficiency rates, all while their students learn in "crumbling facilities, overcrowded classrooms, out-of-date textbooks, no science labs, no art or music courses and a revolving door of untrained teachers" (Darling-Hammond 2). After a few years without improvement in test scores, a school's entire teaching staff can be fired. We saw this happen just last year in Providence, Rhode Island, where the school board terminated 1,976 teachers because of insufficient results and the need to make budget cuts (Chivvis 1). The turnover rate for teachers is already extremely high, with as many as 50 percent leaving within five years in urban areas (McKinney et al. 1), and the pressure of working in a low-income district where schools lack basic teaching necessities is not at all appealing.
Low-income schools' inability to offer teachers incentives, combined with the added stress of poor job security, makes one wonder how any highly qualified teachers remain in the classroom. On top of that, the curriculum has grown so narrow that it has stripped away much of the creativity and individualization that once attracted the best of the best to the teaching profession. Susan J. Hobart is one of those teachers who used to love her job because she was leaving a positive mark on her students. In her article, Hobart tells of a letter she received from a student before the NCLB Act. The letter explained that Hobart was "different than other teachers, in a good way. [They] didn't learn just from a textbook; [they] experienced the topics by 'jumping into the textbook.' [They] got to construct a rainforest in [their] classroom, have a fancy lunch on the Queen Elizabeth II, and go on a safari through Africa" (3). The student goes on to explain that she hopes to teach in that same style when she becomes a teacher herself. Unfortunately, that dream will most likely not come true, because when schools are placed on probation, like Hobart's school, they "teach test-taking strategies similar to those taught in Stanley Kaplan prep courses … and spend an inordinate amount of time showing students how to 'bubble up'" (1). With all the time and energy spent teaching children to read and write, you would think they would be proficient by the time they enroll in college, right? Wrong. "42 percent of community college freshmen and 20 percent of freshmen in four-year institutions enroll in at least one remedial course … 35 percent were enrolled in math, 23 percent in writing, and 20 percent in reading," according to the Alliance for Excellent Education (1).
Schools are so reliant on standardized tests to gauge how well students understand material that they have slacked off in other areas, like teaching basic study skills and critical thinking. When most of these kids graduate from high school and enter a college setting, especially those who need remedial courses to catch up to where they should have been at graduation, they are taken completely off guard by the course load; some will succeed in managing it or struggle only for the first few semesters, but the majority will drop out without a degree (Alliance for Excellent Education 1). High school is meant to prepare students for higher education or the workforce, but the government is spending millions of dollars remediating students, doing what high school teachers were meant to do (Alliance for Excellent Education 3). So who is to blame? Supporters of No Child Left Behind acknowledge that the Act has faults, but those like Kati Haycock believe that "although NCLB isn't perfect, the Bush administration and Congress did something important by passing it. They called on educators to embrace a new challenge – not just access for all, but achievement for all … there are no more invisible kids" (1). Supporters feel that benefits such as holding teachers accountable for all students, including those with disabilities, and weeding out schools with a long history of poor performance outweigh the negatives, and that with time the NCLB Act can be reformed to work as efficiently as it was enacted to work. Ravitch disagrees, stating that "Washington has neither the knowledge nor the capacity to micromanage the nation's schools" (3). As concerned citizens and parents, we have to agree with her.
While the NCLB Act meant well when it was passed, it is time to acknowledge that the government has spent billions of dollars trying to improve the education of America's youth, yet ten years later American students are still falling behind the mark set by other industrialized nations, and the 2013-2014 school year is quickly approaching. Not only are we falling behind globally, but minorities are still struggling behind Caucasian students. The gap between Caucasian students and minority students, which the NCLB Act was intended to close, remains just as wide. E.E. Miller Elementary School, located here in Fayetteville, NC, just released its annual report card to parents. The chart below shows the breakdown of students who passed both the reading and math tests given at the end of the 2010-2011 school year. African American children, Hispanic children, and children with disabilities are still lagging far behind their Caucasian peers: African American children passed at 49.4 percent, students with disabilities at 25.5 percent, and Hispanic children at 56.9 percent. Remember that NCLB expects this school, along with every other school in the nation, to be at 100 percent proficiency by the end of the 2013-2014 school year.

[Chart: 2010-11 pass rates by student subgroup]
Source: Education First NC School Report Cards, "E. E. Miller Elementary: 2010-11 School Year," Public Schools of North Carolina State Board of Education. Web. 26 Oct. 2011.

To put this chart in perspective, below is the 3-year trend for E.E. Miller.

[Chart: 3-year reading and math score trend]
Source: Education First NC School Report Cards, "E. E. Miller Elementary: 2010-11 School Year," Public Schools of North Carolina State Board of Education. Web. 26 Oct. 2011.

While math scores are steadily improving, reading scores (the solid line) are declining. E.E. Miller has been on probation for at least three years and provided tutoring to struggling children last year.
Even with those efforts, the end-of-year test suggests those students are still struggling in reading. These mandates are not working. States are spending millions of dollars per year to fulfill all of the required obligations with nothing to show for it. We need to put education spending back into the hands of the states, with more substantial federal funding. The federal government cannot expect every public elementary school, middle school, and high school in this nation to fix a problem that has been prevalent for many, many years with this one-size-fits-all approach to learning. It will not happen with No Child Left Behind, and it definitely will not happen by the end of the 2013-2014 school year. We can no longer sit and watch while students in America struggle to compete on a global level in nearly all subjects. Teachers are not educating our nation's students to think critically and form their own ideas and opinions; instead, teachers in failing schools are stuck teaching a curriculum that corresponds directly to what is tested, and we are failing to prepare students for higher education. The future citizens we are molding will be of no use to society if they cannot think for themselves, which is what will happen if they remain in the current system. We need to undo this one-size-fits-all curriculum and re-broaden our children's education to include subjects that teach them to think outside the box.

Works Cited

Alliance for Excellent Education. "Paying Double: Inadequate High Schools and Community College Remediation." Issue Brief: August (2006). All4Ed.org. Web. 30 Oct. 2011.

Beveridge, Tina. "No Child Left Behind and Fine Arts Classes." Arts Education Policy Review 111.1 (2010): 4. MasterFILE Premier. EBSCO. Web. 20 Oct. 2011.

Chivvis, Dana. "Providence, RI, School Board Votes to Lay Off All Teachers." AOL News (2011). Web. 28 Oct. 2011.

Darling-Hammond, Linda. "No Child Left Behind is a Bad Law." Opposing Viewpoints. Web. 14 Oct. 2011.
DeWeese, Tom. "Public Education is Failing." Opposing Viewpoints. Web. 14 Oct. 2011.

Education First NC School Report Cards. "E. E. Miller Elementary: 2010-11 School Year." Public Schools of North Carolina State Board of Education. Web. 26 Oct. 2011.

McKinney, Sueanne E., et al. "Addressing Urban High-Poverty School Teacher Attrition by Addressing Urban High-Poverty School Teacher Retention: Why Effective Teachers Persevere." Educational Research and Review 3.1 (2007): 1-9. Academic Journals. Web. 28 Oct. 2011.

Paige, Rod. "No Child Left Behind: A Parent's Guide." U.S. Department of Education (2002). PDF file. 28 Oct. 2011.

Pascopella, Angela. "Talking Details on NCLB." District Administration 43.7 (2007): 22. MasterFILE Premier. EBSCO. Web. 28 Oct. 2011.

Ravitch, Diane. "Time to Kill 'No Child Left Behind'." Education Digest 75.1 (2009): 4. MasterFILE Premier. EBSCO. Web. 20 Oct. 2011.

Wednesday, October 23, 2019

Tecsmart Electronics Case Study

CASE STUDY: CHAPTER 2

I. TECSMART ELECTRONICS

1.) Discuss how the practices that TecSmart identified support Deming's 14 points.
* Create a vision and demonstrate commitment – The senior leaders set the objectives (mission and vision) and strategic goals of the company.
* Learn the new philosophy – The company uses customer feedback and market research to learn the new philosophy and improve the quality of its work.
* Institute training – All employees are trained in a 5-step problem-solving process and undergo customer relationship training.
* Improve constantly and forever – New product introduction teams work with design engineers and customers to ensure that design requirements are met during manufacturing and testing. The company conducts market research to come up with new products and designs.

2.) How do these practices support the Baldrige criteria? Specifically, identify which of the questions in the criteria each of these practices addresses.
* Leadership – Senior leaders guide cross-functional teams to review and develop individual plans for presentation to employees.
* Strategic planning – Senior leaders set company objectives.
* Customer and market focus – All complaints are handled by the vice president of sales, and all employees receive customer relationship training.
* Human resource focus – All employees are trained in handling problems.
* Process management – All employees are trained in the 5-step problem-solving process.
* Business results – Quality is assessed through internal audits, employee opinion surveys, and customer feedback.

3.) What are some obvious opportunities for improvement relative to the Baldrige criteria? What actions would you recommend that TecSmart take to improve its pursuit of performance excellence using the Baldrige criteria?
* Key results areas were not defined.
* The organization must become more active in its governance and social responsibility measures.
* Lack of a performance measurement system that looks into real-time marketplace performance and operational performance.

II. Can Six Sigma Work in Health Care?

1. What would be your agenda for this meeting? Stress accountability and recognize any achievements.

2. What questions would you need answered before proposing a Six Sigma implementation plan? Questions like: does everyone understand our goal, why we are implementing this Six Sigma plan, and how it can help us improve? Questions about S-M-A-R-T should also be taken into consideration (is it systematic, measurable, attainable, realistic, and time-bound?).

3. How would you design an infrastructure to support Six Sigma at SLRMC? I would let everyone know that the effectiveness of Six Sigma depends on making decisions that are critical to your customers and the health of your business. It forces us to think strategically and critically about where to allocate our limited resources to fix the most critical issues.