
Sunday, January 26, 2020

Software testing

1.0 Software Testing Activities

We start testing activities from the first phase of the software development life cycle. We may generate test cases from the SRS and SDD documents and use them during system and acceptance testing. Hence, development and testing activities are carried out simultaneously in order to produce good-quality, maintainable software on time and within budget. We may carry out testing at many levels and may also take the help of a software testing tool. Whenever we experience a failure, we debug the source code to find the reasons for it. Finding the reasons for a failure is a very significant testing activity; it consumes a huge amount of resources and may also delay the release of the software.

1.1 Levels of Testing

Software testing is generally carried out at different levels. There are four such levels, namely unit testing, integration testing, system testing, and acceptance testing, as shown in figure 8.1. The first three levels of testing are done by the testers, and the last level (acceptance testing) is done by the customer(s)/user(s). Each level has specific testing objectives. For example, at the unit testing level, independent units are tested using functional and/or structural testing techniques. At the integration testing level, two or more units are combined and tested to expose integration-related issues between them. At the system testing level, the system is tested as a whole; primarily functional testing techniques are used, but non-functional requirements like performance, reliability, usability, testability, etc. are also tested at this level, as is load/stress testing. The last level, acceptance testing, is done by the customer(s)/user(s) for the purpose of accepting the final product.

1.1.1 Unit Testing

We develop software in parts/units, and every unit is expected to have defined functionality.
We may call it a component, module, procedure, function, etc.; it has a purpose and may be developed independently and simultaneously with other units. A. Bertolino and E. Marchetti have defined a unit as [BERT07]: a unit is the smallest testable piece of software, which may consist of hundreds or even just a few lines of source code, and generally represents the result of the work of one or a few developers. The purpose of unit test cases is to ensure that the unit satisfies its functional specification and/or that its implemented structure matches the intended design structure [BEIZ90, PFLE01].

There are also problems with unit testing. How can we run a unit independently? A unit may not be completely independent: it may call a few units and also be called by one or more units. We may have to write additional source code to execute a unit. A unit X may call a unit Y, and a unit Y may call a unit A and a unit B, as shown in figure 8.2(a). To execute unit Y independently, we may have to write additional source code which handles the activities of the calling unit X and the activities of the called units A and B. The additional source code that stands in for the caller X is called a driver, and the additional source code that stands in for the callees A and B is called a stub. All of the additional source code written as stubs and drivers is called scaffolding, and it should be removed after the completion of unit testing. Testing a unit in isolation helps us locate an error easily, due to the small size of a unit, and many white-box testing techniques may be effectively applied at the unit level. We should keep stubs and drivers simple and small to reduce the cost of testing. If we can design units in such a way that they can be tested without writing stubs and drivers, we are very efficient and lucky; in practice this is difficult, and the need for stubs and drivers may not be eliminated.
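As a sketch of this scaffolding, consider the hypothetical units of figure 8.2(a) rendered in Python. The real units X, A and B are replaced by a driver and two stubs; all names and behaviours here are invented for illustration, not taken from any real codebase:

```python
# Scaffolding sketch for testing unit Y in isolation (hypothetical units).

def unit_a_stub(value):
    # Stub: returns a fixed, known result instead of running the real unit A.
    return value + 1

def unit_b_stub(value):
    # Stub: likewise stands in for the real unit B.
    return value * 2

def unit_y(value, unit_a=unit_a_stub, unit_b=unit_b_stub):
    # The unit under test: combines the results of its callees A and B.
    return unit_a(value) + unit_b(value)

def driver():
    # Driver: plays the role of the caller X, feeding Y known inputs and
    # checking its outputs against the (assumed) functional specification.
    assert unit_y(3) == 10   # (3 + 1) + (3 * 2)
    assert unit_y(0) == 1    # (0 + 1) + (0 * 2)
    return "unit Y passed"
```

Note that both stubs and the driver are deliberately trivial, in line with the advice above to keep scaffolding small and cheap.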
We may only minimize the need for scaffolding, depending upon the functionality and how it is divided among the various units.

1.1.2 Integration Testing

A software system may have many units. We test units independently during unit testing after writing the required stubs and drivers. When we combine two units, we may like to test the interfaces between them. We combine two or more units because they share some relationship; this relationship is represented by an interface and is measured by coupling. Coupling is the degree of interdependence between units. Two units with high coupling are strongly connected and thus dependent on each other; two units with low coupling are weakly connected and thus have low dependency on each other. Hence, highly coupled units are heavily dependent on other units, and loosely coupled units are comparatively less dependent on others, as shown in figure 8.3. Coupling increases as the number of calls between units increases or as the amount of shared data increases. A design with high coupling may have more errors. Loose coupling minimizes interdependence, and some of the steps to minimize coupling are:

(i) Pass only data, not control information.
(ii) Avoid passing undesired data.
(iii) Minimize parent/child relationships between calling and called units.
(iv) Minimize the number of parameters passed between two units.
(v) Avoid passing complete data structures.
(vi) Do not declare global variables.
(vii) Minimize the scope of variables.

The different types of coupling are data (best), stamp, control, external, common and content (worst). When we design test cases for interfaces, we should be very clear about the coupling between units; if it is high, a large number of test cases should be designed to test that particular interface. A good design has low coupling, which makes interfaces very important, and when interfaces are important, their testing is also important.
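The contrast between the best case (data coupling) and a worse case (control coupling) can be sketched with a small hypothetical example; the formatting functions below are invented purely for illustration:

```python
# Control coupling (worse): the caller passes a flag that steers the
# callee's internal logic, so the two units cannot change independently.
def format_value_controlled(value, as_percent):
    if as_percent:
        return f"{value * 100:.1f}%"
    return f"{value:.3f}"

# Data coupling (best): each caller passes only the data to be formatted,
# and each formatting decision lives in its own small unit.
def format_as_percent(value):
    return f"{value * 100:.1f}%"

def format_as_decimal(value):
    return f"{value:.3f}"
```

In the data-coupled version, a test case for an interface only needs to cover the data passed across it; in the control-coupled version, every value of the control flag multiplies the interface test cases, which is one reason rule (i) above says to pass only data.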
In integration testing, we focus on the issues related to the interfaces between units. Several integration strategies exist, although they have little basis in a rational methodology; they are shown in figure 8.4. Top-down integration starts from the main unit and keeps adding all called units of the next level. This portion should be tested thoroughly with a focus on interface issues; after completing integration testing at this level, the next level of units is added, and so on, until we reach the lowest-level units (the leaf units). No drivers are required, and only stubs are designed. In bottom-up integration, we start from the bottom (i.e., from the leaf units) and keep adding upper-level units until we reach the top (i.e., the root unit); no stubs are needed. A sandwich strategy runs from the top and bottom concurrently, depending upon the availability of units, and may meet somewhere in the middle. In figure 8.4, bottom-up integration begins its focus at the leaf units i, j and so on, while sandwich integration begins at a, b, i, j and so on.

Each approach has its own advantages and disadvantages. In practice, the sandwich integration approach is more popular; it can be started as and when any two related units are available. We may use any functional or structural testing technique to design the test cases. Functional testing techniques are easy to apply with a particular focus on the interfaces, and some structural testing techniques may also be used. When a new unit is added during integration testing, the software is considered changed software: new paths are exercised, new input and output conditions may emerge, and new control logic may be invoked. These changes may also cause problems with units that previously worked flawlessly.

1.1.3 System Testing

We perform system testing after the completion of unit and integration testing. We test the complete software along with its expected environment.
We generally use functional testing techniques, although a few structural testing techniques may also be used. A system is defined as a combination of the software, hardware and other associated parts that together provide product features and solutions. System testing ensures that each system function works as expected, and it also tests non-functional requirements like performance, security, reliability, stress, load, etc. This is the only phase of testing which tests both the functional and non-functional requirements of the system.

A team of testers performs system testing under the supervision of a test team leader. We also review all associated documents and manuals of the software; this verification activity is equally important and may improve the quality of the final product. Utmost care should be taken with defects found during the system testing phase, and a proper impact analysis should be done before fixing a defect. Sometimes, if the system permits, instead of being fixed, defects are simply documented and listed as known limitations. This may happen when fixing a defect is very time-consuming or is technically not possible in the present design. Progress in system testing also builds confidence in the development team, as this is the first phase in which the complete product is tested with a specific focus on the customer's expectations. After the completion of this phase, customers are invited to test the software.

1.1.4 Acceptance Testing

This is an extension of system testing. When the testing team feels that the product is ready for the customer(s), they invite the customer(s) for a demonstration. After the demonstration, the customer(s) may like to use the product to build their own satisfaction and confidence. This may range from ad hoc usage to systematic, well-planned usage of the product. This type of usage is essential before accepting the final product. The testing done for the purpose of accepting a product is known as acceptance testing.
It may be carried out by the customer(s) or by persons authorized by the customer. The venue may be the developer's site or the customer's site, depending on mutual agreement; generally, acceptance testing is carried out at the customer's site. Acceptance testing is carried out only when the software is developed for particular customer(s). If we develop software for anonymous customers (like operating systems, compilers, CASE tools, etc.), then acceptance testing is not feasible. In such cases, potential customers are identified to test the software, and this type of testing is called alpha/beta testing. Alpha testing is done by some potential customers at the developer's site under the direction and supervision of testers, whereas beta testing is done by many potential customers at their own sites without any involvement of the developers/testers.

1.2 Debugging

Whenever software fails, we would like to understand the reason(s) for the failure. After knowing the reason(s), we may attempt to find a solution and make the necessary changes in the source code; these changes will hopefully remove the cause of the failure. The process of identifying and correcting a software error is known as debugging. It starts after receiving a failure report and completes after ensuring that all corrections have been placed correctly and the software no longer fails on the same set of input(s). Debugging is quite a difficult phase and may become one of the causes of software delays. Every bug-detection process is different, and it is difficult to know how long it will take to find and fix a bug. Sometimes it may not be possible to detect a bug at all, or, if a bug is detected, it may not be feasible to correct it; such situations should be handled very carefully. In order to remove a bug, the developer must first discover that a problem exists, then classify the bug, locate where the problem actually lies in the source code, and finally correct it.
1.2.1 Why is debugging so difficult?

Debugging is a difficult process, probably due to human involvement and psychology. Developers become uncomfortable on receiving any request for debugging; it is taken as an affront to their professional pride. Shneiderman [SHNE80] has rightly commented on the human aspect of debugging: it is one of the most frustrating parts of programming. It has elements of problem solving or brain teasers, coupled with the annoying recognition that we have made a mistake. Heightened anxiety and the unwillingness to accept the possibility of errors increase the difficulty of the task. Fortunately, there is a great sigh of relief and a lessening of tension when the bug is ultimately corrected.

These comments explain the difficulty of debugging. Pressman [PRES97] has given some clues about the characteristics of bugs: the debugging process attempts to match symptom with cause, thereby leading to error correction. The symptom and the cause may be geographically remote; that is, the symptom may appear in one part of the program while the cause is actually located in another part. Highly coupled program structures further complicate this situation. A symptom may also disappear temporarily when another error is corrected. In real-time applications, it may be difficult to accurately reproduce the input conditions. In some cases, a symptom may be due to causes distributed across a number of tasks running on different processors.

Many factors make the debugging process difficult and time-consuming, but psychological reasons tend to prevail over technical ones. Over the years, debugging techniques have improved substantially, and they will continue to develop significantly in the near future. Some debugging tools are available, and they minimize human involvement in the debugging process. However, debugging is still a difficult area and consumes a significant amount of time and resources.
1.2.2 Debugging Process

Debugging means detecting and removing bugs from programs. Whenever a program generates unexpected behaviour, it is known as a failure of the program. A failure may be mild, annoying, disturbing, serious, extreme, catastrophic or infectious, and the actions required depend on the type of failure. The debugging process starts after receiving a failure report, either from the testing team or from users. The steps of the debugging process are: replicate the bug, understand the bug, locate the bug, fix the bug, and retest the program.

(i) Replicate the bug: The first step in fixing a bug is to replicate it, that is, to recreate the undesired behaviour under controlled conditions. The same set of input(s) should be given to the program under similar conditions, and the program, after execution, should produce the same unexpected behaviour. If this happens, we have replicated the bug. In many cases this is simple and straightforward: we execute the program on particular input(s), or we press a particular button on a particular dialog, and the bug occurs. In other cases, replication may be very difficult; it may require many steps or, in an interactive program such as a game, precise timing. In the worst cases, replication may be nearly impossible. But if we cannot replicate the bug, how will we verify the fix? Failure to replicate a bug is a real problem: any action which cannot be verified has no meaning, however important it may be. Some of the reasons for non-replication of a bug are:

- The user incorrectly reported the problem.
- The program failed due to hardware problems like memory overflow, poor network connectivity, network congestion, non-availability of system buses, deadlock conditions, etc.
- The program failed due to system software problems, for example the use of a different operating system, compiler, device driver, etc.
Any of the above reasons may cause the failure of the program even though there is no inherent bug in the program for that particular failure. Our effort should be to replicate the bug; if we cannot do so, it is advisable to keep the matter pending until we are able to replicate it. There is no point in playing with the source code for a situation which is not reproducible.

(ii) Understand the bug: After replicating the bug, we may like to understand it, that is, to find the reason(s) for the failure. There may be one or more reasons, and this is generally the most time-consuming activity. We must understand the program very clearly in order to understand a bug. If we are the designers and authors of the source code, there may be no problem understanding the bug; if not, we may face more serious problems. If the readability of the program is good and the associated documents are available, we may be able to manage. If the readability is poor (which happens in many situations) and the associated documents are not proper, the situation becomes very difficult and complex. We may call the designers; if we are lucky, they may still be with the company and we may get their help. Imagine otherwise: this is a real challenge, and in practice we often have to struggle with source code and documents written by persons no longer with the company. We may have to put in real effort to understand the program, working from the first statement of the source code to the last with a special focus on critical and complex areas. We should know where to look in the source code for any particular activity, and this study should also tell us the general way in which the program behaves. The worst cases are large programs written by many persons over many years.
Such programs may lack consistency and may become poorly readable over time due to various maintenance activities. We should simply do our best and try to avoid making the mess worse. We may also take the help of source code analysis tools for examining large programs. A debugger may also be helpful for understanding the program: a debugger executes a program statement by statement and can show the dynamic behaviour of the program using breakpoints. Breakpoints are used to pause the program at any point needed; at every breakpoint, we may look at the values of variables, the contents of relevant memory locations, registers, etc. The main point is that in order to understand a bug, program understanding is essential. We should put in the necessary effort before seeking the reasons for the software failure; otherwise we may waste effort unnecessarily.

(iii) Locate the bug: There are two portions of the source code to consider when locating a bug. The first portion is the code which causes the visible incorrect behaviour; the second portion is the code which is actually incorrect. In most situations the two portions overlap, but sometimes they lie in different parts of the program. We should first find the source code which causes the incorrect behaviour; after knowing the incorrect behaviour and its related portion of the source code, we may find the portion which is actually at fault. Sometimes it is easy to identify the faulty source code by manual inspection; otherwise, we may have to take the help of a debugger. If we have a core dump, a debugger can immediately identify the line which failed. A core dump is a printout of all registers and relevant memory locations; we should document core dumps and retain them for possible future use.
We may set breakpoints while replicating the bug, and this process may also help us locate it. Sometimes simple print statements help us locate the sources of the bad behaviour: they show the status of various variables at different locations in the program for a specific set of inputs, and a sequence of print statements can portray the dynamics of variable changes. However, print statements are cumbersome to use in large programs, and they may generate superfluous data which is difficult to analyze and manage.

Another useful approach is to add check routines to the source code to verify that data structures are in a valid state. Such routines may help us narrow down where data corruption occurs. If the check routines are fast, we may want to enable them always; otherwise, leave them in the source code and provide some mechanism to turn them on when needed.

The most useful and powerful way is source code inspection. This may help us understand the program, understand the bug and finally locate it. A clear understanding of the program is an absolute requirement of any debugging activity. Sometimes the bug may not be in the program at all: it may be in a library routine, in the operating system, or in the compiler. These cases are very rare, but they do occur, and if everything else fails we may have to consider such possibilities.

(iv) Fix the bug and retest the program: After locating the bug, we may fix it. Fixing a bug is a programming exercise rather than a debugging activity. After making the necessary changes in the source code, we have to retest it in order to ensure that the corrections have been made correctly and in the right place. Every change may also affect other portions of the source code; hence an impact analysis is required to identify the affected portions, and those portions should also be retested thoroughly.
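A check routine of the kind described above might look like the following sketch, which guards a hypothetical sorted-set structure; the names, the invariant and the toggle flag are all illustrative:

```python
import bisect

DEBUG_CHECKS = True  # the check is cheap, so it stays enabled; set False to turn it off

def check_sorted_unique(items):
    # Check routine: verifies the invariant of a hypothetical sorted-set
    # structure (strictly increasing order, hence no duplicates).
    assert all(a < b for a, b in zip(items, items[1:])), \
        f"data structure corrupted: {items!r}"

def insert_value(items, value):
    # A mutation that is expected to preserve the invariant; running the
    # check routine after every mutation narrows down where corruption
    # first occurs.
    pos = bisect.bisect_left(items, value)
    if pos == len(items) or items[pos] != value:
        items.insert(pos, value)
    if DEBUG_CHECKS:
        check_sorted_unique(items)
    return items
```

If a later change broke the insertion logic, the assertion would fire at the first corrupting call rather than at some distant symptom, which is exactly the narrowing-down benefit described above.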
The retesting that follows a fix is called regression testing, and it is a very important activity of any debugging process.

1.2.3 Debugging Approaches

There are many popular debugging approaches, but the success of any approach depends upon the understanding of the program. If the persons involved in debugging understand the program correctly, they may be able to detect and remove the bugs.

(i) Trial and error: This approach depends on the ability and experience of the debugging persons. After a failure report is received, it is analyzed and the program is inspected. Based on experience and intelligence, and using a hit-and-trial technique, the bug is located and a solution is found. This is a slow approach and becomes impractical in large programs.

(ii) Backtracking: This can be used successfully in small programs. We start at the point where the program gives an incorrect result, such as an unexpected output being printed. After analyzing the output, we trace backward through the source code manually until a cause of the failure is found. The source code from the statement where the symptom of failure appears to the statement where the cause of failure lies is analyzed properly. This technique brackets the location of the bug in the program, and subsequent careful study of the bracketed location may help us rectify it. An obvious variation of backtracking is forward tracking, where we use print statements or other means to examine a succession of intermediate results and determine at what point the result first became wrong. These approaches (backtracking and forward tracking) are useful only when the program is small; as program size increases, they become difficult to manage.

(iii) Brute force: This is probably the most common, though not the most efficient, approach to identifying the cause of a software failure. In this approach, memory dumps are taken, run-time traces are invoked, and the program is loaded with print statements.
When this is done, the mass of information produced may offer a clue that leads to the identification of the cause of a bug. Memory traces are similar to memory dumps, except that the printout contains only certain memory and register contents, and printing is conditional on some event occurring. Typical conditional events are entry to, exit from, or use of one of the following:

(a) a particular subroutine, statement or database;
(b) communication with I/O devices;
(c) the value of a variable;
(d) timed actuations (periodic or random) in certain real-time systems.

A special problem with trace programs is that the conditions are entered in the source code, so any change requires recompilation. A huge amount of data is generated which, although it may help to identify the cause, may be difficult to manage and analyze.

(iv) Cause elimination: Cause elimination proceeds by induction or deduction and also introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes. Alternatively, a list of all possible causes is developed and tests are conducted to eliminate each; we rule out causes one by one until a single one remains for validation. The cause is then identified, properly fixed, and the fix retested accordingly.

1.2.4 Debugging Tools

Many debugging tools are available to support the debugging process, and some manual activities can be automated using a tool. We may need a tool that executes a program one statement at a time and prints the value of any variable after each statement, freeing us from inserting print statements into the program manually. Run-time debuggers are designed for this purpose. In principle, a run-time debugger is nothing more than an automatic print-statement generator: it allows us to trace the program path and the variables without putting print statements in the source code. Practically every compiler on the market comes with a run-time debugger.
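The idea of a run-time debugger as an automatic print-statement generator can be sketched with Python's sys.settrace hook. Instead of printing, this version records each executed line together with its local variables; the traced function is a made-up example:

```python
import sys

TRACE = []  # (line number, local variables) for each executed line

def tracer(frame, event, arg):
    # The essence of a run-time debugger: observe the program path and
    # variable values without editing the source under test.
    if event == "line":
        TRACE.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def running_total(n):
    # Hypothetical function under observation.
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)
result = running_total(3)
sys.settrace(None)
# TRACE now holds one entry per executed line of running_total, showing
# how `total` and `i` evolve: the same information a sequence of manually
# inserted print statements would have produced.
```

A real debugger adds breakpoints, stepping and memory inspection on top of this same mechanism, but the core trace hook is as small as shown.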
It allows us to compile and run the program with a single compilation, rather than modifying the source code and recompiling as we try to narrow down the bug. Run-time debuggers may detect bugs in the program but may fail to find their causes; we may need a special tool to find the causes of failures and correct the bug. Some errors, like memory corruption and memory leaks, may be detected automatically. This automation changed the debugging process, because it automated the process of finding the bug: a tool detects an error, and our job is simply to fix it. Such tools are known as automatic debuggers and come in several varieties. The simplest ones are just a library of functions that can be linked into a program; when the program executes and these functions are called, the debugger checks for memory corruption and reports it if found.

Compilers are also used for finding bugs, although they check only syntax errors and particular types of run-time errors. Compilers should give proper and detailed error messages, which are of great help to the debugging process. A compiler may present such information in the attribute table printed along with the listing; the attribute table contains the various levels of warnings picked up by the compiler scan. Compilers now come with error-detection features, and there is no excuse for designing a compiler without meaningful error messages. We may apply a wide variety of tools, like run-time debuggers, automatic debuggers, automatic test-case generators, memory dumps, cross-reference maps, compilers, etc., during the debugging process. However, tools are no substitute for careful examination of the source code after a thorough understanding has been developed.

1.3 Software Testing Tools

The most effort-consuming task in software testing is designing the test cases. Executing those test cases may not require much time and resources.
Hence, the design part is more significant than the execution part, and both parts are normally handled manually. Do we really need a tool? If yes, where and when can we use it: in the first part (design of test cases), in the second part (execution of test cases), or in both? Software testing tools may be used to reduce the time of testing and to make testing as easy and pleasant as possible. Automated testing may be carried out without human involvement. This helps in areas where a similar data set must be given as input to the program again and again; a tool can do the repeated testing unattended, during nights or weekends, without human intervention. Many non-functional requirements may also be tested with the help of a tool. If we want to test the performance of software under load, which may otherwise require many computers, manpower and other resources, a tool can simulate multiple users on one computer, including the situation where many users access a database simultaneously.

There are three broad categories of software testing tools: static, dynamic and process management. Most tools fall clearly into one of these categories, but there are a few exceptions, like mutation analysis systems, which fall into more than one. A wide variety of tools is available, of differing scope and quality, and they assist us in many ways.

1.3.1 Static Software Testing Tools

Static software testing tools are those that analyze programs without executing them at all. They may also find source code which will be hard to test and maintain. As we all know, static testing is about prevention and dynamic testing is about cure; we should use both kinds of tools, but prevention is always better than cure. Static tools can find many bugs before the program is ever executed, complementing dynamic testing tools (where we execute the program). There are many areas for which effective static testing tools are available, and they have demonstrably improved the quality of software.
(i) Complexity analysis tools: The complexity of a program plays a very important role in determining its quality. A popular measure is cyclomatic complexity, as discussed in chapter 4; it indicates the number of independent paths in the program and depends upon the number of decisions in it. A high cyclomatic complexity value may indicate poor design and risky implementation. The measure may also be applied at the module level: modules with high cyclomatic complexity may either be redesigned or tested very thoroughly. Other complexity measures are also used in practice, like the Halstead software size measures, the knot complexity measure, etc. Tools are available based on each of these measures; such a tool takes the program as input, processes it, and produces a complexity value as output. This value may be an indicator of the quality of the design and implementation.

(ii) Syntax and semantic analysis tools: These tools find syntax and semantic errors. Although the compiler detects all syntax errors during compilation, early detection of such errors may help minimize other associated errors. Semantic errors are very significant, and compilers are often helpless to find them. There are tools on the market that analyze the program and find such errors: non-declaration of a variable, double declaration of a variable, division by zero, unspecified inputs, and non-initialization of a variable are some of the issues which semantic analysis tools may detect. These tools are language dependent; they parse the source code, maintain a list of errors and provide implementation information. The parser may find semantic errors as well as make inferences about what is syntactically correct.

(iii) Flow graph generator tools: These tools are language dependent; they take the program as input and convert it to its flow graph.
The flow graph may be used for many purposes, like complexity calculation, path identification, generation of definition-use paths, program slicing, etc.
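As an illustration of the complexity-calculation purpose, here is a minimal sketch of a cyclomatic-complexity estimator for Python source. It assumes McCabe's decisions-plus-one formulation and simply counts branching constructs in the syntax tree; a real tool would derive the value from the flow graph itself:

```python
import ast

def cyclomatic_complexity(source):
    # Rough estimate: V(G) = number of decision points + 1.
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # each extra operand of `and`/`or` adds one decision
            decisions += len(node.values) - 1
    return decisions + 1

# Hypothetical input program for the tool.
SAMPLE = """
def classify(x):
    if x < 0:
        return "negative"
    for d in (2, 3):
        if x % d == 0:
            return "divisible"
    return "other"
"""
```

Run on SAMPLE, the estimator counts three decision points (one if, one for, one nested if) and reports a complexity of 4, the kind of per-module value such tools report as a redesign or test-thoroughly signal.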

Saturday, January 18, 2020

Kant Hypothetical and Categorical Imperatives Essay

In the Grounding for the Metaphysics of Morals, Immanuel Kant proposes a very significant discussion of imperatives as expressed by what one "ought" to do. He conveys this notion by presenting the audience with two kinds of imperatives: categorical and hypothetical. The discussion Kant proposes is designed to formulate the expression of one's action. By distinguishing between categorical and hypothetical imperatives, Kant argues that categorical imperatives apply moral conduct in relation to performing one's duty within the contents of good will.

According to Kant, the representation of an objective principle, insofar as it necessitates the will, is called a command, which formulates the notion of an imperative. Imperatives are simply a formula of reason; they determine the will of the action and can be expressed in terms of what one ought to do. For example, take the command "Sit down!" Kant expresses this command as an imperative by stating, "You ought to sit down!" All imperatives are formulated by doing an action according to the standard of a will that it will provide a good ending in some way. If the end action is good as a means to something else, then it is considered a hypothetical imperative; on the other hand, if the action is good in itself, then it is considered a categorical imperative. Thus, Kant draws a distinction between these two kinds of imperatives.

The first imperative that Kant proposes is the hypothetical. A hypothetical imperative states only that an action is good for some purpose, either possible or actual. In a hypothetical imperative, the action is done out of necessity for some purpose. Hypothetical imperatives take on the general form "If ... then ...": "if" is considered the antecedent and "then" the conditional. Hypothetical imperatives tell us what we should do provided that we have certain desires.
For example, â€Å"If you want to get an A, then you ought to study. † Wanting to get an A is required of one insofar as one is committed to studying. In other terms, if one desire is to get an A then the action one must take is to study in order to fulfill that desire. Hypothetical imperatives can further more be explained by breaking them down into what Kant calls â€Å"rules of skills,† and â€Å"counsels of prudence†. Rules of skills simply imply the notion that there is something that you have to do; how one must accomplish something. An example of this is, â€Å"If you want to get well than you ought to take your medications. † The action in accordance to the rule of skills implies the importance of taking your medications. Kant noted that there is no question at all whether the end is reasonable and good, but there is only a question as to what must be done to attain it. Moreover, the counsel of prudence examines just that. The antecedent â€Å"If† refers to the varying degrees of happiness within an individual. â€Å"If you want to be happy then you ought to invest in a retirement plan. † One’s motive to be happy (happiness as it implies to individualism) is fulfilled through the action. The action is done through the perception of prudence as it commands not absolutely but only as a means to further the purpose. In this respect, hypothetical imperatives apply actions of good in a conditional way. It is formulated that you need to know what the condition is before you act. Conditions are based upon a posteriori referring to experiences of knowledge due to ones own result. Therefore hypothetical imperatives do not allow us to act in a moral way because they are based upon desires and experiences rather than good will or moral conduct. 
In contrast with hypothetical imperatives, which depend on an individual having a particular desire or purpose (such as wanting to get an A), categorical imperatives describe what we are required to do independently of what we may desire or prefer. A categorical imperative is the only imperative which immediately commands a certain conduct without having as its condition any other purpose to be attained by it. Categorical imperatives are moral obligations that do not have an "if… then…" form. In this respect they prescribe behavior categorically. They are not "if you want x, then you ought to do y"; rather, they take the form "you should do y". Kant states that a categorical imperative is limited by no condition and can quite properly be called a command, since it is absolutely, though practically, necessary. Categorical imperatives are concerned with the form of an action and the principle from which that action follows. The moral action is good in itself, as in the notion of practical reasoning. Unlike hypothetical imperatives, categorical imperatives are independent of experience; they are a priori. This is because one's moral principle is not based upon previous experience, but is instead rooted in good will and one's ability to perform one's moral duty. Kant refers to this principle as the principle of morality, for it is from this that all our moral duties are derived. The basic principle of morality is important because it commands certain courses of action. It is a categorical imperative because it commands unconditionally. It is also independent of the particular ends and desires of the moral agent. One can never really know the end motive for which an action is performed, but one can conclude that the action was done according to the moral duty of good will. Having good will, or practical reasoning, lays a foundation that enables categorical imperatives to command what is pure and simple. 
A good will is good not because one wants to attain happiness or a purpose, but good in itself. Kant explains that there is no possibility of thinking of anything at all in the world, or even out of it, which can be regarded as good without qualification, except a good will. Therefore, in accordance with good will, one must act as if the maxim of one's action were to become a universal law. Kant first mentioned the notion of the categorical imperative when he proposed the moral, or universal, law: I should never act except in such a way that I can also will that my maxim should become a universal law. Since maxims are basically principles of action, the categorical imperative commands that one should act only on universal principles that could be adopted by all rational agents, such as human beings. Actions that are done from duty are done out of respect for the moral law. Duty is the necessity to act out of reverence for the law set by the categorical imperative. Because the consequences of an act are not the source of its moral worth, the source must be the maxim under which the act is performed, excluding all aspects of desire. Thus, a categorical imperative has moral content if, and only if, it is carried out solely with regard to a sense of moral duty in coordination with good will. Clearly one can see that Kant believes in the expression of actions through imperatives. By proposing imperatives, he formulated a command of reason. Whereas hypothetical imperatives address actions done for a desire or a purpose, categorical imperatives address actions that result from moral conduct and good will. In distinguishing the difference between these two imperatives, Kant's main objective is to provide his readers with a clear understanding that actions based upon imperatives can be approached from two different views, but the end result always provides good in some way.

Friday, January 10, 2020

The History Of Concrete In The Building Industry Construction Essay

Throughout history, the use of concrete as a building material has contributed significantly to the built environment. Enduring examples of various forms of concrete can be found as far back as the early Egyptian civilization. Significant building remnants still exist from the Roman civilization, which used concretes made from naturally occurring volcanic ash pozzolans mixed with water, sand and stone. Today concrete is used in the construction of durable bridges, roads, water supply systems, hospitals, churches, houses and commercial buildings, giving people a social foundation, a thriving economy, and serviceable facilities for many years. In the modern era, the properties of concrete were refined in the late 1800s with the introduction of a patented manufacturing process for Portland cement. While it has ancient roots, concrete as we know it today is a modern and highly advanced building material. In the last 150 years, concrete has become one of the most widely used building materials on Earth.

Figure: Traditional tools used for concrete mixing.

Problem Statement

Concrete is one of the most widely used construction materials in the world. However, the production of Portland cement, an essential ingredient in concrete, releases a significant amount of CO2, a greenhouse gas. The production of one ton of Portland cement clinker is said to create about one ton of CO2 and other greenhouse gases. Environmental issues play an important role in the sustainable development of the cement and concrete industry. For example, if we run out of limestone, as is predicted to happen in some places, then we cannot produce Portland cement; consequently we cannot produce concrete, and all the employment associated with the concrete industry goes out of business. A sustainable concrete structure is one that is constructed so that the total environmental impact during its entire life cycle is minimal. 
Concrete is a sustainable material because it has a very low inherent energy requirement and is produced to order as needed with very little waste. It is made from some of the most plentiful resources on Earth and has a very high thermal mass. It can be made with recycled materials and is completely recyclable. Sustainable design and construction of structures have a small impact on the environment. Use of "green" materials embodies low energy costs, and their use must offer high durability and low maintenance to qualify as sustainable construction materials. High-performance cements and concrete can reduce the amount of cementitious materials and the total volume of concrete required. Concrete must keep evolving to satisfy the increasing demands of all its users. Reuse of post-consumer wastes and industrial by-products in concrete is necessary to produce even "greener" concrete. "Greener" concrete also improves air quality, minimizes solid wastes, and leads to a sustainable cement and concrete industry.

What is Sustainable Concrete?

Concrete is a very environmentally friendly material. It has been used for over 2,000 years and is best known for its durable and reliable nature. However, additional ways that concrete contributes to social progress, economic growth, and environmental protection are often overlooked. Concrete structures are superior in energy performance. They provide flexibility in design as well as affordability, and are environmentally more responsible than steel or aluminium structures. Entire geographical regions are running out of limestone resources to produce cement, and major metropolitan areas are running out of sources of aggregates for making concrete. Sustainability requires that engineers consider a building's "lifecycle" cost extended over its useful lifetime. This includes the building's construction, maintenance, demolition, and recycling [ACI 2004]. 
A sustainable concrete structure is one that is constructed so that the total societal impact during its entire life cycle, including during its use, is minimal. Designing for sustainability means accounting in the design for both the short-term and long-term effects of that societal impact. Durability is therefore the key issue, and a new generation of admixtures and additives is needed to improve it. Building in a sustainable manner and conducting scheduled, appropriate building maintenance are the keys that represent the "new construction ideology" of this generation. In particular, to build in a sustainable manner means to focus attention on physical, environmental, and technological resources, problems related to human health, energy conservation in new and existing buildings, and control of construction technologies and methods.

Environmental Issues with Concrete

The production of Portland cement releases CO2 and other greenhouse gases (GHGs) into the atmosphere. Total CO2 emissions worldwide were 21 billion tons in 2002 (Table 1).

Table 1. CO2 emissions by industrialized countries in 2002 [Malhotra 2004].

Country | Percentage of CO2 emissions
USA     | 25
Europe  | 20
Russia  | 17
Japan   | 8
China   | >15
India   | >10

Environmental issues associated with CO2 emissions from the production of Portland cement, energy demand (six million BTU of energy needed per ton of cement production), resource conservation considerations, and the economic impact of the high cost of Portland cement manufacturing plants demand that supplementary cementing materials in general, and fly ash in particular, be used in increasing quantities to replace Portland cement in concrete [Malhotra 1997, 2004]. Fly ash is a by-product of the combustion of pulverized coal in thermal power plants. The dust collection system removes the fly ash, as a fine particulate residue, from the combustion gases before they are discharged into the atmosphere. For each ton of Portland cement clinker, 3 to 20 lb. 
of NOx are released into the atmosphere. In 2000, worldwide cement clinker production was about 1.6 billion tons [Malhotra 2004]. Longer-lasting concrete structures reduce energy requirements for maintenance and reconstruction. Concrete is a locally available material; hence, transportation cost to the project site is reduced. Light-colored concrete walls reduce interior lighting requirements. Permeable concrete pavement and interlocking concrete pavers can be used to reduce runoff and allow water to return to the water table. Thus, concrete is in many ways an environmentally friendly material, and as good engineers, we must use more of it [Malhotra 2004]. In view of the energy and greenhouse gas emission concerns in the manufacture of Portland cement, it is imperative that either new environmentally friendly cement-manufacturing technologies be developed or substitute materials be found to replace a major portion of the Portland cement used in the concrete industry [Malhotra 2004]. Energy consumption is the biggest environmental concern with cement and concrete production. Cement production is one of the most energy-intensive of all industrial manufacturing processes. Including direct fuel use for mining and transporting raw materials, cement production takes about six million BTUs for every ton of cement. The industry's heavy reliance on coal leads to especially high emission levels of CO2, nitric oxide, and sulfur, among other pollutants. A sizable portion of the electricity used is also generated from coal.

What types of materials are being used to make sustainable concrete?

Coal combustion products (CCPs): It is important to develop recycling technology for high-volume applications of coal combustion products (CCPs) generated by using both conventional and clean-coal technologies. Many different types of CCPs are produced; for example, fly ash, bottom ash, cyclone-boiler slag, and clean coal ash. 
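The emissions figures above (roughly one ton of CO2 per ton of clinker) can be turned into a rough estimate of how much CO2 a mix avoids when fly ash replaces part of the cement. A minimal sketch follows; the 0.9 clinker-to-cement factor and the 100-ton mix quantities are illustrative assumptions, not figures from the essay.

```python
# Back-of-the-envelope embodied-CO2 estimate for the cementitious
# materials in a concrete mix, using the text's rule of thumb of
# ~1 ton CO2 per ton of Portland cement clinker.

CO2_PER_TON_CLINKER = 1.0   # tons CO2 per ton of clinker (text's figure)
CLINKER_FRACTION = 0.9      # tons clinker per ton of cement (assumed)

def mix_co2(cement_tons: float, fly_ash_tons: float = 0.0) -> float:
    """Estimate production CO2 (tons) for the binder in a mix.

    Fly ash is a combustion by-product, so it is credited with zero
    production CO2 here -- a deliberate simplification.
    """
    return cement_tons * CLINKER_FRACTION * CO2_PER_TON_CLINKER + fly_ash_tons * 0.0

plain = mix_co2(cement_tons=100.0)                      # all Portland cement
blended = mix_co2(cement_tons=50.0, fly_ash_tons=50.0)  # 50% fly ash replacement

print(f"plain mix:   {plain:.1f} t CO2")
print(f"blended mix: {blended:.1f} t CO2 "
      f"({100 * (1 - blended / plain):.0f}% reduction)")
```

Under these assumptions, a 50% fly ash replacement halves the binder's production CO2, which is the motivation behind the high-volume fly ash mixes discussed later.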
In general, some of these CCPs can be used as supplementary cementitious materials, and the use of Portland cement can therefore be reduced. The production of CCPs in the USA was about 120 million tons per year in 2004. Cyclone-boiler slag is 100% recycled; the overall recycling rate of all CCPs is approximately 40%.

Figure: Fly ash is a by-product of coal-fired power plants.

Today's use of other pozzolans, such as rice-husk ash, wood ash, GGBFS, silica fume, and similar pozzolanic materials such as volcanic ash, natural pozzolans, diatomite (diatomaceous earth), calcined clay/shale, metakaolin, very fine clean-coal ash (microash), limestone powder, and fine glass, can reduce the use of manufactured Portland cement, make concrete more durable, and reduce GHG emissions. The chemical composition of ASTM Type I Portland cement and selected pozzolans is given in Table 2.

Table 2. Chemical composition of CCPs.

Oxides (%) | Portland Cement | St. Helen's ash | VPP Class F ash | Columbia Unit #1 fly ash | P-4 Class C ash
SiO2       | 20.1 | 62.2 | 48.2 | 44.8 | 32.9
Al2O3      | 4.4  | 17.6 | 26.3 | 22.8 | 19.4
CaO        | 57.5 | 5.7  | 2.7  | 17.0 | 28.9
MgO        | 1.6  | 2.2  | 1.1  | 5.1  | 4.8
Fe2O3      | 2.4  | 5.6  | 10.6 | 4.2  | 5.4
TiO2       | 0.3  | 0.8  | 1.2  | 1.0  | 1.6
K2O        | 0.7  | 1.2  | 2.3  | 0.4  | 0.3
Na2O       | 0.2  | 4.6  | 1.1  | 0.3  | 2.0
Moisture   | 0.2  | 0.4  | 0.4  | 0.1  | 0.8
LOI        | 1.1  | 0.6  | 7.9  | 0.3  | 0.7

Recycled-Aggregate Concrete

Recycled-aggregate concrete (RAC) for structural use can be prepared by completely replacing natural aggregate, in order to achieve the same strength class as a reference concrete manufactured using only natural aggregates. This is obviously a limitation, since a large enough stream of recycled aggregates to allow full substitution of natural aggregates is not available. However, it is useful to demonstrate that manufacturing structural concrete by partially replacing natural with recycled aggregates, by up to fifty percent, is indeed feasible. 
In any case, if the adoption of a very low water-to-cement ratio implies unsustainably high amounts of cement in the concrete mixture, recycled-aggregate concrete may also be manufactured by using a water-reducing admixture in order to lower both water and cement dosage, or even by adding fly ash as a partial fine aggregate replacement and using a superplasticizer to achieve the required workability. High-volume fly ash recycled-aggregate concrete (HVFA-RAC) can be manufactured with a water-to-cement ratio of 0.60 by simultaneously adding to the mixture as much fly ash as cement, replacing the fine aggregate fraction. In this way, a water-to-cementitious-materials ratio of 0.30 is obtained, enabling the concrete to reach the required strength class (Table 3). This procedure is essential for designing an environmentally friendly concrete. All the concretes can be prepared maintaining the same fluid consistency by proper addition of an appropriate class of superplasticizer.

Table 3. Comparison of recycled-aggregate concrete and virgin aggregate.

Property                                | Virgin Aggregate                                                      | RAC
Shape and texture                       | Well rounded, smooth (crushed gravel) to angular and rough (crushed stone) | Angular with rough surface
Absorption capacity                     | 0.8-3.7 percent                                                       | 3.7-8.7 percent
Specific gravity                        | 2.4-2.9                                                               | 2.1-2.4
L.A. abrasion test mass loss            | 15-30 percent                                                         | 20-45 percent
Sodium sulfate soundness test mass loss | 7-21 percent                                                          | 18-59 percent
Magnesium sulfate soundness mass loss   | 4-7 percent                                                           | 1-9 percent
Chloride content                        | 0-1.2 kg/m3                                                           | 0.6-7.1 kg/m3

SUSTAINABLE CONCRETE SOLUTIONS

Concrete is a strong, durable, low-environmental-impact building material. It is the cornerstone of building construction and infrastructure that can put future generations on the road towards a sustainable future [Cement Association of Canada 2004]. 
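The HVFA-RAC proportioning arithmetic described above (water-to-cement ratio 0.60, with fly ash added in the same amount as cement, yielding a water-to-cementitious-materials ratio of 0.30) can be sketched as follows; the 300 kg/m3 cement dosage is an illustrative assumption, not a figure from the essay.

```python
# Water-to-cementitious-materials (w/cm) ratio for a high-volume
# fly ash mix: w/cm = water / (cement + fly ash).

def w_cm_ratio(water: float, cement: float, fly_ash: float) -> float:
    """Return the water-to-cementitious-materials ratio."""
    return water / (cement + fly_ash)

cement = 300.0          # kg/m3 of cement (assumed dosage)
water = 0.60 * cement   # w/c = 0.60, so 180 kg/m3 of water
fly_ash = cement        # "as much fly ash as cement"

print(f"w/c  = {water / cement:.2f}")                       # 0.60
print(f"w/cm = {w_cm_ratio(water, cement, fly_ash):.2f}")   # 0.30
```

Halving the effective ratio this way is what lets the mix reach the required strength class without increasing the water content.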
The benefits of concrete construction are many; for example [Cement Association of Canada 2004]: concrete buildings reduce maintenance and energy use; concrete highways reduce fuel consumed by heavily loaded trucks; insulating concrete homes reduce energy use by 40% or more; fly ash, cement kiln dust, or cement-based solidification/stabilization and in-situ treatment of waste enable brownfield redevelopment; and agricultural waste containment reduces odor and prevents groundwater contamination. The concrete industry must show leadership and resolve, and contribute to the sustainable development of the industry in the 21st century by adopting new technologies to reduce emission of greenhouse gases, and thus contribute towards meeting the goals and objectives set in the 1997 Kyoto Protocol. The manufacture of Portland cement is one such industry [Malhotra 2004].

6 PORTLAND CEMENT

Portland cement is not a very environmentally friendly material. As good engineers, we must reduce its use in concrete [Malhotra 2004], and we must use more blended cements, especially with chemical admixtures. Clinker production is the most energy-intensive stage in cement production, accounting for over 90% of total energy use and virtually all of the fuel use. Processing of raw materials in large kilns produces Portland cement clinker. These kiln systems evaporate the inherent water in the raw materials blended to manufacture the clinker, calcine the carbonate components (calcination), and form cement minerals (clinkerization) [Worrell & Galitsky 2004].

6.1 Blended cements

The production of blended cements involves the intergrinding of clinker with one or more additives, e.g., fly ash, granulated blast furnace slag, silica fume, or volcanic ash, in various proportions. 
The use of blended cements is a particularly attractive efficiency option, since the intergrinding of clinker with other additives not only allows for a reduction in the energy used (and reduced GHG emissions) in clinker production, but also directly corresponds to a reduction in the carbon dioxide emitted in calcination as well. Blended cement has been used for many decades around the world [Worrell & Galitsky 2004].

6.2 Concrete and the use of blended cements

Although it is most common to use supplementary cementing materials (SCMs) as a replacement for cement in the concrete mixture, blended cement is produced at the grinding stage of cement production, where fly ash, blast furnace slag, or silica fume is added to the cement itself. The advantages include expanded production capacity, reduced CO2 emissions, reduced fuel consumption and close monitoring of the quality of SCMs [Cement Association of Canada 2004]. The Kyoto Protocol (the UN pact of 1997 requiring reductions in GHGs, including CO2) is now ratified; the USA has not ratified it, but the Russian Government's approval allowed it to come into force worldwide. By 2012, emissions must be cut below 1990 levels (in Japan by 6.0 + 7.6 = 13.6% by 2012) [The Daily Yomiuri 2004]. In Japan, a green tax of 5,000 yen per household per year is planned (starting April 2005), including 3,600 yen in tax per ton of carbon; the revenue would be used to implement policies to achieve the requirements of the Kyoto Protocol. A survey released on Oct. 21, 2004 showed that 61% of those polled are in favor of the environmental tax [The Japan Times 2004]. The rate of CO2 emission and global warming is shown in Figure 1. In the last two years, CO2 has increased at a higher rate than expected [Corinaldesi & Moriconi 2004b]. 
6.3 Foundry by-products

Foundry by-products include foundry sand, core butts, abrasives, and cupola slag. Cores are used to make desired cavities and shapes in a sand cast into which molten metal is poured. Cores are chiefly composed of silica sand with small percentages of either organic or inorganic binders.

Conclusions

The most important conclusion drawn appears to be that the compressive strength of recycled-aggregate concrete can be improved to equal or even exceed that of natural-aggregate concrete by adding fly ash to the mixture as a fine aggregate replacement. In this way, a given strength class value, as required for a wide range of common uses, can be reached through both natural-aggregate concrete and recycled-aggregate concrete with fly ash, by adequately decreasing the water-to-cement ratio with the aid of a superplasticizer in order to maintain workability. Concrete manufactured using recycled aggregate and fly ash shows no detrimental effect on the durability of reinforced concrete, with some improvement in certain cases. From an economic point of view, if only the traditional costs are taken into account, recycled-aggregate concrete with fly ash could be less attractive than natural-aggregate concrete. However, if the eco-balanced costs are considered, the exact opposite would be valid. Furthermore, the fine fraction with particle size up to 5 mm, when reused as aggregate for mortars, allowed excellent bond strengths between mortar and bricks, in spite of a lower mechanical performance of the mortar itself. Also, masonry rubble can be profitably treated and reused for preparing mortars. Even for the fine fraction produced during the recycling process, that is, the concrete-rubble powder, an excellent reuse was found as filler in self-compacting concrete. 
The attempt to improve the quality of recycled aggregates for new concretes, by recycling in different ways the most damaging fractions, i.e., the material coming from masonry rubble and the finest recycled materials, allowed surprising and unexpected performance to be achieved for mortars and self-compacting concretes. Other industrial wastes, such as GRP waste powder, can prove useful for reuse in cementitious products by improving some durability aspects. "The concrete industry will be called upon to serve the two pressing needs of human society; namely, protection of the environment and meeting the infrastructural demand of the increasing industrialisation and urbanisation of the world. Also, due to its large size, the concrete industry is unquestionably the ideal medium for the economic and safe use of millions of tons of industrial by-products such as fly ash and slag, due to their highly pozzolanic and cementitious properties. It is obvious that large-scale cement replacement (60-70%) in concrete with these industrial by-products will be advantageous from the standpoint of cost economy, energy efficiency, durability, and the overall ecological profile of concrete. Therefore, in the future, the use of by-product supplementary cementing materials ought to be made compulsory" [Malhotra 2004].

Thursday, January 2, 2020

Low Income Black And Hispanic Adolescent Females Essay

SLIDE 1: So why are we targeting low-income black and Hispanic adolescent females, you may ask? They are at higher risk than other ethnicities for acquiring an STD and/or experiencing an unplanned pregnancy. Increased utilization of dual contraception is of great importance in these communities. Adolescent females in general are less likely to use dual forms of protection from STDs and unplanned pregnancy. Young women, due to their anatomy, tend to be more susceptible to STDs than are young men. Black and Hispanic females are 4.9x and 2.1x, respectively, more likely to contract chlamydia (a common STD) than their white counterparts. When it comes to pregnancy, 3 in 10 adolescent females will become pregnant before the age of 20; among black and Hispanic young women this figure jumps to 5 in 10. Black and Hispanic teens are more likely to be living in poverty than are other ethnicities, and data has shown that teen pregnancy increases as socioeconomic status declines, as does the rate of STD contraction. SLIDE 2: Next you may ask, why are we specifically focusing on teens who live in the South? Well, geography matters! According to the National Center for Health Statistics, teen pregnancy is highest in the southern states vs. the Northeastern or Midwestern states. While the average teen birth rate was 24.2% nationwide, it was between 30-39% in the southern United States. Currently, only 18 states and D.C. require that education regarding contraception is
The Office of Adolescent Health analyzed the trends in teen births, variations in teen birth rates across populations (ethnicity between ages 15-19) and characteristics associated with adolescent childbearing in their article entitled Trends in Teen Pregnancy and Childbearing. According to Office of Adolescent Health, in 2013, there were 26.5 births for every 1,000 adolescent females age 15-19 or 273,105 babies born to females in this age group. Nearly eighty-nine percentRead MoreLiterature Review On Teen Pregnancy1254 Words   |  6 PagesAmerica is about 57 per 1000 teens in 2010 (Knox 1). This has decreased to about 47 per 1000 teens, but at the state level, some states such as Texas have higher averages. Currently, the state of Texas recorded a rate of 73 teenage pregnancies per 1000 females aged between 15 and 19 years (Sayegh et al. 95). The main cause of teenage pregnancies is associated with severe social dislocations including race and ethnicity, socioeconomic status, and education among other factors. Literature Review Race andRead MoreEarly Intervention And Care Prevention1025 Words   |  5 PagesIntroduction Early intervention and care can prevent most of the oral health diseases. Nevertheless, dental caries remains the most common chronic disease among children and adolescents in the United States (Centers of Disease Control and Prevention, 2014). About 14.4% of children aged 3-5 years had untreated dental caries in 2009 -2010 (Dye 2012). In addition to pain and discomfort, untreated deciduous tooth caries can spread to roots and may lead to loss of tooth. This can subsequently affectRead MoreTeen Pregnancy Research Paper1273 Words   |  6 Pages The Effects of Pregnancy Among Adolescent Girls Heather Thedford HS 2013: Health Communications Texas Woman’s Universityâ€Æ' DESCRIPTION Teenage pregnancy is defined as a teenage girl, usually within the ages of 13-19, becoming pregnant (Unicef 2008). 