Depth Studies: Writing a Discussion
The introduction of depth studies into the science syllabi is a change that has left many students uneasy. After the hurdle of, in most cases, having to choose a depth study focus and carry out first-hand investigations, students are prompted to evaluate their scientific procedure and suggest ways that future experimentation could be improved. As this is arguably one of the most difficult and crucial aspects of the depth study, we are here to break down the key elements of a successful discussion.
Before we can begin this process of evaluation, it is important that we understand the core differences between discussions of accuracy, reliability and validity, as it is not uncommon for students to use them interchangeably.
Reliability is concerned with the consistency of your results, that is, the degree to which each result deviates from the next. As such, reliability can only be determined if the experiment is repeated at least three times for each change of the independent variable. If we plot our data points, the scatter of points about the line of best fit is also an indication of reliability. If multiple outliers are present, we can deduce that the experiment is not reliable.
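The idea of judging reliability from repeated trials can be sketched as follows. The readings below are purely illustrative, and the 10% cut-off for flagging an outlier is an assumption chosen for the example, not a syllabus rule:

```python
# Hypothetical repeated trials: for each setting of the independent variable
# (here, pendulum length in metres) we record three measured periods in seconds.
trials = {
    0.40: [1.27, 1.29, 1.26],
    0.60: [1.55, 1.57, 1.99],   # the third reading is a clear outlier
    0.80: [1.79, 1.80, 1.78],
}

for length, readings in trials.items():
    mean = sum(readings) / len(readings)
    # how far each reading sits from the trial mean
    worst = max(abs(r - mean) for r in readings)
    # flag any reading deviating by more than ~10% of the mean (assumed threshold)
    flag = "outlier present" if worst > 0.10 * mean else "consistent"
    print(f"L = {length} m: mean T = {mean:.2f} s, worst deviation = {worst:.2f} s ({flag})")
```

Running this flags the 0.60 m trial as unreliable while the other two sets of readings are consistent, which mirrors the visual check of scatter about a line of best fit.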
What affects it:
Random errors are responsible for limiting the reliability of our experiments. Random errors vary in magnitude and direction from one measurement to the next: they are equally likely to push a result above or below the mean value. The degree to which they have shifted our values away from the expected is often difficult to determine.
How to assess:
One of the first means of identifying sources of random error is to reflect on the errors which have come about through reliance on humans. Two common examples are human reaction time when using a stopwatch (approximately 0.2 s) and inconsistency in the force applied when releasing objects. Other sources of random error may be sudden temperature variations or fluctuations in ambient light in the school lab.
How to improve:
Whilst it is correct to say that repetition plays an important part in determining whether an experiment is reliable, repetition alone does not increase reliability. Instead, to increase the reliability of our experiment we must reduce the effect of random errors. The easiest way to minimise random errors is to increase the number of measurements we take within the data range (for example, measuring at 0, 10, 20, 30 and 40 rather than at 0, 20 and 40), or to take readings over a larger data range. We can then reduce the potential for human error by introducing different technologies. For example, instead of using a stopwatch to determine the time taken for an object to travel a set distance, a video camera can be set up to record the object's motion, or a set of light gates could be used.
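A quick simulation illustrates why taking more readings tames random error. Here we model each stopwatch reading as a hypothetical true time of 2.00 s plus a random error of up to the 0.2 s reaction time mentioned above; the true value and error model are assumptions for the sketch:

```python
import random

random.seed(42)                # fixed seed so the sketch is repeatable
true_value = 2.00              # hypothetical true time (s)
reaction_time_spread = 0.2     # stopwatch random error (s)

def measure():
    # each reading is the true value plus a random error, equally likely
    # to fall above or below the true value
    return true_value + random.uniform(-reaction_time_spread, reaction_time_spread)

for n in (3, 30, 300):
    mean = sum(measure() for _ in range(n)) / n
    print(f"mean of {n:>3} readings: {mean:.3f} s")
```

Averaging more readings tends to pull the mean closer to the true value, because random errors above and below the mean progressively cancel out.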
Accuracy is a quantitative measure of the extent to which our experimental results deviate from the expected or published values. The most successful way of deducing the accuracy of an experiment is by calculating the percentage error,
% Error= (|Experimental Value – Expected Value| ÷ Expected Value) × 100%
It is often quoted that a percentage error below 5% renders an experiment accurate; however, depending on the quality of the equipment involved, an error of up to 10% can still be considered accurate. Be sure to state your percentage-error tolerance when you make this judgement.
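The percentage-error formula above translates directly into code. The measured and accepted values of g below are illustrative, and the 5% tolerance is one you would state and justify yourself:

```python
def percentage_error(experimental, expected):
    # % Error = |Experimental - Expected| / Expected x 100%
    return abs(experimental - expected) / abs(expected) * 100

# hypothetical measurement of g from a pendulum experiment
g_measured = 9.55   # m/s^2, illustrative experimental value
g_expected = 9.81   # m/s^2, accepted value

error = percentage_error(g_measured, g_expected)
tolerance = 5.0     # state your tolerance before making the judgement
verdict = "accurate" if error <= tolerance else "not accurate"
print(f"% error = {error:.1f}% -> {verdict} (tolerance {tolerance}%)")
```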
What affects it:
Systematic errors are responsible for limiting the accuracy of our results, and are often much more difficult to detect and hence harder to eliminate. Unlike random errors, systematic errors alter our experimental results by a fixed, constant value each time we conduct the experiment.
How to assess:
For systematic errors, we often look at the equipment used in our investigation. For example, a wooden ruler used to measure distances may introduce systematic error: the zero increment may have faded, causing every measurement to be out by 1 mm, or the ruler itself may have shrunk or expanded into an irregular shape over time as the wood responded to temperature changes in the environment. Other common systematic errors include forgetting to tare an electronic balance, not zeroing the stopwatch, and not aligning our eyes with the increment markings we are reading, which results in parallax error.
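The defining feature of a systematic error, that it shifts every reading by the same fixed amount, means it can be corrected once identified. A minimal sketch, using the hypothetical 1 mm zero error from the faded ruler above:

```python
zero_error = 0.001  # m: the worn ruler reads 1 mm high on every measurement

raw_readings = [0.251, 0.502, 0.748]  # what the faulty ruler reports (m)

# because the offset is constant, subtracting it corrects every reading at once
true_lengths = [r - zero_error for r in raw_readings]

for raw, true in zip(raw_readings, true_lengths):
    print(f"raw: {raw:.3f} m -> corrected: {true:.3f} m")
```

Contrast this with random error: no single correction can be subtracted from every reading, which is why random error must instead be averaged down through repetition.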
How to improve:
In order to improve the accuracy of our experiment, we want to replace existing equipment with equipment that can report to more significant figures, is less susceptible to changing composition over time, and minimises the need for human interpretation. We must ensure that all measurements are taken with our eyes level with the device, to eliminate parallax error, and calibrate or zero all digital equipment before use. For example, we could replace an electronic balance that measures to 4 significant figures with one that measures to 6, or replace a generic thermometer with a digital one.
Validity is a measure of how successfully our method addresses the aim of the experiment, unlike accuracy and reliability, which are concerned with the outcome or results of the experiment. A simple way of determining validity is to consider the various assumptions made while conducting the experiment, whether these assumptions are necessarily true, and whether they may have impacted the data obtained. Validity also relies on the experiment having strictly one independent variable (the variable being changed), one dependent variable (the variable being measured), and all other variables controlled. Importantly, there is no spectrum of validity, so you must make a definite judgement as to whether an experiment is valid or not.
How to assess:
A good starting point for assessing validity is to consider whether any physics principles or equations have been used to interpret the data. In the simple pendulum experiment, we relate the length of the pendulum string, L, to the time taken for the pendulum to complete one swing, T, using the equation:

T = 2π√(L ÷ g)
This equation relies on the assumption that the pendulum undergoes simple harmonic motion and not conical pendulum motion. A related assumption is that the angle of deviation when first setting the pendulum into motion does not exceed roughly 7–10°. A further assumption is that the string used is massless and inextensible. The validity of our experiment would be compromised if any of these conditions were not met. Other factors which may affect an experiment's validity are the presence of air resistance or friction along a surface, as these factors are often assumed to be negligible in physics equations.
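The pendulum equation and its small-angle assumption can be sketched together. The 10° cut-off below reflects the rough 7–10° limit mentioned above, and the 0.50 m length is an illustrative choice:

```python
import math

g = 9.81  # m/s^2, accepted value for acceleration due to gravity

def predicted_period(length_m):
    # T = 2*pi*sqrt(L/g), valid only under the small-angle assumption
    return 2 * math.pi * math.sqrt(length_m / g)

def is_valid_swing(angle_degrees, max_angle=10.0):
    # the small-angle approximation is usually taken to hold below ~10 degrees
    return angle_degrees <= max_angle

print(f"T for L = 0.50 m: {predicted_period(0.50):.2f} s")
print(is_valid_swing(8))   # within the small-angle limit
print(is_valid_swing(25))  # too large: the equation no longer applies
```

If the release angle exceeds the limit, any conclusion drawn by fitting data to this equation is compromised, which is exactly the kind of assumption check a validity discussion should make explicit.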
As we can see, discussions of reliability, accuracy and validity in relation to scientific experiments can vary greatly depending on the nature of your investigation. It is important that when you construct these discussions, you do not rely on generic responses when evaluating. Instead, use the above definitions and examples as guidelines for your own thinking: the more specific your response, the better.