Likert Data: Basic Analysis, Statistical Tools & Statistics
Basic analysis of Likert Data
In 1932, Rensis Likert, an American psychologist, developed a method for measuring attitudes by classifying responses to a statement on a scale: Strongly disapprove, Disapprove, Undecided, Approve, Strongly approve. Such data are now known as Likert data, Likert responses, or Likert items.
We have discussed the basics of Likert items, the formation of a Likert scale from Likert items, their properties, and how they should be understood when building questionnaires about qualitative characteristics so that they can be analysed in digital form. To revisit the basics, you may click here to read more about it.
Once responses are collected, the main subject of this article is what tools are available for the analysis of such data, and what limitations we should bear in mind while using them.
We will not discuss these tools in detail; rather, we name them in order to give a bird's-eye view of the whole subject. Nowadays, in programming languages, what used to be called "functions" or "subroutines" are termed methods. We can also call tools methods wherever that seems contextually more appropriate.
2. Display methods: Pictorial or Tabular presentation-
Display is a tool whereby digital data are represented as pictorial diagrams or in tabular form so that the major facts can be assimilated quickly. For that, we need to compute representative summaries of all the collected responses and present them in either pictorial or tabular form.
As we know, the mean is used as a measure of central tendency of the data on which it is calculated, but with Likert data it does not convey a sensible meaning: what would be "the average of undecided people"? It is better to present "what is the percentage of undecided people?" Thus the most appropriate measures for such data capture its nature through frequent occurrences, such as the median, quartiles, mode and percentages. In pictorial form, the best way to display the distribution of responses (the percentage that agree, disagree, are undecided, etc.) is a bar chart.
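As a minimal sketch of such a display, the following Python snippet tabulates invented responses to a single Likert item as counts, percentages and a plain-text bar chart (the response values themselves are hypothetical):

```python
from collections import Counter

# Hypothetical responses to one Likert item (invented for illustration)
responses = [
    "Strongly approve", "Approve", "Approve", "Undecided", "Disapprove",
    "Approve", "Strongly disapprove", "Undecided", "Approve", "Disapprove",
]

# The natural ordering of the categories on the Likert item
order = ["Strongly disapprove", "Disapprove", "Undecided",
         "Approve", "Strongly approve"]

counts = Counter(responses)
total = len(responses)

# Tabular display: category, count, percentage, and a text "bar chart"
for category in order:
    n = counts.get(category, 0)
    pct = 100 * n / total
    print(f"{category:<20} {n:>3} {pct:5.1f}%  {'#' * n}")
```

In practice a plotting library would draw the bar chart, but the summary itself (frequencies and percentages per category) is the same.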
3. Tools for Inference methods: Estimation & Testing-
There can be occasions where we wish to test conjectures emanating from observed data. There may also be occasions where we want to estimate relevant statistics and their ranges. All of these fall under inference methods.
3.1 Estimating the parameters: As mentioned above, the most appropriate measures of Likert data capture its nature through frequent occurrences, such as the median, quartiles, mode and percentages. Although the mean is not a preferred statistic for individual Likert items, parametric analysis of ordinary averages of Likert-scale data is justifiable due to the Central Limit Theorem. A Likert scale is composed of a series of more than one Likert item representing similar questions, combined into a single composite thematic characteristic or variable. Therefore, Likert-scale data can be analysed as interval data, i.e. the mean can be used as a measure of central tendency.
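To make this concrete, here is a small sketch (with invented data) of computing each respondent's composite scale score as the mean of several items coded 1-5, and then a 95% confidence interval for the scale mean via the normal approximation that the Central Limit Theorem justifies:

```python
import math
import statistics

# Hypothetical data: each row is one respondent's answers to a 4-item
# Likert scale, coded 1 (Strongly disapprove) .. 5 (Strongly approve)
respondents = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
]

# Composite scale score per respondent: the mean of their item scores
scores = [statistics.mean(row) for row in respondents]

n = len(scores)
mean = statistics.mean(scores)
sd = statistics.stdev(scores)  # sample standard deviation

# 95% confidence interval via the normal approximation (CLT);
# with n this small, a t critical value would be more accurate
half_width = 1.96 * sd / math.sqrt(n)
print(f"mean = {mean:.2f}, "
      f"95% CI = ({mean - half_width:.2f}, {mean + half_width:.2f})")
```

The interval is only a sketch: with six respondents one would use a t-based interval, but the structure (scale score per respondent, then mean and standard error) carries over to real samples.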
3.2 Testing the hypothesis of a certain conjecture: To test hypotheses, one first thinks about the questions one is interested in. These are then converted into statistical hypotheses. Generally this reduces to checking either the validity of an estimated value of some parameter, or the similarity, in terms of their parameters, of the populations to which the samples of respondents belong or are planned to belong. Common tests include:
3.2.1 Z-test
3.2.2 Student's t-test
3.2.3 Mann-Whitney U test
3.2.4 Kruskal-Wallis test
3.2.5 Analysis of variance (ANOVA) techniques
3.2.6 Chi-square test
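As an illustration of one of these tests, the following sketch computes a chi-square test of independence by hand on an invented contingency table (two groups of respondents, Likert responses collapsed to three categories). The critical value 5.991 is the standard chi-square cutoff for 2 degrees of freedom at the 5% level; in practice a statistical library would also return an exact p-value.

```python
# Observed counts: rows = two groups of respondents, columns = collapsed
# Likert responses (Disagree, Undecided, Agree). Numbers are invented.
observed = [
    [30, 10, 60],
    [45, 15, 40],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum over cells of (O - E)^2 / E,
# where E is the count expected under independence
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)  # here df = 2
critical = 5.991  # chi-square critical value, df=2, alpha=0.05
print(f"chi2 = {chi2:.2f}, df = {df}, reject H0 at 5%: {chi2 > critical}")
```

Here the null hypothesis is that response distribution is independent of group; a statistic above the critical value leads to rejecting that hypothesis.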
4. Tools for knowing relationships: Likert-scale Regression Analysis-
If the dependent variable is on a Likert scale, one should consider the following tools:
4.1 Ordered logistic regression;
4.2 Multinomial logistic regression; or
4.3 Binary logistic regression.
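The simplest of these, binary logistic regression (4.3), applies when the Likert response is collapsed to two categories. Below is a minimal stdlib-only sketch with invented data: x is a hypothetical predictor (hours of product use per week) and y is 1 if the respondent agrees or strongly agrees. The model is fitted by plain gradient ascent on the log-likelihood; real analyses would use a statistics package, which also fits the ordered and multinomial variants.

```python
import math

# Hypothetical data (invented): predictor x and collapsed Likert outcome y
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ys = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]  # 1 = agree/strongly agree

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit intercept b0 and slope b1 by gradient ascent on the log-likelihood;
# libraries use faster, more robust optimisers, but the model is the same
b0, b1 = 0.0, 0.0
lr = 0.01
for _ in range(20000):
    g0 = sum(y - sigmoid(b0 + b1 * x) for x, y in zip(xs, ys))
    g1 = sum((y - sigmoid(b0 + b1 * x)) * x for x, y in zip(xs, ys))
    b0 += lr * g0
    b1 += lr * g1

# Fitted probability of agreement at a given predictor value
p_at_8 = sigmoid(b0 + b1 * 8)
print(f"b0 = {b0:.2f}, b1 = {b1:.2f}, P(agree | x=8) = {p_at_8:.2f}")
```

A positive fitted slope means agreement becomes more likely as the predictor grows; ordered logistic regression extends this idea to all five Likert categories at once by estimating cut-points between them.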
5. Tools for knowing necessary and sufficient information: Factor Analysis-
What information is necessary? At the end of any pilot survey, what data does one hope to have, and with what reliability? A pilot survey is a survey conducted before any major survey, at a small scale but with a wide spread of characteristics, so as to assess the necessity and sufficiency of those characteristics vis-a-vis the aim of the survey. What information is sufficient for a decision? What do we "need to know", what is merely "nice to know", and how do we segregate them? Factor analysis is the tool for making this segregation.
6. Other types of Questions: Questionnaire building-
There are many types of questions that investigators or researchers use in building a good questionnaire. Some examples, apart from Likert items, are:
• Dichotomous or binary, i.e. Yes/No or Male/Female.
• Nominal level of measurement, or coded data, i.e. 1=lawyer, 2=veterinarian, etc.
• Ranking in order of preference.
• Filter or contingency questions, i.e. "Have you ever....? If so, how often?"
• Identification characteristics such as name, village, state.
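The question types above can be sketched as a small coding scheme that converts raw answers into a digital record; all names and codes here are invented for illustration, mirroring the "1=lawyer, 2=veterinarian" style of coding mentioned above:

```python
# Hypothetical coding tables (invented for illustration)
OCCUPATION_CODES = {"lawyer": 1, "veterinarian": 2, "farmer": 3}
LIKERT_CODES = {"Strongly disapprove": 1, "Disapprove": 2, "Undecided": 3,
                "Approve": 4, "Strongly approve": 5}

def encode_record(raw):
    """Convert one respondent's raw questionnaire answers into coded form."""
    record = {
        "name": raw["name"],                           # identification
        "smoker": 1 if raw["smoker"] == "Yes" else 0,  # dichotomous
        "occupation": OCCUPATION_CODES[raw["occupation"]],  # nominal code
        "satisfaction": LIKERT_CODES[raw["satisfaction"]],  # Likert item
    }
    # Filter/contingency question: only recorded if the filter applies
    if raw["smoker"] == "Yes":
        record["cigarettes_per_day"] = int(raw["cigarettes_per_day"])
    return record

coded = encode_record({
    "name": "A. Kumar", "smoker": "Yes", "occupation": "lawyer",
    "satisfaction": "Approve", "cigarettes_per_day": "5",
})
print(coded)
```

Survey packages perform exactly this kind of conversion automatically once the question types and codes are declared.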
A large literature, scattered across many sources, deals with how to build a good questionnaire, but perhaps people do not study it separately as a single subject. They instead rely on ready-made questions for each specific survey subject, which they prefer to use or study. Nowadays, statistical survey packages are also available in which such questions are already embedded. You can choose some or all of them to carry out a survey, so that your attention can focus on collecting good-quality data, while the package takes care of all the other processes, from converting the collected data into digital format through to table formation and report writing.
What is Statistics?
The above narration is a bird's-eye view of Statistics, which is, in a true sense, a science of observations.
Statisticians, in a real sense, are scientists of observations. Recently, data scientists have taken over this role of the statistician due to the advent of computer science and the extensive use of machines, with their ease of display, presentation, computation and communication. Many statistical formulae were tedious and difficult to compute manually; the computer has made their computation possible and easy. Moreover, repetitive processes have become untiring, which was not the case before the advent of the computer.