Rather, meaning is a product of the relationships among concepts in a text. Carley asserts that concepts are "ideational kernels;" these kernels can be thought of as symbols which acquire meaning through their connections to other symbols.
The kind of analysis that researchers employ will vary significantly according to their theoretical approach. Key theoretical approaches that inform content analysis include linguistics and cognitive science.
Linguistic approaches to content analysis focus on texts at the level of a linguistic unit, typically the single clause. Another technique is to code a text grammatically into clauses and parts of speech to establish a matrix representation (Carley). Approaches that derive from cognitive science include the creation of decision maps and mental models.
Decision maps attempt to represent the relationships between ideas, beliefs, attitudes, and information available to an author when making a decision within a text.
These relationships can be represented as logical, inferential, causal, sequential, and mathematical relationships. Typically, two of these links are compared in a single study, and are analyzed as networks. For example, Heise used logical and sequential links to examine symbolic interaction. This methodology is thought of as a more generalized cognitive mapping technique, rather than the more specific mental models approach.
Mental models are groups or networks of interrelated concepts that are thought to reflect conscious or subconscious perceptions of reality.
According to cognitive scientists, internal mental structures are created as people draw inferences and gather information about the world. Mental models are a more specific approach to mapping because, beyond extraction and comparison, they can be numerically and graphically analyzed.
Such models rely heavily on the use of computers to help analyze and construct mapping representations. Typically, studies based on this approach follow five general steps. To create the model, a researcher converts a text into a map of concepts and relations; the map is then analyzed on the level of concepts and statements, where a statement consists of two concepts and their relationship. Carley asserts that this makes possible the comparison of a wide variety of maps, representing multiple sources, implicit and explicit information, as well as socially shared cognitions.
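To make the statement level concrete, here is a minimal sketch, in Python, of how a coded text might be stored as a set of statements, each pairing two concepts with a labeled relationship. The concept names and relationship labels below are invented for illustration and are not drawn from any particular study.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    """One coded statement: two concepts joined by a labeled relationship."""
    concept_a: str
    relation: str   # e.g. "performs", "produces" -- labels are illustrative
    concept_b: str

# A tiny hypothetical concept map built from coded statements.
concept_map = {
    Statement("scientist", "performs", "research"),
    Statement("research", "produces", "discoveries"),
    Statement("collaboration", "supports", "research"),
}

# Analysis can then proceed at the concept level...
concepts = {c for s in concept_map for c in (s.concept_a, s.concept_b)}
# ...or at the statement level, e.g. comparing which statements two maps share.
other_map = {Statement("scientist", "performs", "research")}
shared_statements = concept_map & other_map

print(sorted(concepts))
print(shared_statements)
```

Because statements are stored as comparable objects, maps from different sources can be intersected or diffed directly, which is what makes statement-level comparison across texts straightforward.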
For relational analysis, it is important to first decide which concept types will be explored in the analysis. Studies have been conducted with as few as one concept category and with a great many. Obviously, too many categories may obscure your results, and too few can lead to unreliable and potentially invalid conclusions. Therefore, it is important to allow the context and necessities of your research to guide your coding procedures.
The steps to relational analysis that we consider in this guide suggest some of the possible avenues available to a researcher doing content analysis. We provide an example to make the process easier to grasp. However, the choices made within the context of the example are but a few of many possibilities.
The diversity of techniques available suggests that there is quite a bit of enthusiasm for this mode of research. Once a procedure is rigorously tested, it can be applied and compared across populations over time.
The process of relational analysis has achieved a high degree of computer automation but still is, like most forms of research, time consuming. Perhaps the strongest claim that can be made is that it maintains a high degree of statistical rigor without losing the richness of detail apparent in even more qualitative methods.

Affect extraction: This approach provides an emotional evaluation of concepts explicit in a text.
It is problematic because emotion may vary across time and populations. Gottschalk provides an example of this type of analysis.

Proximity analysis: This approach, on the other hand, is concerned with the co-occurrence of explicit concepts in the text. In this procedure, the text is defined as a string of words. A given length of words, called a window, is determined. The window is then scanned across a text to check for the co-occurrence of concepts.
The result is the creation of a concept matrix. In other words, a matrix, or a group of interrelated, co-occurring concepts, might suggest a certain overall meaning. The technique is problematic because the window records only explicit concepts and treats meaning as proximal co-occurrence.
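As a rough illustration of the windowing procedure, the sketch below slides a fixed-length window across a tokenized text and tallies which concepts co-occur inside it. The window size, the concept list, and the sample sentence are arbitrary choices made for the example, not values prescribed by any particular study.

```python
from collections import Counter
from itertools import combinations

concepts = {"scientist", "research", "discovery"}   # illustrative concept list
window_size = 8                                      # arbitrary window length

text = ("the scientist finished the research and announced a discovery "
        "while the research continued").split()

co_occurrences = Counter()
for start in range(len(text) - window_size + 1):
    window = text[start:start + window_size]
    present = sorted(concepts.intersection(window))
    # Record every pair of concepts that appears together in this window.
    for pair in combinations(present, 2):
        co_occurrences[pair] += 1

# The tallies form a simple concept matrix: pairs with high counts
# are candidates for an interrelated group of concepts.
print(co_occurrences)
```

Note that overlapping windows count the same nearby pair more than once; that is a consequence of the scanning procedure itself and one reason window length is an important coding decision.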
Other techniques such as clustering, grouping, and scaling are also useful in proximity analysis. Cognitive mapping: This approach is one that allows for further analysis of the results from the two previous approaches. It attempts to take the above processes one step further by representing these relationships visually for comparison.
Whereas affective and proximal analysis function primarily within the preserved order of the text, cognitive mapping attempts to create a model of the overall meaning of the text. This can be represented as a graphic map that represents the relationships between concepts. In this manner, cognitive mapping lends itself to the comparison of semantic connections across texts.
This variety is indicative of the theoretical assumptions that support mapping: mental models are representations of interrelated concepts that reflect conscious or subconscious perceptions of reality; language is the key to understanding these models; and these models can be represented as networks (Carley). Given these assumptions, it is not surprising to see how closely this technique reflects the cognitive concerns of socio- and psycholinguistics, and lends itself to the development of artificial intelligence models.
The following discussion presents the steps or, perhaps more accurately, strategies that can be followed to code a text or set of texts during relational analysis. These explanations are accompanied by examples of relational analysis possibilities for statements made by Bill Clinton during the hearings. The first step is to identify the research question. The question is important because it indicates where you are headed and why. Without a focused question, the concept types and options open to interpretation are limitless, and the analysis is therefore difficult to complete.
Several such questions might be posed about the Hairy Hearings. The next consideration is choosing a sample or samples for analysis. For relational content analysis, the primary consideration is how much information to preserve for analysis. One must be careful not to limit the results by preserving too little, but the researcher must also take special care not to take on so much that the coding process becomes too arduous and extensive to supply worthwhile results.
Once the sample has been chosen for analysis, it is necessary to determine what type or types of relationships you would like to examine. There are different subcategories of relational analysis that can be used to examine the relationships in texts. In this example, we will use proximity analysis because it is concerned with the co-occurrence of explicit concepts in the text. In this instance, we are not particularly interested in affect extraction, because we are trying to get at the hard facts of what exactly was said rather than the emotional considerations of speaker and receivers surrounding the speech, which may be unrecoverable.
Once the subcategory of analysis is chosen, the selected text must be reviewed to determine the level of analysis. The researcher must decide whether to code for a single word, such as "perhaps," or for sets of words or phrases like "I may have forgotten." At the simplest level, a researcher can code merely for existence. This is not to say that simplicity of procedure leads to simplistic results. Many studies have successfully employed this strategy.
For example, Palmquist did not attempt to establish the relationships among concept terms in the classrooms he studied; his study did, however, look at the change in the presence of concepts over the course of the semester, comparing a map analysis from the beginning of the semester to one constructed at the end.
On the other hand, the requirements of one's specific research question may necessitate deeper levels of coding to preserve greater detail for analysis. In relation to our extended example, the researcher might code for how often Bill Clinton used words that were ambiguous, held double meanings, or left an opening for change or "re-evaluation." Once words are coded, the text can be analyzed for the relationships among the concepts set forth.
There are three concepts that play a central role in exploring the relations among concepts in content analysis. One of the main differences between conceptual analysis and relational analysis is that in relational analysis the statements or relationships between concepts are coded.
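To suggest what coded statements might look like in the extended example, here is a minimal sketch. The concepts, relationship labels, and the extra "sign" judgment attached to each statement are all invented for illustration; none of them are taken from the actual hearing transcripts.

```python
from collections import Counter

# Hypothetical coded statements from a single passage: each pairs two
# concepts with a relationship label and a sign/direction judgment.
statements = [
    # (concept A, relationship, concept B, sign of relationship)
    ("I",         "recalls",    "meeting",       "uncertain"),  # "I may have forgotten"
    ("perhaps",   "qualifies",  "statement",     "hedging"),
    ("statement", "refers-to",  "re-evaluation", "positive"),
]

# Unlike conceptual analysis, which would only count the concepts,
# relational analysis keeps the links so they can be tallied or mapped.
relation_counts = Counter(rel for _, rel, _, _ in statements)
print(relation_counts)
```

Keeping a value such as the sign alongside each statement is what makes the next step, judging whether ambiguous words are mere fillers or carry information, possible at all.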
At this point, to continue our extended example, it is important to take special care in assigning value to the relationships, in an effort to determine whether the ambiguous words in Bill Clinton's speech are just fillers or hold information about the statements he is making. A subsequent step involves conducting statistical analyses of the data you've coded during your relational analysis.
This may involve exploring for differences or looking for relationships among the variables you've identified in your study. In addition to statistical analysis, relational analysis often leads to viewing the representations of the concepts and their associations in a text, or across texts, in a graphical (or map) form. Relational analysis is also informed by a variety of different theoretical approaches: linguistic content analysis, decision mapping, and mental models. The issues of reliability and validity are concurrent with those addressed in other research methods.
The reliability of a content analysis study refers to its stability, or the tendency for coders to consistently re-code the same data in the same way over a period of time; reproducibility, or the tendency for a group of coders to classify category membership in the same way; and accuracy, or the extent to which the classification of a text corresponds to a standard or norm statistically.
Gottschalk points out that the issue of reliability may be further complicated by the inescapably human nature of researchers.
On the other hand, the validity of a content analysis study refers to the correspondence of the categories to the conclusions, and the generalizability of results to a theory. The validity of categories in implicit concept analysis, in particular, is achieved by utilizing multiple classifiers to arrive at an agreed upon definition of the category.
For example, a content analysis study might measure the occurrence of the concept category "communist" in presidential inaugural speeches. Using multiple classifiers, the concept category can be broadened to include synonyms such as "red," "Soviet threat," "pinkos," "godless infidels" and "Marxist sympathizers." The overarching problem of concept analysis research is the challengeable nature of conclusions reached by its inferential procedures. The question lies in what level of implication is allowable.
For occurrence-specific studies, for example, can the second occurrence of a word carry the same weight as the ninety-ninth? Reasonable conclusions can be drawn from substantive amounts of quantitative data, but the question of proof may still remain unanswered. This problem is again best illustrated when one uses computer programs to conduct word counts. The problem of distinguishing between synonyms and homonyms can completely throw off one's results, invalidating any conclusions one infers from them.
The word "mine," for example, variously denotes a personal pronoun, an explosive device, and a deep hole in the ground from which ore is extracted. One may obtain an accurate count of that word's occurrence and frequency, but not have an accurate accounting of the meaning inherent in each particular usage.
For example, one may find 50 occurrences of the word "mine." Without knowing how many of those occurrences carry each sense, any conclusion drawn from that raw count would be invalid. The generalizability of one's conclusions, then, is very dependent on how one determines concept categories, as well as on how reliable those categories are. Akin to this is the construction of rules. Developing rules that allow one, and others, to categorize and code the same data in the same way over a period of time, referred to as stability, is essential to the success of a conceptual analysis.
Reproducibility, not only of specific categories, but of general methods applied to establishing all sets of categories, makes a study, and its subsequent conclusions and results, more sound.
A study which does this, i.e., one whose categories and general methods can be reproduced, is the more sound for it. Content analysis offers several advantages to researchers who consider using it; it also suffers from several disadvantages, both theoretical and procedural. The Palmquist, Carley and Dale study, summarized in "Applications of Computer-Aided Text Analysis: Analyzing Literary and Non-Literary Texts," is an example of two studies that have been conducted using both conceptual and relational analysis.
The example "The Problematic Text for Content Analysis" shows the differences in results obtained by a conceptual and a relational approach to a study. In this example, two students observed a scientist and were asked to write about the experience.
Content analysis coding for explicit concepts may not reveal any significant differences between the two texts; for example, both contain concepts such as "I," "scientist," "research," "hard work," "collaboration," "discoveries," and "new ideas." Relational analysis or cognitive mapping, however, reveals that while the concepts in the texts are shared, only five statements are common to both. Analyzing these statements reveals that Student A reports on what "I" found out about "scientists," and elaborates the notion of "scientists" doing "research."
Consider these two questions: How has the depiction of robots changed over more than a century's worth of writing? And do students and writing instructors share the same terms for describing the writing process? One half of the study explored the depiction of robots in 27 science fiction texts written over more than a century. After the texts were divided into three historically defined groups, readers looked for how the depiction of robots had changed over time.
To do this, researchers had to create concept lists and relationship types, then create maps using computer software (see Fig. 1). The final product of the analysis revealed that over time authors were less likely to depict robots as metallic humanoids.
Figure 1: A map representing relationships among concepts.
The second half of the study used student journals and interviews, teacher interviews, textbooks, and classroom observations as the non-literary texts from which concepts and words were taken.
The purpose behind the study was to determine whether, in fact, over time teachers and students would begin to share a similar vocabulary about the writing process.
Again, researchers used computer software to assist in the process. This time, computers helped researchers generate a concept list based on frequently occurring words and phrases from all texts. By reducing the text to categories, the researcher can focus on, and code for, specific words or patterns that inform the research question.
Decide how many concepts to code for: develop a pre-defined or interactive set of categories or concepts. Decide either (A) to use a pre-defined set of categories, or (B) to allow categories to be defined interactively as coding proceeds. Then decide whether to code for the existence or for the frequency of a concept.
This decision changes the coding process. When coding for the existence of a concept, the researcher counts a concept only once, no matter how many times it appears in the data. When coding for the frequency of a concept, the researcher counts the number of times the concept appears in a text. Next, decide whether words should be coded exactly as they appear or coded as the same concept when they appear in different forms.
The point here is to create coding rules so that these word segments are transparently categorized in a logical fashion. The rules could make all of these word segments fall into the same category, or perhaps the rules can be formulated so that the researcher can distinguish these word segments into separate codes; one way of writing such rules down is sketched below. Also decide what level of implication is to be allowed: words that imply the concept, or only words that explicitly state the concept?
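As one possible way of expressing such translation rules, here is a minimal sketch in Python. The category names, the word forms, and the decision to fold several forms into one code are all invented for illustration; a real study would set these from its own coding scheme.

```python
from collections import Counter

# Hypothetical translation rules: several surface forms map to one code.
translation_rules = {
    "forget": "MEMORY_LAPSE",
    "forgot": "MEMORY_LAPSE",
    "forgotten": "MEMORY_LAPSE",
    "recall": "MEMORY_LAPSE",   # rule choice: treat "recall" the same way
    "perhaps": "HEDGE",
    "possibly": "HEDGE",
}

def code_tokens(tokens):
    """Apply the rules, returning codes in the order they occur."""
    return [translation_rules[t.lower()] for t in tokens if t.lower() in translation_rules]

tokens = "I may have forgotten but perhaps I will recall it".split()
codes = code_tokens(tokens)

# Existence coding: note each code at most once per text.
existence = set(codes)
# Frequency coding: count every occurrence of each code.
frequency = Counter(codes)

print(existence, frequency)
```

Writing the rules as an explicit table like this makes the existence-versus-frequency decision a one-line change in the analysis rather than a change to the coding itself.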
Develop rules for coding your texts. After the preceding decisions are complete, a researcher can begin developing rules for the translation of text into codes. This will keep the coding process organized and consistent. Validity of the coding process is ensured when the researcher is consistent and coherent in their codes, meaning that they follow their translation rules; in content analysis, abiding by the translation rules is equivalent to validity.
Decide what to do with irrelevant information: should this be ignored (for example, common English words such as "the" and "and"), or used to reexamine the coding scheme? Code the text: this can be done by hand or by using software. By using software, researchers can input categories and have coding done automatically, quickly, and efficiently, by the software program; a rough sketch of what such automated coding might look like follows.
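The following sketch shows software-style coding applied across several texts at once. The texts, the category dictionary, and the choice to simply ignore uncategorized words are all assumptions made for the example, not features of any particular coding package.

```python
from collections import Counter

# Hypothetical category dictionary reused across every text.
categories = {
    "research": "SCIENCE", "experiment": "SCIENCE",
    "perhaps": "HEDGE", "possibly": "HEDGE",
}

texts = {
    "text_a": "perhaps the experiment will confirm the research",
    "text_b": "the research was possibly flawed",
}

results = {}
for name, text in texts.items():
    tokens = text.lower().split()
    # Irrelevant (uncategorized) tokens are simply ignored here;
    # a different rule could route them back for review instead.
    results[name] = Counter(categories[t] for t in tokens if t in categories)

for name, counts in results.items():
    print(name, dict(counts))
```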
When coding is done by hand, a researcher can recognize errors far more easily (for example, typos or misspellings). If using computer coding, the text should first be cleaned of errors so that all available data are included. This decision of hand versus computer coding is therefore a trade-off between catching such errors and coding quickly and consistently. Analyze your results: draw conclusions and generalizations where possible. Determine what to do with irrelevant, unwanted, or unused text: reexamine it, ignore it, or reassess the coding scheme. Interpret the results carefully, as conceptual content analysis can only quantify the information.
Typically, general trends and patterns can be identified. Relational analysis begins like conceptual analysis, in that a concept is chosen for examination. However, the analysis then involves exploring the relationships between concepts. Individual concepts are viewed as having no inherent meaning; rather, meaning is a product of the relationships among concepts. To begin a relational content analysis, first identify a research question and choose a sample or samples for analysis.
The research question must be focused so the concept types are not open to interpretation and can be summarized. Next, select text for analysis.
Select the text for analysis carefully, balancing the need for enough information to support a thorough analysis (so that results are not limited) against the risk of taking on information so extensive that the coding process becomes too arduous to supply meaningful and worthwhile results.
There are three subcategories of relational analysis to choose from prior to going on to the general steps. Affect extraction: an emotional evaluation of concepts explicit in a text. A challenge to this method is that emotions can vary across time, populations, and space. However, it could be effective at capturing the emotional and psychological state of the speaker or writer of the text.
Proximity analysis: an evaluation of the co-occurrence of explicit concepts in the text. Cognitive mapping: a visualization technique for either affect extraction or proximity analysis. Cognitive mapping attempts to create a model of the overall meaning of the text, such as a graphic map that represents the relationships between concepts.
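As a rough sketch of turning coded relationships into a graphic map, the snippet below builds a small network and draws it. It assumes the networkx and matplotlib libraries are available, and the concepts and links are invented for illustration.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Invented coded statements: (concept, concept, relationship label).
statements = [
    ("scientist", "research", "performs"),
    ("research", "discovery", "produces"),
    ("collaboration", "research", "supports"),
]

G = nx.Graph()
for a, b, label in statements:
    G.add_edge(a, b, relation=label)

# Draw the concept map; edge labels show the coded relationships.
pos = nx.spring_layout(G, seed=1)
nx.draw_networkx(G, pos, node_color="lightgray")
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "relation"))
plt.axis("off")
plt.show()
```

Comparing such maps across texts amounts to comparing which nodes and labeled edges they share.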
Determine the type of analysis: once the sample has been selected, the researcher needs to determine what types of relationships to examine and the level of analysis: word, word sense, phrase, sentence, or theme. Reduce the text to categories and code for words or patterns. A researcher can code for the existence of meanings or words. Explore the relationships between concepts: once the words are coded, the text can be analyzed for attributes such as the direction of a relationship, that is, the type of relationship that the categories exhibit.
The results then have to be presented in a report format that can be easily understood by the recipient. This involves reviewing the final results, identifying patterns, arranging all the information in a sequence, and finally presenting it in the form of a report.
The introductory sections of the report should address all the basic information about the study. The results section should contain detailed information about the various factors that were observed during the study.
The results should be supported by data and presented in the form of graphs and matrices. Clear presentation of the information makes it easy for the reader to understand and interpret the report.
The results section should offer a detailed analysis and summary of the observations that were gathered during the study. It should be a straightforward commentary on those observations. Include the important findings and avoid adding so much information that it buries the actual findings.
The results should narrate the findings without adding too many judgements or solutions. This section should give direction to the important stakeholders for further discussion and evaluation of the situation and encourage them to make decisions based on the report.
What Are the Steps of Content Analysis?
Step 1: Identify and Collect Data. There are numerous ways in which the data for qualitative content analysis can be collected. Example: content analysis using social media information about the destination image of a city or country.
Step 2: Determine Coding Categories. Measurement of content in content analysis is based on structured observation, which is systematic observation based on certain written rules.
Step 3: Code the Content. A code is the label that you assign to the text that is to be analyzed; the text can be a word or a phrase. Frequency describes the number of times a particular code occurs.
Direction is the way in which the content appears: positive, negative, opposing, supporting, and so on. Intensity denotes the strength of the content toward a particular direction. Space refers to the amount of space assigned to the text, or the size of the message.
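To illustrate how these four attributes might be recorded for each code, here is a minimal sketch; the code names, attribute values, and the 0-1 intensity scale are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class CodeRecord:
    """One coded unit with the four attributes described above."""
    code: str        # label assigned to the text segment
    frequency: int   # number of times the code occurs in the document
    direction: str   # e.g. "positive" or "negative"
    intensity: float # strength toward that direction, on an arbitrary 0-1 scale
    space: int       # amount of space, here measured in words

records = [
    CodeRecord("safety",    frequency=4, direction="positive", intensity=0.8, space=120),
    CodeRecord("nightlife", frequency=2, direction="negative", intensity=0.3, space=45),
]

# A simple summary: total space given to positively directed codes.
positive_space = sum(r.space for r in records if r.direction == "positive")
print(positive_space)
```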
Example: taking the above example, all the webpages that were shortlisted are combined into a master file.
Step 4: Check Validity and Reliability. The next stage involves testing the codes that have been designed.
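One common way to test reliability is to have two coders code the same sample and measure their agreement. The sketch below computes simple percent agreement and, assuming the scikit-learn library is available, Cohen's kappa; the coded labels themselves are invented.

```python
# Codes assigned to the same ten text segments by two independent coders.
coder_1 = ["safety", "nightlife", "safety", "food", "safety",
           "food", "nightlife", "safety", "food", "safety"]
coder_2 = ["safety", "nightlife", "food",   "food", "safety",
           "food", "nightlife", "safety", "safety", "safety"]

# Percent agreement: the proportion of segments coded identically.
agreement = sum(a == b for a, b in zip(coder_1, coder_2)) / len(coder_1)
print(f"percent agreement: {agreement:.0%}")

# Cohen's kappa corrects for agreement expected by chance.
from sklearn.metrics import cohen_kappa_score
print("kappa:", cohen_kappa_score(coder_1, coder_2))
```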
Step 5: Analyze and Present Results. After completing the analysis, there will be several sets of information organized and available as files.