Language is a structure of meanings systematically applied to particular sounds and symbols, which are consistently related through an organized grammar. The structure and function of language have been studied and discussed in a number of scholarly disciplines. While we recommend that our users maintain a working knowledge of the scholarly literature on language, a few conclusions from that literature are of particular interest to the use of Raven's Eye. Therefore, we explain them in these Technicals.
Foremost is the intimate and influential relationship between language, cognition, and perception. For much of the 20th century, scholars debated whether our perceptions, and the types and forms of our thoughts about them, are dictated by the concepts available in a particular language. As scholarship in this area has grown, a consensus has emerged that variation in the concepts a language makes available on a particular topic influences, but does not absolutely determine, both the perceptions and experiences a person may have with respect to that topic and the thoughts expressed about those experiences. In other words, while our particular language influences both the specific thoughts we have about a given experience and the manner in which we express them, the particularities of our language do not preclude us from experiencing aspects of a topic for which we have no words.[1]
The lexical hypothesis proposes that languages will contain words for the objects, events, and ideas common to the experiences of their speakers. It further proposes a positive relation between the commonality, centrality, or frequency of an experience among the speakers of a language and the number of words available to describe various aspects of that experience.[2] In this way, the lexical hypothesis proposes that the words and concepts that comprise a given language are influenced by the everyday experiences and environmental contingencies of those who speak it.
In psychology, the lexical hypothesis has been used to compare the experiences of people within a given language group, and to compare individually varying traits such as cognitive tendencies, motivation, and personality. When the lexical hypothesis is applied to individuals, it is the commonality, centrality, or frequency of a particular individual's experiences or traits, relative to those of the language group as a whole, that influences general variation in the type and frequency of the words that individual expresses. In this way, the lexical hypothesis is extended to propose that variation in the frequency of word usage within a given group of language speakers reflects individual variation in both experiences and psychological traits.
The lexical hypothesis can, therefore, be utilized to identify the relative variation in word or concept use, according to groups, individuals, and experiences.
A corpus is a collection of written documents; multiple such collections are referred to as corpora. A language corpus is an aggregated body of texts gathered to facilitate the identification of popular words and their forms. Lists or tables of words, parts of speech, and other linguistic features are typically derived from such corpora, and are often organized according to their frequency of appearance in the corpus at hand.
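As a rough sketch of how such a frequency list can be derived (using only Python's standard library; the tokenizer and sample text here are illustrative, not Raven's Eye's actual pipeline):

```python
import re
from collections import Counter

def frequency_table(corpus_text):
    """Tokenize a corpus and rank its words by frequency of appearance."""
    words = re.findall(r"[a-z']+", corpus_text.lower())
    return Counter(words).most_common()

corpus = "the horse ran past the barn and the horse stopped"
table = frequency_table(corpus)
# "the" (3 occurrences) heads the list, followed by "horse" (2)
```

Real corpora also record parts of speech and other features; the principle of ranking by observed frequency is the same.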
While language corpora serve many functions, in Raven's Eye they serve primarily as a background pool of words against which to compare an acquired natural language sample. When combined with the lexical hypothesis, these corpora facilitate the identification of words and themes that are relatively essential to your acquired natural language sample (and, by extension, to your study).
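One simple way to operationalize this comparison is a relative-frequency ("keyness") score: words that appear proportionally more often in your sample than in the background corpus float to the top. This is a minimal sketch of the general idea only, not Raven's Eye's actual scoring method:

```python
from collections import Counter

def keyness(sample_words, corpus_counts, corpus_total):
    """Score each sample word by how much more often it occurs in the
    sample than in the background corpus (relative-frequency ratio)."""
    sample_counts = Counter(sample_words)
    sample_total = len(sample_words)
    scores = {}
    for word, n in sample_counts.items():
        sample_rate = n / sample_total
        # add-one smoothing so words absent from the corpus still score
        corpus_rate = (corpus_counts.get(word, 0) + 1) / (corpus_total + 1)
        scores[word] = sample_rate / corpus_rate
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# "saddle" is absent from the background corpus, so it ranks as most
# distinctive of the sample; the common word "the" ranks last.
scores = keyness(["horse", "horse", "saddle", "the"],
                 {"the": 500, "horse": 2}, 1000)
```

Words distinctive of the sample are candidates for themes; words equally common everywhere carry little thematic weight.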
Currently, Raven's Eye maintains corpora in 65 languages. These include:
| Language | Unique Words | Total Words |
Automatic speech-to-text transcription
Raven's Eye pairs with a world-class leader in artificial intelligence to provide automated speech-to-text transcription in 9 different languages. Our automatic transcription services are available for the following languages:
- Arabic (Modern Standard)
- English (UK and US)
- Portuguese (Brazilian)
Automated transcription involves probabilistic matching between the patterns of sound in the audio file and samples of sound patterns associated with specific words in the language selected as your current corpus. This is essentially the same process used by voice-operated smart home devices and predictive-texting technologies.
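As a toy illustration of this kind of probabilistic matching (the feature vectors and two-word vocabulary here are invented for the example; real systems match acoustic models over phoneme sequences, not three-number templates):

```python
# Score each candidate word by how closely its stored sound pattern
# matches the observed audio features; the closest match is the most
# probable transcription.
def match_word(observed, templates):
    """Return the word whose template differs least from the observation
    (smaller summed squared difference = more probable match)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda w: distance(observed, templates[w]))

templates = {
    "yes": [0.9, 0.1, 0.8],
    "no":  [0.2, 0.7, 0.1],
}
observed = [0.85, 0.15, 0.75]  # noisy features, closest to "yes"
```

This also illustrates why recording quality matters: the noisier the observed features, the less reliably they match the stored patterns.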
Since our business is identifying themes in language, not transcription itself, we partner with a world-class artificial intelligence leader to produce your transcript. As a result, you can rest assured that you are receiving state-of-the-art quality in your transcription. However, the state of the art is not yet infallible, and correct transcription depends on facets of the recording beyond the control of our partner and ourselves (including the quality of the recording, slurred speech or poor enunciation, strong accents, loud background noise, etc.). We recommend that subscribers always download, review and edit, and re-upload their initial transcripts before making scientific claims or substantial business decisions based on their results.
Transcription testing and review: Because each subscriber's transcript is derived from unique circumstances, the results of their individual transcription may vary substantially from those acquired by others. To help users get the best results, each new subscription comes with 100 free minutes of transcription (even though we still incur a cost for providing them). This allows subscribers to experiment with our partner's transcription services without incurring initial additional expense.
Prior to uploading a large audio file for transcription, we recommend that subscribers use their audio editing software to isolate a 3- to 5-minute selection of their recording, and save this as a separate audio file for transcription testing. After uploading and transcribing this test file, subscribers can export their results and troubleshoot any difficulties by adjusting the recording itself (for example, by reducing background noise, adjusting the volume and reverberation, or other means the subscriber determines necessary). The adjusted audio file can then be uploaded for further testing, and this process repeated until the best transcription is acquired.
Once the subscriber identifies the adjustments that produce the best transcription, these adjustments can be applied to the whole recording. This whole recording can then be uploaded for transcription.
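For subscribers working with uncompressed WAV files, a test clip can also be cut programmatically with Python's standard wave module. This is a convenience sketch only (the filenames are hypothetical); a dedicated audio editor is still needed for the noise-reduction and volume adjustments described above:

```python
import wave

def extract_segment(src_path, dst_path, start_s, end_s):
    """Copy the [start_s, end_s) span of a WAV file into a new file,
    e.g. a short clip for transcription testing."""
    with wave.open(src_path, "rb") as src:
        rate = src.getframerate()
        src.setpos(int(start_s * rate))                 # seek to start second
        frames = src.readframes(int((end_s - start_s) * rate))
        with wave.open(dst_path, "wb") as dst:
            dst.setparams(src.getparams())              # same channels/width/rate
            dst.writeframes(frames)                     # header patched on close

# e.g. extract minutes 1-4 of a recording (hypothetical filenames):
# extract_segment("interview.wav", "test_clip.wav", 60, 240)
```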
While our partner advertises an error rate of 6% based on internal testing with recorded newscasts, some may wish to pursue options with fewer errors. In this case, high-quality human transcription might provide somewhat lower error rates (most services advertise 1%–2%). However, we should note that such human transcription services are often incompatible with the confidentiality requirements for human subjects research established by the Institutional Review Boards of most academic and research institutions, because they allow unaffiliated personnel without human subjects research training direct and unmonitored access to participants' raw data.
Transcript annotation: We proceed from a phenomenological attitude, in which preserving the voice of the speaker or writer without interpretation in the raw data is paramount (interpretation instead occurs during the analysis). Therefore, annotations about non-spoken or behavioral aspects of the interview are not automatically included in the transcript. Our partner does, however, insert an annotation into the transcript where verbal fillers (such as "um," "err," etc.) or false starts (i.e., a half-spoken syllable) are present. In these instances, the word "hesitation" is inserted into the transcript. Subscribers can eliminate this annotation from their results by exporting them to their computer and then using the Find and Replace All function of their spreadsheet program to replace every instance with a space (" ") all at once. Or, if you believe that hesitations might be meaningful to the themes in the data, they may be included as an annotation in the manner discussed in the next paragraph.
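The same Find-and-Replace-All cleanup can be scripted. A minimal sketch in Python, assuming the annotation appears as the bare word "hesitation" as described above:

```python
def strip_hesitations(transcript, marker="hesitation"):
    """Replace each inserted marker with a space, mirroring a spreadsheet
    Find-and-Replace-All, then collapse the resulting doubled spaces."""
    cleaned = transcript.replace(marker, " ")
    return " ".join(cleaned.split())

text = "I hesitation think hesitation it was Tuesday"
# strip_hesitations(text) yields "I think it was Tuesday"
```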
If subscribers wish to annotate the transcription text, they may do so by following our bracketing procedure in their spreadsheet program while reviewing and editing their transcript (or anytime thereafter). As described in those bracketing procedures, our phenomenological perspective leads us to advocate that users insert annotations as a new column in their spreadsheet (perhaps labeled "annotation") and then assign this column as a Variable when uploading the edited transcript for analysis. In this way, each type of annotation is sortable, such that one might readily investigate whether those who hesitate, or engage in some other behavior of note, express different words or themes than those who do not (all while preserving the original voice of the respondent).
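A minimal sketch of this bracketing approach, using plain row lists as a stand-in for spreadsheet rows (the column names and sample responses are illustrative, not required headers):

```python
def add_annotation_column(rows, annotations):
    """Append an "annotation" column to transcript rows (header row first),
    so the column can be assigned as a sortable Variable on re-upload."""
    annotated = [rows[0] + ["annotation"]]
    for row, note in zip(rows[1:], annotations):
        annotated.append(row + [note])
    return annotated

rows = [
    ["respondent", "response"],
    ["R1", "I think it was Tuesday"],
    ["R2", "It was maybe Monday"],
]
table = add_annotation_column(rows, ["", "hesitation"])

# Filtering on the new column isolates respondents who hesitated,
# while their original responses remain untouched:
hesitaters = [r for r in table[1:] if r[-1] == "hesitation"]
```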
[1] In such instances, one may modify existing words (e.g., "blueish," for something similar to blue, but not quite blue in the sense the word already describes in the language), borrow a word from another language, create neologisms, or produce similes or metaphors that draw on words for somewhat similar experiences.
[2] For instance, languages from areas where horses have historically been present tend to have many words describing horses and their various attributes, while languages from areas without a historic presence of horses often contain few such words, if any.