For some languages and time periods, these are really the only corpora available. For example, in spite of earlier corpora like the American National Corpus and the Bank of English, our Corpus of Contemporary American English is the only large, balanced corpus of contemporary American English. In spite of the Brown family of corpora and the ARCHER corpus, the Corpus of Historical American English is the only large and balanced corpus of historical American English. And the Corpus del Español and the Corpus do Português are the only large, carefully annotated corpora of these two languages. Beyond the "textual" corpora, however, the corpus architecture and interface that we have developed allow for speed, size, annotation, and a range of queries that we believe are unmatched by other architectures, and which make them useful even for corpora such as the British National Corpus, which does have other interfaces. Also, they're free -- a nice feature.
We have created our own corpus architecture, using Microsoft SQL Server as the backbone of the relational database approach. Our proprietary architecture allows for a degree of size, speed, and scalability that we don't believe is available with any other architecture. Even complex queries of the more than 450 million word COCA corpus or the 400 million word COHA corpus typically take only two or three seconds. In addition, because of the relational database design, we can keep adding annotation "modules" with little or no performance hit. Finally, the relational database design allows for a range of queries that we believe is unmatched by any other architecture for large corpora.
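To give a rough idea of the relational approach, here is a minimal sketch using Python's built-in sqlite3 module. The actual corpora run on Microsoft SQL Server, and all table and column names below are hypothetical; the point is only that the corpus itself is a compact sequence of integer word IDs, while annotation lives in separate joinable tables.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# One row per distinct word form; annotation (lemma, part of speech)
# lives in the lexicon, not in the corpus table itself.
cur.execute("CREATE TABLE lexicon (wordID INTEGER PRIMARY KEY, "
            "word TEXT, lemma TEXT, pos TEXT)")
# The corpus is just a sequence of integer IDs -- compact and fast to scan.
cur.execute("CREATE TABLE corpus (textID INTEGER, position INTEGER, "
            "wordID INTEGER)")

cur.executemany("INSERT INTO lexicon VALUES (?,?,?,?)", [
    (1, "walks", "walk", "VVZ"),
    (2, "walked", "walk", "VVD"),
    (3, "dog", "dog", "NN1"),
])
cur.executemany("INSERT INTO corpus VALUES (?,?,?)", [
    (100, 1, 3), (100, 2, 1), (101, 1, 3), (101, 2, 2),
])

# Frequency of all forms of the lemma "walk": a single join.
# A new annotation "module" is just another table keyed on wordID,
# which is why adding modules carries little performance cost.
cur.execute("""SELECT l.word, COUNT(*) FROM corpus c
               JOIN lexicon l ON c.wordID = l.wordID
               WHERE l.lemma = 'walk' GROUP BY l.word""")
freqs = dict(cur.fetchall())
print(freqs)
```

This is just the textbook "word-as-integer plus lexicon join" design; the production schema is certainly more elaborate.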
As measured by Google Analytics, as of October 2014 the corpora are used by more than 130,000 unique visitors each month. The most widely used corpus is the Corpus of Contemporary American English, with more than 65,000 unique users each month. And people don't just come in, look up one word, and move on -- the average visit lasts 10-15 minutes. (More information...)
For lots of things. Linguists use the corpora to analyze variation and change in the different languages. Materials developers use the data to create teaching materials. Many users are language teachers and learners, who use the corpus data to model native speaker performance and intuition. Translators use the corpora to get precise data on their target languages. Others in the humanities and social sciences look at changes in culture and society (especially with COHA and Hansard). Some businesses purchase data from the corpora to use in natural language processing projects. And lots of people are just curious about language and (believe it or not) use the corpora for fun, to see what's going on with the languages currently. To get a better idea of what people are doing with the corpora, check out (or search through) the entries on the Researchers page.
Our corpora contain hundreds of millions of words of copyrighted material. Their use is legal (under US Fair Use Law) only because of the limited "Keyword in Context" (KWIC) displays. It's kind of like the "snippet defense" used by Google: Google retrieves and indexes billions of words of copyrighted material, but end users can access only "snippets" of this data from its servers. Click here for an extended discussion of US Fair Use Law and how it applies to our COCA texts.
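As an illustration of the principle (not of the actual corpus software), a KWIC display can be sketched in a few lines of Python: only a short window of context around each hit is ever shown, never the full text.

```python
def kwic(tokens, keyword, window=3):
    """Return (left context, keyword, right context) for each hit.

    Toy sketch: the context "window" is what keeps the display a
    snippet rather than a copy of the underlying text.
    """
    hits = []
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append((left, tok, right))
    return hits

text = "the quick brown fox jumps over the lazy dog".split()
for left, kw, right in kwic(text, "the"):
    print(f"{left:>20}  [{kw}]  {right}")
```

Each line shows at most a few words on either side of the keyword, which is the heart of the snippet-style defense described above.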
Full-text data for COCA and GloWbE is now available (COCA = 440 million words, 190,000 texts / GloWbE = 1.8 billion words, 1.8 million texts). There is currently no full-text access for the other corpora, although we will probably release full-text data from COHA in early 2015.
No, there isn't. There are two main reasons for this. First, we don't hold copyright to the texts in the corpora, so we can only provide limited access via the corpus interface. Second, the two corpus servers are already pretty "maxed out", and API access would probably generate many more queries than we can handle right now. Although we don't allow API access, some people have scripted browsers (via VB.NET for IE, or Perl for Firefox) to allow semi-automated queries (note, though, that we don't provide tech support for this).
"Non-researchers" (Level 1) have 50 queries a day, or about 3,000 queries per month. For most people, this is way more than enough. But if you are in fact a graduate student in languages or linguistics, but there isn't a web page with your name on it, and you really do need more than 1,500 queries per month, then click here. If that's not possible, you might want to contribute to help support the corpora, in which case you will have 200 queries a day.
Users can purchase offline data -- such as full-text copies of the texts, frequency lists, collocates lists, and n-grams lists (e.g. all two- or three-word sequences). Click here for much more detailed information on this data, as well as downloadable samples.
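For instance, an n-grams list is simply a count of every contiguous sequence of n words in the corpus. A toy sketch in Python (the real lists are of course derived from the full corpora, not from a one-line example):

```python
from collections import Counter

def ngrams(tokens, n):
    """Count every contiguous n-word sequence in a token list."""
    return Counter(tuple(tokens[i:i + n])
                   for i in range(len(tokens) - n + 1))

tokens = "to be or not to be".split()
print(ngrams(tokens, 2).most_common(1))  # [(('to', 'be'), 2)]
```

Frequency and collocates lists are produced in an analogous way, by counting word forms and co-occurring words instead of fixed sequences.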
There is a limit of 250 queries per 24 hours for a "group", where a group is typically a class of students or a department at a university. If you need more queries than this, you'll want an academic / site license.
There are a number of reasons for our move to a contributions-based model in early 2015. One important factor is that Mark Davies, the creator and administrator of the corpora, will probably be retiring in 2018 or 2019, and there needs to be some viable model for the financial sustainability of the corpora beyond that date. It's probably not realistic to expect the College of Humanities at BYU (which has been extremely supportive to this point) to keep spending $15,000-20,000 on a new server every year or two after 2018-19. In addition, there will need to be someone working 10-15 hours each week as an administrator for the corpora (for a total of $10,000-15,000/year). Hopefully, with a few years of contributions stored up by 2018-19, and with contributions coming in after that date as well, this will provide the needed financial viability of the corpora (~$15,000-20,000/year). The other option, of course, is to move to a strict subscription-based model like some other corpora, and this is something that we really don't want to have to do.
In addition to the basic "contributions", we're considering the possibility of allowing organizations (such as publishers, ESL schools, universities, etc.) to "sponsor" the corpora. The sponsorship would be for a limited time (e.g. 1-3 months) and could be targeted to just those users from a particular country. All users from that country who use the corpora during that period would see a small logo for your organization in the header at the top of the corpus page, linking to your organization's website.
We've already done this for an ESL publisher from South Korea, targeted to all users of WordAndPhrase coming from South Korea, and it has resulted in thousands of additional visits to their website. To give another example, if you're from a university with a graduate program in which corpus linguistics plays an important part, this might be a great way to attract students who are already interested in corpus linguistics to your program.
Anyway, we're just considering the possibility of doing this, and we probably wouldn't start until mid-2015. But if you think that your organization might be interested (and there's no obligation to follow through on this), please let us know (email@example.com).
Part of the rationale for these messages is to let you know about useful resources that are related to the corpora (such as the word frequency, n-grams, collocates, and full-text data, or WordAndPhrase, AcademicWords, etc). The other purpose is to help motivate people to contribute, in order to ensure the financial viability of the corpora.
Once you have made a minimal contribution to the corpora (or purchased data from one of the sites just listed, some for as little as $20), you won't see these messages anymore (during the month or year of your contribution or purchase).
If you don't want to contribute to the BYU corpora and are really bothered by the messages, you might want to consider other web-based corpora -- like those from Lancaster University (including BNCweb), CorpusEye, or the many excellent corpora from Sketch Engine. (Please be aware, though, that the subscription fee for the Sketch Engine corpora is somewhat more expensive than the suggested contribution for the BYU corpora.)
Please use the following information when you cite the corpus in academic publications or conference papers. Thanks.
In the first reference to the corpus in your paper, please use the full name. For example, for COCA: "the Corpus of Contemporary American English", with the appropriate citation to the references section of the paper, e.g. (Davies 2008-). After that first reference, feel free to use something shorter, like "COCA" (for example: "...and as seen in COCA, there are..."). Also, please do not refer to the corpus in the body of your paper as "Davies' COCA corpus", "a corpus created by Mark Davies", etc. The bibliographic entry itself is enough to indicate who created the corpus. Otherwise, it just sounds a bit strange, and overly proprietary.