T-LAB 10.2 - ON-LINE HELP

Requirements and Performances


Minimum configuration required:

- Windows 7 or later
- 4 GB RAM
- Full HD screen resolution (1920 x 1080) recommended

T-LAB performance depends mainly on two factors: the size of the corpus and the CPU available.

At present, T-LAB options have the following restrictions:

  • corpus dimension: max 90 MB, equal to about 55,000 pages in .txt format;
  • primary documents: max 30,000 (max 99,999 for short texts which do not exceed 2,000 characters each, e.g. responses to open-ended questions, Twitter messages, etc.);
  • categorical variables: max 50, each allowing max 150 subsets (categories) which can be compared;
  • modeling of emerging themes: max 5,000 lexical units (*) by 5,000,000 occurrences;
  • thematic analysis of elementary contexts: max 300,000 rows (context units) by 5,000 columns (lexical units);
  • thematic document classification: max 99,999 rows (i.e. documents) by 5,000 columns (lexical units);
  • specificity analysis (lexical units x categories): max 10,000 rows by 150 columns;
  • correspondence analysis (lexical units x categories): max 10,000 rows by 150 columns;
  • correspondence analysis (context units x lexical units): max 10,000 rows by 5,000 columns;
  • multiple correspondence analysis (elementary contexts x categories): max 150,000 rows by 250 columns;
  • singular value decomposition: max 300,000 rows by 5,000 columns;
  • cluster analysis that uses the results of a previous correspondence analysis (or SVD): max 10,000 rows (lexical units or elementary contexts);
  • word associations, comparison between word pairs and co-word analysis: max 5,000 lexical units;
  • sequence analysis: max 5,000 lexical units (or categories) by 3,000,000 occurrences.

    (*)
    In T-LAB, lexical units are words, multi-words, lemmas and semantic categories. So, when automatic lemmatization is applied, 5,000 lexical units correspond to about 12,000 words (i.e. raw forms).
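Because an oversized corpus is only rejected once the import step runs, it can be convenient to check a file against the documented limits beforehand. The sketch below is a hypothetical pre-flight check written for this page; it is not part of T-LAB, and only the numeric thresholds (90 MB corpus size, 30,000 documents, 99,999 short texts of at most 2,000 characters) come from the list above.

```python
import os

# Limits documented on this help page (T-LAB 10.2).
MAX_CORPUS_BYTES = 90 * 1024 * 1024   # corpus dimension: max 90 MB
MAX_DOCUMENTS = 30_000                # primary documents: max 30,000
MAX_SHORT_DOCUMENTS = 99_999          # short texts only (<= 2,000 chars each)

def check_corpus(path, n_documents, all_short=False):
    """Return a list of limit violations for a corpus file (empty = OK).

    Hypothetical helper, not a T-LAB function. `all_short=True` means
    every document is at most 2,000 characters (e.g. open-ended
    responses or tweets), which raises the document limit to 99,999.
    """
    problems = []
    size = os.path.getsize(path)
    if size > MAX_CORPUS_BYTES:
        problems.append(f"corpus is {size / 2**20:.1f} MB (max 90 MB)")
    doc_limit = MAX_SHORT_DOCUMENTS if all_short else MAX_DOCUMENTS
    if n_documents > doc_limit:
        problems.append(f"{n_documents} documents exceeds the max of {doc_limit}")
    return problems
```

For example, a corpus of 40,000 ordinary documents would fail the check, while the same count of short texts (responses to open-ended questions) would pass, mirroring the distinction made in the list above.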