# DataCollection

Collecting data for machine translation training from CommonCrawl is a two-phase process illustrated in the following diagram:

*(CommonCrawl process diagram)*

## Phase 1: Language annotation, building a metadata database and monolingual data extraction

The first phase detects the language of each web page contained in the crawl and gathers other metadata. From this data a database is built that can be accessed via a RESTful web API.

The metadata documentation describes phase 1 step-by-step.
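As a rough illustration of how the metadata database might be queried, the Python sketch below looks up the record stored for a single URL. The host, endpoint path and response fields are placeholders, not the project's actual API; refer to the metadata documentation for the real interface.

```python
# Hedged sketch: querying the phase 1 metadata database over its RESTful API.
# The host, endpoint path and response fields are placeholders, not the
# project's real interface; see the metadata documentation for the actual API.
import requests

METADATA_API = "http://localhost:8080"  # placeholder host


def lookup_metadata(url: str) -> dict:
    """Fetch the stored metadata record (e.g. detected language, crawl id) for a URL."""
    response = requests.get(f"{METADATA_API}/query", params={"url": url}, timeout=30)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    print(lookup_metadata("http://example.com/en/about"))
```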

In this phase, monolingual data for language model training can also be extracted. Data for some of the CommonCrawl crawls and some languages can be found at:

For more details on the monolingual data see ModernMT Deliverable 2.1.
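As a rough sketch of what monolingual extraction involves, the snippet below strips the markup from annotated pages and keeps only those whose detected language matches a target language. `langid` and `lxml` are stand-ins for illustration, not necessarily the tools used by the actual pipeline.

```python
# Hedged sketch of monolingual text extraction: strip markup and keep pages
# whose detected language matches the target. langid and lxml are stand-ins,
# not necessarily the tools used by the actual pipeline.
import langid
from lxml import html as lxml_html


def monolingual_texts(pages, target_lang="de"):
    """Yield the plain text of pages detected as the target language.

    `pages` is an iterable of (url, raw_html) pairs.
    """
    for _url, raw_html in pages:
        text = lxml_html.fromstring(raw_html).text_content()
        lang, _score = langid.classify(text)
        if lang == target_lang:
            yield text.strip()
```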

## Phase 2: Extracting parallel data and optional cleaning

In the second phase, the metadata collected in phase 1 is used to extract parallel data from CommonCrawl based on URL pattern matching. Phase 2 is documented step-by-step in the baseline documentation.
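To make the URL-matching idea concrete, here is a simplified sketch: two pages are treated as candidate translations when their URLs become identical after the language marker is removed (e.g. `.../en/...` vs `.../fr/...`). The regular expression and the set of language markers are assumptions for illustration, not the project's full matching rules.

```python
# Hedged sketch of URL pattern matching for parallel data: group URLs that
# differ only in a language marker in the path. The regex and the language
# markers are simplified assumptions, not the project's full rules.
import re
from collections import defaultdict

LANG_MARKER = re.compile(r"/(en|fr|it)(/|$)")


def candidate_pairs(urls):
    """Group URLs whose paths differ only in the language marker."""
    buckets = defaultdict(dict)
    for url in urls:
        match = LANG_MARKER.search(url)
        if not match:
            continue
        # Replace the language marker with a wildcard to build a grouping key.
        key = LANG_MARKER.sub("/*\\2", url, count=1)
        buckets[key][match.group(1)] = url
    return [group for group in buckets.values() if len(group) > 1]


if __name__ == "__main__":
    urls = [
        "http://example.com/en/products",
        "http://example.com/fr/products",
        "http://example.com/en/contact",
    ]
    print(candidate_pairs(urls))
```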

For the language pairs en↔it and en↔fr, matched URL data is available for quick data extraction in release 0.1.0.