CSV (.csv)

Background & Context

    • MIME types: text/comma-separated-values, text/csv
    • CSV tabular data format.
    • Stores records of numerical and textual information as lines, using commas to separate fields.
    • Commonly used in spreadsheet applications as an exchange format.
    • CSV is an acronym for Comma-Separated Values.
    • Plain text format.
    • Similar to TSV.
    • Supports the RFC 4180 standard.

Import & Export

  • Import["file.csv"] returns a list of lists containing strings and numbers, representing the rows and columns stored in the file.
  • Import["file.csv",elem] imports the specified element.
  • Import["file.csv",{elem,subelem1,…}] imports subelements subelemi, useful for partial data import.
  • The import format can be specified with Import["file","CSV"] or Import["file",{"CSV",elem,…}].
  • Export["file.csv",expr] creates a CSV file from expr.
  • Supported expressions expr include:
  • {v1,v2,…}   a single column of data
    {{v11,v12,…},{v21,v22,…},…}   lists of rows of data
    array   an array such as SparseArray, QuantityArray, etc.
    tseries   a TimeSeries, EventSeries or a TemporalData object
    Dataset[…]   a dataset
    Tabular[…]   a tabular object
  • See the following reference pages for full general information:
  • Import, Export   import from or export to a file
    CloudImport, CloudExport   import from or export to a cloud object
    ImportString, ExportString   import from or export to a string
    ImportByteArray, ExportByteArray   import from or export to a byte array
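As a minimal sketch of the round trip (the file name "data.csv" is hypothetical), Export writes rows of data and Import reads them back:

```wolfram
(* write two rows of mixed string/numeric data to a CSV file *)
Export["data.csv", {{"a", 1}, {"b", 2}}]

(* read the file back; numeric fields are interpreted as numbers by default *)
Import["data.csv"]
(* typically returns {{"a", 1}, {"b", 2}} *)
```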

Import Elements

  • General Import elements:
  • "Elements"   list of elements and options available in this file
    "Summary"   summary of the file
    "Rules"   list of rules for all available elements
  • Data representation elements:
  • "Data"   two-dimensional array
    "Grid"   table data as a Grid object
    "RawData"   two-dimensional array of strings
    "Dataset"   table data as a Dataset
    "Tabular"   table data as a Tabular object
  • Data descriptor elements:
  • "ColumnLabels"   names of columns
    "ColumnTypes"   association of column names and types
    "Schema"   TabularSchema object
  • Import and Export use the "Data" element by default.
  • Subelements for partial data import for any element elem can take row and column specifications in the form {elem,rows,cols}, where rows and cols can be any of the following:
  • n   nth row or column
    -n   counts from the end
    n;;m   from n through m
    n;;m;;s   from n through m with steps of s
    {n1,n2,…}   specific rows or columns ni
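For example (with a hypothetical "file.csv"), row and column specifications can be combined in a single partial-import element:

```wolfram
(* rows 2 through 5 of columns 1 and 2 *)
Import["file.csv", {"Data", 2 ;; 5, {1, 2}}]

(* the last row *)
Import["file.csv", {"Data", -1}]
```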
  • Metadata elements:
  • "ColumnCount"   number of columns
    "Dimensions"   a list of the number of rows and the maximum number of columns
    "RowCount"   number of rows
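A quick sketch of reading metadata without importing the full table (the file name is hypothetical):

```wolfram
Import["file.csv", "Dimensions"]   (* {number of rows, maximum number of columns} *)
Import["file.csv", "ColumnCount"]
Import["file.csv", "RowCount"]
```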

Options

  • Import and Export options:
  • "EmptyField"   ""   how to represent empty fields
    "QuotingCharacter"   "\""   character used to delimit non-numeric fields
  • Data fields containing commas and line separators are typically wrapped in double-quote characters. By default, Export uses double-quote characters as delimiters. Specify a different character using "QuotingCharacter".
  • Double-quote characters delimiting text fields are not imported by default.
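As a sketch, ExportString makes the quoting behavior easy to inspect; the single-quote setting below is purely illustrative:

```wolfram
(* a field containing a comma is wrapped in the default double quotes *)
ExportString[{{"a,b", 1}}, "CSV"]

(* use a different quoting character instead *)
ExportString[{{"a,b", 1}}, "CSV", "QuotingCharacter" -> "'"]
```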
  • Import options:
  • CharacterEncoding   "UTF8ISOLatin1"   raw character encoding used in the file
    "ColumnTypeDetectionDepth"   Automatic   number of rows used to detect column types
    "CurrencyTokens"   None   currency units to be skipped when importing numerical values
    "DateStringFormat"   None   date format, given as a DateString specification
    "FieldSeparator"   ","   string token taken to separate columns
    "FillRows"   Automatic   whether to fill rows to the maximum column length
    "HeaderLines"   Automatic   number of lines to assume as headers
    "IgnoreEmptyLines"   False   whether to ignore empty lines
    MissingValuePattern   Automatic   patterns used to specify missing elements
    "NumberPoint"   "."   decimal point character
    "Numeric"   Automatic   whether to import data fields as numbers if possible
    "Schema"   Automatic   schema used to construct the Tabular object
    "SkipInvalidLines"   False   whether to skip invalid lines
    "SkipLines"   Automatic   number of lines to skip at the beginning of the file
  • By default, Import attempts to interpret the data as "UTF8"-encoded text. If any sequence of bytes stored in the file cannot be represented in "UTF8", Import uses "ISOLatin1" instead.
  • With CharacterEncoding -> Automatic, Import attempts to infer the character encoding of the file.
  • Possible settings for "HeaderLines" and "SkipLines" are:
  • Automatic   try to automatically determine the number of rows to skip or use as headers
    n   n rows to skip or to use as Dataset headers
    {rows,cols}   rows and columns to skip or to use as headers
  • Import converts table entries formatted as specified by "DateStringFormat" to a DateObject.
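For instance, assuming a hypothetical "file.csv" whose first line is a header and whose dates are stored as 2024-01-31-style strings:

```wolfram
(* treat the first line as column headers *)
Import["file.csv", "Dataset", "HeaderLines" -> 1]

(* convert matching entries to DateObject values *)
Import["file.csv", "Data",
  "DateStringFormat" -> {"Year", "-", "Month", "-", "Day"}]
```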
  • Export options:
  • Alignment   None   how data is aligned within table columns
    CharacterEncoding   "UTF8"   raw character encoding used in the file
    "FillRows"   False   whether to fill rows to the maximum column length
    "IncludeQuotingCharacter"   Automatic   whether to add quotation characters around exported values
    "TableHeadings"   Automatic   headings for table columns and rows
  • Possible settings for Alignment are None, Left, Center, and Right.
  • "IncludeQuotingCharacter" can be set to the following values:
  • None   do not enclose any values in quotes
    Automatic   only enclose values in quotes when needed
    All   enclose all valid values in quotes
  • "TableHeadings" can be set to the following values:
  • None   skip column labels
    Automatic   export column labels
    {"col1","col2",…}   list of column labels
    {rhead,chead}   separate labels for the rows and columns
  • Export encodes line separator characters using the convention of the computer system on which the Wolfram Language is being run.
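A short sketch of the Export options above (the column labels "x" and "y" are illustrative):

```wolfram
(* export with custom column headers *)
ExportString[{{1, 2}, {3, 4}}, "CSV", "TableHeadings" -> {"x", "y"}]

(* enclose every value in quotes *)
ExportString[{{1, 2}}, "CSV", "IncludeQuotingCharacter" -> All]
```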

Examples

Basic Examples  (3)

Import a CSV file:

Read and plot all data from the file:

Import summary of a CSV file:

Export an array of expressions to CSV:

Scope  (8)

Import  (4)

Import metadata from a CSV file:

Import a CSV file as a Tabular object with automatic header detection:

Import without headers, while skipping the first line:

Import a sample row of a CSV:

Analyze a single column of a file; start by looking at column labels and their types:

Get all values for one column:

Compute the mean:

Export  (4)

Export a Tabular object:

Use "TableHeadings" option to remove header from a Tabular object:

Export a TimeSeries:

Export an EventSeries:

Export a QuantityArray:

Import Elements  (27)

"ColumnCount"  (1)

Get the number of columns from a CSV file:

"ColumnLabels"  (1)

Get the inferred column labels from a CSV file:

"ColumnTypes"  (1)

Get the inferred column types from a CSV file:

"Data"  (6)

Import a CSV file as a 2D list of values:

This is also the default element:

Import a single row from a CSV file:

Import some specific rows from a CSV file:

Import the first 10 rows of a CSV file:

Import a single row and column from a CSV file:

Import a single column from a CSV file:

"Dataset"  (2)

Import a CSV file as a Dataset:

Use "HeaderLines" and "SkipLines" options to only import the data of interest:

"Dimensions"  (2)

Get the dimensions from a CSV file:

If the rows in the file do not all have the same number of columns, some rows may be considered invalid:

"Grid"  (1)

Import CSV data as a Grid:

"RawData"  (3)

Import CSV data as raw strings:

Compare to "Data":

By default for "RawData", "Numeric"->False is used:

Use "Numeric"->True:

By default for "RawData", "FillRows"->True is used:

Use "FillRows"->False:

"RowCount"  (1)

Get the number of rows from a CSV file:

"Schema"  (1)

Get the TabularSchema object:

"Summary"  (1)

Summary of a CSV file:

"Tabular"  (7)

Import a CSV file as a Tabular object:

Use "HeaderLines" and "SkipLines" options to only import the data of interest:

Import a single row:

Import multiple rows:

Import the first 5 rows:

Import a single element at a given row and column:

Import a single column:

Import Options  (15)

CharacterEncoding  (1)

The character encoding can be set to any value from $CharacterEncodings:

"ColumnTypeDetectionDepth"  (1)

By default, several dozen rows from the beginning of the file are used to detect column types:

Use more rows to detect column types:

"CurrencyTokens"  (1)

Currency tokens are not automatically skipped:

Use the "CurrencyTokens" option to skip selected currency tokens:

"DateStringFormat"  (1)

Convert dates to a DateObject using the date format specified:

By default, no conversion is performed:

"EmptyField"  (1)

Specify a default value for empty fields in CSV data:

"FieldSeparator"  (1)

By default, "," is used as a field separator:

Use tab as a field separator:

"FillRows"  (1)

For the "Data" element, row lengths are automatically preserved:

Pad rows:

For the "RawData" element, a full array is imported by default:

"HeaderLines"  (1)

The header line is automatically detected by default:

Use "HeaderLines" option when automatic header detection is incorrect:

Specify row headers:

Specify row and column headers:

"IgnoreEmptyLines"  (1)

Use "IgnoreEmptyLines" to remove lines with no data from the imported data:

MissingValuePattern  (1)

By default, an automatic set of values is considered missing:

Use MissingValuePattern->None to disable missing element detection:

Use string patterns to find missing elements:

"Numeric"  (1)

Use "Numeric"->True to interpret numbers:

By default, everything imports as strings:

"NumberPoint"  (1)

By default, "." is used as the decimal point character for floating-point data:

Use the "NumberPoint" option to specify a different decimal point character:

"QuotingCharacter"  (1)

The default quoting character is a double quote:

A different quoting character can be specified:

"Schema"  (1)

Import automatically infers column labels and types from data stored in a CSV file:

Use "Schema" option to specify column labels and types:

"SkipLines"  (1)

CSV files may include a comment line:

Skip the comment line:

Skip the comment line and use the next line as a Tabular header:

Export Options  (7)

Alignment  (1)

By default, no additional characters are added for any alignment:

Left-align column values:

Center-align column values:

CharacterEncoding  (1)

The character encoding can be set to any value from $CharacterEncodings:

"EmptyField"  (1)

By default, empty elements are exported as empty strings:

Specify a different value for empty elements:

"FillRows"  (1)

Row lengths are preserved by default:

Use "FillRows"->True to export full array:

"IncludeQuotingCharacter"  (1)

By default, Export only exports quotation characters for values that need them:

Use "IncludeQuotingCharacter"->All to enclose all values in quotes:

Use "IncludeQuotingCharacter"->None to export all values without quotes. Note that headers are always enclosed in quotes:

"QuotingCharacter"  (1)

The default quoting character used for non-numeric elements is a double quote:

Specify a different quoting character:

Use "QuotingCharacter"->"" to export all values without quotes. Note that headers are always enclosed in quotes:

"TableHeadings"  (1)

By default, column headers are exported:

Use "TableHeadings"->None to skip column headers:

Export data using custom column headers:

Export data using custom column and row headers:

Applications  (1)

Export a list of European countries and their populations to a CSV file:

Import the data back and convert to expressions:
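A hedged sketch of this application, using made-up population figures and a hypothetical file name:

```wolfram
(* export country/population pairs with column headers *)
data = {{"Germany", 83000000}, {"France", 68000000}, {"Italy", 59000000}};
Export["population.csv", data, "TableHeadings" -> {"Country", "Population"}];

(* read the data back; numeric fields are converted automatically *)
Import["population.csv", "Dataset", "HeaderLines" -> 1]
```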

Possible Issues  (13)

If the rows in the file do not all have the same number of columns, some rows may be considered invalid:

Entries of the format "nnnEnnn" are interpreted as numbers with scientific notation:

Use the "Numeric" option to override this interpretation:

Numeric interpretation may result in a loss of precision:

Use the "Numeric" option to override this interpretation:

Starting from Version 14.2, currency tokens are not automatically skipped:

Use the "CurrencyTokens" option to skip such tokens:

Starting from Version 14.2, quoting characters are added when the column of integer values contains numbers greater than Developer`$MaxMachineInteger:

Use "IncludeQuotingCharacter"->None to get the previous result:

Starting from Version 14.2, some strings are automatically considered missing:

Use MissingValuePattern->None to override this interpretation:

Starting from Version 14.2, real numbers with 0 fractional part are exported as integers:

Use "Backend"->"Table" to get the previous result:

Starting from Version 14.2, column types are identified automatically:

Use "Backend"->"Table" if non-homogeneous types in columns are expected:

Starting from Version 14.2, integers greater than Developer`$MaxMachineInteger are imported as real numbers:

Use "Backend"->"Table" to get the previous result:

Starting from Version 14.2, date and time columns of Tabular objects are exported using DateString:

Use "Backend"->"Table" to get the previous result:

Some CSV data generated from older versions of the Wolfram Language may have incorrectly delimited text fields and will not import as expected in Version 11.2 or higher:

Using "QuotingCharacter"->"" will give the previously expected result:

The top-left corner of data is lost when importing a Dataset with row and column headers:

Dataset may look different depending on the dimensions of the data: