CSV (.csv)
- Import and Export fully support the CSV format and provide various data conversion and formatting options.
- Import automatically recognizes common number formats, including C and Fortran notations.
- Numbers without decimal points are imported as integers.
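- For instance, using ImportString as a quick sketch (no file needed; the result in the comment is illustrative):
    ImportString["1,2.5,3.0E2\n4,5D-1,6", "CSV"]
    (* C and Fortran exponent notations are recognized: {{1, 2.5, 300.}, {4, 0.5, 6}} *)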
Background & Context
- MIME type: text/comma-separated-values, text/csv
- CSV tabular data format.
- Stores records of numerical and textual information as lines, using commas to separate fields.
- Commonly used in spreadsheet applications as an exchange format.
- CSV is an acronym for Comma-Separated Values.
- Plain text format.
- Similar to TSV.
- Supports RFC 4180.
Import & Export
- Import["file.csv"] returns a list of lists containing strings and numbers, representing the rows and columns stored in the file.
- Import["file.csv",elem] imports the specified element from a CSV file.
- Import["file.csv",{elem,subelem1,…}] imports subelements subelemi, useful for partial data import.
- The import format can be specified with Import["file","CSV"] or Import["file",{"CSV",elem,…}].
- Export["file.csv",expr] creates a CSV file from expr.
- Supported expressions expr include:
  {v1,v2,…}                       a single column of data
  {{v11,v12,…},{v21,v22,…},…}     lists of rows of data
  array                           an array such as SparseArray, QuantityArray, etc.
  tseries                         a TimeSeries, EventSeries or a TemporalData object
  Dataset[…]                      a dataset
- See the following reference pages for full general information:
  Import, Export                      import from or export to a file
  CloudImport, CloudExport            import from or export to a cloud object
  ImportString, ExportString          import from or export to a string
  ImportByteArray, ExportByteArray    import from or export to a byte array
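- As a quick sketch (the file name and data are hypothetical, and the results in the comments are illustrative):
    Export["file.csv", {{"a", 1}, {"b", 2}}]    (* returns the file name "file.csv" *)
    Import["file.csv"]                          (* {{"a", 1}, {"b", 2}} *)
    ImportString[ExportString[{{1.5, "x"}, {2, "y"}}, "CSV"], "CSV"]
    (* round trip through a string: {{1.5, "x"}, {2, "y"}} *)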
Import Elements
- General Import elements:
  "Elements"    list of elements and options available in this file
  "Summary"     summary of the file
  "Rules"       list of rules for all available elements
- Data representation elements:
  "Data"        two-dimensional array
  "Grid"        table data as a Grid object
  "RawData"     two-dimensional array of strings
  "Dataset"     table data as a Dataset
- Import and Export use the "Data" element by default.
- Subelements for partial data import for any element elem can take row and column specifications in the form {elem,rows,cols}, where rows and cols can be any of the following:
  n             nth row or column
  -n            counts from the end
  n;;m          from n through m
  n;;m;;s       from n through m with steps of s
  {n1,n2,…}     specific rows or columns ni
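- A sketch of partial data import on a hypothetical "file.csv":
    Import["file.csv", {"Data", 1 ;; 3}]            (* rows 1 through 3 *)
    Import["file.csv", {"Data", -1}]                (* the last row *)
    Import["file.csv", {"Data", 1 ;; 10, {1, 3}}]   (* columns 1 and 3 of the first ten rows *)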
- Metadata elements:
  "Dimensions"        a list of number of rows and maximum number of columns
  "MaxColumnCount"    maximum number of columns
  "RowCount"          number of rows
Options
- Import and Export options:
  "EmptyField"        ""      how to represent empty fields
  "TextDelimiters"    "\""    character used to delimit non-numeric fields
- Data fields containing commas and line separators are typically wrapped in double-quote characters. By default, Export uses double-quote characters as delimiters. Specify a different character using "TextDelimiters".
- Double-quote characters delimiting text fields are not imported by default.
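- A rough sketch of both behaviors (the output comments are illustrative, and the exact line terminators follow the system convention):
    ExportString[{{"a,b", 1}, {"c", 2}}, "CSV"]
    (* the comma-containing field is wrapped in double quotes: "a,b",1 ... *)
    ExportString[{{"a,b", 1}}, "CSV", "TextDelimiters" -> "'"]
    (* single quotes are used as the text delimiter instead *)
    ImportString["\"a,b\",1", "CSV"]
    (* the delimiting quotes are stripped on import: {{"a,b", 1}} *)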
- Import options:
  CharacterEncoding      {"UTF8","ISOLatin1"}                     raw character encoding used in the file
  "CurrencyTokens"       {{"$","£","¥","€"},{"c","¢","p","F"}}    currency units to be skipped when importing numerical values
  "DateStringFormat"     None         date format, given as a DateString specification
  "FillRows"             Automatic    whether to fill rows to the max column length
  "HeaderLines"          0            number of lines to assume as headers
  "IgnoreEmptyLines"     False        whether to ignore empty lines
  "NumberPoint"          "."          decimal point string
  "Numeric"              Automatic    whether to import data fields as numbers if possible
  "SkipLines"            0            number of lines to skip at the beginning of the file
- By default, Import attempts to interpret the data as "UTF8"-encoded text. If any sequence of bytes stored in the file cannot be represented in "UTF8", Import uses "ISOLatin1" instead.
- With CharacterEncoding -> Automatic, Import attempts to infer the character encoding of the file.
- Possible settings for "HeaderLines" and "SkipLines" are:
  n              n rows to skip or to use as Dataset headers
  {rows,cols}    rows and columns to skip or to use as headers
- Import converts table entries formatted as specified by "DateStringFormat" to a DateObject.
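- A sketch of some of these options on a hypothetical "file.csv" (the results depend on the file's contents):
    Import["file.csv", "Dataset", "HeaderLines" -> 1]         (* first line becomes the Dataset column headers *)
    Import["file.csv", "Dataset", "HeaderLines" -> {1, 1}]    (* one header row and one header column *)
    Import["file.csv", "SkipLines" -> 2]                      (* ignore the first two lines, e.g. comments *)
    Import["file.csv", "DateStringFormat" -> {"Day", "/", "Month", "/", "Year"}]
    (* entries such as 31/12/2020 are converted to DateObject expressions *)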
- Export options:
  Alignment            None      how data is aligned within table columns
  CharacterEncoding    "UTF8"    raw character encoding used in the file
  "FillRows"           False     whether to fill rows to the max column length
  "TableHeadings"      None      headings for table columns and rows
- Possible settings for Alignment are None, Left, Center, and Right.
- "TableHeadings" can be set to the following values:
  None                 no labels
  Automatic            gives successive integer labels for columns and rows
  {"col1","col2",…}    list of column labels
  {rhead,chead}        specifies separate labels for the rows and columns
- Export encodes line separator characters using the convention of the computer system on which the Wolfram Language is being run.
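- For example (a sketch; "file.csv" and the data are hypothetical):
    Export["file.csv", {{1, 2}, {3, 4}}, "TableHeadings" -> {"x", "y"}]                    (* column labels only *)
    Export["file.csv", {{1, 2}, {3, 4}}, "TableHeadings" -> {{"r1", "r2"}, {"x", "y"}}]    (* row and column labels *)
    Export["file.csv", {{1, 20}, {300, 4}}, Alignment -> Right]                            (* right-align entries within columns *)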
Examples
Basic Examples (3)
Read and plot all data from the file:
Import a CSV file as a Dataset with headers:
Scope (4)
Export a Dataset:
Use Values to remove headers from a dataset:
Use the "HeaderLines" option to import table headers:
Export a TimeSeries:
Export an EventSeries:
Export a QuantityArray:
Import Elements (17)
Import Options (10)
CharacterEncoding (1)
The character encoding can be set to any value from $CharacterEncodings:
"CurrencyTokens" (1)
Currency tokens are automatically skipped:
Use "CurrencyTokens"->None to include all currency tokens:
"DateStringFormat" (1)
Convert dates to a DateObject using the date format specified:
"FillRows" (1)
"HeaderLines" (1)
"Numeric" (1)
Use "Numeric"->True to interpret numbers:
"SkipLines" (1)
CSV files may include a comment line:
Skip the comment line, and use the next line as a Dataset header:
Export Options (7)
Alignment (1)
CharacterEncoding (1)
The character encoding can be set to any value from $CharacterEncodings:
"EmptyField" (1)
"FillRows" (1)
By default, a full array is exported:
Use "FillRows"->False to preserve row lengths:
"TableHeadings" (1)
Applications (1)
Possible Issues (5)
Some CSV data generated from older versions of the Wolfram Language may have incorrectly delimited text fields and will not import as expected in Version 11.2:
Using "TextDelimiters""" will give the previously expected result:
By default, a full array is exported using a double-quote for "TextDelimiters":
Use "TextDelimiters""" and "FillRows"False to export CSV data similar to versions prior to 11.2:
Entries of the format "nnnDnnn" or "nnnEnnn" are interpreted as numbers with scientific notation:
Use the "Numeric" option to override this interpretation:
The top-left corner of data is lost when importing a Dataset with row and column headers:
Dataset may look different depending on the dimensions of the data: