dat_to_datasets

Source: R/dat-to-arrow-formats.R, dat_to_datasets.Rd

Some files in the SRE are too large to fit into memory. dat_to_datasets
reads such files into memory in smaller chunks (controlled by the chunk_size
argument) and converts them into Arrow Datasets. All ... arguments are
passed to dipr::read_dat, which is where column types can be specified.

Usage
dat_to_datasets(
  data_path,
  data_dict,
  chunk_size = 1e+06,
  path,
  partitioning,
  tz = "UTC",
  date_format = "%AD",
  time_format = "%AT",
  ...
)

Arguments

data_path: A path or a vector of paths to a .dat.gz file. If supplying a
vector of paths, they must share a common data dictionary.
data_dict: A data.frame with start, stop and name columns.
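For illustration, a dictionary describing a three-column fixed-width file
might look like the following (the field names and positions here are
hypothetical):

dict <- data.frame(
  name  = c("name", "height", "mass"),  # column name each field receives
  start = c(1, 11, 14),                 # first character position of field
  stop  = c(10, 13, 16)                 # last character position of field
)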
chunk_size: The number of rows to include in each chunk. The value you
choose will depend on both the number of rows in the data you are
processing and the RAM available. You can check the RAM available using
memory.size(max = TRUE). The default is currently 1 million rows (1e+06).
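As a rough sketch of that check (note that memory.size() is Windows-only
and is deprecated in recent versions of R, where it returns Inf with a
warning):

## Windows-only: maximum memory obtained from the OS, in megabytes
memory.size(max = TRUE)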
path: A string path, URI, or SubTreeFileSystem referencing a directory to
write to (the directory will be created if it does not exist).
partitioning: A Partitioning object, or a character vector of columns to
use as partition keys (to be written as path segments). The default is to
use the current group_by() columns.
tz: What timezone should datetime fields use? Default is UTC. This is
recommended to avoid timezone pain, but remember that the data is in UTC
when doing analysis. See OlsonNames() for a list of available timezones.
date_format: Date format for columns where the date format is not specified in col_types.
time_format: Time format for columns where the time format is not specified in col_types.
...: Arguments passed on to readr::read_fwf:
file: Either a path to a file, a connection, or literal data (either a single string or a raw vector).
Files ending in .gz, .bz2, .xz, or .zip will
be automatically uncompressed. Files starting with http://,
https://, ftp://, or ftps:// will be automatically
downloaded. Remote gz files can also be automatically downloaded and
decompressed.
Literal data is most useful for examples and tests. To be recognised as
literal data, the input must be either wrapped with I(), be a string
containing at least one new line, or be a vector containing at least one
string with a new line.
Using a value of clipboard() will read from the system clipboard.
col_positions: Column positions, as created by fwf_empty(),
fwf_widths() or fwf_positions(). To read in only selected fields,
use fwf_positions(). If the width of the last column is variable (a
ragged fwf file), supply the last end position as NA.
col_types: One of NULL, a cols() specification, or
a string. See vignette("readr") for more details.
If NULL, all column types will be imputed from guess_max rows
on the input interspersed throughout the file. This is convenient (and
fast), but not robust. If the imputation fails, you'll need to increase
the guess_max or supply the correct types yourself.
Column specifications created by list() or cols() must contain
one column specification for each column. If you only want to read a
subset of the columns, use cols_only().
Alternatively, you can use a compact string representation where each character represents one column:
c = character
i = integer
n = number
d = double
l = logical
f = factor
D = date
T = date time
t = time
? = guess
_ or - = skip
By default, reading a file without a column specification will print a
message showing what readr guessed they were. To remove this message,
set show_col_types = FALSE or set options(readr.show_col_types = FALSE).
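For instance, both forms below describe the same three hypothetical
columns; either can be passed through ... to dipr::read_dat:

## Compact string: one letter per column (character, integer, number)
spec_string <- "cin"

## Equivalent full specification
spec_cols <- readr::cols(
  name   = readr::col_character(),
  height = readr::col_integer(),
  mass   = readr::col_number()
)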
col_select: Columns to include in the results. You can use the same
mini-language as dplyr::select() to refer to the columns by name. Use
c() or list() to use more than one selection expression. Although this
usage is less common, col_select also accepts a numeric column index. See
?tidyselect::language for full details on the
selection language.
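A small illustration of the selection mini-language, using literal data
with readr::read_fwf directly (the columns here are made up):

readr::read_fwf(
  I("luke  172\nleia  150\n"),
  readr::fwf_widths(c(6, 3), c("name", "height")),
  col_select = c(name)  # keep only the name column
)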
id: The name of a column in which to store the file path. This is
useful when reading multiple input files and there is data in the file
paths, such as the data collection date. If NULL (the default) no extra
column is created.
locale: The locale controls defaults that vary from place to place.
The default locale is US-centric (like R), but you can use
locale() to create your own locale that controls things like
the default time zone, encoding, decimal mark, big mark, and day/month
names.
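For example, a locale for data with day-first dates and comma decimal
marks might look like this (all settings illustrative):

my_locale <- readr::locale(
  date_format   = "%d/%m/%Y",          # day-first dates
  decimal_mark  = ",",                 # comma as decimal separator
  grouping_mark = ".",                 # period as grouping (big) mark
  tz            = "America/Vancouver"  # default time zone
)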
na: Character vector of strings to interpret as missing values. Set this
option to character() to indicate no missing values.
comment: A string used to identify comments. Any text after the comment
characters will be silently ignored.
trim_ws: Should leading and trailing whitespace (ASCII spaces and tabs) be
trimmed from each field before parsing it?
skip: Number of lines to skip before reading data.
n_max: Maximum number of lines to read.
guess_max: Maximum number of lines to use for guessing column types.
See vignette("column-types", package = "readr") for more details.
progress: Display a progress bar? By default it will only display
in an interactive session and not while knitting a document. The automatic
progress bar can be disabled by setting option readr.show_progress to
FALSE.
name_repair: Handling of column names. The default behaviour is to
ensure column names are "unique". Various repair strategies are
supported:
"minimal": No name repair or checks, beyond basic existence of names.
"unique" (default value): Make sure names are unique and not empty.
"check_unique": no name repair, but check they are unique.
"universal": Make the names unique and syntactic.
A function: apply custom name repair (e.g., name_repair = make.names
for names in the style of base R).
A purrr-style anonymous function, see rlang::as_function().
This argument is passed on as repair to vctrs::vec_as_names().
See there for more details on these terms and the strategies used
to enforce them.
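For instance, the "universal" strategy both deduplicates names and makes
them syntactic; a quick illustration via the underlying vctrs helper:

## Duplicated and non-syntactic names are repaired in place
vctrs::vec_as_names(c("x", "x", "mean(y)"), repair = "universal")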
num_threads: The number of processing threads to use for initial
parsing and lazy reading of data. If your data contains newlines within
fields the parser should automatically detect this and fall back to using
one thread only. However if you know your file has newlines within quoted
fields it is safest to set num_threads = 1 explicitly.
show_col_types: If FALSE, do not show the guessed column types. If
TRUE always show the column types, even if they are supplied. If NULL
(the default) only show the column types if they are not explicitly supplied
by the col_types argument.
lazy: Read values lazily? By default the file is initially only
indexed and the values are read lazily when accessed. Lazy reading is
useful interactively, particularly if you are only interested in a subset
of the full dataset. Note, if you later write to the same file you read
from you need to set lazy = FALSE. On Windows the file will be locked
and on other systems the memory map will become invalid.
skip_empty_rows: Should blank rows be ignored altogether? If TRUE, blank
rows will not be represented at all; if FALSE, they will be represented by
NA values in all the columns.
Examples

data_dict_path <- dipr_example("starwars-dict.txt")
dict <- read.table(data_dict_path)
dat_path <- dipr_example("starwars-fwf.dat.gz")

## Create a partitioned dataset in the "starwars_arrow" folder
dat_to_datasets(
  data_path = dat_path,
  data_dict = dict,
  path = "starwars_arrow",
  partitioning = "species",
  chunk_size = 2
)
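Once written, the partitioned dataset can be queried lazily with arrow,
pulling only the needed rows into memory; a sketch (the filter value is
illustrative):

library(arrow)
library(dplyr)

open_dataset("starwars_arrow") %>%
  filter(species == "Human") %>%
  collect()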