API

GCSFileSystem(*args, **kwargs)

Connect to Google Cloud Storage.

GCSFileSystem.cat(path[, recursive, on_error])

Fetch (potentially multiple) paths' contents

GCSFileSystem.du(path[, total, maxdepth])

Space used by files within a path

GCSFileSystem.exists(path, **kwargs)

Is there a file at the given path

GCSFileSystem.get(rpath, lpath[, recursive, ...])

Copy file(s) to local.

GCSFileSystem.glob(path, **kwargs)

Find files by glob-matching.

GCSFileSystem.info(path, **kwargs)

Give details of entry at path

GCSFileSystem.ls(path[, detail])

List objects at path.

GCSFileSystem.mkdir(path[, acl, ...])

New bucket

GCSFileSystem.mv(path1, path2[, recursive, ...])

Move file(s) from one location to another

GCSFileSystem.open(path[, mode, block_size, ...])

Return a file-like object from the filesystem

GCSFileSystem.put(lpath, rpath[, recursive, ...])

Copy file(s) from local.

GCSFileSystem.read_block(fn, offset, length)

Read a block of bytes from a file

GCSFileSystem.rm(path[, recursive, ...])

Delete files.

GCSFileSystem.tail(path[, size])

Get the last size bytes from file

GCSFileSystem.touch(path[, truncate])

Create empty file, or update timestamp

GCSFileSystem.get_mapper([root, check, ...])

Create key/value store based on this file-system

GCSFile(gcsfs, path[, mode, block_size, ...])

Attributes

GCSFile.close()

Close file

GCSFile.flush([force])

Write buffered data to backend store.

GCSFile.info()

File information about this path

GCSFile.read([length])

Return data from cache, or fetch pieces as necessary

GCSFile.seek(loc[, whence])

Set current file location

GCSFile.tell()

Current file location

GCSFile.write(data)

Write data to buffer.

class gcsfs.core.GCSFileSystem(*args, **kwargs)[source]

Connect to Google Cloud Storage.

The following modes of authentication are supported:

  • token=None, GCSFS will attempt to guess your credentials in the following order: gcloud CLI default, gcsfs cached token, google compute metadata service, anonymous.

  • token='google_default', your default gcloud credentials will be used, which are typically established by doing gcloud login in a terminal.

  • token='cache', credentials from a previously successful gcsfs authentication will be used (use this after “browser” auth has succeeded)

  • token='anon', no authentication is performed, and you can only access data which is accessible to allUsers (in this case, the project and access level parameters are meaningless)

  • token='browser', you get an access code with which you can authenticate via a specially provided URL

  • if token='cloud', we assume we are running within google compute or google container engine, and query the internal metadata directly for a token.

  • you may supply a token generated by the [gcloud](https://cloud.google.com/sdk/docs/) utility; this is either a Python dictionary, the name of a file containing the JSON returned by logging in with the gcloud CLI tool, or a Credentials object. gcloud typically stores its tokens in locations such as ~/.config/gcloud/application_default_credentials.json, ~/.config/gcloud/credentials, or ~\AppData\Roaming\gcloud\credentials.

Specific methods (e.g. ls, info, …) may return object details from GCS. These detailed listings include the fields of the [object resource](https://cloud.google.com/storage/docs/json_api/v1/objects#resource).

GCS does not include “directory” objects but instead generates directories by splitting [object names](https://cloud.google.com/storage/docs/key-terms). This means that, for example, a directory does not need to exist for an object to be created within it. Creating an object implicitly creates its parent directories, and removing all objects from a directory implicitly deletes the empty directory.

GCSFileSystem generates listing entries for these implied directories in listing APIs, with the following object properties:

  • “name” : string

    The “{bucket}/{name}” path of the dir, used in calls to GCSFileSystem or GCSFile.

  • “bucket” : string

    The name of the bucket containing this object.

  • “kind” : ‘storage#object’

  • “size” : 0

  • “storageClass” : ‘DIRECTORY’

  • “type” : ‘directory’ (fsspec compatibility)
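
For illustration, the shape of such an implied-directory entry as a plain Python dict (the bucket and prefix names are made up; this does not contact GCS):

```python
# Shape of a listing entry for an implied directory (values are illustrative).
implied_dir = {
    "name": "my-bucket/some/prefix",   # "{bucket}/{name}" path of the dir
    "bucket": "my-bucket",             # bucket containing this object
    "kind": "storage#object",
    "size": 0,                         # implied directories always report size 0
    "storageClass": "DIRECTORY",
    "type": "directory",               # fsspec-compatible type marker
}
```
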

GCSFileSystem maintains a per-implied-directory cache of object listings and fulfills all object information and listing requests from this cache. This implies, for example, that objects created by other processes will not be visible to the GCSFileSystem until the cache is refreshed. Calls to GCSFileSystem.open and calls to GCSFile are not affected by this cache.

In the default case the cache is never expired. This may be controlled via the cache_timeout GCSFileSystem parameter or via explicit calls to GCSFileSystem.invalidate_cache.

Parameters
project: string

project_id to work under. Note that this is not the same as, but often very similar to, the project name. It is required in order to list all the buckets you have access to within a project and to create/delete buckets, or update their access policies. If token='google_default', the value is overridden by the default; if token='anon', the value is ignored.

access: one of {‘read_only’, ‘read_write’, ‘full_control’}

Full control implies read/write as well as modifying metadata, e.g., access control.

token: None, dict or string

(see description of authentication methods, above)

consistency: ‘none’, ‘size’, ‘md5’

Check method when writing files. Can be overridden in open().

cache_timeout: float, seconds

Cache expiration time in seconds for object metadata cache. Set cache_timeout <= 0 for no caching, None for no cache expiration.

secure_serialize: bool (deprecated)

requester_pays: bool or str, default False

Whether to use requester-pays requests. This will include your project ID in requests as the userProject, and you’ll be billed for accessing data from requester-pays buckets. Optionally, pass a project ID here as a string to use that as the userProject.

session_kwargs: dict

passed on to aiohttp.ClientSession; can contain, for example, proxy settings.

endpoint_url: str

If given, use this URL (format protocol://host:port , without any path part) for communication. If not given, defaults to the value of environment variable “STORAGE_EMULATOR_HOST”; if that is not set either, will use the standard Google endpoint.

default_location: str

Default location where buckets are created, like ‘US’ or ‘EUROPE-WEST3’. You can find a list of all available locations here: https://cloud.google.com/storage/docs/locations#available-locations

Attributes
base
buckets

Return list of available project buckets.

loop
on_google
project
session
transaction

A context within which files are committed together upon exit

Methods

cat(path[, recursive, on_error])

Fetch (potentially multiple) paths' contents

cat_file(path[, start, end])

Get the content of a file

checksum(path)

Unique value for current version of file

clear_instance_cache()

Clear the cache of filesystem instances.

copy(path1, path2[, recursive, on_error])

Copy within two locations in the filesystem

cp(path1, path2, **kwargs)

Alias of AbstractFileSystem.copy.

created(path)

Return the created timestamp of a file as a datetime.datetime

current()

Return the most recently instantiated FileSystem

delete(path[, recursive, maxdepth])

Alias of AbstractFileSystem.rm.

disk_usage(path[, total, maxdepth])

Alias of AbstractFileSystem.du.

download(rpath, lpath[, recursive])

Alias of AbstractFileSystem.get.

du(path[, total, maxdepth])

Space used by files within a path

end_transaction()

Finish write transaction, non-context version

exists(path, **kwargs)

Is there a file at the given path

expand_path(path[, recursive, maxdepth])

Turn one or more globs or directories into a list of all matching paths to files or directories.

find(path[, maxdepth, withdirs, detail])

List all files below path.

from_json(blob)

Recreate a filesystem instance from JSON representation

get(rpath, lpath[, recursive, callback])

Copy file(s) to local.

get_file(rpath, lpath[, callback, outfile])

Copy single remote file to local

get_mapper([root, check, create, ...])

Create key/value store based on this file-system

getxattr(path, attr)

Get user-defined metadata attribute

glob(path, **kwargs)

Find files by glob-matching.

head(path[, size])

Get the first size bytes from file

info(path, **kwargs)

Give details of entry at path

invalidate_cache([path])

Invalidate listing cache for given path; it is reloaded on next use.

isdir(path)

Is this entry directory-like?

isfile(path)

Is this entry file-like?

lexists(path, **kwargs)

If there is a file at the given path (including broken links)

listdir(path[, detail])

Alias of AbstractFileSystem.ls.

ls(path[, detail])

List objects at path.

makedir(path[, create_parents])

Alias of AbstractFileSystem.mkdir.

makedirs(path[, exist_ok])

Recursively make directories

merge(path, paths[, acl])

Concatenate objects within a single bucket

mkdir(path[, acl, default_acl, location, ...])

New bucket

mkdirs(path[, exist_ok])

Alias of AbstractFileSystem.makedirs.

modified(path)

Return the modified timestamp of a file as a datetime.datetime

move(path1, path2, **kwargs)

Alias of AbstractFileSystem.mv.

mv(path1, path2[, recursive, maxdepth])

Move file(s) from one location to another

open(path[, mode, block_size, ...])

Return a file-like object from the filesystem

pipe(path[, value])

Put value into path

pipe_file(path, value, **kwargs)

Set the bytes of given file

put(lpath, rpath[, recursive, callback])

Copy file(s) from local.

put_file(lpath, rpath[, callback])

Copy single file to remote

read_block(fn, offset, length[, delimiter])

Read a block of bytes from a file

rename(path1, path2, **kwargs)

Alias of AbstractFileSystem.mv.

rm_file(path)

Delete a file

rmdir(bucket)

Delete an empty bucket

setxattrs(path[, content_type, ...])

Set/delete/add writable metadata attributes

sign(path[, expiration])

Create a signed URL representing the given path.

size(path)

Size in bytes of file

sizes(paths)

Size in bytes of each file in a list of paths

split_path(path)

Normalise GCS path string into bucket and key.

start_transaction()

Begin write transaction for deferring files, non-context version

stat(path, **kwargs)

Alias of AbstractFileSystem.info.

tail(path[, size])

Get the last size bytes from file

to_json()

JSON representation of this filesystem instance

touch(path[, truncate])

Create empty file, or update timestamp

ukey(path)

Hash of file properties, to tell if it has changed

unstrip_protocol(name)

Format FS-specific path to generic, including protocol

upload(lpath, rpath[, recursive])

Alias of AbstractFileSystem.put.

url(path)

Get HTTP URL of the given path

walk(path[, maxdepth])

Return all files below path

call

cat_ranges

close_session

cp_file

make_bucket_requester_pays

open_async

rm

property buckets

Return list of available project buckets.

getxattr(path, attr)

Get user-defined metadata attribute

invalidate_cache(path=None)[source]

Invalidate listing cache for given path; it is reloaded on next use.

Parameters
path: string or None

If None, clear all cached listings; otherwise, clear listings at or under the given path.

merge(path, paths, acl=None)

Concatenate objects within a single bucket

mkdir(path, acl='projectPrivate', default_acl='bucketOwnerFullControl', location=None, create_parents=True, **kwargs)

New bucket

If path is more than just a bucket, will create the bucket if create_parents=True; otherwise it is a no-op. If create_parents is False and the bucket does not exist, will raise FileNotFoundError.

Parameters
path: str

bucket name. If it contains ‘/’ (i.e., looks like a subdir path), anything beyond the bucket has no effect, because GCS doesn’t have real directories.

acl: string, one of the bucket ACLs

access for the bucket itself

default_acl: str, one of ACLs

default ACL for objects created in this bucket

location: Optional[str]

Location where buckets are created, like ‘US’ or ‘EUROPE-WEST3’. If not provided, defaults to self.default_location. You can find a list of all available locations here: https://cloud.google.com/storage/docs/locations#available-locations

create_parents: bool

If True, creates the bucket in question, if it doesn’t already exist

rm(path, recursive=False, maxdepth=None, batchsize=20)

Delete files.

Parameters
path: str or list of str

File(s) to delete.

recursive: bool

If file(s) are directories, recursively delete contents and then also remove the directory

maxdepth: int or None

Depth to pass to walk for finding files to delete, if recursive. If None, there will be no limit and infinite recursion may be possible.

rmdir(bucket)

Delete an empty bucket

Parameters
bucket: str

bucket name. If it contains ‘/’ (i.e., looks like a subdir path), the call has no effect, because GCS doesn’t have real directories.

setxattrs(path, content_type=None, content_encoding=None, fixed_key_metadata=None, **kwargs)

Set/delete/add writable metadata attributes

Note: uses PATCH method (update), leaving unedited keys alone. fake-gcs-server:latest does not seem to support this.

content_type: str

If not None, set the content-type to this value

content_encoding: str

This parameter is deprecated, you may use fixed_key_metadata instead. If not None, set the content-encoding. See https://cloud.google.com/storage/docs/transcoding

fixed_key_metadata: dict
Google metadata, in key/value pairs, supported keys:
  • cache_control

  • content_disposition

  • content_encoding

  • content_language

  • custom_time

More info: https://cloud.google.com/storage/docs/metadata#mutable

kwargs: key-value pairs like field=”value” or field=None

value must be string to add or modify, or None to delete

Returns
Entire metadata after update (even if only path is passed)
sign(path, expiration=100, **kwargs)[source]

Create a signed URL representing the given path.

Parameters
path: str

The path on the filesystem

expiration: int

Number of seconds for which the URL will be valid

Returns
URL: str

The signed URL

classmethod split_path(path)[source]

Normalise GCS path string into bucket and key.

Parameters
path: string

Input path, like gcs://mybucket/path/to/file. Path is of the form: ‘[gs|gcs://]bucket[/key]’

Returns
(bucket, key) tuple
url(path)[source]

Get HTTP URL of the given path

class gcsfs.core.GCSFile(gcsfs, path, mode='rb', block_size=5242880, autocommit=True, cache_type='readahead', cache_options=None, acl=None, consistency='md5', metadata=None, content_type=None, timeout=None, fixed_key_metadata=None, **kwargs)[source]
Attributes
closed
details
full_name

Methods

close()

Close file

commit()

If not auto-committing, finalize file

discard()

Cancel in-progress multi-upload

fileno(/)

Returns underlying file descriptor if one exists.

flush([force])

Write buffered data to backend store.

info()

File information about this path

isatty(/)

Return whether this is an 'interactive' stream.

read([length])

Return data from cache, or fetch pieces as necessary

readable()

Whether opened for reading

readinto(b)

mirrors builtin file's readinto method

readline()

Read until first occurrence of newline character

readlines()

Return all data, split by the newline character

readuntil([char, blocks])

Return data between current position and first occurrence of char

seek(loc[, whence])

Set current file location

seekable()

Whether is seekable (only in read mode)

tell()

Current file location

truncate

Truncate file to size bytes.

url()

HTTP link to this file's data

writable()

Whether opened for writing

write(data)

Write data to buffer.

writelines(lines, /)

Write a list of lines to stream.

readinto1

commit()[source]

If not auto-committing, finalize file

discard()[source]

Cancel in-progress multi-upload

Should only happen during discarding this write-mode file

info()[source]

File information about this path

url()[source]

HTTP link to this file’s data