API

GCSFileSystem([project, access, token, …]) Connect to Google Cloud Storage.
GCSFileSystem.cat(self, path) Simple one-shot get of file data
GCSFileSystem.du(self, path[, total, maxdepth]) Space used by files within a path
GCSFileSystem.exists(self, path) Is there a file at the given path
GCSFileSystem.get(self, rpath, lpath[, …]) Copy file to local.
GCSFileSystem.glob(self, path, **kwargs) Find files by glob-matching.
GCSFileSystem.info(self, path, **kwargs) File information about this path.
GCSFileSystem.ls(self, path[, detail]) List objects under the given ‘/{bucket}/{prefix}’ path.
GCSFileSystem.mkdir(self, bucket[, acl, …]) New bucket
GCSFileSystem.mv(self, path1, path2, **kwargs) Move file from one location to another
GCSFileSystem.open(self, path[, mode, …]) Return a file-like object from the filesystem
GCSFileSystem.put(self, lpath, rpath[, …]) Upload file from local
GCSFileSystem.read_block(self, fn, offset, …) Read a block of bytes from a file
GCSFileSystem.rm(self, path[, recursive]) Delete keys.
GCSFileSystem.tail(self, path[, size]) Get the last size bytes from file
GCSFileSystem.touch(self, path[, truncate]) Create empty file, or update timestamp
GCSFileSystem.get_mapper(self, root[, …]) Create key/value store based on this file-system
GCSFile(gcsfs, path[, mode, block_size, …])
GCSFile.close(self) Close file
GCSFile.flush(self[, force]) Write buffered data to backend store.
GCSFile.info(self) File information about this path
GCSFile.read(self[, length]) Return data from cache, or fetch pieces as necessary
GCSFile.seek(self, loc[, whence]) Set current file location
GCSFile.tell(self) Current file location
GCSFile.write(self, data) Write data to buffer.
class gcsfs.core.GCSFileSystem(project='', access='full_control', token=None, block_size=None, consistency='none', cache_timeout=None, secure_serialize=True, check_connection=False, requests_timeout=None, user_project=None, **kwargs)[source]

Connect to Google Cloud Storage.

The following modes of authentication are supported (a short example follows this list):

  • token=None, GCSFS will attempt to guess your credentials in the following order: gcloud CLI default, gcsfs cached token, google compute metadata service, anonymous.
  • token='google_default', your default gcloud credentials will be used, which are typically established by doing gcloud login in a terminal.
  • token='cache', credentials from previously successful gcsfs authentication will be used (use this after “browser” auth has succeeded)
  • token='anon', no authentication is performed, and you can only access data which is accessible to allUsers (in this case, the project and access level parameters are meaningless)
  • token='browser', you get an access code with which you can authenticate via a specially provided URL
  • if token='cloud', we assume we are running within google compute or google container engine, and query the internal metadata directly for a token.
  • you may supply a token generated by the [gcloud](https://cloud.google.com/sdk/docs/) utility; this is either a Python dictionary, the name of a file containing the JSON returned by logging in with the gcloud CLI tool, or a Credentials object. gcloud typically stores its tokens in locations such as ~/.config/gcloud/application_default_credentials.json, ~/.config/gcloud/credentials, or ~\AppData\Roaming\gcloud\credentials, etc.
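
For instance, a minimal sketch of a few of these modes (the project name and token path are hypothetical):

    import gcsfs

    # token=None: guess credentials (gcloud default, cached token,
    # metadata service, then anonymous)
    fs = gcsfs.GCSFileSystem(project='my-project')

    # anonymous access to public data; project and access are ignored
    fs_anon = gcsfs.GCSFileSystem(token='anon')

    # token from a JSON credentials file produced by the gcloud CLI
    fs_json = gcsfs.GCSFileSystem(
        project='my-project',
        token='/path/to/application_default_credentials.json',
    )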

Specific methods (e.g., ls, info, …) may return object details from GCS. These detailed listings include the fields of the [object resource](https://cloud.google.com/storage/docs/json_api/v1/objects#resource).

GCS does not include “directory” objects but instead generates directories by splitting [object names](https://cloud.google.com/storage/docs/key-terms). This means that, for example, a directory does not need to exist for an object to be created within it. Creating an object implicitly creates its parent directories, and removing all objects from a directory implicitly deletes the empty directory.
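
A short sketch of this behaviour (bucket and paths are hypothetical):

    import gcsfs

    fs = gcsfs.GCSFileSystem(project='my-project')
    # no mkdir needed: writing the object implies its parent directories
    with fs.open('my-bucket/new/deep/data.txt', 'wb') as f:
        f.write(b'hello')
    fs.isdir('my-bucket/new/deep')  # True: an implied directory
    # removing the only object also removes the now-empty implied directory
    fs.rm('my-bucket/new/deep/data.txt')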

GCSFileSystem generates listing entries for these implied directories in listing APIs, with the object properties:

  • “name” : string
    The “{bucket}/{name}” path of the dir, used in calls to GCSFileSystem or GCSFile.
  • “bucket” : string
    The name of the bucket containing this object.
  • “kind” : ‘storage#object’
  • “size” : 0
  • “storageClass” : ‘DIRECTORY’
  • type: ‘directory’ (fsspec compat)
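
Continuing the sketch above, such an implied directory therefore appears in a detailed listing as an entry like the following (values illustrative):

    fs.ls('my-bucket', detail=True)
    # [...,
    #  {'name': 'my-bucket/new',
    #   'bucket': 'my-bucket',
    #   'kind': 'storage#object',
    #   'size': 0,
    #   'storageClass': 'DIRECTORY',
    #   'type': 'directory'},
    #  ...]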

GCSFileSystem maintains a per-implied-directory cache of object listings and fulfills all object information and listing requests from cache. This implies, for example, that objects created via other processes will not be visible to the GCSFileSystem until the cache is refreshed. Calls to GCSFileSystem.open and calls to GCSFile are not affected by this cache.

In the default case the cache is never expired. This may be controlled via the cache_timeout GCSFileSystem parameter or via explicit calls to GCSFileSystem.invalidate_cache.
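
For example, to expire cached listings after a minute, or to drop them explicitly (bucket name hypothetical):

    import gcsfs

    fs = gcsfs.GCSFileSystem(project='my-project', cache_timeout=60)
    fs.ls('my-bucket')                 # fetched from GCS and cached
    fs.ls('my-bucket')                 # served from the listings cache
    fs.invalidate_cache('my-bucket')   # next ls call hits GCS again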

Parameters:
project : string

project_id to work under. Note that this is not the same as, but often very similar to, the project name. This is required in order to list all the buckets you have access to within a project and to create/delete buckets, or update their access policies. If token='google_default', the value is overridden by the default; if token='anon', the value is ignored.

access : one of {‘read_only’, ‘read_write’, ‘full_control’}

Full control implies read/write as well as modifying metadata, e.g., access control.

token: None, dict or string

(see description of authentication methods, above)

consistency: ‘none’, ‘size’, ‘md5’

Check method when writing files. Can be overridden in open().

cache_timeout: float, seconds

Cache expiration time in seconds for object metadata cache. Set cache_timeout <= 0 for no caching, None for no cache expiration.

secure_serialize: bool

If True, instances re-establish auth upon deserialization; if False, token is passed directly, which may be a security risk if passed across an insecure network.

check_connection: bool

When token=None, gcsfs will attempt various methods of establishing credentials, falling back to anon. It is possible for a method to find credentials in the system that turn out not to be valid. Setting this parameter to True will ensure that an actual operation is attempted before deciding that credentials are valid.

user_project : string

project_id to use for requester-pays buckets. This is included as the userProject parameter in requests made to Google Cloud Storage. If not provided, this falls back to project when that is specified.
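
As a sketch, constructing a filesystem for a requester-pays bucket might look like this (project and bucket names hypothetical):

    import gcsfs

    fs = gcsfs.GCSFileSystem(project='my-project',
                             user_project='my-billing-project')
    fs.ls('requester-pays-bucket')  # userProject is sent with the request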

Attributes:
buckets

Return list of available project buckets.

transaction

A context within which files are committed together upon exit
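
A minimal sketch of the transaction context (assuming a GCSFileSystem instance fs; paths hypothetical); the files are finalized together when the block exits:

    with fs.transaction:
        with fs.open('my-bucket/a.txt', 'wb') as f:
            f.write(b'first')
        with fs.open('my-bucket/b.txt', 'wb') as f:
            f.write(b'second')
    # both objects become visible here, on exit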

Methods

cat(self, path) Simple one-shot get of file data
checksum(self, path) Unique value for current version of file
clear_instance_cache() Clear the cache of filesystem instances.
connect(self[, method]) Establish session token.
copy(self, path1, path2[, acl]) Duplicate remote file
cp(self, path1, path2, **kwargs) Alias of FilesystemSpec.copy.
current() Return the most recently created FileSystem
delete(self, path[, recursive, maxdepth]) Alias of FilesystemSpec.rm.
disk_usage(self, path[, total, maxdepth]) Alias of FilesystemSpec.du.
download(self, rpath, lpath[, recursive]) Alias of FilesystemSpec.get.
du(self, path[, total, maxdepth]) Space used by files within a path
end_transaction(self) Finish write transaction, non-context version
exists(self, path) Is there a file at the given path
find(self, path[, maxdepth, withdirs]) List all files below path.
get(self, rpath, lpath[, recursive]) Copy file to local.
get_mapper(self, root[, check, create]) Create key/value store based on this file-system
getxattr(self, path, attr) Get user-defined metadata attribute
glob(self, path, **kwargs) Find files by glob-matching.
head(self, path[, size]) Get the first size bytes from file
info(self, path, \*\*kwargs) File information about this path.
invalidate_cache(self[, path]) Invalidate listing cache for given path, it is reloaded on next use.
isdir(self, path) Is this entry directory-like?
isfile(self, path) Is this entry file-like?
listdir(self, path[, detail]) Alias of FilesystemSpec.ls.
load_tokens() Get “browser” tokens from disc
ls(self, path[, detail]) List objects under the given ‘/{bucket}/{prefix}’ path.
makedir(self, path[, create_parents]) Alias of FilesystemSpec.mkdir.
makedirs(self, path[, exist_ok]) Recursively make directories
merge(self, path, paths[, acl]) Concatenate objects within a single bucket
mkdir(self, bucket[, acl, default_acl]) New bucket
mkdirs(self, path[, exist_ok]) Alias of FilesystemSpec.makedirs.
move(self, path1, path2, **kwargs) Alias of FilesystemSpec.mv.
mv(self, path1, path2, **kwargs) Move file from one location to another
open(self, path[, mode, block_size, …]) Return a file-like object from the filesystem
put(self, lpath, rpath[, recursive]) Upload file from local
read_block(self, fn, offset, length[, delimiter]) Read a block of bytes from a file
rename(self, path1, path2, **kwargs) Alias of FilesystemSpec.mv.
rm(self, path[, recursive]) Delete keys.
rmdir(self, bucket) Delete an empty bucket
setxattrs(self, path[, content_type, …]) Set/delete/add writable metadata attributes
size(self, path) Size in bytes of file
start_transaction(self) Begin write transaction for deferring files, non-context version
stat(self, path, **kwargs) Alias of FilesystemSpec.info.
tail(self, path[, size]) Get the last size bytes from file
touch(self, path[, truncate]) Create empty file, or update timestamp
ukey(self, path) Hash of file properties, to tell if it has changed
upload(self, lpath, rpath[, recursive]) Alias of FilesystemSpec.put.
url(path) Get HTTP URL of the given path
walk(self, path[, maxdepth]) Return all files below path
buckets

Return list of available project buckets.

cat(self, path)[source]

Simple one-shot get of file data

connect(self, method=None)[source]

Establish session token. A new token will be requested if the current one is within 100s of expiry.

Parameters:
method: str (google_default|cache|cloud|token|anon|browser) or None

Type of authorisation to implement - calls _connect_* methods. If None, will try sequence of methods.

copy(self, path1, path2, acl=None)[source]

Duplicate remote file

getxattr(self, path, attr)[source]

Get user-defined metadata attribute

info(self, path, **kwargs)[source]

File information about this path.

invalidate_cache(self, path=None)[source]

Invalidate listing cache for given path, it is reloaded on next use.

Parameters:
path: string or None

If None, clear all listings cached else listings at or under given path.

static load_tokens()[source]

Get “browser” tokens from disc

ls(self, path, detail=False)[source]

List objects under the given ‘/{bucket}/{prefix}’ path.

merge(self, path, paths, acl=None)[source]

Concatenate objects within a single bucket
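
For example (assuming a GCSFileSystem instance fs; paths hypothetical, and all objects must live in the same bucket):

    fs.merge('my-bucket/combined.csv',
             ['my-bucket/part-0.csv', 'my-bucket/part-1.csv'])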

mkdir(self, bucket, acl='projectPrivate', default_acl='bucketOwnerFullControl')[source]

New bucket

Parameters:
bucket: str

Bucket name. If it contains ‘/’ (i.e., looks like a subdirectory path), the call has no effect, because GCS doesn’t have real directories.

acl: string, one of bACLs

access for the bucket itself

default_acl: str, one of ACLs

default ACL for objects created in this bucket
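
For example (bucket name hypothetical; the ACL strings shown are the defaults above):

    fs.mkdir('my-new-bucket', acl='projectPrivate',
             default_acl='bucketOwnerFullControl')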

rm(self, path, recursive=False)[source]

Delete keys.

If path is a list, batch-delete all keys in one go (the list can span buckets).

If recursive, delete all keys given by find(path).

Returns whether the operation succeeded (a list if the input was a list).
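
For example (assuming a GCSFileSystem instance fs; paths hypothetical):

    fs.rm('my-bucket/one.txt')                        # single key
    fs.rm(['my-bucket/a.txt', 'other-bucket/b.txt'])  # batch, may span buckets
    fs.rm('my-bucket/old-data', recursive=True)       # everything under the prefix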

rmdir(self, bucket)[source]

Delete an empty bucket

Parameters:
bucket: str

Bucket name. If it contains ‘/’ (i.e., looks like a subdirectory path), the call has no effect, because GCS doesn’t have real directories.

setxattrs(self, path, content_type=None, content_encoding=None, **kwargs)[source]

Set/delete/add writable metadata attributes

Parameters:
content_type: str

If not None, set the content-type to this value

content_encoding: str

If not None, set the content-encoding. See https://cloud.google.com/storage/docs/transcoding

kw_args: key-value pairs like field="value" or field=None

value must be string to add or modify, or None to delete

Returns:
Entire metadata after update (even if only path is passed)
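
A sketch of setting and reading back such attributes (assuming a GCSFileSystem instance fs; the path and the custom key are hypothetical):

    fs.setxattrs('my-bucket/data.csv',
                 content_type='text/csv',
                 owner='analytics')             # custom key-value pair
    fs.getxattr('my-bucket/data.csv', 'owner')  # 'analytics'
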
static url(path)[source]

Get HTTP URL of the given path

class gcsfs.core.GCSFile(gcsfs, path, mode='rb', block_size=5242880, autocommit=True, cache_type='readahead', cache_options=None, acl=None, consistency='md5', metadata=None, **kwargs)[source]
Attributes:
closed
trim

Methods

close(self) Close file
commit(self) If not auto-committing, finalize file
discard(self) Cancel in-progress multi-upload
fileno(self, /) Returns underlying file descriptor if one exists.
flush(self[, force]) Write buffered data to backend store.
info(self) File information about this path
isatty(self, /) Return whether this is an ‘interactive’ stream.
read(self[, length]) Return data from cache, or fetch pieces as necessary
readable(self) Whether opened for reading
readinto(self, b) mirrors builtin file’s readinto method
readline(self) Read until first occurrence of newline character
readlines(self) Return all data, split by the newline character
readuntil(self[, char, blocks]) Return data between current position and first occurrence of char
seek(self, loc[, whence]) Set current file location
seekable(self) Whether is seekable (only in read mode)
tell(self) Current file location
truncate() Truncate file to size bytes.
url(self) HTTP link to this file’s data
writable(self) Whether opened for writing
write(self, data) Write data to buffer.
readinto1  
writelines  
commit(self)[source]

If not auto-committing, finalize file

discard(self)[source]

Cancel in-progress multi-upload

Should only happen during discarding this write-mode file
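
A sketch of the deferred-commit flow with autocommit=False (assuming a GCSFileSystem instance fs; path hypothetical):

    f = fs.open('my-bucket/staged.bin', 'wb', autocommit=False)
    f.write(b'some bytes')
    f.close()     # data is uploaded but the object is not finalized
    f.commit()    # finalize: the object becomes visible
    # ... or call f.discard() instead to cancel the in-progress upload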

info(self)[source]

File information about this path

url(self)[source]

HTTP link to this file’s data
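
Putting the file interface together, a typical read sketch (assuming a GCSFileSystem instance fs; path hypothetical):

    with fs.open('my-bucket/data.bin', 'rb') as f:
        f.seek(16)            # set current file location
        chunk = f.read(64)    # served from cache, fetched in block_size pieces
        f.tell()              # 80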