Update defaults in docstring #934

Merged · 3 commits · Jan 25, 2025
1 change: 0 additions & 1 deletion .github/workflows/ci.yml

@@ -39,7 +39,6 @@ jobs:
      run: |
        pip install git+https://github.com/fsspec/filesystem_spec
        pip install --upgrade "aiobotocore${{ matrix.aiobotocore-version }}"
-       pip install --upgrade "botocore" --no-deps
        pip install . --no-deps
        pip list

6 changes: 3 additions & 3 deletions s3fs/core.py

@@ -213,7 +213,7 @@ class S3FileSystem(AsyncFileSystem):
        If RequesterPays buckets are supported.
    default_block_size: int (None)
        If given, the default block size value used for ``open()``, if no
-       specific value is given at all time. The built-in default is 5MB.
+       specific value is given at all time. The built-in default is 50MB.
    default_fill_cache : Bool (True)
        Whether to use cache filling with open by default. Refer to
        ``S3File.open``.
@@ -241,9 +241,9 @@ class S3FileSystem(AsyncFileSystem):
    session : aiobotocore AioSession object to be used for all connections.
        This session will be used inplace of creating a new session inside S3FileSystem.
        For example: aiobotocore.session.AioSession(profile='test_user')
-   max_concurrency : int (1)
+   max_concurrency : int (10)
        The maximum number of concurrent transfers to use per file for multipart
-       upload (``put()``) operations. Defaults to 1 (sequential). When used in
+       upload (``put()``) operations. Defaults to 10. When used in
        conjunction with ``S3FileSystem.put(batch_size=...)`` the maximum number of
        simultaneous connections is ``max_concurrency * batch_size``. We may extend
        this parameter to affect ``pipe()``, ``cat()`` and ``get()``. Increasing this
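
The core.py changes above only touch documented defaults: the ``open()`` block size is described as 50MB and ``max_concurrency`` as 10 concurrent multipart transfers per file. A minimal sketch of how these parameters interact (not part of the PR; the bucket, paths, and batch size are hypothetical):

import s3fs

# Pass the documented defaults explicitly, purely for illustration.
fs = s3fs.S3FileSystem(
    default_block_size=50 * 2**20,  # 50MB, the built-in default used by open()
    max_concurrency=10,             # concurrent multipart transfers per file
)

# Per the docstring, put(batch_size=...) can open up to
# max_concurrency * batch_size simultaneous connections: here 10 * 4 = 40.
fs.put("local_dir/", "my-bucket/remote_dir/", recursive=True, batch_size=4)

# open() falls back to default_block_size when no block size is given.
with fs.open("my-bucket/remote_dir/some_file.bin", "rb") as f:
    header = f.read(1024)

Raising either ``max_concurrency`` or ``batch_size`` multiplies the number of open connections, so the ``max_concurrency * batch_size`` bound quoted in the docstring is the quantity to watch when tuning bulk uploads.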