
Interact with your personal EDITO storage

Every user has access to a personal S3 storage hosted by EDITO. Discover here how to interact with it.

Updated over 3 weeks ago

Context


By default, this storage is used in the File Explorer tab and in service or process launch configurations. This can be changed in the Project settings, under S3 Configurations, where you can connect and configure any external S3 bucket and use it seamlessly as part of EDITO.

📌 Note: keep in mind that S3 object storage does not work exactly like a regular filesystem. For example, folders, even though they are displayed in the “File Explorer” section of the datalab, do not really exist in object storage: the storage only holds a flat list of objects whose keys may contain “/”. There is plenty of documentation on the internet covering the details of S3 object storage, and you will easily find basic APIs for executing common operations.
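To illustrate (a generic sketch, not EDITO-specific), object storage keeps a flat list of keys, and a file explorer derives the apparent folders from shared key prefixes:

```python
# Object storage holds a flat list of keys; "folders" are an illusion
# derived from shared key prefixes. Hypothetical keys for illustration:
keys = [
    "data/2024/sst.nc",
    "data/2024/salinity.nc",
    "data/readme.txt",
    "notes.md",
]

# A file explorer derives the top-level "folders" by splitting on "/".
top_level = sorted({k.split("/", 1)[0] + "/" for k in keys if "/" in k})
files = sorted(k for k in keys if "/" not in k)

print(top_level)  # ['data/']
print(files)      # ['notes.md']
```

Deleting the last key under a prefix makes the “folder” disappear, which is why empty directories behave oddly in object storage browsers.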

Graphically administer your project bucket


The File Explorer page provides graphical capabilities to:

  • Browse your bucket as in a regular filesystem

  • Upload and download files (1) and (2)

  • Rename files

  • Create a new directory (3)

  • Delete files (4)

  • Share files (5) and (6)

  • Open files with cloud-native formats in the datalab’s data explorer (7)

  • Check the CLI of the File Explorer (8)

  • Temporarily or permanently share files or folders publicly (9)

More advanced object management actions can be performed from within a service.

Quickly open file(s) in a Jupyter notebook


Do you have files that you want to change quickly?

We have created an Open in Jupyter Notebook button that opens the Jupyter-python-ocean-science service with the selected files/directories copied into its work directory. Be careful: anything done in the service will then need to be copied back to your personal storage.

Interact with your personal storage inside a service


Within a service, like Jupyter-python-ocean-science, the environment variables required to interact with your S3 bucket are automatically set to facilitate the connection to your personal storage. You can see them by opening a terminal and running env | grep AWS. In that same terminal, you can then use the MinIO Client (mc) command line tool without any extra configuration.

For example, you can:

List the content of your personal storage

mc ls s3/oidc-[YOUR_USERNAME]

Copy a file from your personal storage to your running service

mc cp s3/oidc-[YOUR_USERNAME]/path/to/my_file /local/path/to/my_file

Recursively copy a “directory” from your personal storage to your running service

mc cp --recursive s3/oidc-[YOUR_USERNAME]/path/to/my_folder /local/path/to/my_folder

You can discover other examples of mc usage in the “File Explorer” code snippets that appear when you execute some actions graphically, or by reading the documentation of the mc tool.
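Beyond the terminal, the same injected variables can also be consumed from Python with only the standard library. A minimal sketch (the fallback values below are placeholders, not real EDITO endpoints):

```python
import os

# Inside an EDITO service these variables are injected automatically;
# the fallbacks here are placeholders so the sketch runs anywhere.
endpoint = os.environ.get("AWS_S3_ENDPOINT", "s3.example.org")
region = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")
access_key = os.environ.get("AWS_ACCESS_KEY_ID", "")
secret_key = os.environ.get("AWS_SECRET_ACCESS_KEY", "")
session_token = os.environ.get("AWS_SESSION_TOKEN", "")

# Most S3 client libraries accept these values directly, typically as an
# endpoint URL of the form:
endpoint_url = f"https://{endpoint}"
print(endpoint_url)
```

Passing these five values to your preferred S3 library is usually all the configuration a script inside a service needs.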

Provide anonymous access to your content

Share a file from your personal storage to everyone as read-only

mc anonymous set download s3/oidc-[YOUR_USERNAME]/path/to_my_file

You can then deduce the public path to your file from the storage endpoint (the value of AWS_S3_ENDPOINT) followed by your bucket name and the object path.
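As a sketch of that deduction, assuming path-style S3 access, the public URL is the endpoint host followed by the bucket and key. The host below is a placeholder; use your deployment's AWS_S3_ENDPOINT value:

```python
from urllib.parse import quote

def public_url(endpoint: str, bucket: str, key: str) -> str:
    """Build a path-style public URL for an anonymously shared object.

    `endpoint` is a placeholder host; on EDITO, use the value of the
    AWS_S3_ENDPOINT environment variable.
    """
    return f"https://{endpoint}/{bucket}/{quote(key)}"

print(public_url("s3.example.org", "oidc-jdoe", "path/to_my_file"))
# → https://s3.example.org/oidc-jdoe/path/to_my_file
```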

Share a folder recursively from your personal storage to everyone as read-only

mc anonymous set download s3/oidc-[YOUR_USERNAME]/path/to_my_folder/

Share all files with a prefix from your personal storage to everyone as read-only

mc anonymous set download s3/oidc-[YOUR_USERNAME]/path/to_my_files/prefix

List all anonymous access from your personal storage

mc anonymous list s3/oidc-[YOUR_USERNAME]

Remove anonymous access

mc anonymous set private s3/oidc-[YOUR_USERNAME]/<my_anonymous_access>

Where <my_anonymous_access> must be replaced by an existing anonymous access you want to remove (see above for how to list all anonymous accesses of your personal storage).

📌 Note: In addition to private and download permissions, you can also set public (read/write access) and upload (write access) anonymous access.

Interact with your personal storage from outside EDITO


You can interact with your storage (add, share and download data) using your favorite programming language. You can find many code snippets using different programming languages and libraries in the datalab.

If you are not familiar with those libraries, you will have to look up their documentation, but basically you will be able to execute basic operations such as listing, downloading and uploading files.
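As one illustration, authenticated access from outside does not even require an SDK: a presigned GET URL can be built with only the Python standard library using AWS Signature V4, and then fetched with any HTTP client. This is a sketch under stated assumptions: the host, bucket and credentials are placeholders, and EDITO's temporary credentials would additionally need an X-Amz-Security-Token query parameter, omitted here for brevity:

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presign_get(host, bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600, now=None):
    """Build a presigned GET URL (AWS Signature V4, query-string auth)."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    canonical_uri = "/" + quote(f"{bucket}/{key}", safe="/")
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                     for k, v in sorted(params.items()))
    canonical_request = "\n".join([
        "GET", canonical_uri, query,
        f"host:{host}\n",          # canonical headers (trailing newline)
        "host",                    # signed headers
        "UNSIGNED-PAYLOAD",        # payload hash used for presigned URLs
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    def sign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()
    k = sign(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = sign(k, part)
    signature = hmac.new(k, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}{canonical_uri}?{query}&X-Amz-Signature={signature}"

# Hypothetical endpoint and credentials, for illustration only.
url = presign_get("s3.example.org", "oidc-jdoe", "path/to/my_file.nc",
                  "EXAMPLEKEY", "EXAMPLESECRET")
print(url)
```

In practice, most users will prefer an SDK (boto3, MinIO's Python client, s3fs, etc.) which handles this signing internally.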

Interact with your personal storage from a process


Writing directly to the user personal storage

Thanks to the EDITO_INFRA_OUTPUT environment variable, the template process contains an additional step that copies the content of this path to the user's personal storage. This step can be found in the file job.yaml as a container named copy-output.
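In practice, a process script only needs to write its results under that path; the copy-output step then uploads them. A minimal sketch (the fallback directory is a placeholder for running outside EDITO):

```python
import os
from pathlib import Path

# In an EDITO process, EDITO_INFRA_OUTPUT points to the directory whose
# content is copied to the user's personal storage by the copy-output
# container. The fallback below is a placeholder for local runs.
output_dir = Path(os.environ.get("EDITO_INFRA_OUTPUT", "/tmp/edito-output"))
output_dir.mkdir(parents=True, exist_ok=True)

result_file = output_dir / "result.txt"
result_file.write_text("process finished successfully\n")
print(result_file)
```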

This container needs access to the following environment variables to reach the user's S3 bucket:

  • AWS_ACCESS_KEY_ID

  • AWS_SECRET_ACCESS_KEY

  • AWS_SESSION_TOKEN

  • AWS_S3_ENDPOINT

  • AWS_DEFAULT_REGION

These variables are added to the container with:

envFrom:
  {{- if .Values.s3.enabled }}
  - secretRef:
      name: {{ include "library-chart.secretNameS3" . }}
  {{- end }}

They are exported thanks to the s3 section of values.schema.json and the presence of the file secret-s3.yaml.


What's next?


If you have any questions, problems, or suggestions, please feel free to contact us via chat using the widget available at the bottom right of the page.
