Go package that abstracts file systems (local, in-memory, Google Cloud Storage, S3) into a few interfaces. It includes convenience wrappers for simplifying common file system use cases such as caching, prefix isolation and more!
Forked from https://github.com/Shopify/go-storage
- Upload and download of objects through convenient io.Reader and io.Writer interfaces.
- SignedURL support for GCS and S3.
- The same interface for every storage type.
- Ability to walk the filesystem with a consistent implementation.
- Extremely customizable with configurations, wrappers, and layers.
$ go get github.com/dhillondeep/go-storage
All storage implementations in this package satisfy two simple interfaces, FS and Walker, designed for working with file systems.
type FS interface {
    Walker

    // Open opens an existing file at path in the filesystem. Callers must close the
    // File when done to release all underlying resources.
    Open(ctx context.Context, path string, options *ReaderOptions) (*File, error)

    // Attributes returns attributes about a path.
    Attributes(ctx context.Context, path string, options *ReaderOptions) (*Attributes, error)

    // Create makes a new file at path in the filesystem. Callers must close the
    // returned WriteCloser and check the error to be sure that the file
    // was successfully written.
    Create(ctx context.Context, path string, options *WriterOptions) (io.WriteCloser, error)

    // Delete removes a path from the filesystem.
    Delete(ctx context.Context, path string) error

    // URL resolves a path to an addressable URL.
    URL(ctx context.Context, path string, options *SignedURLOptions) (string, error)
}
// WalkFn is a function type which is passed to Walk.
type WalkFn func(path string) error

// Walker is an interface which defines the Walk method.
type Walker interface {
    // Walk traverses a path listing by prefix, calling fn with each object path rewritten
    // to be relative to the underlying filesystem and provided path.
    Walk(ctx context.Context, path string, fn WalkFn) error
}
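For example, listing every object under a prefix looks the same for any implementation. A minimal sketch, assuming fs is any storage.FS and "images/" is an illustrative prefix:

err := fs.Walk(context.Background(), "images/", func(path string) error {
    fmt.Println(path) // each path is relative to the filesystem and the provided prefix
    return nil        // return nil to continue walking
})
if err != nil {
    // ...
}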
Local is the default implementation of a local file system (i.e. using os.Open, etc.).
local := storage.NewLocalFS("/some/root/path")

f, err := local.Open(context.Background(), "file.json", nil) // will open "/some/root/path/file.json"
if err != nil {
    // ...
}
// ...
f.Close()
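The same filesystem also answers metadata queries through Attributes (see the FS interface above); a minimal sketch against the local FS from the snippet above (the fields of the returned *Attributes are not covered here):

attrs, err := local.Attributes(context.Background(), "file.json", nil)
if err != nil {
    // ...
}
_ = attrs // inspect the returned *Attributes as needed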
Mem is the default in-memory implementation of a file system.
mem := storage.NewMemoryFS()

wc, err := mem.Create(context.Background(), "file.txt", nil)
if err != nil {
    // ...
}
if _, err := io.WriteString(wc, "Hello World!"); err != nil {
    // ...
}
if err := wc.Close(); err != nil {
    // ...
}
And now:
f, err := mem.Open(context.Background(), "file.txt", nil)
if err != nil {
    // ...
}
// ...
f.Close()
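Delete works the same way across implementations; a minimal sketch removing the file created above:

if err := mem.Delete(context.Background(), "file.txt"); err != nil {
    // ...
}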
CloudStorage is the default implementation of Google Cloud Storage. This uses https://godoc.org/golang.org/x/oauth2/google#DefaultTokenSource for authentication.
store := storage.NewCloudStorageFS("some-bucket")

f, err := store.Open(context.Background(), "file.json", nil) // will fetch "gs://some-bucket/file.json"
if err != nil {
    // ...
}
// ...
f.Close()
You can also use google.Credentials to provide custom authentication:
creds, err := google.CredentialsFromJSON(context.Background(), []byte("JSON data"), "https://www.googleapis.com/auth/cloud-platform")
if err != nil {
    // ...
}
store := storage.NewCloudStorageFS("some-bucket", creds)
S3 is the default implementation for AWS S3. This uses aws-sdk-go/aws/session.NewSession for authentication.
store := storage.NewS3FS("some-bucket")

f, err := store.Open(context.Background(), "file.json", nil) // will fetch "s3://some-bucket/file.json"
if err != nil {
    // ...
}
// ...
f.Close()
You can also pass an aws.Config to provide custom authentication:
store := storage.NewS3FS("some-bucket", &aws.Config{
    Region:      aws.String("region"),
    Credentials: credentials.NewStaticCredentials("secretId", "secretKey", ""),
})
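The SignedURL support mentioned in the feature list is exposed through the URL method on the FS interface. A minimal sketch against the S3 store above; passing nil options is an assumption here, since the fields of SignedURLOptions are not covered in this document:

url, err := store.URL(context.Background(), "file.json", nil) // nil options assumed to use defaults
if err != nil {
    // ...
}
// use url, e.g. hand it to a client that fetches "s3://some-bucket/file.json" directly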
To use Cloud Storage as a source file system, but cache all opened files in a local filesystem:
src := storage.NewCloudStorageFS("some-bucket")
local := storage.NewLocalFS("/scratch-space")
fs := storage.NewCacheWrapper(src, local)

f, err := fs.Open(context.Background(), "file.json", nil) // will check the cache, then fetch "gs://some-bucket/file.json" and cache it
if err != nil {
    // ...
}
// ...
f.Close()

f, err = fs.Open(context.Background(), "file.json", nil) // should now be cached ("/scratch-space/file.json")
if err != nil {
    // ...
}
// ...
f.Close()
This is particularly useful when distributing files across multiple regions or between cloud providers. For instance, we could add the following code to the previous example:
mainSrc := storage.NewCloudStorageFS("some-bucket-in-another-region")
fs2 := storage.NewCacheWrapper(mainSrc, fs) // fs is from previous snippet

// Open will:
// 1. Try local (see above)
// 2. Try gs://some-bucket
// 3. Try gs://some-bucket-in-another-region, which will be cached in gs://some-bucket and then local on its
//    way back to the caller.
f, err := fs2.Open(context.Background(), "file.json", nil) // will fetch "gs://some-bucket-in-another-region/file.json"
if err != nil {
    // ...
}
// ...
f.Close()

f, err = fs2.Open(context.Background(), "file.json", nil) // will fetch "/scratch-space/file.json"
if err != nil {
    // ...
}
// ...
f.Close()
If you're writing code that relies on a set directory structure, it can be very messy to pass path patterns around. You can avoid this by wrapping storage.FS implementations with storage.Prefix, which rewrites all incoming paths.
modelFS := storage.NewPrefixWrapper(rootFS, "models/")

f, err := modelFS.Open(context.Background(), "file.json", nil) // will call rootFS.Open with path "models/file.json"
if err != nil {
    // ...
}
// ...
f.Close()
It's also now simple to write wrapper functions to abstract out more complex directory structures.
func NewUserFS(fs storage.FS, userID, mediaType string) storage.FS {
    return storage.NewPrefixWrapper(fs, fmt.Sprintf("%v/%v/", userID, mediaType))
}
userFS := NewUserFS(rootFS, "1111", "pics")

f, err := userFS.Open(context.Background(), "beach.png", nil) // will call rootFS.Open with path "1111/pics/beach.png"
if err != nil {
    // ...
}
// ...
f.Close()
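Since the wrappers are themselves storage.FS implementations, they compose freely; a minimal sketch layering a prefix wrapper over the cache-wrapped fs from the earlier snippet (the "1111/pics/" prefix and file name are only illustrative):

userPicsFS := storage.NewPrefixWrapper(fs, "1111/pics/") // fs is the cache wrapper from above

f, err := userPicsFS.Open(context.Background(), "beach.png", nil) // will call fs.Open with path "1111/pics/beach.png"
if err != nil {
    // ...
}
// ...
f.Close()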