Google Cloud has announced the general availability of its new 'Archive' storage class, with the new offering aimed at enabling the long-term storage of large amounts of data.
The Archive storage class is targeted at enterprise users who need to store data that will be accessed less than once a year. It is priced at US$0.0012 per gigabyte (GB) per month, which translates to roughly US$1.23 per terabyte (TB) per month, according to a blog post by Geoffrey Noer, product manager at Google.
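As a sanity check on the quoted figures, the per-TB price follows directly from the per-GB rate. A minimal sketch, assuming the binary convention of 1 TB = 1,024 GB (the variable names are illustrative):

```python
PER_GB_MONTH = 0.0012  # US$ per GB per month, as quoted by Google

# 1 TB = 1,024 GB, so the monthly per-TB price is:
per_tb_month = PER_GB_MONTH * 1024

print(round(per_tb_month, 2))  # roughly US$1.23 per TB per month
```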
The Archive class can also be combined with Bucket Lock to prevent data from being modified, which can assist enterprises in meeting various data retention laws, according to Noer.
The Archive class can be set up in multi-region or dual-region configurations, and offers millisecond data access latency, checksum verification, and durability of '11 nines' -- 99.999999999 per cent.
Also on offer are open access for both Google-specific and multi-cloud architectures, the ability to scale to exabytes and beyond, and a single application programming interface (API) that works across all storage classes and integrates with object lifecycle management.
Users interested in employing the Archive class in Google Cloud Storage can do so by creating a new bucket and selecting “Archive” when prompted to choose a default storage class for data.
Data can then be uploaded with the gsutil command-line tool or through Google’s Cloud Data Transfer Service.
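For readers scripting the setup rather than using the console, the bucket creation step can be sketched as the JSON resource body that Cloud Storage's JSON API accepts when creating a bucket. The helper and bucket names below are hypothetical, though "ARCHIVE" is the storage-class value the service recognises:

```python
def archive_bucket_config(name: str, location: str = "US") -> dict:
    """Bucket resource for a Cloud Storage bucket-create call, with
    Archive as the default storage class (illustrative sketch)."""
    return {
        "name": name,
        "location": location,
        # Default class applied to objects uploaded without an explicit class:
        "storageClass": "ARCHIVE",
    }

config = archive_bucket_config("my-archive-bucket")
print(config["storageClass"])  # ARCHIVE
```

With a body like this sent to the bucket-creation endpoint, any object uploaded without an explicit storage class lands directly in Archive.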
Users with existing buckets interested in Archive can use Object Lifecycle Management rules to transition objects from the Standard, Nearline and Coldline storage classes down to Archive.
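That transition can be expressed as a lifecycle configuration. A minimal sketch, assuming the Object Lifecycle Management JSON shape (a SetStorageClass action guarded by age and matchesStorageClass conditions); the helper name and the 365-day threshold are illustrative choices, not values from the announcement:

```python
def archive_lifecycle_config(age_days: int = 365) -> dict:
    """Lifecycle configuration that moves objects in the warmer
    classes down to Archive once they reach the given age
    (illustrative sketch)."""
    return {
        "rule": [
            {
                "action": {
                    "type": "SetStorageClass",
                    "storageClass": "ARCHIVE",
                },
                "condition": {
                    "age": age_days,  # days since object creation
                    "matchesStorageClass": [
                        "STANDARD",
                        "NEARLINE",
                        "COLDLINE",
                    ],
                },
            }
        ]
    }

lifecycle = archive_lifecycle_config()
```

Applied to a bucket, a rule like this downgrades eligible objects automatically, so existing data ages into the cheaper class without manual rewrites.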