s3_file 2.8.5 (34 versions available)

Installs/Configures s3_file LWRP

cookbook 's3_file', '~> 2.8.5', :supermarket
cookbook 's3_file', '~> 2.8.5'
knife supermarket install s3_file
knife supermarket download s3_file


An LWRP that can be used to fetch files from S3.

I created this LWRP to solve the chicken-and-egg problem of fetching files from S3 on the first Chef run on a newly provisioned machine. Ruby libraries that are installed on that first run are not available to Chef during the run, so I couldn't use a library like Fog to get what I needed from S3.

This LWRP has no dependencies beyond the Ruby standard library, so it can be used on the first run of Chef.


Requirements

An Amazon Web Services account and something in S3 to fetch.

Multi-part S3 uploads do not put the MD5 of the content in the ETag header. If x-amz-meta-digest is provided in User-Defined Metadata on the S3 Object it is processed as if it were a Digest header (RFC 3230).

The MD5 of the local file is checked against the MD5 from x-amz-meta-digest if it is present; if not, it is checked against the ETag. If there is no match, or if the local file is absent, the file is downloaded.
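That decision can be sketched in plain Ruby (a simplified illustration with a hypothetical helper method; the cookbook's actual implementation differs in detail):

```ruby
require 'digest'

# Simplified sketch of the download decision (hypothetical helper method,
# not the cookbook's actual code).
def needs_download?(local_path, etag, digest_header = nil)
  return true unless File.exist?(local_path)       # no local copy yet

  local_md5 = Digest::MD5.file(local_path).hexdigest
  if digest_header                                 # e.g. "md5=9e107d9d..."
    local_md5 != digest_header[/md5=(\h+)/, 1]
  else
    local_md5 != etag.delete('"')                  # ETag arrives quoted
  end
end
```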

If credentials are not provided, s3_file will attempt to use the first instance profile associated with the instance. See the AWS documentation for more on instance profiles.


Usage

s3_file acts like other file resources. The only supported action is :create, which is the default.

Attribute Parameters:

  • aws_access_key_id - your AWS access key id. (optional)
  • aws_secret_access_key - your AWS secret access key. (optional)
  • token - token used for temporary IAM credentials. (optional)
  • bucket - the bucket to pull from.
  • s3_url - Custom S3 URL. If specified this URL must include the bucket name at the end. (optional)
  • remote_path - the S3 key to pull.
  • owner - the owner of the file. (optional)
  • group - the group owner of the file. (optional)
  • mode - the octal mode of the file. (optional)
  • decryption_key - the 32 character SHA256 key used to encrypt your S3 file. (optional)


s3_file "/tmp/somefile" do
    remote_path "/my/s3/key"
    bucket "my-s3-bucket"
    aws_access_key_id "mykeyid"
    aws_secret_access_key "mykey"
    s3_url ""
    owner "me"
    group "mygroup"
    mode "0644"
    action :create
    decryption_key "my SHA256 digest key"
    decrypted_file_checksum "SHA256 hex digest of decrypted file"
end

MD5 and Multi-Part Upload

s3_file compares the MD5 hash of the local file, if present, with the ETag header of the S3 object. If they do not match, the remote object is downloaded and notifications are fired.

In most cases, the ETag of an S3 object will be identical to its MD5 hash. However, if the file was uploaded to S3 via multi-part upload, the ETag is instead the MD5 of the concatenated per-part MD5 digests, followed by a dash and the number of parts. In these cases, the MD5 of the local file and the ETag of the remote object will never match.
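For illustration, the multi-part ETag can be reproduced in plain Ruby (a sketch assuming the standard S3 multi-part scheme; this helper is not part of the cookbook):

```ruby
require 'digest'

# Sketch of how S3 derives a multi-part ETag: the MD5 of the concatenated
# binary MD5 digests of each part, suffixed with "-<part count>".
def multipart_etag(data, part_size)
  part_md5s = data.scan(/.{1,#{part_size}}/m).map { |part| Digest::MD5.digest(part) }
  "#{Digest::MD5.hexdigest(part_md5s.join)}-#{part_md5s.size}"
end
```

A ten-byte object uploaded in five-byte parts therefore gets an ETag ending in -2, which can never equal the plain MD5 of the whole file.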

To work around this issue, set an X-Amz-Meta-Digest entry in your S3 object's metadata with the value md5=MD5 of the entire object. s3_file will then use that value in place of the ETag value, and will skip downloading when the MD5 of the local file matches the value of the X-Amz-Meta-Digest header.


Encryption

s3_file can decrypt files that have been encrypted using an AES-256-CBC cipher. To use the decryption part of the resource, you must provide a decryption_key, which can be generated by following the instructions below. You can also include an optional decrypted_file_checksum, which allows Chef to check whether it needs to re-download the encrypted file. Note that this checksum differs from the one in S3 because the comparison is against the already-decrypted file, so a SHA256 checksum is used instead of the MD5. Instructions to generate the decrypted_file_checksum are below as well.

To use s3_file with encrypted files:

  1. Create a new key using bin/s3_crypto -g > my_new_key.
  2. Create a SHA256 hex digest checksum of your source file by calling bin/s3_crypto -c -i my_source_file [ -o my_checksum_file ].
  3. Encrypt your file using the new key by calling bin/s3_crypto -e -k my_new_key -i my_source_file [ -o my_destination_file ].
  4. You can test decryption of your file using bin/s3_crypto -d -k my_new_key -i my_encoded_file [ -o my_decoded_destination ].
  5. Upload your encrypted file to S3 as normal.
  6. In the s3_file resource call, provide the string within my_new_key as the decryption_key of the resource.
  7. In the s3_file resource call, provide the string within my_checksum_file as the decrypted_file_checksum of the resource.
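The encrypt/decrypt round trip that bin/s3_crypto performs can be sketched with Ruby's OpenSSL bindings (an AES-256-CBC illustration, not the cookbook's exact code; the key and IV handling here are assumptions):

```ruby
require 'openssl'

# Illustration of an AES-256-CBC round trip (not the cookbook's exact code).
cipher = OpenSSL::Cipher.new('aes-256-cbc')
cipher.encrypt
key = cipher.random_key          # 32 random bytes
iv  = cipher.random_iv           # 16 random bytes; must be kept with the ciphertext
ciphertext = cipher.update('secret payload') + cipher.final

decipher = OpenSSL::Cipher.new('aes-256-cbc')
decipher.decrypt
decipher.key = key
decipher.iv  = iv
plaintext = decipher.update(ciphertext) + decipher.final  # => "secret payload"
```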

Note that when you use the s3_file resource, it is best to make decryption_key a node property and provide it via an encrypted data bag, or to pull the key from the environment. It is not wise to check your decryption key into your recipe.
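For example, a recipe might load the key from an encrypted data bag instead of hard-coding it (a sketch; the data bag name secrets and item s3 are hypothetical):

```ruby
# Hypothetical encrypted data bag 'secrets' with an item 's3' holding the key.
key = Chef::EncryptedDataBagItem.load('secrets', 's3')['decryption_key']

s3_file '/tmp/somefile' do
  remote_path '/my/s3/key'
  bucket 'my-s3-bucket'
  decryption_key key
end
```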

To create your cipher key, run bin/s3_crypto -g > my_new_key and a new 256-bit key (32 hexadecimal characters) will be generated for you. Paste that key into a file for later use. DO NOT include a trailing newline in the file, otherwise encryption and decryption will fail.


You can use the bin/s3_crypto utility to encrypt files before uploading them to S3 and to decrypt files to verify that the encryption is working.

ChefSpec matcher

s3_file comes with a matcher to use in ChefSpec.

This spec checks the code from the USAGE example above:

it 'downloads some file from s3' do
    expect(chef_run).to create_s3_file('/tmp/somefile')
        .with(bucket: "my-s3-bucket", remote_path: "/my/s3/key")
end

Testing

This cookbook has Test Kitchen integration tests. To test, create a .s3.yml file with the following S3 details:

file: file
bucket: bucket
region: xx-xxxx-x

If you're using the ChefDK, run chef exec kitchen test; otherwise run kitchen test.

Dependent cookbooks

This cookbook has no specified dependencies.

Contingent cookbooks

arcgis-repository
chefgithook
letsencryptaws
nexus_repository_manager
opsworks_ruby
s3_dir
tfs

2015-06-03 version 2.5.4

  • Adds version constraint on rest-client to 1.7.3.
  • Adds a cookbook attribute for overwriting the rest-client gem version.

2015-03-20 version 2.5.3

version 2.5.2

  • Add retries for downloads

2014-12-09 version 2.5.1

2014-10-01 version 2.5.0

2014-04-17 version 2.4.0

2014-03-18 version 2.3.3

2014-02-20 version 2.3.2

  • Added documentation for multi-part ETag/MD5 issue.
  • Added changelog, backdated to 2014-02-14.

2014-02-14 version 2.3.1

Collaborator Number Metric

2.8.5 passed this metric

Contributing File Metric

2.8.5 failed this metric

Failure: To pass this metric, your cookbook metadata must include a source url, the source url must be in the form of, and your repo must contain a file

Foodcritic Metric

2.8.5 failed this metric

FC074: LWRP should use DSL to define resource's default action: s3_file/resources/default.rb:1
FC085: Resource using new_resource.updated_by_last_action to converge resource: s3_file/providers/default.rb:94
Run with Foodcritic Version 16.3.0 with tags metadata,correctness ~FC031 ~FC045 and failure tags any

No Binaries Metric

2.8.5 passed this metric

Testing File Metric

2.8.5 failed this metric

Failure: To pass this metric, your cookbook metadata must include a source url, the source url must be in the form of, and your repo must contain a file

Version Tag Metric

2.8.5 passed this metric