Commit d8e8f98b authored by Chris Bednarski

Change to 4 spaces

parent 555a8ba7
@@ -13,5 +13,5 @@ format:
	bundle exec htmlbeautifier -t 2 source/*.erb
	bundle exec htmlbeautifier -t 2 source/layouts/*.erb
	@pandoc -v > /dev/null || echo "pandoc must be installed in order to format markdown content"
	pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "pandoc -r markdown -w markdown --tab-stop=4 --atx-headers -s --columns=80 {} > {}.new"\; || true
	pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "mv {}.new {}"\; || true
@@ -29,7 +29,8 @@ list as contributors come and go.
<div class="people">
    <div class="person">
        <img class="pull-left" src="http://www.gravatar.com/avatar/54079122b67de9677c1f93933ce8b63a.png?s=125">
        <div class="bio">
            <h3>Mitchell Hashimoto (<a href="https://github.com/mitchellh">@mitchellh</a>)</h3>
@@ -41,9 +42,11 @@ list as contributors come and go.
            described as "automation obsessed."
            </p>
        </div>
    </div>

    <div class="person">
        <img class="pull-left" src="http://www.gravatar.com/avatar/2acc31dd6370a54b18f6755cd0710ce6.png?s=125">
        <div class="bio">
            <h3>Jack Pearkes (<a href="https://github.com/pearkes">@pearkes</a>)</h3>
@@ -52,9 +55,11 @@ list as contributors come and go.
            for Packer. Outside of Packer, Jack is an avid open source
            contributor and software consultant.</p>
        </div>
    </div>

    <div class="person">
        <img class="pull-left" src="http://www.gravatar.com/avatar/2f7fc9cb7558e3ea48f5a86fa90a78da.png?s=125">
        <div class="bio">
            <h3>Mark Peek (<a href="https://github.com/markpeek">@markpeek</a>)</h3>
@@ -65,9 +70,11 @@ list as contributors come and go.
            <a href="https://github.com/ironport">IronPort Python libraries</a>.
            Mark is also a <a href="https://FreeBSD.org">FreeBSD committer</a>.</p>
        </div>
    </div>

    <div class="person">
        <img class="pull-left" src="http://www.gravatar.com/avatar/1fca64df3d7db1e2f258a8956d2b0aff.png?s=125">
        <div class="bio">
            <h3>Ross Smith II (<a href="https://github.com/rasa" target="_blank">@rasa</a>)</h3>
@@ -78,9 +85,11 @@ VMware builder on Windows, and provides other valuable assistance. Ross is an
            open source enthusiast, published author, and freelance consultant.
            </p>
        </div>
    </div>

    <div class="person">
        <img class="pull-left" src="http://www.gravatar.com/avatar/c9f6bf7b5b865012be5eded656ebed7d.png?s=125">
        <div class="bio">
            <h3>Rickard von Essen<br/>(<a href="https://github.com/rickard-von-essen" target="_blank">@rickard-von-essen</a>)</h3>
@@ -90,8 +99,11 @@ Rickard von Essen maintains our Parallels Desktop builder. Rickard is a
            polyglot programmer and consults on Continuous Delivery.
            </p>
        </div>
    </div>

    <div class="clearfix">
    </div>
</div>
@@ -17,41 +17,41 @@ Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical order
for easy referencing.

- `Artifacts` are the results of a single build, and are usually a set of IDs
    or files to represent a machine image. Every builder produces a
    single artifact. As an example, in the case of the Amazon EC2 builder, the
    artifact is a set of AMI IDs (one per region). For the VMware builder, the
    artifact is a directory of files comprising the created virtual machine.

- `Builds` are single tasks that eventually produce an image for a
    single platform. Multiple builds run in parallel. Example usage in a
    sentence: "The Packer build produced an AMI to run our web application." Or:
    "Packer is running the builds now for VMware, AWS, and VirtualBox."

- `Builders` are components of Packer that are able to create a machine image
    for a single platform. Builders read in some configuration and use that to
    run and generate a machine image. A builder is invoked as part of a build in
    order to create the actual resulting images. Example builders include
    VirtualBox, VMware, and Amazon EC2. Builders can be created and added to
    Packer in the form of plugins.

- `Commands` are sub-commands for the `packer` program that perform some job.
    An example command is "build", which is invoked as `packer build`. Packer
    ships with a set of commands out of the box in order to define its
    command-line interface. Commands can also be created and added to Packer in
    the form of plugins.

- `Post-processors` are components of Packer that take the result of a builder
    or another post-processor and process that to create a new artifact.
    Examples of post-processors are compress to compress artifacts, upload to
    upload artifacts, etc.

- `Provisioners` are components of Packer that install and configure software
    within a running machine prior to that machine being turned into a
    static image. They perform the major work of making the image contain
    useful software. Example provisioners include shell scripts, Chef,
    Puppet, etc.

- `Templates` are JSON files which define one or more builds by configuring
    the various components of Packer. Packer is able to read a template and use
    that information to create multiple machine images in parallel.
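These pieces come together in a template. A minimal sketch using the amazon-ebs builder (the AMI ID, instance type, and AMI name below are placeholder values, not working settings):

```json
{
    "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-xxxxxxxx",
        "instance_type": "t1.micro",
        "ssh_username": "ubuntu",
        "ami_name": "packer-example {{timestamp}}"
    }],
    "provisioners": [{
        "type": "shell",
        "inline": ["sudo apt-get update"]
    }],
    "post-processors": ["compress"]
}
```

Read top to bottom: the builder defines one build, the provisioner installs software inside the running machine, and the post-processor transforms the resulting artifact.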
@@ -12,20 +12,21 @@ Packer is able to create Amazon AMIs. To achieve this, Packer comes with
multiple builders depending on the strategy you want to use to build the AMI.
Packer supports the following builders at the moment:

- [amazon-ebs](/docs/builders/amazon-ebs.html) - Create EBS-backed AMIs by
    launching a source AMI and re-packaging it into a new AMI
    after provisioning. If in doubt, use this builder, which is the easiest to
    get started with.

- [amazon-instance](/docs/builders/amazon-instance.html) - Create
    instance-store AMIs by launching and provisioning a source instance, then
    rebundling it and uploading it to S3.

- [amazon-chroot](/docs/builders/amazon-chroot.html) - Create EBS-backed AMIs
    from an existing EC2 instance by mounting the root device and using a
    [Chroot](http://en.wikipedia.org/wiki/Chroot) environment to provision
    that device. This is an **advanced builder and should not be used by
    newcomers**. However, it is also the fastest way to build an EBS-backed AMI
    since no new EC2 instance needs to be launched.

-> **Don't know which builder to use?** If in doubt, use the [amazon-ebs
builder](/docs/builders/amazon-ebs.html). It is much easier to use and Amazon
...
@@ -34,41 +34,43 @@ builder.

### Required:

- `api_token` (string) - The client TOKEN to use to access your account. It
    can also be specified via environment variable `DIGITALOCEAN_API_TOKEN`,
    if set.

- `image` (string) - The name (or slug) of the base image to use. This is the
    image that will be used to launch a new droplet and provision it. See
    https://developers.digitalocean.com/documentation/v2/\#list-all-images for
    details on how to get a list of the accepted image names/slugs.

- `region` (string) - The name (or slug) of the region to launch the
    droplet in. Consequently, this is the region where the snapshot will
    be available. See
    https://developers.digitalocean.com/documentation/v2/\#list-all-regions for
    the accepted region names/slugs.

- `size` (string) - The name (or slug) of the droplet size to use. See
    https://developers.digitalocean.com/documentation/v2/\#list-all-sizes for
    the accepted size names/slugs.

### Optional:

- `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
    sets the hostname of the machine to this value.

- `private_networking` (boolean) - Set to `true` to enable private networking
    for the droplet being created. This defaults to `false`, or not enabled.

- `snapshot_name` (string) - The name of the resulting snapshot that will
    appear in your account. This must be unique. To help make this unique, use a
    function like `timestamp` (see [configuration
    templates](/docs/templates/configuration-templates.html) for more info)

- `state_timeout` (string) - The time to wait, as a duration string, for a
    droplet to enter a desired state (such as "active") before timing out. The
    default state timeout is "6m".

- `user_data` (string) - User data to launch with the Droplet.

## Basic Example

...
@@ -68,42 +68,42 @@ builder.

### Required:

- `commit` (boolean) - If true, the container will be committed to an image
    rather than exported. This cannot be set if `export_path` is set.

- `export_path` (string) - The path where the final container will be exported
    as a tar file. This cannot be set if `commit` is set to true.

- `image` (string) - The base image for the Docker container that will
    be started. This image will be pulled from the Docker registry if it doesn't
    already exist.

### Optional:

- `login` (boolean) - Defaults to false. If true, the builder will log in
    to pull the image. The builder only logs in for the duration of
    the pull. It always logs out afterwards.

- `login_email` (string) - The email to use when logging in.

- `login_username` (string) - The username to use when logging in.

- `login_password` (string) - The password to use when logging in.

- `login_server` (string) - The server address to log in to.

- `pull` (boolean) - If true, the configured image will be pulled using
    `docker pull` prior to use. Otherwise, it is assumed the image already
    exists and can be used. This defaults to true if not set.

- `run_command` (array of strings) - An array of arguments to pass to
    `docker run` in order to run the container. By default this is set to
    `["-d", "-i", "-t", "{{.Image}}", "/bin/bash"]`. As you can see, you have a
    couple template variables to customize, as well.

- `volumes` (map of strings to strings) - A mapping of additional volumes to
    mount into this container. The key of the object is the host path, the value
    is the container path.
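As a sketch of how these options combine (the image name and export path are illustrative):

```json
{
    "builders": [{
        "type": "docker",
        "image": "ubuntu",
        "export_path": "image.tar"
    }]
}
```

Because `export_path` is set here, `commit` must be left unset: the container is exported as a tar file rather than committed to an image.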
## Using the Artifact: Export
@@ -226,11 +226,11 @@ Dockerfiles have some additional features that Packer doesn't support which are
able to be worked around. Many of these features will be automated by Packer in
the future:

- Dockerfiles will snapshot the container at each step, allowing you to go
    back to any step in the history of building. Packer doesn't do this yet,
    but inter-step snapshotting is on the way.

- Dockerfiles can contain information such as exposed ports, shared volumes,
    and other metadata. Packer builds a raw Docker container image that has
    none of this metadata. You can pass in much of this metadata at runtime
    with `docker run`.
@@ -38,67 +38,67 @@ builder.

### Required:

- `flavor` (string) - The ID, name, or full URL for the desired flavor for the
    server to be created.

- `image_name` (string) - The name of the resulting image.

- `source_image` (string) - The ID or full URL to the base image to use. This
    is the image that will be used to launch a new server and provision it.
    Unless you specify completely custom SSH settings, the source image must
    have `cloud-init` installed so that the keypair gets assigned properly.

- `username` (string) - The username used to connect to the OpenStack service.
    If not specified, Packer will use the environment variable `OS_USERNAME`,
    if set.

- `password` (string) - The password used to connect to the OpenStack service.
    If not specified, Packer will use the environment variable `OS_PASSWORD`,
    if set.

### Optional:

- `api_key` (string) - The API key used to access OpenStack. Some OpenStack
    installations require this.

- `availability_zone` (string) - The availability zone to launch the
    server in. If this isn't specified, the default enforced by your OpenStack
    cluster will be used. This may be required for some OpenStack clusters.

- `floating_ip` (string) - A specific floating IP to assign to this instance.
    `use_floating_ip` must also be set to true for this to have an effect.

- `floating_ip_pool` (string) - The name of the floating IP pool to use to
    allocate a floating IP. `use_floating_ip` must also be set to true for this
    to have an effect.

- `insecure` (boolean) - Whether or not the connection to OpenStack can be
    done over an insecure connection. By default this is false.

- `networks` (array of strings) - A list of networks by UUID to attach to
    this instance.

- `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the
    instance into. Some OpenStack installations require this. If not specified,
    Packer will use the environment variable `OS_TENANT_NAME`, if set.

- `security_groups` (array of strings) - A list of security groups by name to
    add to this instance.

- `region` (string) - The name of the region, such as "DFW", in which to
    launch the server to create the AMI. If not specified, Packer will use the
    environment variable `OS_REGION_NAME`, if set.

- `ssh_interface` (string) - The type of interface to connect via SSH. Values
    useful for Rackspace are "public" or "private", and the default behavior is
    to connect via whichever is returned first from the OpenStack API.

- `use_floating_ip` (boolean) - Whether or not to use a floating IP for
    the instance. Defaults to false.

- `rackconnect_wait` (boolean) - For Rackspace, whether or not to wait for
    Rackconnect to assign the machine an IP address before connecting via SSH.
    Defaults to false.
## Basic Example: Rackspace public cloud
@@ -138,7 +138,7 @@ appear in the template. That is because I source a standard OpenStack script
with environment variables set before I run this. This script is setting
environment variables like:

- `OS_AUTH_URL`
- `OS_TENANT_ID`
- `OS_USERNAME`
- `OS_PASSWORD`
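A sketch of a template that leans on those exported variables (the source image UUID, flavor, and image name below are placeholders, not working values):

```json
{
    "builders": [{
        "type": "openstack",
        "ssh_username": "root",
        "image_name": "packer-example",
        "source_image": "3e106b04-0000-0000-0000-000000000000",
        "flavor": "2"
    }]
}
```

With the credentials supplied through the environment, nothing sensitive needs to appear in the template itself.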
@@ -16,16 +16,16 @@ Packer actually comes with multiple builders able to create Parallels machines,
depending on the strategy you want to use to build the image. Packer supports
the following Parallels builders:

- [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO
    file, creates a brand new Parallels VM, installs an OS, provisions software
    within the OS, then exports that machine to create an image. This is best
    for people who want to start from scratch.

- [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder imports an
    existing PVM file, runs provisioners on top of that VM, and exports that
    machine to create an image. This is best if you have an existing Parallels
    VM export you want to use as the source. As an additional benefit, you can
    feed the artifact of this builder back into itself to iterate on a machine.

## Requirements

...
@@ -16,13 +16,14 @@ Packer actually comes with multiple builders able to create VirtualBox machines,
depending on the strategy you want to use to build the image. Packer supports
the following VirtualBox builders:

- [virtualbox-iso](/docs/builders/virtualbox-iso.html) - Starts from an ISO
    file, creates a brand new VirtualBox VM, installs an OS, provisions software
    within the OS, then exports that machine to create an image. This is best
    for people who want to start from scratch.

- [virtualbox-ovf](/docs/builders/virtualbox-ovf.html) - This builder imports
    an existing OVF/OVA file, runs provisioners on top of that VM, and exports
    that machine to create an image. This is best if you have an existing
    VirtualBox VM export you want to use as the source. As an additional
    benefit, you can feed the artifact of this builder back into itself to
    iterate on a machine.
@@ -15,14 +15,14 @@ Packer actually comes with multiple builders able to create VMware machines,
depending on the strategy you want to use to build the image. Packer supports
the following VMware builders:

- [vmware-iso](/docs/builders/vmware-iso.html) - Starts from an ISO file,
    creates a brand new VMware VM, installs an OS, provisions software within
    the OS, then exports that machine to create an image. This is best for
    people who want to start from scratch.

- [vmware-vmx](/docs/builders/vmware-vmx.html) - This builder imports an
    existing VMware machine (from a VMX file), runs provisioners on top of that
    VM, and exports that machine to create an image. This is best if you have an
    existing VMware VM you want to use as the source. As an additional benefit,
    you can feed the artifact of this builder back into Packer to iterate on
    a machine.
@@ -17,24 +17,26 @@
artifacts that are created will be outputted at the end of the build.

## Options

- `-color=false` - Disables colorized output. Enabled by default.

- `-debug` - Disables parallelization and enables debug mode. Debug mode flags
  the builders that they should output debugging information. The exact
  behavior of debug mode is left to the builder. In general, builders usually
  will stop between each step, waiting for keyboard input before continuing.
  This will allow the user to inspect state and so on.

- `-except=foo,bar,baz` - Builds all the builds except those with the given
  comma-separated names. Build names by default are the names of their
  builders, unless a specific `name` attribute is specified within
  the configuration.

- `-force` - Forces a builder to run when artifacts from a previous build
  prevent a build from running. The exact behavior of a forced build is left
  to the builder. In general, a builder supporting the forced build will
  remove the artifacts from the previous build. This will allow the user to
  repeat a build without having to manually clean these artifacts beforehand.

- `-only=foo,bar,baz` - Only build the builds with the given
  comma-separated names. Build names by default are the names of their
  builders, unless a specific `name` attribute is specified within
  the configuration.
@@ -19,7 +19,7 @@
The fix command will output the changed template to standard out, so you should
redirect standard out using standard OS-specific techniques if you want to save
it to a file. For example, on Linux systems, you may want to do this:

    $ packer fix old.json > new.json

If fixing fails for any reason, the fix command will exit with a non-zero exit
status. Error messages appear on standard error, so if you're redirecting
...
@@ -53,20 +53,22 @@
timestamp,target,type,data...

Each component is explained below:

- **timestamp** is a Unix timestamp in UTC of when the message was printed.

- **target** is the target of the following output. This is empty if the
  message is related to Packer globally. Otherwise, this is generally a build
  name so you can relate output to a specific build while parallel builds
  are running.

- **type** is the type of machine-readable message being outputted. There are
  a set of standard types which are covered later, but each component of
  Packer (builders, provisioners, etc.) may output their own custom types as
  well, allowing the machine-readable output to be infinitely flexible.

- **data** is zero or more comma-separated values associated with the
  prior type. The exact amount and meaning of this data is type-dependent, so
  you must read the documentation associated with the type to
  understand fully.

Within the format, if data contains a comma, it is replaced with
`%!(PACKER_COMMA)`. This was preferred over an escape character such as `\'`
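The components above can be split with ordinary string handling; a minimal Go sketch of parsing one such line, with the comma escape restored in the data fields (the sample timestamp and values are made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// Message is one machine-readable output line: timestamp,target,type,data...
type Message struct {
	Timestamp string
	Target    string
	Type      string
	Data      []string
}

// parseLine splits a machine-readable line into its components and restores
// any commas that were escaped as %!(PACKER_COMMA) in the data fields.
func parseLine(line string) (Message, error) {
	parts := strings.Split(line, ",")
	if len(parts) < 3 {
		return Message{}, fmt.Errorf("malformed line: %q", line)
	}
	msg := Message{Timestamp: parts[0], Target: parts[1], Type: parts[2]}
	for _, d := range parts[3:] {
		msg.Data = append(msg.Data, strings.Replace(d, "%!(PACKER_COMMA)", ",", -1))
	}
	return msg, nil
}

func main() {
	m, err := parseLine("1440642000,vmware,ui,say,Build finished%!(PACKER_COMMA) cleaning up")
	if err != nil {
		panic(err)
	}
	fmt.Println(m.Type, m.Data[1]) // ui Build finished, cleaning up
}
```

Because escaped commas only ever appear inside data fields, a plain `strings.Split` on the raw comma is safe here.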
...
@@ -26,16 +26,16 @@
configuration](/docs/templates/push.html) must be completed within the template.

## Options

- `-message` - A message to identify the purpose or changes in this Packer
  template much like a VCS commit message. This message will be passed to the
  Packer build service. This option is also available as a short option `-m`.

- `-token` - An access token for authenticating the push to the Packer build
  service such as Atlas. This can also be specified within the push
  configuration in the template.

- `-name` - The name of the build in the service. This typically looks like
  `hashicorp/precise64`.

## Examples
...
@@ -29,5 +29,5 @@
Errors validating build 'vmware'. 1 error(s) occurred:

## Options

- `-syntax-only` - Only the syntax of the template is checked. The
  configuration is not validated.
@@ -52,19 +52,19 @@
the following two packages, you're encouraged to use whatever packages you want.
Because plugins are their own processes, there is no danger of colliding
dependencies.

- `github.com/mitchellh/packer` - Contains all the interfaces that you have to
  implement for any given plugin.

- `github.com/mitchellh/packer/plugin` - Contains the code to serve
  the plugin. This handles all the inter-process communication stuff.

There are two steps involved in creating a plugin:

1. Implement the desired interface. For example, if you're building a builder
   plugin, implement the `packer.Builder` interface.

2. Serve the interface by calling the appropriate plugin serving method in your
   main method. In the case of a builder, this is `plugin.ServeBuilder`.

A basic example is shown below. In this example, assume the `Builder` struct
implements the `packer.Builder` interface:
...
@@ -51,21 +51,21 @@
Once the plugin is named properly, Packer automatically discovers plugins in the
following directories in the given order. If a conflicting plugin is found
later, it will take precedence over one found earlier.

1. The directory where `packer` is, or the executable directory.

2. `~/.packer.d/plugins` on Unix systems or `%APPDATA%/packer.d/plugins`
   on Windows.

3. The current working directory.

The valid types for plugins are:

- `builder` - Plugins responsible for building images for a specific platform.

- `command` - A CLI sub-command for `packer`.

- `post-processor` - A post-processor responsible for taking an artifact from
  a builder and turning it into something else.

- `provisioner` - A provisioner to install software on images created by
  a builder.
@@ -79,11 +79,11 @@
creating a new artifact with a single file: the compressed archive.

The result signature of this method is `(Artifact, bool, error)`. Each return
value is explained below:

- `Artifact` - The newly created artifact if no errors occurred.

- `bool` - If true, the input artifact will forcefully be kept. By default,
  Packer typically deletes all input artifacts, since the user doesn't
  generally want intermediary artifacts. However, some post-processors depend
  on the previous artifact existing. If this is `true`, it forces Packer to
  keep the artifact around.

- `error` - Non-nil if there was an error in any way. If this is the case, the
  other two return values are ignored.
@@ -12,8 +12,10 @@
These are the machine-readable types that exist as part of the output of
`packer build`.

<dl>
  <dt>
    artifact (&gt;= 2)
  </dt>
  <dd>
    <p>
      Information about an artifact of the targeted item. This is a
      fairly complex (but uniform!) machine-readable type that contains
@@ -37,10 +39,12 @@
      data points related to the subtype. The exact count and meaning
      of these subtypes comes from the subtype documentation.
    </p>
  </dd>
  <dt>
    artifact-count (1)
  </dt>
  <dd>
    <p>
      The number of artifacts associated with the given target. This
      will always be outputted _before_ any other artifact information,
@@ -51,10 +55,12 @@
      <strong>Data 1: count</strong> - The number of artifacts as
      a base 10 integer.
    </p>
  </dd>
  <dt>
    artifact subtype: builder-id (1)
  </dt>
  <dd>
    <p>
      The unique ID of the builder that created this artifact.
    </p>
@@ -62,19 +68,23 @@
    <p>
      <strong>Data 1: id</strong> - The unique ID of the builder.
    </p>
  </dd>
  <dt>
    artifact subtype: end (0)
  </dt>
  <dd>
    <p>
      The last machine-readable output line outputted for an artifact.
      This is a sentinel value so you know that no more data related to
      the targeted artifact will be outputted.
    </p>
  </dd>
  <dt>
    artifact subtype: file (2)
  </dt>
  <dd>
    <p>
      A single file associated with the artifact. There are 0 to
      "files-count" of these entries to describe every file that is
@@ -89,10 +99,12 @@
    <p>
      <strong>Data 2: filename</strong> - The filename.
    </p>
  </dd>
  <dt>
    artifact subtype: files-count (1)
  </dt>
  <dd>
    <p>
      The number of files associated with this artifact. Not all
      artifacts have files associated with them.
@@ -101,10 +113,12 @@
    <p>
      <strong>Data 1: count</strong> - The number of files.
    </p>
  </dd>
  <dt>
    artifact subtype: id (1)
  </dt>
  <dd>
    <p>
      The ID (if any) of the artifact that was built. Not all artifacts
      have associated IDs. For example, AMIs built have IDs associated
@@ -115,18 +129,22 @@
    <p>
      <strong>Data 1: id</strong> - The ID of the artifact.
    </p>
  </dd>
  <dt>
    artifact subtype: nil (0)
  </dt>
  <dd>
    <p>
      If present, this means that the artifact was nil, or that the targeted
      build completed successfully but no artifact was created.
    </p>
  </dd>
  <dt>
    artifact subtype: string (1)
  </dt>
  <dd>
    <p>
      The human-readable string description of the artifact provided by
      the artifact itself.
@@ -135,10 +153,12 @@
    <p>
      <strong>Data 1: string</strong> - The string output for the artifact.
    </p>
  </dd>
  <dt>
    error-count (1)
  </dt>
  <dd>
    <p>
      The number of errors that occurred during the build. This will
      always be outputted before any errors so you know how many are coming.
@@ -148,10 +168,12 @@
      <strong>Data 1: count</strong> - The number of build errors as
      a base 10 integer.
    </p>
  </dd>
  <dt>
    error (1)
  </dt>
  <dd>
    <p>
      A build error that occurred. The target of this output will be
      the build that had the error.
@@ -160,6 +182,6 @@
    <p>
      <strong>Data 1: error</strong> - The error message as a string.
    </p>
  </dd>
</dl>
@@ -12,8 +12,10 @@
These are the machine-readable types that exist as part of the output of
`packer inspect`.

<dl>
  <dt>
    template-variable (3)
  </dt>
  <dd>
    <p>
      A <a href="/docs/templates/user-variables.html">user variable</a>
      defined within the template.
@@ -32,10 +34,12 @@
      <strong>Data 3: required</strong> - If non-zero, then this variable
      is required.
    </p>
  </dd>
  <dt>
    template-builder (2)
  </dt>
  <dd>
    <p>
      A builder defined within the template.
    </p>
@@ -48,10 +52,12 @@
      generally be the same as the name unless you explicitly override
      the name.
    </p>
  </dd>
  <dt>
    template-provisioner (1)
  </dt>
  <dd>
    <p>
      A provisioner defined within the template. Multiple of these may
      exist. If so, they are outputted in the order they would run.
@@ -60,6 +66,6 @@
    <p>
      <strong>Data 1: name</strong> - The name/type of the provisioner.
    </p>
  </dd>
</dl>
@@ -12,8 +12,10 @@
These are the machine-readable types that exist as part of the output of
`packer version`.

<dl>
  <dt>
    version (1)
  </dt>
  <dd>
    <p>The version number of Packer running.</p>
    <p>
@@ -21,19 +23,23 @@
      only including the major, minor, and patch versions. Example:
      "0.2.4".
    </p>
  </dd>
  <dt>
    version-commit (1)
  </dt>
  <dd>
    <p>The SHA1 of the Git commit that built this version of Packer.</p>
    <p>
      <strong>Data 1: commit SHA1</strong> - The SHA1 of the commit.
    </p>
  </dd>
  <dt>
    version-prerelease (1)
  </dt>
  <dd>
    <p>
      The prerelease tag (if any) for the running version of Packer. This
      can be "beta", "dev", "alpha", etc. If this is empty, you can assume
@@ -44,6 +50,6 @@
      <strong>Data 1: prerelease name</strong> - The name of the
      prerelease tag.
    </p>
  </dd>
</dl>
@@ -12,8 +12,10 @@
These are the machine-readable types that can appear in almost any
machine-readable output and are provided by Packer core itself.

<dl>
  <dt>
    ui (2)
  </dt>
  <dd>
    <p>
      Specifies the output and type of output that would've normally
      gone to the console if Packer were running in human-readable
@@ -28,6 +30,6 @@
      <strong>Data 2: output</strong> - The UI message that would have
      been outputted.
    </p>
  </dd>
</dl>
@@ -24,12 +24,14 @@
Within each section, the format of the documentation is the following:

<br>

<dl>
  <dt>
    type-name (data-count)
  </dt>
  <dd>
    <p>Description of the type.</p>
    <p>
      <strong>Data 1: name</strong> - Description.
    </p>
  </dd>
</dl>
@@ -32,13 +32,13 @@
The format of the configuration file is basic JSON.

Below is the list of all available configuration parameters for the core
configuration file. None of these are required, since all have sane defaults.

- `plugin_min_port` and `plugin_max_port` (integer) - These are the minimum
  and maximum ports that Packer uses for communication with plugins, since
  plugin communication happens over TCP connections on your local host. By
  default these are 10,000 and 25,000, respectively. Be sure to set a fairly
  wide range here, since Packer can easily use over 25 ports on a single run.

- `builders`, `commands`, `post-processors`, and `provisioners` are objects
  that are used to install plugins. The details of how exactly these are set
  is covered in more detail in the [installing plugins documentation
  page](/docs/extend/plugins.html).
@@ -9,28 +9,28 @@
Packer uses a variety of environmental variables. A listing and description of
each can be found below:

- `PACKER_CACHE_DIR` - The location of the Packer cache.

- `PACKER_CONFIG` - The location of the core configuration file. The format of
  the configuration file is basic JSON. See the [core configuration
  page](/docs/other/core-configuration.html).

- `PACKER_LOG` - Setting this to any value will enable the logger. See the
  [debugging page](/docs/other/debugging.html).

- `PACKER_LOG_PATH` - The location of the log file. Note: `PACKER_LOG` must be
  set for any logging to occur. See the [debugging
  page](/docs/other/debugging.html).

- `PACKER_NO_COLOR` - Setting this to any value will disable color in
  the terminal.

- `PACKER_PLUGIN_MAX_PORT` - The maximum port that Packer uses for
  communication with plugins, since plugin communication happens over TCP
  connections on your local host. The default is 25,000. See the [core
  configuration page](/docs/other/core-configuration.html).

- `PACKER_PLUGIN_MIN_PORT` - The minimum port that Packer uses for
  communication with plugins, since plugin communication happens over TCP
  connections on your local host. The default is 10,000. See the [core
  configuration page](/docs/other/core-configuration.html).
@@ -25,14 +25,14 @@
location in Atlas.

Here is an example workflow:

1. Packer builds an AMI with the [Amazon AMI
   builder](/docs/builders/amazon.html).

2. The `atlas` post-processor takes the resulting AMI and uploads it to Atlas.
   The `atlas` post-processor is configured with the name of the AMI, for
   example `hashicorp/foobar`, to create the artifact in Atlas or update the
   version if the artifact already exists.

3. The new version is ready and available to be used in deployments with a
   tool like [Terraform](https://terraform.io).

## Configuration
...@@ -40,32 +40,33 @@ The configuration allows you to specify and access the artifact in Atlas. ...@@ -40,32 +40,33 @@ The configuration allows you to specify and access the artifact in Atlas.
### Required: ### Required:
- `token` (string) - Your access token for the Atlas API. This can be generated - `token` (string) - Your access token for the Atlas API. This can be
on your [tokens page](https://atlas.hashicorp.com/settings/tokens). generated on your [tokens
    page](https://atlas.hashicorp.com/settings/tokens). Alternatively you can
    export your Atlas token as an environmental variable and remove it from
    the configuration.

-   `artifact` (string) - The shorthand tag for your artifact that maps to
    Atlas, i.e. `hashicorp/foobar` for `atlas.hashicorp.com/hashicorp/foobar`.
    You must have access to the organization, hashicorp in this example, in
    order to add an artifact to the organization in Atlas.

-   `artifact_type` (string) - For uploading AMIs to Atlas, `artifact_type` will
    always be `amazon.ami`. This field must be defined because Atlas can host
    other artifact types, such as Vagrant boxes.

-> **Note:** If you want to upload Vagrant boxes to Atlas, use the [Atlas
post-processor](/docs/post-processors/atlas.html).

### Optional:

-   `atlas_url` (string) - Override the base URL for Atlas. This is useful if
    you're using Atlas Enterprise in your own network. Defaults to
    `https://atlas.hashicorp.com/api/v1`.

-   `metadata` (map) - Send metadata about the artifact. If the artifact type is
    "vagrant.box", you must specify a "provider" metadata about what provider
    to use.

### Example Configuration
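As a sketch of how these options fit together, a post-processor block in a
Packer template might look like the following. The artifact values are
placeholders, and the `token` key is assumed from the description above (which
says the token may instead be exported as an environment variable):

``` {.javascript}
{
    "type": "atlas",
    "token": "YOUR_ATLAS_TOKEN",
    "artifact": "hashicorp/foobar",
    "artifact_type": "amazon.ami"
}
```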
...@@ -20,25 +20,25 @@ VMware or VirtualBox) and compresses the artifact into a single archive.

You must specify the output filename. The archive format is derived from the
filename.

-   `output` (string) - The path to save the compressed archive. The archive
    format is inferred from the filename. E.g. `.tar.gz` will be a
    gzipped tarball. `.zip` will be a zip file. If the extension can't be
    detected, Packer defaults to `.tar.gz` behavior but will not change
    the filename.

If you are executing multiple builders in parallel you should make sure `output`
is unique for each one. For example `packer_{{.BuildName}}_{{.Provider}}.zip`.

### Optional:

If you want more control over how the archive is created you can specify the
following settings:

-   `compression_level` (integer) - Specify the compression level, for
    algorithms that support it, from 1 through 9 inclusive. Typically higher
    compression levels take longer but produce smaller files. Defaults to `6`.

-   `keep_input_artifact` (boolean) - Keep source files; defaults to `false`.
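To illustrate, a compress block combining these settings might look like this
(the output filename is illustrative):

``` {.javascript}
{
    "type": "compress",
    "output": "archive.tar.gz",
    "compression_level": 9,
    "keep_input_artifact": false
}
```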
### Supported Formats
...@@ -24,9 +24,9 @@ registry.

The configuration for this post-processor is extremely simple. At least a
repository is required.

-   `repository` (string) - The repository of the imported image.

-   `tag` (string) - The tag for the imported image. By default this is not set.

## Example
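A minimal configuration sketch using the two options above (repository and tag
values are illustrative):

``` {.javascript}
{
    "type": "docker-import",
    "repository": "hashicorp/packer",
    "tag": "0.7"
}
```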
...@@ -18,16 +18,16 @@ pushes it to a Docker registry.

This post-processor has only optional configuration:

-   `login` (boolean) - Defaults to false. If true, the post-processor will
    login prior to pushing.

-   `login_email` (string) - The email to use to authenticate to login.

-   `login_username` (string) - The username to use to authenticate to login.

-   `login_password` (string) - The password to use to authenticate to login.

-   `login_server` (string) - The server address to login to.

-> **Note:** If you login using the credentials above, the post-processor
will automatically log you out afterwards (just the server specified).
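As an illustrative sketch, a docker-push block that logs in before pushing
might look like this (all credential values are placeholders):

``` {.javascript}
{
    "type": "docker-push",
    "login": true,
    "login_username": "YOUR_USERNAME",
    "login_password": "YOUR_PASSWORD",
    "login_server": "registry.example.com"
}
```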
...@@ -25,7 +25,7 @@ familiar with this and vice versa.

The configuration for this post-processor is extremely simple.

-   `path` (string) - The path to save the image.

## Example
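A minimal sketch of such a configuration (the output path is illustrative):

``` {.javascript}
{
    "type": "docker-save",
    "path": "foo.tar"
}
```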
...@@ -27,12 +27,12 @@ that this works with committed resources, rather than exported.

The configuration for this post-processor is extremely simple. At least a
repository is required.

-   `repository` (string) - The repository of the image.

-   `tag` (string) - The tag for the image. By default this is not set.

-   `force` (boolean) - If true, this post-processor will forcibly tag the
    image even if the tag name collides. Defaults to `false`.

## Example
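An illustrative configuration sketch (repository and tag are placeholders):

``` {.javascript}
{
    "type": "docker-tag",
    "repository": "hashicorp/packer",
    "tag": "0.7"
}
```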
...@@ -36,16 +36,16 @@ and deliver them to your team in some fashion.

Here is an example workflow:

1.  You use Packer to build a Vagrant Box for the `virtualbox` provider
2.  The `vagrant-cloud` post-processor is configured to point to the box
    `hashicorp/foobar` on Vagrant Cloud via the `box_tag` configuration
3.  The post-processor receives the box from the `vagrant` post-processor
4.  It then creates the configured version, or verifies the existence of it, on
    Vagrant Cloud
5.  A provider matching the name of the Vagrant provider is then created
6.  The box is uploaded to Vagrant Cloud
7.  The upload is verified
8.  The version is released and available to users of the box

## Configuration
...@@ -54,35 +54,35 @@ on Vagrant Cloud, as well as authentication and version information.

### Required:

-   `access_token` (string) - Your access token for the Vagrant Cloud API. This
    can be generated on your [tokens
    page](https://vagrantcloud.com/account/tokens).

-   `box_tag` (string) - The shorthand tag for your box that maps to Vagrant
    Cloud, i.e. `hashicorp/precise64` for `vagrantcloud.com/hashicorp/precise64`

-   `version` (string) - The version number, typically incrementing a
    previous version. The version string is validated based on [Semantic
    Versioning](http://semver.org/). The string must match a pattern that could
    be semver, and doesn't validate that the version comes after your
    previous versions.

### Optional:

-   `no_release` (string) - If set to true, does not release the version on
    Vagrant Cloud, leaving it unreleased. You can manually release the version
    via the API or Web UI. Defaults to false.

-   `vagrant_cloud_url` (string) - Override the base URL for Vagrant Cloud. This
    is useful if you're using Vagrant Private Cloud in your own network.
    Defaults to `https://vagrantcloud.com/api/v1`

-   `version_description` (string) - Optional markdown text used as a
    full-length and in-depth description of the version, typically for denoting
    changes introduced

-   `box_download_url` (string) - Optional URL for a self-hosted box. If this is
    set the box will not be uploaded to the Vagrant Cloud.
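Putting the required options together, a sketch of a vagrant-cloud block might
look like this (the user variables are assumed to be defined elsewhere in the
template):

``` {.javascript}
{
    "type": "vagrant-cloud",
    "box_tag": "hashicorp/precise64",
    "access_token": "{{user `cloud_token`}}",
    "version": "{{user `version`}}"
}
```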
## Use with Vagrant Post-Processor
...@@ -29,13 +29,13 @@ certain builders into proper boxes for their respective providers.

Currently, the Vagrant post-processor can create boxes for the following
providers.

-   AWS
-   DigitalOcean
-   Hyper-V
-   Parallels
-   QEMU
-   VirtualBox
-   VMware

-> **Support for additional providers** is planned. If the Vagrant
post-processor doesn't support creating boxes for a provider you care about,
...@@ -51,28 +51,28 @@ However, if you want to configure things a bit more, the post-processor does
expose some configuration options. The available options are listed below, with
more details about certain options in following sections.

-   `compression_level` (integer) - An integer representing the compression
    level to use when creating the Vagrant box. Valid values range from 0 to 9,
    with 0 being no compression and 9 being the best compression. By default,
    compression is enabled at level 6.

-   `include` (array of strings) - Paths to files to include in the Vagrant box.
    These files will each be copied into the top level directory of the Vagrant
    box (regardless of their paths). They can then be used from the Vagrantfile.

-   `keep_input_artifact` (boolean) - If set to true, do not delete the
    `output_directory` on a successful build. Defaults to false.

-   `output` (string) - The full path to the box file that will be created by
    this post-processor. This is a [configuration
    template](/docs/templates/configuration-templates.html). The variable
    `Provider` is replaced by the Vagrant provider the box is for. The variable
    `ArtifactId` is replaced by the ID of the input artifact. The variable
    `BuildName` is replaced with the name of the build. By default, the value of
    this config is `packer_{{.BuildName}}_{{.Provider}}.box`.

-   `vagrantfile_template` (string) - Path to a template to use for the
    Vagrantfile that is packaged with the box.
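For illustration, a vagrant block overriding a couple of these defaults might
look like this (the values are illustrative):

``` {.javascript}
{
    "type": "vagrant",
    "compression_level": 9,
    "output": "packer_{{.BuildName}}_{{.Provider}}.box"
}
```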
## Provider-Specific Overrides
...@@ -21,35 +21,36 @@ each category, the available configuration keys are alphabetized.

Required:

-   `cluster` (string) - The cluster to upload the VM to.

-   `datacenter` (string) - The name of the datacenter within vSphere to add the
    VM to.

-   `datastore` (string) - The name of the datastore to store this VM. This is
    *not required* if `resource_pool` is specified.

-   `host` (string) - The vSphere host that will be contacted to perform the
    VM upload.

-   `password` (string) - Password to use to authenticate to the
    vSphere endpoint.

-   `resource_pool` (string) - The resource pool to upload the VM to. This is
    *not required*.

-   `username` (string) - The username to use to authenticate to the
    vSphere endpoint.

-   `vm_name` (string) - The name of the VM once it is uploaded.

Optional:

-   `disk_mode` (string) - Target disk format. See `ovftool` manual for
    available options. By default, "thick" will be used.

-   `insecure` (boolean) - Whether or not the connection to vSphere can be done
    over an insecure connection. By default this is false.

-   `vm_folder` (string) - The folder within the datastore to store the VM.

-   `vm_network` (string) - The name of the VM network this VM will be added to.
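A sketch combining the required keys might look like this (all host, inventory,
and credential values are placeholders):

``` {.javascript}
{
    "type": "vsphere",
    "host": "vcenter.example.com",
    "username": "packer",
    "password": "{{user `vsphere_password`}}",
    "datacenter": "dc-east-1",
    "cluster": "cluster-1",
    "resource_pool": "packer-builds",
    "vm_name": "packer-built-vm"
}
```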
...@@ -35,83 +35,70 @@ The reference of available configuration options is listed below.

Required:

-   `playbook_file` (string) - The playbook file to be executed by ansible. This
    file must exist on your local system and will be uploaded to the
    remote machine.

Optional:

-   `command` (string) - The command to invoke ansible. Defaults
    to "ansible-playbook".

-   `extra_arguments` (array of strings) - An array of extra arguments to pass
    to the ansible command. By default, this is empty.

-   `inventory_groups` (string) - A comma-separated list of groups to which
    packer will assign the host `127.0.0.1`. A value of `my_group_1,my_group_2`
    will generate an Ansible inventory like:

    ``` {.text}
    [my_group_1]
    127.0.0.1
    [my_group_2]
    127.0.0.1
    ```

-   `inventory_file` (string) - The inventory file to be used by ansible. This
    file must exist on your local system and will be uploaded to the
    remote machine.

When using an inventory file, it's also required to `--limit` the hosts to the
specified host you're building. The `--limit` argument can be provided in the
`extra_arguments` option.

An example inventory file may look like:

``` {.text}
[chi-dbservers]
db-01 ansible_connection=local
db-02 ansible_connection=local

[chi-appservers]
app-01 ansible_connection=local
app-02 ansible_connection=local

[chi:children]
chi-dbservers
chi-appservers

[dbservers:children]
chi-dbservers

[appservers:children]
chi-appservers
```
-   `playbook_dir` (string) - a path to the complete ansible directory structure
    on your local system to be copied to the remote machine as the
    `staging_directory` before all other files and directories.

-   `playbook_paths` (array of strings) - An array of paths to playbook files on
    your local system. These will be uploaded to the remote machine under
    `staging_directory`/playbooks. By default, this is empty.

-   `group_vars` (string) - a path to the directory containing ansible group
    variables on your local system to be copied to the remote machine. By
    default, this is empty.

-   `host_vars` (string) - a path to the directory containing ansible host
    variables on your local system to be copied to the remote machine. By
    default, this is empty.

-   `role_paths` (array of strings) - An array of paths to role directories on
    your local system. These will be uploaded to the remote machine under
    `staging_directory`/roles. By default, this is empty.

-   `staging_directory` (string) - The directory where all the configuration of
    Ansible by Packer will be placed. By default this
    is "/tmp/packer-provisioner-ansible-local". This directory doesn't need to
    exist but must have proper permissions so that the SSH user that Packer uses
    is able to create directories and write into this folder. If the permissions
    are not correct, use a shell provisioner prior to this to configure
    it properly.
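As a minimal illustration, an ansible-local block using a few of these options
might look like this (the playbook path and group names are placeholders):

``` {.javascript}
{
    "type": "ansible-local",
    "playbook_file": "./playbook.yml",
    "inventory_groups": "my_group_1,my_group_2"
}
```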
...@@ -40,70 +40,71 @@ is running must have knife on the path and configured globally, i.e,

The reference of available configuration options is listed below. No
configuration is actually required.

-   `chef_environment` (string) - The name of the chef\_environment sent to the
    Chef server. By default this is empty and will not use an environment.

-   `config_template` (string) - Path to a template that will be used for the
    Chef configuration file. By default Packer only sets configuration it needs
    to match the settings set in the provisioner configuration. If you need to
    set configurations that the Packer provisioner doesn't support, then you
    should use a custom configuration template. See the dedicated "Chef
    Configuration" section below for more details.

-   `execute_command` (string) - The command used to execute Chef. This has
    various [configuration template
    variables](/docs/templates/configuration-templates.html) available. See
    below for more information.

-   `install_command` (string) - The command used to install Chef. This has
    various [configuration template
    variables](/docs/templates/configuration-templates.html) available. See
    below for more information.

-   `json` (object) - An arbitrary mapping of JSON that will be available as
    node attributes while running Chef.

-   `node_name` (string) - The name of the node to register with the
    Chef Server. This is optional and by default is packer-{{uuid}}.

-   `prevent_sudo` (boolean) - By default, the configured commands that are
    executed to install and run Chef are executed with `sudo`. If this is true,
    then the sudo will be omitted.

-   `run_list` (array of strings) - The [run
    list](http://docs.opscode.com/essentials_node_object_run_lists.html)
    for Chef. By default this is empty, and will use the run list sent down by
    the Chef Server.

-   `server_url` (string) - The URL to the Chef server. This is required.

-   `skip_clean_client` (boolean) - If true, Packer won't remove the client from
    the Chef server after it is done running. By default, this is false.

-   `skip_clean_node` (boolean) - If true, Packer won't remove the node from the
    Chef server after it is done running. By default, this is false.

-   `skip_install` (boolean) - If true, Chef will not automatically be installed
    on the machine using the Opscode omnibus installers.

-   `staging_directory` (string) - This is the directory where all the
    configuration of Chef by Packer will be placed. By default this
    is "/tmp/packer-chef-client". This directory doesn't need to exist but must
    have proper permissions so that the SSH user that Packer uses is able to
    create directories and write into this folder. If the permissions are not
    correct, use a shell provisioner prior to this to configure it properly.

-   `client_key` (string) - Path to client key. If not set, this defaults to a
    file named client.pem in `staging_directory`.

-   `validation_client_name` (string) - Name of the validation client. If not
    set, this won't be set in the configuration and the default that Chef uses
    will be used.

-   `validation_key_path` (string) - Path to the validation key for
    communicating with the Chef Server. This will be uploaded to the
    remote machine. If this is NOT set, then it is your responsibility via other
    means (shell provisioner, etc.) to get a validation key to where Chef
    expects it.
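For illustration, a minimal chef-client block might look like this (the server
URL and run list entry are placeholders):

``` {.javascript}
{
    "type": "chef-client",
    "server_url": "https://mychefserver.com/",
    "run_list": ["recipe[example::default]"]
}
```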
## Chef Configuration

...@@ -135,9 +136,9 @@ This template is a [configuration
template](/docs/templates/configuration-templates.html) and has a set of
variables available to use:

-   `NodeName` - The node name set in the configuration.

-   `ServerUrl` - The URL of the Chef Server set in the configuration.

-   `ValidationKeyPath` - Path to the validation key, if it is set.
## Execute Command

...@@ -155,10 +156,10 @@ This command can be customized using the `execute_command` configuration. As you
can see from the default value above, the value of this configuration can
contain various template variables, defined below:

-   `ConfigPath` - The path to the Chef configuration file.

-   `JsonPath` - The path to the JSON attributes file for the node.

-   `Sudo` - A boolean of whether to `sudo` the command or not, depending on the
    value of the `prevent_sudo` configuration.
## Install Command
...@@ -32,19 +32,19 @@ The file provisioner can upload both single files and complete directories.

The available configuration options are listed below. All elements are required.

-   `source` (string) - The path to a local file or directory to upload to
    the machine. The path can be absolute or relative. If it is relative, it is
    relative to the working directory when Packer is executed. If this is a
    directory, the existence of a trailing slash is important. Read below on
    uploading directories.

-   `destination` (string) - The path where the file will be uploaded to in
    the machine. This value must be a writable location and any parent
    directories must already exist.

-   `direction` (string) - The direction of the file transfer. This defaults to
    "upload." If it is set to "download" then the file "source" in the machine
    will be downloaded locally to "destination".
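A small sketch using these options (the paths are illustrative):

``` {.javascript}
{
    "type": "file",
    "source": "app.tar.gz",
    "destination": "/tmp/app.tar.gz"
}
```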
## Directory Uploads
...@@ -41,36 +41,36 @@ The reference of available configuration options is listed below. ...@@ -41,36 +41,36 @@ The reference of available configuration options is listed below.
The provisioner takes various options. None are strictly required. They are The provisioner takes various options. None are strictly required. They are
listed below: listed below:
-   `client_cert_path` (string) - Path to the client certificate for the node
    on your disk. This defaults to nothing, in which case a client cert won't
    be uploaded.

-   `client_private_key_path` (string) - Path to the client private key for
    the node on your disk. This defaults to nothing, in which case a client
    private key won't be uploaded.

-   `facter` (object of key/value strings) - Additional Facter facts to make
    available to the Puppet run.

-   `ignore_exit_codes` (boolean) - If true, Packer will never consider the
    provisioner a failure.
-   `options` (string) - Additional command line options to pass to
    `puppet agent` when Puppet is run.
-   `prevent_sudo` (boolean) - By default, the configured commands that are
    executed to run Puppet are executed with `sudo`. If this is true, then
    sudo will be omitted.
-   `puppet_node` (string) - The name of the node. If this isn't set, the
    fully qualified domain name will be used.

-   `puppet_server` (string) - Hostname of the Puppet server. By default
    "puppet" will be used.

-   `staging_directory` (string) - This is the directory where all the
    configuration of Puppet by Packer will be placed. By default this
    is "/tmp/packer-puppet-server". This directory doesn't need to exist but
    must have proper permissions so that the SSH user that Packer uses is able
    to create directories and write into this folder. If the permissions are
    not correct, use a shell provisioner prior to this to configure it
    properly.
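As an illustration of how these options fit together, here is a hedged sketch of a puppet-server provisioner block; the server hostname, options string, and fact values are hypothetical and not taken from this document:

```json
{
  "provisioners": [
    {
      "type": "puppet-server",
      "puppet_server": "puppet.example.com",
      "options": "--test",
      "facter": {
        "server_role": "webserver"
      }
    }
  ]
}
```

Because none of the options are strictly required, the smallest valid configuration is just `{"type": "puppet-server"}`, which connects to a server named "puppet" using the node's fully qualified domain name.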