Commit d8e8f98b authored by Chris Bednarski

Change to 4 spaces

parent 555a8ba7
@@ -13,5 +13,5 @@ format:
bundle exec htmlbeautifier -t 2 source/*.erb
bundle exec htmlbeautifier -t 2 source/layouts/*.erb
@pandoc -v > /dev/null || echo "pandoc must be installed in order to format markdown content"
pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "pandoc -r markdown -w markdown --tab-stop=4 --atx-headers -s --columns=80 {} > {}.new"\; || true
pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "mv {}.new {}"\; || true
@@ -29,7 +29,8 @@ list as contributors come and go.
<div class="people">
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/54079122b67de9677c1f93933ce8b63a.png?s=125">
<div class="bio">
<h3>Mitchell Hashimoto (<a href="https://github.com/mitchellh">@mitchellh</a>)</h3>
@@ -41,9 +42,11 @@ list as contributors come and go.
described as "automation obsessed."
</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2acc31dd6370a54b18f6755cd0710ce6.png?s=125">
<div class="bio">
<h3>Jack Pearkes (<a href="https://github.com/pearkes">@pearkes</a>)</h3>
@@ -52,9 +55,11 @@ list as contributors come and go.
for Packer. Outside of Packer, Jack is an avid open source
contributor and software consultant.</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2f7fc9cb7558e3ea48f5a86fa90a78da.png?s=125">
<div class="bio">
<h3>Mark Peek (<a href="https://github.com/markpeek">@markpeek</a>)</h3>
@@ -65,9 +70,11 @@ list as contributors come and go.
<a href="https://github.com/ironport">IronPort Python libraries</a>.
Mark is also a <a href="https://FreeBSD.org">FreeBSD committer</a>.</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/1fca64df3d7db1e2f258a8956d2b0aff.png?s=125">
<div class="bio">
<h3>Ross Smith II (<a href="https://github.com/rasa" target="_blank">@rasa</a>)</h3>
@@ -78,9 +85,11 @@ VMware builder on Windows, and provides other valuable assistance. Ross is an
open source enthusiast, published author, and freelance consultant.
</p>
</div>
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/c9f6bf7b5b865012be5eded656ebed7d.png?s=125">
<div class="bio">
<h3>Rickard von Essen<br/>(<a href="https://github.com/rickard-von-essen" target="_blank">@rickard-von-essen</a>)</h3>
@@ -90,8 +99,11 @@ Rickard von Essen maintains our Parallels Desktop builder. Rickard is a
polyglot programmer and consults on Continuous Delivery.
</p>
</div>
</div>
<div class="clearfix">
</div>
</div>
@@ -17,41 +17,41 @@ Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical order
for easy referencing.
- `Artifacts` are the results of a single build, and are usually a set of IDs
or files to represent a machine image. Every builder produces a
single artifact. As an example, in the case of the Amazon EC2 builder, the
artifact is a set of AMI IDs (one per region). For the VMware builder, the
artifact is a directory of files comprising the created virtual machine.
- `Builds` are a single task that eventually produces an image for a
single platform. Multiple builds run in parallel. Example usage in a
sentence: "The Packer build produced an AMI to run our web application." Or:
"Packer is running the builds now for VMware, AWS, and VirtualBox."
- `Builders` are components of Packer that are able to create a machine image
for a single platform. Builders read in some configuration and use that to
run and generate a machine image. A builder is invoked as part of a build in
order to create the actual resulting images. Example builders include
VirtualBox, VMware, and Amazon EC2. Builders can be created and added to
Packer in the form of plugins.
- `Commands` are sub-commands for the `packer` program that perform some job.
An example command is "build", which is invoked as `packer build`. Packer
ships with a set of commands out of the box in order to define its
command-line interface. Commands can also be created and added to Packer in
the form of plugins.
- `Post-processors` are components of Packer that take the result of a builder
or another post-processor and process that to create a new artifact.
Examples of post-processors are compress to compress artifacts, upload to
upload artifacts, etc.
- `Provisioners` are components of Packer that install and configure software
within a running machine prior to that machine being turned into a
static image. They perform the major work of making the image contain
useful software. Example provisioners include shell scripts, Chef,
Puppet, etc.
- `Templates` are JSON files which define one or more builds by configuring
the various components of Packer. Packer is able to read a template and use
that information to create multiple machine images in parallel.
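Pulling these terms together, a minimal template might look like the following sketch (the AMI ID, region, and inline command are placeholder values, not tested ones):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "packer-example {{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["echo provisioned"]
    }
  ]
}
```

Running `packer build` on this file would start one build using the amazon-ebs builder, run the shell provisioner in the launched instance, and produce an AMI artifact.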
@@ -12,20 +12,21 @@ Packer is able to create Amazon AMIs. To achieve this, Packer comes with
multiple builders depending on the strategy you want to use to build the AMI.
Packer supports the following builders at the moment:
- [amazon-ebs](/docs/builders/amazon-ebs.html) - Create EBS-backed AMIs by
launching a source AMI and re-packaging it into a new AMI
after provisioning. If in doubt, use this builder, which is the easiest to
get started with.
- [amazon-instance](/docs/builders/amazon-instance.html) - Create
instance-store AMIs by launching and provisioning a source instance, then
rebundling it and uploading it to S3.
- [amazon-chroot](/docs/builders/amazon-chroot.html) - Create EBS-backed AMIs
from an existing EC2 instance by mounting the root device and using a
[Chroot](http://en.wikipedia.org/wiki/Chroot) environment to provision
that device. This is an **advanced builder and should not be used by
newcomers**. However, it is also the fastest way to build an EBS-backed AMI
since no new EC2 instance needs to be launched.
-> **Don't know which builder to use?** If in doubt, use the [amazon-ebs
builder](/docs/builders/amazon-ebs.html). It is much easier to use and Amazon
......
@@ -34,41 +34,43 @@ builder.
### Required:
- `api_token` (string) - The client TOKEN to use to access your account. It
can also be specified via environment variable `DIGITALOCEAN_API_TOKEN`,
if set.
- `image` (string) - The name (or slug) of the base image to use. This is the
image that will be used to launch a new droplet and provision it. See
https://developers.digitalocean.com/documentation/v2/\#list-all-images for
details on how to get a list of the accepted image names/slugs.
- `region` (string) - The name (or slug) of the region to launch the
droplet in. Consequently, this is the region where the snapshot will
be available. See
https://developers.digitalocean.com/documentation/v2/\#list-all-regions for
the accepted region names/slugs.
- `size` (string) - The name (or slug) of the droplet size to use. See
https://developers.digitalocean.com/documentation/v2/\#list-all-sizes for
the accepted size names/slugs.
### Optional:
- `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
sets the hostname of the machine to this value.
- `private_networking` (boolean) - Set to `true` to enable private networking
for the droplet being created. This defaults to `false`, or not enabled.
- `snapshot_name` (string) - The name of the resulting snapshot that will
appear in your account. This must be unique. To help make this unique, use a
function like `timestamp` (see [configuration
templates](/docs/templates/configuration-templates.html) for more info)
- `state_timeout` (string) - The time to wait, as a duration string, for a
droplet to enter a desired state (such as "active") before timing out. The
default state timeout is "6m".
- `user_data` (string) - User data to launch with the Droplet.
## Basic Example
......
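The elided example above would look roughly like this minimal template (a sketch: the token placeholder, image slug, region, and size are illustrative values, not guaranteed current ones):

```json
{
  "builders": [
    {
      "type": "digitalocean",
      "api_token": "YOUR_API_TOKEN",
      "image": "ubuntu-14-04-x64",
      "region": "nyc3",
      "size": "512mb"
    }
  ]
}
```

Only the required options are shown; the optional settings documented above can be added alongside them as needed.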
@@ -68,42 +68,42 @@ builder.
### Required:
- `commit` (boolean) - If true, the container will be committed to an image
rather than exported. This cannot be set if `export_path` is set.
- `export_path` (string) - The path where the final container will be exported
as a tar file. This cannot be set if `commit` is set to true.
- `image` (string) - The base image for the Docker container that will
be started. This image will be pulled from the Docker registry if it doesn't
already exist.
### Optional:
- `login` (boolean) - Defaults to false. If true, the builder will log in
  in order to pull the image. The builder only logs in for the duration of
the pull. It always logs out afterwards.
- `login_email` (string) - The email to use to authenticate to login.
- `login_username` (string) - The username to use to authenticate to login.
- `login_password` (string) - The password to use to authenticate to login.
- `login_server` (string) - The server address to login to.
- `pull` (boolean) - If true, the configured image will be pulled using
`docker pull` prior to use. Otherwise, it is assumed the image already
exists and can be used. This defaults to true if not set.
- `run_command` (array of strings) - An array of arguments to pass to
`docker run` in order to run the container. By default this is set to
`["-d", "-i", "-t", "{{.Image}}", "/bin/bash"]`. As you can see, you have a
couple template variables to customize, as well.
- `volumes` (map of strings to strings) - A mapping of additional volumes to
mount into this container. The key of the object is the host path, the value
is the container path.
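As a sketch of how these options fit together, a minimal docker builder configuration might look like this (the image tag, export path, and provisioner command are placeholder choices):

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:14.04",
      "export_path": "image.tar"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["apt-get update"]
    }
  ]
}
```

Because `export_path` is set here, `commit` must be left unset; the two options are mutually exclusive, as noted above.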
## Using the Artifact: Export
@@ -226,11 +226,11 @@ Dockerfiles have some additional features that Packer doesn't support which are
able to be worked around. Many of these features will be automated by Packer in
the future:
- Dockerfiles will snapshot the container at each step, allowing you to go
back to any step in the history of building. Packer doesn't do this yet, but
inter-step snapshotting is on the way.
- Dockerfiles can contain information such as exposed ports, shared volumes,
and other metadata. Packer builds a raw Docker container image that has none
of this metadata. You can pass in much of this metadata at runtime with
`docker run`.
@@ -38,67 +38,67 @@ builder.
### Required:
- `flavor` (string) - The ID, name, or full URL for the desired flavor for the
server to be created.
- `image_name` (string) - The name of the resulting image.
- `source_image` (string) - The ID or full URL to the base image to use. This
is the image that will be used to launch a new server and provision it.
Unless you specify completely custom SSH settings, the source image must
have `cloud-init` installed so that the keypair gets assigned properly.
- `username` (string) - The username used to connect to the OpenStack service.
If not specified, Packer will use the environment variable `OS_USERNAME`,
if set.
- `password` (string) - The password used to connect to the OpenStack service.
If not specified, Packer will use the environment variable `OS_PASSWORD`,
if set.
### Optional:
- `api_key` (string) - The API key used to access OpenStack. Some OpenStack
installations require this.
- `availability_zone` (string) - The availability zone to launch the
server in. If this isn't specified, the default enforced by your OpenStack
cluster will be used. This may be required for some OpenStack clusters.
- `floating_ip` (string) - A specific floating IP to assign to this instance.
`use_floating_ip` must also be set to true for this to have an effect.
- `floating_ip_pool` (string) - The name of the floating IP pool to use to
allocate a floating IP. `use_floating_ip` must also be set to true for this
to have an effect.
- `insecure` (boolean) - Whether or not the connection to OpenStack can be
done over an insecure connection. By default this is false.
- `networks` (array of strings) - A list of networks by UUID to attach to
this instance.
- `tenant_id` or `tenant_name` (string) - The tenant ID or name to boot the
instance into. Some OpenStack installations require this. If not specified,
Packer will use the environment variable `OS_TENANT_NAME`, if set.
- `security_groups` (array of strings) - A list of security groups by name to
add to this instance.
- `region` (string) - The name of the region, such as "DFW", in which to
launch the server to create the AMI. If not specified, Packer will use the
environment variable `OS_REGION_NAME`, if set.
- `ssh_interface` (string) - The type of interface to connect via SSH. Values
useful for Rackspace are "public" or "private", and the default behavior is
to connect via whichever is returned first from the OpenStack API.
- `use_floating_ip` (boolean) - Whether or not to use a floating IP for
the instance. Defaults to false.
- `rackconnect_wait` (boolean) - For Rackspace, whether or not to wait for
Rackconnect to assign the machine an IP address before connecting via SSH.
Defaults to false.
## Basic Example: Rackspace public cloud
@@ -138,7 +138,7 @@ appear in the template. That is because I source a standard OpenStack script
with environment variables set before I run this. This script is setting
environment variables like:
- `OS_AUTH_URL`
- `OS_TENANT_ID`
- `OS_USERNAME`
- `OS_PASSWORD`
@@ -16,16 +16,16 @@ Packer actually comes with multiple builders able to create Parallels machines,
depending on the strategy you want to use to build the image. Packer supports
the following Parallels builders:
- [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO
file, creates a brand new Parallels VM, installs an OS, provisions software
within the OS, then exports that machine to create an image. This is best
for people who want to start from scratch.
- [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder imports an
existing PVM file, runs provisioners on top of that VM, and exports that
machine to create an image. This is best if you have an existing Parallels
VM export you want to use as the source. As an additional benefit, you can
feed the artifact of this builder back into itself to iterate on a machine.
## Requirements
......
@@ -16,13 +16,14 @@ Packer actually comes with multiple builders able to create VirtualBox machines,
depending on the strategy you want to use to build the image. Packer supports
the following VirtualBox builders:
- [virtualbox-iso](/docs/builders/virtualbox-iso.html) - Starts from an ISO
file, creates a brand new VirtualBox VM, installs an OS, provisions software
within the OS, then exports that machine to create an image. This is best
for people who want to start from scratch.
- [virtualbox-ovf](/docs/builders/virtualbox-ovf.html) - This builder imports
an existing OVF/OVA file, runs provisioners on top of that VM, and exports
that machine to create an image. This is best if you have an existing
VirtualBox VM export you want to use as the source. As an additional
benefit, you can feed the artifact of this builder back into itself to
iterate on a machine.
@@ -15,14 +15,14 @@ Packer actually comes with multiple builders able to create VMware machines,
depending on the strategy you want to use to build the image. Packer supports
the following VMware builders:
- [vmware-iso](/docs/builders/vmware-iso.html) - Starts from an ISO file,
creates a brand new VMware VM, installs an OS, provisions software within
the OS, then exports that machine to create an image. This is best for
people who want to start from scratch.
- [vmware-vmx](/docs/builders/vmware-vmx.html) - This builder imports an
existing VMware machine (from a VMX file), runs provisioners on top of that
VM, and exports that machine to create an image. This is best if you have an
existing VMware VM you want to use as the source. As an additional benefit,
you can feed the artifact of this builder back into Packer to iterate on
a machine.
@@ -17,24 +17,26 @@ artifacts that are created will be outputted at the end of the build.
## Options
- `-color=false` - Disables colorized output. Enabled by default.
- `-debug` - Disables parallelization and enables debug mode. Debug mode flags
the builders that they should output debugging information. The exact
behavior of debug mode is left to the builder. In general, builders usually
will stop between each step, waiting for keyboard input before continuing.
This will allow the user to inspect state and so on.
- `-except=foo,bar,baz` - Builds all the builds except those with the given
comma-separated names. Build names by default are the names of their
builders, unless a specific `name` attribute is specified within
the configuration.
- `-force` - Forces a builder to run when artifacts from a previous build
prevent a build from running. The exact behavior of a forced build is left
to the builder. In general, a builder supporting the forced build will
remove the artifacts from the previous build. This will allow the user to
repeat a build without having to manually clean these artifacts beforehand.
- `-only=foo,bar,baz` - Only build the builds with the given
comma-separated names. Build names by default are the names of their
builders, unless a specific `name` attribute is specified within
the configuration.
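The `name` attribute mentioned above gives builds distinct names that `-only` and `-except` can select on. A sketch (the names are arbitrary, and each builder would need its full configuration in a real template):

```json
{
  "builders": [
    {
      "name": "aws",
      "type": "amazon-ebs"
    },
    {
      "name": "vbox",
      "type": "virtualbox-iso"
    }
  ]
}
```

With this template, `packer build -only=aws template.json` would run only the build named "aws", while `packer build -except=vbox template.json` would skip the VirtualBox one.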
@@ -19,7 +19,7 @@ The fix command will output the changed template to standard out, so you should
redirect standard output using standard OS-specific techniques if you want to save it
to a file. For example, on Linux systems, you may want to do this:
$ packer fix old.json > new.json
If fixing fails for any reason, the fix command will exit with a non-zero exit
status. Error messages appear on standard error, so if you're redirecting
......
@@ -53,20 +53,22 @@ timestamp,target,type,data...
Each component is explained below:
- **timestamp** is a Unix timestamp in UTC of when the message was printed.
- **target** is the target of the following output. This is empty if the
message is related to Packer globally. Otherwise, this is generally a build
name so you can relate output to a specific build while parallel builds
are running.
- **type** is the type of machine-readable message being outputted. There are
a set of standard types which are covered later, but each component of
Packer (builders, provisioners, etc.) may output their own custom types as
well, allowing the machine-readable output to be infinitely flexible.
- **data** is zero or more comma-separated values associated with the
prior type. The exact amount and meaning of this data is type-dependent, so
you must read the documentation associated with the type to
understand fully.
Within the format, if data contains a comma, it is replaced with
`%!(PACKER_COMMA)`. This was preferred over an escape character such as `\'`
......
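As a sketch, a consumer of this output could split the fields and undo the comma escape like so; this is a minimal example assuming the line format exactly as described above (the sample line itself is hypothetical):

```python
def parse_line(line):
    """Parse one line of machine-readable output into
    (timestamp, target, type, data), undoing the comma escape."""
    timestamp, target, mtype, *data = line.rstrip("\n").split(",")
    # Data values may contain %!(PACKER_COMMA) in place of literal commas.
    data = [d.replace("%!(PACKER_COMMA)", ",") for d in data]
    return int(timestamp), target, mtype, data

# Hypothetical sample line; real output depends on the build.
ts, target, mtype, data = parse_line(
    "1434046738,amazon-ebs,ui,say,Build finished%!(PACKER_COMMA) no errors")
print(target, mtype, data)
```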
......@@ -26,16 +26,16 @@ configuration](/docs/templates/push.html) must be completed within the template.
## Options
- `-message` - A message to identify the purpose or changes in this Packer
template much like a VCS commit message. This message will be passed to the
Packer build service. This option is also available as a short option `-m`.
- `-message` - A message to identify the purpose or changes in this Packer
template much like a VCS commit message. This message will be passed to the
Packer build service. This option is also available as a short option `-m`.
- `-token` - An access token for authenticating the push to the Packer build
service such as Atlas. This can also be specified within the push
configuration in the template.
- `-token` - An access token for authenticating the push to the Packer build
service such as Atlas. This can also be specified within the push
configuration in the template.
- `-name` - The name of the build in the service. This typically looks like
`hashicorp/precise64`.
- `-name` - The name of the build in the service. This typically looks like
`hashicorp/precise64`.
## Examples
......
......@@ -29,5 +29,5 @@ Errors validating build 'vmware'. 1 error(s) occurred:
## Options
- `-syntax-only` - Only the syntax of the template is checked. The configuration
is not validated.
- `-syntax-only` - Only the syntax of the template is checked. The
configuration is not validated.
......@@ -52,19 +52,19 @@ the following two packages, you're encouraged to use whatever packages you want.
Because plugins are their own processes, there is no danger of colliding
dependencies.
- `github.com/mitchellh/packer` - Contains all the interfaces that you have to
implement for any given plugin.
- `github.com/mitchellh/packer` - Contains all the interfaces that you have to
implement for any given plugin.
- `github.com/mitchellh/packer/plugin` - Contains the code to serve the plugin.
This handles all the inter-process communication stuff.
- `github.com/mitchellh/packer/plugin` - Contains the code to serve
the plugin. This handles all the inter-process communication stuff.
There are two steps involved in creating a plugin:
1. Implement the desired interface. For example, if you're building a builder
plugin, implement the `packer.Builder` interface.
1. Implement the desired interface. For example, if you're building a builder
plugin, implement the `packer.Builder` interface.
2. Serve the interface by calling the appropriate plugin serving method in your
main method. In the case of a builder, this is `plugin.ServeBuilder`.
2. Serve the interface by calling the appropriate plugin serving method in your
main method. In the case of a builder, this is `plugin.ServeBuilder`.
A basic example is shown below. In this example, assume the `Builder` struct
implements the `packer.Builder` interface:
......
......@@ -51,21 +51,21 @@ Once the plugin is named properly, Packer automatically discovers plugins in the
following directories in the given order. If a conflicting plugin is found
later, it will take precedence over one found earlier.
1. The directory where `packer` is, or the executable directory.
1. The directory where `packer` is, or the executable directory.
2. `~/.packer.d/plugins` on Unix systems or `%APPDATA%/packer.d/plugins`
on Windows.
2. `~/.packer.d/plugins` on Unix systems or `%APPDATA%/packer.d/plugins`
on Windows.
3. The current working directory.
3. The current working directory.
The valid types for plugins are:
- `builder` - Plugins responsible for building images for a specific platform.
- `builder` - Plugins responsible for building images for a specific platform.
- `command` - A CLI sub-command for `packer`.
- `command` - A CLI sub-command for `packer`.
- `post-processor` - A post-processor responsible for taking an artifact from a
builder and turning it into something else.
- `post-processor` - A post-processor responsible for taking an artifact from
a builder and turning it into something else.
- `provisioner` - A provisioner to install software on images created by
a builder.
- `provisioner` - A provisioner to install software on images created by
a builder.
......@@ -79,11 +79,11 @@ creating a new artifact with a single file: the compressed archive.
The result signature of this method is `(Artifact, bool, error)`. Each return
value is explained below:
- `Artifact` - The newly created artifact if no errors occurred.
- `bool` - If true, the input artifact will forcefully be kept. By default,
Packer typically deletes all input artifacts, since the user doesn't generally
want intermediary artifacts. However, some post-processors depend on the
previous artifact existing. If this is `true`, it forces packer to keep the
artifact around.
- `error` - Non-nil if there was an error in any way. If this is the case, the
other two return values are ignored.
- `Artifact` - The newly created artifact if no errors occurred.
- `bool` - If true, the input artifact will forcefully be kept. By default,
Packer typically deletes all input artifacts, since the user doesn't
generally want intermediary artifacts. However, some post-processors depend
on the previous artifact existing. If this is `true`, it forces Packer to
keep the artifact around.
- `error` - Non-nil if there was an error in any way. If this is the case, the
other two return values are ignored.
......@@ -12,8 +12,10 @@ These are the machine-readable types that exist as part of the output of
`packer build`.
<dl>
<dt>artifact (>= 2)</dt>
<dd>
<dt>
artifact (&gt;= 2)
</dt>
<dd>
<p>
Information about an artifact of the targeted item. This is a
fairly complex (but uniform!) machine-readable type that contains
......@@ -37,10 +39,12 @@ These are the machine-readable types that exist as part of the output of
data points related to the subtype. The exact count and meaning
of this subtypes comes from the subtype documentation.
</p>
</dd>
<dt>artifact-count (1)</dt>
<dd>
</dd>
<dt>
artifact-count (1)
</dt>
<dd>
<p>
The number of artifacts associated with the given target. This
will always be outputted _before_ any other artifact information,
......@@ -51,10 +55,12 @@ These are the machine-readable types that exist as part of the output of
<strong>Data 1: count</strong> - The number of artifacts as
a base 10 integer.
</p>
</dd>
<dt>artifact subtype: builder-id (1)</dt>
<dd>
</dd>
<dt>
artifact subtype: builder-id (1)
</dt>
<dd>
<p>
The unique ID of the builder that created this artifact.
</p>
......@@ -62,19 +68,23 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: id</strong> - The unique ID of the builder.
</p>
</dd>
<dt>artifact subtype: end (0)</dt>
<dd>
</dd>
<dt>
artifact subtype: end (0)
</dt>
<dd>
<p>
The last machine-readable output line outputted for an artifact.
This is a sentinel value so you know that no more data related to
the targeted artifact will be outputted.
</p>
</dd>
<dt>artifact subtype: file (2)</dt>
<dd>
</dd>
<dt>
artifact subtype: file (2)
</dt>
<dd>
<p>
A single file associated with the artifact. There are 0 to
"files-count" of these entries to describe every file that is
......@@ -89,10 +99,12 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 2: filename</strong> - The filename.
</p>
</dd>
<dt>artifact subtype: files-count (1)</dt>
<dd>
</dd>
<dt>
artifact subtype: files-count (1)
</dt>
<dd>
<p>
The number of files associated with this artifact. Not all
artifacts have files associated with it.
......@@ -101,10 +113,12 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: count</strong> - The number of files.
</p>
</dd>
<dt>artifact subtype: id (1)</dt>
<dd>
</dd>
<dt>
artifact subtype: id (1)
</dt>
<dd>
<p>
The ID (if any) of the artifact that was built. Not all artifacts
have associated IDs. For example, AMIs built have IDs associated
......@@ -115,18 +129,22 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: id</strong> - The ID of the artifact.
</p>
</dd>
<dt>artifact subtype: nil (0)</dt>
<dd>
</dd>
<dt>
artifact subtype: nil (0)
</dt>
<dd>
<p>
If present, this means that the artifact was nil, or that the targeted
build completed successfully but no artifact was created.
</p>
</dd>
<dt>artifact subtype: string (1)</dt>
<dd>
</dd>
<dt>
artifact subtype: string (1)
</dt>
<dd>
<p>
The human-readable string description of the artifact provided by
the artifact itself.
......@@ -135,10 +153,12 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: string</strong> - The string output for the artifact.
</p>
</dd>
<dt>error-count (1)</dt>
<dd>
</dd>
<dt>
error-count (1)
</dt>
<dd>
<p>
The number of errors that occurred during the build. This will
always be outputted before any errors so you know how many are coming.
......@@ -148,10 +168,12 @@ These are the machine-readable types that exist as part of the output of
<strong>Data 1: count</strong> - The number of build errors as
a base 10 integer.
</p>
</dd>
<dt>error (1)</dt>
<dd>
</dd>
<dt>
error (1)
</dt>
<dd>
<p>
A build error that occurred. The target of this output will be
the build that had the error.
......@@ -160,6 +182,6 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: error</strong> - The error message as a string.
</p>
</dd>
</dd>
</dl>
......@@ -12,8 +12,10 @@ These are the machine-readable types that exist as part of the output of
`packer inspect`.
<dl>
<dt>template-variable (3)</dt>
<dd>
<dt>
template-variable (3)
</dt>
<dd>
<p>
A <a href="/docs/templates/user-variables.html">user variable</a>
defined within the template.
......@@ -32,10 +34,12 @@ These are the machine-readable types that exist as part of the output of
<strong>Data 3: required</strong> - If non-zero, then this variable
is required.
</p>
</dd>
<dt>template-builder (2)</dt>
<dd>
</dd>
<dt>
template-builder (2)
</dt>
<dd>
<p>
A builder defined within the template.
</p>
......@@ -48,10 +52,12 @@ These are the machine-readable types that exist as part of the output of
generally be the same as the name unless you explicitly override
the name.
</p>
</dd>
<dt>template-provisioner (1)</dt>
<dd>
</dd>
<dt>
template-provisioner (1)
</dt>
<dd>
<p>
A provisioner defined within the template. Multiple of these may
exist. If so, they are outputted in the order they would run.
......@@ -60,6 +66,6 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: name</strong> - The name/type of the provisioner.
</p>
</dd>
</dd>
</dl>
......@@ -12,8 +12,10 @@ These are the machine-readable types that exist as part of the output of
`packer version`.
<dl>
<dt>version (1)</dt>
<dd>
<dt>
version (1)
</dt>
<dd>
<p>The version of Packer that is running.</p>
<p>
......@@ -21,19 +23,23 @@ These are the machine-readable types that exist as part of the output of
only including the major, minor, and patch versions. Example:
"0.2.4".
</p>
</dd>
<dt>version-commit (1)</dt>
<dd>
</dd>
<dt>
version-commit (1)
</dt>
<dd>
<p>The SHA1 of the Git commit that built this version of Packer.</p>
<p>
<strong>Data 1: commit SHA1</strong> - The SHA1 of the commit.
</p>
</dd>
<dt>version-prerelease (1)</dt>
<dd>
</dd>
<dt>
version-prerelease (1)
</dt>
<dd>
<p>
The prerelease tag (if any) for the running version of Packer. This
can be "beta", "dev", "alpha", etc. If this is empty, you can assume
......@@ -44,6 +50,6 @@ These are the machine-readable types that exist as part of the output of
<strong>Data 1: prerelease name</strong> - The name of the
prerelease tag.
</p>
</dd>
</dd>
</dl>
......@@ -12,8 +12,10 @@ These are the machine-readable types that can appear in almost any
machine-readable output and are provided by Packer core itself.
<dl>
<dt>ui (2)</dt>
<dd>
<dt>
ui (2)
</dt>
<dd>
<p>
Specifies the output and type of output that would've normally
gone to the console if Packer were running in human-readable
......@@ -28,6 +30,6 @@ machine-readable output and are provided by Packer core itself.
<strong>Data 2: output</strong> - The UI message that would have
been outputted.
</p>
</dd>
</dd>
</dl>
......@@ -24,12 +24,14 @@ Within each section, the format of the documentation is the following:
<br>
<dl>
<dt>type-name (data-count)</dt>
<dd>
<dt>
type-name (data-count)
</dt>
<dd>
<p>Description of the type.</p>
<p>
<strong>Data 1: name</strong> - Description.
</p>
</dd>
</dd>
</dl>
......@@ -32,13 +32,13 @@ The format of the configuration file is basic JSON.
Below is the list of all available configuration parameters for the core
configuration file. None of these are required, since all have sane defaults.
- `plugin_min_port` and `plugin_max_port` (integer) - These are the minimum and
maximum ports that Packer uses for communication with plugins, since plugin
communication happens over TCP connections on your local host. By default
these are 10,000 and 25,000, respectively. Be sure to set a fairly wide range
here, since Packer can easily use over 25 ports on a single run.
- `builders`, `commands`, `post-processors`, and `provisioners` are objects that
are used to install plugins. The details of how exactly these are set is
covered in more detail in the [installing plugins documentation
page](/docs/extend/plugins.html).
- `plugin_min_port` and `plugin_max_port` (integer) - These are the minimum
and maximum ports that Packer uses for communication with plugins, since
plugin communication happens over TCP connections on your local host. By
default these are 10,000 and 25,000, respectively. Be sure to set a fairly
wide range here, since Packer can easily use over 25 ports on a single run.
- `builders`, `commands`, `post-processors`, and `provisioners` are objects
that are used to install plugins. The details of how exactly these are set
is covered in more detail in the [installing plugins documentation
page](/docs/extend/plugins.html).
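A minimal core configuration file using the port parameters above might look like this (the values shown are simply the documented defaults):

```json
{
  "plugin_min_port": 10000,
  "plugin_max_port": 25000
}
```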
......@@ -9,28 +9,28 @@ page_title: Environmental Variables for Packer
Packer uses a variety of environmental variables. A listing and description of
each can be found below:
- `PACKER_CACHE_DIR` - The location of the packer cache.
- `PACKER_CACHE_DIR` - The location of the packer cache.
- `PACKER_CONFIG` - The location of the core configuration file. The format of
the configuration file is basic JSON. See the [core configuration
page](/docs/other/core-configuration.html).
- `PACKER_CONFIG` - The location of the core configuration file. The format of
the configuration file is basic JSON. See the [core configuration
page](/docs/other/core-configuration.html).
- `PACKER_LOG` - Setting this to any value will enable the logger. See the
[debugging page](/docs/other/debugging.html).
- `PACKER_LOG` - Setting this to any value will enable the logger. See the
[debugging page](/docs/other/debugging.html).
- `PACKER_LOG_PATH` - The location of the log file. Note: `PACKER_LOG` must be
set for any logging to occur. See the [debugging
page](/docs/other/debugging.html).
- `PACKER_LOG_PATH` - The location of the log file. Note: `PACKER_LOG` must be
set for any logging to occur. See the [debugging
page](/docs/other/debugging.html).
- `PACKER_NO_COLOR` - Setting this to any value will disable color in
the terminal.
- `PACKER_NO_COLOR` - Setting this to any value will disable color in
the terminal.
- `PACKER_PLUGIN_MAX_PORT` - The maximum port that Packer uses for communication
with plugins, since plugin communication happens over TCP connections on your
local host. The default is 25,000. See the [core configuration
page](/docs/other/core-configuration.html).
- `PACKER_PLUGIN_MAX_PORT` - The maximum port that Packer uses for
communication with plugins, since plugin communication happens over TCP
connections on your local host. The default is 25,000. See the [core
configuration page](/docs/other/core-configuration.html).
- `PACKER_PLUGIN_MIN_PORT` - The minimum port that Packer uses for communication
with plugins, since plugin communication happens over TCP connections on your
local host. The default is 10,000. See the [core configuration
page](/docs/other/core-configuration.html).
- `PACKER_PLUGIN_MIN_PORT` - The minimum port that Packer uses for
communication with plugins, since plugin communication happens over TCP
connections on your local host. The default is 10,000. See the [core
configuration page](/docs/other/core-configuration.html).
......@@ -25,14 +25,14 @@ location in Atlas.
Here is an example workflow:
1. Packer builds an AMI with the [Amazon AMI
builder](/docs/builders/amazon.html)
2. The `atlas` post-processor takes the resulting AMI and uploads it to Atlas.
The `atlas` post-processor is configured with the name of the AMI, for example
`hashicorp/foobar`, to create the artifact in Atlas or update the version if
the artifact already exists
3. The new version is ready and available to be used in deployments with a tool
like [Terraform](https://terraform.io)
1. Packer builds an AMI with the [Amazon AMI
builder](/docs/builders/amazon.html)
2. The `atlas` post-processor takes the resulting AMI and uploads it to Atlas.
The `atlas` post-processor is configured with the name of the AMI, for
example `hashicorp/foobar`, to create the artifact in Atlas or update the
version if the artifact already exists
3. The new version is ready and available to be used in deployments with a tool
like [Terraform](https://terraform.io)
## Configuration
......@@ -40,32 +40,33 @@ The configuration allows you to specify and access the artifact in Atlas.
### Required:
- `token` (string) - Your access token for the Atlas API. This can be generated
on your [tokens page](https://atlas.hashicorp.com/settings/tokens).
Alternatively you can export your Atlas token as an environmental variable and
remove it from the configuration.
- `token` (string) - Your access token for the Atlas API. This can be
generated on your [tokens
page](https://atlas.hashicorp.com/settings/tokens). Alternatively you can
export your Atlas token as an environmental variable and remove it from
the configuration.
- `artifact` (string) - The shorthand tag for your artifact that maps to Atlas,
i.e `hashicorp/foobar` for `atlas.hashicorp.com/hashicorp/foobar`. You must
have access to the organization, hashicorp in this example, in order to add an
artifact to the organization in Atlas.
- `artifact` (string) - The shorthand tag for your artifact that maps to
Atlas, i.e. `hashicorp/foobar` for `atlas.hashicorp.com/hashicorp/foobar`.
You must have access to the organization, hashicorp in this example, in
order to add an artifact to the organization in Atlas.
- `artifact_type` (string) - For uploading AMIs to Atlas, `artifact_type` will
always be `amazon.ami`. This field must be defined because Atlas can host
other artifact types, such as Vagrant boxes.
- `artifact_type` (string) - For uploading AMIs to Atlas, `artifact_type` will
always be `amazon.ami`. This field must be defined because Atlas can host
other artifact types, such as Vagrant boxes.
-> **Note:** If you want to upload Vagrant boxes to Atlas, use the [Atlas
post-processor](/docs/post-processors/atlas.html).
### Optional:
- `atlas_url` (string) - Override the base URL for Atlas. This is useful if
you're using Atlas Enterprise in your own network. Defaults to
`https://atlas.hashicorp.com/api/v1`.
- `atlas_url` (string) - Override the base URL for Atlas. This is useful if
you're using Atlas Enterprise in your own network. Defaults to
`https://atlas.hashicorp.com/api/v1`.
- `metadata` (map) - Send metadata about the artifact. If the artifact type is
"vagrant.box", you must specify a "provider" metadata about what provider
to use.
- `metadata` (map) - Send metadata about the artifact. If the artifact type is
    "vagrant.box", you must specify a "provider" metadata key indicating which
    provider to use.
### Example Configuration
......
......@@ -20,25 +20,25 @@ VMware or VirtualBox) and compresses the artifact into a single archive.
You must specify the output filename. The archive format is derived from the
filename.
- `output` (string) - The path to save the compressed archive. The archive
format is inferred from the filename. E.g. `.tar.gz` will be a
gzipped tarball. `.zip` will be a zip file. If the extension can't be detected
packer defaults to `.tar.gz` behavior but will not change the filename.
- `output` (string) - The path to save the compressed archive. The archive
format is inferred from the filename. E.g. `.tar.gz` will be a
gzipped tarball. `.zip` will be a zip file. If the extension can't be
detected packer defaults to `.tar.gz` behavior but will not change
the filename.
If you are executing multiple builders in parallel you should make sure
`output` is unique for each one. For example
`packer_{{.BuildName}}_{{.Provider}}.zip`.
If you are executing multiple builders in parallel you should make sure `output`
is unique for each one. For example `packer_{{.BuildName}}_{{.Provider}}.zip`.
### Optional:
If you want more control over how the archive is created you can specify the
following settings:
- `compression_level` (integer) - Specify the compression level, for algorithms
that support it, from 1 through 9 inclusive. Typically higher compression
levels take longer but produce smaller files. Defaults to `6`
- `compression_level` (integer) - Specify the compression level, for
algorithms that support it, from 1 through 9 inclusive. Typically higher
compression levels take longer but produce smaller files. Defaults to `6`
- `keep_input_artifact` (boolean) - Keep source files; defaults to `false`
- `keep_input_artifact` (boolean) - Keep source files; defaults to `false`
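A sketch of a compress post-processor block combining the options above (the output filename is illustrative):

```json
{
  "type": "compress",
  "output": "packer_{{.BuildName}}_{{.Provider}}.tar.gz",
  "compression_level": 9,
  "keep_input_artifact": true
}
```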
### Supported Formats
......
......@@ -24,9 +24,9 @@ registry.
The configuration for this post-processor is extremely simple. At least a
repository is required.
- `repository` (string) - The repository of the imported image.
- `repository` (string) - The repository of the imported image.
- `tag` (string) - The tag for the imported image. By default this is not set.
- `tag` (string) - The tag for the imported image. By default this is not set.
## Example
......
......@@ -18,16 +18,16 @@ pushes it to a Docker registry.
This post-processor has only optional configuration:
- `login` (boolean) - Defaults to false. If true, the post-processor will login
prior to pushing.
- `login` (boolean) - Defaults to false. If true, the post-processor will
login prior to pushing.
- `login_email` (string) - The email to use to authenticate to login.
- `login_email` (string) - The email to use to authenticate to login.
- `login_username` (string) - The username to use to authenticate to login.
- `login_username` (string) - The username to use to authenticate to login.
- `login_password` (string) - The password to use to authenticate to login.
- `login_password` (string) - The password to use to authenticate to login.
- `login_server` (string) - The server address to login to.
- `login_server` (string) - The server address to login to.
-> **Note:** If you log in using the credentials above, the post-processor
will automatically log you out afterwards (just the server specified).
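A sketch of a docker-push post-processor block using the options above (the credential values are placeholders, not real accounts):

```json
{
  "type": "docker-push",
  "login": true,
  "login_username": "myuser",
  "login_password": "mypassword"
}
```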
......
......@@ -25,7 +25,7 @@ familiar with this and vice versa.
The configuration for this post-processor is extremely simple.
- `path` (string) - The path to save the image.
- `path` (string) - The path to save the image.
## Example
......
......@@ -27,12 +27,12 @@ that this works with committed resources, rather than exported.
The configuration for this post-processor is extremely simple. At least a
repository is required.
- `repository` (string) - The repository of the image.
- `repository` (string) - The repository of the image.
- `tag` (string) - The tag for the image. By default this is not set.
- `tag` (string) - The tag for the image. By default this is not set.
- `force` (boolean) - If true, this post-processor forcibly tag the image even
if tag name is collided. Default to `false`.
- `force` (boolean) - If true, this post-processor will forcibly tag the image
    even if the tag name collides. Defaults to `false`.
## Example
......
......@@ -36,16 +36,16 @@ and deliver them to your team in some fashion.
Here is an example workflow:
1. You use Packer to build a Vagrant Box for the `virtualbox` provider
2. The `vagrant-cloud` post-processor is configured to point to the box
`hashicorp/foobar` on Vagrant Cloud via the `box_tag` configuration
3. The post-processor receives the box from the `vagrant` post-processor
4. It then creates the configured version, or verifies the existence of it, on
Vagrant Cloud
5. A provider matching the name of the Vagrant provider is then created
6. The box is uploaded to Vagrant Cloud
7. The upload is verified
8. The version is released and available to users of the box
1. You use Packer to build a Vagrant Box for the `virtualbox` provider
2. The `vagrant-cloud` post-processor is configured to point to the box
`hashicorp/foobar` on Vagrant Cloud via the `box_tag` configuration
3. The post-processor receives the box from the `vagrant` post-processor
4. It then creates the configured version, or verifies the existence of it, on
Vagrant Cloud
5. A provider matching the name of the Vagrant provider is then created
6. The box is uploaded to Vagrant Cloud
7. The upload is verified
8. The version is released and available to users of the box
## Configuration
......@@ -54,35 +54,35 @@ on Vagrant Cloud, as well as authentication and version information.
### Required:
- `access_token` (string) - Your access token for the Vagrant Cloud API. This
can be generated on your [tokens
page](https://vagrantcloud.com/account/tokens).
- `access_token` (string) - Your access token for the Vagrant Cloud API. This
can be generated on your [tokens
page](https://vagrantcloud.com/account/tokens).
- `box_tag` (string) - The shorthand tag for your box that maps to Vagrant
Cloud, i.e `hashicorp/precise64` for `vagrantcloud.com/hashicorp/precise64`
- `box_tag` (string) - The shorthand tag for your box that maps to Vagrant
Cloud, i.e. `hashicorp/precise64` for `vagrantcloud.com/hashicorp/precise64`
- `version` (string) - The version number, typically incrementing a
previous version. The version string is validated based on [Semantic
Versioning](http://semver.org/). The string must match a pattern that could be
semver, and doesn't validate that the version comes after your
previous versions.
- `version` (string) - The version number, typically incrementing a
previous version. The version string is validated based on [Semantic
Versioning](http://semver.org/). The string must match a pattern that could
be semver, and doesn't validate that the version comes after your
previous versions.
### Optional:
- `no_release` (string) - If set to true, does not release the version on
Vagrant Cloud, making it active. You can manually release the version via the
API or Web UI. Defaults to false.
- `no_release` (string) - If set to true, does not release the version on
Vagrant Cloud, making it active. You can manually release the version via
the API or Web UI. Defaults to false.
- `vagrant_cloud_url` (string) - Override the base URL for Vagrant Cloud. This
is useful if you're using Vagrant Private Cloud in your own network. Defaults
to `https://vagrantcloud.com/api/v1`
- `vagrant_cloud_url` (string) - Override the base URL for Vagrant Cloud. This
is useful if you're using Vagrant Private Cloud in your own network.
Defaults to `https://vagrantcloud.com/api/v1`
- `version_description` (string) - Optionally markdown text used as a
full-length and in-depth description of the version, typically for denoting
changes introduced
- `version_description` (string) - Optional Markdown text used as a
    full-length and in-depth description of the version, typically for denoting
    changes introduced.
- `box_download_url` (string) - Optional URL for a self-hosted box. If this is
set the box will not be uploaded to the Vagrant Cloud.
- `box_download_url` (string) - Optional URL for a self-hosted box. If this is
set the box will not be uploaded to the Vagrant Cloud.
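A sketch of a vagrant-cloud post-processor block using the required options above (the token is a placeholder; the box tag reuses the example from the text):

```json
{
  "type": "vagrant-cloud",
  "access_token": "YOUR_VAGRANT_CLOUD_TOKEN",
  "box_tag": "hashicorp/precise64",
  "version": "1.0.0"
}
```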
## Use with Vagrant Post-Processor
......
......@@ -29,13 +29,13 @@ certain builders into proper boxes for their respective providers.
Currently, the Vagrant post-processor can create boxes for the following
providers.
- AWS
- DigitalOcean
- Hyper-V
- Parallels
- QEMU
- VirtualBox
- VMware
- AWS
- DigitalOcean
- Hyper-V
- Parallels
- QEMU
- VirtualBox
- VMware
-> **Support for additional providers** is planned. If the Vagrant
post-processor doesn't support creating boxes for a provider you care about,
......@@ -51,28 +51,28 @@ However, if you want to configure things a bit more, the post-processor does
expose some configuration options. The available options are listed below, with
more details about certain options in following sections.
- `compression_level` (integer) - An integer representing the compression level
to use when creating the Vagrant box. Valid values range from 0 to 9, with 0
being no compression and 9 being the best compression. By default, compression
is enabled at level 6.
- `compression_level` (integer) - An integer representing the compression
level to use when creating the Vagrant box. Valid values range from 0 to 9,
with 0 being no compression and 9 being the best compression. By default,
compression is enabled at level 6.
- `include` (array of strings) - Paths to files to include in the Vagrant box.
These files will each be copied into the top level directory of the Vagrant
box (regardless of their paths). They can then be used from the Vagrantfile.
- `include` (array of strings) - Paths to files to include in the Vagrant box.
These files will each be copied into the top level directory of the Vagrant
box (regardless of their paths). They can then be used from the Vagrantfile.
- `keep_input_artifact` (boolean) - If set to true, do not delete the
`output_directory` on a successful build. Defaults to false.
- `keep_input_artifact` (boolean) - If set to true, do not delete the
`output_directory` on a successful build. Defaults to false.
- `output` (string) - The full path to the box file that will be created by
this post-processor. This is a [configuration
template](/docs/templates/configuration-templates.html). The variable
`Provider` is replaced by the Vagrant provider the box is for. The variable
`ArtifactId` is replaced by the ID of the input artifact. The variable
`BuildName` is replaced with the name of the build. By default, the value of
this config is `packer_{{.BuildName}}_{{.Provider}}.box`.
- `output` (string) - The full path to the box file that will be created by
this post-processor. This is a [configuration
template](/docs/templates/configuration-templates.html). The variable
`Provider` is replaced by the Vagrant provider the box is for. The variable
`ArtifactId` is replaced by the ID of the input artifact. The variable
`BuildName` is replaced with the name of the build. By default, the value of
this config is `packer_{{.BuildName}}_{{.Provider}}.box`.
- `vagrantfile_template` (string) - Path to a template to use for the
Vagrantfile that is packaged with the box.
- `vagrantfile_template` (string) - Path to a template to use for the
Vagrantfile that is packaged with the box.
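A sketch of a vagrant post-processor block exercising the options above (the `output` value is the documented default, shown explicitly):

```json
{
  "type": "vagrant",
  "compression_level": 9,
  "output": "packer_{{.BuildName}}_{{.Provider}}.box"
}
```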
## Provider-Specific Overrides
......
......@@ -21,35 +21,36 @@ each category, the available configuration keys are alphabetized.
Required:
- `cluster` (string) - The cluster to upload the VM to.
- `cluster` (string) - The cluster to upload the VM to.
- `datacenter` (string) - The name of the datacenter within vSphere to add the
VM to.
- `datacenter` (string) - The name of the datacenter within vSphere to add the
VM to.
- `datastore` (string) - The name of the datastore to store this VM. This is
*not required* if `resource_pool` is specified.
- `host` (string) - The vSphere host that will be contacted to perform the
VM upload.
- `password` (string) - Password to use to authenticate to the
vSphere endpoint.
- `resource_pool` (string) - The resource pool to upload the VM to. This is
*not required*.
- `username` (string) - The username to use to authenticate to the
vSphere endpoint.
- `vm_name` (string) - The name of the VM once it is uploaded.
Optional:
- `disk_mode` (string) - Target disk format. See `ovftool` manual for
available options. By default, "thick" will be used.
- `insecure` (boolean) - Whether or not the connection to vSphere can be done
over an insecure connection. By default this is false.
- `vm_folder` (string) - The folder within the datastore to store the VM.
- `vm_network` (string) - The name of the VM network this VM will be added to.
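For illustration, a vSphere post-processor configuration using these keys might look like this (the hostnames and inventory names below are placeholders for your own environment):

``` {.javascript}
{
  "type": "vsphere",
  "host": "vcenter.example.com",
  "username": "packer",
  "password": "{{user `vsphere_password`}}",
  "datacenter": "dc-01",
  "cluster": "cluster-01",
  "datastore": "datastore-01",
  "vm_name": "packer-demo",
  "insecure": true
}
```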
@@ -35,83 +35,70 @@ The reference of available configuration options is listed below.
Required:
- `playbook_file` (string) - The playbook file to be executed by ansible. This
file must exist on your local system and will be uploaded to the
remote machine.
Optional:
- `command` (string) - The command to invoke ansible. Defaults
to "ansible-playbook".
- `extra_arguments` (array of strings) - An array of extra arguments to pass
to the ansible command. By default, this is empty.
- `inventory_groups` (string) - A comma-separated list of groups to which
packer will assign the host `127.0.0.1`. A value of `my_group_1,my_group_2`
will generate an Ansible inventory like:
``` {.text}
[my_group_1]
127.0.0.1
[my_group_2]
127.0.0.1
```
- `inventory_file` (string) - The inventory file to be used by ansible. This
file must exist on your local system and will be uploaded to the
remote machine.
When using an inventory file, you must also `--limit` the run to the specific
host you're building. The `--limit` argument can be provided in the
`extra_arguments` option.
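For example, a provisioner block passing `--limit` through `extra_arguments` might look like this (the file names are placeholders):

``` {.javascript}
{
  "type": "ansible-local",
  "playbook_file": "site.yml",
  "inventory_file": "inventory",
  "extra_arguments": ["--limit", "127.0.0.1"]
}
```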
An example inventory file may look like:
``` {.text}
[chi-dbservers]
db-01 ansible_connection=local
db-02 ansible_connection=local
[chi-appservers]
app-01 ansible_connection=local
app-02 ansible_connection=local
[chi:children]
chi-dbservers
chi-appservers
[dbservers:children]
chi-dbservers
[appservers:children]
chi-appservers
```
- `playbook_dir` (string) - A path to the complete ansible directory structure
on your local system to be copied to the remote machine as the
`staging_directory` before all other files and directories.
- `playbook_paths` (array of strings) - An array of paths to playbook files on
your local system. These will be uploaded to the remote machine under
`staging_directory`/playbooks. By default, this is empty.
- `group_vars` (string) - A path to the directory containing ansible group
variables on your local system to be copied to the remote machine. By
default, this is empty.
- `host_vars` (string) - A path to the directory containing ansible host
variables on your local system to be copied to the remote machine. By
default, this is empty.
- `role_paths` (array of strings) - An array of paths to role directories on
your local system. These will be uploaded to the remote machine under
`staging_directory`/roles. By default, this is empty.
- `staging_directory` (string) - The directory where all the configuration of
Ansible by Packer will be placed. By default this
is "/tmp/packer-provisioner-ansible-local". This directory doesn't need to
exist but must have proper permissions so that the SSH user that Packer uses
is able to create directories and write into this folder. If the permissions
are not correct, use a shell provisioner prior to this to configure
it properly.
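Putting several of these options together, a fuller ansible-local provisioner might be sketched as follows (the paths and role names are illustrative, not defaults):

``` {.javascript}
{
  "type": "ansible-local",
  "playbook_file": "site.yml",
  "role_paths": ["roles/common", "roles/web"],
  "group_vars": "group_vars",
  "staging_directory": "/home/packer-build/ansible"
}
```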
@@ -40,70 +40,71 @@ is running must have knife on the path and configured globally, i.e.,
The reference of available configuration options is listed below. No
configuration is actually required.
- `chef_environment` (string) - The name of the chef\_environment sent to the
Chef server. By default this is empty and will not use an environment.
- `config_template` (string) - Path to a template that will be used for the
Chef configuration file. By default Packer only sets configuration it needs
to match the settings set in the provisioner configuration. If you need to
set configurations that the Packer provisioner doesn't support, then you
should use a custom configuration template. See the dedicated "Chef
Configuration" section below for more details.
- `execute_command` (string) - The command used to execute Chef. This has
various [configuration template
variables](/docs/templates/configuration-templates.html) available. See
below for more information.
- `install_command` (string) - The command used to install Chef. This has
various [configuration template
variables](/docs/templates/configuration-templates.html) available. See
below for more information.
- `json` (object) - An arbitrary mapping of JSON that will be available as
node attributes while running Chef.
- `node_name` (string) - The name of the node to register with the
Chef Server. This is optional and by default is `packer-{{uuid}}`.
- `prevent_sudo` (boolean) - By default, the configured commands that are
executed to install and run Chef are executed with `sudo`. If this is true,
then the sudo will be omitted.
- `run_list` (array of strings) - The [run
list](http://docs.opscode.com/essentials_node_object_run_lists.html)
for Chef. By default this is empty, and will use the run list sent down by
the Chef Server.
- `server_url` (string) - The URL to the Chef server. This is required.
- `skip_clean_client` (boolean) - If true, Packer won't remove the client from
the Chef server after it is done running. By default, this is false.
- `skip_clean_node` (boolean) - If true, Packer won't remove the node from the
Chef server after it is done running. By default, this is false.
- `skip_install` (boolean) - If true, Chef will not automatically be installed
on the machine using the Opscode omnibus installers.
- `staging_directory` (string) - This is the directory where all the
configuration of Chef by Packer will be placed. By default this
is "/tmp/packer-chef-client". This directory doesn't need to exist but must
have proper permissions so that the SSH user that Packer uses is able to
create directories and write into this folder. If the permissions are not
correct, use a shell provisioner prior to this to configure it properly.
- `client_key` (string) - Path to client key. If not set, this defaults to a
file named client.pem in `staging_directory`.
- `validation_client_name` (string) - Name of the validation client. If not
set, this won't be set in the configuration and the default that Chef uses
will be used.
- `validation_key_path` (string) - Path to the validation key for
communicating with the Chef Server. This will be uploaded to the
remote machine. If this is NOT set, then it is your responsibility via other
means (shell provisioner, etc.) to get a validation key to where Chef
expects it.
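As a non-authoritative sketch, a chef-client provisioner tying these options together might look like this (the server URL, key path, run list, and client name are placeholders):

``` {.javascript}
{
  "type": "chef-client",
  "server_url": "https://chef.example.com/",
  "validation_key_path": "validation.pem",
  "validation_client_name": "chef-validator",
  "run_list": ["recipe[base]"],
  "skip_clean_node": true
}
```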
## Chef Configuration
@@ -135,9 +136,9 @@ This template is a [configuration
template](/docs/templates/configuration-templates.html) and has a set of
variables available to use:
- `NodeName` - The node name set in the configuration.
- `ServerUrl` - The URL of the Chef Server set in the configuration.
- `ValidationKeyPath` - Path to the validation key, if it is set.
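To illustrate, a custom `config_template` might reference these variables like so (the extra settings such as `log_level` are additions of this sketch, not defaults set by Packer):

``` {.text}
log_level        :info
log_location     STDOUT
chef_server_url  "{{.ServerUrl}}"
node_name        "{{.NodeName}}"
validation_key   "{{.ValidationKeyPath}}"
```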
## Execute Command
@@ -155,10 +156,10 @@ This command can be customized using the `execute_command` configuration. As you
can see from the default value above, the value of this configuration can
contain various template variables, defined below:
- `ConfigPath` - The path to the Chef configuration file.
- `JsonPath` - The path to the JSON attributes file for the node.
- `Sudo` - A boolean of whether to `sudo` the command or not, depending on the
value of the `prevent_sudo` configuration.
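For example, an `execute_command` override that adds debug logging might be sketched like this (it mirrors the shape of the default command; the extra `-l debug` flag is the only intended change, and the server URL is a placeholder):

``` {.javascript}
{
  "type": "chef-client",
  "server_url": "https://chef.example.com/",
  "execute_command": "{{if .Sudo}}sudo {{end}}chef-client --no-color -c {{.ConfigPath}} -j {{.JsonPath}} -l debug"
}
```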
## Install Command
@@ -32,19 +32,19 @@ The file provisioner can upload both single files and complete directories.
The available configuration options are listed below. All elements are required.
- `source` (string) - The path to a local file or directory to upload to
the machine. The path can be absolute or relative. If it is relative, it is
relative to the working directory when Packer is executed. If this is a
directory, the existence of a trailing slash is important. Read below on
uploading directories.
- `destination` (string) - The path where the file will be uploaded to in
the machine. This value must be a writable location and any parent
directories must already exist.
- `direction` (string) - The direction of the file transfer. This defaults to
"upload". If it is set to "download", then the file "source" in the machine
will be downloaded locally to "destination".
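For illustration, the upload and download directions might be combined in a template like this (all file names are placeholders):

``` {.javascript}
{
  "provisioners": [
    {
      "type": "file",
      "source": "app.tar.gz",
      "destination": "/tmp/app.tar.gz"
    },
    {
      "type": "file",
      "source": "/var/log/build.log",
      "destination": "build.log",
      "direction": "download"
    }
  ]
}
```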
## Directory Uploads
@@ -41,36 +41,36 @@ The reference of available configuration options is listed below.
The provisioner takes various options. None are strictly required. They are
listed below:
- `client_cert_path` (string) - Path to the client certificate for the node on
your disk. This defaults to nothing, in which case a client cert won't
be uploaded.
- `client_private_key_path` (string) - Path to the client private key for the
node on your disk. This defaults to nothing, in which case a client private
key won't be uploaded.
- `facter` (object of key/value strings) - Additional Facter facts to make
available to the Puppet run.
- `ignore_exit_codes` (boolean) - If true, Packer will never consider the
provisioner a failure.
- `options` (string) - Additional command line options to pass to
`puppet agent` when Puppet is run.
- `prevent_sudo` (boolean) - By default, the configured commands that are
executed to run Puppet are executed with `sudo`. If this is true, then the
sudo will be omitted.
- `puppet_node` (string) - The name of the node. If this isn't set, the fully
qualified domain name will be used.
- `puppet_server` (string) - Hostname of the Puppet server. By default
"puppet" will be used.
- `staging_directory` (string) - This is the directory where all the
configuration of Puppet by Packer will be placed. By default this
is "/tmp/packer-puppet-server". This directory doesn't need to exist but
must have proper permissions so that the SSH user that Packer uses is able
to create directories and write into this folder. If the permissions are not
correct, use a shell provisioner prior to this to configure it properly.
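A puppet-server provisioner using several of these options might be sketched as follows (the server hostname, node name, and fact values are placeholders for your own infrastructure):

``` {.javascript}
{
  "type": "puppet-server",
  "puppet_server": "puppet.example.com",
  "puppet_node": "packer-build.example.com",
  "facter": {
    "server_role": "webserver"
  },
  "ignore_exit_codes": true
}
```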