Commit d8e8f98b authored by Chris Bednarski

Change to 4 spaces

parent 555a8ba7
@@ -13,5 +13,5 @@ format:
bundle exec htmlbeautifier -t 2 source/*.erb
bundle exec htmlbeautifier -t 2 source/layouts/*.erb
@pandoc -v > /dev/null || echo "pandoc must be installed in order to format markdown content"
pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "pandoc -r markdown -w markdown --tab-stop=2 --atx-headers -s --columns=80 {} > {}.new"\; || true
pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "pandoc -r markdown -w markdown --tab-stop=4 --atx-headers -s --columns=80 {} > {}.new"\; || true
pandoc -v > /dev/null && find . -iname "*.html.markdown" | xargs -I{} bash -c "mv {}.new {}"\; || true
@@ -29,7 +29,8 @@ list as contributors come and go.
<div class="people">
<div class="person">
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/54079122b67de9677c1f93933ce8b63a.png?s=125">
<div class="bio">
<h3>Mitchell Hashimoto (<a href="https://github.com/mitchellh">@mitchellh</a>)</h3>
@@ -41,9 +42,11 @@ list as contributors come and go.
described as "automation obsessed."
</p>
</div>
</div>
<div class="person">
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2acc31dd6370a54b18f6755cd0710ce6.png?s=125">
<div class="bio">
<h3>Jack Pearkes (<a href="https://github.com/pearkes">@pearkes</a>)</h3>
@@ -52,9 +55,11 @@ list as contributors come and go.
for Packer. Outside of Packer, Jack is an avid open source
contributor and software consultant.</p>
</div>
</div>
<div class="person">
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/2f7fc9cb7558e3ea48f5a86fa90a78da.png?s=125">
<div class="bio">
<h3>Mark Peek (<a href="https://github.com/markpeek">@markpeek</a>)</h3>
@@ -65,9 +70,11 @@ list as contributors come and go.
<a href="https://github.com/ironport">IronPort Python libraries</a>.
Mark is also a <a href="https://FreeBSD.org">FreeBSD committer</a>.</p>
</div>
</div>
<div class="person">
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/1fca64df3d7db1e2f258a8956d2b0aff.png?s=125">
<div class="bio">
<h3>Ross Smith II (<a href="https://github.com/rasa" target="_blank">@rasa</a>)</h3>
@@ -78,9 +85,11 @@ VMware builder on Windows, and provides other valuable assistance. Ross is an
open source enthusiast, published author, and freelance consultant.
</p>
</div>
</div>
<div class="person">
</div>
<div class="person">
<img class="pull-left" src="http://www.gravatar.com/avatar/c9f6bf7b5b865012be5eded656ebed7d.png?s=125">
<div class="bio">
<h3>Rickard von Essen<br/>(<a href="https://github.com/rickard-von-essen" target="_blank">@rickard-von-essen</a>)</h3>
@@ -90,8 +99,11 @@ Rickard von Essen maintains our Parallels Desktop builder. Rickard is a
polyglot programmer and consults on Continuous Delivery.
</p>
</div>
</div>
<div class="clearfix"></div>
</div>
<div class="clearfix">
</div>
</div>
@@ -17,34 +17,34 @@ Luckily, there are relatively few. This page documents all the terminology
required to understand and use Packer. The terminology is in alphabetical order
for easy referencing.
- `Artifacts` are the results of a single build, and are usually a set of IDs or
files to represent a machine image. Every builder produces a single artifact.
As an example, in the case of the Amazon EC2 builder, the artifact is a set of
AMI IDs (one per region). For the VMware builder, the artifact is a directory
of files comprising the created virtual machine.
- `Artifacts` are the results of a single build, and are usually a set of IDs
or files to represent a machine image. Every builder produces a
single artifact. As an example, in the case of the Amazon EC2 builder, the
artifact is a set of AMI IDs (one per region). For the VMware builder, the
artifact is a directory of files comprising the created virtual machine.
- `Builds` are a single task that eventually produces an image for a
single platform. Multiple builds run in parallel. Example usage in a sentence:
"The Packer build produced an AMI to run our web application." Or: "Packer is
running the builds now for VMware, AWS, and VirtualBox."
single platform. Multiple builds run in parallel. Example usage in a
sentence: "The Packer build produced an AMI to run our web application." Or:
"Packer is running the builds now for VMware, AWS, and VirtualBox."
- `Builders` are components of Packer that are able to create a machine image
for a single platform. Builders read in some configuration and use that to run
and generate a machine image. A builder is invoked as part of a build in order
to create the actual resulting images. Example builders include VirtualBox,
VMware, and Amazon EC2. Builders can be created and added to Packer in the
form of plugins.
- `Commands` are sub-commands for the `packer` program that perform some job. An
example command is "build", which is invoked as `packer build`. Packer ships
with a set of commands out of the box in order to define its
for a single platform. Builders read in some configuration and use that to
run and generate a machine image. A builder is invoked as part of a build in
order to create the actual resulting images. Example builders include
VirtualBox, VMware, and Amazon EC2. Builders can be created and added to
Packer in the form of plugins.
- `Commands` are sub-commands for the `packer` program that perform some job.
An example command is "build", which is invoked as `packer build`. Packer
ships with a set of commands out of the box in order to define its
command-line interface. Commands can also be created and added to Packer in
the form of plugins.
- `Post-processors` are components of Packer that take the result of a builder
or another post-processor and process that to create a new artifact. Examples
of post-processors are compress to compress artifacts, upload to upload
artifacts, etc.
or another post-processor and process that to create a new artifact.
Examples of post-processors are compress to compress artifacts, upload to
upload artifacts, etc.
- `Provisioners` are components of Packer that install and configure software
within a running machine prior to that machine being turned into a
@@ -52,6 +52,6 @@ for easy referencing.
useful software. Example provisioners include shell scripts, Chef,
Puppet, etc.
- `Templates` are JSON files which define one or more builds by configuring the
various components of Packer. Packer is able to read a template and use that
information to create multiple machine images in parallel.
- `Templates` are JSON files which define one or more builds by configuring
the various components of Packer. Packer is able to read a template and use
that information to create multiple machine images in parallel.
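To make the terminology above concrete, here is a minimal, hypothetical template sketch that ties the pieces together: one builder, one provisioner, and one post-processor. The AMI ID, region, and script path are placeholder values and are not part of this change.

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-de0d9eb7",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "packer-example {{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "script": "scripts/setup.sh"
  }],
  "post-processors": ["compress"]
}
```

Running `packer build` against a template like this performs one build per builder and produces one artifact (here, an AMI ID) for each.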
@@ -82,24 +82,25 @@ builder.
instance metadata for IAM role keys.
- `source_ami` (string) - The source AMI whose root volume will be copied and
provisioned on the currently running instance. This must be an EBS-backed AMI
with a root volume snapshot that you have access to.
provisioned on the currently running instance. This must be an EBS-backed
AMI with a root volume snapshot that you have access to.
### Optional:
- `ami_description` (string) - The description to set for the resulting AMI(s).
By default this description is empty.
- `ami_description` (string) - The description to set for the
resulting AMI(s). By default this description is empty.
- `ami_groups` (array of strings) - A list of groups that have access to launch
the resulting AMI(s). By default no groups have permission to launch the AMI.
`all` will make the AMI publicly accessible.
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
the AMI. `all` will make the AMI publicly accessible.
- `ami_product_codes` (array of strings) - A list of product codes to associate
with the AMI. By default no product codes are associated with the AMI.
- `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with
the AMI.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to. Tags
and attributes are copied along with the AMI. AMI copying takes time depending
on the size of the AMI, but will generally take many minutes.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes.
- `ami_users` (array of strings) - A list of account IDs that have access to
launch the resulting AMI(s). By default no additional users other than the
@@ -116,21 +117,22 @@ builder.
use this.
- `command_wrapper` (string) - How to run shell commands. This defaults
to "{{.Command}}". This may be useful to set if you want to set environmental
variables or perhaps run it with `sudo` or so on. This is a configuration
template where the `.Command` variable is replaced with the command to be run.
to "{{.Command}}". This may be useful to set if you want to set
environmental variables or perhaps run it with `sudo` or so on. This is a
configuration template where the `.Command` variable is replaced with the
command to be run.
- `copy_files` (array of strings) - Paths to files on the running EC2 instance
that will be copied into the chroot environment prior to provisioning. This is
useful, for example, to copy `/etc/resolv.conf` so that DNS lookups work.
that will be copied into the chroot environment prior to provisioning. This
is useful, for example, to copy `/etc/resolv.conf` so that DNS lookups work.
- `device_path` (string) - The path to the device where the root volume of the
source AMI will be attached. This defaults to "" (empty string), which forces
Packer to find an open device automatically.
source AMI will be attached. This defaults to "" (empty string), which
forces Packer to find an open device automatically.
- `enhanced_networking` (boolean) - Enable enhanced networking (SriovNetSupport)
on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS
IAM policy.
- `enhanced_networking` (boolean) - Enable enhanced
networking (SriovNetSupport) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy.
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
@@ -138,15 +140,15 @@ builder.
- `mount_path` (string) - The path where the volume will be mounted. This is
where the chroot environment will be. This defaults to
`packer-amazon-chroot-volumes/{{.Device}}`. This is a configuration template
where the `.Device` variable is replaced with the name of the device where the
volume is attached.
where the `.Device` variable is replaced with the name of the device where
the volume is attached.
- `mount_options` (array of strings) - Options to supply the `mount` command
when mounting devices. Each option will be prefixed with `-o` and supplied to
the `mount` command run by Packer. Because this command is run in a shell,
user discretion is advised. See [this manual page for the mount
command](http://linuxcommand.org/man_pages/mount8.html) for valid file system
specific options.
when mounting devices. Each option will be prefixed with `-o` and supplied
to the `mount` command run by Packer. Because this command is run in a
shell, user discretion is advised. See [this manual page for the mount
command](http://linuxcommand.org/man_pages/mount8.html) for valid file
system specific options.
- `root_volume_size` (integer) - The size of the root volume for the chroot
environment, and the resulting AMI
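As a rough sketch of how the chroot options above combine, the fragment below configures an assumed amazon-chroot build; the credentials, source AMI, and copied file are illustrative placeholders, and `mount_path` simply restates its documented default.

```json
{
  "type": "amazon-chroot",
  "access_key": "YOUR ACCESS KEY",
  "secret_key": "YOUR SECRET KEY",
  "source_ami": "ami-e81d5881",
  "ami_name": "packer-amazon-chroot {{timestamp}}",
  "copy_files": ["/etc/resolv.conf"],
  "mount_path": "packer-amazon-chroot-volumes/{{.Device}}"
}
```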
@@ -66,49 +66,50 @@ builder.
- `source_ami` (string) - The initial AMI used as a base for the newly
created machine.
- `ssh_username` (string) - The username to use in order to communicate over SSH
to the running machine.
- `ssh_username` (string) - The username to use in order to communicate over
SSH to the running machine.
### Optional:
- `ami_block_device_mappings` (array of block device mappings) - Add the block
device mappings to the AMI. The block device mappings allow for keys:
- `device_name` (string) - The device name exposed to the instance (for
- `device_name` (string) - The device name exposed to the instance (for
example, "/dev/sdh" or "xvdh")
- `virtual_name` (string) - The virtual device name. See the documentation on
- `virtual_name` (string) - The virtual device name. See the documentation on
[Block Device
Mapping](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_BlockDeviceMapping.html)
for more information
- `snapshot_id` (string) - The ID of the snapshot
- `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
- `snapshot_id` (string) - The ID of the snapshot
- `volume_type` (string) - The volume type. gp2 for General Purpose (SSD)
volumes, io1 for Provisioned IOPS (SSD) volumes, and standard for Magnetic
volumes
- `volume_size` (integer) - The size of the volume, in GiB. Required if not
- `volume_size` (integer) - The size of the volume, in GiB. Required if not
specifying a `snapshot_id`
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
- `delete_on_termination` (boolean) - Indicates whether the EBS volume is
deleted on instance termination
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `no_device` (boolean) - Suppresses the specified device included in the
- `encrypted` (boolean) - Indicates whether to encrypt the volume or not
- `no_device` (boolean) - Suppresses the specified device included in the
block device mapping of the AMI
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
- `iops` (integer) - The number of I/O operations per second (IOPS) that the
volume supports. See the documentation on
[IOPs](http://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_EbsBlockDevice.html)
for more information
- `ami_description` (string) - The description to set for the resulting AMI(s).
By default this description is empty.
- `ami_description` (string) - The description to set for the
resulting AMI(s). By default this description is empty.
- `ami_groups` (array of strings) - A list of groups that have access to launch
the resulting AMI(s). By default no groups have permission to launch the AMI.
`all` will make the AMI publicly accessible. AWS currently doesn't accept any
value other than "all".
- `ami_groups` (array of strings) - A list of groups that have access to
launch the resulting AMI(s). By default no groups have permission to launch
the AMI. `all` will make the AMI publicly accessible. AWS currently doesn't
accept any value other than "all".
- `ami_product_codes` (array of strings) - A list of product codes to associate
with the AMI. By default no product codes are associated with the AMI.
- `ami_product_codes` (array of strings) - A list of product codes to
associate with the AMI. By default no product codes are associated with
the AMI.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to. Tags
and attributes are copied along with the AMI. AMI copying takes time depending
on the size of the AMI, but will generally take many minutes.
- `ami_regions` (array of strings) - A list of regions to copy the AMI to.
Tags and attributes are copied along with the AMI. AMI copying takes time
depending on the size of the AMI, but will generally take many minutes.
- `ami_users` (array of strings) - A list of account IDs that have access to
launch the resulting AMI(s). By default no additional users other than the
@@ -121,9 +122,9 @@ builder.
- `availability_zone` (string) - Destination availability zone to launch
instance in. Leave this empty to allow Amazon to auto-assign.
- `enhanced_networking` (boolean) - Enable enhanced networking (SriovNetSupport)
on HVM-compatible AMIs. If true, add `ec2:ModifyInstanceAttribute` to your AWS
IAM policy.
- `enhanced_networking` (boolean) - Enable enhanced
networking (SriovNetSupport) on HVM-compatible AMIs. If true, add
`ec2:ModifyInstanceAttribute` to your AWS IAM policy.
- `force_deregister` (boolean) - Force Packer to first deregister an existing
AMI if one with the same name already exists. Default `false`.
@@ -136,38 +137,39 @@ builder.
block device mappings to the launch instance. The block device mappings are
the same as `ami_block_device_mappings` above.
- `run_tags` (object of key/value strings) - Tags to apply to the instance that
is *launched* to create the AMI. These tags are *not* applied to the resulting
AMI unless they're duplicated in `tags`.
- `run_tags` (object of key/value strings) - Tags to apply to the instance
that is *launched* to create the AMI. These tags are *not* applied to the
resulting AMI unless they're duplicated in `tags`.
- `security_group_id` (string) - The ID (*not* the name) of the security group
to assign to the instance. By default this is not set and Packer will
automatically create a new temporary security group to allow SSH access. Note
that if this is specified, you must be sure the security group allows access
to the `ssh_port` given below.
automatically create a new temporary security group to allow SSH access.
Note that if this is specified, you must be sure the security group allows
access to the `ssh_port` given below.
- `security_group_ids` (array of strings) - A list of security groups as
described above. Note that if this is specified, you must omit the
`security_group_id`.
- `spot_price` (string) - The maximum hourly price to pay for a spot instance to
create the AMI. Spot instances are a type of instance that EC2 starts when the
current spot price is less than the maximum price you specify. Spot price will
be updated based on available spot instance capacity and current spot
instance requests. It may save you some costs. You can set this to "auto" for
Packer to automatically discover the best spot price.
- `spot_price` (string) - The maximum hourly price to pay for a spot instance
to create the AMI. Spot instances are a type of instance that EC2 starts
when the current spot price is less than the maximum price you specify. Spot
price will be updated based on available spot instance capacity and current
spot instance requests. It may save you some costs. You can set this to
"auto" for Packer to automatically discover the best spot price.
- `spot_price_auto_product` (string) - Required if `spot_price` is set
to "auto". This tells Packer what sort of AMI you're launching to find the
best spot price. This must be one of: `Linux/UNIX`, `SUSE Linux`, `Windows`,
`Linux/UNIX (Amazon VPC)`, `SUSE Linux (Amazon VPC)`, `Windows (Amazon VPC)`
- `ssh_keypair_name` (string) - If specified, this is the key that will be used
for SSH with the machine. By default, this is blank, and Packer will generate
a temporary keypair. `ssh_private_key_file` must be specified with this.
- `ssh_keypair_name` (string) - If specified, this is the key that will be
used for SSH with the machine. By default, this is blank, and Packer will
generate a temporary keypair. `ssh_private_key_file` must be specified
with this.
- `ssh_private_ip` (boolean) - If true, then SSH will always use the private IP
if available.
- `ssh_private_ip` (boolean) - If true, then SSH will always use the private
IP if available.
- `subnet_id` (string) - If using VPC, the ID of the subnet, such as
"subnet-12345def", where Packer will launch the EC2 instance. This field is
@@ -179,20 +181,20 @@ builder.
- `temporary_key_pair_name` (string) - The name of the temporary keypair
to generate. By default, Packer generates a name with a UUID.
- `token` (string) - The access token to use. This is different from the access
key and secret key. If you're not sure what this is, then you probably don't
need it. This will also be read from the `AWS_SECURITY_TOKEN`
- `token` (string) - The access token to use. This is different from the
access key and secret key. If you're not sure what this is, then you
probably don't need it. This will also be read from the `AWS_SECURITY_TOKEN`
environmental variable.
- `user_data` (string) - User data to apply when launching the instance. Note
that you need to be careful about escaping characters due to the templates
being JSON. It is often more convenient to use `user_data_file`, instead.
- `user_data_file` (string) - Path to a file that will be used for the user data
when launching the instance.
- `user_data_file` (string) - Path to a file that will be used for the user
data when launching the instance.
- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID in
order to create a temporary security group within the VPC.
- `vpc_id` (string) - If launching into a VPC subnet, Packer needs the VPC ID
in order to create a temporary security group within the VPC.
- `windows_password_timeout` (string) - The timeout for waiting for a Windows
password for Windows instances. Defaults to 20 minutes. Example value: "10m"
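To illustrate how several of the optional keys above fit into a single builder definition, here is a hedged amazon-ebs fragment; the AMI, tag, and device values are placeholders and are not taken from this commit.

```json
{
  "type": "amazon-ebs",
  "region": "us-east-1",
  "source_ami": "ami-de0d9eb7",
  "instance_type": "m3.medium",
  "ssh_username": "ubuntu",
  "ami_name": "packer-quick-start {{timestamp}}",
  "spot_price": "auto",
  "spot_price_auto_product": "Linux/UNIX",
  "run_tags": { "role": "packer-builder" },
  "ami_block_device_mappings": [{
    "device_name": "/dev/sdb",
    "virtual_name": "ephemeral0"
  }]
}
```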
@@ -13,12 +13,13 @@ multiple builders depending on the strategy you want to use to build the AMI.
Packer supports the following builders at the moment:
- [amazon-ebs](/docs/builders/amazon-ebs.html) - Create EBS-backed AMIs by
launching a source AMI and re-packaging it into a new AMI after provisioning.
If in doubt, use this builder, which is the easiest to get started with.
launching a source AMI and re-packaging it into a new AMI
after provisioning. If in doubt, use this builder, which is the easiest to
get started with.
- [amazon-instance](/docs/builders/amazon-instance.html) - Create instance-store
AMIs by launching and provisioning a source instance, then rebundling it and
uploading it to S3.
- [amazon-instance](/docs/builders/amazon-instance.html) - Create
instance-store AMIs by launching and provisioning a source instance, then
rebundling it and uploading it to S3.
- [amazon-chroot](/docs/builders/amazon-chroot.html) - Create EBS-backed AMIs
from an existing EC2 instance by mounting the root device and using a
@@ -34,34 +34,36 @@ builder.
### Required:
- `api_token` (string) - The client TOKEN to use to access your account. It can
also be specified via environment variable `DIGITALOCEAN_API_TOKEN`, if set.
- `api_token` (string) - The client TOKEN to use to access your account. It
can also be specified via environment variable `DIGITALOCEAN_API_TOKEN`,
if set.
- `image` (string) - The name (or slug) of the base image to use. This is the
image that will be used to launch a new droplet and provision it. See
https://developers.digitalocean.com/documentation/v2/\#list-all-images for
details on how to get a list of the accepted image names/slugs.
- `region` (string) - The name (or slug) of the region to launch the droplet in.
Consequently, this is the region where the snapshot will be available. See
- `region` (string) - The name (or slug) of the region to launch the
droplet in. Consequently, this is the region where the snapshot will
be available. See
https://developers.digitalocean.com/documentation/v2/\#list-all-regions for
the accepted region names/slugs.
- `size` (string) - The name (or slug) of the droplet size to use. See
https://developers.digitalocean.com/documentation/v2/\#list-all-sizes for the
accepted size names/slugs.
https://developers.digitalocean.com/documentation/v2/\#list-all-sizes for
the accepted size names/slugs.
### Optional:
- `droplet_name` (string) - The name assigned to the droplet. DigitalOcean sets
the hostname of the machine to this value.
- `droplet_name` (string) - The name assigned to the droplet. DigitalOcean
sets the hostname of the machine to this value.
- `private_networking` (boolean) - Set to `true` to enable private networking
for the droplet being created. This defaults to `false`, or not enabled.
- `snapshot_name` (string) - The name of the resulting snapshot that will appear
in your account. This must be unique. To help make this unique, use a function
like `timestamp` (see [configuration
- `snapshot_name` (string) - The name of the resulting snapshot that will
appear in your account. This must be unique. To help make this unique, use a
function like `timestamp` (see [configuration
templates](/docs/templates/configuration-templates.html) for more info)
- `state_timeout` (string) - The time to wait, as a duration string, for a
@@ -93,8 +93,8 @@ builder.
- `login_server` (string) - The server address to login to.
- `pull` (boolean) - If true, the configured image will be pulled using
`docker pull` prior to use. Otherwise, it is assumed the image already exists
and can be used. This defaults to true if not set.
`docker pull` prior to use. Otherwise, it is assumed the image already
exists and can be used. This defaults to true if not set.
- `run_command` (array of strings) - An array of arguments to pass to
`docker run` in order to run the container. By default this is set to
@@ -226,11 +226,11 @@ Dockerfiles have some additional features that Packer doesn't support which are
able to be worked around. Many of these features will be automated by Packer in
the future:
- Dockerfiles will snapshot the container at each step, allowing you to go back
to any step in the history of building. Packer doesn't do this yet, but
- Dockerfiles will snapshot the container at each step, allowing you to go
back to any step in the history of building. Packer doesn't do this yet, but
inter-step snapshotting is on the way.
- Dockerfiles can contain information such as exposed ports, shared volumes, and
other metadata. Packer builds a raw Docker container image that has none of
this metadata. You can pass in much of this metadata at runtime with
- Dockerfiles can contain information such as exposed ports, shared volumes,
and other metadata. Packer builds a raw Docker container image that has none
of this metadata. You can pass in much of this metadata at runtime with
`docker run`.
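For context, a minimal docker builder stanza might look like the sketch below (the image name and export path are assumptions); ports, volumes, and other metadata would then be supplied at `docker run` time as just described.

```json
{
  "type": "docker",
  "image": "ubuntu:14.04",
  "pull": true,
  "export_path": "image.tar"
}
```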
@@ -43,10 +43,10 @@ builder.
- `image_name` (string) - The name of the resulting image.
- `source_image` (string) - The ID or full URL to the base image to use. This is
the image that will be used to launch a new server and provision it. Unless
you specify completely custom SSH settings, the source image must have
`cloud-init` installed so that the keypair gets assigned properly.
- `source_image` (string) - The ID or full URL to the base image to use. This
is the image that will be used to launch a new server and provision it.
Unless you specify completely custom SSH settings, the source image must
have `cloud-init` installed so that the keypair gets assigned properly.
- `username` (string) - The username used to connect to the OpenStack service.
If not specified, Packer will use the environment variable `OS_USERNAME`,
@@ -61,19 +61,19 @@ builder.
- `api_key` (string) - The API key used to access OpenStack. Some OpenStack
installations require this.
- `availability_zone` (string) - The availability zone to launch the server in.
If this isn't specified, the default enforced by your OpenStack cluster will
be used. This may be required for some OpenStack clusters.
- `availability_zone` (string) - The availability zone to launch the
server in. If this isn't specified, the default enforced by your OpenStack
cluster will be used. This may be required for some OpenStack clusters.
- `floating_ip` (string) - A specific floating IP to assign to this instance.
`use_floating_ip` must also be set to true for this to have an effect.
- `floating_ip_pool` (string) - The name of the floating IP pool to use to
allocate a floating IP. `use_floating_ip` must also be set to true for this to
have an effect.
allocate a floating IP. `use_floating_ip` must also be set to true for this
to have an effect.
- `insecure` (boolean) - Whether or not the connection to OpenStack can be done
over an insecure connection. By default this is false.
- `insecure` (boolean) - Whether or not the connection to OpenStack can be
done over an insecure connection. By default this is false.
- `networks` (array of strings) - A list of networks by UUID to attach to
this instance.
@@ -85,13 +85,13 @@ builder.
- `security_groups` (array of strings) - A list of security groups by name to
add to this instance.
- `region` (string) - The name of the region, such as "DFW", in which to launch
the server to create the AMI. If not specified, Packer will use the
- `region` (string) - The name of the region, such as "DFW", in which to
launch the server to create the AMI. If not specified, Packer will use the
environment variable `OS_REGION_NAME`, if set.
- `ssh_interface` (string) - The type of interface to connect via SSH. Values
useful for Rackspace are "public" or "private", and the default behavior is to
connect via whichever is returned first from the OpenStack API.
useful for Rackspace are "public" or "private", and the default behavior is
to connect via whichever is returned first from the OpenStack API.
- `use_floating_ip` (boolean) - Whether or not to use a floating IP for
the instance. Defaults to false.
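Pulling several of the options above into one place, a hedged openstack builder fragment might look like this; the flavor, floating IP pool, and source image are placeholders that depend on your cloud.

```json
{
  "type": "openstack",
  "username": "packer",
  "image_name": "packer-example-image",
  "source_image": "<base image ID or URL>",
  "flavor": "m1.tiny",
  "ssh_username": "root",
  "floating_ip_pool": "public",
  "use_floating_ip": true
}
```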
@@ -56,37 +56,38 @@ builder.
- `source_path` (string) - The path to a PVM directory that acts as the source
of this build.
- `ssh_username` (string) - The username to use to SSH into the machine once the
OS is installed.
- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.
- `parallels_tools_flavor` (string) - The flavor of the Parallels Tools ISO to
install into the VM. Valid values are "win", "lin", "mac", "os2" and "other".
This can be omitted only if `parallels_tools_mode` is "disable".
install into the VM. Valid values are "win", "lin", "mac", "os2"
and "other". This can be omitted only if `parallels_tools_mode`
is "disable".
### Optional:
- `boot_command` (array of strings) - This is an array of commands to type when
the virtual machine is first booted. The goal of these commands should be to
type just enough to initialize the operating system installer. Special keys
can be typed as well, and are covered in the section below on the
- `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the
boot command. If this is not specified, it is assumed the installer will
start itself.
- `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five
seconds and one minute 30 seconds, respectively. If this isn't specified, the
default is 10 seconds.
- `floppy_files` (array of strings) - A list of files to put onto a floppy disk
that is attached when the VM is booted for the first time. This is most useful
for unattended Windows installs, which look for an `Autounattend.xml` file on
removable media. By default no floppy will be attached. The files listed in
this configuration will all be put into the root directory of the floppy disk;
sub-directories are not supported.
- `reassign_mac` (boolean) - If this is "false" the MAC address of the first NIC
will be reused when imported, else a new MAC address will be generated
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't
specified, the default is 10 seconds.
- `floppy_files` (array of strings) - A list of files to put onto a floppy
disk that is attached when the VM is booted for the first time. This is most
useful for unattended Windows installs, which look for an `Autounattend.xml`
file on removable media. By default no floppy will be attached. The files
listed in this configuration will all be put into the root directory of the
floppy disk; sub-directories are not supported.
- `reassign_mac` (boolean) - If this is "false" the MAC address of the first
NIC will be reused when imported, else a new MAC address will be generated
by Parallels. Defaults to "false".
- `output_directory` (string) - This is the path to the directory where the
@@ -97,42 +98,44 @@ builder.
name of the build.
- `parallels_tools_guest_path` (string) - The path in the VM to upload
Parallels Tools. This only takes effect if `parallels_tools_mode` is "upload".
This is a [configuration
Parallels Tools. This only takes effect if `parallels_tools_mode`
is "upload". This is a [configuration
template](/docs/templates/configuration-templates.html) that has a single
valid variable: `Flavor`, which will be the value of `parallels_tools_flavor`.
By default this is "prl-tools-{{.Flavor}}.iso" which should upload into the
login directory of the user.
- `parallels_tools_mode` (string) - The method by which Parallels Tools are made
available to the guest for installation. Valid options are "upload", "attach",
or "disable". If the mode is "attach" the Parallels Tools ISO will be attached
as a CD device to the virtual machine. If the mode is "upload" the Parallels
Tools ISO will be uploaded to the path specified by
valid variable: `Flavor`, which will be the value of
`parallels_tools_flavor`. By default this is "prl-tools-{{.Flavor}}.iso"
which should upload into the login directory of the user.
- `parallels_tools_mode` (string) - The method by which Parallels Tools are
made available to the guest for installation. Valid options are "upload",
"attach", or "disable". If the mode is "attach" the Parallels Tools ISO will
be attached as a CD device to the virtual machine. If the mode is "upload"
the Parallels Tools ISO will be uploaded to the path specified by
`parallels_tools_guest_path`. The default value is "upload".
- `prlctl` (array of array of strings) - Custom `prlctl` commands to execute in
order to further customize the virtual machine being created. The value of
this is an array of commands to execute. The commands are executed in the
order defined in the template. For each command, the command is defined itself
as an array of strings, where each string represents a single argument on the
command-line to `prlctl` (but excluding `prlctl` itself). Each arg is treated
as a [configuration template](/docs/templates/configuration-templates.html),
where the `Name` variable is replaced with the VM name. More details on how to
use `prlctl` are below.
- `prlctl_post` (array of array of strings) - Identical to `prlctl`, except that
it is run after the virtual machine is shutdown, and before the virtual
- `prlctl` (array of array of strings) - Custom `prlctl` commands to execute
in order to further customize the virtual machine being created. The value
of this is an array of commands to execute. The commands are executed in the
order defined in the template. For each command, the command is defined
itself as an array of strings, where each string represents a single
argument on the command-line to `prlctl` (but excluding `prlctl` itself).
Each arg is treated as a [configuration
template](/docs/templates/configuration-templates.html), where the `Name`
variable is replaced with the VM name. More details on how to use `prlctl`
are below.
- `prlctl_post` (array of array of strings) - Identical to `prlctl`, except
that it is run after the virtual machine is shutdown, and before the virtual
machine is exported.
- `prlctl_version_file` (string) - The path within the virtual machine to upload
a file that contains the `prlctl` version that was used to create the machine.
This information can be useful for provisioning. By default this is
".prlctl\_version", which will generally upload it into the home directory.
- `prlctl_version_file` (string) - The path within the virtual machine to
upload a file that contains the `prlctl` version that was used to create
the machine. This information can be useful for provisioning. By default
this is ".prlctl\_version", which will generally upload it into the
home directory.
- `shutdown_command` (string) - The command to use to gracefully shut down the
machine once all the provisioning is done. By default this is an empty string,
which tells Packer to just forcefully shut down the machine.
machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine.
- `shutdown_timeout` (string) - The amount of time to wait after executing the
`shutdown_command` for the virtual machine to actually shut down. If it
@@ -190,9 +193,9 @@ proper key:
- `<pageUp>` `<pageDown>` - Simulates pressing the page up and page down keys.
- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before sending
any additional keys. This is useful if you have to generally wait for the UI
to update before typing more.
- `<wait>` `<wait5>` `<wait10>` - Adds a 1, 5 or 10 second pause before
sending any additional keys. This is useful if you have to generally wait
for the UI to update before typing more.
In addition to the special keys, each command to type is treated as a
[configuration template](/docs/templates/configuration-templates.html). The
@@ -16,16 +16,16 @@ Packer actually comes with multiple builders able to create Parallels machines,
depending on the strategy you want to use to build the image. Packer supports
the following Parallels builders:
- [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO file,
creates a brand new Parallels VM, installs an OS, provisions software within
the OS, then exports that machine to create an image. This is best for people
who want to start from scratch.
- [parallels-iso](/docs/builders/parallels-iso.html) - Starts from an ISO
file, creates a brand new Parallels VM, installs an OS, provisions software
within the OS, then exports that machine to create an image. This is best
for people who want to start from scratch.
- [parallels-pvm](/docs/builders/parallels-pvm.html) - This builder imports an
existing PVM file, runs provisioners on top of that VM, and exports that
machine to create an image. This is best if you have an existing Parallels VM
export you want to use as the source. As an additional benefit, you can feed
the artifact of this builder back into itself to iterate on a machine.
machine to create an image. This is best if you have an existing Parallels
VM export you want to use as the source. As an additional benefit, you can
feed the artifact of this builder back into itself to iterate on a machine.
## Requirements
@@ -18,11 +18,12 @@ the following VirtualBox builders:
- [virtualbox-iso](/docs/builders/virtualbox-iso.html) - Starts from an ISO
file, creates a brand new VirtualBox VM, installs an OS, provisions software
within the OS, then exports that machine to create an image. This is best for
people who want to start from scratch.
within the OS, then exports that machine to create an image. This is best
for people who want to start from scratch.
- [virtualbox-ovf](/docs/builders/virtualbox-ovf.html) - This builder imports an
existing OVF/OVA file, runs provisioners on top of that VM, and exports that
machine to create an image. This is best if you have an existing VirtualBox VM
export you want to use as the source. As an additional benefit, you can feed
the artifact of this builder back into itself to iterate on a machine.
- [virtualbox-ovf](/docs/builders/virtualbox-ovf.html) - This builder imports
an existing OVF/OVA file, runs provisioners on top of that VM, and exports
that machine to create an image. This is best if you have an existing
VirtualBox VM export you want to use as the source. As an additional
benefit, you can feed the artifact of this builder back into itself to
iterate on a machine.
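As a sketch of the second case, a virtualbox-ovf build that starts from an existing export might be configured roughly like this; the source path and credentials are placeholders.

```json
{
  "type": "virtualbox-ovf",
  "source_path": "source.ovf",
  "ssh_username": "packer",
  "ssh_password": "packer",
  "shutdown_command": "echo 'packer' | sudo -S shutdown -P now"
}
```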
@@ -55,23 +55,23 @@ builder.
- `source_path` (string) - Path to the source VMX file to clone.
- `ssh_username` (string) - The username to use to SSH into the machine once the
OS is installed.
- `ssh_username` (string) - The username to use to SSH into the machine once
the OS is installed.
### Optional:
- `boot_command` (array of strings) - This is an array of commands to type when
the virtual machine is first booted. The goal of these commands should be to
type just enough to initialize the operating system installer. Special keys
can be typed as well, and are covered in the section below on the
- `boot_command` (array of strings) - This is an array of commands to type
when the virtual machine is first booted. The goal of these commands should
be to type just enough to initialize the operating system installer. Special
keys can be typed as well, and are covered in the section below on the
boot command. If this is not specified, it is assumed the installer will
start itself.
- `boot_wait` (string) - The time to wait after booting the initial virtual
machine before typing the `boot_command`. The value of this should be
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait five
seconds and one minute 30 seconds, respectively. If this isn't specified, the
default is 10 seconds.
a duration. Examples are "5s" and "1m30s" which will cause Packer to wait
five seconds and one minute 30 seconds, respectively. If this isn't
specified, the default is 10 seconds.
- `floppy_files` (array of strings) - A list of files to place onto a floppy
disk that is attached when the VM is booted. This is most useful for
@@ -79,33 +79,33 @@ builder.
removable media. By default, no floppy will be attached. All files listed in
this setting get placed into the root directory of the floppy and the floppy
is attached as the first floppy device. Currently, no support exists for
creating sub-directories on the floppy. Wildcard characters (\*, ?, and \[\])
are allowed. Directory names are also allowed, which will add all the files
found in the directory to the floppy.
creating sub-directories on the floppy. Wildcard characters (\*, ?,
and \[\]) are allowed. Directory names are also allowed, which will add all
the files found in the directory to the floppy.
- `fusion_app_path` (string) - Path to "VMware Fusion.app". By default this is
"/Applications/VMware Fusion.app" but this setting allows you to
customize this.
- `headless` (boolean) - Packer defaults to building VMware virtual machines by
launching a GUI that shows the console of the machine being built. When this
value is set to true, the machine will start without a console. For VMware
machines, Packer will output VNC connection information in case you need to
connect to the console to debug the build process.
- `headless` (boolean) - Packer defaults to building VMware virtual machines
by launching a GUI that shows the console of the machine being built. When
this value is set to true, the machine will start without a console. For
VMware machines, Packer will output VNC connection information in case you
need to connect to the console to debug the build process.
- `http_directory` (string) - Path to a directory to serve using an HTTP server.
The files in this directory will be available over HTTP that will be
requestable from the virtual machine. This is useful for hosting kickstart
files and so on. By default this is "", which means no HTTP server will
be started. The address and port of the HTTP server will be available as
variables in `boot_command`. This is covered in more detail below.
- `http_directory` (string) - Path to a directory to serve using an
HTTP server. The files in this directory will be available over HTTP that
will be requestable from the virtual machine. This is useful for hosting
kickstart files and so on. By default this is "", which means no HTTP server
will be started. The address and port of the HTTP server will be available
as variables in `boot_command`. This is covered in more detail below.
- `http_port_min` and `http_port_max` (integer) - These are the minimum and
maximum port to use for the HTTP server started to serve the `http_directory`.
Because Packer often runs in parallel, Packer will choose a randomly available
port in this range to run the HTTP server. If you want to force the HTTP
server to be on one port, make this minimum and maximum port the same. By
default the values are 8000 and 9000, respectively.
maximum port to use for the HTTP server started to serve the
`http_directory`. Because Packer often runs in parallel, Packer will choose
a randomly available port in this range to run the HTTP server. If you want
to force the HTTP server to be on one port, make this minimum and maximum
port the same. By default the values are 8000 and 9000, respectively.
- `output_directory` (string) - This is the path to the directory where the
resulting virtual machine will be created. This may be relative or absolute.
@@ -115,11 +115,12 @@ builder.
name of the build.
- `shutdown_command` (string) - The command to use to gracefully shut down the
machine once all the provisioning is done. By default this is an empty string,
which tells Packer to just forcefully shut down the machine; if a shutdown
command already runs inside a script, this may safely be omitted. If one or
more scripts require a reboot, it is suggested to leave this blank, since
reboots may fail, and to specify the final shutdown command in your last script.
machine once all the provisioning is done. By default this is an empty
string, which tells Packer to just forcefully shut down the machine; if a
shutdown command already runs inside a script, this may safely be omitted. If
one or more scripts require a reboot, it is suggested to leave this blank,
since reboots may fail, and to specify the final shutdown command in your
last script.
- `shutdown_timeout` (string) - The amount of time to wait after executing the
`shutdown_command` for the virtual machine to actually shut down. If it
@@ -136,16 +137,16 @@ builder.
machine, without the file extension. By default this is "packer-BUILDNAME",
where "BUILDNAME" is the name of the build.
- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter into
the virtual machine VMX file. This is for advanced users who want to set
properties such as memory, CPU, etc.
- `vmx_data` (object of key/value strings) - Arbitrary key/values to enter
into the virtual machine VMX file. This is for advanced users who want to
set properties such as memory, CPU, etc.
- `vmx_data_post` (object of key/value strings) - Identical to `vmx_data`,
except that it is run after the virtual machine is shutdown, and before the
virtual machine is exported.
- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port to
use for VNC access to the virtual machine. The builder uses VNC to type the
initial `boot_command`. Because Packer generally runs in parallel, Packer uses
a randomly chosen port in this range that appears available. By default this
is 5900 to 6000. The minimum and maximum ports are inclusive.
- `vnc_port_min` and `vnc_port_max` (integer) - The minimum and maximum port
to use for VNC access to the virtual machine. The builder uses VNC to type
the initial `boot_command`. Because Packer generally runs in parallel,
Packer uses a randomly chosen port in this range that appears available. By
default this is 5900 to 6000. The minimum and maximum ports are inclusive.
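Tying the options above together, a hedged vmware-vmx fragment might look like the following; the source path, credentials, and VMX keys (`memsize`, `numvcpus`) are assumptions for illustration.

```json
{
  "type": "vmware-vmx",
  "source_path": "/path/to/source.vmx",
  "ssh_username": "packer",
  "ssh_password": "packer",
  "shutdown_command": "sudo shutdown -h now",
  "headless": true,
  "vmx_data": {
    "memsize": "1024",
    "numvcpus": "2"
  }
}
```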
@@ -16,9 +16,9 @@ depending on the strategy you want to use to build the image. Packer supports
the following VMware builders:
- [vmware-iso](/docs/builders/vmware-iso.html) - Starts from an ISO file,
creates a brand new VMware VM, installs an OS, provisions software within the
OS, then exports that machine to create an image. This is best for people who
want to start from scratch.
creates a brand new VMware VM, installs an OS, provisions software within
the OS, then exports that machine to create an image. This is best for
people who want to start from scratch.
- [vmware-vmx](/docs/builders/vmware-vmx.html) - This builder imports an
existing VMware machine (from a VMX file), runs provisioners on top of that
@@ -20,21 +20,23 @@ artifacts that are created will be outputted at the end of the build.
- `-color=false` - Disables colorized output. Enabled by default.
- `-debug` - Disables parallelization and enables debug mode. Debug mode flags
the builders that they should output debugging information. The exact behavior
of debug mode is left to the builder. In general, builders usually will stop
between each step, waiting for keyboard input before continuing. This will
allow the user to inspect state and so on.
the builders that they should output debugging information. The exact
behavior of debug mode is left to the builder. In general, builders usually
will stop between each step, waiting for keyboard input before continuing.
This will allow the user to inspect state and so on.
- `-except=foo,bar,baz` - Builds all the builds except those with the given
comma-separated names. Build names by default are the names of their builders,
unless a specific `name` attribute is specified within the configuration.
comma-separated names. Build names by default are the names of their
builders, unless a specific `name` attribute is specified within
the configuration.
- `-force` - Forces a builder to run when artifacts from a previous build
prevent a build from running. The exact behavior of a forced build is left to
the builder. In general, a builder supporting the forced build will remove the
artifacts from the previous build. This will allow the user to repeat a build
without having to manually clean these artifacts beforehand.
prevent a build from running. The exact behavior of a forced build is left
to the builder. In general, a builder supporting the forced build will
remove the artifacts from the previous build. This will allow the user to
repeat a build without having to manually clean these artifacts beforehand.
- `-only=foo,bar,baz` - Only build the builds with the given
comma-separated names. Build names by default are the names of their builders,
unless a specific `name` attribute is specified within the configuration.
comma-separated names. Build names by default are the names of their
builders, unless a specific `name` attribute is specified within
the configuration.
@@ -19,7 +19,7 @@ The fix command will output the changed template to standard out, so you should
redirect standard out using standard OS-specific techniques if you want to save it
to a file. For example, on Linux systems, you may want to do this:
$ packer fix old.json > new.json
\$ packer fix old.json &gt; new.json
If fixing fails for any reason, the fix command will exit with a non-zero exit
status. Error messages appear on standard error, so if you're redirecting
@@ -55,18 +55,20 @@ Each component is explained below:
- **timestamp** is a Unix timestamp in UTC of when the message was printed.
- **target** is the target of the following output. This is empty if the message
is related to Packer globally. Otherwise, this is generally a build name so
you can relate output to a specific build while parallel builds are running.
- **target** is the target of the following output. This is empty if the
message is related to Packer globally. Otherwise, this is generally a build
name so you can relate output to a specific build while parallel builds
are running.
- **type** is the type of machine-readable message being outputted. There are a
set of standard types which are covered later, but each component of Packer
(builders, provisioners, etc.) may output their own custom types as well,
allowing the machine-readable output to be infinitely flexible.
- **type** is the type of machine-readable message being outputted. There are
a set of standard types which are covered later, but each component of
Packer (builders, provisioners, etc.) may output their own custom types as
well, allowing the machine-readable output to be infinitely flexible.
- **data** is zero or more comma-separated values associated with the
prior type. The exact amount and meaning of this data is type-dependent, so
you must read the documentation associated with the type to understand fully.
you must read the documentation associated with the type to
understand fully.
Within the format, if data contains a comma, it is replaced with
`%!(PACKER_COMMA)`. This was preferred over an escape character such as `\'`
@@ -29,5 +29,5 @@ Errors validating build 'vmware'. 1 error(s) occurred:
## Options
- `-syntax-only` - Only the syntax of the template is checked. The configuration
is not validated.
- `-syntax-only` - Only the syntax of the template is checked. The
configuration is not validated.
@@ -55,8 +55,8 @@ dependencies.
- `github.com/mitchellh/packer` - Contains all the interfaces that you have to
implement for any given plugin.
- `github.com/mitchellh/packer/plugin` - Contains the code to serve the plugin.
This handles all the inter-process communication stuff.
- `github.com/mitchellh/packer/plugin` - Contains the code to serve
the plugin. This handles all the inter-process communication stuff.
There are two steps involved in creating a plugin:
@@ -64,8 +64,8 @@ The valid types for plugins are:
- `command` - A CLI sub-command for `packer`.
- `post-processor` - A post-processor responsible for taking an artifact from a
builder and turning it into something else.
- `post-processor` - A post-processor responsible for taking an artifact from
a builder and turning it into something else.
- `provisioner` - A provisioner to install software on images created by
a builder.
@@ -81,9 +81,9 @@ value is explained below:
- `Artifact` - The newly created artifact if no errors occurred.
- `bool` - If true, the input artifact will forcefully be kept. By default,
Packer typically deletes all input artifacts, since the user doesn't generally
want intermediary artifacts. However, some post-processors depend on the
previous artifact existing. If this is `true`, it forces packer to keep the
artifact around.
Packer typically deletes all input artifacts, since the user doesn't
generally want intermediary artifacts. However, some post-processors depend
on the previous artifact existing. If this is `true`, it forces packer to
keep the artifact around.
- `error` - Non-nil if there was an error in any way. If this is the case, the
other two return values are ignored.
@@ -12,8 +12,10 @@ These are the machine-readable types that exist as part of the output of
`packer build`.
<dl>
<dt>artifact (>= 2)</dt>
<dd>
<dt>
artifact (&gt;= 2)
</dt>
<dd>
<p>
Information about an artifact of the targeted item. This is a
fairly complex (but uniform!) machine-readable type that contains
@@ -37,10 +39,12 @@ These are the machine-readable types that exist as part of the output of
data points related to the subtype. The exact count and meaning
of this subtypes comes from the subtype documentation.
</p>
</dd>
<dt>artifact-count (1)</dt>
<dd>
</dd>
<dt>
artifact-count (1)
</dt>
<dd>
<p>
The number of artifacts associated with the given target. This
will always be outputted _before_ any other artifact information,
@@ -51,10 +55,12 @@ These are the machine-readable types that exist as part of the output of
<strong>Data 1: count</strong> - The number of artifacts as
a base 10 integer.
</p>
</dd>
<dt>artifact subtype: builder-id (1)</dt>
<dd>
</dd>
<dt>
artifact subtype: builder-id (1)
</dt>
<dd>
<p>
The unique ID of the builder that created this artifact.
</p>
@@ -62,19 +68,23 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: id</strong> - The unique ID of the builder.
</p>
</dd>
<dt>artifact subtype: end (0)</dt>
<dd>
</dd>
<dt>
artifact subtype: end (0)
</dt>
<dd>
<p>
The last machine-readable output line outputted for an artifact.
This is a sentinel value so you know that no more data related to
the targeted artifact will be outputted.
</p>
</dd>
<dt>artifact subtype: file (2)</dt>
<dd>
</dd>
<dt>
artifact subtype: file (2)
</dt>
<dd>
<p>
A single file associated with the artifact. There are 0 to
"files-count" of these entries to describe every file that is
@@ -89,10 +99,12 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 2: filename</strong> - The filename.
</p>
</dd>
<dt>artifact subtype: files-count (1)</dt>
<dd>
</dd>
<dt>
artifact subtype: files-count (1)
</dt>
<dd>
<p>
The number of files associated with this artifact. Not all
artifacts have files associated with them.
@@ -101,10 +113,12 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: count</strong> - The number of files.
</p>
</dd>
<dt>artifact subtype: id (1)</dt>
<dd>
</dd>
<dt>
artifact subtype: id (1)
</dt>
<dd>
<p>
The ID (if any) of the artifact that was built. Not all artifacts
have associated IDs. For example, AMIs built have IDs associated
......@@ -115,18 +129,22 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: id</strong> - The ID of the artifact.
</p>
</dd>
<dt>artifact subtype: nil (0)</dt>
<dd>
</dd>
<dt>
artifact subtype: nil (0)
</dt>
<dd>
<p>
If present, this means that the artifact was nil, or that the targeted
build completed successfully but no artifact was created.
</p>
</dd>
<dt>artifact subtype: string (1)</dt>
<dd>
</dd>
<dt>
artifact subtype: string (1)
</dt>
<dd>
<p>
The human-readable string description of the artifact provided by
the artifact itself.
......@@ -135,10 +153,12 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: string</strong> - The string output for the artifact.
</p>
</dd>
<dt>error-count (1)</dt>
<dd>
</dd>
<dt>
error-count (1)
</dt>
<dd>
<p>
The number of errors that occurred during the build. This will
always be outputted before any errors so you know how many are coming.
......@@ -148,10 +168,12 @@ These are the machine-readable types that exist as part of the output of
<strong>Data 1: count</strong> - The number of build errors as
a base 10 integer.
</p>
</dd>
<dt>error (1)</dt>
<dd>
</dd>
<dt>
error (1)
</dt>
<dd>
<p>
A build error that occurred. The target of this output will be
the build that had the error.
......@@ -160,6 +182,6 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: error</strong> - The error message as a string.
</p>
</dd>
</dd>
</dl>
......@@ -12,8 +12,10 @@ These are the machine-readable types that exist as part of the output of
`packer inspect`.
<dl>
<dt>template-variable (3)</dt>
<dd>
<dt>
template-variable (3)
</dt>
<dd>
<p>
A <a href="/docs/templates/user-variables.html">user variable</a>
defined within the template.
......@@ -32,10 +34,12 @@ These are the machine-readable types that exist as part of the output of
<strong>Data 3: required</strong> - If non-zero, then this variable
is required.
</p>
</dd>
<dt>template-builder (2)</dt>
<dd>
</dd>
<dt>
template-builder (2)
</dt>
<dd>
<p>
A builder defined within the template
</p>
......@@ -48,10 +52,12 @@ These are the machine-readable types that exist as part of the output of
generally be the same as the name unless you explicitly override
the name.
</p>
</dd>
<dt>template-provisioner (1)</dt>
<dd>
</dd>
<dt>
template-provisioner (1)
</dt>
<dd>
<p>
A provisioner defined within the template. Multiple of these may
exist. If so, they are outputted in the order they would run.
......@@ -60,6 +66,6 @@ These are the machine-readable types that exist as part of the output of
<p>
<strong>Data 1: name</strong> - The name/type of the provisioner.
</p>
</dd>
</dd>
</dl>
......@@ -12,8 +12,10 @@ These are the machine-readable types that exist as part of the output of
`packer version`.
<dl>
<dt>version (1)</dt>
<dd>
<dt>
version (1)
</dt>
<dd>
<p>The version number of Packer running.</p>
<p>
......@@ -21,19 +23,23 @@ These are the machine-readable types that exist as part of the output of
only including the major, minor, and patch versions. Example:
"0.2.4".
</p>
</dd>
<dt>version-commit (1)</dt>
<dd>
</dd>
<dt>
version-commit (1)
</dt>
<dd>
<p>The SHA1 of the Git commit that built this version of Packer.</p>
<p>
<strong>Data 1: commit SHA1</strong> - The SHA1 of the commit.
</p>
</dd>
<dt>version-prerelease (1)</dt>
<dd>
</dd>
<dt>
version-prerelease (1)
</dt>
<dd>
<p>
The prerelease tag (if any) for the running version of Packer. This
can be "beta", "dev", "alpha", etc. If this is empty, you can assume
......@@ -44,6 +50,6 @@ These are the machine-readable types that exist as part of the output of
<strong>Data 1: prerelease name</strong> - The name of the
prerelease tag.
</p>
</dd>
</dd>
</dl>
......@@ -12,8 +12,10 @@ These are the machine-readable types that can appear in almost any
machine-readable output and are provided by Packer core itself.
<dl>
<dt>ui (2)</dt>
<dd>
<dt>
ui (2)
</dt>
<dd>
<p>
Specifies the output and type of output that would've normally
gone to the console if Packer were running in human-readable
......@@ -28,6 +30,6 @@ machine-readable output and are provided by Packer core itself.
<strong>Data 2: output</strong> - The UI message that would have
been outputted.
</p>
</dd>
</dd>
</dl>
......@@ -24,12 +24,14 @@ Within each section, the format of the documentation is the following:
<br>
<dl>
<dt>type-name (data-count)</dt>
<dd>
<dt>
type-name (data-count)
</dt>
<dd>
<p>Description of the type.</p>
<p>
<strong>Data 1: name</strong> - Description.
</p>
</dd>
</dd>
</dl>
......@@ -32,13 +32,13 @@ The format of the configuration file is basic JSON.
Below is the list of all available configuration parameters for the core
configuration file. None of these are required, since all have sane defaults.
- `plugin_min_port` and `plugin_max_port` (integer) - These are the minimum and
maximum ports that Packer uses for communication with plugins, since plugin
communication happens over TCP connections on your local host. By default
these are 10,000 and 25,000, respectively. Be sure to set a fairly wide range
here, since Packer can easily use over 25 ports on a single run.
- `builders`, `commands`, `post-processors`, and `provisioners` are objects that
are used to install plugins. The details of how exactly these are set is
covered in more detail in the [installing plugins documentation
- `plugin_min_port` and `plugin_max_port` (integer) - These are the minimum
and maximum ports that Packer uses for communication with plugins, since
plugin communication happens over TCP connections on your local host. By
default these are 10,000 and 25,000, respectively. Be sure to set a fairly
wide range here, since Packer can easily use over 25 ports on a single run.
- `builders`, `commands`, `post-processors`, and `provisioners` are objects
that are used to install plugins. The details of how exactly these are set
is covered in more detail in the [installing plugins documentation
page](/docs/extend/plugins.html).
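For illustration, a core configuration file that sets the plugin port range described above might look like the following (the values shown are only examples):

``` {.javascript}
{
    "plugin_min_port": 10000,
    "plugin_max_port": 25000
}
```

Since the file is plain JSON, any of the other parameters above can be combined in the same object.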
......@@ -25,12 +25,12 @@ each can be found below:
- `PACKER_NO_COLOR` - Setting this to any value will disable color in
the terminal.
- `PACKER_PLUGIN_MAX_PORT` - The maximum port that Packer uses for communication
with plugins, since plugin communication happens over TCP connections on your
local host. The default is 25,000. See the [core configuration
page](/docs/other/core-configuration.html).
- `PACKER_PLUGIN_MIN_PORT` - The minimum port that Packer uses for communication
with plugins, since plugin communication happens over TCP connections on your
local host. The default is 10,000. See the [core configuration
page](/docs/other/core-configuration.html).
- `PACKER_PLUGIN_MAX_PORT` - The maximum port that Packer uses for
communication with plugins, since plugin communication happens over TCP
connections on your local host. The default is 25,000. See the [core
configuration page](/docs/other/core-configuration.html).
- `PACKER_PLUGIN_MIN_PORT` - The minimum port that Packer uses for
communication with plugins, since plugin communication happens over TCP
connections on your local host. The default is 10,000. See the [core
configuration page](/docs/other/core-configuration.html).
......@@ -28,9 +28,9 @@ Here is an example workflow:
1. Packer builds an AMI with the [Amazon AMI
builder](/docs/builders/amazon.html)
2. The `atlas` post-processor takes the resulting AMI and uploads it to Atlas.
The `atlas` post-processor is configured with the name of the AMI, for example
`hashicorp/foobar`, to create the artifact in Atlas or update the version if
the artifact already exists
The `atlas` post-processor is configured with the name of the AMI, for
example `hashicorp/foobar`, to create the artifact in Atlas or update the
version if the artifact already exists
3. The new version is ready and available to be used in deployments with a tool
like [Terraform](https://terraform.io)
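As a sketch of that workflow, the relevant post-processor block in a template might look like the following (the token is read from a user variable here, and the artifact name is illustrative):

``` {.javascript}
{
    "type": "atlas",
    "token": "{{user `atlas_token`}}",
    "artifact": "hashicorp/foobar",
    "artifact_type": "amazon.ami"
}
```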
......@@ -40,15 +40,16 @@ The configuration allows you to specify and access the artifact in Atlas.
### Required:
- `token` (string) - Your access token for the Atlas API. This can be generated
on your [tokens page](https://atlas.hashicorp.com/settings/tokens).
Alternatively you can export your Atlas token as an environmental variable and
remove it from the configuration.
- `token` (string) - Your access token for the Atlas API. This can be
generated on your [tokens
page](https://atlas.hashicorp.com/settings/tokens). Alternatively you can
export your Atlas token as an environmental variable and remove it from
the configuration.
- `artifact` (string) - The shorthand tag for your artifact that maps to Atlas,
i.e `hashicorp/foobar` for `atlas.hashicorp.com/hashicorp/foobar`. You must
have access to the organization, hashicorp in this example, in order to add an
artifact to the organization in Atlas.
- `artifact` (string) - The shorthand tag for your artifact that maps to
Atlas, i.e `hashicorp/foobar` for `atlas.hashicorp.com/hashicorp/foobar`.
You must have access to the organization, hashicorp in this example, in
order to add an artifact to the organization in Atlas.
- `artifact_type` (string) - For uploading AMIs to Atlas, `artifact_type` will
always be `amazon.ami`. This field must be defined because Atlas can host
......
......@@ -22,21 +22,21 @@ filename.
- `output` (string) - The path to save the compressed archive. The archive
format is inferred from the filename. E.g. `.tar.gz` will be a
gzipped tarball. `.zip` will be a zip file. If the extension can't be detected
packer defaults to `.tar.gz` behavior but will not change the filename.
gzipped tarball. `.zip` will be a zip file. If the extension can't be
detected packer defaults to `.tar.gz` behavior but will not change
the filename.
If you are executing multiple builders in parallel you should make sure
`output` is unique for each one. For example
`packer_{{.BuildName}}_{{.Provider}}.zip`.
If you are executing multiple builders in parallel you should make sure `output`
is unique for each one. For example `packer_{{.BuildName}}_{{.Provider}}.zip`.
### Optional:
If you want more control over how the archive is created you can specify the
following settings:
- `compression_level` (integer) - Specify the compression level, for algorithms
that support it, from 1 through 9 inclusive. Typically higher compression
levels take longer but produce smaller files. Defaults to `6`
- `compression_level` (integer) - Specify the compression level, for
algorithms that support it, from 1 through 9 inclusive. Typically higher
compression levels take longer but produce smaller files. Defaults to `6`
- `keep_input_artifact` (boolean) - Keep source files; defaults to `false`
......
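Putting the options above together, a compress post-processor block might look like this (the output name and compression level are only examples):

``` {.javascript}
{
    "type": "compress",
    "output": "packer_{{.BuildName}}_{{.Provider}}.zip",
    "compression_level": 9,
    "keep_input_artifact": true
}
```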
......@@ -18,8 +18,8 @@ pushes it to a Docker registry.
This post-processor has only optional configuration:
- `login` (boolean) - Defaults to false. If true, the post-processor will login
prior to pushing.
- `login` (boolean) - Defaults to false. If true, the post-processor will
login prior to pushing.
- `login_email` (string) - The email to use to authenticate to login.
......
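A minimal docker-push block using the options above might look like this sketch (the email address is a placeholder, and the remaining login options are omitted for brevity):

``` {.javascript}
{
    "type": "docker-push",
    "login": true,
    "login_email": "user@example.com"
}
```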
......@@ -63,19 +63,19 @@ on Vagrant Cloud, as well as authentication and version information.
- `version` (string) - The version number, typically incrementing a
previous version. The version string is validated based on [Semantic
Versioning](http://semver.org/). The string must match a pattern that could be
semver, and doesn't validate that the version comes after your
Versioning](http://semver.org/). The string must match a pattern that could
be semver, and doesn't validate that the version comes after your
previous versions.
### Optional:
- `no_release` (string) - If set to true, does not release the version on
Vagrant Cloud, which is what would make it active. You can manually release the
version via the API or Web UI. Defaults to false.
Vagrant Cloud, which is what would make it active. You can manually release
the version via the API or Web UI. Defaults to false.
- `vagrant_cloud_url` (string) - Override the base URL for Vagrant Cloud. This
is useful if you're using Vagrant Private Cloud in your own network. Defaults
to `https://vagrantcloud.com/api/v1`
is useful if you're using Vagrant Private Cloud in your own network.
Defaults to `https://vagrantcloud.com/api/v1`
- `version_description` (string) - Optionally markdown text used as a
full-length and in-depth description of the version, typically for denoting
......
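As a partial sketch showing only the options discussed above, a vagrant-cloud post-processor block might look like the following (the version is illustrative, and the required authentication and box settings covered earlier on the page are omitted):

``` {.javascript}
{
    "type": "vagrant-cloud",
    "version": "0.1.0",
    "no_release": "true",
    "vagrant_cloud_url": "https://vagrantcloud.com/api/v1"
}
```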
......@@ -51,10 +51,10 @@ However, if you want to configure things a bit more, the post-processor does
expose some configuration options. The available options are listed below, with
more details about certain options in following sections.
- `compression_level` (integer) - An integer representing the compression level
to use when creating the Vagrant box. Valid values range from 0 to 9, with 0
being no compression and 9 being the best compression. By default, compression
is enabled at level 6.
- `compression_level` (integer) - An integer representing the compression
level to use when creating the Vagrant box. Valid values range from 0 to 9,
with 0 being no compression and 9 being the best compression. By default,
compression is enabled at level 6.
- `include` (array of strings) - Paths to files to include in the Vagrant box.
These files will each be copied into the top level directory of the Vagrant
......
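For example, a vagrant post-processor block that tunes these options might look like this (the included path is a placeholder):

``` {.javascript}
{
    "type": "vagrant",
    "compression_level": 9,
    "include": ["extra-metadata.json"]
}
```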
......@@ -32,10 +32,11 @@ Required:
- `host` (string) - The vSphere host that will be contacted to perform the
VM upload.
- `password` (string) - Password to use to authenticate to the vSphere endpoint.
- `password` (string) - Password to use to authenticate to the
vSphere endpoint.
- `resource_pool` (string) - The resource pool to upload the VM to. This is *not
required*.
- `resource_pool` (string) - The resource pool to upload the VM to. This is
*not required*.
- `username` (string) - The username to use to authenticate to the
vSphere endpoint.
......
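A partial vsphere post-processor block using the options above might look like this (the host name and credentials are placeholders, and the remaining required options from this page are omitted):

``` {.javascript}
{
    "type": "vsphere",
    "host": "vcenter.example.com",
    "username": "packer",
    "password": "{{user `vsphere_password`}}",
    "resource_pool": "packer-builds"
}
```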
......@@ -44,49 +44,36 @@ Optional:
- `command` (string) - The command to invoke ansible. Defaults
to "ansible-playbook".
- `extra_arguments` (array of strings) - An array of extra arguments to pass to
the ansible command. By default, this is empty.
- `extra_arguments` (array of strings) - An array of extra arguments to pass
to the ansible command. By default, this is empty.
- `inventory_groups` (string) - A comma-separated list of groups to which packer
will assign the host `127.0.0.1`. A value of `my_group_1,my_group_2` will
generate an Ansible inventory like:
- `inventory_groups` (string) - A comma-separated list of groups to which
packer will assign the host `127.0.0.1`. A value of `my_group_1,my_group_2`
will generate an Ansible inventory like:
``` {.text}
[my_group_1]
127.0.0.1

[my_group_2]
127.0.0.1
```
- `inventory_file` (string) - The inventory file to be used by ansible. This
file must exist on your local system and will be uploaded to the
remote machine.
When using an inventory file, it's also required to `--limit` the hosts to the
specified host you're building. The `--limit` argument can be provided in the
`extra_arguments` option.
An example inventory file may look like:

``` {.text}
[chi-dbservers]
db-01 ansible_connection=local
db-02 ansible_connection=local

[chi-appservers]
app-01 ansible_connection=local
app-02 ansible_connection=local

[chi:children]
chi-dbservers
chi-appservers

[dbservers:children]
chi-dbservers

[appservers:children]
chi-appservers
```
- `playbook_dir` (string) - a path to the complete ansible directory structure
on your local system to be copied to the remote machine as the
......@@ -97,12 +84,12 @@ Optional:
`staging_directory`/playbooks. By default, this is empty.
- `group_vars` (string) - a path to the directory containing ansible group
variables on your local system to be copied to the remote machine. By default,
this is empty.
variables on your local system to be copied to the remote machine. By
default, this is empty.
- `host_vars` (string) - a path to the directory containing ansible host
variables on your local system to be copied to the remote machine. By default,
this is empty.
variables on your local system to be copied to the remote machine. By
default, this is empty.
- `role_paths` (array of strings) - An array of paths to role directories on
your local system. These will be uploaded to the remote machine under
......
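Tying the options above together, an ansible-local provisioner block might look like the following sketch (the playbook path, arguments, and group names are placeholders; `playbook_file` is assumed here to point at your playbook):

``` {.javascript}
{
    "type": "ansible-local",
    "playbook_file": "./playbook.yml",
    "extra_arguments": ["--extra-vars", "foo=bar"],
    "inventory_groups": "my_group_1,my_group_2"
}
```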
......@@ -43,37 +43,37 @@ configuration is actually required.
- `chef_environment` (string) - The name of the chef\_environment sent to the
Chef server. By default this is empty and will not use an environment.
- `config_template` (string) - Path to a template that will be used for the Chef
configuration file. By default Packer only sets configuration it needs to
match the settings set in the provisioner configuration. If you need to set
configurations that the Packer provisioner doesn't support, then you should
use a custom configuration template. See the dedicated "Chef Configuration"
section below for more details.
- `config_template` (string) - Path to a template that will be used for the
Chef configuration file. By default Packer only sets configuration it needs
to match the settings set in the provisioner configuration. If you need to
set configurations that the Packer provisioner doesn't support, then you
should use a custom configuration template. See the dedicated "Chef
Configuration" section below for more details.
- `execute_command` (string) - The command used to execute Chef. This has
various [configuration template
variables](/docs/templates/configuration-templates.html) available. See below
for more information.
variables](/docs/templates/configuration-templates.html) available. See
below for more information.
- `install_command` (string) - The command used to install Chef. This has
various [configuration template
variables](/docs/templates/configuration-templates.html) available. See below
for more information.
variables](/docs/templates/configuration-templates.html) available. See
below for more information.
- `json` (object) - An arbitrary mapping of JSON that will be available as node
attributes while running Chef.
- `json` (object) - An arbitrary mapping of JSON that will be available as
node attributes while running Chef.
- `node_name` (string) - The name of the node to register with the Chef Server.
This is optional and by default is packer-{{uuid}}.
- `node_name` (string) - The name of the node to register with the
Chef Server. This is optional and by default is packer-{{uuid}}.
- `prevent_sudo` (boolean) - By default, the configured commands that are
executed to install and run Chef are executed with `sudo`. If this is true,
then the sudo will be omitted.
- `run_list` (array of strings) - The [run
list](http://docs.opscode.com/essentials_node_object_run_lists.html) for Chef.
By default this is empty, and will use the run list sent down by the
Chef Server.
list](http://docs.opscode.com/essentials_node_object_run_lists.html)
for Chef. By default this is empty, and will use the run list sent down by
the Chef Server.
- `server_url` (string) - The URL to the Chef server. This is required.
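As a rough sketch, a chef-client provisioner block combining the options above might look like this (the server URL, environment, and run list are placeholders):

``` {.javascript}
{
    "type": "chef-client",
    "server_url": "https://chef.example.com/organizations/myorg",
    "chef_environment": "production",
    "run_list": ["recipe[base]"],
    "prevent_sudo": false
}
```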
......@@ -96,14 +96,15 @@ configuration is actually required.
- `client_key` (string) - Path to client key. If not set, this defaults to a
file named client.pem in `staging_directory`.
- `validation_client_name` (string) - Name of the validation client. If not set,
this won't be set in the configuration and the default that Chef uses will
be used.
- `validation_client_name` (string) - Name of the validation client. If not
set, this won't be set in the configuration and the default that Chef uses
will be used.
- `validation_key_path` (string) - Path to the validation key for communicating
with the Chef Server. This will be uploaded to the remote machine. If this is
NOT set, then it is your responsibility via other means (shell
provisioner, etc.) to get a validation key to where Chef expects it.
- `validation_key_path` (string) - Path to the validation key for
communicating with the Chef Server. This will be uploaded to the
remote machine. If this is NOT set, then it is your responsibility via other
means (shell provisioner, etc.) to get a validation key to where Chef
expects it.
## Chef Configuration
......
......@@ -39,17 +39,17 @@ configuration is actually required, but at least `run_list` is recommended.
- `chef_environment` (string) - The name of the `chef_environment` sent to the
Chef server. By default this is empty and will not use an environment
- `config_template` (string) - Path to a template that will be used for the Chef
configuration file. By default Packer only sets configuration it needs to
match the settings set in the provisioner configuration. If you need to set
configurations that the Packer provisioner doesn't support, then you should
use a custom configuration template. See the dedicated "Chef Configuration"
section below for more details.
- `cookbook_paths` (array of strings) - This is an array of paths to "cookbooks"
directories on your local filesystem. These will be uploaded to the remote
machine in the directory specified by the `staging_directory`. By default,
this is empty.
- `config_template` (string) - Path to a template that will be used for the
Chef configuration file. By default Packer only sets configuration it needs
to match the settings set in the provisioner configuration. If you need to
set configurations that the Packer provisioner doesn't support, then you
should use a custom configuration template. See the dedicated "Chef
Configuration" section below for more details.
- `cookbook_paths` (array of strings) - This is an array of paths to
"cookbooks" directories on your local filesystem. These will be uploaded to
the remote machine in the directory specified by the `staging_directory`. By
default, this is empty.
- `data_bags_path` (string) - The path to the "data\_bags" directory on your
local filesystem. These will be uploaded to the remote machine in the
......@@ -65,16 +65,16 @@ configuration is actually required, but at least `run_list` is recommended.
- `execute_command` (string) - The command used to execute Chef. This has
various [configuration template
variables](/docs/templates/configuration-templates.html) available. See below
for more information.
variables](/docs/templates/configuration-templates.html) available. See
below for more information.
- `install_command` (string) - The command used to install Chef. This has
various [configuration template
variables](/docs/templates/configuration-templates.html) available. See below
for more information.
variables](/docs/templates/configuration-templates.html) available. See
below for more information.
- `json` (object) - An arbitrary mapping of JSON that will be available as node
attributes while running Chef.
- `json` (object) - An arbitrary mapping of JSON that will be available as
node attributes while running Chef.
- `prevent_sudo` (boolean) - By default, the configured commands that are
executed to install and run Chef are executed with `sudo`. If this is true,
......@@ -90,17 +90,18 @@ configuration is actually required, but at least `run_list` is recommended.
directory specified by the `staging_directory`. By default, this is empty.
- `run_list` (array of strings) - The [run
list](https://docs.chef.io/run_lists.html) for Chef. By default this is empty.
list](https://docs.chef.io/run_lists.html) for Chef. By default this
is empty.
- `skip_install` (boolean) - If true, Chef will not automatically be installed
on the machine using the Chef omnibus installers.
- `staging_directory` (string) - This is the directory where all the
configuration of Chef by Packer will be placed. By default this
is "/tmp/packer-chef-solo". This directory doesn't need to exist but must have
proper permissions so that the SSH user that Packer uses is able to create
directories and write into this folder. If the permissions are not correct,
use a shell provisioner prior to this to configure it properly.
is "/tmp/packer-chef-solo". This directory doesn't need to exist but must
have proper permissions so that the SSH user that Packer uses is able to
create directories and write into this folder. If the permissions are not
correct, use a shell provisioner prior to this to configure it properly.
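A small chef-solo provisioner block using these options might look like the following (paths, recipe names, and node attributes are placeholders):

``` {.javascript}
{
    "type": "chef-solo",
    "cookbook_paths": ["./cookbooks"],
    "run_list": ["recipe[base]"],
    "json": {
        "base": {
            "motd": "built by packer"
        }
    }
}
```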
## Chef Configuration
......@@ -121,8 +122,8 @@ variables available to use:
- `ChefEnvironment` - The current enabled environment. Only non-empty if the
environment path is set.
- `CookbookPaths` is the set of cookbook paths ready to be embedded directly into
a Ruby array to configure Chef.
- `CookbookPaths` is the set of cookbook paths ready to be embedded directly
into a Ruby array to configure Chef.
- `DataBagsPath` is the path to the data bags folder.
- `EncryptedDataBagSecretPath` - The path to the encrypted data bag secret
- `EnvironmentsPath` - The path to the environments folder.
......
......@@ -39,12 +39,12 @@ The available configuration options are listed below. All elements are required.
uploading directories.
- `destination` (string) - The path where the file will be uploaded to in
the machine. This value must be a writable location and any parent directories
must already exist.
the machine. This value must be a writable location and any parent
directories must already exist.
- `direction` (string) - The direction of the file transfer. This defaults to
"upload." If it is set to "download" then the file "source" in the machine wll
be downloaded locally to "destination"
"upload." If it is set to "download" then the file "source" in the machine
wll be downloaded locally to "destination"
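For example, a file provisioner that uploads a local archive might look like this (the paths are placeholders):

``` {.javascript}
{
    "type": "file",
    "source": "app.tar.gz",
    "destination": "/tmp/app.tar.gz",
    "direction": "upload"
}
```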
## Directory Uploads
......
......@@ -35,18 +35,18 @@ Exactly *one* of the following is required:
- `inline` (array of strings) - This is an array of commands to execute. The
commands are concatenated by newlines and turned into a single file, so they
are all executed within the same context. This allows you to change
directories in one command and use something in the directory in the next and
so on. Inline scripts are the easiest way to pull off simple tasks within
the machine.
directories in one command and use something in the directory in the next
and so on. Inline scripts are the easiest way to pull off simple tasks
within the machine.
- `script` (string) - The path to a script to upload and execute in the machine.
This path can be absolute or relative. If it is relative, it is relative to
the working directory when Packer is executed.
- `script` (string) - The path to a script to upload and execute in
the machine. This path can be absolute or relative. If it is relative, it is
relative to the working directory when Packer is executed.
- `scripts` (array of strings) - An array of scripts to execute. The scripts
will be uploaded and executed in the order specified. Each script is executed
in isolation, so state such as variables from one script won't carry on to
the next.
will be uploaded and executed in the order specified. Each script is
executed in isolation, so state such as variables from one script won't
carry on to the next.
Optional parameters:
......@@ -54,10 +54,10 @@ Optional parameters:
and Packer should therefore not convert Windows line endings to Unix line
endings (if there are any). By default this is false.
- `environment_vars` (array of strings) - An array of key/value pairs to inject
prior to the execute\_command. The format should be `key=value`. Packer
injects some environmental variables by default into the environment, as well,
which are covered in the section below.
- `environment_vars` (array of strings) - An array of key/value pairs to
inject prior to the execute\_command. The format should be `key=value`.
Packer injects some environmental variables by default into the environment,
as well, which are covered in the section below.
- `execute_command` (string) - The command to use to execute the script. By
default this is `powershell "& { {{.Vars}}{{.Path}}; exit $LastExitCode}"`.
......@@ -71,13 +71,14 @@ Optional parameters:
Windows user.
- `remote_path` (string) - The path where the script will be uploaded to in
the machine. This defaults to "/tmp/script.sh". This value must be a writable
location and any parent directories must already exist.
- `start_retry_timeout` (string) - The amount of time to attempt to *start* the
remote process. By default this is "5m" or 5 minutes. This setting exists in
order to deal with times when SSH may restart, such as a system reboot. Set
this to a higher value if reboots take a longer amount of time.
the machine. This defaults to "/tmp/script.sh". This value must be a
writable location and any parent directories must already exist.
- `start_retry_timeout` (string) - The amount of time to attempt to *start*
the remote process. By default this is "5m" or 5 minutes. This setting
exists in order to deal with times when SSH may restart, such as a
system reboot. Set this to a higher value if reboots take a longer amount
of time.
- `valid_exit_codes` (list of ints) - Valid exit codes for the script. By
default this is just 0.
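Putting a few of these options together, a powershell provisioner block might look like this sketch (the inline command and environment variable are placeholders, and 3010 is shown only as an example of a tolerated non-zero exit code):

``` {.javascript}
{
    "type": "powershell",
    "inline": ["Write-Host 'provisioned by packer'"],
    "environment_vars": ["APP_ENV=staging"],
    "valid_exit_codes": [0, 3010]
}
```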
......@@ -56,8 +56,8 @@ Optional parameters:
- `execute_command` (string) - The command used to execute Puppet. This has
various [configuration template
variables](/docs/templates/configuration-templates.html) available. See below
for more information.
variables](/docs/templates/configuration-templates.html) available. See
below for more information.
- `facter` (object of key/value strings) - Additional
[facts](http://puppetlabs.com/puppet/related-projects/facter) to make
......@@ -69,14 +69,13 @@ Optional parameters:
- `manifest_dir` (string) - The path to a local directory with manifests to be
uploaded to the remote machine. This is useful if your main manifest file
uses imports. This directory doesn't necessarily contain the `manifest_file`.
It is a separate directory that will be set as the "manifestdir" setting
on Puppet.
uses imports. This directory doesn't necessarily contain the
`manifest_file`. It is a separate directory that will be set as the
"manifestdir" setting on Puppet.
\~&gt; `manifest_dir` is passed to `puppet apply` as the
`--manifestdir` option. This option was deprecated in puppet 3.6, and removed
in puppet 4.0. If you have multiple manifests you should use
`manifest_file` instead.
\~&gt; `manifest_dir` is passed to `puppet apply` as the `--manifestdir` option.
This option was deprecated in puppet 3.6, and removed in puppet 4.0. If you have
multiple manifests you should use `manifest_file` instead.
- `module_paths` (array of strings) - This is an array of paths to module
directories on your local filesystem. These will be uploaded to the
......@@ -89,15 +88,15 @@ Optional parameters:
- `staging_directory` (string) - This is the directory where all the
configuration of Puppet by Packer will be placed. By default this
is "/tmp/packer-puppet-masterless". This directory doesn't need to exist but
must have proper permissions so that the SSH user that Packer uses is able to
create directories and write into this folder. If the permissions are not
must have proper permissions so that the SSH user that Packer uses is able
to create directories and write into this folder. If the permissions are not
correct, use a shell provisioner prior to this to configure it properly.
- `working_directory` (string) - This is the directory from which the puppet
command will be run. When using hiera with a relative path, this option allows
you to ensure that the paths work properly. If not specified, it defaults to
the value of the specified `staging_directory` (or its default value if not
specified either).
command will be run. When using hiera with a relative path, this option
allows you to ensure that the paths work properly. If not specified, it
defaults to the value of the specified `staging_directory` (or its default
value if not specified either).
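A small puppet-masterless provisioner block using the options above might look like this (paths and fact values are placeholders; `manifest_file` is assumed here to point at your main manifest):

``` {.javascript}
{
    "type": "puppet-masterless",
    "manifest_file": "manifests/site.pp",
    "module_paths": ["modules"],
    "facter": {
        "server_role": "webserver"
    }
}
```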
## Execute Command
......
......@@ -55,8 +55,8 @@ listed below:
- `ignore_exit_codes` (boolean) - If true, Packer will never consider the
provisioner a failure.
- `options` (string) - Additional command line options to pass to `puppet agent`
when Puppet is run.
- `options` (string) - Additional command line options to pass to
`puppet agent` when Puppet is run.
- `prevent_sudo` (boolean) - By default, the configured commands that are
executed to run Puppet are executed with `sudo`. If this is true, then the
......@@ -65,12 +65,12 @@ listed below:
- `puppet_node` (string) - The name of the node. If this isn't set, the fully
qualified domain name will be used.
- `puppet_server` (string) - Hostname of the Puppet server. By default "puppet"
will be used.
- `puppet_server` (string) - Hostname of the Puppet server. By default
"puppet" will be used.
- `staging_directory` (string) - This is the directory where all the
configuration of Puppet by Packer will be placed. By default this
is "/tmp/packer-puppet-server". This directory doesn't need to exist but must
have proper permissions so that the SSH user that Packer uses is able to
create directories and write into this folder. If the permissions are not
is "/tmp/packer-puppet-server". This directory doesn't need to exist but
must have proper permissions so that the SSH user that Packer uses is able
to create directories and write into this folder. If the permissions are not
correct, use a shell provisioner prior to this to configure it properly.
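For illustration, a puppet-server provisioner block using these options might look like the following (the host name, node name, and agent options are placeholders):

``` {.javascript}
{
    "type": "puppet-server",
    "puppet_server": "puppet.example.com",
    "puppet_node": "packer-test-node",
    "options": "--test"
}
```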
......@@ -54,5 +54,5 @@ Optional:
bootstrap](https://github.com/saltstack/salt-bootstrap) to install salt. Set
this to true to skip this step.
- `temp_config_dir` (string) - Where your local state tree will be copied before
moving to the `/srv/salt` directory. Default is `/tmp/salt`.
- `temp_config_dir` (string) - Where your local state tree will be copied
before moving to the `/srv/salt` directory. Default is `/tmp/salt`.
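A minimal salt-masterless provisioner block might look like this sketch (the state tree path is a placeholder, and `local_state_tree` is assumed here to be the provisioner's required source option):

``` {.javascript}
{
    "type": "salt-masterless",
    "local_state_tree": "salt",
    "temp_config_dir": "/tmp/salt"
}
```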
......@@ -40,18 +40,18 @@ Exactly *one* of the following is required:
- `inline` (array of strings) - This is an array of commands to execute. The
commands are concatenated by newlines and turned into a single file, so they
are all executed within the same context. This allows you to change
directories in one command and use something in the directory in the next and
so on. Inline scripts are the easiest way to pull off simple tasks within
the machine.
directories in one command and use something in the directory in the next
and so on. Inline scripts are the easiest way to pull off simple tasks
within the machine.
- `script` (string) - The path to a script to upload and execute in the machine.
This path can be absolute or relative. If it is relative, it is relative to
the working directory when Packer is executed.
- `script` (string) - The path to a script to upload and execute in
the machine. This path can be absolute or relative. If it is relative, it is
relative to the working directory when Packer is executed.
- `scripts` (array of strings) - An array of scripts to execute. The scripts
will be uploaded and executed in the order specified. Each script is executed
in isolation, so state such as variables from one script won't carry on to
the next.
will be uploaded and executed in the order specified. Each script is
executed in isolation, so state such as variables from one script won't
carry on to the next.
Optional parameters:
......@@ -59,14 +59,14 @@ Optional parameters:
and Packer should therefore not convert Windows line endings to Unix line
endings (if there are any). By default this is false.
- `environment_vars` (array of strings) - An array of key/value pairs to inject
prior to the execute\_command. The format should be `key=value`. Packer
injects some environmental variables by default into the environment, as well,
which are covered in the section below.
- `environment_vars` (array of strings) - An array of key/value pairs to
inject prior to the execute\_command. The format should be `key=value`.
Packer injects some environmental variables by default into the environment,
as well, which are covered in the section below.
- `execute_command` (string) - The command to use to execute the script. By
default this is `chmod +x {{ .Path }}; {{ .Vars }} {{ .Path }}`. The value of
this is treated as [configuration
default this is `chmod +x {{ .Path }}; {{ .Vars }} {{ .Path }}`. The value
of this is treated as [configuration
template](/docs/templates/configuration-templates.html). There are two
available variables: `Path`, which is the path to the script to run, and
`Vars`, which is the list of `environment_vars`, if configured.
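For example, overriding `execute_command` to run the uploaded script under `sudo` while still injecting the configured variables might look like this sketch (the script path and exact sudo invocation are placeholders):

``` {.javascript}
{
    "type": "shell",
    "script": "scripts/setup.sh",
    "execute_command": "chmod +x {{ .Path }}; sudo -E sh -c '{{ .Vars }} {{ .Path }}'"
}
```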
......@@ -79,13 +79,14 @@ Optional parameters:
`-e` flag, otherwise individual steps failing won't fail the provisioner.
- `remote_path` (string) - The path where the script will be uploaded to in
the machine. This defaults to "/tmp/script.sh". This value must be a writable
location and any parent directories must already exist.
the machine. This defaults to "/tmp/script.sh". This value must be a
writable location and any parent directories must already exist.
- `start_retry_timeout` (string) - The amount of time to attempt to *start* the
remote process. By default this is "5m" or 5 minutes. This setting exists in
order to deal with times when SSH may restart, such as a system reboot. Set
this to a higher value if reboots take a longer amount of time.
- `start_retry_timeout` (string) - The amount of time to attempt to *start*
the remote process. By default this is "5m" or 5 minutes. This setting
exists in order to deal with times when SSH may restart, such as a
system reboot. Set this to a higher value if reboots take a longer amount
of time.
## Execute Command Example
......@@ -133,8 +134,8 @@ commonly useful environmental variables:
distinguish them slightly from a common provisioning script.
- `PACKER_BUILDER_TYPE` is the type of the builder that was used to create the
machine that the script is running on. This is useful if you want to run only
certain parts of the script on systems built with certain builders.
machine that the script is running on. This is useful if you want to run
only certain parts of the script on systems built with certain builders.
## Handling Reboots
......@@ -181,24 +182,19 @@ provisioner](/docs/provisioners/file.html) (more secure) or using `ssh-keyscan`
to populate the file (less secure). An example of the latter accessing github
would be:
    {
        "type": "shell",
        "inline": [
            "sudo apt-get install -y git",
            "ssh-keyscan github.com >> ~/.ssh/known_hosts",
            "git clone git@github.com:exampleorg/myprivaterepo.git"
        ]
    }
## Troubleshooting
*My shell script doesn't work correctly on Ubuntu*
- On Ubuntu, the `/bin/sh` shell is
[dash](http://en.wikipedia.org/wiki/Debian_Almquist_shell). If your script has
[bash](http://en.wikipedia.org/wiki/Bash_(Unix_shell))-specific commands in
it, then put `#!/bin/bash` at the top of your script. Differences between dash
and bash can be found on the
[dash](http://en.wikipedia.org/wiki/Debian_Almquist_shell). If your script
has [bash](http://en.wikipedia.org/wiki/Bash_(Unix_shell))-specific commands
in it, then put `#!/bin/bash` at the top of your script. Differences between
dash and bash can be found on the
[DashAsBinSh](https://wiki.ubuntu.com/DashAsBinSh) Ubuntu wiki page.
*My shell works when I login but fails with the shell provisioner*
......
......@@ -113,6 +113,7 @@ Numeric
</th>
<td align="center">
-
</td>
<td align="center">
01
......@@ -148,18 +149,23 @@ January (Jan)
</td>
<td align="center">
-
</td>
<td align="center">
-
</td>
<td align="center">
-
</td>
<td align="center">
-
</td>
<td align="center">
-
</td>
<td align="center">
MST
......
......@@ -37,29 +37,30 @@ Along with each key, it is noted whether it is required or not.
template does. This output is used only in the [inspect
command](/docs/command-line/inspect.html).
- `min_packer_version` (optional) is a string that has a minimum Packer version
that is required to parse the template. This can be used to ensure that proper
versions of Packer are used with the template. A max version can't be
specified because Packer retains backwards compatibility with `packer fix`.
- `min_packer_version` (optional) is a string that has a minimum Packer
version that is required to parse the template. This can be used to ensure
that proper versions of Packer are used with the template. A max version
can't be specified because Packer retains backwards compatibility with
`packer fix`.
- `post-processors` (optional) is an array of one or more objects that defines
the various post-processing steps to take with the built images. If not
specified, then no post-processing will be done. For more information on what
post-processors do and how they're defined, read the sub-section on
specified, then no post-processing will be done. For more information on
what post-processors do and how they're defined, read the sub-section on
[configuring post-processors in
templates](/docs/templates/post-processors.html).
- `provisioners` (optional) is an array of one or more objects that defines the
provisioners that will be used to install and configure software for the
- `provisioners` (optional) is an array of one or more objects that defines
the provisioners that will be used to install and configure software for the
machines created by each of the builders. If it is not specified, then no
provisioners will be run. For more information on how to define and configure
a provisioner, read the sub-section on [configuring provisioners in
templates](/docs/templates/provisioners.html).
provisioners will be run. For more information on how to define and
configure a provisioner, read the sub-section on [configuring provisioners
in templates](/docs/templates/provisioners.html).
- `variables` (optional) is an array of one or more key/value strings that
defines user variables contained in the template. If it is not specified, then
no variables are defined. For more information on how to define and use user
variables, read the sub-section on [user variables in
defines user variables contained in the template. If it is not specified,
then no variables are defined. For more information on how to define and use
user variables, read the sub-section on [user variables in
templates](/docs/templates/user-variables.html).
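Putting the keys above together, the skeleton of a template might look like the following (the builder and provisioner bodies are trimmed placeholders, not working configurations):

``` {.javascript}
{
    "description": "An example template skeleton",
    "min_packer_version": "0.8.0",
    "variables": {
        "aws_access_key": ""
    },
    "builders": [
        {"type": "amazon-ebs"}
    ],
    "provisioners": [
        {"type": "shell", "inline": ["echo provisioned"]}
    ],
    "post-processors": ["vagrant"]
}
```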
## Comments
......
......@@ -42,12 +42,12 @@ each category, the available configuration keys are alphabetized.
### Optional
- `address` (string) - The address of the build service to use. By default this
is `https://atlas.hashicorp.com`.
- `address` (string) - The address of the build service to use. By default
this is `https://atlas.hashicorp.com`.
- `base_dir` (string) - The base directory of the files to upload. This will be
the current working directory when the build service executes your template.
This path is relative to the template.
- `base_dir` (string) - The base directory of the files to upload. This will
be the current working directory when the build service executes
your template. This path is relative to the template.
- `include` (array of strings) - Glob patterns to include relative to the
`base_dir`. If this is specified, only files that match the include pattern
......
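For illustration, a `push` section using these options might look like the following sketch (the artifact name is a placeholder, and the `name` key is assumed from the rest of this page):

``` {.javascript}
{
    "push": {
        "name": "hashicorp/foobar",
        "base_dir": ".",
        "include": ["scripts/*"]
    }
}
```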
......@@ -34,20 +34,22 @@ on supported configuration parameters and usage, please see the appropriate
[documentation page within the documentation section](/docs).
- ***Amazon EC2 (AMI)***. Both EBS-backed and instance-store AMIs within
[EC2](http://aws.amazon.com/ec2/), optionally distributed to multiple regions.
[EC2](http://aws.amazon.com/ec2/), optionally distributed to
multiple regions.
- ***DigitalOcean***. Snapshots for [DigitalOcean](http://www.digitalocean.com/)
that can be used to start a pre-configured DigitalOcean instance of any size.
- ***DigitalOcean***. Snapshots for
[DigitalOcean](http://www.digitalocean.com/) that can be used to start a
pre-configured DigitalOcean instance of any size.
- ***Docker***. Snapshots for [Docker](http://www.docker.io/) that can be used
to start a pre-configured Docker instance.
- ***Google Compute Engine***. Snapshots for [Google Compute
Engine](https://cloud.google.com/products/compute-engine) that can be used to
start a pre-configured Google Compute Engine instance.
Engine](https://cloud.google.com/products/compute-engine) that can be used
to start a pre-configured Google Compute Engine instance.
- ***OpenStack***. Images for [OpenStack](http://www.openstack.org/) that can be
used to start pre-configured OpenStack servers.
- ***OpenStack***. Images for [OpenStack](http://www.openstack.org/) that can
be used to start pre-configured OpenStack servers.
- ***Parallels (PVM)***. Exported virtual machines for
[Parallels](http://www.parallels.com/downloads/desktop/), including virtual
......@@ -55,13 +57,13 @@ on supported configuration parameters and usage, please see the appropriate
and can be started on any platform Parallels runs on.
- ***QEMU***. Images for [KVM](http://www.linux-kvm.org/) or
[Xen](http://www.xenproject.org/) that can be used to start pre-configured KVM
or Xen instances.
[Xen](http://www.xenproject.org/) that can be used to start pre-configured
KVM or Xen instances.
- ***VirtualBox (OVF)***. Exported virtual machines for
[VirtualBox](https://www.virtualbox.org/), including virtual machine metadata
such as RAM, CPUs, etc. These virtual machines are portable and can be started
on any platform VirtualBox runs on.
[VirtualBox](https://www.virtualbox.org/), including virtual machine
metadata such as RAM, CPUs, etc. These virtual machines are portable and can
be started on any platform VirtualBox runs on.
- ***VMware (VMX)***. Exported virtual machines for
[VMware](http://www.vmware.com/) that can be run within any desktop products
......