<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.6.2">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2017-10-30T13:54:48+00:00</updated><id>/</id><title type="html">Longwave Consulting</title><author><name>Dave Long</name></author><entry><title type="html">EC2 volume cleanup one liner</title><link href="/2017/10/ec2-volume-cleanup" rel="alternate" type="text/html" title="EC2 volume cleanup one liner" /><published>2017-10-30T00:00:00+00:00</published><updated>2017-10-30T00:00:00+00:00</updated><id>/2017/10/ec2-volume-cleanup</id><content type="html" xml:base="/2017/10/ec2-volume-cleanup">&lt;p&gt;If you don’t have the “delete on termination” flag enabled on your EBS volumes you can end up with a whole load of unused volumes, and over time the cost of this can add up. To clean them all up in one fell swoop:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;aws ec2 describe-volumes | jq -r '.Volumes[]|select(.State==&quot;available&quot;)|.VolumeId' | xargs -n1 -I{} aws ec2 delete-volume --volume-id={}
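
# Optional preview before deleting - list the unattached volumes and their sizes
# (a sketch using the CLI's server-side status filter; check the output first):
aws ec2 describe-volumes --filters Name=status,Values=available | jq -r '.Volumes[]|&quot;\(.VolumeId) \(.Size) GiB&quot;'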
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;</content><author><name>Dave Long</name></author><summary type="html">If you don’t have the “delete on termination” flag enabled on your EBS volumes you can end up with a whole load of unused volumes, and over time the cost of this can add up. To clean them all up in one fell swoop:</summary></entry><entry><title type="html">Proxying nginx to AWS</title><link href="/2017/06/proxying-nginx-to-aws" rel="alternate" type="text/html" title="Proxying nginx to AWS" /><published>2017-06-20T00:00:00+00:00</published><updated>2017-06-20T00:00:00+00:00</updated><id>/2017/06/proxying-nginx-to-aws</id><content type="html" xml:base="/2017/06/proxying-nginx-to-aws">&lt;p&gt;When moving hosts (in this case from a dedicated server to AWS), even if your DNS TTL is low, it can still be useful to proxy traffic from the old environment to the new to reduce downtime for any stragglers. nginx makes this fairly straightforward but there are a couple of important config options when SSL is involved.&lt;/p&gt;

&lt;p&gt;My nginx server looked like this:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;location / {
     proxy_pass https://d1234567890.cloudfront.net;
     proxy_ssl_protocols TLSv1.2;
     proxy_ssl_server_name on;
     proxy_ssl_name example.com;

     proxy_set_header  Host              $host;
     proxy_set_header  X-Real-IP         $remote_addr;
     proxy_set_header  X-Forwarded-For   $proxy_add_x_forwarded_for;
     proxy_set_header  X-Forwarded-Proto $scheme;
     proxy_pass_header Authorization;
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The &lt;code class=&quot;highlighter-rouge&quot;&gt;proxy_pass&lt;/code&gt; directive tells nginx where to forward the traffic. However, as the upstream is at CloudFront, we need some further options to make this actually work. &lt;code class=&quot;highlighter-rouge&quot;&gt;proxy_ssl_protocols&lt;/code&gt; forces the negotiation to use TLS 1.2, as CloudFront does not support SSLv3 and we might as well use the latest version. &lt;code class=&quot;highlighter-rouge&quot;&gt;proxy_ssl_server_name&lt;/code&gt; enables SNI (Server Name Indication) on the upstream connection, which is required because CloudFront serves many distributions from the same endpoints and needs to know up front which one to connect to. Finally, &lt;code class=&quot;highlighter-rouge&quot;&gt;proxy_ssl_name&lt;/code&gt; tells nginx which name to send for SNI. Without it, nginx sends the hostname from &lt;code class=&quot;highlighter-rouge&quot;&gt;proxy_pass&lt;/code&gt;, which works in some cases, but if the distribution has its own certificate installed you will get errors like the following:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;SSL_do_handshake() failed (SSL: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure) while SSL handshaking to upstream
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
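
&lt;p&gt;The handshake nginx performs can be reproduced from the command line, which is handy for checking which SNI name the distribution expects before touching the nginx config. A sketch using the example names from above - substitute your real distribution domain and certificate hostname:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Handshake with an explicit SNI name, mirroring what proxy_ssl_server_name
# plus proxy_ssl_name do for the upstream connection, then print the
# certificate subject that was served for that name
openssl s_client -connect d1234567890.cloudfront.net:443 -servername example.com &lt;/dev/null | openssl x509 -noout -subject
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Retrying with the cloudfront.net hostname as &lt;code class=&quot;highlighter-rouge&quot;&gt;-servername&lt;/code&gt; shows what nginx would send without &lt;code class=&quot;highlighter-rouge&quot;&gt;proxy_ssl_name&lt;/code&gt;.&lt;/p&gt;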

&lt;p&gt;Finally, the header lines ensure that the correct headers are forwarded through the proxy, so the origin can see the original hostname, source IP address, protocol and authentication details if it needs them.&lt;/p&gt;

&lt;p&gt;With all this in place I was able to migrate an existing server to AWS with no downtime, as the DNS could be left to propagate while the proxy took care of traffic still going to the old server.&lt;/p&gt;</content><author><name>Dave Long</name></author><summary type="html">When moving hosts (in this case from a dedicated server to AWS), even if your DNS TTL is low, it can still be useful to proxy traffic from the old environment to the new to reduce downtime for any stragglers. nginx makes this fairly straightforward but there are a couple of important config options when SSL is involved.</summary></entry><entry><title type="html">Debugging provisioning failures in Packer</title><link href="/2017/05/debugging-packer" rel="alternate" type="text/html" title="Debugging provisioning failures in Packer" /><published>2017-05-18T00:00:00+00:00</published><updated>2017-05-18T00:00:00+00:00</updated><id>/2017/05/debugging-packer</id><content type="html" xml:base="/2017/05/debugging-packer">&lt;p&gt;Hashicorp’s &lt;a href=&quot;https://www.packer.io/&quot;&gt;Packer&lt;/a&gt; is wonderful for baking AMI images on AWS, and it’s amazing when you set up a build with an Ansible playbook and it works perfectly the first time. But when it fails somewhere during the build, and the error message is not enough to tell you what the actual problem is, it’s not obvious how to debug it further.&lt;/p&gt;

&lt;p&gt;&lt;code class=&quot;highlighter-rouge&quot;&gt;packer build&lt;/code&gt; failed during a playbook run that included &lt;a href=&quot;https://github.com/geerlingguy/ansible-role-nginx&quot;&gt;Jeff Geerling’s nginx role&lt;/a&gt; with the following output:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    amazon-ebs: RUNNING HANDLER [geerlingguy.nginx : restart nginx] ****************************
    amazon-ebs: fatal: [127.0.0.1]: FAILED! =&amp;gt; {&quot;changed&quot;: false, &quot;failed&quot;: true, &quot;msg&quot;: &quot;Unable to restart service nginx: Job for nginx.service failed because the control process exited with error code. See \&quot;systemctl status nginx.service\&quot; and \&quot;journalctl -xe\&quot; for details.\n&quot;}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Unfortunately when Packer fails it immediately cleans up the mess, giving you no access to the instance to run the suggested commands. The documentation suggested adding the &lt;code class=&quot;highlighter-rouge&quot;&gt;-debug&lt;/code&gt; switch, which pauses before each step, but this didn’t help much: I wasn’t sure which step was about to fail, and when it did fail Packer still cleaned up the instance before I could access it.&lt;/p&gt;

&lt;p&gt;Reading the documentation again I discovered the relatively newly added &lt;code class=&quot;highlighter-rouge&quot;&gt;-on-error=ask&lt;/code&gt; switch, which stops immediately after the failure and asks what to do. This prompt would be a good point to switch to another terminal and SSH to the instance, but that has its own problems: where is the instance, and how do I connect to it? By default Packer manages all the AWS infrastructure by itself, including creating the SSH keys used to connect to the instance. It turns out you can override this and specify your own keys. So after uploading my public key to the Key Pairs section of the EC2 console, I could add the following to my Packer JSON config file:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    &quot;ssh_keypair_name&quot;: &quot;Dave&quot;,
    &quot;ssh_agent_auth&quot;: &quot;true&quot;,
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This would build the image using the key pair named “Dave” stored on AWS, and use my local SSH agent to connect. Running &lt;code class=&quot;highlighter-rouge&quot;&gt;packer build -on-error=ask&lt;/code&gt; now ended like this:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;    amazon-ebs: RUNNING HANDLER [geerlingguy.nginx : restart nginx] ****************************
    amazon-ebs: fatal: [127.0.0.1]: FAILED! =&amp;gt; {&quot;changed&quot;: false, &quot;failed&quot;: true, &quot;msg&quot;: &quot;Unable to restart service nginx: Job for nginx.service failed because the control process exited with error code. See \&quot;systemctl status nginx.service\&quot; and \&quot;journalctl -xe\&quot; for details.\n&quot;}
...
==&amp;gt; amazon-ebs: Step &quot;StepProvision&quot; failed
==&amp;gt; amazon-ebs: [c] Clean up and exit, [a] abort without cleanup, or [r] retry step (build may fail even if retry succeeds)? 
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;The final step was to discover how to connect to the instance. Early in the Packer log is this output:&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;==&amp;gt; amazon-ebs: Waiting for instance (i-0735154eb19cd2078) to become ready...
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;This ID is enough to discover the IP address of the running build instance, by either looking it up in the EC2 console or using the command line:&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt; aws ec2 describe-instances &lt;span class=&quot;nt&quot;&gt;--instance-ids&lt;/span&gt; i-0735154eb19cd2078 | jq &lt;span class=&quot;s1&quot;&gt;'.Reservations[].Instances[].PublicIpAddress'&lt;/span&gt;
&lt;span class=&quot;go&quot;&gt;&quot;52.57.56.214&quot;
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;So now I could SSH to the instance to try to discover the issue. I needed to connect as the same user specified in the Packer JSON config; in this case I was building a CentOS image so the username is “centos”:&lt;/p&gt;

&lt;div class=&quot;language-console highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gp&quot;&gt;$&lt;/span&gt; ssh centos@52.57.56.214
&lt;span class=&quot;go&quot;&gt;The authenticity of host '52.57.56.214 (52.57.56.214)' can't be established.
ECDSA key fingerprint is SHA256:26UCZ7D5Bd6My3o25uTmdhjceJPojvJ0+GJxvRXum6A.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '52.57.56.214' (ECDSA) to the list of known hosts.

&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;[centos@ip-172-31-30-231 ~]$&lt;/span&gt; systemctl status nginx.service
&lt;span class=&quot;go&quot;&gt;● nginx.service - nginx - high performance web server
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;   Loaded: loaded (/usr/lib/systemd/system/nginx.service;&lt;/span&gt; enabled&lt;span class=&quot;p&quot;&gt;;&lt;/span&gt; vendor preset: disabled&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;gp&quot;&gt;   Active: failed (Result: exit-code) since Thu 2017-05-18 14:05:03 UTC;&lt;/span&gt; 13min ago
&lt;span class=&quot;go&quot;&gt;     Docs: http://nginx.org/en/docs/
&lt;/span&gt;&lt;span class=&quot;gp&quot;&gt;  Process: 11903 ExecStop=/bin/kill -s QUIT $&lt;/span&gt;MAINPID &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nv&quot;&gt;code&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;exited, &lt;span class=&quot;nv&quot;&gt;status&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;0/SUCCESS&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
&lt;span class=&quot;go&quot;&gt;  Process: 11905 ExecStartPre=/usr/sbin/nginx -t -c /etc/nginx/nginx.conf (code=exited, status=1/FAILURE)
 Main PID: 9883 (code=exited, status=0/SUCCESS)

May 18 14:05:02 ip-172-31-30-231 systemd[1]: Starting nginx - high performance web server...
May 18 14:05:03 ip-172-31-30-231 nginx[11905]: nginx: [emerg] invalid number of arguments in &quot;fastcgi_intercept_errors&quot; directive in /etc/nginx/conf.d/vhosts.conf:51
May 18 14:05:03 ip-172-31-30-231 nginx[11905]: nginx: configuration file /etc/nginx/nginx.conf test failed
May 18 14:05:03 ip-172-31-30-231 systemd[1]: nginx.service: control process exited, code=exited status=1
May 18 14:05:03 ip-172-31-30-231 systemd[1]: Failed to start nginx - high performance web server.
May 18 14:05:03 ip-172-31-30-231 systemd[1]: Unit nginx.service entered failed state.
May 18 14:05:03 ip-172-31-30-231 systemd[1]: nginx.service failed.
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
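
&lt;p&gt;For illustration, this is how a missing semicolon produces that “invalid number of arguments” message: nginx keeps reading tokens until it finds a semicolon, so the next directive’s name gets swallowed as an extra argument. A hypothetical fragment (the real template isn’t shown here):&lt;/p&gt;

&lt;div class=&quot;highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;fastcgi_intercept_errors on      # broken: no terminating semicolon
fastcgi_index index.php;         # swallowed into the directive above

fastcgi_intercept_errors on;     # fixed
fastcgi_index index.php;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;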

&lt;p&gt;Finally, the culprit turned out to be a badly formed nginx config file - a single missing semicolon that I didn’t spot when writing the template file.&lt;/p&gt;</content><author><name>Dave Long</name></author><summary type="html">Hashicorp’s Packer is wonderful for baking AMI images on AWS, and it’s amazing when you set up a build with an Ansible playbook and it works perfectly the first time. But when it fails somewhere during the build, and the error message is not enough to tell you what the actual problem is, it’s not obvious how to debug it further.</summary></entry></feed>